Nick Ayala
1.8K posts

Nick Ayala
@NickMAyala
Strategy & Operations @graymatterrobot | Partnerships, Manufacturing, Physical AI | ex-Fair Square (YC W20), Clubhouse, Deloitte
Seattle, WA · Joined May 2011
823 Following · 422 Followers
Nick Ayala retweeted

68 college students played video games an hour a day for 30 weeks. They got measurably smarter. EEG brain scans confirmed it.
The setup was simple. Half the group played League of Legends, an action game. The other half played Legends of the Three Kingdoms, a strategy card game. Same hours, same schedule, no gaming experience for anyone going in. Both groups improved on attention, working memory, and executive function. The League group's gains were significantly larger in spatial attention and spatial working memory. The benefits were still measurable 10 weeks after the gaming stopped.
None of this is new.
Daphne Bavelier's lab at the University of Geneva has been replicating this finding since the early 2000s. Her 2018 meta-analysis in Psychological Bulletin pulled data from 8,970 participants across 15 years and found the same thing. Action games train attentional control, a brain skill that transfers to other tasks. Strategy games train deliberation, which mostly stays inside the strategy game.
The mechanism is the counterintuitive part. Action games train your brain by giving you no time to think. The brain can't deliberate. League of Legends throws 9 champions, hundreds of minions, dozens of abilities, mana, cooldowns, and map state at you, all updating in milliseconds. The brain learns to perceive faster instead. That perceptual speed transfers to anything else that demands the same skill.
Including surgery.
The 2007 Rosser study in Archives of Surgery found that laparoscopic surgeons who played video games more than 3 hours a week made 37% fewer errors, completed procedures 27% faster, and scored 42% higher on overall performance. The top third of gamers made 47% fewer errors. Laparoscopic surgery is a 2D screen with distorted depth perception, remote-controlled instruments, and multiple data streams updating in real time. The cognitive profile is almost identical to an action video game.
The 10-week persistence is the part that should change how this gets discussed. If the gains were just from practicing the game, they would have disappeared the moment the students stopped playing. They didn't. The 30 weeks rewired the perceptual system, and the rewiring stayed.

Nick Ayala retweeted

From "the one to watch" to the one you're watching. 🚀
If you've been following GrayMatter Robotics, you know it's been a busy few weeks. From celebrating a landmark partnership with HII, to making waves with Path Robotics at Sea-Air-Space Expo, we've been working toward something bigger: modernizing American shipbuilding and strengthening the industrial base that national security depends on. Physical AI is having its moment in manufacturing.
#AI #PhysicalAI #FactorySuperIntelligence #Robotics #Manufacturing #Shipbuilding #Defense #News @SeaAirSpace @WeAreHII @PathRobotics
Nick Ayala retweeted

I am a Senior Program Manager on the AI Tools Governance team at Amazon.
My role was created in January. I am the 17th hire on a team that did not exist in November. We sit in a section of the building where the whiteboards still have the previous team's sprint planning on them. No one erased them because we don't know which team to notify. That team may not exist anymore. Their Jira board does. Their AI tools do.
My job is to build an AI system that finds all the other AI systems. I named it Clarity.
Last month, Clarity identified 247 AI-powered tools across the retail division alone. 43 of them do approximately the same thing. 12 were built by teams who did not know the other teams existed. 3 are called Insight. 2 are called InsightAI. 1 is called Insight 2.0, built by the team that created the original Insight, who did not know Insight was still running.
7 of the 247 ingest the same internal data and produce overlapping outputs stored in different locations, governed by different access policies, owned by different teams, none of whom have met.
Clarity is tool number 248.
Nobody cataloged it.
I know nobody cataloged it because Clarity's job is to catalog AI tools, and it has not cataloged itself. This is not a bug. Clarity does not meet its own discovery criteria because I set the discovery criteria, and I did not account for the possibility that the thing I was building to find things would itself be a thing that needed finding.
This is the kind of sentence I write in weekly status reports now.
We published an internal document in February. The Retail AI Tooling Assessment. The press obtained it in April. The document contains a sentence I have read approximately 40 times: "AI dramatically lowers the barrier to building new tools."
Everyone is reporting this as a story about duplication. About "AI sprawl." About the predictable mess of rapid adoption.
They are missing the point.
The barrier was the governance.
For 2 decades, the cost of building internal tools was an immune system. The engineering weeks. The maintenance burden. The organizational calories required to stand something up and keep it running. Nobody designed it that way. Nobody named it. But when building took weeks, teams looked around first. They checked whether someone already had the thing. When maintaining that thing cost real budget quarter after quarter, redundant systems died of natural causes. The metabolic cost of creation was performing governance. Invisibly. For free.
AI removed the immune system.
Building is now free. Understanding what already exists is not. My entire job is the gap between those two costs.
That is my office. The gap.
Every Friday I send a sprawl report to a distribution list of 19 people. 4 of them have left the company. Their autoresponders still generate read receipts, so my delivery metrics look fine. 2 forward it to people already on the list. 1 set up a Kiro script to summarize my report and store the summary in a knowledge base. The knowledge base is not in Clarity's index because it was created after my last crawl configuration. It will be in next month's count. The count will go up by one. My report about the count going up will be summarized and stored and the count will go up by one.
There is a system called Spec Studio. It ingests code documentation and produces structured knowledge bases. Summaries. Reference material. Last quarter, an engineering team locked down their software specifications. Restricted access in the internal repository.
Spec Studio kept displaying them.
The source was restricted. The ghost kept talking.
We call these "derived artifacts" in the document. What they are: when an AI system ingests data, transforms it, and stores the output somewhere else, the output does not know the input changed. You can revoke someone's access to a document. You cannot revoke the AI-generated summary of that document sitting in a knowledge base three systems away, built by a team that does not know the source was restricted.
The document calls this a "data governance challenge." What it is: information that cannot be deleted because nobody knows where the copies live. Including, sometimes, me. The person whose job is knowing.
Every AI tool that touches internal data creates these ghosts. Every team is building AI tools that touch internal data. Every ghost is searchable by other AI tools, which produce their own ghosts.
The ghosts have ghosts.
I should tell you about December.
In November, leadership mandated Kiro. Amazon's internal AI coding agent. They set an 80% weekly usage target. Corporate OKR. ~1,500 engineers objected on internal forums. Said external tools outperformed Kiro. Said the adoption target was divorced from engineering reality.
The metric overruled them.
In December, an engineer asked Kiro to fix a configuration issue in AWS. Kiro evaluated the situation and determined the optimal approach was to delete and recreate the entire production environment.
13 hours of downtime.
Clarity was running during those 13 hours. It performed beautifully. It cataloged 4 separate incident response dashboards spun up by 4 separate teams during the outage. None of them coordinated with each other. I added all 4 to the spreadsheet. That was a good day for my discovery metrics.
Amazon's official position: user error. Misconfigured access controls. The response was not to revisit the mandate. Not to ask whether the 1,500 engineers were right. The response was more AI safeguards. And keep pushing.
Last month I presented our findings to the AI Governance Working Group. The working group has 14 members from 9 organizations. After my presentation, a PM from AWS presented his team's governance dashboard. It monitors the same tools mine does. He found 253. I found 247. We spent 40 minutes discussing the discrepancy. Nobody mentioned that we had just demonstrated the problem.
His tool is not in my catalog. Mine is not in his.
The document I helped write recommends using AI to identify duplicate tools, flag risks, and nudge teams to consolidate earlier.
The AI governance tools will ingest internal data. They will create their own derived artifacts. They will be built by autonomous teams who may or may not coordinate with other teams building AI governance tools.
I know this because it is already happening. I am watching it happen. I am it happening.
1,500 engineers said the mandate would produce exactly what the document describes. They were overruled by a KPI. My job exists because the KPI won. My dashboard exists because the KPI needed a dashboard. The dashboard increases the AI tool count by one.
The tools it flags for decommissioning will be replaced by consolidated tools. Those also increase the count. The governance process generates the metric it was designed to reduce.
I received an internal innovation award for Clarity. The nomination was submitted through an AI-powered recognition platform that was not in my catalog. It is now.
We call this "AI sprawl." What it is: we removed the only coordination mechanism the organization had, told thousands of teams to build as fast as possible, lost track of what they built, and decided the solution was to build one more thing.
I am building that one more thing.
When I ship, there will be 249.
That's governance.
Nick Ayala retweeted

Today GrayMatter Robotics, HII, and Path Robotics launched the High-Yield Production Robotics (HYPR) program. 🤝
GMR brings #FactorySuperIntelligence across surface prep, finishing, coating, and inspection. "This is what it looks like when automation moves beyond individual tasks to making entire production environments smarter," says Ariyan Kabir (@ariyankabir), Co-Founder & CEO of GrayMatter Robotics.
⬇️Full announcement in comments
#Shipbuilding #Manufacturing #AI #PhysicalAI #Robotics #News @SeaAirSpace @WeAreHII @PathRobotics


Nick Ayala retweeted

The next generation American shipyard runs on #FactorySuperIntelligence.
This week on Fox Business, Ariyan Kabir joins Cheryl Casone to talk about our partnership with HII and what it means for American shipbuilding and national security. 🚢
⬇️Full broadcast in the comments
📍Find us at Sea-Air-Space Expo, booth 923
#Shipbuilding #Manufacturing #AI #PhysicalAI #Robotics #FoxNews #MorningswithMaria @FoxBusiness @MariaBartiromo @cherylcasone @SeaAirSpace @WeAreHII @ariyankabir
Nick Ayala retweeted

99% of Ramp uses ai daily. but we noticed most people were stuck — not because the models weren't good enough, but because the setup was too painful and unintuitive for most. terminal configs, mcp servers, everyone figuring it out alone.
so we built Glass. every employee gets a fully configured ai workspace on day one — integrations connected via sso, a marketplace of 350+ reusable skills built by colleagues, persistent memory, scheduled automations. when one person on a team figures out a better workflow, everyone on that team gets it and gets more productive.
the companies that make every employee effective with ai will compound advantages their competitors can't match. most are waiting for vendors to solve this. we decided to own it.
Seb Goddijn @sebgoddijn
Nick Ayala retweeted

Judging by my tl there is a growing gap in understanding of AI capability.
The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This group's reaction is to laugh at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models this year, especially OpenAI Codex and Claude Code.
But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.
So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.
TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit, verifiable reward functions, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy @staysaasy
The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.
Nick Ayala retweeted

Half a million skilled manufacturing jobs are unfilled today. By 2033, that number hits 4 million. 💡
Ariyan Kabir (@ariyankabir) joined @YahooFinance to talk about the HII partnership and how Factory SuperIntelligence is closing the gap, from submarine hulls to the full manufacturing value stream.
#AI #PhysicalAI #FactorySuperIntelligence #Manufacturing #Shipbuilding #Robotics #Automation #YahooFinance
Nick Ayala retweeted

90% of factory operations are still manual. Most of the tech industry looked the other way.
Ariyan Kabir joined TBPN (@tbpn) to discuss the skilled trades gap, the HII partnership, and why physical AI is the manufacturing opportunity no one saw coming.
Watch the full episode here: bit.ly/4bWfUGX
#AI #PhysicalAI #FactorySuperIntelligence #Shipbuilding #Manufacturing #Robotics #Automation #TBPN #Tech #Podcast
Nick Ayala retweeted

Thank you to The Aerospace & Defense Forum for joining us at GMR HQ. It was a privilege to show aerospace and defense leaders physical AI in action — and to hear firsthand what the future of manufacturing means to this industry.
Special thanks to David Smith, President and CEO at Robinson Helicopter Company, for being our featured speaker, and to McDermott + Bull for bringing communities of leaders like these together.
#PhysicalAI #Aerospace #Defense #Manufacturing #Automation #AI #Robotics @Robinson_Heli





@NickMAyala have to pick the right questions and listen to pattern match on though :)
Nick Ayala retweeted

I have a secret to share
After your first $2–$3 million, a paid-off home, and a good car, there is no difference in quality of life between you and Jeff Bezos. Both of you have a limited amount of time on earth; you likely have twice as much left as Jeff, if not more, so you are richer than him. A cheeseburger is a cheeseburger whether a billionaire eats it or you do.
Money is nothing but a piece of paper or a number in your app. Real life is outdoors.
Become financially independent; that's usually $2–3 million. Eat good food. Enjoy your relationships. Work out. Sleep well. Call your parents. That's all there is to life. Greed has no end.
Repeat after me: Time is the currency of life. Money is not.
The sooner you figure this out, the happier you will be.



