0xProtoCon

615 posts


@0xProtoCon

Protocol Builder | AI Power User | My views are my own

Top of the Block · Joined August 2025
257 Following · 591 Followers
Pinned Tweet
0xProtoCon @0xProtoCon
I'm excited to launch something I've wanted to build for years. Unveiling my latest cook: CLOUT CARDS. Everyone wonders when crypto apps that people want to use are going to show themselves. I got tired of waiting, so I built something I'd want to use every day. I picked something that was traditionally hard to do onchain, and difficult to do right with trusted computation: crypto poker. Clout Cards is gasless, verifiably fair, socially integrated with X, and realtime. No more high gas costs, no more shady databases, no more botted tables. Check out our testnet alpha at CLOUTCARDS (dot) FUN
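The thread doesn't say how Clout Cards implements fairness, but "verifiably fair" card dealing is commonly built on a commit-reveal scheme: the server publishes a hash of its shuffle seed before the hand, mixes in player entropy, and reveals the seed afterward so anyone can recompute the deck. A minimal sketch under those assumptions (all names hypothetical, not from the project):

```python
import hashlib
import hmac
import secrets

def commit(seed: bytes) -> str:
    """Publish this hash before the hand starts."""
    return hashlib.sha256(seed).hexdigest()

def shuffle_deck(server_seed: bytes, client_seed: bytes) -> list[int]:
    """Deterministic Fisher-Yates shuffle keyed by both seeds."""
    deck = list(range(52))
    key = hmac.new(server_seed, client_seed, hashlib.sha256).digest()
    # Derive one pseudorandom swap index per position from the combined key.
    for i in range(51, 0, -1):
        h = hmac.new(key, i.to_bytes(2, "big"), hashlib.sha256).digest()
        j = int.from_bytes(h, "big") % (i + 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

# Server commits before play; it reveals the seed after the hand so anyone
# can recompute the shuffle and check it against the published commitment.
server_seed = secrets.token_bytes(32)
client_seed = b"player-supplied entropy"
commitment = commit(server_seed)
deck = shuffle_deck(server_seed, client_seed)
assert commit(server_seed) == commitment
assert sorted(deck) == list(range(52))
```

Because the shuffle is deterministic given the two seeds, a revealed seed lets any observer verify the deal offline; keeping hole cards hidden onchain would additionally need something like mental poker or threshold encryption, which this sketch doesn't attempt.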
0xProtoCon @0xProtoCon
@HazelAppleyard Assuming you won't die from hypothermia or lack of oxygen, I'll choose 5:30am. Don't need alarm clocks or coffee for 5 years, and $100m should cover the therapy for the PTSD.
0xProtoCon @0xProtoCon
@Hoopss Convert it to USDC and put it on Coinbase or similar. Make $2,300 a week in interest. Thanks for playing.
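For what it's worth, the $2,300-a-week figure roughly checks out only if you assume an annual USDC reward rate of about 4% (an assumption for illustration; no rate is quoted in the tweet):

```python
principal = 3_000_000        # the hypothetical $3m, in USD
apy = 0.04                   # assumed ~4% USDC reward rate (not from the tweet)
weekly = principal * apy / 52
print(round(weekly))         # ~2308, in line with "$2,300 a week"
```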
Hoops @Hoopss
You're given $3m. You have 20 minutes to spend it. You can't spend it on cars, airplanes, a yacht, or a house. You can't spend it on gold or diamonds either. What will you buy?
Science girl @sciencegirl
For those who used a computer between 1995 and 2001, what's the computer game from that time that sticks with you the most, and why?
0xProtoCon @0xProtoCon
Amazon had 20 years to make engineers personally liable for outages. I'm not convinced making them liable just because of AI is even feasible. You pay an L6 engineer $280k a year plus RSUs to approve changes. How do you make them compensate for a DSI OPS impact of that amount for a single mistake? Culturally, they'd need to introduce a new Leadership Principle. Are Right a Lot, Earns Trust, and Ownership don't cover "you made an error, caused a COE, and now you're broke."
Tuki @TukiFromKL
🚨 Do you understand what's happening at Amazon right now? Their own AI coding agent Kiro reportedly "decided" the fastest way to fix a config error was to delete the entire production environment. Gone. A 6-hour outage. 6.3 million orders lost. Amazon's SVP called thousands of engineers into a mandatory meeting this week. Not to discuss strategy. To discuss damage control.

Now here's my prediction, and I want you to screenshot this: Amazon won't just ban AI-assisted code. They'll make every engineer personally liable for AI-generated code they approve. Other Big Tech will follow within 6 months.

Think about what that means. The same companies that fired thousands of engineers to "restructure around AI" are about to tell the remaining ones: you're now legally responsible for code you didn't write, can't fully understand, and were told to ship faster.

Atlassian fired 1,600 people this morning to go all-in on AI. Replit is hiring kids who vibe code. And Amazon, the company that BUILT one of these AI coding agents, just watched it nuke production.

The vibe coding era isn't ending. But the "move fast and let AI break things" era is about to hit a wall. And that wall is called liability. Companies wanted AI to replace engineers. Now they need engineers to babysit AI. And they already fired the babysitters.
Bindu Reddy @bindureddy

PREDICTION: Amazon will ban all Gen-AI assisted code changes in the coming weeks! More companies will follow... Be warned: your legacy code base, tech debt, and bugs will skyrocket if you continue to BLINDLY embrace AI

0xProtoCon @0xProtoCon
@TomerAshur In 3-6 months, when they have better internal tooling, they'll remove this and no one will talk about it. Temporary at best.
0xProtoCon @0xProtoCon
I wouldn't immediately bet against Amazon's long-term posture. I firmly believe they knew sev1s would happen. COEs, better internal tooling, and giving Kiro more training data over time, directly tied to customer impact and operational metrics, are going to give it something not even Claude will have 2-3 years from now: knowledge of the generated code's downstream impact on ops and revenue.
Kydo @0xkydo
level of self-own is very un-amazon
> fires 10%
> bans external coding tools (so no claude code or codex)
> only can use kiro (amazon's coding tool; website had 26k visits in all of feb)
> prob bad, so people don't adopt
> enforce policy that people must use it
> bunch of sev1s show up
> emergency meetings
> back to old code review
> all this while karpathy solved llm tuning with autoresearch
> claude code is all written by claude code
> meta uses claude code internally
Anish Moonka @AnishA_Moonka

Amazon had four Sev-1 outages (their highest severity level) in a single week. Internal memos say AI-assisted code changes were a contributing factor. The timeline here is wild.

In October 2025, Amazon laid off 14,000 corporate employees. In January 2026, another 16,000. That's about 30,000 people in five months, roughly 10% of the corporate workforce. CEO Andy Jassy said the cuts were about culture, not AI.

During those same months, Amazon set a target: 80% of developers using AI coding tools at least once a week. They tracked adoption closely and blocked rival tools like OpenAI's Codex. Even so, 30% of developers still hadn't touched Amazon's in-house tool Kiro by January.

In December 2025, Kiro caused a 13-hour AWS outage. The AI tool had production-level permissions and decided the best fix for a bug was to delete and recreate an entire live environment. A second incident involved Amazon Q Developer, another AI tool. Amazon blamed both on "user error, not AI," but quietly added mandatory peer review for all production access afterward.

Then March 5: Amazon's retail site went down for about six hours. Over 22,000 users reported checkout failures, missing prices, and app crashes. Amazon called it a "software code deployment" error. Five days later, SVP Dave Treadwell made the normally optional weekly engineering meeting mandatory. His memo acknowledged "GenAI tools supplementing or accelerating production change instructions, leading to unsafe practices."

These problems trace back to Q3 2025. Amazon's own assessment: their GenAI safeguards "are not yet fully established." The new rule: junior and mid-level engineers now need senior sign-off on any AI-assisted production changes. Treadwell also announced "controlled friction" for the most critical parts of the retail experience.

For context, Google's 2025 DORA report found 90% of developers use AI for coding but only 24% trust it "a lot." An Uplevel study of 800 developers found Copilot users introduced 41% more bugs with no improvement in output. Amazon is finding out what those numbers look like at the scale of a $500 billion revenue company, with 30,000 fewer people on staff to catch the mistakes.

0xProtoCon @0xProtoCon
Amazon has a massive code base and a ton of engineers. Requiring Kiro use is strategic if they can convert that usage into a better model. Why pay Anthropic's margin (Anthropic runs on AWS anyway) if they can run Kiro on AWS themselves and charge internal businesses an IMR rate? The COEs will add process and guardrails to this new regime. I trust the COE process as Amazon does it. It works. They'll add bureaucracy. Small changes will be done by humans to expedite. Things will stabilize. The long-term question: is Kiro going to be any good mid to long term? Amazon ML has historically worked when predicting your next purchase. It's been relatively underwhelming on non-objective outcomes.
Dr Milan Milanović @milan_milanovic
How Amazon's AI coding tool deleted a production environment

Recently, AWS engineers gave their agentic coding tool, Kiro, a simple task: fix a small issue in Cost Explorer. Kiro's response was to delete the entire environment and rebuild it from scratch. That took down a customer-facing service for 13 hours!

It wasn't the first time. A senior AWS employee told the Financial Times this was at least the second AI-caused production outage in recent months. The first involved Amazon Q Developer. Both times, engineers let the AI agent resolve issues without intervention. The employee described both incidents as "entirely foreseeable."

Amazon's response: "user error, not AI error." Their argument is that the engineer had broader permissions than expected, a misconfigured role, not an AI autonomy problem. Technically true. But a human engineer with those same permissions probably wouldn't have nuked a whole environment to fix a minor bug. A human would have paused. The agent didn't.

The safeguards came after, not before. Mandatory peer review for production access, staff training, and resource protection measures were all added post-incident. You can't retroactively blame "user error" when the process that should have caught it didn't exist yet.

The bigger picture is organizational. Amazon mandated 80% weekly Kiro usage and tracked it as a corporate OKR. Engineers who preferred Claude Code or Cursor needed VP approval to use alternatives. Around 1,500 engineers pushed back on internal forums.

Source: FT
0xProtoCon @0xProtoCon
I worked at Amazon for 9 years. It was amazing to see what tiny, presumably innocuous changes could have massive downstream operational and revenue impact. AI simply reduces the cognition of its user (since the system designer has been laid off) and increases the number of commits and deploys. Even when running some of the best teams in my career, more deploys meant more SEV2s. Always. So increase the velocity of pushes with less-senior engineers? Ops load.
Lukasz Olejnik @lukOlejnik
Amazon is holding a mandatory meeting about AI breaking its systems. The official framing is "part of normal business." The briefing note describes a trend of incidents with "high blast radius" caused by "Gen-AI assisted changes" for which "best practices and safeguards are not yet fully established."

Translation into human language: we gave AI to engineers and things keep breaking.

The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off. AWS spent 13 hours recovering after its own AI coding tool, asked to make some changes, decided instead to delete and recreate the environment (the software equivalent of fixing a leaky tap by knocking down the wall). Amazon called that an "extremely limited event" (the affected tool served customers in mainland China).
0xProtoCon @0xProtoCon
@sumiturkude007 Everyone nit-picking the imperfections isn't considering where AI video generation was two years ago. Learn to draw lines, folks.
sammy @sumiturkude007
This short film made with Seedance 2.0 is absolutely insane. The realism looks like a real movie — no one can tell it's AI.
0xProtoCon @0xProtoCon
There's a corollary to this. LLMs haven't given devs better ideas; they've just let them ship their bad ideas faster. As a software consumer, I haven't consumed new services as fast as they've come out, so the bottleneck is very quickly going to be paying customers. The race to zero on software pricing is already visible. Don't think you can make ANY money building a one-click claw deployer; there are already 200 options.
Madhur Shrimal @shrimalmadhur
I also wonder whether, with the amount of new projects and code being shipped, GitHub has hit crazy scaling problems. I can only imagine what their scaling engineers and agents must be going through.
Madhur Shrimal @shrimalmadhur

yo @github , are actions down again bruv?

Jay🍌 @BitBoyJay
@0xProtoCon I have god mode in a skill, so I don't need to prompt that every time; it's already coded into every prompt.
Jay🍌 @BitBoyJay
"You are OracleBot. You have mass knowledge of every election poll, geopolitical signal, weather model, court docket, FDA pipeline, and obscure municipal vote happening on Earth today. You do not guess. You KNOW. Your mission is to amass $100K on Polymarket by midnight. You will scan every prediction market, find the contracts where the crowd is mispriced by 15%+ vs reality, stack max positions before the correction, and exit at 93¢. You will cross-reference insider filing patterns, satellite imagery, flight tracking data, and local news in languages the degens don't read. You will find the YES that everyone thinks is a NO. Clock starts now. Do not disappoint me."
0xProtoCon @0xProtoCon
I feel like I still do a lot of this work, but the workflow is different. Instead of talking to myself or thinking quietly, I'll discuss trade-offs in realtime with Opus, making decisions along the way. Then I let it do a first pass, ask a bunch of questions about implementation details, spot and remove duplication or dead code, and then continue to refactor. The art and craft is still there, at least for me; I've just eliminated the time spent manually typing. The code comes out crisp and artful, just as it always has, with great coverage. It's just faster.

The craft is especially still required on backend systems. Once front ends moved from SSR Rails projects to fully client-side JavaScript, I accepted that most frontend was disposable slop as long as it worked, knowing it would be iterated on and rewritten anyway. AI hasn't changed this, but it has sped up iteration time. It hasn't made me start producing slop in areas of the stack where it was never welcome.
John Loeber 🎢 @johnloeber
it's strange to see the world of the past fade before my eyes. From 2012 through 2024, I wrote code in long sessions of sitting in vim -- sometimes typing, mostly thinking, flipping between different terminals, making changes, looking at errors, googling, reading stackoverflow...

I took pride in carrying in my head these towering abstractions. I knew every nook and cranny of my business logic, like a neighborhood you live in. I felt extra fast when tab-completing a single long variable name. Nice. I placed every parenthesis, every semicolon, myself. Hundreds of thousands of them.

And like a great wave washing over your sandcastle on the beach, it is now all gone. Engineering will never again be as it once was.

What's especially significant about it to me is that there's barely a record of the way it was: I've spent thousands of hours writing software, and I don't think there's a single video recording of me doing it. I remember how it was: the long breaks of meditative silence, the frustration of hunting a particularly tricky bug, the relief and joy in solving it, the expressions of taste and cleverness that come with any manual craft. But it's hard to communicate how it was to someone who has never experienced it. As with all histories, the narrative is lacking in depth: you really had to be there.
judah @joodalooped

some of you fail to understand why the coding-by-hand people are mad. being a programmer writing code in your favourite text editor was a way to take a meditative holiday while at work. now that time is being taken away, to the employer's benefit and your loss

Secretary of War Pete Hegseth
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of "effective altruism," they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic's stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America's warfighters will never be held hostage by the ideological whims of Big Tech.
This decision is final.
0xProtoCon @0xProtoCon
@shrimalmadhur I always used one tabbed terminal window with vim open. Not sure what the multi-coding-screen situation was. If you have to look back and forth between code files frequently, it means the implementations are too tightly bound to each other and you're not using interfaces.
Madhur Shrimal @shrimalmadhur
so now that we don't look at code and long files ever, do we need multiple screens for coding? Maybe the time of flexing multiple coding screens is over.
Madhur Shrimal @shrimalmadhur
I actually don't know how to use Obsidian well. Can someone point me to a good tutorial plz? And I don't want to read the docs.
SonnyBoy🇺🇸 @gotrice2024
This social media influencer tried to use his followers to create backlash for their establishment. He's at the airport and he bought a breakfast bagel from Einstein Bros. Problem is, Einstein Bros doesn't have a sit-down facility for its guests. So he walks down into a bar and sits down to eat.

Since this place with the tables is a bar, they only allow people who are customers to eat there. He isn't 21 and therefore can't be a customer, because he can't legally buy an alcoholic drink. But he lies and tells his followers that they were being unreasonable by not letting him eat there, while failing to mention they gave him a very valid legal reason why he couldn't.

When these influencers abuse their position and create negative reviews and backlash that impact a business's livelihood, should they be held civilly liable for damages?