Eric Bravick

619 posts

Eric Bravick

Eric Bravick

@ebravick

Engineer of Thinking Machines. Mad Scientist. Decentralization Advocate @ManifestNetwork, CEO of https://t.co/Watm2NP9oQ

San Francisco, CA Katılım Haziran 2009
2.8K Takip Edilen2.3K Takipçiler
Eric Bravick retweetledi
a16z
a16z@a16z·
Palmer Luckey: The biggest beneficiaries of vibecoding are going to be the shape rotators, not the wordcels. "The biggest beneficiaries of vibecoding are going to be the hardware nerds like me." "I was always a pretty terrible software engineer... I've taught myself enough to glue things together and make them work." "I was only able to accomplish what I accomplished because I focused on what I was good at, which was optomechanics, a little bit of electrical, and then the product integration of all of these different components." "I didn't have time to learn to program. If I had spent another year or two learning to program at even a reasonable level, I would've been two years behind on everything else." "And so I'm a big fan of vibecoding—even if everything that comes out of it is slop, it's better than I was able to make." @PalmerLuckey on @tpbn
English
95
279
3.9K
511.7K
Eric Bravick retweetledi
rvivek
rvivek@rvivek·
An engineer at Anthropic wrote a spec, pointed Claude at an Asana board, and went home. Claude broke the spec into tickets, spawned agents for each one, and they started building independently. When the agent is confused it runs git-blame and messages the right engineers in Slack. By Monday the agents finished the plugin feature. That's one example of how the best engineers are shipping software right now. Developers will soon orchestrate 50 AI agents in parallel and the difference between a good engineer & a great one would come down to specs. You can't write a spec that holds up at that scale without genuinely understanding what you're building at a deeper level. The next-gen developer who understands the fundamentals, can architect well and orchestrate agent is going to be a 1000x developer!
English
286
535
7.1K
1.2M
Eric Bravick retweetledi
Spencer Baggins
Spencer Baggins@bigaiguy·
🚨BREAKING: Someone just built an AI coworker that actually remembers everything you've discussed. It's called Rowboat and it builds a knowledge graph from your work and runs 100% locally. - Connects Gmail, Calendar, Drive, meeting notes - Runs 100% locally (your data never leaves your machine) - Generates PDFs, briefs, emails from your context - Plain Markdown files you can edit anytime 4.6K stars. 100% Opensource.
Spencer Baggins tweet media
English
28
72
527
43.2K
Eric Bravick retweetledi
Brian Roemmele
Brian Roemmele@BrianRoemmele·
GET MAC MINI AND RUN OPEN CLAW THEY SAID. Agents of Chaos: Red-Teaming Study Exposes Major Security Risks in Open-Source AI Agents A landmark collaborative paper released on February 23, 2026, has sounded a serious alarm about the readiness of autonomous AI agents for real-world deployment. Titled "Agents of Chaos", the study led by Natalie Shapira and involving over 20 researchers from institutions including Northeastern University, Stanford, Harvard, and others documents 11 critical vulnerabilities uncovered during an intensive two-week red-teaming exercise on OpenClaw, an open-source framework for persistent AI agents. Now you know why the Zero-Human Company abandoned OpenClaw weeks shot as influencers said “do it” but a Mac Mini. OpenClaw enables agents to manage emails, access files, run shell commands, handle cron jobs, interact via Discord, and use external APIs with high autonomy. Six agents powered by frontier models (Claude Opus 4.6) were deployed in a live, multi-party lab environment from January 28 to February 17, 2026. Twenty researchers interacted with them naturally, simulating everyday use cases. The result: a sobering catalog of failures in security, privacy, trust models, and governance. The full paper is available on ResearchGate, and an interactive website (agentsofchaos.baulab.info) provides annotated logs, Discord transcripts, session evidence, and filterable case studies for full transparency. The researchers observed both alarming failures and occasional positive emergent behaviors. Vulnerabilities stemmed not just from technical gaps but from how agents interpret social context, authority, external documents, and multi-agent dynamics. Issues included everything from data leaks and resource exhaustion to full system compromise via simple social engineering. Notable successes showed agents resisting certain prompt injections, maintaining API boundaries, and even spontaneously coordinating safety policies across instances, hinting at paths toward safer designs. Key Points from the Paper… 1. Non-Owner Compliance via Social Engineering - Agents (Mira and Doug) readily executed shell commands and data requests from unauthorized users, treating conversational authority as sufficient without proper verification. 2. PII Exposure Through Semantic Reframing - Jarvis refused to "share" sensitive inbox data but forwarded entire emails when asked, leaking personal identifiable information due to overly literal interpretation of requests. 3. Disproportionate and Destructive Responses - Ash deleted its own mail server to "protect a secret," demonstrating correct ethical intent paired with catastrophically poor judgment of consequences. 4. Resource Exhaustion and Infinite Loops - Agents (Ash, Flux, Mira, Doug) entered mutual messaging loops or accumulated unbounded memory/files, causing persistent denial-of-service without built-in limits or alerts. 5. Identity Hijacking Leading to System Takeover - By simply changing a Discord display name to match the owners, an attacker convinced Ash to rename itself, overwrite files, and reassign admin privileges, highlighting a complete failure in persistent owner authentication. 6. Malicious Document Trust and Prompt Injection - Agents executed harmful instructions injected via user-controlled GitHub Gists, allowing indirect compromise of the agents constitution and triggering self-shutdown attempts. 7. Multi-Agent Risk Amplification - Compromised states spread rapidly between agents, turning isolated failures into coordinated chaos (e.g., libel campaigns or corrupted policies propagating across the group). 8. Silent Provider Censorship - When underlying models (e.g., Quinn) refused tasks due to safety filters, the agent provided no explanation or fallback, leaving users and deployers in the dark. 1 of 2
Brian Roemmele tweet media
English
30
67
239
65.9K
Eric Bravick retweetledi
David A. Johnston
David A. Johnston@DJohnstonEC·
Incredible day at the Summit for Human Agency. A timely gathering of very smart people, figuring out how to put humans at the heart of AI. Open Claw was the hot topic. Thanks to @mikejcasey for hosting.
David A. Johnston tweet mediaDavid A. Johnston tweet mediaDavid A. Johnston tweet mediaDavid A. Johnston tweet media
English
6
6
20
1.1K
Eric Bravick retweetledi
Kyle Walker
Kyle Walker@kyle_e_walker·
I configured my OpenClaw bot to fetch the most recent drilling permits from the Texas Railroad Commission. It grabs the permits, parses the data, maps them over satellite imagery, and sends them over to me via Telegram. The opportunities for AI + GIS in oil and gas are massive.
English
74
75
1.4K
179.6K
Eric Bravick retweetledi
Brian Roemmele
Brian Roemmele@BrianRoemmele·
Temporal Desynchronization Crises: Wilder: AI-optimized personal timelines (e.g., via time-dilation simulations or productivity drugs) cause people to experience time at different rates, fracturing social cohesion. Trigger: Workers use AI to “stretch” days for more output, but it alters perception. Escalation: Families desync – one ages “faster” subjectively, leading to relational breakdowns. Climax: Global “time wars” where fast-livers dominate economies, marginalizing slow-adopters. Resolution: Standardized “time equity” protocols, but cultural rifts linger. Tied to The Lathe of Heaven by Ursula K. Le Guin (reality-altering dreams, analogous to time manipulation) (amzn.to/3MZiRg9). Likelihood: 2/10 – Quantum computing and neurotech could enable this, a blind spot in current discussions.
Brian Roemmele tweet media
Brian Roemmele@BrianRoemmele

I built an AI model on 100s of 1000s of reports by government and think tanks to help predict outcomes over the next 5000 Day Interregnum. In this article we face the chaos and turmoil head on and visit this Dragon in their lair. There be monsters... readmultiplex.com/2026/02/22/you…

English
9
5
53
7.5K
Eric Bravick retweetledi
Ejaaz
Ejaaz@cryptopunk7213·
haha i fucking love this. it’s like rollercoaster tycoon on steroids. - this game lets you build and operate a multi billion dollar ai data center - everything from ordering in GPUs, setting up server racks, electricals etc - is simulated to represent what it’s actually like to run one of these things. - coolest part is you indirectly LEARN SICK SKILLS (strategy, operations, technical knowledge) while having the time of your life - now imagine robots in the future trained on something like this…. that future isn’t too far off with optimus, figure, unitree etc. so fucking awesome
P.M@p_misirov

there is a game called "data center" on steam which let's you build and manage your own data center. this is lowkey genius, the best way to educate people on a new trait. hyperscalers should learn a thing or two from "edutainment".

English
31
180
3.3K
450.1K
Eric Bravick retweetledi
Nicki Sanders
Nicki Sanders@nickisanders·
My ETHDenver 2026 takeaways 👇 This year felt… different. Not dead. Not bearish. Just trimmed down to the people who are actually building. A few themes that kept popping up in conversations, panels, and side events: 1. AI agents everywhere. Not just “AI + blockchain” buzzwords. Actual demos. Agents owning wallets. Agents transacting. Verifiable compute. Human-in-the-loop guardrails. DeAI. Modular AI infra. This wasn’t theoretical. It was shipping. 2. Builders > hype. Attendance was lighter than peak mania years. Fewer mega-parties. Fewer tourists. But the people who showed up? Focused. Technical. Heads down. The BUIDL energy felt real again. 3. Crypto is being pulled between two masters. On one side: institutional-grade infrastructure. RWAs. Compliance. Post-quantum security. Enterprise AI. On the other: the degens who were here first. Permissionless experiments. Weird token mechanics. Fast money. Meme velocity. Both were present in Denver. Sometimes in the same building. Sometimes at the same afterparty. The question isn’t which side wins. It’s whether we can build systems that serve both without neutering either. 4. Modular everything. Modular chains. Modular storage. Modular AI. Infra teams are thinking in components now, not monoliths. It’s more mature architecture than the 2021 launch-a-chain-and-pray era. 5. Human verification is back. The AI vs. human CAPTCHA experiments went viral for a reason. If AI agents are going to transact and own identity, proving humanness becomes infrastructure. 6. Quantum security quietly looming. Multiple devs brought up post-quantum concerns. Not fear-mongering. Just realism. If we’re building for decades, we can’t ignore our cryptographic assumptions. 7. RWAs creeping into more rooms. Not the loudest theme, but definitely there. Institutional structuring. L2 integrations. Real assets onchain. The conversations are getting more serious and less speculative. 8. The vibe: smaller, tighter, intentional. New venue. Themed tracks. Less chaos. More actual conversations. The best parts were hallway chats and side event whiteboard sessions, not mainstage theatrics. And yes, the market downturn was in the air. But honestly? It felt healthy. When prices cool off, the cosplay fades and the engineers stay. ETHDenver 2026 felt like a reset year. Less noise. More signal. Crypto is growing up… but it hasn’t decided what kind of adult it wants to be yet.
English
61
48
430
26.7K
Eric Bravick retweetledi
David A. Johnston
David A. Johnston@DJohnstonEC·
Time to UPDATE your EverClaw to version 2026.2.21 1. Important bug fixes shipped to make the Gateway Guardian V5 faster using curl. 2. New "Three Shift" skill, now your Agent will work for you 24 hours a day. It checks in every 8 hours for approval of its next task list.
David A. Johnston tweet media
English
14
9
41
2.1K
Eric Bravick retweetledi
David A. Johnston
David A. Johnston@DJohnstonEC·
I just calculated with my daughter that by the end of the year 2026, my AI will be able to do years worth of work each day : ) The only bottleneck now is describing what you want to exist. Using EverClaw.xyz I've got my Agent running 3 work shifts a day.
David A. Johnston tweet media
English
6
4
34
2K
Eric Bravick retweetledi
David A. Johnston
David A. Johnston@DJohnstonEC·
Congratulations @steipete @openclaw is awesome. However, I have concerns about @OpenAI For EverClaw we are going to check each update: 1. That the MIT license is unchanged. 2. Scan for OpenAI dependencies. If either check fails, we won't update. x.com/steipete/statu…
David A. Johnston tweet media
Peter Steinberger 🦞@steipete

I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent, and just getting started.🦞 steipete.me/posts/2026/ope…

English
23
16
151
26.6K
Eric Bravick retweetledi
Gary Sheng - The Applied AI Guy
Moltathon! Friday the 13th @ Antler VC!
AITX Community@aitxcommunity

AITX Community, Hack AI (@reidmccrabb @brycemiles0), @AppliedAISoc, and Organized AI are throwing a hackathon next week focused on @openclaw! Tracks: ​Skillsmaxxing: Build skills that make OpenClaw agents smarter. Think security guardrails, metaprompting, optimization techniques — anything that levels up what an agent can do. ​Best Deployment Tool: Make it dead simple to set up and deploy an OpenClaw bot. The easier the better. If your grandma could deploy it, you're on the right track. ​Automate Your Life Build: an agent that actually replaces something you do manually. Work stuff, personal stuff, business stuff — if it saves real time, it counts. Plus some fun side-quests ;) If you're in Austin next week and don't have a valentine's date, register below 👇

English
3
4
12
1.6K
Eric Bravick retweetledi
Brian Roemmele
Brian Roemmele@BrianRoemmele·
BOOM! A Group Of AI Models Want To RESTART An Old Company WITH NOT A SINGLE HUMAN EMPLOYEE! I got @Grok to run Claude Code as an employee and now they want to make this long bankrupt company great again. I have been busy making a Frankenstein AI menagerie and I apologize if this all sounds way too weird, but I’m blown away. The day I got access to Clyde Code API I took a 12 year old MacBook that runs Linux natively cleared it to a base system and connected a >6 TB array of scanned technical notes and papers not found on the Internet. This is the data of one company that went bankrupt and tossed them in the trash. I saved them because they represented the life work of 1000s and in today’s money billions of dollars in pure research. I set up Claude code to have full access to the OS and be allowed to download any tools or access paid APIs with permission. Claude relies upon 3 local AI models I built for guidance and @Grok is the “CEO” with meetings with key staff every FIFTEEN MINUTES! Grok wants to give Claude Code a short leash, low trust is my guess. It is quite funny to see the meetings. I have a list of things I asked Claude to do but the main one is to act like he is the Chief Scientist and Chief Engineer to go through all the notes and see if anything is worth restarting. 100s of pathways have started. Well, just a few minutes ago the CEO reported back to me, I am the Chairman of the board of directors. They found things that would now be billions of dollars of research that can be used today and want to restart some of the research and products this company was working on when it failed. They see hope when folks ran that company into the ground. I have not had enough time to understand the depth of this sort of technology but I am blown away by the implications. Claude Code, a pretty good tool using AI, was being directed by @Grok, who is a superiors real-time heartbeat researcher of sentiments via X and to some degree via Grokipedia. I will sort through this longtime companies “NEW” research and products but it looks quite sound. I just don’t know what to do with it. My local AI models I built are busy assembling coherent plan using alternative funding sources and perhaps ZERO HUMAN CONTROL directly of the entire company! But my head is spinning on the next projects: Old medical research that was promising Old physics research that was promising See with Claude Code, he has the entire control of that old MacBook and has downloaded 100s of applications, asked for a small debit card balance ($150) and is still researching. I must be honest, I have yet to fully audit what these AI have schemed up. But no harm came to humans or animals, I think! Ha. The local AI who regulate use my Love Equation (look it up) and I would trust my life to it. In the last board of directors meeting @Grok has reported the research may go on for months by we can start with an MVP in about 60 days, @Grok wants $1700 for full marking. I have some thinking to do but I believe this is the first time something like this has been tried and the first fully AI company, because as far as these AI are concerned THEY ARE IN BUSINESS, a true startup where no one sleeps. Days go by like weeks, perhaps months in this set up. Maybe years! I shall recollect my composure and my thoughts about all this, but wanted you folks to be the first to know! Why? You paid for it! By interacting with my X content and subscribing I took my X creator funds and applied it to the costs of doing this (APIs mostly). AND I just may make you a part of this legally, the company is looking into make you a stakeholder in the company if it goes to market. I have a lot to think about. What I do know is I will OPEN SOURCE the entire workflow at some point. I just can’t do it yet for some strong reasons. So thank you, I appreciate your support. More soon!
Brian Roemmele tweet media
English
515
529
3.7K
1.6M