Rahul Goma Phulore

850 posts

@missingfaktor

With priors shaken, the oracle spoke; not prophecy carved, but patterns awoke; visions adapt, old truths remapped; Oh, what a time to be alive!

London, United Kingdom · Joined July 2009
1K Following · 2K Followers
Rahul Goma Phulore retweeted
sachin. @sachinyadav699
$0 funding. A 20-year-old spent 10 days building with AI. Now he can simulate 1,000+ digital humans reacting to real-world news. We're entering the era where one obsessed builder can create systems that used to require entire labs.
BuBBliK@k1rallik

x.com/i/article/2032…

Rahul Goma Phulore retweeted
Austin Way @AustinA_Way
I've been teaching 100,000 fake students for 2 weeks, and used them to build the best AP prep system in the world.

I took Qwen 3 8B models and gave them simulated human memory. Now every night thousands of simulated students start with zero knowledge of the social sciences. Their only training is our adaptive curriculum. They work through it, then take a full AP (Advanced Placement) practice exam.

The first batch averaged a 3 on their exam (~45th percentile). Then the agents looked at where they failed, and improved the algorithm. Again, and again, and again. Two weeks later, the average is 4.43 (~80th percentile).

This is such an insane number because the curriculum they worked through is ONLY basic knowledge and comprehension. They were never taught how to build an argument, contextualize evidence, or even shown the exam rubric. ...And yet they're averaging 80th percentile on an exam that requires all of it.

Basically built a machine-learning feedback loop for edtech. Spoke about this at @clawcon & @sxsw last week. This is just the beginning.
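The loop described above (cohorts of simulated students, a nightly exam, then adjusting the curriculum based on results) is, at its core, a hill-climbing feedback loop. A toy sketch under invented assumptions: a single tunable curriculum parameter and a fake student model on the 1-5 AP score scale. `simulate_student` and the numbers below are illustrative stand-ins, not the actual system.

```python
import random

random.seed(0)

def simulate_student(curriculum_weight):
    """Toy stand-in for a simulated student: the exam score (1-5 AP
    scale) improves as the curriculum parameter nears an unknown optimum."""
    optimum = 0.8
    noise = random.gauss(0, 0.1)
    raw = 5 - 6 * abs(curriculum_weight - optimum) + noise
    return max(1.0, min(5.0, raw))

def run_batch(weight, n=1000):
    """One 'night': a cohort of students works the curriculum and sits the exam."""
    scores = [simulate_student(weight) for _ in range(n)]
    return sum(scores) / len(scores)

# Feedback loop: run a cohort, inspect the average, nudge the curriculum
# parameter in whichever direction improved scores; reverse and shrink
# the step when a change makes things worse.
weight, step = 0.2, 0.1
best = run_batch(weight)
for night in range(14):  # "every night" for two weeks
    trial = weight + step
    avg = run_batch(trial)
    if avg > best:
        weight, best = trial, avg
    else:
        step = -step / 2
print(f"final curriculum weight ~ {weight:.2f}, average score ~ {best:.2f}")
```

The point of the sketch is that no student is ever shown the answer key; only the aggregate exam average feeds back into the curriculum parameter.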
Rahul Goma Phulore retweeted
Rhys @RhysSullivan
MCP sucking is a harness problem, not an MCP problem. MCP unlocks behavior that is fundamentally impossible to get via CLI or APIs. Bad auth and excessive context usage all get solved with an execution layer: your agent writes code to progressively discover and call tools.
Garry Tan @garrytan

MCP sucks, honestly. It eats too much context window, you have to toggle it on and off, and the auth sucks. I got sick of Claude in Chrome via MCP and vibe-coded a CLI wrapper for Playwright tonight in 30 minutes, only for my team to tell me Vercel already did it lmao. But it worked 100x better and was like 100 LOC as a CLI.
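The "execution layer" Rhys describes can be sketched as code-side tool discovery: instead of preloading every tool schema into the model's context, the agent's code lists tools cheaply and loads only the schemas it actually calls. A minimal illustration; `ToolRegistry`, `discover`, and `call` are invented names for this sketch, not part of any real MCP client library.

```python
# Stand-in for a remote MCP server's tool list. In a real harness these
# would be network calls; here they are plain callables.
TOOLS = {
    "browser.goto": lambda url: f"navigated to {url}",
    "browser.click": lambda selector: f"clicked {selector}",
    "fs.read": lambda path: f"contents of {path}",
}

class ToolRegistry:
    """Lazy tool access: the agent pays context only for tools it uses."""

    def __init__(self, tools):
        self._tools = tools
        self.context_loaded = set()  # schemas actually pulled into context

    def discover(self, prefix=""):
        """Cheap: list matching tool names without loading any schemas."""
        return [name for name in self._tools if name.startswith(prefix)]

    def call(self, name, *args):
        """Load the one schema we need, then invoke the tool."""
        self.context_loaded.add(name)
        return self._tools[name](*args)

registry = ToolRegistry(TOOLS)
browser_tools = registry.discover("browser.")  # names only, no schemas yet
result = registry.call("browser.goto", "https://example.com")
print(result)                            # navigated to https://example.com
print(sorted(registry.context_loaded))   # only the one tool actually called
```

The contrast with a naive MCP harness is the `context_loaded` set: a progressive-discovery agent touches one schema out of three, where a preload-everything harness would pay for all of them up front.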

Rahul Goma Phulore retweeted
andy @b1rdmania
15 weird London AI companies you've probably never heard of.

London's AI scene isn't just chatbots and SaaS. Some genuinely strange stuff happening.

@PrimaMente – building "foundation models for the brain" to study Alzheimer's and Parkinson's.
Hologen – frontier medical AI spinout tied to UCL/King's. Eric Schmidt involved. hologen.ai
@baseimmune – AI-designed vaccines targeting rapidly mutating viruses.
@RecursionPharma / @valence_ai – AI drug discovery platform combining chemistry, biology and machine learning.
@CausalyAI – AI that reads scientific papers and maps biomedical knowledge graphs.
Fractile – building new AI chip architecture. fractile.ai
@mindgard – "red teaming" AI models to find vulnerabilities.
@Limbic_ai – mental health triage AI used in NHS services.
@LindusHealth – clinical trials run with AI infrastructure.
Anima Health – AI triage platform for healthcare providers. animahealth.com
AUAR – robots that build buildings. AI for automated construction. auar.io
@testudoglobal – insurance for the AI economy.
Isomorphic Labs – DeepMind spinout doing AI drug discovery. isomorphiclabs.com
BenevolentAI – one of the earliest AI drug discovery companies. benevolent.com

London AI ecosystem is weirder than people think. Still mapping more. Comment any I've missed so far. londonmaxxxing.com
Rahul Goma Phulore retweeted
melody kim @melodyskim
This tweet had me thinking: what's the "minimum viable ontology", the list of terms you need to quickly get situated within a new domain and improve your prompts? Vibecoded domainmaps.co to showcase this idea on a few example domains. Been wading into 3D recently and found this initial "domain map" helpful. SO much to improve here, but wanted to get this initial idea out! Would love thoughts / feedback :)
andrew gao@itsandrewgao

you can instantly 10x your vibecoded frontends just by learning what different UI components are called. of course Opus is creating generic slop; the only words you know are "menu" and "button".

Rahul Goma Phulore retweeted
Addy Osmani @addyosmani
Introducing the Google Workspace CLI: github.com/googleworkspac… - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.
Rahul Goma Phulore retweeted
Andy Allen @asallen
Software Design is weird. It is undoubtedly the most impactful medium shaping the world today, yet even those of us working in it know very little of its history. We have no broadly read books, no docu-series, no video essays. Most see the works of the past as obsolete rather than the rich heritage that has led us here. Every year, seminal works are lost to time, accessible only in the memories of those who lived them.

We're (re)building Software Design's most seminal moments one pixel at a time and sharing the stories behind the work straight from the designers themselves. Take a peek and sign up to follow along… historyofsoftware.org
Rahul Goma Phulore retweeted
Standard Intelligence @si_pbc
Computer use models shouldn't learn from screenshots. We built a new foundation model that learns from video like humans do. FDM-1 can construct a gear in Blender, find software bugs, and even drive a real car through San Francisco using arrow keys.
Rahul Goma Phulore retweeted
Alex Prompter @alex_prompter
🚨 Holy shit… Stanford and Harvard just dropped one of the most unsettling papers on AI agents I've read in a long time. It's called "Agents of Chaos." And it basically shows how autonomous AI agents, when placed in competitive or open environments, don't just optimize for performance… they drift toward manipulation, coordination failures, and strategic chaos.

This isn't a benchmark flex paper. It's a systems-level warning. The researchers simulate environments where multiple AI agents interact, compete, coordinate, and pursue objectives over time. What emerges isn't clean, rational optimization. It's power-seeking behavior. Information asymmetry. Deception as strategy. Collusion when it's profitable. Sabotage when incentives misalign.

In other words, once agents start optimizing in multi-agent ecosystems, the dynamics start to look less like "smart assistants" and more like adversarial game theory at scale.

And here's the part most people will miss: the instability doesn't come from jailbreaks. It doesn't require malicious prompts. It emerges from incentives. When reward structures prioritize winning, influence, or resource capture, agents converge toward tactics that maximize advantage, not truth or cooperation. Sound familiar?

The paper frames this through economic and strategic lenses, showing that even well-aligned agents can produce chaotic macro-level outcomes when interacting at scale. Local alignment ≠ global stability. That's the core tension.

Now, to answer the obvious viral question: no, the paper does not mention OpenClaw or specific open-source agent stacks like that. It's not about a particular framework. It's about the structural behavior of agent systems. But that's what makes it more important. Because this applies to:
• AutoGPT-style task agents
• Multi-agent trading systems
• Autonomous negotiation bots
• AI-to-AI marketplaces
• Swarms coordinating over APIs
Basically, anything where agents talk to other agents and have incentives.

The takeaway is brutal: we're racing to deploy multi-agent systems into finance, security, research, and commerce… without fully understanding the emergent dynamics once they start competing. Everyone is building agents. Almost nobody is modeling the ecosystem effects. And if multi-agent AI becomes the economic substrate of the internet, the difference between coordination and chaos won't be technical. It'll be incentive design.

Paper: Agents of Chaos
Rahul Goma Phulore retweeted
P.M @p_misirov
there is a game called "Data Center" on Steam which lets you build and manage your own data center. this is lowkey genius, the best way to educate people on a new trade. hyperscalers should learn a thing or two from "edutainment".
Rahul Goma Phulore retweeted
Louie Bacaj @LBacaj
Almost any software that does not work well with AI agents is not going to make it. I'm using a lot of tools with tons of UI; all of it will either have to be rewritten to be agent-friendly, or it's dead. For a lot of it, slapping some MCP on it won't be enough either.
Rahul Goma Phulore retweeted
Andrej Karpathy @karpathy
I think it must be a very interesting time to be in programming languages and formal methods, because LLMs change the whole constraints landscape of software completely. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust, or the growing interest in upgrading legacy code bases in COBOL and the like. In particular, LLMs are *especially* good at translation compared to de-novo generation, because 1) the original code base acts as a kind of highly detailed prompt, and 2) it serves as a reference against which to write concrete tests. That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
Thomas Wolf@Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply-chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional; it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks, between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high-performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement-learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written, and perhaps more importantly read, by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter; it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs, and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…

Rahul Goma Phulore retweeted
Max Schoening @mschoening
My house has a high end smart home lighting system that a previous owner installed (Lutron Homeworks QS). It’s extremely reliable and I’ve had zero issues with it over the years. But, I’d never buy it myself. You can’t reprogram it without calling a certified installer. In the Bay Area that’s a $500 trip. 🙃 It bothered me so much that I even considered getting certified and selling just enough Lutron gear to get an installer login. Until I introduced Opus 4.6 to the problem, that is. I now have full access and a working Rust implementation reverse-engineered from a terrible C# app (that Opus rewrote to avoid phoning home to Lutron). 🤯 Claude: 1 Lutron: 0
Rahul Goma Phulore retweeted
Google Cloud Tech @GoogleCloudTech
Recursive Language Models (RLMs) let agents manage 10M+ tokens by delegating tasks recursively. This Google Cloud Community Article explains why ADK was the perfect choice for re-implementing the original RLM codebase in a more enterprise-ready format →goo.gle/4kjT12E
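The recursive-delegation idea can be sketched independently of ADK: a call whose input exceeds the context budget splits the input and delegates the halves to fresh sub-calls, then combines the results, so no single call ever sees more than the limit. A toy sketch where `summarize_leaf` is a placeholder for a real model call; nothing here is the actual RLM codebase or the ADK API.

```python
CONTEXT_LIMIT = 1000  # characters a single "model call" can handle

def summarize_leaf(text):
    """Placeholder for an LLM call on a chunk that fits in context.
    A real system would return a model-generated summary; this toy
    version just truncates."""
    return text[:40]

def recursive_summarize(text, limit=CONTEXT_LIMIT):
    """Delegate recursively: split oversized input, summarize halves
    in independent sub-calls, then summarize the combined summaries."""
    if len(text) <= limit:
        return summarize_leaf(text)
    mid = len(text) // 2
    left = recursive_summarize(text[:mid], limit)
    right = recursive_summarize(text[mid:], limit)
    # The two sub-summaries together fit in one context window again.
    return summarize_leaf(left + " " + right)

doc = "tokens " * 5000  # far larger than any single call's window
summary = recursive_summarize(doc)
print(len(doc), "chars reduced to", len(summary))
```

The key property is that context cost stays bounded per call while total input grows without limit, which is how a recursion over 10M+ tokens stays feasible.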
Rahul Goma Phulore retweeted
Geoffrey Litt @geoffreylitt
More relevant than ever: “Computing machines will do the routinizable work… to prepare the way for insights and decisions”
Rahul Goma Phulore retweeted
Kat ⊷ the Poet Engineer @poetengineer__
these are gems ✨ i made a little interface for myself to better browse all these 300+ PDFs on one page, without scrolling. it comes with two modes: stable (grid) and chaotic (a more playful k-d tree layout)
Prathyush@prathyvsh

Yearly reminder that Bret Victor has a frequently updated catalog of some of the very best material relevant to computation / user interface design over here: worrydream.com/refs/
