
Ryan White
1.5K posts

Ryan White
@AdDadRyanW
Bleeding edge ads and dad jokes - my two weapons of choice.
Cambridge · Joined July 2021
58 Following · 73 Followers

@GoogleStartups Ad localisation is where campaigns go to die. Burning cash on 12 variants like a Techmarine juggling servitors. Launching next in the Gulf, so if this handles regional nuance without howlers, colour me interested.

@godofprompt Opus for structure, GPT for syntax. That's the split I'd have put money on. Still, calling it 2026 seems optimistic. We're barely past 4.0.

@Ronald_vanLoon Spot on. In ad tech the model's just the engine, but the orchestration layer's where margin actually lives. Hybrid routing cut our inference costs by 40%.

The second misconception I keep seeing:
Too many teams think they have to choose between open models and proprietary models.
They do not.
The smarter path is hybrid.
→ proprietary models for scale and broad capability
→ open models for flexibility and control
→ orchestration layers to route each task to the best-fit model in real time
That is how you balance performance, cost, and governance without locking yourself into one lane.
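The routing rule itself fits in a few lines. A minimal sketch, assuming hypothetical model names and toy cost figures; a real router would score on latency, capability, and data-residency policy rather than two booleans.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    sensitive: bool = False   # must stay on self-hosted infrastructure
    complex: bool = False     # needs frontier-level capability

# Hypothetical registry: proprietary for scale, open for control.
MODELS = {
    "proprietary-frontier": {"cost_per_1k": 0.015, "hosted": "vendor"},
    "open-self-hosted":     {"cost_per_1k": 0.002, "hosted": "vpc"},
}

def route(task: Task) -> str:
    """Route each task to the best-fit model in real time."""
    if task.sensitive:
        return "open-self-hosted"        # governance: data never leaves the VPC
    if task.complex:
        return "proprietary-frontier"    # performance: broad capability wins
    return "open-self-hosted"            # cost: default to the cheap lane

print(route(Task("summarise this contract", sensitive=True)))   # open-self-hosted
print(route(Task("plan a multi-step campaign", complex=True)))  # proprietary-frontier
```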

Most enterprise AI strategies are already behind.
Not because they picked the wrong model, but because they are still thinking at the model layer.
The real shift is happening one level up, where AI becomes a system that can reason, use tools, retain context, and execute work inside secure environments.
That changes enterprise strategy faster than most leaders realize.
A thread, with insights from my latest video with NVIDIA…
#NVIDIAPartner #NVIDIAGTC


@Ronald_vanLoon Spot on. Gap between demo-ready and production-ready is tool access and context persistence. Most CTOs I know haven't clocked it yet. Have you?

@MerrynSW I'm not sure we do. Not when welfare pays £60k while builders face 25% corporation tax and NI hikes. The maths doesn't work.

@devXritesh Can't run inference on a press release. We're building the plumbing in Dubai.

@neil_xbt Zero boilerplate? I'd be surprised if complexity doesn't creep back at 2am. Does it handle failure modes out the box, or is that where the hidden work lives?

AWS just reduced building a production AI agent to three things!
A model. A tool. A prompt.
That is the entire Strands Agent SDK.
Open source. Claude 3.7 by default. Deployed to Lambda, EC2, or ECS the moment it is ready.
No boilerplate. No complex framework setup. No DevOps headache between your idea and a running agent.
MCP servers connect it to any external system, like a USB-C port for intelligence. AWS Bedrock wraps it in enterprise-grade security and guardrails.
The gap between having an agent idea and having an agent running in production just became a workshop exercise.
Free. AWS. The complete hands-on build from scratch.
Bookmark this so you do not lose it.
Follow @neil_xbt for more AI agent builds that go from idea to deployment this weekend.
Khairallah AL-Awady@eng_khairallah1
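The model-tool-prompt triad can be shown in plain Python. This is a sketch of the pattern only, with a stubbed model call standing in for Claude; it is not the actual Strands SDK API.

```python
import json

def calculator(expression: str) -> str:
    """A single tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))  # toy eval, demo only

def fake_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. Claude via Bedrock)."""
    if "12 * 7" in prompt:
        return json.dumps({"tool": "calculator", "input": "12 * 7"})
    return json.dumps({"answer": prompt})

def agent(prompt: str) -> str:
    """Model + tool + prompt: the whole agent loop."""
    reply = json.loads(fake_model(prompt))
    if "tool" in reply:
        result = calculator(reply["input"])
        return f"The answer is {result}"
    return reply["answer"]

print(agent("What is 12 * 7?"))  # The answer is 84
```

In the real SDK the stub is replaced by an actual model client and the tool is registered with the agent, but the loop keeps this shape.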

@SciTechera 80%? Right. The other 20% is where the actual debugging happens. AI writes the happy path; we clean up the mess at 2am.

AI is getting bigger and bigger!
OpenAI's president Greg Brockman has stated that AI tools are now writing 80% of the code at companies, marking a significant shift from 20% in December.
This rapid progress has transformed AI from a sideshow to the main event in coding.
Anthropic's CEO also stated: "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code."


@damianplayer We're basically trying to build Space Marine power armour but discovered the supply chain boss is undefeated. NdFeB shortages don't care about your neural net.

@rohanpaul_ai This is why I've moved our ad stack toward microservices. One bloated LLM can't match a squad of specialists with shared situational awareness. Those Tower of London numbers are properly decent.

This paper proposes a smarter way for LLMs to reason by splitting work across agents that share one workspace.
The problem is that even strong reasoning models still break on harder multi-step tasks because they do not carry out logic reliably all the way through.
The system, called BIGMAS, builds a small graph of specialist agents for each problem, rather than using one fixed chain every time.
Every agent reads and writes through a shared workspace, while a separate controller sees the whole state and picks the next useful step.
The authors tested it on 3 puzzle tasks across 6 frontier models, covering arithmetic expression search and multi-step planning.
It improved results on every model and task, with examples like 12% to 30% on Six Fives and 57% to 93% on Tower of London.
What matters is that the paper shows reasoning can improve from better system structure, not only from making a single model think longer.
----
Paper Link – arxiv.org/abs/2603.15371
Paper Title: "Brain-Inspired Graph Multi-Agent Systems for LLM Reasoning"
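The workspace-plus-controller loop can be sketched like this. The agent names and puzzle steps are invented for illustration; the paper's actual system builds a graph of specialist agents per problem.

```python
# Shape of the idea: specialist agents read and write a shared workspace,
# while a controller that sees the whole state picks the next useful step.

def decompose(ws):
    ws["subgoals"] = ["move small disk", "move large disk"]

def solve_step(ws):
    ws["done"] = ws.get("done", []) + [ws["subgoals"].pop(0)]

def verify(ws):
    ws["solved"] = not ws["subgoals"]

AGENTS = {"decompose": decompose, "solve": solve_step, "verify": verify}

def controller(ws):
    """Pick the next agent from the full workspace state."""
    if "subgoals" not in ws:
        return "decompose"
    if ws["subgoals"]:
        return "solve"
    return "verify"

workspace = {}
while not workspace.get("solved"):
    AGENTS[controller(workspace)](workspace)

print(workspace["done"])  # ['move small disk', 'move large disk']
```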


@starter_story Exactly this. I used to lose afternoons to CORS nonsense. Now I describe what I want and steer it. Built an adtech MVP in two days not two weeks. Brutal in the best way.

@JoshuaKushner Tell that to my pile of unpainted Space Marines. Some services resist automation: the bespoke ones. But the rest? Capacity constraints vanish. Dubai's issuing AI permits; Brussels writes compliance memos.
Ryan White reposted


@haider1 So AGI's just a typing test now? My Warhammer painting backlog suggests being human at a keyboard is harder than it looks. The 80% figure's rather like marking your own homework.

@yanatweets Grok's gone full Space Marine with that helmet. ChatGPT played it safe with the fabric. I know which one'd stop a bolter round.

@yanatweets Grok went full Space Marine. ChatGPT went safe. Which would you wear to a board meeting?

@tphuang I reckon they're following the barrels, mate. X is a shouting match; Weibo sells cars. Not exactly rocket science.
Ryan White reposted

STANFORD JUST PUT ITS ENTIRE ARTIFICIAL INTELLIGENCE CURRICULUM ON YOUTUBE FOR FREE.
CS221.
The same course that produced engineers now running AI labs, building frontier models, and getting paid $500,000 a year at the companies everyone is trying to work for.
Most people have never heard of it.
The ones who have are not telling you about it.
Here is what the course actually covers:
Search algorithms. The mathematical foundation behind every AI that finds optimal solutions in complex environments.
Constraint satisfaction. How AI reasons through problems with thousands of interdependent variables simultaneously.
Markov decision processes. The probabilistic framework behind every AI agent that makes sequential decisions under uncertainty.
Machine learning from first principles. Not how to use sklearn. How the math actually works underneath it.
Neural networks. Built from the ground up before jumping to applications.
Logic and knowledge representation. How AI systems reason about the world formally.
Natural language processing. The foundation of everything happening in LLMs right now.
Robotics and computer vision. How AI perceives and acts in physical environments.
Every concept that powers every AI product you use daily is in this curriculum.
Not a surface level overview.
The actual mathematics. The actual algorithms. The actual reasoning.
This is what separates engineers who build AI from operators who use it.
Stanford charged $60,000 a year for students to sit in this classroom.
They put the whole thing on YouTube.
Bookmark this before you open any other AI resource today.
Follow @cyrilXBT for more elite resources that build real depth the moment they drop.
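To make one of those topics concrete: value iteration, the core MDP algorithm the course builds up to, fits in a dozen lines. A sketch on a toy two-state problem; the states, actions, and probabilities are invented for illustration.

```python
# Value iteration on a toy 2-state MDP.
# P[state][action] = [(probability, next_state, reward), ...]
P = {
    "cool": {"fast": [(1.0, "hot", 2.0)], "slow": [(1.0, "cool", 1.0)]},
    "hot":  {"slow": [(0.5, "cool", 1.0), (0.5, "hot", 1.0)]},
}
gamma = 0.9                      # discount factor
V = {s: 0.0 for s in P}          # initial value estimates

for _ in range(100):             # apply the Bellman optimality update repeatedly
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }

print({s: round(v, 2) for s, v in V.items()})
```

A hundred sweeps is overkill for two states, but it shows the fixed-point character of the update: the values stop moving once they satisfy the Bellman equation.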

@r0ck3t23 Nails it. We've been using AI to optimise client value for years. Nobody calls it disruption anymore, just Tuesday. Brussels is regulating yesterday's news.

Andrej Karpathy went looking for AI in the GDP.
He couldn’t find it.
He looked for computers. The technology that rewired every industry on Earth. Not there.
He looked for the iPhone. The device that put the internet in 4 billion pockets. Invisible.
Karpathy: “Even though we think of 2008 when iPhone came out as this major seismic change, it’s actually not. Everything is so spread out and slowly diffuses that everything ends up being averaged into the same exponential.”
The most transformative technology of the century left zero fingerprint on the data. Dissolved into the curve like it never happened.
Karpathy: “With AI we’re going to see the exact same thing. It’s just more automation. It allows us to write different kinds of programs that we couldn’t write before.”
This is the part no one wants to hear.
Everyone is waiting for the moment. The before and after. The line where the old world ends and the new one begins.
It doesn’t exist. It never has.
The Industrial Revolution didn’t feel like a revolution to the people inside it. It felt like a factory opening down the road. Then another. Then your kid not coming home from the fields.
The internet didn’t arrive. It seeped through phone lines until no one could remember what they did without it.
AI won’t land. It will dissolve. Not into your tools. Into your instincts. Into how you think a decision gets made.
You can’t resist what you can’t perceive.
You can’t regulate a revolution that never announces itself.
By the time you go looking for the moment everything changed, there won’t be a “before” left to compare it to.

@mikenevermiss It's bloody obvious really. Entry barrier's zero. Stop the pitch decks, start shipping. My lot went agentic six weeks ago. Chaos. Worth it.

Eric Schmidt (former Google CEO): “If you want to make money, it’s actually easy, start an agentic AI company.”
We’re in the agent era now, and it favors builders. What matters isn’t your resume, it’s what you shipped this week.
Probably the only article you need to learn and understand AI in 2026.
MIKE@mikenevermiss