Clay Forde

223 posts

@PrettyNerdee

Builder of GiM (edge AI on legacy hardware) | Seeking commercial co-founder | LLM red-teaming & orchestration | xAI applicant | Edinburgh

Edinburgh, Scotland · Joined October 2023
515 Following · 65 Followers
Pinned Tweet
Clay Forde
Clay Forde@PrettyNerdee·
@MarioNawfal AI should enhance your ability and decision-making process, not replace it. Ask for outcomes. Which one sounds most appealing? There's your choice.
0
0
1
3.3K
Clay Forde
Clay Forde@PrettyNerdee·
@amitiitbhu Nice to see harness engineering on the list. I’ve been doing something pretty close for 6 months now on real edge hardware — legacy POS in pubs. The harness ends up having to handle some messy stuff. Felt weirdly validating when the term popped up this week.
0
0
0
17
Amit Shekhar
Amit Shekhar@amitiitbhu·
New Addition: Decoding Vision Transformer

My recent 13 articles on X:
- KV Cache
- Paged Attention
- Causal Masking
- Byte Pair Encoding
- Harness Engineering
- Math behind Attention
- Q, K, and V
- Math behind √dₖ in Attention
- Math Behind Backpropagation
- Decoding Transformer Architecture
- Mixture of Experts Explained
- Decoding Flash Attention
- Feed-Forward Networks
- Decoding Vision Transformer

X is a knowledge sharing platform.
Amit Shekhar@amitiitbhu

x.com/i/article/2044…

1
9
43
1.4K
Clay Forde
Clay Forde@PrettyNerdee·
Been running this exact harness setup for 6 months on legacy systems. The chaos testing bit is non-negotiable: I deliberately kill the DB and drop the network just to prove the agent will always fail safely and never lock up the till during Friday night service. Curious how your patterns handle the “runtime is actively hostile” side of things when you’re not in a clean dev environment.
0
0
1
23
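The chaos-testing pattern Clay describes — deliberately killing a dependency mid-operation and proving the agent degrades safely instead of locking up the host — could be sketched roughly like this. All names here are hypothetical illustrations, not GiM's actual code:

```python
# Minimal chaos-test sketch: the agent wrapper must always fail safely
# when its database dependency dies mid-service.

class DatabaseDown(Exception):
    pass

class FlakyDB:
    """Stand-in database that can be 'killed' to simulate an outage."""
    def __init__(self):
        self.alive = True

    def query(self, sql):
        if not self.alive:
            raise DatabaseDown("connection lost")
        return [("ok",)]

class SafeAgent:
    """Wraps agent actions so a dependency failure returns a safe,
    degraded result instead of raising into the till software."""
    def __init__(self, db):
        self.db = db

    def handle_event(self, event):
        try:
            rows = self.db.query("SELECT 1")
            return {"status": "acted", "rows": len(rows)}
        except DatabaseDown:
            # Fail safe: never crash or block the host process.
            return {"status": "degraded", "rows": 0}

db = FlakyDB()
agent = SafeAgent(db)
assert agent.handle_event("sale")["status"] == "acted"

db.alive = False  # chaos step: kill the database mid-service
assert agent.handle_event("sale")["status"] == "degraded"
```

The point of the test is the second assertion: after the simulated outage, the agent must still return promptly with a harmless result rather than hang or crash.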
Ryan Lopopolo
Ryan Lopopolo@_lopopolo·
I’m excited to dig into my topic at ODSC AI East on April 28 — how to actually make coding agents useful in real software development. I’ll be speaking about this in my session, “Harness Engineering: Practical Patterns for Agent-First Software Development.” I’ll explore how to structure codebases with engineering to improve reliability and autonomy of coding agents. The models are good enough to do the full job today and I'd like to show you what that means in practice. If you’re working with coding agents and trying to move beyond demos, I think this will resonate. More about ODSC East: hubs.li/Q0492DSq0
4
4
13
2.1K
Clay Forde
Clay Forde@PrettyNerdee·
Just listened to today’s @AIDailyBrief on "Harness Engineering". The whole industry is now saying Agent = Model + Harness. Felt weirdly validating. I’ve been building exactly that for the last 6 months with GiM and just never had a proper label for it. Running a real autonomous agent on crusty old Windows 7 tills (2GB RAM) in busy pubs during Friday night madness. The model was never the hard part. Building a harness that actually survives the chaos is:

• Local edge vision that turns camera feeds into tiny text events instead of streaming video.
• Chaos testing where I deliberately kill the database and network just to make sure it always fails safely and never takes the till down.

Turns out grinding in the messiest real-world environments forces you to build the exact same production harness stuff everyone's suddenly excited about. Nice to finally have a name for what I’ve been doing. Thanks @nlw #HarnessEngineering #EdgeAI #BuildInPublic
Clay Forde tweet media
0
0
1
18
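The "camera feeds into tiny text events" idea above — emitting a compact text line per frame instead of streaming video off a 2GB machine — might look something like this sketch. The function name, camera ID, and event shape are all hypothetical, purely to illustrate the pattern:

```python
import json
import time

def detections_to_event(detections, camera_id="bar-cam-1"):
    """Collapse one frame's object detections into a single small
    JSON text event, rather than shipping the frame itself."""
    counts = {}
    for label in detections:
        counts[label] = counts.get(label, 0) + 1
    return json.dumps({
        "ts": int(time.time()),   # event timestamp
        "cam": camera_id,         # which camera produced it
        "counts": counts,         # e.g. {"person": 3, "dog": 1}
    })

# A frame containing three people and a dog becomes one short line
# of text instead of megabytes of video.
event = detections_to_event(["person", "person", "dog", "person"])
print(event)
```

The design choice being illustrated: the expensive vision work stays on the edge device, and only tiny, cheap-to-transmit text events leave it.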
Clay Forde
Clay Forde@PrettyNerdee·
Exactly! Anyone bringing deep industry knowledge will be able to leverage AI. That isn't to say anyone could do anything with it. Sure, technically they can, but ask me what I know about dog medical health. Though over the last year I've learned that Product Design is FAR more than just: have idea, build
0
0
0
26
Finn McKenty
Finn McKenty@thefinnmckenty·
Anyone who thinks "AI = easy mode" honestly just sounds dumb and out of touch. For context, I’ve been doing product design for 20+ years, including giant brands like Febreze, Swiffer, and Abercrombie with multi-continent supply chains etc. I can tell you that AI is by FAR the most technically and cognitively demanding thing I’ve done in my career. Yes plenty of people use it to make videos of Charlie Sheen farting on Pikachu or whatever but using AI tools at a high level is HARD. It makes production a lot easier and faster (eg, writing code or generating images) but the cognitive part (deciding what code should be written or what image should be generated) is just as slow and hard as ever, so your brain is now the bottleneck.
196
37
465
43.8K
Clay Forde
Clay Forde@PrettyNerdee·
Yes — exactly. Startups have to ruthlessly cap scope creep or they die. Corporations often let it run wild, especially with shiny new AI toys. That’s why you see some of them burning insane token counts on massive multi-agent setups that deliver marginal value, if any. The teams getting real ROI are the ones treating AI like a scalpel, not an entire automated factory
0
0
0
111
David Cramer
David Cramer@zeeg·
Do yourself a favor and ignore these kinds of takes. "The more tokens I spend the more advanced I am" The people who spend the most on tokens, actually, are generally wasting compute with garbage multi-agent coordinator "experiments". They produce absolutely nothing yet feign they're on the cutting edge. They're not. There is certainly a degree of minimum viable usage, but if you do not live in these projects, in these companies, you cannot fathom what the real world looks like. You do not need to consume a thousand dollars a day to achieve the best results. The numbers quoted here, just like the original post, are completely fabricated. Certain tasks will lend themselves to more token consumption (50m+ in a day), while many others will be an order of magnitude less and be just as if not more productive and valuable. Measuring net tokens is no different than measuring net lines of code. It's a garbage metric and does nothing more than show output.
Steve Yegge@Steve_Yegge

I'm not trying to misrepresent anyone, and perhaps my Googler friends are misinformed. But I strongly suspect that by my own notions of what constitutes advanced AI adoption--and indeed, what most of the industry would expect from Google right now--you are not doing great. At Anthropic, which is basically the bar at this point, everyone is burning, I'd guess, 10M to 15M tokens a day. If Google can convince me that half their engineers are burning 4M tokens a day, then I'd be happy to post a retraction with an apology.

13
25
251
21.6K
Clay Forde
Clay Forde@PrettyNerdee·
@svpino Is this a PSA? Are people actually using llms for specific targeted tasks? I got questions for those people... What they should do is build specialised tools with ai like gimindex.com with specific goals in mind (gratuitous plug 🤷🏼)
0
0
0
118
Santiago
Santiago@svpino·
This is a trillion-dollar industry, and you can't solve it with an LLM:

• Forecasting
• Fraud detection
• Churn prediction

Large Language Models are fundamentally bad at solving these problems. When you feed structured data into an LLM, it doesn't see relationships, and it treats every number, date, and foreign key as a token. That's why you always get garbage back. An LLM thinks your database is a Wikipedia article. It doesn't understand its structure or its relationships. GPT-4 scores 63% on relational prediction tasks. That's the best it can do, and that's pretty much useless. You can't expect real-world business value to come from summarizing Wikipedia articles.
56
56
474
91.2K
Steve Yegge
Steve Yegge@Steve_Yegge·
I was chatting with my buddy at Google, who's been a tech director there for about 20 years, about their AI adoption. Craziest convo I've had all year.

The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too.

But why is Google so... average? How is it that a handful of companies are taking off like a spaceship, and the rest, including Google, are mired in inaction?

My buddy's observation was key here: There has been an industry-wide hiring freeze for 18+ months, during which time nobody has been moving jobs. So there are no clued-in people coming in from the outside to tell Google how far behind they are, how utterly mediocre they have become as an eng org.

He says the problem is that they can't use Claude Code because it's the enemy, and Gemini has never been good enough to capture people's workflows like Claude has, so basically agentic coding just never really took off inside Google. They're all just plodding along, completely oblivious to what's happening out there right now. Not only is Google not able to do anything about it, they don't seem to be aware of the problem at all. I'm having major flashbacks to fifty years ago as a kid at the La Brea Tar Pits, asking, "why can't they just climb out?"

My Google friend and I had this conversation over a month ago. I didn't share it because I wanted to look around a bit, and see if it's really as bad as all that. I've been talking to people from dozens of companies since then. And yeah. It's as bad as all that. Google is about average. Some companies at the bottom have near-zero AI adoption and can't even get budget for AI. They may have moats and high walls, but the horde is coming for them all the same.

And then there are a few companies I've met recently who are *amazingly* leaned in to AI adoption. One category-leader company just cancelled IntelliJ for a thousand engineers. That's an incredibly bold move, one of many they're making towards agentic adoption. In my opinion, that company is setting themselves up for a _huge_ W.

As for the rest, well, it's the Great Siloing. Everyone's flying blind. With nobody moving companies, no company knows where they stand on the AI adoption curve. Nobody knows how they're doing compared to everyone else. Half of them just check a box: "We enabled {Copilot/Cursor} for everyone!" Cue smug celebrations. They think this is like getting SOC2 compliance, just a thing they turn on and now it's "solved." And they don't realize that they've done effectively nothing at all. All because of a hiring freeze.
527
458
5.2K
2.7M
Clay Forde
Clay Forde@PrettyNerdee·
@itsolelehmann See I'm the first one to doubt the doomsayers, but this reminds us it's people's lives and livelihoods we are discussing...
0
0
0
54
Clay Forde
Clay Forde@PrettyNerdee·
@emollick Media led the 'compute bubble' charge because they had the most to lose. The industry AI is replacing couldn't afford for it not to be a bubble. Demand didn't plateau; agents exploded it. Classic self-preservation disguised as analysis. Among other things, of course
1
0
1
729
Ethan Mollick
Ethan Mollick@emollick·
Six months ago, there was a lot of focus on the idea that there would be a massive glut of unused computing power, which would cause a recession as AI use plateaued. The "compute bubble" belief was absolutely everywhere. The degree to which this was wrong deserves some notice
99
194
2K
165K
Clay Forde
Clay Forde@PrettyNerdee·
@tucker_peck Look Claude is human! Doesn't show thinking. Prompted 🤷🏼
0
0
0
284
Dr. Tucker Peck
Dr. Tucker Peck@tucker_peck·
I told Claude that my dad died many years ago, and Claude said "I'm sorry to hear about your dad." I asked if Claude really had the capacity to feel sorry, and what an answer he had.
Dr. Tucker Peck tweet media
88
18
630
75.9K
Priyanka Vergadia
Priyanka Vergadia@pvergadia·
🤯BREAKING: Researchers just mathematically proved that AI layoffs will collapse the economy: and every CEO already knows it.

The AI Layoff Trap. A game theory paper from UPenn + Boston University is glaringly important! 100K+ tech layoffs in 2025. 80% of US workers exposed. And no market force can stop it.

→ Every company fires workers to cut costs
→ Every fired worker stops buying products
→ Revenue collapses across every sector
→ The companies that fired everyone go bankrupt

It's a Prisoner's Dilemma with math behind it. Automate and you survive short-term. Don't automate and your competitor kills you. But everyone automating destroys the demand that makes all companies viable.

UBI (universal basic income) won't fix it. Profit taxes won't fix it. The researchers found only one solution: a Pigouvian automation tax "robot tax". The AI trap on the economy is here!
Priyanka Vergadia tweet media
553
2.1K
8.8K
1.4M
Clay Forde
Clay Forde@PrettyNerdee·
@cgtwts I agree. I think we should absolutely be starting to look at local models, cos this is unlikely to improve. Frontier access is very much being priced out for us.
0
0
6
3.3K
CG
CG@cgtwts·
the “golden age” of AI might already be over and most people haven’t even realised it yet
CG tweet media
464
414
7.3K
663.8K
Clay Forde
Clay Forde@PrettyNerdee·
@gailcweiner I am not. I have also hit the 'marketing and sales' stage of the ai power user workflow. This may or may not be related
0
0
2
108
Gail Weiner
Gail Weiner@gailcweiner·
Serious question to all the AI power users out there: Are you still having fun?
351
8
262
18.1K
Clay Forde
Clay Forde@PrettyNerdee·
@realBigBrainAI That's literally what I did with GiM (no links, I'm not plugging). It started life as a dashboard. Through iteration it's so much more now. I still can't comprehend the unchecked mentality. I couldn't imagine not having some input on the outcome
0
0
1
158
Big Brain AI
Big Brain AI@realBigBrainAI·
Peter Steinberger, creator of OpenClaw, on why AI agents still produce "slop" without human taste in the loop: "You can create code and run all night and then you have like the ultimate slop because what those agents don't really do yet is have taste."

Peter is direct: raw capability without direction still produces mediocre output. "They are spiky smart and they're really good at things, but if you don't navigate them well, if you don't have a vision of what you're going to build, it's still going to be slop. If you don't ask the right questions, it's still going to be slop." Great AI-assisted work is defined by the human guiding it.

@steipete describes his own creative process when starting a new project: "When I start a project, I have like this very rough idea what it could be. And as I play with it and feel it, my vision gets more clear. I try out things, some things don't work, and I evolve my idea into what it will become." Most people skip this part entirely, front-loading everything into a single prompt and wondering why the result feels hollow.

"My next prompt depends on what I see and feel and think about the current state of the project." Each step informs the next. The work itself is the feedback loop. "But if you try to put everything into a spec up front, you miss this kind of human-machine loop. And then I don't know how something good can come out without having feelings in the loop — almost like taste."

The agentic trap is what happens when you remove yourself from the process too early.
141
278
2K
474.1K
Clay Forde
Clay Forde@PrettyNerdee·
@woke8yearold Literally destroy the gig marketplace. Almost overnight. You know... Like it did
0
0
3
1.2K
Aleph
Aleph@woke8yearold·
Imagine you had access to Claude 4.6 Opus and ChatGPT 5.4 in 2014: you have no idea what they are, but can ask either questions via your computer with infinite tokens. How much of an advantage would this really be? What could you do with this?
82
11
1.6K
112.4K
AI Security Institute
AI Security Institute@AISecurityInst·
We conducted cyber evaluations of Claude Mythos Preview and found that it is the first model to complete an AISI cyber range end-to-end. 🧵
AI Security Institute tweet media
104
536
3K
1.2M