Andrew @Grokton

3K posts

Decades-long experience as a pro dev. Doing "on-the-side" research into specialist, super-efficient ML algorithms (not using matrices!). Working on vSaaS/micro-SaaS.

UK · Joined July 2025
231 Following · 61 Followers
Pinned Tweet
Andrew @Grokton
How to code with AI. Key is understanding your system enough so you can make changes to it safely. youtu.be/eIoohUmYpGI
Andrew retweeted
“paula” @paularambles
they call them crisps there
[image attached]
Andrew @Grokton
@GergelyOrosz A touch concerning that they make people question genuine anonymous accounts. I like being anonymous, as I can share my views on tech without upsetting employers or future employers. I make a point of not using AI in comments. Just the odd screenshot, where it's clear it's not me.
Gergely Orosz @GergelyOrosz
The beauty of the post is that the text smells AI-generated entirely, with the instruction to make it lowercase. I’m getting to the point that I don’t trust anonymous accounts with even a hint of AI-written content on the internet. A good chance it’s all made up, even if believable
Grady Booch @Grady_Booch

Thoughts and prayers.

Andrew @Grokton
@thdxr Sounds like he made you an offer for Opencode you could not resist.
dax @thdxr
pretty much every competitor in our space has been very easy to deal with except openai. they're the only company that understands building things for a lot of people; we basically have no shot at directly competing
Andrew retweeted
Christopher David @Tazerface16
People understand that LLMs aren't actually "thinking," right?
Drexel-Alvernon, AZ 🇺🇸
Andrew retweeted
Dan O'Dowd @RealDanODowd
Silicon Valley snake oil salesman @DarioAmodei claims AI will soon replace half of entry-level lawyers. But, back in the real world, lawyers are getting sanctioned for filing AI slop riddled with hallucinations and mistakes. AI cannot be relied upon for anything important:
Andrew @Grokton
@omarsar0 In this case, he knows the tide has turned; he wants us to forget what he said earlier about "AGI soon" and to make Dario look bad compared to him. I just feel Sam never says what he believes, but what people want to hear, what suits his investors, or what gets one over on a rival.
elvis @omarsar0
I often don't agree with Sam Altman, but I appreciate this tweet. You can believe it. But it's important to also say it. So many of the AI narratives are around job doomerism, which I find outright lazy and dishonest. Let's all try to build AI and tooling to elevate and augment us. I feel like it's the more challenging path, but it feels right. Most of the AI models and harnesses are not built like that today. But it doesn't mean we can't mold it to help augment the work we do. Wrote more about this here: x.com/omarsar0/statu… Terence Tao's "Copernican view of intelligence" feels right, and it's totally achievable with proper alignment and effort.
Sam Altman @sama

we want to build tools to augment and elevate people, not entities to replace them.

Andrew @Grokton
@ash_twtz @ConsciousRide They wrongly assumed they had the right approach and just needed to throw crazy money at scaling it up quickly. All to try to gain a monopoly.
Andrew @Grokton
@ash_twtz @ConsciousRide There are many engineers with great alternative ideas who would work for a tiny fraction of what the autoregressive-LLM people get paid.
Mr Ash @ash_twtz
If you were Sam Altman today, what’s the first change you’d make to OpenAI?
[image attached]
Andrew @Grokton
@sama Then it might be wise not to put all the eggs in the one basket of autoregressive transformers. So much money has been spent on them already that could have gone into pure research (and compute) into other forms of AI.
Sam Altman @sama
i'm hopeful for a future where people who want to work really hard have incredibly fulfilling things to do, and people who don't want to work hard don't have to and can still have an amazing life of prosperity.
Sam Altman @sama
we want to build tools to augment and elevate people, not entities to replace them.
Andrew @Grokton
@_ibarz @m_g_nichols @rohanpaul_ai As there are markers in place that indicate a pattern should be used. But why it's out of place here requires super deep knowledge of everything going on. Well beyond an LLM.
Andrew @Grokton
@_ibarz @m_g_nichols @rohanpaul_ai Not when picking up mistakes and omissions, which is hard to do. Especially when they "look right" because they match another pattern. It requires real and deep understanding to see if that pattern is out of place here. LLMs are not that deep, precise or deterministic.
Rohan Paul @rohanpaul_ai
New Microsoft paper shows that current AI assistants often damage documents during long editing jobs. Even the frontier models still ended up corrupting about 25% of document content on average, while many other models damaged far more.

The problem is that delegated AI work only makes sense if a model can keep a document correct across many edits, not just do 1 step well. The paper tests this with reversible task pairs, where a model edits a file and then tries to undo that edit, so a reliable system should return to the original document.

The authors built real work setups across 52 domains, from coding and science to accounting and music notation, and ran 19 models through 20 editing interactions. The failures were usually not lots of tiny slips but occasional big mistakes that silently broke parts of the document and then compounded over time. Agentic tool use did not help in their tests, and bigger files, longer workflows, and irrelevant extra documents made the corruption worse.

The reason this matters is that current LLMs can look strong in short demos or narrow coding tasks yet still be unreliable delegates for long real-world document work.

----
Paper Link – arxiv.org/abs/2604.15597
Paper Title: "LLMs Corrupt Your Documents When You Delegate"
[image attached]
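The reversible-task-pair check described in the summary above can be sketched in a few lines. This is my own minimal illustration, not code from the paper: the function names are hypothetical, the `edit`/`undo` callables stand in for the actual model calls, and the line-based corruption metric is a toy approximation of whatever the authors measured.

```python
# Hedged sketch of a reversible-task-pair evaluation: apply an edit,
# ask the model to undo it, and check the document round-trips exactly.

def round_trip_ok(original: str, edit, undo) -> bool:
    """Return True if undo(edit(doc)) restores the original document."""
    edited = edit(original)
    restored = undo(edited)
    return restored == original

def corruption_rate(original: str, restored: str) -> float:
    """Toy per-line damage estimate: fraction of original lines lost."""
    orig_lines = original.splitlines()
    if not orig_lines:
        return 0.0
    restored_set = set(restored.splitlines())
    damaged = sum(1 for line in orig_lines if line not in restored_set)
    return damaged / len(orig_lines)

# Deterministic stand-ins for the model calls:
doc = "alpha\nbeta\ngamma"
edit = lambda d: d.replace("beta", "BETA")
good_undo = lambda d: d.replace("BETA", "beta")
bad_undo = lambda d: d.replace("BETA", "")  # silently drops content

print(round_trip_ok(doc, edit, good_undo))  # True
print(round_trip_ok(doc, edit, bad_undo))   # False
```

A failed round trip flags exactly the kind of silent, compounding damage the thread describes: the edited document can still "look right" while content is gone.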
Mr Ton @MrTon23
@sukh_saroy Asking a dev that will eventually be replaced by AI. Interesting exercise.
Andrew retweeted
Sukh Sroay @sukh_saroy
A new study just blew up the entire "vibe coding" movement. Researchers from UC San Diego and Cornell tracked 112 experienced software developers using AI agents in their actual jobs. The finding is the opposite of every viral demo on your timeline. Professional developers don't vibe code. They control.

Here's what they actually found. The researchers ran two studies. 13 developers were observed live as they coded with agents in real production work. 99 more answered a deep qualitative survey. Every participant had at least 3 years of professional experience. Some had 25.

The viral pitch of agentic coding goes like this. Hand the agent a vague prompt. Don't read the diff. Forget the code even exists. Trust the vibes. Andrej Karpathy coined the term. Tens of thousands of developers on X claim to run "dozens of agents at once" building entire production systems hands-off. The data says almost nobody serious actually works that way.

Here is what experienced developers do instead.
→ They plan before they prompt. They write out the architecture, the constraints, and the edge cases first, then hand the agent a tightly scoped task.
→ They review every diff. Not because they're paranoid. Because they've seen what happens when you don't.
→ They constrain the agent's blast radius. Small, well-defined tasks only. The moment a problem touches multiple systems or has unclear requirements, they take over.
→ They treat the agent like a fast junior dev that needs supervision, not a senior engineer that can be trusted alone.

The researchers also found something darker buried in the data. A separate randomized trial they cite showed that experienced open source maintainers were 19% slower when allowed to use AI. A different agentic system deployed in a real issue tracker had only 8% of its invocations result in a merged pull request. 92% failure rate in production. 19% productivity drop for senior devs. The viral demos lied to you.

The paper's biggest insight is in one sentence: experienced developers feel positive about AI agents only when they remain in control. The moment they let go, quality collapses, and they know it.

This matches what every serious shop has quietly figured out. The developers shipping the most with AI right now aren't the ones vibing. They're the ones with the strictest review processes, the tightest task scoping, and the clearest mental model of what the agent can and cannot do.

Vibe coding makes for great Twitter videos. It does not make great software. The next time someone tells you they let Claude build their entire SaaS in a weekend, ask them how much of that code they've actually read. The honest answer separates real engineers from the demo crowd.
[image attached]
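The "constrain the agent's blast radius" practice from the thread above can be made concrete as a pre-merge gate. This is my own hedged sketch, not anything from the study: the thresholds, the function name, and the dict input are all illustrative. A real gate would parse something like `git diff --numstat` output instead of taking a prepared mapping.

```python
# Hypothetical blast-radius gate for agent-generated changes: reject any
# diff that touches too many files or too many total lines, forcing the
# work back into small, reviewable, tightly scoped tasks.

MAX_FILES = 5          # illustrative thresholds, tune per repo
MAX_LINES_CHANGED = 200

def within_blast_radius(changed_lines_by_file,
                        max_files=MAX_FILES,
                        max_lines=MAX_LINES_CHANGED):
    """changed_lines_by_file maps filename -> lines touched in the diff."""
    if len(changed_lines_by_file) > max_files:
        return False  # sprawls across too many files: a human takes over
    return sum(changed_lines_by_file.values()) <= max_lines

# A tightly scoped change passes; a sprawling multi-module change fails.
small = {"api/handlers.py": 40, "tests/test_handlers.py": 25}
sprawling = {f"module_{i}.py": 60 for i in range(8)}  # 8 files, 480 lines

print(within_blast_radius(small))      # True
print(within_blast_radius(sprawling))  # False
```

The point of the gate matches the study's finding: agents stay useful only while the human keeps the change small enough to actually review.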
Andrew retweeted
Dr Alexander D. Kalian @AlexanderKalian
Dear "AI will solve biology!" crowd... First make a deep learning model that significantly outperforms XGBoost at predicting bioactivities of molecules. Then we'll talk.
Rohan Paul @rohanpaul_ai
Sam Altman: "There was a time when we used to make fun of the “idea guy,” who only had an idea and needed someone technical to build it. But now, people who just really deeply understand their users and can’t code at all, I want to fund those people."