Nick Thorne

7K posts


@nickdthorne

Long-term tech dev. Watersports enthusiast, cyclist, armchair economist. Rockstone Data Ltd, previously Philips.

Southampton · Joined November 2008
271 Following · 491 Followers
Nick Thorne@nickdthorne·
Note to #Wingfoil companies Gong, @duotonewindsurf, @NorthSails, Axis etc. Would a wing with adjustable flaps in the wing body work as a means of ‘reefing’ a wing? Sort of a mesh-covered window…? Last night we had extremely variable wind speeds over a short time period and it would have saved a lot of trips back to shore.
0 replies · 0 reposts · 0 likes · 29 views
Nick Thorne@nickdthorne·
@timleunig Don’t forget Liverpool too. Went there for the first time last year. Great place and kept thinking they’d missed drinks off the bar tab
0 replies · 0 reposts · 0 likes · 55 views
Tim Leunig@timleunig·
I don't know the answer to why the government is not pursuing a San Fran/Silicon Valley approach to the North West of England... Scotland could plausibly do it on the M8 Glasgow-Edinburgh route as well.
Nick Harrison@NickHarrison73

Good to hear your optimism on the North @timleunig. I agree there is untapped talent, way cheaper housing, and a great standard of living for those who live / move there. An interesting idea to enable more building to double down on growth. What do you think is standing in the government's way? Seems like an obvious win.

5 replies · 1 repost · 9 likes · 2.9K views
Nick Thorne@nickdthorne·
Nice piece in @TheEconomist from Ethan Mollick. Agree with the 'Lab' point, where genAI is used not just to do what we did before but to imagine what we can now do.
0 replies · 0 reposts · 0 likes · 11 views
Alex Bainbridge 🆎@alexbainbridge·
I love multi AI agent setups, but whilst they argue with each other, they are burning my tokens! Output is great though
1 reply · 0 reposts · 1 like · 37 views
Nick Thorne@nickdthorne·
@MerrynSW There are layers being built on top of LLMs to manage the hallucination, and the context memory, e.g. GSD: multi-sub-agent approaches with detailed spec/verify/check/test and phased delivery. Though token pricing is a risk. github.com/gsd-build/get-…
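The spec/verify/check loop described here can be sketched generically. This is a hypothetical interface, not the actual GSD code: `phased_delivery`, `builder`, and `verifier` are invented names, with the two callables standing in for LLM sub-agents.

```python
def phased_delivery(spec, builder, verifier, max_attempts=3):
    """Build/verify loop: a builder agent drafts an artifact against a spec,
    a verifier agent independently checks it, and verification failures are
    fed back into the next build attempt.

    builder(spec, feedback)  -> artifact
    verifier(spec, artifact) -> (ok, feedback)
    """
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        artifact = builder(spec, feedback)
        ok, feedback = verifier(spec, artifact)
        if ok:
            return artifact, attempt
    raise RuntimeError(f"verification failed after {max_attempts} attempts")
```

The design point is that hallucination is caught by an external check against the spec rather than by trusting a single model's output, at the cost of extra (token-priced) round trips per phase.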
1 reply · 0 reposts · 0 likes · 493 views
Merryn Somerset Webb@MerrynSW·
My piece on LLMs today is less of a minority opinion than you might think. Here's Joachim Klement on the same. "If these three results (the prevalence of hallucinations, the inability to remove the neurons that create them and free replication of basic models without the need to pay for more complex models) are true, then OpenAI, Anthropic, Google, Meta and others are in serious trouble. Large-scale LLMs will not be able to replace mission-critical software because of the inherent hallucination problem, which does not go away due to the very structure of the models. At the same time, for everyday use cases where they are good enough, there are free models that already can do what the large models do, and every business can simply use these without having to pay OpenAI or any other money. So, where is the business model for these genAI companies?"
50 replies · 88 reposts · 687 likes · 97.6K views
Nick Thorne@nickdthorne·
How about they set some S.M.A.R.T. goals instead of using meaningless words?
• S — Specific: Clearly defined and unambiguous
• M — Measurable: Quantifiable or at least objectively assessable
• A — Achievable (sometimes “Attainable”): Realistic given constraints
• R — Relevant: Aligned with broader objectives or business goals
• T — Time-bound: Has a defined deadline or timeframe
0 replies · 0 reposts · 0 likes · 125 views
Nick Thorne@nickdthorne·
Interesting point. So maybe an enforced two-year period, like ‘national service’, but where students are not allowed to use AI? Or adjust the AI so that it makes us work for the answers? ‘But the collective capacity to ask questions nobody has asked before, to build the frameworks that generate new knowledge rather than retrieve existing knowledge, that capacity is quietly disappearing.’
Muhammad Ayan@socialwithaayan

MIT's Nobel Prize-winning economist just published a model with one of the most alarming conclusions in the AI literature so far. If AI becomes accurate enough, it can destroy human civilization's ability to generate new knowledge entirely. Not gradually degrade it. Collapse it.

The paper is called AI, Human Cognition and Knowledge Collapse. Authors: Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar. MIT. Published February 20, 2026. Acemoglu won the Nobel Prize in Economics in 2024. He is not a doomer blogger. He is the most cited economist of his generation, and his models tend to be taken seriously by the people who set policy.

Here is the argument in plain terms. Human knowledge is not just a collection of facts stored in individuals. It is a living system that requires continuous reproduction. People learn things. They apply them. They teach others. They build on prior work to generate new work. The entire engine of science, medicine, technology, and innovation runs on this cycle of active human cognition.

What happens when AI provides personalized, accurate answers to every question people would otherwise have to learn themselves? Individually, each person is better off. They get correct answers faster. They make fewer errors. Their immediate outcomes improve. But they stop doing the cognitive work that sustains the collective knowledge base.

Acemoglu's model shows this produces a non-monotone welfare curve. Modest AI accuracy: net positive. AI helps at the margin, humans still do enough learning to sustain collective knowledge, everyone gains. High AI accuracy: net catastrophic. AI is accurate enough that learning yourself feels unnecessary. Human learning effort collapses. The knowledge base that AI was trained on is no longer being refreshed or extended. Innovation stalls. Then stops.

The model proves the existence of two stable steady states. A high-knowledge steady state where human learning and AI assistance coexist productively. A knowledge-collapse steady state where collective human knowledge has effectively vanished: individuals still receive good personalized AI recommendations, but the shared intellectual infrastructure that enables new discoveries is gone.

And the transition between them is not gradual. It is a threshold effect. Below a certain level of AI accuracy, society stays in the high-knowledge equilibrium. Above that threshold, the system tips. And once it tips, the collapse is self-reinforcing. Because the people who would have learned the things that would have pushed the frontier forward never learned them. And the AI cannot push the frontier on its own. It can only recombine what humans already knew when it was trained.

The dark irony at the center of the model: the AI does not fail. It keeps giving accurate, personalized, useful answers right through the collapse. From the individual's perspective, nothing looks wrong. You ask a question, you get a correct answer. But the collective capacity to ask questions nobody has asked before, to build the frameworks that generate new knowledge rather than retrieve existing knowledge, that capacity is quietly disappearing.

Acemoglu has been the most prominent mainstream economist skeptical of transformative AI productivity claims. His prior work found that AI's actual measured productivity gains were much smaller than the technology industry projected. This paper is a different kind of warning. Not that AI will fail to deliver promised gains. But that if it succeeds too completely, it will undermine the human cognitive infrastructure that makes long-run progress possible at all.

The welfare effect is non-monotone. That is the sentence worth sitting with. Helpful until it is not. Beneficial until it crosses a threshold. And past that threshold, the same accuracy that made it so useful is precisely what makes it devastating.

Every student who uses AI instead of working through a problem is a data point. Every researcher who uses AI instead of developing intuition is a data point. Every generation that grows up with accurate AI answers and no incentive to develop deep domain knowledge is a data point. Individually rational. Collectively catastrophic.

Acemoglu proved this is not just a cultural concern or a vague anxiety about screen time. It is a mathematically coherent equilibrium that a sufficiently accurate AI system will push society toward. And there is no visible warning sign before the threshold is crossed.
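The threshold dynamic described in that thread can be illustrated with a toy simulation. This is a sketch of the general idea only, not Acemoglu et al.'s actual equations: the effort curve, the depreciation rate, and all parameter values here are invented for illustration.

```python
def step(K, accuracy, delta=0.05):
    """One period of a toy knowledge dynamic.

    K is the collective knowledge stock in [0, 1]. Individual learning
    effort falls as AI accuracy rises (hypothetical linear curve), new
    knowledge is built on existing knowledge, and the stock depreciates
    at rate delta if it is not refreshed.
    """
    effort = max(0.0, 1.0 - accuracy / 0.8)  # effort hits zero at accuracy 0.8
    growth = effort * K * (1.0 - K)          # learning compounds on prior knowledge
    return max(0.0, K + growth - delta * K)

def simulate(accuracy, K0=0.5, steps=500):
    """Iterate the dynamic to (near) its steady state for a given accuracy."""
    K = K0
    for _ in range(steps):
        K = step(K, accuracy)
    return K
```

In this sketch, modest accuracy (say 0.3) leaves enough learning effort that knowledge settles at a high steady state, while high accuracy (say 0.9) drives effort to zero and the stock decays toward the collapse state, even though every individual query is still being answered well. The tipping point sits where effort can no longer offset depreciation (here roughly accuracy 0.76), mirroring the two-steady-state, threshold-crossing story in the model.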

1 reply · 0 reposts · 0 likes · 30 views
Nick Thorne@nickdthorne·
Folks who love crafting every last byte of code are finding it hardest to adapt to the new coding tools. Folks who focus on the benefits to the business of the resultant functionality love them. "People don't want drill bits, they want holes"
1 reply · 1 repost · 4 likes · 586 views
Nick Thorne@nickdthorne·
Now have 6 friends with second homes in France. Is this a thing that happens when you're over 50 in the UK?
0 replies · 0 reposts · 0 likes · 36 views
Nick Thorne@nickdthorne·
Very much a 'K-shaped' response to AI software build tools amongst the software engineering fraternity.
0 replies · 0 reposts · 0 likes · 9 views
Dustin@r0ck3t23·
Jensen Huang just gutted the AI job panic with one profession. Radiology. The field AI was supposed to kill first.

Jensen Huang: “Computer vision was superhuman in 2019. And yet, the number of radiologists grew.” Not competitive. Not close. Superhuman. Every forecast said radiologists were finished. Every forecast was wrong. Not slightly wrong. Directionally wrong. There are now fewer radiologists than the world needs. A global shortage. In the exact specialty AI was supposed to erase.

Why? Because the task was never the job. Huang: “The purpose of your job and the tasks and the tools that you use to do your job are related. Not the same.” Reading a scan is a task. Diagnosing disease is a purpose. AI handled the task. The purpose didn’t shrink. It compounded. Faster reads meant more patients seen. More patients seen meant more disease caught. More disease caught meant more demand for the people who decide what to do about it. The tool did not kill the job. It fed it.

Then the fear did what the technology never could. Huang: “The alarmist warning went too far and it scared people from doing this profession that is so important to society. It did harm.” People heard radiologists were finished and walked away from the field. Medicine bled talent it could not afford to lose. Not because the work vanished. Because the panic said it would. The prediction was wrong. The damage was real.

Huang: “The number of software engineers at Nvidia is going to grow, not decline.” Not hold steady. Grow. The company building the infrastructure that automates code is hiring more of the people who write it. Huang: “I wanted my software engineers to solve problems. I didn’t care how many lines of code they wrote.” Nobody ever hired an engineer to type. They hired them to think. When the machine handles syntax, the engineer does not become obsolete. The bottleneck just moves upstream. To architecture. To edge cases. To the kind of reasoning no model handles alone.

The world was never short on unsolved problems. It was short on people free to chase them. That is the part the fear narrative misses every single time.

340,000 women once worked as telephone switchboard operators. That job is gone. Nobody mourns it. What replaced it created millions of roles that nobody in 1920 had the vocabulary to describe. The losses are always visible. The gains are always invisible until they arrive. That pattern has survived every technological shift in history. It is surviving this one.

The people forecasting mass displacement are making the same mistake as the people who forecasted the end of radiology. They can see the task being automated. They cannot see the purpose expanding underneath it. That blindness is not just wrong. It is expensive. Every person scared out of a career that AI will actually make more valuable is a cost the economy absorbs for nothing. Not because of the technology. Because of the story told about it.
172 replies · 403 reposts · 2.3K likes · 551K views
Nick Thorne@nickdthorne·
No need for mind altering drugs to blow your mind, just go and have a chat with a molecular biologist about how the immune system searches for antibodies (long flexible hooks of protein) - it's pretty insane. Even less sane is there's a startup trying to replicate the process with deep learning models...
1 reply · 0 reposts · 1 like · 21 views
Nick Thorne reposted
Robert Hoffmann@itechnologynet·
@MaMoMVPY Also, I predict that by the end of 2026 you will be able to run an Opus 4.5, or at least a Sonnet 4.5-level, agent on local hardware for less than $2,000. The innovations are compounding fast...
2 replies · 1 repost · 4 likes · 2.3K views