John Ball
@jbthinking
1.4K posts

I'm an inventor, scientist, and engineer. My passion is modeling the brains of animals to make machines more useful - with language. We're getting there!

Palo Alto, CA · Joined June 2010
198 Following · 549 Followers
John Ball @jbthinking
The main point I'd make is that there are different ways to improve AI, and the systems I work with handle previously unsolved problems in parsing and representations of meaning for use in human-like conversations - running only on a laptop. If this kind of technology is further developed, it has the potential to remove the need to build all the new datacenters, power distribution and so on. And it has the advantage of improving on many of the limitations of generative AI. The technology I developed is known as Patom brain theory and the linguistics model is called Role and Reference Grammar. I believe that in 10 years, the alternatives to gen-AI could be the long-term standards.
David Blundin @DaveBlundin
@jbthinking @ericschmidt 1. Yes - one step at a time. Forecast out the current rate of change. 2. GPT-5 looked bad in a world of constantly improving models. OpenClaw feels like baby AGI to me. 3. Yes - that's why Elon is going to space to build solar arrays
David Blundin @DaveBlundin
We haven’t hit the scaling limits of AI yet. @ericschmidt made that clear. I asked whether it's a capital constraint or a physics constraint. His answer: 1 gigawatt = $50 billion. We need 100. That's $5 trillion over 5 years - already 1% of US GDP growth, with 10% of all American electricity heading to data centers. Can America raise $5 trillion? Eric didn't blink: "Yeah. That's a strength of America." But after that it becomes such a large portion of the American economy that there has to be some constraint. Data centers in space might help there!
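A quick sanity check of the arithmetic as quoted (the $50B/GW and 100 GW figures come from the post itself; the US GDP figure is a rough outside number assumed here for scale):

```python
# Back-of-the-envelope check of the buildout figures quoted above.
cost_per_gw = 50e9          # $50 billion per gigawatt (quoted)
target_gw = 100             # quoted target
years = 5

total = cost_per_gw * target_gw
per_year = total / years
print(f"Total buildout: ${total / 1e12:.1f}T")    # $5.0T, matching the post
print(f"Per year:       ${per_year / 1e12:.1f}T") # $1.0T per year

# For scale, against a roughly $28T US GDP (assumed, ~2024 figure):
us_gdp = 28e12
print(f"Share of GDP:   {per_year / us_gdp:.1%} per year")  # ~3.6%
```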
John Ball @jbthinking
Thanks. One step at a time, sure, but as a cognitive scientist I see a lot of features that today's gen-AI is missing. Lack of accurate representation is holding back progress; the billions would be better spent keeping the net wide until the key issues are solved. I'll try OpenClaw on your recommendation. Thanks.
John Ball @jbthinking
In my cognitive science research, the starting point is that the brain uses very little power. With the right brain model, like Patom theory, there is no need for these GPU-aided techniques. Things that used to be constrained by combinatorial explosions, like the linguistics of the 1980s, can now run on normal devices. But it does get annoying that everything is being deferred to LLMs, from curing cancer to fixing CO2 emissions by massively increasing them!
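To make the combinatorial-explosion point concrete, a standard counting result (an illustration added here, not from the post): the number of possible binary parse trees over a sentence grows as the Catalan numbers, which is why exhaustive search was hopeless for 1980s parsers.

```python
# Number of binary parse trees over n words = Catalan(n - 1).
# Standard combinatorics, added purely for illustration.
from math import comb

def catalan(n: int) -> int:
    """n-th Catalan number: C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

for words in (5, 10, 20, 40):
    print(f"{words:2d} words -> {catalan(words - 1):,} possible binary trees")
# Growth is exponential (~4^n), so a parser needs structure that prunes
# the space rather than brute-force enumeration.
```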
Dustin @r0ck3t23
Eric Schmidt just detonated the most comforting lie in the energy debate: the idea that better AI means less power consumption. It doesn’t. It won’t. It can’t.

Schmidt: “As the algorithms become more efficient, you don’t need less power. You need even more power and even more computers because we discover new uses.”

This is Jevons Paradox. When steam engines got more efficient, the world did not burn less coal. It burned more. When computing got cheaper, the world did not use fewer computers. It put one in every pocket on Earth. Make a resource cheaper and demand does not shrink. It detonates.

AI is on the same curve. Anyone waiting for it to become lean enough to run on the current grid will wait forever. The grid does not shrink to fit the technology. The technology expands until the grid breaks. Then you build a bigger grid.

Schmidt: “There’s no country where the finance people are sufficiently crazy to do that. It’s not true in China. It’s certainly not true in Europe.” He thanked them. The investors. The funds pouring billions into conviction and math and not much else.

Schmidt: “Thank you to the finance people for funding our dreams whether they work or not.” That line is funny. It is also the single biggest advantage the United States holds over every other nation alive. American capital does not wait for certainty. It bets on velocity. Europe calculates risk until the opportunity is dead. China funds what the state approves. America funds what might work and sorts it out on the way up. That is why every major AI company is American. Not smarter people. Crazier money.

Schmidt: “Because humans have trouble with exponentials, everyone says, ‘Oh well, in six to nine months it’ll be the bubble.’” Six months pass. No bubble. So they say six more. Six more pass. Still nothing. They keep pushing the date. The date keeps winning.

Schmidt: “I keep asking my friends, ‘When does the asymptote arrive and when does the curve slow down?’ We have not seen it yet.” The curve has not flattened. The wall has not appeared. Every prediction that AI was about to plateau has been completely wrong. Every time.

Schmidt: “There will be one. It is actually true that there is a limit to our craziness. We have not found it yet. And we’re running to the wall.” He admits the wall exists. He just told you nobody has found it. America is not slowing down to look for it. America is sprinting at it. Nobody stops running because a wall might exist. You hit it first. Then you find out what it’s made of.

Schmidt didn’t say there was no wall. He said nobody’s hit it yet. That’s not reassurance. That’s a dare.
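A minimal sketch of the Jevons dynamic the post describes, under a constant-elasticity demand assumption with made-up illustrative numbers (nothing here is measured data): if demand for compute is elastic enough, every efficiency gain increases total energy use.

```python
# Toy Jevons model: compute demand follows a constant-elasticity curve,
#   demand = base * price**(-elasticity)
# where "price" is the effective cost per unit of useful compute, which
# falls as algorithms get more efficient. All numbers are hypothetical.

def total_energy(efficiency_gain: float, elasticity: float,
                 base_demand: float = 1.0, base_energy: float = 1.0) -> float:
    """Total energy used after compute gets `efficiency_gain`x cheaper."""
    price_ratio = 1.0 / efficiency_gain               # cost per unit falls
    demand = base_demand * price_ratio ** (-elasticity)
    energy_per_unit = base_energy / efficiency_gain   # each unit needs less
    return demand * energy_per_unit                   # = gain**(elasticity-1)

for gain in (1, 2, 4, 8):
    print(f"{gain}x more efficient -> "
          f"inelastic (e=0.5): {total_energy(gain, 0.5):.2f}x energy, "
          f"elastic (e=1.5): {total_energy(gain, 1.5):.2f}x energy")
# With elasticity > 1, total energy *rises* with every efficiency gain:
# the paradox in one line.
```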
John Ball @jbthinking
The traditional Silicon Valley model is to solve a problem first, then sell the solution for a profit. "Science projects" whose product-market fit is untested are normally left to prove themselves with a little money. Today's valuations are tied to capabilities these systems don't have, which keeps the systems that do have some use cases from being developed on their merits. The remaining use cases may be drafting letters and presentations. That's not hugely valuable, but pricing to it would remove the baseless claims of AGI within 12-24 months, claims that never come due.
Lars Christensen @MaMoMVPY
I must say I am increasingly suspicious about the comments from the AI bosses. Why do they need to make these completely over-the-top and so obviously unfounded predictions about how AI (or rather LLMs) will impact economic development? To me it is an indication of something not being quite right in their own business models: they will need to attract more and more investors, as they are likely to face a funding squeeze sooner rather than later. And again, let me stress: LLMs are great and can surely increase productivity, but at present AI companies like OpenAI and Anthropic are making heavy losses. This means that sooner or later LLM prices need to go up, potentially a lot. And what happens then to the case for LLMs? Might the entry-level lawyer turn out to be both a lot better AND cheaper than an AI "agent" priced at whatever is needed to make Anthropic or OpenAI profitable?
CG @cgtwts

Anthropic CEO: “50% of all entry-level Lawyers, Consultants, and Finance Professionals will be completely wiped out within the next 1–5 years." grad students and junior hires are cooked.

John Ball @jbthinking
@Eric_M_Stevens @rohanpaul_ai Yes, science has always led the way with accurate devices. Information processing in particular was always maniacal about data integrity, but these things are being led by ever-hopefuls who don't take reality into account.
Eric Stevens @Eric_M_Stevens
@rohanpaul_ai Wozniak built machines that do exactly what you tell them, perfectly. AI does something fundamentally different and messier. Judging it by the old standard is like being disappointed that a conversation partner doesn't compute as fast as a calculator.
Rohan Paul @rohanpaul_ai
Steve Wozniak reportedly says AI keeps disappointing him, and that is why he barely uses it. Wozniak is also pointing at something deeper: human value is not just accuracy, since people bring judgment, tone, emotional context, and a sense of what matters. So when AI feels “too perfect” and “too dry,” the problem is not style alone, but a gap between language generation and human understanding. --- techspot.com/news/111806-steve-wozniak-disappointed-lot-ai-rarely-uses.html
John Ball @jbthinking
"human value is not just accuracy." And worse, today's AI is not accurate, anyway. Perhaps that's why Wozniak is disappointed? And at the same time, other CEOs from Silicon Valley are claiming that this year, agents will show amazing capabilities. After 3 years of this "AI-revolution," perhaps it is time to just stop all the predictions. They are not coming true anyway. Instead, let's see what happens and talk about what they can do, not predictions of what they might do in the 1-3 year future.
John Ball @jbthinking
We can hear what he says, but it is just a guess. AI at the level of human skills has been proposed since computers were first built, and after 70 years it remains unsolved. I appreciate that Eric's friends in San Francisco predict that those 70 years of effort will end within 3 years (and agents will arrive this year), but as a cognitive science researcher I see design faults in LLMs, their lossy nature and statistical bias, that already block the proposed capabilities. I don't see how systems with such errors (hallucinations, if you like) will get past them and generalize into machines that don't fail as LLMs do. There is too much evidence against it. I can be persuaded by facts, of course, like a machine that can upgrade my node.js codebase to the latest version -- so perhaps, as with autonomous cars, I need to wait 3 years before the proof is in.
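A toy calculation of why the lossy-ness objection bites on long tasks like a codebase upgrade (the per-step rates below are illustrative assumptions, not measurements): if each step of a multi-step job succeeds independently with probability p, whole-task reliability decays exponentially with task length.

```python
# Toy model: reliability of a multi-step agentic task when each step
# independently succeeds with probability p_step. Numbers are assumed.

def task_success(p_step: float, n_steps: int) -> float:
    """P(all n_steps succeed) under independence."""
    return p_step ** n_steps

for p in (0.99, 0.95, 0.90):
    for n in (10, 50, 200):
        print(f"p_step={p:.2f}, steps={n:3d} -> {task_success(p, n):.3%}")
# e.g. 95% per-step accuracy over a 200-step codebase upgrade leaves
# roughly a 0.004% chance of getting through cleanly.
```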
Haider. @slow_developer
Eric Schmidt says the San Francisco consensus believes we're in the year of agents, and superintelligence is 2-3 years away. A company with 1,000 AI researchers can spin up a million AI research agents, limited only by electricity. No salaries, no housing, no HR. "Once AI recursively self-improves, the slope goes vertical."
John Ball @jbthinking
@anthony1 @niccruzpatane On the Patom theory side of things, it is right in the sense that it explains what a brain does, and it is being successfully applied to problems not solved by traditional methods. Language in particular.
John Ball @jbthinking
Which leads to my comment: we don't need to USE autonomous cars until they are ready. Just keep them off the roads until they are at human level. What's the rush? FSD has been around for a while, but that doesn't mean these cars should operate where they create risks that human drivers don't. Who takes responsibility for the errors of these non-human vehicles? Is that just something to pass on to insurance companies? A warranty on the tool should cover its errors, as with normal products, shouldn't it? That aligns the risk with the developer, who should be responsible. I don't know, but what's the principle?
John Ball @jbthinking
When they say 'AI', they mean the current technology that regularly fails on human-level problems, cannot solve problems profitably, and is unsustainable, causing electricity and water shortages and rising fossil fuel emissions. And the IP theft? Why would humanity want to reduce the quality of our lives by relying on such poor engineering?
John Ball @jbthinking
Thanks for the video, Anthony. I agree with you that risk appetite is up to the investors. I'm focused on the technical progress, given the autonomous-car companies' apparent focus on engineering rather than science. The technology applied is machine learning, which has fundamental limitations. It's old stuff and not human-level. The crashes in tunnels, at airports, and in other new situations are that baked-in technology limitation, and it isn't raising enough alarm.

Rather than an ongoing effort to log hours before getting a 'license' to operate legally, better benchmarks and strategy would be preferable: focus on long-tail conditions rather than on crossing safe areas. In the video there were decisions to make, but the number of billions of hours isn't what makes these cars safe. It is the application of science (cf. Moravec's paradox). I'd like to see low-power, low-cost, brain-like capability rather than a single technology being iterated on by all competitors. Independent research focused on the science would be better.

Why are driverless cars even being considered when the technology cannot yet make robots that walk around safely, or robots that learn from experience, or sensors that use the same recognition our brains do? As you know, my brain science suggests a number of lower-power, higher-reliability techniques that should be applied, or at least tested against these old designs, so I will keep trying to educate on the benefits of closer alignment with brain theory, like Patom theory.
Anthony @anthony1
The funding comes from shareholders willing to risk that autonomy will be solved. As you rightly said, some would have sold as the company slid past the 2020 estimate. Others believed in the likelihood it would be solved and held. As for safety, there is FSD vs. average American driver data at tesla.com/fsd/safety
John Ball @jbthinking
Yes, (generative) AI can help, but the CEOs of startups keep claiming it is more than help. On the one hand, AI will solve all cancers, crack longevity, and take all the jobs. On the other hand, you can’t trust anything it produces, and if something goes wrong the terms and conditions hold the user responsible!! Since when was warranty transferred to users rather than the companies that make the product? And for complex tasks like code rework I have hit a 95% failure rate, where all my effort is lost. So the so-called enterprise use case only works with senior developers fixing the code. Of course, on your point, validation of models and treatments should use processes like clinical trials. The range of unexpected problems in complex science needs such tools.
Debunk the AI Bubble @debunkiabubble
@jbthinking Discovery is part of science, and AI can definitely help. But that is only the creative part; it is not the whole of science. Then you need to do clinical trials.
John Ball @jbthinking
Again, critical thinking is useful in new ventures. How does a statistical text generator solve science problems? Does anyone other than AI CEOs believe that science is the manipulation of word sequences rather than the interactions of real-world entities? Describing gravity in words is less powerful than Newton’s F = ma, because the equation tells a very clear story of motion at speeds low compared to light.
Ewan Morrison @MrEwanMorrison

Altman has lost the plot. He's recycling the "AI will find new cures for diseases" hype from 2024 that crashed so badly. Plus - without seeing the contradiction - he's trying to recycle the opposing hype-pitch, that AI is so powerful it's a threat. Sloppy thinking.
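For reference, the equations in play in John Ball's point above (standard physics, added here for illustration, not from the posts):

```latex
% Newton's second law and his law of universal gravitation:
\[ F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2} \]
% Near Earth's surface the two combine into a constant acceleration,
% independent of the falling mass:
\[ a = \frac{G M_\oplus}{R_\oplus^{2}} \approx 9.8~\mathrm{m/s^2} \]
```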

John Ball @jbthinking
On the topic of new science, what is missing from the LLM version of science is the accuracy and low cost of human brains. Sustainable AI will leverage approaches like Patom theory, based on theoretical neuroscience rather than digital-computer statistics. Those looking for the next step beyond statistical AI should look at Patoms with RRG linguistics, which has researched and demonstrated the missing pieces of today's AI - and can run on low-power devices without GPUs!
Sam Altman @sama
AI will help discover new science, such as cures for diseases, which is perhaps the most important way to increase quality of life long-term. AI will also present new threats to society that we have to address. No company can sufficiently mitigate these on their own; we will need a society-wide response to things like novel bio threats, a massive and fast change to the economy, extremely capable models causing complex emergent effects across society, and more.

These are the areas the OpenAI Foundation will initially focus on, and in my opinion are some of the most important ones for us to get right. The Foundation will spend at least $1 billion over the next year.

@woj_zaremba, co-founder of OpenAI, will transition to Head of AI Resilience. I believe that shifting how the world thinks about safety to include a Resilience-style approach is critical, and I am extremely grateful to Wojciech for taking on this role. Wojciech has been my cofounder for the last decade; anyone who knows him will understand what I mean when I say he is one of a kind. He has a lot of ideas about how we build a new kind of AI safety.

@JacobTref is joining as Head of Life Sciences and Curing Diseases. @annaadeola, our VP of Global Impact, will transition to Head of AI for Civil Society and Philanthropy. @robert_kaiden is joining as Chief Financial Officer. @jeffarnold is joining as Director of Operations.
John Ball @jbthinking
I think a surer sign of AGI would be a code tool that doesn’t make endlessly wrong recommendations. In my work using ‘amazing AI tools’ I get time-wasting suggestions, because the tools are too error-prone to work out how to solve the complex problems of a simple code base. I pass such problems to human coders to fix, because AI like Microsoft Copilot and Claude doesn’t work.
vitrupo @vitrupo
Andrej Karpathy says the personality of an AI agent matters more than people realize. With Claude, when he shares a strong idea, the praise feels earned. Sometimes he even finds himself trying to earn it. At some point you wonder who is training whom.
John Ball @jbthinking
I feel like the weakness of Silicon Valley at the moment is the lack of independent, critical thinking in the process of building new technologies. Rather than one company developing something and another body independently testing it and looking for limitations, the echo-chamber effect takes over. Benchmarks are written by the same companies that use them. Gary should be a mandatory addition to new startups, to give expert review and a voice to limitations from a cognitive science view, reducing losses from speculative ventures.
John Ball @jbthinking
@TrueAIHound Thanks. I'm curious what happens when the big AI companies start to charge the full cost of the service plus a 30-100% profit margin. I mean, how many corporations will pay that much for an agent that runs code less reliably?
AGIHound @TrueAIHound
@jbthinking My take is that it was bleeding too much cash. No profits.
John Ball @jbthinking
Human-level AI? Robots can't learn to walk like us, conversational bots can't handle typical tasks well enough to relieve human agents, and code systems can't upgrade my software. Those are human tasks that these machines fail at. What am I missing in order to claim that we HAVE reached human-level AI?
Haider. @slow_developer
from what i'm seeing with opus 4.6 and gpt-5.4, i think people who say we haven't reached general human-level AI are probably imagining something beyond it. with the right tools, both models can do almost anything humans can, with a similar error rate, just much faster