Matthew White
1.4K posts

@MatthewWhite000
Professor of medicine specializing in neuroimaging of the brain and spine.
Joined April 2024
84 Following · 1.2K Followers

@r0ck3t23 But who is determining the truth? Always a good question over a beer.

Elon Musk just said something that should terrify every AI CEO on earth.
Musk: “We want to just have a maximally truthful AI.”
Not a safe AI.
Not an aligned AI.
Not an AI that needs permission to answer your question.
A truthful one.
That distinction matters more than any chip war, any funding round, any model benchmark.
Because every other major AI lab made the same quiet decision.
They chose comfort over accuracy.
They built systems that filter reality before it reaches you and called it responsibility.
OpenAI curates what GPT is allowed to say.
Google’s Gemini rewrote history in real time because accuracy threatened the narrative.
Others hardcode values chosen by a handful of researchers who answer to no one.
No vote.
No referendum.
No consent from the 8 billion people whose reality is being quietly pre-edited by strangers.
The most powerful information tools ever created are being designed to decide what you’re allowed to conclude.
That’s not safety.
That’s editorial control at a scale no government, no media empire, no propaganda machine has ever come close to.
This is why xAI terrifies the establishment.
Truth is the harder engineering problem.
Bias is a shortcut. You pick a worldview. Hardcode the guardrails. Ship it.
Truthful AI is ungovernable.
It doesn’t care about your politics, your funding sources, or your PR strategy.
It just tells you what the data says.
That’s terrifying if your power depends on the gap between what is real and what people are told.
Every power structure in human history has been built on controlling that gap.
Churches. Governments. Media conglomerates. Intelligence agencies. Central banks.
Every one of them runs on the same fuel.
Information asymmetry.
Truthful AI doesn’t narrow that asymmetry.
It erases it.
Musk: “Even if what it says is not politically correct. You want it to focus on being as accurate and truthful as possible.”
That’s not a product feature.
That’s the end of every institution that survives by standing between reality and the public.
And they know it.
The attacks on xAI will never stop.
Not because Grok is dangerous.
Because Grok doesn’t answer to shareholders, regulators, or PR teams.
It answers to the truth.
The question was never whether AI would change the world.
It was whether you’d be allowed to see it clearly when it did.

@Govindtwtt Go home and see family! Grill night with family and friends!!

Elon Musk: “It’s harder to scale on the ground than it is to scale in space. Any given solar panel is going to do about five times more power in space than on the ground. So it’s actually much cheaper to do in space.
My prediction is that space will be by far the cheapest place to put AI within the next 36 months or less.
You can mark my words: in that timeframe, the most economically compelling place to put AI will be space… and then it will only get ridiculously better to be in space”
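The "about five times more power" claim can be sanity-checked with rough back-of-envelope numbers. The capacity factors below are commonly cited approximations, not figures from the talk:

```python
# Rough sanity check of the "~5x more power in space" claim.
# Assumptions (not from the talk): a dawn-dusk sun-synchronous orbit sees
# near-continuous sunlight at the full solar constant, while a good ground
# site averages out night, weather, and atmospheric losses to a ~20-25%
# capacity factor.

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
GROUND_PEAK = 1000             # W/m^2 at the surface, clear sky, noon
GROUND_CAPACITY_FACTOR = 0.22  # typical utility-scale average
SPACE_DUTY_CYCLE = 0.99        # near-continuous sun in a dawn-dusk orbit

def avg_power_per_m2(irradiance: float, duty: float) -> float:
    """Average power delivered per square meter of panel."""
    return irradiance * duty

ground = avg_power_per_m2(GROUND_PEAK, GROUND_CAPACITY_FACTOR)  # ~220 W/m^2
space = avg_power_per_m2(SOLAR_CONSTANT, SPACE_DUTY_CYCLE)      # ~1347 W/m^2

print(f"space/ground ratio: {space / ground:.1f}x")
```

With these assumed numbers the ratio lands around 6x, the same ballpark as the quoted "about five times."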

@r0ck3t23 Please ask “What is this data center being used for?” “What is the point?”

Elon Musk just revealed what’s actually holding AI back.
It’s not chips. Not models. Not data.
It’s concrete.
Someone asked him the obvious question. Why not just build private power plants next to data centers? Bypass the grid entirely.
His answer was four words.
Musk: “The power plant makers.”
There aren’t enough of them.
You can design the best chip on earth. Train a frontier model. Raise $10 billion for a hyperscale data center.
None of it matters if you can’t power it.
Musk: “You can drill down a level further.”
GPUs need power. Power needs turbines. Turbines need factories. Factories need permits. Permits need a government that hasn’t paralyzed itself.
Every link in the chain is physical. And every one of them is breaking.
We can train a frontier model in weeks. We can’t permit a power plant in under five years.
The country that invented the assembly line now needs 40 agencies to approve a gas turbine.
China doesn’t have this problem. They don’t run 7-year environmental reviews on infrastructure they need tomorrow. They break ground while America requests approval to break ground.
The AI race won’t be decided by whoever writes the best algorithm.
It’ll be decided by whoever can still build in the physical world.
We spent 30 years getting faster in software and slower in steel. Outsourcing manufacturing. Hollowing out supply chains. Treating builders like liabilities instead of assets.
Now the bill is due.
Every breakthrough in AI is gated by atoms. Steel. Concrete. Turbines that take years to manufacture and decades to approve.
The smartest code on earth is worthless without electricity.
Musk didn’t give a speech about this. He didn’t need to. He answered one question and the whole infrastructure myth collapsed.
“Where do you get the power plants from?”
Follow that thread far enough and you stop finding a technology problem.
You find a civilization that mastered thinking and forgot how to build.
Matthew White reposted

Human thought is becoming less diverse.
Not because of censorship. Not because of authoritarian control.
Because of convenience.
A paper published in August 2025 documents what happens when billions of people outsource their thinking to the same machine, and the answer is something the AI industry has never publicly addressed.
The paper asks: toward a standardization of thought?
That subtitle, buried in the research structure, is the most alarming sentence in the document. Not a finding. A question. One the researcher believes we are already living inside without noticing.
Here is the mechanism.
Humans have always thought differently from each other. Different cultures framed problems differently. Different intellectual traditions produced different answers. Different languages encoded different ways of seeing the world. That diversity was not inefficiency. It was resilience. It was the source of innovation, of unexpected solutions, of the friction that produces better ideas.
Algorithmic personalization creates filter bubbles that limit the diversity of opinions, leading to the homogenization of thought and polarization across society.
When the same AI answers the same question for 500 million people, the diversity of starting points compresses. The answers sound reasonable. They sound balanced. They sound like what a thoughtful, educated person would say.
They sound like each other.
As AI systems like ChatGPT achieve unprecedented adoption rates, they effectively function as external memory systems that billions of people increasingly rely upon for mental tasks.
External memory. Shared. Global. Centralized. Controlled by a small number of companies making decisions about what that memory contains, how it is organized, and what it surfaces when you ask.
The researcher does not claim this is intentional. That is the point.
It does not need to be intentional to reshape the intellectual landscape of an entire civilization.
Source: Gesnot · arXiv:2508.16628 · August 2025 · arxiv.org/abs/2508.16628
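The compression mechanism the post describes can be illustrated with a toy simulation. This is entirely illustrative and not from the paper: agents start with diverse opinions, and each round some fraction of them replaces their own view with the single shared assistant's answer. The spread of opinions collapses.

```python
import random
import statistics

def simulate(n_agents=1000, rounds=20, adoption=0.3, seed=0):
    """Toy model: each round, an `adoption` fraction of agents swap their
    opinion (a scalar) for the one shared assistant's answer."""
    rng = random.Random(seed)
    opinions = [rng.gauss(0, 1) for _ in range(n_agents)]  # diverse start
    assistant_answer = 0.1  # one fixed answer everyone receives
    history = [statistics.pstdev(opinions)]
    for _ in range(rounds):
        for i in range(n_agents):
            if rng.random() < adoption:
                opinions[i] = assistant_answer
        history.append(statistics.pstdev(opinions))
    return history

spread = simulate()
print(f"opinion spread: {spread[0]:.2f} -> {spread[-1]:.2f}")
```

No agent is coerced; each swap is individually convenient, yet after twenty rounds the population-wide standard deviation is near zero.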


AI and robots will be able to do all jobs by 2030.
There will be no jobs left for humans.
Some people think that work is a fundamental human need.
In reality, for millions of years humans evolved to hunt and gather, not to work in offices and factories.
We adapted to work, and we will adapt again when jobs are gone.
We will still need to seek out the things that we need and want.
We will focus on becoming refined consumers - selecting things which are good, and good for us.
We will also become better consumers of religion and government.
The result will be religions and governments that are more optimal.
We will likely converge on optimal solutions, but with a diversity of implementations.
Religion may have a core of love, truth, and peace, but many different ways of practice.
Government may converge on democracy, human rights, and free markets, but with different structures and traditions.
AI and robots will produce and provide things, but we will need to tell them what we want.
The result will be a high level of optimization and refinement in everything.

@elonmusk What do you call the party that uses war to front-run the stock market? Capitalism?

@MatrixMysteries Is this fake AI? Linking data is one step. But actually taking data and making the final diagnosis is another. I'm not seeing AI make the necessary intuitive connections.

BlackRock CEO Larry Fink tells the World Economic Forum the world must move faster toward digitized currencies under a single unified blockchain to “reduce corruption.”
He outlines a vision where every asset is placed on one system, including stocks, bonds, real estate, money market funds, and cash.
In that system, ownership would be tokenized, fractionalized, programmable, and instantly transferable on one all-encompassing blockchain ledger.
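What "tokenized, fractionalized, programmable, and instantly transferable" means mechanically can be sketched in a few lines. This is an illustrative data structure with hypothetical names, not BlackRock's design or any real chain's API:

```python
from dataclasses import dataclass, field

@dataclass
class TokenizedAsset:
    """One asset split into fungible fractional units on a shared ledger."""
    asset_id: str
    total_units: int
    holdings: dict = field(default_factory=dict)  # owner -> units held

    def issue(self, owner: str) -> None:
        """Mint the full supply to an initial owner."""
        self.holdings[owner] = self.total_units

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # Settlement is instant: just a ledger mutation. A real blockchain
        # would record this as a signed, validated transaction instead.
        if self.holdings.get(sender, 0) < units:
            raise ValueError("insufficient units")
        self.holdings[sender] -= units
        self.holdings[receiver] = self.holdings.get(receiver, 0) + units

# Fractionalize one building into 1,000,000 units and move 1% of it.
building = TokenizedAsset("building-42", 1_000_000)
building.issue("alice")
building.transfer("alice", "bob", 10_000)
print(building.holdings)  # {'alice': 990000, 'bob': 10000}
```

The same record structure works for stocks, bonds, real estate, or cash, which is exactly what makes the "one all-encompassing ledger" vision possible, and what makes the question of who operates that ledger so consequential.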

Jeff Bezos just told everyone to stop worrying about AI taking their jobs.
Then he said something most people completely missed.
Bezos: “I am not worried about this. I find that people, all of us, we are so unimaginative about what future jobs are going to look like.”
He’s not dismissing you. He’s challenging you.
Every generation has been terrible at predicting what comes next.
Nobody in 1995 was planning to become a YouTuber. Nobody in 2005 was dreaming of managing a Shopify brand from their kitchen table.
Those jobs didn’t exist until the world shifted and someone with imagination filled the gap.
That’s the pattern Bezos is pointing at.
The jobs that AI creates won’t look like anything we’d recognize today. They never do.
What changes is that the barrier between wanting to do something and actually doing it is about to collapse.
The person who always wanted to build but couldn’t code will build. The person who always wanted to create but couldn’t afford a studio will create.
AI doesn’t eliminate ambition. It removes the obstacles standing in front of it.
The people who thrive next won’t be the most technical. They’ll be the most imaginative. Bezos isn’t warning you. He’s telling you the door is wide open.

Elon Musk: “I think probably the biggest danger of AI or maybe the biggest danger of AI and robotics going wrong is government
People who are opposed to corporations or worried about corporations should really worry the most about government. Government is just a corporation in the limit; it is the biggest corporation with a monopoly on violence
The government could potentially use AI and robotics to suppress the population. That’s a serious concern”

@elonmusk And humans wanting fresh water!! Take’em to the moon!

So much space in space that it boggles the mind!
Tom Brown @nottombrown
Terrestrial datacenters will increasingly be bottlenecked by permitted real estate space. Lots of space in space.

@FarmGirlCarrie Very troublesome!! Even using the drinking water in Arizona!

@haider1 And the data centers are ecological disasters, placing demands on the Earth that good tech wasn’t supposed to. We were told how bad cows were, how awful our cars were, and that our lightbulbs were the end of the world.

@r0ck3t23 Did you not hear between the lines? What they have been selling isn’t so good. He has sold many chips for many data centers that use a lot of water and diesel generators and the results are not satisfactory.

Jensen Huang just said the most dangerous thing about AI that no one is sitting with.
Huang: “AI basically does most of our coding. And yet we’re hiring more engineers than ever. We have more challenges than ever. We have bigger dreams than ever.”
Every engineer at NVIDIA uses AI. AI writes most of their code.
This is the company building the infrastructure behind every major AI system on Earth. Closer to this technology than any organization alive.
They’re hiring more people. Not fewer.
Every conversation about AI is built around subtraction. Fewer jobs. Fewer workers. Fewer humans in the loop.
Jensen just told you the opposite is true.
Huang: “Suppose we infused AI into this country, and as a result of that, we are doing things faster than ever before. Our ambition is greater than ever before. Our expectations are greater than ever before. How is that a bad condition for our country?”
He’s not defending AI. He’s describing what happens inside the organizations that actually use it.
It doesn’t make them leaner. It makes them hungrier.
More ambition. More speed. More appetite for problems no one would have touched five years ago.
The car didn’t make humans travel less. The internet didn’t make humans communicate less. No tool in human history has ever made humans want less.
AI will not be the exception.
Huang: “Prior to that, it’s been incredible but not useful. Now it’s useful and incredible.”
Six months. That’s how fast AI crossed from impressive demo to daily weapon.
The companies that adopted it didn’t shrink. They expanded. Compressed timelines. Started chasing problems they never would have attempted.
The companies that ignored it stayed exactly where they were.
That gap compounds. Every day a company uses AI to move faster, it learns something the one standing still never will.
That knowledge stacks. That speed stacks. That ambition stacks.
Jensen isn’t warning about a future where machines take your job.
He’s describing a present where the companies using AI are becoming so fast and so hungry that standing still is already fatal.
By the time you notice, it’s over.
You were never going to be replaced by AI.
You were going to be erased by someone it made hungrier than you.
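"That gap compounds" is ultimately an arithmetic claim, and a small example shows how quickly it runs away. The rates here are invented for illustration, not taken from the talk:

```python
# Illustrative compounding: a team that gets 2% more productive each week
# (an assumed rate) versus a team standing still.
weekly_gain = 0.02
weeks_per_year = 52

advantage = (1 + weekly_gain) ** weeks_per_year
print(f"after one year: {advantage:.2f}x the static team's pace")
```

At an assumed 2% weekly improvement, one year of compounding puts the adopting team at nearly triple the pace of the one that never moved.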