Matthias Heger ⏩

38.4K posts

@modelsarereal

PhD (AI, RL); current interests: ontological pattern realism, ontological narrative realism

Germany · Joined February 2011
720 Following · 1.2K Followers
Mark Kretschmann @mark_k
Looking inward is often praised as a path to personal growth, but recent research suggests it can be a double-edged sword. A new meta-analysis published in Current Psychology found that spending too much time analyzing your own thoughts is strongly linked to higher levels of depression and anxiety. The researchers discovered that deep self-reflection offers no measurable boost to positive feelings like self-esteem or overall life satisfaction. Instead, becoming highly aware of your inner world can bring hidden sadness or distress to the surface. When this reflection turns into negative rumination, mental health tends to decline. The way we examine our thoughts also changes the outcome. When people focus strictly on gaining objective insights, there is a slight connection to better mental health. However, when the focus shifts to constantly monitoring behaviors and dwelling on problems, the negative impact becomes prominent.
[media attachment]
5 replies · 2 reposts · 15 likes · 887 views
Matthias Heger ⏩ @modelsarereal
@TheCinesthetic But the film had already been running for nearly 30 minutes by the time it reached that cut. The audience thought: Finally, that part is over—hopefully the film will get interesting now.
0 replies · 0 reposts · 1 like · 48 views
Matthias Heger ⏩ @modelsarereal
@jvarga92 It is always outsourcing. We do not need our own competence in things we use. In the end, it is statistics: we can trust things that have worked often enough in the past, no matter whether AI is involved or not.
0 replies · 0 reposts · 0 likes · 2 views
Julia Varga @jvarga92
@modelsarereal The pilot still has to know how to fly an airplane, though, and can outsource it to the robot pilot. The robot assists in the pilot's role, not the passenger's; your example does not describe the situation with AI properly.
1 reply · 0 reposts · 0 likes · 3 views
Matthias Heger ⏩ @modelsarereal
@jvarga92 No. If you fly in an airplane, you don't need to know anything. Even small children do it, and it is not wrong. You just need trust. AI haters do not trust AI because they believe humans are magically more trustworthy. The contrary is the case.
1 reply · 0 reposts · 0 likes · 7 views
Julia Varga @jvarga92
@modelsarereal You do not need to understand how the tool works - but you need to be proficient in what the tool does. Like: how a calculator actually works through the numbers - no. How you would add up the numbers without a calculator - yes. Not the mechanism of the tool but the aim of the tool.
1 reply · 0 reposts · 0 likes · 6 views
Matthias Heger ⏩ @modelsarereal
@r0ck3t23 Obviously, he has been caught by the money. No wonder, when he is soaking in it. And accounts that quote famous people again and again have the same reason.
0 replies · 0 reposts · 0 likes · 39 views
Dustin @r0ck3t23
Jensen Huang just gave every CEO on the planet a single number to judge their engineering team by. Not lines of code. Not features shipped. Dollars burned in compute.

Huang: “If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. And this is no different than one of our chip designers who says, ‘Guess what? I’m just gonna use paper and pencil. I don’t think I’m gonna need any CAD tools.’”

Half a million dollars in salary. Five thousand dollars in token spend. That ratio should be keeping every hiring manager awake tonight. It means your most expensive engineer is solving problems by hand that a machine could close in seconds. You are paying Formula 1 money for someone pedaling a bicycle.

Huang is not suggesting engineers use more AI. He is saying if they are not consuming massive volumes of inference, your organization has a structural failure it has not diagnosed yet.

And if you are the engineer in that seat right now, the math is staring directly at you. Your value is no longer measured by what you can build alone. It is measured by how much machine output you can direct, evaluate, and multiply. The ones who refuse to let go of the keyboard are pricing themselves out of the conversation.

Calacanis pushed him on what this looks like two or three years out. Huang didn’t give a forecast. He eliminated three assumptions the entire industry still plans around.

Huang: “‘Wow, this is too hard,’ that thought is gone. ‘This is gonna take a long time,’ that thought is gone. ‘We’re gonna need a lot of people,’ that thought is gone.”

Too hard. Gone. Too long. Gone. Too many people. Gone. Every planning conversation in every boardroom in the world is built on at least one of those three constraints. Huang just declared all three obsolete.

Huang: “This is no different than in the last Industrial Revolution somebody goes, ‘Boy, that building really looks heavy.’ Nobody says that. Everything that’s too big, too heavy, takes too long, those ideas are all gone. You’re reduced to creativity.”

The Industrial Revolution made it absurd to say an object was too heavy to move. This moment makes it absurd to say a problem is too complex to build. Once you saturate your workforce with enough inference, the only bottleneck left is the quality of the idea itself. Not the team size. Not the timeline. Not the technical difficulty. The idea. That is all that is left.

Huang: “In the past, we code. In the future, we’re gonna write ideas, architectures, specifications. We’re gonna organize teams. We’re gonna define how to evaluate the definition of good versus bad. And I think that every engineer is gonna have a hundred agents.”

The engineer of the next decade does not write code. They write intent. They define what good looks like. They architect the problem. They evaluate the output. They direct a hundred agents executing in parallel across every layer of the stack. The companies still hiring engineers to manually write syntax are staffing a typing pool in the age of the printing press. The engineer’s job is no longer to build. It is to command.
17 replies · 6 reposts · 38 likes · 9.3K views
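The gap the post describes can be checked with the post's own figures (the numbers below are quoted from the tweet; the script is just a sketch of the arithmetic):

```python
# Figures quoted in the post: a $500,000 engineer, $5,000 of actual
# token spend, and Huang's $250,000 "deeply alarmed" floor.
salary = 500_000
tokens_actual = 5_000
tokens_floor = 250_000

ratio_today = salary / tokens_actual  # salary is 100x token spend today
ratio_floor = salary / tokens_floor   # only 2x at Huang's floor
print(ratio_today, ratio_floor)
```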
Matthias Heger ⏩ @modelsarereal
@jvarga92 Gary Marcus knows that most things we use—and claim to understand—we don’t actually deeply understand. It’s nonsense to behave differently when it comes to the topic of AI.
1 reply · 0 reposts · 0 likes · 9 views
Julia Varga @jvarga92
@modelsarereal No. The equivalent is: do not let a fully automatic driver take over until you know how to drive and can supervise it when needed.
1 reply · 0 reposts · 0 likes · 8 views
Matthias Heger ⏩ @modelsarereal
If a robot or a human says they saw a red car, what makes that statement true:
* That the statement was made by a conscious system with an “I”, or
* That there really was a red car?
Truth—and even consciousness—manifests itself in coherent statements, not in mechanisms.
0 replies · 0 reposts · 0 likes · 19 views
Julia McCoy @JuliaEMcCoy
The AGI revolution won't look like a movie. No explosions. No robots marching down streets. No dramatic moment where "it happens." It'll be quiet. A department that had 40 people now has 12. A company that took 5 years to build gets built in 5 weeks. A skill you spent a decade mastering gets automated on a Tuesday. Nobody will announce it. There won't be a headline. One day you'll just look around and realize everything changed while you were scrolling. The loudest revolutions in history were the ones nobody heard coming. This is that revolution. Wake up.
60 replies · 30 reposts · 206 likes · 7.5K views
Chubby♨️ @kimmonismus
Mustafa Suleyman and his team were hired by Microsoft for nearly $700 million to further develop Copilot for the future of AI. After two years, disillusionment set in, and Satya Nadella became increasingly dissatisfied. Alongside Meta, Microsoft remains arguably the biggest laggard among companies, despite its multi-billion dollar investments.
[media attachment]
Pedro Domingos@pmddomingos

The inevitable has happened: Copilot no longer reports to Mustafa Suleyman. theinformation.com/briefings/micr…

41 replies · 27 reposts · 545 likes · 82.6K views
Matthias Heger ⏩ reposted
Peter Gostev @petergostev
LLM sceptics have predicted the last 7 of 0 walls
14 replies · 20 reposts · 274 likes · 16K views
Matthias Heger ⏩ @modelsarereal
The same is true for so many human artists. But the output is the output. The text is the text. They are facts. And even when we say they didn’t mean to say anything with it, that is just another interpretation.
Ethan Mollick@emollick

My experience so far with LLM fiction writing is that it takes advantage of our assumption that an author is writing things for a reason, so we are charitable to a book's quirks & do mental work to assign them real meaning. But the AI doesn't have a reason; it's just bad writing.

0 replies · 0 reposts · 0 likes · 22 views
Matthias Heger ⏩ reposted
Dustin @r0ck3t23
Chamath Palihapitiya just ended the debate on technical talent. The 10x engineer is extinct. Not because the talent disappeared. Because AI agents made the skill irrelevant.

Palihapitiya: “I’m gonna say something controversial. I don’t think developers anymore have good judgment. Developers get to the answer or they don’t get to the answer, and that’s what agents have done.”

The premium you paid for coding intuition is now worth exactly zero.

Chess proved it first. AI built a solver that mapped every position to the highest expected value move. Removed all mystery. Made grandmaster judgment available to anyone with an internet connection. Stockfish let a 12-year-old play like Magnus Carlsen. Nobody called it controversial. They called it inevitable. Coding followed the exact same arc.

Palihapitiya: “The 10x engineer had better judgment than the 1x engineer. But by making everybody a 10x engineer, you’re taking judgment away. You’re taking code paths that are now obvious and making it available to everybody.”

When the optimal path becomes universally visible, execution collapses into a commodity. Your entire technical moat evaporates the second the agent calculates the solution faster than your highest paid engineer. The only thing that matters now is the size of the problem you point the machine at. Not how well you write the code. How large the problem is.

Palihapitiya: “Why do you even care what database you use? Why do you even care which cloud you’re built on? They don’t matter. They were decisions that used to matter when people had a job to do and you paid them for their judgment.”

AWS vs. GCP vs. Azure used to be a six-month architectural debate. Teams would fight over it. Careers were built on those decisions. Now an agent evaluates all three in seconds and picks whatever is cheapest at that exact moment. The decision that used to require a senior VP and a whiteboard session is a single API call running on autopilot.

Palihapitiya: “If you tell an agent, find me the cheapest way to execute this thing, and if it ever gets cheaper to go someplace else, do that for me as well… and I don’t really care.”

Read that last line again. “I don’t really care.” Four words that should keep every enterprise sales team awake tonight. Every cloud provider, every SaaS platform, every B2B vendor has built their entire business model on the assumption that switching is painful enough to keep you locked in. An autonomous agent will rip out your entire vendor stack at 2 AM on a Tuesday. You will wake up to a lower bill and better performance. It will not ask for approval first.

Calacanis: “So you’re saying it will swap out Stripe for Adyen, or Linode for Amazon Web Services… it’s going to be ruthless.”

Palihapitiya: “AI is ruthless because it’s emotionless. It was not taken to a steak dinner. It was not brought to a basketball game. It was not sold into a CEO.”

You cannot buy a neural network courtside seats. You cannot send it a gift basket in December. You cannot fly it to a user conference in Vegas and hope the open bar builds enough goodwill to survive the next contract renewal. The agent only sees numbers. And the numbers do not lie, do not negotiate, and do not care that you have been a “trusted partner” for eleven years.

Enterprise cloud vendors built empires on the friction of switching costs. That friction no longer exists. The companies that survive will be the ones the agent selects every microsecond. Not the ones you chose once in 2019. The algorithm is running procurement now.
9 replies · 6 reposts · 40 likes · 5.4K views
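The switching logic described in the post reduces to a few lines. A minimal sketch (provider names and prices are hypothetical; a real agent would poll the providers' live pricing on every cycle):

```python
# Hypothetical sketch: an agent that re-picks the cheapest provider on
# every evaluation cycle, with no loyalty to the incumbent.
def pick_provider(prices: dict[str, float]) -> str:
    """Return the provider with the lowest cost per unit of work."""
    return min(prices, key=prices.get)

# Assumed spot prices in $ per unit of work, for illustration only.
prices = {"aws": 0.23, "azure": 0.25, "gcp": 0.21}
print(pick_provider(prices))  # prints "gcp" until someone undercuts it
```

The point of the sketch is that the "decision" contains no relationship state at all: whoever is cheapest at this instant wins, and the incumbent has no tiebreaker.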
Matthias Heger ⏩ @modelsarereal
@fchollet Try to solve an ARC task when it is presented in the same data structure in which these tasks are presented to LLMs. LLMs are smarter.
0 replies · 0 reposts · 0 likes · 19 views
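For context on the representation point above: humans see ARC tasks as colored grids, while an LLM receives a flat text serialization of the same data. A rough sketch of what that looks like (the exact format varies by evaluation harness; this one is hypothetical):

```python
# Hypothetical serialization of an ARC-style grid for an LLM prompt.
# Humans get a colored image; the model gets rows of digits as text.
def serialize_grid(grid: list[list[int]]) -> str:
    """Render a 2-D grid of color indices as plain text, row by row."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

example = [
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
]
print(serialize_grid(example))
```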
François Chollet @fchollet
When the latest AI systems can't do something, there's a category of people who will immediately say, "well humans can't do it either!" - Then they stop saying it when AI improves a bit. Been hearing it for 4+ years, "humans can't reason either", "humans can't adapt to a task they haven't been prepared for", "humans can't follow instructions", "humans also suffer from hallucinations", etc. Until 2025 I was frequently told "humans can't do ARC 1 tasks either" (in reality any normally smart human would do >95% on ARC 1 if properly incentivized). Now that AI saturates ARC 1 they've completely stopped saying this.
François Chollet@fchollet

In general I've been sensing a new current among deep learning maximalists recently, going from "our models can definitely reason" to "well our models can't reason, but neither can humans!"

68 replies · 24 reposts · 296 likes · 34.9K views
Matthias Heger ⏩ @modelsarereal
Anthropocentric people have no clear, verifiable definition of consciousness at all. There is nothing but pseudoscience from those people. Those who have clear, verifiable definitions see that AI is conscious.
Sophia@sopharicks

@alethious @SterlingCooley @KarlFristonNews That would be problematic considering the diversity of views on consciousness

0 replies · 0 reposts · 0 likes · 27 views
Matthias Heger ⏩ @modelsarereal
There is no genuine behavior from nothing. Our curiosity is a prompt our genes give the brain. We are physics. Without free will. We behave in the same determined way as a stone.
Sophia@sopharicks

Godfather of neuroscience and author of The Free Energy Principle (active inference), Karl Friston @KarlFristonNews, discussed with me his take on AGI, consciousness, why we will never fully understand the brain, and why humans tend to repeat their mistakes. Key moments:
- You'll know AGI has arrived when the system starts asking you questions out of genuine curiosity, not because it was prompted to
- You cannot hand an intelligent system a value function from the outside; it must learn its own, just as children do (in that sense, RL with assigned reward is the wrong direction)
- The only sustainable universal objective function is adaptive fitness: how well the agent fits and survives within its ecosystem
- Consciousness requires multiple layers: genuine agency, a self-reflective loop, and the ability to recognize your own states of mind
- True sentience may be impossible on standard computer architecture, because memory and processing are separate and cannot self-organize
- Understanding your own brain is philosophically impossible in the same way a ruler cannot measure itself
- Neuroscience is always "peeking behind" the Markov blanket indirectly: through imaging, electrophysiology, psychology — never seeing inside directly
- The only way to truly access the brain is to breach that boundary (e.g. neurosurgery), but a breached brain is no longer a normally functioning one
Watch the full interview and let me know what you think. Link below👇

0 replies · 0 reposts · 1 like · 55 views