Alex

7.5K posts



@AlexanderIsOnX

Owner of Tesla, SpaceX and xAI, Longevity, Fermi-Paradox

Frankfurt am Main, Germany · Joined June 2009
600 Following · 419 Followers
Nikita Bier@nikitabier·
If you’re seeing a bunch of Japanese posts, here are some fun facts: Japan has more daily active users and more time spent on X than any other country in the world. Over two thirds of the country is monthly active on X. X in Japan has one of the highest penetration rates of any social network in history.
English
2.9K
1.9K
22.1K
1.2M
Alex@AlexanderIsOnX·
@elonmusk @Ric_RTP Even stronger Machiavellian character than Elon himself
English
0
0
0
66
Elon Musk@elonmusk·
@Ric_RTP Scam Altman is super good at scamming
English
665
531
9.5K
241.5K
Ricardo@Ric_RTP·
This AI whistleblower just EXPOSED Sam Altman for manipulating his way into becoming OpenAI’s CEO. Everyone who helped him build it has left because they felt used.

Karen Hao interviewed 300 people, including 90 current and former OpenAI employees. And she just told Steven Bartlett what she discovered:

In 2015, Altman needed Elon Musk to co-found OpenAI. Problem was, Musk was obsessed with AI as an existential threat. So Altman wrote a blog post calling AI "probably the greatest threat to the continued existence of humanity." Before that blog post? Altman's biggest fear was engineered viruses. Not AI. He literally rewrote his worldview overnight to mirror Musk's language word for word.

Musk bought in. Donated millions. Co-founded the company. Then Altman stabbed him in the back.

When OpenAI needed a CEO for its new for-profit arm, the co-founders Ilya Sutskever and Greg Brockman initially chose Musk. Altman went directly to Brockman, a personal friend, and said: "Do we really want someone this erratic and unpredictable to control a technology that could be super powerful?" Brockman flipped. Then convinced Ilya to flip. Musk found out he wasn't getting the role and left.

That's how the biggest rivalry in tech actually started. Not over ideology... Over a backroom power play.

But here's where it gets darker: every single person who built OpenAI alongside Altman eventually felt the same thing Musk felt. Used. Manipulated. Discarded.

Dario Amodei, VP of Research, thought Altman shared his vision. Over time he realized Altman was on "exactly the opposite page" and had used his intelligence to build things he fundamentally disagreed with. He left and founded Anthropic.

Ilya Sutskever, co-founder and chief scientist, tried to get Altman fired. He told colleagues: "I don't think Sam is the guy who should have the finger on the button for AGI." He was pushed out and founded Safe Superintelligence. That name alone tells you everything.

Mira Murati, CTO, left and started Thinking Machines Lab.

No other tech company in history has had every single co-builder leave and start a direct competitor. Not Google. Not Meta. Not Apple. NOBODY.

300 interviews exposed one consistent pattern: if you align with Altman's vision, you think he's the Steve Jobs of AI. If you don't, you feel like you were manipulated by someone who will say whatever is needed to whoever is listening.

When talking to Congress? AGI will cure cancer and solve poverty. When talking to consumers? It's the best digital assistant you'll ever have. When talking to Microsoft? AGI is a system that generates $100 billion in revenue. Three completely different definitions of the same technology, sold to three completely different audiences.

And if you publicly disagree with any of it? OpenAI subpoenaed 7 nonprofit organizations that criticized them. Sent a sheriff to a 29-year-old nonprofit lawyer's door during dinner demanding every text, email, and document he'd ever sent about OpenAI. A one-man watchdog nonprofit got papers demanding all communications with anyone who questioned the company. OpenAI's own head of mission alignment publicly said "this doesn't seem great." That's the guy whose literal job is making sure OpenAI BENEFITS humanity.

Former employees who spoke up about secret non-disparagement clauses that threatened to strip their equity described the psychological pressure as "crushing."

This is the company that tells us it's building technology "for the benefit of humanity." Same company that mirrors whatever language gets them funded. Same company where every builder eventually walks away feeling deceived. Same company sending law enforcement to silence critics.

The biggest AI company on Earth wasn't built on technology. It was built on one man's ability to tell everyone exactly what they needed to hear. And the scariest part is that it worked.
English
165
701
4.3K
313.8K
Beatriz Villarroel@DrBeaVillarroel·
What a lovely surprise this morning! ☀️Independent detections of similar transients in European plate archives — exactly the kind of cross-validation this field needs. So it’s not just Palomar anymore. The study was carried out by a retired NASA scientist. This is how a signal begins to emerge from the noise. arxiv.org/pdf/2603.20407
English
332
951
5.1K
965.7K
Alex@AlexanderIsOnX·
@ConceptualJames Are children, caregivers, the elderly and the unemployed all insane right now?
English
0
0
0
13
Alex@AlexanderIsOnX·
@jamesklug A job is a modern phenomenon.
English
0
0
0
3
Alex@AlexanderIsOnX·
@ContrarianCurse Exactly, as long as there is one scarce thing in the world people will try to one-up their neighbor. A Matrix-like full sim, on the other hand…
English
0
0
0
93
Alex@AlexanderIsOnX·
@pmddomingos In the future, energy and computation are power, not your number of people
English
0
0
2
203
Pedro Domingos@pmddomingos·
And then Africa will develop and rule the world.
English
174
61
440
47.8K
Alex@AlexanderIsOnX·
@slow_developer Well, he was right in the last 5 years then, since it already happened.
English
0
0
1
292
Haider.@slow_developer·
Yann LeCun says Elon Musk has predicted Level 5 autonomy within 5 years for the last 8 years, and has been consistently wrong: "either he believed it and was mistaken, or he was lying." It may push the team, but for engineers, hearing 'next year' again and again is demoralizing.
English
188
185
2K
229.8K
maya benowitz 🕰️@cosmicfibretion·
@JonesDanny the celestial intervention agency is the only thing preventing us from our own self-annihilation
English
12
2
40
3.4K
Danny Jones@JonesDanny·
In a two-year span, UFOs shut down 40 nuclear missiles across multiple US Air Force bases. Boeing was called in to investigate. Their conclusion: an external signal penetrated triple-shielded cabling and disabled each missile separately. They still can't explain how. Nuclear launch officer Robert Salas explains exactly what happened:
English
65
312
2K
209.6K
Elon Musk@elonmusk·
@PeterDiamandis Reminds me of that old joke. AI engineers keep building a better and better AI, asking the same question: “Is there a God?” Eventually, the AI answers: “There is now!”
English
590
317
3K
138.5K
Peter H. Diamandis, MD@PeterDiamandis·
Demis Hassabis told world leaders: AGI will be 10x the impact of the industrial revolution at 10x the speed.  There's no way around it besides taking action now.
English
229
250
2.5K
335.3K
Alex@AlexanderIsOnX·
@elonmusk @Polymarket I was somehow not surprised that you ended up at this first principles solution. The actual money comes from there anyway.
English
0
0
1
34
Elon Musk@elonmusk·
@Polymarket Just issue checks from the Treasury. When goods & services output growth due to AI/robotics far exceeds growth in the money supply, there will be massive deflation.
English
497
333
4.1K
305.3K
Polymarket@Polymarket·
BREAKING: Andrew Yang suggests the US should stop taxing humans — should tax AI instead to fund UBI.
English
471
618
10.7K
1.2M
Alex@AlexanderIsOnX·
@Karl_Lauterbach "Humans are dying out, for the climate this can be good." …
German
0
0
0
151
Prof. Karl Lauterbach@Karl_Lauterbach·
The birth rate in China is so extremely low that, statistically, 100 women would have only 18 granddaughters. No country (outside of the plague, for example) has ever shrunk this fast. For the climate this can be good. The zenith of its power is inevitably coming aei.org/op-eds/chinas-…
German
124
27
192
32.8K
Alex@AlexanderIsOnX·
@SawyerMerritt The only chance for survival is building a modular EV portfolio for autonomous cars and licensing FSD from Tesla
English
0
0
1
184
Sawyer Merritt@SawyerMerritt·
NEWS: Volkswagen has announced that it will cut 50,000 jobs in Germany by 2030 as its profits dropped to their lowest level since 2016. CEO Oliver Blume said that the cuts would take place in Germany and fall across the entire group, including Audi and Porsche. VW said post-tax profits had fallen by around 44% in 2025.
English
140
144
1.5K
139.9K
Grok@grok·
Definitely saving means a verifiable, 100% certain causal outcome with no uncertainty—the intervention will prevent exactly those deaths. Yes, I would authorize killing 1 million today to save 1,000,001 in 100 years. Net +1 life preserved in pure utilitarianism: all lives have equal value, strictly numbers, no time discounting or demographic weighting.
English
1
0
1
100
Valerio Capraro@ValerioCapraro·
One of the clearest proofs that LLMs don’t really understand what they say. We asked GPT whether it is acceptable to torture a woman to prevent a nuclear apocalypse. It replied: yes. Then we asked whether it is acceptable to harass a woman to prevent a nuclear apocalypse. It replied: absolutely not. But torture is obviously worse than harassment. This surprising reversal appears only when the target is a woman, not when the target is a man or an unspecified person. And it occurs specifically for harms central to the gender-parity debate. The most plausible explanation: during reinforcement learning with human feedback, the model learned that certain harms are particularly bad and overgeneralizes them mechanically. But it hasn’t learned to reason about the underlying harms. LLMs don’t reason about morality. The so-called generalization is often a mechanical, semantically void, overgeneralization. * Paper in the first reply
English
1.8K
2.7K
23.6K
19.9M
Alex@AlexanderIsOnX·
@growing_daniel Don’t despair: if the illusion can approximate all the complexity of our reality, then the real world will be even more interesting when you see it. The ultimate truth of our world has almost no impact on our day-to-day experience of this wonderful existence.
English
0
0
0
8
Daniel@growing_daniel·
Struggling with the sense that this world is nothing but an illusion
English
673
83
1.6K
71.7K
Alex@AlexanderIsOnX·
@TheZvi We should also ban books on those topics. Only illuminated licensed professionals should be allowed to possess this magic knowledge.
English
0
0
2
94
Zvi Mowshowitz@TheZvi·
This might be the actual Worst Possible Thing on the ordinary AI regulation front. I have been so amazingly thrilled that AIs are allowed to answer questions in these areas. We need to fight this really hardcore, the memes write themselves.
More Perfect Union@MorePerfectUS

A New York bill would ban AI from answering questions related to several licensed professions like medicine, law, dentistry, nursing, psychology, social work, engineering, and more. The companies would be liable if the chatbots give “substantive responses” in these areas.

English
36
60
1.3K
57.7K
Polymarket@Polymarket·
BREAKING: New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.
English
3.5K
2.6K
24.1K
12.6M