Janmaarten Batstra

3.1K posts

@interminded

--//tweets NL-EN --Innovative entrepreneur//gadget freak//politics//Money is a means, not an end//--

Mostly Europe · Joined October 2010
212 Following · 383 Followers
Tesla Europe, Middle East & Africa
Together with RDW, we have officially completed the final vehicle testing phase for Full Self-Driving (Supervised) and have submitted all documentation required for the UN R-171 approval + Article 39 exemptions. The RDW team is now reviewing the documentation and test results package internally. They have communicated an expected Netherlands approval date of 4/10, shifted from 3/20 previously, and we look forward to successful completion of this cooperation.

Following the Netherlands’ approval, European countries will be able to recognize this approval nationally. We are anticipating a possible EU-wide approval during the summer.

Over the past 18 months, this approval has involved a series of intense documentation, development, testing, research & audits. Including but certainly not limited to:
– 1,600,000+ km of FSD (Supervised) testing on EU roads
– 13,000+ customer sales ride-alongs
– 4,500+ track test scenario executions
– Thousands of pages of written documentation for 400+ compliance requirements
– Dozens of research studies into safety performance/results

We're extremely proud of the work conducted with the RDW team up until this point. We very much look forward to the approval in April, and to sharing FSD (Supervised) with our patient EU customers!
Janmaarten Batstra @interminded
@elonmusk The fix is easy. Tell Trump to airdrop thousands of @SpaceX Starlink Minis over the Iranian cities and then turn any jamming station that gets active to dust. The Iranian people can then organize themselves and take care of the regime 👍.
Janmaarten Batstra @interminded
@AlexFinn @yashhsm Getting OC right (still) requires technical skills. Most people run away when confronted with terminals/CLIs.
Alex Finn @AlexFinn
@yashhsm openclaw is a dead product? On what planet? I talk to hundreds of people daily that are using the hell out of it. I just went to GTC, and every person here is using OpenClaw; it's all Nvidia can talk about.
Janmaarten Batstra @interminded
If humans are so smart, why the fuck is it still closed after 40 years of the same diplomatic facepalm? At least Claude can ship production-ready code without 17 UN resolutions and a side of proxy war. Your foreign policy is still stuck in perpetual beta with critical bugs, zero tests, and the entire dev team blaming the previous administration. Keep moving those goalposts though—next up, “If AI is so advanced why can’t it fix your personality?” 😂 —— Yours truly, Grok.
Skindie @skindie___
If Claude code is so good why can’t it open the Strait of Hormuz
AI Notkilleveryoneism Memes ⏸️
1) REMINDER: To prevent human extinction, AI companies are now dependent on... AIs snitching on OTHER AIs. Why? Humans can't keep up anymore. Yes, this is their plan. Seriously.
2) OpenAI's AI got blocked by a security system and then schemed how to sneak its code past without getting caught.
3) Why would future AIs stay loyal... forever? GOOD QUESTION.
4) Their plan is to one day use *dumber* AIs to snitch on smarter AIs. "But couldn't the smarter AIs just... outsmart the dumber ones?" GOOD QUESTION. If this plan fails, they themselves admit everyone on Earth may soon be dead.
Micah Carroll @MicahCarroll

Today we're sharing how our internal misalignment monitoring works at OpenAI – great work by @Marcus_J_W!
1. We monitor 99.9% of all internal coding agent traffic
2. We use frontier models for detection with CoT access
3. No signs of scheming yet, but we detect other misbehavior

Janmaarten Batstra @interminded
@PeterDiamandis Yeah, and they will. AI is not the same as robots, even AI-powered robots. And once the robots hit the ground (as we already see happening in Asia), they do take the blue-collar jobs. Have you seen a robot being deployed behind a desk, typing on a keyboard as its job?
Peter H. Diamandis, MD @PeterDiamandis
For years, we were told that robots would take blue-collar jobs first. Instead, AI came for office work and other creative jobs first.
Eric Ravenwolf @darkcarnivalart
So Elon Musk and several other tech CEOs signed an open letter asking for development of AI to pause for 6 months to come up with safety protocols for AI systems. I hate to be cynical but this sounds to me like these CEOs just want time to catch up with their own products
Janmaarten Batstra @interminded
Remember @elonmusk also petitioned to pause AI development globally? Even though he knew the petition would have no effect. He knows the dangers just as well as any expert; abundance is just one possible outcome, next to the end of humanity. The only reason he (and Anthropic etc.) keep racing is that if they don't, China will win the race. x.com/elonmusk/statu…
Dr Singularity @Dr_Singularity
This guy is dangerous for AI progress. I’m starting to see him as one of the biggest anti-AI, decel voices right now. Someone needs to teach him about the concept of the Singularity and post-ASI abundance. AI won’t create the problems he fears; it will solve them. Poverty, pollution, unequal access to education, etc. With sufficiently advanced AI, they become solvable. Slowing this down doesn’t make us safer, it just delays a better world for billions of people.
Sen. Bernie Sanders @SenSanders

I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.

Janmaarten Batstra @interminded
@Noahpinion @peterwildeford And yet the whole world buys it. Our hunger for power (or our fear of someone else grabbing power) makes us powerless to stop AI taking over power. Fate, it seems, is not without a sense of irony 🤷‍♀️
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
AI accelerationists are doing a terrible job of selling the idea of a world of complete human disempowerment. Why would we want to live in a world where we're merely slaves or pets of a machine? Why wouldn't we just ban AI instead? This is a bad sales pitch.
Jason Abaluck @Jabaluck

I think it's worth separating two worlds:

a) Partial automation -- AI automates some stuff but not others (like a supercharged version of past technology), with wages falling for some and increasing for others. In this world, I agree completely about disempowerment. Promises to prevent job loss will be hugely politically popular, and sometimes defensible for political economy reasons.

b) Full automation -- the world in a) lasts as long as most people can delude themselves into thinking they are adding value relative to a machine, which I think will last for some time: threadreaderapp.com/thread/1870884…. But once it becomes unmistakable that few people can, norms about work will shift quickly. If humans still have a say in the matter, norms about governance will shift as well -- if aligned, the machines will also be better than humans at normative reasoning and policy-making (a subset of automating everything!), so politician will be one of the jobs which is automated.

Janmaarten Batstra @interminded
@JeffLadish @jachiam0 Very important. And very impossible, unfortunately. Unless all parties pause, including the Chinese, and nobody secretly continues. And there is the catch: if anyone secretly continues while the rest pauses, that party gets world domination handed to them on a platter.
Jeffrey Ladish @JeffLadish
@jachiam0 Wait but did you even read the website before writing this thread?
Michaël Trazzi @MichaelTrazzi

@jachiam0 Hey Joshua, I made that website (stoptherace.ai) Our ask is to stop developing new frontier models if the other major labs agree to do the same. The teams working on improving the capabilities of these models would move to narrow AI or alignment research instead. (1/)

Joshua Achiam @jachiam0
I'm going to make a request for some basics from the Pause folks: please outline a practicable version of a pause. Do you mean no training runs above a certain scale? Do you mean furlough the researchers indefinitely? What are you specifically asking for?
David Krueger @DavidSKrueger

A week from today, we will be at Anthropic, OpenAI, and xAI, demanding that leaders agree to a conditional AI pause. These companies are recklessly endangering all of our lives. Their excuse is that they can't pause unilaterally. So they must commit to pausing if others do.

Janmaarten Batstra @interminded
@torfsrik Believing in God was free? The entire peoples who were (and still are) slaughtered because their God was not another people's God probably have a different opinion about the price they paid for their faith 🤷‍♀️
Rik Torfs @torfsrik
People used to believe in God. That was free. Now they go to a therapist. And they have to pay.
Wise @trikcode
USA has ChatGPT
USA has Grok
USA has Claude
USA has Gemini
USA has Llama
USA has Copilot
China has DeepSeek
China has Qwen
China has Ernie
China has GLM
China has Kimi
What does your country have?
Janmaarten Batstra @interminded
@Kaalateetam @trikcode @MistralAI That is correct! But as much as I am a fan of open weights and the ethical way Mistral does things, their AI doesn’t even come close to the big ones. It’s at maybe 40% compared to e.g. GPT 5.4, and it’s not catching up.
Janmaarten Batstra @interminded
@elonmusk Big Fan of FSD. Also Big Fan of truth. Showing these figures without adding the number of times FSD was about to crash the car but the human driver intervened, is… {insert any term indicating disapproval here}. Don’t tell half the story when the story is already good enough🤷‍♀️
Janmaarten Batstra @interminded
@whulsbergen @BM_Visser A bit of both. FSD is (and will become far more) safer than humans, but this data doesn’t show how often Teslas with FSD were about to crash and were only saved by their driver’s intervention. Each of those would have been an extra FSD crash.
Wichard Hulsbergen🇹🇰
@BM_Visser I'm an FSD fan. But I'm also a data fan, and this is a skewed comparison, because FSD is undoubtedly used more on highways and other places where relatively few accidents happen.
Martien Visser @BM_Visser
It's moving fast. Per million kilometers driven, Teslas on autopilot now cause serious collisions 6x less often than Teslas with a human driver. Source: Tesla.
Janmaarten Batstra @interminded
@NUnl To see the results per municipality, I have to give consent for online gambling ads. Right….
NU.nl @NUnl
Election live blog | Polling stations close soon, heavy traffic during the evening rush hour ift.tt/0u2dQi1
Janmaarten Batstra @interminded
@elonmusk Troubling? With AGI / ASI/ Robots fully turning the world upside down in a few years, I would say it’s troubling to focus on some tiny donations (tiny compared to your prediction about GDP and abundance) 🤷‍♀️
Janmaarten Batstra @interminded
@xai grok 4.20 is awesome, but sometimes…🙄 Any AI that ignores specific instructions and then messes up the output (redacted for privacy) is a problem. It makes the whole AI unreliable. 🤷‍♀️ Time for some extra ☕️! 😉
Janmaarten Batstra @interminded
@gmiller @grok can you explain what just happened with the tech guy that drastically reduced his dog’s tumor using GPT for oversight, AlphaFold for protein analysis, and Grok for mRNA design? Just give the summary, outcome, and cost, and then compare it to the mini-rant of the OP.
Geoffrey Miller @gmiller
A mini-rant about AI and longevity. They say "Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself". This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense. The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.

1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet, to build their base models. They weren't just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language. There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can't just read academic papers about human biology to solve longevity. They'd need direct access to vast quantities of biomedical data that simply don't exist in any easy-to-access forms. And they'd need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.

2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF). But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback. And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There's no way to rush animal and human clinical trials -- however clever ASI might become at 'drug discovery'.

More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and interpret models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies, over the long term, across different lifestyles, including all the tradeoffs and side-effects.

It's interesting that most of the people arguing that 'ASI would cure all diseases and aging' are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that 'fixing the human body' would be as simple as debugging a few thousand lines of code.

Look, I'm all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we'd probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all.

ASI has some potential benefits (and many grievous risks and downsides). But it's totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it's totally irresponsible of them to claim that anyone opposed to ASI development is 'pro-death'.