Bart de Witte

14.3K posts


@OpenMedFuture

🇧🇪 lived in 🇨🇭🇦🇹🇸🇰 now 🇩🇪 Med AI Democratisation 25+ yrs Digital Health leadership, father ex-IBM ex-SAP @isaree_ai https://t.co/TsBCgl9Ldm eu/acc

Berlin & Antwerp · Joined November 2007
2.3K Following · 7.4K Followers
Pinned Tweet
Bart de Witte@OpenMedFuture·
AI won't displace doctors, but doctors who don't advocate for open-source innovation risk being displaced by those who monopolize medical knowledge.
Bart de Witte@OpenMedFuture·
Just as the efficiency-driven Jevons paradox eventually gives way to the prestige of Veblen goods, AI is transitioning from a commodity explosion into an era of luxury economics. In the physical market, raw leather remains a cheap global commodity, yet Hermès untethers its handbags from material reality by commanding up to $50,000 for a product with an estimated production cost of just $1,400. This demonstrates a classic Veblen paradox, where consumer demand scales alongside escalating prices rather than collapsing under standard economic rules, because buyers are purchasing an elite social signal manufactured through artificial scarcity. In exactly the same way, AI is rapidly turning raw human knowledge into a cheap, mass-market commodity, meaning future economic value will shift away from raw information and toward highly curated human execution, premium experiences, and unique personal branding. As elite labs now hint at highly expensive, next-generation reasoning models, we are entering a phase where top-tier intelligence is deliberately priced at a massive premium to signal superior capability, turning compute into the ultimate luxury asset.
Bart de Witte@OpenMedFuture·
Enron & WorldCom were straight-up accounting frauds, crimes, not 'open markets.' Markets bankrupted them; the law prosecuted them. 2008 GFC? Greenspan’s own Fed easy money + government housing mandates (Fannie/Freddie, CRA) inflated the subprime bubble. Bailouts and moral hazard made it worse, not unfettered competition. 1937-1980 'regulated era' ended in stagflation & sclerosis. Post-deregulation = massive tech boom. EU regs (GDPR + DMA) impose crushing compliance costs that entrench Big Tech monopolies by killing startups. Berlin (where I live) rent controls slashed supply, yet rents still surged ~70% in a decade. Heavy regulation protects cronies and reduces supply. Open competition drives AI progress. Wake up!
Matt@808_38hz·
Same old bullshit here. This fever dream of unfettered markets left Greenspan lying in the bathtub sucking his thumb for six months when he realised it was pretty much wrong. “Open and competitive” ideology drove Enron, WorldCom, subprime mortgage financial engineering and derivatives-gone-wild into the GFC, almost destroying the world. Meanwhile, if you look at the more regulated post-Depression period from ‘37 to ‘80, we had fewer of these problems, and far less ultra-deviant wealth creation.
Jon Hernandez@JonhernandezIA·
📁 Tristan Harris, co-founder of the Center for Humane Technology, says the AI race is no longer about augmenting human work but replacing it at planetary scale. Once GDP depends more on data centers than people, governments stop investing in humans because humans no longer drive growth. That is how an intelligence economy quietly becomes anti-human without anyone explicitly choosing it.
Bart de Witte@OpenMedFuture·
Is it just me, or is @ManusAI's current price-gouging going exponential too? Time to switch to Hermes Agent before it is too late?
Bart de Witte@OpenMedFuture·
There is nothing aggressive in what I have written. I am simply pointing out to readers here that you are spreading misinformation, and are protected while doing so. As you advised us entrepreneurs: it is time to stop whining when you receive legitimate criticism. Recent economic analysis shows that maintaining the full dual regulatory burden (MDR/IVDR + standalone AI Act Chapter III) creates a structural subsidy for US Big Tech while imposing crushing costs on European medical AI builders. A French hospital that simply rents GPT-4 via Microsoft Azure for clinical decision support is classified as a deployer: first-year compliance of €50K–€100K, no notified body, no full conformity assessment. A European startup building the same clinical capability from the ground up is a provider. It faces MDR plus the full AI Act Chapter III: 12–18 months of delay and €180K–€450K in costs before first revenue. Same clinical output; three times the cost, and often more, for the EU-native builder. DIGITALEUROPE estimates the EU-wide annual compliance bill at €3.3 billion. For a typical 50-person medical AI company this means €320K–€600K upfront plus €150K every year thereafter. I’ve been in the industry for over 25 years, and it’s sad to see European politicians trying to ruin our startups and SMEs in order to establish monopolies for Big Tech companies instead. No wonder this is coming from the Green Party of all places.
Sergey Lagodinsky@SLagodinsky·
@OpenMedFuture Dear Bart de Witte, I don't think I offended or provoked anyone, so there is no reason to be so aggressive. My information is based on analysis I trust. So I hope the discourse can remain informed, civilized and respectful: table.media/assets/documen…
Bart de Witte@OpenMedFuture·
@SLagodinsky, a DSA/AI Act rapporteur, enjoys parliamentary immunity. His absolute claim that “no sectoral legislation has ever considered AI” is simply pure disinformation. Sectoral rules have explicitly regulated AI and machine learning components for years, especially in the highest-risk area (healthcare), well before the AI Act was even proposed in April 2021: the Medical Device Regulation 2017/745 (adopted 5 April 2017) and the In Vitro Diagnostic Regulation (IVDR) 2017/746 both regulate software, including AI/ML models, via Annex VIII Rule 11. Diagnostic or therapeutic AI is classified as Class IIa or higher depending on the risk it poses to patients. This is not a vague “software” rule; it was deliberately written to capture modern ML systems. @POLITICOEurope
MatrixMysteries@MatrixMysteries·
“I don’t want AI helping doctors. I want it running global healthcare.” “We’re linking medical records, biometric IDs, payment systems — one system, one stream of patient data.” Your body turned into data, inside a system you don’t control.
Tijl De Bie@TijlDeBie·
If Europe was serious about protecting European citizens' privacy from big tech and their controllers (i.e. 🇺🇸 and allies), the absolute minimum they would have done is make such practices illegal. @HennaVirkkunen @vonderleyen
International Cyber Digest@IntCyberDigest

‼️🚨 ALARMING: Google now treats privacy as suspicious behavior by default. Users of GrapheneOS, CalyxOS, /e/OS, and other deGoogled Android phones are being locked out of millions of websites unless they install the exact Google Play Services software they deliberately removed. GrapheneOS is recommended by the EFF and used by journalists, lawyers, and activists in high-risk environments. The audience most likely to read Google's data practices and refuse its terms is now flagged as fraudulent for that exact decision.

What happened?
▪️ Google announced "Cloud Fraud Defense" at Cloud Next on April 22-23, 2026, branding it "the next evolution of reCAPTCHA." Existing reCAPTCHA customers were auto-migrated.
▪️ When the system flags traffic as suspicious, the old click-the-bus puzzle is gone. Users get a QR code instead.
▪️ Scanning the QR code requires Google Play Services running on the device. Internet Archive snapshots show this requirement has been live since at least October 2025, silently rolled out for 7 months before anyone noticed.
▪️ No Play Services = no QR scan = locked out.

The bigger picture:
▪️ Google already tried this in 2023. It was called Web Environment Integrity (WEI), and it would have let Google decide which devices were "real enough" to access the web. Standards bodies and the public pushed back hard, and Google killed it. Three years later, the same idea is back, just hidden behind a QR code instead of a browser feature.
▪️ reCAPTCHA runs on millions of websites. Every developer who keeps using it is now, by default, telling deGoogled Android users they're not welcome...

Bart de Witte@OpenMedFuture·
You don't need the AI "expert" Luiza for that. The first widely cited study on this appeared in 2021 and examined data from the period after AlphaGo's victory over Lee Sedol in 2016; it showed that Go players who trained against AI improved the quality of their moves by about 25 percent on average, even when playing without AI.
Florian Gallwitz@FlorianGallwitz·
There is certainly something to that.
Luiza Jarovsky, PhD@LuizaJarovsky

🚨 University professors have been saying AI is completely destroying learning and that we'll soon have an AI-powered, semi-illiterate workforce. Here's a glimpse into the educational apocalypse:

"Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (...) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

"By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

AI in education is a serious topic, and many schools and universities are blindly jumping into the "AI-first" wave without considering short- and long-term consequences. It would be great to hear more from teachers and educators to understand potential solutions. This might be a great opportunity for rethinking the education system and how students are assessed.

👉 Link to the full article below.
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 94,700+ subscribers (link below).

Bart de Witte@OpenMedFuture·
Cognitive sovereignty is the ability to maintain autonomy over your own mind and thoughts. It’s the right to mental autonomy. Which system variant is the greatest threat to this sovereignty?
Bart de Witte@OpenMedFuture·
@POLITICOEurope @SLagodinsky We’re aware of the factors that may make us less attractive to investors. This, however, comes across as an attempt to limit or discourage criticism.
POLITICOEurope@POLITICOEurope·
“Whining is not sexy. It makes us unattractive to investors.” German MEP @SLagodinsky urged tech companies to quit “whining” about the EU’s rules. #POLITICOAITech
Bart de Witte@OpenMedFuture·
@MichaelAlbertMD Has the h-index evolved into a social scoring tool that influences behavior? I know plenty who share your sentiments but keep silent, and thus compliant.
Michael Albert, MD@MichaelAlbertMD·
You pay to publish YOUR research. You pay extra to make YOUR research “open access” so the public can read it. Then publishers sell YOUR research—and often your peer review labor—to AI/LLM vendors for massive licensing deals. Academia…what are we doing here? Researchers generate the ideas, conduct the studies, write the manuscripts, review the papers, and often even fund the work through grants or taxpayer dollars. Yet the value extraction happens elsewhere. At some point, academics need to stop treating this system as immutable. Researchers should be charging the journals. We should stop participating in the racket.
Perplexity@perplexity_ai

Perplexity and Computer now connect to premium health sources, starting with NEJM and BMJ Group, with 9 more medical journals and clinical databases on the way. Ask health questions and get answers cited from the same sources relied on by hospitals and research institutions.

Bart de Witte@OpenMedFuture·
@andrewarruda yes - patient to agent to agent to physician. 🙃 Physicians are becoming builders of agents too.
Bart de Witte@OpenMedFuture·
What Germany still has to learn: for robotics, AI, and energy to truly converge and lead "Made in Germany" back to the top of the world, a cultural transformation is needed. When I see bottom-up open-source systems with genuine Made-in-Germany precision, then the time will finally have come.
Christian Miele@christianmiele·
The convergence of robotics, AI, and energy is a lottery win for Germany's industry. Our unique competence in fusing high-precision hardware with modern software is the key to the next wave of innovation. Robust, reliable, innovative. That is how we lead "Made in Germany" back to the top of the world.
Bart de Witte@OpenMedFuture·
Exactly. In biology, researchers use longitudinal data from breast cancer patients to distill real clinical outcomes into new models that replicate patented predictor tests like Oncotype DX for chemo effectiveness. We celebrate this as scientific progress: it advances medicine, cuts costs, and helps more patients.
Marc Andreessen 🇺🇸
Overheard in Silicon Valley: “There is a fine line between ‘distilling’ and ‘using’… no, actually there isn’t, they’re the same thing.”
Bart de Witte@OpenMedFuture·
When someone needs to “vibe code” complex molecular assemblies with AI, the likelihood of them killing themselves or simply not succeeding is basically 100%. Congrats, you just got a gorgeous protocol from the model… that’ll still leave you Darwin-Awarding yourself in a BSL-2 garage before you finish the first ligation. Nature’s safety training data remains undefeated. 😂
Jake Wintermute 🧬/acc@SynBio1

It’s hard to collect data about how bioterrorists might try to use AI. Few people want to create a bioweapon, and those who might aren’t talking. On the other hand, it's easy to predict how the news will cover bioterrorism and how social media responds. We have years of clickbait headlines and viral scareposts to train on. This makes it much simpler to build a biosecurity policy around avoiding bad headlines, rather than installing safeguards that would actually stop bad actors.

I have a PhD in Synthetic Biology. I know roughly what it would take to make a bioweapon. It would be enormously difficult and dangerous. Most of the work is in the physical world, where AI tools would be only marginally useful. None of the relevant uses of AI look anything like the examples cited in the NY Times story below.
- Printing 8,000 word protocols for methods already in the public domain
- Making a list of common cattle diseases
- Generating a shopping list of test tubes and media
- Describing how to use a weather balloon

The actual biosecurity questions that need answers are technical and too boring to cover in a major media outlet.
- How can we tell the difference between a dangerous DNA sequence and a harmless one?
- What separates a python script used to discover a therapeutic from one used to discover a toxin?
- Which practical R&D bottlenecks are being rapidly opened by AI and which are not?

Much of the work of biology happens in the real world and doesn’t involve AI much at all. A serious biosecurity policy needs to focus on how bad actors might access physical hardware, specialized facilities and trained personnel. These are infinitely more important barriers than what Claude might tell someone about weather balloons.

My point here is that the people telling you to be afraid, and the media outlets who cover them, are putting us all in danger. The big AI shops are going to lock down their models, not to stop bad actors, but to stop bad press. Training models to stop using scary words is easy; the real work of biosecurity is hard. If we don’t push back, we’re going to end up with an industry dedicated to performative biosecurity theater. nytimes.com/2026/04/29/us/…
