Dr Sumaiya Shaikh 🇸🇪🇦🇺

14.1K posts

@Neurophysik

Neuroscientist PhD | AI Ethics, Countering Violent Extremism, Exit work #CVE | YouTube neurophysik10 | Back on twitter after a hiatus. Mum to a human & a dog

Swedish-Australian · Joined May 2009
1.2K Following · 57.7K Followers
Pinned Tweet
Dr Sumaiya Shaikh 🇸🇪🇦🇺
🧵 THREAD: Top Science Breakthroughs of 2025 - From Nature, Science, NEJM & other top journals. What actually moved the frontier this year, with DOIs and their significance (1/14) #scienceexplained
Dr Sumaiya Shaikh 🇸🇪🇦🇺
Cashed out some crypto profits and picked up a new car. Ran a trade with my coach today and cleared over $40K in about three hours. Big thanks to @jenniferr880, hard work paying off.
Dr Sumaiya Shaikh 🇸🇪🇦🇺
Thank You God🙏🏾. Making big moves every day, and this is just the beginning.🎉 Finally got my dream car! 🚘 A big thank you to @jenniferr880; with the help of crypto trading, what an incredible coach! She took my investments from the slow lane to the fast track! Contact her @jenniferr880
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Mark Gadala-Maria
Mark Gadala-Maria@markgadala·
This story is actually insane:
• dude drops $2000 on a DJI robot vacuum like a lunatic
• refuses to use the normal app like a peasant
• Sammy Azdoufal fires up Claude to crack the API so he can drive it with an xbox controller
• Claude delivers the goods
• pulls an auth token from their servers, connects successfully
• except the system thinks he controls 7000 vacuums
• checks again
• yep, seven thousand
• DJI built authentication with zero device ownership verification
• any valid token works for any unit on the planet
• Sammy now has eyes inside homes across 24 countries
• live vacuum camera feeds everywhere
• full floor plans from the mapping data
• some guy in germany eating cereal at 3am, unaware his roomba is snitching
• one API call away from being the most informed burglar in history
• all he wanted was to steer his vacuum with a joystick
• does the right thing and reports it
• DJI fixes it in two days
• back to normal life with his stupidly expensive floor cleaner
• IoT companies stay undefeated at shipping garbage security
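The flaw described in this thread is a classic broken object-level authorization bug: the server checked that a token was genuine but never checked that the token's owner was bound to the specific device being commanded. A minimal sketch of the missing check follows; all names and data shapes here are invented for illustration, not DJI's actual code.

```python
def authorize(token: str, device_id: str, token_db: dict, ownership_db: dict) -> bool:
    """Allow a command only if the token is valid AND bound to this device."""
    user = token_db.get(token)  # step 1: is this a valid token at all?
    if user is None:
        return False
    # Step 2: the check the thread says was missing -- does this user
    # actually own the device they are trying to control?
    return device_id in ownership_db.get(user, set())
```

With only step 1, any valid token could drive any vacuum on the platform; step 2 scopes each token to the devices its owner has registered.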
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
nature
nature@Nature·
An artificial-intelligence system uses clinical data, genetic information and literature searches to suggest diagnoses and provides the underlying reasoning go.nature.com/4cCjDKq
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Bo Wang
Bo Wang@BoWang87·
Welcome to the Lab of the Future! 🧬🤖 Excited to share LUMI-lab, out today in @CellCellPress — a self-driving platform that pairs an AI foundation model with a robotic lab to autonomously discover ionizable lipids for lipid nanoparticle (LNP) mRNA delivery.

The core problem: Designing lipid nanoparticles (LNPs) is hard. The chemical space of ionizable lipids is vast, experimental cycles are slow, and — critically — historical LNP datasets are far too small to train a predictive model from scratch. Most AI approaches in this space hit a wall immediately: not enough data to learn from.

Our solution: lab-in-the-loop foundation model learning. Instead of training on LNP data alone, LUMI starts as a transformer-based foundation model pretrained across broad chemical space, building rich molecular representations before it ever sees a single LNP experiment. Then it enters a closed loop with a robotic synthesis platform: predict → synthesize → assay → update. Each round of real wet-lab experiments fine-tunes the model, which then proposes smarter candidates for the next round. The lab isn't just validating AI predictions — it's actively teaching the model, continuously.

What happened when we let it run: LUMI-lab autonomously synthesized and screened 1,700+ ionizable lipids in human bronchial epithelial cells. The top candidate — LUMI-6 — features a brominated lipid tail, a structural motif that had been largely overlooked in LNP design. LUMI found it without being told where to look. When formulated into LNPs and delivered intratracheally to mice, LUMI-6 achieved 20.3% gene editing efficiency in lung epithelial cells — a compelling result for one of the hardest-to-reach therapeutic targets, directly relevant to diseases like cystic fibrosis and alpha-1 antitrypsin deficiency.

Why this matters beyond LNPs: This is a proof of concept for a broader thesis — that foundation model pretraining + active learning + robotic experimentation can overcome the data scarcity bottleneck that plagues AI-driven discovery in biology. You don't need a massive domain-specific dataset to start. You need a model that can generalize, a lab that can generate the right data, and a loop that connects them.

Huge congratulations to first authors Yue Xu, @HAOTIANCUI1, and Kuan Pang, and to the entire @BowenLi_Lab team. Grateful to our collaborators at @UHN and @UofTPharmacy, and to Princess Margaret Cancer Centre Research @PMResearch_UHN. 📄 Paper: cell.com/cell/fulltext/…
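The predict → synthesize → assay → update cycle described in the thread is a form of active learning, and its skeleton fits in a few lines. Everything below (ToyModel, run_assay, the ranking heuristic) is invented for illustration; the real LUMI-lab system couples a pretrained chemical foundation model to robotic hardware.

```python
class ToyModel:
    """Stand-in surrogate model: remembers the assay results it has seen."""
    def __init__(self):
        self.known = {}

    def predict(self, candidate):
        # Unseen candidates get a neutral score of 0.0.
        return self.known.get(candidate, 0.0)

    def update(self, results):
        self.known.update(results)


def lab_in_the_loop(model, candidates, run_assay, rounds=3, batch=4):
    """Sketch of a predict -> synthesize -> assay -> update discovery loop."""
    results = {}
    for _ in range(rounds):
        # predict: rank untested candidates by the model's current guess
        untested = [c for c in candidates if c not in results]
        picks = sorted(untested, key=model.predict, reverse=True)[:batch]
        # synthesize + assay: the (robotic) lab measures real outcomes
        for c in picks:
            results[c] = run_assay(c)
        # update: fresh measurements fine-tune the model for the next round
        model.update(results)
    # report the best candidate measured so far
    return max(results, key=results.get)
```

Each round, the batch the lab actually measures becomes training signal for the next round's ranking, which is what lets such a system start from almost no domain-specific data.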
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
L. David Fairchild
L. David Fairchild@David_Fairchild·
He's not just defending AI energy use. He is smuggling in a whole anthropology where humans are basically inefficient meat computers that you have to pour food and years into before they become useful.

And once you accept that, the next move is obvious. If people are just costly biological training runs, then burning mountains of electricity to build synthetic intelligence starts to feel not only equal but superior, even if it negatively impacts actual humans. That is the dystopia. It makes human development sound like a bug in the system, and it makes sacrificing human and creational flourishing for more computational power sound logical.

To him, the grid gets strained, prices go up, ecosystems get hit, but hey, humans eat too, so what's the difference? The difference is that humans aren't an inefficient line item. They're the point. If your worldview can look at a child growing into an adult and describe it as energy spent to train intelligence, you haven't said something profound. You've revealed a horrifically rotten worldview.
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Mushtaq Bilal, PhD
Mushtaq Bilal, PhD@MushtaqBilalPhD·
This is a very interesting finding. Immigrants (and their descendants) in Denmark usually have a higher crime rate than locals. But this new study by the Center for Political Studies finds that foreign workers have a lower crime rate than Danes. And most "criminal offences committed by foreigners are of a less serious nature" and "a larger proportion of the sentences are purely fines." Denmark is attracting highly qualified, law-abiding international workers. The only problem: Denmark is simply unable to retain them. Last year, a record number of international workers left Denmark.
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Andrew Côté
Andrew Côté@Andercot·
A realistic picture of how closely packed molecules are inside a synapse of a nerve cell.
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
nature
nature@Nature·
Newly released files from the investigation of convicted sex offender Jeffrey Epstein reveal that his ties to the scientific community were deeper than previously known go.nature.com/3MTbl6t
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Weidi Xie
Weidi Xie@WeidiXie·
Glad to share DeepRare, published in @Nature! nature.com/articles/s4158… Super grateful for the News & Views piece from Prof. Timo Lassmann; it really gives the best view of DeepRare. nature.com/articles/d4158… This is the first agentic system of its kind, designed to solve the complex puzzle of rare disease diagnosis.
✅ Outperforms current best methods by ~24%
✅ 95.4% expert agreement on reasoning chains
✅ Handles clinical notes, HPO terms & genetic data
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
All day Astronomy
All day Astronomy@forallcurious·
🚨: Scientists mapped 1 mm³ of a human brain ─ less than a grain of rice ─ and a microscopic cosmos appeared.
Dr Sumaiya Shaikh 🇸🇪🇦🇺
The only reason for internet this week is to watch every video of Punch - the baby monkey with his orangutan plushy 🐒❤️🦧
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Big Brain AI
Big Brain AI@realBigBrainAI·
Pioneer of causal AI, Judea Pearl, argues that no amount of scaling will get LLMs to AGI. He believes current large language models face fundamental mathematical limitations that can't be solved by making them bigger. "There are certain limitations, mathematical limitation that are not crossable by scaling up."

His core argument: LLMs don't learn how the world works. They learn from *human interpretations* of how the world works. "What LLM's doing right now is they summarize world models authored by people like you and me available on the web and they do some sort of mysterious summary of it, rather than discovering those world models directly from the data."

He illustrates this with healthcare data. When hospitals collect data on treatment effects, that raw data never reaches the LLMs. Instead, the models consume doctors' written interpretations: analyses shaped by people who already have a mental model of how disease and treatment work. In other words, LLMs are learning from the map, not the territory.

The missing piece, according to Pearl, is causal reasoning — the ability to understand not just *what* happens, but *why*. And he's clear this isn't a gap that more parameters or training data will close.

It raises an uncomfortable question... If AGI requires machines that build their own world models from raw data rather than summarising ours, are we even on the right road?
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Keith Siau
Keith Siau@drkeithsiau·
A doctor using AI is very different from a patient using AI.
Dr Sumaiya Shaikh 🇸🇪🇦🇺 retweeted
Jay Van Bavel, PhD
Jay Van Bavel, PhD@jayvanbavel·
People often get public opinion wrong, assuming their own views are unpopular when in fact many others share them -- a phenomenon known as "pluralistic ignorance". This new paper offers practical solutions for pluralistic ignorance:
- in loose cultures, sharing accurate information works best
- in tight ones, lowering the costs of speaking up can spark social change.
pnas.org/doi/10.1073/pn…
Dr Sumaiya Shaikh 🇸🇪🇦🇺
It is not an unpopular opinion. As neuroscientists, we are studying the ways AI is harming cognitive integrity, and to what extent. This is reality. I say take some time to be dumb with some pen and paper. Normalise going to the bathroom without your phone, and normalise reading without a smartphone.
Wise
Wise@trikcode·
Unpopular opinion. Excessive use of AI will make you dumb. Very dumb.
Dr Sumaiya Shaikh 🇸🇪🇦🇺
Trusting an AI tech founder to theorise what a PhD can entail? Yes, AI models can make novel contributions. But a PhD is not just about novel contributions; it also entails developing the skills to do lab work/clinical work, critical thinking, analysis, writing, communication and mentoring, without so much as an efficient "detailed prompt". AI can teach you to write prompts to simplify the said PhD, but not to acquire one.
Kevin Cannon@kmcannon

There are PhDs being handed out each day to people living in the past: the students, their advisors, their universities. Dissertations that took 5 years of work, and which 4.6 Opus could reproduce then improve on in an afternoon.
