Houcemeddine Turki

17.5K posts

@Csisc1994

Medical Student (@Univ_Sfax, ASc 18') CS Student (@UoPeople, ASc 25', BSc 26') Former Board Member (@WikimediaTN, @WikiLibrary, @WikimediaAffCom, @DES_Unit)

Tunisia · Joined February 2016
1.2K Following · 1.1K Followers
Houcemeddine Turki reposted
African Maps@MapsAfrican·
Best Education Systems in Africa 📚🏫
1. 🇸🇨 Seychelles
2. 🇹🇳 Tunisia
3. 🇲🇺 Mauritius
4. 🇿🇦 South Africa
5. 🇩🇿 Algeria
…
[image]
65 replies · 148 reposts · 1.4K likes · 223.3K views
Houcemeddine Turki reposted
Fabien@Fabien_Mikol·
The neural-network specialist and CNRS research director Bernard Victorri discusses the risk that increasingly autonomous agentic models slip out of our control. Far from the sneering of some, he takes the issue seriously and defends the work of @Yoshua_Bengio 👍
4 replies · 46 reposts · 148 likes · 13.9K views
Houcemeddine Turki reposted
Christian Amouo@Chris_amouo·
I ran a few pages of my master's thesis, defended in 2014, through an AI detector, and the result made me laugh: supposedly 97% of my text was AI-generated, yet there was no AI in 2014.
[images]
83 replies · 527 reposts · 7.6K likes · 375.6K views
Houcemeddine Turki reposted
Dailymeow@Dailymeoww1·
In Istanbul, a stray cat was captured on camera using a trash bin as a toilet 🐱🗑️. A tourist who witnessed the moment couldn't hide their surprise 😲, while locals once again emphasized how clean and intelligent Istanbul's cats are. Truly an environmentally conscious cat 😂🌿
53 replies · 483 reposts · 6.1K likes · 180K views
Houcemeddine Turki reposted
Anish Moonka@AnishA_Moonka·
Terence Tao is arguably the greatest mathematician alive. He just sat down with Dwarkesh for ~84 minutes on AI, math, and what actually counts as scientific progress. Here is the clearest thinking I have heard on what AI can and cannot do for science. Our notes:

1. AI has made generating ideas almost free. The hard part is now checking which ones are real. The internet made it nearly free to send a message to anyone. AI has done the same for coming up with possible explanations for scientific problems. You can now produce thousands of theories in minutes. But figuring out which ones are actually correct, and which are garbage? That part has not gotten any faster. Every company and research lab should be thinking about this gap. We can generate endlessly. We cannot verify at the same speed.

2. Kepler spent 20 years trying random theories before he got it right. Johannes Kepler (the astronomer who figured out how planets orbit the Sun) started with a beautiful but completely wrong theory involving 3D geometric shapes nested between the planets. He kept guessing for two decades. The book where he finally published his correct law is mostly notes about astrology, about how Earth's musical notes cause famine. @Dwarkesh_sp puts it perfectly: the random idea generator is only useful if there is a reliable dataset to test against. Without the astronomical observations that another scientist (Tycho Brahe) had painstakingly collected, Kepler would never have found the right answer.

3. Patterns from small samples can be complete flukes. Kepler's law about orbital timing was based on just six data points, one per known planet. A later astronomer, named Bode, found a similar pattern and predicted a missing planet. Two new discoveries matched. People got excited. Then Neptune was discovered, and the pattern completely broke. It was a numerical coincidence from too few examples. I think about this every time someone shows a "law" based on a handful of cherry-picked data points. Kepler got lucky. Bode did not.

4. The correct theory often looks worse than the wrong one at first. When Copernicus proposed that the Earth goes around the Sun, his model was actually less accurate than the old (wrong) Earth-centered model. The old model had a thousand years of tweaks, making it precise. Copernicus was simpler but rougher. Newton's theory of gravity left mysteries that Einstein, centuries later, resolved. Any AI system that scores ideas purely on "how accurate is this right now" would have dismissed most of history's biggest breakthroughs. That should make everyone pause before building benchmarks that only measure today's accuracy.

5. AI in math can jump, but it cannot climb. Tao's analogy: imagine a mountain range of walls, all different heights, all in the dark. Humans slowly feel their way up, finding handholds and mapping routes. AI is a machine that can jump straight up two meters. Sometimes it clears a short wall. Sometimes it jumps in the wrong direction and crashes. But it cannot grab a ledge, pull itself up, and jump again from a higher position. That inability to build on partial progress is the gap. Anyone who has worked on a hard problem where each small step makes the next one possible will recognize what is missing here.

6. AI cleared the easy math problems fast, then hit a wall. There is a famous list of about 1,100 unsolved math challenges (called Erdős problems, named after a legendary mathematician who collected them). AI solved about 50 of them in a burst. Almost all were problems nobody had seriously tried before. Then progress stalled. Three separate teams threw the best AI models at every remaining problem and got almost nothing new. I keep seeing this same pattern across industries. The wins get posted on social media. The systematic failure rates stay quiet. If you only follow the highlights, your picture of AI progress is way off.

7. There is a difference between cleverness and intelligence. When two mathematicians solve a problem together, they try something, it almost works, they adjust, try again, and each failed attempt teaches them something that shapes the next one. AI mostly just guesses, fails, guesses again, fails again. It does not learn from each failure to make the next attempt smarter. Tao calls what AI does right now "artificial cleverness." That is the most precise two-word description of these systems I have seen anyone use.

8. Tao's papers are richer now, but the core work is unchanged. His research papers now include more charts, code, and numerical examples because AI makes those easy. Recreating his current papers without AI would take 5x longer. But the hardest part of the job, actually solving the mathematical puzzle, still happens with pen and paper. AI handles the side tasks. This is such an honest assessment: the 5x number is real, but it measures the extras rather than the actual breakthrough. I think most knowledge workers are quietly discovering the same thing about their own jobs right now.

9. We know how to formalize proof, but not intuition. We now have computer systems (like the programming language Lean) that can check whether a mathematical proof is logically valid, and AI has gotten good at using them. But there is no equivalent system for the softer questions: "Is this idea worth pursuing?" "Does this approach seem promising?" That kind of scientific intuition still requires human judgment and years of experience. If someone builds a way to formalize that kind of reasoning, it will be one of the most important tools of the decade. Formalizing scientific taste sounds impossible, but formalizing deductive logic also sounded impossible for 2,000 years before it happened.

10. Darwin succeeded partly because he wrote well. Darwin wrote in plain English and wove scattered evidence into a story people could follow. Newton wrote in Latin, invented new math to explain his ideas, and kept his best insights secret out of rivalry. It took decades for other scientists to translate Newton into terms regular people could understand. How persuasive an explanation is turns out to matter hugely in science, and that is exactly the kind of thing that is very hard to teach an AI to optimize for. Maybe it should stay that way.

11. Tao thinks you should stop over-optimizing your time. He once spent a year at a research institute with zero distractions. After a few months, he ran out of ideas. He finds that the events he reluctantly attends outside his comfort zone often produce his best unexpected encounters. A certain amount of randomness and distraction is necessary for creative work. I built an app through vibe coding precisely because we stumbled into it by accident. The most interesting things in any career tend to come from the unplanned detours. The greatest mathematician alive is telling you to stop maximizing your schedule.

12. If a certain math conjecture turns out to be wrong, our encryption could break. There is an unproven mathematical conjecture called the Riemann hypothesis about how prime numbers (numbers divisible only by 1 and themselves, like 7, 11, 13) are distributed. Much of modern encryption relies on the assumption that prime numbers have no hidden patterns. Tao says if this conjecture turned out to be false, it would mean there is a secret pattern in the primes that nobody knows about. And if one hidden pattern exists, there are probably more that could be exploited to break encryption. That is the single scariest sentence about internet security I have ever heard a Fields Medalist (the highest honor in mathematics) say out loud.

Tao on careers: "We live in a particularly unpredictable era. Things we have taken for granted for centuries may not hold anymore." He points out that even high school students can now contribute to frontier math research using AI tools, something that used to require a PhD. The full podcast is worth listening to. Link in thread.
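The "orbital timing" law in point 3 is Kepler's third law: the square of a planet's orbital period is proportional to the cube of its orbital radius. A quick numerical sanity check using standard textbook values (not figures from the podcast):

```python
# Kepler's third law: T^2 / a^3 is roughly constant when T is in
# Earth years and a in astronomical units. Textbook values, rounded.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}
for name, (a, T) in planets.items():
    ratio = T**2 / a**3       # should be close to 1.000 for every planet
    print(f"{name}: {ratio:.3f}")
```

Six points lining up this cleanly is exactly why the pattern looked so compelling, and why Bode's similar-looking pattern was such a seductive trap.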
[image]
8 replies · 98 reposts · 446 likes · 43K views
Houcemeddine Turki reposted
Massimo@Rainmaker1973·
Happy 257th birthday to Joseph Fourier, whose transform is used in:
- quantum mechanics
- signal processing
- spectroscopy
- digital compression of images and data
- solving differential equations
- designing electrical circuits
- ...
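As a tiny illustration of the signal-processing entry on that list: the discrete Fourier transform decomposes a sampled signal into its frequency components. A minimal NumPy sketch that recovers the frequency of a pure sine wave:

```python
import numpy as np

# Sample one second of a 50 Hz sine wave at 1000 Hz,
# then locate the dominant frequency via the FFT.
fs = 1000                       # sampling rate, Hz
t = np.arange(fs) / fs          # 1 second of sample times
f0 = 50                         # true frequency, Hz
signal = np.sin(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency of each bin
peak = freqs[np.argmax(spectrum)]
print(peak)  # 50.0
```

The same decomposition underlies JPEG/MP3-style compression (keep the strong components, discard the weak ones) and spectroscopy (read chemical composition off the frequency peaks).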
[image]
15 replies · 208 reposts · 900 likes · 42.4K views
Houcemeddine Turki reposted
Beauty of music and nature 🌺🌺
If you can build that kind of bond with a lynx, then you truly have a guardian cat that is far stronger than any guard dog out there. 😎👑
140 replies · 705 reposts · 7.4K likes · 248.6K views
Houcemeddine Turki reposted
Jon Hernandez@JonhernandezIA·
📁 Mustafa Suleyman, CEO of Microsoft AI and DeepMind cofounder, says we are entering a new phase. Computers no longer require you to learn their language; they are learning yours. They see what you see, hear how you speak, and understand context in real time. At that point it stops being a tool and starts to feel like something more human.
29 replies · 29 reposts · 127 likes · 9.9K views
Houcemeddine Turki reposted
No Cats No Life@NoCatsNoLife_m·
Can you guess where this cat is from? 🙃
[image]
52 replies · 83 reposts · 2.1K likes · 27.9K views
Houcemeddine Turki reposted
Frank Nielsen@FrnkNlsn·
"Adaptive k-nearest neighbor classifier based on the local estimation of the shape operator" arxiv.org/abs/2409.05084
[image]
1 reply · 37 reposts · 297 likes · 11K views
Houcemeddine Turki reposted
Robert Youssef@rryssf_·
🚨 BREAKING: Meta researchers showed a model 2 million hours of video. No labels. No physics textbook. No supervision at all. Then they showed it a clip where an object disappears behind a wall and never comes back. The model flagged it as wrong. 🤯

It had learned object permanence. Shape consistency. Collision dynamics. Entirely from watching. What's more surprising: even a model trained on just one week of unique video achieved above-chance performance on physics violation detection. That's not a fluke. That's a principle.

The key insight from the paper: this only works when the model predicts in a learned representation space, not in raw pixels. The model has to build an internal world model, compressed and abstract, and predict against that. Pixel-space prediction fails. Multimodal LLMs that reason through text fail. Only the architecture that builds abstract representations while predicting missing sensory input, something close to how neuroscientists describe predictive coding, actually acquires physics intuition.

Which means the core knowledge researchers assumed had to be hardwired may just be observation at scale. Babies learn object permanence by watching things. Turns out the same principle holds here.

Now here's the part nobody's talking about. If observation alone teaches a model the rules of the physical world, what happens when you apply the same principle to production systems? Production has physics too. Not gravity, but rules just as consistent: which deploys cause incidents at 3am, which config combinations interact dangerously, which code paths quietly degrade under load, which service changes cause failures two hops away. These patterns are embedded in thousands of trajectories: code pushes, metric shifts, customer tickets, incident timelines. Largely unobserved. Certainly unlabeled. Nobody writes a runbook that says "if service A deploys with flag X active and service B is above 70% CPU, latency on service C degrades 40% within 6 minutes." But that pattern exists. It's repeatable. And it's sitting in your observability data right now, invisible because no one has built a model to find it.

That's the gap @playerzeroai is trying to close. Not another test runner. Not another alert threshold. A production world model that learns which things break from accumulated observation, the same way Meta's model learned gravity. It doesn't check your test coverage. It predicts failure trajectories.

One week of video was enough to learn that solid objects don't pass through walls. The question is how much production observation your system needs before a model starts predicting where yours will break next. The Meta paper suggests the bar might be lower than anyone expects.
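The core mechanism described above, flag whatever the world model fails to predict, can be sketched in miniature. This toy is hypothetical and nothing like the actual architecture (which learns its representation space from video; here the "world model" is a hand-coded constant-velocity rule), but it shows how a violation surfaces purely as prediction error:

```python
import numpy as np

# Toy surprise detector: a "world model" that assumes constant velocity
# flags any frame whose prediction error exceeds a threshold.
def flag_violations(positions, threshold=0.5):
    flags = []
    for t in range(2, len(positions)):
        predicted = 2 * positions[t - 1] - positions[t - 2]  # constant velocity
        error = abs(positions[t] - predicted)                # surprise signal
        flags.append(bool(error > threshold))
    return flags

normal = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # smooth motion
teleport = np.array([0.0, 1.0, 2.0, 9.0, 10.0])   # object "teleports" at t=3
print(flag_violations(normal))    # [False, False, False]
print(flag_violations(teleport))  # [False, True, True]
```

Note that the jump triggers two flags: the teleport itself, and the next frame, where the model's state is still poisoned by the impossible observation. Nothing here was told what a teleport is; it falls out of prediction error alone, which is the principle the tweet describes at scale.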
[image]
77 replies · 174 reposts · 1.4K likes · 234.2K views
Houcemeddine Turki reposted
The Figen@TheFigen_·
There's always that one friend who ruins things. 😂
387 replies · 2.9K reposts · 25.2K likes · 1.5M views
Houcemeddine Turki reposted
Massimo@Rainmaker1973·
Tiramisù served in Venice [🇮🇹 irina_yev]
351 replies · 586 reposts · 5.9K likes · 6M views
Houcemeddine Turki reposted
Elon Musk@elonmusk·
[image]
8K replies · 31.3K reposts · 288.2K likes · 27.9M views
Houcemeddine Turki reposted
Kevin Boucher-Rappet@kevboucher·
My world stopped yesterday. My Shadow is gone. Some people don't understand and tell themselves they're just animals. But for us, he is a member of the family. He is our son. The light of our life in an ever harsher world. It is the purest, most sincere, most whole, and healthiest love there is. I will never forget you and I will never stop loving you, my Shadow.
[image]
1.2K replies · 591 reposts · 9.4K likes · 178.6K views
Houcemeddine Turki reposted
Freda Shi@fredahshi·
Our workshop was rejected by #ICML2026. Despite the organizing committee including 3 professors (2 full professors) and 2 senior research scientists, the only stated reason for rejection was "you got an undergrad on the organizing committee," though that undergrad is actually a highly competent incoming PhD student. (1/)
[image]
21 replies · 31 reposts · 547 likes · 142.7K views
Houcemeddine Turki reposted
Massimo@Rainmaker1973·
Xing Xing the Tibetan macaque tries petting a cat.
104 replies · 1.3K reposts · 14.2K likes · 608.2K views