Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن

2.3K posts


@ibomohsin

AI Research Scientist at @GoogleDeepmind

Zurich, Switzerland · Joined July 2009
831 Following · 1.2K Followers
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Joe Kent@joekent16jan19·
After much reflection, I have decided to resign from my position as Director of the National Counterterrorism Center, effective today. I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. It has been an honor serving under @POTUS and @DNIGabbard and leading the professionals at NCTC. May God bless America.
73K replies · 220K reposts · 848K likes · 100.2M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Pedro Sánchez@sanchezcastejon·
The world, Europe, and Spain have faced this critical moment before. In 2003, a few irresponsible leaders dragged us into an illegal war in the Middle East that brought nothing but insecurity and pain. Our response then must be our response now: NO to violations of international law. NO to the illusion that we can solve the world’s problems with bombs. NO to repeating the mistakes of the past. NO TO WAR. lamoncloa.gob.es/presidente/int…
12.9K replies · 68.9K reposts · 298.5K likes · 10.2M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Bo Wang@BoWang87·
Prof. Donald Knuth opened his new paper with "Shock! Shock!" Claude Opus 4.6 had just solved an open problem he'd been working on for weeks — a graph decomposition conjecture from The Art of Computer Programming. He named the paper "Claude's Cycles." 31 explorations. ~1 hour. Knuth read the output, wrote the formal proof, and closed with: "It seems I'll have to revise my opinions about generative AI one of these days." The man who wrote the bible of computer science just said that. In a paper named after an AI. Paper: cs.stanford.edu/~knuth/papers/…
155 replies · 1.9K reposts · 9.1K likes · 1.2M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Jeff Dean@JeffDean·
⚡ Excited to announce Gemini 3.1 Flash-Lite! We’ve set a new standard for efficiency and capability to give developers our fastest, most cost-effective Gemini 3 model yet. We engineered this model with thinking levels, allowing it to handle high-volume queries instantly, while scaling up its reasoning for complex edge cases. By the numbers: ⏱️ 2.5X faster time-to-first-token than 2.5 Flash while being significantly higher quality 📉 $0.25 per 1M input tokens 📊 1432 Elo on LMArena & 86.9% on GPQA Diamond Thrilled to see what developers build with this kind of speed and quality at scale. Available now in Google AI Studio and Vertex AI. blog.google/innovation-and…
68 replies · 122 reposts · 1.3K likes · 116.5K views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Pete Hegseth@PeteHegseth·
Thank you for your attention to this matter. cc: @AnthropicAI @DarioAmodei
5K replies · 8.4K reposts · 59.5K likes · 6.8M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Anthropic@AnthropicAI·
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
7.3K replies · 6.3K reposts · 55K likes · 33.6M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Tucker Carlson Network@TCNetwork·
BREAKING: US Ambassador to Israel Mike Huckabee tells Tucker Carlson that Israel has the Biblical right to take over all of the Middle East. “It would be fine if they took it all.”
3.7K replies · 9.7K reposts · 35.1K likes · 11M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Jeff Dean@JeffDean·
Today, we’re continuing to push the boundaries of AI with our release of Gemini 3.1 Pro. This updated model scores 77.1% on ARC-AGI-2, more than double the reasoning performance of its predecessor, Gemini 3 Pro. Check out the visible improvement in this side-by-side comparison, showing Gemini 3.1 Pro’s crisp animation built with pure code. Read more about today’s 3.1 Pro update: blog.google/innovation-and…
235 replies · 425 reposts · 5.6K likes · 1.1M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Noam Shazeer@NoamShazeer·
Last week we upgraded Gemini 3 Deep Think. Today, we’re shipping the core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro. A noticeably smarter, more capable baseline for your hardest challenges. Available now: blog.google/innovation-and…
28 replies · 54 reposts · 956 likes · 35.7K views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Sundar Pichai@sundarpichai·
Gemini 3 Deep Think is getting a significant upgrade. We’ve refined Deep Think in close partnership with scientists and researchers to tackle tough, real-world challenges. And it’s pushing the frontier across the most challenging benchmarks, achieving an unprecedented 84.6% on ARC-AGI-2. It also sets a new standard on Humanity’s Last Exam - 48.4% without tools.
366 replies · 707 reposts · 9.1K likes · 1M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Axiom@axiommathai·
1/ AxiomProver has solved Fel’s open conjecture on syzygies of numerical semigroups, autonomously generating a formal proof in Lean with zero human guidance. This is the first time an AI system has settled an unsolved research problem in theory-building math and self verifies.
87 replies · 448 reposts · 2.4K likes · 1M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Sundar Pichai@sundarpichai·
Our Q4/FY’25 results are in. Thanks to our partners & employees, it was a tremendous quarter, exceeding $400B in annual revenue for the first time. Our full AI stack is fueling our progress, and Gemini 3 adoption has been faster than any other model in our history. We’re really well positioned and excited going into 2026. Much more to come!
634 replies · 1.4K reposts · 19.4K likes · 2.6M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
KAUST@KAUST_News·
MenaML Winter School 2026 convened emerging AI leaders at KAUST. In collaboration with @MENAML_ and @McitGovSa more than 300 scholars from 30+ MENA countries advanced frontier machine learning through hands-on labs and collaboration with global experts.
2 replies · 6 reposts · 30 likes · 2.8K views
Jvnior@Jvnior·
Covenant of Muhammad ﷺ with Christians It states: “They (Christians) are my citizens; I shall defend them against any harm. No compulsion shall be on them… Their churches shall not be destroyed.” “It is my command that Muslims protect them until the end of time.” (Signed by the Prophet ﷺ, sealed by Ali رضي الله عنه)
342 replies · 1.1K reposts · 5.4K likes · 242.6K views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
hardmaru@hardmaru·
One of my favorite findings: Positional embeddings are just training wheels. They help convergence but hurt long-context generalization. We found that if you simply delete them after pretraining and recalibrate for < 1% of the original budget, you unlock massive context windows.
Sakana AI@SakanaAILabs

Introducing DroPE: Extending the Context of Pretrained LLMs by Dropping Their Positional Embeddings pub.sakana.ai/DroPE/

We are releasing a new method called DroPE to extend the context length of pretrained LLMs without the massive compute costs usually associated with long-context fine-tuning.

The core insight of this work challenges a fundamental assumption in Transformer architecture. We discovered that explicit positional embeddings like RoPE are critical for training convergence but eventually become the primary bottleneck preventing models from generalizing to longer sequences. Our solution is radically simple: we treat positional embeddings as a temporary training scaffold rather than a permanent architectural necessity.

Real-world workflows like reviewing massive code diffs or analyzing legal contracts require context windows that break standard pretrained models. While models without positional embeddings (NoPE) generalize better to these unseen lengths, they are notoriously unstable to train from scratch. Here, we achieve the best of both worlds by using embeddings to ensure stability during pretraining and then dropping them to unlock length extrapolation during inference.

Our approach unlocks seamless zero-shot context extension without any expensive long-context training. We demonstrated this on a range of off-the-shelf open-source LLMs. In our tests, recalibrating any model with DroPE requires less than 1% of the original pretraining budget, yet it significantly outperforms established methods on challenging benchmarks like LongBench and RULER.

We have released the code and the full paper to encourage the community to rethink the role of positional encodings in modern LLMs.
Paper: arxiv.org/abs/2512.12167
Code: github.com/SakanaAI/DroPE
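The scaffold-then-drop idea in the thread above can be illustrated with a toy NumPy sketch (this is not Sakana's released code; the function names and shapes here are illustrative assumptions). It computes single-head attention logits with RoPE applied and with it dropped: without the rotations, logits depend only on token content, so longer sequences never produce out-of-distribution rotation angles.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary positional embeddings (RoPE): rotate pairs of feature
    dimensions by an angle proportional to each token's position."""
    seq, d = x.shape
    half = d // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair rotation frequency
    angles = positions[:, None] * freqs[None, :]   # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def attn_logits(q, k, use_rope=True):
    """Scaled dot-product attention logits; set use_rope=False to 'drop'
    the positional embeddings, as DroPE does after pretraining."""
    pos = np.arange(q.shape[0], dtype=float)
    if use_rope:
        q, k = rope(q, pos), rope(k, pos)
    return q @ k.T / np.sqrt(q.shape[1])

rng = np.random.default_rng(0)
q = rng.normal(size=(6, 8))
k = rng.normal(size=(6, 8))

with_pe = attn_logits(q, k, use_rope=True)   # position-dependent scores
no_pe = attn_logits(q, k, use_rope=False)    # content-only (NoPE) scores
```

In this sketch, dropping RoPE reduces the logits to pure content similarity; in the actual method, a model pretrained with RoPE still needs the short recalibration phase (under 1% of pretraining compute, per the thread) to adapt to the NoPE regime before it can exploit longer contexts.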

49 replies · 241 reposts · 2.5K likes · 345.6K views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Aaron Rupar@atrupar·
words fail at the brazenness of the dishonesty in the White House's new January 6 timeline: whitehouse.gov/j6/
490 replies · 3K reposts · 13.6K likes · 3M views
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن retweeted
Senator Chris Van Hollen@ChrisVanHollen·
Another outrageous action. The Netanyahu govt is banning over 30 aid orgs — including CARE, Mercy Corps & Doctors Without Borders groups — from providing life-saving aid to people in Gaza.  Another betrayal of humanity. And, once again, the Trump Admin says and does nothing.  bbc.com/news/articles/…
848 replies · 2.3K reposts · 5.1K likes · 154.8K views