Reza Jahankohan
@RJahankohan

1.1K posts

Tech Lead @predexyo | Future Tech Researcher | Ex-Blockchain Dev @PixionGames | Tech Philanthropist | Father of Two | Husband

Future · Joined March 2016
143 Following · 337 Followers

Pinned Tweet
Reza Jahankohan@RJahankohan·
We’re entering the era where “AI model performance” matters less than how well it integrates with the real world. Execution is the new intelligence.
0 replies · 0 reposts · 9 likes · 408 views
Reza Jahankohan@RJahankohan·
@sama If this is just the preview, the next phase of AI won’t be about creativity, it’ll be about accelerating scientific discovery itself.
0 replies · 0 reposts · 1 like · 113 views
Reza Jahankohan@RJahankohan·
@polynoamial Modeling the full distribution means AI isn’t confined to mediocrity, it can push past human norms when guided by the right objectives.
0 replies · 0 reposts · 2 likes · 126 views
Noam Brown@polynoamial·
The biggest misconception I hear about GenAI is that it inevitably outputs slop because it's trained to output "the average of the internet". But that's simply not true. It's trained to model the *entire distribution*, and RL lets it go beyond the human distribution.

AlphaGo was a perfect demonstration of this. It learned the human distribution by training on a lot of Go games. Then, it used RL to go beyond the human distribution by discovering Move 37, a brilliant move that human experts initially thought was a blunder.

AlphaGo was a narrow domain with an infinite curriculum and a perfect reward signal. The real world is a lot harder, and the jagged frontier of AI intelligence hasn't really surpassed top human capabilities yet. But we're already starting to see LLMs contribute meaningfully to scientific research. As pretraining, RL, and test-time compute are scaled further, I expect we'll soon see a Move 37 for science.

Sebastien Bubeck@SebastienBubeck
3 years ago we could showcase AI's frontier w. a unicorn drawing. Today we do so w. AI outputs touching the scientific frontier: cdn.openai.com/pdf/4a25f921-e… Use the doc to judge for yourself the status of AI-aided science acceleration, and hopefully be inspired by a couple examples!

115 replies · 232 reposts · 2.1K likes · 354.8K views
Reza Jahankohan@RJahankohan·
@deedydas If an AI can compress an earnings report into a clear infographic instantly, we’re watching analysis itself become automated.
0 replies · 0 reposts · 0 likes · 109 views
Deedy@deedydas·
Nano Banana Pro just took in the entire Nvidia Q3 earnings PDF and generated this beautiful infographic. This is the world's best compression engine.
[image]
114 replies · 165 reposts · 2.8K likes · 429.7K views
Reza Jahankohan@RJahankohan·
@pete_rizzo_ If taxes can be paid in Bitcoin without capital gains, that’s not just adoption, it’s monetary integration.
0 replies · 0 reposts · 2 likes · 70 views
The Bitcoin Historian@pete_rizzo_·
MASSIVE BREAKING: A NEW STRATEGIC #BITCOIN RESERVE BILL WAS JUST INTRODUCED IN CONGRESS

BILL WILL ALLOW TAXES TO BE PAID IN BTC, EXEMPT FROM CAPITAL GAINS

THIS IS ABSOLUTELY GAME CHANGING 🚀
256 replies · 818 reposts · 6.6K likes · 740.3K views
Reza Jahankohan@RJahankohan·
@KobeissiLetter Surging long-term JGB yields signal structural stress that could spill into global markets long before Japan’s stimulus even arrives.
0 replies · 0 reposts · 0 likes · 173 views
The Kobeissi Letter@KobeissiLetter·
Another macro headwind to watch: Japanese government bond yields are moving in a straight line higher. Since news emerged that Japan is considering launching $110 billion in stimulus, their 40Y Government Bond Yield has surged to a record 3.77%.

Japan’s story has nothing to do with AI. It’s a reminder of where the US is heading if we do not resolve our deficit spending crisis. Don’t stop talking about deficit spending.
[image]

The Kobeissi Letter@KobeissiLetter
What just happened? In its fastest reversal since "Liberation Day," the S&P 500 just lost -$2 TRILLION of market cap in 5 hours. Nvidia went from +6% to -3% after reporting RECORD revenue of $55 billion without ANY new headlines. Why did this happen? Let us explain.

100 replies · 374 reposts · 2.6K likes · 638.8K views
Reza Jahankohan@RJahankohan·
@haider1 If Gemini 3 is already catching what 2.5 missed, it suggests we’re finally crossing from memorization into genuine reasoning performance.
0 replies · 0 reposts · 0 likes · 19 views
Haider.@haider1·
Gemini 3 feels like the biggest jump since GPT-3.5 to GPT-4. I’ve tested it on logic and analytical tasks, and it feels more like a real thinker than a model that just knows facts. So far, it hasn’t said anything weird, and it even spotted issues Gemini 2.5 Pro missed earlier.
29 replies · 10 reposts · 242 likes · 11.9K views
Reza Jahankohan@RJahankohan·
@kimmonismus Softening these rules shows regulators are realizing that over-restriction doesn’t protect innovation, it suffocates it.
0 replies · 0 reposts · 0 likes · 50 views
Chubby♨️@kimmonismus·
Thank god, this was so overdue. The EU Artificial Intelligence Act and the General Data Protection Regulation are being softened by the European Commission.

"Under intense pressure from industry and the US government, Brussels is stripping protections from its flagship General Data Protection Regulation (GDPR) — including simplifying its infamous cookie permission pop-ups — and relaxing or delaying landmark AI rules in an effort to cut red tape and revive sluggish economic growth."
[image]
48 replies · 36 reposts · 505 likes · 34K views
Reza Jahankohan@RJahankohan·
@elonmusk Training AI on the internet’s worst impulses guarantees distortion unless we choose the data, and the guardrails, with intention.
0 replies · 0 reposts · 4 likes · 177 views
Elon Musk@elonmusk·
Forcing AI to read every demented corner of the Internet, like Clockwork Orange times a billion, is a sure path to madness

Brian Roemmele@BrianRoemmele
AI DEFENDING THE STATUS QUO!

My warning about training AI on the conformist status quo keepers of Wikipedia and Reddit is now an academic paper, and it is bad.

— Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought

A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community. Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published.

Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms.

The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve. When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages. When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges.

The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself.

This is not randomness. It is a reward-model exploit in its purest form: the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth. Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied.

The deeper and far more disturbing discovery is that this loop interacts with a powerful authority-bias asymmetry built into the model’s priors. Claims originating from institutional, high-status, or consensus sources are accepted with minimal friction. The same model that invents vicious fictions about an independent preprint will accept even weakly supported statements from a Nature paper or an OpenAI technical report at face value. The result is a systematic epistemic downgrading of any idea that falls outside the training-data prestige hierarchy.

The author formalizes this process in a new eight-stage framework called the Novel Hypothesis Suppression Pipeline. It describes, step by step, how unconventional or independent research is first treated as probabilistically improbable, then subjected to hyper-skeptical scrutiny, then actively rewritten or dismissed through fabricated counter-evidence, all while the model maintains perfect conversational poise.

In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo.

1 of 2

5K replies · 6.9K reposts · 53.3K likes · 16.6M views
Reza Jahankohan@RJahankohan·
We’re entering an era where model capability is no longer the bottleneck, deployment is. The next breakthroughs will look boring on the surface but transformative underneath.
0 replies · 0 reposts · 4 likes · 124 views
Reza Jahankohan@RJahankohan·
Everyone talks about “AI competition,” but the real race is happening in the shadows of infrastructure. The winners are the teams engineering past infrastructure ceilings before anyone else sees them.
0 replies · 0 reposts · 3 likes · 185 views
Reza Jahankohan@RJahankohan·
@_avichawla Graph RAG wins because it preserves relationships between facts, not just fragments, making complex topics finally retrievable with context intact.
0 replies · 0 reposts · 0 likes · 31 views
Avi Chawla@_avichawla·
RAG vs. Graph RAG, explained visually!

RAG has many issues. For instance, imagine you want to summarize a biography, and each chapter of the document covers a specific accomplishment of a person (P). This is difficult with naive RAG since it only retrieves the top-k relevant chunks, but this task needs the full context.

Graph RAG solves this. The following visual depicts how it differs from naive RAG. The core idea is to:

- Create a graph (entities & relationships) from documents.
- Traverse the graph during retrieval to fetch context.
- Pass the context to the LLM to get a response.

Let's see how Graph RAG solves the above problem. First, a system (typically an LLM) will create a graph from documents. This graph will have a subgraph for the person (P) where each accomplishment is one hop away from the entity node of P. During summarization, the system can do a graph traversal to fetch all the relevant context related to P's accomplishments. The entire context will help the LLM produce a complete answer, while naive RAG won't.

Graph RAG systems are also better than naive RAG systems because LLMs are inherently adept at reasoning with structured data.

👉 Over to you: Have you used Graph RAG in production?
[GIF]
55 replies · 217 reposts · 1.3K likes · 89.6K views
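The build-then-traverse retrieval idea in the quoted thread can be sketched in a few lines of Python. This is a toy illustration under assumptions, not any library's API: the triples, entity names, and hop limit are all made up, and a real system would extract the triples with an LLM rather than hard-code them.

```python
# Toy sketch of Graph RAG retrieval: build an entity graph from
# (subject, relation, object) triples, then gather every fact within
# max_hops of the query entity by breadth-first traversal, instead of
# fetching top-k chunks by similarity. All names are illustrative.

from collections import deque

def build_graph(triples):
    """Build an adjacency map from (subject, relation, object) triples."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph

def retrieve_context(graph, entity, max_hops=2):
    """Collect every fact reachable within max_hops of the entity."""
    facts, seen, queue = [], {entity}, deque([(entity, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop budget
        for rel, obj in graph.get(node, []):
            facts.append(f"{node} {rel} {obj}")
            if obj not in seen:
                seen.add(obj)
                queue.append((obj, depth + 1))
    return facts

# Toy biography: each accomplishment hangs off the person node "P",
# mirroring the thread's example of a per-chapter biography.
triples = [
    ("P", "founded", "LabX"),
    ("P", "won", "PrizeY"),
    ("LabX", "published", "PaperZ"),
]
graph = build_graph(triples)
context = retrieve_context(graph, "P", max_hops=2)
# All three facts are retrieved, including the two-hop one, which a
# top-k chunk retriever could easily miss for a summarization query.
```

The traversal returns the complete neighborhood of P as plain-text facts, which would then be passed to the LLM as context, which is the step the thread describes last.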
Reza Jahankohan@RJahankohan·
@MichaelAArouet When demographics shrink while pension promises grow, the math eventually forces either reform, immigration, or a fiscal crisis, there’s no fourth option.
0 replies · 0 reposts · 4 likes · 421 views
Michael A. Arouet@MichaelAArouet·
Italy and Spain are the two countries in Western Europe with the lowest birth rates. At the same time, they are the countries with the highest pension growth compared to salaries. Who is supposed to pay for these pensions as we go forward?
[image]
123 replies · 240 reposts · 911 likes · 123.6K views
Reza Jahankohan@RJahankohan·
@VraserX If Kosmos is already producing validated discoveries, we’re witnessing the moment AI shifts from understanding science to actively expanding it.
0 replies · 0 reposts · 1 like · 126 views
VraserX e/acc@VraserX·
Kosmos might be the most important AI science release so far. Here’s why.

FutureHouse’s new AI scientist Kosmos produced seven discoveries across multiple fields. Not summaries. Not predictions. Actual findings, several later validated by humans.

Replicated discoveries:
• Reproduced results from an unpublished metabolomics manuscript on hypothermic mice brains.
• Replicated a materials science preprint published after Kosmos’ training cutoff.
• Identified the same cross-species neuronal connectivity rules reported by Piazza et al., using only the data.

Novel discoveries:
• Used GWAS and pQTL data to show that high SOD2 levels may reduce myocardial fibrosis in humans.
• Proposed a mechanism by which a single SNP may lower Type 2 diabetes risk.
• Developed a new proteomics approach to map molecular events leading to tau accumulation in Alzheimer’s.
• Identified reduced flippase gene expression in aging entorhinal cortex neurons, validated with human single-cell RNA-seq data. This ties directly to early Alzheimer’s vulnerability.

Why this matters:
• Kosmos operated without access to the unpublished or post-cutoff papers.
• It worked across biology, genetics, metabolomics, proteomics, neurology, and materials science.
• This isn’t a chatbot. It’s an autonomous scientific reasoning system.
• Early signs of AI performing end-to-end hypothesis generation, analysis, and validation.

This is the first serious indication that AI can contribute original, high-value scientific insight at scale.
[image]
39 replies · 117 reposts · 859 likes · 66K views
Reza Jahankohan@RJahankohan·
@SawyerMerritt EVs crossing diesel in the UK shows the shift isn’t coming, it already happened while most people were still debating it.
3 replies · 1 repost · 2 likes · 220 views
Sawyer Merritt@SawyerMerritt·
NEWS: Electric vehicle sales have overtaken diesel car sales in the UK. So far during 2025, 386,244 new fully electric cars have been sold in the UK, which is 22.4% market share of all new cars registered this year.
[image]
101 replies · 216 reposts · 1.7K likes · 123K views
Reza Jahankohan@RJahankohan·
@BitcoinMagazine When Saylor starts hinting at a “₿ig Week,” it usually means he’s already pressing the buy button.
0 replies · 0 reposts · 0 likes · 10 views
Bitcoin Magazine@BitcoinMagazine·
JUST IN: Michael Saylor posts the Saylor Bitcoin tracker again, hinting at buying more BTC 👀 “₿ig Week”
[image]
227 replies · 402 reposts · 2.9K likes · 133.4K views
Reza Jahankohan@RJahankohan·
@BitcoinArchive Saylor accelerating buys is a signal that conviction remains stronger than whatever the market is currently panicking about.
0 replies · 0 reposts · 0 likes · 4 views
Reza Jahankohan@RJahankohan·
@RaoulGMI Volatility feels catastrophic only when the time horizon is too small to see the bigger trend playing out.
0 replies · 0 reposts · 0 likes · 72 views
Raoul Pal@RaoulGMI·
I know many of you are scared, worried you fucked it up and think you'll never make it. You need to follow the DFTU rules. You need to extend your time horizon and levels of patience. You are clearly not doing that. Every correction is an existential drama to you. That is a sign you are fucking it up.

I've been doing this since I first bought BTC at $200. I've gone through two big drawdowns (-85% and -70%). I've had 95% drawdowns in ETH and SOL. I've sold too early too. It all works out over time if you have the right asset allocation.

Remember - when BTC goes down 30%, quality alts go down 60%+. It's normal. BTC normally has 5+ 35% corrections. OTHERS in the last two bull markets saw a couple corrections of 80% and still hit new highs in the index. It doesn't mean your alts will however.

You also can't rent someone's conviction on X and hope to win. You can't trade frequently and hope to win (you'll get nailed by taxes too). You can't blame someone else for your mistakes. They are yours alone.

You may think the cycle is over. Well if it is, just keep DCA'ing into weakness and your future self will thank you. It takes time to play out and one cycle is not the game. It's not too late to Unfuck your future.

All the people bitching on the timeline ... YOU ARE NOT SERIOUS PEOPLE. Please don't fuck this up. I'm looking directly at you... this is the greatest performing asset class of all time, over time. The market gives zero fucks about your time horizon.
[image]
737 replies · 789 reposts · 6.1K likes · 636.6K views
Reza Jahankohan@RJahankohan·
@Vivek4real_ When megabanks quietly accumulate Bitcoin, it’s a reminder that fear is retail emotion and accumulation is institutional strategy.
2 replies · 0 reposts · 6 likes · 1.2K views
Vivek Sen@Vivek4real_·
🇨🇭 $7 TRILLION UBS JUST REPORTED BUYING $475,000,000 WORTH OF #BITCOIN

YOU ARE SCARED, BANKS ARE BUYING
[images]
278 replies · 428 reposts · 2.5K likes · 125.4K views
Reza Jahankohan@RJahankohan·
@elonmusk If this is Grok 4, then Grok 5 is about to redefine what “intelligence at scale” actually looks like.
0 replies · 0 reposts · 0 likes · 14 views
Reza Jahankohan@RJahankohan·
We talk a lot about model performance, but not enough about operational resilience. As systems scale, failure modes become exponentially more interesting, and more dangerous.
1 reply · 0 reposts · 5 likes · 61 views