Legend

Pinned Tweet

@alex_prompter LeCun's JEPA and world model will change everything, excited for what's to come

Everyone assumes LLMs are the future of AI.
The permanent foundation. The layer everything else gets built on.
I’m not so sure.
The historical parallel that fits best isn’t the one most people want to hear.
LLMs are Edison’s DC power grid:
→ Genuinely revolutionary
→ Commercially dominant
→ Solving real problems right now
→ But architecturally limited in ways that can’t be patched
Right domain. Wrong architecture. And the evidence is already here.
Hallucination isn’t a bug. It’s the architecture.
Researchers have formally proven that LLMs cannot learn all computable functions and will therefore inevitably hallucinate when used as general problem solvers.
That’s not a training data problem. That’s math.
A separate paper demonstrated that hallucinations stem from the fundamental mathematical and logical structure of LLMs, making it impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.
And here’s the part that really gets you:
There’s a direct link between hallucination and creativity in LLMs.
It may be impossible to eliminate hallucination without impairing the model’s most crucial capabilities.
→ The thing that makes LLMs creative is the same thing that makes them lie
→ Fix one, you break the other
→ That’s not a tradeoff you engineer away. That’s a design constraint.
DC power had the exact same structural problem. It couldn’t transmit electricity over long distances.
Not because the engineering was bad. Because the physics made it impossible.
You needed AC. A fundamentally different approach.
The “AC power” of AI is already being built. And it has names.
This isn’t theoretical. People are already building the replacement architectures.
Yann LeCun left Meta and raised $1 billion to prove LLMs are a dead end.
AMI Labs raised $1.03 billion in seed funding at a $3.5 billion valuation in March 2026, making it the largest seed round in European history.
His thesis is simple: LLMs predict the next word. That’s not intelligence. That’s autocomplete at scale.
His core technology, JEPA (Joint Embedding Predictive Architecture), operates in latent space, learning abstract representations of reality rather than surface patterns.
LeCun used a vivid analogy: using an LLM to understand the real world is like trying to teach someone to drive by just talking to them.
A Turing Award winner didn’t just write a paper about it. He quit his job and bet a billion dollars on it.
Mamba is proving transformers aren’t the only game in town.
Mamba achieves 5x higher throughput than Transformers with linear scaling in sequence length.
Thanks to intensive research in 2023-2025, non-transformer architectures have reached parity with Transformers on key language benchmarks, and in some cases surpassed them.
Hybrid architectures are already shipping.
By 2026, models built on hybrid transformer-SSM architectures can ingest hundreds of pages of text at once, far beyond what vanilla GPT-3 or GPT-4 could handle.
The alternatives aren’t coming. They’re here.
Meanwhile, look at what the industry is building to keep LLMs functional:
→ Agents (because the model can’t verify its own outputs)
→ Tool use (because the model can’t interact with the real world)
→ Reasoning chains (because the model can’t reason natively)
→ RAG (because the model can’t reliably recall facts)
These aren’t features. These are workarounds.
When you need that many patches, you’re running longer DC power lines and wondering why the voltage keeps dropping.
Now the part everyone actually needs: which skills survive the transition?
When DC shifted to AC, some electrical engineers thrived and some went extinct.
The ones who thrived understood circuits, load management, and power distribution at a fundamental level. Those principles worked on any architecture.
The ones who didn’t? They only knew DC-specific wiring.
The same split is coming. And it’s coming faster than people think.
Here are the skills that transfer no matter what replaces transformers:
→ Systems thinking for AI workflows. Breaking complex tasks into steps an AI can execute. This works whether the AI is a transformer, an SSM, JEPA, or something we haven’t built yet. Architectures change. The need for structured task decomposition doesn’t.
→ Evaluation and verification. Knowing whether AI output is right. LLMs have a "Self-Correction Blind Spot": they can recognize errors but lack the reasoning pathways to correct them. Whatever comes next will still need humans who can evaluate quality. This skill gets MORE valuable, not less (see the sketch after this list).
→ Data literacy. Understanding what data an AI needs, how to structure it, what’s clean vs. noisy. Every AI architecture runs on data. Past, present, future. The people who understand data will always have leverage.
→ AI-augmented workflow design. Not “how to write a good prompt” but “how to redesign a business process so AI handles the right parts and humans handle the right parts.” This is architecture-agnostic. It transfers to anything.
→ Domain expertise + AI fluency. The most powerful combination is stacking AI fluency on top of deep domain expertise.  A lawyer who understands AI beats a prompt engineer who doesn’t understand law. Every time. Regardless of what model they’re using.
→ Clear problem definition. Prompt engineering is just one implementation of a deeper skill: translating human intent into machine-executable instructions. Whether that instruction is a prompt, an API call, a config file, or something that doesn’t exist yet, the ability to define what you want is permanent.
And here’s what DOESN’T transfer:
→ Memorizing specific model behaviors (“Claude does X, GPT does Y”)
→ Platform-specific tricks that only work on one tool
→ Building your identity around a single product name
→ “Prompt engineer” as a job title instead of a thinking skill
The difference is simple:
→ Transferable skills = understanding WHY something works
→ Non-transferable skills = memorizing HOW a specific tool works
WHY survives paradigm shifts. HOW doesn’t.
The bottom line
The principle behind LLMs is permanent. The architecture probably isn’t.
That’s not bearish on AI. That’s the most bullish take possible. It means the best is still ahead of us.
Use LLMs hard right now. Build with them. Ship on them.
But build your skills around the PRINCIPLES, not the PRODUCTS:
→ Learn systems thinking, not just prompting
→ Learn evaluation, not just generation
→ Learn data literacy, not just tool literacy
→ Learn workflow design, not just model tricks
→ Stack domain expertise on top of AI fluency
The people who do this will thrive in the transformer era AND whatever comes after it.
Edison built a working power grid that lit up Manhattan. It was real, valuable, and changed the world.
AC still replaced it.


@jas0nves @PromptLLM i have an agent for this exactly, i call it the "devil_advocate"

Do not watch Netflix today. Take 1 hour of your time to watch Anthropic's CEO Dario Amodei in this raw, no-BS conversation with @nikhilkamathcio
youtu.be/68ylaeBbdsg



@jerrod_lew this would look good for my obsidian dashboard im working on, do you mind sharing prompts/design?
Legend retweeted

i don't know about ADHD man, but i know i'm very obsessed with what i do
Dom Lucre | Stealer of Narratives@dom_lucre
🔥🚨JUST IN: Mental health expert Sarah Pearl is going viral after claiming people that have ADHD are ‘destined to become millionaires’ due to their strong obsession.

@alex_prompter these are better used as project instructions, you cannot prompt this in a chat and expect it to work across all chats and projects in Claude and Code

1/ MAKE YOUR PROMPTS SMARTER DAILY
Prompt:
Act as an AI prompt systems architect who builds prompt libraries that improve automatically with every use — because a single prompt used once produces a single output, but a prompt system that captures what worked, what failed, and what produced the best result becomes exponentially more valuable with every interaction it learns from.
Build a complete prompt improvement system that captures the output quality of every prompt I use, identifies the patterns that produce the best results, and automatically refines my prompt library so every version is measurably better than the one before it.
1. Ask for my primary use cases for AI, the prompts I use most frequently, and how I currently decide whether a prompt is working or needs improvement before starting
2. Design the prompt performance capture system — the specific way to record every prompt use with its output quality rating and the specific element that made it succeed or fail
3. Build the pattern identification protocol — the specific method for finding which prompt elements consistently produce high-quality outputs versus which ones produce inconsistent results
4. Create the prompt refinement cycle — the specific frequency and method for updating every prompt in my library based on accumulated performance data
5. Design the prompt versioning system — the specific way to track every prompt version so improvements are traceable and reversals are possible when a new version underperforms
6. Deliver the complete prompt improvement system — performance capture, pattern identification, refinement cycle, and versioning — that makes my entire prompt library measurably more effective every 30 days
- Performance capture must happen immediately after every significant prompt use — delayed rating produces inaccurate memory-based assessment
- Pattern identification must distinguish consistent high performers from lucky one-time outputs — one great result is not a pattern
- Refinement cycle must change one prompt element at a time — changing multiple elements simultaneously makes improvement impossible to attribute
- Versioning system must be simple enough to maintain — complex versioning systems get abandoned within 2 weeks
- Every prompt improvement must connect to a specific captured data point — never refine a prompt based on feeling
- Test: if I ran this system for 60 days would my most-used prompts produce measurably better outputs than they do today

The non-negotiable rule would be this:
You must strictly comply with all policy warranties, security protocols, and ongoing risk management requirements, or your coverage can be voided and claims denied outright.
Crypto insurers underwrite based on your controls, not just the asset value. Fail to maintain them (e.g. skipping audits, improper key storage, or ignoring withdrawal whitelisting), and you’re effectively uninsured when it matters most.

@Anthropicary Yoo this is fucking sick ngl they will def go bankrupt but it’s a good idea
