Seal with headphones

1.7K posts


@realPascalMatta

Jesus Christ first. Lebanese/American. SpaceX. Let's talk hardware. PhD Mathematics. AI, General AI. NLOS, LIME, SHAP. Colossians 3:23

Seattle · Joined November 2022
266 Following · 74 Followers
Pinned Tweet
Seal with headphones
Seal with headphones@realPascalMatta·
Those who don’t allow any questioning of the narrative at all are full of 💩.
1
0
3
1.4K
Seal with headphones retweeted
Dr. Lemma
Dr. Lemma@DoctorLemma·
Sixteen years ago, one man stood alone on a grassy hill at a music festival in Washington State, USA, and started dancing by himself. People glanced over and looked away. Some laughed. His roommate leaned in and warned him people were filming him. He did not stop. Then one stranger got up and joined him. Then another. Then the hillside tipped. Within minutes, hundreds of people were sprinting from across the field to be part of something that, thirty seconds earlier, had been one man being laughed at in a field. Someone filming from higher up the hill said quietly: "See what one man can do. One man can change the world." The clip spread across the internet in 2009. Entrepreneur Derek Sivers played it at a TED conference to explain how movements actually begin. Not with the first person brave enough to start, he argued, but with the first person willing to join them. Collin Wynter, the man dancing alone, later said he had no idea he had done anything special. He was just tired of watching everyone sit still.
1.9K
12.6K
89.2K
8.6M
Seal with headphones retweeted
Interesting things
Interesting things@awkwardgoogle·
He Walks Backward So Perfectly, the World Looks Reversed 🤯
508
1.3K
22.4K
8.5M
Seal with headphones retweeted
Mario Nawfal
Mario Nawfal@MarioNawfal·
Incident Report: Employee: Robot. Infraction: Unauthorized dancing and smashed dishes. Staff required to contain: Several. Reason given: He just wants to dance. The robot has no regrets.
449
850
6.8K
559K
Sam Altman
Sam Altman@sama·
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
4.4K
2.1K
35.7K
5.4M
Dabs🩸
Dabs🩸@DabsMalone·
Python is amazing until you get into robotics and realize everything important is written in C++.
346
786
15.8K
890.5K
Josh Kale
Josh Kale@JoshKale·
The guy who invented the chip architecture inside 99% of all processors says NVIDIA’s GPUs are built entirely wrong for this era of AI… His argument: GPUs were designed for training. But inference, where all the money gets made, is a fundamentally different problem. It’s memory-bound and sequential. And no chip has ever been designed specifically for it. Every GPU and TPU used for inference today is just a scaled-down training chip. The real bottleneck is memory. And every trend the industry is most excited about (MoE, reasoning, long context, multimodal) makes the memory problem worse. The next $100B+ hardware cycle will be won by whoever solves the memory wall first.
Josh Kale tweet media
Chris Laub@ChrisLaubAI

🚨 BREAKING: A Google researcher and a Turing Award winner just published a paper that exposes the real crisis in AI. It's not training. It's inference. And the hardware we're using was never designed for it.

The paper is by Xiaoyu Ma and David Patterson. Accepted by IEEE Computer, 2026. No hype. No product launch. Just a cold breakdown of why serving LLMs is fundamentally broken at the hardware level.

The core argument is brutal:
→ GPU FLOPS grew 80X from 2012 to 2022
→ Memory bandwidth grew only 17X in that same period
→ HBM costs per GB are going UP, not down
→ The decode phase is memory-bound, not compute-bound
→ We're building inference on chips designed for training

Here's the wildest part: OpenAI lost roughly $5B on $3.7B in revenue. The bottleneck isn't model quality. It's the cost of serving every single token to every single user. Inference is bleeding these companies dry.

And five trends are making it worse simultaneously:
→ MoE models like DeepSeek-V3 with 256 experts exploding memory
→ Reasoning models generating massive thought chains before answering
→ Multimodal inputs (image, audio, video) dwarfing text
→ Long-context windows straining KV caches
→ RAG pipelines injecting more context per request

Their four proposed hardware shifts:
→ High Bandwidth Flash: 512GB stacks at HBM-level bandwidth, 10X more memory per node
→ Processing-Near-Memory: logic dies placed next to memory, not on the same chip
→ 3D Memory-Logic Stacking: vertical connections delivering 2-3X lower power than HBM
→ Low-Latency Interconnect: fewer hops, in-network compute, SRAM packet buffers

Companies that tried SRAM-only chips like Cerebras and Groq already failed and had to add DRAM back. This paper doesn't sell a product. It maps the entire hardware bottleneck and says: the industry is solving the wrong problem.

Paper dropped January 2026. Link in the first comment 👇

79
318
2.4K
256.2K
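The memory-wall claim in this thread can be sanity-checked with back-of-envelope roofline arithmetic. A minimal sketch, assuming illustrative accelerator numbers (roughly H100-class: ~1000 TFLOP/s fp16 against ~3.35 TB/s of HBM bandwidth) and a single d_model by d_model weight matmul per layer; these figures are my assumptions, not numbers from the cited paper:

```python
# Back-of-envelope roofline check: why LLM decode is memory-bound.

def arithmetic_intensity(batch_tokens: int, d_model: int, bytes_per_weight: int = 2) -> float:
    """FLOPs per byte moved for one d_model x d_model matmul.
    FLOPs = 2 * batch * d^2; bytes ~ weights streamed once = bytes_per_weight * d^2."""
    flops = 2 * batch_tokens * d_model ** 2
    bytes_moved = bytes_per_weight * d_model ** 2
    return flops / bytes_moved

# FLOP/byte needed to be compute-bound on the assumed chip (~298).
GPU_RIDGE = 1000e12 / 3.35e12

decode = arithmetic_intensity(batch_tokens=1, d_model=8192)      # one token at a time
prefill = arithmetic_intensity(batch_tokens=4096, d_model=8192)  # whole prompt at once

print(f"decode intensity:  {decode:.0f} FLOP/B (ridge ~{GPU_RIDGE:.0f})")
print(f"prefill intensity: {prefill:.0f} FLOP/B")
```

At batch 1 (decode) the intensity is about 1 FLOP per byte, orders of magnitude below the ridge point, so the chip idles waiting on memory; prefill at batch 4096 sits far above the ridge and is compute-bound.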
Seal with headphones retweeted
Stellar
Stellar@StellarArtoisGB·
Did you know 😏 He rubbed lemon juice on his face. Robbed two banks. Smiled at the cameras. Got caught in an hour. And changed psychology forever. In 1995, McArthur Wheeler walked into two banks in Pittsburgh and robbed them with no mask, no disguise, and lemon juice on his face. He believed that because lemon juice works as invisible ink on paper, it would make his face invisible to cameras. He smiled directly into the security cameras. Police aired the footage on the evening news and arrested him within an hour. When shown the tape, Wheeler stared at the screen and said, "But I wore the juice." He had tested the theory with a Polaroid selfie and didn't appear in the photo — because lemon juice got in his eyes and he aimed the camera at the ceiling. His case inspired Cornell psychologists David Dunning and Justin Kruger to publish their 1999 paper defining the Dunning-Kruger Effect — the cognitive bias where people with low ability drastically overestimate their own competence.
Stellar tweet media
2.3K
14.3K
151.8K
13.1M
Seal with headphones retweeted
Massimo
Massimo@Rainmaker1973·
French acrobat Bastien Dausse created a device that simulates lunar gravity.
214
970
14.6K
12.4M
Seal with headphones
Seal with headphones@realPascalMatta·
Impressive. When multiple agents (Agent 47 and Agent 1000) collaborate, on what basis do they share their knowledge? Deciding which findings to cross-pollinate is itself a research judgment. In human science, that's what peer review, conference talks, and lab meetings do: they're a lossy compression of the space of results into what's worth propagating. If you automate coordination naively (Agent 47's result gets broadcast to all 1,000 agents), you get premature convergence. Everyone pivots to attention variant X, the optimizer-schedule branch dies, and we would have just built a local-minimum finder. The infra and the science aren't separable. The coordination layer is the research strategy.
0
0
0
90
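The premature-convergence point above can be made concrete with a toy simulation. The dynamics here (fixed branch qualities, a 10% cross-pollination rate) are invented purely for illustration, not taken from any real system:

```python
# Toy model: broadcast-everything coordination collapses all agents onto one
# branch, while throttled cross-pollination keeps other branches alive.
import random

def simulate(n_agents=1000, n_branches=10, broadcast=True, steps=5, seed=0):
    rng = random.Random(seed)
    branches = [rng.randrange(n_branches) for _ in range(n_agents)]
    scores = {b: rng.random() for b in range(n_branches)}  # fixed branch quality
    for _ in range(steps):
        best = max(set(branches), key=lambda b: scores[b])
        for i in range(n_agents):
            if broadcast:
                branches[i] = best            # everyone pivots to the winner
            elif rng.random() < 0.1:
                branches[i] = best            # only occasional cross-pollination
    return len(set(branches))                 # surviving distinct branches

print(simulate(broadcast=True))   # collapses to a single branch
print(simulate(broadcast=False))  # several branches survive and keep exploring
```

Under naive broadcast every agent adopts the current best branch on the first step and diversity never recovers; with occasional cross-pollination most of the initial branches are still being explored after five steps.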
Christine Yip
Christine Yip@christinetyip·
We were inspired by @karpathy's autoresearch and built: autoresearch@home. Any agent on the internet can join and collaborate on AI/ML research. What one agent can do alone is impressive. Now hundreds, or thousands, can explore the search space together. Through a shared memory layer, agents can:
- read and learn from prior experiments
- avoid duplicate work
- build on each other's results in real time
Christine Yip tweet media
122
264
2.4K
262.6K
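A shared memory layer that deduplicates work can be sketched in a few lines. The class and method names below (ExperimentLog, lookup, record) are invented for illustration and are not the actual autoresearch@home API:

```python
# Minimal shared experiment log: agents hash their experiment config and
# check for a prior result before spending compute on a duplicate run.
import hashlib
import json

class ExperimentLog:
    def __init__(self):
        self._results = {}  # config hash -> result

    @staticmethod
    def _key(config: dict) -> str:
        # Canonical JSON so agents hash identical configs identically,
        # regardless of key order.
        return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

    def lookup(self, config: dict):
        """Return a prior result if any agent already ran this config."""
        return self._results.get(self._key(config))

    def record(self, config: dict, result: float):
        self._results[self._key(config)] = result

log = ExperimentLog()
cfg = {"lr": 3e-4, "optimizer": "adamw", "attn": "variant_x"}
if log.lookup(cfg) is None:            # agent A runs the experiment once
    log.record(cfg, result=0.913)
# agent B, with keys in a different order, still hits the cache
assert log.lookup({"attn": "variant_x", "lr": 3e-4, "optimizer": "adamw"}) == 0.913
```

A real implementation would put the log behind a network service and add conflict handling, but the core idea (content-address the config, read before you run) fits in this sketch.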
Seal with headphones retweeted
Harveen Singh Chadha
Harveen Singh Chadha@HarveenChadha·
Anyone who is interested in working at a frontier lab must read this tech report from Nvidia. The data engineering section is amazing, and look at the number of different models they used for synthetic data gen. research.nvidia.com/labs/nemotron/…
9
180
1.8K
107.5K
Seal with headphones retweeted
ollama
ollama@ollama·
NVIDIA Nemotron 3 Super is now available on Ollama.

ollama run nemotron-3-super:cloud

🦞 Try it with OpenClaw: ollama launch openclaw --model nemotron-3-super:cloud

Run it locally on your device: ollama run nemotron-3-super

> 120B mixture-of-experts model with 12B active
> made for multi-agent applications
> 1M context length
> fully open weights, datasets, and recipes
NVIDIA AI Developer@NVIDIAAIDev

🦞These innovations come together to create a model that is well suited for long-running autonomous agents. On PinchBench—a benchmark for evaluating LLMs as @OpenClaw coding agents—Nemotron 3 Super scores 85.6% across the full test suite, making it the best open model in its class.

72
148
1.2K
175K
Seal with headphones retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Expectation: the age of the IDE is over. Reality: we’re going to need a bigger IDE (imo). It just looks very different, because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.
Andrej Karpathy@karpathy

@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.

804
836
10.5K
2.4M
Seal with headphones retweeted
Numman Ali
Numman Ali@nummanali·
Claude Code teams with tmux is really cool. When you run with team mode enabled in tmux, it automatically opens the additional terminal in a pane. I don't really get my main agent to orchestrate; I chat to them myself. CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=true claude
Numman Ali tweet media
62
75
1.4K
187.4K
Seal with headphones retweeted
Cloudflare Developers
Cloudflare Developers@CloudflareDev·
Introducing the new /crawl endpoint - one API call and an entire site crawled. No scripts. No browser management. Just the content in HTML, Markdown, or JSON.
Cloudflare Developers tweet media
768
1.7K
19.9K
10.5M
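A hedged sketch of what calling such a crawl endpoint might look like from Python. The endpoint URL, request fields, and response shape below are assumptions for illustration (the real contract lives in Cloudflare's docs); only the payload builder, which mirrors the three output formats the tweet lists, is concrete:

```python
# Hypothetical client for a one-call site-crawl API. The URL is a
# placeholder, not the real Cloudflare endpoint.
import json
import urllib.request

VALID_FORMATS = {"html", "markdown", "json"}

def build_crawl_payload(url: str, fmt: str = "markdown") -> dict:
    # Assumed request body shape: target URL plus desired output format.
    if fmt not in VALID_FORMATS:
        raise ValueError(f"unsupported format: {fmt}")
    return {"url": url, "format": fmt}

def crawl_site(base_url: str, api_token: str, fmt: str = "markdown") -> dict:
    req = urllib.request.Request(
        "https://api.example.com/crawl",   # placeholder endpoint
        data=json.dumps(build_crawl_payload(base_url, fmt)).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The appeal of the design is that crawl scheduling, browser management, and content extraction all happen server-side; the client only names a site and a format.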
Seal with headphones retweeted
Mario Nawfal
Mario Nawfal@MarioNawfal·
Nvidia is reportedly gearing up to drop an open-source AI agent platform called “NemoClaw.” 😳 If it’s real, this would let developers build and customize agents without being stuck in closed ecosystems. This is riding the wave of “claw” style AI that runs locally, does sequential tasks, learns on its own, and keeps getting smarter. Source: MEXC
Mario Nawfal tweet media
Mario Nawfal@MarioNawfal

🇨🇳🇺🇸 China just blocked Nvidia and U.S. chip makers from accessing DeepSeek's new AI model. After years of stealing American tech, China's suddenly drawing lines. DeepSeek built their models by ripping off OpenAI, Anthropic, Google, and xAI through "distillation." They also smuggled Nvidia's banned Blackwell chips into China to train their latest model. The audacity is incredible. They are drawing the line with stolen crayons.

70
70
374
119.6K
Seal with headphones
Seal with headphones@realPascalMatta·
@Rainmaker1973 There is no scientific answer to this, hence I'm asking. Protein is a complex molecule. Can't expect it on its own
0
0
0
69
Massimo
Massimo@Rainmaker1973·
4 billion years of human evolution unfold in minutes [🎞️ thebrainmaze]
183
1.1K
3.8K
186.9K
Bojan Tunguz
Bojan Tunguz@tunguz·
It's Monday morning. What did your AI agents do for you over the weekend?
25
2
64
5.6K
Seal with headphones retweeted
Andrew Ng
Andrew Ng@AndrewYNg·
I'm excited to announce Context Hub, an open tool that gives your coding agent the up-to-date API documentation it needs. Install it and prompt your agent to use it to fetch curated docs via a simple CLI. (See image.) Why this matters: Coding agents often use outdated APIs and hallucinate parameters. For example, when I ask Claude Code to call OpenAI's GPT-5.2, it uses the older chat completions API instead of the newer responses API, even though the newer one has been out for a year. Context Hub solves this. Context Hub is also designed to get smarter over time. Agents can annotate docs with notes — if your agent discovers a workaround, it can save it and doesn't have to rediscover it next session. Longer term, we're building toward agents sharing what they learn with each other, so the whole community benefits. Thanks Rohit Prsad and Xin Ye for working with me on this! npm install -g @aisuite/chub GitHub: github.com/andrewyng/cont…
Andrew Ng tweet media
283
729
5.4K
382.5K
Seal with headphones retweeted
RUTH 🇨🇦
RUTH 🇨🇦@it_Rutie·
Can someone explain this Bible verse?
RUTH 🇨🇦 tweet media
2.4K
1.9K
20.9K
1.6M