Patricio Lobos

1K posts

@strandedinoslo

“I cannot fiddle, but I can make a BI system into an insight factory.”

Oslo, Norway · Joined April 2016
94 Following · 169 Followers
Patricio Lobos@strandedinoslo·
@DrFrederickChen any news on SRAM? asking for a friend... we need a breakthrough in density and price.
0
0
0
659
Fred Chen@DrFrederickChen·
Samsung DRAM breakthrough (ISSCC): 1) Periphery and array separated onto two hybrid bonded wafers => looser design rules for periphery. 2) D1a pitches on 4F2 process gives D0a cell area. 3) 4F2 has floating body. (My take: IGZO may be deployed in future.) ieeexplore.ieee.org/document/11409…
4
9
86
31.6K
Patricio Lobos@strandedinoslo·
@ATaylorFPGA @Landeur We have the same here in Norway: they want to kill the oil and gas industry and live off green energy, and now they want to pass a tax on all households for parking, meaning if you have a house with a parking space or garage you need to pay a monthly tax.
0
0
0
12
Adam Taylor@ATaylorFPGA·
@Landeur She is a communist, her goal was to destroy Germany.
1
1
28
611
Patricio Lobos@strandedinoslo·
@sama @OpenAI Total and utter destruction of @ChatGPTapp in front of 30 executives: 25 minutes to make a single call against an MCP, versus @claudeai, which does it in 0.5 seconds and is done with a "week-long" brand management plan in 5 minutes. I have to conclude that you guys have nothing to do in the corporate world. If there is any consolation, neither do @xai and @GeminiApp. As of March 2026 there is only one game, and that is @bcherny and his team. When are you going to show some effort and actually fix your app and model?
0
0
0
20
Patricio Lobos
Patricio Lobos@strandedinoslo·
So, you mean that a CEO of a Bank runs CLI on his PC? Wonder what will Infra, IT, Cyber and Governance will say about that.... There two surfaces here, we at "X bubble" developing using CLI, another very different in the Norwegian Oil fund running MCPs. "reddite ergo quae sunt Caesaris Caesari et quae sunt Dei Deo"
English
0
0
0
710
Pietro Schirano@skirano·
MCP was a mistake. Long live CLIs.
145
88
1.7K
259.2K
@levelsio@levelsio·
The biggest fumble in business ever might be Philips spinning off ASML, TSMC and NXP. Philips co-founded ASML in 1984, then co-founded TSMC in 1987, then founded NXP. They sold each of them for short-term profits in the 2000s. ASML is now worth $545B, TSMC is worth $1.76T, NXP is worth $50B. Philips today is worth just $27B. If they'd never sold, Philips would be the largest company in the EU today, worth $650B. Philips CEO Cor Boonstra called it "making money with the success of the past" 🤡
218
420
5.4K
519.6K
Andrew Feldman@andrewdfeldman·
NVIDIA's biggest GTC announcement was a $20 billion bet on the same problem we solved 6 years ago. Their next-gen inference chip, not available yet, has 140x less memory bandwidth than @cerebras. To run a single 2 trillion parameter model, you need 2,000+ Groq chips. On Cerebras, that's just over 20 wafers. Even paired with GPUs, Groq maxes out at ~1,000 tokens per second. We run at thousands of tokens per second today. And every day. In production now.
Why? When you connect 2,000 chips together, every interconnect has latency. Every cable has overhead. It doesn't matter what your memory bandwidth is on paper if you're bottlenecked by the wiring between thousands of tiny chips. We solved this with wafer scale. One integrated system. Little interconnect tax.
Jensen told the world that fast inference is where the value is. He's right: it's why the world's leading AI companies and hyperscalers are choosing Cerebras.
69
72
744
150.6K
Patricio Lobos@strandedinoslo·
@OpenAI @sama ChatGPT with MCP tools: "Based on the naming convention, this tool likely..." "A typical workflow would probably..." "The honest answer is I cannot..." @claudeai with the same tools: calls tool "Done. STATUS=0x03." 5 min vs 2 sec. Please fix.
0
0
0
23
Patricio Lobos@strandedinoslo·
@adidas What a joke of a company. I need to return a pair of shoes; they sent me 100 different tickets for the return and asked me to go 100 different times to return them. I received 100 "invalid" code returns, I contacted customer service 100 times, and they keep sending tickets... now I have a USD 300 pair of shoes I cannot use.
0
0
0
21
Patricio Lobos@strandedinoslo·
@andrewdfeldman At the moment you have 43 GB of SRAM; that is about a 143K × 143K dense array. To put it into perspective, the biggest AMD FPGA has 771 Mb, so roughly a 6K × 6K array. That costs USD 20K; yours? USD 1-3 million. Do you have an idea for a middle ground? An ASIC at 32K × 32K??
0
0
1
437
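The back-of-envelope above (on-chip memory → largest square dense array that fits) can be checked with a short sketch. The 2-bytes-per-element FP16/BF16 storage and the no-tiling assumption are mine, not from the tweet:

```python
import math

def max_square_side(mem_bytes: float, bytes_per_elem: int = 2) -> int:
    """Largest N such that an N x N dense array fits in mem_bytes,
    assuming every element is resident on chip (no tiling) at
    bytes_per_elem bytes each, e.g. 2 for FP16/BF16."""
    return math.isqrt(int(mem_bytes // bytes_per_elem))

# ~43 GB of SRAM (figure from the tweet above)
print(max_square_side(43e9))        # on the order of 146K
# ~771 Mb of FPGA on-chip RAM (figure from the tweet), bits -> bytes
print(max_square_side(771e6 / 8))   # on the order of 6.9K
```

At 2 bytes per element this gives ≈146K rather than 143K, so the tweet's figure is consistent with a slightly heavier per-element cost.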
Andrew Feldman@andrewdfeldman·
GPUs are slow at AI inference because they hit the memory wall. Cerebras pioneered the SRAM-based AI accelerator because GPUs were memory bandwidth constrained. Let me explain.
There are two types of memory: memory that can store a lot but is slow, and memory that is fast but can't store much per square millimeter of silicon. The former is DRAM (or HBM); the latter is SRAM. Graphics was the perfect use case for HBM: it required a lot of data stored, but didn't need it moved very often. This is why graphics processing units use HBM.
But AI inference has different characteristics than graphics. It moves data constantly from memory to compute. To generate each token, it needs to move all of the weights from memory to compute, and for the next token it needs to do it again, for every single token in the answer. Because HBM is slow, moving data is time consuming. The GPU sits idle, waiting for data, pulling power, doing no work.
Cerebras chose SRAM so we could move data from memory to compute faster. Not a little bit faster: more than 2,600 times faster than NVIDIA Blackwell GPUs. As a result, we can generate tokens 15 times faster. This is why we are the fastest in the world.
But what about the weakness of SRAM? Surely there is a tradeoff. SRAM can't store very much data per square millimeter. This is why Cerebras went to wafer scale. By building a chip the size of a dinner plate, 58 times larger than the largest GPU, Cerebras could stuff it to the gills with SRAM. We couldn't make SRAM store more data per square millimeter, but we could provide more square millimeters by building a bigger chip.
If you build a solution with little chips and try to use SRAM, you need to link thousands of them together to support a larger model; there simply isn't enough room on the little chips for lots of SRAM and lots of compute cores. Thousands of little chips connected together with cables is slower and more power hungry than if all that traffic stayed on a big chip, or even several big chips. And since communication between chips is slow and communication on chip is fast, lots of little chips is slower at inference as well.
31
31
366
35.2K
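The memory-wall argument above reduces to one division: if every generated token must stream all weights from memory once, tokens per second is capped by bandwidth over weight bytes. A minimal sketch; the bandwidth and model-size numbers below are illustrative assumptions, not vendor specs:

```python
def tokens_per_sec_bound(bandwidth_bytes_per_s: float,
                         n_params: float,
                         bytes_per_param: int = 2) -> float:
    """Upper bound on decode speed for a memory-bandwidth-bound
    generator: all weights stream from memory once per token
    (batch size 1, FP16 weights, no speculative tricks)."""
    return bandwidth_bytes_per_s / (n_params * bytes_per_param)

# Illustrative only: a 70B-parameter FP16 model on ~8 TB/s of HBM.
print(tokens_per_sec_bound(8e12, 70e9))  # ~57 tokens/s ceiling
```

Raising the ceiling means either more bandwidth (the SRAM route) or fewer bytes per token (quantization, batching), which is the tradeoff the tweet is describing.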
Patricio Lobos@strandedinoslo·
@claudeai @bcherny Is there any glitch with your invoicing? I'm getting invoiced every 10 minutes, 10 USD!! What the hell!
0
0
0
9
Patricio Lobos@strandedinoslo·
Buy the Alinx below, download AMD Vivado 2020.2 Enterprise, follow the installation and select UltraScale+. That SoM is the «biggest» you can go before having to shell out USD 4,350 for a Vivado Enterprise license. Btw, you don't need to open Vivado; it's all CLI. In fact I have never used the interface....
AXKU5 V2.1 AXKU5 FPGA Dev Board & Kit with AMD Kintex™ US+ XCKU5P. The AXKU5 FPGA development board, equipped with the AMD Kintex UltraScale+™ series device, delivers standout performance with DDR4 SDRAM, QSPI Flash, PCIe, FMC HPC, Gigabit Ethernet.
0
0
0
251
kache@yacineMTB·
Is there a FPGA out there that doesn't force me to use dogshit closed source software?
72
5
372
38.3K
Patricio Lobos@strandedinoslo·
@trq212 Enjoying not only that, but the unlock for my customers with the new visuals: finally they do not need to enter a @PowerBI dashboard at all. Plus graphs, networks, and @bcherny live streaming dashboards as artifacts that can be retrieved at any time will be a killer option.
0
0
1
406
Claude@claudeai·
Claude can now build interactive charts and diagrams, directly in the chat. Available today in beta on all plans, including free. Try it out: claude.ai
1.6K
3.5K
42K
10.8M
Patricio Lobos@strandedinoslo·
5/ This isn't computing about thermodynamics. It's computing with thermodynamics — programmed physical law as the computational substrate. The Hamiltonian is the program, temperature is a knob, and free energy is the answer. Patent pending. (hopefully this week)
0
0
0
20
Patricio Lobos@strandedinoslo·
4/ We verified it works: sub-4% accuracy on free-energy differences from d=3 to d=200 dimensions, using 40,000 samples per measurement. The σ/T scaling law — never tested above d=3 — holds to 0.6% precision at d=200. The theory is exact where it was never tested before.
1
0
0
27
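Tweet 4/ reports free-energy differences estimated from nonequilibrium work samples; the standard tool for that is the Jarzynski equality, ΔF = -T ln⟨exp(-W/T)⟩. A minimal sketch of that estimator (not the author's actual pipeline; units with k_B = 1 and the Gaussian sanity check are my assumptions):

```python
import numpy as np

def jarzynski_delta_f(work: np.ndarray, temperature: float) -> float:
    """Jarzynski estimator deltaF = -T * ln <exp(-W/T)> over
    nonequilibrium work samples W (units with k_B = 1).
    Uses a shifted log-mean-exp for numerical stability, since the
    average is dominated by rare low-work trajectories."""
    beta_w = work / temperature
    m = beta_w.min()
    log_mean_exp = -m + np.log(np.mean(np.exp(-(beta_w - m))))
    return -temperature * log_mean_exp

# Sanity check: for Gaussian work W ~ N(mu, sigma^2), the exact
# answer is deltaF = mu - sigma^2 / (2T).
rng = np.random.default_rng(0)
w = rng.normal(2.0, 0.5, size=200_000)
print(jarzynski_delta_f(w, temperature=1.0))  # close to 1.875
```

The exponential average is what makes the thread's sub-4% claim nontrivial: its variance grows quickly with σ/T, which is exactly the scaling law the tweet says was tested up to d=200.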
Patricio Lobos@strandedinoslo·
1/ We built a machine that takes a Hamiltonian as input and returns a free energy as output. Not a simulation — real Langevin dynamics in hardware. Load a coupling matrix, inject noise at temperature T, measure thermodynamic work. We call it a programmable free-energy computer. @yacineMTB @yoheinakajima @predict_addict
1
0
0
41
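Tweet 1/ describes real Langevin dynamics in hardware (coupling matrix in, noise at temperature T). For intuition only, here is the textbook software analogue: Euler-Maruyama integration of overdamped Langevin dynamics for a quadratic energy U(x) = ½ xᵀJx. The function name, step size, and quadratic form are illustrative assumptions, not the PROMETHEUS hardware:

```python
import numpy as np

def overdamped_langevin(J: np.ndarray, x0: np.ndarray, temperature: float,
                        dt: float = 1e-3, n_steps: int = 1000,
                        seed: int = 0) -> np.ndarray:
    """Euler-Maruyama integration of dx = -grad U(x) dt + sqrt(2 T dt) xi
    for the quadratic energy U(x) = 0.5 * x^T J x set by coupling matrix J.
    Returns the state after n_steps; at equilibrium each coordinate of
    the J = I case has variance ~ T."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    noise_scale = np.sqrt(2.0 * temperature * dt)
    for _ in range(n_steps):
        x += -(J @ x) * dt + noise_scale * rng.standard_normal(x.shape)
    return x

# d=3 toy with identity couplings; long enough to equilibrate.
x = overdamped_langevin(np.eye(3), np.zeros(3), temperature=0.5,
                        n_steps=20_000)
```

The hardware version replaces this loop with physics: the noise injection is thermal, and the work done while ramping J is what feeds the free-energy estimator.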
Patricio Lobos@strandedinoslo·
10/ Paper forthcoming. The PROMETHEUS platform is patent-pending. This opens a new experimental regime: systematic mapping of nonequilibrium estimator convergence across dimension, perturbation strength, and temperature. A phase diagram no molecular system can provide.
0
0
0
32
Patricio Lobos@strandedinoslo·
9/ The entire experiment was run with Claude Opus 4.6 as AI co-pilot via Anthropic's MCP protocol — designing protocols, controlling the hardware through tool calls, collecting 280,000+ samples, analysing results, and co-writing the paper. In real time. In one session. @AISafetyMemes
1
0
0
31
Patricio Lobos@strandedinoslo·
We just ran the first experimental verification of fluctuation theorems above d=3. Not simulation — live hardware. Dimension sweep from d=3 to d=200, real Langevin dynamics, real nonequilibrium thermodynamics. Thread 🧵 @NaveenGRao @claudeai @karpathy
1
0
0
39