Robert Miller (🦄,🦄)

6.9K posts

@RobertMiller

Data Gerd (Geek-Nerd), Cloud Architect, MS SQL DBA/Architect/BI juggler, Big Data (Streaming), Full-Stack Developer, Chesapeake Bay Retrievers, and electronics.

Joined March 2008
5.4K Following · 1.4K Followers
Robert Miller (🦄,🦄) retweeted
Abdul Șhakoor@abxxai·
BREAKING: 🚨 Someone just tested 35 AI models across 172 billion tokens of real document questions. The hallucination numbers should end the "just give it the documents" argument forever. Here is what the data actually showed.

The best model in the entire study, under perfect conditions, fabricated answers 1.19% of the time. That sounds small until you realize that is the ceiling. The absolute best case. Under optimal settings that almost no real deployment uses.

Typical top models sit at 5 to 7% fabrication on document Q&A. Not on questions from memory. Not on abstract reasoning. On questions where the answer is sitting right there in the document in front of it. The median across all 35 models tested was around 25%. One in four answers fabricated, even with the source material provided.

Then they tested what happens when you extend the context window. Every company selling 128K and 200K context as the hallucination solution needs to read this part carefully. At 200K context length, every single model in the study exceeded 10% hallucination. The rate nearly tripled compared to optimal shorter contexts. The longer the window people want, the worse the fabrication gets. The exact feature being sold as the fix is making the problem significantly worse.

There is one more finding that does not get talked about enough. Grounding skill and anti-fabrication skill are completely separate capabilities in these models. A model that is excellent at finding relevant information in a document is not necessarily good at avoiding making things up. They are measuring two different things that do not reliably correlate. You cannot assume a model that retrieves well also fabricates less.

172 billion tokens. 35 models. The conclusion is the same across all of them. Handing an LLM the actual document does not solve hallucination. It just changes the shape of it.
267 replies · 1.3K retweets · 4.9K likes · 476K views
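A rough illustration of the measurement described in the post above (a sketch only, not code or data from the study): document-grounded fabrication rate is simply the share of answers that the provided source does not support. The is_supported check below is a naive placeholder assumption; real benchmarks use entailment models or human raters.

def is_supported(answer: str, document: str) -> bool:
    # Placeholder check: every non-empty answer sentence must appear verbatim
    # in the source document. Real evaluations use entailment models or humans.
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return all(s in document for s in sentences)

def fabrication_rate(examples: list[dict]) -> float:
    # examples: [{"document": ..., "question": ..., "model_answer": ...}, ...]
    unsupported = sum(
        1 for ex in examples if not is_supported(ex["model_answer"], ex["document"])
    )
    return unsupported / len(examples)

# A median model from the post above would score roughly fabrication_rate(...) == 0.25.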
Robert Miller (🦄,🦄) retweeted
kanav@kanavtwt·
there's been an update
142 replies · 942 retweets · 7.6K likes · 716.7K views
AI Guides@free_ai_guides·
Anthropic literally tells you how to prompt Claude. Nobody reads it. So I read their docs, studied the research on "psychological" prompts, and turned it into something you'll actually use:
→ 30 principles with examples
→ Prompt engineering mini-course
→ 15 strategic use cases
→ 10+ copy-paste mega-prompts
Comment "Anthropic" and I'll DM it to you.
466 replies · 76 retweets · 758 likes · 105.9K views
Robert Miller (🦄,🦄) retweeted
Red Panda Mining@RedPandaMining·
🐼⛏️ NerdQaxe++ GIVEAWAY ⛏️🐼 I’m giving away a NerdQaxe++ from @PlebSource 🟧⚡ This is a Bitcoin SOLO miner — no pools, no payouts every day… just you vs the network and a shot at a full 3.125 + fees BTC block reward 👀
Perfect for:
✔️ Learning Bitcoin mining
✔️ Home & low-power setups
✔️ Lottery-style solo mining fun
How to enter:
🔁 Repost and like this post
👤 Follow @RedPandaMining & @PlebSource
💬 Comment your thoughts on Bitcoin solo mining currently
⏰ Winner announced in 7 days
Interested in purchasing this or similar Bitcoin miners? Check out PlebSource: geni.us/rpmPlebSource. They ship out of Texas, USA, and have the best customer support if you have any issues! Use code RPM for an additional 5% off if interested. Good luck, plebs 🟧🐼 #Bitcoin #SoloMining #BitcoinMining $BTC
560 replies · 531 retweets · 766 likes · 27.7K views
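For a sense of the "lottery-style" odds the giveaway post describes, here is a back-of-envelope sketch; the hashrate values are assumed example numbers, not figures from the post.

# Rough solo-mining odds sketch (illustrative only; hashrates below are assumptions).
SECONDS_PER_BLOCK = 600              # Bitcoin targets roughly 10-minute blocks
miner_hashrate = 5e12                # assumed ~5 TH/s for a small home miner
network_hashrate = 700e18            # assumed ~700 EH/s total network hashrate

p_block = miner_hashrate / network_hashrate           # chance this miner finds any given block
blocks_per_year = 365 * 24 * 3600 / SECONDS_PER_BLOCK
p_year = 1 - (1 - p_block) ** blocks_per_year          # chance of at least one block in a year

print(f"per-block odds: 1 in {1 / p_block:,.0f}")
print(f"chance of a 3.125 BTC + fees block within a year: {p_year:.4%}")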
Eric Cole@erichustls·
Screw it. I want to pay it forward. I’m giving away my *exclusive* AI playbook that took me from 0 to $90k/month.
• Like this post
• Comment “Freedom”
And I’ll DM you the link. (Must follow, 24 hours only)
448 replies · 49 retweets · 524 likes · 72.9K views
Robert Miller (🦄,🦄) retweeted
CoreX Hosting@CoreX_Hosting·
🚨GIVEAWAY TIME 🚨 We are giving away an S21Pro - 234T to be hosted at Iowa Mining. To enter, do the following:
1: Like this post
2: Follow us
3: Share / RT this post
Winner drawn: Friday January 30th #Bitcoin #crypto #Giveaways
322 replies · 679 retweets · 895 likes · 36.4K views
Robert Miller (🦄,🦄)@RobertMiller·
@RedCollie1 Does the "dowel pin" need to be metallic? This is unconventional. Could it be wood (would be damaged by the spinning magnetic disk) or some form of 'Corning' glass (susceptible to shearing forces), as two possible example alternatives?
0 replies · 0 retweets · 0 likes · 3 views
Red Collie (Dr. Horace Drew) scientist/inventor
I did a simple control experiment today, while setting up for a large experiment next week. In this brief video, I show where the "upward force" of UFO levitation comes from. It is so SIMPLE! Certain details of the device must be arranged correctly, just like for adjusting the shape of an airplane wing, to get "lift". Here we are pushing against the vacuum of space, in the style of a spinning gyroscope, rather than against air. One more important element must be added to get overall upward motion, bypassing Newton's 3rd Law. We will work on that soon, once all needed parts are made. TO THE STARS. 🛸
6 replies · 10 retweets · 79 likes · 3.6K views
Robert Miller (🦄,🦄)@RobertMiller·
@interesting_aIl Calling shenanigans. I did not see a GE 415 at any time in the pictures. :) 64k-word mag-core memory, card-reader, mag-tape, band-printer.
0 replies · 0 retweets · 0 likes · 144 views
Interesting AF@interesting_aIl·
The evolution of computers from 1940 to 2100 depicted with AI
53 replies · 67 retweets · 473 likes · 48.3K views
4nzn@paoloanzn·
I just one-shotted an entire $80K brand team using a JSON context profile + one prompt. Agencies burn 6 weeks on strategies; this created a complete brand architecture in 4 minutes. No more $50K+ consultants. Comment "BRAND" + RT + like and I'll send you the prompt.
203 replies · 111 retweets · 228 likes · 24.6K views
Nic DiSalle@nicdisalle·
People think I'm crazy for spending $50k/mo... But here's what they don't know:
62 replies · 98 retweets · 566 likes · 151.8K views
Brian Roemmele@BrianRoemmele·
“The National Library to dispose of half a million books” So many rather smart folks think I have not a clue on the advancement of the Age Of Amnesia. I have stated we lose more important data than we retain. So much of this destruction is in the open: newstalkzb.co.nz/on-air/heather…
53 replies · 123 retweets · 489 likes · 45.8K views
Robert Miller (🦄,🦄)@RobertMiller·
This is huge. With @BrianRoemmele as the source and instigator, I look forward to the forthcoming shift and upset in current "AI" technology as what was old becomes new, again.
Brian Roemmele@BrianRoemmele

BOOM! MAJOR AI SPEEDUP! Hot Rod AI: 100 times faster inference, 100,000 times less power! — Reviving Analog Circuits: A Leap Toward Ultra-Efficient AI with In-Memory Attention

I got my start in analog electronics when I was a kid and always thought analog computers would make a comeback. Analog neural-network computing of the 1960s used voltage-based circuits rather than binary clocks.

Analog is Faster Than Digital

At the core of large language models lies the transformer architecture, where self-attention mechanisms sift through vast sequences of data to predict the next word or token. On conventional GPUs, shuttling data between memory caches and processing units devours time and energy, bottlenecking the entire system. They require a clock cycle to precisely move bits in and out of memory and registers, and this is >90% of the time and energy overhead. But now a groundbreaking study proposes a custom in-memory computing setup that could slash these inefficiencies, potentially reshaping how we deploy generative AI.

The innovation centers on "gain cells"—emerging charge-based analog memories that double as both storage and computation engines. Unlike digital GPUs, which laboriously load token projections from cache into SRAM for each generation step, this architecture keeps data where the math happens: right ON THE CHIP, with a clock speed near THE SPEED OF LIGHT, because it is never on/off like in digital binary. By leveraging parallel analog dot-product operations, the design computes self-attention natively, sidestepping the data movement that plagues GPU hardware.

To bridge the gap between ideal digital models and the noisy realities of analog circuits, the researchers devised a clever initialization algorithm. This method adapts pre-trained LLMs, such as GPT-2, without the need for full retraining, ensuring seamless performance parity despite non-idealities like voltage drifts or precision limits.

The results are nothing short of staggering! Simulations show the system slashing attention latency for 100 times faster inference during token generation—while curbing energy use by a jaw-dropping five orders of magnitude, or 100,000 times less power-hungry than GPU baselines. For context, this could mean running a full LLM on a device no larger than a card deck, without the thermal throttling or grid-straining demands of today's data centers. The approach targets the attention block specifically, the transformer’s energy hog, but it also opens the door to broader integration with other in-memory techniques to turbocharge the entire model pipeline.

Analog tech isn't pie-in-the-sky quantum wizardry; it's grounded in ancient, mature electronics theory, with gain cells already prototyped in labs. The only engineering issues, and they are simple ones: tolerances for noise, scaling arrays of cells, and fabricating at microchip densities. Existing CMOS processes need only tweaks for analog fidelity. From there, full ecosystem integration, including software stacks for model adaptation, could happen in a year, disrupting GPU dominance sooner than skeptics predict. Risks are low, but hybrid digital-analog interfaces could introduce unforeseen bugs. However, this can be rapidly iterated on and addressed.

This isn't just hardware tinkering; it's a philosophical pivot back to AI's analog origins, where computation flows continuously rather than ticking in discrete cycles. This in-memory attention could democratize AI power, making low-power, lightning-fast AI not a luxury but an inevitability for even the smallest devices.

Most have no idea how big this is: it is the biggest shift in AI since the invention of LLMs. The world will struggle to find truly experienced analog engineers; most are gone. In my garage I will have a test of Analog CMOS Gain Cells using off-the-shelf parts in the next few days; if Radio Shack was still around I would have done it today. I suspect I can scale to a proto AI model in a few weeks.

PAPER: arxiv.org/abs/2409.19315

0 replies · 0 retweets · 2 likes · 103 views
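For context on the attention block the quoted post keeps referring to, this is the standard digital computation it describes; the paper's claim is that analog gain-cell arrays evaluate the same dot products in place instead of shuttling keys and values between memory and compute. A minimal NumPy sketch with illustrative shapes, not an implementation of the paper's circuit:

import numpy as np

def attention(Q, K, V):
    # Q, K, V: (n, d) query/key/value matrices for a single head
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # dot products: the part mapped onto analog arrays
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
out = attention(Q, K, V)                             # shape (8, 16)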
Sadie@Sadie_NC·
What are you doing? 😂😂😂
94 replies · 55 retweets · 83 likes · 27.4K views
Robert Miller (🦄,🦄)@RobertMiller·
@spacesudoer This! This is Mega Machines class and science fact. Science fiction is when we do not need massive rockets to get off-planet.
0 replies · 0 retweets · 0 likes · 52 views
Yatharth Mann@yatharthmann·
Straight out of science fiction.
114 replies · 412 retweets · 3.4K likes · 58.3K views