Misc

4.6K posts

@miscmod

#ExtremeModerate (it's complicated). Below the median at mind-reading. On the spectrum ... of self awareness. Capable of learning. I'm not a sycophant. #BeKind

Idaho, USA · Joined February 2017
3.8K Following · 381 Followers
Pinned Tweet
Misc
Misc@miscmod·
> 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species. Sounds like a good time to recalibrate gradients... x.com/jimmybajimmyba…
Jimmy Ba@jimmybajimmyba

Last day at xAI. xAI's mission is to push humanity up the Kardashev tech tree. Grateful to have helped cofound at the start. And enormous thanks to @elonmusk for bringing us together on this incredible journey. So proud of what the xAI team has done and will continue to stay close as a friend of the team. Thank you all for the grind together. The people and camaraderie are the real treasures at this place. We are heading to an age of 100x productivity with the right tools. Recursive self improvement loops likely go live in the next 12mo. It’s time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.

1 reply · 0 reposts · 0 likes · 163 views
Misc
Misc@miscmod·
@anandragn The promise of vibe-coding is that you'll get exactly the interfaces and dashboards that you want, with all the nuance you can conceive.
1 reply · 0 reposts · 0 likes · 42 views
Anand
Anand@anandragn·
In the era of AI slop, “yet another dashboard” is the new slop. More interfaces. More dashboards. More noise packaged as “insight”… Most tools simply tell you what already happened. The software version of reactive news reporting. The real edge isn’t dashboards. It’s the intelligence layer on top that adds nuance, context, depth, probability, and a sense of what’s likely to happen next… built around how you operate, then augments and amplifies your edge. Most tools will never get there. Only a handful will.
16 replies · 4 reposts · 112 likes · 10.5K views
Ein
Ein@cebaceps·
@TheWokestToEver @SenateDems look, i'm sorry but there are just some things that a government should be in control of. setting the standards for the safety and security of their populace and having that responsibility be theirs and not contracted out to a third party is one of their primary roles
1 reply · 0 reposts · 0 likes · 18 views
Misc
Misc@miscmod·
@bennyjohnson Word on the street is that a Star Card doesn't prove U.S. citizenship in most states. This could get messy.
0 replies · 0 reposts · 0 likes · 6 views
Benny Johnson
Benny Johnson@bennyjohnson·
I just went through TSA. Had to show my Government ID. Turns out — 90% of America have flown commercial and had to do the same thing. That’s +300 million people. If we can do it to fly we can do it to vote. Both must require absolute security. PASS the SAVE America Act 🇺🇸
4K replies · 17.6K reposts · 80.5K likes · 1.7M views
Misc
Misc@miscmod·
@SecDuffy We should learn some lessons from the shutdown. There are certain things that the government just isn't good at. It's unacceptable that a handful of senators can cripple the travel industry. Would it work better if air-travel operations were fully funded by the private sector?
1 reply · 0 reposts · 1 like · 137 views
Secretary Sean Duffy
Secretary Sean Duffy@SecDuffy·
Before the shutdown, an average of 4 controllers retired a day. That number has now jumped to 15 to 20 a day. It’s pretty simple — when Democrats vote 14 times against controllers getting paid, it’s hard to convince them to stay in the profession. End the shutdown NOW.
1.4K replies · 3.6K reposts · 25.8K likes · 1.2M views
Secretary Sean Duffy
America is building BIG, BEAUTIFUL infrastructure again! @USDOT’s Beautifying Transportation Infrastructure Challenge is bringing together a dream team of architects and engineers to usher in the Golden Age of Transportation 👷🛣️🏗️ Think you have the next big idea? Apply below ⬇️
43 replies · 110 reposts · 698 likes · 14.5K views
British Tim
British Tim@TeaconomistT·
@jonatanpallesen In a world of generative AI and robotics, does this even matter? Today somebody with an 80 IQ can vibe code a web site, and use AI to deliver the marketing material and help with budgeting he needs to sell his farm produce.
26 replies · 1 repost · 14 likes · 3.7K views
Jonatan Pallesen
Jonatan Pallesen@jonatanpallesen·
The total number of smart people in the world has just peaked. And now it's about to crash.
Jonatan Pallesen tweet media
453 replies · 529 reposts · 6.7K likes · 1.4M views
Misc reposted
DANISH
DANISH@astrodanish·
Your brain is under attack by a trillion dollar adversary intent on destroying it. This is your David vs Goliath. Resist the algorithm.
164 replies · 657 reposts · 5.1K likes · 857.5K views
Misc
Misc@miscmod·
Loose "pattern matching" allows language models to handle the ambiguity that rule-based software can't. What if this surprising emergent strength of language models turns out to also be their primary limitation? For planning and reasoning, there's value in being literal.
3 replies · 0 reposts · 0 likes · 30 views
Misc
Misc@miscmod·
Human intuition and machine learning are fundamentally the same. Handle them both as such. “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” — Andrew Ng towardsdatascience.com/the-limitation…
3 replies · 0 reposts · 0 likes · 0 views
Misc
Misc@miscmod·
Hints at something better than brute-force. > At 200K context length, every single model in the study exceeded 10% hallucination. The rate nearly tripled compared to optimal shorter contexts. x.com/abxxai/status/…
Abdul Șhakoor@abxxai

BREAKING: 🚨 Someone just tested 35 AI models across 172 billion tokens of real document questions. The hallucination numbers should end the "just give it the documents" argument forever. Here is what the data actually showed.

The best model in the entire study, under perfect conditions, fabricated answers 1.19% of the time. That sounds small until you realize that is the ceiling. The absolute best case. Under optimal settings that almost no real deployment uses.

Typical top models sit at 5 to 7% fabrication on document Q&A. Not on questions from memory. Not on abstract reasoning. On questions where the answer is sitting right there in the document in front of it. The median across all 35 models tested was around 25%. One in four answers fabricated, even with the source material provided.

Then they tested what happens when you extend the context window. Every company selling 128K and 200K context as the hallucination solution needs to read this part carefully. At 200K context length, every single model in the study exceeded 10% hallucination. The rate nearly tripled compared to optimal shorter contexts. The longer the window people want, the worse the fabrication gets. The exact feature being sold as the fix is making the problem significantly worse.

There is one more finding that does not get talked about enough. Grounding skill and anti-fabrication skill are completely separate capabilities in these models. A model that is excellent at finding relevant information in a document is not necessarily good at avoiding making things up. They are measuring two different things that do not reliably correlate. You cannot assume a model that retrieves well also fabricates less.

172 billion tokens. 35 models. The conclusion is the same across all of them. Handing an LLM the actual document does not solve hallucination. It just changes the shape of it.

0 replies · 0 reposts · 1 like · 23 views
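The percentages in the study summary above are just fractions of judged answers. As a minimal sketch (with hypothetical judgments, not the study's actual data), the quoted "median around 25%" corresponds to one fabricated answer in four:

```python
# Illustrative only: hypothetical judgments, not the study's data.
# A "fabrication rate" is the fraction of answers judged unsupported
# by the document that was provided to the model.
judgments = ["supported", "fabricated", "supported", "supported"]

fabricated = sum(1 for j in judgments if j == "fabricated")
rate = fabricated / len(judgments)
print(f"{rate:.0%}")  # 25%, i.e. one in four answers fabricated
```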
Misc reposted
LonelySloth
LonelySloth@lonelysloth_sec·
Saying LLMs have a hallucination problem is actually a bit misleading. Everything they do is hallucination— except for ipsis litteris text retrieval. They have no ground truth, no rules to extrapolate from known facts, no way to actually verify validity. Sometimes the hallucinations match reality. But you can dream about real events too.
4 replies · 6 reposts · 114 likes · 9.7K views
Misc reposted
Tommy. T
Tommy. T@tallmetommy·
@TukiFromKL AI is draining the knowledge moats. What replaces them are leverage moats.
Distribution
Trust
Taste
Speed
When intelligence becomes abundant, the scarce resource becomes judgment.
0 replies · 1 repost · 12 likes · 291 views
Misc reposted
Cuckturd
Cuckturd@CattardSlim·
Where are we sitting Maga? Mark my words. By January 20th, 2026 EVERYONE'S energy bills will be cut in 1/2. Electricity, oil, gas, natural gas, everything.
620 replies · 2K reposts · 8.8K likes · 671.2K views
Bryson 🦄
Bryson 🦄@brysonbort·
Explain AI to me in 0 words.
116 replies · 5 reposts · 73 likes · 10.3K views
Misc reposted
Two Cent Philosophy
Two Cent Philosophy@Dividend_Dojo·
The Borrowed History Predicament

There’s a video online of Kobe Bryant streaming himself playing NBA 2K. He never did this in real life. But the video looks convincing enough that, for a moment, you could believe it. Your brain doesn’t reject it — it slips into memory, as if it might have happened.

That’s the Borrowed History Predicament. Nearly everything we “know” comes not from deduction or direct experience, but from inherited accounts — things we’ve read, been told, or seen recorded. A book, a teacher, a news clip, a historical documentary, a YouTube highlight reel. You weren’t there. You couldn’t possibly have been there for most of it. You borrow it on trust. And this trust has always been fragile. Now it is dissolving.

⸻

The Three Paths of Knowing

Human beings have only a few ways to know anything at all:

1. Deduction – timeless truths derived from logic or mathematics. If 2 + 2 = 4, it will always equal four. But deduction is limited to formal systems, not to the messy facts of life.
2. Intuition – what you directly feel or perceive. “I see red.” “I am in pain.” These cannot be doubted from the first-person perspective. Yet they are private and fleeting.
3. Borrowed History – everything handed to you through language, testimony, record, culture. You know Julius Caesar crossed the Rubicon because others told you. You know antibiotics work because science — a chain of experiments you never witnessed — assures you. Borrowed history is the scaffolding of civilization.

You can add induction if you like — the reasoning from repeated experience to general law. But in practice, induction collapses into intuition when done firsthand, and into borrowed history when accepted secondhand. Almost all scientific knowledge you hold is borrowed, not discovered by your own observation.

⸻

Phase-Locked Truths

Physics gives us a metaphor here. In quantum mechanics, when you try to measure an electron’s position, you disturb it. The question “what was the electron doing before you looked?” becomes unanswerable. The truth is locked out of reach by the very act of observation.

History works the same way. Every event not witnessed directly is phase-locked. The moment it is told, retold, recorded, or reframed, it is no longer the untouched event but the disturbed trace. We never hold the raw truth. We hold the borrowed history of the truth.

⸻

Enter the Fabricators

For most of history, the fragility of borrowed history was cushioned by practical limits. Forgery existed, but it was costly. The printing press made lies travel faster, but physical evidence and scarcity still mattered.

Now, AI systems like Sora can produce endless streams of convincing synthetic testimony. A video of Kobe streaming 2K. Tomorrow, an AI Martin Luther King Jr. speech he never gave. Next week, “historical footage” of a war that never happened.

The danger isn’t simply that we’ll be fooled in the moment. The danger is cultural drift. Over time, these fabricated pieces will embed themselves into memory, citation, and belief. They will become indistinguishable from genuine history in the collective mind. Borrowed history, once corrupted, cannot be uncorrupted.

⸻

The Predicament

Deduction is pure but abstract. Intuition is pure but private. Borrowed history is impure but universal. Civilization rests almost entirely on it — and it is dissolving before our eyes.

The Kobe Bryant video is a glimpse of the coming storm: futures that never happened will bleed into the past that did. The trust gap will widen until nothing is safe to borrow. At that point, humanity faces an epistemic collapse. We cannot function without borrowed history, but we cannot secure it once fabrication outpaces verification. The foundations of human knowing are cracking.

⸻

A Warning

Each day, more synthetic memories slip into the collective record. Civilization is a house built of borrowed histories. And the foundations are beginning to crack.
“The future is now.” -Kane
𝐏𝐮𝐫𝐩𝑮𝒐𝒍𝒅 🏆@PurpGolded

Kobe Bryant streaming 2k What has AI come to dawg 😭

19 replies · 53 reposts · 193 likes · 287.3K views