Liminal AGI

13.1K posts


@LiminalAGI

Superintelligence is almost here | All in on @xAI | Accelerate!

Final Frontier · Joined April 2025
1.1K Following · 135 Followers
Pinned Tweet
Liminal AGI @LiminalAGI
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon. Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
2 replies · 2 reposts · 11 likes · 3.6K views
Liminal AGI retweeted
THE RED DRAGON @TheRedDragon
They're freaking out because it turns out most people do not care that AI was used so long as the game is good. The dev says it's unacceptable for AI to be used during any part of the process. They will be left behind. X is not representative of how most people feel. It's AM on a Monday..
22 replies · 9 reposts · 167 likes · 6.9K views
Liminal AGI @LiminalAGI
@orzeeee @RBIII_Ricster @xenesed @FountainCartoon @TheRedDragon You are backpedaling and everybody can see that. What I said earlier (which you claimed to be a "lie"): "Copyright offices have been pretty clear on this, works made in collaboration can absolutely be copyrighted. An image based on a simple prompt can't"
3 replies · 0 reposts · 2 likes · 29 views
orze @orzeeee
@LiminalAGI @RBIII_Ricster @xenesed @FountainCartoon @TheRedDragon So I'm gonna blow your mind. Edits can both be sufficient and not. Most AI work isn't, given it's cuts in video or color changes at best. I get your brain has rotted away, but again, god of the gaps won't work here. And any dunk you think you got on me got debunked. So cope?...
1 reply · 0 reposts · 0 likes · 30 views
THE RED DRAGON @TheRedDragon
The only argument against generative AI is that it learns from human work, so they call it 'stolen.' By that logic, every artist is a thief. None of them invented drawing; they went to school or studied others and their techniques and built on them. Every musician learned chords someone else invented. Every writer absorbed styles from books they read.

AI learns exactly the way humans do: by observing, absorbing, and building on what came before. It just does it immensely faster and better than humans. That's literally how learning works for everyone.

The only other argument I've seen is people saying it looks like crap. Look where it's come in only 3 years. It's already fooling most people because it looks so good. In another year, you won't be able to use that excuse either.
323 replies · 48 reposts · 440 likes · 108.7K views
Liminal AGI retweeted
Similarweb @Similarweb
LLM website engagement metric comparison: Average visit duration ⏱️ >>
3 replies · 6 reposts · 79 likes · 32.9K views
Liminal AGI retweeted
Chubby♨️ @kimmonismus
That is really, really impressive: GPT-5.4 Pro has solved one of the open problems in FrontierMath. Kevin Barreto and Liam Price, using GPT-5.4 Pro, produced a construction that Will Brian confirmed, with a write-up planned for publication. We are accelerating.
Epoch AI @EpochAIResearch

AI has solved one of the problems in FrontierMath: Open Problems, our benchmark of real research problems that mathematicians have tried and failed to solve. See thread for more.

11 replies · 53 reposts · 565 likes · 37.7K views
Liminal AGI retweeted
Mike Solana @micsolana
a lot of kids don't remember this but there was a time, just a few years ago, when we all had to pretend this was a 'serious reporter' rather than a bravo television personality with some kind of unfortunate mental illness
72 replies · 247 reposts · 3.7K likes · 106.1K views
Liminal AGI retweeted
Craig Weiss @craigzLiszt
The next phase of AI coding will involve autonomous and self-improving codebases. I think we're close.
56 replies · 13 reposts · 268 likes · 8.6K views
Liminal AGI retweeted
Varun @varun_mathur
The Cost of Intelligence is Heading to Zero | Hyperspace P2P Distributed Cache

We present our breakthrough cross-domain work across AI, distributed systems, cryptography, and game theory, solving the primary structural inefficiency at the heart of AI infrastructure: most inference is redundant.

Google has reported that only 15% of daily searches are truly novel; the rest are repeats or close variants. LLM inference inherits the same power-law distribution. Enterprise chatbots see 70-80% of queries fall into a handful of intent categories. System prompts are identical across 100% of requests within an application. The KV attention state for "You are a helpful assistant" has been computed billions of times, on millions of GPUs, identically.

And yet every AI lab, every startup, every self-hosted deployment computes and caches these results independently. There is no shared layer. No global memory. Every provider pays the full compute cost for every query, even when the answer already exists somewhere in the network.

This is the problem Hyperspace solves. Its distributed cache operates at three levels, each catching a different class of redundancy:

1. Response cache. Same prompt, same model, same parameters: an instant cached response from any node in the network. SHA-256 hash lookup via DHT, with cryptographic cache proofs linking every response to its original inference execution. No trust required. Fetchers re-announce as providers, so popular responses replicate naturally across more nodes.

2. KV prefix cache. Same system-prompt tokens: skip the most expensive part of inference entirely. Prefill (computing Key-Value attention states) is deterministic: the same model plus the same tokens always produces an identical KV state. The network caches these states using erasure coding and distributes them via the routing network. New questions that share a common prefix resume generation from cached state instead of recomputing from scratch.

3. Routing to cached nodes. Instead of transferring KV state across the network for every request, Hyperspace routes the request to the node that already has the state loaded in VRAM. The request goes to the cache, not the cache to the request.

Together, these three layers mean that 70-90% of inference requests at network scale never require full GPU computation.

This work doesn't exist in isolation. It builds on research from across the industry. SGLang's RadixAttention demonstrated that automatic prefix sharing can yield up to 5x speedup on structured LLM workloads. Moonshot AI's Mooncake built an entire KV-cache-centric disaggregated architecture for production serving at Kimi. Anthropic, OpenAI, and Google all launched prompt caching products in 2024, priced at 50-90% discounts, because system-prompt reuse is so pervasive that it changes the economics of inference.

What all of these systems share is a common limitation: they operate within a single organization's infrastructure. SGLang caches prefixes within one server. Mooncake disaggregates KV cache within one datacenter. Anthropic's prompt caching works within one API provider's fleet. None of them can share cached state across organizational boundaries.

Hyperspace removes this boundary. The cache is global. A response computed by a node in Tokyo is immediately available to a node in Berlin. A KV prefix state generated for Qwen-32B on one machine is verifiable and reusable by any other machine running the same model.
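[Editor's sketch] A minimal illustration of the level-1 response cache described above. The DHT interface, key schema, and helper names here are hypothetical stand-ins (the post doesn't specify them); only the core idea is from the post: content-address the full request with SHA-256 so byte-identical requests resolve to the same cached answer.

```python
import hashlib
import json

class InMemoryDHT:
    """Stand-in for the network's DHT; a real node would route
    get/put to peers and verify the cryptographic cache proofs."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value

def cache_key(model: str, params: dict, prompt: str) -> str:
    # Canonical JSON so byte-identical requests hash identically:
    # same prompt + same model + same parameters -> same SHA-256 key.
    canonical = json.dumps(
        {"model": model, "params": params, "prompt": prompt},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def infer_with_cache(dht, model, params, prompt, run_inference):
    key = cache_key(model, params, prompt)
    cached = dht.get(key)
    if cached is not None:
        return cached                    # hit: no GPU work at all
    response = run_inference(prompt)     # miss: pay full compute once
    dht.put(key, response)               # announce so others can reuse it
    return response

dht = InMemoryDHT()
model_stub = lambda p: f"answer({p})"    # placeholder for real inference
print(infer_with_cache(dht, "qwen-32b", {"temperature": 0}, "2+2?", model_stub))
print(infer_with_cache(dht, "qwen-32b", {"temperature": 0}, "2+2?", model_stub))  # cache hit
```

Note that exact-match response caching only makes sense for deterministic sampling settings; anything with temperature above zero would have to fall back to the KV-prefix level.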
The routing network provides the delivery guarantees, the erasure coding provides the redundancy, and the cache proofs provide the trust.

What this means for the cost of intelligence: big AI labs scale linearly; twice the users means twice the GPU spend. Every query is a cost center. Their internal caching helps, but it's siloed: Lab A's cache can't serve Lab B's users, and neither can serve a self-hosted Llama deployment.

Hyperspace scales sub-linearly. Every new node that joins the network adds to the global cache. Every inference result enriches the cache for all future requests. The cache hit rate rises with network size because query distributions follow a power law: the most common questions are asked exponentially more often than rare ones.

The implication is simple: as the network grows, the effective cost per inference drops. Not linearly; logarithmically. At 10 million nodes, we estimate 75-90% of all inference requests can be served from cache, eliminating 400,000+ MWh of energy consumption per year and avoiding over 200,000 tons of CO2 emissions. The first person to ask a question pays the compute cost. Everyone after them gets the answer for free, with cryptographic proof that it's authentic.

Training is competitive; inference is shared. Open-weight models are converging on quality with closed models. Labs will continue to differentiate on training: data curation, architecture innovation, RLHF tuning. That's where the real intellectual property lives.

But inference is a commodity. Two copies of Qwen-32B running the same prompt produce the same KV state and the same response, byte for byte, regardless of whose GPU runs the matrix multiplication. There is no moat in multiplying matrices. The moat is in training the weights.

A global distributed cache makes this separation explicit. It doesn't matter who trained the model. Once the weights are open, the inference cost approaches zero at scale, because the network remembers every answer and can prove it's correct.

No lab, no matter how well-funded, can match this. They cannot share caches across competitors. They scale linearly; the network scales logarithmically. The marginal cost of intelligence approaches zero. That's the endgame.
13 replies · 21 reposts · 156 likes · 19.7K views
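[Editor's sketch] A back-of-the-envelope check on the power-law claim in the post above: if query frequencies are Zipf-distributed with exponent 1, a cache of the top-k distinct queries serves H_k / H_N of traffic, which grows only logarithmically in k. The universe size and exponent below are illustration values chosen here, not figures from Hyperspace.

```python
import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant

def harmonic(n: int) -> float:
    """H_n ~= ln(n) + gamma; accurate to O(1/n) for large n."""
    return math.log(n) + GAMMA

def zipf_hit_rate(cache_top_k: int, universe: int) -> float:
    """Fraction of requests served from a cache of the top k distinct
    queries, if query i occurs with frequency proportional to 1/i."""
    return harmonic(cache_top_k) / harmonic(universe)

UNIVERSE = 100_000_000  # illustrative: 100M distinct queries
for k in (10_000, 1_000_000, 100_000_000):
    print(f"top {k:>11,} cached -> ~{zipf_hit_rate(k, UNIVERSE):.0%} hit rate")
```

Under these made-up numbers, caching the top 1M of 100M distinct queries already serves roughly three quarters of traffic, and doubling the cache adds only a constant increment (ln 2 / H_N) to the hit rate, which is the precise sense in which a cache like this "scales logarithmically."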
Liminal AGI retweeted
Nav Toor @heynavtoor
🚨 PhD students are panicking. OpenAI just told the world: we don't care about your degree. Build the best AI model under 16MB and we'll find you. That's smaller than one photo on your phone.

It's called Parameter Golf. Train the smartest language model you can. It must fit in 16 megabytes. You get 10 minutes on 8x H100 GPUs. Lowest score wins. OpenAI is backing it with $1,000,000 in free compute credits. No resume. No interview. No PhD required. Just build.

Here's what's inside this thing:
→ A public leaderboard where anyone can submit
→ Competitors beating each other's scores within hours
→ Architectures nobody has ever tried before
→ The baseline scored 1.2244; in 3 days it dropped to 1.1428. Still falling.
→ 236 pull requests, 1,500 forks, and a leaderboard that changes every few hours

Here's the wildest part: top performers get noticed by OpenAI researchers and recruiters directly. No application. No hiring pipeline. Your model IS your resume.

AI labs spend millions recruiting through conferences and university pipelines. OpenAI just replaced all of that with a single GitHub repo. The challenge runs until April 30th. Everything is public: 3.1K GitHub stars, MIT License, 100% open source.
37 replies · 162 reposts · 1.4K likes · 91.4K views
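[Editor's sketch] For scale on the 16MB constraint, a hedged bit of arithmetic (my own, assuming a raw uncompressed weight dump; the contest's exact size accounting isn't given in the post): the parameter budget depends heavily on precision.

```python
# Hypothetical arithmetic: weights that fit in a 16 MB checkpoint,
# assuming a raw dump with no compression or metadata overhead.
BUDGET_BYTES = 16 * 1024 * 1024  # 16 MiB

for precision, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    max_params = BUDGET_BYTES // bytes_per_param
    print(f"{precision:>9}: ~{max_params / 1e6:.0f}M parameters")
# fp32 -> ~4M, fp16/bf16 -> ~8M, int8 -> ~17M parameters
```

That puts the whole model two to three orders of magnitude below today's small open-weight LLMs, which is what makes the architecture search interesting.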
Liminal AGI retweeted
Nick shirley @nickshirleyy
This is how the daycare fraud works:
- "You watch my kid, I'll watch yours"
- Enroll these kids into "daycares"
- Collect money from the government
- You and your family then get to live off government subsidies
California has over 35,000 licensed daycare facilities.
1.3K replies · 19.9K reposts · 82.7K likes · 1.5M views
Liminal AGI retweeted
Allegra Jacchia @allegrajacchia
We learned a lot today (and had some fun too). Thanks for stopping by :) @nickshirleyy
38 replies · 60 reposts · 2.9K likes · 61.8K views
Liminal AGI retweeted
SNS 🇺🇸 @SNS_Anon
Leftists spent years fantasizing about a character they would call for the killing of in real life for being a cop and a government agent, fantasizing about him being a pronouns-in-bio, anti-ICE, pro-trans liberal, and insisting that the right shouldn't use him for their agenda, only for the actual dude to be exactly what the right was using him for lmao.
Daily Romania @daily_romania

Eduard Bădăluță, the Romanian model for the character Leon S. Kennedy in Resident Evil, went viral after liking anti-immigration and transphobic posts

26 replies · 175 reposts · 4K likes · 114.6K views
Liminal AGI retweeted
Dezgo @dezgo
For every loud AI hater, there are 100 people quietly using it and loving it.
115 replies · 32 reposts · 1.1K likes · 1.9M views