Liminal AGI

13.1K posts

Liminal AGI
@LiminalAGI

Superintelligence is almost here | All in on @xAI | Accelerate!

Final Frontier · Joined April 2025
1.1K Following · 135 Followers
Pinned Tweet
Liminal AGI @LiminalAGI ·
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon. Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
2 replies · 2 reposts · 11 likes · 3.6K views
Liminal AGI @LiminalAGI ·
@sheablo4 @bestfilly @RetnaburnX @san_goh97 @tripoai Yeah, or a complete nutcase. He spammed me with a total of four responses and then blocked me. Anyway, every image/video model you'd come across nowadays is based on a neural network. Obvious stuff that anyone could look up. It's almost like he's trying to look stupid.
0 replies · 0 reposts · 1 like · 8 views
Sheablo4 @sheablo4 ·
@bestfilly @LiminalAGI @RetnaburnX @san_goh97 @tripoai You're a god damn liar. LMAO Are you pretending to know what you're talking about for attention from your stupid cult? Does it feel good knowing that you only get that attention by being a fraud? Absolutely fucking pathetic.
1 reply · 0 reposts · 1 like · 17 views
Ric B. (MonteRicard) @RBIII_Ricster ·
@LiminalAGI @orzeeee @xenesed @FountainCartoon @TheRedDragon There's plenty; maybe if you weren't so busy name-calling and projecting, you'd be able to use your own autonomy and search for yourself. (You won't, though, since you actually don't care and are tunnel-visioned on your narrative.) Also, again, advice you won't heed: stop projecting.
[image]
2 replies · 0 reposts · 1 like · 17 views
THE RED DRAGON @TheRedDragon ·
The only argument against generative AI is that it learns from human work, so they call it 'stolen.' By that logic, every artist is a thief. None of them invented drawing; they went to school or studied others' techniques and built on them. Every musician learned chords someone else invented. Every writer absorbed styles from books they read.

AI learns exactly the way humans do: by observing, absorbing, and building on what came before. It just does it immensely faster and better than humans. That's literally how learning works for everyone.

The only other argument I've seen is people saying it looks like crap. Look where it's come in only 3 years. It's already fooling most people because it looks so good. In another year, you won't be able to use that excuse either.
328 replies · 48 reposts · 445 likes · 117.9K views
orze @orzeeee ·
@LiminalAGI @RBIII_Ricster @xenesed @FountainCartoon @TheRedDragon I haven't changed what I was saying at any point. Again, I'm not gonna play god-of-the-gaps for you, dude. Please be better and actually get your facts down. I'd advise you to stop coping and look into why AI companies are currently fighting in court for the ability to copyright.
1 reply · 0 reposts · 2 likes · 20 views
Liminal AGI reposted
THE RED DRAGON @TheRedDragon ·
They're freaking out because it turns out most people do not care that AI was used, so long as the game is good. Dev says it's unacceptable for AI to be used during any part of the process. They will be left behind. X is not representative of how most people feel. It's AM on a Monday..
[two images]
22 replies · 9 reposts · 167 likes · 6.9K views
Liminal AGI @LiminalAGI ·
@orzeeee @RBIII_Ricster @xenesed @FountainCartoon @TheRedDragon You are backpedaling and everybody can see that. What I said earlier (which you claimed to be a "lie"): "Copyright offices have been pretty clear on this, works made in collaboration can absolutely be copyrighted. An image based on a simple prompt can't"
3 replies · 0 reposts · 2 likes · 40 views
orze @orzeeee ·
@LiminalAGI @RBIII_Ricster @xenesed @FountainCartoon @TheRedDragon So I'm gonna blow your mind. Edited work can be sufficient, and it can also not be. Most AI work isn't; it's given cuts in video or color changes at best. I get your brain has rotted away, but again, god-of-the-gaps won't work here. And any dunk you think you got on me got debunked. So cope?...
1 reply · 0 reposts · 1 like · 38 views
Liminal AGI reposted
Similarweb @Similarweb ·
LLM website engagement metric comparison: Average visit duration ⏱️ >>
[image]
3 replies · 6 reposts · 80 likes · 33.6K views
Liminal AGI reposted
Chubby♨️ @kimmonismus ·
That is really, really impressive: GPT-5.4 Pro has solved one of the open problems in FrontierMath. Kevin Barreto and Liam Price, using GPT-5.4 Pro, produced a construction that Will Brian confirmed, with a write-up planned for publication. We are accelerating.
[image]
Quoting Epoch AI @EpochAIResearch:

AI has solved one of the problems in FrontierMath: Open Problems, our benchmark of real research problems that mathematicians have tried and failed to solve. See thread for more.

11 replies · 53 reposts · 568 likes · 38.1K views
Liminal AGI reposted
Mike Solana @micsolana ·
a lot of kids don't remember this but there was a time, just a few years ago, when we all had to pretend this was a 'serious reporter' rather than a bravo television personality with some kind of unfortunate mental illness
[image]
73 replies · 253 reposts · 3.8K likes · 109.6K views
Liminal AGI reposted
Craig Weiss @craigzLiszt ·
The next phase of AI coding will involve autonomous and self-improving codebases. I think we're close.
60 replies · 13 reposts · 275 likes · 8.8K views
Liminal AGI reposted
Varun @varun_mathur ·
The Cost of Intelligence is Heading to Zero | Hyperspace P2P Distributed Cache

We present our breakthrough cross-domain work across AI, distributed systems, cryptography, and game theory to solve the primary structural inefficiency at the heart of AI infrastructure: most inference is redundant.

Google has reported that only 15% of daily searches are truly novel. The rest are repeats or close variants. LLM inference inherits this same power-law distribution. Enterprise chatbots see 70-80% of queries fall into a handful of intent categories. System prompts are identical across 100% of requests within an application. The KV attention state for "You are a helpful assistant" has been computed billions of times, on millions of GPUs, identically.

And yet every AI lab, every startup, every self-hosted deployment computes and caches these results independently. There is no shared layer. No global memory. Every provider pays the full compute cost for every query, even when the answer already exists somewhere in the network.

This is the problem Hyperspace solves: a distributed cache that operates at three levels, each catching a different class of redundancy.

1. Response cache. Same prompt, same model, same parameters: instant cached response from any node in the network. SHA-256 hash lookup via DHT, with cryptographic cache proofs linking every response to its original inference execution. No trust required. Fetchers re-announce as providers, so popular responses replicate naturally across more nodes.

2. KV prefix cache. Same system prompt tokens: skip the most expensive part of inference entirely. Prefill (computing Key-Value attention states) is deterministic: the same model plus the same tokens always produces an identical KV state. The network caches these states using erasure coding and distributes them via the routing network. New questions that share a common prefix resume generation from cached state instead of recomputing from scratch.

3. Routing to cached nodes. Instead of transferring KV state across the network for every request, Hyperspace routes the request to the node that already has the state loaded in VRAM. The request goes to the cache, not the cache to the request.

Together, these three layers mean that 70-90% of inference requests at network scale never require full GPU computation.

This work doesn't exist in isolation. It builds on research from across the industry. SGLang's RadixAttention demonstrated that automatic prefix sharing can yield up to 5x speedup on structured LLM workloads. Moonshot AI's Mooncake built an entire KV-cache-centric disaggregated architecture for production serving at Kimi. Anthropic, OpenAI, and Google all launched prompt caching products in 2024, priced at 50-90% discounts, because system prompt reuse is so pervasive that it changes the economics of inference.

What all of these systems share is a common limitation: they operate within a single organization's infrastructure. SGLang caches prefixes within one server. Mooncake disaggregates KV cache within one datacenter. Anthropic's prompt caching works within one API provider's fleet. None of them can share cached state across organizational boundaries.

Hyperspace removes this boundary. The cache is global. A response computed by a node in Tokyo is immediately available to a node in Berlin. A KV prefix state generated for Qwen-32B on one machine is verifiable and reusable by any other machine running the same model. The routing network provides the delivery guarantees, the erasure coding provides the redundancy, and the cache proofs provide the trust.

What this means for the cost of intelligence: big AI labs scale linearly. Twice the users means twice the GPU spend. Every query is a cost center. Their internal caching helps, but it's siloed: Lab A's cache can't serve Lab B's users, and neither can serve a self-hosted Llama deployment.

Hyperspace scales sub-linearly. Every new node that joins the network adds to the global cache. Every inference result enriches the cache for all future requests. The cache hit rate rises with network size because query distributions follow a power law: the most common questions are asked exponentially more often than rare ones.

The implication is simple: as the network grows, the effective cost per inference drops. Not linearly. Logarithmically. At 10 million nodes, we estimate 75-90% of all inference requests can be served from cache, eliminating 400,000+ MWh of energy consumption per year and avoiding over 200,000 tons of CO2 emissions. The first person to ask a question pays the compute cost. Everyone after them gets the answer for free, with cryptographic proof that it's authentic.

Training is competitive. Inference is shared. Open-weight models are converging on quality with closed models. Labs will continue to differentiate on training: data curation, architecture innovation, RLHF tuning. That's where the real intellectual property lives. But inference is a commodity. Two copies of Qwen-32B running the same prompt produce the same KV state and the same response, byte for byte, regardless of whose GPU runs the matrix multiplication. There is no moat in multiplying matrices. The moat is in training the weights.

A global distributed cache makes this separation explicit. It doesn't matter who trained the model. Once the weights are open, the inference cost approaches zero at scale, because the network remembers every answer and can prove it's correct. No lab, no matter how well-funded, can match this. They cannot share caches across competitors. They scale linearly. The network scales logarithmically.

The marginal cost of intelligence approaches zero. That's the endgame.
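The response-cache level described in the thread (same prompt, same model, same parameters → SHA-256 lookup) can be sketched roughly as follows. Hyperspace's actual API is not public, so the key derivation and the in-memory dict standing in for a real DHT client are assumptions, not the real implementation:

```python
import hashlib
import json

# Hypothetical stand-in for a real DHT node (e.g., Kademlia-style);
# the thread's cache proofs and replication are omitted from this sketch.
_dht: dict[str, str] = {}


def cache_key(prompt: str, model: str, params: dict) -> str:
    """Derive a deterministic SHA-256 key from prompt, model, and
    sampling parameters, as the thread describes for the response cache."""
    payload = json.dumps(
        {"prompt": prompt, "model": model, "params": params},
        sort_keys=True,  # canonical ordering so identical requests hash identically
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def cached_inference(prompt: str, model: str, params: dict, run_model) -> str:
    """Return a cached response if any node has stored this exact request;
    otherwise compute it once and announce the result to the network."""
    key = cache_key(prompt, model, params)
    if key in _dht:                # cache hit: no GPU computation needed
        return _dht[key]
    response = run_model(prompt)   # cache miss: the first asker pays the compute cost
    _dht[key] = response           # store so later fetchers get it for free
    return response
```

Because the key is a canonical hash of the full request, identical prompt/model/parameter triples map to the same entry, so the second caller anywhere in the network retrieves the stored answer without invoking the model at all.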
13 replies · 21 reposts · 156 likes · 19.7K views
Liminal AGI reposted
Nav Toor @heynavtoor ·
🚨 PhD students are panicking. OpenAI just told the world: we don't care about your degree. Build the best AI model under 16MB and we'll find you. That's smaller than one photo on your phone.

It's called Parameter Golf. Train the smartest language model you can. It must fit in 16 megabytes. You get 10 minutes on 8x H100 GPUs. Lowest score wins. OpenAI is backing it with $1,000,000 in free compute credits. No resume. No interview. No PhD required. Just build.

Here's what's inside this thing:
→ A public leaderboard where anyone can submit
→ Competitors beating each other's scores within hours
→ Architectures nobody has ever tried before
→ The baseline scored 1.2244. In 3 days it dropped to 1.1428. Still falling.
→ 236 pull requests. 1,500 forks. The leaderboard changes every few hours.

Here's the wildest part: top performers get noticed by OpenAI researchers and recruiters directly. No application. No hiring pipeline. Your model IS your resume.

AI labs spend millions recruiting through conferences and university pipelines. OpenAI just replaced all of that with a single GitHub repo. Challenge runs until April 30th. Everything is public. 3.1K GitHub stars. MIT License. 100% Open Source.
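For a sense of scale, a quick back-of-the-envelope on how many raw weights fit in a 16MB budget. The storage precisions below are illustrative assumptions, not contest rules, and this ignores any file-format or tokenizer overhead:

```python
BUDGET_BYTES = 16 * 1024 * 1024  # the challenge's 16 MiB parameter budget

# Bytes per weight for a few common storage precisions (illustrative only).
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}


def max_params(dtype: str) -> int:
    """Largest parameter count whose raw weights fit in the budget."""
    return int(BUDGET_BYTES / BYTES_PER_PARAM[dtype])


for dtype in ("fp32", "fp16", "int8", "int4"):
    print(f"{dtype}: ~{max_params(dtype) / 1e6:.1f}M params")
```

Roughly 4.2M weights at fp32, 8.4M at fp16, and 33.6M at int4: tiny next to frontier models, which is what makes the architecture search interesting.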
[image]
42 replies · 181 reposts · 1.6K likes · 106.6K views
Liminal AGI reposted
Nick shirley @nickshirleyy ·
This is how the daycare fraud works: - "You watch my kid, I'll watch yours" - Enroll these kids into "daycares" - Collect money from the government - You and your family then get to live off government subsidies. California has over 35,000 licensed daycare facilities
1.4K replies · 21.3K reposts · 88K likes · 1.6M views