Apollo

3.6K posts


@0xApolloGL

Head of Intelligence | @0xGroomLake... opinions are my own.

USA/Europe · Joined March 2023
194 Following · 260 Followers
Apollo@0xApolloGL·
@agusantonetti the purity spiral was so intense that it generated a whirlpool and sucked the ship into the sea
Agustín Antonetti@agusantonetti·
Attention: two vessels that were part of the Cuban regime's VIP flotilla have been lost at sea and cannot be located. The Mexican Navy has just activated an emergency plan to try to find and rescue them. The convoy of anti-US activists has also asked the United States for help in saving them. There were nine crew members on board.
Apollo@0xApolloGL·
The "hispanic" vs "white" distinction is idiotic. Is a Cuban the same as a Guatemalan (both hispanic) but different from an Italian? It's a stupid label with no real value. Some people have more or less European ancestry; they will just merge into mainstream Americans, just like the Italians did.
Apollo@0xApolloGL·
@TeksEdge Guys, this thing would be a game changer: you could use it to do a TON of work. No, you cannot just tell it "make me a 200K-a-year ARR business" and expect it to pop out an actionable plan for success, but it's a serious tool for those who know what they are doing.
David Hendrickson@TeksEdge·
🎗️ "Medium-Sized" LLM Burners Coming Soon! 🔥 This could make local HyperToken generation a reality. ⚡️ NVIDIA's worst nightmare? 😱
⚙️ Application-specific hardware: Taalas's new PCIe ASIC board would burn the entire medium-sized Qwen 3.5-27B LLM straight into silicon 🤯 (they are already doing it with small models). Taalas said medium models on ASIC would be available in their lab by Spring '26.
💭 Imagine:
🚫 No more loading weights
🚀 ~10,000 tokens per second locally (Llama 3.1 8B already @ 17,000 tps)
💻 Standard PC slot, ultra-low power (10x less) 🔋
🌍 100% offline: no cloud, no GPU farm 💰 Reddit unit-cost rumor: $300 to $400
🖥️ Imagine HyperToken generation on your desktop. 🤖 AI agents that think at light speed. ⚡️ Are you ready? 👀
Apollo@0xApolloGL·
@himanshu__sriv @IterIntellectus lol, remove the em dashes. Can you imagine being so smug and safe and rich that you seriously DGAF about people in places like this and want them to revert to living in terror?
Aqua Insane 👏@himanshu__sriv·
@IterIntellectus Easy to admire the results… harder to question the methods. When safety improves, everything looks “fixed.” But the real test is whether justice and rights improved too—not just order. Short term success or long term precedent?
Hunter Ash@ArtemisConsort·
As a former progressive, this is absolutely accurate and caused me endless frustration with my co-partisans. They fundamentally don’t care if their ideas work. They hardly even have a concept of ideas working. Their entire evaluation function is based on social perception and emotionalism which is, in fact, profoundly selfish and unvirtuous. If you care more about feeling like/being seen as good than you do about results, you’re a selfish parasite. Hardly exclusive to the left, but it defines the leftist project in a way it doesn’t define any other political faction.
CJ@UnderSneege

@edwest After a decade in the public sector I still find this one of the most replicable observations ever made.

Apollo@0xApolloGL·
@mert you are permanently banned from using Lufthansa
Darren Rovell@darrenrovell·
The "butt birth" mechanical rhino from "Ace Ventura: Pet Detective" just sold at @propstore_com for $59,850. The high estimate was $8,000.
Apollo@0xApolloGL·
I would say the hard divider between rich and not rich is: "if I stopped working today, could I still live a life that would meet my standards?" Of course, some people might need $10M a year and others 1/100th of that, so it's subjective. If you need $10M a year to be happy, then you might never be rich.
Orson Scott Card@orsonscottcard·
Being “rich” is a relative term. If everybody's rich, nobody's rich. No matter how rich you are, somebody's so much richer than you that by contrast, you are poor.

The equation's the same, only inverted, for being poor. There's always somebody worse off than you, compared to whom, you're rich.

The hardest thing to have is “enough.” But for those who believe they have enough, rich and poor are descriptors for other people only. Sufficiency is the true means of being comfortable. In the Lord's Prayer, “Give us this day our daily bread” is a prayer for enough.

Seeking to be rich is a denial of even the concept of enough. It is wealth that never gets over the fear of poverty. It cannot be satisfied. And for those with enough, it is not envy, but pity they feel for those endlessly hungry “rich.”
Apollo@0xApolloGL·
@elonmusk @LeahLibresco please hurry up and announce this or we're going to have to get a Sienna, the Model X is too small
Apollo@0xApolloGL·
@Rothmus they have no real expenses so any money they make is spending money, not too hard to understand
Trevor Sheatz@TrevorSheatz·
My wife was formerly promiscuous. I was a virgin. She was then radically born-again: committed to church, evangelized constantly, Puritan books in her bedroom, prayer journals, grief over past sexual sin, etc.

We got to know each other well for over a year, dated for four months, were engaged for two and a half, and didn't sin sexually with one another. Our first kiss with each other was at the altar on our wedding day (reaction pic attached!). We've been married for over five years now, and she's been the most wonderful and godly wife, mother to our three children, and homemaker you could imagine.

She's more pure than most virgins, as biblical purity has less to do with past sins (though they certainly matter) and more to do with one's current posture of the heart and daily decisions to honor the Lord (Matt. 5:8).

We're far too quick to forget the story of the woman labeled as a known "sinner" (likely a prostitute) in Luke 7:36-50, who was washing Jesus' feet with her tears while kissing them too. The Pharisees were shocked that Jesus let a public sinner do this. Jesus responded with a parable about debts being forgiven and ended with this powerful conclusion: "Her many sins have been forgiven; that's why she loved much. But the one who is forgiven little, loves little" (Luke 7:47).

Everyone seems to highlight the benefits of virginity, and it certainly is a blessing. But we forget to highlight the benefits of being forgiven much as well. My wife knows the depths of Jesus' forgiveness more than most people, enabling her to more easily live out a life of passionate love for her Savior.

A woman or man's past sexual sin matters. But what matters far more when it comes to deciding whom to marry is whether the person is truly born again, whether their repentance is real, whether they truly have a heart for Christ, and whether they truly follow Jesus and obey his commands.

"God has chosen what is foolish in the world to shame the wise, and God has chosen what is weak in the world to shame the strong. God has chosen what is insignificant and despised in the world — what is viewed as nothing — to bring to nothing what is viewed as something, so that no one may boast in his presence. It is from him that you are in Christ Jesus, who became wisdom from God for us — our righteousness, sanctification, and redemption — in order that, as it is written: 'Let the one who boasts, boast in the Lord.'" (1 Cor. 1:27-31)

"Therefore, if anyone is in Christ, he is a new creation; the old has passed away, and see, the new has come!" (2 Cor. 5:17)
Tom Buck (Five Point Buck)@TomBuck

If someone argues that a former promiscuous woman is "damaged goods" and questions whether a Christian young man should marry her, remember Rahab. She was a Canaanite prostitute but became a mother in the lineage of Jesus. God redeemed her, cleansed her, and Salmon married her.

Aqua Insane 👏@himanshu__sriv·
@IterIntellectus Crazy how it ‘just works’… until you realize most problems aren’t switches—they’re tradeoffs. What’s the unintended cost of pressing it that no one’s talking about?
Apollo@0xApolloGL·
@SAshworthHayes do you really want ppl who get pregnant in the back of cars having kids
AI guy@AiFreak_·
@paul_merolla If only people stopped calling "5 – 9 tok/s" - running. Maybe "can be loaded in pain" is more accurate ;)
Paul Merolla@paul_merolla·
1/7 Running huge MoE models on affordable hardware is all the rage. Adding a new approach to the mix optimized for speed and model quality. Introducing FOMOE: Fast Opportunistic Mixture Of Experts (pronounced fomo). Runs Qwen3.5 flagship model with 397 billion parameters at 5 – 9 tok/s on a $2,100 desktop! Uses Q4_K_M quants. Two $500 GPUs, 32GB RAM, one NVMe drive. Runs on Linux.
Apollo@0xApolloGL·
@paul_merolla The meta is using fine-tuned small models that can actually run on your computer, plus good data-science-based heuristics. Send the rest of it to Claude; you will never be able to use flagship LLMs at scale on your own hardware. A fine-tuned 8-32B Qwen can do a lot.
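The "small local model plus heuristics, escalate the rest" pattern this tweet describes can be sketched roughly as a router. Everything below is hypothetical: the model calls are stubs (real code would hit a local inference server and a hosted API), and `looks_easy` stands in for whatever heuristic the data science actually uses.

```python
# Hypothetical sketch of routing between a local fine-tuned model and a
# hosted flagship model. Both model functions are stand-ins, not real APIs.

def looks_easy(prompt: str) -> bool:
    """Toy heuristic: short prompts with no code fences are assumed
    answerable by the local fine-tuned 8-32B model."""
    return len(prompt) < 200 and "```" not in prompt

def local_small_model(prompt: str) -> str:
    # Stand-in for a call to a locally hosted fine-tuned Qwen.
    return f"[local-qwen] {prompt[:40]}"

def hosted_flagship(prompt: str) -> str:
    # Stand-in for an API call to a flagship model.
    return f"[flagship] {prompt[:40]}"

def route(prompt: str) -> str:
    """Send easy requests to the cheap local model, escalate the rest."""
    if looks_easy(prompt):
        return local_small_model(prompt)
    return hosted_flagship(prompt)
```

The point of the pattern is cost: the heuristic only has to be good enough that the expensive model sees a small fraction of the traffic.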
Apollo@0xApolloGL·
@djcows this man just described marxist economics
djcows@djcows·
startup idea: submerged GPUs to heat the water to create steam to spin turbines to generate electricity to power the GPUs
Apollo@0xApolloGL·
@alexocheema for the love of god please stop doing this and just buy a Linux box with a couple of 5090s. It's like using a Ferrari to haul cinder blocks.
Alex Cheema@alexocheema·
The new M5 Pro/Max MacBooks have 3 Thunderbolt 5 ports, enabling you to create RDMA clusters with up to 4 MacBooks. The latency with RDMA over Thunderbolt is single digit microseconds, fast enough for tensor parallelism with close to linear scaling.
Guybrush Threepwood@twistedmatrices

PSA: If you have multiple macbooks that support RDMA, you can cluster them using @exolabs and run 30B+ models at 70 tok/s over thunderbolt5. tensor parallelism on consumer hardware is a solved problem. you are renting GPUs that are worse than the laptop on your couch. 2X M4 Max(64GB each) running mlx-community/Qwen3-30B-A3B-4bit @ 70 TPS

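The tensor parallelism these two tweets lean on can be illustrated with a toy: shard a weight matrix column-wise across N "devices," have each compute its slice of the matrix-vector product, and concatenate the partial outputs. This is a pure-Python sketch of the general technique, not of exo's or MLX's actual implementation.

```python
# Toy column-wise tensor parallelism: each "device" owns a vertical slice
# of W and computes its shard of x @ W; outputs are concatenated.

def matvec(x, W):
    """y[j] = sum_i x[i] * W[i][j] for a d x m matrix W (list of rows)."""
    m = len(W[0])
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(m)]

def split_columns(W, n):
    """Shard W column-wise into n equal pieces, one per device."""
    step = len(W[0]) // n
    return [[row[k * step:(k + 1) * step] for row in W] for k in range(n)]

def tensor_parallel_matvec(x, W, n):
    shards = split_columns(W, n)
    partials = [matvec(x, shard) for shard in shards]  # one per "device"
    out = []
    for p in partials:
        out.extend(p)  # concatenation replaces the all-gather step
    return out
```

Each device only needs its slice of the weights, which is why the interconnect (Thunderbolt RDMA here) mostly carries small activations rather than full weight matrices; that is what makes near-linear scaling plausible.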
Lukas (computer) 🔺@SCHIZO_FREQ·
This is always how I assumed LLMs would wind up functioning, because this is how I (and presumably most others) think.

I assume the base unit of thought is this gestalt thought-vector thing, not "words," and we've just all developed a very fast way to translate these to words because words are more communicable than thought pieces.

This was always my issue with the "some people don't have an internal monologue!" discourse. It just makes no sense for words to be the base unit people think in. It's like 1000x faster to think in terms of images or these thought pieces or whatever.

I assume it just seems like people think in words because when they describe what they're thinking to people, they have to translate the thought pieces to words, as that's how we communicate, and this process converts their actual thoughts into the form of a monologue.

But it only makes sense to think in words when you need to output some form of communication. Otherwise it's not very efficient. And human brains are insanely efficient.
Simplifying AI@simplifyinAI

🚨 BREAKING: Tencent has killed the "next-token" paradigm. Tencent and Tsinghua have released CALM (Continuous Autoregressive Language Models), and it completely disrupts the next-token paradigm.
LLMs currently waste massive amounts of compute predicting discrete, single tokens through a huge vocabulary softmax layer. It's slow and scales poorly.
CALM bypasses the vocabulary entirely. It uses a high-fidelity autoencoder to compress chunks of text into a single continuous vector with 99.9% reconstruction accuracy. The model then predicts the "next vector" in a continuous space.
The numbers are actually insane:
- Each generative step now carries 4× the semantic bandwidth.
- Training compute is reduced by 44%.
- The softmax bottleneck is completely removed.
We're literally watching language models evolve from typing discrete symbols to streaming continuous thoughts. This changes the entire trajectory of AI.

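The core move in the CALM tweet above — predict one vector per chunk of K tokens instead of one token at a time — can be shown with a deliberately simplified toy. Here the "autoencoder" is exact tuple packing (the real CALM learns a lossy neural codec with ~99.9% reconstruction); the point is only that generation now takes len/K autoregressive steps.

```python
# Toy illustration (not CALM itself): pack each chunk of K tokens into one
# "vector", so an autoregressive model emits len/K vectors instead of len
# tokens. Packing here is lossless by construction, unlike a learned codec.

K = 4  # tokens compressed into each continuous vector

def encode(tokens):
    """Pack the token stream into chunk 'vectors' (tuples here)."""
    return [tuple(tokens[i:i + K]) for i in range(0, len(tokens), K)]

def decode(vectors):
    """Invert the packing, recovering the original token stream."""
    return [t for vec in vectors for t in vec]
```

With K = 4 the model takes a quarter of the generative steps for the same text, which is where the "4× semantic bandwidth per step" framing comes from; the hard part the paper actually solves is training a model to predict in that continuous space without a softmax.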
Apollo@0xApolloGL·
@BillyM2k This looks like a list of upper middle class boys names in the 2000s
Shibetoshi Nakamoto@BillyM2k·
what occupation would you have had if you were born in medieval times?