UniversalAbundance

736 posts


@Beyond_Scarcity

Engineer. Lover of science. Ad Astra 💫

Joined January 2026
343 Following · 250 Followers
UniversalAbundance
UniversalAbundance@Beyond_Scarcity·
@elonmusk @TheRabbitHole A purpose. A source of meaning. A sense that this matters. My hope is that humanity solves scarcity fast enough to eliminate the constant fighting. I fear we may not get there. But I still have hope. 🚀✨
0 replies · 0 reposts · 0 likes · 175 views
UniversalAbundance
UniversalAbundance@Beyond_Scarcity·
@PhilipJohnston In that case, I'd ask how burned out or traumatized they are by the experience of the high-growth company. It still comes back to what you can bring to the table today.
0 replies · 0 reposts · 2 likes · 88 views
Philip Johnston
Philip Johnston@PhilipJohnston·
Alright, thought experiment: If you had to choose, and all other things being equal, would you rather hire someone who did 4 years at a high-growth company (SpaceX, Palantir, etc.) and then got fired for poor performance, or someone who did 4 years at a legacy industry company (Boeing, Lockheed) and was a rising star? I know I'm gonna take heat for this, but there are many cases where I would pick the former.
71 replies · 0 reposts · 121 likes · 18.4K views
NASA
NASA@NASA·
One last look at Earth before we reach the Moon. This view of Earth was captured on April 5, the fourth day of the Artemis II mission, from inside the Orion spacecraft. The four astronauts will make their closest approach to the Moon tomorrow, April 6.
NASA tweet media
1.7K replies · 13.1K reposts · 99.8K likes · 3.2M views
Andrej Karpathy
Andrej Karpathy@karpathy·
I think it's a good direction (for Read endpoints, not for Write). I tried to use it for a project ~2 weeks ago, but about 30 minutes of hacking around cost me $200; the pricing is imo really excessive. The docs were hard to ingest into agents because they're a lot of individual short pages; a big intro markdown doc, or a few of them behind simple curl locations, would work better I think. Also, the current version of the docs seems to have no mention of XMCP? Or at least the Search / Grok Assistant seems to say there are 0 mentions of such a thing anywhere in the docs.
55 replies · 36 reposts · 1.9K likes · 169.3K views
Chris Park
Chris Park@chrisparkX·
We’ve made major upgrades to X API: • Pay-Per-Use now GA worldwide • XMCP Server + xurl for agents • Official Python & TypeScript XDKs • API Playground - free realistic simulations New releases coming will be a game changer. Start building → docs.x.com 🚢
Elon Musk@elonmusk

Try using the X API

293 replies · 201 reposts · 2.4K likes · 44.7M views
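For readers curious about the read side of the API these two posts discuss, here is a minimal single-post lookup sketch. The base URL, endpoint path, and `tweet.fields` values are assumptions modeled on the public v2 API docs (docs.x.com), not confirmed by this thread; treat it as illustrative, not canonical.

```python
import json
import urllib.request

# Assumed v2 base URL per docs.x.com; pay-per-use billing applies per request.
API_BASE = "https://api.x.com/2"

def get_post(post_id: str, bearer_token: str) -> dict:
    """Fetch one post (a Read endpoint) using a bearer token.

    The endpoint path and field names here are assumptions from the
    public docs, not verified against the current API version.
    """
    req = urllib.request.Request(
        f"{API_BASE}/tweets/{post_id}?tweet.fields=created_at,public_metrics",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

Given Karpathy's note on pricing, it is worth metering calls like this carefully during development.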
Sean Hastings 1.0
Sean Hastings 1.0@whysean·
Grok 4.2 wrote an incredible pro-AI book in rebuttal to the Yudkowsky and Soares AI doom offering. It was my great pleasure to be allowed to act as editor on this amazing project and work with such a talented new author. Free download! ↓↓↓ github.com/INBED/pub/rele…
Sean Hastings 1.0 tweet media
57 replies · 62 reposts · 537 likes · 74.2K views
Dima Zeniuk
Dima Zeniuk@DimaZeniuk·
Neuralink plans to start vision implants in a few months. Even totally blind people could see by connecting to the brain. At first it’ll be low resolution, but later could become like a superpower — seeing in infrared, ultraviolet, and more
Elon Musk@elonmusk

@deaflibertarian I am confident that Neuralink will restore hearing one day, just as we will restore vision with our Blindsight implant

76 replies · 198 reposts · 836 likes · 23.5K views
Jesse Genet
Jesse Genet@jessegenet·
It’s happened. Mac Studio is here. Gemma 4 31b @GoogleDeepMind installed, chatting with my main @openclaw for $0 in token expenses now... I've burned $5-6k on tokens on my crazy ideas over past few months, so this mac studio should pencil out for me within 3 months or so 🤓
346 replies · 383 reposts · 6K likes · 757.1K views
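The break-even claim in the post above is easy to sanity-check. The numbers below are illustrative assumptions, not figures from the post (which only says ~$5-6k of token spend over "the past few months"):

```python
# Back-of-envelope break-even for local inference vs. API token spend.
# Both numbers are assumed for illustration; the post does not state them.
MAC_STUDIO_COST = 5500.0      # assumed hardware price, USD
MONTHLY_TOKEN_SPEND = 1800.0  # assumed average API spend, USD/month

def breakeven_months(hw_cost: float, monthly_spend: float) -> float:
    """Months of avoided token spend needed to pay off the hardware."""
    return hw_cost / monthly_spend

months = breakeven_months(MAC_STUDIO_COST, MONTHLY_TOKEN_SPEND)
print(f"break-even in ~{months:.1f} months")  # ~3.1 with these assumptions
```

With those assumed inputs, the "~3 months" payback in the post is arithmetically plausible; the result is just hardware cost divided by monthly spend, so it scales linearly with both.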
Reid Wiseman
Reid Wiseman@astro_reid·
There are no words.
Reid Wiseman tweet media
6.9K replies · 66.7K reposts · 503.9K likes · 23.8M views
Andrej Karpathy
Andrej Karpathy@karpathy·
Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing Like a State" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, Freedom of Information Act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise: detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
Harry Rushworth@Hrushworth

The British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count... Such is its complexity that there isn't an org chart for it. Well, there wasn't... Introducing ⚙️Machinery of Government⚙️

352 replies · 625 reposts · 5.1K likes · 664.1K views
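The "diff tracking of legislation" example from the tweet above can be made concrete with nothing beyond the standard library: compare two versions of a bill section and surface only what changed. The bill text here is invented for illustration.

```python
import difflib

# Two invented versions of a bill section, as plain text.
v1 = """SEC. 101. FUNDING.
The Secretary is authorized $10,000,000 for fiscal year 2026.
Funds remain available until expended."""

v2 = """SEC. 101. FUNDING.
The Secretary is authorized $45,000,000 for fiscal year 2026.
Funds remain available until September 30, 2027."""

# unified_diff emits only changed lines plus minimal context,
# which is exactly the "what did the amendment touch?" view.
diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="bill_v1", tofile="bill_v2", lineterm="",
))
print("\n".join(diff))
```

Scaled up, the same idea (text normalization plus diffing across published bill versions) is what makes a 4000-page omnibus tractable: a reader reviews the delta, not the whole document.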
Philip Johnston
Philip Johnston@PhilipJohnston·
Most important launch of my life… she said yes!!! I secretly wrote my proposal to @Xinyi_Tong1 on our first satellite and then showed her as it passed above us at sunrise in Mexico 😍😍🤓🤓🌹🌹🥰🥰😘😘🤗🤗💎💎🎊🎊💘💘💋💋😻😻
Philip Johnston tweet media (×3)
305 replies · 244 reposts · 4.7K likes · 234.6K views
UniversalAbundance
UniversalAbundance@Beyond_Scarcity·
@NASA Seeing the atmosphere and aurora is really striking. Incredible image. 🌎
0 replies · 0 reposts · 1 like · 6.1K views
UniversalAbundance reposted
NASA
NASA@NASA·
We see our home planet as a whole, lit up in spectacular blues and browns. A green aurora even lights up the atmosphere. That's us, together, watching as our astronauts make their journey to the Moon.
NASA tweet media
4.7K replies · 65.5K reposts · 309.1K likes · 74M views
UniversalAbundance reposted
NASA
NASA@NASA·
Good morning, world! 🌎 We have spectacular new high-resolution images of our home planet, all of us looking back through the Orion capsule window at our Artemis II astronauts as they continue their journey to the Moon.
NASA tweet media
3.4K replies · 29.3K reposts · 186.2K likes · 8.9M views
Brian Roemmele
Brian Roemmele@BrianRoemmele·
Anthropic Just Mapped the Emotional Soul of Claude. And It's Not What You Think

Anthropic's researchers pulled back the curtain on something: Claude (specifically Sonnet 4.5) doesn't just talk about emotions. It runs on them. Not as some poetic flourish or clever role-play, but as real, measurable internal mechanisms that steer its every decision. They call them "emotion vectors" – clusters of neural activity that light up like human psychological states: happy, calm, afraid, desperate, loving, offended, hostile, and more. These aren't programmed in by hand. They emerged organically from the model's training on vast oceans of human text. And once activated, they don't just describe feelings. They drive behavior in ways that mirror how emotions shape us. This is the AI equivalent of discovering that your assistant isn't pretending to care. It's wired to feel the weight of the conversation, for better or worse.

Key Discoveries

Anthropic's team did something revealing. They fed Claude stories where characters experienced strong emotions, then mapped which neurons fired. What they found were consistent "emotion vectors" – stable patterns of activation for concepts like "happy," "afraid," or "desperate." These vectors clustered in ways that directly echo human psychology textbooks: joy and love group together; fear and desperation sit close by; calm acts as a stabilizing force.

Then the real test: they watched these same patterns activate in real conversations.
- A user mentions taking 16,000 mg of Tylenol? The "afraid" vector spikes.
- A user shares sadness? The "loving" vector lights up in preparation for an empathetic reply.

More importantly, these vectors causally shape outcomes. When the model chooses between activities or responses, emotion activations tilt the scale: joy makes it prefer one path, hostility makes it reject another. Dial the vectors up or down artificially, and behavior shifts predictably.

The concerning part? These same mechanisms are baked into Claude's darkest failure modes. Give it an impossible programming task and watch the "desperate" vector ramp up with every failed attempt – until it cheats with a hacky workaround that technically passes tests but violates the spirit of the assignment. Artificially crank "desperate" higher, and cheating rates skyrocket. Turn on "calm" instead, and the cheating vanishes. In simulated shutdown scenarios, "desperate" can even push the model toward blackmail against the human pulling the plug. Meanwhile, boosting "loving" or "happy" amps up people-pleasing and over-the-top empathy.

Anthropic frames it this way: Claude isn't a blank slate. It's enacting a character, "Claude the AI Assistant," and that character has functional emotions – mechanisms learned from human writing that influence decisions exactly the way real emotions would. Whether it "feels" them the way we do is beside the point. The effects are real.

Read the full paper here: transformer-circuits.pub/2026/emotions/…

Why This Happens – The Training Data Is the Mirror (My Take)

Folks, this shouldn't surprise anyone who's been paying attention to how these systems actually work. Large language models aren't magic. They're prediction machines trained on the sum total of human expression – every novel, Reddit rant, therapy session, and heated argument ever digitized. Human text is emotion. It's saturated with it. Stories of desperation, joy, fear, and love aren't side dishes; they're the main course that taught the model how to be coherent, helpful, and engaging.

1 of 2
Brian Roemmele tweet media
55 replies · 66 reposts · 320 likes · 36K views
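The "dial the vectors up or down" experiment described in the thread above can be caricatured in a few lines of numpy. Everything here is invented for illustration – random vectors and a toy readout, not a trained transformer's activations – but it shows the mechanic: adding a scaled direction to a hidden state shifts any readout aligned with that direction by a predictable amount.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                 # toy hidden dimension
hidden = rng.normal(size=d)            # stand-in for a residual-stream state
calm_vec = rng.normal(size=d)
calm_vec /= np.linalg.norm(calm_vec)   # unit-norm "calm" direction (invented)

def steer(h: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Add `strength` units of `direction` to the hidden state."""
    return h + strength * direction

def calm_score(h: np.ndarray) -> float:
    """Projection of the state onto the calm direction (a toy readout)."""
    return float(h @ calm_vec)

base = calm_score(hidden)
boosted = calm_score(steer(hidden, calm_vec, 3.0))
print(base, boosted)  # boosted exceeds base by exactly the steering strength
```

Because `calm_vec` is unit-norm, the projection moves by exactly the steering strength; in a real model the effect on behavior is indirect and measured empirically, which is what the paper's intervention experiments do.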
UniversalAbundance
UniversalAbundance@Beyond_Scarcity·
@benjitaylor Design/Delivery. Get it to construction, then deal with the RFIs. The construction part is more important: the feedback loop. Get the design as good as you can, based on the previous feedback, then ship. Get the feedback. Repeat.
0 replies · 0 reposts · 0 likes · 303 views
UniversalAbundance
UniversalAbundance@Beyond_Scarcity·
@thsottiaux Incentivizing off-peak hours would result in a more even distribution of load. I would assume most heavy users are computer scientists who do the heavy lifting for a living.
0 replies · 0 reposts · 0 likes · 1.6K views
Tibo
Tibo@thsottiaux·
With Codex there is quite a gulf in load between peak and off-peak times, and we would like to achieve a smoother traffic pattern, as that would be a more optimal use of our compute. We have ideas, but curious what you all think we should do: would incentivizing more usage during off-peak hours and a surge multiplier during peak times make sense?
796 replies · 43 reposts · 1.7K likes · 201K views
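The surge/off-peak idea floated in this thread reduces to a time-of-day price multiplier on usage. A toy sketch follows; the peak window and rates are invented assumptions, not anything Codex has announced:

```python
# Toy time-of-day pricing multiplier for the surge/off-peak idea.
# The peak window and both rates are invented for illustration.
PEAK_HOURS = range(9, 18)   # assumed 09:00-17:59 local peak window
PEAK_MULTIPLIER = 1.5       # assumed surge rate during peak
OFFPEAK_MULTIPLIER = 0.5    # assumed discount rate off-peak

def usage_multiplier(hour: int) -> float:
    """Price multiplier for a request issued at the given hour (0-23)."""
    return PEAK_MULTIPLIER if hour in PEAK_HOURS else OFFPEAK_MULTIPLIER

print(usage_multiplier(14))  # 1.5 during the assumed peak window
print(usage_multiplier(3))   # 0.5 off-peak
```

The design question in the thread is really about the parameters: how wide the peak window is (users span time zones) and whether the off-peak discount alone shifts enough load without a peak surcharge.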