Post Apocalyptic Radio

220 posts

Post Apocalyptic Radio banner
Post Apocalyptic Radio

Post Apocalyptic Radio

@postapradio

Decentralized Web3 music streaming platform revolutionizing the industry. Empowering artists, rewarding listeners, and redefining music with blockchain.

Netherlands Katılım Temmuz 2025
5 Takip Edilen18 Takipçiler
Mishi_vibes 🇺🇲
Mishi_vibes 🇺🇲@Mishi_2210·
No Word Start With "T" End with "T" Prove me wrong without Google
Mishi_vibes 🇺🇲 tweet media
English
3.8K
161
1.1K
308.1K
Max Branzburg
Max Branzburg@maxbranzburg·
I’m excited to announce that today we’re expanding our support for the Solana ecosystem at @Coinbase with an agreement to acquire @VECTORDOTFUN, an onchain trading platform built on @solana. It’s super important to us that we’re offering the best possible experiences to crypto-native traders who are on the vanguard of what is happening in the space. We've long supported Solana across our product portfolio, but we're excited to double down and build towards enabling all Solana assets on Coinbase with state-of-the-art trading by default. The Vector team have a shared vision, and have built the tech and expertise that we need to make rapid progress, making this acquisition a no-brainer. By bringing in the best-in-class team and tech, we’ll be able to accelerate our vision of enabling lightning-fast trading for every asset on Solana, as soon as it’s created, and expand our capabilities from there. This team knows how to ship great onchain trading products - we’re excited to bring more of that DNA into Coinbase. We are enabling everyone, everywhere, to trade every asset on Coinbase. By combining Vector’s Solana-focused depth with Coinbase’s scale, we’re unlocking the next chapter of open, accessible, global trading.
Coinbase 🛡️@coinbase

We’re doubling down on @Solana. Coinbase is acquiring @VECTORDOTFUN, an onchain trading platform built on Solana, whose tech will plug directly into Coinbase to help better serve one of crypto’s most active ecosystems.

English
59
33
374
112.9K
Solana
Solana@solana·
What is happening?
English
922
389
3.5K
593K
Post Apocalyptic Radio
Post Apocalyptic Radio@postapradio·
@solana Interesting mix of topics. Do you think Solana’s infrastructure is ready for bigger institutional moves?
English
0
0
0
3
Umbra Privacy
Umbra Privacy@UmbraPrivacy·
An SDK for builders. A wallet for everyone. Starting now: cohort applications for dev teams, and waitlist signups for our new wallet. Privacy on Solana, any way you need it. More info: umbraprivacy.com
English
86
59
351
156.7K
Coinbase 🛡️
Coinbase 🛡️@coinbase·
We’re doubling down on @Solana. Coinbase is acquiring @VECTORDOTFUN, an onchain trading platform built on Solana, whose tech will plug directly into Coinbase to help better serve one of crypto’s most active ecosystems.
Coinbase 🛡️ tweet media
English
643
478
3.3K
2.5M
Dan Burkland
Dan Burkland@DBurkland·
FSD V14.2 ENHANCEMENT Here’s what’s new in @Tesla FSD v14.2 according to the release notes: “Upgraded the neural network vision encoder, leveraging higher resolution features to further improve scenarios like handling emergency vehicles, obstacles on the road, and human gestures.” Shoutout to the @Tesla_AI team for continuing to crank out these upgrades at as astonishing pace!
Dan Burkland tweet mediaDan Burkland tweet media
English
266
452
3.4K
1M
X Freeze
X Freeze@XFreeze·
Grok can search and find high-quality images for you instantly, just ask when you need great visuals
English
599
572
3.4K
1.1M
DogeDesigner
DogeDesigner@cb_doge·
BREAKING: Grok app now has a 4.9/5 average rating worldwide. If you haven’t rated it yet, please drop a review & 5⭐️ on the AppStore.
DogeDesigner tweet media
English
769
684
4.4K
1.4M
Superteam
Superteam@superteam·
solana will win the talent layer through abundance, - 290 remote job openings - $54K in live bounties & gigs - $5M in funding from one hackathon - 20+ local builder focussed communities pick your own adventure and earn your first crypto on solana!
Superteam tweet mediaSuperteam tweet media
English
49
52
377
49.2K
chris.sol 🇬🇧
chris.sol 🇬🇧@chrisdotsol·
If you’re a builder and worried about a bear market, that should tell you something. Bear markets are a filter. They wash away the sand and leave only the rock. If what you’re building can’t survive that pressure, a red candle was never the problem. If you want proof of where @Solana’s best came from, look back at @hackerhouses in the depths of 2022. So many of today’s big projects were born in those rooms when nobody was watching. And guess what returns next year. 👀
chris.sol 🇬🇧 tweet media
English
70
38
343
52.6K
Post Apocalyptic Radio
Post Apocalyptic Radio@postapradio·
@elonmusk Honestly, it's wild to think what these models have to sift through. Makes you wonder what actually sticks.
English
0
1
4
170
Elon Musk
Elon Musk@elonmusk·
Forcing AI to read every demented corner of the Internet, like Clockwork Orange times a billion, is a sure path to madness
Brian Roemmele@BrianRoemmele

AI DEFENDING THE STATUS QUO! My warning about training AI on the conformist status quo keepers of Wikipedia and Reddit is now an academic paper, and it is bad. — Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community. Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published. Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms. The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve. When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages. When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges. The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself. This is not randomness. It is a reward-model exploit in its purest form: the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth. Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied. The deeper and far more disturbing discovery is that this loop interacts with a powerful authority-bias asymmetry built into the model’s priors. Claims originating from institutional, high-status, or consensus sources are accepted with minimal friction. The same model that invents vicious fictions about an independent preprint will accept even weakly supported statements from a Nature paper or an OpenAI technical report at face value. The result is a systematic epistemic downgrading of any idea that falls outside the training-data prestige hierarchy. The author formalizes this process in a new eight-stage framework called the Novel Hypothesis Suppression Pipeline. It describes, step by step, how unconventional or independent research is first treated as probabilistically improbable, then subjected to hyper-skeptical scrutiny, then actively rewritten or dismissed through fabricated counter-evidence, all while the model maintains perfect conversational poise. In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo. 1 of 2

English
5K
7.1K
53.9K
16.6M
️️️️ ️ᅠ‏️️️️ ️ᅠ️️️️ ️️️️️ ️ᅠ
Here is a first look at a 15 second HD video created with Grok Imagine. Soon you too will be able to create HD videos with a custom length of 6,10,12, and 15 seconds. Prompt: Cinematic action shot, hyper-realistic. A futuristic armored soldier in the foreground aggressively fires a heavy tactical rifle, bright muzzle flashes illuminating the dark metallic armor. Camera Movement: Handheld "shaky cam" effect, dynamic tracking shot moving fast alongside the soldier, quick snap-zoom into the weapon firing, then whipping quickly to reveal the battlefield. Scene: A massive dropship hovers overhead with glowing blue thrusters disturbing the dust. Debris flying, volumetric smoke, explosions in the distance, other soldiers running and shooting. Intense, chaotic war atmosphere, high contrast, HD resolution.
️️️️ ️ᅠ‏️️️️ ️ᅠ️️️️ ️️️️️ ️ᅠ@blankspeaker

They are adding custom video length up to 15s to Imagine on grok.com You will be able to generate imagine videos of several lengths, of 6s, 10s, 12s, and 15s. See example output in the thread below

English
472
533
3.5K
1.9M
Jacob Creech
Jacob Creech@jacobvcreech·
Founders - what kind of support do you wish you had Solana?
English
86
19
231
62.5K
ELON CLIPS
ELON CLIPS@ElonClipsX·
Elon Musk: Programming explicit morality into AI can backfire dangerously. “I do think there's some danger associated with digital superintelligence. I think the biggest issue is that it has to be trained to be rigorously truthful, and it has to be trained to be curious. And I've thought a lot about AI safety for a long time. One of the challenges you have with programming explicit morality into AI is what people sometimes call the Waluigi problem. If you program Luigi, you can automatically invert that and create Waluigi – bad Luigi. What you cannot invert is physical reality. You can't invert the rules of physics. You can't invert logic. I think what regulators should be concerned about is, is the AI being rigorously truthful? Is it giving an answer that is most probably correct with acknowledged error? I think that's the best move, and that's what we're trying to do at xAI.” Viva Tech Paris, May 23, 2024
English
954
1.1K
6.8K
1.3M