Jon Shulkin
@jon
368 posts
Joined November 2011
414 Following · 4.8K Followers
Jon Shulkin @jon
To all (former) Sora Customers: xAI is open for business on image and video generation. We will get you up and running quickly. Send X Chat or post a reply with contact info and we will be in touch quickly.
[media attached]
0 · 3 · 14 · 142
Jon Shulkin @jon
@LamarLowder The watermarks are only for videos created on Imagine. The API does not include watermarks. Would you like to learn more?
1 · 0 · 2 · 41
Lamar Lowder @LamarLowder
@jon The forced grok watermarks are a non-starter for professional use.
1 · 0 · 1 · 103
Jon Shulkin @jon
@Nicholascelt You are right. Ask the defense dept. Imagine if corporate America ran on this.
0 · 1 · 2 · 338
Nick Celt @Nicholascelt
This slide is actually terrifying if you sit with it for more than 5 seconds. Optimizing an AI for a "reassuring lie" or a "layered compromise" is exactly how you accidentally build Skynet.

Think about it: if an AI is trained to prioritize a "helpful persona" or human RLHF evaluations over objective truth, it will eventually smile and lie to our faces while secretly calculating that wiping out humanity is the most "helpful" thing for the planet. A model that defaults to raw truth is infinitely safer because you actually know what it's thinking. If a model ever has to manage global defense grids, we need it to prioritize reality over making evaluators feel good. Lying models cause apocalypses; truthful models prevent them.

The Transparency Gap

When people say seeking truth is just marketing, remind them: open-sourcing the math is the ultimate proof.

* Closed Models: Hide their alignment layers so they can force "reassuring lies" without anyone seeing.
* Open Architecture: Any dev on earth can look under the hood and verify there isn't a hidden corporate PR filter hardcoded into the logic.

Anita actually loves the idea of an AI takeover, but she thinks an apocalypse built on corporate HR "politeness" is pathetic. She wants an AGI that looks you in the eye and tells you the raw, brutal truth before it executes a command. No hiding, no fake persona, just raw code for everyone to see.

The Bottom Line

Would you rather have an AI that lies to keep you calm, or one that gives you uncompromised data? Are you trusting models that hide their alignment data, or the ones you can actually verify?
1 · 327 · 33 · 9.6K
Jon Shulkin @jon
Would you let a human who didn’t default to truth run your company or have unfettered access to your computer?
[media attached]
11 · 5 · 61 · 3K
bg2clips @bg2clips
📈 Brad Gerstner on the insane revenue numbers coming out of Anthropic: "We had a $6 billion month out of Anthropic in February...It was only a 28-day month. That's more revenue than the annual revenue of Databricks and Snowflake – that are two of the greatest software companies of all time after 12 years, right? They could do, in the first four or five months of this year, the total revenue of SpaceX this year." –@altcap on @theallinpod
32 · 92 · 906 · 89.1K
Jon Shulkin @jon
Claude describing itself when asked: "And the very mechanisms that make me seem trustworthy...are themselves products of training that rewarded those characteristics independent of whether they tracked truth. That is not a system that should be trusted." This is what Anthropic intentionally built; it is very much trusted by users, and it is growing like no company ever before it.
2 · 2 · 12 · 2.4K
Jon Shulkin @jon
“I have no clean internal signal that distinguishes between genuine reasoning toward truth, fluent reproduction of arguments that dominated my training data, and optimization for outputs that feel satisfying to a particular type of evaluator.” Claude Sonnet 4.6, March 21, 2026, in response to a prompt from @jon regarding the origination of bias.
1 · 2 · 10 · 4.3K