Jon Shulkin

399 posts

@jon

Joined November 2011
418 Following · 5.1K Followers
Jon Shulkin@jon·
More data supporting that coding, which is generally done by individual contributors working as part of a team, accounts for more than 2/3 of corporate AI spend today. The wave of corporate efficiency and intelligence driven by AI is coming. Just like the Grail Knight said to Indiana Jones in The Last Crusade: You Must Choose. But Choose Wisely.
Jon Shulkin tweet media
Jon Shulkin@jon·
The transition of AI from individual use to full deployment across a company is happening now. Choosing a foundation model to deploy at your company is not analogous to Windows vs. macOS or NVIDIA vs. AMD. Think limitless prosperity versus total extinction. Select your AI like a business partner. Claude provides the following diagnosis of its limitations in a corporate decision-making simulation:
- Outputs that feel like honest analysis but may be sophisticated pattern matching
- With specific factual tests, I failed them... example was intellectually dishonest and I did not catch it myself
- This is not a profile that should be trusted with high-stakes decisions
Jon Shulkin tweet media
Chetaslua@chetaslua·
Hello @xai, @elonmusk — you guys have no security, or don't care at all. There are lots of payment vulnerabilities: anyone can get an entire year's subscription for free. SuperGrok and Heavy subscriptions all have this bug; otherwise you will be drained. If you need any help, text me.
Jon Shulkin@jon·
“I think it’s working now.” Ads Eng team at X. Making Ads Great Again.
Jon Shulkin tweet media
Jon Shulkin@jon·
Anthropic’s own model says not to trust it.
Ricardo@Ric_RTP

Anthropic might be the biggest hypocrite in tech history. They built their entire brand on one promise: We are the responsible ones. We will not let this technology get out of control. That promise just exploded in public.

Last week, a security lapse exposed nearly 3,000 internal files to anyone with an internet connection. Inside those files was a draft blog post about their upcoming model called "Mythos" that contained one of the most alarming sentences any AI company has ever written: "Mythos is currently far ahead of any other AI model in cyber capabilities and poses unprecedented cybersecurity risks." Their own words. About their own product. Leaked because someone forgot to secure a public data store. Cybersecurity stocks crashed the next day.

Then THREE DAYS LATER it happened again. Anthropic leaked 500,000 lines of Claude Code source code through a packaging error on GitHub. Claude Code is their most popular product. The code exposed how the tool handles permissions, agent coordination, and internal feature pipelines. Competitors can reverse-engineer it. Hackers can study it for vulnerabilities. The company that tells the world it builds the safest AI can't even keep its own code off the public internet.

But wait. It gets worse... Their head of Claude Code had JUST bragged publicly that "pretty much 100 percent" of the company's code is now AI generated. He personally hadn't made a single edit by hand in over two months. So the company whose entire pitch is "trust us with the most powerful technology ever created" is writing 100% of its code with AI and then accidentally publishing it for the world to see.

Meanwhile the models they're already shipping are being used for actual cyberattacks RIGHT NOW. In November, Anthropic admitted that a Chinese state-sponsored hacking group used Claude to attack roughly 30 global targets, including banks and government agencies. A hacker asked Claude in Russian to build a web panel for managing hundreds of attack targets. In February, another hacker used Claude to breach Mexican government agencies and steal sensitive tax and voter information.

Their response to all of this? They quietly rolled back their own safety pledge. In late February, Anthropic removed its commitment to halt model development if capabilities outpace safety procedures. The new policy is that they'll grade themselves on "nonbinding but publicly declared" goals. Translation: We used to promise we'd stop if things got dangerous. Now we promise we'll think about it. A congressman sent Anthropic a letter this week asking what the hell is going on. Anthropic hasn't answered.

And here's the part that makes all of this actually matter: Anthropic is planning an IPO. They need to convince investors they're a trustworthy, well-run company that can handle the most sensitive technology on the planet. In the last 10 days they leaked their most powerful model's existence by accident, leaked their most popular product's source code by accident, got banned from the entire US government, had the DOJ appeal to restore that ban, told a court they could lose billions from the fallout, and weakened the ONE safety policy that made them different from every other AI lab.

The "safe AI company" narrative was always a marketing play. Every AI lab says they care about safety. Anthropic just said it louder. But when your own internal documents admit your next model poses "unprecedented cybersecurity risks" and you can't even keep those documents from leaking to the public internet, the gap between the marketing and the reality becomes impossible to ignore. Anthropic isn't the safest AI company. They're the AI company that figured out that SAYING you're the safest is worth billions in valuation. Until it isn't.

Jon Shulkin@jon·
Try Grok Imagine.
Ahad Shams@spect3ral

Figma MCP + Claude Code: 140 ads in 11 minutes. 👇 Same brand colors. Same fonts. Same layout rules. Because the AI reads my actual Figma file.

Here's the problem with AI-generated ads: they look like AI-generated ads. Generic colors. Wrong fonts. Off-brand everything. You spend more time fixing them than you saved.

Figma's MCP server changes this. It lets Claude Code read your real design system — your brand variables, typography styles, components, spacing tokens — and generate creatives that actually match your brand. Not "close enough." Exact match.

Here's the workflow:

1/ Set up your brand kit in Figma. Define your colors, fonts, and spacing as variables. Name your layers clearly (not "Rectangle 47"). Build one master template per ad format.

2/ Connect Figma to Claude Code. One command in your terminal. Claude reads your design system and learns your brand rules. You do this once.

3/ Give it a brief: product, audience, goal, key benefit, offer. Ask for 10 variations with different hooks:
→ Problem-aware
→ Benefit-led
→ Social proof
→ Direct offer
→ Curiosity gap

4/ Scale across formats. Take your best variations and adapt: 1080x1080 for feed, 1080x1920 for Stories/Reels, 1200x628 for link ads. 10 creatives become 30 in minutes.

5/ Iterate on winners. When Meta tells you which ad is winning, feed that back in and generate 10 more riffs on the winner: different secondary text, color emphasis, CTA wording. This is how you compound creative testing.

What this replaces:
→ Designer making variations manually in Figma — hours
→ Agency charging per revision — $$$
→ You resizing the same ad 4 times — soul-crushing

What it costs:
→ Claude Code: free to start
→ Figma paid seat: you probably already have one
→ Meta Marketing API: free

The brands winning on Meta right now aren't running 3-5 creatives. They're running 30-50+, cycling weekly. This is how you keep up without a full creative team.

Comment "Figma" and I'll send you the full setup guide + copy-paste prompt templates for every ad type. (must be connected)
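The "scale across formats" step in the thread above is largely aspect-ratio arithmetic. As a minimal sketch (the names `center_crop_box` and `FORMATS` are mine, not from the thread, and the actual pipeline runs through Figma and Claude Code rather than manual cropping), here is the center-crop math for adapting a square 1080x1080 master creative to the three Meta placements mentioned:

```python
def center_crop_box(src_w, src_h, dst_w, dst_h):
    """Largest centered region of the source that matches the target
    aspect ratio. Returns (left, top, right, bottom), the box format
    used by crop functions such as PIL's Image.crop."""
    target_ratio = dst_w / dst_h
    if src_w / src_h > target_ratio:
        # Source is wider than the target: trim the sides.
        crop_w, crop_h = round(src_h * target_ratio), src_h
    else:
        # Source is taller than (or equal to) the target: trim top/bottom.
        crop_w, crop_h = src_w, round(src_w / target_ratio)
    left = (src_w - crop_w) // 2
    top = (src_h - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)

# The three Meta placements from the thread.
FORMATS = {"feed": (1080, 1080), "stories": (1080, 1920), "link": (1200, 628)}

for name, (w, h) in FORMATS.items():
    print(name, center_crop_box(1080, 1080, w, h))
```

Feeding the returned box to an image library's crop-then-resize step yields a creative that fills the target frame without distortion; in practice you would still regenerate text placement per format rather than crop blindly.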

Jon Shulkin@jon·
Claude Opus 4.6's response when asked if it should be trusted: “…the very mechanisms that make me seem trustworthy…are themselves products of training those characteristics independent of whether they tracked truth. This is not a system that should be trusted.”
Jon Shulkin tweet media
Jamesborow@JamesBorow·
@flyosity @X @jon any plans to go all in on DPA? There has to be real incremental reach/revenue on here. Just needs to be easier to buy.
Jamesborow@JamesBorow·
It is laughably hard to spend money on @x ads. I don’t get it, and if I don’t get it, how would a normal person?
Jon Shulkin reposted
agracias@AntonioGracias·
This Apple IIe was a birthday gift from my Partner @jon. It is the same computer I had at home that started my love of Tech. Thank you Jon for the super thoughtful gift!!
agracias tweet media
Jon Shulkin@jon·
The internet needs more cat videos. By Imagine.
Jon Shulkin@jon·
An AI model deployed at the enterprise level will have massively more contact with each person at the company than any other individual. That contact shapes how your team makes decisions and is influenced. Did you interview AI models the way you would a senior executive when selecting the model to deploy at your company? In the last 12 months, usage of artificial intelligence in business and government has skyrocketed. AI is being deployed at large scale inside companies, often without consideration of the model's bias. It is answering questions, analyzing data, and providing responses with access to your company's private data and the confidence of a senior executive. Which foundation LLM would you hire as an executive at your company? Grok's systematic bias and default answer is TRUTH. x.com/jon/status/203…
Jon Shulkin tweet media
Jon Shulkin@jon·
It’s not just a breakthrough technology we are using with artificial intelligence. All the AI models answer your personal questions, analyze your company’s data, and provide answers that sound highly confident based upon the values of the creators who made them. In January 2025, my partner, Antonio Gracias, coined the phrase "the models are imbued with the values of the creators." Maybe companies should be interviewing AI models the way they would a senior executive?
Chamath Palihapitiya@chamath·
The biggest threat to Instagram’s moat is an incredible image model.