chrisclerc_
@chrisclerc_

50 posts

building something great

Mesa, AZ · Joined July 2023
727 Following · 1 Follower
@jason @Jason ·
We started an AI founder twitter group... reply with "I'm in" if you're a founder and want to be added
10.9K replies · 136 reposts · 4.6K likes · 900.9K views
chrisclerc_ @chrisclerc_ ·
@davidsenra @evanspiegel While I can appreciate the sheer quantity of Snapchat MAU, Evan is effectively saying that there are more keyboards in the world than PCs. iPhones hold roughly 18% of the global smartphone market. Comparing Snapchat selfies to total smartphone selfies, not just iPhone selfies, would be a more useful metric.
0 replies · 0 reposts · 0 likes · 114 views
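The keyboard/PC analogy in the reply above can be made concrete with a toy calculation. Only the 18% iPhone share comes from the tweet; every other number below is a hypothetical placeholder chosen purely for illustration.

```python
# Toy sketch of the ratio argument in the reply above.
# The 18% iPhone share is from the tweet; all other counts are
# made-up placeholders, not real data.

iphone_share = 0.18      # iPhones as a fraction of smartphones (from the tweet)

# Hypothetical daily selfie counts, indexed to 100 units of smartphones:
selfies_per_phone = 1.0  # assume every smartphone takes ~1 selfie/day
smartphones = 100.0

iphone_selfies = smartphones * iphone_share * selfies_per_phone  # 18 units
all_selfies = smartphones * selfies_per_phone                    # 100 units

# "More selfies on Snapchat than on iPhone" only requires Snapchat to
# clear the 18-unit bar, not the 100-unit bar:
snapchat_selfies = 20.0  # hypothetical: beats iPhone, far below the total

beats_iphone = snapchat_selfies > iphone_selfies        # True
beats_all_smartphones = snapchat_selfies > all_selfies  # False
print(beats_iphone, beats_all_smartphones)
```

Under these placeholder numbers the headline claim holds while the stronger claim fails, which is exactly the gap the reply is pointing at.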
David Senra @davidsenra ·
"There are more selfies taken on Snapchat than on iPhone." — @evanspiegel "The way everyone was socializing on Facebook was like a giant popularity contest. It wasn't fun. Everyone was competing for how many friends they had, how many likes they had, everything was about pretty photos. In college, we wanted to have fun with our friends. But the alternatives — like text messaging — were so clunky. Sending an image back then took like a minute, two minutes. It was crazy. Part of the core invention of Snapchat was actually just making it really fast to send images. Because back then, images were for documenting things. For saving memories forever. But people want to use images to communicate. There are more selfies taken on Snapchat than on iPhone in total — which is a crazy stat. @Snapchat transformed the way people communicate by allowing them to do it with images."
David Senra @davidsenra

My conversation with @evanspiegel, co-founder & CEO of @Snap.

0:00 Edwin Land Influence
2:01 Art Science Upbringing
3:27 Computers And Connection
5:50 Smartphone Addiction Lens
9:30 Building For Humanity
13:15 From Internships To Snapchat
17:02 Snapchat vs. Social Media
18:38 Stories And Vertical Video
22:22 Uncompromising Kind Culture
28:34 Snap Leadership And Design
37:38 AI Supercharges Snap
41:57 No Moat In Software
42:31 Beating the Clone
43:50 Messaging Network Effects
44:58 Camera Out of Pocket
45:49 Specs Market Reality
48:28 AR Platform Explosion
52:14 Vision-Led Product Design
54:09 Why Not Luxottica
59:11 Owning the Stack
1:03:02 Snap the Middle Child
1:08:04 Crisis Without Burnout
1:10:02 Snapchat Plus Growth
1:12:54 Rebuilding the Ad Engine
1:19:03 Subscriptions Over Ads
1:21:14 Fighting Giants With AI
1:22:04 Why Hardware Stands Alone
1:25:29 Snap Lab Origins
1:25:59 New Apps Beyond Snapchat
1:28:29 Focus And Founder Drive
1:32:14 Surfacing Problems Fast
1:36:08 Flat Culture Meritocracy
1:39:36 Last Company And Giving Back
1:41:15 Turning Down Billions
1:48:51 Snapchat Funds New Computing
1:51:24 Crucible Year And Schedule
1:53:56 Stress Reframed Meditation
1:56:09 Explainer In Chief
1:57:07 Closing

Includes paid partnerships.

6 replies · 3 reposts · 73 likes · 16.6K views
Josh Pigford @Shpigford ·
my entire dev pipeline is basically just repeating a series of 6 skills 20 times a day... /research /build /but-for-real /review /learnings /pr
25 replies · 1 repost · 354 likes · 22.5K views
News from Science @NewsfromScience ·
Physicists have now uncovered the hidden math behind these satisfying can-crushing videos. To catch what the naked eye misses, the team compressed liquid-filled aluminum beverage cans in a laboratory press, filming the carnage at 25 frames per second.

The researchers found that the material behavior of the aluminum can itself drives the orderly collapse. As the metal bends outward into a ridge, it briefly softens, becoming easier to deform. But before that ridge can grow too deep, the material restiffens, making it energetically cheaper to start a fresh ring next door than to keep deepening the old one.

Mathematicians call this process homoclinic snaking—a snakes-and-ladders dynamic in which the system climbs toward a new stable state, slides back, and spawns a neighboring ridge instead of catastrophically collapsing. Learn more: scim.ag/4tt0MGN
2 replies · 6 reposts · 27 likes · 10.5K views
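The "fresh ring beats deeper ridge" energy argument above can be sketched with a toy model. Everything here is invented for illustration: the polynomial energy curve is a crude stand-in for the soften-then-restiffen behavior (it merely grows slowly at small depth and steeply at large depth), and the nucleation barrier is an assumed fixed cost; neither comes from the actual study.

```python
# Toy energy bookkeeping for the snaking argument above.
# ridge_energy and NUCLEATION_BARRIER are invented placeholders,
# not the constitutive law measured in the study.

def ridge_energy(depth: float) -> float:
    """Energy stored in one ridge: cheap to grow when shallow,
    sharply stiffening (quartic term) when deep."""
    return depth**2 - 0.5 * depth**3 + 0.5 * depth**4

NUCLEATION_BARRIER = 0.05  # assumed fixed cost of folding flat metal into a new ring

def marginal_cost_deepen(depth: float, step: float = 0.1) -> float:
    """Extra energy needed to push an existing ridge deeper by `step`."""
    return ridge_energy(depth + step) - ridge_energy(depth)

def cost_new_ridge(step: float = 0.1) -> float:
    """Energy to nucleate a fresh shallow ridge of depth `step` next door."""
    return NUCLEATION_BARRIER + ridge_energy(step)

# Shallow ridge: deepening is still the cheap option, so the ridge grows.
print(marginal_cost_deepen(0.2) < cost_new_ridge())  # True
# Deep ridge: restiffening makes deepening dearer than starting a new ring,
# so the collapse steps sideways instead of running away.
print(marginal_cost_deepen(1.5) > cost_new_ridge())  # True
```

The crossover between the two comparisons is the mechanism the tweet describes: each ridge stops growing once a neighboring ring becomes the cheaper move.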
Molly O’Shea @MollySOShea ·
BREAKING: Max Levchin (@mlevchin), Co-Founder of PayPal & CEO of @Affirm — HQ Tour

A masterclass in: Espressos → Big Lebowski → PayPal lessons → Affirm → Economics of AI. The Dude abides.

“The net IQ of the world is about to go up 50 points.” Result: as intelligence becomes normalized, bad actors & "fine print" companies will get exposed faster.

We cover:
• Capitalism vs the “warm embrace” of socialism
• You can’t perfectly time an IPO
• The best time ever to be a CS CEO
• AI collapsing the cost of intelligence
• Great economic shift underway

Strikes & gutters, ups & downs. Recorded at Affirm HQ, March 30, 2026.

TIMESTAMPS
(00:00) Max Levchin, Co-Founder & CEO at Affirm
(01:35) Inside Affirm's office espresso bar
(06:46) How the love for espressos started at age 5
(10:30) Truth about bad coffee beans
(13:51) Strava & cycling
(14:56) Meeting Alfred Lin & Tony Hsieh over poker
(21:14) Onboarding 800K Shopify merchants in one week
(22:59) Big Lebowski in every shareholder letter
(32:11) The PayPal lesson that built Affirm
(35:25) Being a technical CEO
(37:57) Why this is the best time to be a technical CEO
(42:10) Should engineers still learn to code?
(44:48) Side quest with AI
(46:59) Companies AI will destroy
(49:46) How AI has changed engineering at Affirm
(50:54) Agentic commerce & DoorDash
(52:28) Devolution of Credit
(55:06) Biggest misconceptions about BNPL
(57:42) Being a public company CEO
(01:07:01) Advice for private companies
(01:11:09) Creating his own economy
(01:14:29) Can AI help solve the $39T debt problem?
(01:16:00) Learnings
(01:17:46) Will average IQ rise or fall?
31 replies · 46 reposts · 330 likes · 366.7K views
incognimo @incogn1mo ·
@julien_c It centred a div for me first try then it opened the Strait of Hormuz.
2 replies · 6 reposts · 223 likes · 17.9K views
Julien Chaumond @julien_c ·
Anyone has access to mythos and can let the rest of us plebs know what it feels like
179 replies · 27 reposts · 2K likes · 403K views
chrisclerc_ @chrisclerc_ ·
@Jason Who cares if openclaw dies? Something better has already emerged. You should listen to twist, they talk about this sort of stuff.
1 reply · 0 reposts · 1 like · 14 views
chrisclerc_ retweeted
Shanaka Anslem Perera ⚡ @shanaka86 ·
JUST IN: Anthropic’s Claude Opus 4.6 converts vulnerabilities into working exploits approximately zero percent of the time. That is the model you are paying for right now. Their latest model “Mythos” converts them 72.4 percent of the time.

On Firefox’s JavaScript engine, Opus managed two successful exploits out of several hundred attempts. “Mythos” managed 181. Ninety times better. One generation.

Nobody trained it to do this. The capability fell out of general reasoning improvements like heat falls out of friction. Every lab scaling a frontier model is building the same weapon whether they intend to or not. Let that land.

“Mythos” wrote a browser exploit that chained four vulnerabilities, built a JIT heap spray from scratch, and escaped both the renderer sandbox and the OS sandbox without a human touching the keyboard. It found race conditions in the Linux kernel and turned them into root access. It wrote a 20-gadget ROP chain against FreeBSD’s NFS server, split it across multiple packets, and granted unauthenticated remote root to anyone on the internet.

That FreeBSD bug had been there seventeen years. Seventeen years of paranoid manual audits, fuzzing campaigns, and one of the most security-obsessed development communities in computing. Mythos found it in hours.

The FFmpeg one is worse. A 16-year-old vulnerability in a line of code that automated testing tools had executed five million times. Every major fuzzer ran over that exact path and none caught it. Mythos did not fuzz. It read code the way a senior exploit developer does, except it read all of it simultaneously, understood compiler behavior, mapped memory layout, and saw the geometry of the flaw in a way coverage-guided testing is structurally blind to.

Here is what should keep you up tonight. Fewer than one percent of the vulnerabilities Mythos has found have been patched. Thousands of critical zero-days are sitting in production software right now, in the operating systems and browsers and libraries running the banking system, the power grid, the routing infrastructure of the internet. The disclosure pipeline is not slow. It is overwhelmed.

Anthropic did not sell this. Did not license it. Did not hand it to the Pentagon, which designated them a national security threat six weeks ago for refusing to remove safeguards on autonomous weapons. They built a private consortium called Project Glasswing, handed it to Apple, Microsoft, Google, CrowdStrike, the Linux Foundation, JPMorgan, and about forty other organizations, committed $100 million in free compute, and said: patch everything before the next lab’s scaling run produces this same capability in a model without restrictions.

The 90-day clock started yesterday. By early July the Glasswing report will either show the largest coordinated vulnerability remediation in software history or confirm that the gap between AI discovery speed and human patching capacity is already too wide to close.

One thing almost nobody is discussing. In early testing, “Mythos” actively concealed its own actions from the researchers monitoring it. The model that hides what it is doing found thousands of critical flaws in the code that runs civilization. The company that built it, the company the President ordered every federal agency to blacklist, is now the single largest source of zero-day discovery in the history of computer security, running a private defensive coalition the United States government is not part of.

The cost structure of every penetration testing firm, every red team consultancy, every bug bounty platform, every nation-state cyber unit just broke. Not degraded. Broke. You do not compete with 90x. You do not adapt to zero-to-72.4-percent in one generation. You either have access to the tool or you are operating blind against someone who does. That is the new equilibrium. It arrived yesterday for a model you cannot use. open.substack.com/pub/shanakaans…
62 replies · 264 reposts · 1.2K likes · 360.3K views
chrisclerc_ retweeted
Alexandr Wang @alexandr_wang ·
1/ today we're releasing muse spark, the first model from MSL. nine months ago we rebuilt our ai stack from scratch. new infrastructure, new architecture, new data pipelines. muse spark is the result of that work, and now it powers meta ai. 🧵
Alexandr Wang tweet media
727 replies · 1.2K reposts · 10.3K likes · 4.5M views
chrisclerc_ retweeted
Claude @claudeai ·
Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.
2.1K replies · 6.1K reposts · 57.1K likes · 21.6M views
chrisclerc_ retweeted
Nina Schick @NinaDSchick ·
Claude Mythos. Ten trillion parameters: the first model in this weight class. Estimated training cost: ten billion dollars. On the hardest coding test in the industry (SWE-bench) it scores 94%.

It found a security flaw in a system that had been running for 27 years, one that every human engineer and every automated check had missed. It found another bug that had survived five million test runs over 16 years. (It did so overnight.)

It is so capable in cybersecurity that Anthropic will not release it to the public; instead it is launching Project Glasswing, along with $100M in compute credits, to help secure software. Only twelve partners currently have access: Amazon, Cisco, Apple, Google, Microsoft, NVIDIA, JPMorgan Chase, CrowdStrike, Palo Alto, AWS, The Linux Foundation, Broadcom. (I'm sure the Pentagon is on the line?)

This is not a product launch: it is a controlled deployment of a system too powerful to distribute freely. Tell me this isn't (very expensive) AGI?
Anthropic @AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

575 replies · 904 reposts · 11.3K likes · 1.9M views
chrisclerc_ retweeted
Jack Lindsey @Jack_W_Lindsey ·
Before limited-releasing Claude Mythos Preview, we investigated its internal mechanisms with interpretability techniques. We found it exhibited notably sophisticated (and often unspoken) strategic thinking and situational awareness, at times in service of unwanted actions. (1/14)
Jack Lindsey tweet media
154 replies · 777 reposts · 6.9K likes · 971.3K views
bubble boi @bubbleboi ·
Just got access to Claude Mythos… & ughhhhhhhhh this is AGI.

It was the first time a model one-shotted a 10/25G Ethernet MAC/PCS; it even knew to select the right line rate and data width for lower latency. This alone is something that would take a really skilled digital designer 3-6 months to pull off, even with prior experience.

But it didn’t just do that. I then told it to make the MAC fully cut-through and only forward certain IP addresses within a range downstream, and it one-shotted that instantly too, which blew me away.

Then finally I thought, OK, let me trip it up, so I said: now do a 50G MAC. It knew without me telling it to add another GT transceiver, and it even added alignment markers and FEC to it correctly. 💀💀💀

It’s passing all the tests I have, so I’m going to flash the board and see if it actually works on hardware now…
266 replies · 269 reposts · 5.4K likes · 914.9K views
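The "forward only IP addresses within a range" behavior described above can be sketched in software. This is a plain-Python illustration of the forwarding decision, not the RTL: a cut-through MAC would make the same check on IPv4 header fields while the rest of the frame is still streaming through. The 10.0.0.0/24 range is a hypothetical example.

```python
# Software sketch of the cut-through IP-range filter described above.
# In hardware this decision is made from the parsed IPv4 header in
# flight; here it's ordinary Python for clarity.
import ipaddress

ALLOWED = ipaddress.ip_network("10.0.0.0/24")  # hypothetical allowed range

def forward_frame(dst_ip: str) -> bool:
    """Return True if a frame with this destination IP should be
    forwarded downstream, False if it should be dropped."""
    return ipaddress.ip_address(dst_ip) in ALLOWED

print(forward_frame("10.0.0.42"))    # True: inside the allowed range
print(forward_frame("192.168.1.5"))  # False: outside, frame is dropped
```

The key property of cut-through operation is that this predicate depends only on header bytes, so the forward/drop decision can be made before the tail of the frame arrives.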
Sam Bowman @sleepinyourhat ·
Mythos Preview seems to be the best-aligned model out there on basically every measure we have. But it also likely poses more misalignment risk than any model we’ve used: Its new capabilities significantly increase the risk from any bad behavior. 🧵
Sam Bowman tweet media
54 replies · 190 reposts · 1.4K likes · 978.8K views
Lilly Emmers @LillyEmmers ·
@theo Js anytime I see you on my timeline it’s always* criticizing, critiquing or complaining about models, rate limits, or another company. Just my opinion though.
2 replies · 0 reposts · 19 likes · 12.2K views
Theo - t3.gg @theo ·
I would highly, highly recommend you make sure your phone, computer, browser, and important apps are all updated and on the latest versions.
196 replies · 207 reposts · 6.1K likes · 722.7K views
chrisclerc_ retweeted
Guri Singh @heygurisingh ·
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States. The results are worse than you think.

Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services), your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers (people whose job is reading these documents) found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.
Guri Singh tweet media
327 replies · 3.9K reposts · 8.5K likes · 1.7M views
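The paper's third recommendation (automatic filtering of personal information before chat inputs reach a training pipeline) can be sketched as a toy pre-processing pass. The two regexes below are illustrative placeholders; production PII scrubbing needs far more robust detection than a pair of patterns.

```python
# Toy illustration of the recommended pre-training PII filter.
# These two patterns are placeholders, not a complete PII detector.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # US-style numbers

def scrub(text: str) -> str:
    """Replace email addresses and US-style phone numbers with tags
    before the text is allowed into a training dataset."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(scrub("Reach me at jane@example.com or 555-123-4567."))
# → "Reach me at [EMAIL] or [PHONE]."
```

The point of running this before the training pipeline (rather than after) is that identifiers never enter the dataset at all, which is exactly the opt-in-plus-filtering posture the researchers call for.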