Dylan Lamb
@bydylanlamb
Marketing ops in luxury hospitality. AI enablement. Vincere aut mori.

“A draft blog post that was available in an unsecured and publicly-searchable data store prior to Thursday evening said the new model is called ‘Claude Mythos’ and that the company believes it poses unprecedented cybersecurity risks.”




To manage growing demand for Claude, we're adjusting our 5-hour session limits for Free/Pro/Max subscribers during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.


merch gifts have gone up a level ty @OpenAI


the bitter lesson is coming for search

we're open-sourcing Context-1 - a model that is better, faster, and cheaper than any frontier model at searching

we published a 40-page technical report on our website with the ins and outs of how we did it. this is just step 1





@maxrumpf @jeffreyhuber They can't be a copycat, because RL for search is, no offense to you or them, an obvious idea implemented by a ton of people: from academics working on deep research, to Cognition, to OpenAI with their deep research product. The main thing that matters is execution quality.

Chroma's "new" model sure seems familiar. A story.

Imitation is the sincerest form of flattery. But there is a point where it goes from "inspiration" to whatever Context-1 is:

6 months ago, Chroma's CEO @jeffreyhuber asked us about our research. 4 months ago, we proudly shared SID-1's tech report with him. An exchange I now understand very differently (see the emails).

Today, they released a report heavily "inspired" by ours. Charts, datasets, methods, and the whole model itself. Down to the toggle for Figure 1 and our 4x RRF rollouts.

They never reached out to benchmark our model, even though they provably knew there was another one, so their claims of "pareto-optimality" ring hollow. Unfortunately, we can't benchmark their model: while their weights are open, the harness they say one needs isn't yet.

I know Jeff well, and our offices neighbor each other. We shared a lot of insights in our tech report. Maybe more than prudent. But we believe in advancing human knowledge. (Making search better is our way of doing so.) We applaud companies like @thinkymachines that are brave enough to share the ideas that make the work possible.

But where do we go as a research community when we stop respecting each other's work? When we don't give credit where it's due? When we trick "friends" into sharing more, just to steal it? While claiming the moral high ground by calling this "open-source"?

This completely destroys any incentive for us (and others) to go into as much depth as we did in our tech report. It's sad to see the poor research practices that are common in academia making their way into startups.

Context-1 has some interesting ideas: the pruning is clever. I wish I were writing about them instead.

Followers and copycats, even if they're bigger, don't scare us. I'm very proud of what we've built. And even more proud of who I'm building it with. We're also hiring original thinkers.
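The thread above mentions "4x RRF rollouts" only in passing. Reciprocal Rank Fusion (RRF) is a standard way to merge several ranked result lists from different retrievers; a minimal sketch of the generic technique (the `k=60` constant and the toy rankings are illustrative assumptions, not details from either tech report):

```python
def rrf_fuse(rankings, k=60):
    """Merge ranked lists via Reciprocal Rank Fusion.

    Each input is a list of doc ids, best first. A document's fused
    score is the sum over lists of 1 / (k + rank), with rank starting
    at 1, so documents ranked well across many lists rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: "a" tops both lists, "b" appears in both but lower,
# "c" appears in only one, so the fused order is a, b, c.
fused = rrf_fuse([["a", "b"], ["a", "c", "b"]])
# → ["a", "b", "c"]
```

The constant `k` (commonly 60 in the literature) dampens the influence of top ranks so that no single retriever's first result dominates the fusion.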




Hello. We have reset Codex usage limits across all plans to let everyone experiment with the magnificent plugins we just launched, and because it had been a while! You can just build unlimited things with Codex. Have fun!




whenever a website has these weird colored border things, you know it's been vibe coded



Introducing Linear Agent. Built directly into Linear and accessible everywhere, it understands your roadmap, issues, and code. Ask anything. Command everything.








