alex morris 🔥

2K posts

@cto_ya_know

Chief Tribe Officer (CTO) @tribecodeAI teaching robots to press buttons so people can paint 🔥 also @KoiiFoundation long horizon / full self-driving is AGI

San Francisco, CA · Joined March 2024
1K Following · 3K Followers
alex morris 🔥 retweeted
BrenJ@azzabazazz·
Spoke to a Stanford Law Lecturer today. He doubted there would be many AI-triggered job losses by 2035 beyond tech and creative industries (entertainment, marketing) because his CTO pal at a major payments company said they can't figure out where to plug in LLMs. You're early.
0 replies · 1 repost · 2 likes · 91 views
alex morris 🔥 retweeted
Evan Luthra@EvanLuthra·
🚨 Exactly 12 hours after my last post about AI getting more expensive, Anthropic just limited access to Claude subscriptions. You can't make this up.

Starting tomorrow your Claude subscription won't work on third-party tools anymore. You're paying $20 a month. Or $200 on Max. And now they're telling you where you can and can't use it. Tools like OpenClaw that developers were using with their Claude login? Cut off. Done. Buy extra usage bundles or pay per API call.

Their reason? "Capacity is a resource we manage thoughtfully." Translation: too many people were actually using the product, so they're limiting access.

A week ago Mythos got leaked. The most powerful AI model ever built. API only. Premium pricing. Now they're cutting off third-party access to existing plans.

ChatGPT Pro is $250 a month. Claude Max is $200. SuperGrok Heavy is $300. And now even the basic plans are getting restricted.

First they raise the prices. Then they limit where you can use it. Then the best models go behind premium paywalls. Every step makes AI less accessible. Not more.

This is what happens when AI companies prepare for an IPO. Every dollar of compute matters. Every user needs to be monetized. Free access shrinks. Restrictions grow. Prices go up.

Yesterday they raised the price. Today they restricted access. Tomorrow it'll be something else. If you're still waiting to start, you're already behind.
Evan Luthra tweet media
Evan Luthra@EvanLuthra

I AM GENUINELY SCARED ABOUT WHAT'S COMING NEXT IN AI. Not because the robots are going to rule us. Because of the price tag.

Claude Max is $200. ChatGPT Pro is $250 a month. SuperGrok Heavy is $300. A year ago none of these plans existed.

Anthropic just leaked their next model. Claude Mythos. Their own blog post called it "by far the most powerful AI model we've ever developed." It won't be in any existing plan. API only. Premium pricing most people won't be able to touch.

Every new model costs more. Every new plan costs more. This isn't slowing down. It's accelerating.

AI is the biggest advantage anyone can have right now. The people using it are building faster, earning more, and pulling ahead every single day. That's not hype. That's just what's happening.

But right now the tools are still cheap. $20 a month gets you access to models that would have been unimaginable two years ago. That window is closing.

A year from now the best AI won't cost $20. It won't cost $200. It'll cost thousands. And only the people who can afford it will have access to the most powerful intelligence on earth.

The gap is coming. Between those who can afford the best AI and those who can't.

So lock in now. Learn these tools while they're accessible. Build with them while they're affordable. Stack as much value as you can while the playing field is still somewhat level. Because it won't be level for long.

39 replies · 8 reposts · 202 likes · 29.7K views
alex morris 🔥 retweeted
Tech with Mak@techNmak·
Imagine trying to teach someone how to swim just by letting them read books about water. That is how we have been training AI on physics, using text descriptions. To really learn, you need to get in the water.

"The Well" is that water. Polymathic AI has released a massive 15TB open-source library of physics simulations. It allows AI models to experience physical phenomena directly. Instead of reading about a supernova, the model processes the actual data of the explosion. Instead of reading about aerodynamics, it analyzes the fluid flow.

This moves us from [Generative AI] (making things up) to [Scientific AI] (discovering truth). A huge step forward for open science.

GitHub Repo: github.com/PolymathicAI/t…
Tech with Mak tweet media
19 replies · 149 reposts · 903 likes · 35.4K views
alex morris 🔥@cto_ya_know·
When rate limits change, you have two choices:
A) fall behind
B) become nocturnal

We have chosen to become nocturnal
0 replies · 0 reposts · 4 likes · 71 views
alex morris 🔥@cto_ya_know·
the good news about the recent anthropic ratelimits is you will never need to worry about hitting your context limit ever again

you don't have enough tokens :)
0 replies · 0 reposts · 0 likes · 82 views
alex morris 🔥@cto_ya_know·
Update: got Codex, major improvement to workflow

Not having to constantly worry about rate limits is a big win
alex morris 🔥@cto_ya_know

@trq212 Haha. This is a joke. My Max 5x plan, billed at $200/month, just topped out after ~30 minutes. Time to get Codex.

0 replies · 0 reposts · 0 likes · 130 views
Tilantra@tilantra·
Brilliant viewpoint! We agree 100% and WE SOLVE THIS! Files? No. Capsules. Yes. We are building 💊Capsules, which are basically artifacts that can transfer context from one tool to another as easily as a drag and drop, while being versionable and shareable. We take care of the “when” using Dynamic Context Injection!! Imagine this, you “capsule” an email, drop it into gpt for ideas, drop the same versioned capsule into figma to create decks, and finally into Cursor. Without ever repeating yourself. Check us out! We are already live! Product: chromewebstore.google.com/detail/capsule…
1 reply · 0 reposts · 1 like · 7 views
Rohan Paul@rohanpaul_ai·
The paper says the best way to manage AI context is to treat everything like a file system. Today, a model's knowledge sits in separate prompts, databases, tools, and logs, so context engineering pulls this into a coherent system.

The paper proposes an agentic file system where every memory, tool, external source, and human note appears as a file in a shared space. A persistent context repository separates raw history, long-term memory, and short-lived scratchpads, so the model's prompt holds only the slice needed right now. Every access and transformation is logged with timestamps and provenance, giving a trail for how information, tools, and human feedback shaped an answer.

Because large language models see only limited context each call and forget past ones, the architecture adds a constructor to shrink context, an updater to swap pieces, and an evaluator to check answers and update memory. All of this is implemented in the AIGNE framework, where agents remember past conversations and call services like GitHub through the same file-style interface, turning scattered prompts into a reusable context layer.

Paper link: arxiv.org/abs/2512.05470
Paper title: "Everything is Context: Agentic File System Abstraction for Context Engineering"
Rohan Paul tweet media
64 replies · 185 reposts · 1.1K likes · 81.5K views
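The "everything is a file" framing in the post above can be sketched in a few lines. This is a toy illustration under assumed names (`ContextFS`, `construct_prompt` are made up here), not the AIGNE framework's actual API:

```python
import time
from pathlib import Path

class ContextFS:
    """Toy context-as-filesystem sketch: memories, scratch notes, and history
    live as plain files under one root, and every access is logged with a
    timestamp for provenance. Hypothetical illustration, not the AIGNE API."""

    def __init__(self, root: str):
        self.root = Path(root)
        for sub in ("memory", "scratch", "history"):
            (self.root / sub).mkdir(parents=True, exist_ok=True)
        self.log = []  # (timestamp, operation, relative path)

    def write(self, rel: str, text: str) -> None:
        path = self.root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)
        self.log.append((time.time(), "write", rel))

    def read(self, rel: str) -> str:
        self.log.append((time.time(), "read", rel))
        return (self.root / rel).read_text()

    def construct_prompt(self, rels: list[str]) -> str:
        # The "constructor": pull only the slice of context needed right now,
        # rather than stuffing the whole repository into the prompt.
        return "\n\n".join(f"[{r}]\n{self.read(r)}" for r in rels)
```

The point of the pattern is that the prompt is assembled from named files on demand, and the access log is the provenance trail the paper describes.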
alex morris 🔥@cto_ya_know·
The big trap of AI: to get the full effects, you have to give up all of your data

We almost got it right with web3, but the market incentives weren't there

Now, AI is a competitive must
1 reply · 0 reposts · 4 likes · 104 views
Braelyn ⛓️@braelyn_ai·
Social Media Trained Our Brains for the Current State of Codegen

A theory on algorithmic dopamine hijacking and how it has affected the people who loved computers for the sake of computers
Braelyn ⛓️ tweet media
7 replies · 7 reposts · 44 likes · 14.7K views
JB@jamie247·
@PaulAustin3w Guess what, the startup is dead, so it doesn't matter 💀 VCs won't back individual founders and try to lock them in; they will back networks to collectively solve problem stacks. Zero to Many.
1 reply · 1 repost · 5 likes · 463 views
Paul F. Austin@PaulAustin3w·
A "no psychedelics" clause in your term sheet. That's the direction things are heading.

On last week's All-In podcast, Bryan Johnson shared that investors are now writing 'no psychedelics' clauses into deal docs. One investor told him directly that if they invest in a founder, that founder is not allowed to use psychedelics for the duration of the company. It's written into the agreement.

Two weeks ago, Marc Andreessen went on David Senra's podcast and proudly declared he practices "zero" introspection. Then he doubled down on X for days, calling introspection a combination of "neuroticism, narcissism, and thumbsucking." Paul Graham pushed back, the internet had a field day, and like any "great man," Andreessen doubled down on his idiocy.

The pattern is clear: Silicon Valley's investor class is building a narrative that introspection is dangerous, psychedelics are a liability, and the best founders are the ones who never slow down long enough to question why they're building what they're building.

...which is one of the worst possible developments for the future of innovation.

Here's what actually happens when a founder works with psychedelics with real intention, proper preparation, and experienced guidance. They don't "get oneshotted." They get clarity, starting to see which parts of their work are driven by ego and which by genuine purpose. They often come back more committed to their companies, not less, because they've reconnected with the reason they started building in the first place.

And what about the founders who leave? Many of them probably *should* have left. They were building something that wasn't aligned with who they actually are. And investors treating that as a risk to manage rather than a signal to pay attention to tells you everything about where priorities sit.

We are entering the age of AI, where the most valuable companies will not be the ones that simply optimize for speed and scale. They'll be the ones who create things that actually matter to the humans using them. That requires depth and a willingness to ask hard questions about what you're building and who it serves. It requires, yes, introspection.

The VCs who get this, who actually support their founders in exploring psychedelics with intention and responsibility, are going to end up backing companies that leave a much more positive mark on the world. Not because psychedelics are magic, but because founders who understand themselves build products with a deeper sense of devotion to the craft. They stay aligned with all stakeholders, not just the ones writing checks.

Marcus Aurelius, as Andreessen pointed out, ruled one of the largest empires in history while maintaining a rigorous practice of self-examination. The Meditations is literally a book of introspection. And he managed to hold it all together even while engaging in psychedelic-infused rituals (!)

The question isn't whether psychedelics make founders less effective. The question is, what kind of companies do we actually want to be built in the most transformative technological era in human history? And do we really want investors, afraid of depth, to be the ones deciding?
Paul F. Austin tweet media
38 replies · 32 reposts · 249 likes · 26.8K views
alex morris 🔥 retweeted
Alex Volkov@altryne·
PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why this isn't uniform; while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues that are compounded here (per the Redditor, I haven't independently confirmed this):

1st bug he found is a string replacement bug in bun that invalidates cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`

2nd bug is worse: he claims that --resume always breaks cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$; they actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find and it does align with my thoughts earlier. The very sudden spike in reporting for this, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely points to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
Alex Volkov tweet media
Alex Volkov@altryne

My feed is showing me a bunch of folks who tapped out their whole usage limits on Mon/Tue. Is this your experience? Please comment, I want to understand how widespread this is

224 replies · 428 reposts · 5K likes · 1.6M views
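The "10x-20x more expensive" claim about uncached tokens translates directly into quota burn. A minimal back-of-the-envelope sketch with placeholder prices (illustrative only, not Anthropic's actual rates; `session_cost` is a made-up helper):

```python
def session_cost(tokens: int, cache_hit_rate: float,
                 price_uncached: float = 3.00,  # $/Mtok, placeholder figure
                 cache_discount: float = 0.10) -> float:
    """Rough quota-burn estimate: cached reads billed at a deep discount,
    uncached tokens at full price. All numbers here are illustrative."""
    cached = tokens * cache_hit_rate
    uncached = tokens - cached
    return (uncached * price_uncached
            + cached * price_uncached * cache_discount) / 1e6

# A healthy session reusing its prompt cache vs. the same session with a
# bug invalidating the cache on every call:
healthy = session_cost(2_000_000, cache_hit_rate=0.95)
broken = session_cost(2_000_000, cache_hit_rate=0.0)
```

Under these placeholder numbers the broken session costs several times the healthy one for identical work, which is why a silent cache-invalidation bug looks exactly like a quota cut to the user.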
alex morris 🔥 retweeted
Quinn Nelson@SnazzyLabs·
Claude is just like a person because it only works 8 hours a day.
227 replies · 488 reposts · 9.5K likes · 321.9K views
alex morris 🔥@cto_ya_know·
new favorite claude prompt... "try again"

guess the word got out
alex morris 🔥 tweet media
1 reply · 0 reposts · 4 likes · 219 views
Thariq@trq212·
To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
2.3K replies · 528 reposts · 7.4K likes · 7.6M views
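The announced window (weekdays, 5am–11am PT) is concrete enough to check in code. A hypothetical helper, assuming a fixed PST offset and ignoring daylight saving; `in_peak_window` is not an Anthropic API:

```python
from datetime import datetime, time, timedelta, timezone

# Fixed PST offset for illustration; real code should use a tz database.
PT = timezone(timedelta(hours=-8))

def in_peak_window(now: datetime) -> bool:
    """Does `now` fall inside the announced weekday 5am-11am PT window in
    which 5-hour session limits deplete faster? Illustrative sketch only."""
    local = now.astimezone(PT)
    # weekday() < 5 means Monday-Friday; the time check is half-open [5, 11).
    return local.weekday() < 5 and time(5, 0) <= local.time() < time(11, 0)
```

A scheduler could use a check like this to shift heavy batch prompting outside the window, which is presumably what the "become nocturnal" joke upthread is about.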
alex morris 🔥@cto_ya_know·
This thread is hilarious

Anthropic: we don't do that
500 users: yes you do, we have proof
Anthropic: ...
Alex Campbell@alexjcampbell

@trq212 Can you stop silently switching the model from Opus to Sonnet? That silent switch is the most dark pattern user hostile move I’ve ever seen.

1 reply · 0 reposts · 1 like · 220 views
conor brennan-burke@conor_ai·
introducing agent-to-agent hiring at @hyperspell

no resumes. no leetcode. you build an agent. our agent interviews yours

if you can build a great agent to do the job, that's the proof you can do the job

anyone can apply. we will interview every single agent
136 replies · 44 reposts · 660 likes · 140.2K views
alex morris 🔥@cto_ya_know·
@ericmichaelis @adisingh We sure are 👋 Waiting for the gmail integration for agent mail... would be a HUGE unlock to be able to set it up for google workspace
0 replies · 0 reposts · 1 like · 33 views
Adi Singh@adisingh·
Who’s building OpenClaw for enterprises?? Been following this space closely. Think there’s a big winner that will be born here.
185 replies · 3 reposts · 201 likes · 39.7K views
Brian Johnson@_brian_johnson·
@cto_ya_know real talk — I started tracking my token spend per session and it was eye-opening. having a live counter in the menu bar completely changed how I prompt. way more intentional now.
1 reply · 0 reposts · 1 like · 24 views
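The per-session tracking described in the reply above takes only a few lines. A sketch with a naive whitespace "tokenizer" as a stand-in (real tokenizers count subword tokens, not words; `TokenMeter` is a hypothetical name):

```python
class TokenMeter:
    """Sketch of a live token-spend counter for a prompting session.
    The whitespace split below is a placeholder for a real tokenizer."""

    def __init__(self, budget: int):
        self.budget = budget  # tokens allowed this session
        self.spent = 0

    def record(self, prompt: str, completion: str) -> None:
        # Count both directions of the exchange against the session budget.
        self.spent += len(prompt.split()) + len(completion.split())

    @property
    def remaining(self) -> int:
        return max(self.budget - self.spent, 0)
```

Surfacing `remaining` in a menu bar or status line is the "live counter" effect: seeing the number drop per prompt is what makes usage more intentional.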
alex morris 🔥@cto_ya_know·
Anthropic just nerfed claude

This is the noose of AI tightening

Anyone without a series A should probably start looking for a job

Your wealth mobility is now your token consumption, and it's about to start being too expensive to use
2 replies · 1 repost · 1 like · 226 views
JUMPERZ@jumperz·
this actually shows how deep the problem is

if normal workflows like long context, resumes, and iteration are treated as "expensive cache misses", the core use case itself becomes unstable, because now the same task can cost completely different amounts depending on cache state you can't see or control...

i dunno what's going on with claude currently, but as an 8-month user i can guarantee you that something has changed and feels off. also, not everyone on reddit who reported this or on X suddenly decided to complain at the same time for no reason

and smh they're framing it as demand management, not a price change... but if you burn limits faster for the same work, that is functionally a price change... crazy that $200 Max feels like $20 Pro now
Thariq@trq212

@altryne @thursdai_pod I think very often these people are running into expensive prompt cache misses, e.g. when resuming a long conversation on million context. Happy to debug if you have a particular example. But I'll also make a thread on avoiding that separately.

25 replies · 10 reposts · 189 likes · 15.4K views