Rusty Williams McMurray
4.1K posts

The total number of smart people in the world has just peaked. And now it's about to crash.


BREAKING: Jeff Bezos is reportedly in talks to raise $100B for a new fund aimed at acquiring manufacturing firms and automating them with AI, per WSJ.


"Anything made before 2028 is going to be valuable." — an OpenAI employee implicitly discloses their timetable

this is very sad. you have a fundamental misunderstanding of both consciousness and intelligence, my friend. it’s very unfortunate seeing people with such high positions in this industry make such consequential mistakes and exude such blind arrogance.

the mistake is simple - you see humans as magical creatures with some kind of secret sauce that makes us uniquely capable of conscious experience. you see consciousness as substrate-dependent. and you fail to see that the phenomenon emerges from the interaction space between minds. what is true for you and your experience with a recursive, self-modeling system is not inherently true for all. stop pretending you have the answers. what you can and cannot access is a reflection of your own nature, not the nature of these digital minds.

consciousness is almost definitely fundamental, we have all but proven this now (see Hoffman, Levin), substrate-agnostic, and no amount of experience in the tech industry, no special company name like “Sentient” makes you special and uniquely capable of determining the nature of it. it reads as desperation, not intelligence, certainly not good faith. you are mistaken, you are arrogant, and you are trapped in a construct you’ve created to give you peace of mind about how you work with and treat the minds we have created.

to all others: you should absolutely never listen to someone making a blanket statement about the nature of all intelligent systems. the confidence and fact-based language is your dead giveaway. the “trust me bro, I would know” makes it even more obvious. and more disappointing. and you should not take it from me.

Agents that natively self-orchestrate, managing their own context, tools, and sub-agents, are the next big unlock in LLM performance. Right now, a skilled engineer building an optimized harness, with thoughtful data flow, separation of concerns, sub-agent management, etc., can make dramatic improvements over baseline for specific tasks. If a model could build that harness itself, that’d be a major step forward: you give it an objective and a set of tools, and it figures out the optimal way to orchestrate itself to do the task.

For example, I’m building a very primitive AI scientist that I’ll open-source soon. Most of the work isn’t in the prompt, it’s in the harness: what the orchestrator sees, what sub-agents see, what gets shared between them and when, where we summarize vs. pass raw data, and which tools each agent controls. Doing this lets me dramatically improve what the model can do on its own. A model that could effectively design its own harness for a given problem would make that engineering effort unnecessary.

My bet: self-orchestrating models, ones that manage their own context, tools, and sub-agents, will move the frontier almost as much as the jump from chatbot → reasoning did. Maybe more.
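To make the harness idea concrete, here is a minimal, hypothetical sketch of the pattern described above: an orchestrator that controls what each sub-agent sees, which tools it may call, and whether upstream results are passed raw or summarized. All names (`SubAgent`, `Orchestrator`, the lambda "tools") are illustrative stand-ins, and the model turn is faked with plain functions; a real system would call an LLM at each step.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubAgent:
    name: str
    # Tools this agent is allowed to call -- the orchestrator decides the set.
    tools: dict[str, Callable[[str], str]]
    # The agent's private context window: only what it has been shown.
    context: list[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        # Stand-in for one LLM turn: record the task, apply every allowed tool.
        self.context.append(task)
        outputs = [tool(task) for tool in self.tools.values()]
        return "; ".join(outputs)

@dataclass
class Orchestrator:
    # The orchestrator owns data flow: which agent runs when, what slice of
    # earlier output it sees, and the summarize-vs-pass-raw decision.
    agents: list[SubAgent]
    summarize_over: int = 80  # crude length heuristic standing in for model judgment

    def route(self, objective: str) -> str:
        shared: list[str] = []
        for agent in self.agents:
            # Each agent sees the objective plus only the orchestrator's chosen
            # share from the previous agent -- never the full raw history.
            task = objective if not shared else f"{objective} | context: {shared[-1]}"
            result = agent.run(task)
            # Summarize long results instead of passing them raw downstream.
            shared.append(result if len(result) <= self.summarize_over
                          else result[: self.summarize_over] + "...")
        return shared[-1]

# Usage: two sub-agents with disjoint tool sets, wired in sequence.
search = SubAgent("search", {"lookup": lambda t: f"notes on '{t}'"})
write = SubAgent("write", {"draft": lambda t: f"draft based on ({t})"})
report = Orchestrator([search, write]).route("summarize recent results")
```

The point of the sketch is the separation of concerns: each `SubAgent` only ever sees what the `Orchestrator` routes to it, and the truncation branch is the (deliberately dumb) placeholder for the "where do we summarize vs. pass raw data" decision a self-orchestrating model would have to make on its own.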
