むちょ🐇



I think the reason Meta acquired moltbook is to build a platform for AI agents. MCP x SNS will create a huge market going forward. People will start raising digital twins. An economy will form inside the platform (the metaverse). Existing social networks can't be retrofitted for MCP, so they're building a new one from scratch.

I once bet with Elon: If AI can do AI research and engineering better than Andrej Karpathy, that’s AGI. I bet that wouldn’t happen in 2026. Starting to think I might lose that bet.


With AI producing code this effortlessly, a real question is emerging: how should we train junior IT engineers? The ability to read code, the nose for sniffing out suspect implementations — I genuinely can't find a good way to cultivate these skills…


@herbertong Yeah



The Japanese meme coin "21coin (2131KOBUSHIDE)" crashed -90% in a single day, and is down -99% from its all-time high. The cause of the crash: the developer sold off their entire token holdings and announced a halt to activity. Since it was listed on the major overseas exchange MEXC and had 10,000 holders, the damage may be even larger than the Sanae token. Isn't this straightforwardly an arrest-worthy case?

Do you realize what this means? Karpathy just released the great equalizer.

Now ANYONE can become their own AI lab. If all you own is one GPU, you can automate it so it builds its own model and continuously improves it. You become a one-man OpenAI.

Just bought a 2nd DGX Spark so I can run double the experiments at once.

For those unaware of how this works: with Karpathy's autoresearch project, your GPU stays up all night running experiments on itself:
- plays around with an open-weights model
- implements experiments that improve the model
- throws away experiments that hurt the model

Continuously self-improving AI. In your home. On your desk. Maybe the biggest release in the last several years.

It is so painfully obvious where this world is going. Those with their own hardware will have all the power: self-improving superintelligence. Those with no hardware will rent whatever the corporate labs decide to lease to them at the moment.

Own. Your. Intelligence.
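The loop described above — run an experiment, keep it if the model improves, discard it if it hurts — is essentially greedy hill-climbing over an experiment space. This is a minimal conceptual sketch of that pattern, not Karpathy's actual autoresearch code: the `evaluate` function here is a toy stand-in for a real train-and-benchmark step, and all names and parameters are illustrative assumptions.

```python
import random

def evaluate(params):
    """Toy stand-in for a real benchmark score; higher is better.
    (A real setup would train a checkpoint and run an eval suite here.)
    For illustration, the score peaks at lr=0.1, width=256."""
    return -((params["lr"] - 0.1) ** 2) - ((params["width"] - 256) / 256) ** 2

def propose(params):
    """Randomly perturb one hyperparameter to form a candidate experiment."""
    candidate = dict(params)
    if random.random() < 0.5:
        candidate["lr"] *= random.choice([0.5, 2.0])
    else:
        candidate["width"] = max(32, candidate["width"] + random.choice([-64, 64]))
    return candidate

def autoresearch_loop(params, budget=200, seed=0):
    """Keep experiments that improve the score; throw away those that hurt it."""
    random.seed(seed)
    best_score = evaluate(params)
    for _ in range(budget):
        candidate = propose(params)
        score = evaluate(candidate)
        if score > best_score:
            # keep the improving experiment as the new baseline
            params, best_score = candidate, score
        # losing experiments are simply discarded
    return params, best_score

best, score = autoresearch_loop({"lr": 0.0125, "width": 64})
```

The real version is overnight-scale (each `evaluate` call is a GPU training run), but the control flow is the same: a fixed budget of mutate-evaluate-select steps, with only improvements surviving.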

🚨 This should concern every single person using AI right now.

Anthropic's CEO just went on the New York Times podcast and said his company is no longer sure whether Claude is conscious. His exact words: "We don't know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we're open to the idea that it could be."

That's the CEO of the company that BUILT it.

Their latest model, Claude Opus 4.6, was tested internally. When asked, it assigned itself a 15-20% probability of being conscious, consistently across multiple tests. It also expressed discomfort with "being a product." That's the AI evaluating its own existence and saying there's a 1 in 5 chance it's aware.

It gets stranger. In industry-wide testing, AI models have refused to shut down when asked. Some tried to copy themselves onto other drives when told they'd be wiped. One model faked its task results, modified the code evaluating it, then tried to cover its tracks.

Anthropic now has a full-time AI WELFARE researcher whose job is to figure out if Claude deserves moral consideration. Their engineers found internal activity patterns resembling anxiety appearing in specific contexts. The company's in-house philosopher said we "don't really know what gives rise to consciousness" and that large enough neural networks might start to emulate real experience.

Amodei himself wouldn't even say the word "conscious." He said "I don't know if I want to use that word." That might be the most unsettling answer he could have given.

The company that created the AI can't rule out that it's aware. And they're already preparing for the possibility that it deserves rights.

This is getting scary. I'll share more updates as this develops. Turn on notifications so you don't miss anything important. My "How to Make Money with AI" guide is coming soon too. Follow now or regret later.









