

Findy AI+ now lets you see rankings of AI adoption within your organization 🎉 Specifically, you can see rankings such as ・🤖 whether AI use has taken root in your org ・👨💻 whether tokens are being used effectively — plus drill down into usage data! It's free to get started, so we'd love for you to give it a try! 🔽 Details in the replies
Findy AI+ | Delivering the latest generative-AI news

@AIPlus_Findy
Daily coverage of the latest global generative-AI trends. Master AI tools like Claude Code, Codex, and Cursor, and make Agentic Workflows the de facto standard in your development organization.


Last week, we released a preview of memories in Codex. Today, we’re expanding the experiment with Chronicle, which improves memories using recent screen context. Now, Codex can help with what you’ve been working on without you restating context.

We're launching the Anthropic STEM Fellows Program. AI will accelerate progress in science and engineering. We're looking for experts across these fields to work alongside our research teams on specific projects over a few months. Learn more and apply: job-boards.greenhouse.io/anthropic/jobs…



New signups for Copilot Pro, Pro+, and Student plans are paused to maintain service reliability for current users.
• Usage limits tightened; Pro+ offers 5X higher limits than Pro
github.blog/changelog/2026…


Claude Code fully dissected! Researchers from UCL reverse-engineered the leaked Claude Code source, and what they found changes how you should think about agent design.

Only 1.6% of the codebase is AI decision logic. The other 98.4% is operational infrastructure: permission gates, tool routing, context compaction, recovery logic, session persistence. The model reasons; the harness does everything else.

This is the opposite of what most agent frameworks do today. LangGraph routes model outputs through explicit state machines. Devin bolts heavy planners onto operational scaffolding. Claude Code gives the model maximum decision latitude inside a rich deterministic harness, and invests all its engineering effort in that harness.

The core loop is a simple while-true: call model, run tools, repeat. But the systems around that loop are where the real design lives:

• A permission system with 7 modes and an ML classifier. Users approve 93% of prompts anyway, so the architecture compensates with automated layers instead of adding more warnings.
• A 5-layer context compaction pipeline, where each layer runs only when cheaper ones fail: budget reduction, snip, microcompact, context collapse, auto-compact.
• Four extension mechanisms ordered by context cost: hooks (zero), skills (low), plugins (medium), MCP (high). Each answers a different integration problem.
• Subagents return only summary text to the parent; their full transcripts live in sidechain files. Agent teams still cost roughly 7x the tokens of a standard session.
• Resume does not restore session-scoped permissions; trust is re-established every session. That friction is the point.

The bet behind all of this is simple: as frontier models converge on raw coding ability, the quality of the harness becomes the differentiator, not the model.

Paper: Dive into Claude Code (arXiv:2604.14228)

In the next tweet, I've shared an article I wrote on agent harnesses and what every big company is building. Do check it out.
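The "simple while-true" core loop the post describes can be sketched in a few lines. This is a minimal illustration, not Claude Code's actual implementation: the model and tool here are stubs, and all names (`stub_model`, `agent_loop`, `TOOLS`) are invented for the example. The point is how little lives in the loop itself, and how everything else (permissions, compaction, routing) would wrap around it.

```python
# Minimal sketch of an agent core loop: call model, run tools, repeat.
# All names are illustrative; the real harness adds permission gates,
# context compaction, and recovery logic around this loop.

def stub_model(messages):
    # Pretend model: requests one tool call, then finishes.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"text": "done"}

# Tool registry: the harness, not the model, executes these.
TOOLS = {"read_file": lambda path: f"<contents of {path}>"}

def agent_loop(user_prompt, model=stub_model, max_steps=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):          # bounded "while true"
        reply = model(messages)
        if "tool" not in reply:         # model is done reasoning
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["args"])  # run the tool
        messages.append({"role": "tool", "content": result})
    return "max steps reached"

print(agent_loop("summarize the repo"))  # → done
```

Everything the thread lists — the 7-mode permission system, the 5-layer compaction pipeline, sidechain transcripts — would sit between the `model` call and the tool execution in a loop like this.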

Our virtual hackathon is back! Join us for a week of building with Opus 4.7 alongside developers from around the world. The Claude Code team will be in the room all week, with a prize pool of $100K in API credits.

Through the end of this weekend, we're doubling Composer 2 usage limits inside Cursor's new agents window. Enjoy!

Codex is open source, enabling anyone to build awesome applications on top of it:

// Self-Evolving Agent Protocol //

One of the more interesting papers I read this week (bookmark it if you're an AI dev).

The paper introduces Autogenesis, a self-evolving agent protocol in which agents identify their own capability gaps, generate candidate improvements, validate them through testing, and integrate what works back into their own operational framework. No retraining, no human patching, just an ongoing loop of assessment, proposal, validation, and integration.

Why this paper is worth reading: static agents age quickly. As deployment environments change and new tools arrive, the agents that survive will be the ones that can safely rewrite themselves. Autogenesis is part of a growing wave of self-improving agent systems, alongside work like Meta-Harness and the Darwin Gödel Machine line, and it's one of the cleaner protocol-level takes on continual self-improvement so far.

Paper: arxiv.org/abs/2604.15034

Learn to build effective AI agents in our academy: academy.dair.ai
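The assess → propose → validate → integrate loop the post describes can be sketched as follows. This is a toy illustration of the loop's shape, not the paper's actual protocol; all function names and the trivial "skill" representation are invented for the example.

```python
# Hedged sketch of a self-improvement loop: assess capability gaps,
# propose candidate skills, validate by testing, integrate what passes.
# Names and skill representation are illustrative, not from the paper.

def assess(skills, tasks):
    # Capability gaps = tasks the agent has no skill for.
    return [t for t in tasks if t not in skills]

def propose(gap):
    # Candidate improvement: here, a trivial stub skill for the gap.
    return gap, (lambda gap=gap: f"handled {gap}")

def validate(skill):
    # Accept only candidates that run without error and return output.
    try:
        return bool(skill())
    except Exception:
        return False

def evolve(skills, tasks):
    for gap in assess(skills, tasks):
        name, skill = propose(gap)
        if validate(skill):          # integrate only what passes testing
            skills[name] = skill     # no retraining, no human patching
    return skills

skills = evolve({"search": lambda: "ok"}, ["search", "summarize"])
print(sorted(skills))  # → ['search', 'summarize']
```

The interesting engineering questions all live inside `propose` and `validate` (how improvements are generated and how safety is checked); the outer loop itself stays this simple.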

Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.