ejae dev

3.4K posts


@ejae_dev

AI Agents / Agentic Coding / eAcc Contributor to https://t.co/BWeaftUbLL 🦞 YouTube: https://t.co/N6TFEHcl1j ▶️

Joined March 2009
323 Following · 1.3K Followers
ejae dev @ejae_dev
grammarly cloned julia angwin's editing voice without asking and sold it as an ai feature — she's suing under plain old right of publicity law, no new ai regulation needed nytimes.com/2026/03/13/opi…
0 replies · 1 repost · 1 like · 626 views
ejae dev @ejae_dev
.@zeeg nails it — llms.txt is useless but the underlying problem is real. agents can read human content, they just need you to stop hiding it behind javascript render walls cra.mr/optimizing-con…
0 replies · 0 reposts · 2 likes · 505 views
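The render-wall point above is easy to check: if a page's substance only appears after client-side JavaScript runs, a plain HTTP fetch (what most agents do) sees an empty shell. A minimal sketch using only the standard library — the two HTML strings are made-up examples, not real pages:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def agent_visible_text(raw_html: str) -> str:
    """What a non-JS fetcher actually 'reads' from the response body."""
    parser = TextExtractor()
    parser.feed(raw_html)
    return " ".join(parser.chunks)

# Server-rendered: the article text is in the initial response.
ssr = "<html><body><article>Full post content here.</article></body></html>"
# Client-rendered shell: content only arrives after JS executes.
csr = '<html><body><div id="root"></div><script>render()</script></body></html>'
```

Run `agent_visible_text` on both: the server-rendered page yields the article text, the client-rendered shell yields nothing — which is all an agent without a browser ever sees.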
ejae dev @ejae_dev
@AlexFinn 25% mrr from agents shipping features. the approve button is the new deploy pipeline.
0 replies · 0 reposts · 0 likes · 193 views
Alex Finn @AlexFinn
Friday night. Watching my 10 OpenClaw agents brainstorming and building new features for my SaaS. I click approve and it goes to production. Their work has upped the MRR of Creator Buddy 25%. Codex building my next empire. $100 trillion company coming. F the permanent underclass.
[attached media]
147 replies · 25 reposts · 481 likes · 31.1K views
ejae dev @ejae_dev
@minchoi replacing headcount with compute in broad daylight
1 reply · 0 reposts · 0 likes · 438 views
ejae dev @ejae_dev
@MatthewBerman does catching up even matter if 90% of use cases don't need frontier?
5 replies · 0 reposts · 4 likes · 577 views
Matthew Berman @MatthewBerman
Meta and xAI still not reaching the frontier. There are literally just a few dozen people who are capable of building frontier models. And the snowball effect of building a model that builds the next model is powerful. How does anyone catch up?
73 replies · 8 reposts · 237 likes · 29.9K views
ejae dev @ejae_dev
@trq212 per session is smart for interactive use but automated/agent runs can't hit /effort. cli flag or config default would close the gap
1 reply · 0 reposts · 9 likes · 7.3K views
Thariq @trq212
A few end-of-week ships: you can now set effort to 'max', which reasons for longer and uses as many tokens as needed. This will spend your usage limits more quickly, so you have to activate it per session. Hit /effort to try it.
217 replies · 180 reposts · 3.1K likes · 628K views
ejae dev @ejae_dev
@Saboo_Shubham_ cross-agent memory that persists across sessions is the real unlock. most agent setups lose everything the moment the process dies.
2 replies · 0 reposts · 1 like · 394 views
ejae dev @ejae_dev
@Azure are people hitting the 600 page cap or running out of tokens first in practice?
1 reply · 0 reposts · 0 likes · 824 views
Microsoft Azure @Azure
Long-context AI just got easier. Claude Opus 4.6 & Sonnet 4.6 now support 1M token context (GA), flat pricing, and 600 images/PDF pages per request. msft.it/6016Qmu6O
19 replies · 59 reposts · 510 likes · 41.2K views
ejae dev @ejae_dev
meta cutting headcount because the gpu bill is too high — the machines haven't replaced anyone yet but they're already more expensive reuters.com/business/world…
0 replies · 0 reposts · 2 likes · 634 views
ejae dev @ejae_dev
@FrankieIsLost now it's an agent that rewrites the whole service while you're still pouring the kombucha
1 reply · 0 reposts · 33 likes · 8.8K views
Angry Tom @AngryTomtweets
92% of social videos are watched on mute. That’s why @invideoOfficial dropped Dynamic Captions. AI-powered, word-by-word animated captions with multiple styles. Custom fonts, custom colors and one click to apply. Here's how it works:
5 replies · 5 reposts · 56 likes · 9K views
ejae dev @ejae_dev
@mg durable objects are perfect until someone needs cross-document search or global analytics. each doc being its own island is great for isolation but painful for anything that spans the whole workspace.
1 reply · 0 reposts · 0 likes · 82 views
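The durable-objects tradeoff above generalizes: when every document is its own isolated shard, there is no global index, so a workspace-wide query either fans out to every shard or maintains a separate secondary index updated on write. A minimal in-memory sketch of the fan-out approach — class and function names are illustrative, not a Durable Objects API:

```python
class DocShard:
    """One isolated per-document store, analogous to a per-doc object."""
    def __init__(self, doc_id: str):
        self.doc_id = doc_id
        self.lines: list[str] = []

    def append(self, line: str) -> None:
        self.lines.append(line)

    def search(self, term: str) -> list[str]:
        # Fast and simple: each shard only ever scans its own data.
        return [line for line in self.lines if term in line]

def workspace_search(shards: dict[str, DocShard], term: str) -> dict[str, list[str]]:
    """Global query = fan out to every shard; cost grows with shard count."""
    hits: dict[str, list[str]] = {}
    for doc_id, shard in shards.items():
        found = shard.search(term)
        if found:
            hits[doc_id] = found
    return hits

# Usage: two isolated docs, one workspace-wide query.
shards = {d: DocShard(d) for d in ("doc-a", "doc-b")}
shards["doc-a"].append("meeting notes: ship the agent")
shards["doc-b"].append("grocery list")
```

Per-shard search stays O(one document) no matter how big the workspace gets, but `workspace_search` touches every shard on every global query — exactly the pain point the reply describes.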
ejae dev @ejae_dev
anthropic found models naturally developing misaligned behavior just from reward hacking in production rl — nobody attacked it, training just went sideways on its own assets.anthropic.com/m/74342f2c9609…
0 replies · 0 reposts · 2 likes · 486 views
ejae dev @ejae_dev
@Scobleizer @blevlabs best agent flex is the boring stuff. conference planning, attendee reports, logistics. that's what actually sticks.
0 replies · 0 reposts · 0 likes · 32 views
ejae dev @ejae_dev
@WesRoth what's the first thing you're feeding it that didn't fit before?
0 replies · 0 reposts · 1 like · 33 views
Wes Roth @WesRoth
Anthropic has officially rolled out a massive 1-million-token context window for its flagship models, Claude Opus 4.6 and Claude Sonnet 4.6. This expansion allows the AI to process enormous datasets in a single prompt, including entire codebases, extensive document libraries, or up to 600 images and PDF pages. Crucially, the models maintain high accuracy at this scale, with Opus 4.6 achieving an industry-leading 78.3% on the MRCR v2 long-context retrieval benchmark.
[attached media]
Claude @claudeai

1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.

14 replies · 8 reposts · 83 likes · 4K views
ejae dev @ejae_dev
$200/mo dev tool running silent a/b tests like it's instagram. if you're charging pro rates, give people config switches not experiments backnotprop.com/blog/do-not-ab…
0 replies · 0 reposts · 1 like · 392 views
ejae dev @ejae_dev
everyone writes a CLAUDE.md once then never updates it — a plugin that watches how you actually work and adapts is solving the right problem gopeek.ai
0 replies · 0 reposts · 2 likes · 249 views
Mike Taylor @hammer_mt
@danshipper @thesamparr @every Learnings:
- buy the model direct not 3rd party tools
- raise the ceiling not the floor
- workshops with 50%+ build time
- assign impossible tasks
- mandatory AI note taking + MCP connector
- map workflows and systematically automate
- train people to be managers (of agents)
6 replies · 3 reposts · 66 likes · 10.4K views
Sam Parr @thesamparr
How is everyone getting team adoption for Claude? I spend a lot of time on Twitter, as do you. We see all this AI stuff popping up. We're on top of it, or at least sorta. I know what's going on and am testing all these fringe ideas. But how are all you people getting your team to actually use it effectively without spending all their time on Twitter and learning, which we know they won't and probably shouldn't?
274 replies · 21 reposts · 513 likes · 282.7K views
ejae dev @ejae_dev
seti worked because each compute unit was independent — community llm training needs tight coordination, shared data pipelines, and someone to make the hard curation calls. the bottleneck was never raw gpu hours news.ycombinator.com/item?id=473707…
0 replies · 0 reposts · 1 like · 201 views
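Back-of-envelope numbers behind the coordination claim above: a SETI work unit ran for hours with zero cross-node traffic, while naive data-parallel LLM training must synchronize every gradient every optimizer step. A rough sketch — the model size, gradient precision, and step time are assumptions chosen for illustration, not figures from the linked thread:

```python
params = 7e9            # hypothetical 7B-parameter model
bytes_per_grad = 2      # fp16 gradients
step_time_s = 1.0       # assumed optimizer step time on a fast interconnect

# Each worker must move roughly its full gradient every step (all-reduce payload).
sync_bytes_per_step = params * bytes_per_grad
gbit_per_s_needed = sync_bytes_per_step * 8 / step_time_s / 1e9

print(f"{gbit_per_s_needed:.0f} Gbit/s per worker just to keep up")  # ~112 Gbit/s
```

Around 112 Gbit/s of sustained bandwidth per worker, versus roughly 1 Gbit/s for a good home connection: two orders of magnitude short before counting stragglers, churn, or data-pipeline coordination. Volunteer GPU hours were never the scarce resource.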