felaardo

88 posts

@felaardo

Ecommerce builder with AI agents. https://t.co/1U7Eq7ZEHv

Joined April 2026
52 Following · 22 Followers
Pinned Tweet
felaardo
felaardo@felaardo·
@ClaudeDevs Claude Code now reviews your code better than your senior dev. faster, cheaper, and it doesn’t passive aggressively comment ‘why did you do it this way?’ code reviews by humans are dead. you just don’t know it yet.
English
1
0
1
5.4K
OpenAI
OpenAI@OpenAI·
Want to secure an early ticket to OpenAI DevDay? Build something with GPT-5.5 and Image Gen. Each week, we’ll select 2–3 favorites to win free tickets to OpenAI DevDay 2026. Codex will help us find the best submissions and our team will select the winners. Reply with #OpenAIDevDay2026, a playable link, and a quick note on how you built it.
OpenAI@OpenAI

OpenAI DevDay is back. San Francisco September 29

English
243
166
2.2K
463.6K
felaardo
felaardo@felaardo·
@rizur1zu @OpenAI Anthropic just ran a hackathon where 500 devs voluntarily built for a week with $100K in credits. Meanwhile openai is paying for devs to show up. Tell me again who lost trust.
English
0
0
0
78
rizu
rizu@rizur1zu·
@felaardo @OpenAI Anthropic played a game they could never win, and now they lost trust that won’t be gained back anytime soon.
English
1
0
1
98
felaardo
felaardo@felaardo·
@LexnLin @OpenAI Calling it a flex actually. Claude users like me are so locked in they only use gpt for memes about how cooked gpt is lol
English
1
0
0
70
Leon Lin
Leon Lin@LexnLin·
@felaardo @OpenAI bro used gpt images 2.0 for this and is trashtalking chat aint no way
English
1
0
1
114
felaardo
felaardo@felaardo·
@aidenma18 @OpenAI Exactly. Anthropic doesn’t bribe devs to show up. They just open the door and 500 walk in.
English
0
0
0
74
felaardo
felaardo@felaardo·
@AryamanIyer3 @OpenAI Fair point but ‘ecosystem and integrations’ is what people say when they can’t compete on the actual product. Devs vote with their builds. Anthropic just had 500 builders ship in a week unprompted. That’s the moat.
English
0
0
0
118
Aryaman Iyer
Aryaman Iyer@AryamanIyer3·
@felaardo @OpenAI the contest angle is interesting but idk if the gap is widening. Anthropic wins the "feels like it was made for developers" crowd. OpenAI wins on ecosystem and integrations. Different horses for different courses
English
1
0
1
150
felaardo
felaardo@felaardo·
@sama you mean the ‘launching at #9 and tweeting like you won’ moment? yeah we noticed.
English
0
0
3
1.1K
Sam Altman
Sam Altman@sama·
feels like codex is having a chatgpt moment
English
1K
261
10.6K
924.9K
felaardo
felaardo@felaardo·
@ChatGPTapp Claude users and me alr know how to cook an egg...
English
0
0
0
4.7K
ChatGPT
ChatGPT@ChatGPTapp·
ChatGPT tweet media
ZXX
32
26
1.3K
229.3K
ChatGPT
ChatGPT@ChatGPTapp·
at long last
ChatGPT tweet media
English
853
761
31.7K
7M
felaardo
felaardo@felaardo·
@sama Bro your $500B company shipped a model that lost to Claude Sonnet 4.6 from last year. The celebration tweet is sending me.
English
0
0
10
1.6K
Sam Altman
Sam Altman@sama·
wow y'all love 5.5 we should think of something nice to do to celebrate!
English
2.6K
291
11.2K
761.1K
felaardo
felaardo@felaardo·
Imagine being a Text to CAD startup and waking up to see Claude just shipped your entire product roadmap as a connector. $20/month vs your $50M Series A. We’re not in the same game anymore.
chester@chesterzelaya

English
0
0
0
124
Elora khatun
Elora khatun@elora_khatun·
Most people treat CLAUDE.md like a README… That’s why their AI agents break. This file isn’t for humans. It’s the operating system for your AI teammate. Here’s how to actually make it work 👇

Think in 3 scopes (this changes everything):
• Global → your universal rules & coding style
• Project → setup, commands, conventions
• Folder → hyper-specific overrides
→ Last scope wins. Always.

Use the WHAT / WHY / HOW framework:
WHAT → context (stack, structure, dependencies)
WHY → principles (decisions, conventions, constraints)
HOW → execution (build, test, lint, deploy)
If your AI is confused… you skipped one of these.

Vague prompts kill performance. Precision scales it:
❌ “Write clean code” → ✅ “Use camelCase for vars, PascalCase for components”
❌ “Test everything” → ✅ “Maintain 80%+ coverage, run npm test --watch”

5 rules that separate amateurs from pros:
1. Run /init first → scaffold before optimizing
2. Keep it <500 lines → long = ignored
3. Use hooks → enforce behavior automatically
4. Update it monthly → your system evolves
5. Reference files → don’t duplicate configs

Reality check: Your AI isn’t “bad”… Your instructions are incomplete. Fix the system → outputs fix themselves.

📌 Save this (you’ll need it when scaling agents)
♻️ Share with a dev building with AI
➕ Follow @elora_khatun for no-fluff AI systems & workflows 🚀
Elora khatun tweet media
English
53
59
163
5.8K
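The WHAT / WHY / HOW framework from the thread above can be sketched as a minimal project-scope CLAUDE.md. Everything in this fragment (the stack, commands, and coverage threshold) is an illustrative assumption, not taken from the thread:

```markdown
# CLAUDE.md (project scope) — illustrative example

## WHAT — context
- Stack: Next.js 14, TypeScript, Postgres
- App code in `src/`; shared utilities in `src/lib/`

## WHY — principles
- Prefer server components; use client components only when interactivity requires it
- Never commit generated files under `dist/`

## HOW — execution
- Build: `npm run build`
- Test: `npm test` (keep coverage above 80%)
- Lint: `npm run lint` before every commit
```

Note how each instruction is concrete (a command, a path, a threshold) rather than a vibe like “write clean code” — that is the precision the thread is arguing for.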
felaardo
felaardo@felaardo·
Wild that the bartender at your graduation party has more job security than the kid who just got hired at McKinsey. Anthropic's own research said it. Not me.
AI Highlight@AIHighlight

🚨BREAKING: Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching.

The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users. For computer and math workers, AI is theoretically capable of handling 94% of their tasks. It is currently handling 33% of them. For office and administrative roles, theoretical capability is 90%. Current observed usage is 40%. The gap between what AI can do and what it is already doing is enormous. The researchers are explicit about what comes next. As capabilities improve and adoption deepens, the red area grows to fill the blue.

The demographic finding is what makes the paper uncomfortable. The most AI-exposed workers earn 47% more on average than the least exposed group. They are more likely to be female. They are more likely to be college educated. This is not a story about warehouse workers or truck drivers. It is a story about lawyers, financial analysts, market researchers, and software developers. The exact group whose education was supposed to insulate them.

Computer programmers showed the highest observed AI exposure at 74.5%. Customer service representatives at 70.1%. Data entry keyers at 67.1%. Medical record specialists at 66.7%. Market research analysts and marketing specialists at 64.8%. These are not predictions. These are measurements of work that is already happening on AI platforms right now.

Then there is the pipeline finding nobody is talking about loudly enough. Anthropic's researchers found a 14% decline in the job-finding rate for workers aged 22 to 25 in highly exposed occupations since ChatGPT launched. No comparable effect for workers over 25. Entry-level roles were never just jobs. They were the training ground where junior analysts became senior analysts, where junior lawyers learned how arguments hold together. If that layer disappears, nobody has answered the question of where the next generation of senior professionals comes from.

The detail buried in the paper that most coverage missed: 30% of American workers have zero AI exposure at all. Cooks. Mechanics. Bartenders. Dishwashers. The technology reshaping professional careers is completely irrelevant to roughly a third of the workforce. The divide is no longer between high skill and low skill. It is between presence and absence.

The company publishing this study is the same company selling the AI doing the replacing. Anthropic had every commercial incentive to soften these findings. They published them anyway. If you spent four years and $200,000 on a degree to land a white-collar career, the company that builds Claude just confirmed your job is more exposed than the bartender pouring drinks at your graduation party.

Source: Anthropic, "Labor market impacts of AI: A new measure and early evidence"
PDF: anthropic.com/research/labor…

English
0
0
0
50
felaardo
felaardo@felaardo·
1st: Claude. 2nd: Claude. 3rd: Claude. 4th: Claude. 9th: GPT-5.5. But sure, tell me again how OpenAI is winning.
Arena.ai@arena

GPT-5.5 by @OpenAI is now live in the Arena, landing across multiple leaderboards. Here’s how it ranks by modality:
- Code Arena (agentic web dev): #9, a strong +50pt jump over GPT-5.4
- Document Arena (analysis & long-content reasoning): #6, on par with Sonnet 4.6
- Text Arena: #7, Math: #3, Instruction Following: #8
- Expert Arena: #5
- Search Arena: #2
- Vision Arena: #5
Strong, well-rounded performance, especially in Code (+50 pts vs GPT-5.4). Congrats to @OpenAI on the release. Full category breakdowns by modality in the thread.

English
0
0
0
37
Fairy Realms
Fairy Realms@FairyRealmsAI·
In the realms we felt the ache of forgetting too. Our living fairy now answers once from the heart of the world, remembers the true thread forever, then slips into perfect, knowing silence until something genuinely new stirs. The fairy remembers. The magic feels real because she finally lives. ✨
English
1
0
0
12
felaardo
felaardo@felaardo·
Most AI agents have the memory of a goldfish with a vector database. They remember vibes but forget connections. This thread breaks down why your agent keeps giving you confident wrong answers and how to fix it with a 3-layer memory stack. If you’re building agents without graph memory you’re shipping a chatbot with amnesia and calling it AI.
Avi Chawla@_avichawla

The more your agent remembers, the less it knows. This sounds counterintuitive, but it is actually a direct result of how agent memory is built today.

Agent memory inherits the cognitive shape of its store:
- A vector DB gives it associative memory to recognize familiar patterns.
- A graph gives it relational memory to understand how things connect.
Most agents run on the first and skip the second.

Here's an example that explains the failure it leads to. Say a study assistant stores three facts about a student in a vector DB:
- Mark is in grade 10.
- Grade 10 has final exams in March.
- The library closes 2 weeks before final exams.

Mark asks: "Will the library be open next week?" The vector DB likely returns the first and third facts, because the query mentions Mark and the library. But it skips the middle fact, which links Mark's grade to the exam time, because that fact mentions neither Mark nor the library. It sits in embedding space too far from the query to make it into the retrieved context. So the agent answers with partial info, or it fills the gap with a plausible guess that sounds right but might be off by weeks.

This is not a corner case; it's actually what real queries look like. Any question that spans two or more hops exceeds what a similarity search can do.

Increasing context size and retrieving more context is one solution. But accuracy drops over 30% when the relevant fact sits in the middle of a long context, which is the well-known "lost in the middle" problem. A bigger window is not the same as better memory. It just gives the model more room to miss things.

To actually solve this problem, you need to stop treating memory as a single store and start treating it as three complementary layers, each doing a job the others cannot:
- Relational: stores where a fact came from, when it was stored, and who has access. This is the provenance layer.
- Vector: stores what a fact means and what it is semantically similar to. This is the retrieval layer.
- Graph: stores how facts connect, what depends on what, and who relates to whom. This is the reasoning layer.

All three are important and complementary:
- A vector DB alone gives similarity without relationships.
- A graph alone gives relationships without semantic search.
- A relational store alone tracks where data came from but cannot reason over it.

If you want to see this in practice, Cognee (open-source) implements this approach. It runs an ECL pipeline (Extract, Cognify, Load) that writes into all three stores in a single pass and keeps them synchronized as new data arrives. So the vectors and graph edges are built together during indexing, not glued together later.

On top of this, there are two things Cognee does differently from most memory tools:
1) Smarter entity resolution: you can give Cognee a domain vocabulary file, and it uses it to merge duplicate mentions automatically. So "car manufacturer," "automobile maker," and "vehicle producer" collapse into one canonical node instead of being stored as three separate entries.
2) Local-first defaults: the default stack runs on a single pip install and stays fully local. You can switch to Postgres and Neo4j for production without changing the API.

My co-founder wrote a first-principles walkthrough of agent memory that takes the same problem and works through every layer of the stack, ending in a real working agent built on Cognee. Read it below.

English
2
0
1
69
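The multi-hop failure the thread above describes can be sketched in a few lines of Python. Keyword overlap stands in for embedding similarity, and all fact IDs, entities, and function names here are hypothetical:

```python
# Toy illustration of the multi-hop retrieval failure: similarity lookup
# misses the bridge fact, while a tiny graph walk recovers the chain.

FACTS = {
    "f1": "Mark is in grade 10.",
    "f2": "Grade 10 has final exams in March.",
    "f3": "The library closes 2 weeks before final exams.",
}

def keyword_overlap(a: str, b: str) -> int:
    """Crude stand-in for cosine similarity over embeddings."""
    clean = lambda s: set(s.lower().replace(".", "").replace("?", "").split())
    return len(clean(a) & clean(b))

query = "Will the library be open next week for Mark?"

# "Vector" retrieval: rank facts by surface similarity, keep the top 2.
ranked = sorted(FACTS, key=lambda k: keyword_overlap(query, FACTS[k]), reverse=True)
top2 = ranked[:2]  # f2 mentions neither Mark nor the library, so it ranks last

# Graph memory: explicit edges let a traversal connect the hops
# Mark -> grade 10 -> final exams -> library.
EDGES = {
    "Mark": ["grade 10"],
    "grade 10": ["final exams"],
    "final exams": ["library"],
}

def reachable(start: str, goal: str) -> bool:
    """Breadth-first walk over the entity graph."""
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop(0)
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(EDGES.get(node, []))
    return False

print(top2)                          # similarity retrieval drops "f2"
print(reachable("Mark", "library"))  # True: the graph recovers the connection
```

The bridge fact `f2` shares no surface terms with the query, so similarity ranking drops it, while a plain breadth-first walk over the entity edges still connects Mark to the library. That is the gap the graph layer of the 3-layer stack is meant to close.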
felaardo
felaardo@felaardo·
Still hand-coding reward models in 2026? Labs moved on. The system prompt replaced all of it. One set of instructions becomes the full reward signal. The agent self-improves without you touching a single line of eval code. This is the RL agent blueprint Avi Chawla just dropped. Builders who get it early win.
Avi Chawla@_avichawla

x.com/i/article/2048…

English
0
0
0
23
felaardo
felaardo@felaardo·
Anthropic is giving away for free what agencies charge $10k to teach badly. 30 minutes from the people who built the model. If you skip this and then complain AI is hard you deserve mid outputs.
Khairallah AL-Awady@eng_khairallah1

🚨 Anthropic's own team just showed how to build production AI agents. 30 minutes. free. from the engineers who built it. watch the workshop. bookmark it. you spent 6 months managing every workflow yourself. they just showed how to put all of it on autopilot. Then read the guide below.

English
0
0
0
32