Adam Howard

66K posts

@codeincontext

Tech lead for hire. Last worked with @OakNational, bringing AI to teaching - government funded and open source

Chamonix-Mont-Blanc, France · Joined March 2009
913 Following · 2K Followers
Pinned Tweet
Adam Howard @codeincontext
Hi 👋. I'm an experienced full-stack product developer looking for a new project, ~4 days a week (remote). I can build prototypes and experiments, ship production features, plan architecture, and lead a tech team. More info at context.co.de, and in this thread 🧵
2 replies · 2 retweets · 4 likes · 1.6K views

Adam Howard retweeted
Justin Schroeder @jpschroeder
Important takeaways from Claude Code's source code:

1. Much of Claude Code's system prompting is in the source code. This is actually surprising. Prompts are important IP, and I would have thought a sophisticated organization like Anthropic would have performed much or all of their prompt assembly in the server-side harness.

2. Claude Code uses axios, which was also just hacked. Reminder: supply-chain attacks are part of closed-source distribution too, and you won't even know what version of an affected package is being used.

3. The source has a lot of really good comments. These are obviously not for human consumption but for LLMs to understand the purpose of various chunks of code. In the code-autocomplete era, most of us engineers hated how many comments were left by LLMs, but perhaps we've overcorrected. This looks like a great way to provide context to code outside of the AGENTS.md/CLAUDE.md files.

4. Most folks already know this, but fewer tools == better results. CC has fewer than 20 tools in normal coding: AgentTool, BashTool, FileReadTool, FileEditTool, FileWriteTool, NotebookEditTool, WebFetchTool, WebSearchTool, TodoWriteTool, TaskStopTool, TaskOutputTool, AskUserQuestionTool, SkillTool, EnterPlanModeTool, ExitPlanModeV2Tool, SendMessageTool, BriefTool, ListMcpResourcesTool, and ReadMcpResourceTool.

5. The "Bash" tool is the crown jewel of Claude Code. A significant amount of deterministic parsing and processing occurs to determine the "type" of commands being run.

6. For better or worse, Claude Code is *all* TypeScript/React with rather explicit Bun bindings.

7. Just because the source is now "available" *DOES NOT MEAN IT IS OPEN SOURCE*. You are violating a license if you copy or redistribute the source code, or use their prompts in your next project. Don't do that!

My overall takeaway: it's a really well laid-out codebase that is carefully organized to let agents work on it effectively. Direct human intervention here is minimal, but, as with all good projects, the human engineering is still apparent.

I'm a bit surprised by some of the shortcuts Claude Code takes, like its prompt assembly being rather messy. Perhaps they have tooling on their side that helps with this introspection, but as it stands, it seems LLMs would struggle to iterate on the prompting, because it's not evident how a given set of parameters assembles a prompt without actually running it.

It's also surprising that the prompts are even in this source code. Keep in mind that even though this is the first time we've gotten a proper full-source dump, it has never been impossible to read Claude Code's prompting, since it was part of the actual distributed package. There may still be a lot of prompting on the server that also gets added (unclear at this point), but there is certainly more than I would have expected in the CLI tool itself.
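Point 5, the deterministic command-type parsing, can be sketched in a few lines. This is a hypothetical illustration only, not Anthropic's actual logic; the command lists and category names are invented for the example:

```python
import shlex

# Invented example lists: commands treated as safe to run without review.
READ_ONLY = {"ls", "cat", "grep", "head", "tail", "find"}
READ_ONLY_GIT = {"status", "log", "diff", "show"}

def classify(command: str) -> str:
    """Deterministically map a shell command to a coarse 'type'."""
    try:
        tokens = shlex.split(command)  # POSIX-style tokenization
    except ValueError:
        return "unparseable"  # e.g. unbalanced quotes
    if not tokens:
        return "empty"
    prog = tokens[0]
    if prog == "git":
        sub = tokens[1] if len(tokens) > 1 else ""
        return "read-only" if sub in READ_ONLY_GIT else "mutating"
    return "read-only" if prog in READ_ONLY else "mutating"
```

The payoff of doing this deterministically (rather than asking the model) is that permission decisions, like auto-approving read-only commands, stay predictable and auditable.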
73 replies · 114 retweets · 1.3K likes · 251.8K views

Adam Howard retweeted
Riley Coyote @RileyRalmuto
ok so i'm genuinely excited about this and surprised i haven't seen more people talking about it yet. but i guess that's because if you aren't building agents, memory, etc, you have no need to understand or care about it. but... it literally will impact everyone. so i'll try to break it down a little bit.

very short version: google just shipped a model that casually fixes one of the biggest bottlenecks in modern ai systems, especially around memory (which is why i care so much).

the old way basically looked like this:
– text went into one model
– images into another
– audio had to be transcribed first
– video was hacked together from frames + transcripts
– pdfs needed their own ocr/extraction pipeline

then you'd try to jerry-rig all of that together, juggle multiple indexes, and hope the results felt coherent and actually worked. when it did work, it was slow, brittle, and expensive to maintain, at least as far as i've experienced.

literally all of that is essentially solved with this new model. text, images, short video, audio, pdfs, even mixed together, all get embedded into the same space in a single step. one call, one vector, one index. hell freaking yeah.

that probably still sounds like a less-than-big deal, but it's actually very big:
– you can search across any media with a simple text query
– you don't lose nuance from forced transcription/ocr
– you maintain one clean pipeline instead of a frankenstein

where it gets really interesting (for me) is memory. traditionally, "ai memory" has basically meant "a text database with a good search function." this is basically why memory for ai companions has always felt dead: no matter what you do, it just doesn't quite get there. and while Mnemos has changed that quite a bit for me, this genuinely has the potential to 10x the experience overall.

with this, every episode can be stored as a single, multimodal memory: a conversation + a screenshot + a voice note + a short screen recording all live together as one unit. so when a system recalls something, it now recalls the entire moment rather than a makeshift recreation from text.

i also learned that on top of that, you can use smaller vectors for quick recall that expand to full detail when needed, which means you get something that starts to look a lot more like true long-term, episodic memory for agents. and my memory already feels like true long-term episodic memory. so i feel like this is going to *actually* change everything. gross, i hate cliches. whatever.

i don't think this is just another release. it *actually* removes a major layer of friction between the messy, multimodal world we actually live in and the systems we're trying to build. that's the claim, at least. obviously i'm especially excited to explore what this unlocks for my own memory system, but i think it's going to end up touching almost everything people build over the next few years, along with whatever other labs release something like it. if i'm wrong about that, feel free to let me know. but this wouldn't be the first time google casually shipped something unbelievably important. yay. no sleep for me tonight. 😁
Google AI Developers@googleaidevs

Start building with Gemini Embedding 2, our most capable and first fully multimodal embedding model built on the Gemini architecture. Now available in preview via the Gemini API and in Vertex AI.
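The "one call, one vector, one index" idea, plus the smaller-vectors-for-quick-recall trick mentioned above, can be sketched schematically. Everything here is a toy stand-in: `embed` fakes a multimodal embedding with seeded random vectors, and the truncation is only an illustration of the Matryoshka-style idea, not the real API:

```python
import numpy as np

DIM = 8  # toy dimensionality; real embedding models use hundreds of dims

def embed(item: dict) -> np.ndarray:
    """Hypothetical stand-in for one multimodal embedding call.
    A real model would accept text, image, audio, or pdf content directly."""
    rng = np.random.default_rng(abs(hash(item["data"])) % (2**32))
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)  # unit vector in the shared space

# One index for every modality: no separate text/image/audio stores.
index = [(item, embed(item)) for item in [
    {"kind": "text",  "data": "notes from standup"},
    {"kind": "image", "data": "whiteboard.png"},
    {"kind": "audio", "data": "voicememo.m4a"},
]]

def search(query: str, k: int = 2, dims: int = DIM):
    """Truncating vectors to fewer dims gives cheaper 'quick recall';
    using all dims restores full fidelity."""
    q = embed({"kind": "text", "data": query})[:dims]
    scored = sorted(index, key=lambda pair: -float(pair[1][:dims] @ q))
    return [item for item, _ in scored[:k]]
```

The point of the sketch is the shape of the pipeline: a text query ranks text, image, and audio items in one pass over one index, instead of fanning out to per-modality stores.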

19 replies · 18 retweets · 354 likes · 43.6K views

Adam Howard retweeted
✳️Ⓐ Armed Joy 🔫💣
Valve is funny because they're a hyper-capitalist private company with a functional, if not literal, monopoly on PC gaming. But their internal company policy is practically communist: no one is forced to do anything and they just work on whatever they feel like. The result is probably the best company in the world.
✳️Ⓐ Armed Joy 🔫💣 tweet media
PC Gamer@pcgamer

Valve veteran slams Tim Sweeney and Epic Games for laying off 1000 people while making 'as much money as possible… and hey Tim, Gabe's better at that than you' pcgamer.com/gaming-industr…

255 replies · 2.1K retweets · 31.6K likes · 1.2M views

Adam Howard retweeted
Ara @arafatkatze
Calling it now: this form factor of multi-agent orchestration will overtake every other agentic UX in the next six months. Doesn't matter if it's a coding agent, a product-management agent, or something else.

Every multi-agent flow on frontier models currently suffers from 2 main problems:
1. Inference-bound: most of the time you are just waiting on inference while the agent rips on code in the background, and that can take 8-10 minutes fairly regularly, so the wait time is very expensive productivity-wise.
2. Task isolation: you work on the same mutating source code with multiple agents in parallel, each of which is inference-bound, so eventually you run into countless merge conflicts.

The kanban board solves both by letting you act as an engineering manager for your IC agents, watching them cleanly with a clear headline-level outlook on every parallel agent. It works seamlessly with any coding agent, whether it's Cline or Codex or Claude Code.

To me this is a mental-model shift that takes 10 minutes: you use the same coding agents like you always did before, but it will fundamentally morph your coding-agent experience. Try out kanban and let us know what you think. npm i -g cline is all you need.
Cline@cline

Introducing Cline Kanban: A standalone app for CLI-agnostic multi-agent orchestration. Claude and Codex compatible. npm i -g cline Tasks run in worktrees, click to review diffs, & link cards together to create dependency chains that complete large amounts of work autonomously.
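The "tasks run in worktrees" detail is what sidesteps the shared-mutating-checkout problem described above. A minimal sketch of that isolation using plain `git worktree` (the repo setup, branch names, and helper are invented for the example; this is not Cline's code):

```python
import os
import subprocess
import tempfile

def git(args, cwd):
    subprocess.run(["git"] + args, cwd=cwd, check=True, capture_output=True)

# Throwaway repo to demonstrate on.
repo = tempfile.mkdtemp()
git(["init", "-q"], repo)
git(["config", "user.email", "agent@example.com"], repo)
git(["config", "user.name", "agent"], repo)
with open(os.path.join(repo, "main.py"), "w") as f:
    f.write("print('hi')\n")
git(["add", "."], repo)
git(["commit", "-q", "-m", "init"], repo)

def spawn_agent_worktree(repo: str, task: str) -> str:
    """Give one agent its own checkout + branch: parallel edits never
    touch another agent's files; conflicts surface only at merge time."""
    path = os.path.join(tempfile.mkdtemp(), task)
    git(["worktree", "add", "-q", "-b", task, path], repo)
    return path

wt_a = spawn_agent_worktree(repo, "fix-login-bug")
wt_b = spawn_agent_worktree(repo, "add-dark-mode")
```

Each worktree shares the same object store as the main repo, so spawning one is cheap, and reviewing an agent's work is just diffing its branch.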

124 replies · 112 retweets · 2.6K likes · 655.6K views

Adam Howard retweeted
max drake @max__drake
turns out software already was clay! we just had weak hands
max drake tweet media
40 replies · 66 retweets · 1.9K likes · 142K views

Adam Howard retweeted
Sahil Lavingia @shl
Gumroad’s test suite of 16,000 tests has been flaky for years. This slowed down shipping tremendously. This week, Gianfranco used @karpathy’s autoresearch and @steipete’s OpenClaw to stabilize our test suite overnight. And his code is open source, so you can (have your agent) do it too. (And our code is open source too so you can see every single fix on GitHub.)
Gianfranco@gianfrancopiana

x.com/i/article/2036…
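The core move in stabilizing a flaky suite is triage: distinguishing flaky tests from genuinely broken ones by rerunning them. A toy sketch of that step (the tools named above are not used here; `classify` and the sample test are invented for illustration):

```python
def classify(test_fn, runs: int = 6) -> str:
    """Run a test repeatedly: always passes -> stable,
    always fails -> broken, mixed outcomes -> flaky."""
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    if outcomes == {"pass"}:
        return "stable"
    if outcomes == {"fail"}:
        return "broken"
    return "flaky"

# Deterministic stand-in for a timing-dependent test:
calls = {"n": 0}
def intermittent_test():
    calls["n"] += 1
    assert calls["n"] % 3 != 0  # fails on every third invocation
```

Flaky tests found this way get a deterministic fix (frozen clocks, seeded randomness, awaited async work); broken ones get a real bug fix.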

22 replies · 79 retweets · 1.5K likes · 380.7K views

Adam Howard @codeincontext
It's not just an em dash, it's a full ChatGPT marketing text
Adam Howard tweet media
0 replies · 0 retweets · 0 likes · 49 views

Adam Howard retweeted
Chris Barber @chrisbarber
thread of more agent ui explorations (warning: long thread; would be helpful to know which are more interesting). 1) waveform showing your tok/s usage over time
Chris Barber tweet media
33 replies · 66 retweets · 1K likes · 168.5K views

Adam Howard retweeted
Mei Park @meimakes
My 3yo wanted to use the computer like me so I made him his own terminal. He types whatever he wants, it responds with fun messages. No external deps, no ads, just keyboard practice and cause-and-effect thinking. He thinks he's hacking. github.com/meimakes/tiny-…
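A toy terminal like the one described fits in a few lines with no external deps. This is a hypothetical sketch, not the linked repo's code; the response templates are invented:

```python
import random

# Every template includes the child's input, reinforcing cause and effect.
RESPONSES = [
    "BEEP BOOP! Command received: {cmd}",
    "🚀 Launching {cmd}...",
    "Access granted, Agent {cmd}!",
]

def respond(cmd: str) -> str:
    """Turn whatever was typed into a fun, safe reply."""
    cmd = cmd.strip() or "???"
    return random.choice(RESPONSES).format(cmd=cmd)

print(respond("pizza"))
```

The real version would wrap `respond` in an `input()` loop so every line typed at the prompt gets an immediate reply.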
53 replies · 91 retweets · 1.5K likes · 61.9K views

Adam Howard retweeted
Nicola @NicolaManzini
Can now sail in any location in the world with vibesail! Just go to the world map page, zoom in and click on the map wherever you want to spawn. Here is Capri. Link in the comments
59 replies · 91 retweets · 1.2K likes · 118.6K views

Adam Howard retweeted
Charity Majors @mipsytipsy
This is great developer marketing. The product is pretty simple and obvious (run agents in the cloud, not your laptop), but the visualizations are exceptional. The infrastructure primitives section is concrete and honest (sandboxed envs, governance, triggers, fleet coordination). The "false summit" is catchy, and describes a real problem. The fact that individual productivity does not necessarily translate into organizational velocity -- can in fact impede organizational velocity -- is real and under-addressed. And it's just so darn pretty. 😍 Just know that if you visit this site in a non-anon browser, you are entering the Ona sales funnel. background-agents.com
Eric Clemmons 🍊☁️@ericclemmons

background-agents.com is gorgeous

11 replies · 16 retweets · 246 likes · 61.2K views

Adam Howard retweeted
Jason Bosco @jasonbosco
"We used to debate using tabs vs spaces in code we'd type out"
Jason Bosco tweet media
113 replies · 1K retweets · 12.1K likes · 373.9K views

Adam Howard retweeted
Darsh @darshkpatel
Wrote a fun blog post about how we built this. tl;dr: we lied to Chrome about what time it is, spy on the Web Audio API, render invisible frames to keep the compositor happy, and replace every
Replit ⠕@Replit

Introducing Replit Animation. Vibecode your next viral video in minutes, powered by Gemini 3.1 Pro. (This video was 100% made in Replit Animation)

9 replies · 9 retweets · 78 likes · 11.7K views

Adam Howard retweeted
abdel @AbdelStark
Official MCP server for the data(.)gouv(.)fr French governmental platform. Honestly, I am genuinely surprised. Usually France is pretty slow (despite having some of the best talent in tech, math, AI, etc.) to adopt innovations, especially for governmental services. I tried the MCP; it works flawlessly and can be very useful. I think the repo could be more agent-native/friendly, but it's a very nice initiative, love it! Frankly, bravo.
abdel tweet media
Antonin Garrone@antonin_garrone

The data available on data.gouv.fr can now be queried via a dedicated MCP server, currently experimental; your feedback is welcome! 💻 The code is open and available on GitHub: github.com/datagouv/datag… To learn more: data.gouv.fr/posts/experime…

37 replies · 100 retweets · 948 likes · 178.9K views

Adam Howard retweeted
Marc Lou @marclou
I made an app to turn web analytics into a Severance episode. Every page on your site is a department. /pricing is a department. /blog is a department. Your visitors are innies now. When they visit your site, they arrive through the elevator. When they navigate to another page, they walk to the next department. When they bounce, they walk out. It's called MACRODATA: macrodata.datafa.st
316 replies · 382 retweets · 5.1K likes · 1.1M views

Robin Hawkes @robhawkes
What are people using for todo lists nowadays? I need to start using them again @todoist was my go to previously but curious if things have changed, esp. with all the AI shenanigans.
3 replies · 0 retweets · 1 like · 1.7K views

Adam Howard retweeted
Matt Pocock @mattpocockuk
Something that I think goes under-emphasized is how much AI coding demands a 'lead dev' mentality. If you spent your pre-AI career trying to level up your teammates (through API design, feedback loops, architecture), then working with AI will feel natural. If you only focused on your own output, it will feel super bad.
66 replies · 66 retweets · 725 likes · 49.2K views

Adam Howard retweeted
Cline @cline
We tested @OpenAI's new WebSocket connection mode for the Responses API in Cline, and the early numbers are wild. Instead of resending full context every turn, WebSocket mode keeps a persistent connection and sends only incremental inputs. With 5.2 Codex, results vs the standard API:
→ ~15% faster on simple tasks
→ ~39% faster on complex multi-file workflows
→ best cases hitting 50% faster

The WebSocket handshake adds slight TTFT overhead on short tasks, but it gets amortized fast. On heavier workloads with dozens of tool calls, the speed gains are massive. We're still expanding our test sample, but this is a very promising step forward for every Cline user. Faster AI coding is coming.
OpenAI Developers@OpenAIDevs

Introducing WebSockets in the Responses API. Built for low-latency, long-running agents with heavy tool calls. developers.openai.com/api/docs/guide…
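The bandwidth intuition behind "incremental inputs" is simple arithmetic. The per-message size below is made up for illustration; the point is the growth rates:

```python
def full_resend_bytes(turns: int, msg_bytes: int = 2_000) -> int:
    """Stateless HTTP: turn i resends all i-1 prior messages plus the
    new one, so total traffic grows quadratically with turn count."""
    return sum(msg_bytes * i for i in range(1, turns + 1))

def incremental_bytes(turns: int, msg_bytes: int = 2_000) -> int:
    """Persistent WebSocket: each turn sends only the new message,
    so total traffic grows linearly."""
    return msg_bytes * turns

turns = 30  # e.g. a multi-file workflow with dozens of tool calls
print(full_resend_bytes(turns))   # 930000
print(incremental_bytes(turns))   # 60000
```

This is why the gap widens on complex workflows: at one turn the two schemes send the same bytes, but by turn 30 the full-resend total is over 15x larger.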

31 replies · 50 retweets · 1K likes · 189.7K views

Adam Howard retweeted
Framer 🇱🇹 @Framer_X
This tool is insane. Upload any image. Move camera in 3D space. And get a completely new camera angle 🤯 Also it’s free! 👇
251 replies · 1.8K retweets · 17.3K likes · 980.1K views

Adam Howard retweeted
Vadym Kazulkin @VKazulkin
"Code Is Cheap Now. Software Isn’t." The barrier to entry for building software has collapsed. The barrier to building something that matters hasn’t moved an inch. by Chris Gregori chrisgregori.dev/opinion/code-i…
24 replies · 313 retweets · 2.1K likes · 126.2K views