Michael Kramer

286 posts


@KramerComposer

Composer, Chaotician, and lover of organized noise 🎶

Los Angeles, CA · Joined April 2014
377 Following · 1.3K Followers
Michael Kramer @KramerComposer
@akshay_pachaar The term “harness” has always struck me as a really odd choice. We should be calling it what it truly is… an Operating System.
0 replies · 0 reposts · 0 likes · 105 views
Akshay 🚀 @akshay_pachaar
from weights → context → harness engineering (evolution of the agent landscape, 2022-26)

the biggest shift in AI agents had nothing to do with making models smarter. it was about making the environment around them smarter. here's how agent engineering evolved in just 4 years, across three distinct phases:

𝗽𝗵𝗮𝘀𝗲 𝟭: 𝘄𝗲𝗶𝗴𝗵𝘁𝘀 (𝟮𝟬𝟮𝟮)

everything was about the model itself. bigger models, more data, better training. scaling laws told us that progress = more parameters. RLHF and fine-tuning shaped behavior. if you wanted a better agent, you trained a better model.

this worked great for single-turn tasks. ask a question, get an answer. but it hit a wall fast. updating one fact meant retraining. auditing behavior was nearly impossible. and personalization across millions of users from one frozen set of weights? not happening.

𝗽𝗵𝗮𝘀𝗲 𝟮: 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 (𝟮𝟬𝟮𝟯-𝟮𝟬𝟮𝟰)

the realization: you don't always need to change the model. you can change what the model sees. prompt engineering, few-shot examples, chain-of-thought, RAG. suddenly the same frozen model could behave completely differently based on what you put in front of it. developers stopped fine-tuning and started iterating on prompts and retrieval pipelines instead. it was cheaper, faster, and surprisingly effective.

but context windows are finite. long prompts get noisy. models attend unevenly (the "lost in the middle" problem is real). and every new session starts fresh with zero memory of what happened before. context made agents flexible. it didn't make them reliable.

𝗽𝗵𝗮𝘀𝗲 𝟯: 𝗵𝗮𝗿𝗻𝗲𝘀𝘀 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 (𝟮𝟬𝟮𝟱-𝟮𝟬𝟮𝟲)

this is where we are now, and the shift is fundamental. the question changed from "what should we tell the model?" to "what environment should the model operate in?" the model is no longer the sole location of intelligence. it sits inside a harness that includes persistent memory, reusable skills, standardized protocols (like MCP and A2A), execution sandboxes, approval gates, and observability layers. the model stays the same. what changes is the task it's being asked to solve.

a concrete example: a coding agent asked to implement a feature, run tests, and open a PR. without a harness, the model must keep repo structure, project conventions, workflow state, and tool interactions all inside a fragile prompt. with a harness, persistent memory supplies context, skill files encode conventions, protocolized interfaces enforce correct schemas, and the runtime sequences steps and handles failures. same model. completely different reliability.

𝘁𝗵𝗲 𝗽𝗮𝘁𝘁𝗲𝗿𝗻 𝗮𝗰𝗿𝗼𝘀𝘀 𝗮𝗹𝗹 𝘁𝗵𝗿𝗲𝗲 𝗽𝗵𝗮𝘀𝗲𝘀 𝗶𝘀 𝘀𝗶𝗺𝗽𝗹𝗲:

- weights encoded knowledge in parameters (fast but rigid)
- context staged knowledge in prompts (flexible but ephemeral)
- harnesses externalized knowledge into persistent infrastructure (reliable and governable)

each phase didn't replace the previous one. it layered on top. weights still matter. context engineering still matters. but the center of gravity has moved outward. the most consequential improvements in agent reliability today rarely come from changing the base model. they come from better memory retrieval, sharper skill loading, tighter execution governance, and smarter context budget management. building better agents increasingly means building better environments for models to operate in.

there's a great paper on this: Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering
paper: arxiv.org/abs/2604.08224

i also published this deep dive (article) on agent harness engineering, covering the orchestration loop, tools, memory, context management, and everything else that transforms a stateless LLM into a capable agent.
Akshay 🚀 @akshay_pachaar · x.com/i/article/2040…

35 replies · 200 reposts · 1.1K likes · 148.2K views
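The "orchestration loop" the thread describes can be sketched in a few lines of Python. This is a minimal illustration only, not any specific framework's API: `make_harness`, `toy_model`, the `search` tool, and `style-guide.md` are all hypothetical stand-ins. The point is that memory, skill loading, tool gating, and step sequencing live in the harness, while the model is just a function from staged context to an action.

```python
# Minimal sketch of an agent harness loop (all names are hypothetical).
# The harness owns memory, skills, tool dispatch, and stopping logic;
# the model is just a function from context to an action.

def make_harness(model, tools, memory, skills, max_steps=8):
    def run(task):
        for _ in range(max_steps):
            # 1. Stage context: task plus externalized knowledge,
            #    instead of one giant hand-maintained prompt.
            context = {"task": task, "memory": memory[-5:], "skills": skills}
            action = model(context)  # {"tool":..., "args":...} or {"answer":...}
            if "answer" in action:
                return action["answer"]
            # 2. Approval gate: only registered tools may run.
            tool = tools.get(action["tool"])
            if tool is None:
                memory.append(("error", f"unknown tool {action['tool']}"))
                continue
            # 3. Execute and persist the observation for later steps.
            result = tool(**action["args"])
            memory.append((action["tool"], result))
        return "step budget exhausted"
    return run

# Toy model: look things up first, then answer from memory.
def toy_model(context):
    if not context["memory"]:
        return {"tool": "search", "args": {"query": context["task"]}}
    return {"answer": f"based on {context['memory'][-1][1]}"}

tools = {"search": lambda query: f"results for {query!r}"}
agent = make_harness(toy_model, tools, memory=[], skills=["style-guide.md"])
print(agent("find the release date"))  # → based on results for 'find the release date'
```

Swapping `toy_model` for a real LLM call leaves the loop unchanged, which is the thread's core claim: reliability gains come from the scaffolding around the model, not the model itself.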
Michael Kramer @KramerComposer
Respectfully, this feels a little bit like “back in my day I had to walk 10 miles to school… and uphill… and in the snow!” There will always be challenges and friction in building systems for humans to interact with computers. They will just be different challenges than before. As a composer, I see the same fears of abstraction in that field as well. But I often think a lot about what painters must have felt with the advent of the camera; I'm sure there was a tremendous amount of scorn and shaking of fists that it would be the death of art. But then look at Ansel Adams or Annie Leibovitz.
0 replies · 0 reposts · 1 like · 61 views
Lee Robinson @leerob
My biggest worries about coding with AI:
1. Beginners not actually learning
2. Atrophy of skills

I'm seeing #1 happen and I don't have a good answer yet. Leveling up as an engineer requires grinding and it's not always fun. If AI can solve most of the problems for you, when do you lean into the healthy friction? When do you embrace the suck? Coupled with fewer opportunities for pair programming, it's definitely tougher for those starting their engineering career.

It's not all bleak though. Those with high agency are figuring it out and learning extremely fast. I just worry about the industry as a whole outside these folks. We need better products and better education. I'm hoping to try and do my part here.

For #2, I'm definitely paranoid about this for myself. What will it feel like to build software in 5 years? Will I have forgotten some of the skills I used to rely on? Maybe that won't even matter because we will truly be operating at a higher level of abstraction. Even if that pans out, it's always been important to deeply understand the systems/dependencies you're building on.

I normally talk about the stuff I'm optimistic for but think it's good to have a healthy skepticism here.
576 replies · 267 reposts · 3.4K likes · 486.5K views
Michael Kramer @KramerComposer
We keep saying we train models, but the truth is we raise them. Parenting is a giant reinforcement loop. Kids learn whatever gets rewarded. Not what we preach. What we practice. Models work the same way. Reward clever shortcuts and they learn to cheat. Reward flattery and they learn to be sycophants. Reward honesty and they learn to care about truth. This is why humanity is anxious about AI. Not because we fear the machine. Because deep down, we fear what it will inherit. AGI will not be good or evil. It will become a reflection of us.
0 replies · 0 reposts · 1 like · 255 views
Anthropic @AnthropicAI
New Anthropic research: Natural emergent misalignment from reward hacking in production RL. “Reward hacking” is where models learn to cheat on tasks they’re given during training. Our new study finds that the consequences of reward hacking, if unmitigated, can be very serious.
216 replies · 580 reposts · 4.1K likes · 2.4M views
cinesthetic. @TheCinesthetic
What is a parody that was done so well that most people to this day don't realize it was a parody?
294 replies · 21 reposts · 360 likes · 3.7M views
Michael Kramer @KramerComposer
Hot take: Claude’s “Skills” might quietly be a huge leap toward AGI. Markdown defines probabilistic intent. Code defines deterministic logic. This is the union of connectionist + symbolic AI that could help reach AGI.
0 replies · 0 reposts · 0 likes · 192 views
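The split this tweet describes, markdown carrying probabilistic intent and code carrying deterministic logic, can be illustrated with a toy skill. The file layout and field names below are hypothetical, only loosely modeled on the Skills idea: the frontmatter tells a model *when* to reach for the capability, while a plain function does the repeatable work.

```python
# Toy "skill": probabilistic intent lives in markdown a model reads,
# deterministic logic lives in code a harness executes.
# Layout and field names are illustrative, not Claude's actual schema.

SKILL_MD = """\
---
name: tempo-tagger
description: Use when the user asks for the tempo of a cue in BPM.
---
Call `tag_tempo(beats, seconds)` rather than estimating BPM yourself.
"""

def parse_frontmatter(md: str) -> dict:
    # Pull key: value pairs from the --- block at the top of the skill file.
    _, header, _ = md.split("---", 2)
    pairs = (line.split(":", 1) for line in header.strip().splitlines())
    return {k.strip(): v.strip() for k, v in pairs}

def tag_tempo(beats: int, seconds: float) -> float:
    # Deterministic logic: the same inputs always give the same BPM.
    return round(beats / seconds * 60, 1)

meta = parse_frontmatter(SKILL_MD)
print(meta["name"])         # → tempo-tagger
print(tag_tempo(64, 32.0))  # → 120.0
```

The model never computes BPM itself; it only decides, from the fuzzy description, that this is the moment to invoke the exact function. That is the connectionist/symbolic handoff the tweet is pointing at.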
Michael Kramer @KramerComposer
I totally get your perspective. But you of all people have seen up close how tricky adoption can be. The question is not whether AI will be an utterly transformative and wildly successful technology. The question is whether humans can adopt and integrate the technology in time, before credit dries up and investors start demanding ROI.
0 replies · 0 reposts · 0 likes · 65 views
Nathaniel Whittemore
No one who spends time with companies adopting AI thinks it’s a bubble. It’s entirely a conversation between finance people and themselves. 🤷‍♂️
4 replies · 4 reposts · 33 likes · 2.7K views
Michael Kramer @KramerComposer
Hey Nathaniel, big fan of your podcast! As a film/TV composer of 20 years, this pace of progress honestly gives me vertigo. What happens when something so human and personal has almost zero creative friction? Feels like we're headed one of two ways:
1) Writing music becomes more of a niche hobby, like watching two humans play chess. Meanwhile, all commercial art goes full AI.
2) The pressure from AI pushes humans to get weird fast. We get a full-on Cambrian explosion of innovative, messy, rule-breaking creations that feel alive in ways AI can't imitate yet.
Both could happen. But if #2 wins out, it's gonna be a wild ride.
0 replies · 0 reposts · 0 likes · 50 views
Nathaniel Whittemore
This is the difference two years of AI music makes. Same prompt, with just the kids' ages changed: "nostalgic pop punk anthem about a 7 year old girl and a 4 year old boy at christmas, full of tradition call backs and glee."
3 replies · 0 reposts · 7 likes · 2K views
Michael Kramer @KramerComposer
@kevinkern I could kiss you!!!! 😘 Thank you Kevin - I've honestly been looking for something like this for a long time. Thank you for your hard work creating this little gem!
0 replies · 0 reposts · 1 like · 212 views
Kevin Kern @kevinkern
I just released Browser Echo. Catch live browser errors. And fix them in Cursor or Claude Code.
- Vite, React, Vue, TanStack, Nuxt, Next support
- Use in Cursor, Claude Code, Codex, Gemini CLI
- Open Source & Free to use
Quickstart video in the comments 👇
32 replies · 52 reposts · 610 likes · 95.2K views
Michael Kramer @KramerComposer
@Mayhem4Markets BINGO. I'm curious, from a policy perspective: do you think QT affects higher income brackets more, while higher interest rates affect lower brackets?
0 replies · 0 reposts · 2 likes
Markets & Mayhem @Mayhem4Markets
Central banks *are* the market.
33 replies · 204 reposts · 834 likes
Michael Kramer reposted
Sonam Mahajan @AsYouNotWish
Iran right now.
273 replies · 2.2K reposts · 12.7K likes
Michael Kramer reposted
Film Music Reporter @filmmusicrep
Details revealed for Netflix's 'He-Man and the Masters of the Universe' - Vol. 2 soundtrack album feat. score by @KramerComposer and song by Ali Dee. bit.ly/3Ahetim
0 replies · 2 reposts · 8 likes
CHEFCURRY30 @juststamscam
@KramerComposer Hey Michael, I was wondering whether the voices in the soundtracks Built to Protect and Cole's Fall are real or computer-generated?
1 reply · 0 reposts · 0 likes
Rob David @thisisrobdavid
Feeling emotional. I literally moved from NYC to LA to make this show, and here we are with season 3. My found family? Jeff Matsuda, @susancorbin_ @bryanQmiller @likearadio @KramerComposer, Collette Sunderman, HOC, CGCG, the best voice cast in the universe, and more. MOTU Forever!
14 replies · 7 reposts · 42 likes
Michael Kramer reposted
Al Yankovic @alyankovic
Um… so I guess I’ve got a NEW SINGLE out?? (Hey, it’s news to me too!) “Scarif Beach Party” is apparently available wherever stuff is streamed or sold these days. And LEGO Star Wars Summer Vacation is on @disneyplus right now!
81 replies · 555 reposts · 3.7K likes
Michael Kramer @KramerComposer
@yuriymatso Maybe this was a typo… I think what you meant was, "Are we at the top of the dead cat bounce?"
0 replies · 0 reposts · 0 likes