ecal retweetledi
ecal
278 posts

ecal retweetledi
ecal retweetledi
ecal retweetledi

A lot of folks talk about "escaping the permanent underclass". If AGI pans out, the future class divide won't be based on wealth, but on cognitive agency. There will be a "focus class" (those who control their attention and actually do things) and a "slop class" (those whose reward loops are fully RL-managed by AI)
English

@davidnimaesq Ok, so please suggest how I get natural light at 430am in Boston at any time of year, then.
English

Right idea, but bad methodology. Yes your body wants light in the morning. But that desire is for natural sunlight. Not some artificial fluorescent light purchased from Amazon. This is not a long time sustainable solution that you can do for 50 years.
Is the equivalent of eating creatine powder versus natural proteins from eggs. Right idea wrong methodology.
Sahil Bloom@SahilBloom
Random thing that improved my life: I got this ring light that I put next to my desk to shine bright light in my eyes early in the morning. I wake up at 430am and definitely saw an improvement in morning alertness and sleep quality. Also felt like it helped avoid winter lows.
English
ecal retweetledi

ecal retweetledi

@kianmckenn @kitlangton has a good metaphor it's like tending a garden
can let ai code grow but you have to aggressively clean up after it and be diligent about architecture and patterns
codebase is ok it will get better
English
ecal retweetledi

@alexocheema @Apple I never dreamed of a world where the Apple RAM markup would be the best deal around.
English

@VictorTaelin better. Claude Code uses an internal model router, others will use the big model for everything
English
ecal retweetledi

"intention density" is behind the visceral difference between AI outputs that feel beautiful, human, designed vs. uninspired/slop
it points at something much more specific than taste: how many distinct, willful decisions went into an output? how much of its structure can be attributed to intentionality vs. inevitability?
when I watch a Ghibli film, I know that every detail and expression in every frame has been crafted with specific intent (Miyazaki personally drew/edited 80,000 of 144,000 frames in Princess Mononoke). I can feel the creator through the creation.
in contrast, AI tools encourage work with far lower intention density by default.
starting from a blank canvas, you're forced to confront thousands of micro decisions to create a final output. but now that you can write a one-sentence prompt and get a full app or video one-shot, all of these decisions get outsourced, often without you noticing they exist. there can still be high intention in the final work (ex: codex generated apps still feeling pretty good), but the source of this intention is "the way things are usually done" (coming from the model) rather than a particular vision or design.
there's no reason AI output has to be like this though
we can think of the creative process in 2 parts:
1. intention - what do you want to create? why?
2. execution - how do you create it?
AI agents will clearly replace ~100% of the execution part of the creative process. they already have in software and will soon be in film/animation. as they shift up the chain and replace intention as well, creative output starts to feel more trite and indistinguishable.
but for those who are careful to preserve and expand rather than offload their intentionality, they have more time than ever to focus on the details and create far more/better software, art, etc.
English
ecal retweetledi

turned this into a web app you can use with your own obsidian vault! no install, just one html file: point it at your notes folder, it embeds everything with gemini, clusters by meaning, and renders the 3d network in your browser.
available now for x/twitter subscribers <3
Kat ⊷ the Poet Engineer@poetengineer__
exploring shapes of thoughts: extracted my obsidian notes' embeddings and arranges them as a 3d network using 3 different topologies: - centralized: one core idea connecting all - decentralized: notes cluster into themed hubs - distributed: edges labeled by llm describing how ideas connect
English
ecal retweetledi

continual learning, only continual learning, and nothing other than continual learning, is what's missing right now
I couldn't care less about saturating benchmarks, getting +3% in SWE Bench or whatever will not make these tools much better than they are, for as long as they still forget all they've learned in the next session
AGENTS and MEMORY markdowns don't do it either, the amount of information in even a few hours of work is already several thousand tokens long, and any attempt at compressing that will either lead to important information loss, or overwhelm the context to the point that the model becomes brain dead
please may the next launches be more about how this is fundamentally addressed and less about whether then can RL an LLM to beat ARC AGI 7 - which, by all means, is cool and impressive, but what everyone actually needs is an AI that doesn't feel like we have to onboard a fresh new intern every 2 hours of work
English
ecal retweetledi

Sufficiently advanced agentic coding is essentially machine learning: the engineer sets up the optimization goal as well as some constraints on the search space (the spec and its tests), then an optimization process (coding agents) iterates until the goal is reached.
The result is a blackbox model (the generated codebase): an artifact that performs the task, that you deploy without ever inspecting its internal logic, just as we ignore individual weights in a neural network.
This implies that all classic issues encountered in ML will soon become problems for agentic coding: overfitting to the spec, Clever Hans shortcuts that don't generalize outside the tests, data leakage, concept drift, etc.
I would also ask: what will be the Keras of agentic coding? What will be the optimal set of high-level abstractions that allow humans to steer codebase 'training' with minimal cognitive overhead?
English
ecal retweetledi

models have gotten good enough now that i have deleted all of my scaffolds and skills etc. i just explain what i need done carefully and comprehensively and the models do it. if i need to type it more than twice i put it in a .md for the models to read but i explicitly tell them to do so. i feel like at this point trying to build elaborate rube goldberd machines hinders the models more than it helps. take this as a PSA or whatever
English








