ecal

278 posts

@0xeca1

applied research scientist

nvim · Joined January 2017
586 Following · 127 Followers
ecal retweeted
Charles Rosenbauer@bzogrammer·
These two things are about the same size:
"2nm" transistor gate pitch: 42nm
Flagellum motor: 45nm
ecal retweeted
turbomander@turbomander·
no disrespect to wizards who believe themselves to be wiser than the council but that could never be me. i am far too wise to do something so foolish. perhaps even wiser than the council
ecal retweeted
ksa 🏴‍☠️@kosa12m·
How Anthropic talks about Claude Mythos rn:
ecal retweeted
François Chollet@fchollet·
A lot of folks talk about "escaping the permanent underclass". If AGI pans out, the future class divide won't be based on wealth, but on cognitive agency. There will be a "focus class" (those who control their attention and actually do things) and a "slop class" (those whose reward loops are fully RL-managed by AI)
Dillon Mulroy@dillon_mulroy·
i think i’m back to wanting a really good tab model - any progress here outside of cursor (i don’t have access to supermaven) and for nvim?
Sahil Bloom@SahilBloom·
@davidnimaesq Ok, so please suggest how I get natural light at 4:30am in Boston at any time of year, then.
David Nima@davidnimaesq·
Right idea, but bad methodology. Yes, your body wants light in the morning, but that desire is for natural sunlight, not some artificial fluorescent light purchased from Amazon. This is not a sustainable long-term solution that you can keep up for 50 years. It's the equivalent of eating creatine powder versus natural protein from eggs. Right idea, wrong methodology.
Sahil Bloom@SahilBloom

Random thing that improved my life: I got this ring light that I put next to my desk to shine bright light in my eyes early in the morning. I wake up at 4:30am and definitely saw an improvement in morning alertness and sleep quality. Also felt like it helped avoid winter lows.

ecal@0xeca1·
@justalexoki yabai if you’re a linux head and disable SIP
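For readers unfamiliar with it, yabai is a tiling window manager for macOS, and some of its features (the scripting addition) require partially disabling SIP. A minimal illustrative `yabairc` might look like this; the specific values are arbitrary examples, not from the tweet:

```shell
#!/usr/bin/env sh
# Minimal illustrative yabai config: tile windows with binary space
# partitioning and add small gaps around them.
yabai -m config layout bsp
yabai -m config window_gap 8
yabai -m config top_padding 8
yabai -m config bottom_padding 8
```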
taoki@justalexoki·
best macos window management? raycast isn't enough atm
ecal@0xeca1·
@dillon_mulroy Codeium by Windsurf is free and pretty good
ecal retweeted
dax@thdxr·
@kianmckenn @kitlangton has a good metaphor: it's like tending a garden. you can let ai code grow, but you have to aggressively clean up after it and be diligent about architecture and patterns. codebase is ok, it will get better
ecal retweeted
the tiny corp@__tinygrad__·
@alexocheema @Apple I never dreamed of a world where the Apple RAM markup would be the best deal around.
ecal@0xeca1·
@VictorTaelin better. Claude Code uses an internal model router; others will use the big model for everything
Taelin@VictorTaelin·
People using Opus 4.6: in your experience, is the model's performance and intelligence downgraded if we use an alternative to Claude Code (like OpenCode)? I've been appreciating Opus 4.6 increasingly more these days, but Claude Code has been lackluster ):
ecal retweeted
adammaj@MajmudarAdam·
"intention density" is behind the visceral difference between AI outputs that feel beautiful, human, designed vs. uninspired/slop

it points at something much more specific than taste: how many distinct, willful decisions went into an output? how much of its structure can be attributed to intentionality vs. inevitability?

when I watch a Ghibli film, I know that every detail and expression in every frame has been crafted with specific intent (Miyazaki personally drew/edited 80,000 of 144,000 frames in Princess Mononoke). I can feel the creator through the creation.

in contrast, AI tools encourage work with far lower intention density by default. starting from a blank canvas, you're forced to confront thousands of micro decisions to create a final output. but now that you can write a one-sentence prompt and get a full app or video one-shot, all of these decisions get outsourced, often without you noticing they exist.

there can still be high intention in the final work (ex: codex-generated apps still feeling pretty good), but the source of this intention is "the way things are usually done" (coming from the model) rather than a particular vision or design.

there's no reason AI output has to be like this though. we can think of the creative process in 2 parts:
1. intention - what do you want to create? why?
2. execution - how do you create it?

AI agents will clearly replace ~100% of the execution part of the creative process. they already have in software and will soon in film/animation. as they shift up the chain and replace intention as well, creative output starts to feel more trite and indistinguishable.

but for those who are careful to preserve and expand rather than offload their intentionality, they have more time than ever to focus on the details and create far more/better software, art, etc.
ecal retweeted
dax@thdxr·
"x is too dangerous so i am the only one who can be trusted with it" is such a hilariously cliche trope. it shows up constantly in history, books, movies, and it's the same outcome every time. and yet we're watching this happen in real time with AI
Kat ⊷ the Poet Engineer@poetengineer__·
turned this into a web app you can use with your own obsidian vault! no install, just one html file: point it at your notes folder, it embeds everything with gemini, clusters by meaning, and renders the 3d network in your browser. available now for x/twitter subscribers <3
Kat ⊷ the Poet Engineer@poetengineer__

exploring shapes of thoughts: extracted my obsidian notes' embeddings and arranged them as a 3d network using 3 different topologies:
- centralized: one core idea connecting all
- decentralized: notes cluster into themed hubs
- distributed: edges labeled by llm describing how ideas connect

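The pipeline described above (embed each note, cluster by meaning, lay out a network) can be sketched roughly. This is a minimal stand-in, not the author's code: random vectors replace real Gemini embeddings, and a plain k-means stands in for whatever clustering the app actually uses.

```python
# Sketch: cluster note embeddings by meaning. Random vectors stand in
# for real embeddings (the Gemini API call is not shown).
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: assign each row of X to the nearest of k centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # distance from every point to every center, then nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

notes = [f"note-{i}" for i in range(12)]
embeddings = np.random.default_rng(1).normal(size=(12, 8))  # stand-in vectors
labels = kmeans(embeddings, k=3)
# group note names into "themed hubs" (the decentralized topology)
clusters = {j: [n for n, l in zip(notes, labels) if l == j] for j in range(3)}
```

From here, rendering the 3D network is a layout problem: nodes are notes, and edges connect notes within (or between) clusters.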
ecal retweeted
Taelin@VictorTaelin·
continual learning, only continual learning, and nothing other than continual learning, is what's missing right now

I couldn't care less about saturating benchmarks. getting +3% in SWE Bench or whatever will not make these tools much better than they are, for as long as they still forget all they've learned in the next session

AGENTS and MEMORY markdowns don't do it either. the amount of information in even a few hours of work is already several thousand tokens long, and any attempt at compressing that will either lead to important information loss, or overwhelm the context to the point that the model becomes brain dead

please may the next launches be more about how this is fundamentally addressed and less about whether they can RL an LLM to beat ARC AGI 7 - which, by all means, is cool and impressive, but what everyone actually needs is an AI that doesn't feel like we have to onboard a fresh new intern every 2 hours of work
ecal retweeted
François Chollet@fchollet·
Sufficiently advanced agentic coding is essentially machine learning: the engineer sets up the optimization goal as well as some constraints on the search space (the spec and its tests), then an optimization process (coding agents) iterates until the goal is reached.

The result is a blackbox model (the generated codebase): an artifact that performs the task, that you deploy without ever inspecting its internal logic, just as we ignore individual weights in a neural network.

This implies that all classic issues encountered in ML will soon become problems for agentic coding: overfitting to the spec, Clever Hans shortcuts that don't generalize outside the tests, data leakage, concept drift, etc.

I would also ask: what will be the Keras of agentic coding? What will be the optimal set of high-level abstractions that allow humans to steer codebase 'training' with minimal cognitive overhead?
ecal@0xeca1·
is it my coding agent getting dumber later at night or me?
ecal retweeted
xjdr@_xjdr·
models have gotten good enough now that i have deleted all of my scaffolds and skills etc. i just explain what i need done carefully and comprehensively and the models do it. if i need to type it more than twice i put it in a .md for the models to read, but i explicitly tell them to do so. i feel like at this point trying to build elaborate rube goldberg machines hinders the models more than it helps. take this as a PSA or whatever
Taelin@VictorTaelin·
I think for the first time in all these years I feel like I'm finally about to release something that is ready to, and *will*, be massively adopted... ofc it could just flop (and that's fine too!! expected, even), but I never felt like that and that's a very cool feeling