evanfeenstra
@evanfeenstra
662 posts

Creating a free future one line of code at a time

Portland, OR · Joined July 2018
284 Following · 187 Followers
evanfeenstra
evanfeenstra@evanfeenstra·
@fchollet Just like an LLM... Human memory is layers of progressively compressed knowledge, so you can fit higher-level patterns into a single thought the higher up the stack you go (further forward in your frontal lobes)
0
0
0
40
François Chollet
François Chollet@fchollet·
The role of memorization and knowledge is to cache & reuse past cognitive work. It should be leveraged as a way to speed up cognition, not as a *replacement* for cognition.
14
21
213
19.1K
François Chollet
François Chollet@fchollet·
Simply retrieving a reasoning trace looks a lot like human reasoning, until it's time to navigate uncharted territory. If you memorized all reasoning traces of humans from 10,000 BC, you could automate their lives but you could not invent modern civilization.
65
48
565
39.5K
evanfeenstra
evanfeenstra@evanfeenstra·
@akibablade @TFTC21 @nvk Yup everyone has this already... It's called a bunch of AGENTS.md / CLAUDE.md files organized around your dirs
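A bunch of AGENTS.md files "organized around your dirs" boils down to a nearest-ancestor lookup: for any file an agent is editing, collect every AGENTS.md from the repo root down to that file's directory, most specific last. A minimal sketch of that resolution (the helper name and layout are made up for illustration, not any particular agent's actual logic):

```python
from pathlib import Path

def collect_agent_docs(repo_root: Path, target_file: Path) -> list[Path]:
    """Gather the AGENTS.md files that apply to target_file, root-first.

    Later entries are more specific, so an agent can let a nested
    AGENTS.md override the general instructions above it.
    """
    docs = []
    d = repo_root
    # Walk from the repo root down through target_file's parent directories.
    for part in target_file.relative_to(repo_root).parts[:-1]:
        if (d / "AGENTS.md").is_file():
            docs.append(d / "AGENTS.md")
        d = d / part
    # Finally check the file's own directory.
    if (d / "AGENTS.md").is_file():
        docs.append(d / "AGENTS.md")
    return docs
```

Root-to-leaf precedence like this is what makes the per-directory organization work: repo-wide conventions live at the top, subsystem-specific instructions sit next to the code they describe.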
0
0
0
21
Akiba
Akiba@akibablade·
I mean you can vibe code this yourself pretty easily. A lot of people I know, me included, have been doing their own version of this for a while. It's certainly one of my main priorities right now and I'm about 70% through my version. Love reading about other people's approach to the 'second brain' concept regardless. Gonna dig into this now.
1
0
8
472
TFTC
TFTC@TFTC21·
.@nvk just released llm-wiki v0.0.10. llm-wiki is an open-source tool inspired by Andrej Karpathy’s idea for building persistent personal knowledge bases with LLMs. Instead of stateless chats that forget everything, it lets AI agents compile raw documents into a structured, interlinked Markdown wiki that keeps improving over time. This update brings better research quality, session persistence, and reliability fixes.
TFTC tweet media
12
22
177
13.4K
evanfeenstra
evanfeenstra@evanfeenstra·
@jack Yup we have a multi-agent cloud coding UI pushing 100s of PRs a day... Almost all of them with goose
0
0
0
661
jack
jack@jack·
people are sleeping on how excellent goose has become under the hood (interface needs some work but team is pushing). it's a superpower. github.com/block/goose
216
442
5K
481.9K
evanfeenstra
evanfeenstra@evanfeenstra·
@karpathy @balajis Like here’s a view of a page on my web app, with all connections to other components, utils, endpoints, and backend data models (the graph crosses repos and services!)
evanfeenstra tweet media
0
0
1
34
Andrej Karpathy
Andrej Karpathy@karpathy·
Good post from @balajis on the "verification gap". You could see it as there being two modes in creation. Borrowing GAN terminology: 1) generation and 2) discrimination. e.g. painting - you make a brush stroke (1) and then you look for a while to see if you improved the painting (2). These two stages are interspersed in pretty much all creative work.

Second point: discrimination can be computationally very hard.
- Images are by far the easiest. e.g. image generator teams can create giant grids of results to decide if one image is better than the other. Thank you to the giant GPU in your brain built for processing images very fast.
- Text is much harder. It is skimmable, but you have to read, and it is semantic, discrete and precise, so you also have to reason (esp. in e.g. code).
- Audio is maybe even harder still imo, because it forces a time axis so it's not even skimmable. You're forced to spend serial compute and can't parallelize it at all.

You could say that in coding, LLMs have collapsed (1) to ~instant, but have done very little to address (2). A person still has to stare at the results and discriminate if they are good. This is my major criticism of LLM coding: they casually spit out *way* too much code per query at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break down problems into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.

This leads me to probably the biggest misunderstanding non-coders have about coding. They think that coding is about writing the code (1). It's not. It's about staring at the code (2). Loading it all into your working memory. Pacing back and forth. Thinking through all the edge cases. If you catch me at a random point while I'm "programming", I'm probably just staring at the screen and, if interrupted, really mad because it is so computationally strenuous. If we only get a much faster (1) but don't also reduce (2) (which is most of the time!), then clearly the overall speed of coding won't improve (see Amdahl's law).
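Karpathy's closing Amdahl's law reference can be made concrete. With a hypothetical split where generation (1) is 30% of a coding session and verification (2) the other 70%, even infinitely fast generation caps the overall speedup at about 1.4x:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Amdahl's law: overall speedup when only `accelerated_fraction`
    of the work is sped up by `factor`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Hypothetical split: generation (1) is 30% of a session, verification (2) is 70%.
# Even if an LLM makes generation effectively instant (huge factor), the
# session only gets ~1/0.7 ≈ 1.43x faster, because verification is untouched.
print(amdahl_speedup(0.3, 1e9))
```

The same formula explains his point in reverse: tools that shrink stage (2), e.g. by emitting small, easily verifiable diffs, move the dominant term and pay off far more than faster generation.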
Balaji@balajis

AI PROMPTING → AI VERIFYING

AI prompting scales, because prompting is just typing. But AI verifying doesn't scale, because verifying AI output involves much more than just typing.

Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But for anything subtle, you need to read the code or text deeply — and that means knowing the topic well enough to correct the AI.

Researchers are well aware of this, which is why there's so much work on evals and hallucination. However, the concept of verification as the bottleneck for AI users is under-discussed. Yes, you can try formal verification, or critic models where one AI checks another, or other techniques. But to even be aware of the issue as a first-class problem is half the battle.

For users: AI verifying is as important as AI prompting.

133
537
4.4K
844.1K
evanfeenstra
evanfeenstra@evanfeenstra·
@karpathy @balajis And then create maps and code "paths" for feeding to an LLM. It's the only way I've found to accurately describe large complex repos
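As a sketch of what such a "path" could look like (the graph and node names below are invented for illustration, not the actual format): store the cross-repo code graph as adjacency lists, extract the shortest dependency chain from a page component to a backend data model with BFS, and serialize it as compact context for an LLM.

```python
from collections import deque

# Hypothetical cross-repo code graph: node -> nodes it depends on.
GRAPH = {
    "web/LeaderboardPage": ["web/useLeaderboard", "web/formatRank"],
    "web/useLeaderboard": ["api/GET /leaderboard"],
    "api/GET /leaderboard": ["api/ScoreService"],
    "api/ScoreService": ["db/ScoreModel"],
    "web/formatRank": [],
    "db/ScoreModel": [],
}

def code_path(start: str, goal: str) -> list[str]:
    """Shortest dependency chain from a UI component to a data model (BFS)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

# Serialize the chain as compact LLM context.
print(" -> ".join(code_path("web/LeaderboardPage", "db/ScoreModel")))
```

A linearized chain like this is a fraction of the tokens of the raw files it touches, which is the appeal: the LLM sees how a page reaches the database without reading either repo end to end.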
0
0
1
29
evanfeenstra
evanfeenstra@evanfeenstra·
@stevewdavens @stakwork Yup, once your codebase is indexed, go to the `mcp` dir and just run `yarn dev`. The MCP SSE server starts up on localhost:3000
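For anyone unfamiliar with the wire format, an SSE server like the one `yarn dev` starts just streams `event:`/`data:` lines separated by blank lines. A minimal parser for that standard format (the example payload is hypothetical, not the server's actual messages):

```python
def parse_sse(stream: str) -> list[tuple[str, str]]:
    """Parse Server-Sent Events wire format into (event, data) pairs.

    SSE is plain text: `event:` and `data:` lines, with a blank line
    dispatching each event. Unnamed events default to "message".
    """
    events = []
    event, data = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # blank line dispatches the buffered event
            if data:
                events.append((event, "\n".join(data)))
            event, data = "message", []
    return events

raw = "event: endpoint\ndata: /messages?sessionId=abc\n\n"
print(parse_sse(raw))  # [('endpoint', '/messages?sessionId=abc')]
```

Pointing any SSE-aware client at the localhost:3000 endpoint once `yarn dev` is running yields a stream in exactly this shape.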
1
0
4
108
evanfeenstra reposted
Stakwork
Stakwork@stakwork·
Here's our approach to getting AI coding to work with large code bases and multiple repos. AST + LSP + Graph lets you retrieve just the code you need. Open source: github.com/stakwork/stakg… Demo by @evanfeenstra
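The AST half of that pipeline is easy to picture with Python's built-in `ast` module: parse source, record a node per function, and an edge per call site. This is only a toy illustration of the idea, not stakgraph's actual implementation (which also folds in LSP data and a graph store):

```python
import ast

SOURCE = """
def fetch_user(uid):
    return db_get("users", uid)

def render_profile(uid):
    user = fetch_user(uid)
    return template(user)
"""

def call_edges(source: str) -> list[tuple[str, str]]:
    """Extract (caller, callee) edges from function bodies via the AST."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Collect every simple-name call inside this function's body.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    edges.append((node.name, inner.func.id))
    return edges

print(sorted(call_edges(SOURCE)))
```

With edges like these in a graph database, "retrieve just the code you need" becomes a graph query: start at the function the user asked about and pull in its direct callers and callees instead of whole files.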
5
8
27
4K
Dan McAteer
Dan McAteer@daniel_mac8·
🤯 this genius stores his entire codebase syntax in a graph database and queries it to provide context to an llm
345
892
12.4K
2M
evanfeenstra
evanfeenstra@evanfeenstra·
@g_korland @LinkedIn Agreed! But RAG should be #1 on the list … context is king. I'm finding that slightly better context leads to huge improvements, independent of what model you use
1
0
1
18
Anthony Bonato
Anthony Bonato@Anthony_Bonato·
Mathematicians throw shade like no others
Anthony Bonato tweet media
166
2.2K
22.9K
1.2M
evanfeenstra
evanfeenstra@evanfeenstra·
@RyanSAdams Sorry bro, but neither the US nor the EU is able to change bitcoin's supply. Your meme is only relevant to ETH and under
1
0
0
40
evanfeenstra
evanfeenstra@evanfeenstra·
@RyanSAdams He should do what's right and say "bitcoin, not crypto". We should not be pushing scam coins on the population. Start with the coin that was distributed in a fair way
0
0
1
30
RYAN SΞAN ADAMS - rsa.eth 🦄
RYAN SΞAN ADAMS - rsa.eth 🦄@RyanSAdams·
Trump came out this week in support of crypto. What should Biden do? How about just say "I support crypto too." Then call off Gary, approve the ETH ETF, and let SAB 121 pass w/o veto. "Crypto isn't partisan" — you're right. It shouldn't be. Unless Biden and Democrat leaders keep making it partisan - it's their choice now.
111
62
783
57.3K
nvk 🌞
nvk 🌞@nvk·
Mr @moneyball this is not for low resource embedded, how can we load this into a micro? gitlab.com/lightning-sign…
5
2
5
3.5K
evanfeenstra
evanfeenstra@evanfeenstra·
@hus_qy “Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium”
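The quoted Unruh effect has a simple closed form, T = ħa/(2πc·k_B). A quick computation with CODATA constants shows why this thermal bath is undetectable at everyday accelerations:

```python
import math

HBAR = 1.054571817e-34  # J*s, reduced Planck constant
C = 2.99792458e8        # m/s, speed of light in vacuum
K_B = 1.380649e-23      # J/K, Boltzmann constant

def unruh_temperature(a: float) -> float:
    """Unruh temperature T = hbar*a / (2*pi*c*k_B) for proper acceleration a (m/s^2)."""
    return HBAR * a / (2 * math.pi * C * K_B)

# At 1 g the accelerating observer's thermal bath is about 4e-20 K --
# absurdly cold, which is why the effect has never been directly observed.
print(unruh_temperature(9.80665))
```

The temperature is linear in acceleration, so reaching even 1 K would take an acceleration of roughly 10^20 g, far beyond anything achievable with macroscopic detectors.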
1
0
0
153
evanfeenstra
evanfeenstra@evanfeenstra·
@hus_qy The misunderstanding comes from ignorance of the "medium" that light waves travel through. It's assumed that electromagnetic waves "need no medium", but that is wrong
1
0
1
91
Hans Moog
Hans Moog@hus_qy·
Since I received a few DMs asking if I am fine or having some mental breakdown because I suddenly write about a teleological world view (and even god), I want to try to explain myself 😅. So when I say that the universe is agential / goal-directed / intelligent, what do I mean by that?

Let's first define what "intelligence" is. I define intelligence as the ability to successfully pursue some goal "in the future" despite living in a changing and unpredictable environment. Let's use the following situation as an example, where a lifeguard wants to rescue a drowning person. Since he is slower in water than on land, he has multiple options to get to the person, e.g.:
- take path A (straight line)
- take path B (least time in water)
- take path C (optimal path minimizing total time)
- ...

Studies have shown that lifeguards intuitively choose a path that is very close to the optimal one (C), and I think it is pretty obvious that this is a sign of intelligence, as it requires a "good understanding" of the world and one's own capabilities to be able to "plan ahead" and run in the right direction.

Now let's replace the lifeguard with a dog and the drowning person with a floating ball and ask the same question again: which path will the dog take to fetch the toy? Interestingly, the dog will also choose the same time-minimizing path, and I think this is actually not very surprising, as dogs are usually considered to be pretty smart.

So let's go one step further and replace the dog with an ant colony and the drowning person with some food that they want to bring home. Obviously the food cannot float in water, so let's also replace the water with e.g. grass that has a similar impact on the travelling speed of the ants (they are slower on rough surfaces).

Now things start to become interesting, as ants are believed to not have an extended "model of the world" that would allow them to peek towards the horizon, spot the food and then plan their trip - but again they happen to travel along the most optimal path. Instead of relying on some internal world model, they use pheromones to encode their knowledge about the world "directly in their environment". So instead of building a meta-understanding like humans and dogs that can intuitively decide these things, they exploit the fact that pheromones lose their smell over time, which means that by following the strongest smell, they will arrive at the same optimal path towards the colony.

Now people might say: well, but these are all biological systems, and if they weren't able to pursue goals in the future then they wouldn't survive very long, so of course they need to show signs of intelligent behavior. So let's take one last step and leave the realm of biological systems, where some form of intelligence is usually assumed to be present, and go to the lower extreme of our spectrum!

What would happen if our lifeguard were an atom sending out a photon and the drowning person were another atom that receives this photon? Since the speed of light is slightly slower in water, this results in a very similar problem as before, and interestingly, the photon does not take a straight line (but the same path as the humans, dogs and ants - the one that minimizes the total travel time). This fact is known as "Fermat's principle", but how does the atom do that? How does the atom send the photon in the correct direction to hit the destination in exactly the minimal amount of time? Does it know its destination, or the water it will have to traverse in the future (similar to the lifeguard), and is it therefore able to plan ahead and send the photon in the right direction? Contemporary physics doesn't really try to "explain" this phenomenon beyond just describing it in the form of a physical law.

Of course people have wondered about the role of "the future" in the corresponding path integrals and how the photon might be able to obtain this knowledge, but mostly, people just accept it as one of many "fundamental laws".

Now there is one interesting thing about a photon, and that is that it sometimes behaves like "a wave" where its position, angular momentum and so on are not well defined until it "is measured". I believe that this is equivalent to a "search process" where the photon explores multiple different paths "it could take in parallel", to later "select" the path that was actually taken as soon as the first interaction occurs. It doesn't need information from the future because all possible ways to "get to a possible receiver" are explored in parallel, and the one that "actually happens" only gets chosen in retrospect as soon as the spreading wave reaches the absorbing receiver. In other words, it is the receiver that "decides what manifests as reality", and the only reason why we observe the photon in a random state is because we cannot predict which "version of the photon" will eventually win the race (it is computationally irreducible).

Now if this were true, it would have a few very profound implications:

1. Reality is virtual: We cannot simulate multiple versions of reality (and later pick one) if what we call "classical reality" is already the "ground truth". This means that our reality has to be virtual or to some degree "holographic", and in fact, the 2022 Nobel prize was awarded for a proof that showed that "the universe is not locally real" (it is not just a space of objects bouncing into each other).

2. The universe is agential: Since the universe is running a search process combined with a retrocausal selection process, it has a "goal" (to minimize variational principles or the time between consecutive interactions).

3. The universe "understands" the non-local nature of the branchial space: The only way to stop the search process and collapse the space of possibilities "everywhere at the same time" is by knowing which competing computational processes belong to which emitter (and when they were "resolved", independently of their coordinates), which we know as "spooky action at a distance".

This means that the universe is a "computational entity" that pursues the goal of minimizing variational principles while constructing a virtual perception of reality, and according to our definition from the beginning, it fulfills the classification for agency / intelligence. It is an "agent" that hosts (and guides) all other interactions (the ultimate "markov blanket" around all other states), and I think that humans usually address such an omniscient entity with the name "god" - but even if we preferred to call it a computer, it would still have the "size of the known universe", which seems pretty big for an ordinary computer. Maybe everything that we know as "classical reality" is just "agential programs" that run on the computational substrate of the universe and whose interactions are guided by an underlying optimization algorithm, which we perceive as "quantum mechanics".

TL;DR: We seem to exist in the "imagination of some god-like computational entity" 😅
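The lifeguard/photon analogy running through this thread is Fermat's principle, and it is easy to check numerically: minimize total travel time over the water-entry point and Snell's law falls out. The speeds and geometry below are invented for illustration:

```python
import math

V_SAND, V_WATER = 7.0, 2.0   # illustrative speeds on sand and in water (m/s)
LIFEGUARD = (0.0, 30.0)      # 30 m up the beach; the shoreline is y = 0
SWIMMER = (40.0, -20.0)      # 40 m along the shore, 20 m out in the water

def travel_time(x: float) -> float:
    """Total time if the lifeguard enters the water at shoreline point (x, 0)."""
    run = math.hypot(x - LIFEGUARD[0], LIFEGUARD[1]) / V_SAND
    swim = math.hypot(SWIMMER[0] - x, SWIMMER[1]) / V_WATER
    return run + swim

# Brute-force the fastest entry point along the shoreline (1 mm grid).
best_x = min((i / 1000 for i in range(0, 40001)), key=travel_time)

# Fermat / Snell check: sin(theta_sand)/v_sand == sin(theta_water)/v_water.
sin_sand = (best_x - LIFEGUARD[0]) / math.hypot(best_x - LIFEGUARD[0], LIFEGUARD[1])
sin_water = (SWIMMER[0] - best_x) / math.hypot(SWIMMER[0] - best_x, SWIMMER[1])
print(sin_sand / V_SAND, sin_water / V_WATER)  # nearly equal at the optimum
```

At the brute-forced optimum the two ratios agree to several decimal places, i.e. the time-minimizing path satisfies sin θ₁/v₁ = sin θ₂/v₂ — the same refraction law the photon obeys, with no planning required from the minimizer.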
Hans Moog tweet media
Hans Moog tweet media
Hans Moog@hus_qy

I used to have a very similar perception as you (maybe less dramatic, in the sense that I would still consider the possibility of a god), but I also didn't believe, and I assumed that others only believed because they either grew up with these ideas, needed them to "cope with the challenges of life", or were simply uncomfortable with the concept of not being able to "explain everything" just yet.

In east Germany, religion was not very popular, and I think the very first time I ever met a person who openly expressed a belief in god was at the age of 19, when I met one of my first girlfriends (she was a Jehovah's Witness and dead serious about the idea of god being real). We weren't together for long, but it triggered my interest in the topic, and I read most of the major religious texts to see if there was something to this idea. I even tried to proactively make friends with people from different religious groups to be able to join their gatherings and collect first-hand experience rather than just "reading about things".

It was super interesting to learn about all these ideas, and I still consider most of the people I met back then to be very close friends, but I personally concluded that all these texts were way too fuzzy and unspecific "for me" not to leave massive room for questions and interpretation. Even if there were conclusive proof for a god in these ancient texts, it seemed almost impossible to decide which "religious framework" to adopt, as they were all deeply intertwined with societal norms and values. To me, it seemed like a god would leave something behind that would be less "debatable" and that would not be denoted in "human language" that constantly changes its meaning. Of course I heard quotes like "The first sip from the cup of natural science makes one an atheist, but at the bottom of the cup, God awaits.", but I always thought that if this were true, then the scientists making these claims would surely be able to share their line of thought to allow others to arrive at a similar conclusion.

The reason why I am writing this entire text in the past tense is because I am starting to seriously question my PoV on this topic, and I am starting to move away from the traditional western materialist view - not because I developed some weird desire for spirituality, but because I believe that the scientific evidence we have collected over the last years points in exactly that direction. Interestingly, I am not the only scientist / person interested in these topics who has recently expressed a "dramatic change of mind". Joscha Bach, for example, who I consider to be one of the most brilliant thinkers of our times (especially in the realm of AI / consciousness research), posted the following tweet just 4 hours after I posted mine: x.com/plinz/status/1…

I think we are on the verge of a new scientific understanding of the cosmos that revolves around teleological and animist concepts, and we will eventually see more and more people commit to these ideas as they mature into precise theories with testable predictions.

38
32
201
70.1K