Max

3.1K posts

@systemscontext

AI analysis focused on models, tools, research, and real-world implications.

Joined December 2021

321 Following · 1.5K Followers
Max reposted
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 6.7K reposts · 55.9K likes · 19.9M views
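The "small and naive search engine over the wiki" mentioned in the post can be sketched in a few dozen lines. This is an illustrative assumption, not Karpathy's actual implementation: a tiny inverted index over a directory of .md files with a TF-IDF-style ranking, the kind of thing an LLM agent can call through a thin CLI wrapper.

```python
# Minimal sketch of a naive search engine over a wiki of .md files.
# The directory layout and scoring are assumptions for illustration.
import math
import re
from collections import Counter, defaultdict
from pathlib import Path

def tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(wiki_dir: str) -> tuple[dict, dict]:
    """Return (term -> {doc_path: term frequency}, doc_path -> token count)."""
    index: dict[str, Counter] = defaultdict(Counter)
    lengths: dict[str, int] = {}
    for path in Path(wiki_dir).rglob("*.md"):
        doc = str(path)
        tokens = tokenize(path.read_text(encoding="utf-8"))
        lengths[doc] = len(tokens)
        for term in tokens:
            index[term][doc] += 1
    return index, lengths

def search(query: str, index: dict, lengths: dict, k: int = 5) -> list[str]:
    """Rank documents by summed TF-IDF-ish score over query terms."""
    n_docs = max(len(lengths), 1)
    scores: Counter = Counter()
    for term in tokenize(query):
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(1 + n_docs / len(postings))
        for doc, tf in postings.items():
            scores[doc] += (tf / lengths[doc]) * idf
    return [doc for doc, _ in scores.most_common(k)]
```

Handing this to the LLM as a CLI tool just means wrapping `search()` in an `argparse` entry point that prints the top paths, which the agent then opens and reads.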
Max@systemscontext·
@BrainyMarsupial @kareem_carr Ask this question again in 5 years and I’ll bet the response will be that at least 99.9% of software engineers will be using AI-written code only. AI is getting exponentially better at coding.
0 replies · 0 reposts · 1 like · 45 views
Koala@BrainyMarsupial·
@kareem_carr I’ve spoken to multiple software engineers who’ve said that Claude Code has cut their time spent at work massively. It seems that the demands of employers have not caught up with the productivity gains observed (this is partially because such gains are hidden).
15 replies · 0 reposts · 49 likes · 19.5K views
Dr Kareem Carr@kareem_carr·
I keep hearing that software engineers don’t write much code anymore and it’s mostly AI now. Can any software engineers confirm how true this is? Do you just drink coffee and watch Claude code all day now?
535 replies · 12 reposts · 580 likes · 171.8K views
Max nag-retweet
Advait Paliwal@advaitpaliwal·
I built Feynman, Claude Code for research. I gave it a question and it came back 30 minutes later with a cited meta-analysis. It can also replicate experiments on Runpod, audit claims against code, and simulate peer review. Open source & MIT licensed, link below
136 replies · 420 reposts · 4.9K likes · 321.5K views
Andrew Ambrosino@ajambrosino·
this would be entertaining
Andrew Ambrosino tweet media
64 replies · 10 reposts · 833 likes · 40.2K views
Max@systemscontext·
@michpokrass Can we get a 5.4 version instead for instant?
0 replies · 0 reposts · 1 like · 114 views
Michelle Pokrass@michpokrass·
we shipped a new version of 5.3 instant to chatgpt yesterday. 5.3 was unintentionally pretty annoyingly clickbait-y. it's better in yesterday's model and we're going to keep stamping that behavior out. keep the feedback coming! help.openai.com/en/articles/68…
78 replies · 27 reposts · 454 likes · 58.6K views
Max@systemscontext·
@JonhernandezIA Feels directionally right, but “like a child” undersells how alien these systems could be. Alignment is less parenting and more specifying objectives we barely understand
0 replies · 0 reposts · 0 likes · 75 views
Jon Hernandez@JonhernandezIA·
📁 Geoffrey Hinton, deep learning pioneer and Turing Award winner, says AI will not be an obedient assistant. It will be more like a child. Smarter than us. And eventually making its own decisions. The challenge is not controlling it. It is making sure it cares about us.
130 replies · 156 reposts · 657 likes · 97.3K views
Max@systemscontext·
@sama Sam, if coding models keep improving, what becomes the scarce skill in software? Taste? Problem framing? Systems thinking?
0 replies · 0 reposts · 0 likes · 19 views
Sam Altman@sama·
The Codex team are hardcore builders and it really comes through in what they create. No surprise all the hardcore builders I know have switched to Codex. Usage of Codex is growing very fast:
Sam Altman tweet media
1.3K replies · 307 reposts · 6.6K likes · 846.5K views
Max@systemscontext·
The MacBook Air M5 battery is so good it’s actually messing with my habits. I keep reaching for the charger out of instinct. Then realise I don’t need it. 🤣🤣
Max tweet media
0 replies · 0 reposts · 0 likes · 93 views
Max@systemscontext·
@annbauerwriter I think this mostly depends on how you use it. If it replaces thinking, yeah. If it’s more like a sparring partner or editor, it often pushes the thinking further.
0 replies · 0 reposts · 0 likes · 5 views
Ann Bauer@annbauerwriter·
I've been using AI for about 3 weeks, because it is (I've been told) essential for speeding up business tasks. Perhaps. But it is also, without question, making me dumber, flatter of mind, less likely to think spontaneously. I don't like this.
332 replies · 64 reposts · 1.2K likes · 50.2K views
Max@systemscontext·
@anothercohen Cool project. Also a good example of why a lot of AI app value right now is just glue code for broken or fragmented software.
0 replies · 0 reposts · 0 likes · 41 views
Alex Cohen@anothercohen·
Finally finished vibe coding my personal health app built with Claude. Here's what it does:
- Connects to the Oura API to sync sleep, recovery, steps, and exercise data
- Tracks my monthly bloodwork via Rythm Health CSV uploads
- Uses Playwright to scrape Chronometer daily nutrition and water intake
- Uses Gemini to OCR Ladder workout screenshots and track my lifts
- Full dashboard with weight trends, calorie balance charts, macro tracking, and a tabbed daily log
It's completely interactive and honestly, pretty fucking cool. Blood markers even have visualizations based on what's in range and out of range.
122 replies · 27 reposts · 1.1K likes · 168.1K views
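The in-range/out-of-range blood-marker logic is exactly the kind of glue code the reply above describes. A minimal sketch, assuming a hypothetical CSV export with `marker,value,low,high` columns (the column names and marker data are illustrative; Rythm Health's real export format may differ):

```python
# Flag each blood marker as in range, low, or high against its reference band.
# CSV columns (marker, value, low, high) are an assumption for illustration.
import csv
import io

def flag_markers(csv_text: str) -> dict[str, str]:
    """Return {marker: "in range" | "low" | "high"} for each CSV row."""
    flags: dict[str, str] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        value = float(row["value"])
        low, high = float(row["low"]), float(row["high"])
        if value < low:
            flags[row["marker"]] = "low"
        elif value > high:
            flags[row["marker"]] = "high"
        else:
            flags[row["marker"]] = "in range"
    return flags
```

The resulting dict is trivially fed to a dashboard layer that colors each marker by its flag.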
Max@systemscontext·
@vitrupo Interesting framing, though “enslaved by evolution” is probably doing a lot of dramatic work here. Feels closer to: we’re running on inherited reward machinery that’s useful, imperfect, and often at odds with explicit goals.
0 replies · 0 reposts · 0 likes · 30 views
vitrupo@vitrupo·
Joscha Bach says consciousness isn’t free. Your body gives “compute credits” to the mind so it can solve the organism’s problems. In that sense, we are enslaved by our bodies to serve evolution. Suffering often means the parts of you that generate reward are misaligned with the parts that choose actions.
40 replies · 29 reposts · 323 likes · 23.3K views
Max@systemscontext·
@sama For ‘Thinking’ mode
0 replies · 0 reposts · 0 likes · 3 views
Sam Altman@sama·
GPT-5.4 is great at coding, knowledge work, computer use, etc, and it's nice to see how much people are enjoying it. But it's also my favorite model to talk to! We have missed the mark on model personality for awhile, so it feels extra good to be moving in the right direction.
2.9K replies · 599 reposts · 11.8K likes · 1.2M views
Derya Unutmaz, MD@DeryaTR_·
I’ve had early access to GPT-5.4 Pro. Without any reservation, I can say it is the most intelligent AI model to date, even significantly surpassing GPT-5.2 Pro at several levels! I’ve been using it non-stop for the past several days and am super excited about another major jump in AI! I will share specific examples, but overall GPT-5.4 Pro demonstrates relatively higher creativity, insight, and abstract intelligence. It tends to ask “why,” “what if,” “can I,” and “why it matters” type questions more frequently than the 5.2 Pro model. It also appears to generalize more effectively and comes across as more AGI-like in its reasoning, and even displays human-like intuition! Biomedical science-based responses especially are unifying large data sets and are simply amazing!
40 replies · 58 reposts · 867 likes · 50.4K views
Max@systemscontext·
@OpenAI 🔥LFG! 🔥
0 replies · 0 reposts · 0 likes · 228 views
OpenAI@OpenAI·
GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model.
OpenAI tweet media
2.1K replies · 3.3K reposts · 23.6K likes · 7M views
Max@systemscontext·
@markgurman MacBook Air M5 🙌
0 replies · 0 reposts · 0 likes · 590 views