Adam Kerr

2K posts

@zero_to_seed

Building Argo IQ. I like starting companies. Bullish on technology acceleration.

Planet Earth, Charlotte, NC · Joined September 2023
606 Following · 475 Followers
Adam Kerr@zero_to_seed·
Claude spinning a little longer while you aren't looking:
[attached image]
0 replies · 0 reposts · 0 likes · 21 views
Adam Kerr reposted
Tj Dunham@RealTjDunham·
what if you and 10 friends could pool your laptops and run the same AI models that cost model providers millions in GPUs? i built this. the model splits across every device in the group. they compute in parallel and the output is mathematically identical to running it on a single machine. not similar. identical. bit for bit. that's the breakthrough.

the engine is fully deterministic so every device is interchangeable. nodes don't trust each other. they don't need to. if anyone computes wrong the math catches it instantly... inference passes consensus.

mesh-llm partitions experts across nodes. smart approach, but each node only sees part of the model. we shard the full model across any device with zero quality loss. every parameter active, every inference. 10 laptops, 50 phones, a gaming PC, whatever shows up. the output is mathematically identical to a single datacenter GPU. same model, same quality, owned by the people running it
jack@jack

mesh-llm: pool compute to run open models. built by @michaelneale at block: docs.anarchai.org

10 replies · 15 reposts · 63 likes · 13.3K views
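The tweet's claim that "if anyone computes wrong the math catches it instantly" rests on determinism: if every node's kernel is bit-for-bit reproducible, a verifier can recompute any shard and compare hashes, so no trust is needed. A minimal sketch of that idea, with all names hypothetical and integer arithmetic standing in for deterministic kernels (real floating-point inference needs much more care to be reproducible):

```python
import hashlib

# Hypothetical sketch: a "layer" here is an integer matrix-vector multiply,
# so results are bit-for-bit reproducible on any device.
def run_layer(weights, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def digest(vec):
    # Canonical serialization -> hash; equal outputs give equal digests.
    return hashlib.sha256(repr(vec).encode()).hexdigest()

def verify(weights, x, claimed_digest):
    # A verifier recomputes the shard and compares digests:
    # any deviation, even a single bit, is caught.
    return digest(run_layer(weights, x)) == claimed_digest

weights = [[1, 2], [3, 4]]
x = [5, 6]
honest = digest(run_layer(weights, x))
print(verify(weights, x, honest))            # True: honest node passes
print(verify(weights, x, digest([0, 0])))    # False: tampered result is caught
```

This only illustrates the verification mechanic; the actual mesh-llm sharding and consensus protocol is not described in enough detail here to reproduce.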
Adam Kerr@zero_to_seed·
This is the best part of the internet
[attached image]
0 replies · 0 reposts · 0 likes · 13 views
Adam Kerr@zero_to_seed·
@Trace_Cohen Fortunately intelligence density is also accelerating
0 replies · 0 reposts · 1 like · 20 views
Trace Cohen@Trace_Cohen·
I can’t imagine their scaling issues… Token use must be 🚀, and we all got amazing subsidized AI for a few months. I can’t believe we’ve already been through the VC-subsidized AI era.
Boris Cherny@bcherny

Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw. You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.

3 replies · 0 reposts · 8 likes · 1.1K views
Adam Kerr@zero_to_seed·
@Trace_Cohen The more the major utilities get throttled, the less power they have. It just highlights the single-point-of-failure risk.
0 replies · 0 reposts · 1 like · 10 views
Adam Kerr@zero_to_seed·
Anthropic banning openclaw and other third parties creates a massive hole, and opportunity, in the market. The people want choice, not control.
2 replies · 0 reposts · 2 likes · 42 views
Adam Kerr@zero_to_seed·
@TheAhmadOsman Anthropic is closing their access while Google is pushing open models, OpenAI embraced openclaw, and open source and local LLM capabilities are accelerating.
0 replies · 0 reposts · 2 likes · 179 views
Ahmad@TheAhmadOsman·
friends don’t let friends use Claude Code in 2026. btw, alternatives? Codex, Droid, Kimi CLI, OpenCode, among others
[attached image]
52 replies · 20 reposts · 371 likes · 24.3K views
Adam Kerr@zero_to_seed·
All of the various libraries, frameworks, and operating systems are a security risk: a large blast radius to monitor. Simple = more secure, and LLMs reduce the need for the bloated systems of the past. Principle of least privilege, lightweight over feature-rich, just in time instead of just in case.
0 replies · 0 reposts · 0 likes · 25 views
@levelsio@levelsio·
Okay, honestly, this makes vibe coding into production very dangerous. You guys were all right. I think what I'll do is cut off all access to DBs and run it as a user with almost no privileges
Basel Ismail@BaselIsmail

URGENT PSA: a new supply-chain attack vector I found WILD. AI LLMs hallucinate package names roughly 18-21% of the time. Hackers have started pre-registering those hallucinated names on PyPI and npm with malicious payloads; they call it "slopsquatting." You can only imagine what's next

161 replies · 75 reposts · 1.7K likes · 418K views
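One practical mitigation for slopsquatting is to never install an LLM-suggested dependency directly: route every suggestion through a human-reviewed allowlist first. A minimal sketch, with the allowlist contents and function name purely hypothetical:

```python
# Hypothetical sketch of a "slopsquatting" guard: before installing anything an
# LLM suggests, check it against a human-reviewed allowlist (e.g. drawn from a
# pinned requirements file). Unknown names go to manual review, not to pip.
APPROVED = {"requests", "numpy", "pandas", "flask"}

def vet_packages(suggested):
    """Split LLM-suggested package names into (approved, needs_review)."""
    approved, review = [], []
    for name in suggested:
        # PyPI treats names case-insensitively and '-'/'_' as equivalent.
        canonical = name.strip().lower().replace("_", "-")
        (approved if canonical in APPROVED else review).append(canonical)
    return approved, review

ok, suspicious = vet_packages(["requests", "reqeusts", "numpy", "torch-utils3"])
print("install:", ok)         # install: ['requests', 'numpy']
print("review:", suspicious)  # review: ['reqeusts', 'torch-utils3']
```

The typo-shaped name "reqeusts" is exactly the kind of hallucination an attacker would pre-register, and the allowlist catches it regardless of whether the malicious package exists.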
Adam Kerr@zero_to_seed·
It’s amazing how much value you can get out of well organized folder structures with markdown and links like a wiki. No need for RAG or search to get started. Throw in a keep-knowledgebase-updated rule and it’s pretty snappy.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

0 replies · 0 reposts · 2 likes · 63 views
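The "auto-maintaining index files" idea in the thread above is mechanically simple, which is why no RAG is needed at small scale: walk the wiki directory, pull each article's first heading, and write an index.md of links. A minimal sketch (file names and layout hypothetical; in practice the LLM agent, not a script, would maintain this):

```python
import os
import tempfile

# Hypothetical sketch of an auto-maintained wiki index: compile an index.md
# that links each article by its first heading, so an agent (or a human in
# Obsidian) can navigate the wiki without any retrieval infrastructure.
def build_index(wiki_dir):
    entries = []
    for fname in sorted(os.listdir(wiki_dir)):
        if not fname.endswith(".md") or fname == "index.md":
            continue
        with open(os.path.join(wiki_dir, fname), encoding="utf-8") as f:
            first = f.readline().strip()
        # Use the first markdown heading as the title, falling back to the name.
        title = first.lstrip("# ").strip() or fname
        entries.append(f"- [{title}]({fname})")
    index = "# Index\n\n" + "\n".join(entries) + "\n"
    with open(os.path.join(wiki_dir, "index.md"), "w", encoding="utf-8") as f:
        f.write(index)
    return index

# Demo on a throwaway wiki directory:
wiki = tempfile.mkdtemp()
with open(os.path.join(wiki, "moe.md"), "w", encoding="utf-8") as f:
    f.write("# Mixture of Experts\n\nNotes...\n")
print(build_index(wiki))
```

Because the index is itself a markdown file in the wiki, it gets the same treatment as every other article: the agent reads it first, then follows links to whatever the query needs.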
Adam Kerr@zero_to_seed·
@jack @michaelneale This is awesome! Though I can't tell if this is the last season of Silicon Valley or the first of Battlestar Galactica...
0 replies · 0 reposts · 5 likes · 308 views
Peter H. Diamandis, MD@PeterDiamandis·
In case you haven’t heard, we put up $3.5M+ to source optimistic visions of the future. We don’t care who you are or where you are. Tell me… what future do you want for your children?
143 replies · 32 reposts · 435 likes · 20.5K views
Sneyon@IAmSneyon·
@TheAhmadOsman @zero_to_seed Most frustration with local models isn’t the model; it’s the setup around it. Performance is only one part of the equation.
1 reply · 0 reposts · 1 like · 125 views
Ahmad@TheAhmadOsman·
if you’re using Ollama, switch to llama.cpp. if you’re using OpenClaw, switch to Hermes. these are basics at this point
100 replies · 119 reposts · 2.2K likes · 138.9K views
George Kal@heygeorgekal·
Why are people allergic to AI generated art... but perfectly fine using vibe coded apps?
64 replies · 1 repost · 50 likes · 4.5K views
Adam Kerr@zero_to_seed·
@sandislonjsak Maybe they set up guardrails so that QA is less of a bottleneck? Though I’m with you; I’ve maxed out at 3-4 due to QA
0 replies · 0 reposts · 0 likes · 8 views
cow@cowincrisis·
if you woke up one day and you were a cow what’s the first thing you’d do
910 replies · 34 reposts · 830 likes · 47.6K views