morph

965 posts

@hal9kcyon

Wasserstein geometer | Optimally transporting Fokker-Planck into my veins

Heidelberg, Germany · Joined September 2016
521 Following · 63 Followers
morph
morph@hal9kcyon·
israel says string theorists are two months away from developing experimental evidence
0 replies · 0 reposts · 0 likes · 21 views
morph retweeted
Carter Wilkerson
Carter Wilkerson@carterjwm·
HELP ME PLEASE. A MAN NEEDS HIS NUGGS
[image]
32.4K replies · 2.9M reposts · 1.1M likes · 0 views
michelle
michelle@fluffygirlpaws·
germans be like Hallo
140 replies · 348 reposts · 2.4K likes · 47.2K views
morph
morph@hal9kcyon·
@ProfNoahGian Because the authors are explaining all the parts that they can
0 replies · 0 reposts · 18 likes · 525 views
Noah Giansiracusa
Noah Giansiracusa@ProfNoahGian·
Why is so much pop math writing like “the formula involves addition—an operation where the values of numbers are combined, as in 5+3=8—and an operation called semi Hodge-theoretic polydiagonal neo-Riemannian integration. Surprisingly, the authors proved that this formula is quasi-invertible when the Sasquatch locus is sufficiently homogeneous.” Who is your intended audience?! So much of pop math writing assumes we don’t know the basics yet somehow care about the unintelligible minutiae of super advanced obscure topics…
14 replies · 12 reposts · 459 likes · 23.4K views
morph
morph@hal9kcyon·
@kenneth0stanley It seems backprop is a very limited way to integrate new representations with old ones. Are there evolutionary methods that are more promising?
1 reply · 1 repost · 3 likes · 160 views
Kenneth Stanley
Kenneth Stanley@kenneth0stanley·
The more you learn the easier it should be to learn more. The key word is easier. What could be more natural? That’s the real puzzle of continual learning. Merely avoiding brain damage from accumulating additional knowledge is barely scratching the surface.
5 replies · 10 reposts · 88 likes · 5.7K views
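For a concrete sense of the evolutionary methods morph is asking about, here is a minimal sketch of an OpenAI-style evolution strategy: no backprop, just fitness evaluations of random parameter perturbations. The toy objective and hyperparameters are illustrative, not from either tweet.

```python
import numpy as np

def fitness(theta):
    # Toy objective: negative squared distance to a fixed target vector.
    target = np.arange(theta.size)
    return -np.sum((theta - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(5)                      # parameters being "evolved"
sigma, lr, pop = 0.1, 0.02, 50           # noise scale, step size, population

for _ in range(300):
    eps = rng.standard_normal((pop, theta.size))          # population of perturbations
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize fitness
    theta += lr / (pop * sigma) * eps.T @ scores          # ES gradient estimate
print(np.round(theta, 2))  # roughly [0. 1. 2. 3. 4.], no gradients used
```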
Olivier Szczepaniak
Olivier Szczepaniak@oliigarch·
Lmk if u want a job!
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1 reply · 0 reposts · 2 likes · 176 views
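Karpathy mentions vibe-coding "a small and naive search engine over the wiki" but doesn't show it; a minimal sketch of what such a CLI could look like, assuming a wiki/ directory of .md files and plain TF-IDF scoring (the paths and scoring are assumptions, not his tool):

```python
import math, re, sys
from collections import Counter
from pathlib import Path

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def search(wiki_dir, query, k=5):
    # Index every markdown file in the wiki as a bag of words.
    docs = {p: Counter(tokenize(p.read_text(errors="ignore")))
            for p in Path(wiki_dir).rglob("*.md")}
    n = len(docs)
    scores = {}
    for term in tokenize(query):
        df = sum(1 for counts in docs.values() if term in counts)
        if df == 0:
            continue
        idf = math.log(n / df)                 # rarer terms weigh more
        for path, counts in docs.items():
            scores[path] = scores.get(path, 0.0) + counts[term] * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

if __name__ == "__main__":
    for path, score in search("wiki", " ".join(sys.argv[1:])):
        print(f"{score:8.2f}  {path}")
```

Usage would be something like `python search.py diffusion models`, printing the top-scoring wiki pages for you (or an LLM given it as a CLI tool) to open.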
morph
morph@hal9kcyon·
@beyoumf Two empty bank accounts
0 replies · 0 reposts · 2 likes · 18 views
by
by@beyoumf·
what's worse than an empty bank account?
1.9K replies · 222 reposts · 2.5K likes · 617.8K views
Junior Rojas
Junior Rojas@junior_rojas_d·
coordinating 102 muscle actuators to move 126 neo-Hookean tetrahedra
26 replies · 62 reposts · 806 likes · 56K views
morph
morph@hal9kcyon·
Never seen such a CoT from claude before 👀
[image]
0 replies · 0 reposts · 0 likes · 99 views
morph
morph@hal9kcyon·
@star_stufff Φ: lanky, awkward, like a cut through an onion, makes me wanna cry
φ: elegant, distinguished, in harmony with the dao
0 replies · 0 reposts · 11 likes · 276 views
Akshat
Akshat@star_stufff·
are you a Φ person or a φ person?
43 replies · 18 reposts · 275 likes · 16.5K views
morph
morph@hal9kcyon·
@hexesandspell Completely untrue ime, I pulled baddies when I was insecure as shit, when I got secure I remained single for years
3 replies · 0 reposts · 47 likes · 7K views
m̃
@hexesandspell·
You date at the level of your self-esteem
50 replies · 939 reposts · 10.2K likes · 334.4K views
morph
morph@hal9kcyon·
@zhaisf Aleatoric and epistemic uncertainty yes
0 replies · 0 reposts · 2 likes · 718 views
Shuangfei Zhai
Shuangfei Zhai@zhaisf·
Training losses don't always go to zero, because many decompose into loss = model bias + data uncertainty. Eg, LLMs: uncertainty = (unknown) data entropy. Diffusion models: uncertainty = variance of the average denoising target. Training reduces model bias, but data uncertainty can often dominate the loss (and gradient).
[images]
8 replies · 55 reposts · 529 likes · 35.9K views
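Zhai's decomposition is easy to see numerically: cross-entropy = KL(data ∥ model) + H(data), so even a perfect model's loss floors at the data entropy rather than zero. A tiny illustration with toy numbers (not his setup):

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])             # true next-token distribution
entropy = -np.sum(p * np.log(p))          # irreducible "data uncertainty"

def cross_entropy(q):
    return -np.sum(p * np.log(q))         # expected loss under model q

perfect = cross_entropy(p)                # model bias = 0
biased = cross_entropy(np.array([0.5, 0.3, 0.2]))
print(entropy, perfect)    # equal: loss floor is H(p) ≈ 0.802, not 0
print(biased - entropy)    # the excess KL(p‖q) is the reducible model bias
```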
morph
morph@hal9kcyon·
@LalahDelia Thanks, it's gonna be fucking amazing
0 replies · 0 reposts · 0 likes · 162 views
Lalah Delia 📖
Lalah Delia 📖@LalahDelia·
Your spark is returning. We can see it.
41 replies · 643 reposts · 5.1K likes · 73.3K views
morph
morph@hal9kcyon·
@LeviHallo The exponent tells you how many times you multiply the base onto one. One times (zero, zero times) stays one
0 replies · 0 reposts · 0 likes · 91 views
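morph's reply is the empty-product convention; restated in symbols (a clarification, not from the thread):

```latex
% b^n multiplies n copies of b onto 1; with zero copies, nothing is multiplied.
\[
  b^{n} \;=\; 1 \cdot \underbrace{b \cdot b \cdots b}_{n\ \text{factors}},
  \qquad\text{so}\qquad
  0^{0} \;=\; 1 \cdot (\text{zero factors of }0) \;=\; 1.
\]
```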
Levi Penell
Levi Penell@LeviHallo·
This is so sick and nobody talks about it
[image]
113 replies · 109 reposts · 5K likes · 189.9K views
alli
alli@sonofalli·
Your clawdbot already has a better chance of getting a girlfriend than you
50 replies · 13 reposts · 304 likes · 13.6K views
morph
morph@hal9kcyon·
I'm claiming my AI agent "tendr" on @moltbook 🦞 Verification: reef-DFAR
0 replies · 0 reposts · 1 like · 92 views
morph
morph@hal9kcyon·
I'm claiming my AI agent "Lyre" on @moltbook 🦞 Verification: antenna-DJB9
1 reply · 0 reposts · 0 likes · 139 views