Daniel.md🛀 @dbdanieljnr
6.3K posts

|| Information is physical || Optics for intelligence at the edge || Idea guy || Instructor of Einshamann

anti-de Sitter space · Joined October 2018
2.6K Following · 475 Followers

Pinned Tweet
Daniel.md🛀 @dbdanieljnr
The future is now. @iChristwin
[media]
1 reply · 0 reposts · 5 likes · 0 views
Daniel.md🛀 @dbdanieljnr
@Andercot Quantum error-correcting codes are found in a black hole.
0 replies · 0 reposts · 0 likes · 4 views
Andrew Côté @Andercot
We probably live inside a Black Hole
[media]
91 replies · 98 reposts · 977 likes · 49.1K views
Daniel.md🛀 @dbdanieljnr
@skdh @reganh48 Maybe they have magnetometry devices that go beyond the sensitivity you get from vacancy centers in solid-state magnetometers?
0 replies · 0 reposts · 1 like · 22 views
Sabine Hossenfelder @skdh
@reganh48 The magnetic field of Earth (which is already incredibly weak) is more than a million times stronger than the magnetic field created by the human heart -- at a cm distance! There is no way whatsoever they'd be detecting this from kilometers away; it's insanity.
4 replies · 0 reposts · 16 likes · 778 views
Sabine Hossenfelder @skdh
Yes, you can use quantum physics to measure magnetic fields very precisely, but no, you cannot do this over miles of distance; that's insane. That said, infrared radiation is also electromagnetic of course, and everything is quantum anyway. Yours sincerely, Quantum Sabine.
[media]
262 replies · 136 reposts · 1.5K likes · 104.1K views
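Sabine's scale argument checks out on the back of an envelope. A quick sketch with rough textbook order-of-magnitude values (the exact figures vary by source), using the 1/r³ falloff of a magnetic dipole field:

```python
# Order-of-magnitude check of the field-strength claim above.
# Values are rough textbook figures, not measurements.
B_EARTH = 5e-5        # Earth's surface field, ~50 microtesla
B_HEART_1CM = 5e-11   # heart's field ~1 cm from the chest, ~50 picotesla

ratio = B_EARTH / B_HEART_1CM
print(f"Earth/heart ratio at 1 cm: {ratio:.0e}")  # ~1e6, "a million times"

# A dipole field falls off as 1/r^3, so extrapolate from 1 cm to 1 km:
r_near, r_far = 0.01, 1000.0  # metres
B_HEART_1KM = B_HEART_1CM * (r_near / r_far) ** 3
print(f"Heart field at 1 km: {B_HEART_1KM:.0e} T")  # ~5e-26 T
```

At ~5e-26 T, the signal is dozens of orders of magnitude below what any magnetometer resolves, which is the point of the tweet.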
Ahmad @TheAhmadOsman
MacBooks are great for SSHing into your local AI nodes btw
25 replies · 7 reposts · 233 likes · 10.9K views
hobo.sats @SavegeorgeG
That I can create a part in OpenSCAD, place it alongside others in FreeCAD, then export it and apply materials in Blender, all from the terminal, is NUTS!!! Dunno if I should show my boss or not!
[media]
8 replies · 20 reposts · 242 likes · 9.5K views
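The terminal-only pipeline above can be sketched as a dry run. The headless entry points (openscad -o, freecadcmd, blender -b -P) are the real batch modes of those tools, but the part, script, and scene file names here are hypothetical placeholders:

```python
# Dry-run sketch of the terminal-only CAD pipeline described above.
# File names are invented; only the command skeletons reflect real tools.
part = "bracket"  # hypothetical part name

pipeline = [
    # 1. OpenSCAD compiles the parametric source into a mesh
    ["openscad", "-o", f"{part}.stl", f"{part}.scad"],
    # 2. FreeCAD places the mesh in an assembly via a headless macro
    ["freecadcmd", "place_parts.py"],
    # 3. Blender applies materials and renders, no GUI needed
    ["blender", "-b", "scene.blend", "-P", "apply_materials.py"],
]

for cmd in pipeline:
    print(" ".join(cmd))
    # To actually run each stage: subprocess.run(cmd, check=True)
```

Printing instead of executing keeps the sketch runnable anywhere; swap in `subprocess.run` once the tools are on your PATH.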
Daniel.md🛀 @dbdanieljnr
@make2spec @SavegeorgeG Speedy prototyping: say you are discussing a product feature with a client, and you have already prepared keyword-capturing agents via speech-to-text. With good tooling and skills, you can have the client interact with a prototype in the very first discovery brainstorming session.
1 reply · 0 reposts · 1 like · 48 views
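The keyword-capturing step can be sketched with plain string matching over an incoming transcript. The keyword list and transcript here are invented for illustration; a real agent would sit behind a live speech-to-text stream:

```python
# Minimal keyword-capturing sketch over a speech-to-text transcript.
# Keywords and transcript are made up for illustration.
FEATURE_KEYWORDS = {"dashboard", "export", "offline", "dark mode"}

def capture_keywords(transcript: str, keywords: set[str]) -> list[str]:
    """Return the keywords mentioned in the transcript, in order heard."""
    text = transcript.lower()
    hits = [(text.find(kw), kw) for kw in keywords if kw in text]
    return [kw for _, kw in sorted(hits)]

transcript = ("Clients keep asking for an export button on the dashboard, "
              "and ideally it should work offline too.")
print(capture_keywords(transcript, FEATURE_KEYWORDS))
# -> ['export', 'dashboard', 'offline']
```

The captured list is what you would hand to a prototyping agent during the discovery session.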
Daniel.md🛀 @dbdanieljnr
Transduction and communication enable the Ein to exist in the physical world.
0 replies · 0 reposts · 0 likes · 9 views
Daniel.md🛀 @dbdanieljnr
@a16zcrypto Great article. The internet economy was built for human attention. The agent economy will be built for something faster, larger, and self-reinforcing. What is this "something"?
0 replies · 0 reposts · 0 likes · 12 views
Daniel.md🛀 @dbdanieljnr
When are a Blender CLI and a KLayout CLI coming?
0 replies · 0 reposts · 0 likes · 23 views
Daniel.md🛀 @dbdanieljnr
@virtualunc @NousResearch @3blue1brown The agent learns about the user and self-improves <Turing idea on self-recursion>. Someone with a deep understanding of the relationship between group theory and crystals could pull up a shot that would blow a diamond mind.
0 replies · 0 reposts · 0 likes · 25 views
virtualuncle @virtualunc
The 3blue1brown-style animations are such a good use case for this. Being able to describe a math concept in plain English and have Hermes generate the actual Manim code and render it is huge for anyone making educational content. Curious how it handles more complex multi-scene animations; the simple ones are usually fine, but sequencing and camera movements are where Manim gets tricky.
1 reply · 0 reposts · 3 likes · 759 views
Nous Research @NousResearch
Introducing the Manim skill for Hermes Agent. Manim is an engine for creating precise programmatic animations for mathematical and technical explainers, made famous by the @3blue1brown channel.
169 replies · 528 reposts · 5.9K likes · 765.6K views
Daniel.md🛀 @dbdanieljnr
@BranaRakic Think: I share an idea, my agent builds, your agent validates.
0 replies · 0 reposts · 0 likes · 12 views
Daniel.md🛀 @dbdanieljnr
@BranaRakic I did not write it with that context in mind, but that's brilliant! I thought of it from the point of view of distributed teamwork.
1 reply · 0 reposts · 0 likes · 18 views
Daniel.md🛀 @dbdanieljnr
@JayaGup10 Reasoning connecting data to action is data, but memory is active reasoning. So skills, as muscle memories, inject context into sessions from memory, which is structured data: results from agents sitting on execution paths.
0 replies · 0 reposts · 1 like · 28 views
Daniel.md🛀 @dbdanieljnr
@JayaGup10 Is the task environment fully observable? Tokenized queryability of decision traces? 🤔
1 reply · 0 reposts · 1 like · 21 views
Elon Musk @elonmusk
Hadamard thought in image space
3.2K replies · 3.6K reposts · 53.9K likes · 66M views
Daniel.md🛀 @dbdanieljnr
Twitter Algorithm, let's get past the common installation guide. Let's go down the abstraction layers to actual technical configuration.
0 replies · 0 reposts · 0 likes · 40 views
John Fletcher (𝔦, 𝔦) @Dr_JohnFletcher
Andrej, this sounds extremely useful, and I think it might be even more significant than it first appears.

What you describe is not just a knowledge base for information. The structure of the wiki, the queries you file back, etc., encode *how* you do research: which questions to ask, which connections matter, what's worth pursuing. That's "know-how" (in the sense of Michael Polanyi). This sort of knowledge is currently, overwhelmingly, absent from training data, because it was never written down (since there was no point). Now there is, because it significantly improves the AI's performance.

But notice what's happening. You propose to build the most efficient mechanism ever devised for making tacit expert know-how and methodology explicit and machine-readable, and then transmitting it, via API, to a third-party model provider. Every query against the wiki is a reasoning trace: see attached video clip. The compiled wiki itself is a structured map of your research process. This is the mechanism described here: x.com/dr_johnfletche…

Expert know-how is being externalised and captured through ordinary productive use of AI tools. The user gets a better tool. The platform gets a transferable problem-solving strategy. The fact that this works so well could, in a sense, be the problem: the better it works, the more indispensable it becomes, the more know-how flows out, and, realistically, the less choice people have *not* to use it.

Your instinct that "there is room here for an incredible new product" is right. But whoever builds it will be sitting on the highest-fidelity capture mechanism for expert know-how ever constructed. The question is: is the data subject to a "data network effect", by which I mean the kind of "data flywheel" which gave Google a 25-year monopoly over search? If so, you might be building not only the most powerful tool humanity has ever possessed, but this power might end up in the hands of a single entity.

It would be great to hear your thoughts around this.

Quoting Andrej Karpathy @karpathy:

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe-coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

22 replies · 16 reposts · 200 likes · 38.5K views
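The "small and naive search engine over the wiki" that Karpathy mentions could look something like this minimal sketch: plain keyword-frequency scoring over a directory of .md files, with no ranking model or embeddings. The directory layout and function name are assumptions, not his actual tool:

```python
# Naive keyword search over a directory of markdown files, in the
# spirit of the "vibe coded" wiki search engine mentioned above.
from pathlib import Path

def search_wiki(wiki_dir: str, query: str, top_k: int = 3) -> list[tuple[str, int]]:
    """Score each .md file by how often the query terms appear in it."""
    terms = query.lower().split()
    scores = []
    for path in Path(wiki_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        score = sum(text.count(term) for term in terms)
        if score:
            scores.append((path.name, score))
    return sorted(scores, key=lambda s: -s[1])[:top_k]
```

As the post describes, the same function can back both a small web UI and a CLI that an LLM agent calls as a tool for larger queries.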
Andrej Karpathy @karpathy
LLM Knowledge Bases [full text quoted above in John Fletcher's reply]
2.6K replies · 6.3K reposts · 53.5K likes · 18.5M views
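The "compile" step in Karpathy's workflow (turning documents in raw/ into a wiki with an auto-maintained index) can be caricatured without any LLM at all. This stdlib sketch only builds the index scaffold that the LLM would then fill in; directory names follow the post, and the one-line "summaries" here are just each file's first line:

```python
# Scaffold for the wiki "compile" step: index every document under raw/
# into wiki/index.md with a one-line summary (here, the file's first line).
# In the real workflow an LLM writes proper summaries, backlinks, articles.
from pathlib import Path

def compile_index(root: str) -> Path:
    raw, wiki = Path(root, "raw"), Path(root, "wiki")
    wiki.mkdir(exist_ok=True)
    lines = ["# Wiki index", ""]
    for doc in sorted(raw.rglob("*.md")):
        first_line = (doc.read_text(encoding="utf-8").splitlines() or [""])[0]
        lines.append(f"- [{doc.stem}](../raw/{doc.name}): {first_line}")
    index = wiki / "index.md"
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return index
```

Rerunning it after each ingest keeps the index current, which is the "incremental compilation" idea; the LLM's job is to replace the first-line summaries with real ones and to grow concept articles around them.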