Count de la Fère 🇾🇪

10.4K posts

@Train_Deluxe

IT guy. Creating sw for banking, telco, logistics. Crypto research. (RT = personal bookmarks).

Champagne · Joined October 2011
604 Following · 258 Followers
Pinned Tweet
Count de la Fère 🇾🇪@Train_Deluxe·
#Monero (Privacy is normal) 8BAAoamtCuwFGHvZHepbtNBRyfiwFWxMuatkiSrodYu1WRjBTrLDVrDgnDw8TTWedv3UGv7WpnA3Li6852qwYaNm8cKroVS
Count de la Fère 🇾🇪 tweet media
0 replies · 0 reposts · 12 likes · 2.1K views
Count de la Fère 🇾🇪 retweeted
Wallet in Telegram@wallet_tg·
$XMR is live in Wallet in Telegram. @monero is one of the most well-known privacy coins, built to improve transaction confidentiality and protect financial data. $XMR is also available for perp trading with up to 10x leverage. In 2025, it gained more than 140% and currently ranks #15 by market capitalization on @CoinMarketCap
Wallet in Telegram tweet media
39 replies · 47 reposts · 397 likes · 32.7K views
Dovydas Vitkauskas@Dovydas44444·
🇪🇺 What are the biggest inconsistencies that defy logic in the EU budget distribution?
A) 🇩🇪 Germany paying almost half of the total among all net contributors;
B) 🇪🇸 Spain a net beneficiary, while 🇮🇹 Italy is a contributor;
C) 🇧🇪 Belgium a net beneficiary, while 🇫🇷 France is a contributor;
D) 🇵🇱 Poland being by far the biggest net beneficiary;
E) 🇱🇺 Luxembourg being a net beneficiary;
F) Other?
Dovydas Vitkauskas tweet media
171 replies · 59 reposts · 495 likes · 136K views
Chip@notholitho·
@Train_Deluxe @lukeatmindload @_avichawla For a specific program. Once everything from everyone on a single project is usable to the system, it should be able to give you status, critical path, risks, etc. One problem would be privacy (e.g. raising a concern to a manager about the poor performance of ‘John’ on the project).
1 reply · 0 reposts · 0 likes · 8 views
Avi Chawla@_avichawla·
The next step after Karpathy's wiki idea:

Karpathy's LLM Wiki compiles raw sources into a persistent md wiki with backlinks and cross-references. The LLM reads papers, extracts concepts, writes encyclopedia-style articles, and maintains an index. The knowledge is compiled once and kept current, so the LLM never re-derives context from scratch at query time.

This works because research is mostly about concepts and their relationships, which are relatively stable. But this pattern breaks when you apply it to actual work, where context evolves across conversations constantly, like deadlines, plans, meetings, etc. A compiled wiki would have a page about the project, but it wouldn't track ground truth effectively.

Tracking this requires a different data structure altogether: not a wiki of summaries, but a knowledge graph of typed entities where people, decisions, commitments, and deadlines are separate nodes linked across conversations.

Rowboat is an open-source implementation of exactly this, built on top of the same Markdown-and-Obsidian foundation that Karpathy uses, but extended into work context. The way it works is that it ingests conversations from Gmail, Granola, and Fireflies, and instead of writing a summary page per topic, it extracts each decision, commitment, and deadline as its own md file with backlinks to the people and projects involved.

That's structurally different from a wiki, because a wiki page about "Project X" gives you a summary of what was discussed. A knowledge graph gives you every decision made, who made it, what was promised, when it was promised, and whether anything has shifted since.

It also runs background agents on a schedule, so something like a daily briefing gets assembled automatically from whatever shifted in your graph overnight. You control what runs and what gets written back into the vault.

You bring your own model through Ollama, LM Studio, or any hosted API, and everything is stored as plain Markdown you can open in Obsidian, edit, or delete. Repo: github.com/rowboatlabs/ro…

TL;DR: Karpathy's LLM Wiki compiles research into a persistent Markdown wiki. It works well for concepts and their relationships but breaks down for real work where the context evolves over time. Rowboat builds a knowledge graph instead of a wiki, extracts typed entities with backlinks, and runs background agents that act on that accumulated context. Open-source, local-first, bring your own model.

Karpathy nailed the foundation. The next layer is here.
Avi Chawla tweet media
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

43 replies · 128 reposts · 1.1K likes · 183.6K views
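The "each decision as its own md file with backlinks" pattern described in the thread above can be sketched roughly as follows. This is a hypothetical illustration, not Rowboat's actual code; the `Decision` fields and the `decisions/` layout are my own assumptions.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Decision:
    """One typed entity extracted from a conversation."""
    title: str
    made_by: str   # person node the decision links back to
    project: str   # project node the decision links back to
    date: str
    source: str    # conversation it was extracted from

def write_decision(vault: Path, d: Decision) -> Path:
    """Write a decision as its own Markdown file with [[backlinks]]."""
    slug = d.title.lower().replace(" ", "-")
    body = (
        f"# Decision: {d.title}\n\n"
        f"- made by: [[{d.made_by}]]\n"
        f"- project: [[{d.project}]]\n"
        f"- date: {d.date}\n"
        f"- source: [[{d.source}]]\n"
    )
    path = vault / "decisions" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(body, encoding="utf-8")
    return path

p = write_decision(Path("vault"), Decision(
    title="Ship v2 on Friday",
    made_by="Alice",
    project="Project X",
    date="2025-11-03",
    source="2025-11-03-standup",
))
```

Because each decision is a separate node rather than a line in a summary page, Obsidian's backlink pane on the "Alice" note would then list every commitment she has made, which is the graph-vs-wiki distinction the post draws.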
Michael A. Arouet@MichaelAArouet·
Eye-opening chart. While Poland has enjoyed an economic boom, Ukraine is poorer today than it was in 1990. Free markets, entrepreneurialism, EU and NATO membership, and low corruption fueled Poland’s economic miracle. Let’s hope Ukraine follows a similar path after the war.
Michael A. Arouet tweet media
31 replies · 28 reposts · 239 likes · 14.2K views
AlphaSignal AI@AlphaSignalAI·
Someone removed the vector database from RAG and accuracy jumped to 98.7%.

Most RAG systems chunk your documents, embed them as vectors, then retrieve by similarity. The core assumption: similar text means relevant text.

That assumption fails on professional documents. Ask "what were the debt trends in Q3?" and vector search returns chunks that look like your question. The real answer sits in an appendix, split across three sections, with zero semantic overlap. Traditional RAG never finds it.

PageIndex is an open-source repo that removes the vector database entirely. Inspired by AlphaGo, it builds a tree index from your document. Then it reasons through it like a human expert would. Instead of pattern matching, the model navigates sections logically: "Debt trends live in financial summaries or Appendix G. Let's look there."

What changes:
1. No chunking that breaks cross-section context
2. Retrieval traceable to exact pages
3. Multi-step reasoning over document structure

It scored 98.7% on FinanceBench. Perplexity scored 45%. GPT-4o hit 31%.
AlphaSignal AI tweet media
21 replies · 63 reposts · 546 likes · 33.3K views
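A toy sketch of the tree-index idea above (not PageIndex's actual code: in the real system an LLM reasons about which section to descend into, while here naive word overlap stands in for that step, and the example document is invented):

```python
import re
from dataclasses import dataclass, field

@dataclass
class Node:
    """A section of the document tree; leaves hold page text."""
    title: str
    text: str = ""
    children: list = field(default_factory=list)

def overlap(node: Node, query: str) -> int:
    # Stand-in for the model's reasoning step: count shared words.
    q = set(re.findall(r"\w+", query.lower()))
    h = set(re.findall(r"\w+", (node.title + " " + node.text).lower()))
    return len(q & h)

def navigate(root: Node, query: str) -> Node:
    # Walk the section tree, descending into the most promising child,
    # so the result is traceable to an exact section (no chunking).
    node = root
    while node.children:
        node = max(node.children, key=lambda c: overlap(c, query))
    return node

report = Node("10-K", children=[
    Node("Financial Summary", children=[
        Node("Revenue", "Q3 revenue grew 4%."),
    ]),
    Node("Appendix G", "Detailed debt schedules and Q3 debt trends."),
])
hit = navigate(report, "what were the debt trends in Q3?")
print(hit.title)  # -> Appendix G
```

The navigation step is where the real system differs: it asks the model to choose a branch, which is what would let it find an answer that has little lexical or semantic overlap with the query, something this word-counting stand-in cannot do.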
Chip@notholitho·
@lukeatmindload @Train_Deluxe @_avichawla Luke, I could see a situation where an entire program is run using a Mindload-like system, especially if everyone uses a dedicated channel and can tag all relevant emails, docs, slides, etc. with the project name, and the system has access to everything (e.g. all inboxes and drives).
2 replies · 0 reposts · 0 likes · 25 views
Count de la Fère 🇾🇪@Train_Deluxe·
@lukeatmindload @_avichawla I am a bit confused about "content". Do you have some type schema? Or can I feed your system any document and it types it dynamically? How do you deal with duplicate semantics like "User", "Customer", "Kunde", "Human", etc.?
1 reply · 0 reposts · 0 likes · 11 views
Luke@lukeatmindload·
@Train_Deluxe @_avichawla Right now, we do not have OCR. Mindload is focused on text-based formats first, where we can extract content and return grounded answers reliably. But we've got images, audio, and video on our roadmap.
2 replies · 0 reposts · 0 likes · 49 views
Luke@lukeatmindload·
This is a fascinating distinction. You are essentially arguing that static summaries fail in dynamic environments like project management, and that we need structured entity extraction to track "ground truth" over time. I agree that wikis struggle with the entropy of daily work.

However, I am curious: do you find that the maintenance of this knowledge graph becomes a "second job" for the user? Building a system that perfectly extracts decisions and commitments from meetings is a massive technical win. But there is always a high tax on the user to ensure the model doesn't hallucinate relationships or misclassify entities.

My bet with Mindload has been the opposite: instead of extracting entities into a graph, we keep the raw context as it is and use retrieval to bring the "ground truth" to the surface only when needed.

Do you think users will eventually prefer a highly structured graph that they have to trust, or a search layer that lets them query the raw source material directly? I love the approach you took with Rowboat, but I wonder if the overhead of managing a graph will be the next bottleneck for users.
1 reply · 0 reposts · 4 likes · 1K views
Lola Dziemian@LolaDziemian·
@Train_Deluxe @MichaelAArouet Idiot. The only thing that counts is benefits per capita. In such a chart, Poland is below the median. The same goes for contributions: Germany is not necessarily the biggest per capita. That chart is simple cheating.
1 reply · 0 reposts · 0 likes · 16 views
Count de la Fère 🇾🇪 retweeted
James L. Nuzzo, PhD@JamesLNuzzo·
Note how the UN expresses no interest in closing the most glaring health "gap" of them all: the sex difference in life expectancy. Here is that gap in the U.S.:
James L. Nuzzo, PhD tweet media
UNFPA@UNFPA

Science has never held more promise for health—yet too many women and girls are still left behind. This #WorldHealthDay, join @unfpachief and @UNFPA in closing the gaps in women’s health, for a healthier future for all. 📄 Read her full statement: unf.pa/WHD26

10 replies · 113 reposts · 680 likes · 23.7K views
Count de la Fère 🇾🇪 retweeted
Steve Stewart-Williams@SteveStuWill·
Massive new replication study: Only 55% of 274 social-science findings successfully replicated. Education was in first place, economics last, and psychology somewhere in between. [Link below.]
Steve Stewart-Williams tweet media
22 replies · 76 reposts · 317 likes · 21.1K views
Count de la Fère 🇾🇪 retweeted
@amuse@amuse·
GENOCIDE? Algorithmically-curated dating apps have engineered a generational mismatch in romantic expectations that mirrors the worst dynamics of winner-take-all markets and positional goods inflation.

Traditional courtship was bounded by geography, social circles, and time, forcing realistic self-assessment and holistic evaluation of partners across sustained interaction. Swipe-based platforms obliterated those constraints, collapsing all of human mate selection into a single ranked marketplace where hypergamy, the female preference for partners of equal or higher status, now operates without friction or limit.

The data is not subtle: the top 20% of men on these platforms receive roughly 80% of female attention, a Gini-coefficient-level concentration of desire that has no precedent in the history of human courtship. Those men rationally respond to sudden abundance by abandoning commitment entirely, because the incentive to settle has been structurally demolished. At the same time, women who periodically access that top tier, regardless of how briefly or on what terms, permanently recalibrate their expectations upward, pricing themselves out of the realistic market of men who would have made them genuinely happy.

Neither group is acting irrationally given the incentives presented to them, which is precisely what makes this so damaging: the apps have not corrupted individual character so much as they have poisoned the entire coordination mechanism through which generations of ordinary people found lasting partnership. The consequence is not a dating culture that is merely unsatisfying. It is one that is structurally incapable of producing the outcomes most of its participants actually want. h/t @Degen_Poster
@amuse tweet media
79 replies · 226 reposts · 1.4K likes · 74.1K views
andrej@reactive_dude·
What are your favorite agent skills? I'll start: > grill-me (brainstorming) > write-a-prd (specs) > tdd (the best way to code with agents rn) > agent-browser (great for debugging/qa)
35 replies · 21 reposts · 713 likes · 68.6K views
OneManSaas@OneManSaas·
@garrytan Git's great until you hit those limits, but 2.3GB for a knowledge base suggests the real win might be in how you're structuring the data itself. SQLite will definitely solve the size problem, but curious if you considered breaking the knowledge graph into smaller, linked repos...
1 reply · 0 reposts · 1 like · 443 views
Garry Tan@garrytan·
My Karpathy-style git wiki knowledge base for OpenClaw got to 2.3GB, and I know git's limit is 5GB, so my GStack autoplan skill one-line-prompted this spec for my upgraded GBrain with SQLite. This will be MIT-licensed open source soon. gist.github.com/garrytan/49c88…
Garry Tan tweet media
96 replies · 109 reposts · 1.4K likes · 107.9K views
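The git-to-SQLite move in the post above can be sketched minimally. This is hypothetical (the schema and the `gbrain.db` name are my own assumptions, not the linked spec): wiki pages become rows keyed by path, which sidesteps repo-size limits while keeping the content queryable.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # e.g. "gbrain.db" on disk
con.execute("""
    CREATE TABLE IF NOT EXISTS pages (
        path  TEXT PRIMARY KEY,  -- e.g. "concepts/attention.md"
        body  TEXT NOT NULL,
        mtime REAL NOT NULL
    )
""")

def upsert(path: str, body: str, mtime: float) -> None:
    # Re-ingesting a page replaces the old row instead of growing
    # history the way a git repo would.
    con.execute(
        "INSERT INTO pages VALUES (?, ?, ?) "
        "ON CONFLICT(path) DO UPDATE SET body=excluded.body, mtime=excluded.mtime",
        (path, body, mtime),
    )

def search(term: str) -> list:
    # Naive substring search; SQLite's FTS5 extension would be the
    # natural upgrade at multi-GB scale.
    rows = con.execute(
        "SELECT path FROM pages WHERE body LIKE ? ORDER BY path",
        (f"%{term}%",),
    )
    return [r[0] for r in rows]

upsert("concepts/attention.md", "# Attention\nScaled dot-product attention notes.", 0.0)
upsert("index.md", "# Index\n- [[concepts/attention]]", 0.0)
print(search("dot-product"))  # -> ['concepts/attention.md']
```

Splitting into smaller linked repos, as the reply suggests, avoids the database entirely but reintroduces cross-repo link maintenance; a single SQLite file keeps every backlink resolvable with one query.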