👐
@_mcsd_
25.9K posts
international capital of identity crises
He/him · ele/dele · Joined October 2014
449 Following · 85 Followers
👐@_mcsd_·
@kepano I understand that surfing this hype might be beneficial for the product but please don’t fuck this tool up 😫
👐@_mcsd_·
@kepano Kepano, it’s great that Obsidian works for such different things. I love it because it allows me to think. It’s a peaceful place for me that I don’t want to pollute with slop. Hope it never falls into the AI-first solution rabbit hole that Notion fell into
kepano@kepano·
More and more people are using Obsidian as a local wiki to read things your agents are researching and writing. It works best with a separate Obsidian vault that you can fill with content, e.g. via Obsidian Web Clipper.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
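The ingest loop described above ("compile" raw/ sources into a wiki of .md files, incrementally, with the model maintaining an index) can be sketched roughly as follows. The post names no specific API or client, so `llm()` here is a hypothetical stand-in for a real model call, and the prompts are illustrative assumptions:

```python
import pathlib

def llm(prompt: str) -> str:
    # Placeholder: the post shows no client code, so this stub stands in
    # for a real model call (an assumption for illustration).
    return "stub wiki article for: " + prompt[:40]

def compile_wiki(raw_dir: str = "raw", wiki_dir: str = "wiki") -> list[str]:
    # Incrementally compile sources in raw/ into a wiki of .md files.
    raw = pathlib.Path(raw_dir)
    wiki = pathlib.Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(raw.glob("*.md")):
        out = wiki / src.name
        if out.exists():  # incremental: skip sources already compiled
            continue
        article = llm("Summarize into a wiki article with [[backlinks]]:\n"
                      + src.read_text())
        out.write_text(article)
        written.append(out.name)
    # The model also maintains a top-level index over all articles,
    # which is what lets an agent answer questions without fancy RAG.
    names = sorted(p.name for p in wiki.glob("*.md") if p.name != "_index.md")
    (wiki / "_index.md").write_text(llm("Write an index for: " + ", ".join(names)))
    return written
```

Re-running `compile_wiki()` after adding files to raw/ only compiles the new ones, matching the incremental behavior described in the thread.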

👐@_mcsd_·
@notnullptr no because it's "just a search engine", but it's the best search engine
nullptr 🐱🍩@notnullptr·
is kagi search worth the $10/mo?
👐@_mcsd_·
life update update: it's a nightmare trying to read the messages on Segurança Social Direta
👐 tweet media
👐@_mcsd_·
life update: I opened Segurança Social Direta and had 10 unread messages
👐@_mcsd_·
@uwukko I understand you have a soul, but if one has the skills and the team to do so, you just have to copy Arc (or Zen, but on Chromium) for immediate success
wukko@uwukko·
designing the ui for a web browser is insanely difficult if you don’t want to end up with a ui nearly identical to what came before
Mr_Dog@mr_dog195·
@EuropeElects Why did Gouveia e Melo suddenly lose so many voters?
Europe Elects@EuropeElects·
Portugal, CESOP-UCP poll: Presidential election

Ventura (CH-PfE): 22% (+13)
M. Mendes (*-EPP): 20% (+1)
Gouveia e Melo (*): 18% (-19)
Seguro (PS-S&D): 16% (+9)
Cotrim (IL-RE): 14% (new)
Filipe (CDU-LEFT|Greens/EFA): 3% (new)
Martins (BE-LEFT): 3% (-1)
Pinto (L-Greens/EFA): 2% (new)
M.J. Vieira (*): 0.5% (new)

+/- vs. 17-26 March 2025
Fieldwork: 04-12 December 2025
Sample size: 1,185

➤ europeelects.eu/portugal
Europe Elects tweet media
Gonçalo Aguiar@GoncaloAguiar·
@gajodagabardine We don't have a market to absorb all the people who got educated. That's the truth. I hope that changes over the coming years.
Tomás Ribeiro@gajodagabardine·
Today (and always), this is the kind of news that should have us most panicked about the future. Unfortunately, public opinion remains more interested in the day-to-day political skirmishes. And the bill will come due, one day.
Expresso@expresso

A study by the Conselho Nacional de Juventude describes a generation that is educated, frustrated, and badly paid. And one that identifies more with the European Union's policies than with national ones. See all the study's results: expresso.pt/sociedade/2025… 📷 Getty Images

👐@_mcsd_·
@_annytta_ I like watching James Hoffmann on YouTube
ana@_annytta_·
my next hobby won't be running, I want to be a barista. I want to make a flat white at home. where do I learn?
👐@_mcsd_·
Hello hi wtf
👐@_mcsd_·
@afonso_axe Him? Is he right? One thing is a younger, informed person discussing salaries with their employer. Fine. Another thing is dona Amélia, who cleans the shopping-mall toilets, talking to the manager. She can barely read and will be swindled by some smooth-talking wise guy
👐@_mcsd_·
@karpathy at least give credit to PewDiePie
Andrej Karpathy@karpathy·
As a fun Saturday vibe code project and following up on this tweet earlier, I hacked up an **llm-council** web app. It looks exactly like ChatGPT except each user query is:

1) dispatched to multiple models on your council using OpenRouter, e.g. currently: "openai/gpt-5.1", "google/gemini-3-pro-preview", "anthropic/claude-sonnet-4.5", "x-ai/grok-4", then
2) all models get to see each other's (anonymized) responses and they review and rank them, and then
3) a "Chairman LLM" gets all of that as context and produces the final response.

It's interesting to see the results from multiple models side by side on the same query, and even more amusingly, to read through their evaluation and ranking of each other's responses. Quite often, the models are surprisingly willing to select another LLM's response as superior to their own, making this an interesting model evaluation strategy more generally.

For example, reading book chapters together with my LLM Council today, the models consistently praise GPT 5.1 as the best and most insightful model, and consistently select Claude as the worst model, with the other models floating in between. But I'm not 100% convinced this aligns with my own qualitative assessment. For example, qualitatively I find GPT 5.1 a little too wordy and sprawled, and Gemini 3 a bit more condensed and processed. Claude is too terse in this domain.

That said, there's probably a whole design space for the data flow of your LLM council. The construction of LLM ensembles seems under-explored. I pushed the vibe coded app to github.com/karpathy/llm-c… if others would like to play. ty nano banana pro for fun header image for the repo
Andrej Karpathy tweet media
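The three-step council flow described above (dispatch, anonymized peer review, chairman synthesis) can be sketched as follows. The tweet doesn't show its client code, so `ask_model()` here is a hypothetical stand-in for an OpenRouter chat-completion request, and the prompt wording is an assumption:

```python
COUNCIL = [
    "openai/gpt-5.1",
    "google/gemini-3-pro-preview",
    "anthropic/claude-sonnet-4.5",
    "x-ai/grok-4",
]

def ask_model(model: str, prompt: str) -> str:
    # Stand-in for an OpenRouter chat-completion call (an assumption;
    # the real app's client code is in the linked repo).
    return f"[{model}] response to: {prompt[:60]}"

def llm_council(query: str, chairman: str = "openai/gpt-5.1") -> str:
    # 1) Dispatch the query to every council member.
    answers = [ask_model(m, query) for m in COUNCIL]
    # 2) Anonymize the answers so reviewers can't favor themselves,
    #    then let each member review and rank them.
    anon = "\n".join(f"Response {i + 1}: {a}" for i, a in enumerate(answers))
    review_prompt = f"Query: {query}\nRank these responses:\n{anon}"
    rankings = [ask_model(m, review_prompt) for m in COUNCIL]
    # 3) The chairman sees all answers and rankings, and writes the final reply.
    final_prompt = (f"Query: {query}\nAnswers:\n{anon}\n"
                    "Rankings:\n" + "\n".join(rankings)
                    + "\nSynthesize the single best final answer.")
    return ask_model(chairman, final_prompt)
```

Anonymizing before the review step is the detail that makes the cross-ranking usable as an evaluation signal, since models can't simply recognize and prefer their own output.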
Andrej Karpathy@karpathy

I’m starting to get into a habit of reading everything (blogs, articles, book chapters,…) with LLMs. Usually pass 1 is manual, then pass 2 “explain/summarize”, pass 3 Q&A. I usually end up with a better/deeper understanding than if I moved on. Growing to among top use cases. On the flip side, if you’re a writer trying to explain/communicate something, we may increasingly see less of a mindset of “I’m writing this for another human” and more “I’m writing this for an LLM”. Because once an LLM “gets it”, it can then target, personalize and serve the idea to its user.

👐@_mcsd_·
@yourvenicebich When the victim isn't white, everyone comes out saying the newspapers spread hate by reporting the ethnicity, country of birth, etc.
bia com b 🎀@yourvenicebich·
when a random lunatic killed that Ukrainian girl out of nowhere, the far right decided it was a hate crime because she was white, but when a Brazilian child has her fingers amputated after a bullying assault, it's absolutely impossible that it was racism
👐@_mcsd_·
@GonzaaaaaaL Right, but I think the outcome would have been the same under Catarina Martins's leadership. The leadership was worn out. Mortágua comes from the same list and the same internal faction. Already under Catarina, BE stopped knowing whether it was a radical-left party or the PS's crutch, and at that point it stops making sense
GonzaaL✖️🏴‍☠️@GonzaaaaaaL·
@_mcsd_ Different contexts: the country back then leaned much further left, PS and PCP also had many more MPs than they have now, and the whole right combined won fewer seats than AD alone did in these latest elections
👐@_mcsd_·
@TiagoS2109 @GonzaaaaaaL My argument is that the leadership had worn itself out, not that she's incompetent
👐@_mcsd_·
Me opening Twitter once a week to say something completely unrelated to the last thing I said
👐@_mcsd_·
All this just to have Will Toledo screaming on my phone, but oh well
👐@_mcsd_·
Got it done and I'm officially listening to things that legally belong to me instead of renting from a streaming app 😇
👐 tweet media