smthg_cool

1.2K posts


@kaz_crows

talent doesn't choose morality

he/him · 23 · Joined January 2022
114 Following · 24 Followers
Pinned Tweet
smthg_cool@kaz_crows·
Being gentle and kind even after everything means you are strong, you absolute fking idiots <3
smthg_cool retweeted
can@marmaduke091·
>AI can't do math
>Look inside
>Strongest model they tested was GPT-4o-mini
Get this slop out of here. The models tested are so inferior to current SOTA models that the findings are completely irrelevant to the current AI landscape.
Sukh Sroay@sukh_saroy

🚨 Everyone thinks GPT can do math. They're wrong: a new paper called SenseMath just proved LLMs don't have number sense at all. This changes everything about how we should use them:

smthg_cool retweeted
Lex Fridman@lexfridman·
This life is fucking amazing. I'm so grateful to be alive, with all of you on this miracle of a planet. Oh and I'm sorry if I fuck things up sometimes. I'm a flawed human. But I promise to do whatever I can to try to add some more understanding and love to this world.

After the world leader convos I get attacked intensely by all sides, and many disparate online communities. It has led to some really low points for me mentally. But I don't matter. I'm listening. I'll do better. And I'll try to find the strength to do more of them, always with rigor and backbone, seeking to truly understand.

And despite accusations, I do extremely high amounts of research, sometimes 100+ hours for a conversation. Ask many of my previous guests. But when I come to the table, I put all that aside, and make it all about the other person. I don't ever try to sound smart. I know the vastness of my ignorance. But I'm trying. Sometimes I do fuck up and sound like a douche, or do something incredibly cringe. And I hate myself right after. But I'd rather fail and embarrass myself a million times, than not do what my heart says is right.

And besides world leaders, historians, CEOs, engineers, etc, this year I want to travel the world and talk to a lot more everyday people on and off the mic. This is something I've wanted to do for a long time.

Anyway this is written while I'm on a 10 mile run, probably procrastinating, since to type I have to walk and not run 🤣 But I did just get stopped by a super smart and kind girl who works at a humanoid robotics company here. And she asked if she can give me a hug to thank me for being me. Sometimes the universe sends you a message that even a dumb dude like me can almost hear. I really needed that today. Thank you for the hug and the kindness 🙏 I'm just hoping she was real and I didn't just imagine that 🤣 Then again if I went full crazy might as well enjoy it! Back to the run. I love you all! ❤️
smthg_cool@kaz_crows·
Never thought I would hate a place this much, but alas
smthg_cool retweeted
UNDΘΘMΞD@Undoomed·
UNDΘΘMΞD tweet media
smthg_cool retweeted
Nav Toor@heynavtoor·
🚨 Andrej Karpathy thinks RAG is broken. He published the replacement 2 days ago. 5,000 stars in 48 hours.

It's called LLM Wiki. A pattern where your AI doesn't retrieve information from scratch every time. It builds and maintains a persistent, compounding knowledge base. Automatically.

RAG re-discovers knowledge on every question. LLM Wiki compiles it once and keeps it current.

Here's the difference:

RAG: You ask a question. AI searches your documents. Finds fragments. Pieces them together. Forgets everything. Starts over next time.

LLM Wiki: You add a source. AI reads it, extracts key information, updates entity pages, revises topic summaries, flags contradictions, strengthens the synthesis. The knowledge compounds. Every source makes the wiki smarter. Permanently.

Here's how it works:
→ Drop a source into your raw collection. Article, paper, transcript, notes.
→ AI reads it, writes a summary, updates the index
→ Updates every relevant entity and concept page across the wiki
→ One source can touch 10 to 15 wiki pages simultaneously
→ Cross-references are built automatically
→ Contradictions between sources get flagged
→ Ask questions against the wiki. Good answers get filed back as new pages.
→ Your explorations compound in the knowledge base. Nothing disappears into chat history.

Here's the wildest part. Karpathy's use case examples:
→ Personal: track goals, health, psychology. File journal entries and articles. Build a structured picture of yourself over time.
→ Research: read papers for months. Build a comprehensive wiki with an evolving thesis.
→ Reading a book: build a fan wiki as you read. Characters, themes, plot threads. All cross-referenced.
→ Business: feed it Slack threads, meeting transcripts, customer calls. The wiki stays current because the AI does the maintenance nobody wants to do.

Think of it like this: Obsidian is the IDE. The LLM is the programmer. The wiki is the codebase. You never write the wiki yourself. You source, explore, and ask questions. The AI does all the grunt work.

NotebookLM, ChatGPT file uploads, and most RAG systems re-derive knowledge on every query. This compiles it once and builds on it forever.

5,000+ stars. 1,294 forks. Published by Andrej Karpathy. 2 days ago. 100% Open Source.
Nav Toor tweet media
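The ingest loop the thread describes (drop a source into a raw collection, summarize it, update a persistent index, and file Q&A output back into the wiki) can be sketched roughly as below. This is a minimal illustration, not code from Karpathy's gist: the class name, file layout, and `llm_summarize` stub are all hypothetical, and a real setup would call an actual model instead of truncating text.

```python
from pathlib import Path

# Hypothetical stand-in for an LLM call; a real setup would query a model.
def llm_summarize(text: str) -> str:
    return text[:80]

class LLMWiki:
    """Sketch of a compile-once knowledge base: raw/ holds sources,
    wiki/ holds the LLM-maintained index and filed Q&A output."""

    def __init__(self, root: Path):
        self.root = root
        (root / "raw").mkdir(parents=True, exist_ok=True)
        (root / "wiki").mkdir(exist_ok=True)

    def add_source(self, name: str, text: str) -> None:
        # 1. File the raw source once.
        (self.root / "raw" / name).write_text(text)
        # 2. Compile: summarize and append to the persistent index,
        #    instead of re-reading every source on every question.
        summary = llm_summarize(text)
        index = self.root / "wiki" / "index.md"
        old = index.read_text() if index.exists() else "# Index\n"
        index.write_text(old + f"- {name}: {summary}\n")

    def ask(self, question: str) -> str:
        # Answer against the compiled wiki, then file the answer back
        # so explorations compound rather than vanish into chat history.
        index = (self.root / "wiki" / "index.md").read_text()
        answer = f"Q: {question}\nContext:\n{index}"
        with (self.root / "wiki" / "qa.md").open("a") as f:
            f.write(answer + "\n")
        return answer
```

The point of the sketch is the asymmetry with RAG: `add_source` does the expensive reading once at ingest time, so `ask` only consults the already-compiled index.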
smthg_cool@kaz_crows·
Punching (the walls) my way out of this mess of fucked up feelings
smthg_cool retweeted
Andrej Karpathy@karpathy·
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
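The "small and naive search engine over the wiki" mentioned above could be as little as a keyword scorer over the .md files, exposed to the agent as a function or CLI. This is a hedged sketch under assumed details: the scoring scheme, function name, and file layout are illustrative guesses, not Karpathy's actual tool.

```python
import re
from pathlib import Path

def search_wiki(wiki_dir: Path, query: str, top_k: int = 5) -> list[str]:
    """Naive keyword search over a directory of .md wiki pages.

    Scores each page by the total number of query-term occurrences
    and returns the names of the top_k matching pages.
    """
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for page in wiki_dir.rglob("*.md"):
        text = page.read_text().lower()
        score = sum(text.count(t) for t in terms)
        if score > 0:
            scored.append((score, page.name))
    scored.sort(reverse=True)  # highest-scoring pages first
    return [name for _, name in scored[:top_k]]
```

Handing this to an LLM via CLI just means wrapping it in an argument parser; the model then calls it to narrow down which pages to read in full, which is the lightweight alternative to "fancy RAG" the tweet describes.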

smthg_cool retweeted
Telugu360@Telugu360·
Transgender Persons (Amendment) Bill, 2026 has been passed in Parliament
• India says no to Western-style woke culture; ends the system of self-declared gender identity
• Section 4(2) of the 2019 Act, which allowed self-perceived gender identity, has been deleted
• Recognition now limited to biological variations (chromosomes/hormones) and traditional communities like Kinner, Hijra, Aravani, Jogta, and Eunuchs
• Gender recognition to be decided by medical boards, not self-declaration
• Introduces strict penalties, including up to life imprisonment in some cases, for those who force or manipulate others into a transgender identity
Telugu360 tweet media (3 images)
smthg_cool retweeted
Natalie F Danelishen@Chesschick01·
As a female, I would like to say I'm perfectly fine with LOTR having almost no female characters. The few they did have were strong and beautiful. This doesn't need to be fixed. As a matter of fact, this single scene in Return of the King was all we ever needed:
Natalie F Danelishen tweet media
Matt Walsh@MattWalshBlog

LOTR had almost no female characters at all. Modern Hollywood sees that as a sin that must be rectified. They're coming up with all of these sequels and spin offs almost entirely for the purpose of feminizing the franchise and injecting female characters into it. It's feminist reparations. That's why this godawful sequel concept will apparently center around a female character.

smthg_cool@kaz_crows·
Would have been better off with a deeper cut
Ali Haider Khan@thehaider·
I hope I am not only a nuisance to you guys, but also a friend.
smthg_cool retweeted
Ian Haworth@ighaworth·
It’s hilarious that Mamdani’s defense is “my wife is a private citizen, leave her alone,” and not “of course she doesn’t hate Jews, blacks, and gays.”
smthg_cool@kaz_crows·
@thehaider Hopefully they bring back Morlun in the next movies for peak miserableness ✨
Ali Haider Khan@thehaider·
Peter Parker is miserable again, we're so fucking back.
Ali Haider Khan tweet media
smthg_cool@kaz_crows·
@bedagarkenjoyer You have time to do this bs, at least have the decency to return my stuff
santra billa🍉@bedagarkenjoyer·
wifeguy apparently not a wifeguy many such cases
santra billa🍉@bedagarkenjoyer·
45 minutes traffic jam lord kill me