Kevin

2.9K posts

@kevinltweets

Nerd, Dad 2.0, I work on data for a living. Your POV on my POV is your own as mine is mine.

Joined May 2009
3.1K Following · 217 Followers
Kevin retweeted
How To AI @HowToAI_
Chinese researchers have broken the biggest record in shortest-path algorithms in 41 years. Dijkstra's algorithm has been the undefeated king of the shortest path for over four decades: whether you're using Google Maps, booking a flight, or routing internet packets, Dijkstra is the engine running in the background.

Since 1984, textbooks have taught that its efficiency was capped by a "sorting barrier": to find the shortest paths, you have to process the nodes in sorted order of distance, and sorting has a mathematical floor you can't cross. Until now.

A research team from Tsinghua University just published a paper that shatters the 41-year-old record. They proved that Dijkstra is not optimal. By combining the logic of the Bellman-Ford algorithm with a recursive partial-ordering method, they figured out how to find the paths without fully sorting the nodes. The results are a massive shift in theoretical computer science:

- The first deterministic improvement to the Single-Source Shortest Path (SSSP) problem since 1984.
- A new time complexity of $O(m \log^{2/3} n)$, beating the long-standing $O(m + n \log n)$ bound.
- On massive sparse graphs (like the web or global logistics networks), this means finding the best route faster than was previously thought possible.

For four decades, the greatest minds in algorithms believed this limit was absolute. Last year, even the legendary Robert Tarjan won an award for a proof that Dijkstra is "optimally efficient" at sorting distances. Tsinghua's answer? Stop sorting. The world's most settled problem is suddenly wide open again. If we can break a 40-year-old law in basic graph theory, what other "impossible" speed limits are waiting to be crushed?
English · 91 replies · 596 reposts · 4.1K likes · 820.4K views
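To make the "sorting barrier" concrete, here is a minimal textbook Dijkstra sketch in Python (this is the classic algorithm, not the new Tsinghua one): the priority queue pops nodes in sorted order of tentative distance, and that implicit sort is exactly where the old $O(m + n \log n)$ bound comes from.

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra over an adjacency list {u: [(v, w), ...]}.

    The heap is the "sorting" the new result avoids: nodes are popped
    in sorted order of tentative distance, which is where the classic
    O(m + n log n) bound comes from with a good priority queue.
    """
    dist = {source: 0}
    heap = [(0, source)]                     # (distance, node) pairs
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                 # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny worked example: shortest distances from node "a".
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))                  # {'a': 0, 'b': 2, 'c': 3}
```

Per the thread above, the new algorithm keeps the edge-relaxation step but avoids extracting nodes in fully sorted order, which is how it sidesteps the sorting cost.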
Kevin retweeted
Kai is a journalist @teo_kai_xiang
Workers at GovTech are seeking feedback from the public on a potential solution to Singaporean dating woes and low birth rates: a government-run dating service with free meals and identity verification. Alas, the survey has since been deleted. straitstimes.com/life/govtech-s…
English · 27 replies · 37 reposts · 321 likes · 60.5K views
Kevin retweeted
cinesthetic. @TheCinesthetic
One of the greatest stand-up bits ever, Rowan Atkinson as the Devil is pure genius.
English · 157 replies · 2K reposts · 12.2K likes · 511.5K views
Kevin retweeted
Dr. Lemma @DoctorLemma
The Kyoto Aquarium in Japan keeps a wall-sized flowchart tracking the romantic relationships, breakups, and drama between their penguins. They update it every year. Red hearts mean couples. Blue broken hearts mean it's over. Purple lines with question marks mean it's complicated. Yellow means friendship. Green means enemies.

One female penguin reportedly ended six relationships in a single year; the comment under her photo, translated from Japanese, described her as "basically demonic." Another penguin was caught dating someone 17 years older who also turned out to be their great aunt. And penguins who get broken up with sometimes refuse to eat.

Apparently the staff say wing-flapping means flirting, grooming each other means it's official, and if a penguin steals another penguin's egg… well, it's exactly what it looks like.
English · 213 replies · 5.7K reposts · 42K likes · 2.5M views
Kevin retweeted
taga | 暮山暖叙 @Qiqi7877
This Vietnamese grandpa heard that his grandson's wife was pregnant, so he paid the couple a visit to teach them how to bathe a newborn. The funny part is that he couldn't find a doll to demonstrate with, so he taught the lesson using a cat that was unbelievably calm and fully cooperative with his instructions, lecturing for over two minutes. 😂😂
Chinese · 545 replies · 14.5K reposts · 95K likes · 3.6M views
Kevin retweeted
Maziyar PANAHI @MaziyarPanahi
🚨 Over 1 billion rows of psychiatric genetics data. Now on Hugging Face. ADHD. Depression. Schizophrenia. Bipolar. PTSD. OCD. Autism. Anxiety. Tourette. Eating disorders. 12 disorder groups. 52 publications. Every GWAS summary statistic from the Psychiatric Genomics Consortium. Before: wget, gunzip, 20 minutes debugging separators, repeat 50 times. Now: one line of Python.
English · 123 replies · 596 reposts · 4.4K likes · 1.3M views
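The post doesn't show the actual "one line of Python," so here is a hedged guess at what it looks like with the standard Hugging Face `datasets` loader; the repo id below is a placeholder I made up, not the real PGC dataset name.

```python
from datasets import load_dataset

# Hypothetical repo id: substitute the real Psychiatric Genomics
# Consortium dataset name from the Hugging Face Hub announcement.
gwas = load_dataset("pgc/gwas-summary-statistics", split="train")

print(gwas[0])  # one GWAS summary-statistics row, already parsed
```

Compared to the wget/gunzip/guess-the-separator routine, the loader handles download, decompression, and parsing in a single call.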
千百度 @dameizhongguo3
Couldn't this machine wipe out a whole lot of industries?
Chinese · 1 reply · 0 reposts · 6 likes · 1.5K views
Kevin retweeted
Ministry of Law, Singapore
SGLaw200: Lee Kuan Yew's Vision for the Rule of Law. What is the Rule of Law, and how does it benefit us? Senior Minister of State Murali Pillai explains, sharing insights from the late Mr Lee Kuan Yew's transformative approach to the concept.
English · 0 replies · 18 reposts · 176 likes · 961.4K views
Kevin retweeted
Phil Ewels @tallphil
Super excited to be launching two things today: #RustQC 🦀🧬 and rewrites.bio 🚀 I used AI to rewrite 15 RNA-seq QC tools into a single Rust binary (I've never written any Rust). It ended up being over 60x faster. Here's the story 🧵 seqeralabs.github.io/RustQC/
English · 9 replies · 56 reposts · 201 likes · 11.9K views
Kevin retweeted
John Bistline @JEBistline
This is my favorite climate change chart. Japanese monks, aristocrats, and emperors kept meticulous records of cherry blossom festivals for 1,200 years and accidentally built the world's longest climate dataset.
English · 235 replies · 9.4K reposts · 50.1K likes · 1.6M views
Kevin retweeted
allen institute @AllenInstitute
The Orange Cat Brain Atlas is here. 🧠🐈 Today, we published the first comprehensive cellular map of the orange cat brain. The new atlas reveals a single, specialized neuron responsible for behaviors like staring at walls, knocking objects off tables, and the 3am "zoomies."
English · 11 replies · 127 reposts · 710 likes · 51.9K views
Kevin retweeted
Trung Phan @TrungTPhan
RIP Chuck Norris. Legendary career in martial arts and film. His fight with Bruce Lee in The Way of the Dragon (1972) is immortal. To pull it off in the Roman Colosseum, the story goes, they bribed officials and snuck cameras in for 3 hours. Two masters at work.
English · 20 replies · 108 reposts · 797 likes · 70K views
Andrej Karpathy @karpathy
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement), which will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well-tuned project.

This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, and so on. This is the bread and butter of what I've done daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approximately 700 changes autonomously, is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight that my parameterless QKNorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the value embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
English · 965 replies · 2.1K reposts · 19.5K likes · 3.6M views
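Here is a toy, self-contained sketch (my reconstruction, not nanochat's actual code) of the propose, measure, keep-or-revert loop described above; the two-entry config and the quadratic fake loss stand in for real hyperparameters and a real depth=12 proxy training run.

```python
import random

def train_and_eval(cfg):
    # Stand-in for a small proxy training run (e.g. depth=12):
    # a noisy bowl-shaped "validation loss" with a known optimum.
    return (cfg["lr"] - 0.02) ** 2 + (cfg["wd"] - 0.1) ** 2 + random.gauss(0, 1e-4)

def propose_change(cfg, history):
    # Stand-in for the agent reading past results in `history` and
    # planning the next experiment; here it just perturbs one knob.
    key = random.choice(list(cfg))
    return key, cfg[key] * random.uniform(0.8, 1.25)

cfg = {"lr": 0.05, "wd": 0.3}                 # hypothetical starting config
best = train_and_eval(cfg)
history = []
for _ in range(700):                          # ~700 autonomous experiments, as in the post
    key, value = propose_change(cfg, history)
    trial = {**cfg, key: value}
    loss = train_and_eval(trial)
    history.append((key, value, loss))
    if loss < best:                           # keep only changes that improve val loss
        best, cfg = loss, trial               # otherwise the change is discarded
print(cfg, best)                              # drifts toward lr≈0.02, wd≈0.1
```

The real loop would swap the fake loss for an actual training run and the random perturbation for an LLM that reads the experiment history and proposes code changes; the control flow is otherwise the same greedy cycle.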
Kevin retweeted
Aakash Gupta @aakashgupta
Two Turing-class AI researchers just raised $2B in three weeks to bet against every LLM company on the planet. Fei-Fei Li closed $1B for World Labs on February 18. LeCun closed $1.03B for AMI Labs today. Both are building world models. Both argue that the entire generative AI paradigm is a statistical parlor trick. And the investor overlap tells you this is coordinated conviction, not coincidence. Nvidia backed both. So did Sea and Temasek.

The math on AMI is absurd. $3.5B pre-money valuation. Four months old. Zero product. Zero revenue. The CEO said on the record that AMI won't ship a product in three months, won't have revenue in six, and won't hit $10M ARR in twelve. He described it as a long-term scientific endeavor. Investors gave him a billion dollars anyway.

This tells you everything about how the smart money is actually modeling AI's future. They're not pricing AMI on a revenue multiple. They're pricing it on the probability that LLMs hit a ceiling. And look at the investor list: Nvidia, Samsung, Toyota Ventures, Dassault, Sea. These are companies that need AI to understand physics, geometry, and force dynamics. A language model that can write poetry is worthless to a robotics company trying to predict what happens when a mechanical arm applies 12 newtons at a 30-degree angle to a flexible surface.

LeCun raided his own lab to build this: Mike Rabbat, Meta's former research science director; Saining Xie from Google DeepMind; Pascale Fung, senior director of AI research at Meta. He walked into Zuckerberg's office in November, told him he was leaving, and four months later half of FAIR works for him. Meta is reportedly partnering with AMI anyway, which means Zuckerberg thinks LeCun might be right even while Meta keeps scaling Llama.

AMI's first partner is Nabla, a medical AI company building toward FDA-certifiable agentic AI. That's the use case that makes world models existential. LLMs hallucinate, and in healthcare, hallucinations kill people. You can't prompt-engineer your way out of a model that generates statistically plausible text when you need a system that actually understands how a human body works.

Two billion dollars in three weeks. Two of the most credentialed researchers alive. And a thesis that says the $100B+ already poured into scaling LLMs is optimizing the wrong architecture entirely. If they're wrong, investors lose money. If they're right, every company building on top of GPT and Claude for physical-world applications just bought the wrong foundation.
AMI Labs @amilabs

Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe.

We've raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world.

We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one. Read more: amilabs.xyz

AMI - Real world. Real intelligence.

English · 39 replies · 78 reposts · 467 likes · 74.7K views