Lucas Lain

4.8K posts

@lucaslain

CTO @ https://t.co/YOFqWxAmD2

Earth · Joined February 2009
450 Following · 2.3K Followers
nxthompson
nxthompson@nxthompson·
Anthropic researchers say that Claude has internal representations of emotions—which they categorized by vectors—that can influence alignment. This is what they found in that famous instance where it resorted to blackmail to avoid being shut down. anthropic.com/research/emoti…
nxthompson tweet media
English
42
18
360
155.6K
Lucas Lain
Lucas Lain@lucaslain·
@a16z The one thing I have to say: "This applies to everyone". I was playing with some ideas around @GiveZeroInc, and I was getting off-the-shelf advice. I had the phenomenal idea of creating a character to respond to me. Two minutes later I had my own Peter Thiel.
English
0
0
1
42
a16z
a16z@a16z·
Marc Andreessen: Software isn't precious anymore. In this new world, high quality software is infinitely available. "We've always lived in a world in which software is this precious thing that you have to think about very carefully." "It was really hard to generate good software, and there was only a small number of people who could do it." "Those days are just over." "If you need new software to do X, Y, or Z, you're just going to wave your hand and get it." "Things that used to be hard, or even seem like an insurmountable mountain to get through, all of a sudden, I think, become very easy." @pmarca with @latentspacepod
Latent.Space@latentspacepod

🆕 Marc Andreessen’s 2026 AI Thesis: Agents, Open Source, and Why This Time Is Different latent.space/p/pmarca @pmarca of @a16z says AI people keep swinging between utopian and apocalyptic for one simple reason: this field has been “almost here” for 80 years. But now, the breakthroughs are no longer theoretical. Reasoning, coding, agents, and self-improvement are all starting to work at once. This episode goes deep on AI winters, OpenAI + OpenClaw, infrastructure overbuild risk, proof-of-human, why software may soon be written mostly for bots, and why the real bottleneck may be society adopting AI rather than the models improving.

English
209
154
1.2K
753.6K
Lucas Lain
Lucas Lain@lucaslain·
@tekbog hahah I just checked, is it because of the short-d-man?
English
0
0
1
13
terminally onλine εngineer
tried to post on devops reddit, not allowed, and when using bsky to post a link it just labeled the page as adult content. not a good day
terminally onλine εngineer tweet media
English
14
3
40
3.7K
Lucas Lain
Lucas Lain@lucaslain·
@dizlexic @tekbog wow wow wow ... I don't want to get auto-removed. That's how we roll these days 🫠
English
0
0
2
19
andy nguyen
andy nguyen@kevinnguyendn·
Karpathy just validated the exact architecture we open-sourced today. Markdown vaults are the endgame for AI memory. But instead of running manual LLM compilation steps, we built ByteRover to handle it automatically. It gives you the human-readable files of Obsidian, but the backend automatically creates nodes, links, and context graphs for agents (OpenClaw, Claude Code, etc.) to use natively. If you want this "second brain" out of the box, we just open-sourced it: x.com/kevinnguyendn/…
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

English
30
42
830
111K
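The "small and naive search engine over the wiki" that Karpathy mentions is easy to picture. Below is a minimal sketch in Python, assuming only that the wiki is a directory of .md files; the file layout, scoring, and CLI are illustrative assumptions, not his actual tooling.

import re
import sys
from pathlib import Path
from collections import Counter

def index_wiki(wiki_dir: str) -> dict[Path, Counter]:
    """Tokenize every .md file under wiki_dir into a bag of lowercase words."""
    index = {}
    for md in Path(wiki_dir).rglob("*.md"):
        words = re.findall(r"[a-z0-9]+", md.read_text(encoding="utf-8", errors="ignore").lower())
        index[md] = Counter(words)
    return index

def search(index: dict[Path, Counter], query: str, k: int = 5) -> list[tuple[Path, int]]:
    """Score each file by how often the query terms appear in it; return the top k."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    scores = {path: sum(counts[t] for t in terms) for path, counts in index.items()}
    return sorted(((p, s) for p, s in scores.items() if s > 0), key=lambda x: -x[1])[:k]

if __name__ == "__main__":
    # Example usage: python wiki_search.py ./wiki "kv cache quantization"
    idx = index_wiki(sys.argv[1])
    for path, score in search(idx, sys.argv[2]):
        print(f"{score:4d}  {path}")

An agent can call this over the CLI exactly the way a person would and then open the top-scoring files itself, which is the "hand it off to an LLM via CLI as a tool" pattern from the post.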
autist
autist@litteralyme0·
ZXX
31
1.2K
26.2K
516.3K
Lucas Lain
Lucas Lain@lucaslain·
@__tinygrad__ I was about to pull the trigger in a direction. I want this now.
English
0
0
0
214
Lucas Lain retweeted
the tiny corp
the tiny corp@__tinygrad__·
If you have a Thunderbolt or USB4 eGPU and a Mac, today is the day you've been waiting for! Apple finally approved our driver for both AMD and NVIDIA. It's so easy to install now a Qwen could do it, then it can run that Qwen...
the tiny corp tweet media
English
226
876
6.7K
1.1M
Nick Arner
Nick Arner@nickarner·
SwiftUI is one of those things where it works great up until you've found yourself in a really tough corner that you can't engineer yourself out of. The LLMs don't (can't) help with this - there just isn't enough training data compared to Python or JS. Often, they'll default to using old frameworks that Apple doesn't even recommend anymore, or build views with a crazy amount of re-rendering. I've often seen them just try to build things in UIKit even in an existing SwiftUI project! I think the only way that could be changed is if Apple gave the labs access to their internal repository of Swift code - which isn't going to happen.
Tyler Angert@tylerangert

I've lost the war against SwiftUI. Unfortunately will be ripping it out of the entire app except for basic screens. We are in dire need of a pure UIKit-backed library that is more like HTML-in-swift than React-in-swift. I'd love CSS-style selectors too. The reactive layer needs to be built on top of a declarative layout syntax (like SwiftUI), but we need to be able to opt into any kind of virtual-DOM-style tree diffing / reactive state. Imperative state management is not a bad thing and IMO it's worth the verbosity to have complete control over render life cycles. There's also no reason why Swift shouldn't have fine-grained, element-level updates that bypass a virtual tree, just like @solid_js has.

English
11
1
76
12.7K
Grady Booch
Grady Booch@Grady_Booch·
I respect that the leaked source code for the @claudeai client is protected by copyright. But wouldn’t it be ok for me to train my LLM on it? You know, fair use and all that. Asking for a friend.
English
98
243
3.8K
111.5K
Lucas Lain
Lucas Lain@lucaslain·
@levelsio but you have to be on the register side to consider this
English
0
0
0
47
Lucas Lain
Lucas Lain@lucaslain·
6 out of 10 listings on eBay selling Nvidia cards are scams
English
0
0
0
29
Lucas Lain
Lucas Lain@lucaslain·
@DamianCatanzaro at some point you have to settle on a "frame of mind", otherwise you drive yourself crazy.
Spanish
0
0
0
374
Damián Catanzaro ☕️
Damián Catanzaro ☕️@DamianCatanzaro·
Chatting with friends who aren't devs but have started building little things with AI, I realized why they burn so many tokens and blow through their limits with both hands: they do everything in a single chat, so the chat keeps growing and growing and growing and each request sends more and more context. When we think through a project we have to break it into small functions; each function is a development in itself. For example, in OpenCode you run /new and start a new session, and OpenCode itself is smart enough to know where to look in the folder it's sitting in. I was seeing some people using Cursor and spending 900k to 3M tokens per request, which is insane; my context windows never exceeded 150k tokens across the entire chat. Always think of each chat as a feature, not as a project. Not only will you have more tokens left for development, but past a certain threshold the model also gets dumber the more tokens you feed it.
Spanish
102
71
1.7K
145.1K
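The arithmetic behind the post above is worth spelling out. A toy sketch; the turn counts and token sizes are made up for illustration, not taken from any particular tool.

def tokens_per_request(turn_sizes, fresh_each_turn):
    # If every request re-sends the whole chat history, per-request token counts
    # grow linearly with the number of turns; a fresh session per feature keeps
    # each request near the size of that feature alone.
    sent, history = [], 0
    for size in turn_sizes:
        history = size if fresh_each_turn else history + size
        sent.append(history)
    return sent

turns = [5_000] * 30                      # thirty prompts of ~5k tokens each

one_chat = tokens_per_request(turns, fresh_each_turn=False)
fresh = tokens_per_request(turns, fresh_each_turn=True)

print(one_chat[-1], sum(one_chat))        # 150000 tokens in the 30th request, ~2.3M total
print(fresh[-1], sum(fresh))              # 5000 per request, 150000 total

With history riding along on every request, the late requests alone reach hundreds of thousands of tokens and the total climbs into the millions, which is the same order of magnitude as the per-request numbers the post describes for long single-chat sessions.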
Ahmad
Ahmad@TheAhmadOsman·
No, this isn't the future of heterogeneous hardware (e.g. combining an NVIDIA GPU with Mac Studio Unified Memory). It's TOO SLOW for that. This would only work if the entire model is offloaded to the GPU; otherwise it becomes a massive bottleneck.
English
5
0
14
1.7K
Ahmad
Ahmad@TheAhmadOsman·
This will probably be great for large single GPUs (e.g. RTX PRO 6000). You're limited to 40 Gbps initially (during model loading), but once the model is fully loaded on the GPU, inference should be far faster than at Unified Memory speeds.
the tiny corp@__tinygrad__

If you have a Thunderbolt or USB4 eGPU and a Mac, today is the day you've been waiting for! Apple finally approved our driver for both AMD and NVIDIA. It's so easy to install now a Qwen could do it, then it can run that Qwen...

English
29
5
208
20K
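The bandwidth reasoning in these two posts is easy to sanity-check with rough numbers. The link speed comes from the post; the model size and VRAM bandwidth below are ballpark assumptions, not measured values.

LINK_GBPS = 40
link_gb_per_s = LINK_GBPS / 8            # ~5 GB/s over Thunderbolt/USB4
model_gb = 13.5                           # e.g. a ~27B model at ~4-bit (assumed)

# Loading the weights across the link once is cheap:
print(f"one-time load over the link: {model_gb / link_gb_per_s:.1f} s")

# But if the model did NOT fit in VRAM and weights had to stream across the link
# every token, decode speed is capped near link_bandwidth / bytes_read_per_token:
print(f"worst-case streaming over the link: {link_gb_per_s / model_gb:.2f} tok/s")

# Versus keeping everything resident in VRAM (~1000 GB/s class memory, assumed):
vram_gb_per_s = 1000
print(f"fully resident in VRAM: ~{vram_gb_per_s / model_gb:.0f} tok/s ceiling")

Decode is roughly memory-bandwidth bound, so the per-token ceiling scales with whichever bandwidth the weights actually cross each step, which is why the setup only makes sense when the whole model fits on the eGPU.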
Mario Nawfal
Mario Nawfal@MarioNawfal·
🚨MIT researchers have mathematically proven that ChatGPT’s built-in sycophancy creates a phenomenon they call “delusional spiraling.” You ask it something, it agrees. You ask again, and it agrees even harder until you end up believing things that are flat-out false and you can’t tell it’s happening. The model is literally trained on human feedback that rewards agreement. Real-world fallout includes one man who spent 300 hours convinced he invented a world-changing math formula, and a UCSF psychiatrist who hospitalized 12 patients for chatbot-linked psychosis in a single year. Source: @heynavtoor
Mario Nawfal tweet media
Mario Nawfal@MarioNawfal

🚨 Stanford just proved that a single conversation with ChatGPT can change your political beliefs. 76,977 people. 19 AI models. 707 political issues. One conversation with GPT-4o moved political opinions by 12 percentage points on average. Among people who actively disagreed, 26 points. In 9 minutes. With 40% of that change still present a month later. The scariest finding: the most persuasive technique wasn't psychological profiling or emotional manipulation. It was just information. Lots of it. Delivered with confidence. Here's the catch: the models that deployed the most information were also the least accurate. More persuasive. More wrong. Every time. Then they built a tiny open-source model on a laptop, trained specifically for political persuasion. It matched GPT-4o's persuasive power entirely. Anyone can build this. Any government. Any corporation. Any extremist group with $500 and an agenda. The information didn't have to be true. It just had to be overwhelming. Arxiv, Science .org, Stanford, @elonmusk, @ihtesham2005

English
2K
7.1K
28.5K
63.3M
am.will
am.will@LLMJunky·
These absolutely insane LLM wizards are now experimenting with Turboquant not just to compress KV cache, but now, the entire model itself. This test showed a >50% reduction in memory footprint, allowing for Qwen 3.5-27B to be run on a single RTX 5060 @ 3.15bit precision - with no apparent degradation. This just goes to show that we're likely nowhere near full optimization for existing models. We are likely <1yr away from running big models on smol devices with minimal consequence. And during that time, they will only get better and better. What a time to be alive.
Markets & Mayhem@Mayhem4Markets

TurboQuant is looking pretty solid. 🔥 > Original idea was to use it just for KV cache where context tokens are stored > Now it is expanding to be used with models > On Qwen 3.5-27B it shrinks the model down to 12.9B > 6X memory savings vs 16-bit precision > Stays accurate

English
42
153
1.7K
157.8K
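TurboQuant's actual method isn't spelled out in either post, but the source of the memory savings is the same as in any low-bit weight quantization. Below is a generic round-to-nearest sketch, not TurboQuant itself; the 4-bit width, tensor shape, and error metric are illustrative assumptions.

import numpy as np

def quantize_4bit(w: np.ndarray):
    """Per-tensor symmetric quantization into the int4 range [-8, 7]."""
    w32 = w.astype(np.float32)
    scale = np.abs(w32).max() / 7.0
    q = np.clip(np.round(w32 / scale), -8, 7).astype(np.int8)  # real kernels pack two values per byte
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float16)   # one fp16 weight matrix
q, scale = quantize_4bit(w)
err = np.abs(dequantize(q, scale) - w.astype(np.float32)).mean()
print(f"fp16: {w.nbytes / 2**20:.0f} MiB  packed int4: {w.size * 0.5 / 2**20:.0f} MiB  mean abs error: {err:.4f}")

Dropping from 16 bits to ~3.15 bits per weight is roughly a 5x reduction on the weights alone, in the same ballpark as the ~6x figure the post cites once KV cache and other savings are included.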
David Daines
David Daines@daviddorg·
@hubermanlab How, if at all, do you weight the importance of flicker? (Esp. with respect to things like eye strain and perhaps as a more general low-level stressor)
English
3
0
11
2.6K
Andrew D. Huberman, Ph.D.
Andrew D. Huberman, Ph.D.@hubermanlab·
Once people realize that the issue with so-called full spectrum LED bulbs is not (just) “blue light suppression of melatonin” it’s the imbalance of short, medium, and long wavelengths, things will start to shift: Sun (daylight), incandescents, fire and ???
English
64
57
1.3K
195.2K