Lance
@LKRBuilds

529 posts
Building agents that obey @ObedienceCorp https://t.co/kkIkcNl7qP

Colorado, USA · Joined September 2017
4.8K Following · 745 Followers
Lance
Lance@LKRBuilds·
@nkohari If everyone has access to super-intelligence, there are going to be a lot of new markets that won't be obvious until they're possible
English
0
0
0
42
Nate Kohari
Nate Kohari@nkohari·
Honestly, there are days when it's really difficult for me to work in AI. I don't believe that AI is going to take everyone's job, but it can be hard to stay motivated with that narrative constantly buzzing in the background. That's not a future I want to help create.
English
75
27
469
19.5K
Lance
Lance@LKRBuilds·
@TanksAaron @aidaniil Yeah it's hard to make quality/difficulty easily legible in a sea of crap
English
0
0
0
11
Dan
Dan@aidaniil·
github is not a huge indicator for me, but why even bother applying to a founding eng role with this commit graph?
[attached image]
English
424
2
719
1.4M
Lance
Lance@LKRBuilds·
@TanksAaron @aidaniil Fake candidate scams have been a pretty big problem in general the last few years, it's pretty common for the first interview to start with the interviewer saying "thank god you're a real person" xD
English
2
0
1
17
Lance
Lance@LKRBuilds·
@TanksAaron @aidaniil My question is more about what happens before the conversation. It's always easier to explain through a conversation, but if the first impression is "fake activity on GitHub," then the conversation probably won't happen
English
2
0
1
20
Lance
Lance@LKRBuilds·
If someone applied to a position with what looked like a fake graph, is there anything they could say upfront to get ahead of a GitHub profile that may appear fake at first glance but isn't? I have over 11.5k contributions over the past 12 months on my main account, and I've started using multiple accounts just to make the interface more usable, because the GitHub interface isn't designed for this kind of volume per user
English
1
0
0
27
aaron 🇦🇺
aaron 🇦🇺@TanksAaron·
@aidaniil You might be right or you might be throwing babies out with bathwater. I only cared about obviously fake GitHub charts when I was hiring. Perhaps you are looking for something specific?
English
1
0
0
2.4K
Lance
Lance@LKRBuilds·
10x engineers did not disappear. The scale changed. AI raised the floor, but it also raised the ceiling. Some engineers can now create outcomes at 1,000 to 5,000x the speed of a pre-AI average engineer. Markets rarely recognize exponential talent early because the highest-leverage work is often counterfactual. If one person removes 2 years of future work before the team even knows it is coming, there is no clean metric for the timeline they prevented. By the time that leverage is legible, it already looks obvious. The skillset for extreme leverage doesn't fit into the KPIs used to measure talent. Someone spending an extra week on a Jira ticket to eliminate a year of work that no one else understands is coming looks like an underperformer. The best thing about AI right now is that you rarely need that extra week.
English
0
0
2
144
Rohan Paul
Rohan Paul@rohanpaul_ai·
Chamath on how AI agents are making the "10x engineer" distinction disappear because the most efficient "code paths" are now obvious to everyone. Just as AI solved chess and removed the mystery of the best move, AI is doing the same for coding, making the process reductive and removing technical differentiation.

"I'm going to say something controversial: I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer, and that's what agents have done. The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making them available to everybody. It's effectively like what happened in chess: an AI created a solver so everybody understood the most efficient path in every single spot to do the most EV-positive (expected value positive) thing. Coding is very similar in that way; you can reduce it and view it very reductively, so there is no differentiation in code."

From @theallinpod YT channel (link in comment)
English
65
34
395
54.3K
Lance
Lance@LKRBuilds·
@smartretard_ @ylecun @yacineMTB @Ph_Aghion @erikbryn The only way his future happens is if people stay afraid and don't compete, and the government gives him a monopoly so he can keep access to intelligence expensive enough that only the capital class can afford to use it
English
0
0
0
34
Yann LeCun
Yann LeCun@ylecun·
Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn , @DAcemogluMIT , @amcafee , @davidautor
TFTC@TFTC21

Anthropic CEO Dario Amodei: “50% of all tech jobs, entry-level lawyers, consultants, and finance professionals will be completely wiped out within 1–5 years.”

English
1.2K
2.8K
21.4K
4.1M
Lance retweeted
Uncle Bob Martin
Uncle Bob Martin@unclebobmartin·
AIs are just another step up the semantic expression ladder. We initially expressed our semantics in binary, then assembler, then Fortran, then C, then Java, then Python, etc. AI is just the next step up that same old ladder.

And when you take that step, nothing else changes. You are still expressing behavioral semantics. You still need to express structural semantics. All the old principles still apply. You still have to be concerned about design and architecture. And even though the syntax allows informal statement, you cannot abandon formalism. When you express behavior you need a formal way to enforce the behavior you want.

I use Gherkin for this. It seems to work pretty well. Consider that Gherkin is written in triplets of Given/When/Then. Each of those GWT triplets is a transition of a state machine. A full suite of Gherkin triplets is a formal description of the finite state machine that represents the behavior of the application.

Other formalisms that matter are things like module dependency graphs, testing constraints, complexity constraints, and many others. This step up the semantic expression ladder provides you with an enormous amount of options. But you'd better choose those options wisely!
English
56
72
662
36K
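The mapping Uncle Bob describes, where each Gherkin Given/When/Then triplet is one transition of a finite state machine, can be sketched in a few lines. The states and events below are illustrative placeholders, not from any real Gherkin suite:

```python
# Each Given/When/Then triplet becomes one entry in a transition table:
# Given = current state, When = event, Then = next state.
# (States and events here are hypothetical examples.)
TRANSITIONS = {
    # (given_state, when_event) -> then_state
    ("cart_empty", "add_item"): "cart_has_items",
    ("cart_has_items", "checkout"): "awaiting_payment",
    ("awaiting_payment", "payment_accepted"): "order_placed",
}

def run(state, events):
    """Replay a sequence of events against the transition table."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            # No GWT triplet covers this (state, event) pair: the suite
            # does not permit this behavior, so the replay fails.
            raise ValueError(f"no transition from {state!r} on {event!r}")
        state = TRANSITIONS[key]
    return state
```

A full suite of triplets fills out the table, and replaying scenarios against it is exactly the "formal description of the finite state machine" the post refers to.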
Lance
Lance@LKRBuilds·
The ownership burden doesn't go away; the burden of the outcomes falls on the developer. "Sorry you got hacked, AI wrote it, it's not my fault" isn't acceptable. People are socially responsible for what they post publicly; how they create what they post doesn't negate the responsibility of making it public. This is a solvable spam-detection problem
English
0
0
0
44
GREG ISENBERG
GREG ISENBERG@gregisenberg·
What happens to open source when AI is writing 100% of the code? I've been thinking about this a lot.

Like… the whole system was built around humans valuing the act of contribution. You learned, you struggled, you submitted a PR, you got feedback, you got better. That loop created engineers. It created community. It created ownership.

If AI writes the PR, who owns it? Who learned from it? Who's gonna stay up at 2am debugging the thing they shipped because they actually care?

The cool part about OSS is that no one owns it. As a consumer, you could always look under the hood, fork it, take it somewhere else.

I don't think open source dies. But I genuinely don't know what it becomes... Any ideas?
English
170
14
243
28.6K
Lance
Lance@LKRBuilds·
@adxtyahq No, it made larger-scale problems accessible to more people. AI lets you apply known patterns faster, so what's left is the things that don't have examples, the things that don't have solutions, and that's where the real fun is
English
0
0
0
33
aditya
aditya@adxtyahq·
did AI kill the fun of coding and software engineering?
English
455
13
835
87.2K
Lance
Lance@LKRBuilds·
Claude Code is much faster and has better integration with external tools, but its performance is more variable than Codex's. Codex has consistently gotten more usable over time; I haven't personally experienced a major regression in its basic functionality. Claude Code has been the opposite experience: sometimes I hit basic functional regressions multiple times a week, and the same bugs get re-introduced months after they're fixed. Claude Code's metadata has been consistently incorrect, so you can't really use it to make useful decisions
English
0
0
0
101
Jashan
Jashan@Jashanx_gill·
Why is everyone making a shift from Claude to Codex?
English
716
28
1.3K
309.3K
Lance
Lance@LKRBuilds·
If you don't mind the terminal, try out festival campaign workspaces.

camp init to create a workspace
cgo to quickly navigate around the filesystem within the workspace (no deeply nested cd-ing)
csw to switch between workspaces

It's local filesystem based, you can sync across devices with git, Google Drive/iCloud, and you can use Obsidian as a viewer/explorer

fest.build
github.com/Obedience-Corp…
English
0
0
0
22
Channing Allen
Channing Allen@ChanningAllen·
We built an app called Hatch. It enables Karpathy's entire LLM Knowledge Base workflow out of the box, in 2 or 3 clicks, in a single interface. No need to stitch together Obsidian + plugins + markdown files + custom tools + etc. Hatch is an AI workspace where files, docs, and chats all live together. Not only can the chats READ files and docs, but they can WRITE and ORGANIZE them as well. This means you can set up an LLM Wiki in literally 3 steps, as I show in this video.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

English
44
77
939
177.9K
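The "compile raw/ into a wiki" step Karpathy describes can be sketched as a small script. In a real setup an LLM would write the summaries and articles; here summarize() is a hypothetical stand-in, and the file layout (one wiki page per source, plus an index.md with backlinks) is an assumption, not his actual tooling:

```python
# Minimal sketch of compiling a raw/ directory of markdown documents into
# a wiki directory with per-document pages and an index with backlinks.
from pathlib import Path

def summarize(text: str, limit: int = 120) -> str:
    # Placeholder for an LLM call: flatten and truncate the document.
    return text.strip().replace("\n", " ")[:limit]

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> Path:
    """Write wiki/<name>.md for each raw doc, plus an index.md linking them."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    entries = []
    for src in sorted(raw_dir.glob("*.md")):
        summary = summarize(src.read_text())
        page = wiki_dir / src.name
        # Each wiki page records its summary and a backlink to the source.
        page.write_text(f"# {src.stem}\n\n{summary}\n\nSource: raw/{src.name}\n")
        entries.append(f"- [[{src.stem}]]: {summary}")
    index = wiki_dir / "index.md"
    index.write_text("# Wiki index\n\n" + "\n".join(entries) + "\n")
    return index
```

Re-running the compile step after adding documents to raw/ regenerates the pages and index, which mirrors the "incrementally compile" loop in the post; viewing the output in Obsidian works because everything is plain markdown.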
Lance
Lance@LKRBuilds·
This is my solution to this problem: github.com/Obedience-Corp… You create an AI workspace called a campaign that functions like an Obsidian vault using the camp init command. You keep all your plans, research, designs, etc… in that workspace and add projects related to that campaign's context to it, then easily reference everything with @ and use jump/fuzzy navigation in the terminal via the cgo command to move around the workspace quickly. Here's an example campaign created for crypto hackathons: github.com/lancekrogers/O…
English
0
0
0
6
Pete Simard
Pete Simard@SimardPete·
@chamath this is the biggest gap in AI tooling right now. mass amounts of context spread across conversations you can never find again. the conversation is the product but there's no way to treat it like one.
English
2
0
9
370
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
This may be a dumb question but I’ll ask it here anyways: I can’t find a good way for my various AI chats to automatically sync its conversation history into a structured knowledge base. So that as I update various chats from time to time and refine context, my knowledge base automatically grows with this new info.
English
1.1K
62
2.4K
805.2K
Lance
Lance@LKRBuilds·
@benvargas How much of the context window are you using when this happens? Since they increased the context window to 1M, Claude still seems to think it has a 200k limit and will suggest doing the work in the next session/next day
English
2
0
2
254
Ben Vargas
Ben Vargas@benvargas·
Heavy claude users, is this normal? "Want me to ..., or are you done for today on this one?" So often I see Opus trying to finish for the day/evening/night... is Anthropic prompting it to suggest stopping to save compute?
English
219
8
413
42K
Lance
Lance@LKRBuilds·
Claude Code using festival to execute an agile epic's worth of work autonomously in 47 minutes on March 26, 2026. What you can't see is that I was setting up autonomous builds for several other projects on a second monitor, and once everything was running I drove to the grocery store github.com/Obedience-Corp…
English
1
0
1
66
Lance
Lance@LKRBuilds·
@trikcode Everyone, if people stop buying into AI fear-mongering
English
0
0
0
9
Wise
Wise@trikcode·
Who do you think will win the AI race ?
English
579
8
314
36.5K
Basel Ismail
Basel Ismail@BaselIsmail·
URGENT PSA - New supply chain attack vector that I found WILD > AI LLMs hallucinate package names roughly 18-21% of the time. Hackers have started pre-registering those hallucinated names on PyPI and npm with malicious payloads; they call it "slopsquatting" You can only imagine what's next
English
65
194
1.6K
573.5K
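One common mitigation for the "slopsquatting" pattern described above is to vet LLM-suggested dependency names against an allowlist of packages you already trust (for example, an internal mirror) before anything gets installed. A minimal sketch, where the allowlist contents and function names are illustrative assumptions:

```python
# Sketch: screen LLM-suggested package names against a vetted allowlist
# before installing, so a hallucinated (and possibly squatted) name is
# flagged instead of fetched. The allowlist below is illustrative only.
KNOWN_GOOD = {"requests", "numpy", "flask"}

def vet_requirements(names):
    """Split suggested package names into (approved, suspect) lists."""
    approved = [n for n in names if n.lower() in KNOWN_GOOD]
    suspect = [n for n in names if n.lower() not in KNOWN_GOOD]
    return approved, suspect
```

Anything in the suspect list gets a human review (or a registry lookup for download counts, age, and maintainers) before it ever reaches pip or npm; the point is that install decisions should not flow directly from model output.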
Mahesh Chulet
Mahesh Chulet@mchulet·
Drop your project URL 👇🏻 Let’s drive some traffic !!!...
English
307
4
110
9.5K
Lance
Lance@LKRBuilds·
@lennysan That’s why I built festival methodology github.com/Obedience-Corp… Instead of thinking about an individual project, I now think of all related projects as part of 1 related, flexible system
English
0
0
0
197
Lenny Rachitsky
Lenny Rachitsky@lennysan·
"Using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems, and by 11am I am wiped out for the day. There is a limit on human cognition. Even if you're not reviewing everything they're doing, how much you can hold in your head at one time. There's a sort of personal skill that we have to learn, which is finding our new limits. What is a responsible way for us to not burn out, and for us to use the time that we have?" @simonw
Lenny Rachitsky@lennysan

"Using coding agents well is taking every inch of my 25 years of experience as a software engineer."

Simon Willison (@simonw) is one of the most prolific independent software engineers and most trusted voices on how AI is changing the craft of building software. He co-created Django, coined the term "prompt injection," and popularized the terms "agentic engineering" and "AI slop."

In our in-depth conversation, we discuss:
🔸 Why November 2025 was an inflection point
🔸 The "dark factory" pattern
🔸 Why mid-career engineers (not juniors) are the most at risk right now
🔸 Three agentic engineering patterns he uses daily: red/green TDD, thin templates, hoarding
🔸 Why he writes 95% of his code from his phone while walking the dog
🔸 Why he thinks we're headed for an AI Challenger disaster
🔸 How a pelican riding a bicycle became the unofficial benchmark for AI model quality

Listen now 👇 youtu.be/wc8FBhQtdsA

English
563
702
6.9K
1.9M