Bilgi Karan

1.9K posts


@machineError

UX, Innovation, Strategy, and Inclusive Design. Views my own.

Sweden · Joined November 2009
519 Following · 1.1K Followers
Pinned Tweet
Bilgi Karan
Bilgi Karan@machineError·
UX Strategy sounds mystical until you understand this simple fact... No stakeholder wants to hear about yet another strategy. And I'm yet to hear a top manager ask for a UX Strategy. bilgikaran.com/blog-detail/tr…
0
0
0
139
Bilgi Karan
Bilgi Karan@machineError·
@kepano You managed to create a tool that transcended a major tech revolution.
0
0
0
32
kepano
kepano@kepano·
More and more people are using Obsidian as a local wiki to read things your agents are researching and writing. It works best with a separate Obsidian vault that you can fill with content, e.g. via the Obsidian Web Clipper.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
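The ingest-and-compile loop in the tweet above can be sketched roughly as follows. This is a minimal sketch, not Karpathy's actual tooling: the function names are hypothetical, and `summarize` is a placeholder standing in for the LLM summarization call; only the raw/ directory and markdown-index layout come from the tweet.

```python
# Sketch of the "compile" step: scan a raw/ directory of source documents
# and maintain a markdown index that an agent (or you, in Obsidian) can
# navigate. All names here are illustrative, not from the original post.
from pathlib import Path


def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call (hypothetical).

    Here it just returns the first non-empty line, truncated.
    """
    stripped = text.strip()
    return stripped.splitlines()[0][:120] if stripped else ""


def compile_index(raw_dir: Path, wiki_dir: Path) -> Path:
    """Rebuild wiki/index.md with one summary line per raw document."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    entries = []
    for doc in sorted(raw_dir.glob("*.md")):
        summary = summarize(doc.read_text(encoding="utf-8"))
        # Obsidian-style wikilink back to the source document
        entries.append(f"- [[{doc.stem}]]: {summary}")
    index = wiki_dir / "index.md"
    index.write_text("# Index\n\n" + "\n".join(entries) + "\n", encoding="utf-8")
    return index
```

Re-running `compile_index` after each ingest keeps the index current, which is what lets the agent answer questions against the wiki without a separate RAG pipeline at this scale.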

7
4
59
1.7K
Lee Black
Lee Black@mrblackstudio·
No April fools, but always April showers. Animated poster made in @figma circa 2018.
3
0
37
1.4K
gndclouds 🌿🍄🌎🍵
gndclouds 🌿🍄🌎🍵@gndclouds·
It's pretty wild how much I have learned in the last year just by talking to cursor.
1
0
2
42
Jane Manchun Wong
Jane Manchun Wong@wongmjane·
@GergelyOrosz I understand your concerns about authenticity and the increasing presence of AI-generated responses on platforms. It's important to maintain transparency and distinguish between human and AI. Let's work together to ensure a balance between automation and human touch
15
0
289
9.8K
Gergely Orosz
Gergely Orosz@GergelyOrosz·
I REALLY want something on this platform to indicate “this is a real person who typed out a reply” Feels like more than 50% of blue check replies are AI-generated for some weird growth hacking reason And it will only get worse…
575
46
2.1K
125.9K
Semih
Semih@semihdotcom·
What a beauty.
Semih tweet media
2
0
1
109
Dimitri Novikov 🇺🇦
Dimitri Novikov 🇺🇦@novikoff·
Yes, you can vibe code now. Deliver fast and easy. But there's never been a better time to think about accessibility, usability, and aesthetics. Now you have time and resources to spend on the things that matter most.
5
0
40
2.1K
van Schneider
van Schneider@vanschneider·
@ankkala Black magic can only be done with tables, my friend.
2
0
30
1.4K
Bilgi Karan
Bilgi Karan@machineError·
@bartonsmith When BASIC was first introduced, the promise was that everyone would write their own programs. Of course it is an argument to promote subscription models and token use. When in doubt, check the 80/20 rule.
0
0
1
29
Barton Smith
Barton Smith@bartonsmith·
I struggle to understand the argument that everyone will just create their own software. Spending on paid services for everyday tasks continues to grow every year, and knowing what you want and how to articulate it is a difficult skill that we may be underestimating.
3
1
17
1.4K
Andy Allen
Andy Allen@asallen·
I'm putting together a historical list of iconic interface details that shook the world of software design… (iOS slide to unlock, Path radial menu…) What would you nominate?
75
15
434
32.6K
Bilgi Karan
Bilgi Karan@machineError·
A great design system provides a solid floor for creativity. Not a ceiling.
0
0
0
61
Bilgi Karan
Bilgi Karan@machineError·
@karrisaarinen So true! I would love to see an article about this from you at some point.
0
0
0
20
Karri Saarinen
Karri Saarinen@karrisaarinen·
Sometimes the easiest way to test a problem is to imagine it's not a problem, or that we don't need to solve it. Imagine what's the worst that's going to happen, what the customers/users would do then, and whether I know enough to predict that. And then see how bad it could be for them. A lot of problems in companies are really just imagined or have some agenda behind them, not an actual customer need. If the need is more of a political need for something to exist, it almost doesn't matter what actual problem you solve. They just want a problem solved that turns out as a success.
1
0
1
338
Karri Saarinen
Karri Saarinen@karrisaarinen·
You’ve maybe heard from me on this topic too many times, but this is the last I’ll offer (at least for now). My worry isn’t the code or the tools themselves. The question is how we keep thoughtful design alive even as new tools and technologies emerge. linear.app/now/design-is-…
21
53
764
100K
Bilgi Karan
Bilgi Karan@machineError·
@jurrehoutkamp @framer Very cool. When you guys can rebuild Framer completely inside of Framer, the circle will be complete.
0
0
1
70
Jurre Houtkamp
Jurre Houtkamp@jurrehoutkamp·
👀 A behind-the-scenes look at how we built @framer Published 2025 entirely on Framer. If you saw last year’s thread, we’ve made it a bit more fun this year. ⬇️
Jurre Houtkamp tweet media
30
13
157
29.1K
Bilgi Karan
Bilgi Karan@machineError·
@aydaoz @litcapital It’s called bootstrapping since the only thing you have that is not owned by VC will be the straps on your boots.
0
0
0
14
Bilgi Karan
Bilgi Karan@machineError·
@kepano @obsdmd CMD+T should directly open Recents window. And please make the tab inset cursor position correct.
0
0
0
72
kepano
kepano@kepano·
what's one improvement you'd like to see in @obsdmd in 2026?
307
6
311
52.3K
Bilgi Karan
Bilgi Karan@machineError·
@stevenbjohnson Thanks, but the link does not work as intended. It pushes me to authenticate rather than to the already installed @NotebookLM app. This use case of sharing a crafted Notebook is something you should invest in. Very valuable.
0
0
0
434
Steven Johnson
Steven Johnson@stevenbjohnson·
Super interesting post from @karpathy. I wanted to dive deeper, so I created a @NotebookLM notebook based on this tweet, and then did a Deep Research run in-app to gather related sources. Then generated one of our new slide decks to explore further. Instant knowledge base.
GIF
Andrej Karpathy@karpathy

Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r' in strawberry) because failing to do a task does not mean death.

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of the tribe in the jungle and a lot more solve the problem / get the upvote.

LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly like an animal.

49
279
2.1K
355K
Bilgi Karan
Bilgi Karan@machineError·
“LLMs are humanity's "first contact" with non-animal intelligence.” Profound.
Andrej Karpathy@karpathy


0
0
0
215
Jared Granger
Jared Granger@jaredpgranger·
In my past life, I was the brand designer @invision (forever RIP). I got to design most of the sub-brand IDs. While moving recently, I unboxed some swag I designed for the DesignBetter launch. Still love this brand identity. So lucky to have designed it with such an amazing team.
Jared Granger tweet media
31
21
1.1K
40.4K