Dimitar Bakardzhiev
@dimiterbak
4.1K posts

Founder of @kedehub; Inventor of KEDE; Entrepreneur; Investor; Publisher; Author

Bulgaria · Joined April 2009
686 Following · 984 Followers

Pinned Tweet
Dimitar Bakardzhiev@dimiterbak·
Knowledge Discovery Efficiency (KEDE, pronounced [ki:d]) quantifies the knowledge gap a human needs to bridge to complete a task. When individual capability exceeds task complexity, the knowledge gap is too narrow, leading to wasted potential and boredom. Conversely, a wide knowledge gap, where tasks are too complex, leads to stress and lower productivity. An optimal gap keeps humans in a state of Flow, with higher productivity and job satisfaction.

Imagine typing the word "Honorificabilitudinitatibus" from Shakespeare's "Love's Labour's Lost". To calculate KEDE, we track the process of typing this word. In each time interval, if I know what letter to type, mark "1"; if I'm unsure and need to check the spelling, mark "0". I begin by reviewing the word, which takes me two time intervals. Then I confidently write "Honor", then hesitate. After checking, I write "ificabi", hesitate again, then write "lit", check, write "udi", check, write "ni", check again, and finally write "tatibus".

What results is a sequence of ones and zeros alongside our word. The ones represent existing knowledge; the zeros, moments of knowledge discovery. The resulting Missing Information is 0.41 bits. The KEDE score is 71.

In short, tangible output, like our word, indicates applied knowledge. The ones and zeros are the journey of discovery. This exercise illuminates the process behind KEDE calculation.
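A minimal sketch of that calculation in code. Two assumptions are inferred from the quoted numbers rather than taken from the official definition (see docs.kedehub.io): Missing Information H is the ratio of zeros (discovery intervals) to ones (typed symbols), and KEDE = 1/(1 + H), expressed as a percentage. The input sequence below is likewise chosen only to reproduce the quoted figures.

```python
def kede(signal):
    """Score a work log of 1s (symbols produced from existing knowledge)
    and 0s (intervals spent discovering missing knowledge).

    Assumes Missing Information H = zeros / ones (bits per produced
    symbol) and KEDE = 1 / (1 + H), reported as a percentage.
    """
    ones = signal.count(1)
    zeros = signal.count(0)
    h = zeros / ones             # missing information, in bits
    return h, 100 / (1 + h)     # (H, KEDE score)

# 27 typed letters ("Honorificabilitudinitatibus") plus 11 discovery
# intervals reproduce the quoted figures: H ~ 0.41 bits, KEDE ~ 71.
h, score = kede([0] * 11 + [1] * 27)
print(round(h, 2), round(score))  # 0.41 71
```

More zeros per produced symbol widen the measured knowledge gap and pull the score down; a log with no zeros scores 100.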
Dimitar Bakardzhiev@dimiterbak·
@rauchg Wrong! Never has a college degree, work experience, network, even the accumulation of knowledge been worth **more**.
Guillermo Rauch@rauchg·
There are no limits anymore. Anyone can do anything. The only limiting factors are agency and ambition. Never has a college degree, work experience, network, even the accumulation of knowledge been worth less. You can just ship things.
Felix Haas@felixhhaas·
Ultimate AI Prompt Directory 🔥

Over the past weeks I collected my favorite prompts and turned them into one "master directory" so you can just copy + paste what you need.

Prompts you'll find:
👉 Foundation (auth, users, settings)
👉 Core UX & UI (dashboards, file uploads, realtime)
👉 Collaboration & Growth (teams, invites, notifications)
👉 Monetization (Stripe, PayPal, billing)
👉 Integrations (Slack, Resend, Maps, Calendly)
👉 Advanced Systems (feature flags, analytics, cron jobs)
👉 AI Superpowers (chatbots, semantic search, rec engines)

Built for Lovable. In Lovable.

Want access? Comment "Directory" and I'll send you the link. LFG 🚀
Zed@zeddotdev·
LLMs can write code, but they can't maintain mental models. Engineers test as they go. When tests fail, they check their mental model to decide whether to fix the code or the tests. @conradirwin on why LLMs can't really build software: zed.dev/blog/why-llms-…
Dimitar Bakardzhiev@dimiterbak·
The dichotomy method, also known as the method of division in halves, systematically narrows a defined space or interval by repeatedly dividing it in half. The process continues until the desired precision is reached or a solution within the space is found.
10X AI@10X_AI_

1. The "No Failure" Research Method @demishassabis said: "There's no such thing as failure in blue sky research as long as you're picking experiments that meaningfully split the hypothesis space." Takeaway: Design tests where success AND failure both teach you something valuable.
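The dichotomy method above can be sketched as a classic bisection: each probe splits the remaining interval in half, so the uncertainty shrinks by a factor of two (one bit) per step. A minimal root-finding illustration:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Locate a zero of f on [lo, hi] by repeated halving, assuming
    f(lo) and f(hi) have opposite signs.  Every probe discards half
    of the remaining interval -- one bit of uncertainty per step."""
    f_lo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) < 0) == (f_lo < 0):
            lo, f_lo = mid, f(mid)   # zero lies in the upper half
        else:
            hi = mid                 # zero lies in the lower half
    return (lo + hi) / 2

# Pin down sqrt(2) as the zero of x^2 - 2 on [0, 2]:
root = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
print(round(root, 6))  # 1.414214
```

Like the "meaningful split of the hypothesis space" in the quoted tweet, both outcomes of each probe are informative: whichever half the sign test selects, the search space is halved.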

Dimitar Bakardzhiev retweeted
David Deutsch@DavidDeutschOxf·
@ChrisFriedler The trick is to seek good explanations, not someone to believe.
Hadi Vafaii@hadivafaii·
In the KL framework, there is fundamentally only one random variable, X, representing events that occur in the real world. The probability distributions p_world and p_brain pertain to this same variable. They reflect the true and the subjective likelihoods associated with X, respectively.

Introducing another random variable, Y, and considering conditioning on it to reduce uncertainty about X, as expressed by H(X|Y), operates at a higher level of abstraction. This step is about how statistical relationships between different aspects of the world (i.e., when I(X;Y) > 0) can serve as a resource to reduce uncertainty about X.

But this is the main point I'm trying to make: this conceptual layering about H(X|Y) (and anything else) can be understood as built on top of the KL framework.

(TL;DR) The KL framework is the most fundamental layer of abstraction, which contains all other constructs that might require the introduction of tasks, auxiliary variables, or specific statistical relationships.
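For readers following the KL-versus-H(X|Y) exchange, the two quantities can be put side by side numerically. A small sketch using only the standard definitions (the distributions are made up for illustration): D_KL(p_world || p_brain) compares two beliefs about the same X, while H(X|Y) = H(X) - I(X;Y) measures what a second variable Y leaves unresolved.

```python
from math import log2

def kl(p, q):
    """D_KL(p || q) in bits: the cost of believing q when p is true."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# One variable, two beliefs about it -- the single-X KL picture.
p_world = [0.5, 0.5]
p_brain = [0.9, 0.1]
print(round(kl(p_world, p_brain), 3))  # 0.737 bits

# Two variables -- the H(X|Y) picture.  Made-up joint p(x, y)
# in which Y predicts X imperfectly.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
h_x = entropy([0.5, 0.5])                       # H(X) = 1 bit
h_x_given_y = 0.0
for y in (0, 1):
    p_y = joint[(0, y)] + joint[(1, y)]
    h_x_given_y += p_y * entropy([joint[(x, y)] / p_y for x in (0, 1)])
mi = h_x - h_x_given_y                          # I(X;Y) = H(X) - H(X|Y)
print(round(h_x_given_y, 3), round(mi, 3))      # 0.722 0.278
```

The first computation needs nothing but X and two distributions over it; the second requires introducing Y and its statistical relationship to X, which is exactly the "next layer of abstraction" the thread is debating.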
Hadi Vafaii@hadivafaii·
Every living system must adapt, or die. In my first blog post, I show how this fundamental principle can be mathematized: ✅ Brains adapt, and adaptation is about KL divergence minimization. Let's unpack the main insights 🧵[1/n] (Link in the first reply👇)
Dimitar Bakardzhiev retweeted
Quinn Slack@sqs·
Devs: oh, I better not incur too much expense from this new code AI tool or I'll get in trouble

CEOs: let me go to my LLM usage dashboard, sort by cost descending for the month, and celebrate the dev AI power users
Dimitar Bakardzhiev@dimiterbak·
@swardley Questions are the cognitive tools we use to close knowledge gaps. Questions arise only when there is Knowledge to Be Discovered. A binary question that removes 50% of possible answers is one bit of information. docs.kedehub.io/kede/what-is-k…
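The one-bit arithmetic can be made concrete: a question that removes half of the remaining candidates yields one bit, so resolving N equally likely possibilities takes ceil(log2 N) such questions. A small sketch (standard information-theory bookkeeping, not anything KEDE-specific):

```python
from math import ceil, log2

def questions_needed(n):
    """A yes/no question that removes half of the remaining candidates
    is worth one bit, so n equally likely possibilities take
    ceil(log2(n)) such questions to resolve."""
    return ceil(log2(n))

def locate(secret, lo, hi):
    """Find secret in [lo, hi) by halving questions; returns the
    answer and the number of one-bit questions asked."""
    asked = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        asked += 1                 # "Is it >= mid?" -- one bit
        if secret >= mid:
            lo = mid
        else:
            hi = mid
    return lo, asked

print(questions_needed(1024))      # 10
print(locate(700, 0, 1024))        # (700, 10)
```

Ten well-chosen binary questions close a 1024-way knowledge gap; a question whose answer you already know closes nothing, which is why questions arise only where there is knowledge to be discovered.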
Simon Wardley@swardley·
There is a lot of discussion on the youthful cadre of DOGE engineers being let loose on legacy systems in the US Government. Some of it talks about how talented they are, how they can use AI to solve problems, and how they are used to dealing with complicated problems. Most of it is frankly ... clueless. There are also a lot of scared people, mostly those with experience of that legacy.

The problem with legacy is a problem of understanding. When it comes to legacy, software engineering is a decision-making process about systems that are too large for us to fully grasp. Our traditional approaches to tackling this consist of manual inspection, reading code, ad-hoc inquiry, and gut feel. The reason for this is to do with the toolsets we use. I've heard the DOGE team will be using AI, and I've even met several CIOs who hope AI will be a magic bullet for the burden of legacy, but alas, in its current form, whilst LLMs / LMMs may aid, they can do little more. Even as an aid, the impact on improvement is normally in the 5%-60% range.

Assuming that Musk has 10x developers (or at least people who believe they are), then even at our most generous we'd be talking a 15x performance improvement over your average engineer. Sounds impressive, except when you consider that your average legacy migration project (something which people have experience of) takes 48 months, fails 74% of the time, and often involves 10s to 100s of person-years of effort. Given the pressure the DOGE engineers will be under to find results in legacy environments, mistakes are likely to happen.

It doesn't have to be this way. In my world, 15x is chicken feed. I'm used to 600x improvement when dealing with legacy, but that comes not only from thinking about software engineering as a decision-making process but from removing the constraints of tools and using techniques such as Moldable Development.
To help others get to grips with these techniques, Tudor Girba & I are releasing chapters 1-3 of our book on Rewilding Software Engineering. We will be releasing more as we find time to write it. Naturally, the book will be Creative Commons share-alike. The concepts contained in these first chapters are foundational to the practices we will explore later in the book. I hope you enjoy it; we've done quite a bit of rewriting from our earlier version, and yes, comments are welcome! ... oh, and yes, there are maps. There's always maps. It's me.

Chapter 1: Introduction medium.com/feenk/rewildin…
Chapter 2: How we make decisions medium.com/feenk/rewildin…
Chapter 3: Questions and Answers medium.com/feenk/rewildin…
Dimitar Bakardzhiev retweeted
Tsarathustra@tsarnick·
Perplexity CEO Aravind Srinivas says AI will drive the marginal cost of research toward zero, so the value will be in asking the best questions
Dimitar Bakardzhiev@dimiterbak·
Is there a better feeling than receiving proofs?
Hadi Vafaii@hadivafaii·
I had some difficulty understanding your setup. It seems like you have two random variables (X and Y), where X represents the real world and Y represents subjective beliefs. But then you say: "Initially, Y holds a certain level of information that may or may not be sufficient to resolve X." What does it mean to "resolve" a random variable X? The formal treatment using H(X|Y) suggests these are random variables, yet the text discusses them more like knowledge states or sets of information. I'm probably missing something here, but what is your intended way of interpreting X and Y?
Dimitar Bakardzhiev@dimiterbak·
@hadivafaii I have used an example similar to your Treasure Hunt here, using H(X|Y) rather than KL. I wonder what KL brings that H(X|Y) does not? docs.kedehub.io/kede/kede-know…
Hadi Vafaii@hadivafaii·
The framework I discuss in the post operates at a deeper level of abstraction. There isn't even a discussion of priors or posteriors. These concepts, along with conditional entropy H(X|Y), might appear at the next level, when we start thinking about specific algorithms and mechanisms for KL minimization. The core objective I derive (eq. 7) simply shows that adaptation can be thought of as KL minimization, before we even get to questions of how this minimization might be achieved.
Dimitar Bakardzhiev@dimiterbak·
@hadivafaii KL or conditional entropy H(X|Y)? H(X|Y) quantifies the uncertainty that remains about X after accounting for the prior knowledge I(X;Y). H(X|Y) represents the "knowledge to be discovered": the difference between the knowledge required to complete a task and the prior knowledge.
Hadi Vafaii@hadivafaii·
That's all, folks. I'll be back with Part 2 (and beyond) once they're up. Thanks for reading this far. I hope it sparked your curiosity. (here's the link again: mysterioustune.com/2025/01/13/wor…) P.S. If you enjoyed this thread, I'd love to hear your thoughts. 🧵[16/n]
Markus Meister@mameister4·
Jieyu Zheng @jieyusz, my splendid co-author, is looking at post-doc opportunities...
Dimitar Bakardzhiev@dimiterbak·
@girba @swardley Yes, I am aware of your product. How steep would the learning curve of your product be for the average developer?
Tudor Girba@girba·
@dimiterbak @swardley Well, the idea is to replace reading, actually. We replace it with custom tools created for each problem.