DK

144 posts

@silenceandmagic

founder of Fintella Labs https://t.co/ZiQOTvkgT0 Personal Context Capsule for people using AI, Context Ground for agents acting on their behalf.

San Francisco, CA · Joined August 2010
337 Following · 46 Followers
DK@silenceandmagic·
@MohapatraHemant Agreed. But right now we’re only simulating personalization. Chat memory, email connectors, and the like aren’t a real representation of the person. Until we have a proper, persistent model of the person in the AI stack, it’s still mostly pretend.
0 replies · 0 reposts · 0 likes · 85 views
Hemant Mohapatra@MohapatraHemant·
There's a lot of talk about "services as product" but not enough understanding of why this is happening / just how powerful it is.

Product as a "concept" came about when the same thing needed to work for everyone. The marginal cost of servicing an n+1 customer = 0 when you are building a product (even physical products). But people have always wanted a service: something custom just for me. E.g. when you can make a Coke just for me, the right colour, sugar content, etc., with my face on the can, it becomes a service. Service only worked for very, very high-priced items like a Maserati. Ford said I'll give you any color as long as it's black. Classic product. Maserati is a service. Ford is a product. Low-price, high-volume items couldn't be services because customizing them took too long, for every product in history. Except people -- because people are self-learning, self-adjusting, self-guided. People are service. That's why services firms have always scaled ~linearly with bench strength.

But now with AI, you can make every product into a service. The token self-edits, self-learns, self-adjusts. A token is a service in the guise of a product. The marginal cost of this customization is going to trend to 0. That's a huge unlock for finally converting a low-price, high-volume product into a low-price, high-volume service.

This isn't a new phenomenon - medium.com/@MohapatraHemant/capex-opex-supercycles-part-ii-888ecf29f025 would be a fun read for those interested in digging into this, written in 2024 (that's where the quote is from).
10 replies · 2 reposts · 80 likes · 6.3K views
DK@silenceandmagic·
@Grady_Booch But the transition is going to be painful for a lot of people.
0 replies · 0 reposts · 0 likes · 51 views
Grady Booch@Grady_Booch·
The number of people in the world who will need to know the details of successfully training state-of-the-art LLMs will be a tiny fraction of the number of people in the world who will simply use out-of-the-box LLMs and/or treat them as a minor subsystem in the context of a larger software-intensive system. Both skill sets are necessary; both have very different purposes. In the fullness of time, LLMs will become a commodity, and the vibrant heat and smoke and noise and murmuration of large piles of cash you see now will settle down.
Stanford NLP Group@stanfordnlp

There are two paths to learning the details (aka “tricks” or “secrets”) of successfully training state-of-the-art language models:
1. Get a job at one of the leading language model companies
2. Complete all the coursework of CS336
We’re not sure which is harder to do 🤔

18 replies · 12 reposts · 142 likes · 17.6K views
DK@silenceandmagic·
@gerstenzang 2.5% conversion to invested. Cold outreach or warm intro?
0 replies · 0 reposts · 1 like · 281 views
Sam Gerstenzang@gerstenzang·
I've posted this before but it's always motivating to me to go back and look. Our first attempt to raise money for B&W + Moxie.
13 replies · 6 reposts · 197 likes · 79.3K views
DK@silenceandmagic·
By “layer” I mean user-owned context infrastructure that sits between a person’s real-life data and the apps or agents acting for them. With permission, it connects to digital traces, finds the person’s routines, constraints, preferences, recent shifts, decision patterns, and more, then turns that into a safe, portable context object. A personal assistant can use it to understand the user over time. A shopping or booking agent can use it to rank, filter, and act with a better sense of what actually fits.
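As a rough sketch of what such a safe, portable context object could look like in code (all field names and the permission scheme here are hypothetical illustrations, not Fintella's actual schema):

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical shape of a user-owned, portable context object.
# Fields mirror the description above: routines, constraints,
# preferences, recent shifts. Names are illustrative only.
@dataclass
class ContextCapsule:
    user_id: str
    routines: list[str] = field(default_factory=list)       # recurring patterns ("gym Tue/Thu")
    constraints: list[str] = field(default_factory=list)    # hard limits ("no flights before 9am")
    preferences: dict[str, str] = field(default_factory=dict)
    recent_shifts: list[str] = field(default_factory=list)  # changes agents should weight highly
    scopes: list[str] = field(default_factory=list)         # domains the user has permitted

    def for_agent(self, scope: str) -> dict:
        """Return only what an agent in this scope is permitted to read."""
        if scope not in self.scopes:
            return {"user_id": self.user_id}  # no permission: identity only
        return asdict(self)

capsule = ContextCapsule(
    user_id="u123",
    constraints=["aisle seat", "no red-eyes"],
    scopes=["travel"],
)
print(json.dumps(capsule.for_agent("travel"), indent=2))
```

A booking agent granted the `travel` scope sees the full capsule and can rank options against the user's constraints; an agent without that scope gets nothing beyond an identifier, which is one simple way to keep the object permissioned.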
0 replies · 0 reposts · 0 likes · 11 views
DK@silenceandmagic·
One of the most important alignment papers this year. Safety alignment has kept models from doing harm, but it never really asked them to do good, and this is the first serious attempt to treat human flourishing as a technical target rather than a side effect. One thread the paper leaves open is that flourishing is long-term by definition, which means a model can’t support it without seeing a person over time, and that makes a portable, continuous representation of the user a structural requirement rather than a feature. Which is exactly the layer we’re building at Fintella Labs.
Séb Krier@sebkrier

If anyone builds it, everyone thrives. Over the past decade, a lot of important work on AI alignment has focused on avoiding harm. But freedom from harm isn't the same as freedom to flourish. In this paper, we introduce 'Positive Alignment'. A positively aligned agent is one that helps us navigate our own value trade-offs, builds our resilience, and acts as a scaffold for human flourishing. Doing this without slipping into top-down, technocratic paternalism is the great design challenge of our time. We think a lot more research is now needed to explore this frontier: how do we align models that actively help us thrive? Amazing work by @RubenLaukkonen, @drmichaellevin, @weballergy, @verena_rieser, @AdamCElwood, @996roma, @FranklinMatija, @shamilch, @_fernando_rosas, @scychan_brains, @matybohacek, @sudoraohacker, and others. arxiv.org/abs/2605.10310

1 reply · 1 repost · 2 likes · 387 views
DK@silenceandmagic·
@jankulveit User modeling: stated preference is predictive (user forecasts self), revealed behavior is selective (shaped over years, often unaware). Most personalization treats the first as ground truth.
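The stated-vs-revealed distinction can be made concrete with a toy sketch (my framing, not anyone's shipped algorithm): treat the user's stated preference as a prior and update it with revealed behavior, i.e. what they actually chose when shown options.

```python
from collections import Counter

# Toy illustration: blend a stated-preference prior with revealed behavior.
# behavior_weight controls how much actual choices override self-description.
def blend_preferences(stated: dict[str, float], chosen: list[str],
                      behavior_weight: float = 0.7) -> dict[str, float]:
    counts = Counter(chosen)
    total = sum(counts.values()) or 1
    revealed = {k: counts.get(k, 0) / total for k in stated}
    return {k: (1 - behavior_weight) * stated[k] + behavior_weight * revealed[k]
            for k in stated}

# The user *says* they mostly want documentaries; their last ten picks disagree.
stated = {"documentary": 0.8, "comedy": 0.2}
watched = ["comedy"] * 8 + ["documentary"] * 2
print(blend_preferences(stated, watched))
# comedy ends up ranked above documentary once behavior is weighted in
```

A personalization system that treats only `stated` as ground truth would keep recommending documentaries; weighting revealed behavior flips the ranking, which is the gap the tweet points at.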
0 replies · 0 reposts · 1 like · 72 views
Jan Kulveit@jankulveit·
If you see something highly unlikely around, a good question is "is there some optimisation going on?" Plausibly the next best question is: "Is the optimisation predictive or selective?" Once you get the pattern, it can't be unseen; it also compresses some classics to ~one line. lesswrong.com/posts/GhhNswGB…
English
4
6
57
5.8K
DK
DK@silenceandmagic·
@scychan_brains Powerful work, and beautifully timed!
0 replies · 0 reposts · 1 like · 41 views
DK@silenceandmagic·
Observe→act is exactly the right frame. The hidden requirement is a continuous representation of the human that travels across systems, not just better sensors per product. A paper out today from DeepMind, Anthropic, OpenAI and Oxford lands on the same point from the alignment side: supporting a human over time needs a longitudinal substrate the model can read before acting. That’s exactly the layer we’re building at Fintella Labs. Worth applying? 🤓
0 replies · 0 reposts · 2 likes · 85 views
Kenan Saleh@kenanhsaleh·
Proactive AI Agents

Today’s AI products are reactive. You give the model a prompt, it responds with an answer. These are useful, but I’m excited about products that take this further and shift the paradigm from “ask → answer” to “observe → act."

These agents will continuously monitor context in the background across all of your connected tools and data, predict what matters, and take action before being asked to do so – much like a human does. So instead of you prompting the model, the model will start prompting you. Examples here could include agents that remind you about tasks you forgot to complete, resolve customer issues before support tickets are filed, or debug and ship code fixes automatically.

This shift represents a new paradigm where AI products behave more like humans and less like tools. We’re already starting to see this dynamic with products like OpenClaw, Poke, and more - and we’ve only scratched the surface of capabilities here.

We’re accepting applications for the next cohort of @a16z @speedrun – If you’re building the next generation of AI products, apply online.
103 replies · 23 reposts · 380 likes · 67.9K views
DK@silenceandmagic·
@dara_venture Gut isn’t magic, it’s just better data. It reads revealed behavior, what someone actually does, tone, hesitation, how words match the body. A deck is stated preference, what someone chooses to present. Same gap exists in how AI reads users today.
0 replies · 0 reposts · 1 like · 22 views
DK@silenceandmagic·
@jaltma The inverse is also true. Researcher-founders won’t round off the parts that don’t fit the pitch arc. The signal often lives in what they refuse to simplify.
0 replies · 0 reposts · 0 likes · 183 views
Jack Altman@jaltma·
One of the big changes in venture that Bennett and Kevin discussed is needing to understand a new type of founder -- researchers, which historically didn't always make for great founders but are obviously a huge part of the AI era. "You have to listen so carefully...every word from a great founder will have so much meaning and intention about what they're gonna build."
Jack Altman@jaltma

My guests on Uncapped this week are @kevinhartz and @BennettSiegel, co-founders of the early stage VC firm A*. They've backed companies like Notion, Mercor, Ramp, Decagon, Similie, and many more. They also announced a new $450m fund today. We discussed the state of venture capital firms in the current AI cycle, what it means for seed specialists, trends with great founders, and what they're seeing in AI.

Timestamps:
(0:00) Intro
(0:25) The A* Capital story
(1:16) Why big funds went into seed
(7:50) The mother of all bubbles
(10:46) Why founders are getting younger
(13:00) Mapping talent, not markets
(16:31) The rise of AI researcher founders
(19:16) Why seed investing is so hard
(22:54) Concentration and venture returns
(27:34) The AI rollup craze
(31:15) AI vs traditional software
(33:15) Robotics and the future of AI
(35:39) What’s next for A*

10 replies · 12 reposts · 113 likes · 32.7K views
DK@silenceandmagic·
@pmddomingos Right, but there’s a second memorization question that gets less attention. Models can recall and paraphrase the world. They can’t do either about the user. That’s a separate gap.
0 replies · 0 reposts · 1 like · 111 views
Pedro Domingos@pmddomingos·
Memorization does not imply overfitting. Overfitting is strictly about what happens on non-training data. So, e.g., just because LLMs memorize data doesn’t make them stochastic parrots. What matters is what they do with it, and they typically paraphrase it quite appropriately.
17 replies · 5 reposts · 76 likes · 5.1K views
DK@silenceandmagic·
@matybohacek One of the more important AI papers I’ve read this year. The most interesting part for me is the layer it points toward. The future of alignment may not only live inside the model.
0 replies · 0 reposts · 1 like · 98 views
Maty Bohacek@matybohacek·
AI alignment has been almost exclusively focused on safety applications (i.e., avoiding harms). Today, we’re thrilled to introduce a complementary direction that explores how AI systems can be aligned, in a pluralistic way, around human flourishing as the guiding principle.
5 replies · 7 reposts · 62 likes · 6.6K views
DK@silenceandmagic·
@sebkrier you point to longitudinal memory and behavioral proxies as necessary, but where should that signal actually come from? Did passive cross-domain data come up as an input?
0 replies · 0 reposts · 1 like · 72 views
Séb Krier@sebkrier·
If anyone builds it, everyone thrives. Over the past decade, a lot of important work on AI alignment has focused on avoiding harm. But freedom from harm isn't the same as freedom to flourish. In this paper, we introduce 'Positive Alignment'. A positively aligned agent is one that helps us navigate our own value trade-offs, builds our resilience, and acts as a scaffold for human flourishing. Doing this without slipping into top-down, technocratic paternalism is the great design challenge of our time. We think a lot more research is now needed to explore this frontier: how do we align models that actively help us thrive? Amazing work by @RubenLaukkonen, @drmichaellevin, @weballergy, @verena_rieser, @AdamCElwood, @996roma, @FranklinMatija, @shamilch, @_fernando_rosas, @scychan_brains, @matybohacek, @sudoraohacker, and others. arxiv.org/abs/2605.10310
87 replies · 217 reposts · 1K likes · 280K views
DK@silenceandmagic·
Wow. Powerful work! One piece worth highlighting: positive alignment depends on telling reflective values from impulsive ones, and long-term goals from short-term preferences. That can’t come from stated preferences alone. It needs a behavioral substrate that sees a person over time. We’re working on exactly this layer at Fintella Labs, building it from cross-domain signals of real life rather than what users declare, so models can read who someone actually is before deciding how to act.
0 replies · 0 reposts · 1 like · 393 views
DK@silenceandmagic·
payment is solved. approval flow with user confirmation works. the real trust gap is whether the AI knows you well enough to put the right two seats in front of you. wrong shortlist and you’ll click no every time. we’re building exactly that layer at fintella.io: a behavioral capsule any agent can read.
0 replies · 0 reposts · 0 likes · 575 views
Marques Brownlee
Ok genuine question: Would you actually trust an AI with your credit card to execute this in one click?
1.2K replies · 134 reposts · 7.7K likes · 475.9K views
DK@silenceandmagic·
@craigzLiszt opposite take. when you’re actually an expert in something, you see how convincingly LLMs get it wrong there. that should make you more skeptical of the answers you can’t check, not less.
2 replies · 1 repost · 9 likes · 140 views
Craig Weiss@craigzLiszt·
ai is your personalized expert in everything you’re not an expert in
85 replies · 9 reposts · 247 likes · 6.2K views
DK@silenceandmagic·
Tom Griffiths in the Guardian: human intelligence is an adaptation to constraints. Short lives, small brains, mouth noises. AI has none of those constraints, so it arrives at decisions in a fundamentally different way. If intelligence were one scale, AI would scale up and eventually understand us from the inside. It won’t. AI sees the world differently and will keep seeing it differently. That’s the case for a representation layer. A bridge between two kinds of minds, not a shortcut between them. theguardian.com/books/2026/may…
0 replies · 0 reposts · 1 like · 123 views
DK@silenceandmagic·
I really like the framing that behavior comes before categories. Curious how you think about the stage before behavior is legible. The signals you describe are early, but already observable: communities, waitlists, intense usage, users teaching each other. What about markets that are still pre-behavioral, where there is no clear pull yet, but the current framework is visibly reaching its ceiling and a new behavior feels inevitable? Is that simply too early, or do you use a different lens for those cases?
0 replies · 0 reposts · 0 likes · 43 views
Emily Bennett@emilybenn12·
Early Stage Markets Rarely Look Like Markets

Everybody loves a good market map. It’s seductive to think that you can compress the world into tidy grids with logos neatly sorted into categories, each square representing a company that has raised meaningful capital. For investors, these maps are useful tools to help think through investment opportunities. They are also dangerous for founders. By the time a market is legible enough to be mapped, the most valuable entry points have already closed. The best founders are not waiting for a sector to mature enough that a VC can toss it onto a 2x2. Generational companies are built by recognizing patterns of behavior that precede the sector map entirely. In my work at a16z speedrun, the most promising opportunities almost never arrive looking like opportunities. They arrive looking strange.

Behavior Comes Before Categories

Markets begin as behavioral shifts in small, overlooked communities. It may be a few hundred developers who can’t stop talking about a new tool. Or maybe there are a few thousand users engaging with an app in ways that seem totally indecipherable to outsiders. The usage numbers are modest, and if you were just glancing at the product, you would probably ignore it. What matters is the intensity of the behavior. There is a meaningful difference between an app that people open often and an app that changes how people operate. A world-changing company has to, by its very definition, induce structurally new ways of working, being, creating, existing.

Consider OpenClaw. An open-source project built for developers, it required terminal knowledge and real technical fluency. It was not used by everyone. But for those who adopted it, OpenClaw became a fundamental part of their workflow. It reinvented the way they conducted work. What also made OpenClaw significant was its downstream effects. At speedrun we have already started seeing a new generation of pitches built on the conceptual foundation it propagated. Founders now have proof that agents can be held to outcomes, not merely tasks completed. Within a week or two of OpenClaw’s launch, we had pitches for making these systems accessible to non-engineers. Then, as the ideas permeated, we started seeing companies proposing agents that could autonomously run entire business functions, from legal to HR. The agentic-workforce thesis now discussed at every conference accelerated out of a small developer community restructuring their work around a tool most of the industry had never heard of. But as a founder, you don’t have to wait for something as popular as OpenClaw to come along.

How to Spot an Early Market

Working with really early-stage companies at speedrun, you start to get a gut sense for whether something is being pulled in by real demand or just pushed out, well before the usual metrics tell you much. What we try to get founders to look for is simple: are there signs that demand exists even before the product is fully there? The specifics vary, but a few patterns show up a lot.

First is what happens before launch. It is not just about racking up signups. Plenty of subpar products with good marketing campaigns can do that. But is there real intent behind the interest? You can see this in everything that forms around the waitlist. People share it, talk about it, and bring others in without being asked. That energy is coming from the market.

Second is early community. Sometimes people start gathering around a product before it’s even live, and the conversation keeps going on its own. No one from the company is propping it up. People show up because it connects with something they’ve been looking for. Users start creating explainers, tutorials, or threads about the product before there’s any official documentation. They’re doing the company’s marketing without being asked, because they want others to find it.

Third is how intensely a small group uses it once they get access. You’ll see a handful of users spending a surprising amount of time with the product, not because they have to, but because they want to. From there, one of two things usually happens. Either it replaces tools they were already using, or they start bending it into new use cases the founders did not plan for. Enough of these happen and you know you have something primordial and powerful on your hands.

Building for What You Cannot Yet See

One common failure mode I see from founders is when a team gets stuck building “one more feature” before showing their app to users. The founders I have seen get this right do something that sounds limiting but is actually the opposite. They pick ten users, maybe fewer, and they go deep. They become almost unreasonably attentive to how those specific people work and what they actually need. And the insights that come out of that kind of closeness are usually generalizable. What ten power users care about tends to map, at least directionally, onto what ten thousand eventual users will care about. Narrowing your focus and shipping live features constantly is how you earn the right to go broad.

The Map and the Territory

Market maps will keep getting published. They’re useful abstractions, but they’re definitionally lagging. By the time something is legible enough to map, the underlying behavior has already stabilized and the earliest forms of leverage have already been captured. The next market worth building in is forming right now in a community, or a product, or a pre-order page that no one has yet thought to categorize. Whether you see it depends entirely on whether you are paying attention to behavior or waiting for a label. For founders, that means building the habit before you need it. You have to haunt the weird corners before they’re legible: the Discord servers with no business model, the subreddits full of complaints about a workflow nobody has named yet. You’re looking for people who have already changed how they hope to work and just haven’t been handed the right product yet.
17 replies · 7 reposts · 60 likes · 4.4K views
DK@silenceandmagic·
@alliekmiller The cross-domain edges are where it gets interesting. We’re working at one of them, turning the densest passive trace of a life into structured context a model can actually use. You’d be surprised how much of a life sits in bank data. Fintella.io/about
0 replies · 0 reposts · 0 likes · 26 views
Allie K. Miller@alliekmiller·
Talked to a friend at a top AI lab. Their whole team is former journalists, training models on poems, summaries, and creative writing. I use AI every day and can see that it tends to flatten my writing, remove key nuggets, generalize specificity, and just sound devoid of the human oomph. I'm not against AI writing but there's clearly a gap. Will continue to watch where the frontier labs are hiring (gaming? physics? banking and pe? organizational behavior?). That is basically the map of where the models or harnesses are still weak.
45 replies · 5 reposts · 91 likes · 11.8K views
DK@silenceandmagic·
Models have memory now, but they only remember the version of you that you bothered to type. The version you typed and the version you live are different things. Recent work on AI alignment finds that models predict a person's choices more accurately from behavioral data than from written self-description. The principle generalizes. How someone describes themselves and how they actually decide are different inputs, and the second is usually more predictive. arxiv.org/abs/2603.29317
1 reply · 0 reposts · 1 like · 64 views