Yusuf Young

436 posts

Yusuf Young

Yusuf Young

@yusufyoung

Building Aether – an AI that joins your company, learns how you work, and spawns agent teams to run it. Exited FunnelBud (Sweden's #1 SMB CRM)

Stockholm, Sweden Katılım Ocak 2009
153 Takip Edilen152 Takipçiler
Sabitlenmiş Tweet
Yusuf Young
Yusuf Young@yusufyoung·
The purpose of life is to exercise your will. Without will, there is nothing. Then we might just as well let more intelligent AI take over life on earth. Soulless, meaningless intelligence without any will of its own. No, it's our will that is the reason for living. The exercising of it is what makes us worth preserving, and what makes your life worth living. So ask yourself: What do I want? Not what should I, or what's best, or how can be like or have what that peson has? No - simply, what do I want. Find something to achieve that excites you. Make that your mission. Then focus on that for the next few years. Until something excites you even more. Even if you didn't finish the first one.
English
1
0
7
221
Yusuf Young
Yusuf Young@yusufyoung·
Now 36 unsupervised (+17 in 7d). Growing ~+46/mo, ~373 by end of June. • 1,000 between Jul 16 – Aug 1, 2026 • 1,800 between Jul 29 – Aug 17, 2026 $TSLA robotaxipredictor.com
Yusuf Young tweet media
English
0
0
0
42
Yusuf Young
Yusuf Young@yusufyoung·
Now 32 unsupervised (+13 in 7d). Growing ~+23/mo, ~128 by end of June. • 1,000 between Aug 23 – Nov 2, 2026 • 1,800 between Sep 16 – Nov 30, 2026 $TSLA robotaxipredictor.com
Yusuf Young tweet media
English
0
0
0
35
Yusuf Young
Yusuf Young@yusufyoung·
The 5 stages of cashflow freedom: - Legacy freedom cashflow: Enough wealth to generate cashflow to support family and investments into something that surives you - Personal freedom cashflow: Enough wealth to generate cashflow to free your own time fully - Savings cashflow: Enough "per hour" cashflow to both save and either save a portion or work less to start investing in something that can generate more in the future - Survival cashflow: Just enough to get by and live within your means
English
0
0
0
8
Yusuf Young
Yusuf Young@yusufyoung·
Tesla expands unsupervised robotaxi to Phoenix Now 25 unsupervised (+5 since yesterday). Growing ~+18/mo, ~52 by end of May. • 1,000 between Sep 1 – Nov 15, 2026 • 1,800 between Sep 26 – Dec 14, 2026 robotaxipredictor.com
Yusuf Young tweet media
English
0
0
0
51
Yusuf Young
Yusuf Young@yusufyoung·
There's no technical solution to creating a secure agent that doesn't share sensitive data. Pattern-based detection (SSNs, API keys) only solves the easy part. The hard part: categorizing sensitive information that has no pattern. You can use LLMs to judge, but that's still judgment (just delegated). The autonomy-security tradeoff: more autonomy = more trust required in the system's judgment. Example: NVidia's NemoClaw kills autonomy (it has to). Parallel to human organizations: trust employees → autonomy; don't trust → approval loops. There's no other choice: You have to be in the loop, trust human judgement (slow, costly), or trust LLM judgement (can be as good or better as human judgement, designed correctly, but blast radius is way higher).
Yusuf Young@yusufyoung

x.com/i/article/2048…

English
0
0
0
55
Yusuf Young
Yusuf Young@yusufyoung·
@wholemars I built this graph that predicts future growth if we assume the past growth is the start if an exponential. It looks like we'll reach 1000 around October at this pace. robotaxipredictor.com
Yusuf Young tweet media
English
0
0
0
31
Yusuf Young
Yusuf Young@yusufyoung·
With yesterday's 2 additional Robotaxi's added, and the 2 added a few days ago, the latest 75 percentile timeline for reaching @JOBhakdi's 1800 forcing function shifted to Nov 28! Let's see if it continues at this pace the next 1-2 weeks. If it does, certainty will increase a lot!
Yusuf Young tweet media
English
1
0
0
29
Yusuf Young
Yusuf Young@yusufyoung·
@JOBhakdi , I'm genuinely worried about what will happen when TSLA merges with SpaceX though. 1) From what I've been able to gather, Elon really does seem to have incentive to keep TSLA low before the merger. He wants control (he's talked about wanting control before), and the only way he'll keep his preferential shares is if SpaceX buys Tesla, and the only way that can happen is if TSLA stays low before the merger. 2) After the merger, Robotaxi scaling stock benefits will be roughly half as beneficial to us stock owners, given double the company size. THoughts?
English
2
0
12
809
Jo Bhakdi
Jo Bhakdi@JOBhakdi·
1) Automotive gross margin is 20%. Robotaxi gross margin will be 70-80%. When Robotaxi hits automotive revenue, Tesla will have quadrupled its profit. 2) At $1/mile, there would be demand for tens of millions of Robotaxis 3) that means the stock would be in a situation of 4x higher profit with 10-100x growth in front of it - with no hurdles and no competition, only limited by manufacturing capacity 4) which means the stock would be at something like 10x of where it is today
Tobias Goebel (Unsupervised)@tpgoebel

1 million robotaxis. That's what Tesla needs to break even with their automotive revenue. Check this out: To reach its current annual automotive revenue of ~$70b with ride-hailing, Tesla would need to provide 70b annual miles (at $1/mile, which many say is the goal) – which happens to be pretty much exactly what Uber likely did in 2025. Assume the robotaxis do ~70,000 miles per year (a possibly realistic high-utilization target with 16–20 hours of daily operation) and you would need a fleet of exactly 1 million robotaxis.

English
109
135
1.1K
164.2K
Yusuf Young
Yusuf Young@yusufyoung·
I've now updated the URL to: robotaxipredictor.com
Yusuf Young@yusufyoung

I made a Robotaxi scaling predictor: yusufhgmail.github.io/tslaRobotaxiPr… It's taking actual data and simply adding a graph assuming the line we have so far is the start of an exponential increase. If we continue scaling at this pace and this is the start of an exponential, we will cross 1000 Robotaxis Seb-Dec 2026 and we will reach your "forcing function re-rating" number 1800 robotaxis Oct 2026-Feb 2026.

English
0
0
0
26
Yusuf Young
Yusuf Young@yusufyoung·
I think one of the key differences in the brain of people who are effective vs. those who aren't is the ability to keep a big context in working memory. This is a function of two things: - Actual working memory - i.e. how big is it, raw power basically. - The ability to "compress" information (i.e. "deep understanding") so that you can fit more "value" into the limited working memory that you have. I think no. 2 is far more valuable than no. 1. For example, I'm currently reading a piece of information from someone pitching me an idea. It's kind of all over the place, bunch of ideas but no coherent "whole". It's a bunch of loose ideas thrown out there. The "whole" would be a coherent answer to the following questions, where each piece is a "fit" with every other piece: - Customers' problem - Who has this problem - How do we know they have this problem - How are they solving it today and what's the cost - What's our solution and why is it better - What's our pitch to sell it (in calls, in landing page) - How will we verify we'll get leads A coherent whole would be a summarized answer that tackles all of these questions as a whole, where each part synergises with the others. But to do that, you'd be required to hold the whole in mind. To get this "whole" in mind is difficult. It's almost never possible to get in "one pass", rather you have some embryo, or a part of it, for example you discover a problem customers are having. Then you go from there and refine each part until it becomes a whole - or you discover that you aren't able to do so. My method for doing this is to actually write it down. I start with the problem, the idea, whatever came first to my mind as "the idea". Then I try to write, as short as possible, the answers to each of the above questions. As I do so, I get more and more "synthesis" in my brain, and I'm able to cram more and more "understanding" into my limited memory - it's like the large pieces of information get "chunked" into smaller and smaller pieces, where each piece holds more information, and I don't need "as many words" to keep the same amount of "actual information" in my mind. I refine and refine, shorten and shorten, and get more and more clarity in this document, until I have "a whole". At some point, this leads to diminishing returns. At that point, it's better to stop what you're doing and start the verification phase. For example, if you spend many hours writing this, but you didn't actually even verify if customers have the problem, that might be a waste of time. You can view each piece in your document as having "a level of certainty", and "a level of how much it matters to the rest if it's correct / not correct". If something matters a lot and has low certainty (for example "the problem" is almost always something that matters a lot, and which you often need to verify to be sure), then you are probably wasting time refining the doc until you've actually verified this. --- By the way, this is one of those "rants" that I could have done several passes over, and refined further. But to be honest, I'm just focused on other stuff right now so I just wanted to get it out there. For example, is this about how to create a business idea? About how to get better at presenting things? How the brain works? Who is it for? None of those things are very clear. So I guess I'm making the same mistake I am accusing that other person is for. But this also shows you that I don't care too much about this. And the same goes for the other person that pitched the idea to me. He simply doesn't care too much - or he would have spent the effort to actually make his pitch more coherent, clear and summarized. So I'm not gonna work with him - but I did tell him to create that summary, and if he does, I might take it more seriously. Rant over :)
English
0
0
0
24
Yusuf Young
Yusuf Young@yusufyoung·
@JOBhakdi @JOBhakdi wdu think? in line with expectations? how do different scenarios in this graph affect the stock price?
English
0
0
2
38
Yusuf Young retweetledi
Yusuf Young
Yusuf Young@yusufyoung·
I made a Robotaxi scaling predictor: yusufhgmail.github.io/tslaRobotaxiPr… It's taking actual data and simply adding a graph assuming the line we have so far is the start of an exponential increase. If we continue scaling at this pace and this is the start of an exponential, we will cross 1000 Robotaxis Seb-Dec 2026 and we will reach your "forcing function re-rating" number 1800 robotaxis Oct 2026-Feb 2026.
English
1
2
5
348
Jo Bhakdi
Jo Bhakdi@JOBhakdi·
Obviously we are all speculating - but I do not think Robotaxi scaled rollout is 12 months out. When I connect the dots, I still do believe it's imminent and we have a short moving towards 10k+ end of the year unsupervised. Cybercab production remains my biggest conviction driver: they just started real production yesterday. Why? Because they know they can keep scaling now. Slowly - maybe to 30 or 40 in May - but then faster.
javerse@javerse1

@JOBhakdi Sure but the 12 month outlook is very questionable. It would not be unfair to assume full on robotaxi could still be more than a year away. Remains a long term bet. Automotive/FSD could save the day if FSD is widely approved. Optimus will take even longer.

English
68
22
340
23.7K
Jo Bhakdi
Jo Bhakdi@JOBhakdi·
If $TSLA finishes above 395 today, that's all I need.
English
32
4
265
17.6K
Yusuf Young
Yusuf Young@yusufyoung·
Any agent that acts on behalf of its user walks the tension between trust and usefullness. Without trust, the user will not share all, and without knowledge about the user (intimate knowledge), the agent cannot be useful. Tbh this is the same with a human. Therefore, trust building is essential. The only way to do that is to build an agent that actually shares only what it should, nothing else. There's btw also a tension between sharing freely and being useful. So sharing is essential for a useful agent. Imo none of the agents today have solved the problem of how to actually trust the agent. @openclaw etc. None. The solution is this: It cannot be solved on a technical level. This is just the nature of the problem. It will always require judgement by someone - either by the user (in which case the agent is less autonomous) or the agent (in which case you have to trust its ability to make judgement calls enough to let it make them). What can be done is to give the LLM a way to judge which "injected prompt" can be trusted and which may be injected, and why. This is the only solution. You can also nerf it in various ways if the nerfing is done in such a way that it maximizes "judgement reliability" per "capability reduction". There are various ways to do this without actually compromising autonomy. But fundamentally, it always comes down to allowing the LLM to make that judgement call. This is no different from how you trust humans and human systems. I'm gonna release my full research, analysis, conclusions and agent built on these principles soon.
English
0
0
0
24
Yusuf Young
Yusuf Young@yusufyoung·
Another solution for the agent autonomy vs. privacy problem. You can treat it like systems of humans. You have agents that are incredibly powerful and can do basically anything. But you have multiple agents working together, where each agent has access to different data. That means if someone wants to, for example, attack the security through prompt injection, they have to attack all agents simultaneously, which is much harder. And another consequence is that there are multiple checkpoints before something can be shared. And since each agent has a smaller surface to share things on and less information to share, the damage from an incorrect sharing, or violation, is also much smaller. This is probably the solution I'm going with, since it's how humans and systems with humans work. Combined with some other security features that preserve autonomy. To implement something like this, we'd need: - Super simple agent spawning - Ability to give different agents different information access
Yusuf Young@yusufyoung

I think privacy in AI agents is a dead end if you try to solve it technically. I've been designing Aether's privacy model for weeks now, and the deeper I go, the clearer this gets. You can't classify information as "private" or "not private." It doesn't work. Take a user's name. On a business card, it's fine. Linked to the fact that the user is in therapy, it's private. The name didn't change. The context did. And "the user is in therapy" isn't inherently private either. It's private because the user doesn't want others to know. Social judgment, embarrassment, vulnerability. That's a preference, not a property of the information. Follow any piece of "private" information to its root and you arrive at the same place: someone's preference. A financial transaction: sharing it with the counterparty is expected. Sharing it with a stranger? Sometimes fine, sometimes catastrophic. Sometimes you'd actually want to share it, to signal status or further your interests. Same data, different answer depending on the person and the moment. There is no universal rule that says "health information is private." In some cultures it isn't. Even within a culture, the same information in the same context may be private to one person and not to another. The only variable is what they prefer. We initially had a requirement: "the agent must be able to distinguish what is private from what is not." That requirement was wrong. It assumes privacy is a derivable property of data. It isn't. No classification algorithm, no tagging system, no rule engine can determine what is private for a given user in a given context. Somewhere, someone's judgment is required: the user's (at the cost of autonomy) or the agent's (which the user must trust). I've decided to go with autonomy and build trustability into Aether instead. The agent learns your preferences over time through feedback and alignment, the same way you'd build trust with a human assistant. This is different from NVIDIA's NemoCloud, for example, which takes the opposite route: require explicit approval for every share, every channel, every piece of information. It's safe. It's also not autonomous. I think the harder path is the right one.

English
0
0
0
24
Yusuf Young
Yusuf Young@yusufyoung·
I think privacy in AI agents is a dead end if you try to solve it technically. I've been designing Aether's privacy model for weeks now, and the deeper I go, the clearer this gets. You can't classify information as "private" or "not private." It doesn't work. Take a user's name. On a business card, it's fine. Linked to the fact that the user is in therapy, it's private. The name didn't change. The context did. And "the user is in therapy" isn't inherently private either. It's private because the user doesn't want others to know. Social judgment, embarrassment, vulnerability. That's a preference, not a property of the information. Follow any piece of "private" information to its root and you arrive at the same place: someone's preference. A financial transaction: sharing it with the counterparty is expected. Sharing it with a stranger? Sometimes fine, sometimes catastrophic. Sometimes you'd actually want to share it, to signal status or further your interests. Same data, different answer depending on the person and the moment. There is no universal rule that says "health information is private." In some cultures it isn't. Even within a culture, the same information in the same context may be private to one person and not to another. The only variable is what they prefer. We initially had a requirement: "the agent must be able to distinguish what is private from what is not." That requirement was wrong. It assumes privacy is a derivable property of data. It isn't. No classification algorithm, no tagging system, no rule engine can determine what is private for a given user in a given context. Somewhere, someone's judgment is required: the user's (at the cost of autonomy) or the agent's (which the user must trust). I've decided to go with autonomy and build trustability into Aether instead. The agent learns your preferences over time through feedback and alignment, the same way you'd build trust with a human assistant. This is different from NVIDIA's NemoCloud, for example, which takes the opposite route: require explicit approval for every share, every channel, every piece of information. It's safe. It's also not autonomous. I think the harder path is the right one.
English
0
0
0
53
Yusuf Young
Yusuf Young@yusufyoung·
In the future, every person and every company will have at least one AI agent working for them. Probably an agent swarm. But there's a huge problem that has to be solved first. These agents will know intimate details about you. Your finances, your health, your conversations, your plans. And they'll be talking to the outside world constantly, on your behalf. How do you ensure they don't leak things you want kept private? This sounds like a technical problem. It isn't. I've concluded that privacy in AI agents is inherently unsolvable with tech. The reason: what counts as "private" is a preference, not an objective property of information. Your name on a business card is fine. Your name linked to therapy sessions is private. Same information, different context. A transaction you're making can be shared with the other party but not with a stranger. The privacy isn't in the information itself, it's in the relationship between the information, the recipient, and what you want. There is no universal rule that says "health information is private." In some cultures it isn't. In some contexts you share it freely with your doctor, your accountant, your support group. Privacy is contextual, relational, and preference-driven. So what do you do? There are three options. Option A: Require approval for everything. Every channel, every share, every piece of information gets a user sign-off. This is how NemoClaw from NVIDIA works, which is supposedly their solution to the privacy problem. But it's safe. It also sucks. Nobody can actually use an agent like this. It's a glorified permission dialog, not an autonomous agent. Option B: Build narrow, process-specific agents that only handle one thing. They don't know your whole life, so they can't leak it. This works. It also gives you a bunch of stale, inflexible processes. Your grandmother will never use this. And you lose the real promise of agents, which is a generalist that can help with anything. Option C: The agent makes judgment calls. You trust it to judge whether sharing X with Y in context Z is in your interest, and you build a feedback loop where it learns your preferences over time. Here's the key thing about Option C. If you want a generalist agent that knows everything about you and can help with anything, there is literally no other way. There is no hard-coded methodology, no code you can write, that would hard-prevent it from sharing private things. This is simply not logically doable. It's not a technical limitation. It's that the category "private" doesn't exist as a stable thing you can encode. The same information is private in one context and not in another, private from one person and not from another, private today and not tomorrow. So every sharing decision has to be a judgment call. Either you make it (Option A), or you restrict what the agent knows so the question doesn't come up (Option B), or the agent makes it (Option C). There is no fourth option where code solves this. Code can't solve this because the problem isn't technical. The problem is that privacy is a preference, and preferences vary by person, by context, by recipient, by moment. How do you make Option C work? You build an agent whose judgment calls you can trust. You treat it like a human assistant. With humans, sometimes you hire someone who already knows the norms. They've worked in similar industries, similar cultures. They show up with the right instincts because they've been trained in contexts where preferences look like yours. That's the point of making the right hire. But if you make the wrong hire, you spend time training them, and mistakes get made along the way. This is simply how it works. The training happens somewhere, either in their past or with you. With agents, you're doing the same thing. The agent makes calls, you give feedback, it learns what you want shared and what you don't. The judgment gets aligned with your preferences through training, not rules. Either the agent arrives pre-aligned with your context, or you train it, or some combination. There's no shortcut around the alignment problem. I've been stuck on this problem for about a week, thinking through how to make privacy actually work. I'm working on Aether's privacy model and wanted to get this right. I've looked at different approaches and methods, and how others handle it: - NemoClaw from NVIDIA - Meta's privacy model - Microsoft's approach to agent privacy They all fall into Option A or Option B. That's just not good enough. We all know the security issues with OpenClaw. But if you wrap NemoClaw around it to fix the privacy problem, you end up back at Option A. The agent becomes incredibly annoying to use. Basically worthless. You might as well go with Option B instead, which is easier to implement. There is nothing out there today that works according to these principles. So I'm designing Aether's privacy model around Option C. Now that I have it designed on paper, I'm going to start implementing it. If anyone is interested in discussing the technical aspects, feel free to comment or contribute your thoughts. I'm looking forward to releasing this as an open source project as soon as I can. Working hard on it.
English
0
0
1
37