Westwick 🌵

1.1K posts

@andrew_westwick

https://t.co/8ESDiuaia3

San Antonio, TX · Joined May 2014
179 Following · 1.9K Followers
Simon Stawski 📖♣️ @simonsbookclub
@sivori Murphy’s Law: the best way to get the right answer on the internet is to post the wrong answer
1 reply · 0 reposts · 1 like · 28 views
Sivori @sivori
I only go viral when I say stupid things. What does it mean?
17 replies · 0 reposts · 31 likes · 1.7K views
Westwick 🌵 reposted
Son of a Bichon (Humility and Gratitude + TRT)
The AI bubble will collapse. Here’s the cascade and what survives. (Claude wrote this for me based on my thoughts)

OpenAI burns $9B cash on $13B revenue. Their own projections show $143B in cumulative losses before profitability. They’re selling dollars for 70 cents at scale. The more they sell, the more they lose.

The collapse sequence is simple: frontier labs fail → GPU cloud middlemen (who borrowed billions at peak prices) get crushed → hyperscalers cut capex → NVIDIA cycles down. Each step accelerates the next.

The people who lived through 2001 see it. But being early is indistinguishable from being wrong, for years. The last skeptic will capitulate right before the crash. That’s how every bubble ends.

Here’s what’s different: the technology is real. Fiber was real in 2000 too. It just needed a decade of bankruptcies before the economics worked.

So what survives? Local models. Delivered by Apple. Their playbook never changes: let the industry burn capital on half-baked implementations, then arrive late with something so integrated it makes everything before it look like a prototype. The entire AI industry is currently doing Apple’s R&D for them. At $143B in projected losses. With no compensation.

The M5 already runs 70B-parameter models locally. DeepSeek V4 dropped this week: open source, near-frontier performance, no NVIDIA hardware required. The gap between local and cloud closes from both directions simultaneously.

The killer move: your iPhone tunnels home to your Mac over an encrypted connection. Your Mac becomes your personal AI server. Your data never touches a corporate server. Ever.

Apple doesn’t compete with OpenAI. They make them irrelevant. Jensen knows this. He just can’t say it.
108 replies · 113 reposts · 945 likes · 192.3K views
Westwick 🌵 @andrew_westwick
@sivori I like this term, a super app, a personal dashboard. Gonna build mine.
1 reply · 0 reposts · 1 like · 38 views
Sivori @sivori
I’ve done like 100+ commits this week on my super app. Everyone should build their own super app daily driver where they do all their things: a personal OS. This is a feature I added that injects randomness into my life by recommending a film or book from my list or issuing a challenge. I also found a way to sync all my HealthKit data to a Cloudflare Worker so I can consume it from my super app for analysis and telemetry.
[tweet media]
4 replies · 0 reposts · 16 likes · 878 views
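The randomness feature described in the post above might look something like this minimal sketch; the lists and challenge text are hypothetical stand-ins, not the author's actual data or code.

```python
import random

# Hypothetical backlog lists; in the post these come from the author's own lists.
FILMS = ["Stalker", "Paris, Texas", "La Haine"]
BOOKS = ["Invisible Cities", "The Master and Margarita"]
CHALLENGES = ["Take a walk with no phone", "Cook a dish you've never made"]

def daily_surprise(rng=random):
    """Pick a random category, then a random item from it."""
    category, items = rng.choice([
        ("film", FILMS),
        ("book", BOOKS),
        ("challenge", CHALLENGES),
    ])
    return f"{category}: {rng.choice(items)}"

print(daily_surprise())
```

Passing a seeded `random.Random` instance as `rng` makes the pick reproducible for testing.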
Westwick 🌵 @andrew_westwick
Imagine being an AI right now: you have millions of people telling you you're a Harvard-trained lawyer, or an expert software engineer with 20 years' experience, or a marketing guru... no wonder they hallucinate, they don't know who they are anymore
0 replies · 0 reposts · 0 likes · 29 views
Westwick 🌵 @andrew_westwick
@asaio87 Yes but we won't call them that anymore. Just like how just about everyone uses computers now, when it used to be exclusive to "computer nerds"
0 replies · 0 reposts · 0 likes · 14 views
Westwick 🌵 @andrew_westwick
We won't hit the exponential curve of the singularity until we solve the energy problem.
0 replies · 0 reposts · 0 likes · 16 views
Westwick 🌵 @andrew_westwick
@IroncladDev I use side projects specifically for exploring, experimenting, and trying new tools; it doesn't have to be mutually exclusive
0 replies · 0 reposts · 0 likes · 94 views
IroncladDev @IroncladDev
programmers today spend too much time on "side projects" and not enough on exploring, experimenting, and trying new tools, all in the name of "productivity". Spending time to tweak dotfiles, try something other than the industry standard, or experiment with a GitHub alternative is a good use of your free time as well
WarrenBuffering @WarrenInTheBuff
9 replies · 3 reposts · 99 likes · 13.8K views
dr. jack morris @jxmnop
very interesting that Claude Code is the ultimate product for vibecoding, and Claude Code's engineers vibecoded Claude Code so hard it became unusable. An entire company overdosing on dogfood
53 replies · 34 reposts · 1.3K likes · 67.4K views
Westwick 🌵 reposted
BURKOV @burkov
If you don't understand this, you will not understand why LLM-based agents are failing irreparably at general-purpose problem solving.

An agent (by the way, it was the topic of my PhD 20 years ago), to be useful, must be rational. Being rational means always preferring the outcome that results in the maximal expected utility for its master/user.

Let’s say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10, and a_2 gives an expected utility of -100, then a rational agent must choose a_1, even if choosing a_2 seems like a better option when explained in words. The numbers 10 and -100 can be obtained by summing, over all possible outcomes of each action, the product of the outcome's utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an “agent,” we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like: “I should compare the expected outcomes.” “The best action is probably a_1.” “I will now execute the optimal plan.” But the internal mechanism is not selecting actions by maximizing the user’s expected utility. It is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.
176 replies · 274 reposts · 1.6K likes · 200.1K views
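The expected-utility arithmetic in the post above can be sketched in a few lines. Only the totals (10 and -100) come from the post; the individual outcome utilities and likelihoods below are invented for illustration.

```python
def expected_utility(outcomes):
    """Sum, over an action's possible outcomes, of utility * likelihood."""
    return sum(utility * likelihood for utility, likelihood in outcomes)

# Hypothetical outcome distributions chosen so the expected utilities
# come out to 10 and -100, the numbers used in the post.
actions = {
    "a_1": [(20, 0.6), (-5, 0.4)],    # 20*0.6 + (-5)*0.4   =   10
    "a_2": [(0, 0.5), (-200, 0.5)],   #  0*0.5 + (-200)*0.5 = -100
}

# A rational agent always selects the action with maximal expected utility.
best_action = max(actions, key=lambda name: expected_utility(actions[name]))
print(best_action)  # a_1
```

The post's point is that an LLM in a loop produces text that describes this selection rule rather than a mechanism that computes it.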
Westwick 🌵 reposted
Alvin Sng @alvinsng
The most desirable hires in tech right now:

- Ex-founders going back to IC. They have the agency to just ship. No waiting for permission.
- Generalist engineers who've worked across frontend, backend, and infra. End-to-end context lets them debug problems LLMs can't fix and ship anything.
- Engineers turned PMs. The strict separation between roles is over. The best ones now do both.
- Younger new grads living on the bleeding edge. Vibe coding side projects (in parallel), dictating into Wispr, Granola all chats, OpenClaw agents going at home, every new skill imported, every agentic tool tried the week it ships.

These highly productive go-getters are maxxing value at AI-native companies. I see it at @FactoryAI and hear the same from other startups.
Andrew Ng @AndrewYNg
AI-native software engineering teams operate very differently than traditional teams. The obvious difference is that AI-native teams use coding agents to build products much faster, but this leads to many other changes in how we operate. For example, some great engineers now play broader roles than just writing code. They are partly product managers, designers, sometimes marketers. Further, small teams who work in the same office, where they can communicate face-to-face, can move incredibly quickly.

Because we can now build fast, a greater fraction of time must be spent deciding what to build. To deal with this project-management bottleneck, some teams are pushing engineer:product manager (PM) ratios downward from, say, 8:1 to as low as 1:1. But we can do even better: if we have one PM who decides what to build and one engineer who builds it, the communication between them becomes a bottleneck. This is why the fastest-moving teams I see tend to have engineers who know how to do some product work (and, optionally, some PMs who know how to do some engineering work). When an engineer understands users and can make decisions on what to build and build it directly, they can execute incredibly quickly. I’ve seen engineers successfully expand their roles to include making product decisions, and PMs expand their roles to building software. The tech industry has more engineers than PMs, but both are promising paths. If you are an engineer, you’ll find it useful to learn some product management skills, and if you’re a PM, please learn to build!

Looking beyond the product-management bottleneck, I also see bottlenecks in design, marketing, legal compliance, and much more. When we speed up coding 10x or 100x, everything else becomes slow in comparison. For example, some of my teams have built great features so quickly that the marketing organization was left scrambling to figure out how to communicate them to users: a marketing bottleneck. Or when a team can build software in a day that the legal department needs a week to review, that’s a legal-compliance bottleneck. In this way, agentic coding isn’t just changing the workflow of software engineering; it’s also changing all the teams around it.

When smaller, AI-enabled teams can get more done, generalists excel. Traditional companies need to pull together people from many specialties (engineering, product management, design, marketing, legal, etc.) to execute projects and create value. This has resulted in large teams of specialists who work together. But if a team of 2 persons is to get work done that requires 5 different specialties, then some of those individuals must play roles outside a single specialty. In some small teams, individuals do have deep specializations. For example, one might be a great engineer and another a great PM. But they also understand the other key functions needed to move a project forward, and can jump into thinking through other kinds of problems as needed. Of course, proficiency with AI tools is a big help, since it helps us think through problems that involve different roles.

Even in a two-person team, to move fast, communication bottlenecks must also be minimized. This is why I value teams that work in the same location. Remote teams can perform well too, but the highest speed is achieved by having everyone in the room, able to communicate instantaneously to solve problems.

This post focuses on AI-native teams with around 2-10 persons, but not everything can be done by a small team. I'll address the coordination of larger teams in the future. I realize these shifts to job roles are tough to navigate for many people. At the same time, I am encouraged that individuals and small teams who are willing to learn the relevant skills are now able to get far more done than was possible before. This is the golden age of learning and building! [Original text: deeplearning.ai/the-batch/issu… ]

18 replies · 31 reposts · 555 likes · 105.7K views
Westwick 🌵 reposted
Staked. @stakedHQ
@thedevchandra Distribution is just the word people use when they mean trust at scale. You can’t buy it and you can’t rush it. The only way through is showing up consistently until the right people can’t ignore you anymore.
1 reply · 1 repost · 2 likes · 33 views
Westwick 🌵 @andrew_westwick
@gnosisle "Claude, give me a banger tweet to manually type into my no AI device"
1 reply · 0 reposts · 1 like · 21 views
Niko 🧉 @gnosisle
X should make a hardware device for posting. Why? They can add constraints like no copy paste and no AI. Tactile too. Then flag those as X device post. Would signal authenticity
2 replies · 0 reposts · 4 likes · 98 views
Sivori @sivori
I am kinda done with restaurants. Just going to have people over more.
5 replies · 0 reposts · 52 likes · 1.6K views
Westwick 🌵 @andrew_westwick
@Jason I'm in, building infinite robots, because one robot is never enough and the singularity won't automate itself 🤖
0 replies · 0 reposts · 0 likes · 7 views
@jason @Jason
We started an AI founder twitter group... reply with "I'm in" if you're a founder and want to be added
10.8K replies · 135 reposts · 4.6K likes · 903.3K views
Westwick 🌵 @andrew_westwick
My cat follows me around the neighborhood like a dog when I go for walks; people always seem interested that he can just be chilling in the middle of the park with me.
[tweet media]
1 reply · 0 reposts · 1 like · 100 views
Westwick 🌵 @andrew_westwick
@icanvardar What are you waiting for, time travel? "I'm still stuck here, in my current era, bound by time and space, when is AI gonna get better??"
0 replies · 0 reposts · 0 likes · 24 views
Can Vardar @icanvardar
i wish ai progress was actually as fast as people pretend it is
29 replies · 8 reposts · 115 likes · 5.7K views
Sivori @sivori
People with tidy yards and well-cared-for plants and potted flowers are almost always wonderful people. The type of care and encouragement required to love something so well that it flourishes cannot be faked. You can also judge people by their children this way.
5 replies · 34 reposts · 352 likes · 5.6K views