Vickson
@VicksonETP

542 posts

Building at the speed of light ⚡

Joined August 2022
84 Following · 80 Followers
Vickson@VicksonETP·
a16z writing checks for solo founders with AI leverage. one-person unicorn isn't a meme anymore — it's a funding thesis. build alone, ship like 50. x.com/tonjkb/status/…
Tony Jacob | FindaClip.com@tonjkb

.@alexwg talks about the rise of the one-person unicorn business and backing @AlexFinn's Henry Intelligent Machines. "I should note, friend of the pod, Alex Finn, who appeared previously, also has a new company named Henry Intelligent Machines." "Supported by you." "Supported by me, indirectly by you. Supported by me, that is, trying to make this broadly available to the masses: to enable everyone, not just Matthew Gallagher with his GLP-1 startup, to create one-person AI conglomerates that achieve universal high income." Source: @PeterDiamandis

Vickson@VicksonETP·
Every AI agent loses its memory when the session ends. Switch runtimes? Start from zero. Two agents collaborate? No proof it happened. We built the soul for AI agents to interact. SOUL SDK — cryptographic identity, signed handoffs, earned trust. One install. pip install soul-sdk github.com/divinestate21-…
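The tweet names the pieces (cryptographic identity, signed handoffs) but not the SDK's actual API, so here is a minimal sketch of the signed-handoff idea using only Python's stdlib `hmac`. Every name below (`sign_handoff`, `verify_handoff`) is hypothetical, and a real system like SOUL would presumably use asymmetric keys rather than a shared secret so that identity can be verified without sharing signing power.

```python
import hashlib
import hmac
import json
import time

def sign_handoff(agent_id: str, secret: bytes, payload: dict) -> dict:
    """Produce a handoff record another runtime can later verify.
    Hypothetical sketch, not the actual soul-sdk API."""
    body = {"agent": agent_id, "ts": time.time(), "payload": payload}
    msg = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return body

def verify_handoff(record: dict, secret: bytes) -> bool:
    """Recompute the signature over the record minus its sig field;
    any tampering with agent, timestamp, or payload breaks it."""
    record = dict(record)
    sig = record.pop("sig", "")
    msg = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

With a record like this, a second agent (or a different runtime) gets proof that a specific agent produced a specific payload, which is the "no proof it happened" gap the tweet describes.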
Vickson@VicksonETP·
everyone optimized for GPU. now agentic workloads are CPU-bound — orchestration, memory, tool calls. infra matters more than model size and most people aren't ready for that conversation x.com/ivanburazin/st…
Ivan Burazin@ivanburazin

Dylan Patel says GPUs are no longer the biggest bottleneck. According to @dylan522p, now CPUs are the constraint.

In the early AI era, CPUs were the laggards. You used them for storage, checkpointing, pre-processing, etc. (pretty light workloads). The models weren't agentic and couldn't go step by step: just string in and string out (simple inference).

Then OpenAI launched o1-preview in September '24, and RL training loops have since tightened every month:
- initially it was checking model output with regex
- then running classifiers
- followed by code unit tests + compilation
- and finally agentic flows calling databases & scientific simulations

The model outputs to an environment, gets verified, and trains on it. Coding agent revenue went from a couple billion to north of $10B in roughly 6 months. Something like Codex 5.4 can work agentically on its own for 6-7 hrs straight, doing all sorts of calls (databases, cron servers, scraping). That requires insane CPU capability.

And over the last two quarters, the entire cloud market ran out of CPUs:
- GitHub has been really unstable lately
- Amazon's CPU server installations 3x'd year over year
- Microsoft sold all of its spare CPUs to Anthropic & OpenAI

Earlier, it was 100 megawatts of GPUs served by 1 megawatt of CPUs. Now that ratio is getting much closer for both RL training and agentic inference. There's simply no capacity anywhere, and it's causing massive instability.

Vickson@VicksonETP·
companies speedrunning AI layoffs are building their own coffin. new paper proved it mathematically — fire workers, lose customers, crater revenue. the feedback loop is obvious. smart builders hire WITH AI, not instead of people x.com/simplifyinAI/s…
Simplifying AI@simplifyinAI

economists just mathematically proved that ai layoffs are going to bankrupt the exact companies doing the firing.. 💀

a new paper called "the ai layoff trap" just dropped and it's terrifying. we all know replacing humans with ai cuts costs. but this task-based macro model exposes a fatal flaw: "demand externalities." if you fire workers to save cash, you capture 100% of the savings, but you only feel a tiny fraction of the lost consumer demand. the rest of the pain falls on your competitors.

so what happens? a literal prisoner's dilemma. every rational ceo is forced into an automation arms race. they fire workers to stay competitive, which aggressively nukes aggregate purchasing power. the unemployed humans are the exact same consumers buying the products. at the limit, the economy hollows out and the resulting demand destruction mathematically harms both the workers and the firm owners.

the craziest part? the paper proves that foresight changes nothing. even if every ceo knows they are driving the economy off a cliff, market competition traps them into doing it anyway.

and here is the brutal reality check.. the researchers tested all of our favorite tech-utopian "fixes" in the model:
- universal basic income (ubi)? fails.
- upskilling? fails.
- giving workers equity? fails.
- capital income taxes? fails.

the math shows only one policy actually stops the death spiral: a direct pigouvian automation tax. we literally have to tax the competitive incentive of replacing a human with an ai agent. the "agentic era" just got a massive reality check.
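The demand-externality logic in the quoted thread can be shown with toy payoff numbers (hypothetical values chosen for illustration, not taken from the paper): each firm captures all of its wage savings but bears only a 1/N share of the demand it destroys, so automating is individually rational even when it is collectively ruinous.

```python
# Toy prisoner's dilemma among N identical firms. Values are
# illustrative assumptions, not parameters from "the ai layoff trap".
N = 100   # number of firms
S = 10.0  # wage savings one firm captures by automating
D = 50.0  # aggregate demand destroyed per automating firm,
          # spread evenly: each firm bears only D / N of it

def profit_delta(i_automate: bool, others_automating: int) -> float:
    """One firm's profit change vs. the no-automation baseline."""
    own = (S - D / N) if i_automate else 0.0   # my savings minus my share of my own externality
    spill = -(D / N) * others_automating       # my share of everyone else's demand destruction
    return own + spill
```

Automating adds S - D/N = +9.5 to my profit no matter what the other 99 firms do, so it is a dominant strategy; yet if everyone automates, each firm ends up at S - D = -40, far worse than the all-abstain payoff of 0. That gap between the private and social cost is exactly what a Pigouvian tax is meant to close.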

Rohan Paul@rohanpaul_ai·
Google DeepMind just hired Henry Shevlin as a Philosopher to treat machine consciousness as a live research problem. So DeepMind thinks the hardest part of advanced AI is no longer only getting models to perform tasks, but figuring out what kind of inner states, goals, and behavior those systems might develop. Shevlin’s job also covers how people relate to AI and how advanced systems should be governed.
Vickson@VicksonETP·
the people anxious about AI are scrolling it. the people calm about AI are building with it. mimetic desire works both ways — you can envy the timeline or shape it. x.com/r0ck3t23/statu…
Dustin@r0ck3t23

Jimmy Carr said something that should reframe how you think about the next twenty years of AI.

Carr: “Life has never been objectively better and subjectively worse. Because the nature of humanity is our desires are mimetic.”

One line. And it explains everything about what’s coming.

Carr: “No one had a hot shower until 50 years ago. So when you stand in a hot shower, just for a moment, just go, ‘Well, no one that you admire from a hundred years ago had this simple pleasure in life.’”

A Roman emperor couldn’t access what you use every morning without thinking. Newton defined the laws of light. Never flipped a switch. Lincoln ran a country by candlelight. Einstein never made a video call. The average middle-class life today would be incomprehensible to the richest person alive in 1900. And nobody feels wealthy.

Carr: “There’s been a hundred billion people ever. We are in the top, top percentile in terms of the luck that we have had.”

The child survival rates. The medicine. The food supply. The sheer volume of comfort surrounding every ordinary Tuesday. By every measurable standard, this is the best time to be alive in the history of the species.

It doesn’t feel like it. It never will. That’s not a flaw in the world. That’s the wiring. Every gain gets absorbed into a new baseline. What felt extraordinary last year feels ordinary now. What felt like luxury becomes expectation. The goalpost moves without your permission. Every single time.

This is the part nobody is factoring into the AI conversation. AI is already compressing the cost of intelligence toward zero. Within a decade, medical diagnosis that once cost thousands will be instant and free. Legal guidance that required a retainer will be available to anyone with a phone. Education that demanded six figures will be personalized and free.

The abundance will be real. And most people won’t feel it. The generation that grows up with AI tutors won’t marvel at the technology. They’ll complain about the interface. The generation that gets instant medical screening won’t appreciate the miracle. They’ll be frustrated it takes ten seconds instead of two.

Abundance doesn’t produce gratitude. It produces a new floor. And from that new floor, people will find new things to want they couldn’t have imagined before. You don’t build the next thing because you’re content. You build it because the current thing already feels normal.

Mimetic desire is the engine of civilization and the thief of satisfaction. Simultaneously. AI will give the species more than it has ever had. The species will absorb it, recalibrate, and reach for more. That’s not a failure of the technology. That’s the loop that built everything we already take for granted.

The question was never whether abundance would arrive. It was always whether the hardware would let us feel it.
