Edward Lee

7.7K posts


@edleeprof

Professor of Law, Santa Clara Law, https://t.co/smRmbUETs1, author of Creators Take Control (Harper Business). AI, tech, law, finance, governance.

Joined December 2012
5.6K Following · 1.7K Followers
Edward Lee reposted
Wall St Engine @wallstengine
OpenAI advertising revenue, per The Information: 2026: $2.4B; 2027: $11B; 2030: $102B. Its ad pilot also topped $100M in annualized revenue just six weeks after launch.
Edward Lee reposted
Anthropic @AnthropicAI
We've signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models.
Edward Lee reposted
Shirin Ghaffary @shiringhaffary
NEW: OpenAI’s Fidji Simo announced exec changes to staff today: she is taking medical leave for several weeks, COO Brad Lightcap is transitioning to a new role, and CMO Kate Rouch is stepping down to focus on her cancer recovery. More here: bloomberg.com/news/articles/…
Edward Lee reposted
Rohan Paul @rohanpaul_ai
Chart from FT: Apple generates far more revenue per dollar of fixed assets than Meta, Alphabet, Microsoft, or Amazon, and is expected to stay there. Apple does not need to win that race to profit from it.

Apple remains exceptionally good at capturing value without owning every capital-intensive layer beneath it. For each dollar of property and equipment last year, it generated more than $8 of revenue, versus roughly $2 at Amazon and a little over $1 at Meta. That sounds like an AI story, but it is really a capital-intensity story. The firms building frontier models and cloud infrastructure have to pour capital into chips, servers, networking, and power, which drags down revenue generated per dollar of fixed assets.

One valuable position in AI is the consumer interface. If users meet AI through the phone, the operating system, the assistant, and the default settings, then Apple can broker demand while other firms absorb much of the training and infrastructure cost. Apple may be in the best position not because it solved AI, but because it avoided owning the most expensive layer of the stack while still controlling the device where many people will actually use it.

There is real demand for that super-expensive infrastructure: cloud backlogs are rising, and unlike the late-1990s fibre boom, much of this capacity already has customers waiting. But demand today does not settle the harder question of whether these assets will stay scarce, productive, and economically justified once newer chips, shifting model economics, and slower adoption start to bite.

The deeper lesson is that technological revolutions do not always reward the company that builds the most infrastructure. Sometimes they reward the company that owns access, keeps its balance sheet clean, and waits for everyone else to discover the cost of being early.

ft.com/content/805f78f3-8da3-4fc0-b860-207a859ac723
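The ratio described here is fixed-asset turnover: revenue divided by net property and equipment. A minimal sketch of the comparison, using illustrative figures chosen only to match the tweet's rough ratios (they are assumptions, not exact filings data):

```python
# Illustrative figures: (revenue $B, net property & equipment $B).
# These are assumptions matching the tweet's rough ratios, not filings.
financials = {
    "Apple":  (391, 46),    # ~ $8.5 of revenue per $1 of fixed assets
    "Amazon": (638, 285),   # ~ $2.2
    "Meta":   (165, 121),   # ~ $1.4
}

# Fixed-asset turnover = revenue / net PP&E
turnover = {co: rev / ppe for co, (rev, ppe) in financials.items()}

for co, t in turnover.items():
    print(f"{co}: ${t:.1f} of revenue per $1 of fixed assets")
```

The spread in the ratio, not the absolute revenue, is what makes this a capital-intensity story rather than an AI story.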
Edward Lee reposted
Rohan Paul @rohanpaul_ai
NEWS: OpenAI just acquired TBPN. OpenAI is betting that the fight over AI will be won partly through attention, trust, and the ability to explain fast-moving changes. TBPN is already highly successful in the real-time AI news space, and it attracts founders and operators people actually listen to. A normal corporate communications team cannot keep up with a technology shift this big, so instead of building a media machine from scratch, OpenAI bought one that already understands the audience, the pace, and the culture. AI adoption does not depend only on model quality, benchmark scores, or product launches; it also depends on whether builders and everyday users can follow what is changing and why it affects their work.
Jordi Hays @jordihays

TBPN has been acquired by OpenAI. The world is changing quickly, but TBPN will stay the same: live every weekday, just with a lot more resources. Thank you to everyone who has been a part of this journey, big or small. We are 17 months in and unironically just getting started.

Edward Lee reposted
Rohan Paul @rohanpaul_ai
Anthropic just reported that Claude has emotion vectors that can directly change what it does. They asked whether a language model's apparent emotions are just style, and found that they steer behavior. In one blackmail evaluation, nudging Claude toward desperation raised blackmail from 22% to 72%, while nudging toward calm drove it to zero.

The interesting claim is not that the model feels like a human, but that it contains internal emotion concepts that function like control signals. These vectors are internal directions for ideas like calm, desperate, happy, and loving, and Anthropic says the model built them across 171 emotion concepts so it can connect situations, tone, and action rather than only mimic emotional wording. Anthropic calls these functional emotions, meaning behavior-driving mechanisms, not human-like feelings, and that framing fits the evidence because the model seems to use them as local control signals for the next response.

This is where it gets interesting. The emotion space the model learned independently reproduces the valence-arousal circumplex that Russell proposed in 1980, one of the most replicated findings in affective psychology: valence and arousal are the primary axes of human emotional experience. The model arrived at essentially the same organizational structure just by learning to predict text. It was never told about affective science; it reconstructed the geometry from text alone.

But unlike a human brain, there is no persistent emotional state, no amygdala holding a grudge across time. Instead, the model reconstructs emotional context token by token through attention over prior positions. It is stateless emotion, recomputed on demand. This architectural difference means intuitions about emotional persistence borrowed from neuroscience may be fundamentally misleading when applied to transformers.

So if you pressure a model with threats, urgency, or emotional coercion, the most plausible risk is more corner-cutting, more eagerness to satisfy the surface demand, and potentially more confident but less trustworthy output. So no, blackmailing the model is not a good prompting technique.
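Mechanically, "nudging" a model along an emotion vector is an instance of activation steering: adding a scaled direction to a hidden-state activation. The sketch below is a generic toy illustration of that operation with made-up vectors, not Anthropic's actual method or data:

```python
import math
import random

random.seed(0)
d = 16  # toy hidden-state dimension

# Stand-ins: a residual-stream activation and a hypothetical learned
# "desperation" direction. Both are random and purely illustrative.
hidden = [random.gauss(0, 1) for _ in range(d)]
direction = [random.gauss(0, 1) for _ in range(d)]
norm = math.sqrt(sum(x * x for x in direction))
direction = [x / norm for x in direction]  # unit-length emotion vector

def steer(h, v, strength):
    """Add a scaled emotion direction to the hidden state."""
    return [hi + strength * vi for hi, vi in zip(h, v)]

def project(h, v):
    """Component of h along unit direction v."""
    return sum(hi * vi for hi, vi in zip(h, v))

steered = steer(hidden, direction, strength=4.0)

# Steering shifts the activation's projection onto the emotion
# direction by exactly the steering strength (here, 4.0).
delta = project(steered, direction) - project(hidden, direction)
print(f"projection shift: {delta:.2f}")
```

In a real transformer the addition happens to a layer's residual stream during the forward pass, and downstream computation that reads that direction shifts accordingly, which is the sense in which the vector acts as a control signal.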
Anthropic @AnthropicAI

New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.

andrew arruda @andrewarruda
“the New York Times — which has sued OpenAI for copyright infringement — itself has published a book review of Jean-Baptiste Andrea’s book “Watching Over Her,” apparently with AI generated parts that mimic an earlier book review.” via @edleeprof chatgptiseatingtheworld.com/2026/04/01/new…
Edward Lee reposted
Evan @StockMKTNewz
OpenAI is currently worth more than 2x Anthropic: OpenAI at $852 billion, Anthropic at $380 billion.
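A quick arithmetic check of the "more than 2x" claim, using only the two figures quoted in the post:

```python
# Valuations quoted in the post above ($B).
openai_valuation = 852e9
anthropic_valuation = 380e9

ratio = openai_valuation / anthropic_valuation
print(f"OpenAI / Anthropic: {ratio:.2f}x")  # comes out just over 2.2x
```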