The Chairman

1.4K posts

@tshields

Coach, part-time exec, and board member. Partner at @AgFunder. Formerly co-founder of Yieldex and NetGravity. I bring the chair.

San Mateo, California · Joined May 2007
1.2K Following · 2K Followers
Rob Leclerc
Rob Leclerc@robleclerc·
the laws dictate the scope of the government's powers and limitations, not corporations'. it's the same laws that limit technology exports. and yes, this is a technology that is in the national interest. the government, through the american people, is giving anthropic a social license to operate. that is not an inalienable right, and with it come obligations. if you refuse to meet those obligations, then because the government has the monopoly on violence, it has the right to use it. collectively we have agreed to this bargain, and so we must also accept it, and not just when it's convenient. anthropic doesn't have to do business in the us. they are free to move to brazil or singapore. but if they want the benefit of operating here, they need to meet their obligations.
English
1
0
0
14
Rob Leclerc
Rob Leclerc@robleclerc·
this is really a debate about power: who has it, and who decides who gets to pull the trigger and when. the fact is, the american government is elected by the people and for the people. it has a constitution and laws and process. and so while anthropic has developed these models, it still owes its existence to this country, and it is ultimately still subject to the state, especially when it comes to national security. could this be abused by the government? yes. will it be abused? probably. but that still doesn't mean the government should abdicate ultimate decisions about national security to an unelected corporation. we still live in a democracy, there are still checks and balances, and the american people did not elect anthropic to be the final arbiter here. so on balance, the decisions they are weighing lie with the government as long as they are lawful. and if anthropic wants to fight it, that's within their rights, but the government has the right to penalize anthropic for not complying.
Dustin@r0ck3t23

Dario Amodei just gave his first interview since the Pentagon blacklisted his company. The toll is visible on his face. He was asked one question. What would you say to the President right now? He didn't hesitate.

Amodei: "We are patriotic Americans. Everything we have done has been for the sake of this country."

Anthropic built their models to defend America. They were the first AI lab cleared for classified military systems. They wanted to help the warfighter. But the Pentagon demanded unrestricted access to fully autonomous weapons and mass surveillance of American citizens. Amodei drew the line.

The government responded with emergency Cold War powers. A supply chain designation normally reserved for foreign adversaries. A six-month federal phaseout ordered from Truth Social.

Amodei: "When we were threatened with supply chain designation and Defense Production Act, which are unprecedented intrusions into the private economy, we exercised our classic First Amendment rights to speak up and disagree with the government."

The administration framed Anthropic's refusal as anti-American. Amodei's response dismantled that framing in one sentence.

Amodei: "Disagreeing with the government is the most American thing in the world."

Here is the deeper paradox nobody in Washington wants to say out loud. We are in a geopolitical race against autocratic adversaries who use AI for mass surveillance of their own citizens and autonomous weapons with no human oversight. The Pentagon demanded that Anthropic build those exact capabilities for America.

Amodei: "The red lines we have drawn, we drew because we believe that crossing those red lines is contrary to American values."

You cannot defeat authoritarianism by adopting its methods. You cannot defend the open society by forcing private companies to build its antithesis under threat of wartime emergency powers.

Anthropic held the line. Got blacklisted for it. And came out the other side saying the same thing they said going in.

That is what it actually looks like to mean it.

English
2
0
1
261
The Chairman
The Chairman@tshields·
Good points. We don’t want corporations to be allowed to just build nuclear bombs as well. But in this case they are trying to build something *less destructive* than the government wants. Does the government also get to force an explosives maker to make more powerful bombs and blacklist them if they don’t?
English
1
0
0
20
Rob Leclerc
Rob Leclerc@robleclerc·
first, this is a false equivalency. anthropic wanted the job, but they also wanted final say. but it's not their decision to make. you don't get permission to build the most powerful technology the world has ever seen and think that you're going to have full autonomy and final say without government control and oversight. ai now falls under national security, and so their refusal is like dodging the draft or disobeying orders, which, by the way, can get you court-martialed or life in prison (that's a lot like getting blacklisted). there are consequences.
English
1
0
0
16
The Chairman
The Chairman@tshields·
@atzydev @flaviocopes I use and love happy coder. Took me and Claude a bit to get the server running myself. Would love to learn more and maybe contribute.
English
0
0
1
40
flavio
flavio@flaviocopes·
All those Claude Code phone workflows don't work for me if I have to run CC on a VPS. I want it to run all on my Mac, and connect from my phone from time to time to check how the terminal is doing, whether it needs input, etc. Who's built the app for this?
Hieu Dinh@hieudinh_

x.com/i/article/2012…

English
224
8
387
109.4K
The Chairman retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Something I think people continue to have poor intuition for: the space of intelligences is large, and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization-pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r's in strawberry) because failing at a task does not mean death.

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (a continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens, and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of the tribe in the jungle and a lot more solve the problem / get the upvote.

LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing, because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly, like an animal.
English
742
1.4K
11.5K
2.6M
The Chairman
The Chairman@tshields·
@chrisfralic Has anyone digitized these into a modern format? Would love to read these classics again with those extras.
English
1
0
0
29
Chris Fralic
Chris Fralic@chrisfralic·
The Voyager Company, founded in 1984 in New York by Bob Stein, was a groundbreaking multimedia publisher best known for inventing early forms of eBooks called Expanded Books. These were developed using Apple's HyperCard platform and released in 1991–1992. The series included digital versions of titles such as Jurassic Park, The Complete Hitchhiker's Guide to the Galaxy, and The Annotated Alice. Each "Expanded Book" featured search tools, bookmarks, margin notes, and even interactive audio clips. Voyager published about 60 of these Expanded Books and sold them on floppy disks or CD-ROMs for Mac computers.
Chris Fralic tweet media
English
3
0
7
849
John Danner
John Danner@jwdanner·
Hey all, it's been a minute! I'm really excited to announce that @adamjnadeau and I are getting ready to open our first @flourish_educ school in Nashville, Tennessee, where Adam lives. Adam was a superstar school and regional leader with @RocketshipEd and I couldn't be more excited to start this with him.

A little background :) When I left Rocketship, I was totally burnt out on politics. Charter schools are a constant battle because they are beholden to districts for authorization. I certainly never imagined starting another school when I left. Two things changed.

First, AI happened. In 2022, when ChatGPT changed the world, I stopped actively investing from my venture fund and began starting companies again. It was as big a change as the development of the Internet, and I wanted to be on the side of "AI for good". I've built two software companies since then. @ProjectReadAI, which I started with @vivramak, is an amazing AI co-teacher for the Science of Reading, now used in 30,000 classrooms. @SparkspaceAi, which I started with @davidvinca1, is to writing what Project Read is to reading, helping kids learn to write with AI coaching and feedback. I'm really happy with how well both companies have done. If used with the right dosage, they will drive student learning in a way that the previous generation of edtech tools could only dream of.

At the same time, the way AI is being used in schools is still incremental. And unfortunately, because technology is being used incrementally, there is no market for companies like Project Read or SparkSpace to do more. What's needed is a rethink from the ground up of what it means to be in a classroom - for students, for teachers, for parents. That's what Flourish is about.

The second thing that changed since Rocketship is the creation of Education Savings Accounts (ESAs). As a Dem, I long believed that charter schools were the most fair way to help the school system innovate. But I was wrong.

ESAs allow private schools to take government payments for the students who attend their schools. We think there is a huge opportunity to open private schools which serve kids like the ones we served at Rocketship, who had no opportunity to go to private school before. For non-educators, it might seem like a small distinction between charters and private schools. After all, aren't charters independent from school districts? Yes they are, but they are approved and overseen by those same school districts, which means they need to fit within that system. Private schools are truly independent; they can do whatever they want as long as parents love what they do. So ESAs are actually a very big change for those of us who start and run schools, in terms of the innovation and independence we can have.

Flourish is really a technology company pretending to be a school. We have to create the labs we need to iterate our technology until we can realize our vision of the joyful classroom. We are building our own AI because no one has built something like this (there is no market for it yet). With Flourish, we are going to bring AI to classrooms and figure out what works. By "works", I mean that kids and parents love it, classrooms are joyful, teaching becomes manageable, and kids learn the academic and non-cognitive skills which are going to help them be happy, productive adults.

I believe that all of the skills we take for granted in schools - perseverance, agency, teamwork, empathy, what we call social emotional skills (SES) - will be the ones that help us navigate the age of AI most effectively, as opposed to narrow academic pursuits. We are a project-based learning school, engaging children in open-ended, comprehensively rigorous projects which require those social skills without losing the academic rigor we had at Rocketship. One strength both Adam and I have from our days at Rocketship is rigor around measuring the things we think are important.

For decades, these social emotional skills have been hard to measure. We think that is about to change because of AI. We've built "Coach", a robot with no screen that talks to students as they work. Right now, it's like having a helpful older student sit with every student in the class all day long. So when a student gets stuck or confused, is having a hard time organizing, or just needs clarification or proofreading, instead of putting their hand up and waiting for the teacher, they can just talk to their Coach. And right out of the box, it is able to solve the vast majority of problems students encounter. Just this part is going to significantly lower frustration and increase learning in the classroom, for both students and teachers.

The second aspect of Coach is that it becomes the onramp for small-dose remediation as students work. So in addition to being a helpful friend that gets a student unstuck, we want it to start providing insights into the content, taking advantage of teachable moments when a student needs to figure something out and is most receptive. Our hypothesis is that small-dose instruction in the moment, as students are doing projects, is the most effective method, because motivation is high and there is opportunity to immediately apply concepts and understandings as part of their project work.

The third aspect of Coach we want to build is for when it becomes clear that more than a small dose of remediation is needed: we will set aside a separate time to do that without ruining the flow of the student's work. That's likely to be the Coach using a set of traditional edtech programs as tools to help students learn what they need. Coach will get experience with hundreds or thousands of tools and become better and better at matching students with the lessons they need. And as foundation models become more capable, Coach will develop more and more capabilities to instruct on its own, without tools.

The final aspect of Coach that we are very excited about is assessment. Because the Coach has literally hundreds of hours per year of conversations with each student, it knows a lot about them. We believe we can train the Coach to assess students not just on academic skills, but also on social-emotional skills (SES). Those are two big unlocks.

For academic skills, teachers probably spend a quarter of their time doing some type of assessment. There are low-stakes assessments like spelling tests or homework papers, higher-stakes assessments like formative assessments that help you teach better, and then big end-of-year assessments which take days. We think Coach can do better at all of them, virtually eliminating the need for assessment and giving our teachers a continuously updated dashboard of every student in their class and what they need.

Assessment of social emotional skills is an even bigger game changer. Even though most adults would say that SES are more important than any specific academic knowledge, schools assess academics because they know how to do that. Saying that you were going to teach SES pre-AI often meant that you were not going to be rigorous about assessment, because it was too hard. We believe Coach is going to have a very good assessment of things like grit, agency, communication, executive function, and the other SES which are most important. Just as Coach can teach academic skills in the course of conversation, it can do the same with SES. "John, I know you are frustrated here, but what you are trying to do is hard. Can you use a little more grit and try to get this done?"

That's a lot to do with Coach, and we'll learn a lot trying to realize this vision. I'm pretty sure having an AI sitting with every student to support their learning all day long is going to be a very positive change for both students and teachers.

We are building this as a for-profit. We are a public benefit corporation so that we can hold our mission of making learning joyful with the same importance as returns to investors. One thing I learned about non-profits from building Rocketship is that they are anti-scale. They are amazing when you start out, but once they get bigger, they are so hard to grow: fundraising is 10x harder, great executives are much harder to find, and there is constant pressure to sacrifice growth for quality, when the real answer is to balance the two correctly.

We think that building a set of lab schools around the country is really important, so that we are our own first users and can perfect the Coach. But eventually the labs aren't our business. We want to empower teachers to start their own micro-schools, parents to work with their children, and traditional schools to use Coach to supercharge their learning. By doing this from first principles, we think our solution will be unique.

Ultimately, we have reached a point in the AI age where we can't allow our schools to be mediocre, where they must evolve. We have to figure out how to help children be amazing human beings, and to have the skills they need to enter a world where getting a job is much less important than making a job.

We're on a search for our CTO, who will be our third co-founder, and I'm always looking for great generalist founders and early execs, so if you know folks who might want to join us in our quest, let me know. We will also start a push for teachers to leave their classrooms and start Flourish schools by the end of this calendar year, so if you have a friend who might be interested, let them know about us and give them a leg up. We funded this through a check from my fund Dunce Capital to start, and will raise some more at the end of this calendar year to prove out our approach.
John Danner tweet media
English
4
1
34
2.9K
Ari Paparo
Ari Paparo@aripap·
I wrote a book. It’s called Yield: How Google Bought, Built, and Bullied Its Way to Advertising Dominance. I think you’ll like it. Let me tell you more 🧵
Ari Paparo tweet media
English
38
24
196
30.2K
caesararum, BS, DOGS
caesararum, BS, DOGS@caesararum·
if we get to the point where every human with the slightest whiff of agency and competence can boil off into space, within 200 years, the Earth will be a backwater province
English
74
44
2K
95.1K
caesararum, BS, DOGS
caesararum, BS, DOGS@caesararum·
every time I think of space colonization, I think about evaporative cooling. you take a place (e.g. Europe) and you open up a frontier where a man can make his fortune (e.g. America), and over a long enough timeframe, people will sort into agentic and non-agentic regions
English
149
145
4.9K
1.1M
GREG ISENBERG
GREG ISENBERG@gregisenberg·
This might be a pretty unpopular post. Story time: I learned the hard way that startup equity is worth exactly $0 until the day someone actually buys it from you.

I sold a company for $4M in stock. I was told by the exec team I was getting a deal. People I looked up to in Silicon Valley. "Worst case you cash out for $4M". It ended up being worth $0. I was young, foolish, and knew nothing about this Silicon Valley world (I was from Canada, and the richest people I knew were doctors or lawyers). The worst part is I even got into debt selling my company, because I had to pay taxes on the sale, so I borrowed money to pay for it. That's a story for another tweet.

Many years later, I got an offer from WeWork to buy my company. They told me the same story. My stock would be worth millions. I said just give me a cash deal. Learned the hard way. When WeWork was crashing, I saw so many people who thought their equity was worth millions, and it became almost worthless overnight. People's dreams shattered.

Private stock is never guaranteed to be worth anything. That's why I now tell every founder I meet: cash is truth. Stock is a story. The beauty of startups is that private stock COULD change your life. But think of it as a BONUS rather than a sure thing. That gives you a sober way to make decisions about which companies to join, or whether you want to start your own.

The reason I'm sharing this is I wish someone had told me this. I hope someone sees this and it helps them rethink or even challenge what they are working on. It can't hurt to at least go through this thought exercise. (Note: nothing against Blake, I think he's awesome, this is just my POV on startup equity.)

Don't get drunk on equity. Crazy to me that this is even a remotely controversial opinion.
Blake Anderson@blakeandersonw

We are hiring a SwiftUI developer for @10x_app $5-15k / month cash, 1-2.5% equity (will be worth millions). Extremely fast paced. $5k for referral.

English
76
32
787
167.2K
The Chairman
The Chairman@tshields·
@patio11 I know of a company in stealth that is developing this now. Sorry I can’t share more yet but stay tuned.
English
0
0
0
44
Patrick McKenzie
Patrick McKenzie@patio11·
This is one of my sleeper picks for most important technologies of the mid century. (I thought we’d have it rolled out ~everywhere by end of decade; that now feels overly optimistic.)
Works in Progress@WorksInProgMag

Far UVC can cut airborne bacteria by 98.4 percent, and could do the same for viruses, preventing diseases spread in public spaces. But it is held back because it is unpatentable, which means it is unproven, unregulated, and untrusted. We can fix this. worksinprogress.news/p/flipping-the…

English
43
110
1.4K
166.5K
The Chairman
The Chairman@tshields·
@seaoz @r_y_a_n_c Hah. I was early to read Hugh Howey’s short stories, back when the series was called “Wool”. Great stuff.
English
0
0
1
37
The Chairman
The Chairman@tshields·
“Because it’s there” is a great reason to go to Mars. But if you want to save humanity from extinction, there are better ways. Wrote a new blog post, but apparently I'm supposed to put the link in the first reply, so look for it there.
The Chairman tweet media
English
2
0
5
192
The Chairman
The Chairman@tshields·
Here's the link: medium.com/@tom.shields/is-mars-the-best-way-to-save-humanity-0f264d849fed
English
0
0
3
94
The Chairman
The Chairman@tshields·
Got @browser_use working with local Deepseek on my Mac today and it's...not fast. Got it to do a few easy things, but it doesn't work with a surprising number of sites. I guess it feels like GPT-2 level - you can sorta see how it's going to be amazing, but it's not there yet.
English
0
0
2
123
The Chairman
The Chairman@tshields·
Have to say I’m blown away by @cursor_ai with Claude 3.5 Sonnet. After a couple days I can now implement features in minutes that would have taken hours before. I know I’m a little late to the party but holy cow.
English
1
0
3
123
The Chairman
The Chairman@tshields·
@thesamparr Did it a year ago and was just reflecting on how much of it stuck - a lot. One week to get what seems like years of therapy plus a handful of really useful techniques you can take away. Highly recommend.
English
0
0
0
74
Sam Parr
Sam Parr@thesamparr·
Has anyone done The Hoffman Institute? Was it worth it? Awesome? Not great? Seems pretty wild. Many say it's life-changing. 6 days. No family, no phone. Intimidating. But interesting.
English
96
3
273
254.6K