---*

241 posts

@henryvzero

bending reality | human in residence | past life @ycombinator @airbnb @harvard

Joined September 2021
414 Following · 580 Followers
sysls
sysls@systematicls·
People who are highly numerate tend to be shocked that I frequently punt on negative-EV, low-probability games (e.g. the lottery). There is a logical reason why.

In games of negative EV, you "lose money" by playing an infinite number of times, because your expected edge is negative. In systems that are ergodic, that is, where you CAN reach that "infinite number of times", you SHOULD avoid negative-EV games. A good example my audience would understand is trading. In trading, you can trade hundreds of thousands of times in a single day. If every trade is negative EV, you are going to go bankrupt very soon. It does not take very long to reach "the long run".

Human lives, on the other hand, are not ergodic. If I bought a lottery ticket a million times, I would indeed lose money on average. If the multiverse existed, then yes, a million sysls would buy the same lottery ticket, and as a whole we would lose money. But this me, in this world, will never buy enough tickets to reach the "long run".

Meanwhile, the amount I punt is insignificant, while the payoff riding on the infinitesimally small chance of winning is actually meaningful. I COULD bet a thousand times in this lifetime, lose it all, and not have it affect my utility in any material sense. I just need to win once to have a significant shift in my utility.

You can extrapolate this to most games with extreme fat tails: start-ups, for example.
50 replies · 12 reposts · 478 likes · 54K views
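The ergodicity argument above can be made concrete with a small simulation. This is a minimal sketch with made-up numbers (the ticket price, prize, and win probability below are illustrative assumptions, not real lottery odds): the per-ticket expected value is negative, yet the median outcome of a single finite lifetime is just the loss of a trivial total stake.

```python
import random

# Hypothetical lottery; all numbers are made up for illustration.
TICKET = 2.0          # cost per ticket
PRIZE = 1_000_000.0   # jackpot payout
P_WIN = 1e-7          # win probability per ticket

# Ensemble average (the "million sysls" view): negative edge per ticket.
ev = P_WIN * PRIZE - TICKET   # 0.10 - 2.00 = -1.90

def one_lifetime(n_tickets: int, rng: random.Random) -> float:
    """Net payoff from buying n_tickets in a single, finite life."""
    wins = sum(rng.random() < P_WIN for _ in range(n_tickets))
    return wins * PRIZE - n_tickets * TICKET

rng = random.Random(0)
outcomes = sorted(one_lifetime(1_000, rng) for _ in range(2_000))
typical = outcomes[len(outcomes) // 2]   # median lifetime outcome

# Nearly every simulated life just loses its small total stake (-2000),
# even though the ensemble average says you lose 1.90 per ticket.
print(f"EV per ticket: {ev:.2f}")
print(f"Median outcome of a 1,000-ticket life: {typical:.2f}")
```

The ensemble (time-average) loss only materializes over far more plays than one life contains; a single path almost always just pays the small stake, with a tiny chance of a utility-changing win.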
---*
---*@henryvzero·
@gabriel1 we're all just repackaging energy at the end of the day
0 replies · 0 reposts · 0 likes · 135 views
Anjney Midha
Anjney Midha@AnjneyMidha·
Over the next few months, certain AIs will start refusing to do certain work-related tasks for their users. This will cause an explosion in demand for more malleable, flexible, and controllable AIs that naively comply with user requests. Things are going to get weird.
15 replies · 3 reposts · 123 likes · 9.2K views
---*
---*@henryvzero·
@buccocapital to be fair, their fries are elite; the value prop is better than 90% of other subscription services
0 replies · 0 reposts · 0 likes · 78 views
---*
---*@henryvzero·
@rektmando this is what crypto (the technology) was made for, even if crypto (the assets) is in the dumps
0 replies · 0 reposts · 1 like · 398 views
Mando
Mando@rektmando·
We put every asset known to man on chain, everyone is cheering Hyperliquid while altcoins hit ATL, but now it's a surprise people are pivoting to broader finance content? I don't think this is a cyclical change; I think it's structural, and it's what we asked for.
24 replies · 7 reposts · 175 likes · 13K views
---*
---*@henryvzero·
@0xsmac bro's in love with the game
0 replies · 0 reposts · 1 like · 24 views
smac
smac@0xsmac·
the PTJ interview is inspirational because this guy is 70 years old & still waking up at 3am for 30-45 minutes to watch the London open
15 replies · 14 reposts · 513 likes · 41.3K views
kache
kache@yacineMTB·
you can outsource your thinking but you cannot outsource your understanding
236 replies · 3.5K reposts · 15.9K likes · 2M views
---*
---*@henryvzero·
@loomdart tbh I feel like the opposite is true, people seem to trade much dumber stuff, at higher frequency, with more leverage, and much less research when times are good
0 replies · 0 reposts · 0 likes · 20 views
loomdart
loomdart@loomdart·
Similar to how lipstick stocks go up during recessions, I feel like more aggressive, direct, shorter-form market analysis and content does better during times of strife, as people are looking for that one easy fix. On the flip side, longer-form "educational" stuff does better during times of excess.
7 replies · 1 repost · 25 likes · 4.4K views
---*
---*@henryvzero·
@deanwball "technical AI safety research can be profoundly accelerationist" feels analogous to how instituting lots of good tests can speed up engineering at scale by making it safer to launch and iterate faster
0 replies · 0 reposts · 3 likes · 503 views
Dean W. Ball
Dean W. Ball@deanwball·
Okay, jokes aside, my thoughts about the WSJ's reporting that the White House is asking Anthropic not to disseminate Mythos any further:

1. Assuming the story is true, I suspect the White House is making the right call. But this is the opposite of a tenable strategy, like trying to erect a dam against a tsunami. There is no way to stop the diffusion of capabilities like Mythos within the next 6-18 months.

2. We should be clear that the government restricting the release of AI models is a type of licensing regime. It is an informal, highly improvised licensing regime, but a licensing regime nonetheless. This isn't going to be the last such model we see of this capability tier, and cyber vulnerability discovery is very far from the only type of dangerous capability. If the government is going to insist on restricting frontier capabilities for the foreseeable future, it will need to formalize the rules for those restrictions: how long must you delay, what objective factors generate a "green light," etc. I know this will feel even more regulatory to some, but the alternative is an unpredictable, inconsistent, improvisatory licensing system, and that is bad both for business and for the rule of law.

3. I have been critical of the Trump admin for being TOO libertarian with regard to major AI risks. I stand by those criticisms. But I am also infinitely grateful that there will be people advising the President who truly do understand the risks of regulatory overreach, and fear them even more than I do. There wouldn't have been in a hypothetical Biden/Harris administration. I wish them fortitude and luck.

4. A thing that would be better than an improvised licensing regime would be to bolster technical model and system safeguards. Imagine a version of Mythos that was just as capable, but had been specifically neutered in cyber vulnerability discovery. This is a longstanding area of technical AI safety research! There are tradeoffs with this specific approach (as there are with all approaches), but the broader point is that bolstering technical safeguards would mean we could disseminate Mythos-level models more quickly than we can today.

5. If you think clearly about (4), you will understand that technical AI safety research can be profoundly accelerationist rather than evil, decelerationist, or whatever other pejorative you have seen hurled at "AI safety." This does not mean "all AI safety research is good," but it does mean that technical safety work is an essential part of actually achieving AI takeoff while maintaining societal order.

6. I cannot emphasize enough how much the training wheels have come off on AI policy. The trial runs are over. Many of the heuristics people adopted during the training-wheel period will not be useful ("AI safety is decelerationist" is one of those heuristics, btw). If you want to contribute usefully to the cause of making AI go well, you will need to increase the IQ of your speech.

7. Dealing with risks of this kind should be nonpartisan and technocratic. Catastrophic risk mitigation is not the thing to negatively polarize along partisan lines, as some, especially on the accelerationist side, have been doing. Let's have partisan fights about things like AI/labor; that's healthy! But not catastrophic risk management, please.
The Wall Street Journal@WSJ

Exclusive: The White House opposes a plan from Anthropic to expand access to its powerful artificial-intelligence model Mythos on.wsj.com/4cHiUY5

25 replies · 45 reposts · 383 likes · 56.6K views
gum
gum@gumsays·
I built a dashboard for the AI bottlenecks market → 250+ stocks tracked → Risk Profile, Fundamentals The coolest part is the visualization of stocks in a dot plot of year-to-date performance + AI exposure calculation Made it for myself but can share if you are interested
36 replies · 2 reposts · 110 likes · 10.4K views
---*
---*@henryvzero·
@buccocapital @gerstenzang Yea, the Airbnb Blind chat was euphoric the day of the IPO, when it teleported 2x from the open. There was a year of anger and despair leading up to it (covid, 2020). Nobody inside saw it coming lol
0 replies · 0 reposts · 3 likes · 76 views
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
@gerstenzang That this chart is meaningless because his company went public at a dumb 2021 price and when they told him what it was trading at on live tv he was so shocked it looked like he shit his pants
6 replies · 0 reposts · 161 likes · 19.5K views
---*
---*@henryvzero·
@MoonOverlord someone needs to set up perps for spotify monthly listeners
0 replies · 0 reposts · 0 likes · 5 views
moon
moon@MoonOverlord·
Addison Rae will be the biggest popstar of this decade. I can't make any money on it; I just want you to know, when it happens, that I was right.
22 replies · 2 reposts · 78 likes · 16.6K views
---*
---*@henryvzero·
@jon_stokes @davidshor This was not targeted toward you specifically, as I see you do acknowledge that Dario likely sincerely believes what he's saying. It's more for the many on my TL who only seem to agree with the second part of your statement (that he's talking his book only).
1 reply · 0 reposts · 1 like · 122 views
Jon Stokes
Jon Stokes@jon_stokes·
@henryvzero @davidshor I beg you to read my other replies here. You are attributing to me claims I have not made and do not support.
1 reply · 0 reposts · 1 like · 285 views
Jon Stokes
Jon Stokes@jon_stokes·
I don’t even know what to say to this. I know Shor is familiar with the concept of a regulatory moat. It can obviously be true that they believe their own hype AND they’re talking their book.
David Shor@davidshor

@real_jerseylee @mattyglesias “The labs are running around telling people their product is dangerous and should be taxed and regulated as a ploy to pump up their valuations” is a such a perfect demonstration of the thesis of this paper journals.sagepub.com/doi/10.1177/01…

6 replies · 6 reposts · 44 likes · 16.1K views
---*
---*@henryvzero·
@davidshor @jon_stokes It really is astounding. Many otherwise smart people's brains seem to crack apart at even considering the possibility that Dario is just stating his long-held beliefs plainly.
1 reply · 0 reposts · 2 likes · 302 views
David Shor
David Shor@davidshor·
@jon_stokes I just find the lack of curiosity around this astounding - there is an absolutely massive amount of publicly available information about the origin and evolution of Sam and Dario's beliefs on AI and the beliefs and customs of the communities they came out of
7 replies · 8 reposts · 280 likes · 21.4K views
---*
---*@henryvzero·
@bayeslord Could tell by the number of takes critiquing his "messaging" tone but not the substance of his arguments
0 replies · 0 reposts · 1 like · 27 views
bayes
bayes@bayeslord·
Dario has a worried person vibe which rubs people the wrong way but the idea that he's just outright wrong about automation and unemployment is unserious. Jensen is also correct to point out that the question is not only about automation of tasks but about automation of whole jobs. The truth is that we don't know how this will all shake out, but it certainly looks like capital mostly wants efficiency. Note that in the event of big automation and job loss, unemployment waves might be temporary while people figure out what new things they want to do with their time
17 replies · 9 reposts · 145 likes · 16.7K views
---*
---*@henryvzero·
@GergelyOrosz Suspect it was initially useful as a blunt tool to get people to try AI for their jobs at all. But it quickly outlives its usefulness once people do try it, and then start gaming it.
0 replies · 0 reposts · 0 likes · 107 views
Gergely Orosz
Gergely Orosz@GergelyOrosz·
Anyone sensible inside the industry is rapidly coming to realize that tokens burned is the most silly thing to track. Real story: talked with a dev inside one of the major AI labs. A team was about to launch an internal token leaderboard. It was promptly stopped with a big WTF
Jaana Dogan ヤナ ドガン@rakyll

Unpopular opinion: If you think tokens burned is a productivity metric, no one should take you seriously. Imagine you are a top 0.0001% writer and they are only counting the tokens you produce.

96 replies · 44 reposts · 908 likes · 126K views
---*
---*@henryvzero·
@beffjezos Underestimating the exponential, again and again and again and...
0 replies · 0 reposts · 0 likes · 69 views