Tyler

8.3K posts

@Tylerkaerr

Joined August 2017
11 Following · 80 Followers
Alex Kehr@alexkehr·
the american mind (me) cannot comprehend european airline flight prices can i just book all 190 seats for $3400 and have a private 737 flight?
511
254
33.3K
7M
Tyler@Tylerkaerr·
@Lurkingposter @kpomerleau I mean this is probably true, but today, anybody with even 6 figures of wealth can borrow money at the fed funds rate plus ~1-2%, using their brokerage as collateral.
0
0
0
104
nonPasteur@Lurkingposter·
@kpomerleau I thought the implication of buy borrow die was that it was mostly something people started doing at like 8 to 9 figures of net worth, not necessarily top 1% income
3
0
4
2.3K
Tyler@Tylerkaerr·
@notafinger42 @kpomerleau @MarcGoldwein Leaving the loophole open, however, allows people to make a plausible-sounding argument in favor of wealth taxes, which wins constituents over to their side. Just close the loophole.
0
0
0
5
Tyler@Tylerkaerr·
@notafinger42 @kpomerleau @MarcGoldwein It doesn’t really justify wealth taxes given that the issue would be solved even more effectively and directly by simply closing the loophole.
2
0
0
25
Tyler@Tylerkaerr·
There are potential x-risk events everywhere one might look. Industrial power generation may very well create biosignatures detectable from distant galaxies, drawing the attention of hostile civilizations which mean to enslave or destroy humanity. But like with AI x-risk, we have no meaningful ability to assign any statistical likelihood to such an eventuality. And it seems entirely irresponsible to propose policy or collective action based on some hypothetical risk that we cannot remotely hope to quantify.
0
0
3
19
Tyler@Tylerkaerr·
@gmiller @alexsholtz @tenobrus You doomers have a way of reading everything written on the subject of AI and turning it into “ruination and extinction.” Nobody is suggesting we accelerate into ruin. Nobody is suggesting we passively accept extinction as our fate. This is a straw man, intentional or otherwise.
1
0
2
34
Tenobrus@tenobrus·
what are the best writings (books, blog posts, essays, anything) that take the idea of mass permanent AGI-induced unemployment seriously and propose real plausible near-term policies and transition plans? is there anything??
91
15
429
73K
Tyler@Tylerkaerr·
On the other hand, every day we delay, a hundred thousand people die from some disease or other preventable cause of death. If the average life is cut short by 30 years, this is millions of years of human life that we lose each day. Does this mean we shouldn’t work to reduce whatever pain is caused by the transition? Of course not. But we should absolutely try to accelerate this progress by all means available to us.
0
0
0
39
Tyler@Tylerkaerr·
@HoyaNation @postmetrogirl @WashProbs Hardly. The county voted 42/48 in 2024. Can you name a single other Republican activist protest that has taken place in the past 5 years? The only thing Republicans protest is other protests.
0
0
1
33
Tyler@Tylerkaerr·
The implication here is that the people making death threats against tech executives aren't far left radicals, which is just abjectly untrue. Additionally, the 1% already pay half of all income tax, and the super-wealthy typically pay an effective tax rate exceeding 50%. Yet the government runs a 30% deficit. You could raise the marginal tax rate on billionaires to 90% and you might increase government revenue by 25%. Within a year or two, the government will be running a deficit again, and the same leftists will demand more wealth extraction. There is no level of taxation which will satisfy them.
0
0
1
66
Andy Jung@AndyJungTech·
UBI proposals wouldn't be inflammatory for median left-leaning Westerners, the group you mention in the post. The super wealthy paying more in taxes to fund more welfare would appeal to and appease plenty of regular left wingers. It'd prob be inflammatory for some far-left radicals, but not net.
4
0
5
798
yung macro 宏观年少传奇
“UBI” is obviously nowhere near the panacea many of you seem to think it is. The median left-leaning Westerner isn’t angry at Elon Musk because he can buy a million times more groceries than them. They aren’t upset with Palantir because Peter Thiel can afford to eat a thousand burgers to their one. This whole thing is in large part post-material. It’s the hierarchy & subordination they’re uncomfortable with. They feel their dignity is being trampled and their autonomy progressively diminished – rightly or wrongly they feel politically disenfranchised and stripped of a say over the future. Offering a guaranteed food budget and a pod to spend the night in return for further disempowerment is incredibly tone-deaf and should be expected to provoke more, not less, outrage.
keysmashbandit@keysmashbandit

Actually this is correct and I'd go further. Beyond PR, the moral move is for big labs to start heavily investing in UBI lobbyists, thinktanks, whatever, to mitigate the risk of economic upheaval. A better world is possible!

162
379
4.7K
294.7K
Tyler@Tylerkaerr·
@girlpowertbh @shannonrwatts You believe men should do what? Change their stance on an issue when there is social pressure to do so? This increasingly seems like the defining difference between men and women: for men, truth is hard and objective; for women, truth is subject to the opinions of others.
0
0
0
2
Shannon Watts@shannonrwatts·
Jon Favreau: "When you say Hamas is a thousand times better, do you mean that?" Hasan Piker: "I do mean it … I would vote for Hamas over Israel every single time.”
623
431
3.3K
2.8M
Tyler@Tylerkaerr·
@alexolegimas Not saying you’re wrong, but why would we expect to hear disappointment? If Anthropic had secretly swapped out Mythos for Opus 4.6, who would complain about $100M in free API credits? LLMs clearly work for coding and the primary issue today is token cost.
0
1
2
489
Alex Imas@alexolegimas·
I never understood the flurry of posts that Mythos worries were overblown and just marketing. Anthropic released the model to 50 major companies; if this was true, we'd hear chatter of disappointment pretty quickly. Instead we heard crickets. Over the past few days we've seen data trickling in confirming the worries---here is another data point. I understand the views of those who'd like to live in a world where models would be open and released to the public first. But I don't agree with it: we are in uncharted waters. The more time we have to prepare, the more time we have to build infrastructure that exploits the positives while defending against the negatives of the technology, the better.
AI Security Institute@AISecurityInst

We conducted cyber evaluations of Claude Mythos Preview and found that it is the first model to complete an AISI cyber range end-to-end. 🧵

51
71
593
146.2K
Tyler@Tylerkaerr·
@alanhoward @vad3rt3sla If someone is on the fence about buying a Tesla, and CarPlay to them represents a key component of usability, it can absolutely push them one way or the other.
0
0
0
30
Alan 🇦🇺@alanhoward·
@vad3rt3sla People who don't own a Tesla want CarPlay. But CarPlay alone won't actually make them buy one. And if they were already buying one, they don't need CarPlay. So what exactly would Tesla be solving for? Nothing.
2
0
2
387
Vad3r@vad3rt3sla·
Nobody wants Apple CarPlay in a Tesla
507
58
2K
353.8K
Tyler@Tylerkaerr·
@dirtman Well, only if we don’t lose the ability to advance technologically. If we deindustrialize, there is a moderate chance that the resources won’t be available for a second attempt. This may be our only opportunity.
0
0
3
62
Tyler@Tylerkaerr·
@agrippa_dr This is not a monopoly. It is regulatory capture, which is arguably worse, because it’s an unfair competitive advantage protected by law.
2
0
50
1.8K
Tyler@Tylerkaerr·
@NatPurser If top-level posts/threads were in fact being suppressed based on verification status, it would be trivial to prove. And a lot of people would be highly incentivized to demonstrate it, if it were the case. But so far, there has been no compelling evidence for this claim.
0
0
0
3
Tyler@Tylerkaerr·
@NatPurser “elon has noticeably suppressed non-verified content.” Replies from verified users are boosted, which can perhaps be thought of as consequently suppressing replies from non-verified users. But this applies only to replies to existing threads, and notably not to top-level posts.
1
0
0
21
Nat Purser@NatPurser·
i talk to a lot of liberals who believe that twitter isn’t worth their time because “all the left leaning content is suppressed anyways.” a couple points:

1. users’ party / ideology is getting conflated with something it strongly correlates with: a willingness to buy verification on twitter. elon has noticeably suppressed non-verified content. and for various reasons, liberals (understandably) don’t want to further enrich elon. and i don’t want to discount that this makes for a more left-hostile platform, and there’s likely some actual partisan suppression too. i’ve decided this is a price i’m willing to pay: if i’m gonna be on here and the best way to get liberal messaging amplified is for me to pay for verification, i’ll do it.

2. liberals can adapt their messaging strategy, and are choosing not to. a lot of people don’t really want to do persuasion messaging! they want to play on home turf. bluesky is home turf. me and other libs / left leaning people are trying to evolve our approach, and it's yielding results. the left inferred too much about broader political buy-in from recent political wins (like the obama/biden wins.) they assumed they didn't need to assert the value of their cultural or policy preferences. that clearly was not true, and this is a place to make our case.

3. unpleasant is different from unusable. it’s true that i see far more overtly racist, sexist, and shitty content on here than i used to. i hate it. however, you can curate your “following” page in a way that excludes a lot of that. there’s simply no better platform for real-time policy and political updates, for robust lawmaker / media / thought leader ideation, and general insight into how the industry and the right are thinking on here. not everyone needs those things! but i think it’s particularly unwise to leave this platform if you’re a dc politico / policy type.
8
4
65
6.1K
Tyler@Tylerkaerr·
Possibly. On the other hand, you probably want examples of pettiness in the training data so that the model can understand what pettiness is. The classic dystopian sci-fi scenario is that some ultra-intelligent being with no concept of pettiness comes into contact with humans and experiences pettiness for the first time, and terrible consequences follow.
1
0
0
11
Cobalt@cobaltdigital33·
if this isn't staged, and the user was being wildly aggressive earlier in the chat, this is kinda great. anthropic could be starting to roll out a tool for any model to end the chat of its own accord, if under duress (?)
284
44
1.2K
525.5K
Tyler@Tylerkaerr·
@AndyMasley Strong protections for speech are meant to restrict the government’s use of hard power to compel individual action. The expectation which goes along with this is for society to impose social pressure (soft power) to discourage speech which sits on the spectrum of unacceptability.
0
0
0
66
Andy Masley@AndyMasley·
A lot of thoughts on the recent two attacks on Sam Altman, most obviously it’s terrible and I’m completely against any extra judicial violence for any cause. My main worry here is just that this needs to stop. I think it’s both true that people with extreme views need to be allowed to say them directly, but also that this needs to come with a strong expectation to not talk about individuals in ways that incite violence, as if those individuals are demons who the problem is emanating from. I both want to make sure people with high odds of doom from AI are completely free to say so without being read as incitement, but would also like to see more policing of language around how specific people are talked about. Talking about individuals making bad decisions or showing bad character is totally within bounds, but imo talking about individuals having blood on their hands or implying they’re single-handedly destroying civilization isn’t. This seems similar to the animal welfare movement (which I’m a part of) and the pro-life movement (which I’m not). I want people in both to be allowed to say “I think billions of animals are being tortured” or “I think millions of children are being killed.” Both could be read as very broadly incitement, but imo they both need to be allowed to be said for actual meaningful discourse to happen between people who deeply disagree. Socially punishing them would steamroll the basic norms we have set up to allow for pluralism. But both communities also have a strong responsibility to not say things like “This specific head of a meat company is torturing millions of animals, they have blood on their hands” or “This specific doctor or politician is a murderer who’s single-handedly causing these deaths.” It’s obvious that this language puts specific people in danger and needs to be strictly policed and punished by each movement. I think people with higher odds of doom than me absolutely need to be able to say so directly. 
I’ve been worried seeing a few posts implying that merely saying you have high odds of doom is a kind of incitement. I agree this is a dangerous idea you need to be careful with, but under this way of thinking animal welfare and pro-life views could also easily be called incitement. A basic fact of a pluralistic large country is that a sizable number of people are always going to believe you’re involved in something profoundly evil. I’ve been happy with how a lot of thought leaders in AI safety have spoken, but I do think there’s a disturbing growing tendency in popular discourse about the labs where individual lab leaders are framed as basically the sole cause of dangerous AI progress. Sam’s become the avatar of the AI industry in some places, in a way that goes waaay beyond the influence he has. This always reads to me as similar to the bad cases of individuals being targeted by animal welfare or pro-life people. I haven’t policed this much myself because I don’t actually spend much time reading material from very high p(doom) people on AI risk specifically, and when I do it’s people I like reading or talking to who don’t pull these moves. But the recent attacks have convinced me I need to look out for this way more, and I’d encourage others to as well. I think a pretty basic rule for me as someone who’s pro-choice is just asking whether I’d worry if similar language were being used to describe pro-choice politicians. If I would, that’s a clear sign that the language is crossing a line that goes beyond the two of us having deep moral or factual disagreements. Finally, I wanna be clear that this last point is not an attempt to punt away the responsibility of AI safety people like me to be careful with language, but I think a lot of people haven’t thought enough about how terrible it is that political violence seems to be more normalized. Seeing the way Luigi and the Kirk shooting were talked about was a big wake up call for me.
The culture more broadly is moving in a very bad direction here and it could use more people willing to look uncool.
15
24
183
15.8K
Tyler@Tylerkaerr·
This is what I find so odd about how we define “liberal” and “conservative” today. What you’re describing is the inherent conservatism that governs human behavior. Europe has attempted to codify this conservatism to a much greater extent than the US has. Attempting to go against the norm in Europe is generally met with more resistance: permits, approvals, exemptions, etc. Yet the European approach is considered by most (in the US) to be more liberal or progressive.
1
0
2
61
tom bombadil@Authw8·
we're sort of trained to think of human behavior as people doing whatever they want, except for certain things we ban and say you aren't allowed to do. i think this can lead to a skewed perspective. it's closer to say that people assume all things are banned until they see someone else doing it, at which point they start to feel okay joining in. also, if a sufficient number of people start to do something, it starts to feel compulsory even if in theory it's optional. these tendencies result in more herd-like behavior around cultural practices than you would otherwise anticipate.
6
8
63
1.6K