🄹🄳🄷

45 posts

@jdh

Joined January 2009
0 Following · 20.3K Followers
rat king 🐀@MikeIsaac·
@devahaz @winterrose also wesburger and trueburger east bay recs: smokehouse for chargrill lovely’s for ambiance Beeps Pearls Als Park Phyllis
4 replies · 0 reposts · 6 likes · 1K views
🄹🄳🄷@jdh·
@bubbleboi Sure, say they don't ship for a month: the stock could go down, or it might not. This is just gambling on market sentiment.
1 reply · 0 reposts · 0 likes · 113 views
🄹🄳🄷@jdh·
@bubbleboi TSMC could ship no chips for a month and I'm not sure it would be a less valuable asset than it was yesterday. They have a monopoly and people want what they have. What's a month of lost production? People will pay 2x to get whatever capacity they have if production is cut in half.
1 reply · 0 reposts · 1 like · 621 views
🄹🄳🄷@jdh·
@HPbasketball 10/10, no notes, perfect casting for lead actor of the Lakers. Showtime!
0 replies · 0 reposts · 0 likes · 272 views
Hardwood Paroxysm@HPbasketball·
the efficiency gap is massive. Shai's game leaves almost nothing extra out there. There's no meat on the bone. Luka's shot selection to me feels like half the time he's playing for the highlight so he can laugh and smile and shit talk the fans rather than win the game.
Three Level Scorer@3LevelScorer

@HPbasketball Now that I think about it, it’s hard to make the argument that “Shai is the best scorer since…” when Luka is playing. Both have around 500 career games. Luka has scored 14570 pts vs SGA’s 13000. Luka has 57 40-point games, SGA has 31. SGA has been more efficient recently though

27 replies · 4 reposts · 218 likes · 30.7K views
Mdiac@Mdiac_·
The Hivemind offices are turning into art galleries
1 reply · 0 reposts · 6 likes · 463 views
Neil Chilson ⤴️⬆️🆙📈 🚀
I don’t think your first paragraph is accurate. This isn’t about whether companies have the right to set terms; it’s about whether they have the right to *unilaterally* set terms. And of course they don’t. Not with the military and not with anyone else. Contracts are negotiated between the parties. Obviously, when you’re negotiating with the guy who has all the guns, that’s a complicated situation and one in which we need plenty of checks and balances.
8 replies · 0 reposts · 38 likes · 6K views
Neil Chilson ⤴️⬆️🆙📈 🚀
Just FYI, this is the Department of War language that Anthropic doesn't want in their contracts. I've seen a lot of people say "if Anthropic's red lines are already unlawful, then why doesn't DoW just agree to them?" You could just as well say, "if the DoW only wants lawful uses, why does Anthropic want special conditions?" In any case, this undermines the (pretty loud) narrative that this debate is happening because the DoW is actively pursuing unlawful uses.
Neil Chilson ⤴️⬆️🆙📈 🚀@neil_chilson

I've been thinking more about this Anthropic vs. Department of War fight, and there's a weird wrinkle here, separate from the Defense Production Act mess. And even if what I said earlier sounded pro-Anthropic, this part might not. (Sorry, Claudites.)

Here's the question: who do we want making military decisions? It's an old question, but AI makes it fresh again. When Colt sells the military an M16, Colt is out of the loop. Not just legally, but practically. Colt has no way to override a soldier's judgment about where to point the gun or when to pull the trigger. The gun's limits are baked in at the point of sale, and really even earlier.

AI isn't like that. Even if the military runs an AI model on its own hardware and fine-tunes it in-house, the base model can still carry the developer's built-in constraints and priorities. That's kind of the point of "alignment": the model keeps following certain principles across many different situations. So a model used in an operation could spit out answers, take actions, or refuse to act in ways that clash with what a soldier asks, what a commander orders, or what the mission requires.

This basic issue isn't new. Whether and how "upstream" model choices shape "downstream" behavior is a huge debate in AI policy. It's why some people want to regulate model developers directly, and why others want to pin liability on them when an application causes harm. But in most of those debates, the concerning outcome is unintended by the model developer. I don't think anyone thinks model developers intend to cause AI psychosis or libel or whatever. The classic exception is concerns over "woke AI," where critics claim the model is intentionally steered to reflect certain "woke" values. This Anthropic / DoW dust-up is more like the "woke AI" debate than most of AI policy, because it is about intentional changes to the model.
You can imagine the DoW worrying that Anthropic's training choices, its "constitution," etc., could make Claude resist certain tasks or nudge decisions in ways that undercut military intent. The fear isn't the AI accidentally failing; it's the AI subverting the chain of command.

Now, I don't know for sure that is the concern driving this disagreement with Anthropic. Another read of the reporting is simpler: maybe it's just about which programs Anthropic will participate in. The company has been vocal that it doesn't want its tech used for domestic surveillance or autonomous weapons. But I suspect the military's bigger worry isn't "we can't use Claude for Program X." It's "we can't trust Claude to do what we need when it matters." A few things point that way:

- The military says this isn't about surveillance or autonomous weapons.
- The dispute reportedly flared when Anthropic raised concerns about use tied to seizing Maduro. That's not domestic surveillance, and it's not obviously autonomous weapons either.
- The threat to label Anthropic a supply-chain risk makes a lot more sense if the worry is deeper than which programs use Claude: if the worry is that the model itself carries values that could conflict with defense objectives.

To me, that concern makes sense. It no doubt applies to every AI model developer, not just Anthropic. But I understand why the military wants a partner that will work to embody the military's values and directives in the AI model, even if that overrides the company's values.

I'm no expert on the institutions that govern military force. But it seems fundamental that the monopoly on legitimate force has to sit with **politically accountable** government actors. If outside parties can intentionally shape how military tools behave in decisive moments, that blurs responsibility. Commanders still own the consequences of military actions. They should be able to demand tools that let them make the calls they'll be held accountable for.
To be clear, I don't think Anthropic has to sell anything to the U.S. military. They can refuse service. That's their right. But if they do sell the service, the procurement model should look more like the M16: once the deal is done, the military makes the decisions. And the military owns the results.

59 replies · 52 reposts · 417 likes · 116.9K views
Phoadobo@christophcruz·
@willdepue Using quotation marks in Google (including Gmail) has been a thing since its inception. Instead of just plainly writing: proof of insurance, try: "proof of insurance".
2 replies · 0 reposts · 4 likes · 10.3K views
will depue@willdepue·
whoever builds Gmail app search should be burned at the stake. every time i use this app i want to kill myself
346 replies · 2.2K reposts · 75.1K likes · 1.5M views
🄹🄳🄷@jdh·
45DF slowly dripping out into the world, see the video for a peek at the physical component of this project, painstakingly designed and assembled by @kimasendorf
MATHIAS THIΞL@MathiasThiel

I have to admit: I managed to get my hands on a piece by @kimasendorf (commissioned by @jdh). 45DF #96 is in the vault now. I love everything about it—the unmistakable digital language of Kim, the physical object, even the wild packaging. This piece has a special place in my collection, and it always will. And maybe I played the tiniest part in how all of this came to be. That’s what many people don’t understand: if you collect for the right reasons, you’re so much more than a buyer. You can’t help but engage with artists, open doors, and support where you can. Something grows—something far more complex than what’s visible from the outside. For me, this work also stands for that complexity. Thank you, Kim. (And of course @jdh as well.)

6 replies · 1 repost · 49 likes · 2.9K views
🄹🄳🄷@jdh·
@catehall I think it’s particular to a rigid personality type, overrepresented in our circle
0 replies · 0 reposts · 4 likes · 1.7K views
Cate Hall@catehall·
the truth is, *everyone* is willing to believe this. they just won't do it anyway, because being caught in the act of being bad at something, EVEN by yourself, is so mortifying most people would rather stay bad than do it.
LaurieWired@lauriewired

no one’s gonna believe me but becoming a good speaker is really easy just record yourself for 10 minutes every day, first thing in the morning. don’t send it to anyone, just force yourself to watch it later. you’ll notice every possible flaw you can imagine.

19 replies · 183 reposts · 3.6K likes · 123.4K views
🄹🄳🄷@jdh·
One limitation of the first two projects was the question of how recipients could enjoy the art. The evolution and motion are fundamental, but you can't make a print and hang it on your wall. @kimasendorf solved this; the result is compelling and I hope many get to experience it!
18 replies · 0 reposts · 60 likes · 1.7K views
🄹🄳🄷@jdh·
This is my third NFT commission, after tremendous projects with @MacTuitui ("24 Heures") and @msoriario ("variaciones del yo"). All three projects explore themes in digital art that compel me: disorder, decay, and time. They all evolve, something unique to digital, rather than being still images.
2 replies · 0 reposts · 72 likes · 1.9K views
🄹🄳🄷@jdh·
45DF: A new NFT project by one of the best artists working in digital today, @kimasendorf. Just deployed this morning, a project conceived and begun in May 2024! It is spectacular, scaled small or large!
83 replies · 55 reposts · 401 likes · 36.4K views
derek guy@dieworkwear·
Tony Traina, who does a nice watch newsletter called Unpolished, just did a guide to vintage Omega Seamaster chronographs with the caliber 321 movement. IMO, a very nice, relatively affordable chrono. You can find a link to the newsletter by going to Instagram unpolishedwatch
19 replies · 51 reposts · 1.3K likes · 131.9K views