residual human

426 posts

@midquant

Joined December 2010
919 Following · 53 Followers
residual human
residual human@midquant·
@SeattleStone37 @andrewsharp none of which required starting this war to complete. I’m willing to accept there’s a small chance America comes out in a better spot if the conflict winds down quickly, but to think these goals are worth the ongoing risks is gratuitous
residual human
residual human@midquant·
@SeattleStone37 @andrewsharp mentioned are “weakening the most active sponsor of terror … , becoming further entrenched as the world’s leading energy superpower, deepening security relationships with Saudi Arabia and the UAE, and … gaining leverage over imports that China needs for its own survival”
residual human
residual human@midquant·
@singletwinz @trq212 I’ve had decent luck the last day running it in tmux on a VPS, with a systemd unit that starts/restarts the session on server boot if needed
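A minimal sketch of that kind of setup, assuming a hypothetical unit file at /etc/systemd/system/bot.service, a dedicated `bot` user, and a start script at /opt/bot/start.sh (all names here are illustrative, not from the thread):

```ini
[Unit]
Description=Run the bot in a detached tmux session
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
User=bot
# tmux detaches and forks into the background, leaving a named
# session ("bot") you can attach to later for live inspection
ExecStart=/usr/bin/tmux new-session -d -s bot /opt/bot/start.sh
ExecStop=/usr/bin/tmux kill-session -t bot
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable --now bot.service` covers both the boot-time start and the restart-on-crash case; `tmux attach -t bot` then drops you into the running session.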
Twin
Twin@singletwinz·
@trq212 Would you recommend running this in tmux because after three to five minutes the session stops working and the bot goes offline?
Thariq
Thariq@trq212·
setting this up is so fun, need to figure out a way to stream the dev process without getting instantly owned
Thariq@trq212

residual human
residual human@midquant·
@statmuse Don't put Luka in the same sentence as D-Lo, he hasn't earned it
StatMuse
StatMuse@statmuse·
Luka Doncic vs Pacers: 44 PTS 9 REB 5 AST 3 STL 2 BLK 7 3P Joins D-Lo as the only players in NBA history to reach those numbers in a game.
StatMuse tweet media
residual human
residual human@midquant·
@cremieuxrecueil Ah yes, a wildly vague interpretation of a statement from an old hat, good reason to declare a company a supply chain risk
Crémieux
Crémieux@cremieuxrecueil·
lmao no way
Crémieux tweet media
Shaw (spirit/acc)
Shaw (spirit/acc)@shawmakesmagic·
I have a theory Sam is just way way way better at speaking normie human than Dario. This was all basically a misunderstanding in language between some hyper nerds and hyper jocks, and it took a Y Combinator tech bro to bridge the gap
Sam Altman@sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted.

We will deploy FDEs to help with our models and to ensure their safety; we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

residual human
residual human@midquant·
@dee_bosa I trust the law with regards to AI as much as I trust Chuck Schumer to bench press 135
Deirdre Bosa
Deirdre Bosa@dee_bosa·
Essentially comes down to who you trust more to decide when AI is too dangerous for the military to use: the law, or the people building the AI. That's a genuinely hard question. Would love to hear what OAI's own engineers think
Senior Official Jeremy Lewin@UnderSecretaryF

For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of "all lawful use" that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.

Even if the substantive issues are the same, there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO.

As we have been saying, the question is fundamental: who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

It is a great day for both America's national security and AI leadership that two of our leading labs, OAI and xAI, have reached the patriotic and correct answer here 🇺🇸

residual human
residual human@midquant·
@JoshKale Good thing we have thorough, rational AI use laws to keep things in line!
Josh Kale
Josh Kale@JoshKale·
Everyone’s saying OpenAI got the “same deal” Anthropic was banned for. Read the fine print. They’re not the same:

On weapons: Anthropic asked for “no fully autonomous weapons without human oversight” = a human involved in the decision. OpenAI’s deal says “human responsibility for the use of force” = someone accountable, which can happen after the fact. Oversight ≠ Responsibility. One requires a human before the trigger. The other requires a name on the paperwork after.

On surveillance: Dario said explicitly: current law hasn’t caught up with AI. The government can already buy your movement data, browsing history, etc. without a warrant. AI can assemble that into a complete picture of your life, at scale. That’s mass surveillance without breaking a single law. Anthropic wanted protections beyond current law. OpenAI’s deal says the Pentagon “reflects them in law and policy.” That’s existing law as the safeguard, the exact law Anthropic said is insufficient.

Same words. Different agreements. Read them carefully
residual human
residual human@midquant·
@chamath My brother in Christ, how can a free market depend on a petty government that will label an American company a supply chain risk because they disagree with their policies?
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
In a democracy, it’s absolutely ok to define who can use the things you make and how. But it’s also absolutely ok for the Government to lose trust in you, tell you to fuck off and find an alternative. It’s also absolutely ok for you to nuke your own company in the process.

The timing of this is not good for Anthropic and could be a potential boon to every other model that is exceeding expectations in their upcoming version (Grok, OAI, Gemini).

More generally, I don’t see how this isn’t a slippery slope. What if a model maker updates their ToS in a way that would block a use case that is legal but subjective? Agreeable in some states but not in others? What about in different countries with different governance or religions? It’s a huge can of worms.

How can a government or company rely on a model that could have an ever-changing definition of what’s allowed without taking on major business/governance risk? They won’t. My hunch is that the company that embraces the “no holds barred” ToS will win because it’s the least risky to adopt wrt long term risk of getting rug-pulled.
Chamath Palihapitiya tweet media
residual human
residual human@midquant·
@_NathanCalvin @murd_arch This description, if believed, says the difference is “law” will be the ultimate arbiter of AI use whereas Anthropic wanted that control with the company
Nathan Calvin
Nathan Calvin@_NathanCalvin·
@murd_arch x.com/UnderSecretary… from this, seems like not much in terms of the legal terms - the exact integration of the tech/staff provided might be different though
Senior Official Jeremy Lewin@UnderSecretaryF


Nathan Calvin
Nathan Calvin@_NathanCalvin·
Keep in mind Anthropic's description of the deal that they were offered and rejected, when evaluating the deal that OAI signed: "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
Sam Altman@sama


residual human
residual human@midquant·
@ryxcommar The admin may really just be that petty about a press release and gave OpenAI the same terms Anthropic wanted
Senior PowerPoint Engineer
Senior PowerPoint Engineer@ryxcommar·
I'm confused. This whole thing is confusing. Is Sam lying or misleading about the terms, or does the Trump administration just not like Anthropic? Is this because Dario Amodei supported Kamala Harris? When's the lawsuit happening over the procurement process? What's going on?
Sam Altman@sama


residual human
residual human@midquant·
@atheistsquid @ZaidJilani A communist phrase 😂. Wake up, the highest quality complex goods (cars, solar panels, electronics) are by large margins manufactured outside the United States
Charre
Charre@atheistsquid·
“Late stage capitalism” is a communist nonsense phrase. We still manufacture lots of expensive stuff in the US; we don’t manufacture cheap things because the economics don’t make sense compared to making them overseas.

Every time someone complains about capitalism, their problem is with corruption or just the reality of limited resources. Communism puts the corrupt people in charge of everything with no competition. Capitalism has done more to overcome limited resources than anything else, and communism just makes sure most people share the poverty equally. Not the corrupt leaders though! They still get enough to be fat in a country full of people starving to death.
residual human
residual human@midquant·
@atheistsquid @ZaidJilani It’s not the mines, it’s 30+ years of investment in advanced manufacturing that happened in China and not the US due to unfettered late stage capitalism leading companies to outsource entire supply chains
Charre
Charre@atheistsquid·
@ZaidJilani It isn’t about “getting good.” China owns most of the mines for the raw materials and can produce the most expensive components cheaper than anyone else. You can’t “get good” to outcompete a monopoly. Even Trump isn’t wrong about everything.
chuck
chuck@K__Breeee·
@cuneytdil Remote work + suburbanization + r/antiwork + screentime
hunter
hunter@hxxntrr·
Met a guy at a poker game who said he "borrows money for a living"

Thought he meant loan shark shit

Nope

He takes $200-300K from banks every 18 months at 0% interest, uses it to buy assets, pays it back before they charge a penny, keeps everything he made

He's done this 6 times. Net worth now: $4M+. Started with nothing but a 700 credit score

The infinite capital loop. Here's what he explained between hands:

Banks offer business credit cards with 0% APR for 12-18 months. They're hoping you'll carry a balance and pay 24% interest after the promo ends

What they don't expect: someone who takes $200K, deploys it into something profitable, and pays it all back at month 17

His exact cycle:
Year 1: Stack $200K in 0% business cards
Year 1-2: Buy rental property / fund business / invest
Month 17: Pay off cards from profits
Month 18: Credit score rebounds
Month 24: Do it again

He bought his first rental with Chase's money. Cash flows $2,400/month. Paid Chase back $0 in interest

Second rental: Amex money
Third rental: Bank of America money
Fourth rental: US Bank money

Four properties. $9,600/month cash flow. Total interest paid to banks: $0

"Banks are the best business partners. They give you unlimited capital and ask for nothing but the principal back. If you're smart."

He's not doing anything illegal. Just using promotional offers exactly as written

The banks are betting you'll fuck up and pay interest

He's betting they're wrong

6 cycles later, he's retired at 41

(I help you get up to $250k in 0% funding. DM me if you want to start your first cycle, must have a 700+ credit score)
floridamodern
floridamodern@floridamodern1·
@mornings0da I think the difference in the US is sprawl, lack of transit/walkable amenities, and the specific lack of free/public spaces. not everything has to be "you just aren't trying hard enough" we know that capitalism breeds alienation. that's like economics 101.
Joseph Carlson
Joseph Carlson@joecarlsonshow·
Just remember this if you think the hedge funds are "too smart" and have "extra data" that you don't, so you may as well not compete with them: Coatue, the $50 billion AUM fund run by billionaire investor Philippe Laffont, supposedly a prestigious tech fund, didn't even list Google as a top 40 AI company because they thought it would be destroyed by ChatGPT.
cold 🥑
cold 🥑@coldhealing·
I love how in post-covid yuppie America everyone has a "winter break" where they can work from wherever like they're in college
residual human
residual human@midquant·
@Route2FI If you do have true domain knowledge or an edge in an area, and willingness to lose money and time, then sure allocate some fraction of your capital to higher-risk plays. But don't pretend that just because you're above average IQ/ability, you should be gambling
residual human
residual human@midquant·
@Route2FI **checks bio** Ah yes, crypto peddler. Most very smart, very driven people still do not have a durable market edge in any market. Index investing + high savings rate buys you the ability to take some targeted risks later, after establishing a foundation
Route 2 FI
Route 2 FI@Route2FI·
The problem with the FIRE movement is that it is a bet against yourself, and yes, this is coming from someone who used to be a part of it.

The whole strategy is to save up as much as you can in index funds and generally be extremely frugal. If you're in your 20s and 30s, this is the time to take massive asymmetrical bets, not to follow investment strategies that were originally intended for pension funds.

You are limiting yourself from a growth mindset to a scarcity mindset. A mindset where you believe you can't be better than the average man. I think people so extreme that they seek FIRE in the first place are so driven that they're already way above average.

Not everyone can double their net worth every year, but people in this corner of Twitter at least have a shot at doing better than 7-10% per year. Take that risk.
Route 2 FI tweet media