Crowd 3

3K posts

@Crowd33

Joined February 2022
98 Following · 60 Followers
Crowd 3 @Crowd33
@RokoMijic Proof of humanity is communist. We need proof of value
Crowd 3 @Crowd33
@nic_carter You're treating fundamental physics and engineering breakthroughs like an asset bubble where technical and sentiment analysis matters. Sorry software nerd, you actually need to know the facts on the ground this time
nic carter @nic_carter
"Recent advancements have been so significant that this year’s opinions are the most optimistic for the “within 10 years” timeframe that we have ever recorded in our series of surveys: half of the respondents felt the likelihood of such a computer within 10 years was “about 50%” or more likely. By coarse-graining all the experts’ responses, one arrives at an average likelihood of between 28% and 49% within 10 years, that is, by roughly 2035" - Global Risk Institute's 2025 Quantum Threat Timeline (survey of quantum experts) globalriskinstitute.org/publication/qu…
Balaji @balajis
I'm going to make some obvious points.

(1) Blowing up all the oil infrastructure in the Middle East is an insane idea, and may well result in a global economic crash and humanitarian crisis unrivaled in the lives of those now living. We're talking about the price of everything everywhere rising, from food to gas, at a moment when inflation was already high. All of that will be laid at the feet of the authors of this war.

(2) The antebellum status quo of Feb 27, 2026 was just not that bad, but we're unlikely to return to it. Expect indefinite, long-term, ongoing disruptions to everything out of the Middle East.

(3) Also assume tech financing crashes for the indefinite future. The genius plan to get the Gulf states caught in the crossfire has incinerated much of the funding for LPs, for datacenters, and for IPOs. Anyone in tech who supported this war may soon learn the meaning of "force majeure" as funding gets yanked.

(4) Many capital allocators will instead be allocating much further down Maslow's hierarchy of needs, towards useful basic things like food and energy.

(5) It's fortunate that all those progressives yelled about the "climate crisis." Yes, their reasoning about timelines was wrong, and much of the money was wasted in graft, but the result was right: we all need energy independence from the Middle East, pronto. It's also fortunate that Elon and China autistically took climate seriously. Now they're going to need to ship a billion solar panels, electric vehicles, batteries, nuclear power plants, and the like to get everyone off oil, immediately.

(6) It's not just an oil and gas problem, of course. It's also a fertilizer problem, and a chemical precursor problem. Maybe some new sources will come online at the new prices, but it takes time to dial stuff up, particularly at this scale, so shortages are almost a certainty. That said, China has actually scaled up coal-to-chemicals[a,c] (C2C), and there's also something more sci-fi called Power-to-X[b] which turns arbitrary power + water + air into hydrocarbons. But all of that will need to get accelerated. I have a background in chemical engineering so may start funding things in this area.

(7) Ultimately, this war is going to result in tremendous blame for anyone associated with it. It's a no-win scenario to blow up this much infrastructure for so many people. Simply not worth it for whatever objective they thought they were going to attain. But unless you're actually in a position to stop the madness, the pragmatic thing to do is: scramble to mitigate the fallout to yourself, your business, and your people.

[a]: reuters.com/business/energ…
[b]: alfalaval.com/industries/ene…
[c]: reuters.com/sustainability…
Crowd 3 @Crowd33
@pmarca You've miscommunicated, introspecting covers a lot
Marc Andreessen 🇺🇸
It is 100% true that great men and women of the past were not sitting around moaning about their feelings. I regret nothing.
Crowd 3 @Crowd33
@SamoBurja It's ethical to feel pride in your ability to defend yourself/kill terrorists
Samo Burja @SamoBurja
It isn't a very ethical aspect of human nature, but I will say that U.S. air supremacy is immensely engaging to see. Clearly everybody watches closely even if they decry it. Piloted planes are simply charismatic in a way missiles or drones are not.
Crowd 3 @Crowd33
@chamath @jonfavs The ROI is irrelevant, maybe start thinking in moral principles. Like taking any money by force is wrong
Chamath Palihapitiya @chamath
Asking for more money after demonstrating positive ROI of monies already raised is quite reasonable. But asking for more money after wasting what you have already taken because "a little bit more will fix it" is a lie. It is 100% dystopian and malevolent. Crooked, if you will…
nic carter @nic_carter
@stablekh yeah, every government in the world including the US and China and EU, and some of the most sophisticated investors in the world, have all been tricked by the conniving tricksters into spending tens of billions on a fake technology. That's the most reasonable interpretation
Crowd 3 @Crowd33
@PalmerLuckey Companies can put whatever they want in the contracts and governments can decide who to go with
Palmer Luckey @PalmerLuckey
This gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?

Seemingly innocuous terms from the latter like "You cannot target innocent civilians" are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a "target" vs collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.

Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really - in addition to the value judgement problems I list above, you also have to account for questions like:

- What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?
- What if an elected President merely threatens a dictator with using our weapons in a certain way, ala Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President?
- At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons.

It is easy to say "But they will have cutouts to operate with autonomous systems for defensive use!", but you immediately get into the same issues and more - what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why "bro just agree the AI won't be involved in autonomous weapons or mass surveillance why can't you agree it is so simple please bro" is an untenable position that the United States cannot possibly accept.
Under Secretary of War Emil Michael @USWREMichael

Prior to their new “Constitution,” @AnthropicAI had an old one they desperately tried to delete from the internet. “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”

Crowd 3 @Crowd33
@nic_carter Huh? The government has no right to appropriate things Dario or anyone else produces. They should get a different contractor if they don't like it
nic carter @nic_carter
If a top AI CEO in China told the CCP to go kick rocks when they asked for help, that CEO would be instantly sent to prison. This is the correct approach. Letting AI CEOs play politics and dictate policy for the military and soon the entire country like their own personal fiefdoms is appalling and undemocratic. If Trump doesn't bring Dario to heel now, we will simply end up completely subjugated by him and his lunatic EA buddies
Jawwwn @jawwwn_

Palantir CEO Alex Karp on controversial uses of AI: “Do you really think a warfighter is going to trust a software company that pulls the plug because something becomes controversial, with their life?” “The small island of Silicon Valley— that would love to decide what you eat, how you eat, and monetize all your data— should not also decide who lives in a country and under what conditions.” “The core issue is— who decides?”

Crowd 3 @Crowd33
@ToKTeacher Notice how much work 'reasonable' is doing
Brett Hall @ToKTeacher
Yes. The claim “reality exists and science is a way to uncover objective knowledge about it” is controversial to many unless “grounded” in “God”. But it’s just a reasonable conjecture allowing us to concede “there exists something we can be objectively wrong about” and proceed.
CJ the palmer worm; wife, mother, analyst. @thepalmerworm

What Jordan Peterson is trying to say (but keeps muddling)

Peterson is gesturing toward a true insight - that science presupposes realities it cannot justify by its own methods. Those include:
- intelligibility of the world
- reliability of reason
- normativity of truth
- meaningfulness of 'better explanations'
- goodness of knowing rather than not knowing

These are ontological and moral preconditions, not scientific conclusions.

Where Peterson goes awry is that he slides immediately from ontology into symbolic theology and Jungian myth, collapsing:
- God into archetype
- Good into evolved narrative
- Truth into adaptive meaning

That move weakens his case, because it makes the foundations of science look psychological or symbolic, rather than metaphysically real. So when he says "the gap between believing in God and believing in Good is very narrow", he is aiming at something but imprecise. The real gap is not narrow or wide - it's categorical:
- Good is ontological (a feature of reality)
- God is metaphysical (the ground of that reality)

Gad Saad is saying something different. Saad's response is textbook scientistic: when he says "the epistemology of science is fully and unequivocally decoupled from religion" he is swapping epistemology for ontology. Science may be methodologically decoupled from religion, but it is not ontologically self-grounding.

Worse, Saad explains religion as an 'evolved instinct', which immediately undercuts his own trust in reason. If religiosity is adaptive illusion, then so is truth-seeking, then so is 'rationality' and so is 'science'. Evolutionary accounts explain how beliefs arise, not whether they are true. Using evolution to justify epistemic trust is self-undermining. Saad's position only 'works' by rhetorically borrowing realism while functionally denying it.

The core problem with both sides is that they are arguing God v atheism, when the real divide is ontological realism v ontological nihilism.

Science does not require revealed religion, scripture or ecclesial authority - but it does require real being, intelligibility, normativity, truth and real good. If those are denied, science collapses into instrumentalism, power optimization, narrative management and technocratic control. At that point, 'science' becomes engineering in service of will, not knowledge of reality. 2020 onwards (and long before that) has been a masterclass in the denial of reality in the 'name of science'.

Crowd 3 retweeted
Dr Singularity @Dr_Singularity
Somewhere right now, a small team is building something that will redefine our civilization.
Crowd 3 @Crowd33
@AnthropicAI Isn't all intelligence a kind of distillation attack though
Anthropic @AnthropicAI
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
Crowd 3 @Crowd33
@curtis_yarvin Productivity gains lower the competence bar required to be economically useful. Technology makes increasingly dumb people productive
Curtis Yarvin @curtis_yarvin
An economically useless man cannot be in any real sense wealthy. He can be a well-kept pet. If you improve his material condition, he will take it for granted and go a few steps on the hedonic treadmill. Best case result: an Emirati. But humans don't make great pets. Get a dog
Elon Musk @elonmusk

In case you missed it the first time around

Crowd 3 @Crowd33
@elonmusk That's a very bearish long-term view, that we won't learn how to convert mass into energy faster than the sun
Crowd 3 @Crowd33
@KWamlo @ToKTeacher Error correction is prediction that something won't work in the future. If the survivors purely got lucky, then there is no error correction happening, just monkeys on typewriters
Kaz Wamlo @KWamlo
@Crowd33 @ToKTeacher 2/…what we do is try stuff out and error correct. The survivors of the process appear to have been predicted.
Brett Hall @ToKTeacher
Almost all good explanations cannot generate predictions. Indeed they contain explanations why they cannot predict the future. Crucially, our best explanations of epistemology explain how the growth of knowledge (including science and technology) cannot possibly be predicted.
Crowd 3 @Crowd33
@KWamlo @ToKTeacher the criterion for something being a prediction is not infinite precision
Crowd 3 @Crowd33
@KWamlo @ToKTeacher So? Obviously no predictions even for mundane things have infinite precision
Crowd 3 @Crowd33
@KWamlo @ToKTeacher And some pricing models were proven correct. Are you saying all investing is no better than random and the ones that get it right are just purely lucky?
Kaz Wamlo @KWamlo
@Crowd33 @ToKTeacher Right, but that's only "most of the time." Many pricing models were destroyed in the GFC. They looked great before they blew up. I think we're quibbling over "some" predictive power vs "inherently" predictable. Predictions can be made that pan out; that doesn't equal predictability.