Professor_blackpill
@REAL_PROF_BP

7.4K posts
Joined April 2025
1.6K Following · 133 Followers
Professor_blackpill retweeted
Engeltrude @engelchrudinous
@cdor_n @apralky Preserving existing power and redistributing wealth are not necessarily opposed. In fact they go hand in hand more often than not. And there were many forces beyond massive wealth inequality that caused WW2. I don't get what the Black Death has to do with this.
Professor_blackpill retweeted
Chris @cdor_n
@engelchrudinous @apralky these types of concessions are calibrated to preserve the existing power structure. the only cases when it goes any other way are mass casualty events which increase the value of labour (black death, world war 2) so pretty doomer of you to say this, really
Professor_blackpill retweeted
Engeltrude @engelchrudinous
@cdor_n @apralky There is ample historical precedent for them to make concessions to maintain stability. It has not yet necessitated literal distribution in the form of UBI. But the wealth has always gotten distributed by proxy when need be.
Professor_blackpill retweeted
Chris @cdor_n
@apralky what historical precedent has there been for elites being motivated to distribute wealth when massive societal change drives down the cost of labour? none i think. in the industrial revolution people slept under bridges.
Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
The “permanent underclass” thesis in the medium run will probably turn out to be overstated -- largely because if the current political climate is anything to go by, adequately (unprecedentedly) redistributive tax policy will front-run it more meaningfully than many expect, to the dismay of some.

But for the San Francisco optimists to simultaneously and loudly maintain the convictions that: (1) AGI is fairly close, but (2) the “permanent underclass” thesis in the medium run will probably turn out to be overstated, and (3) we absolutely cannot have a conversation about any unprecedentedly redistributive tax policy, is either epistemically sloppy or malicious, and will justifiably continue to generate backlash.

Especially so given that one of the standard objections to broadly redistributive tax policy -- which also happens to be this cohort’s primary one in this discourse -- that such policy hampers long-run growth and therefore overall welfare by disincentivizing innovation, substantially weakens if we stop arbitrarily discarding the conditional underwriting the rest of this hypothetical: (1) AGI is fairly close, and (2) once here, it will independently generate ~all technological innovation without the need for incentivized humans in the loop.
Jack Altman@jaltma

I might be too much of an optimist but I just don’t buy the permanent underclass thing. I just think no matter how smart AI gets, there’s no way a motivated person will wake up each day and be unable to contribute to society.

Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
Between this, DOGE, etc., is it fair to say that the 2000s/2010s domains of wealth creation were so anomalous in their simplicity (social media, phone apps...) that they falsely updated an entire generation against the merit of deep experience, only for us to then foolishly start stumbling back into necessary correctives in the 2020s as complexity recovered from the trough? E.g. obviously a Zuckerberg was great at doing Facebook, but probably wouldn't have been so great at Nvidia!
Piotr Pomorski@PtrPomorski

No way, a product that sucks doesn’t work as expected! idnfinancials.com/news/61918/zuc…

Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
Take this editor’s letter published in n+1 late last year, which is probably about as representative of this constituency’s AI consensus as one is likely to find. I’m told that n+1 is a prestigious publication that takes itself seriously – staffed by well-educated, intelligent people – yet the 4,000-word piece is anything but intelligent or serious. In it, they make several familiar claims which are load-bearing in the conclusion, but also clearly deficient.

(1) They, as usual, frame AI as “colossally wasteful,” and its present and potential output as comically devoid of real economic value – describing “the ratio of AI’s resource rapacity to its productive utility” as “indefensibly and irremediably skewed.” Now there’s obviously a reason why those who have thought deeply about this domain have the exact opposite contention – that AI could be ruinous precisely because of its immense transformative capacity. The idea that a technology which has started to show signs of potentially automating some meaningful fraction of knowledge work can authoritatively be declared materially inconsequential is, of course, absurd on the face of it. Anthropic’s revenue hasn’t grown 10-fold through 2025 because people are using its models for “racist memes” and “cardboard essays.” If one correctly assumes that a variety of real-world problems are presently bottlenecked by, say, organizations’ ability to analyze large amounts of unstructured data, or the pace at which they can write functional code, then it’s very difficult – without contradicting a syllogism – to make the case that LLMs will be economically insignificant, even if capability progress were to stall. Yet that claim is ever-present.
(2) Then they explain that they believe LLMs can never truly do what can be done with “real” human intelligence, because LLMs are simply statistical methods of next-word prediction – while actual thought requires “organic associations,” “speculative leaps,” and “surprise inferences.” It’s charitable to assume that this section is even attempting to make a sound argument, rather than just putting together a string of phrases that feel good to the authors and the audience, but even with that charitable interpretation, the chain of reasoning of course has no merit – which is why, as I’m sure you know, it’s become a slogan near-exclusively invoked by shallow and facetious participants in the discourse. “Intelligence,” insofar as the classification is relevant for appraising the potential real-world impact of various processes which may or may not be described as such, concerns those processes’ outputs and not the methods generating them. One can argue endlessly about whether it’s suitable by some arbitrary criteria to define “intelligence” so as to exclude whatever it was that made GPT-10.5 solve cancer, or Opus 4.6 hypothetically mass-surveil the American population – but this will neither unsolve cancer nor unsurveil the American population.

(3) Then they make a detour into the worlds of financial analysis and technological forecasting, confidently claiming that “AI is a bubble which will burst,” and that the pace of improvement in LLMs “will stall.” For a group of magazine writers (I have nothing against magazine writers, this just is not their wheelhouse) to declare that the global financial market is blatantly mispricing one of the world’s principal industries is of course ridiculously audacious; to do so without the evidence to back it is even more so. Maybe they did the work behind the scenes and the claim rests on rigorous in-house modeling which they simply excluded for brevity, but most would probably bet against that at any nonzero probability.
The second claim, that capability progress in LLMs will certainly stall, is similarly unreasonable and contrary to the informed discourse, where this remains an entirely unresolved question. Again, one wouldn’t suppose that the contrarian declaration rests on rigorous analysis which has been excluded; one would suppose that it rests on brazen epistemic uncleanliness in service of emotionally satisfying conclusions.

(4) Lastly, they say that “major terrains remain AI-proofable,” and that certain knowledge workers will need to “develop an ear” to recognize and disincentivize work done with the help of LLMs. As the models become more sophisticated, they caveat, this will become harder and require “a new kind of literacy, an adaptive practice of bullsh*t detection.” This is of course foolish on two fronts: one, if these domains of knowledge work are assumed to have outputs with value independent of who produced them, as would, say, a piece of food or a medicinal product – an assumption with which I presume they would agree – it’s contradictory to argue for their reduced production without making the case for why exactly certain other harms will be more than offsetting – which they have neither done nor recognized the need for. The second part is foolish in a different way – it’s more symptomatic of their broader issue: no, of course you can’t simply keep up with better LLMs by developing “a better ear for bullsh*t detection.” We’re already at a stage where statistical methods far outperform humans at LLM detection, a gap which will only widen – humans, bottlenecked by biology, aren’t getting smarter at the same pace as the tools. Maybe they meant that we should be using more Pangram, but I doubt it. That they thought this valiant prescription a viable one for their problem just goes to show how little reasoning is being done by these groups to contend with arguably the most transformative technology in human history.
Overall, it clearly isn’t all that important that some magazine is being lazy and unreasonable in its coverage of this – but as a representative manifestation of the broader discourse it is irritating. The divergence between mainstream/layman coverage of AI safety and the conclusions of domain experts seems more striking than in many other areas, despite its being such an important front-page discourse. A little more convergence would be great.
yung macro 宏观年少传奇@apralky

It is kind of interesting: the East Coast magazine types are seemingly intelligent people, seemingly selected through very tight funnels -- great universities, great SAT scores, they can often think critically, follow long and established traditions of thought, etc. but here we are moving at breakneck pace through this unprecedented technological revolution which by any reasonable consensus has decent odds of literally ending biological life as we know it in all sorts of horrible ways, and all they can think about is that... prose is getting kinda worse on the way? That some writing sounds a little sloppier now? That they are not fans of the style? This is somehow their elephant in the room? What? Is this just what happens when you compound bad epistemic habits for decades? Functional paralysis in otherwise sane minds?

Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
Imagine an alien society in which it’s widely understood that within twenty years, the entire economy will be managed by accountants. The aliens have just discovered the spreadsheet, and it turns out that all their societal problems are oddly spreadsheet-shaped -- it’s just going to take a little time for widespread diffusion. The alien society has developed colloquial theories about these accountants of tomorrow: they will be the permanent overclass, destined to manage every facet of life through immeasurably large spreadsheets, accruing unprecedented political, cultural, and economic power. As a result, accounting undergraduates become overrepresented in subsequent university intakes, and for quite some time their unemployment rate skyrockets, rising well above that of other fields. Due to oversupply, entry-level accounting jobs become exceptionally difficult to find. Has this made accounting a bad undergraduate major?
Leo Invests@Leo_Traydes

Computer Science went from one of the absolute best degrees to pursue to one of the worst, all within a decade. Absolutely nuts.

Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
Early European intellectuals were strikingly cosmopolitan and well travelled -- the distance between place of birth and place of death among notable Europeans is pretty much unchanged since the Late Middle Ages! Despite the obvious improvements in ease of travel etc.
Professor_blackpill retweeted
Geoff Shullenberger @g_shullenberger
If there’s any lesson here I suspect it has to do with the seeds of authoritarianism within liberalism. The fortification of EU technocracy is one side of this, the authoritarian turns of radical civilizational (Karp) or economic (Hoppe) liberalism are another.
Professor_blackpill retweeted
Geoff Shullenberger @g_shullenberger
Habermas has a surprising place in the intellectual genealogy of the Tech Right. Alex Karp claims he studied with him, which is disputed, but he definitely studied with students of his; Hans-Hermann Hoppe, source of Yarvinite neo-monarchism, was originally a Habermas protégé.
Professor_blackpill retweeted
yung macro 宏观年少传奇 @apralky
Insofar as there’s a common thread, it’s a distrust of society’s natural trajectory. The Frankfurt School was kind of what became of Marxists after persecution in National Socialist Germany beat the optimism out of them. The disagreement is over how exactly one should fix the normies -- “benevolently”, through “critique” and guidance (Adorno, Horkheimer, Marcuse, Habermas), or “evil style”, through hierarchy/technology/rule (Hoppe, Karp, Yarvin). There’s also a chronological component -- the Frankfurt School is disillusioned by authority-shaped pathologies (NatSocs, USSR), so their remedies are anti-authoritarian; the tech-right is disillusioned by woke-liberty-shaped pathologies (in ways that grew out of the former remedies), so their remedies are authoritarian. Yarvin for example has at times framed his antagonism in part against Marcuse.
Geoff Shullenberger@g_shullenberger

Habermas has a surprising place in the intellectual genealogy of the Tech Right. Alex Karp claims he studied with him, which is disputed, but he definitely studied with students of his; Hans-Hermann Hoppe, source of Yarvinite neo-monarchism, was originally a Habermas protégé.

Professor_blackpill retweeted
en @childofeternal
Do you want to go down a schizo rabbit hole... Somewhere down the line, women were genetically manipulated by other alien races to bleed every month just out of pure ritualistic depravity and so they could breed with humans and be amongst us. (This is biblical too if you care)

No other mammals bleed this often; in fact, most mammals don't even shed the uterine lining, they reabsorb it!! And they go through a cycle of 3–6 months. There is literally no need to bleed monthly other than to drain the life force out of the force that brings life into the planet and to harvest that loosh (🩸 magick)

The fact that most women's bleed aligns with the new or full moon is because powerful psychos use astro alignments to bleed you out during these moons and harvest the energy for big collective events and movements following the moons. A huge amount of little sacrificial offerings and 🩸 magick. Using YOUR life force. These warlocks are crazier than anyone could ever imagine. This picture is just the surface level of it.
The Notorious J.O.V.@whotfisjovana

i never saw it from this perspective and now im mad

Professor_blackpill retweeted
Glen Schaefer @hardenuppete
OK, at first I wasn't buying into the theory that all the recent oil and gas infrastructure accidents were acts of sabotage, including the recent fire at the oil refinery in Victoria, but now another one just 48 hours later puts all these 'accidents' beyond the coincidence threshold.
aussie17@_aussie17

🚨🚨 Just hours ago (April 16, 2026) Gas pipeline explodes in Haripur, Pakistan: - 8 dead (including children), massive fireball engulfs homes. Just on the same day, Australia’s Geelong refinery (one of our ONLY 2 left) erupts in flames, slashing fuel output amid the Iran war chaos. Energy infrastructure is burning worldwide. Coincidence or coordinated attacks hitting us all???

Professor_blackpill retweeted
Valerie Anne Smith @ValerieAnne1970
"Bad news for egg lovers as tests reveal Vital Farm pasture raised eggs containing more Linoleic Acid than a spoon of Rapeseed Oil."
Professor_blackpill retweeted
Noah @TrueOnX
🚨 THEY’RE TELLING US EXACTLY WHAT’S COMING.

Klaus Schwab (WEF): “The COVID-19 crisis would be seen as a small disturbance in comparison to a major cyber attack… which would bring a complete halt to the power supply, transportation, hospital services; our society as a whole.”

Sam Altman (OpenAI CEO, April 2026): “There could well be a world-shaking cyber attack this year that would get people’s attention… I think that’s totally possible. In the next year we will see significant threats we have to mitigate from cyber.”

They ran Cyber Polygon “preparedness” exercises. They hyped pandemics right before one hit. Now the same voices are openly forecasting a cyber “pandemic” that makes COVID look tiny. Coincidence? Or are the globalist elites forecasting their next move again?

We’ve seen this movie before. Don’t sleep on it. Prepare. Stay vigilant. Don’t comply. What do you think... predictive programming or genuine warning?
Noah@TrueOnX

Billionaire hedge fund manager Ray Dalio just told Tucker Carlson that central bank digital currencies are coming: "There will be no privacy... all transactions will be known... and if you're politically disfavored, you could be shut off."

Professor_blackpill retweeted
Noah @TrueOnX
Catherine Austin-Fitts just sounded the alarm. A pesticide liability shield will cause an “extinction-level event.” Why? It will absolutely destroy fertility. And she says this is no accident. It’s intentional. “Depopulation” is exactly what the elites want.

“They’ve enjoyed so much success depopulating by giving vaccines corporate liability shields that you have Bayer, who bought Monsanto, coming around and trying to get both the feds and the states to give pesticide liability shields.” “If you look at the extraordinary amount of money and support that’s coming [from] the mega rich to support this, it’s no accident.” “There’s no way, given the science on this, that they don’t know.” “There’s no way it’s anything other than intentional.” “The battle for control of the food system is extraordinary.” “If you look at the Trump policies, you are looking at thousands of different actions by government to severely consolidate farming in agriculture.”

Trump is pushing a pesticide liability shield on two fronts right now: First, House Republicans included a liability shield for pesticide companies in the 2026 Farm Bill. Trump personally urged Congress to pass it. Second, Monsanto is asking the Supreme Court to give them a liability shield in a case that will be heard later this month, Monsanto v. Durnell. And Trump’s DOJ sided with Monsanto.

In case you haven’t noticed, Trump has completely abandoned and betrayed MAHA. @Solari_The @TrueOnX
Noah@TrueOnX

😱 OMG They're not building a currency... They're building a control grid. Watch Catherine Austin Fitts drop truth at Hillsdale College: CBDC = programmable permission slip for your own money. One second it's "convenience"... the next it's total control over what you can buy, when, and from whom. The BIS in Switzerland is coordinating it all. This is financial freedom on the line. Would you accept programmable money? Let me know below and don't forget to share this! 👇

Professor_blackpill retweeted
Giga Based Dad @GigaBasedDad
How to break feminist propaganda with one simple video: