Gabriel

12.8K posts

@Real___Gabriel

Best efforts at reasonable takes—no guarantee.

Joined March 2022

845 Following · 389 Followers
Gabriel
Gabriel@Real___Gabriel·
@CBBTalking @TheDunkCentral To all the laughable low-IQ basketball player bootlickers in this thread: the dude should have asked to see ID from such a young-looking girl. Everyone knows there are underage people at clubs.
3
0
1
959
TalkingCBB🏀
TalkingCBB🏀@CBBTalking·
@TheDunkCentral Brother, didn’t you have an entire scandal about having sex with a 16-year-old girl 2-3 years ago, and it got so bad that your own home crowd was booing you? I think you’re the one who needs help
114
213
8.3K
199.3K
NBACentral
NBACentral@TheDunkCentral·
Josh Giddey says Jaden Ivey needs help. “Obviously, the whole thing is kind of unfortunate in a way. I hope he gets the help he needs, whatever he’s going through or not going through. I do really hope he gets help. It’s not going to be with the Bulls anymore, but wherever it is, I hope he gets it.” (via @Suntimes)
NBACentral tweet media
3.3K
367
12.2K
3.4M
Reverend Jordan Wells
Reverend Jordan Wells@WellsJorda89710·
🚨 BREAKING: Chicago Bulls waive young guard Jaden Ivey after he preached Jesus Christ and called out the NBA’s celebration of Pride Month. Head Coach Billy Donovan said Ivey didn’t “live up to certain organizational standards.” The team’s official statement claimed his “conduct was detrimental to the team.”

Translation from the Chicago Bulls: Preaching the Gospel and saying sin is sin makes you a problem.

Jaden Ivey — a vocal Christian who was recently baptized and has boldly shared his faith in Jesus — went on Instagram Live speaking truth about Scripture, unrighteousness, and the NBA pushing LGBTQ ideology during Pride Month. His message? The world (and the league) celebrates what the Bible calls sin. He’s preaching repentance and Christ. For that, they cut him after just a handful of games.

The Chicago Bulls have made it crystal clear: They do not support Christians who actually live and speak their faith when it conflicts with the rainbow agenda. You can bow to BLM, promote Pride, stay silent on sin — that’s fine. But open your mouth about Jesus and biblical truth? “Detrimental.”

This is the new standard in the NBA: Christian athletes are welcome… as long as they keep quiet about the parts of Scripture the league dislikes.

Ivey’s response: “All I’m preaching about is Jesus Christ and they waived me.”

The message to every believing player is loud and clear — conform or be gone. Stand with Jaden Ivey for having the courage to speak truth in a league that demands silence. Share if you believe faith in Christ shouldn’t get you fired.

#JusticeForIvey #BullsExposed #ChristianAthletes #PreachJesus
1K
2K
4.5K
328.4K
Gabriel retweeted
Yanco
Yanco@the_yanco·
@Aella_Girl ~Every AI-techno-optimist I've ever met in one picture:
Yanco tweet media
1
2
16
576
Gabriel
Gabriel@Real___Gabriel·
@Aella_Girl Also, there is not one cogent anti-doomer argument in this thread. Besides marginal gains like in healthcare, further uncontrolled AI development will only harm humanity. AI is a tool to create an Orwellian perfect-surveillance tyranny. It is an inherently tyrannical technology.
2
0
0
144
Aella
Aella@Aella_Girl·
Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns of like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity.

It's especially infuriating cause the most lifelong techno optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi fiction and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them! But "AI has promise" is not even close to an adequate response to the extinction threat on the table.

Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with. The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs. They both are insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern match the "ai might kill us" people onto the general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.
Aella tweet media
89
54
761
109.9K
Gabriel
Gabriel@Real___Gabriel·
@heynavtoor There's no way the article is any good. The reality is that no LLM can consistently solve simple algebra problems correctly.
0
0
0
58
Nav Toor
Nav Toor@heynavtoor·
🚨 An AI just wrote a scientific paper. Came up with the hypothesis. Designed the experiments. Ran the code. Analyzed the data. Created the figures. Wrote every word. Then it passed peer review at a top machine learning conference.

No human touched it. Not one word. Not one edit. This is not a demo. This actually happened. At ICLR 2025.

It's called AI Scientist v2. An open source system that does the entire scientific research process. Autonomously. End to end. From idea to published paper.

Here's what this system does on its own:
→ Generates research hypotheses from a broad topic you provide
→ Searches existing literature to check if the idea is novel
→ Designs experiments to test the hypothesis
→ Writes and debugs its own experiment code
→ Runs the experiments on GPUs
→ Analyzes the results with statistical methods
→ Creates publication-ready figures and visualizations
→ Writes the entire manuscript. Title to references. LaTeX formatted.
→ Reviews its own paper and improves it before submission

Here's the wildest part: They submitted 3 fully AI-generated papers to an ICLR workshop. Reviewers were told some papers might be AI-generated but not which ones. One paper scored 6, 7, and 6 from three reviewers. That put it in the top 45% of all submissions. Above the average human paper.

The AI outscored most human researchers. At a real conference. Through blind peer review.

PhD programs cost $50,000 to $80,000 per year. Research takes 5 to 7 years. Postdocs earn $55,000 for more years of the same grind.

2.2K GitHub stars. Published research paper. Apache 2.0 License. 100% Open Source.
Nav Toor tweet media
47
68
254
22K
Gabriel
Gabriel@Real___Gabriel·
@GadSaad @FOOL_NELSON The left cult’s fake appeal to “empathy” is relatively unimportant lipstick on the pig that makes the cult more appealing to women at the margin. If the prevailing cult at universities were fanatical tough-on-crime socialist nationalism, female judges would be tough on crime.
0
0
1
65
Gabriel
Gabriel@Real___Gabriel·
@GadSaad @FOOL_NELSON It is NOT empathy. Women in Nazi Germany equally fell for and imposed the tenets of the very non-empathetic Nazi cult. On average women are wired more to want to belong to the “popular” cult. These judges are just imposing the self-loathing “liberal” cult’s insane tenets.
2
6
19
866
Gabriel retweeted
Gabriel
Gabriel@Real___Gabriel·
@pmarca Nobody has voted to “live” as a slave in an Orwellian perfect-surveillance state where robots do all the work. AI is a surveillance/control technology. Need to decouple from China while negotiating to control AI development. China doesn’t want a billion unemployed citizens.
0
1
1
196
Gabriel
Gabriel@Real___Gabriel·
@deanwball This is the central issue of our time—in human history so far I would argue. Thank you for engaging in an honest constructive conversation this morning.
0
0
3
39
Dean W. Ball
Dean W. Ball@deanwball·
@Real___Gabriel I agree with that! But there is a very large world of middle ground between “uncontrolled” and “banned”
1
0
5
116
Dean W. Ball
Dean W. Ball@deanwball·
Some other questions for pause or stop advocates:

1. Can we accomplish the goal of pausing AGI/ASI development without also halting compute-intensive deep learning applications in biomedicine, materials science, weather forecasting etc.? How? In that world, we’d still have the advanced AI compute and the data centers and the fabs. Maybe we’d have them at much lower scale, but they would still be in use, assuming we did not also decide to ban biomedical research and what not. Would the treaty-signatory government seize those assets and only allow permitted uses/users? Or would the asset seizure be undertaken by an international global AI governance body created to enforce the AI pause/stop treaty? Or is there something I am missing?

2. A pause/stop probably throws the U.S. economy into a recession. Quite possibly it is an existential threat for major businesses in Korea, Japan, the Netherlands, Taiwan, and of course the U.S. itself. Stock markets down 30%, companies you’ve heard of declaring bankruptcy. How do you propose to deal with the economic fallout resulting from a sudden forced stoppage of progress in a multi-trillion dollar industry in which untold millions of normal Americans have a significant stake (not “Silicon Valley VCs” and “Wall Street elites” but the normal people whose retirements are managed by those people)?

3. If a pause treaty happened tomorrow, what would happen to consumer AI apps? Does ChatGPT get to continue existing? Can OpenAI privately operate it? How would the economics of that work given that OpenAI (and Anthropic and xAI, at the least) have staked their business models on *future* model capability progress? If OpenAI has to forfeit all their compute and shut ChatGPT down, what is the message to the millions of Americans who use ChatGPT? To the millions of businesses around the world who depend on it?

4. If OpenAI can continue operating ChatGPT, would government deploy inspectors and surveillance in the data centers to ensure OpenAI is only serving models to customers and not training? Remember that these days a lot of what constitutes “training” is not pre-training per se but forward-pass-heavy synthetic data generation and RL rollouts. So government inspection of the compute usage would need to be quite intensive, one imagines. Or am I missing something? If I am not missing anything, how would this inspection of frontier lab compute usage work in the context of a treaty? Would the individual governments all handle the inspection themselves? Or would the international AI governance body deploy inspectors? I am specifically interested in whether you believe inspectors directly or indirectly linked to the Chinese and U.S. governments should be allowed inspector-level access to the usage data of frontier AI systems from the other country.

5. What happens to robotics research in pause land? Does that disappear too? Does the answer differ between humanoids, drones, AVs, etc.? Same data center inspection question applies here if so.

6. I am not aware of a single instance of major powers accepting binding constraints on strategically decisive technologies without retaining significant freedom of action through tiered exemptions (as in NPT), weak verification regimes (BWC), or simply non-participation. How do you overcome this hurdle?
Dean W. Ball@deanwball

Here are some questions I wish "Pause" and "Stop" advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metric or objective we would say needs to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?

2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?

3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?

4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?

5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn't need to be on board)?

23
17
150
26.3K
Gabriel
Gabriel@Real___Gabriel·
@deanwball Do you understand why many who see value in human liberty/agency and liberal democracy believe it cannot survive uncontrolled AI/robot development? I imagine you continuously reassess your view on this. Regardless, do you agree this is the central issue?
1
0
0
105
Dean W. Ball
Dean W. Ball@deanwball·
@Real___Gabriel I am not saying AI will make this worse, in fact I've said the opposite. I just took an extremely politically expensive stance on this very issue; I have borne costs for my views here in ways you cannot imagine. We don't need to ban or pause AI to deal with this.
1
0
3
115
Gabriel
Gabriel@Real___Gabriel·
@deanwball @1a3orn I don’t think that distinction is relevant. Isn’t the idea to prohibit 1. development/use of LLMs (and AI models generally) beyond a certain capability; and 2. implementations that are anti-human, like robots that replace workers and private drone attack forces?
0
0
0
29
Dean W. Ball
Dean W. Ball@deanwball·
@Real___Gabriel @1a3orn You believe it is “trivially easy” to spot the difference between forward passes resulting from customer inference and forward passes resulting from RL rollouts intended to improve model capabilities without direct surveillance of the inputs and outputs?
1
0
1
48
Dean W. Ball
Dean W. Ball@deanwball·
Here are some questions I wish "Pause" and "Stop" advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metric or objective we would say needs to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?

2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?

3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?

4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?

5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn't need to be on board)?
Dean W. Ball@deanwball

Pause AI rhetoric is predicated on the notion that the AI companies are recklessly racing toward dangerous tech and that a government controlled pause button is therefore necessary, but this seems really hard to reconcile with the fact that government is attempting to destroy an AI company because *the government* is racing toward plausibly dangerous AI uses (Sec. Hegseth has stated in official directives that he wants to deploy AI into critical systems regardless of whether it is aligned, for example) and *the company* is pushing back.

The roles are totally reversed from the logic that Pause AI and frankly other AI safety advocates confidently assumed for years. It is *industry* that is in favor of alignment and at least somewhat measured deployment risks, and government whose actions seem much closer to reckless.

I predicted this for years. I said, in particular, that pauses and bans and licensing regimes gave government a dangerously high degree of control over AI, and that the incentives of government are much more dangerous than those of private industry with competitive market incentives. I believe the events of the last month are good evidence in favor of my view.

At this point if you are an AI safety advocate whose policy proposals do not wrestle seriously with the brutal political economic reality of the state and AI, I don’t take you seriously. It gives me no pleasure to have been right about this, by the way. The state has an incredibly strong structural incentive to centralize power using AI, and we are, all of us, not so empowered to stop it. I am quite concerned about this.

27
22
204
58K
Gabriel
Gabriel@Real___Gabriel·
@deanwball Given that in Europe there is already such monitoring that people are prosecuted for making 100-view social media posts that oppose the state’s policy position, and facial recognition cameras on every street corner, what’s your argument that AI will not make this 10,000x worse?
1
0
0
107
Dean W. Ball
Dean W. Ball@deanwball·
@Real___Gabriel No, I disagree with you that these are the stakes. You are assuming your conclusions, an extremely common element of AI doomer thought.
2
0
6
116
Gabriel
Gabriel@Real___Gabriel·
@deanwball There is no escaping that these are the stakes. So the cost of a global recession is certainly worth it, if that were an actual cost. The other side of the argument would be like arguing in 1940 that we couldn’t forgo developing nuclear weapons because forgoing them might trigger a recession—right?
1
0
3
107
Dean W. Ball
Dean W. Ball@deanwball·
@Real___Gabriel believe it or not, no, I don’t want to debate on the side of “being enslaved by totalitarian AI is worth it in exchange for better medicine,” since that is not in fact something I believe
1
0
7
243
Gabriel
Gabriel@Real___Gabriel·
@deanwball @1a3orn It’s trivially easy to police public usage of AI. The digital footprint is obvious.
1
0
0
36
Dean W. Ball
Dean W. Ball@deanwball·
@1a3orn My own view is that it is not possible to do this without basically an authoritarian regime, but a) happy to be persuaded I'm wrong (would still oppose pauses/stops, but that crux would fall) and b) I am genuinely not sure pause/stop advocates understand what they are pushing for.
6
1
47
1.5K
Gabriel
Gabriel@Real___Gabriel·
@deanwball And yes it needs to be a global agreement—like the ban on human cloning. And yes obviously it’s easy to police public usage of AI. It may be difficult to police, say, state development of military AI drone forces, but we have somewhat policed biological warfare which is similar.
0
0
0
59
Gabriel
Gabriel@Real___Gabriel·
Nobody has voted to “live” as a slave in an Orwellian perfect-surveillance state where robots do all the work. AI is a surveillance/control technology. Need to decouple from China while negotiating to control AI development. China doesn’t want a billion unemployed citizens.
Marc Andreessen 🇺🇸@pmarca

Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.

Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself. The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously:

Channel 1: The Productivity-Demand Feedback Loop (Say’s Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn’t previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it’s around 10%. Agricultural mechanization didn’t produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn’t exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn’t run out of work. It metamorphosed.

0
0
0
229