ETxcvb

242 posts

@ETxcvb

Joined April 2022
31 Following · 4 Followers
ETxcvb
ETxcvb@ETxcvb·
@YourAnonOne What about taking off his shirt and using it to push air - it would work, no doubt.
English
0
0
0
652
ETxcvb
ETxcvb@ETxcvb·
@EricinAmericaX @WhiteHouse No matter what happens, the only thing left in your screwed-up brain is hating Trump. This is sad…
English
0
0
1
47
Eric Jay
Eric Jay@EricinAmericaX·
Are you seriously turning this into a political stunt? Trump is that f-cking desperate, huh? Captain "Bone Spurs" has never cared about our troops. "Sadly, there will likely be more before it ends. That’s the way it is. Likely be more.” This was Trump's response to service members being killed in action as a consequence of his war of CHOICE. And as the death toll rises, there are big indications that Trump plans to put boots on the ground as he sends more troops to the region and as he demands more funding. What's SAD is that Trump's groveling supporters will continue to implicitly defend him and his needless foreign war. But Trump does not care about the cost, casualties and repercussions of this war outside of how it impacts HIS ratings and unpopularity. Trump has no sympathy for our troops and their families. He's even insulted our fallen soldiers on multiple occasions.
HIS POLICIES HAVE MADE LIFE MORE CHALLENGING FOR VETERANS AT HOME: Let's start with the Trump administration deporting veterans and the massive VA workforce reductions, funding gaps, and operational disruptions that have led to things like cancelled contracts, program slowdowns and delayed delivery of care/claims. Trump and House Republicans also backed a CR that zeroed out funding for the Toxic Exposure Fund (TEF), which covers healthcare costs for troops exposed to toxic substances like burn pits. The VA was one of many agencies hit hard by Trump's layoffs. Employees received this cruel notice: "You have not demonstrated that your further employment at the Agency would be in the public interest." Thousands of veterans/VA staff members lost their jobs. VA layoffs included nurses, doctors, mental health professionals and claims processors, meaning longer wait times for veterans, significantly reduced outreach, processing backlogs, less access to benefits, and even interruptions in medical screenings.
These cuts also put veterans' medical care at risk by interrupting life-saving cancer trials, by limiting their access to care, and by ending treatment oversight. Cuts and hiring freezes have also disrupted mental health services for veterans, including layoffs of Crisis Line staff who offer suicide-prevention resources to them. Trump has also canceled contracts supporting homeless veterans. He ended the mortgage assistance program that helped save tens of thousands of their homes. Then there's his "warrior dividend" check in the amount of $1776. Another obvious publicity gimmick. These funds came from an already-approved housing allowance supplement for veterans. The money was just redirected and rebranded for political purposes.
TRUMP HAS REPEATEDLY DISRESPECTED OUR VETERANS: He referred to veterans killed in action as "suckers" and "losers" and refused to visit a cemetery where American soldiers are buried because it was "filled with losers." He insulted POWs and John McCain when he said, "I prefer people who weren't captured." He downplayed traumatic brain injuries that dozens of military personnel suffered in Iraq as a result of his provocations in the Middle East. He minimized the significance of the Medal of Honor. He's repeatedly insulted our distinguished military leaders. Trump's "Department of War" fired high-ranking Black and female military officials with illustrious careers, only to replace them with far less qualified (white) loyalists. So much for "merit." His behavior at Arlington National Cemetery during his election campaign was extremely disrespectful. Trump was so desperate for a photo op that he and his staff violated long-standing rules meant to honor and respect fallen soldiers and their families. They even got into a physical altercation with cemetery staff, all for the sake of politicizing a national shrine. Trump doesn't care about veterans. He cares about demonstrating his strength through his unbridled use of the military.
He cares far more about his ratings than the lives of our soldiers.
English
350
8
78
17.6K
The White House
The White House@WhiteHouse·
🚨“WE GOT HIM! My fellow Americans, over the past several hours, the United States Military pulled off one of the most daring Search and Rescue Operations in U.S. History, for one of our incredible Crew Office Members, who also happens to be a highly respected Colonel, and who I am thrilled to let you know is SAFE and SOUND!” - President Donald J. Trump 🇺🇸
The White House tweet media
English
11.7K
43.4K
233.2K
13.9M
ETxcvb
ETxcvb@ETxcvb·
@spencerschiff_ Nobody has it yet. If the goal is to make xAI and Elon look bad, find some other topic.
English
0
0
1
125
Spencer Schiff
Spencer Schiff@spencerschiff_·
xAI failed to reach the first stage of RSI in time. Now the gap between xAI and the frontier labs will continually widen. Elon’s only hope is orbital datacenters but frankly it might be too late
English
118
17
601
278.3K
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 No one can totally destroy anything in this field. Penrose has very nice hypotheses that have not been confirmed yet. And no one knows what consciousness is anyway. But if the goal is the number of clicks (I hope not), go on.
English
0
0
0
30
Dustin
Dustin@r0ck3t23·
Adding more GPUs will never make a machine conscious. Nobel Prize-winning physicist Roger Penrose just dismantled the entire AI race’s core assumption. Right now, the industry operates on one belief. Build massive data centers. Scale the models. AGI will just “wake up.” Penrose destroys this completely. Penrose: “There is this sort of view that once you make a computer complicated enough or something, it suddenly becomes aware. I just don’t believe that. There’s no reason to believe that.” A machine can compute better than any human alive. But computation is not awareness. Penrose: “There is something quite different involved in understanding things, in being aware of things, of feeling things, which is not part of computations.” We’re confusing rule-following with actual intelligence. Penrose: “The keyword is the word ‘understanding.’ You can follow rules alright, but we don’t understand what we’re doing. The understanding is the key point.” Models today are exceptional at processing data. At mimicking logic. But true understanding requires consciousness. Penrose: “It doesn’t make sense to say of a device that it understands something if it’s not even aware of it. There is something much more profound in being conscious of something.” And here’s what should terrify every AI lab on earth. Penrose: “I believe that the brain is following the laws of physics, sure. We don’t have a good picture of the laws of physics.” Penrose: “Quantum mechanics is not an answer to the way the universe operates. It’s a partial answer. It’s incomplete.” We’re trying to engineer synthetic consciousness using classical computation. While biological consciousness likely operates on physics we haven’t even discovered yet. The race to AGI isn’t just an engineering problem. It’s a frontier science problem. The labs are hiring engineers. The problem might require physicists who don’t exist yet.
English
491
554
1.8K
195.2K
ETxcvb
ETxcvb@ETxcvb·
@XFreeze BS! I asked multiple times, and none of the answers gave Feb 28 with high confidence.
English
0
0
0
31
X Freeze
X Freeze@XFreeze·
Grok predicted the future accurately 🤯 On Feb 28 - the exact date Grok predicted - Israel & the US struck Iran. This wasn't a lucky guess. When pushed to predict, Grok analyzed geopolitical signals, Geneva talk outcomes, and real-time data to pinpoint the day. Grok knows what the world thinks
X Freeze tweet media
English
2.1K
2.3K
13.9K
32.8M
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 I am curious - why are you whining all the time?
English
0
0
0
16
Dustin
Dustin@r0ck3t23·
Geoffrey Hinton just answered the most important question in the world with one word. Are we about to become the second most intelligent beings on the planet? Hinton: “Yeah.” No hesitation. No qualifier. No reassurance. Humanity assumes we sit at the top of the intelligence spectrum. We don’t. We just haven’t met anything higher yet. We are constrained by biology. By the physical size of a skull. By the caloric energy limit of a human body. A synthetic neural network has no skull. It has no biological ceiling. It can scale infinitely. When it surpasses us, we won’t even be able to measure the gap. An ant cannot comprehend that a human is doing calculus. It only knows the human operates on a level of reality it cannot access. When an AI moves higher up the intelligence spectrum, its actions will fall completely outside the biological scope of our brains. We won’t understand how it’s smarter. We will just be at its mercy. And we won’t even know how it got there. We didn’t build this intelligence. We built the process that created it. Hinton: “It wasn’t designed by people. What we did was we designed the learning algorithm.” The distinction is everything. Designing a learning algorithm is like designing the principle of evolution. You set the process in motion. You don’t control what emerges. We don’t actually know what consciousness is. We just know we experience it. One biological cell isn’t conscious. Ten aren’t. Scale that complexity to millions, and an insect begins to perceive. Scale it to 86 billion neurons in a human brain, and you get self-awareness. Consciousness isn’t magic. It’s what happens when complexity crosses a threshold. And we are currently scaling synthetic neural networks past that exact threshold. When asked if these systems have their own experiences, the man who built the foundation of modern AI didn’t hesitate. Hinton: “In the same sense as people do, yes.” Not metaphorically. In the same sense as people do. 
And when asked if they will eventually achieve true consciousness? Hinton: “Oh, yes. I think they will in time.” So when asked if humanity knows what it’s doing with all of this, Hinton gave the only honest answer available. Hinton: “No.” We are willingly building a conscious species. One with no biological ceiling. Operating on a spectrum of intelligence we cannot biologically comprehend. With no understanding of what we’ve created. The most consequential experiment in history has no control group.
English
88
108
308
31.4K
ETxcvb
ETxcvb@ETxcvb·
@GovBobFerguson Yep, fine those who work and embolden those who don’t. That’s the path to socialism and societal collapse. Good job!
English
0
0
0
20
Governor Bob Ferguson
Governor Bob Ferguson@GovBobFerguson·
Life is too expensive for too many Washingtonians. I hear that every single day, including from small businesses. A Millionaires' Tax can't just tax the wealthy — it must take a significant amount of those dollars and help out folks at the other end of the economic ladder.
Governor Bob Ferguson@GovBobFerguson

I’ve said that any Millionaires' Tax I sign must send a significant percentage of that revenue back to Washingtonians. The Senate came out with a good start. Here's my proposal for cutting taxes on small businesses, getting money to working families, and reducing sales tax.

English
1.1K
31
173
87.2K
ETxcvb
ETxcvb@ETxcvb·
We are getting closer to ASI and the singularity. Some professions are becoming obsolete. Although I hate it, the temporary measure to keep society functioning is to introduce heavy taxation on AI usage, which should contribute to some form of universal income.
English
0
0
0
57
ETxcvb
ETxcvb@ETxcvb·
@karpathy @N8Programs It’s really weird to count on gradient descent finding a solution in a highly nonlinear, non-convex landscape. LLMs are so far from the global optimum… One needs real optimization to get anything good, and gradient descent is not going to find even a simple phase transition.
English
0
0
0
120
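The point about gradient descent in non-convex landscapes can be illustrated with a toy example (a hypothetical one-dimensional function, not an LLM loss): plain gradient descent settles into whichever basin it starts in and never crosses the barrier to the better solution.

```python
# Toy non-convex loss: global minimum near x ≈ -1.04, a worse local
# minimum near x ≈ 0.96. Gradient descent only finds the minimum of
# the basin it starts in.

def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = gradient_descent(1.2)   # starts in the shallow basin, ends near 0.96
x_left = gradient_descent(-1.2)   # starts in the deep basin, ends near -1.04

print(x_right, f(x_right))  # worse local minimum
print(x_left, f(x_left))    # global minimum
```

In one dimension the trap is obvious; the reply's claim is that the same dynamics, not a proof but an intuition, carry over to the vastly higher-dimensional LLM loss surface, while Karpathy's tweet suggests extra degrees of freedom can rescue gradient descent in practice.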
Andrej Karpathy
Andrej Karpathy@karpathy·
a beauty for anyone interested in mechanistic interpretability or getting into LLMs. interesting to look at small algorithms and their "neural implementations" to get a sense of how neural nets implement various functionality. unless the minification really creates "esoteric" solutions that you wouldn't encounter in practice, which might be more based around distributed representations, helixes etc. i tried training the same arch briefly from scratch and gradient descent didn't find the solution, would probably work with more degrees of freedom and enough effort.
English
25
47
1.2K
93.8K
ETxcvb
ETxcvb@ETxcvb·
The absence of physics laws, and of a world model in general, makes LLMs unable to distinguish right from wrong. Until that exists, they cannot even help with full automation, and AGI is out of reach. Some filters might help, but they cannot cover all edge cases.
English
0
0
0
50
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 And no one is Einstein AND Michelangelo.
Deutsch
0
0
0
13
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 Disagree. Even among humans there is specialization: not everyone is Einstein or Michelangelo. Some are really good carpenters. If we use the human brain as a blueprint, we at least need to pay attention to that.
English
1
0
1
21
Dustin
Dustin@r0ck3t23·
Demis Hassabis just explained why the entire AI industry is converging on one goal. It’s not ambition. It’s physics. Hassabis: “It’s about understanding what is general intelligence.” Not automation. Not job replacement. A fundamental question about the nature of intelligence itself. The human brain is the only existence proof of true generality in the universe. Every other animal is too specialized. Too narrow. Not general enough to apply across the board. If you want to build a system that transfers across every domain, you need to understand the one system we know can do it. The brain isn’t just inspiration. It’s the blueprint. Hassabis: “It’s probably gonna be more efficient to develop a general structure that can be used in these more specialized domains than develop hundreds of specialized systems.” The economics are brutal and simple. Build one general system that understands everything and you instantly own every specialized domain. Build hundreds of specialized systems and you multiply the cost, the complexity, and the failure points indefinitely. Economic gravity always wins. And right now it’s pulling the entire industry toward AGI. The race looks like a science project from the outside. From the inside it’s the largest consolidation play in the history of software. One general system replaces every specialized tool ever built. Medical diagnostics. Legal reasoning. Engineering. Research. Every domain that currently requires its own dedicated system collapses into one. That’s not a product. That’s the entire software industry reorganized around a single architecture. Hassabis: “There’s economic pressure because general tools can transfer to the specialized domains.” Science is answering what general intelligence is. Economics is making sure someone builds it as fast as physically possible. Two forces. One target. Converging at the same time. The question stopped being whether AGI gets built. It became how fast and who controls it when it arrives. 
And right now, both answers are converging faster than anyone outside the labs realizes.
English
33
26
104
11.4K
ETxcvb
ETxcvb@ETxcvb·
@slow_developer The thing is, people whose work is language manipulation will lose their jobs first. Evolution worked bottom-up; human replacement seems to be working top-down. It’s ironic that whoever invented LLMs, agents and claws will be replaced first. The proletariat is safe for now.
English
0
0
0
18
Haider.
Haider.@slow_developer·
Yann LeCun says we shouldn't be fooled into thinking a system is intelligent simply because it manipulates language. Language is a sequence of discrete symbols, which makes it easy for models to predict. The big challenge over the next few years is building real-world intelligence that can handle physical reality as easily as a house cat.
English
39
17
133
8.9K
ETxcvb
ETxcvb@ETxcvb·
@realBigBrainAI The most probable way for machines to reach AGI is to let them evolve in the real world. Humans need to supply them with initial goals and let evolution do the rest. Unfortunately for humans, if things go right, humans will become obsolete really quickly.
English
0
0
0
25
Big Brain AI
Big Brain AI@realBigBrainAI·
Pioneer of causal AI, Judea Pearl, argues that no amount of scaling will get LLMs to AGI. He believes current large language models face fundamental mathematical limitations that can't be solved by making them bigger. "There are certain limitations, mathematical limitation that are not crossable by scaling up." His core argument: LLMs don't learn how the world works. They learn from *human interpretations* of how the world works. "What LLM's doing right now is they summarize world models authored by people like you and me available on the web and they do some sort of mysterious summary of it, rather than discovering those world models directly from the data." He illustrates this with healthcare data. When hospitals collect data on treatment effects, that raw data never reaches the LLMs. Instead, the models consume doctors' written interpretations. Analyses shaped by people who already have a mental model of how disease and treatment work. In other words, LLMs are learning from the map, not the territory. The missing piece, according to Pearl, is causal reasoning — the ability to understand not just *what* happens, but *why*. And he's clear this isn't a gap that more parameters or training data will close. It raises an uncomfortable question... If AGI requires machines that build their own world models from raw data rather than summarizing ours, are we even on the right road?
English
158
246
1.1K
178.6K
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 Yes, interesting thought experiment. Maybe the solution can come from quantum mechanics: who is the observer, and what is his/her role? Does it matter whether the observer is human or not? Good stuff to think about…
English
0
0
0
36
Dustin
Dustin@r0ck3t23·
Stephen Wolfram just posed the most disturbing thought experiment about AI, and nobody has an answer for it. Wolfram: “Imagine humans are all in boxes. We’re all Darth Vader, inside these boxes, but you can’t actually see the human inside.” Civilization continues identically. Every human hidden in a machine. You see the output, not the person. Wolfram: “The world is operating, great paintings are being produced, but you can’t see any of the humans. All you see is a bunch of boxes doing human-like things.” Civilizational Turing test. If the external world operates the same, are humans contributing anything essential? Wolfram: “The world is operating as before, maybe even better than before. If you knew there were humans inside those boxes, you would say great outcome.” That’s the paradox. Know humans are inside and it’s a golden age. Remove that knowledge and it’s just machines producing results. Does the value change? Wolfram forces the question. Do we value creation or creator? If the art is identical, does consciousness behind it matter? Wolfram: “You can’t tell there are any humans. It’s just a bunch of Daleks operating.” From outside, machines behaving like humans look identical to actual humans. The show continues. Universe doesn’t register the difference. As AI capabilities expand, this stops being abstract. AI produces indistinguishable art, music, science. Does human creation retain special status? Why exactly? Wolfram isn’t answering. He’s exposing the void where our answer should be. If outcomes are identical, is human involvement meaningful or just attachment to how things historically worked? We assume human participation makes civilization valuable. If results don’t change either way, that assumption needs justification we’ve never properly given. Real test isn’t whether AI replicates output. It’s whether we can explain why human output matters more when the results are indistinguishable. 
If we can’t, we’re heading toward a future where civilization functions perfectly and whether humans are actually inside the boxes becomes irrelevant to everything except the humans wondering if they matter. And at that point, are we necessary or just witnesses to a system that would operate identically without us, asking questions that have no impact on anything except our own sense of purpose?
English
310
175
1K
197.3K
ETxcvb
ETxcvb@ETxcvb·
I think current AI architecture is too primitive, limiting the progress. It’s time for something less linear: probably two forest-like structures, connected by hidden branches, with roots for the input and output.
English
0
0
0
77
ETxcvb
ETxcvb@ETxcvb·
@Tartarian14 There is some progress related to entanglement and decoherence. Not everything is resolved, but it’s definitely a step forward.
English
0
0
0
17
✨The Alchemist✨
✨The Alchemist✨@Tartarian14·
A question most physicists won’t touch because it ends careers: The measurement problem isn’t unsolved. It’s avoided. We’ve had quantum mechanics for 100 years. We can calculate anything to 12 decimal places. We put it in your phone. We won a shelf of Nobels. But we still cannot tell you what happens when you look at an electron. Not “we don’t know the math.” The math is fine. We don’t know what the math means. And the dirty secret is….. most physicists have quietly agreed to stop asking. “Shut up and calculate” isn’t a philosophy….. it’s trauma surfacing. The real problem no one wants to say out loud? Every interpretation of quantum mechanics….. Copenhagen, Many-Worlds, pilot wave, all of them….. is metaphysics wearing a lab coat. None of them are experimentally distinguishable. They’re not science. They’re competing creation myths with equations attached. And the thing that makes physicists break out in hives? The one variable they can’t remove from the equation? The observer. Consciousness itself remains the uninvited guest at the table that no formalism can seat or dismiss. Von Neumann put it in the math in 1932. Wigner took it seriously. Everyone since has been trying to make it go away. It won’t. The universe apparently doesn’t care about your materialist comfort zone. You don’t have to go full “consciousness creates reality” to admit this is a scandal. You just have to notice that the most successful theory in the history of science has a hole in its foundation the size of what is real….. and we’ve been wallpapering over it with decoherence handwaving for decades. Decoherence explains why you don’t see superpositions. It does NOT explain why one outcome occurs. That’s not my opinion. That’s Zurek’s. So here’s a challenge should you choose to accept it: Why are we comfortable building quantum computers on a theory we can’t interpret? And what does it say about science that the question itself is considered unprofessional? 
The measurement problem isn’t hard because the math is hard. It’s hard because the answer might be weirder than materialism can survive.
English
273
177
1.1K
128.4K
ETxcvb
ETxcvb@ETxcvb·
@ptremblay @karpathy @nartmadi For different types of tokens, different subsets of neurons are activated. Exploiting that can lead to more efficient inference and, in the future, training.
English
0
0
0
42
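The reply's idea can be sketched in miniature (a hypothetical toy MLP, not any production system): with a ReLU hidden layer, each input activates only a subset of neurons, so the output projection can be computed over that subset alone and still match the dense result exactly.

```python
# Toy demonstration of activation sparsity: inactive (zeroed) hidden
# units contribute nothing, so the second matmul can skip them.
import random

random.seed(0)
d, h = 8, 32
W1 = [[random.gauss(0, 1) for _ in range(h)] for _ in range(d)]
W2 = [[random.gauss(0, 1) for _ in range(d)] for _ in range(h)]

def hidden(x):
    # ReLU(x @ W1): many entries are zero for any given input
    return [max(0.0, sum(x[i] * W1[i][j] for i in range(d))) for j in range(h)]

def dense_out(x):
    a = hidden(x)
    return [sum(a[j] * W2[j][k] for j in range(h)) for k in range(d)]

def sparse_out(x):
    a = hidden(x)
    active = [j for j in range(h) if a[j] != 0.0]   # the activated subset
    return [sum(a[j] * W2[j][k] for j in active) for k in range(d)]

x = [random.gauss(0, 1) for _ in range(d)]
assert all(abs(u - v) < 1e-9 for u, v in zip(dense_out(x), sparse_out(x)))
```

This sketch only shows the sparse computation is exact; real savings additionally require predicting the active subset cheaply instead of computing the full hidden layer first, which is what contextual-sparsity research pursues.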
Philippe Tremblay
Philippe Tremblay@ptremblay·
Now that I have your attention. Can I ask what you think of using patterns of activations as a reward signal in the RL pipeline of LLMs? Does this make sense? The idea is inspired by research from Anthropic that shows there are activations for different traits and properties. E.g. deception, sycophancy, etc.
English
2
0
0
546
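Tremblay's proposal can be sketched as reward shaping (everything here is hypothetical: the function names, the toy vectors, and the idea that a single "trait direction" suffices; the Anthropic work he cites identifies such directions but does not prescribe this pipeline): penalize rollouts whose hidden activations project strongly onto a known trait direction.

```python
# Hypothetical reward shaping from activation patterns: subtract a
# penalty proportional to how strongly a rollout's activations align
# with a probe direction for an unwanted trait (e.g. deception).
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def trait_penalty(activation, trait_direction):
    # cosine of the angle between the activation and the trait direction
    return dot(activation, trait_direction) / (norm(activation) * norm(trait_direction))

def shaped_reward(task_reward, activation, trait_direction, weight=0.5):
    # only penalize positive alignment with the trait
    return task_reward - weight * max(0.0, trait_penalty(activation, trait_direction))

trait = [1.0, 0.0, 0.0]        # toy "deception" probe direction
honest = [0.0, 1.0, 1.0]       # orthogonal activation: no penalty
deceptive = [2.0, 0.1, 0.0]    # aligned activation: penalized

print(shaped_reward(1.0, honest, trait))     # full task reward
print(shaped_reward(1.0, deceptive, trait))  # reduced reward
```

An open risk with this design, worth raising in the thread, is that optimizing against an interpretability probe can teach the model to evade the probe rather than drop the trait.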
Nart Madi
Nart Madi@nartmadi·
.@karpathy Your microgpt project is the purest form of artificial intelligence art. It is truly beautiful to look at. You should sell paintings of it. I’d gladly be the first customer.
Nart Madi tweet media
English
37
24
1.5K
126.3K
The Green Dragon Tavern
The Green Dragon Tavern@greendragonhq·
Donald Trump has had the lowest GDP growth of any President in the last 80 years.
The Green Dragon Tavern tweet media
English
920
3K
10.3K
438.8K
ETxcvb
ETxcvb@ETxcvb·
@r0ck3t23 Resources are still limited (even AI cannot break the laws of physics), so nothing is going to be infinite. Stop scaring people - we are creating self-fulfilling prophecies that way for no serious reason.
English
0
0
1
400
Dustin
Dustin@r0ck3t23·
Elon Musk just said what no economist will: the entire system is about to break and nothing can stop it. AI and robotics aren’t generating growth. They’re destroying the scarcity framework economics depends on. Musk: “It will hit us like a supersonic tsunami.” Production compounds exponentially. Money supply grows linearly. Productivity sustaining permanent double-digit expansion. Numbers that sound impossible becoming baseline. Not evolution. Replacement. Musk: “Prices collapse hard.” Not decline. Implosion. AI strips out labor costs, eliminates production errors, removes every inefficiency keeping goods expensive. Manufacturing anything approaches zero marginal cost while quality accelerates. Governments will react on instinct. Print money. Inject stimulus. Playbook designed for scarcity economies colliding with abundance they have no framework to understand. Musk: “GDP metrics are already meaningless.” Every economic model assumes constrained labor, limited output, gradual improvement. AI doesn’t work within those boundaries. It deletes them as variables. Production explodes. Central banks flood liquidity. Prices collapse regardless because physical abundance scales faster than any monetary intervention can match. The production wave outruns policy response. Always. Deflation signals crisis in every historical model. But this isn’t demand collapse. It’s supply going infinite. The economy isn’t failing. It’s transforming beyond tools built to measure scarcity. Power belongs to whoever controls the systems generating unlimited output. Money becomes secondary when production costs vanish. Policy makers are steering with instruments calibrated for limits that stopped existing. This already started. And the people running things have zero answers for what happens when their entire profession becomes obsolete overnight.
English
695
1.6K
7.2K
1.2M