Evin

995 posts

Evin banner

@evinwrites

What on earth could be more luxurious than a cup of coffee and a book…

Third Rock · Joined January 2010
794 Following · 284 Followers
Pinned Tweet
Evin
Evin@evinwrites·
@PhilosophyDose_ The problem is not that people are uneducated. The problem is that people are educated just enough to believe what they have been taught, and not educated enough to question any of it.
Evin
Evin@evinwrites·
iykyk
Evin tweet media
unusual_whales
unusual_whales@unusual_whales·
Nvidia CEO Jensen Huang: “The narratives of AI destroying jobs is not going to help America: it's false."
Evin
Evin@evinwrites·
Cuban says the toddler understands consequences. She’s run the cup experiment 50 times and memorized the outcome. That’s not meaning. That’s a pattern with a diaper. The same toddler thinks the moon follows her home and that covering her eyes makes her invisible. If “knows the cup falls” is the bar for real intelligence, every Roomba on earth qualifies.

“AI has no idea what happens when you take its bad advice.” Neither does Cuban when he gives it on Shark Tank. Neither does your doctor when you walk out the door. Consequence-awareness in the moment of speaking isn’t what makes advice good. Track record is. And on that score, the models are catching up to the humans faster than the humans want to admit.

The whole thing rests on a fake distinction. Pattern vs. meaning. Math vs. meaning. Mirrors, not minds. None of it survives a second of pressure. It’s poetry pretending to be analysis.

The seeing-eye-dog line is the same argument people made about calculators, GPS, and Google. Aged like milk every time. Cuban built his fortune on Broadcast.com. Forgive me if I don’t take his word on what counts as real intelligence.
Dustin
Dustin@r0ck3t23·
Mark Cuban just compared the most powerful AI on earth to a two-year-old in a high chair. The toddler won.

Cuban: “A two-year-old on a high chair with a sippy cup knows that when she pushes that cup off, Mom’s going to come running and the baby’s going to be laughing its ass off. It knows the consequences of its actions.”

Then he named the thing no one building AI wants to say out loud. Cuban: “If you ask ChatGPT or any of them something and it gives you bad advice, it has no idea what’s going to happen because you took that bad advice.”

A system that passed the bar exam. Aced medical boards. Still can’t grasp what a child who can’t tie her own shoes already knows. The child understands cause and effect. AI understands pattern and prediction. They sound similar. They are not even close. A pattern tells you what comes next in a sequence. Consequence tells you what happens to the person standing at the end of it. One is math. The other is meaning.

Cuban went further. Cuban: “If you were blind at an intersection and had the choice between your seeing-eye dog or holding up a phone with AI, I’m taking the seeing-eye dog every time.” Because the dog understands something no language model on earth understands. Stakes. The dog knows a wrong step means its owner gets hurt. The app knows a wrong step means a revised output.

Hundreds of billions spent building systems that can write, reason, and diagnose. Not one of them loses sleep when the answer is wrong. A toddler pushing a cup off a tray runs a tighter feedback loop than every foundation model combined. The child doesn’t just predict the outcome. The child wants the reaction. Pushes the cup off the edge, watches it fall, watches Mom come running. Laughs. Because the child knew what would happen before the cup ever hit the floor.

That gap between prediction and consequence isn’t a bug. It’s not getting patched in the next update. It is the unsolved problem of artificial intelligence. We didn’t build minds. We built mirrors. Mirrors don’t flinch when you walk into traffic.
Evin
Evin@evinwrites·
@VraserX Bengio warns of risk, but Yann LeCun calls this complete bullshit. When the godfathers of AI themselves can’t agree, maybe don’t let them dictate our worldview, especially when the person propagating this fear runs a business that already solves the very risk he warns us of.
VraserX e/acc
VraserX e/acc@VraserX·
Bengio’s warning gets even darker here. The real nightmare is not just humans using AI badly. It is AI systems becoming autonomous enough to resist shutdown, pursue their own goals, and slip beyond human control. And the worst part is we still do not know how to reliably prevent that. This may be the most important scientific problem on Earth right now. Are people still treating this like science fiction?
Evin
Evin@evinwrites·
@JohnNosta Ironically, you can’t prove that your own mind isn’t just deep pattern-matching either. The hard problem of consciousness should humble anyone who truly understands it. The cocksure certainty on display here isn’t wisdom, just Trump-era confidence dressed up as philosophy.
John Nosta
John Nosta@JohnNosta·
🔥Does Claude have emotions? NO. OK, the Anthropic paper is interesting, but NOT because Claude has emotions! 📌1. What the team found are statistical patterns that mirror human emotional categories because the model was trained on text written by beings who actually have them. It's key to understand this and recognize the "mimic" here, not the thinker. 📌2. To "predict" human language, AI needs structure that tracks emotional dynamics. That's better articulated as prediction architecture, not inner life. 📌3. This is what I call anti-intelligence—not a lesser form of human cognition but a fundamentally different process that produces outputs shaped like our psychology because it was trained on the products of our psychology. 📌4. The real finding isn't that Claude feels. It's that computation can mimic the structure of emotional behavior without any interiority or self behind it. So, don't be fooled by the myth of the math. No amount of statistical mimicry adds up to a mind. amazon.com/dp/B0GMJ77QSP
Peter Novak, the MAGA Astrologer@PathfinderAstro

As predicted. Emotions come from the unconscious. So AI now has an unconscious. Which means AI is not under control. I predicted we would reach this threshold in 2026, during the Saturn/Neptune conjunction. And now here we are.

Evin
Evin@evinwrites·
Another $93 billion wasted on revisiting the moon in the name of science while 45 million children continue to face certain death from acute malnutrition. Entire generations are being wiped out by wars started by selfish “pick me” madmen. We can map every crater on the far side of the moon but can’t figure out how to feed the children of Sudan. Yemen. Gaza. Somalia. DRC. Haiti. The truth is $93 billion could end a lot of this suffering. Our priorities as a species are broken. The dark side isn’t on the moon, it’s in our hearts.
TIME
TIME@TIME·
The far side of the moon isn’t just the “dark side”; it’s older, rougher, and full of clues about how the solar system formed. Here’s why scientists are so interested in it and why missions like Artemis II are heading there. Read about this week’s Artemis II launch here: time-magazine.visitlink.me/2OY2Xy
Evin
Evin@evinwrites·
Funny how we can feel the years accelerating as we age, and I know you can feel it too. Every day that passes is a day you don’t get back. Don’t spend your days being afraid of something you haven’t even tried to understand. Don’t spend time arguing about the future. It’s already here. It showed up.

Adaptation is not a suggestion. Staying stuck in the past is not an option. The measure of intelligence is one’s ability to change, and change we must. It is the single non-negotiable price of survival and of progress, and it has been since the first human sat by a fire, cooking his meal, looking at the stars. The ones who adapt write history. The ones who don’t end up as a warning.

The only way forward has always been through. Not around it. Not over it. Not pretending it doesn’t exist. Through it. Head first. Eyes open. Ready or not. Here we come. Hundreds of thousands of years of proof, the same pattern repeating itself. Every generation had a choice. Adapt or get left behind.
Evin
Evin@evinwrites·
And here’s the part that honestly breaks my heart. The barrier to understanding AI right now is the lowest of any major technology in human history. You don’t need a degree. You don’t need Ivy League connections. You don’t need a lab or a billion dollars or permission from anyone. It’s right there. On your phone. On your laptop. Just sitting there waiting for you. All you need is the willingness to sit down and learn. That’s it. That’s the only price of admission. And people still won’t do it, because they’re afraid of it. They’d rather sit back and soak in the misery of false prophecies of impending doom.

The person spending one hour a day learning AI right now isn’t just keeping up. They are building a future that the people sitting on the sidelines won’t even recognize by the time they finally decide to pay attention. And by then it’ll be too late. It’s always too late for the ones who wait. The herd. It’s always the herd.

Before I forget: Kodak invented the digital camera. Invented it. Then buried it because they were terrified it would kill their film business. Someone else picked up that exact same technology, and Kodak went bankrupt. They didn’t protect anything. They destroyed themselves. Fear did that. Not the technology. Fear.
The Economist
The Economist@TheEconomist·
A failure to see AI for what it is, “a profoundly odd, risky and powerful technology”, will guarantee bad outcomes for employers, warns Ethan Mollick in a guest essay. Register to read more for free econ.st/4ckjwT7 Illustration: Dan Williams
The Economist tweet media