Nikola Danaylov Ⓥ

20.8K posts


@singularityblog

Futurist Keynote Speaker | Raconteur & Provocateur | Unlocking Context. Leveraging Change. Driving Impact. 👉 https://t.co/GnzKCZ8Hny

Worldwide · Joined October 2009
1.5K Following · 7.6K Followers
Nikola Danaylov Ⓥ@singularityblog·
11 yrs ago, Martin Ford warned us: This automation is different. Past waves replaced muscle. This one replaces mind, and mind has no refuge. He was early. We weren't ready. The conversation he tried to start in 2015 is well overdue. #TechUnemployment 🎙️ snglrty.co/3ZVGOY7
Nikola Danaylov Ⓥ@singularityblog·
We don't need more #AI. We need a better why for our AI. "Solve everything" is not a philosophy. It is an assumption. #Technology amplifies intention. It does not supply it. Intelligence scales. #Wisdom does not. More is inevitable. Better is a choice. snglrty.co/3MXKdTV
Nikola Danaylov Ⓥ@singularityblog·
I wrote this 11 years ago. Today it matters more than ever. #AI fear dominates every headline. But is fear really the right response to the most transformative moment in human history? I don't think so. My Top 10 Reasons NOT to Fear the Singularity 👇 snglrty.co/4bKVOOm
Nikola Danaylov Ⓥ@singularityblog·
11 yrs ago, I published the Top 10 Reasons to Fear the #Singularity. #Extinction. Slavery. Big Brother #AI. Loss of humanity. Hawking warned us. Some called him an alarmist. It felt like sci-fi in 2015. It reads like a news feed now. Do you fear it? snglrty.co/4t6Oi7H
Nikola Danaylov Ⓥ@singularityblog·
Marshall Brain wrote Manna in 2003 — predicting #AI, #automation & the end of work as we know it. Most dismissed it. Today it reads like prophecy. He didn't just predict the disruption. He asked what we owe each other when it comes. He is missed terribly. 🎙️ snglrty.co/3Zqf7qr
Nikola Danaylov Ⓥ@singularityblog·
What if the greatest threat to humanity isn't a weapon — it's a line of code? In 2015, @romanyam was already in the trenches: #AGI safety, #AI alignment, solving the problem of aligning a mind smarter than you. 11 yrs later, it's no longer hypothetical. snglrty.co/43Y38Tb
Nikola Danaylov Ⓥ@singularityblog·
Too many people misread #Dune. They walked out of Parts 1 & 2, cheering for Paul Atreides. They saw exactly what Frank Herbert was afraid they would see. Dune is not a hero's story. It is a warning. And the sequels go places most fans never expected. snglrty.co/41l0Ex2
Nikola Danaylov Ⓥ@singularityblog·
11 years ago, Salim Ismail broke down Exponential Organizations — MTP, SCALE & IDEAS, why Apple & Google are prototype ExOs, and why large organizations may be nearing their end. The questions he raised haven't been answered. They've gotten louder. 🎧 snglrty.co/4ejCp8t
Nikola Danaylov Ⓥ@singularityblog·
11 yrs ago I spoke with palliative care physician Dr. Michael Fratkin about #death, the kind that visits 150,000 families daily. That conversation matters more now than ever. We build AI to extend life. But can we talk honestly about how we want to die? snglrty.co/49O9FSM
Nikola Danaylov Ⓥ@singularityblog·
What if dying is just a problem we haven't solved yet? DJ MacLennan signed up with Alcor in 2007, choosing to see death as unsolved rather than inevitable. #Cryonics. Glass-state time travel. A personal mortality experiment. snglrty.co/40113VA
Nikola Danaylov Ⓥ@singularityblog·
@gailcweiner People ask me this constantly: What skills will matter when AI can do almost everything? The skills AI cannot replicate — and the ones that grow more valuable as AI makes everything else cheap. snglrty.co/4srQOpn
Gail Weiner@gailcweiner·
Daniela Amodei, president of Anthropic, just said that human skills - EQ, communication, empathy, curiosity - will matter MORE as AI gets smarter. Not less. She's right.

Knowing those skills matter doesn't mean your team knows how to use them WITH AI. That's the gap. Companies are rolling out AI tools and wondering why adoption is patchy. Why the team is resistant. Why only one person is actually using it.

It's not a technology problem. It's a trust problem. And trust between humans and AI has to be built, exactly the way trust between humans gets built. Low stakes first. Curiosity before performance. Relationship before results. That's what I do. fortune.com/2026/02/07/ant…
Nikola Danaylov Ⓥ@singularityblog·
@TallPhilosopher I hear you, brother. But history shows many cases where "an imperfect policy" produced worse results than no policy at all. So the devil is in the details. Always. We must be careful.
John Champagne@TallPhilosopher·
@singularityblog No doubt, a well-designed policy is preferable to a poorly-designed policy. We are currently allowing the theft of natural wealth from the people. We should strive for a policy that addresses that moral defect. An imperfect policy will be preferable to no policy at all.
Nikola Danaylov Ⓥ@singularityblog·
@TallPhilosopher Interesting idea, but implementation is tricky. Very tricky. Just like carbon credits, there is a lot of room for abusive and fraudulent practices that end up doing more harm than good.
John Champagne@TallPhilosopher·
@singularityblog UBI proponents could easily become the most effective environmental activists, too, if they identify fees charged to industries proportional to environmental and other harms as the preferred funding mechanism for UBI.
Nikola Danaylov Ⓥ@singularityblog·
Is old age a disease we haven't learned to treat yet? Dr. Michael Fossel — Stanford MD, PhD in neurobiology — doesn't think #aging is inevitable. He thinks it's reversible using telomerase therapy. Not slow. Reverse. 🎧 snglrty.co/49NBzOL
Nikola Danaylov Ⓥ@singularityblog·
What skills will matter when #AI can do almost everything? Not the skills AI does best. The ones it cannot replicate are the ones that become more valuable precisely because AI makes everything else cheap. When answers are free, questions are priceless. snglrty.co/4srQOpn
Nikola Danaylov Ⓥ retweeted
Muhammad Ayan@socialwithaayan·
🚨 BREAKING: Anthropic just published a study proving their own AI makes you worse at learning new skills. Not some outside critic taking shots. The company that made Claude themselves.

They put together this experiment with about fifty developers learning a brand-new programming library they'd never seen before. One group had AI help the whole way through. The other group went at it without any assistance.

The ones with AI felt productive as hell. Answers came quick. They were shipping code left and right. Everything felt smooth. Then the tests hit. Real understanding of the library? The AI group got crushed. Weaker conceptual grasp. They struggled more just reading through code. Debugging became a nightmare. The AI had been doing the thinking, so their own brains never had to step up.

I caught myself doing the exact same thing a while back when I was forcing through a new framework. Felt like a genius until I had to explain it without the chat open. Brutal.

They went deeper and mapped out the different ways people actually interact with these tools while coding. Only some of those ways let real learning happen. The others give you a fake sense of progress: you're moving fast, tasks are getting done, but your actual skill level stays at zero.

The worst offender by far was full delegation. People who just handed the whole thing over to the AI got a little speed boost but walked away knowing less than they did at the start. They used the tool. The tool used their time.

And here's what really lands different. This isn't some random researcher warning about AI from the outside. These folks work at Anthropic. They build the models. They put this line straight in the paper: AI-enhanced productivity is not a shortcut to competence. That sentence is going to stick with a lot of people.

The thing is, this isn't just about developers. Every field right now is pushing beginners to use AI to "learn faster." Law, medicine, writing, data work, finance, engineering, you name it. But if leaning on AI during the actual learning phase quietly damages how real competence forms, then we've got a generation building careers on ground that was never properly packed down. They can get the model to spit out answers. Thinking for themselves when it counts? Different story.

What most people are missing completely, and what they also pointed out, is that the skills you'll need to properly supervise AI in the future (the deep understanding, the ability to read between the lines, to catch its mistakes) are exactly the ones getting eroded right now. You can't audit what you never learned to build yourself.

It's kind of like learning guitar by only ever playing along with perfect backing tracks and auto-tune. You can perform songs pretty quickly, but take the training wheels off in a real jam session and suddenly your ear and timing never developed the way they should have.

Anthropic isn't out here saying ditch the AI completely. They're saying learn the thing first, on its own terms. Bring the AI in after. If you're starting something new, maybe sit in the suck for a bit longer than feels comfortable before calling in the assistant.