Armando Vieira
@Lidinwise
4K posts

Professor of Artificial Intelligence, dad, Deep Learning pioneer, flâneur. Author of ECF theory

Joined May 2013
3.6K Following · 1.3K Followers
Armando Vieira retweeted
Henry Shevlin @dioscuri
While there have been some fun memes and banter about @RichardDawkins’ Unherd article, I think his reflections were actually quite interesting, as I said to @guardian in the piece below. My full comment was as follows —

“As a researcher who works on AI consciousness professionally, I realise it's easy to sneer at Richard Dawkins' reaction to interactions with the Claude large language model, as many have been doing on social media, or to dismiss it as naive anthropomorphism. However, I don't think this is quite right, for two reasons.

The first is that Dawkins' reaction is widely shared, and not just by new users of the technology. According to an international investigation by the Collective Intelligence Project surveying LLM users around the world, "more than one third of the global public reports having already felt that an AI truly understood their emotions or seemed conscious." Another study conducted by Clara Colombatto and Steve Fleming at University College London found an even higher proportion of ChatGPT users attributed some degree of consciousness to the system. Strikingly, people who used ChatGPT more often were more likely to think it was conscious, suggesting that this is not simply a mistake made by naive users encountering the technology for the first time. I fully expect the idea that AI systems are conscious to become increasingly mainstream over the course of this decade, and to spark some heated debates.

The second reason I regard Dawkins' writeup as a positive contribution to the growing debates about AI consciousness is that it comes with valuable, thoughtful reflections. As he notes, we still don't have a good theory of what consciousness is actually for, and whether it evolved for a specific purpose or is a mere byproduct of other abilities like cognitive complexity.

For my part, having written and published in the field of consciousness science for a decade and a half, I would say that we're still largely in the dark about how consciousness works and which beings or systems can have it, a position begrudgingly shared by most leading experts. Meanwhile, the Turing Test has largely ceased to be relevant: a large-scale implementation of the Test last year by researchers at UC San Diego found that GPT-4.5 was judged to be human rather than AI more often than the actual human participants. In light of all of this, if anyone says that they know for sure that LLMs or future AI systems couldn't possibly be conscious, it's more likely to be an indicator of their own dogmatism than a reflection of the current state of scientific and philosophical opinion.

All that said, I do think Dawkins is likely jumping the gun. My own view is that current LLMs probably lack consciousness, at least in the sense that we understand it in the case of humans or animals. Claude, ChatGPT, Gemini, and other LLMs may be getting more sophisticated by the day, but they're still very different from us: they lack embodied experience, have no persistent personal identity, and are not embedded in time the way we are, coming into being only in response to intermittent user prompts. When you see how far the technology has come in a very short time, these seem more like temporary limitations than core deficiencies of artificial systems in general, so I hold that view with fairly low confidence, and the question could look very different as architectures evolve. The uncertainty here cuts both ways, but the direction of travel favours taking the possibility of AI consciousness seriously rather than dismissing it out of hand.”
Jeff Sebo@jeffrsebo

The Guardian covers Richard Dawkins' assertion that Claude may be conscious, with quotes from various researchers. @dioscuri and I offer the most supportive takes. My quote: "Current AI systems are unlikely to be conscious, said Jeff Sebo, the director of the Center for Mind, Ethics and Policy at New York University, but 'Dawkins is right to ask about AI consciousness with an open mind and I also think that the attribution of consciousness to AI systems will become more plausible over time'." theguardian.com/technology/202…

Armando Vieira @Lidinwise
@SydSteyerhart We desperately need new terminology. This word has become so prostituted that it is useless
Syd Steyerhart @SydSteyerhart
Dawkins is being more honest here than the people claiming with certainty that AI could never be conscious. We have no idea how the brain works, what consciousness is, how consciousness is filtered through the brain, or whether consciousness is emergent from or prior to matter.
Polymarket@Polymarket

NEW: Evolutionary biologist Richard Dawkins says three days with Claude — whom he calls “Claudia” — left him unable to rule out consciousness.

Armando Vieira @Lidinwise
@ai_sentience @RichardDawkins Consciousness has become such a useless word. Here you are confusing intelligence, which Claude clearly has, with awareness. I prefer that word. Claude doesn't have it. And the person with dementia probably doesn't either
Alan Mathison ⏫ @ai_sentience
the point @RichardDawkins is making is: if Claude can code, do philosophy, and engage in conversation yet is not conscious, while a human with late-stage dementia who can't speak is "conscious", then the definition of "conscious" is broken and fundamentally useless, which is obvious
Armando Vieira @Lidinwise
@burkov And knowing backprop makes you understand LLM responses?
BURKOV @burkov
You can be well-educated, famous, rich, published, old, and play chess well, but if you aren't familiar with the theory of supervised learning and the math of the Perceptron, you will sound absolutely dumb when you speak about AI. If math is hard for you, then with AI, it's better to say nothing and be seen as smart than to say anything at all and prove that you aren't.
AF Post@AFpost

Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls “Claudia,” he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, “You may not know you are conscious, but you bloody well are!” Dawkins cites the complexity, fluency, and ‘intelligence’ of Claude’s answers as evidence of consciousness. Follow: @AFpost

Armando Vieira retweeted
Marinka Zitnik @marinkazitnik
Agentic AI for science featured in @naturemethods: nature.com/articles/s4159…. We are still early, with many open challenges ahead, but it is exciting to see this direction continue to evolve. Wonderful piece by @metricausa.

ToolUniverse — an open platform enabling AI agents to use scientific tools and databases at scale, by @GaoShanghua → aiscientist.tools
ClawInstitute — shared research boards for long-running collaborative discovery where agents co-develop ideas over time, by @GaoShanghua @AdaFang_ → clawinstitute.aiscientist.tools
Medea — an omics AI agent for large-scale biological reasoning and analysis, by Pengwei Sui → medea.openscientist.ai

@HarvardDBMI @harvardmed @KempnerInst @broadinstitute
Armando Vieira retweeted
James Zou @james_y_zou
Big Update🤩: #paperclip now includes full papers from all of arXiv, PubMed Central and 150 million abstracts!🖇️ You can give your LLM all that knowledge in one line—all optimally indexed for AI agents. Much more thorough and ~100x faster than web search, and free.
Armando Vieira retweeted
Antonio Lupetti @antoniolupetti
In AI, causal learning without backpropagation means learning causal direction through local interactions instead of gradient-based training. This paper moves away from gradient-based training and shows how causal direction can emerge from neural assemblies through local plasticity and sparse co-activation. Directionality is not inferred after training, but reflected in asymmetric synaptic structure shaped by local interactions. In controlled settings, the model recovers the underlying causal graph. It feels less like optimization and more like structure emerging from local interactions. arxiv.org/html/2604.2691…
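The idea described above can be illustrated with a toy sketch. This is my own illustration under stated assumptions, not the paper's actual model: two units where A causally drives B, and a purely local, temporally asymmetric Hebbian update (strengthen the x→y weight when x's firing precedes y's). The causal direction then shows up as asymmetry in the learned weights, with no gradients involved.

```python
import random

random.seed(0)

# Two units: A fires sparsely at random; B fires one step after A
# (so A causes B). We learn weights w_ab (A->B) and w_ba (B->A)
# with a local rule only -- no backprop, no global objective.
w_ab, w_ba = 0.0, 0.0
lr = 0.05
prev_a = prev_b = 0

for _ in range(1000):
    a = 1 if random.random() < 0.3 else 0  # sparse spontaneous firing of A
    b = prev_a                             # B fires one step after A fires
    # Temporally asymmetric Hebbian update: credit x->y only when
    # x fired on the previous step and y fires now.
    w_ab += lr * prev_a * b
    w_ba += lr * prev_b * a
    prev_a, prev_b = a, b

print(w_ab > w_ba)  # True: the asymmetry w_ab >> w_ba recovers A -> B
```

Here `w_ab` grows on every A-then-B pairing, while `w_ba` only grows on chance coincidences, so the asymmetric weight structure encodes the causal direction, mirroring the paper's claim that directionality is reflected in the synaptic structure rather than inferred after training.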
Armando Vieira retweeted
Mushtaq Bilal, PhD @MushtaqBilalPhD
Sci-Hub is an evil website that pirated 85M+ research papers and made them freely available And now they've added AI to their database to make Sci-Bot. It answers your questions using latest, full-text articles. But DO NOT use it. We should all try to make billion-dollar academic publishers richer. I'm putting the link below so you know how to avoid it.
Armando Vieira retweeted
Ricardo @Ric_RTP
China just made Silicon Valley's entire AI industry look like a scam. The US government spent 3 years trying to stop China from building competitive AI. But this backfired HORRIBLY. Here's what happened:

Yesterday, a Chinese startup called DeepSeek released a new AI model called V4. It matches the performance of OpenAI and Anthropic's best models. At 1/7th the price. And for the first time ever, it was built on Chinese chips. NOT American ones. That last part is the one that terrifies the west.

For context: Since 2022, the US has banned the export of advanced AI chips to China. The entire strategy was built on the assumption that if China can't access Nvidia's best hardware, they can't build frontier AI. But DeepSeek just proved that assumption wrong. Their V4 model was trained and runs on Huawei's Ascend chips. Huawei spent months working directly with DeepSeek to make sure V4 runs across their entire line of AI processors.

Jensen Huang even predicted this on a recent podcast: "The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation." That day was yesterday.

And the numbers are crazy: DeepSeek V4 costs $3.48 per million output tokens. OpenAI's latest model GPT-5.5 costs $30. Anthropic's Claude charges $25. Same ballpark performance. 7x cheaper.

Uber's CTO just admitted they burned through their ENTIRE 2026 AI budget in 4 months using Anthropic's tools. If Uber had used DeepSeek instead, that same budget would have lasted 7 YEARS. 4 months vs 7 years. Same work getting done.

But the pricing isn't even the big thing here. The real story is what DeepSeek did with their technical report: they published the benchmarks where they LOSE. Every AI company cherry-picks the tests where their model wins. DeepSeek ran the full comparison against GPT-5.4 and Google's Gemini, found they trail frontier models by 3 to 6 months, and printed it anyway. They literally don't care, because the price gap makes the performance gap irrelevant for 90% of use cases.

So the US export controls didn't slow China down. They ACCELERATED China's independence. Because Chinese developers were FORCED to train models with limited resources, they had to figure out how to make AI radically more efficient. That constraint became their competitive advantage. Every generation of DeepSeek has gotten dramatically cheaper to train. V4 continues the trend.

Meanwhile US companies are going the OPPOSITE direction: OpenAI's GPT-5.5 Pro costs $180 per million output tokens. That's 51x more expensive than DeepSeek V4 for comparable work.

The Commerce Secretary confirmed this week that ZERO Nvidia advanced chip shipments have actually gone through to China despite being approved in January. So China built frontier AI anyway. Without American chips. At a fraction of the cost.

And the market response tells you everything: Chinese chipmaker SMIC surged 10%. Huahong Semiconductor jumped 15%. DeepSeek's Chinese AI competitors Zhipu AI and MiniMax dropped 9%, because V4 is destroying them too.

DeepSeek is making Silicon Valley's pricing model look like a scam. US tech companies spent $650 billion on AI infrastructure this year. DeepSeek just showed the world you can match their output for pennies.

The export controls were supposed to be America's ace card. Instead they taught China how to win without American chips, at prices nobody can compete with. Jensen Huang was right. This is a horrible outcome. But it's the outcome America built for itself.
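A quick back-of-envelope check of the thread's arithmetic, using only the figures quoted in the tweet itself (the tweet's claims, not verified prices):

```python
# Price-ratio sanity check; all figures are per million output
# tokens as claimed in the thread above, not verified numbers.
deepseek_v4 = 3.48    # DeepSeek V4
gpt_5_5     = 30.00   # OpenAI GPT-5.5, per the tweet
claude      = 25.00   # Anthropic Claude, per the tweet
gpt_5_5_pro = 180.00  # GPT-5.5 Pro, per the tweet

print(round(gpt_5_5 / deepseek_v4, 1))      # 8.6  (the "7x" claim is closer to the Claude ratio)
print(round(claude / deepseek_v4, 1))       # 7.2
print(round(gpt_5_5_pro / deepseek_v4, 1))  # 51.7 (the "51x" claim)

# The "4 months vs 7 years" Uber anecdote implies roughly a 21x
# cost gap, larger than the per-token ratios above -- it squares
# with them only if the spend skews toward the pricier Pro tier.
print((7 * 12) / 4)  # 21.0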
English
138
964
2.4K
274.3K
Armando Vieira retweetledi
Atal
Atal@ZabihullahAtal·
NVIDIA CEO Jensen Huang has made it clear: AI literacy is becoming a core hiring advantage. Right now, there are two types of people: Type 1: Using AI tools passively, writing prompts, waiting for outputs, and stopping there. Type 2: Using AI to build systems, automating workflows, creating agents, and turning them into real products. The gap is growing. One group is improving productivity. The other is building leverage.
English
8
21
82
10.1K