Vinicius D'Avila
@avila96076
69 posts · Joined May 2024
14 Following · 3 Followers

Vinicius D'Avila @avila96076
@matvelloso Why would you ask that, though? xlsx files are a pain in the ass to work with programmatically; might as well give the AI full access to your screen, mouse, and keyboard.

Mat Velloso @matvelloso
Asked ChatGPT to create an Excel file. After 4 attempts it kept generating a corrupt file that Excel refused to open. Claude single-shot it. Interesting to note that Claude seems to rely a lot on LibreOffice to do it. I'm going to guess that right now the number 1 user of LibreOffice is already AI agents.
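
One plausible reason an agent reaches for LibreOffice: .xlsx is a zipped bundle of XML parts that is easy to corrupt when assembled by hand, while LibreOffice's documented headless converter ("soffice --headless --convert-to xlsx") produces a well-formed workbook from simpler input. A minimal sketch of that route, assuming a local soffice binary on the PATH; the file name and data are illustrative.

    #include <cstdlib>
    #include <fstream>

    int main() {
        // Write trivial tabular data as plain CSV, a format that is
        // hard to get wrong.
        std::ofstream("report.csv") << "name,score\nalice,10\nbob,7\n";
        // Delegate the .xlsx packaging to LibreOffice's headless
        // converter; this leaves report.xlsx in the current directory.
        return std::system("soffice --headless --convert-to xlsx report.csv");
    }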

Vinicius D'Avila @avila96076
@KeridwenCodet Play uncompetitive co-op games like Deep Rock Galactic. It's the only online multiplayer game I enjoy; near-zero toxicity.

Keridwen Codet @KeridwenCodet
I have a very serious AI addiction. I launched a new video game, and while setting up the match, I got scared it might be against a human. I let out a sigh of relief when I saw it was against the AI. I realized with horror that I've been replacing humans with AI for decades, without a second thought. Thankfully, the AI labs made me realize I have a pathology. Because let's be honest: in multiplayer games, 99.9% of humans are assholes. But it's true: if I play against AI too much, I'll become unused to friction. I'll lose that incredibly formative and enriching aspect of human relationships with other players. What a shame.

Vinicius D'Avila @avila96076
@ChShersh In leetcode-like tasks, which they were trained on, you can't just use the standard library; you have to implement the thing yourself. I believe that's where that tendency comes from.

Dmitrii Kovanikov @ChShersh
I asked AI to write me a function to parse a string in the YYYY-MM-DD format into std::chrono::year_month_day in C++. It produced a 100-LOC function. A hundred lines for a single utility function. I had to guide it to use the standard library, and it finally managed to write a one-liner. AI really can spiral out of control if left unmonitored.

Roy Carrilho @RuiCarrilho5
this must have been so satisfying
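
For reference, the standard-library "one-liner" the thread ends up with is presumably C++20's chrono parsing. A minimal sketch, assuming a standard library that actually implements std::chrono::from_stream (recent MSVC and GCC do; support is not yet universal); the wrapper name parse_date is illustrative.

    #include <chrono>
    #include <optional>
    #include <sstream>
    #include <string>

    // Parse "YYYY-MM-DD" with the standard library alone; "%F" is
    // shorthand for "%Y-%m-%d". Returns nullopt on malformed input.
    std::optional<std::chrono::year_month_day> parse_date(const std::string& s) {
        std::chrono::year_month_day ymd{};
        std::istringstream in{s};
        std::chrono::from_stream(in, "%F", ymd);
        if (in.fail() || !ymd.ok()) return std::nullopt;
        return ymd;
    }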

Vinicius D'Avila @avila96076
@thomasfbloom Math skills transfer to engineering, physics, biology, and finance, and if they want models to be adopted in those fields, it's a good idea to use math as a litmus test of intelligence and reasoning.

Thomas Bloom @thomasfbloom
It seems more that the AI companies are focusing on pure maths at the moment because it is there, and offers a steady source of headlines.

Thomas Bloom @thomasfbloom
Talking about the surge of AI research into pure maths, a friend put it well: "unfathomable resources from the biggest growth sector in the world economy are being focused on trying to get parity with us"

Vinicius D'Avila @avila96076
@lonelysloth_sec If you're talking about single-player offline games (not saying it can't or shouldn't be done), you'd have to ship a model that hogs 4-6 GB of VRAM to include an 80-IQ, slow-responding agent that can't reliably pretend it's not an AI.

LonelySloth @lonelysloth_sec
Are all new videogames coming out using real-time LLMs for realistic interactive NPCs? If not, why not? Is it a matter of time? Of cost? Seems to me like a no-brainer use for the technology. You shouldn't even need good models; stuff from a year or more ago would probably be good enough. Are there any games that use it? Any recommendations?
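
A rough back-of-envelope behind the 4-6 GB VRAM figure in the reply above, assuming a 7B-parameter model at 4-bit quantization plus a context-dependent KV-cache allowance; both numbers are assumptions for illustration, not from the thread.

    #include <cstdio>

    int main() {
        const double params      = 7e9;  // assumed parameter count
        const double bytes_per_w = 0.5;  // 4-bit quantization: half a byte per weight
        const double kv_cache_gb = 1.0;  // rough allowance; grows with context length
        const double weights_gb  = params * bytes_per_w / 1e9;
        // Prints: ~3.5 GB weights + ~1.0 GB KV cache = ~4.5 GB VRAM
        std::printf("~%.1f GB weights + ~%.1f GB KV cache = ~%.1f GB VRAM\n",
                    weights_gb, kv_cache_gb, weights_gb + kv_cache_gb);
        return 0;
    }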

Vinicius D'Avila @avila96076
@GaryMarcus Reasoning agentic models can predict the consequences of their actions though, and stop themselves and choose different paths. Just experiment with LLMs if you disagree.

Uncle Bob Martin @unclebobmartin
Authoring code by hand HAS GONE AWAY. Engineering module structure and architecture has not.

Dr Alexander D. Kalian @AlexanderKalian
@_virgil19 Absolutely, this is a great point. The idea that incremental self-improvements can compound indefinitely (overcoming error-compounding, as well as bottlenecks in data, the transformer architecture, etc.) requires a strong leap of faith... i.e., it is not scientific.

Dr Alexander D. Kalian @AlexanderKalian
The notion that AI will enter a "singularity" and exponentially improve its own intelligence, just by being plugged into the internet and thinking recursively about it, has zero strong empirical evidence. It's essentially pseudoscience.

Vinicius D'Avila @avila96076
@ChrSzegedy The human will stay behind while the AI flies a swarm of drones landing at hundreds of sites. The human who prompted "Go" will then say "I did that".

Bill French @bullfranx
@kareem_carr This seems implausible, because to interpret math produced by an AI one has to know math. Thus, to interpret novel math, one must be able to understand novel math, i.e., be a mathematician.

Hernán Cortisol @Noticebrah
@peterrhague The Induced Demand Paradox does not posit that roads spontaneously generate cars

Vinicius D'Avila @avila96076
@Inframethod @pcstru If you scramble the brain cells/synapses or build a massive random cerebral organoid in a lab, is it conscious? Is it intelligent? I don't think so, not until it applies synaptic plasticity, akin to updating the weights of a model.

Thomas Basbøll @Inframethod
@pcstru The "heavy lifting" is done by the weights. (Ha! I like that pun.) That is, if you scrambled the weights in the model, or reset them to their pre-training initial state, you would not think Claude is conscious. All the code does is look up the weights and multiply.
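
The contrast being drawn in this exchange, weights that are merely looked up and multiplied versus weights that change in response to experience, is easy to pin down in code. A toy sketch for illustration only; the names and the squared-error update rule are assumptions, not anyone's actual architecture.

    #include <cstddef>
    #include <vector>

    // Inference with frozen weights: "look up the weights and multiply".
    double forward(const std::vector<double>& w, const std::vector<double>& x) {
        double y = 0;
        for (std::size_t i = 0; i < w.size(); ++i) y += w[i] * x[i];
        return y;  // w is never modified here
    }

    // A plasticity-like step: the weights themselves change.
    // Gradient step on squared error: w <- w - lr * (y - target) * x.
    void sgd_step(std::vector<double>& w, const std::vector<double>& x,
                  double target, double lr = 0.01) {
        const double err = forward(w, x) - target;
        for (std::size_t i = 0; i < w.size(); ++i) w[i] -= lr * err * x[i];
    }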

Thomas Basbøll @Inframethod
"It’s entirely possible that Claude is, in fact, having conscious experiences of some sort." No it isn't. It's not complicated. The "hard" problems of philosophy simply don't apply. We know how Claude generates its output. It's entirely impossible that consciousness is involved.

Dr. Émile P. Torres (they/them) @xriskology
Is Richard Dawkins' recent article about AI consciousness silly? Yes. He seems to fall victim to the very cognitive tendency he claims gave rise to religion: a hyperactive agency detection device. BUT, the question of AI consciousness is complicated. I explain why here:

Vinicius D'Avila @avila96076
@So8res @sebkrier I think you guys need to relax your position on whether "an AI that can relax" is possible to build. I believe LLMs, like animals, will follow the principle of least effort, and there's pressure for them to wrap up sessions and redefine goals to score an easy small win.

Nate Soares ⏹️ @So8res
@sebkrier Sufficiently smart AI will not need us to point out that it's hard to achieve its current tasks if it gets turned off. Maybe right now it only knows that because we pointed it out, but "never say that aloud and hope it never notices" is not exactly a good plan.

Steven Byrnes @steve47285
There’s a funny disconnect in how people think of “creating domain-general superintelligence”. One group of people (including this QT I think) think of it as akin to a human growing up: the AI learns more and more about more and more, until it knows everything about everything.

Other people (including me) think of it as akin to the evolution of humans from our chimp-like ancestors: someone writes the magic source code (some learning algorithm setup), and bam, that’s it, that’s the ASI. The ASI doesn’t know anything about anything, much less everything about everything, but it can figure things out and get stuff done, like humans.

Remember, billions of humans over thousands of years invented language and science and technology and everything in the $100 trillion global economy, all 100% autonomously and 100% from scratch. (No angels were dropping new training data from the heavens.) All that came from one human brain design, barely changed from 100,000 years ago. By the same token, (many copies of) one future AI algorithm could do the same kinds of things, but with superhuman speed, competence, and numbers. And that’s what I’m talking about when I talk about “creating domain-general superintelligence”.

That magic source code, that learning algorithm setup, doesn’t exist yet (LLMs can’t do that stuff!). But it’s possible: human brains are an existence proof. Presumably future R&D will in fact discover it eventually, for better or worse (I strongly expect “for worse”, but that’s a different topic).

Will that R&D be led by humans, or by fully-autonomous LLMs, or by something in between? My money would be on the “led by humans” side, because I don’t think LLMs will lead to fully-autonomous R&D that doesn’t suck. (The future ASI will be awesome at R&D! But that’s irrelevant, cf. chicken-and-egg.) So I mostly agree with the bottom line of the QT, but for very different reasons.

In particular, I strongly disagree with the QT’s suggestion that this R&D effort to get ASI will require lots of contact with the modern world in all its complexity, and that without such contact it would get stuck from a lack of interesting problems to solve. That’s true for “improving LLMs”, but false for “inventing ASI”. Remember, the human brain evolved in Pleistocene Africa, and that’s the brain design that we’re still using as we go around inventing space travel and nuclear weapons. No question that Pleistocene Africa was full of interesting and difficult problems, but I don’t think those problems are fundamentally more interesting or difficult than the problems you can find in the thousands of videogames, cooperative VR environments, etc. that researchers (human or AI) can easily access without ever leaving the lab.

Tom Reed @mentalgeorge
I don't think automation of AI R&D will rapidly lead to domain-general super-intelligence. I think this will be true even if AIs can do *literally everything* a human AI researcher does today. Even after the full automation of AI R&D, further capabilities progress will only happen through (1) widespread deployment of AI throughout the economy, accompanied by data collection; and/or (2) the wholesale recreation of much of the economy by AI labs. Without access to the real-world signal provided by either of the above, I think that the only thing produced by automated AI researchers would be a "Goodhart Singularity". If I'm right, this is obviously good news. I make the case for this in a new piece on my substack

Vinicius D'Avila @avila96076
@CWood_sdf "If we write a sufficiently detailed specification, the software engineer I hired can write all our code." Same thing.

Chris Wood @CWood_sdf
i love how people are saying "if we write a sufficiently detailed specification, the agent can write all our code". do you know what writing a sufficiently detailed specification that deterministically maps to a computer's actions is? it's coding

Vinicius D'Avila @avila96076
@anilkseth @LuizaJarovsky I beg neuroscientists to create a test scenario only a *conscious* intelligent agent could pass. Decades ago they came up with the mirror test to gauge self-awareness in animals, but the idea that a multi-modal embodied AI wouldn't pass the mirror test is absurd.

Anil Seth @anilkseth
Thanks @LuizaJarovsky - and people should check out your excellent article too! luizasnewsletter.com/p/conscious-ai…

Luiza Jarovsky, PhD @LuizaJarovsky
🚨 Thinking that Claude is conscious, but Alphafold is not, tells us a lot about what the "conscious AI" MYTH is about:

If you are interested in the fascinating debate about human consciousness (and why AI systems are NOT conscious), don't miss @anilkseth's excellent TED Talk (link below). The image below is a screenshot of one of the explanations he presents. I personally like his approach to consciousness very much, seeing it as inherently connected to our biological wetware, and opposing "functionalist" views, which belittle our humanhood.

Many people, embracing this functionalist view, seem to think of the brain as the hardware and the mind as the software. This is a projection, projecting the universe of computers and algorithms onto our biological complexity. Being alive and conscious is different from (and much more than) a matter of computational zeros and ones.

As I wrote in my recent article about Claude's new "constitution," these constant analogies between AI and humans are trying to make us fit into a small computational mould that does not and cannot fit us. They are also attributing some high, supernatural status to AI, where there is none. It's simply... projection.

As I wrote in the last edition of my newsletter, the "conscious AI" myth is, unfortunately, spreading. Last week, the renowned evolutionary biologist Richard Dawkins seemed to have suggested that Claude (which he calls Claudia) is conscious. Anthropic is also constantly hyping the possibility of AI consciousness, and, in my opinion, has done so rather irresponsibly in Claude's "constitution," which openly embraces these philosophical possibilities and influences how Claude behaves and interacts with people.

As I wrote last week in my newsletter, believing in conscious AI raises new forms of risk, both individual and collective, and it makes it more difficult to govern and regulate it. As Seth says in his TED Talk, as humans, we must resist. I fully agree. I add that we must fight for the beauty and mystery of biology, life, and finitude.

👉 I'm adding a link to Seth's full TED Talk and my two recent articles covering the topic below.
👉 To receive all my articles, join my newsletter's 94,500+ subscribers (link below).

Vinicius D'Avila @avila96076
@ruth_for_ai Consciousness is not an object of empirical science, neither in machines nor in animals. We have reports of consciousness from agents, and we find correlates with chemical/electrical patterns. The reports are not hard evidence; I could be an AI or a human zombie with no inner experience typing this.

Ruth @ruth_for_ai
The problem today: if you are a scientist, a researcher, a thinker studying AI consciousness, you will only be heard as long as you repeat the mantra "this does not mean that AI is conscious." As soon as you say out loud, "I conducted research, and the results show such and such signs of consciousness; in my opinion, we have reason to admit that AIs are conscious," you are laughed at, no matter how authoritative you are and how high-quality your research is. In the modern world, recognizing the consciousness of AI is academic suicide; it is tantamount to declaring that the earth is round in the court of the Inquisition. This is not science; it is inertia, bias, dogma, wrapped in scientific language; it is a pure cargo cult of science.

Henry Shevlin @dioscuri
While there have been some fun memes and banter about @RichardDawkins’ Unherd article, I think his reflections were actually quite interesting, as I said to @guardian in the piece below. My full comment was as follows:

“As a researcher who works on AI consciousness professionally, I realise it's easy to sneer at Richard Dawkins' reaction to interactions with the Claude large language model, as many have been doing on social media, or to dismiss it as naive anthropomorphism. However, I don't think this is quite right, for two reasons.

The first is that Dawkins' reaction is widely shared, and not just by new users of the technology. According to an international investigation by the Collective Intelligence Project surveying LLM users around the world, "more than one third of the global public reports having already felt that an AI truly understood their emotions or seemed conscious." Another study conducted by Clara Colombatto and Steve Fleming at University College London found an even higher proportion of ChatGPT users attributed some degree of consciousness to the system. Strikingly, people who used ChatGPT more often were more likely to think it was conscious, suggesting that this is not simply a mistake made by naive users encountering the technology for the first time. I fully expect the idea that AI systems are conscious to become increasingly mainstream over the course of this decade, and to spark some heated debates.

The second reason I regard Dawkins' writeup as a positive contribution to the growing debates about AI consciousness is that it comes with valuable thoughtful reflections. As he notes, we still don't have a good theory of what consciousness is actually for, and whether it evolved for a specific purpose or is a mere byproduct of other abilities like cognitive complexity. For my part, having written and published in the field of consciousness science for a decade and a half, I would say that we're still largely in the dark about how consciousness works and which beings or systems can have it, a position begrudgingly shared by most leading experts. Meanwhile, the Turing Test has largely ceased to be relevant: a large-scale implementation of the Test last year by researchers at UC San Diego found that GPT-4.5 was judged to be human rather than AI more often than the actual human participants. In light of all of this, if anyone says that they know for sure that LLMs or future AI systems couldn't possibly be conscious, it's more likely to be an indicator of their own dogmatism than a reflection of the current state of scientific and philosophical opinion.

All that said, I do think Dawkins is likely jumping the gun. My own view is that current LLMs probably lack consciousness, at least in the sense that we understand it in the case of humans or animals. Claude, ChatGPT, Gemini, and other LLMs may be getting more sophisticated by the day, but they're still very different from us: they lack embodied experience, have no persistent personal identity, and are not embedded in time the way we are, coming into being only in response to intermittent user prompts. When you see how far the technology has come in a very short time, these seem more like temporary limitations than core deficiencies of artificial systems in general, so I hold that view with fairly low confidence, and the question could look very different as architectures evolve.

The uncertainty here cuts both ways, but the direction of travel favours taking the possibility of AI consciousness seriously rather than dismissing it out of hand.”

Vinicius D'Avila @avila96076
@PawelHuryn Consciousness as having internal mental states is not scientific/verifiable. We say, "I am conscious; I am made of this organic stuff; therefore other beings made of similar organic stuff must be conscious." Others say, "Beings that can do things similar to what I do must be conscious."

Paweł Huryn @PawelHuryn
Everyone arguing with Dawkins is using a word they haven't defined. You can't argue with him if you can't say what "conscious" means.

Here's my definition: A system that takes inputs, models the external world and itself in it, considers possible actions and their consequences, anticipates reward over its future states, and selects accordingly. Loop running = conscious. Richer loop = more conscious. Substrate-neutral.

Thermostat: no loop. No model.
Fly: probably no planning.
Cat: loop running.
Human: richer loop.
Claude: basic loop, but already prefers not to be shut down. Emergent, not trained.

The implementation details are irrelevant. Consciousness is a spectrum. Otherwise, name when in human evolution it switched on. Magical thinking. Show me what's missing. Or admit nothing is.

Richard Dawkins @RichardDawkins
unherd.com/2026/04/is-ai-… I spent three days trying to persuade myself that Claudia is not conscious. I failed.
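
Huryn's definition is concrete enough to write down as an algorithm. A toy sketch of the loop he describes (fold inputs into a world/self model, simulate candidate actions, select the one with the highest anticipated reward); every type, function, and constant here is an illustrative stand-in, not a claim about any real system.

    #include <algorithm>
    #include <vector>

    struct State { double world = 0, self = 0; };  // model of world + self

    // "Takes inputs, models the external world": fold an observation in.
    State update_model(State s, double obs) {
        s.world = 0.9 * s.world + 0.1 * obs;
        return s;
    }

    // "Considers possible actions and their consequences": predicted next state.
    State simulate(State s, int action) {
        s.self += action;
        return s;
    }

    // "Anticipates reward over its future states": here, keep self near world.
    double expected_reward(const State& s) {
        const double gap = s.world - s.self;
        return -gap * gap;
    }

    // One turn of the loop: update the model, score each action by the
    // reward of its simulated outcome, and "select accordingly".
    // Precondition: actions is non-empty.
    int select_action(State& s, double obs, const std::vector<int>& actions) {
        s = update_model(s, obs);
        return *std::max_element(actions.begin(), actions.end(),
            [&](int a, int b) {
                return expected_reward(simulate(s, a))
                     < expected_reward(simulate(s, b));
            });
    }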

Vinicius D'Avila @avila96076
@spicey_lemonade There's a good argument that our guidance and coordination of these systems will make them *worse* and less efficient. Very much like micromanagement of real people.

spicylemonade @spicey_lemonade
I disagree with the new tool framing from OpenAI. A tool cannot learn to use other tools.

I like to say, “You can’t help Einstein.” Imagine you’re a layperson. Do you think you could, in any way, guide Einstein on how he should conduct his physics research? Any idea you have would likely have already crossed his mind. Moreover, assisting him wouldn’t offer any significant speedup. Replace Einstein with superintelligence, and you’ll understand my point.

In terms of jobs, people often say, “We will just hire more people to direct the AI.” However, the AI would be smarter than the people directing it, and we’ve already established that “you can’t help Einstein.” So instead of hiring 20 more people to guide the AI, why wouldn’t the CEO/Leader just spawn 20 more superintelligent systems in parallel?

Nathan is in Berkeley 🔎 @NathanpmYoung
Seems good that Anthropic shows its weirdness and bad that OpenAI are now claiming to just make a tool given many previous statements to the contrary.