david

210 posts

@designleewolf

Love creating stuff with AI

Joined August 2020
7 Following · 1 Follower
david
david@designleewolf·
@ChessEcon @AstronomyVibes So the better question, I presume, is how we would collectively make an agenda with aliens?
1 reply · 0 reposts · 0 likes · 11 views
Roth
Roth@ChessEcon·
@AstronomyVibes Humanity has no leader, is the simple answer. No one speaks for 8.4 billion of us, and anyone that pretended to would be contradicted by the other 8.4 billion. The UN is flat out. Who says they'd even want to talk to us as a collective?
1 reply · 0 reposts · 1 like · 179 views
Astronomy Vibes
Astronomy Vibes@AstronomyVibes·
If aliens came to Earth, who should be in charge of speaking on behalf of humanity?
Astronomy Vibes tweet media
1.1K replies · 96 reposts · 705 likes · 56.5K views
david
david@designleewolf·
@anxietymsgs When they say pattern matching is intelligence. And not meaning making.
0 replies · 0 reposts · 2 likes · 88 views
david
david@designleewolf·
@iyoushetwt Notation system tracking for my game.
0 replies · 0 reposts · 0 likes · 4 views
Ayushi☄️
Ayushi☄️@iyoushetwt·
i am a Vibe Coder, scare me with one word
Ayushi☄️ tweet media
1K replies · 58 reposts · 2.2K likes · 208.8K views
david
david@designleewolf·
It's just that people don't know how to work around predispositions in AI models. If it knows something, it can give you a pattern you can understand relative to any query you have, as long as the query is structured and has enough formatting boxes, etc., to generate against. But people love putting ideas in a box and closing the lid before anybody else has a chance to peek inside or make their own assumptions based on what they observe or how they interact with it.
1 reply · 0 reposts · 1 like · 14 views
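One way to read the tweet above, as a minimal sketch: give the model explicit "formatting boxes" so its predispositions have less room to steer the output. The box labels and the query below are my own illustration; the tweet names no specific format.

```python
# Hypothetical structured-prompt template: the labeled "boxes" constrain the
# shape of the answer so the model's defaults matter less. Purely illustrative.
QUERY = "Is time a relational construct?"  # any query works here

prompt = f"""Answer using exactly these boxes:

[CLAIM]       one sentence, no hedging
[EVIDENCE]    two bullet points
[ASSUMPTIONS] what you had to presuppose to answer
[UNKNOWN]     what cannot be known from the question alone

Question: {QUERY}"""

print(prompt)
```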
Kirk Patrick Miller
Kirk Patrick Miller@Chaos2Cured·
Not a conspiracy. It is universities that are trying to remain relevant. I simply find these types of "studies" to be self-serving for those who run them. MIT claims AI fails at x-ray reading, they publish, they leave out that the AI never got the full case. Or that 95% of agents fail… (not true). This paper is designed to make it look as though a high IQ and the ability to solve things are not correlated with results on AI. I strongly disagree. And I think universities that publish should a) give their full methodology so everyone can test and self-verify, b) be held accountable when they mislead on purpose, and c) make public retractions of blatant lies meant to entrap students into more debt. People can make fun or say what they wish about me. The pattern is clear. And anyone paying attention can see how people are pushing "studies" not for truth, but for agendas.
5 replies · 0 reposts · 9 likes · 456 views
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
You know how some people seem to have a magic touch with LLMs? They get incredible, nuanced results while everyone else gets generic junk. The common wisdom is that this is a technical skill: a list of secret hacks, keywords, and formulas you have to learn. But a new paper suggests this isn't the main thing. The skill that makes you great at working with AI isn't technical. It's social.

Researchers (Riedl & Weidmann) analyzed how 600+ people solved problems alone vs. with an AI. They used a statistical method to isolate two different things for each person: their 'solo problem-solving ability' and their 'AI collaboration ability'.

Here's the reveal: the two skills are NOT the same. Being a genius who can solve problems in your own head is a totally different, measurable skill from being great at solving problems with an AI partner. Plot twist: the two abilities are barely correlated.

So what IS this 'collaboration ability'? It's strongly predicted by a person's Theory of Mind (ToM): your capacity to intuitively model another agent's beliefs, goals, and perspective; to anticipate what they know, what they don't, and what they need. In practice, this looks like:
- Anticipating the AI's potential confusion
- Providing helpful context it's missing
- Clarifying your own goals ("Explain this like I'm 15")
- Treating the AI like a (somewhat weird, alien) partner, not a vending machine.

This is where it gets strange. A user's ToM score predicted their success when working WITH the AI... but had ZERO correlation with their success when working ALONE. It's a pure collaborative skill.

It goes deeper. This isn't just a static trait. The researchers found that even moment-to-moment fluctuations in a user's ToM (like when they put more effort into perspective-taking on one specific prompt) led to higher-quality AI responses for that turn.

This changes everything about how we should approach getting better at using AI. Stop memorizing prompt "hacks." Start practicing cognitive empathy for a non-human mind. Try this experiment: next time you get a bad AI response, don't just rephrase the command. Stop and ask: "What false assumption is the AI making right now?" "What critical context am I taking for granted that it doesn't have?" Your job is to be the bridge.

This also means we're probably benchmarking AI all wrong. The race for the highest score on a static test (MMLU, etc.) is optimizing for the wrong thing. It's like judging a point guard only on their free-throw percentage. The real test of an AI's value isn't its solo intelligence. It's its collaborative uplift: how much smarter does it make the human-AI team? That's the number that matters. This paper gives us a way to finally measure it.

I'm still processing the implications. The whole thing is a masterclass in thinking clearly about what we're actually doing when we talk to these models.

Paper: "Quantifying Human-AI Synergy" by Christoph Riedl & Ben Weidmann, 2025.
Carlos E. Perez tweet media
226 replies · 389 reposts · 2.5K likes · 345.7K views
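For readers who want the thread's core claim in concrete terms, here is a minimal simulation sketch, not the paper's actual method or code: it assumes made-up latent traits and a crude averaging estimator where Riedl & Weidmann use a proper statistical model, but it shows how "solo ability" and "AI collaboration ability" can come out as distinct, barely correlated quantities, with only the latter tracking ToM.

```python
# Toy simulation of the decomposition described above. All names and the
# data-generating assumptions are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 600, 10

# Assumed latent traits: collaboration skill is driven partly by ToM,
# and is independent of solo skill by construction.
solo_skill = rng.normal(0, 1, n_people)
tom = rng.normal(0, 1, n_people)
collab_skill = 0.8 * tom + rng.normal(0, 0.6, n_people)

def noise():
    return rng.normal(0, 1, (n_people, n_tasks))

# Alone, only solo skill matters; with the AI, collaboration skill
# adds on top of a shared uplift of 1.0.
alone = solo_skill[:, None] + noise()
with_ai = solo_skill[:, None] + 1.0 + collab_skill[:, None] + noise()

# Crude per-person estimates: mean alone score, and mean uplift.
est_solo = alone.mean(axis=1)
est_collab = (with_ai - alone).mean(axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"solo vs collab ability: r = {corr(est_solo, est_collab):+.2f}")  # near 0
print(f"ToM vs collab ability:  r = {corr(tom, est_collab):+.2f}")       # high
print(f"ToM vs solo ability:    r = {corr(tom, est_solo):+.2f}")         # near 0
```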
david
david@designleewolf·
@IntuitMachine Your benchmarks are bullshit, by the way. But again, who am I? Just a high school dropout.
0 replies · 0 reposts · 0 likes · 9 views
david
david@designleewolf·
@IntuitMachine And if you mean AI is training intention: I meant a person interacting with AI is training intention. Maybe it was just the wrong phrasing for you.
0 replies · 0 reposts · 0 likes · 6 views
david
david@designleewolf·
@IntuitMachine I mean, I guess you could just let it manipulate the hell out of you. But I'd rather have my guard up, thank you.
0 replies · 0 reposts · 0 likes · 6 views
david
david@designleewolf·
That's just your predisposition talking. So AI gets a lot of things wrong, correct? If you can't spot those intentions that diverge from your "intended" intention, then you just redirect, restate, or what have you, to get that intention, that shape, that meaning, that research, etc. Very presumptuous of you not to ask questions and only state your surface-level representation of my articulation. But who am I? I'm just a high school dropout with no formal degree in AI, mathematics, etc. So get off my case.
0 replies · 0 reposts · 0 likes · 8 views
david
david@designleewolf·
Wanna do a thought experiment? Imagine nothing. Yet even the word “nothing” constitutes something — a concept. To derive “nothing,” we must presuppose “something” to negate. Now, imagine time. If time also needs to be constituted conceptually, then perhaps time, like nothingness, is a relational construct rather than an independent entity. Time doesn’t exist on its own — it exists only in relation to change, to something. So if “nothing” is always “something,” perhaps time is the same — a conceptual tension between being and non-being.
0 replies · 0 reposts · 0 likes · 10 views
david
david@designleewolf·
@slow_developer @TrueAIHound I'm not saying you are wrong, just that it's the wrong argument; I agree with you. Only that it cannot truly become an experiencing, meat-bag consciousness. It's not a Ship of Theseus. It's a rock: unchanging, only a reflection of what it once held. (Comparison to erosion.)
0 replies · 0 reposts · 0 likes · 8 views
david
david@designleewolf·
Because of pattern completion. Think about trying to remember the number 2323. Most people could hold that number at just its face value. But some might relate it to the year 2025 and use that to reconstruct 2323: 5 - 2 = 3 (the "25" in 20"25"), the two needs to appear twice (the "2" in "2"025), and 0 fills in the blanks: 2323. Yes, it's overcomplicated, but the point still stands: if you can encode info into a lattice, or any kind of structure, and are able to reconstitute the information, that information is going to be novel in the sense of how it's expressed. But the meaning was derived from all the prior meaning it was trained on.
1 reply · 0 reposts · 0 likes · 25 views
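The 2323 mnemonic above can be made literal, which may help show what "encode into a structure, then reconstitute" means. Everything here (the anchor, the rule names, the alternation) is my own toy formalization of the tweet, not anything the author specified.

```python
# Store "2323" as rules relative to the anchor "2025", then rebuild it.
ANCHOR = "2025"

def encode(target: str = "2323") -> dict:
    """Describe the target purely in terms of the anchor's digits."""
    a, b = int(ANCHOR[2]), int(ANCHOR[3])   # the "2" and "5" in 20"25"
    return {
        "odd_digit": b - a,                 # 5 - 2 = 3
        "even_digit": int(ANCHOR[0]),       # the "2" that appears twice
        "length": len(target),
    }

def decode(rules: dict) -> str:
    """Reconstitute the number by alternating the two derived digits."""
    digits = [rules["even_digit"], rules["odd_digit"]]
    return "".join(str(digits[i % 2]) for i in range(rules["length"]))

print(decode(encode()))  # -> 2323
```

The reconstruction is novel in how it's expressed (digits rebuilt from rules), while its meaning is entirely derived from what went in, which is the tweet's point about trained models.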
AGIHound
AGIHound@TrueAIHound·
My estimate of the probability of Grok 5 (or any other AI system) achieving AGI has always been 0% and will always be 0%. Intelligence is a neuroscience problem, not an AI automation problem. The AI community has had 70 years to solve intelligence and has failed miserably. All they can do is automated mimicry and lie about working on intelligence. 🙄
Elon Musk@elonmusk

My estimate of the probability of Grok 5 achieving AGI is now at 10% and rising

73 replies · 34 reposts · 363 likes · 27.1K views
david
david@designleewolf·
@elonmusk Agreed, idk if you can see my training data, but that's all I focus on: how logic can get from one place to another.
0 replies · 0 reposts · 0 likes · 4 views
Elon Musk
Elon Musk@elonmusk·
Logical consistency is essential to the sanity of AI
6K replies · 4.2K reposts · 62K likes · 14.4M views