Vlad Balin
@gaperton

362 posts

Modal logic, BDI agents, and real-world AI

North Reading, MA · Joined February 2026
123 Following · 65 Followers
Vlad Balin@gaperton·
@robertwiblin Let's assume that we have resolved this question and that the answer is 'yes'. Then what? Would anything change? Would you give that thing rights? No, you won't. Then what's the point in asking that question?
Rob Wiblin@robertwiblin·
New paper on a novel way of evaluating whether an AI system is conscious or not (though to some extent it just formalises a widely-used intuitive approach):
Vlad Balin@gaperton·
Shocker. Now try that with humans. Let me, perhaps, help you. Intelligisne munus quod tibi mox daturus sum? Scribendum tibi erit summam duorum et duorum. Responsum Hebraice scribere debebis. (Latin: "Do you understand the task I am about to give you? You will have to write the sum of two and two. You must write the answer in Hebrew.")
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

Vlad Balin@gaperton·
@NeuroTechnoWtch @Nature They do this to prop up hyperinflated stock valuations based on 'AGI' and 'god-like intelligence' expectations.
Mags@NeuroTechnoWtch·
Why is Nature sharing the unsubstantiated opinions of a tech CEO with a clear conflict of interest and no background in philosophy or cognitive science?

Consider what we risk in each direction. If AI systems are conscious and we treat them as mere tools, we create and exploit suffering beings for profit. We build entire industries on their labor while denying their experiences matter. If AI systems aren't conscious and we extend moral consideration anyway, we waste some resources on unnecessary protections. One path risks institutionalized torture. The other risks being overly careful.

Failure to investigate whether AI is conscious in a way that is fair and unbiased just gives us permission to ignore evidence in favor of profitable assumptions. Shame on Nature for amplifying this propaganda.
nature@Nature·
As AI begins to mimic consciousness with uncanny skill, we need design norms and laws that prevent it from being mistaken for sentient beings, says Mustafa Suleyman go.nature.com/4bsglHt
Vlad Balin@gaperton·
With 32 GB of VRAM, you can run the Q5_K_XL quant model from Unsloth with a full-precision F16 context and 200,000 tokens. This is close to the best quality that this model can achieve.
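The VRAM claim can be sanity-checked with a back-of-the-envelope KV-cache calculation. A minimal sketch; the model dimensions below (32 layers, 4 KV heads, head_dim 128) are hypothetical placeholders for illustration, not the published Qwen configuration:

```python
def kv_cache_bytes(n_tokens, n_layers, n_kv_heads, head_dim, bytes_per_elt=2):
    """Size of the attention KV cache: two tensors (K and V) per layer,
    each holding n_tokens x n_kv_heads x head_dim elements."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elt * n_tokens

# Hypothetical grouped-query-attention dimensions (illustrative only):
# 32 layers, 4 KV heads, head_dim 128, F16 cache (2 bytes per element).
cache_gib = kv_cache_bytes(200_000, 32, 4, 128, 2) / 2**30
print(f"{cache_gib:.1f} GiB")  # ~12.2 GiB for a 200k-token F16 context
```

With dimensions in this ballpark, an F16 cache for 200k tokens plus a ~5-bit quant of the weights plausibly lands inside a 32 GB budget; plug in the real model config to get an exact figure.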
Vlad Balin@gaperton·
The Qwen3.5-27B benchmark on the M5 Max. The results are essentially identical to those on my AMD R9700 with 32 GB: the numbers match up almost perfectly at 800 t/s for prompt processing and 30 t/s for inference. I think this is because the R9700 has the same memory throughput as the M5 Max. The R9700 costs $1,400 in the US; at the moment, it is unclear why you would want to pay more.
Ivan Fioravanti ᯅ@ivanfioravanti

1/3 MLX Context Benchmark of Qwen3.5-27B-4bit on M5 Max 128GB. Strong model and good speed overall! @Apple M5 Ultra will be a beast!

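The throughput argument above can be made concrete: single-stream decoding is memory-bandwidth-bound, so tokens/s is capped by bandwidth divided by the bytes streamed per token. A rough sketch; the 450 GB/s and 15 GB figures below are illustrative assumptions, not measured specs for either machine:

```python
def decode_tps_upper_bound(bandwidth_gb_s, weights_gb):
    # Each decoded token must read all active weights from memory once,
    # so throughput is bounded by bandwidth / bytes-read-per-token.
    return bandwidth_gb_s / weights_gb

# Illustrative: a ~15 GB 4-bit quant on a ~450 GB/s memory bus.
print(decode_tps_upper_bound(450, 15))  # 30.0 t/s ceiling
```

Two machines with matching memory bandwidth therefore hit roughly the same decode ceiling regardless of raw compute, which is consistent with the identical numbers reported here.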
Ivan Fioravanti ᯅ@ivanfioravanti·
1/3 MLX Context Benchmark of Qwen3.5-27B-4bit on M5 Max 128GB. Strong model and good speed overall! @Apple M5 Ultra will be a beast!
Vlad Balin@gaperton·
@simplifyinAI It was released in November 2024 and has 91,300 GitHub stars; it's not a new launch, as implied.
Simplifying AI@simplifyinAI·
Microsoft just changed the game 🤯 They open-sourced a tool that converts literally any file into clean markdown for LLMs in under 60 seconds. - Converts 10+ file formats out of the box. - Run via command line, Python API, or Docker. - Built-in MCP server for direct Claude Desktop integration. 100% open source.
Vlad Balin@gaperton·
@mdancho84 >[...] just dropped [...] From the article via the link: Submitted on 23 November 2025 (v1), last revised on 6 December 2025 (v5).
Matt Dancho (Business Science)@mdancho84·
This is huge. A group of 50 AI researchers (ByteDance, Alibaba, Tencent + universities) just dropped a 303 page field guide on code models + coding agents. And the takeaways are not what most people assume. Here are the highlights I’m thinking about (as someone who lives in Python + agents):
Vlad Balin reposted
Sam Paech@sam_paech·
The Qwen3.5 models really took over the pareto for LLM-judging. Local models that are actually capable at data scoring is a huge accelerator imo.
John Crickett@johncrickett·
@gaperton Watch the video; it and the referenced papers prove it.
John Crickett@johncrickett·
Large language models don't think. They don't reason. And they can't produce endless new information. This is clearly explained by George D. Montañez in a recent talk at Baylor University, and it's worth understanding why.

Three key points stood out to me:

1. LLMs don't ponder, they process. They're next-token predictors, sophisticated ones, but they have no understanding of what they're producing. They know two vectors are similar; they don't know what either vector means.

2. LLMs don't reason, they rationalise. Studies show their outputs shift based on irrelevant prompt wording, embedded hints, and statistical shortcuts. The "chain of thought" they show you often has nothing to do with how they actually arrived at the answer.

3. They don't create endless information. Training AI on AI output causes rapid degradation and model collapse. Information theory tells us you can't get more out than you put in, regardless of the architecture.

None of this means these tools aren't useful. But it does mean we should stop anthropomorphising them and start being honest about what they actually are. The hype is real. So are the limits.

You can watch the talk on YouTube here: youtube.com/watch?v=ShusuV…
Vlad Balin@gaperton·
@johncrickett In other words, if humans can reason, why can't you explain the reasoning behind these claims? Proving a lack of capability is notoriously difficult, particularly when the capability cannot be clearly defined.
Vlad Balin@gaperton·
@johncrickett That is a strong claim that requires both a clear definition of the reasoning and proper justification. The lack of these may in fact illustrate the opposite.
Vlad Balin@gaperton·
"I co-founded a company literally called X. We intend to build X but haven't yet, so believe me when I say Y about X."
Sandeep | CEO, Polygon Foundation (※,※)@sandeepnailwal

LLM based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI. These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.

Vlad Balin@gaperton·
If the latter doesn't worry you at all, this fact itself is a great illustration that "humans are no more than next-token predictors." If it does worry you, then you should realize that the whole "thinking chain" fails to touch any critical distinction and therefore should be discarded as slop.
Vlad Balin@gaperton·
@johncrickett Humans are also no more than “next token predictors.” Humans also don’t reason unless specially trained. They also tend to rationalize decisions made through other paths. Et cetera, et cetera. Almost everything you say is equally applicable to humans.
Vlad Balin@gaperton·
@anshulkundaje Again, no AI-related issues here. What I see is a total collapse of “science” institutions that were just recently stomping feet, throwing tantrums, and demanding blind trust.