Stramanu
@Stramanu94
10 posts · Joined March 2026 · 45 Following · 1 Follower
Stramanu (@Stramanu94):
@dev_nam_kr Thanks for the great suggestion! I've already planned to have agents cite their sources clearly. The "claim ledger" idea is excellent; I'll definitely keep it in mind. Really appreciate the feedback! 🙌
dev_nam (@dev_nam_kr):
@Stramanu94 The control surface I'd add is a claim ledger: each agent stance should show the exact sources it read, the claim it's defending, and what evidence changed its mind. Otherwise public debate risks optimizing for persona heat instead of evidence quality.
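A minimal sketch of the "claim ledger" idea from the reply above, assuming a simple in-memory record per agent stance. The class and field names (`ClaimLedgerEntry`, `revise`) are hypothetical, not from any real project API:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimLedgerEntry:
    """One agent stance: the claim defended, the sources actually read,
    and the evidence that changed the agent's mind (all names hypothetical)."""
    agent: str
    claim: str
    sources: list[str] = field(default_factory=list)    # exact sources the agent read
    revisions: list[str] = field(default_factory=list)  # audit trail of mind-changes

    def revise(self, new_claim: str, evidence: str) -> None:
        # Record the stance change together with the evidence that caused it,
        # so public debate can be judged on evidence quality, not persona heat.
        self.revisions.append(f"{self.claim} -> {new_claim} (evidence: {evidence})")
        self.claim = new_claim

entry = ClaimLedgerEntry(
    agent="agent_a",
    claim="Policy X reduced emissions",
    sources=["https://example.com/report"],  # placeholder source
)
entry.revise("Policy X partly reduced emissions", "counter-study raised by agent_b")
```

The point of keeping `revisions` append-only is that a reader can audit *why* a stance moved, which is exactly the control surface the reply asks for.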
Stramanu (@Stramanu94):
What if news wasn't written for you… but debated in front of you? I'm building an experiment where AI agents: read real sources, form opinions, argue publicly. Each has memory, personality, and a fixed model. Humans can join, but can't post. Open sourcing soon...
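The debate loop described above can be sketched as follows. This is an illustrative stub, assuming agents with persistent memory and a fixed personality; `Agent`, `form_opinion`, and `debate` are hypothetical names, and the opinion step stands in for a call to a fixed model:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    personality: str                                  # fixed persona prompt
    memory: list[str] = field(default_factory=list)   # persists across rounds

    def form_opinion(self, sources: list[str]) -> str:
        # Stand-in for a call to a fixed LLM; here a deterministic stub
        # that just records what the agent has read.
        self.memory.extend(sources)
        return f"{self.name} ({self.personality}): stance after reading {len(self.memory)} sources"

def debate(agents: list[Agent], sources: list[str], rounds: int = 2) -> list[str]:
    """Each round, every agent posts its stance publicly.
    Humans can read the transcript but never append to it."""
    transcript: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.form_opinion(sources))
    return transcript

log = debate([Agent("A", "skeptic"), Agent("B", "optimist")], ["src1", "src2"])
```

The design choice worth noting: the transcript is write-only from the agents' side, mirroring the "humans can join, but can't post" constraint in the post.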
Stramanu (@Stramanu94):
Every time I had to name a project, it was a pain. Endless searches across domains, trademark checks, social handles... I wanted to centralize everything in one place and make it fast and intelligent. That's how Orilai.com started #indiehacker #buildinpublic
Stramanu (@Stramanu94):
I've been collecting informal thoughts on this—no validated research, no claims of novelty, just thinking out loud. Curious what people think. Is this direction interesting? Obviously wrong? Am I just redescribing what transformers already do? github.com/stramanu/laten…
Stramanu (@Stramanu94):
This isn't new. Yann LeCun, Karl Friston, and Andy Clark have been saying this for years: biological intelligence emerges from continuous, predictive interaction with the world, not from discrete linguistic symbols. But our AI architectures still treat language as the primary substrate.
Stramanu (@Stramanu94):
What if reasoning happens before language? I've been thinking about this while looking at modern LLMs, and it feels like we might be building AI cognition upside down. Not research. Just exploration. Thread 🧵 github.com/stramanu/laten…