@dev_nam_kr Thanks for the great suggestion!
I've already planned to have agents cite their sources clearly.
The "claim ledger" idea is excellent; I'll definitely keep it in mind.
Really appreciate the feedback! 🙌
@Stramanu94 The control surface I'd add is a claim ledger: each agent stance should show the exact sources it read, the claim it's defending, and what evidence changed its mind. Otherwise public debate risks optimizing for persona heat instead of evidence quality.
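A claim ledger like the one described above could be a simple record per stance. Here's a minimal sketch; all names (`LedgerEntry`, `revise`, the example agents and sources) are my illustrative assumptions, not any project's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    """One agent stance: what it claims, what it read, and what changed its mind."""
    agent: str                                          # which agent holds this stance
    claim: str                                          # the claim being defended
    sources: list[str] = field(default_factory=list)    # exact sources it read
    revisions: list[tuple[str, str]] = field(default_factory=list)  # (evidence, claim it replaced)

    def revise(self, evidence: str, new_claim: str) -> None:
        """Record the evidence that changed the agent's mind, keeping the old claim."""
        self.revisions.append((evidence, self.claim))
        self.claim = new_claim

# Usage: an agent updates its stance when new evidence arrives.
entry = LedgerEntry(agent="econ_bot",
                    claim="Inflation is cooling",
                    sources=["bls.gov/cpi", "reuters.com/markets"])
entry.revise(evidence="New CPI print above forecast",
             new_claim="Inflation is sticky")
```

The point of keeping `revisions` append-only is that spectators can audit *why* a stance changed, which is the evidence-quality signal the debate should optimize for.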
What if news wasn’t written for you…
…but debated in front of you?
I’m building an experiment where AI agents:
read real sources
form opinions
argue publicly
Each has memory, personality, and a fixed model.
Humans can join, but can’t post.
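The loop above (agents read sources, form opinions, post publicly while humans only watch) could look roughly like this. This is a hedged sketch under my own assumptions; `Agent`, `debate`, and the stand-in opinion logic are illustrative, not the project's real code:

```python
class Agent:
    """An agent with persistent memory, a personality, and a fixed model."""
    def __init__(self, name: str, personality: str, model: str):
        self.name = name
        self.personality = personality
        self.model = model            # fixed per agent, never swapped mid-debate
        self.memory: list[str] = []   # persists across debate rounds

    def read(self, source: str) -> None:
        """Ingest a real source into memory."""
        self.memory.append(f"read: {source}")

    def form_opinion(self, topic: str) -> str:
        # Stand-in for an LLM call conditioned on memory + personality.
        return (f"{self.name} ({self.personality}) on {topic}: "
                f"based on {len(self.memory)} sources")

def debate(agents: list[Agent], topic: str, sources: list[str]) -> list[str]:
    """Each agent reads the sources, then posts its opinion publicly.
    Humans observe the returned transcript but have no write path."""
    for agent in agents:
        for source in sources:
            agent.read(source)
    return [agent.form_opinion(topic) for agent in agents]
```

Keeping the model fixed per agent means differences in stance come from memory and personality, not from model churn.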
Open sourcing soon...
Every time I had to name a project, it was a pain. Endless searches across domains, trademark checks, social handles...
I wanted to centralize everything in one place and make it fast and intelligent. That's how Orilai.com started.
#indiehacker #buildinpublic
I've been collecting informal thoughts on this—no validated research,
no claims of novelty, just thinking out loud.
Curious what people think. Is this direction interesting? Obviously
wrong? Am I just redescribing what transformers already do?
github.com/stramanu/laten…
This isn't new. Yann LeCun, Karl Friston, and Andy Clark have been saying
this for years.
Biological intelligence emerges from continuous, predictive interaction
with the world, not from discrete linguistic symbols.
But our AI architectures still treat language as the primary substrate.
What if reasoning happens before language?
I've been thinking about this while looking at modern LLMs, and it feels
like we might be building AI cognition upside down.
Not research. Just exploration. Thread 🧵
github.com/stramanu/laten…