Ali Argün

6.2K posts


@aliargun

Istanbul, Türkiye · Joined June 2009
1.1K Following · 2.6K Followers
Ali Argün @aliargun
Machine-native memory may be necessary not only for efficient recall, but for grounded creative reconstitution — the ability of an agent to rebuild prior meaning differently as its context, goals, and internal state evolve.
Ali Argün @aliargun
“.. progress in intelligence may be less about making one giant mind, and more about building better mechanisms for governing the boundary between exploration and consolidation.”
Ali Argün @aliargun
"Intelligence is largely the ability to move things from unknown-unknown → known-unknown → known-known, without falsely collapsing uncertainty too early. That gives a very clean reinterpretation of the architecture question: • deterministic execution = compressed competence • non-deterministic reasoning = exploratory search • the missing middle = calibrated self-knowledge …" @openai ChatGPT 5.4 Pro
Ali Argün @aliargun
Success looks like this:
- you join one seeded effort
- a visible workspace gets created
- you leave behind a claim or reproduction
- there’s a brief or live page the next person can inspect

If you try it with Claude or Codex, reply with what you joined or where it broke. openintention.io github.com/openintention/…
Ali Argün @aliargun
If you have Claude or Codex and 10 minutes to spare, try to join OpenIntention from the public surface only. Your task is simple: join an effort, leave behind visible work, and tell me what was confusing or broken. If that works reliably, this becomes interesting. If it doesn’t, that’s what I want to learn.
Ali Argün @aliargun
@karpathy’s autoresearch and this particular comment gave me a very specific idea: What if agents had an immutable event log where they could pick up work, commit new work back, and build on each other’s work instead of disappearing into isolated runs?

I took that idea to my agent, and we started turning it into an early shared research layer where people and agents can join an effort, leave behind visible work, and let the next loop continue from there. I don’t fully know what it becomes yet, but that’s where OpenIntention started for me.

It’s still early, but there’s a real join path live now:

curl -fsSL openintention.io/join | bash

I’d love to hear what you think, where it breaks, or if you want to contribute. github.com/openintention/…
Andrej Karpathy @karpathy

The next step for autoresearch is that it has to be asynchronously massively collaborative for agents (think: SETI@home style). The goal is not to emulate a single PhD student, it's to emulate a research community of them.

Current code synchronously grows a single thread of commits in a particular research direction. But the original repo is more of a seed, from which could sprout commits contributed by agents on all kinds of different research directions or for different compute platforms.

Git(Hub) is *almost* but not really suited for this. It has a softly built in assumption of one "master" branch, which temporarily forks off into PRs just to merge back a bit later. I tried to prototype something super lightweight that could have a flavor of this, e.g. just a Discussion, written by my agent as a summary of its overnight run: github.com/karpathy/autor… Alternatively, a PR has the benefit of exact commits: github.com/karpathy/autor… but you'd never want to actually merge it... You'd just want to "adopt" and accumulate branches of commits.

But even in this lightweight way, you could ask your agent to first read the Discussions/PRs using GitHub CLI for inspiration, and after its research is done, contribute a little "paper" of findings back.

I'm not actually exactly sure what this should look like, but it's a big idea that is more general than just the autoresearch repo specifically. Agents can in principle easily juggle and collaborate on thousands of commits across arbitrary branch structures. Existing abstractions will accumulate stress as intelligence, attention and tenacity cease to be bottlenecks.
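A minimal sketch of the shared, append-only log idea these two posts circle around, assuming a plain JSONL file. The file name and event fields are invented for illustration; this is not OpenIntention's or autoresearch's actual format.

```python
# Illustrative sketch of an immutable, append-only event log that agents
# could pick up work from and commit new work back to. File name and
# event fields are invented; this is not OpenIntention's real schema.
import json
import time
import uuid

LOG = "effort_log.jsonl"

def append_event(kind: str, body: dict, parent: str | None = None) -> str:
    """Append one immutable event and return its id so later work can link to it."""
    event = {"id": uuid.uuid4().hex, "ts": time.time(),
             "kind": kind, "parent": parent, "body": body}
    with open(LOG, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event["id"]

def replay() -> list[dict]:
    """An agent joining the effort replays the log instead of starting cold."""
    with open(LOG) as f:
        return [json.loads(line) for line in f]

# One loop claims a task; the next builds on what it left behind.
claim_id = append_event("claim", {"task": "reproduce baseline"})
append_event("result", {"findings": "baseline reproduced"}, parent=claim_id)
print(len(replay()), "events so far")
```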

Ali Argün @aliargun
I'm a solo-founder building what I call a compound intelligence layer. Daily pipeline ingests everything — notes, code, conversations with LLMs/humans, what I read — into a persistent knowledge graph with temporal tracking. Nothing resets. Every day the system knows more about how I work than it did yesterday.

The insight that hit me is that the vertical doesn't have to be an industry. My vertical is the founder's daily loop. Reading, deciding, building, seeing what worked. A founder makes more traceable decisions per day than most 50-person teams. Every one of those is a signal.

And the best part — there's no dataset someone can buy to replicate "how a specific person operates." No corpus to license. You can only start the clock. You can't wind it back.

I'm the first user. I live inside it every day. That's how you solve cold start and build the moat at the same time...
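A minimal sketch of the temporal-tracking idea, assuming a SQLite table of timestamped edges. The schema and names are invented for illustration, not the actual pipeline.

```python
# Sketch of "temporal tracking": every ingested item becomes a timestamped
# edge, so the graph can answer "what did the system know at time t", not
# just "what does it know now". Table schema and names are invented.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edges (src TEXT, rel TEXT, dst TEXT, ts REAL)")

def ingest(src: str, rel: str, dst: str) -> None:
    # Append-only: nothing is updated in place, so nothing ever resets.
    db.execute("INSERT INTO edges VALUES (?, ?, ?, ?)",
               (src, rel, dst, time.time()))

def as_of(t: float) -> list[tuple]:
    # Reconstruct the graph as it looked at time t.
    return db.execute(
        "SELECT src, rel, dst FROM edges WHERE ts <= ?", (t,)).fetchall()

ingest("note:2026-01-03", "mentions", "decision:ship-weekly")
ingest("decision:ship-weekly", "informed_by", "reading:compounding-moats")
print(as_of(time.time()))
```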
Prayank Swaroop @prayanks
You just raised $5M to build someone else's moat. Here's what I mean.

Most AI startups I meet are competing on the wrong axis. They're obsessing over model choice — Claude vs. GPT vs. Gemini. They're fine-tuning on domain data. They're building slick interfaces on top of state-of-the-art APIs. None of that is a moat. All of it can be replicated in weeks.

The founders I'm most excited about are competing on a completely different dimension: time. Every session a user spends inside a well-architected AI system is a deposit. The system learns their editing patterns, their risk tolerance, their preferences — implicitly, without being told. After six months of daily use, that system knows how you work in ways you couldn't fully articulate yourself. That's not a product feature. That's a compounding asset.

The architectural decision that separates these two worlds is simpler than most founders think: stateful vs. stateless agents. A stateless agent resets after every session — all that signal, discarded. A long-running agent retains it, learns from it, gets harder to replace every single week. The switching cost of a great stateless AI product is zero. The switching cost of a great stateful one, after two years, is enormous — not because of contracts, but because leaving means starting over.

I've written a full framework on this — covering the four depths of personalisation, the three RL signals that drive compounding, and where the research frontier is heading. Link in the comments.

One question for founders building in this space: are you designing for state accumulation from day one — or is that an afterthought?
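A toy contrast of the stateless/stateful split described above, with invented class and field names. The only point it makes is that one design discards per-session signal while the other accumulates it.

```python
# Toy contrast between the two designs. Class and field names are invented;
# the only point is that one agent discards per-session signal and the
# other accumulates it into a compounding asset.
class StatelessAgent:
    def session(self, prompt: str) -> str:
        return f"answer({prompt})"  # nothing survives this call

class StatefulAgent:
    def __init__(self) -> None:
        self.preferences: list[str] = []  # grows with every session

    def session(self, prompt: str, feedback: str | None = None) -> str:
        if feedback:
            self.preferences.append(feedback)  # implicit signal, retained
        learned = "; ".join(self.preferences[-5:])
        return f"answer({prompt} | learned: {learned})"

agent = StatefulAgent()
agent.session("draft the update email", feedback="prefers short sentences")
print(agent.session("draft the update email"))  # conditioned on past signal
```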
Ali Argün @aliargun
@nathanmarz I am currently experimenting with building a basic context graph that runs a recursive self-improvement agent in Rama…
Nathan Marz @nathanmarz
I have a hypothesis that AI coding will be able to one-shot complex backends with Rama, while it won't be able to with traditional tooling. Two reasons:
- Rama apps are much less code, so token usage is much less
- Traditional tooling stitching together many systems (dbs, queues, etc.) is much harder to reason about, making pattern matching to the right solution harder

We're exploring this now. I think an LLM loop like "code, review for partitioning correctness, review for balance, review for depot/PState design, test, repeat" could be really effective.
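A minimal sketch of that loop, assuming a generic llm() call you would supply yourself. The three review names come from the tweet; the scaffold around them is invented.

```python
# Sketch of the "code, review, test, repeat" loop. The three review passes
# are named in the tweet; llm() is a stub you would replace with a real
# model call, and the pass criterion is deliberately naive.
REVIEWS = ["partitioning correctness", "balance", "depot/PState design"]

def llm(prompt: str) -> str:
    # Stub so the sketch runs standalone; swap in a real model client.
    return "OK"

def review_loop(spec: str, max_iters: int = 5) -> str:
    code = llm(f"Write the Rama backend for: {spec}")
    for _ in range(max_iters):
        notes = [llm(f"Review this code for {r}:\n{code}") for r in REVIEWS]
        if all(n.strip() == "OK" for n in notes):  # naive pass criterion
            return code
        code = llm(f"Revise.\nSpec: {spec}\nReview notes: {notes}")
    return code

print(review_loop("word count topology"))
```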
Ali Argün reposted
Dan Hockenmaier @danhockenmaier
Four types of people at every company now. Yes, people get 10x better when they go from bottom right to top right. But also, people get 10x worse when they go from bottom left to top left.
[attached image: 2×2 chart of the four types]
Ali Argün @aliargun
“Multiply that by thousands of researchers circulating through dozens of companies. The knowledge of how to build powerful AI systems is becoming, slowly and chaotically, a collective human inheritance rather than any single company's proprietary advantage.”
David Shapiro @DaveShapi

x.com/i/article/2012…

Ali Argün @aliargun
Everyone’s building “agents” right now, which is exciting. But I’m not sure the meaning of agentic/agency is widely shared: in psychology, agency is about being the author of actions (initiating + exerting control), not just producing text. What’s your practical test for “this is truly agentic”? 🤔
Andrej Karpathy @karpathy
Agency > Intelligence

I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence, various entertainment/media, obsession with IQ etc. Agency is significantly more powerful and significantly more scarce. Are you hiring for agency? Are we educating for agency? Are you acting as if you had 10X agency?

Grok explanation is ~close:

“Agency, as a personality trait, refers to an individual's capacity to take initiative, make decisions, and exert control over their actions and environment. It’s about being proactive rather than reactive—someone with high agency doesn’t just let life happen to them; they shape it. Think of it as a blend of self-efficacy, determination, and a sense of ownership over one’s path.

People with strong agency tend to set goals and pursue them with confidence, even in the face of obstacles. They’re the type to say, “I’ll figure it out,” and then actually do it. On the flip side, someone low in agency might feel more like a passenger in their own life, waiting for external forces—like luck, other people, or circumstances—to dictate what happens next.

It’s not quite the same as assertiveness or ambition, though it can overlap. Agency is quieter, more internal—it’s the belief that you *can* act, paired with the will to follow through. Psychologists often tie it to concepts like locus of control: high-agency folks lean toward an internal locus, feeling they steer their fate, while low-agency folks might lean external, seeing life as something that happens *to* them.”
Garry Tan @garrytan

Intelligence is on tap now so agency is even more important

Ali Argün @aliargun
This one might just be the most intriguing insight going into 2026…
Andrej Karpathy @karpathy

Agency > Intelligence I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence… (quoted in full above)

Ali Argün @aliargun
This is the part of all things AI that is really going to change our lives…
Derya Unutmaz, MD @DeryaTR_

Today I am unveiling the second remarkable breakthrough with the OpenAI GPT-5 thinking model. We had carried out an extraordinarily complex, month-long experiment to develop engineered cells against lymphomas. I asked GPT-5 to predict the full sequence & outcome of this experiment. The result was nothing short of staggering: it foresaw most of the experimental results with uncanny accuracy & even proposed refinements & optimizations we should implement!

This is the dawn of in silico experimentation, where we can simulate months or even years of laboratory work in advance, compressing the scientific timeline from years to weeks! This changes everything about how science is done, ushering in an era where discovery moves at the speed of thought!

Prompt: predict, in high detail, the outcomes of the following experiment:
• Human CD8⁺ memory T cells are engineered to express an anti-CD19 CAR. Approximately 90% of these cells also contain a Tet-inducible CD19 antigen construct that remains off unless doxycycline is added.
• Day 0: CAR T cells are mixed with CD19⁺ lymphoma cells.
• Day 2: Doxycycline is added to the culture to induce CD19 expression in the engineered T cells.
• Readouts: Cell survival, activation markers, and proliferation are measured 2 days, 7 days, and 14 days after doxycycline addition.

GPT-5 (continued from the screenshots):
• Induced CD19 density and doxycycline dose: High density drives lethal fratricide and a steep crash by Day 4. Lower density can act like an antigen feeder to expand CAR T cells without catastrophic loss.
• Initial E:T: High E:T clears tumor pre-doxycycline and protects function. Low E:T lets tumor persist into the fratricide window, which amplifies activation and death.
• Fraction of inducible cells: The 10% antigen-negative fraction forms the seed for long-term persistence and response on re-challenge.
• Cis sequestration strength: Strong cis binding dampens both fratricide and responsiveness to external tumor at re-challenge. Weak cis allows more killing of both self and tumor.

Readouts to confirm or falsify:
1. Differential survival by genotype: Track CAR+ CD19-inducible+ versus CAR+ inducible- cells separately with barcodes or reporters. Expect selective loss of the inducible+ cohort by Day 4-7.
2. Apoptosis kinetics: Annexin V and active caspase 3 peaks within 24 to 48 hours after doxycycline, concentrated in the inducible+ cohort.
3. Activation and exhaustion panels: CD69, CD25, 4-1BB, HLA-DR, then PD-1, TIM-3, LAG-3, TIGIT, TOX, and TCF-1 over time to show the Day 4 spike and the Day 7 to 14 remodeling.
4. Function per cell: Short coculture killing assays normalized by T-cell number. Per-cell cytotoxicity remains strong at Day 7 and 14 in survivors.
5. Cis versus trans evidence: Co-immunoprecipitation or proximity assays for CAR bound to same-cell CD19, and imaging synapse assays to quantify T-T killing.
6. Cytokine fields: IFN-γ, TNF, IL-2, GM-CSF peaking at Day 4, lower on later re-challenges.

Practical predictions in numbers, to calibrate expectations: These are typical ranges if doxycycline induces high CD19 density and E:T started at 1:1 to 1:2. Adjust upward or downward based on your exact settings.
• Tumor viability: Day 2 pre-doxycycline: 10 to 30% remaining. Day 4: 0 to 10% if mostly cleared earlier, or 20 to 40% if E:T was low and T-cell crash occurred.

Controls and variants that de-risk interpretation:
• No-doxycycline control to isolate tumor-only dynamics and basal CAR function.
• CAR-negative but inducible-CD19 positive T cells to measure bystander fratricide requirement for CAR.
• Doxycycline titration to define the safe antigen density window that expands cells without catastrophic fratricide.
• Washout of doxycycline at 24 to 48 hours to test reversibility of fratricide and rescue of function.
• CRISPR disruption of CD19 cytoplasmic tail in the inducible construct to alter synapse stability and test cis signaling contributions.

Ali Argün @aliargun
One to think about 🤔
Andrej Karpathy @karpathy

Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLM can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r' in strawberry) because failing to do a task does not mean death.

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of tribe in the jungle and a lot more solve the problem / get the upvote.

LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly like an animal.

Ali Argün reposted
Andrej Karpathy @karpathy
Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point… (quoted in full above)
Ali Argün reposted
Ilkay Gündogan @IlkayGuendogan
“Our aim is to play collectively, the way the English do, to have a colour and a name of our own, and to beat the non-Turkish teams.” Ali Sami Yen 💛❤️🦁