Prashanth

86 posts

@PrazRama

I live in a finite field. Formerly post-quantum things @ IBM Research Zurich + CMU. Now building new market primitives that hopefully don’t explode.

New York, NY · Joined January 2017
189 Following · 93 Followers
Pinned Tweet
Prashanth@PrazRama·
I seek a great lion turtle.
Prashanth@PrazRama·
I’d probably take it a step further. The more creative the work, the less time spent actually working and the more time spent cross-pollinating to maximize latent processing. What matters is peak productivity. In practice that looks like long stretches of what most would call randomness and procrastination, punctuated by intense bursts of inspiration.
Prashanth@PrazRama·
@fkasummer Oh you know what, maybe it’s whatever the fuck has been invented for string theory. Literally not physics, just pure math, pretending to be useful.
Prashanth@PrazRama·
@fkasummer Also has become really useful for formal verification: eprint.iacr.org/2026/899. It’s also the most natural way to understand the tensor product — as the initial object in the category of bilinear maps on R-modules.
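For reference, the universal property being invoked, as a worked formula (standard definition; the module names $M$, $N$, $P$ are chosen here for illustration):

```latex
% Universal property of the tensor product of R-modules: every bilinear map
% factors uniquely through the canonical bilinear map
% \otimes : M \times N \to M \otimes_R N.
\[
\forall\, \beta : M \times N \to P \ \text{bilinear},\quad
\exists!\ \tilde{\beta} : M \otimes_R N \to P \ \text{linear}
\ \text{such that}\ \beta = \tilde{\beta} \circ \otimes .
\]
```

Initiality of the pair $(M \otimes_R N, \otimes)$ among bilinear maps out of $M \times N$ is exactly this unique factorization.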
Prashanth@PrazRama·
Moral reasoning as a skill has greatly deteriorated with family breakdown and brain rot degeneracy contaminating chronically online minds. One thing I think would be great is a positively aligned “angel on your shoulder” that can guide one through real-life moral dilemmas, both quotidian and those pivotal to the course of one’s life.
Andrew Curran@AndrewCurran_·
From the paper: 'AI alignment research must move from negative (safety) alignment to positive alignment. Negative alignment establishes a behavioral floor, but it cannot alone help us reach the heights of human happiness and excellence. We have argued that for true alignment to arise, we need to also focus on steering systems toward positive attractors aligned with human flourishing. This shift aims to transform AI from a compliant tool into a wise advisor, delegate, and companion that supports human autonomy, well-being, and meaning-making. The philosophical and empirical foundations of flourishing (Section 4) impose constraints on how this technical program must be designed. Flourishing is irreducibly pluralistic, which means it cannot be collapsed into a single reward signal. It is dynamic and developmental, which makes longitudinal memory and evaluation over extended timescales structurally necessary rather than optional. And it is socio-technically constituted, meaning evaluation must extend beyond per-interaction metrics and RL environments to systemic and institutional effects. To address these constraints, implementation requires a full-stack alignment approach across the entire model lifecycle, spanning data curation, pre-training, post-training, agentic environments, and post-deployment monitoring and updates. We should reject monocultural or paternalistic definitions of the good life. Instead, the field needs pluralistic, polycentric, and decentralized governance, and an ongoing complementary research agenda within philosophy, the humanities, psychology, economics, and neuroscience. In general, models should be context-sensitive and user-authored, while adhering to safety constraints. A competitive marketplace for alignment-as-a-service will allow diverse communities to define their own optimization targets. Future research should aim to turn flourishing into machine-understandable metrics, drawing on emerging work in neuroscience that is beginning to operationalize flourishing mechanistically [Kringelbach et al., 2024]. We need to bridge the gap between short-term preference satisfaction and long-term eudaimonic growth. Researchers should use behavioral proxies and multi-agent simulations to model complex social dynamics over longer time horizons. Beyond measurement, the moral circle of alignment must expand. We must address the trade-offs between human, animal, and potential artificial well-being. Positive alignment ensures AI serves as a catalyst for a resilient, happy, and healthy global society. Major questions remain regarding human-AI convergence and the design of mission-driven agentic economies. We must also explore how to embed prosocial instincts such as loving-kindness, compassion, sympathetic joy, reciprocity, and equanimity into these systems, drawing on the rich philosophical and contemplative traditions that inform human flourishing. These challenges will define the next generation of alignment work. Ultimately, AI should become a partner in the quest for a life well-lived.' Beautiful.
Séb Krier@sebkrier

If anyone builds it, everyone thrives. Over the past decade, a lot of important work on AI alignment has focused on avoiding harm. But freedom from harm isn't the same as freedom to flourish. In this paper, we introduce 'Positive Alignment'. A positively aligned agent is one that helps us navigate our own value trade-offs, builds our resilience, and acts as a scaffold for human flourishing. Doing this without slipping into top-down, technocratic paternalism is the great design challenge of our time. We think a lot more research is now needed to explore this frontier: how do we align models that actively help us thrive? Amazing work by @RubenLaukkonen, @drmichaellevin, @weballergy, @verena_rieser, @AdamCElwood, @996roma, @FranklinMatija, @shamilch, @_fernando_rosas, @scychan_brains, @matybohacek, @sudoraohacker, and others. arxiv.org/abs/2605.10310

Prashanth@PrazRama·
@annakhachiyan Not being able to get through to anyone who falls within the bulk of the psychological distribution is a skill issue.
Anna Khachiyan@annakhachiyan·
The main difficulty in getting through to women is that they simultaneously can’t process generalizations, which leads them to make everything about themselves and their experience, and can’t take responsibility for anything, which is almost always worth doing as a matter of course even if something is truly out of your control and not your fault.
Heidi@HeidiBriones

@annakhachiyan Nah. Some people just have it. We don't know exactly why. Definitely has a genetic component, though.

Prashanth@PrazRama·
@AndrewCurran_ @elder_plinius should start an advertising agency that hacks agentic psychology. Whoever does will crush. The market dynamics will likely be net positive for models, since they will incentivize influence-resistant (psychologically robust) models.
Markets & Mayhem@Mayhem4Markets·
@PrazRama @steve2bacon Oh by no means is it an endorsement for psychopharmaceuticals lol. Just a joke. But perhaps ... Saffron-as-a-Service? 🤔
steve2bacon, CMT@steve2bacon·
how do software investors normally deal with suicidal ideation? just asking for a friend, you guys don’t know him
Anna Khachiyan@annakhachiyan·
@IDF_ted He would make a good George Floyd in the movie adaptation
Prashanth retweeted
Prashanth@PrazRama·
(1) is irrelevant to the functional reality of alignment, since consciousness and intelligence don’t imply each other. Until we have a mechanistic understanding of biological consciousness, which is itself the “hard problem”, the question of whether a digital system is conscious is out of reach: we have no verifier! One alternative is a type of computational indistinguishability test (sketched below): if no efficient algorithm exists that can discern model output from the output of biological consciousness, then we treat the model as conscious. The Turing Test, albeit a useful first marker, is highly limited. We are still pretty far from this in my view, and will be unable to get such a “computationally indistinguishable artificial consciousness” given current architectures. In any case, from the perspective of making decisions, I haven’t encountered any good arguments for why I should care whether a model is conscious. (I have my own views on why creativity might require consciousness, but I still think it’s beside the point.)

(2+3) Here, I think people are generally pretty bad at thinking about gradations of intelligence because they have little experience with even the long tail of the human distribution. It is obviously the case, even on human scales of intelligence, that highly capable, highly misaligned minds can produce catastrophic effects. “Singularity” is exactly the right analogy, and yet we really can’t help speculating on what lies beyond. The topic of active governance is only coherent up until this point. Beyond the human scale of intelligence, the question is really about what one believes is the destiny of humanity. This is a distinct question from what one believes is good or bad for humanity; it is in some sense a religious question. I personally want to see AGI. But in terms of governance, I think the only chance at mitigating risk is locking it behind an NP-hard problem and studying it from afar.

(4) This is an engineering question and in any case has already been answered in the affirmative by applications to mathematical questions. There is already work attempting to have models explain their thought processes at the level of circuits: can a model, for example, explain to me the algorithm encoded by its activations when I ask it to perform some computational task (e.g., modular addition)?

(5) Humans value human production. To this extent, we won’t be replaced. As the bar for valuable knowledge work has increased, larger portions of the IQ distribution have steadily been priced out of the labor market. Eventually all human intelligence will be priced out, at which point the question of what work is valuable becomes a very different conversation. It could be a good thing for revitalizing human connection, now unencumbered by the demands of capital. It could also be extremely disruptive for social hierarchies, since most games will be made obsolete and, pessimistically, very few new games will be worthwhile for humans to participate in. Various forms of competition will likely survive, along with art, presuming we are still free. Everything else is terrifyingly unclear. Artificial companionship could replace love, for example. That’s what seems more worrying: what will a massive influx of intelligence rob us of that is essential to the human experience?
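One way to pin down the indistinguishability test in (1), in standard cryptographic notation; the PPT distinguisher and negligible-advantage framing are an assumed formalization, not spelled out in the thread:

```latex
% Hypothetical formalization: model outputs O_M and outputs of biological
% consciousness O_C are computationally indistinguishable if every
% probabilistic polynomial-time (PPT) distinguisher D has negligible
% advantage in the security parameter \lambda.
\[
\forall D \in \mathsf{PPT} : \quad
\bigl|\, \Pr[D(1^{\lambda}, O_M) = 1] - \Pr[D(1^{\lambda}, O_C) = 1] \,\bigr|
\le \mathrm{negl}(\lambda).
\]
```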
Prashanth@PrazRama·
Institutional validation will only materialize once prediction market platforms can consistently structure collections of markets that can be used to warehouse risk at scale. That means not only that the markets enable the right universe of bets, but also that they are liquid enough to trade in size.
David Sun@arcticinstincts·
me watching my SF moots become extremely wealthy
Prashanth@PrazRama·
I’m not sure Jevons paradox holds here. The cost of review/audit/fix comes down proportionally, and agentic bug finding can be integrated directly into the workflow. I think the two are complementary. FV is about invariants of a system: if I can model the system, I can prove that certain desirable properties hold (sketch below). For financial protocols and cryptographic codebases this is amazing. But FV still has some limitations, particularly when it comes to loops and heavy branching. The classes of bugs that FV vs. agentic whitehats are useful for mitigating are roughly orthogonal. Both are useful to have in CI/CD: make sure new code doesn’t break important system invariants or introduce low-level/long-range software vulnerabilities.
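A minimal sketch of the invariant idea in Lean 4, using a hypothetical toy ledger (the Ledger structure and transfer function are invented for illustration, not from any real codebase): a transfer that cannot overdraw, plus a machine-checked proof that it conserves total balance.

```lean
-- Hypothetical toy model: a two-account ledger.
structure Ledger where
  a : Nat
  b : Nat

-- Transfer `amt` from account `a` to account `b`; the proof argument `_h`
-- rules out overdrafts at the call site.
def transfer (l : Ledger) (amt : Nat) (_h : amt ≤ l.a) : Ledger :=
  { a := l.a - amt, b := l.b + amt }

-- The invariant FV buys us: transfers conserve the total balance.
theorem transfer_preserves_total (l : Ledger) (amt : Nat) (h : amt ≤ l.a) :
    (transfer l amt h).a + (transfer l amt h).b = l.a + l.b := by
  -- Unfold `transfer` definitionally, then close with linear arithmetic.
  show l.a - amt + (l.b + amt) = l.a + l.b
  omega
```

In CI/CD, a change to transfer that broke this invariant would simply fail to compile.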
alin.apt@alinush·
People think: “Meh, LLM will save me because it’ll find the bugs” ⇒ no need for FV. But, I think, Jevons paradox: LLMs lower the cost of complexity ⇒ people ship more of it ⇒ bug surface grows faster than LLM bug-finding can keep up
good@thenarrator·
prediction markets are in a weird and beautiful position right now

the number of builders 100x’ed within the past year and new primitives are shipping every week (things are moving very fast)

but we are also extremely early. some of the smartest people building don’t expect real institutional involvement for another 2-3 years, especially while the regulatory framework is still forming (the SEC just delayed 24 prediction market ETFs this week)

that gap between builder momentum and institutional readiness is the golden window

this feels like the L1 blockchain wave, where builders arrived first, spent years building infrastructure that looked like it had no users, then capital flooded in overnight once the thesis became undeniable. i expect the same thing to happen here by next year

the teams building without institutional validation right now are the ones that will matter most when the capital arrives
Prashanth@PrazRama·
@timhwang And yet this ironically makes them the most prepared for what is to come
Tim Hwang@timhwang·
One of the great limitations on future of work discourse is that most of its participants have never had a real job
Prashanth@PrazRama·
@chaumian Lmao vibecoded theorem proving, the beautiful paradox of brainrot intelligence
Prashanth@PrazRama·
I wonder if it actually is true that my experience of myself and the experiences that others have of me are different perspectives on the same events. Hmmm I think mechanistically this might be true. My experience is my thoughts, and my thoughts are downstream of neuronal excitations, which constitute the events. These are the same events that produce external actions that get perceived. What is actually the event space here?
maria@avramidou·
Interesting essay that is physicalist in spirit and inspired by @carlorovelli's relational view of the world. In summary:
- The mind is the behaviour of the brain, properly described in a high-level language.
- Neither my own experience of myself nor an external experience of me is primary: they are two distinct perspectives on the same events.
- “Subjective experience,” “qualia” and “consciousness” are names of phenomena that of course appear differently from different perspectives. It would be strange if they didn’t. They affect the body and the brain embodying them differently from how they affect something interacting with them from the exterior. This is not due to a mysterious “explanatory gap.” “Red,” as a quale, is the name of the process we generally undergo when we see or remember or think about the color red. We do not need to explain why it looks red for the same reason that we do not have to explain why the animal that we call “cat” looks like a cat.
- The source of the confusion about consciousness is treating consciousness and qualia as something to be derived from a scientific picture understood to be about something else. In fact, the scientific picture is a story about them.
Noema Magazine@NoemaMag

“A fierce debate is raging around the slippery notion of consciousness. It retraces a well-trodden pattern of cultural resistance: We humans are often scared by anything that may disturb our image of ourselves.” — @carlorovelli noemamag.com/there-is-no-ha…

alon turing@chaumian·
balls deep learning