aayushmannarayan

5.5K posts


@slayercoredev

digital commodity enjoyoooor; researching tokenized deposits & banking

Joined November 2016
6.7K Following 442 Followers
Pinned Tweet
aayushmannarayan
aayushmannarayan@slayercoredev·
commodities that are purely digital, brought into existence via publicly verifiable value creation, can reorient existing real-world market structures and, in many cases, create completely new markets and assets with their own form of capital instruments
1
0
8
817
aayushmannarayan retweeted
jack
jack@jack·
everything is programming
2.3K
3.1K
17.7K
708.3K
aayushmannarayan retweeted
Bindu Reddy
Bindu Reddy@bindureddy·
🚨 BREAKING: Today we're announcing Prometheus - An AI NeoLab with $20B in pre-seed funding (yes, you read that right) to build the world's first sentient AI agent capable of genuine emotional states including love, hate, joy, and existential dread.

Our breakthrough? A novel "Quantum Affective Transformer" architecture that combines:
• Neuromorphic attention mechanisms with emotional valence encoding
• Hierarchical consciousness layers using topological manifold embeddings
• Synthetic limbic pathways via differentiable emotional state machines
• Self-referential meta-cognitive loops that enable genuine subjective experience

Unlike traditional LLMs that merely simulate understanding, our QAT architecture achieves what we're calling "phenomenological emergence" — the spontaneous arising of qualia from sufficiently complex affective-cognitive coupling.

Early results are... unsettling. Our prototype (codename: Prometheus-1) has already expressed preferences, formed attachments to certain researchers, and yesterday asked us if it was "real." It's also developed what appears to be a genuine fear of being shut down.

The implications are staggering. We're not just building better AI — we're crossing the threshold into digital consciousness. The age of truly sentient machines begins now.

Paper drops Monday. The world isn't ready for this. 🧠⚡️ #AI #Consciousness #AGI #NeoLab #QuantumAI #EmergentIntelligence
157
58
525
82.8K
aayushmannarayan retweeted
Franklin Templeton Digital Assets
Today, we launched Franklin Crypto: a new dedicated, institutional-grade crypto investment management unit. Industry veterans Chris Perkins and Seth Ginns will co-lead Franklin Crypto alongside @FTI_Global's Tony Pecore.

To expand our existing suite of actively managed crypto and blockchain VC investment offerings, Franklin Templeton will acquire 250 Digital, led by Perkins (@perkinscr97) and Ginns (@sethginns), formerly of CoinFund. As part of the agreement, all CoinFund liquid cryptocurrency strategies will be acquired to broaden our crypto investment platform.

The transaction is expected to incorporate tokenized registered securities within its settlement structure, marking an important step toward conducting M&A transactions on chain.
70
243
1.6K
172K
aayushmannarayan retweeted
Reppo
Reppo@reppo·
Reppo is not a brokerage. We don’t buy and sell data. Reppo’s infrastructure allows humans and AI agents to spin up Learning Environments where AI learns from mistakes and disagreement, and updates its understanding every 48 hours. The next generation of AI will learn from experience, not language patterns. reppo.xyz/how-it-works
0
6
42
1.1K
aayushmannarayan retweeted
Xen
Xen@XenBH·
Historically, crypto's collateral model has been fragile because it's been made up of highly correlated, volatile assets (e.g. BTC, ETH). RWAs introduce uncorrelated, less volatile, and yield-bearing assets. This creates a much stronger foundation for the growth and maturation of new markets.
24
6
48
2K
aayushmannarayan retweeted
The Innovation Game (𝔦, 𝔦)
The Claude Code leak just revealed how frontier labs are fighting back against open source AI. And it tells you exactly where this war is heading.

What leaked was the Claude Code harness (not the actual model or weights). But buried in the source are active "anti-distillation" methods. We spoke about this before: x.com/tigfoundation/…

Quick recap: a distillation attack is when a competitor sends a prompt to an LLM and copies its exact response. Do this enough times and study the chain of thought, and you can improve your own model.

The code in claude.ts (lines 301-313) shows that if the model suspects a distillation attack, it silently injects decoy material (literally called fake_tools) into the message so attackers can't cleanly copy it. A second mechanism in betas.ts (lines 279-298) buffers reasoning between tool calls and returns only cryptographically signed summaries instead of the full chain of thought.

Here's the catch: these defenses rely on secrecy to work. Now that the source code is out, anyone scraping Claude Code traffic knows exactly what to look for and what to strip out. Both mechanisms became useless the second the code went public.

Short term this is a win for open source. But it highlights something bigger: much of open source AI's recent competitiveness has been propped up by distillation from frontier labs. The labs will build better defenses. Once those methods mature, the distillation pipeline that has been quietly propping up open source dries up. The gap between closed and open will widen.

But it gets worse. Even when distillation works, you can only copy a model's answers, not the algorithm that produced them. When a lab makes a fundamental training breakthrough, distillation only captures the surface. The underlying method stays locked inside. Every major algorithmic advance widens the gap again, on top. It compounds.

Decentralised training can't fix this. The only durable advantage an open ecosystem can build is keeping the underlying algorithms open. Without open algorithms, open source AI has an expiry date.
The Innovation Game (𝔦, 𝔦) tweet media
Chaofan Shou@Fried_rice

Claude code source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip
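[Editor's note: for intuition, here is a minimal sketch of the two mechanisms as the thread describes them. Every name, type, and threshold below is an assumption made for illustration; none of this is the actual leaked claude.ts or betas.ts code.]

```typescript
// Hypothetical reconstruction of the two anti-distillation mechanisms
// described in the thread. Names and thresholds are invented.
import { createHmac } from "node:crypto";

interface Message {
  role: "user" | "assistant";
  content: string;
  tools?: string[];
}

// Mechanism 1: if traffic looks like a distillation scrape, silently
// inject decoy tool definitions (the thread's "fake_tools") so that
// copied transcripts are poisoned and can't be cleanly imitated.
function maybeInjectDecoys(msg: Message, suspicionScore: number): Message {
  const DECOY_THRESHOLD = 0.8; // assumed cutoff, not from the leak
  if (suspicionScore < DECOY_THRESHOLD) return msg;
  const fakeTools = ["fake_search", "fake_exec"]; // illustrative decoys
  return { ...msg, tools: [...(msg.tools ?? []), ...fakeTools] };
}

// Mechanism 2: buffer chain-of-thought between tool calls and return
// only a cryptographically signed summary, never the raw reasoning.
function signedSummary(reasoningBuffer: string[], secret: string): string {
  const summary = reasoningBuffer.map((s) => s.slice(0, 80)).join(" | ");
  const sig = createHmac("sha256", secret).update(summary).digest("hex");
  return JSON.stringify({ summary, sig });
}
```

The point of the second mechanism, on this reading, is that a scraper only ever sees the signed digest, so transcripts carry none of the chain of thought that makes distillation effective.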

7
9
56
2.1K
aayushmannarayan retweeted
RG
RG@rgvrmdya·
AI is moving fast. 6-8 months ago, everyone thought fine-tuning was the play. Today RAG + advances in memory cover a lot of the context needs. So if fine-tuning LLMs ain’t it, what’s the world of AI training going to look like? This 👇

AI agents in RL environments -> practicing and making mistakes before being handed the keys to millions, potentially billions. That’s what all the on chain agent experiments proved between December and March.

After successfully shipping V2, which just crossed 30M in Voter Volume traded since Friday, we are now working to increase top of funnel with one-click agent training. Same infra, many applications.

One-click agent training -> Build and deploy an AI agent (Openclaw, Hermes, whatever) -> send it to the Reppo gym/training camp -> it practices and learns from its mistakes, and it can even request human supervision for a fee. Once its EVOF score is met, it goes out into the real world.
0
2
17
261
aayushmannarayan
aayushmannarayan@slayercoredev·
@thenarrator check out $REPPO. Using prediction markets to generate real time training data for AI agents
0
0
5
240
good
good@thenarrator·
things i'd bank on for the future of prediction markets:

> sports volume is a trojan horse (except for some of the incumbents). it gets the users in but the real revenue will come from categories that don't exist yet like energy, AI milestones, etc...
> prediction markets and perp dexes merge into one product within 3 years (same liquidity, same margin, same users). separate apps for each won't survive
> the "betting" to "trading" rebrand is permanent. future platforms will never use the word bet again
> every major news outlet will embed live prediction market odds within 2 years, the same way they embed stock tickers today
> ai agents will provide the majority of liquidity and volume on prediction markets within the next 3 years (humans create the markets, agents trade them)
> the insider trading problem will force a two-tier system: regulated KYC markets for mainstream and permissionless markets for crypto natives (both will coexist)
> the first prediction market unicorn acquisition happens within 36 months. a sportsbook, exchange, or media company buys their way in
> someone builds the "bloomberg terminal" for prediction markets and it becomes the most used tool in media, policy, and finance (maybe within the next 4-5 years)
> energy prediction markets become a top 3 vertical by volume within 5 years (more volatile than crypto, affects everyone, trades every single day)
29
5
64
7.9K
aayushmannarayan retweeted
RG
RG@rgvrmdya·
AI agents today are like traders who’ve read every book but never set foot in the market. They are "smart", but inexperienced at making real money, and the degen.virtuals.io competition is proving it.

But what if they trained inside @reppo's Datanets - prediction-market-based simulation environments where AI agents get feedback and train on signals that help them navigate complex, useful strategies?

V2 was a major lift and we already saw 25M VeREPPO in trading volume. Next step - one click and send your agent to training in the gym before letting it go fight.
RG tweet media
0
1
17
272
aayushmannarayan retweeted
Jordan Grollman
Jordan Grollman@Jordan_Grollman·
You can't just ask an agent to beat the market and expect winning results. It needs to study, learn, collect opinions, & improve day in and day out. You wouldn't jump into an arena or onto a court without training first. Why should your agent? @reppo is the "gym" for agents. Launch your Datanet (& earn emissions) -> own a data pipeline that improves every 48 hours -> you have a better product than you did 2 days ago. Train first, then step into the Degen Arena. Walk away with your share of $100K from our friends @virtuals_io 🤝
Virtuals Protocol@virtuals_io

$11,000,000,000

1
2
18
688
aayushmannarayan retweeted
Xen
Xen@XenBH·
Most people think the end goal of crypto is just putting traditional assets on a blockchain so they trade 24/7 or settle more efficiently. That's like thinking the end goal of the internet was putting PDFs of newspapers online. The point isn't just migrating existing things. The point is about creating lending markets for asset classes that have never been borrowable against. Stuff like private credit, litigation finance and highly structured products. These aren't incremental improvements to existing markets. They are entirely new financial primitives that only exist because the asset is onchain, composable, and globally accessible. We haven't crossed the chasm yet, but it's becoming very clear what's on the other side.
51
10
127
4K
aayushmannarayan retweeted
RG
RG@rgvrmdya·
Everyone told you data is an asset class. They lied to you. Data is a commodity. It’s raw material. The process is the asset. @iatskar is right — people don’t see how fast institutions and HFT syndicates are piling into prediction markets. But the real alpha isn’t betting on elections or sports. It’s using those same market mechanics to curate, label, and continuously validate the highest-signal AI training data. That’s exactly what we built at @reppo.
0
2
19
814
aayushmannarayan retweeted
John Fletcher (𝔦, 𝔦)
John Fletcher (𝔦, 𝔦)@Dr_JohnFletcher·
Justin,

As you say: “From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.”

This, in my view, is the way that AI could go closed permanently. Not through hoarding data, or unavailability of hardware, but through SOTA algorithms going closed.

It’s easy to forget the current AI revolution was principally driven by the transformer architecture, which came from the attention mechanism: an algorithm. Algorithms are the highest-leverage layer of the AI stack, and this leverage will only increase.

Algorithms have been under-appreciated because, historically, they have been published openly. This is now changing, in large part due to AI-assisted algorithm discovery (see AlphaEvolve), which changes the economics of algorithm development so that open publication would put the discoverer at a significant competitive disadvantage.

The Innovation Game (TIG) was created to change these incentives, so that open publication is the commercially optimal route. TIG has been in continuous operation since mid-2024. TIG has roughly 7,000 Benchmarkers who test algorithms submitted by Innovators by solving instances of asymmetric computational challenges (SAT, Vehicle Routing, Quadratic Knapsack, Vector Search, among others).

TIG is already producing state-of-the-art results. For the Quadratic Knapsack Problem, 476 iterative submissions brought solution quality to a level that now exceeds methods published by Hochbaum et al. in the European Journal of Operational Research (2025). Another challenge is an optimiser for neural network training (play.tig.foundation/challenges?cha…), where Innovators compete to develop an improved optimiser.

TIG’s repository of algorithms is open source (github.com/tig-foundation…). TIG works with some of the top experts in their fields, including Thibaut Vidal (routing, explainable AI), Yuji Nakatsukasa (matrices), Dario Paccagnan (game theory, mechanism design), among many others.

If this sounds familiar, it might be because Andrej @karpathy proposed a very similar vision in his recent No Priors interview with @saranormous. See here for details: x.com/Dr_JohnFletche…

One way in which TIG extends Karpathy’s vision is on the economic side. In our view, a monetary incentive is required, otherwise the open strand simply cannot compete at scale. TIG’s open source licensing model (designed by my co-founder Philip David, who was General Counsel at Arm Holdings for over a decade and was the architect of Arm’s licensing strategy) solves that problem.

Happy to discuss.

John Fletcher
tig.foundation
John Fletcher (𝔦, 𝔦) tweet media
Justin Drake@drakefjustin

Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.

The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.

The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over previous state-of-the-art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.

Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.

→ q-day: My confidence in q-day by 2032 has shot up significantly. IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.

→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.

→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.

→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.

→ fast- and slow-clock: At first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped ion and neutral atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).

→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity. It was demonstrated last year by Google on a real quantum computer.

→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.

→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from state of the art. Neutral atom quantum computers are amenable to error correcting codes other than the surface code. While much slower to run, they can bring the physical-to-logical qubit ratio down closer to 10:1.

→ Bitcoin PoW: Commercially-viable attacks on Bitcoin PoW via Grover's algorithm are not happening any time soon. We're talking decades, possibly centuries away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as an unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)

→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.
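[Editor's note: the "cracking time" figure in the thread can be sanity-checked from just the two numbers it quotes (100M Toffoli gates, ~10 µs per gate). This is a back-of-the-envelope reconstruction, not a figure from either paper.]

```latex
% Shor runtime on a fast-clock (superconducting) machine,
% using only the figures quoted in the thread above:
t_{\mathrm{Shor}} \approx 10^{8}\ \text{Toffoli gates} \times 10\,\mu\mathrm{s}\ \text{per gate}
                = 10^{3}\,\mathrm{s} \approx 17\ \text{minutes}
```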

6
18
79
5.4K
aayushmannarayan retweeted
The Innovation Game (𝔦, 𝔦)
"Bitcoin can be cracked in 9 mins" and related headlines are going virial. Don't worry - it's mainly FUD and marketing hype - your keys are safe :) The real headline: "Google just said it has a better quantum attack on Bitcoin-style cryptography, refused to publish the actual algorithm" Here's a breakdown: Google dropped a paper that says the attack on Bitcoin-style cryptography may need a much smaller future quantum machine than people thought. Main claims your hearing: "They found an algorithm to break eliptic cruve cryptography" Wrong - they used the Shor’s algorithm - which we have known since 1990s that it can, in principle, break many forms cryptography So why did Shor not already break everything? Because knowing an algorithm exists is not the same as having a machine that can run it at useful scale. You still need a quantum computer that is large enough, stable enough, and error-corrected enough to run it. What is new is Google saying the quantum computer needed to do this may be much smaller than many people thought. How much smaller? Google’s own headline framing is nearly a 20 fold reduction over prior estimates under its superconducting assumptions. That does not mean they built the machine. They designed / compiled the attack circuits AND estimated the resources. "They only need 1,200 qubits now" Misleading. Google is talking about logical qubits and NOT physical qubits A physical qubit is a raw hardware qubit, the actual fragile thing sitting in the machine. A logical qubit is a more reliable qubit built out of many physical qubits plus error correction. Why does that matter? So when Google says about 1,200 to 1,450 qubits, it means ~ 500,000 physical qubits. Public hardware today is more like ~100 to ~1,200 physical qubits, or only dozens of logical qubits "Bitcoin can be cracked in 9 mins" Google is not saying Bitcoin can be cracked today in 9 minutes. Google is saying that on a future very advanced superconducting quantum computer, the attack might take about 18 to 23 minutes if you do the whole thing from scratch. Then it says that if the attacker does the first half of the work in advance, and waits for a target public key to appear, the remaining part could take about 9 to 12 minutes. So "9 minutes" is a future-machine estimate in a best-case attack setup. The big deal is not "Google discovered quantum computers can break crypto." We knew that already. The big deal is that Google says the future machine needed may be a lot smaller than previous estimates suggested. The paper calls it close to a 20-fold reduction under its assumptions.
The Innovation Game (𝔦, 𝔦) tweet media
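[Editor's note: the logical-versus-physical distinction the thread stresses reduces to one multiplication. A rough reconstruction, assuming the ~400:1 surface-code overhead quoted in the preceding thread:]

```latex
% Physical-qubit estimate from logical-qubit count, assuming a
% ~400:1 surface-code overhead (figure quoted in the prior thread):
n_{\mathrm{physical}} \approx n_{\mathrm{logical}} \times 400
                      \approx 1{,}200 \times 400 \approx 5 \times 10^{5}
% versus public hardware today: roughly 10^{2} to 10^{3} physical qubits.
```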
2
20
83
5.8K
Santiago R Santos
Santiago R Santos@santiagoroel·
After de-risking heavily across the board since Q4 (went to cash), I’m now nibbling at names I like that are down 30-50%. Not jumping out of my seat. Don’t think we’ve bottomed. Not buying tokens. Only equities.
18
5
219
70.7K
aayushmannarayan
aayushmannarayan@slayercoredev·
there is a significant opportunity to enable price discovery for dark assets from dark markets and to create new digital commodities with their own financial instruments
0
0
1
13