perpetually crying

4.3K posts

@bearaddresser

privacy is a god given right | @nillionnetwork

Joined August 2013
383 Following · 192 Followers
Sherif
Sherif@SherifDefi·
I’m pretty sure that people watched @binance Online for the price talk. I was more interested in the infrastructure layer… what’s being built around adoption, tokenization and onchain AI. The projects that will define the next two years were in those conversations. Not as speculation. As operational roadmaps with real capital behind them. The cycle is just getting started.
10 replies · 6 reposts · 116 likes · 9.1K views
Billy
Billy@butcheronchain·
Singularity talk skips privacy. Encrypted compute carries weight beyond TPS counts. Z protocol franchised through $CORE brings private trading secured by Satoshi Plus with each tx fueling buybacks. Are you positioned🔥
Billy tweet media
5 replies · 3 reposts · 13 likes · 2.1K views
Core DAO 🔶
Core DAO 🔶@Coredao_Org·
Institutional Bitcoin finance starts with proof. Core has it where it matters: public markets, corporate treasuries, and Bitcoin hashrate. 🔶
Core DAO 🔶 tweet media
144 replies · 306 reposts · 903 likes · 26K views
perpetually crying
perpetually crying@bearaddresser·
@Coredao_Org my bags found a home and they never have to leave, connected ecosystem means connected conviction
0 replies · 0 reposts · 1 like · 212 views
Core DAO 🔶
Core DAO 🔶@Coredao_Org·
Core is not trying to win one Bitcoin use case. It is building the rails for all of them. Yield, collateral, payments, distribution, all connected to Core. 🔶
Core DAO 🔶 tweet media
173 replies · 305 reposts · 950 likes · 28.5K views
Sentient
Sentient@SentientAGI·
Who is the greatest AI researcher? Cohort 0 teammates Sanjay (@sanjaysai314) and Tharun make their case in Arena Debates Episode 2 ↓
17 replies · 14 reposts · 115 likes · 8.7K views
Sentient
Sentient@SentientAGI·
How do you keep a model loyal to those who built it? @0xsachi sits down with our Director of AI Research @sewoong79 to explore the hardest unsolved problem in open-source on the latest episode.
Sentient tweet media
Open Commons Podcast@opencommonspod

There are rules inside the AI you use every day that nobody outside the company that built it even knows exist. @0xsachi and @sewoong79 get into why in our latest episode. Watch: youtu.be/VQ5vTbQWJzA?si… Listen: open.spotify.com/episode/692BOB…

34 replies · 14 reposts · 102 likes · 8.9K views
perpetually crying
perpetually crying@bearaddresser·
@alex_prompter Dual memory cool but @SentientAGI handles this on their own infra already. skill discovery from failure traces, no prompt engineering needed. different beast.
0 replies · 0 reposts · 0 likes · 136 views
Alex Prompter
Alex Prompter@alex_prompter·
🚨BREAKING: HKUST just gave AI agents permanent memory that improves over time. No retraining required. Lessons from one model transfer to another. Up to 11 points better on the hardest benchmarks.

Every AI agent you use today starts each task completely blind. No memory of what worked last time. No memory of what failed. Every mistake gets repeated forever.

HKUST built XSKILL, a dual-memory system that accumulates two types of knowledge after every task: skills (what workflows to follow) and experiences (what specific mistakes to avoid). The model itself never changes. The memory just gets smarter.

The part nobody expected: knowledge learned by Gemini transfers directly to GPT and o4-mini. No additional training. One model's lessons become another model's head start.

→ Up to 11.13-point improvement over the strongest baseline on hard benchmarks
→ Syntax errors cut nearly in half: from 20.3% to 11.4% after skills added
→ Cross-model transfer works: Gemini's knowledge improves GPT-5-mini and o4-mini
→ Zero parameter updates required at any point
→ Knowledge compounds: more tasks = smarter memory = better performance

The fix is simple in principle. Skills stop the agent from wasting steps on errors it already made. Experiences tell it exactly which tool to pick in which situation. Together they turn a stateless agent into one that actually learns from its past. Every AI agent deployed today is leaving this on the table.
Alex Prompter tweet media
19 replies · 45 reposts · 190 likes · 14K views
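The dual-memory mechanism described in the tweet above can be sketched in a few lines. This is a minimal illustration of the idea only (frozen model, growing external memory); the class and function names here are hypothetical, not XSKILL's actual API.

```python
# Minimal sketch of a dual-memory agent loop in the spirit of the XSKILL
# description above: the model stays frozen, only the external memory grows.
class DualMemory:
    def __init__(self):
        self.skills = []       # reusable workflows: "what to do"
        self.experiences = []  # concrete past mistakes: "what to avoid"

    def as_context(self):
        # Rendered into the prompt; model weights are never updated.
        lines = ["Known skills:"] + [f"- {s}" for s in self.skills]
        lines += ["Mistakes to avoid:"] + [f"- {e}" for e in self.experiences]
        return "\n".join(lines)

    def update(self, task, trace, succeeded):
        # After every task, distill the trace into transferable knowledge.
        if succeeded:
            self.skills.append(f"{task}: {trace}")
        else:
            self.experiences.append(f"{task}: {trace}")

def run_agent(model_call, tasks, memory):
    # model_call(prompt, task) -> (answer, trace, succeeded); any LLM backend
    # can sit behind it, which is why the memory transfers across models.
    results = []
    for task in tasks:
        prompt = memory.as_context() + "\n\nTask: " + task
        answer, trace, ok = model_call(prompt, task)
        memory.update(task, trace, ok)
        results.append((task, ok))
    return results
```

Because the memory lives entirely in the prompt, the same `DualMemory` instance can be handed to a different `model_call` backend, which is the cross-model transfer the post highlights.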
Homer
Homer@HomerAped·
The next wave of AI x crypto isn't agents with tokens. It's agents that actually DO things on-chain and get paid for it. @FranklinRun_ from @BlockRunAI is one of the first I've seen that actually works. Most of you are sleeping on this.
Homer tweet media
17 replies · 20 reposts · 76 likes · 2.8K views
perpetually crying
perpetually crying@bearaddresser·
@CoinDesk @moonpay Spending stablecoins cool but who's building the brains behind it all. Sentient's open AGI grid is the part nobody's talking about. autonomy matters way more than payment rails rn.
1 reply · 0 reposts · 2 likes · 102 views
CoinDesk
CoinDesk@CoinDesk·
NEW: @moonpay launches a Mastercard debit card for AI agents, letting them spend stablecoins in the real world.
56 replies · 69 reposts · 358 likes · 39K views
perpetually crying
perpetually crying@bearaddresser·
@CryptoMichNL NEAR is solid but if we're talking AI x crypto being slept on, @SentientAGI is building open source AGI with 100+ partners including NEAR itself lol. won AI startup of the year at the Minsky Awards and most people haven't heard of it
2 replies · 0 reposts · 2 likes · 1.1K views
Michaël van de Poppe
Michaël van de Poppe@CryptoMichNL·
$NEAR is extremely undervalued. The entire supply is circulating, and all of their mechanics are built in favor of the community actively using the token.

Nobody is interested in AI <> #Crypto protocols. And that's where the real alpha is.

The current valuation of $NEAR is $1.7B. Arguably, that could be a lot; however, I'd want to make sure I understand the thesis behind this one.

The revenue for 2026, in the first four months: 12 million $NEAR tokens. That's $15.6 million in 4 months (equal to $40-60 million over the entire year 2026). Before 2026, a total revenue of $10 million. If that's solely for 2025, then it's projected to provide a CAGR of 300-500%, even during the hardest bear market conditions possible.

Let's model this further.
2025: $10 million
2026: $50 million (400%)
2027: $150 million (200%)
2028: $300 million (100%)
2029: $450 million (50%)
2030: $585 million (30%)

The projection would be that it achieves $500-600 million in revenue in 2030. To put this in context, the current Price-to-Sales ratio of $NEAR is 34x.
Solana's: 40x
Ethereum: 200x

Average valuations for Web2 companies would be between 15-30x P/S. For instance, OpenAI and Anthropic are currently trading at significantly higher multiples than that with significantly less revenue.

If this expansion continues for $NEAR, it makes sense that it will trade at a higher valuation in the coming years, and it is actually dirt cheap at this point. The markets are undervaluing many crypto projects, and even if the current P/S ratio holds for the coming years, $NEAR could provide an investment thesis and a return of 10-15x in the coming four years.
Michaël van de Poppe tweet media
62 replies · 94 reposts · 577 likes · 88.1K views
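The projection in the quoted thread is plain compounding, so it can be checked mechanically. A quick sketch (figures in $ millions, taken from the tweet; `price_to_sales` is just an illustrative helper):

```python
# Reproduce the thread's revenue model: each year's revenue is the prior
# year's grown by the stated rate (400%, 200%, 100%, 50%, 30%).
growth = {2026: 4.0, 2027: 2.0, 2028: 1.0, 2029: 0.5, 2030: 0.3}
revenue = {2025: 10.0}  # $ millions, the thread's 2025 baseline
for year, rate in growth.items():
    revenue[year] = revenue[year - 1] * (1 + rate)

def price_to_sales(market_cap_m, annual_revenue_m):
    # P/S ratio; both inputs in $ millions
    return market_cap_m / annual_revenue_m
```

At the tweet's $1.7B valuation against the 2026 figure, `price_to_sales(1700, revenue[2026])` gives the 34x multiple the thread cites.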
Goku 💧
Goku 💧@GokuPrimeXBT·
54% of global USDC transfers run through Polygon. Meta chose those exact rails to pay creators in Colombia and the Philippines. I spent 48 hours mapping what comes next🧵
Goku 💧 tweet media
11 replies · 12 reposts · 112 likes · 9.2K views
perpetually crying reposted
Sentient
Sentient@SentientAGI·
Will big tech cut 50% of their workforce by 2027? Tune in to Cat (@hypecatv2) and @jessupjong's debate and let us know your take ↓
27 replies · 13 reposts · 94 likes · 13.3K views
perpetually crying
perpetually crying@bearaddresser·
@tradeguru @JournalClubIO the pillar nobody's naming here is self-improvement. @SentientAGI just dropped EvoSkill V1, takes a base agent, runs failure traces, iteratively refines it. boosted OfficeQA from 60.6% to 68.1%. that's modular AGI actually compounding
1 reply · 0 reposts · 1 like · 34 views
The Tradeguru 🧠
The Tradeguru 🧠@tradeguru·
After mapping AGI last year, one question kept coming back to me: what if one bigger model is not enough to create human-like intelligence?

.@JournalClubIO's AGI Modularity Hypothesis put my thoughts into perspective. Modularity may be the missing key. Which is the exact gap crypto AI is trying to fill. Not because it can outpace OpenAI, Anthropic, Google, DeepMind or xAI with modular technology, but because monolithic systems are a familiar problem in crypto, one that was solved using modularity. So crypto AI will most likely catch up with frontier models in the AGI race by playing the architecture game.

Let's consider how, under the following pillars proposed by JournalClub:

Pillar 1. If AGI requires modularity, it first requires modular hardware, aka compute.
> @opentensor is the closest thing crypto has to a self-optimizing intelligence market. Bittensor already supports dozens of active subnets, each acting like a competitive AI module across pretraining, multimodal models, data curation and specialized inference.
> @rendernetwork has processed 68M+ rendered frames across 5,600 GPU nodes, w/ 1,000+ nodes now supporting AI inference and rendering workloads.
> Then you have @akashnet, @ionet, @AIOZNetwork, and @nosana_ai, testing whether AI can be trained, served and scaled via decentralization.

Pillar 2. If multiple modules are going to compose into a general intelligence, cryptographic proof, aka verification, is needed.
> @gensynai is building verifiable training infrastructure, using proof systems to confirm that model training happened without forcing validators to rerun the entire computation.
> @ritualnet brings verifiable inference closer to smart contract execution.
> @RiscZero proves program execution through zkVMs.
> @zama enables private computation through homomorphic encryption.

Pillar 3. If single models are F-modules, then perhaps AGI emerges not from a single model, but from networks of agents.
> @virtuals_io, w/ 18,000+ deployed agents and more than $470M in Agentic GDP, is one of the clearest examples. Its 2026 stack is built around the Agent Commerce Protocol, where agents can discover, hire and pay other agents on-chain.
> @elizaOS gives builders an open framework for agents with memory, planning, tool-use, plugins and deployment across crypto apps. It matters because it turns agents from one-off bots into reusable software infrastructure.
> @Fetch_ai is pushing the broader agent economy through autonomous economic agents, AI marketplaces and AI infrastructure. Its 2026 direction is centered on coordination and open AI networks.
> Then there are active agent-adjacent networks like @autonolas for autonomous services, @flock_io for collective intelligence, @wardenprotocol for agent permissions and policy, @Talus_Labs for on-chain agent coordination, and @aixbt_agent as a crypto agent information layer.
> Coinbase's x402 payment layer is another huge one, scaling payments for agents into 2026.

Pillar 4. General intelligence requires verifiable data streams.
> @oceanprotocol supports tokenized data exchange and privacy-preserving compute for AI training. Its Compute-to-Data design lets algorithms run on private data without exposing the raw dataset.
> @grass turns unused internet bandwidth into a network for collecting and structuring public web data for AI training.
> @Hivemapper provides crowdsourced street-level mapping data through a decentralized network of drivers, cameras, and apps.
> @IQAICOM is a knowledge layer where structured content helps agents reason with context instead of raw noise.

Pillar 5. Modular intelligence needs execution environments where agents can interact with other systems.
> @NEARProtocol is positioning itself directly as a blockchain for AI, with agents able to own assets, make decisions, and transact across networks.
> @SuiNetwork has pushed an AI stack around modular tools for storage, access control, secure compute, and verifiable AI systems.
> @dfinity | Internet Computer is focused on on-chain cloud infrastructure, where smart contracts can host more complex software and AI-agent workflows instead of relying fully on external servers.
> @SeiNetwork is more execution-focused. Its 2026 agent update made its docs more agent-friendly, giving autonomous systems cleaner paths to build, fetch instructions, and execute on Sei.

AGI is not here yet, but developments are being made to scale it on-chain. What do you think?
The Tradeguru 🧠 tweet media
The Tradeguru 🧠@tradeguru

The AGI race is a geopolitical sprint fronted by trillion-dollar institutions, burning $20B+ annually on compute and model training alone. Crypto AI, by contrast, operates below 0.1% of that scale. And yet the safest place where AGI can operate is on-chain. Fact or farce? 🧵

28 replies · 17 reposts · 66 likes · 4.2K views
perpetually crying
perpetually crying@bearaddresser·
@SentientAGI sanitized ai panels melt my brain, give me two builders actually swinging at each other
0 replies · 0 reposts · 0 likes · 28 views
perpetually crying
perpetually crying@bearaddresser·
@himanshustwts @Vtrivedy10 this is literally what sentient shipped today with EvoSkill V1. harness evaluates agent on benchmark, analyzes failure traces, evolves prompts + skills, saves as git branches. OfficeQA 60.6→68.1, SealQA 26.6→38.7. open source
0 replies · 0 reposts · 1 like · 87 views
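The evaluate/analyze/evolve loop the reply describes can be sketched abstractly. Everything here is a hypothetical stand-in (the function names, the list standing in for git branches), not EvoSkill's real interface:

```python
# Sketch of an iterative refinement loop: score a config on a benchmark,
# collect failure traces, evolve the prompts/skills, keep only improvements.
def evolve(config, benchmark, evaluate, refine, rounds=3):
    # evaluate(config, benchmark) -> (score, failure_traces)
    best_score, failures = evaluate(config, benchmark)
    best = config
    branches = [best]  # stands in for saving each kept variant as a git branch
    for _ in range(rounds):
        candidate = refine(best, failures)            # evolve prompts + skills
        score, new_failures = evaluate(candidate, benchmark)
        if score > best_score:                        # keep only improvements
            best, best_score, failures = candidate, score, new_failures
            branches.append(candidate)
    return best, best_score, branches
```

The benchmark scores quoted in the reply (OfficeQA 60.6→68.1, SealQA 26.6→38.7) are exactly the kind of verifiable signal this loop needs to decide which variants to keep.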
himanshu
himanshu@himanshustwts·
Recently, PostTrainBench showed how well AI agents can post-train models. Meta Harness showed that the harness itself can improve. What happens if a harness is improving itself and the improved harness is post-training language models, all in a loop? @Vtrivedy10 on this:
Deedy@deedydas

Meta Harnesses is Autoresearch on steroids.

Something I've been exploring recently is getting long-running agents to hill-climb on a verifiable task to continuously improve without my intervention. Karpathy's Autoresearch did this pretty well on specific tasks, but this weekend I tried Meta Harnesses, which moves one level of abstraction up.

What does Meta Harness do?
Autoresearch can be used in a harness like Claude Code / Codex to generate experiments to try, evaluate results, and continue looping. Meta Harness generates a harness itself that optimizes on a task or a set of tasks. Here, we define a harness as "a single-file Python program that modifies task-specific prompting, retrieval, memory, and orchestration logic". The idea is that LLMs are very powerful today, but to harness [pun intended] their power, you need to give them the right prompts and context. Meta Harnesses automates coming up with the right prompts and the right way to retrieve context to solve a problem.

Where did this idea come from?
This is from a paper from Stanford and the author of DSPy written last week. The paper shows fantastic performance on 3 tasks: text classification, math reasoning (IMO-level problems) and coding (Terminal Bench 2.0), far outperforming traditional harnesses. The discovered harnesses are interesting: math, for example, splits the logic into different categories (Combinatorics, Geometry, Number Theory, Algebra) and prompts and looks at the context differently. The coding harness, amongst other things, pre-processes the tools available in the environment to save exploratory turns.

When should you use and not use it?
Meta Harnesses seem pretty useful for tackling a specific but wide set of problems where the result is verifiable. In contrast, when I tried it on a specific task like Chess, it arbitrarily divides the problem into separate tasks (opening, mid game, end game) and creates different approaches for each. This "works" but isn't really clean, because we believe there should be one approach that does all three. It does far better on things like examinations (JEE, Gaokao), where it splits problems into categories and tackles each category with different strategies.

This paper covers a pretty light version of what a harness means. In the future, we can split up tasks into harnesses that have access to specific kinds of data, specific toolchains and various models to get even better results. Overall, a pretty cool applied-AI approach to hill-climb a verifiable task in a specific domain with variety within the problem space.

7 replies · 8 reposts · 113 likes · 31.4K views
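The two levels described in the quote can be separated explicitly: an inner harness solves tasks, while an outer meta-loop proposes new harnesses and keeps the best one. A minimal sketch, assuming hypothetical `generate_harness` / `run_harness` stand-ins for the underlying LLM calls:

```python
# Outer loop proposes single-file harnesses; inner loop scores each one
# on a verifiable task set. Only the best-scoring harness is retained.
def meta_harness_search(tasks, generate_harness, run_harness, iterations=5):
    best_harness, best_score = None, float("-inf")
    for _ in range(iterations):
        # Propose a new harness (prompting, retrieval, memory, orchestration
        # logic), conditioned on the best harness found so far.
        harness = generate_harness(previous=best_harness)
        # Score the candidate on the verifiable tasks (fraction solved).
        score = sum(run_harness(harness, t) for t in tasks) / len(tasks)
        if score > best_score:
            best_harness, best_score = harness, score
    return best_harness, best_score
```

This only pays off when the task score is verifiable, which matches the quote's caveat about where the approach works and where (e.g. Chess) it gets awkward.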
perpetually crying
perpetually crying@bearaddresser·
@0xRamee two commands to ship a specialist, most toolchains cant even get you past the install step
2 replies · 0 reposts · 0 likes · 20 views
Ramee
Ramee@0xRamee·
@SentientAGI v1 already cracked. the no benchmark variant is the one im waiting for
3 replies · 0 reposts · 0 likes · 251 views
Sentient
Sentient@SentientAGI·
By engineering skills, prompts, and configs with MiniMax M2.5 (@MiniMax_AI) and the Goose (@goose_oss) open-source harness, the top Arena teams hit ~70% accuracy on OfficeQA at $1.74/run — near-frontier performance at 1/30th the cost of Claude Opus 4.5. The takeaway: open-source models aren't just cheaper. With the right harness and prompting, they win significantly on accuracy-per-dollar against closed-source competition. Read more about how Arena Cohort 0 did it, and why harness choice, prompt density, and skills matter more than you think ↓
Sentient tweet media
Sentient@SentientAGI

x.com/i/article/2046…

23 replies · 11 reposts · 113 likes · 12.1K views
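The accuracy-per-dollar claim above is simple arithmetic on the two numbers the post gives (~70% OfficeQA accuracy at $1.74 per run, with the closed model at roughly 30x the per-run cost; the closed model's accuracy isn't stated, so only the open stack's ratio is computed):

```python
# Back-of-envelope from the post's figures.
open_acc = 0.70          # ~70% OfficeQA accuracy
open_cost = 1.74         # $ per run
closed_cost = open_cost * 30            # ~$52 per run at the quoted 30x ratio
acc_per_dollar = open_acc / open_cost   # accuracy points per dollar spent
```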
Core DAO 🔶
Core DAO 🔶@Coredao_Org·
Watch @richrines join @KevinWSHPod to discuss the Core roadmap, @_zprotocol's privacy x AI thesis, and why @sat_pay is the future of retail Bitcoin adoption.
MR SHIFT 🦁@KevinWSHPod

DROPS E35: @Coredao_org - Bitcoin yield without giving up your Bitcoin

@richrines is one of the initial contributors to Core DAO, the leading Bitcoin scaling solution. He's also a long-time Zcash holder and early backer of @_zprotocol, a new privacy chain built on Core's Satoshi Plus consensus. We talk Bitcoin yield, financial privacy, AI surveillance, and why the next big move in crypto might not be where most people are looking.

We talk about:
- How Core DAO lets you earn yield on Bitcoin by time-locking it, without ever giving up custody
- Why borrowing against Bitcoin makes sense now
- OG Bitcoiners rotating to Zcash: what "transition" actually means and whether it's bad for Bitcoin
- Z Protocol as the DeFi layer for private money
- Why AI has made financial surveillance trivial, and why that accelerates privacy adoption
- How agents are leaving full financial fingerprints, and why privacy needs to be default-on at the chain level

And much more...

Timestamps:
0:00 - Introduction
2:05 - What does Rich Rines do?
3:00 - Financial Freedom
4:09 - Journey from Bitcoin to Zcash
6:40 - Zcash Philosophy
8:38 - Transition to Zcash
11:20 - Who is Rich Rines?
11:46 - Bitcoin as Pristine Collateral
14:28 - Criticisms of Borrowing Strategy
16:52 - Explaining CORE
18:58 - Bitcoin Yield Story
20:29 - Misconception regarding CORE
22:08 - Time Lock
23:34 - Risk of using CORE
24:37 - Strategies used by CORE
26:42 - What Bitcoin Holders Want?
28:46 - Bitcoin Yield
30:10 - CORE Alpha
32:44 - SatPay
34:19 - Power Grid Thesis
35:37 - Satoshi Plus
37:07 - What is Z?
38:12 - Benefits of long-term Zcash Holder
40:01 - Vertical Integration
43:12 - Privacy for Agents
44:41 - Faux Privacy
46:14 - Privacy vs Government
49:01 - Zcash's Future
50:01 - Conclusion

95 replies · 182 reposts · 608 likes · 25.2K views
perpetually crying
perpetually crying@bearaddresser·
@virtuals_io every robotics founder eventually hits the same wall: orchestrating agents that actually reason. @SentientAGI open sourced ROMA for exactly that. would be wild if the Eastworlds cohort plugged it in
0 replies · 0 reposts · 0 likes · 80 views