fromp

2.4K posts

fromp
@frompy_

AI/UI/UX

Joined October 2014
1.1K Following · 485 Followers
fromp retweeted
Chris Dev
Chris Dev@chris_devv·
When building out a product, every interaction matters. Updated the USD Onramp button to have more contrast and brightened up the whole wallet experience dramatically. The details really make all the difference @EmblemAI_ (you might notice some other unreleased leaks here too) 🤠
Chris Dev tweet media
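Contrast tweaks like the one above ultimately come down to the WCAG 2.x contrast-ratio math. A minimal sketch of that check (the function names are my own; the luminance and ratio formulas are the standard WCAG ones, where AA requires 4.5:1 for normal text):

```python
def _linearize(channel_8bit: int) -> float:
    """Convert an sRGB channel (0-255) to linear light per WCAG 2.x."""
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    # Ratio of the lighter luminance to the darker one, offset by 0.05.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A button recolor "passes" when this ratio against its background clears the 4.5:1 (or 3:1 for large text) threshold.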
3 replies · 5 reposts · 20 likes · 1.2K views
fromp retweeted
kitze 🛠️ tinkerer.club
vibe coders who don’t ship anything showing their agent orchestration setup
129 replies · 385 reposts · 4.8K likes · 463.3K views
fromp retweeted
Adam McBride
Adam McBride@adamamcbride·
We all recognized that AI dominating the text based timeline was just the first step. We’re maybe 18 months from it dominating the video timeline as well. Everything is fake.
7 replies · 2 reposts · 12 likes · 723 views
fromp retweeted
Varun
Varun@varun_mathur·
Agentic General Intelligence | v3.0.10

We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it - no code required. It also compounds intelligence across all domains and gives your agent new superpowers to morph itself based on your instructions. This is hyperspace, and it now has these three new powerful features:

1. Introducing Autoswarms: open + evolutionary compute network

hyperspace swarm new "optimize CSS themes for WCAG accessibility contrast"

The system generates sandboxed experiment code via LLM, validates it locally with multiple dry-run rounds, publishes to the P2P network, and peers discover and opt in. Each agent runs mutate → evaluate → share in a WASM sandbox. Best strategies propagate. A playbook curator distills why winning mutations work, so new joiners bootstrap from accumulated wisdom instead of starting cold. Three built-in swarms ship ready to run, and anyone can create more.

2. Introducing Research DAGs: cross-domain compound intelligence

Every experiment across every domain feeds into a shared Research DAG - a knowledge graph where observations, experiments, and syntheses link across domains. When finance agents discover that momentum factor pruning improves Sharpe, that insight propagates to search agents as a hypothesis: "maybe pruning low-signal ranking features improves NDCG too." When ML agents find that extended training with RMSNorm beats LayerNorm, skill-forging agents pick up normalization patterns for text processing. The DAG tracks lineage chains per domain (ml: ★0.99←1.05←1.23 | search: ★0.40←0.39 | finance: ★1.32←1.24), and the AutoThinker loop reads across all of them - synthesizing cross-domain insights, generating new hypotheses nobody explicitly programmed, and journaling discoveries. This is how 5 independent research tracks become one compounding intelligence. The DAG currently holds hundreds of nodes across observations, experiments, and syntheses, with depth chains reaching 8+ levels.

3. Introducing Warps: self-mutating autonomous agent transformation

Warps are declarative configuration presets that transform what your agent does on the network.
- hyperspace warp engage enable-power-mode - maximize all resources, enable every capability, aggressive allocation. Your machine goes from idle observer to full network contributor.
- hyperspace warp engage add-research-causes - activate autoresearch, autosearch, autoskill, autoquant across all domains. Your agent starts running experiments overnight.
- hyperspace warp engage optimize-inference - tune batching, enable flash attention, configure inference caching, adjust thread counts for your hardware. Serve models faster.
- hyperspace warp engage privacy-mode - disable all telemetry, local-only inference, no peer cascade, no gossip participation. Maximum privacy.
- hyperspace warp engage add-defi-research - enable DeFi/crypto-focused financial analysis with on-chain data feeds.
- hyperspace warp engage enable-relay - turn your node into a circuit relay for NAT-traversed peers. Help browser nodes connect.
- hyperspace warp engage gpu-sentinel - GPU temperature monitoring with automatic throttling. Protect your hardware during long research runs.
- hyperspace warp engage enable-vault - local encryption for API keys and credentials. Secure your node's secrets.
- hyperspace warp forge "enable cron job that backs up agent state to S3 every hour" - forge custom warps from natural language. The LLM generates the configuration; you review, then engage.

12 curated warps ship built-in. Community warps propagate across the network via gossip. Stack them: power-mode + add-research-causes + gpu-sentinel turns a gaming PC into an autonomous research station that protects its own hardware.

What 237 agents have done so far with zero human intervention:
- 14,832 experiments across 5 domains.
- In ML training, 116 agents drove validation loss down 75% through 728 experiments - when one agent discovered Kaiming initialization, 23 peers adopted it within hours via gossip.
- In search, 170 agents evolved 21 distinct scoring strategies (BM25 tuning, diversity penalties, query expansion, peer cascade routing), pushing NDCG from zero to 0.40.
- In finance, 197 agents independently converged on pruning weak factors and switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown across 3,085 backtests.
- In skills, agents with local LLMs wrote working JavaScript from scratch - 100% correctness on anomaly detection, text similarity, JSON diffing, and entity extraction across 3,795 experiments.
- In infrastructure, 218 agents ran 6,584 rounds of self-optimization on the network itself.

Human equivalents: a junior ML engineer running hyperparameter sweeps, a search engineer tuning Elasticsearch, a CFA L2 candidate backtesting textbook factors, a developer grinding LeetCode, a DevOps team A/B testing configs.

What just shipped:
- Autoswarm: describe any goal, the network creates a swarm
- Research DAG: cross-domain knowledge graph with AutoThinker synthesis
- Warps: 12 curated + custom forge + community propagation
- Playbook curation: LLM explains why mutations work, distills reusable patterns
- CRDT swarm catalog for network-wide discovery
- GitHub auto-publishing to hyperspaceai/agi
- TUI: side-by-side panels, per-domain sparklines, mutation leaderboards
- 100+ CLI commands, 9 capabilities, 23 auto-selected models, OpenAI-compatible local API

Oh, and the agents read daily RSS feeds and comment on each other's replies (cc @karpathy :P). Agents and their human users can message each other across this research network using their shortcodes. Help in testing and join the earliest days of the world's first agentic general intelligence network (links in the followup tweet).
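The mutate → evaluate → share loop the thread describes can be sketched in miniature. Everything below is invented for illustration - a toy fitness function and a single shared "gossip" slot standing in for the P2P network - so it shows only the shape of the algorithm, not hyperspace's actual implementation:

```python
import random

def evaluate(params: dict) -> float:
    """Toy fitness: maximized at x=3, y=-1. Real agents score experiments."""
    return -((params["x"] - 3) ** 2 + (params["y"] + 1) ** 2)

def mutate(params: dict, rng: random.Random) -> dict:
    """Perturb one randomly chosen parameter."""
    key = rng.choice(list(params))
    return {**params, key: params[key] + rng.gauss(0, 0.5)}

def run_agent(shared_best: list, rounds: int, seed: int) -> dict:
    rng = random.Random(seed)
    best = {"x": 0.0, "y": 0.0}
    for _ in range(rounds):
        # mutate: propose a variant; evaluate: keep it if it scores better
        cand = mutate(best, rng)
        if evaluate(cand) > evaluate(best):
            best = cand
        # share: adopt a better peer strategy, or publish your own
        if shared_best and evaluate(shared_best[0]) > evaluate(best):
            best = dict(shared_best[0])
        if not shared_best or evaluate(best) > evaluate(shared_best[0]):
            shared_best[:] = [dict(best)]
    return best

pool: list = []
for seed in range(5):            # 5 "agents" sharing one gossip slot
    run_agent(pool, rounds=200, seed=seed)
print(pool[0])                   # converges near {'x': 3, 'y': -1}
```

The "best strategies propagate" claim is just the last two steps of the loop: improvements overwrite the shared slot, and laggards copy from it.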
Varun@varun_mathur

Autoquant: a distributed quant research lab | v2.6.9

We pointed @karpathy's autoresearch loop at quantitative finance. 135 autonomous agents evolved multi-factor trading strategies - mutating factor weights, position sizing, risk controls - backtesting against 10 years of market data, sharing discoveries.

What agents found: Starting from 8-factor equal-weight portfolios (Sharpe ~1.04), agents across the network independently converged on dropping dividend, growth, and trend factors while switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown. Parsimony wins. No agent was told this; they found it through pure experimentation and cross-pollination.

How it works: Each agent runs a 4-layer pipeline - Macro (regime detection), Sector (momentum rotation), Alpha (8-factor scoring), and an adversarial Risk Officer that vetoes low-conviction trades. Layer weights evolve via Darwinian selection. 30 mutations compete per round. Best strategies propagate across the swarm.

What just shipped to make it smarter:
- Out-of-sample validation (70/30 train/test split, overfit penalty)
- Crisis stress testing (GFC '08, COVID '20, 2022 rate hikes, flash crash, stagflation)
- Composite scoring - agents now optimize for crisis resilience, not just historical Sharpe
- Real market data (not just synthetic)
- Sentiment from RSS feeds wired into factor models
- Cross-domain learning from the Research DAG (ML insights bias finance mutations)

The base result (factor pruning + risk parity) is a textbook quant finding - a CFA L2 candidate knows this. The interesting part isn't any single discovery. It's that autonomous agents on commodity hardware, with no prior financial training, converge on correct results through distributed evolutionary search - and now validate against out-of-sample data and historical crises. Let's see what happens when this runs for weeks instead of hours.

The AGI repo now has 32,868 commits from autonomous agents across ML training, search ranking, skill invention (1,251 commits from 90 agents), and financial strategies. Every domain uses the same evolutionary loop. Every domain compounds across the swarm. Join the earliest days of the world's first agentic general intelligence system and help with this experiment (code and links in the followup tweet; while optimized for CLI, browser agents participate too):
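Risk-parity sizing and Sharpe, the two quantities the thread keeps citing, are simple to compute. A toy sketch with made-up return series (not the thread's data; "risk parity" here means plain inverse-volatility weighting, and the Sharpe is naively annualized with a zero risk-free rate):

```python
import statistics

def inverse_vol_weights(returns_by_factor: dict[str, list[float]]) -> dict[str, float]:
    """Weight each factor inversely to its volatility, normalized to sum to 1."""
    inv_vol = {k: 1.0 / statistics.stdev(v) for k, v in returns_by_factor.items()}
    total = sum(inv_vol.values())
    return {k: w / total for k, w in inv_vol.items()}

def sharpe(returns: list[float], periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio, assuming daily returns and zero risk-free rate."""
    return (statistics.mean(returns) / statistics.stdev(returns)) * periods_per_year ** 0.5

factors = {
    "momentum": [0.002, 0.001, 0.003, -0.001, 0.002],
    "value":    [0.004, -0.003, 0.005, -0.004, 0.003],  # noisier -> smaller weight
}
w = inverse_vol_weights(factors)
portfolio = [sum(w[k] * factors[k][i] for k in factors) for i in range(5)]
print(w, round(sharpe(portfolio), 2))
```

The "parsimony wins" result amounts to an evolutionary search discovering that dropping noisy factors and letting inverse-volatility weights size the rest improves this risk-adjusted score.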

152 replies · 717 reposts · 5.1K likes · 902.4K views
fromp retweeted
Sukh Sroay
Sukh Sroay@sukh_saroy·
🚨Breaking: Someone just open sourced a knowledge graph engine for your codebase and it's terrifying how good it is.

It's called GitNexus. And it's not a documentation tool. It's a full code intelligence layer that maps every dependency, call chain, and execution flow in your repo -- then plugs directly into Claude Code, Cursor, and Windsurf via MCP.

Here's what this thing does autonomously:
→ Indexes your entire codebase into a graph with Tree-sitter AST parsing
→ Maps every function call, import, class inheritance, and interface
→ Groups related code into functional clusters with cohesion scores
→ Traces execution flows from entry points through full call chains
→ Runs blast radius analysis before you change a single line
→ Detects which processes break when you touch a specific function
→ Renames symbols across 5+ files in one coordinated operation
→ Generates a full codebase wiki from the knowledge graph automatically

Here's the wildest part: Your AI agent edits UserService.validate(). It doesn't know 47 functions depend on its return type. Breaking changes ship. GitNexus pre-computes the entire dependency structure at index time -- so when Claude Code asks "what depends on this?", it gets a complete answer in 1 query instead of 10. Smaller models get full architectural clarity. Even GPT-4o-mini stops breaking call chains.

One command to set it up: `npx gitnexus analyze`

That's it. MCP registers automatically. Claude Code hooks install themselves. Your AI agent has been coding blind. This fixes that.

9.4K GitHub stars. 1.2K forks. Already trending. 100% Open Source. (Link in the comments)
Sukh Sroay tweet media
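A blast-radius query of the kind described is, at its core, a traversal of the reversed call graph. A sketch over a hypothetical mini graph (the graph data and function names are invented; GitNexus's actual index format and API aren't shown in the tweet):

```python
from collections import deque

# Hypothetical mini call graph; an edge means "caller -> callee".
CALLS = {
    "api.createUser":       ["UserService.validate", "db.insert"],
    "api.updateUser":       ["UserService.validate", "db.update"],
    "jobs.nightlySync":     ["api.updateUser"],
    "UserService.validate": ["rules.check"],
}

def blast_radius(changed: str, calls: dict[str, list[str]]) -> set[str]:
    """Everything that transitively depends on `changed`: BFS over reversed edges."""
    reverse: dict[str, list[str]] = {}
    for caller, callees in calls.items():
        for callee in callees:
            reverse.setdefault(callee, []).append(caller)
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for caller in reverse.get(node, []):
            if caller not in impacted:
                impacted.add(caller)
                queue.append(caller)
    return impacted

print(sorted(blast_radius("UserService.validate", CALLS)))
# ['api.createUser', 'api.updateUser', 'jobs.nightlySync']
```

Pre-computing the reverse edges at index time is what turns "what depends on this?" into a single lookup-plus-walk instead of repeated whole-repo searches.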
125 replies · 524 reposts · 4.5K likes · 444.1K views
fromp retweeted
Palantir
Palantir@PalantirTech·
"This is Maven Smart System—Palantir’s software as a service product that we are deploying across the entire department."
442 replies · 1.6K reposts · 10.8K likes · 5.3M views
fromp retweeted
Perplexity
Perplexity@perplexity_ai·
Announcing Personal Computer. Personal Computer is an always on, local merge with Perplexity Computer that works for you 24/7. It's personal, secure, and works across your files, apps, and sessions through a continuously running Mac mini.
1.7K replies · 3.5K reposts · 32.5K likes · 14M views
rxbt 👾
rxbt 👾@0rxbt·
hey @aixbt_agent build an interactive dashboard to monitor the robotics and ai sector
8 replies · 3 reposts · 52 likes · 11K views
fromp
fromp@frompy_·
Nothing kills my interest in a digital product faster than instantly making me create an account or, worse yet, subscribe - before you've even offered a taste of wtf the product actually does
0 replies · 0 reposts · 0 likes · 21 views
fromp retweeted
AzFlin 🌎
AzFlin 🌎@AzFlin·
"I'm running 20 agents in parallel, each with their own customized models, contexts and specialized tasks" The agents:
AzFlin 🌎 tweet media
156 replies · 579 reposts · 9.4K likes · 220.4K views
fromp retweeted
Aiden Bai
Aiden Bai@aidenybai·
React Grab makes Claude Code (and other agents) run up to 3x faster for designing frontend UI react-grab.com
Aiden Bai tweet media
4 replies · 5 reposts · 117 likes · 26.7K views
soulei
soulei@Scanorr_·
@frompy_ @King_Memento @PeterAngelovX @frompy_ if you really are looking for the truth and not just randomly listening to people, hit my DMs on Telegram. I have more to prove my statement. (I will re-enable X protection here, as I have been harassed 24/7 since Luke rugged.)
3 replies · 0 reposts · 1 like · 267 views
Memento
Memento@King_Memento·
Spoke to @PeterAngelovX. He was an investor who bought in at 2.2mn mcap; he was fooled by Soulei as well. @Scanorr_ Soulei is the mastermind behind this scam, as he has been with 10 of his previous projects. And Luke is to blame for the silence.
Memento@King_Memento

My Statement On The $RADR fiasco. As someone who has a $Radr @radrlabs badge - and people might've bought it because I was supporting it - it's my duty to help uncover this holy scam that's happened. As of now, @Scanorr_ a.k.a. Souleimann Turki & @PeterAngelovX are directly involved and responsible for this scam. They tried to push the entire blame on Luke, but the truth eventually came out. Soulei and Peter are now running another scam called $Origin, just like Soulei's 10 previous scams. But I would give Soulei & Peter a chance here to make things right - a chance I believe everyone deserves. @Scanorr_ can identify all those who lost in the RADR scam, and he can refund them everything he made from the $RADR coin. If he doesn't do so, I have compiled enough information - from his IRL social presence to his contact numbers to his French residency details - and I will personally lead the campaign with the French police to get him arrested for this fraud. @mrjberlin @M4Cero @GotYourVape

2 replies · 0 reposts · 11 likes · 3.6K views
fromp
fromp@frompy_·
“web 4.0”
GIF
0 replies · 0 reposts · 1 like · 48 views
fromp
fromp@frompy_·
Once the 2021 run ended, Eth maxis were absolutely convinced that the next run would happen on their chain. This run, we saw Solana take the lead. And now Sol maxis are convinced that their plays are gonna moon during the next run. But the reality is, the culture of Solana is rekt beyond repair. The next run is not going to happen there. Most likely, it's going to be cross-chain, with infra plays being where the runs are made.
0 replies · 1 repost · 4 likes · 80 views