Nomad Analyst
@NomadAnalyst
1.1K posts
Roderick McKinley, CFA, FRM https://t.co/ReVv3K7te0
Joined August 2022
765 Following · 347 Followers
Nomad Analyst retweeted
ilemi @andrewhong5297
Every token should have its minting system transparently monitored. Here are the minters for USDC on Ethereum, for example: many contracts, multisigs, and EOAs are involved at nested levels. However, they each have a max mint amount assigned in the role mapping for extra protection.
[image]
5 replies · 4 retweets · 55 likes · 3.6K views
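The per-minter cap described above can be sketched as a minimal allowance map. This is a toy model of the pattern, not USDC's actual contract interface; all names here are illustrative:

```python
class MintController:
    """Toy model of per-minter allowances: each authorized minter gets a
    maximum mint amount in a role mapping, so a single compromised minter
    cannot mint unbounded supply."""

    def __init__(self):
        self.allowance = {}  # minter address -> remaining mint allowance

    def configure_minter(self, minter: str, max_amount: int) -> None:
        # an admin role assigns (or resets) a minter's cap
        self.allowance[minter] = max_amount

    def mint(self, minter: str, amount: int) -> int:
        remaining = self.allowance.get(minter, 0)
        if amount > remaining:
            raise PermissionError("mint exceeds configured allowance")
        self.allowance[minter] = remaining - amount
        return self.allowance[minter]


controller = MintController()
controller.configure_minter("0xabc", 1_000)
print(controller.mint("0xabc", 400))  # -> 600 remaining
```

Each successful mint burns down the minter's remaining allowance, so the blast radius of any one key is bounded by its configured cap.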
Nomad Analyst retweeted
vincent charles @0x_vcharles
Token Due Diligence takes hours. You're jumping between 5 tools, cross-referencing wallets, checking money flows, holder distributions. Manually. Still not sure. So I built a tool that does it in one command: 15 @nansen_ai CLI calls. Try it: vchrl.github.io/token-dd/ #NansenCLI
2 replies · 3 retweets · 8 likes · 329 views
Nomad Analyst retweeted
Dune | We Are Hiring!
Dune MCP is live 🔌 Plug Dune directly into @claudeai, @ChatGPTapp, @cursor_ai, and more. Search tables. Write queries. Build charts. Check Usage. All from a single prompt. 💻 Your AI just became a Dune power user.
108 replies · 117 retweets · 1.2K likes · 205.4K views
Nomad Analyst retweeted
Indexed Podcast @indexed_pod
🚨NEW EPISODE: The Problem with DeFi Integrations🚨

Today we're joined by Andrew Hong (@andrewhong5297), Co-Founder of Herd (@herd_eco), Advisor at Archetype (@archetypevc), and former Headmaster at Dune (@Dune).

In this episode, we discuss:
- Leaving Dune to start Herd
- Composability in the age of AI agents
- Why crypto due diligence doesn't scale
- Morpho vault complexity & operational risk
- AI agents for contract research
- Two-sided marketplace: protocols & institutions
- Hooks, adapters, and modular vault design
- Wallet policies & multisig risk management
- Institutional DeFi integrations (Coinbase, BlackRock)
- Agent-to-agent payments: hype vs reality
- Why crypto needs "boring" infra
- AI replacing crypto analysts?

And much more, enjoy!

Timestamps:
(00:00) Introduction
(01:08) Leaving Dune, starting Herd
(03:16) Agent research inbox vision
(05:14) Institutional crypto adoption challenges
(08:16) From aggregation to operations
(10:21) Protocols vs institutions marketplace
(14:11) Mapping contracts & transactions
(21:48) One-shot integrations with AI
(24:32) Vault adapters & hidden permissions
(30:49) Security vs operational risk
(34:30) Uniswap hooks & modular design
(41:07) Agent payments skepticism
(53:12) AI replacing analysts?
(57:50) Outro
1 reply · 6 retweets · 21 likes · 2.1K views
Nomad Analyst retweeted
polars data @DataPolars
We've released Python Polars 1.39. Some of the highlights:

• Streaming AsOf join: join_asof() is now supported in the streaming engine, enabling memory-efficient time-series joins.

• sink_iceberg() for writing to Iceberg tables: a new LazyFrame sink that writes directly to Apache Iceberg tables. Combined with the existing scan_iceberg(), Polars now supports full read/write workflows for Iceberg-based data lakehouses.

• Streaming cloud downloads: scan_csv(), scan_ndjson(), and scan_lines() can now stream data directly from cloud storage instead of downloading the full file first.

Link to the complete changelog: github.com/pola-rs/polars…
1 reply · 24 retweets · 181 likes · 8.6K views
Nomad Analyst retweeted
DuckDB @duckdb
We released DuckDB v1.5! This new release comes with a “friendly CLI” client, a new (opt-in) PEG parser, support for the VARIANT type and a built-in GEOMETRY type. It also ships a new network stack and a few lakehouse features. Finally, it can write to Azure and connect to databases through ODBC. For more details, read the announcement blog post: duckdb.org/2026/03/09/ann…
[image]
8 replies · 67 retweets · 383 likes · 24.3K views
Nomad Analyst retweeted
David Gelberg @davidgelberg
in case u londoners are shy of events for the following weeks unicornmafia.ai/e
[image]
27 replies · 29 retweets · 331 likes · 56.6K views
Nomad Analyst retweeted
Andrej Karpathy @karpathy
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement.

In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
[image]
1K replies · 3.6K retweets · 28.2K likes · 10.9M views
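The loop Karpathy describes, fixed-budget runs plus a keep-or-discard step, can be caricatured in a few lines. Everything below is an illustrative stand-in, not the actual autoresearch code: the real agent edits a training script and runs 5-minute GPU jobs, while here a seeded random function stands in for a training run so the skeleton stays runnable:

```python
import random

def train_run(params: dict) -> float:
    """Stand-in for one fixed-budget (5-minute) training run; returns a
    validation loss. Deterministic in params, like rerunning the same
    committed training script."""
    rng = random.Random(repr(sorted(params.items())))
    return rng.uniform(1.0, 2.0) / params["width_mult"]

def autoresearch_loop(n_runs: int = 12) -> dict:
    """Mutate the current best settings, evaluate on a fixed budget,
    and keep (commit) only improvements, mimicking the agent's loop
    on a git feature branch."""
    rng = random.Random(0)
    best = {"params": {"width_mult": 1.0}, "loss": float("inf")}
    for _ in range(n_runs):
        # the agent proposes a mutation of the best-known settings
        candidate = {
            "width_mult": best["params"]["width_mult"] * rng.choice([0.8, 1.25])
        }
        loss = train_run(candidate)
        if loss < best["loss"]:  # keep only improvements
            best = {"params": candidate, "loss": loss}
    return best

best = autoresearch_loop()
print(best)
```

The fixed per-run budget is what makes the comparison fair: whatever the candidate changes, every run spends the same wall-clock, so lower validation loss is an unambiguous win.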
Nomad Analyst retweeted
Lior Alexander @LiorOnAI
It's over. Karpathy just open-sourced an autonomous AI researcher that runs 100 experiments while you sleep.

You don't write the training code anymore. You write a prompt that tells an AI agent how to think about research. The agent edits the code, trains a small language model for exactly five minutes, checks the score, keeps or discards the result, and loops. All night. No human in the loop.

That fixed five-minute clock is the quiet genius. No matter what the agent changes (the network size, the learning rate, the entire architecture), every run gets compared on equal footing. This turns open-ended research into a game with a clear score:

- 12 experiments per hour, ~100 overnight
- Validation loss measures how well the model predicts unseen text
- Lower score wins, everything else is fair game

The agent touches one Python file containing the full training recipe. You never open it. Instead, you program a markdown file that shapes the agent's research strategy. Your job becomes programming the programmer, and this unlocks a strange new loop:

1. Agents run real experiments without supervision
2. Prompt quality becomes the bottleneck, not researcher hours
3. Results auto-optimize for your specific hardware
4. Anyone with one GPU can run a research lab overnight

The best AI labs won't just have the most compute. They'll have the best instructions for agents that never sleep, never forget a failed experiment, and never stop iterating.
Quoted tweet: Andrej Karpathy @karpathy (the autoresearch announcement above)
137 replies · 441 retweets · 4.3K likes · 875.6K views
Nomad Analyst @NomadAnalyst
@rohanpaul_ai Yup, convinced this is going to drive a trend towards more "fractional employment": more companies will have budget for some work, and fewer companies will need true full-time capabilities.
0 replies · 0 retweets · 0 likes · 57 views
Rohan Paul @rohanpaul_ai
Brilliant economic paper directly models the "Structural Jevons Paradox" happening right now in the AI industry. The cost of running an LLM is dropping, but total computing energy is exploding anyway.

It mathematically proves that as the unit cost of digital intelligence and coding drops, the aggregate demand for complex AI agents and the infrastructure to support them surges exponentially, creating a massive new downstream ecosystem that requires human management.

Reveals a massive paradox where dropping the price of AI usage does not save money, but instead encourages developers to build vastly more complex agents that eat up exponentially more computing power. Because of this relentless progress, small companies building simple applications on top of these models get completely crushed as the core AI naturally absorbs those exact same features over time.

They also discovered a brutal dynamic where a perfectly working LLM becomes economically worthless the moment a competitor releases a smarter version. Ultimately, the researchers prove that this combination of massive computing costs and the need for constant user data naturally pushes the entire AI industry toward an unavoidable monopoly.

arxiv.org/pdf/2601.12339v1 "The Economics of Digital Intelligence Capital"
[image]
Quoted tweet: Rohan Paul @rohanpaul_ai

Citadel Securities published this graph showing a strange phenomenon. Job postings for software engineers are actually seeing a massive spike.

Classic example of the Jevons paradox. When AI makes coding cheaper, companies actually may need a lot more software engineers, not fewer. When software is cheaper to build, companies naturally want to build a lot more of it. Businesses are now putting software into industries and tools where it was simply too expensive before.

Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/

32 replies · 82 retweets · 367 likes · 38.1K views
Nomad Analyst @NomadAnalyst
Will be checking this out...
Quoted tweet: Ahmad Awais @MrAhmadAwais

Introducing chartli 📊 CLI that turns plain numbers into terminal charts. ascii, spark, bars, columns, heatmap, unicode, braille, svg.

$ npx chartli

I wanted terminal charts with zero setup. No browser, no Python env, no matplotlib. Pipe numbers in, get a chart out. Again built using Command Code with my CLI taste.

$ npx chartli data.txt -t ascii -w 24 -h 8

8 chart types spanning a fun range of Unicode density:
- ascii (line charts with ○◇◆● markers)
- spark (▁▂▃▄▅▆▇█ sparklines, one row per series)
- bars (horizontal, ░▒▓█ shading per series)
- columns (vertical grouped bars)
- heatmap (2D grid, ░▒▓█ intensity mapping)
- unicode (grouped bars with ▁▂▃▄▅▆▇█ sub-cell resolution)
- braille (⠁⠂⠃ 2×4 dot matrix, highest density)
- svg (vector output, circles or polylines)

Input format is dead simple: rows of space-separated numbers. Multiple columns = multiple series.

0.0 0.1 0.1 0.1
0.2 0.4 0.2 0.4
0.3 0.2 0.4 0.2

Composes with pipes:

$ cat metrics.txt | chartli -t spark
S1 ▁▂▃▄▅▆
S2 ▁▄▂▇▅█
S3 ▁▂▄▃▆▅
S4 ▁▄▂▇▂▇

The braille renderer is my fav. Each braille character encodes a 2×4 dot grid, so a 16-wide chart gives you 32 pixels of horizontal resolution. Free anti-aliasing from Unicode.

The bars renderer uses 4 shading levels (░▒▓█) to visually separate series without color. Works on any terminal, any font.

Heatmap maps values to a 5-step intensity scale across a row×column grid, so you can spot patterns in tabular data at a glance.

SVG mode has 2 render paths: circles (scatter plot) and lines (polylines). Output is valid XML you can pipe straight to a file or into another tool.

Zero config by default, every dimension overridable (-w width, -h height, -m SVG mode). No config files. No themes. No dashboards.

$ npx chartli

Or global install it.

$ npm i -g chartli

# Skill for your agents
$ npx skills add ahmadawais/chartli

If you work in terminals and want quick data visualization without leaving your workflow, try it. ⌘ let's go!!

0 replies · 0 retweets · 2 likes · 42 views
Nomad Analyst @NomadAnalyst
@rohanpaul_ai Also, deployed agent workflows will be sticky, and agent orchestration platforms will be designed to have high switching costs.
0 replies · 0 retweets · 0 likes · 5 views
Nomad Analyst @NomadAnalyst
@rohanpaul_ai True, but I disagree with the framing. User/business data that makes the AI assistance feel personal and tailored is the moat worth fighting for.
1 reply · 0 retweets · 0 likes · 34 views
Rohan Paul @rohanpaul_ai
Larry Ellison on the AI moat: AI is commoditizing because models use the same public internet data. The true competitive edge isn't the model itself anymore, but access to exclusive, proprietary datasets. That is the only moat left.
231 replies · 340 retweets · 3.6K likes · 629.8K views
Nomad Analyst retweeted
Allium @AlliumLabs
Today the US struck Iran. Polymarket odds were less than 20% on Feb 27. We looked at onchain data to identify unusual traders, potentially insiders.

We looked at newly created wallets (last 7 and 30 days) and aggressive YES buyers in the last 48 hours. For example, 1 wallet purchased $2209 of YES at an average price of $0.11 late evening on Feb 27.

It's getting trickier to identify wallets that may be insiders as the market evolves. Do you have any filters that might be able to surface such wallets? Our data can surface them live.

Also thank you @bax1337 for pointing out an incorrect query we shared earlier.

app.allium.so/s/dashboard/qK…
[image]
4 replies · 2 retweets · 25 likes · 4.5K views