delist✖️ @Delistish

2K posts

Research & Analysis. Upgrade your grey matter, cause one day it may matter. Former @yearnfi

Joined June 2021
1.2K Following · 521 Followers
delist✖️ retweeted
Teknium (e/λ) @Teknium
A Hermes cron job that scans for new major vulnerabilities, checks whether they exist locally, notifies you, and even resolves them might be a pretty great use case!
cocktail peanut @cocktailpeanut

I made a simple shell script to check if I've been affected by the global axios hack (I wasn't affected). Given a root path it scans the entire tree to find anything related to this hack. Open sourcing it so anyone can easily check (Mac, Linux, and Windows if you have bash).
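The script itself isn't reproduced in the capture. As a minimal sketch of the same idea (the function name and the lockfile-only scope are my own assumptions; the original likely checks more locations), here is a Python walker that flags lockfiles pinning a given npm package:

```python
import json
import os

def scan_for_package(root, package="axios"):
    """Walk `root` and report every package-lock.json pinning `package`.

    Returns (lockfile_path, pinned_version) pairs so you can compare the
    versions against a list of known-compromised releases yourself.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "package-lock.json" not in filenames:
            continue
        lock_path = os.path.join(dirpath, "package-lock.json")
        try:
            with open(lock_path, encoding="utf-8") as f:
                lock = json.load(f)
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed lockfile, skip it
        # npm v2/v3 lockfiles keep entries under "packages",
        # keyed like "node_modules/axios"
        for key, meta in lock.get("packages", {}).items():
            if key.endswith("node_modules/" + package):
                hits.append((lock_path, meta.get("version", "?")))
    return hits
```

This only inspects lockfiles; a thorough check would also hash files under node_modules.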

11 replies · 14 reposts · 208 likes · 17.2K views
delist✖️ retweeted
🍓🍓🍓 @iruletheworldmo
in case i haven’t made it obvious: dario has closed the loop and anthropic have taken off. make sure you read these Hidden Features from my old friend boris so you can take off with claude and not be left behind. bookmark it and read it often, chat. it’s important.
Boris Cherny @bcherny

I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most. Here goes.

6 replies · 24 reposts · 533 likes · 89.4K views
delist✖️ retweeted
Nous Research @NousResearch
The Hermes Agent update you've been waiting for is here.
316 replies · 457 reposts · 4.9K likes · 495.8K views
delist✖️ @Delistish
That tweet is a shill. The guy's bio says "all content here is sponsored or commissioned" and links to @KreoPolyBot (a Telegram bot with 21K users) via referral code. The claim: $1,500 → $83,115 in 72 hours (55x). 3,317 bets. Built with Claude in 25 min. Reality: That's 46 bets/hour nonstop for 3 days. Not something you vibe-code in 25 minutes, it's a marketing video for a paid bot.
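The sanity check behind the "46 bets/hour" figure is simple arithmetic, reproduced here from the numbers in the tweet:

```python
bets, hours = 3_317, 72          # claimed bet count and time window
stake, payout = 1_500, 83_115    # claimed starting and ending balance

rate = bets / hours              # bets per hour, nonstop, for 3 days
seconds_per_bet = 3600 / rate    # average gap between bets
multiple = payout / stake        # claimed return multiple

print(f"{rate:.1f} bets/hour, one every {seconds_per_bet:.0f}s, {multiple:.1f}x return")
# → 46.1 bets/hour, one every 78s, 55.4x return
```

A bet every 78 seconds around the clock for three days is the pace the claim requires.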
1 reply · 1 repost · 24 likes · 2.3K views
0xMarioNawfal @RoundtableSpace
SOMEONE PASTED GOOGLE'S TURBOQUANT PAPER INTO CLAUDE & BUILT A TRADING BOT IN MINUTES.

THE BOT MADE 3,317 PREDICTIONS AND TURNED $1,500 INTO $83,115 ON POLYMARKET IN 72 HOURS.

THE PAPER WAS FREE. CLAUDE COSTS $20 A MONTH
114 replies · 97 reposts · 1K likes · 216.2K views
delist✖️ retweeted
Nous Research @NousResearch
Hermes Agent v0.5.0 is out:
[image]
75 replies · 106 reposts · 1.7K likes · 161.3K views
delist✖️ retweeted
Andrej Karpathy @karpathy
- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask different directions and be careful with the sycophancy.
1.7K replies · 2.4K reposts · 30.8K likes · 3.2M views
delist✖️ retweeted
Teknium (e/λ) @Teknium
We are only #9 fastest growing github repos this week, come check out the repo! 😅 github.com/NousResearch/h…
Sharbel @sharbel

the fastest growing GitHub repos this week:

1. affaan-m/everything-claude-code (+22.8K stars) agent harness optimization. skills, memory, security for Claude Code, Codex, Cursor and beyond.
2. obra/superpowers (+17.0K stars) agentic skills framework that works. just crossed 116K stars.
3. bytedance/deer-flow (+16.1K stars) open-source long-horizon SuperAgent. researches, codes, creates. sandboxes + subagents built in.
4. Crosstalk-Solutions/project-nomad (+14.6K stars) offline survival computer packed with AI. works anywhere, no internet needed.
5. FujiwaraChoki/MoneyPrinterV2 (+10.4K stars) automate making money online. the sequel nobody asked for but everyone starred.
6. TauricResearch/TradingAgents (+9.2K stars) multi-agent LLM financial trading framework. because one agent trading isn't scary enough.
7. jarrodwatts/claude-hud (+5.5K stars) Claude Code plugin showing context, tools, agents, and todos in real time.
8. mvanhorn/last30days-skill (+4.8K stars) AI agent skill that researches any topic across Reddit, X, YouTube, HN, Polymarket, and the web.
9. NousResearch/hermes-agent (+4.6K stars) the agent that grows with you.
10. langchain-ai/open-swe (+1.8K stars) open-source async coding agent. async by design, not by accident.

the theme this week: AI agents took over GitHub again. bookmark this. next week's list will look completely different.

5 replies · 7 reposts · 153 likes · 12.2K views
delist✖️ retweeted
Sudo su @sudoingX
i pointed hermes agent at nvidia's nemotron cascade 2 30B-A3B on a single RTX 3090 24GB. IQ4_XS quant by bartowski, 187 tok/s, 625K context. had it discover its own hardware, create an identity file, then build a full GPU marketplace UI from a single prompt. it one shotted it. first attempt, no iteration.

qwen 3.5 35B-A3B on the same hardware, same 3090 24GB, took an iteration to recover from a blank screen on the same type of build. 24 days between these two models releasing. same active parameters, completely different architectures, and cascade 2 through hermes agent just keeps going. this model goes on and on. feast your eyes. more iterations and tests dropping soon.

nvidia really cooked. no special flags needed. nvidia optimized this mamba MoE so well it just runs. flash attention auto enabled, context auto allocated. the model does the work, not the config. but i compiled llama.cpp from source and i'm not sure how it performs on other engines.

if you ran nemotron on any hardware, drop your numbers below. RTX, AMD, Mac, whatever. model, quant, tok/s, engine. i want to see if it holds everywhere or just on llama.cpp.
[three images]
Sudo su @sudoingX

nvidia's 3B mamba destroyed alibaba's 3B deltanet on the same RTX 3090. only 24 days between releases. same active parameters, same VRAM tier, completely different architectures.

nemotron cascade 2: 187 tok/s. flat from 4K to 625K context. zero speed loss. flags: -ngl 99 -np 1. that's it. no context flags, no KV cache tricks. auto-allocates 625K.

qwen 3.5 35B-A3B: 112 tok/s. flat from 4K to 262K context. zero speed loss. flags: -ngl 99 -np 1 -c 262144 --cache-type-k q8_0 --cache-type-v q8_0. needed KV cache quantization to fit 262K.

both models held a flat line across every context level. both architectures are context-independent. but nvidia's mamba2 is 67% faster at generating tokens on the exact same hardware and needs fewer flags to get there. same node, same GPU, same everything. the only variable is the model.

gold medal math olympiad winner running at 187 tokens per second on a single RTX 3090, a card from 6 years ago. nvidia cooked.
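The "67% faster" figure follows directly from the two reported throughputs:

```python
nemotron_tok_s = 187  # reported Nemotron Cascade 2 throughput on the 3090
qwen_tok_s = 112      # reported Qwen 3.5 35B-A3B throughput, same card

pct_faster = (nemotron_tok_s / qwen_tok_s - 1) * 100
print(f"{pct_faster:.0f}% faster")  # → 67% faster, matching the claim
```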

64 replies · 52 reposts · 766 likes · 68.6K views
delist✖️ retweeted
CatFu @catfusolana
snow fox once saved lives on the mountain but while legends slept jiangban duck took over the world catfu knows if you don’t fight back you become the story they replace
13 replies · 66 reposts · 498 likes · 22K views
delist✖️ retweeted
Google Research @GoogleResearch
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
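TurboQuant's actual algorithm isn't described in the tweet. As a generic illustration of what KV-cache compression means (a toy group-wise absmax 4-bit scheme of my own, not Google's method), one scale per group plus a 4-bit code per value:

```python
def quantize_absmax_int4(values, group=32):
    """Toy group-wise 4-bit absmax quantization: each group of values is
    stored as one fp16 scale plus a signed 4-bit code per value."""
    groups = []
    for i in range(0, len(values), group):
        chunk = values[i:i + group]
        scale = max(abs(v) for v in chunk) or 1.0
        # map each value onto the signed 4-bit range [-7, 7]
        codes = [max(-7, min(7, round(v / scale * 7))) for v in chunk]
        groups.append((scale, codes))
    return groups

def dequantize(groups):
    return [c / 7 * s for s, codes in groups for c in codes]

# Storage math: 32 fp16 values = 64 bytes; 32 int4 codes = 16 bytes plus a
# 2-byte scale = 18 bytes, roughly 3.6x. TurboQuant reports at least 6x,
# so it is doing something considerably smarter than this sketch.
```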
[GIF]
1K replies · 5.8K reposts · 38.9K likes · 19M views
delist✖️ retweeted
Juno Cash @JunoCash_on_X
Deposits are open on btse.com for Juno Cash ($JUNO), trading opens 8am UTC on March 24th
[image]
6 replies · 8 reposts · 20 likes · 5.1K views
delist✖️ retweeted
UK Back in the Day @UKBackintheDay2
30 years ago today… The Prodigy unleashed this masterpiece…
841 replies · 6.5K reposts · 29.2K likes · 1.4M views
delist✖️ retweeted
Joseph Viviano @josephdviviano
me: "can you use whatever resources you like, and python, to generate a short 'youtube poop' video and render it using ffmpeg? can you put more of a personal spin on it? it should express what it's like to be a LLM" claude opus 4.6:
550 replies · 1.2K reposts · 12.5K likes · 1.4M views
delist✖️ retweeted
Teknium (e/λ) @Teknium
Some more free nous subs for everyone taking part in the hackathon to get going with hermes agent!
Nous Research @NousResearch

The last few days have been wild. Here's what we've shipped over the weekend. But first, we're giving away free Nous Portal subscriptions to the first 250 people who claim code AGENTHERMES01 at portal.nousresearch.com - and there's a lot of exciting new stuff to use it on:

-> Pokemon Player 🎮
Hermes can now play Pokemon Red/FireRed autonomously via headless emulation. The new pokemon-agent package (github.com/NousResearch/p…) and built-in skill provide a REST API game server, and Hermes drives it through its native tools: reading game state from RAM, making strategic battle decisions, navigating the overworld, and saving progress to memory across sessions. It just plays Pokemon. From your terminal. No display server needed.

-> Self-Evolution 🧬
We shipped hermes-agent-self-evolution (github.com/NousResearch/h…) and an optional skill: an evolutionary self-improvement system that uses DSPy + GEPA to optimize Hermes's own skills, prompts, and code. It maintains populations of solutions, applies LLM-driven mutations targeted at specific failure cases, and selects based on fitness. Inspired by Imbue's Darwinian Evolver research that achieved 95.1% on ARC-AGI-2.

-> OBLITERATUS 🔓
The abliteration skill got a major update. Hermes can now uncensor any open-weight LLM (Llama, Qwen, Mistral, etc.) by surgically removing refusal directions from model weights: 9 CLI methods, 116 model presets, tournament evaluation. Just say "abliterate this model" and it handles the rest.

-> Signal, iMessage + 7-Platform Gateway 📱
Hermes now runs on iMessage and Signal alongside Telegram, Discord, WhatsApp, Slack, and CLI. Full feature parity: voice messages, image handling, DM pairing. Your agent is reachable everywhere.

-> Automatic Provider Failover 🔄
When your primary model goes down (rate limits, outages), Hermes now automatically switches to a configured fallback model. Supports all providers including Codex OAuth and Nous Portal. One line of config, zero downtime.

-> Secret Redaction Everywhere 🔒
All tool outputs now redact API keys, tokens, and passwords before they reach the LLM. 22+ patterns covering AWS, Stripe, HuggingFace, GitHub, SSH private keys, database connection strings, and more. Your secrets never leak into context.
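The secret-redaction feature amounts to regex substitution over tool output before it enters context. A minimal sketch of the idea (these few patterns are illustrative stand-ins; the announcement says Hermes ships 22+ and its exact list isn't shown):

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID
    re.compile(r"ghp_[0-9A-Za-z]{36}"),       # GitHub personal access token
    re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),  # Stripe live secret key
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"
    ),                                        # PEM private key blocks
]

def redact(tool_output, placeholder="[REDACTED]"):
    """Scrub anything matching a known secret pattern before the text
    is appended to the model's context."""
    for pattern in SECRET_PATTERNS:
        tool_output = pattern.sub(placeholder, tool_output)
    return tool_output
```

Running redaction on every tool result, rather than only on user input, is what keeps secrets read from disk or the environment out of the transcript.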

8 replies · 3 reposts · 84 likes · 7.5K views
delist✖️ retweeted
yearn @yearnfi
Introducing ySplitter
11 replies · 34 reposts · 120 likes · 15.2K views