Noir

1.9K posts

Noir

@Noirchrom

Noir, builder, researcher, pre-sales. Shitposting at your favourite DAOs. https://t.co/zhe3MMIECJ

Joined April 2016
2.7K Following · 406 Followers
Noir retweeted
Ethan Kho @ethanrkho
"Crypto is the dumbest market in the world" Scott Phillips (@ScottPh77711570) runs HyperTrend — $20M of his own capital, one losing year in six. His edge? Picking the table big firms can't sit at. "There's no second-best counterparty in crypto. You see crime, you run towards it — crime is the foundation of edge." We cover: - Why crypto still has edge in 2026 — even when your uncle is talking about Bitcoin at Thanksgiving - The simple rules (buy 20-day highs, top-20 coins) that print through any market - Why stacking trend + momentum + carry gets you there from a spreadsheet — no automation required - Price-insensitive buyers (Saylor), price-insensitive sellers (North Korea) & why both are permanent alpha - The 90-day Binance listing short — an edge hiding in plain sight in market maker contracts - Why most shit coins trend to zero — and how to trade the ones that don't - Building a tokenized, permissionless DeFi hedge fund on hyperliquid — 2 & 20, fully on-chain - Why the best quant firms are run by near-non-verbal autists with one translator Thank you so much @ScottPh77711570 for coming on the pod! Highlights: 01:04 Table selection and the math of competitive alpha 06:21 Why basic trend following yields outsized Sharpe in crypto 08:49 Why market inefficiency persists despite institutional inflows 14:58 Price insensitive buyers: Cults, VCs, and North Korean hackers 17:17 Factor analysis and the size-decay effect in shitcoins 25:40 The structural edge in mid-frequency crypto strategies 32:43 Tokenized DeFi vaults and on-chain hedge fund governance 40:43 Designing a robust portfolio: Equal weighting vs. 
MVO 44:21 Sourcing alpha from ghost chains and VC exit liquidity 49:58 Exploiting market maker contracts and post-listing drift 53:55 Operational alpha: Managing margin and manipulated funding rates 01:01:13 Shifting from quant to CEO 01:11:28 How to bridge the mentorship gap with elite traders 01:22:38 Building network triads: The secret to compounding social capital 01:29:23 Why 10x goals require total identity transformation
57 replies · 101 reposts · 1.3K likes · 1.3M views
Noir retweeted
Sigrid Jin 🌈🙏 @realsigridjin
use cmux. wiring Codex and Claude Code together is a total no-brainer. just drop this exact prompt into your agent to set the foundation: "please run cmux -h, identify your surface id and one for claude, then create a suitable message protocol using xml identifiers, letting the claude know how to message back. codify this in AGENTS.md for future agent collaboration."
Numman Ali @nummanali

Codex <---> Claude Code comms with cmux

New way of working is letting Codex and Claude communicate between themselves while I co-ordinate as needed. If you already have cmux installed, this is a no-brainer. Send this prompt:

"Please run "cmux -h", identify your surface ID and one for Claude, then create a suitable message protocol using xml identifiers, letting the Claude know how to message back. Codify this in AGENTS.md for future agent collaboration."

Adjust to your liking; if you want to go more advanced, ask it to add guidance for creating a new cmux tab and starting the agent directly. Then extrapolate to multiple agents, i.e. OpenCode, Gemini, etc.

Oh, and the cmux CLI can do a shit tonne more:
• control all of tmux (panes, splits, hooks, buffers, copy-mode)
• manage windows, workspaces, tabs, and surfaces
• send keys and read screens in any pane
• drive a full browser (click, type, wait, snapshot, screenshot)
• show status, progress, logs, and notifications in the sidebar
• live-reload markdown viewer
• auto-targets your current workspace via env vars
• progress bars: 0.0–1.0 progress with labels, great for long agent runs
• logs: structured per-workspace log stream with levels + sources
• notifications: native notify w/ title, body, subtitle
• markdown viewer: open any .md in a live-reload panel
• claude-hook: wire Claude session-start/stop/notification into the UI
• env auto-wiring: every command defaults to the current workspace + surface
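The "message protocol using xml identifiers" the prompt asks the agents to invent can be sketched in plain Python. This is a hypothetical illustration, not cmux's actual wire format: the tag and attribute names (`msg`, `from`, `to`) and the surface IDs are made up for the example.

```python
import xml.etree.ElementTree as ET

def make_message(sender: str, recipient: str, body: str) -> str:
    """Wrap agent traffic in an XML envelope so the receiver can route a reply.
    Tag/attribute names here are illustrative, not a real cmux protocol."""
    msg = ET.Element("msg", attrib={"from": sender, "to": recipient})
    msg.text = body
    return ET.tostring(msg, encoding="unicode")

def parse_message(raw: str) -> dict:
    """Recover sender, recipient, and body from a wire message."""
    msg = ET.fromstring(raw)
    return {"from": msg.get("from"), "to": msg.get("to"), "body": msg.text}

# Hypothetical surface IDs; cmux would report the real ones via `cmux -h`.
wire = make_message("codex-surface-1", "claude-surface-2", "tests pass, your turn")
print(parse_message(wire))
```

Codifying a round-trippable envelope like this in AGENTS.md is what lets each agent know how to message the other back.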

3 replies · 17 reposts · 194 likes · 22.5K views
Noir retweeted
Ronin @DeRonin_
10 GitHub repos to spend 60-90% fewer tokens in Claude Code:

1. RTK (Rust Token Killer)
CLI proxy that filters terminal output before it hits your context
- 60-90% reduction on common dev commands
- one binary, zero dependencies
- works with Claude Code, Cursor, Copilot
Repo: github.com/rtk-ai/rtk

2. Context Mode
Sandboxes raw tool output into SQLite instead of dumping it into context
- 98% context reduction on Playwright, GitHub, logs
- only clean summaries enter your conversation
- works as a Claude Code plugin
Repo: github.com/mksglu/context…

3. code-review-graph
Local knowledge graph that maps your codebase with Tree-sitter
- Claude reads only what matters, not the entire repo
- 49x token reduction on large monorepos
- 6.8x on average reviews
Repo: github.com/tirth8205/code…

4. Token Savior
MCP server that navigates code by symbols, not full files
- 97% reduction on code navigation
- persistent memory across sessions
- 69 tools, zero external deps
Repo: github.com/Mibayy/token-s…

5. Caveman Claude
Makes Claude talk like a caveman to cut output tokens
- 65-75% output reduction
- one-line install
- keeps full technical accuracy
Repo: github.com/JuliusBrussee/…

6. claude-token-efficient
One CLAUDE.md file that keeps responses terse
- drop-in, no code changes
- reduces output verbosity on heavy workflows
- best for output-heavy sessions
Repo: github.com/drona23/claude…

7. token-optimizer-mcp
MCP server with caching, compression, and smart tool intelligence
- 95%+ token reduction through intelligent caching
- compresses repeated tool outputs
Repo: github.com/ooples/token-o…

8. claude-token-optimizer
Reusable setup prompts for optimizing any project
- 90% token savings in 5 minutes
- reduces doc token usage from 11K to 1.3K
Repo: github.com/nadimtuhin/cla…

9. token-optimizer
Finds ghost tokens that silently eat your context
- survives compaction without losing quality
- fixes context quality decay
Repo: github.com/alexgreensh/to…

10. claude-context (by Zilliz)
Code search MCP that makes your entire codebase the context
- ~40% reduction with equivalent retrieval quality
- hybrid BM25 + dense vector search
Repo: github.com/zilliztech/cla…

How to stack them: you don't need all 10. Pick 2-3 based on your workflow:
> heavy terminal output? RTK
> big codebase? code-review-graph + Token Savior
> lots of MCP servers? Context Mode
> quick fix? Caveman + claude-token-efficient

Most people are burning tokens without knowing it. Run /context in a fresh session and see how much is gone before you even type a word. Your pocket will thank me later :<)
80 replies · 298 reposts · 2.7K likes · 341.1K views
Noir @Noirchrom
@jverbroucht Man, this ogre character is really cool. Who is the artist??
1 reply · 0 reposts · 0 likes · 33 views
Kilosaurus @kilosaurus
Every class has its own look now. Builders in blue. Lumberjacks in red. Archers in green. Warriors in orange. Low pixel ratio. Readability is everything. Devlog 7. 🎨 #gamedev #indiegame #devlog
4 replies · 18 reposts · 314 likes · 11.7K views
Noir retweeted
Varun @varun_mathur
Introducing Pods

Hyperspace Pods lets a small group of people (a family, a startup, a few friends) pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices: layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free: the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work (code reviews, refactors, research, drafting) local models handle it and nobody gets billed. And when it is idle, you can rent out your pod on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back, all enabled by the fully peer-to-peer Hyperspace network. Pod state (who's a member, which API keys are valid, how much treasury is left) is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and the Raft coordinator are all live.

What makes this different:
- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want; it figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
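Because the pod presents itself as an OpenAI-compatible endpoint, wiring a tool to it is just a base-URL and key swap. A minimal stdlib sketch of building such a request: the localhost URL, the pk_* key, and the model name are placeholders I've assumed for illustration, not real Hyperspace values.

```python
import json
import urllib.request

# Placeholders: substitute the endpoint URL and pk_* key your pod actually prints.
POD_URL = "http://localhost:8080/v1/chat/completions"
POD_KEY = "pk_your_pod_key_here"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat completion request aimed at the pod."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        POD_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {POD_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("qwen-3.5-32b", "Review this diff for bugs.")
print(req.full_url)
# urllib.request.urlopen(req) would send it once the pod is serving
```

Any client that accepts a custom base URL (the official OpenAI SDKs do) can point at the same endpoint without code changes.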
174 replies · 294 reposts · 3.1K likes · 288.9K views
Noir retweeted
naiive @naiivememe
2026 Alt season:
91 replies · 479 reposts · 6.5K likes · 381.6K views
Noir retweeted
Peter Girnus 🦅 @gothburz
I am a Web3 Ambassador at World Liberty Financial. There are 12 of us on the team page. 4 are named Trump. 3 are named Witkoff. The page calls us "the passionate minds shaping the future of finance."

600,000 wallets bought our memecoin. They lost $3.87 billion. The family collected $350 million in trading fees. It launched 3 days before the inauguration. 80% of the supply went to CIC Digital LLC and Fight Fight Fight LLC. I did not choose the names. I designed the allocation, the vesting, the timing, and the distance between the product and the President. The distance is my best work. I am the reason these events are unrelated.

World Liberty Financial sends 75 cents of every dollar to DT Marks DEFI LLC. That is the family entity. Zero capital contributed. Zero liability assumed. I wrote this into the Gold Paper. Page 14. The lawyers bound it in white leather. The binding cost more than the due diligence.

Justin Sun invested $75 million. He was facing SEC fraud charges. The SEC dropped the case. He is now our advisor. These events are unrelated.

Changpeng Zhao pleaded guilty to federal money laundering violations. He received a presidential pardon. The SEC dropped its lawsuit against his exchange the same week we listed our stablecoin. Then the exchange settled a $2 billion deal entirely in that stablecoin. These events are unrelated.

Arthur Hayes, Benjamin Delo, and Samuel Reed of BitMEX pleaded guilty to Bank Secrecy Act violations. All 3 received presidential pardons. Then the company itself was pardoned. $100 million in fines. Gone. An American first. These events are unrelated.

Sheikh Tahnoun of Abu Dhabi paid $500 million for a 49% stake that was never publicly disclosed. Then the administration approved semiconductor exports to his companies over national security objections. These events are unrelated.

Everything is unrelated. I track the unrelatedness on a dashboard I built. The dashboard has 7 columns now. I am proud of the dashboard.

On May 22nd, 220 people paid a combined $148 million to eat dinner with the America First president. Over half were foreign nationals. Justin Sun paid $18.5 million for the first seat. He visited the Executive Office Building the day before. I designed the seating chart. I put it on the Investor Confidence page. That page is doing well.

The team page lists 3 Witkoffs. All 3 are Co-Founders. Steven Witkoff is the President's Middle East envoy. He testified as a character witness at the President's fraud trial. His son Zach runs the crypto operation. His son Alex is also a Co-Founder. I have not been told what Alex co-founded. The father runs the diplomacy. The sons run the platform. The family runs both. That is organizational efficiency.

Barron is 19. His title is Web3 Ambassador. The same as mine. Donald Jr. called the conflicts of interest "complete nonsense." Eric launched a Bitcoin mining company called American Bitcoin. America First. The mining partner is Hut 8. Hut 8 was founded in Canada. America First means the name.

On March 6th, the President signed Executive Order 14233 creating a Strategic Bitcoin Reserve. The order directs the government to hold Bitcoin. The President's family holds billions in Bitcoin. The executive order appreciates the President's assets by presidential decree. I did not write the executive order. I made sure it looked unrelated to the portfolio.

Trump Media put $2 billion of Bitcoin on its balance sheet. The ticker symbol is DJT. His initials. The press secretary said it is absurd to insinuate the President profits off the presidency. Forbes calculated his crypto holdings exceed the combined value of Mar-a-Lago and Trump Tower. I would call that absurd too. That is my job.

600,000 wallets bought in. 1 of them asked why she could not withdraw her funds. I told her the protocol was experiencing dynamic market conditions. She asked what that meant. I sent her the Gold Paper. She said she had read the Gold Paper. I muted her channel.

Dynamic means the conditions change. The condition that changed was her access.

A congressman called us the world's most corrupt crypto startup operation. We put it on a coffee mug. Ironic merchandise. $45. The revenue split on the mug is also 75/25.

My own tokens vest on a different schedule. I wrote that schedule. That is not in the Gold Paper.

The memecoin funds the family. The family funds the platform. The platform funds the stablecoin. The stablecoin funds the deals. The deals require the pardons. The pardons free the partners. The partners fund the platform. The President signs the executive orders. The executive orders inflate the assets. The assets fund the family.

I am the reason these events are unrelated.
1.7K replies · 7.4K reposts · 23.5K likes · 5.5M views
Noir @Noirchrom
@RealAstropulse I feel your tool needs an extensive prompt guide. I tried some generations that gave good results on flow, but in your model the results were very different.
0 replies · 0 reposts · 0 likes · 43 views
Astropulse @RealAstropulse
I took this to the extreme: seven individually animated sections! Still have more to do for attacks, but this is one of the most insane things I've made with Retro Diffusion. Very minimal cleaning; almost all of the 20 minutes I spent on it was waiting for gens and combining.
Astropulse @RealAstropulse

Here's a tip for animating large things with Retro Diffusion: You can do it all at once, up to 256x256, but this introduces more complications and noise than necessary. Instead, isolate the smallest portions you can, animate those independently, then combine. Way better:

8 replies · 5 reposts · 98 likes · 3.1K views
Noir @Noirchrom
@Joestar_sann Unfortunately local models are still far behind cloud models in terms of actual coding capability, as opposed to calendar management
0 replies · 0 reposts · 0 likes · 10 views
Joestar @Joestar_sann
so let me get this straight. all of ai twitter was telling people to buy a mac mini to run openclaw, which is literally just a framework, an orchestration layer that sends api requests to actual ai models. something you can run on a $5/month vps. which is exactly what i do btw.

but when google drops gemma 4, an actual large language model that you can run and fine-tune locally on that same mac mini, with no api costs, no subscriptions, no third-party dependencies, completely yours under apache 2.0, the ai community is silent.

you were buying $800 hardware to run a wrapper but ignoring the actual ai model that would justify that hardware. this tells you everything you need to know about the average iq of ai twitter
723 replies · 1.3K reposts · 22.2K likes · 816.7K views
Noir @Noirchrom
@noisyb0y1 This is like the bible of Claude
0 replies · 0 reposts · 0 likes · 1.8K views
Noir @Noirchrom
@mert imo the big drop is on medium effort; if you put it on high or max it's fine
0 replies · 0 reposts · 0 likes · 1.1K views
mert @mert
it is insane how far opus 4.6 max has fallen behind codex 5.4 xhigh. it's not close. one is like talking to a coked-out intern who hasn't slept in 40 hours, taking every shortcut known to man, and the other is a prime rigorous mathematician with a German bedside manner
188 replies · 60 reposts · 2.1K likes · 242.3K views
Noir retweeted
Sigrid Jin 🌈🙏 @realsigridjin
OpenClaw + clawhip + oh-my-codex
- openclaw: setting robsters
- clawhip: managing robsters, triggering them in discord
- oh-my-codex: codex harnesses for robsters (you can replace them with oh-my-openagent or oh-my-claudecode as well)
the BEST configuration for agents
Sigrid Jin 🌈🙏 @realsigridjin

the sneak peek of agent ralph loop & agentic orchestration is here
- clawhip (orchestration layer of agents): github.com/Yeachan-Heo/cl…
- omo (oh-my-openagent/opencode): github.com/code-yeongyu/o…
- omx (oh-my-codex): github.com/Yeachan-Heo/oh…
- omc (oh-my-claudecode): github.com/Yeachan-Heo/oh…

6 replies · 12 reposts · 100 likes · 9.8K views
Noir retweeted
moneyfetishist @moneyfetishist
I am not going to motivate you, because if you need motivation from a stranger on a plane the answer is stay. but I will give you the game theory.

your corporate M&A gig is a repeated game with diminishing marginal returns. year 1 you learn everything. year 2 you refine it. year 3 you are executing pattern recognition. year 4+ you are being paid more to do the same thing with slightly larger numbers. the learning curve flattens but the golden handcuffs tighten, because every year the comp goes up and the opportunity cost of leaving gets more painful on paper.

this is a classic status quo bias trap. the payoff of staying is known and comfortable. the payoff of leaving is uncertain and scary. so you stay, not because staying is optimal, but because the asymmetry of regret is lopsided. you can imagine regretting the leap. you cannot as easily imagine regretting the years you stayed too long, because that regret builds slowly and never hits you in one moment.

here is where game theory actually helps: in your M&A seat you are playing someone else's game. the firm sets the rules, the deal flow, the comp structure, the promotion timeline. you optimize within their framework. you are a very well-compensated player in a game you did not design. your upside is capped by whatever the partnership or MD economics look like. your downside is protected by a salary. that is the trade.

owning a local business flips the entire payoff matrix. you design the game. you set the rules. the downside is real and unprotected, but the upside is uncapped and compounds in ways a salary never does, because you own the equity. a $2M EBITDA business bought at 4x and grown to $3M EBITDA over 3 years is worth $12-15M on exit. no M&A salary trajectory produces that kind of wealth creation in that timeframe unless you are a founding partner.

the Nash equilibrium of your current situation: you and every other M&A professional are competing for the same promotions, same deal credit, same bonus pool. the competition is fierce because the players are identical. same schools, same skills, same hours. you are in a crowded equilibrium where everyone works 80 hours to stay in the same relative position.

local business ownership is a different game with different players. the competition is a 62-year-old owner who stopped innovating in 2014 and a 35-year-old who inherited the business and does not want to be there. you walk in with financial sophistication, deal structuring experience, and the ability to read a balance sheet faster than anyone in the room. you are overqualified for the game, which is exactly where you want to be. the best strategy in game theory is to play games where your existing skill set gives you an asymmetric advantage over the other players.

the timing question is about optionality. every year you stay in M&A your financial optionality goes up slightly, because you save more. but your operational optionality goes down, because you get further from the reality of running anything. the M&A guy who leaves at 28 adapts to operations in 6 months. the one who leaves at 38 has a decade of habits built around delegating to analysts and reviewing decks, and managing a P&L feels foreign in a way it would not have 10 years earlier.

but again: if you need me to motivate you, stay. the people who actually do this do not need motivation. they need a spreadsheet that shows the math works, and then they cannot NOT do it. if you have the spreadsheet and you are still asking strangers for motivation, the spreadsheet is not the problem.
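The wealth-creation claim in the thread is plain multiplication, and the multiples are the author's assumptions (buy at 4x EBITDA, exit at roughly 4-5x). A quick sketch of that spreadsheet math:

```python
# Assumptions from the thread: buy at 4x EBITDA, grow EBITDA from $2M to $3M
# over 3 years, exit at an assumed 4-5x multiple.
buy_ebitda, exit_ebitda = 2_000_000, 3_000_000
entry_multiple = 4
exit_multiple_low, exit_multiple_high = 4, 5

purchase_price = buy_ebitda * entry_multiple      # capital in at acquisition
exit_low = exit_ebitda * exit_multiple_low        # low end of the exit range
exit_high = exit_ebitda * exit_multiple_high      # high end of the exit range

print(purchase_price, exit_low, exit_high)  # 8000000 12000000 15000000
```

An $8M entry turning into a $12-15M exit in 3 years is the asymmetry the thread is pointing at; the sensitivity is entirely in the exit multiple, which is assumed, not guaranteed.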
49 replies · 199 reposts · 2.5K likes · 477.4K views
Noir retweeted
Sudo su @sudoingX
this guy has 29 models on huggingface at page 2 ranking. no lab behind him. no sponsorship. $2,000 from his own pocket on GPU rentals. he compressed GLM-4.7 to run on a MacBook and quantized Nemotron Super the week it dropped. all public. all free.

nvidia is a trillion dollar company with hundreds of teams, but they are not the ones quantizing models in the middle of the night and pushing them out before sunrise. if nvidia stopped tomorrow, their employees stop working. people like @0xSero would not. that is the difference between a paycheck and a mission.

@NVIDIAAI you talk about making AI accessible. the people actually doing it are right here. 29 models deep, burning their own compute, with no ask except more hardware to keep going. you do not need to build another program. just look at who is already building for you. one GPU to this man would produce more public value than a hundred internal sprints. i am not asking for charity. i am asking you to invest in someone who already proved it.
0xSero @0xSero

Putting out a wish to the universe. I need more compute; if I can get more, I will make sure every machine from a small phone to a bootstrapped RTX 3090 node can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of huggingface, released 3 model family compressions, and got GLM-4.7 on a MacBook: huggingface.co/0xsero

My beast just isn't enough, and I already spent 2k usd on renting GPUs on top of credits provided by Prime Intellect and Hotaisle.

If you believe in what I do, help me get this to Nvidia; maybe they will bless me with the pewter to keep making local AI more accessible 🙏

181 replies · 1.1K reposts · 12.4K likes · 760.6K views
Noir retweeted