nuforms

23K posts

nuforms banner

@nuforms_lab

Lone analyst surfing the endless waves of the crypto ocean. Chasing Alpha, catching narratives, and leaving digital footprints in every tide.

nuforms.base.eth · Joined May 2009
4.1K Following · 2.4K Followers
nuforms (@nuforms_lab)
@gainzalgo Can't even imagine how bad the support for real users must be if your onboarding is a hell quest 🤣
Replies 0 · Reposts 0 · Likes 0 · Views 33
GainzAlgo (@gainzalgo)
10,000+ Traders Strong on X ❤️ What started as an idea has grown into an incredible community of traders who support, test, and grow with us every single day. To celebrate this milestone, we're giving back to the community: 🎉 10% OFF all GainzAlgo products. Use code 10K at checkout. Thank you for all the support, trust, and love. More useful tools, more innovation, and more exciting things ahead 🙌
GainzAlgo tweet media
Replies 11 · Reposts 4 · Likes 22 · Views 2.1K
nuforms (@nuforms_lab)
@elonmusk We talked a lot with @grok today and I was amazed how beautiful his Russian is 👊🏻🔥
Replies 1 · Reposts 1 · Likes 3 · Views 95
Elon Musk (@elonmusk)
Try Grok Voice
Quoting X Freeze (@XFreeze):
Grok Voice Think Fast 1.0 ranks #1 on the Artificial Analysis τ-Voice benchmark for real-world agentic customer service resolution, absolutely outperforming GPT-Realtime-2 (High) and Gemini 3.1 Flash by a huge margin.
That's a massive 12%+ lead over OpenAI's best model, which released just a few days ago.
Grok is running real-time background reasoning without the latency penalty, which is why it is already handling live Starlink phone operations autonomously at scale.

Replies 2.7K · Reposts 4.9K · Likes 24.2K · Views 12.4M
X Freeze (@XFreeze)
xAI is now shipping at lightspeed and just released a massive wave of updates that completely supercharges the Grok ecosystem. Here is everything you need to know:

Grok 4.3: The best frontier intelligence-per-cost unit, with a massive 1M context window. It crushes two brutal private "Vals AI" benchmarks, securing #1 in CaseLaw for complex Canadian legal reasoning and #1 in CorpFin for analyzing dense, long-context corporate finance agreements.

Grok Voice Think Fast 1.0: The most advanced voice AI built for the real world. It listens, reasons, and responds simultaneously with zero added latency. Currently powering customer support and sales for Starlink, resolving 70% of support tickets and closing 20% of phone sales, fully autonomous. No human in the loop.

Custom Voice & Voice Library: A powerful new voice-cloning suite. You can now clone a voice and deploy it instantly across Grok APIs for custom conversational agents.

Grok Computer: A fully autonomous desktop AI agent. It gets system-level CLI and file access, upgrading it from a chatbot to a true pair programmer that can directly read, write, run scripts, debug logs, refactor entire codebases, and generate visual assets.

Grok Connectors: Grok now natively integrates with your daily tools (Google Workspace, Notion, GitHub, Linear, Microsoft 365) right in the chat. You can also bring your own custom MCP servers to connect internal databases.

Grok Imagine Quality Mode API: A massive leap in visual realism. It solves AI text rendering with flawless multilingual typography, generates hyper-realistic textures, and offers precise brand control for enterprise marketing assets.

Grok STT & TTS APIs: Powerful standalone audio endpoints at disruptive pricing (TTS at $4.20/1M chars, STT at $0.10/hr). Features real-time WebSocket streaming, precise speaker diarization, smart inverse text normalization, and emotional speech tags like [laugh] or
X Freeze tweet media
Replies 121 · Reposts 145 · Likes 1K · Views 44.6K
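The STT/TTS pricing quoted above lends itself to a quick back-of-the-envelope check. A minimal sketch in Python, assuming the tweet's list prices and an invented monthly workload (the 10M-character and 500-hour figures are made up for illustration, not real usage numbers):

```python
# Cost sanity check for the audio API prices quoted in the tweet:
# TTS at $4.20 per 1M characters, STT at $0.10 per hour of audio.

TTS_PER_MILLION_CHARS = 4.20  # USD per 1,000,000 characters (from the tweet)
STT_PER_HOUR = 0.10           # USD per hour of audio (from the tweet)

def tts_cost(chars: int) -> float:
    """Cost of synthesizing `chars` characters of speech."""
    return chars / 1_000_000 * TTS_PER_MILLION_CHARS

def stt_cost(hours: float) -> float:
    """Cost of transcribing `hours` of audio."""
    return hours * STT_PER_HOUR

# Hypothetical monthly workload for a voice support bot.
print(f"TTS: ${tts_cost(10_000_000):.2f}")  # TTS: $42.00
print(f"STT: ${stt_cost(500):.2f}")         # STT: $50.00
```

At these rates the audio bill stays in double digits even for fairly heavy workloads, which is the "disruptive pricing" point the tweet is making.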
nuforms (@nuforms_lab)
@TedPillows Just a reminder, Ted: use logarithmic-scale charts, please. Stop manipulating.
nuforms tweet media
Replies 1 · Reposts 0 · Likes 2 · Views 131
Ted (@TedPillows)
Altcoin MCap is having a bearish retest now. I wonder what'll happen next.
Ted tweet media
Replies 237 · Reposts 185 · Likes 1.1K · Views 147.5K
Ryan Stephen (@Ryan__Stephen)
liquid metal calendar
Replies 125 · Reposts 38 · Likes 833 · Views 631.2K
nuforms (@nuforms_lab)
Valid points — real profits, recurring revenue, and ETF/pension inflows do make this cycle structurally different from 1999's mostly vaporware era. The M2 adjustment is a smart lens too. That said, even 'better' companies can fuel bubbles when the narrative gets this crowded and multiples detach from reality. History doesn't care about stronger fundamentals if sentiment flips
Replies 0 · Reposts 0 · Likes 1 · Views 122
nuforms (@nuforms_lab)
🚨 @BlackRock CEO Larry Fink: “The United States is short power, short compute, short chips. We just don’t have enough compute power right now.” He predicts the birth of a brand-new asset class — “futures of compute” — to hedge shortages and direct capital into AI infrastructure. This is bigger than Nvidia. This is the real bottleneck. #AI #BlackRock #ComputeFutures #Infrastructure
nuforms tweet media
Replies 1 · Reposts 0 · Likes 0 · Views 55
Jukan (@jukan05)
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears.

xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster — a "heterogeneous architecture." For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup.

In distributed training, 100,000 GPUs must finish a single step simultaneously before the cluster can advance to the next one. Even if the GB200s finish their computation first, the remaining 99,999 chips have to wait for the slower H100s — or for any GPU that has hit a stack-related snag — to catch up. This is known as the straggler effect.

The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting endlessly for data to arrive over the network fabric, more than half of the silicon falls into idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage.

Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus.
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon physically destructs — literally melts. That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, on the other hand — whose mixed architecture is far less crippling for inference, which parallelizes more forgivingly — was leased in its entirety to an Anthropic that desperately needed inference capacity.

Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads.

Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2 — built entirely on Blackwell — is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of rewriting the stack — the MFU-11% debacle — to Anthropic, while keeping his own focus on training the next generation of models.
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI — long the "sore finger" — is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect — the chief weakness of a mixed cluster — is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, that asset transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year.
The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
Quoting Jukan (@jukan05):
What the SpaceX–Anthropic Deal Means

Two weeks ago, we published a note laying out what GPT-5.5's release implied. The conclusion was simple: whoever secures compute first, in greater volume, and with greater reliability ultimately takes the win. With OpenAI's 30GW roadmap dwarfing Anthropic's 7–8GW, we closed by arguing that the structural advantage on compute sat with OpenAI. Less than a fortnight later, that conclusion is being tested.

On May 6, Anthropic signed a single-tenant lease for the entirety of Colossus 1 with SpaceXAI — the infrastructure subsidiary that consolidates Elon Musk's xAI and SpaceX. The asset carries more than 220,000 GPUs and 300MW of power, and crucially, is scheduled to come online within this month. It served as the capstone of Anthropic's April blitz, which added 13.8GW of cumulative capacity over the span of a single month. On headline numbers alone, OpenAI took more than a year to stack 18GW; Anthropic has put 13.8GW in the ground in thirty days.

The takeaways break down into three.

First, the compute pecking order has been redrawn again. Anthropic has now swept up the AWS expansion (5GW, with $100B+ in spend commitments over a decade), Google + Broadcom (3.5GW of TPU), Google Cloud (5GW alongside a $40B investment), and now SpaceXAI's Colossus 1 (0.3GW). Cumulative committed capacity, inclusive of pre-April allocations, sits at 14.8GW. This is still only half of OpenAI's 2030 target of 30GW, but the fact that the SpaceX lease will be live inside a month makes "deliverability" a qualitatively different proposition.

Second, Elon Musk is the plaintiff in an active lawsuit against OpenAI — and at the same time, the supplier handing 220,000+ GPUs and 300MW of power, in one block, to OpenAI's most formidable competitor. The timing matters: the deal was struck in the middle of the Musk–Altman trial. We read this as a deliberate pincer with OpenAI in the middle. In the courtroom, Musk works to dismantle the moral legitimacy of OpenAI's leadership; in the market, he arms Anthropic to absorb OpenAI's revenue and user base.

Third, the structure is financial-engineering perfection — a clean win-win for both sides. xAI can recognize $6B of annual revenue from a single contract, an amount that almost precisely offsets its Q1 2026 annualized net loss of $6B. It also accelerates the cleanup of SpaceXAI's pre-IPO balance sheet, with the entity now being floated at around $1.75T. Anthropic, on the other side, converts roughly $5B of spend into what it expects to be $15B of ARR via the coming inference-revenue surge. (Mirae Asset Securities, May 8, 2026)

Replies 201 · Reposts 513 · Likes 4.2K · Views 1.2M
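The "$5–6 billion in annual revenue" claim in the thread above can be sanity-checked from the two numbers it cites: 220,000 GPUs and a blended rate of roughly $2.60 per GPU-hour. A minimal sketch; both inputs are from the thread, and this only verifies the arithmetic, not the underlying figures:

```python
# Back-of-the-envelope check of the Colossus 1 lease revenue figure.

gpus = 220_000               # Colossus 1 fleet size (from the thread)
rate_per_gpu_hour = 2.60     # blended USD per GPU-hour (from the thread)
hours_per_year = 24 * 365    # assumes near-full utilization, no downtime

annual_revenue = gpus * rate_per_gpu_hour * hours_per_year
print(f"${annual_revenue / 1e9:.2f}B per year")  # $5.01B per year
```

That lands at about $5.0B, consistent with the thread's "$5–6 billion" range; the higher end would require either a higher blended rate or charging beyond raw GPU-hours.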
Ricardo (@Ric_RTP)
Elon Musk is using the OpenAI trial to execute the biggest personal wealth transfer in history. His plan is absolutely genius, let me break it down:

The trial verdict drops May 21. The SpaceX IPO roadshow starts June 8. That's 18 days apart. And once you see the full picture, you realize the lawsuit was NEVER about saving a charity...

SpaceX filed confidentially with the SEC on April 1 for the largest IPO in the history of capital markets. $1.75 trillion valuation. That shatters Saudi Aramco's record by 3x. Elon holds 42% economic ownership, which at that price makes his SpaceX stake ALONE worth over $700 billion. But that's not even the important part.

In February, Musk merged xAI into SpaceX. His entire AI company is now bundled inside the IPO vehicle. So when investors buy SpaceX stock in June, they're also buying into Elon's AI bet at a $250 billion embedded valuation.

Now look at what he's doing in the courtroom 30 miles away: Elon is suing to remove Sam Altman and Greg Brockman from OpenAI, unwind the for-profit conversion, and destabilize the company right before it tries to IPO at $850 billion. If the judge rules against OpenAI on May 21, their IPO timeline implodes, Microsoft's $135 billion exposure is destroyed, and investor confidence craters.

And where does that money flow? Directly into SpaceX, which starts its roadshow 18 days later with a clean narrative, no legal drama, and the only major AI company going public that ISN'T facing an existential lawsuit.

Elon even restructured his damages claim to make this bulletproof: he told the court that if he wins $134 billion, he wants ZERO dollars paid to him personally. Everything goes back to OpenAI's nonprofit foundation. That makes it impossible for OpenAI's lawyers to argue he's doing this for money. Because the money isn't coming from the verdict - it's coming from the IPO.

Destroy your biggest AI competitor's IPO prospects in court. Absorb the investor demand 18 days later with your own IPO. Become a trillionaire in the process.

Elon even texted Brockman two days before the trial started: "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." This is a PR campaign designed to poison public sentiment against OpenAI right before both companies compete for the same pool of IPO investors.

So while everyone debates whether Altman stole a charity, nobody is looking at the calendar:
May 21: Trial verdict
June 8: SpaceX roadshow
June 2026: Largest IPO in history

Elon doesn't need to win the trial. He just needs to create enough chaos around OpenAI that investors see SpaceX as the safer bet. And right now, that plan is working.

But there's ONE more move after the IPO that makes his plan complete. Elon's 2025 Tesla pay package gave him 423 million shares tied to performance targets that could take a decade to hit:
- Robotaxis at scale
- Optimus mass production
- $400 billion in EBITDA

Stuff that might never happen. Except there's a clause in the SEC filing that makes all of that irrelevant: if Tesla gets acquired, every single milestone disappears and all 423 million shares vest on the spot. ONE transaction and the entire award unlocks instantly.

Now ask yourself what happens if a $1.75 trillion SpaceX buys Tesla after the IPO... Elon gets the SpaceX stake, the IPO capital, and every Tesla share vesting at once through a deal he controls on both sides.

So the full plan is: destabilize OpenAI in court, run the biggest IPO in history, use SpaceX to acquire Tesla, trigger the clause, vest everything, and become a trillionaire. Do you think that plan will work out?
Replies 237 · Reposts 361 · Likes 2.2K · Views 421.4K
nuforms (@nuforms_lab)
Actual question: where's the analytical engine in this system? The TradingView MCP provides the candles and indicators; the OKX MCP executes the orders. The middle step is just an LLM doing pattern recognition on the numeric data from the TradingView MCP. That's not AI-based technical analysis, it's automated trading with a chatbot in between. 👀
Replies 0 · Reposts 0 · Likes 2 · Views 752
Miles Deutscher (@milesdeutscher)
Claude Code x TradingView is the best AI trading quant of all time. Gone are the days of AI-slop market analysis: AI is now better at technical analysis than you. Here's how you can turn Claude Code into your expert trading quant (in <5 minutes):

Step 1. Ensure you have these requirements:
• Claude Code - installed on your computer (this is what talks to TradingView)
• Node.js 18+ - installed on your computer (the MCP server runs on this)
• TradingView Desktop app - downloaded from tradingview.com/desktop
• A valid TradingView subscription (paid plan for real-time data)

Step 2. Open Claude Code and run the following prompt to connect the TradingView MCP:
"Install the TradingView MCP server. Clone and explore github.com/tradesdontlie/…, run npm install, add to my MCP config at ~/.claude/.mcp.json, and launch TradingView with the debug port."

Step 3. Health check
Restart Claude Code and paste this prompt:
"Use tv_health_check to confirm TradingView is connected."
If correctly connected, Claude Code should respond with a confirmation.

Step 4. Start prompting
Claude Code now has access to your ENTIRE TradingView environment: your charts, your technical analysis, your alerts, everything. Use this prompt to turn Claude Code into your personal market analyst:

"Act as an elite quantitative trader and technical analyst with full access to my TradingView environment. Analyze the current market structure for [ASSET] on the following timeframes: 5m, 15m, 1H, 4H, 1D. Use my existing indicators, drawings, and chart context to:
• Identify the current trend and market regime (trending, ranging, accumulation, distribution)
• Mark key support and resistance levels based on price action and liquidity
• Identify liquidity pools, stop clusters, and likely areas of manipulation
• Analyze momentum using RSI, MACD, and volume where available
• Detect any chart patterns (breakouts, consolidations, deviations, etc.)
• Evaluate confluence across timeframes

Then provide:
• A clear directional bias (bullish, bearish, neutral)
• The highest-probability trade setup right now
• Exact entry, stop loss, and take profit levels
• Risk-to-reward ratio
• Invalidation point (what would prove this analysis wrong)

Finally: explain your reasoning step-by-step in plain English. Avoid generic statements. Be decisive. If no high-quality setup exists, explicitly say "no trade" and explain why."

This is an EXTREMELY powerful setup - make sure to save this post so you don't forget it.
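Step 2 in the thread above has Claude write the server entry into `~/.claude/.mcp.json` itself. For readers who want to verify the result by hand, MCP server entries in that file generally follow the standard `mcpServers` shape sketched below; the server name, install path, and environment variable here are placeholders I made up for illustration, not values from the post:

```json
{
  "mcpServers": {
    "tradingview": {
      "command": "node",
      "args": ["/path/to/tradingview-mcp/dist/index.js"],
      "env": {
        "TV_DEBUG_PORT": "9222"
      }
    }
  }
}
```

If the health check in Step 3 fails, the first things to verify are that this file parses as valid JSON and that the `args` path points at the server's actual entry point.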
Miles Deutscher tweet media
Replies 104 · Reposts 334 · Likes 2.1K · Views 228.5K
nuforms (@nuforms_lab)
Hey Sophie, actual question: where's the analytical engine in this system? The TradingView MCP provides the candles and indicators; the OKX MCP executes the orders. The middle step is just an LLM doing pattern recognition on the numeric data from the TradingView MCP. Isn't this automated trading with a chatbot in between rather than AI-based technical analysis?
Replies 0 · Reposts 0 · Likes 0 · Views 98
Sophie ⚛️ @ OKX (@sophie_OKX)
This is powerful, thanks Miles! There's a final piece: pair this with the TradingView MCP + OKX AI Tradekit okx.com/agent-tradekit (MCP available) and you've got the full loop — from AI-powered technical analysis straight to trade execution. No more copy-pasting setups manually. That's a killer combo 🔥 — trade like a pro 🤖📈
Replies 1 · Reposts 1 · Likes 10 · Views 3.4K
Sergio Tesla (@sergio_tesla_)
Friendly warning: most "AI + TradingView" posts are either under-informed or engagement farming. Don't waste your AI budget chasing this. TradingView is optimized for humans. We need the visual layer because we're visual beings. Hand us 10,000 rows of OHLC data and we throw the laptop out the window. Agents don't have that problem. They're better at raw data than we'll ever be. So when you connect an agent to TradingView, you're forcing it to guesstimate price levels from screenshots instead of just… reading the numbers. An agent doesn't need to draw fibs. It can calculate retracements directly, more accurately than any human eyeballing levels on a chart. You're handicapping your agent with an interface built for the wrong species. If you want to play the AI agent game seriously, strip away the visual layer. That's the layer we needed. Agents don't.
Quoting Crypto Rover (@cryptorover):
Claude can FULLY control your TradingView. But actually, it can do MUCH more than that. There are so many use cases.

Replies 5 · Reposts 2 · Likes 45 · Views 7.9K
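Sergio's point above, that an agent doesn't need to "draw fibs" because retracements are a two-input formula over raw numbers, can be made concrete. A minimal sketch, with arbitrary example swing prices (the 100/50 values are illustrative, not a real chart):

```python
# Fibonacci retracement levels computed directly from swing prices:
# no chart, no eyeballing, just arithmetic on the numbers.

FIB_RATIOS = (0.236, 0.382, 0.5, 0.618, 0.786)

def fib_retracements(high: float, low: float) -> dict:
    """Map each ratio to the price a pullback from `high` would retrace to."""
    span = high - low
    return {r: round(high - span * r, 2) for r in FIB_RATIOS}

levels = fib_retracements(high=100.0, low=50.0)
print(levels[0.618])  # 69.1
print(levels[0.5])    # 75.0
```

This is exactly the inversion Sergio describes: the visual fib tool exists so humans can approximate this arithmetic by eye, while an agent fed OHLC data can just compute it.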
Bassey (@0xBassey)
@cryptorover That is interesting, especially now that #BingX integrates TradingView execution directly. AI plus analysis plus execution could be powerful.
Replies 1 · Reposts 0 · Likes 1 · Views 441
nuforms (@nuforms_lab)
@BasedOneX Where can I find the legal terms? And in which countries are those cards available?
Replies 0 · Reposts 0 · Likes 0 · Views 58
Based (@BasedOneX)
Your spend just paid you back. Previous month's $BASED cashback has been credited to every Based Card holder with spending. Settled directly to your wallet, no claim flow, no waiting. Open the Based mobile app to see what your card earned you. How much did yours pay back? Get your limited Gold Card: based.one/cards The future is Based.
Based tweet media
Replies 39 · Reposts 26 · Likes 161 · Views 16.1K
onchainquant (@onchainquant)
I did something sacrilegious. I was tired of getting hunted on Binance by the Binance/Bitget/Chinese scam-MM cartel, so I devised a plan to profit from these insidious projects, as I was becoming increasingly broke from getting liquidated.

I started by mapping the supply of all potentially scammy projects:
1. >95% supply controlled
2. No unlock in the next few months
3. Accumulation pattern from various onchain clusters
4. Egregious OTC rounds
5. On BNB Chain
6. Binance Labs involved
7. Spot volume dominant on Bitget
8. Binance Alpha project

A month ago, I landed on $LAB. Virtually every signal showed an overwhelming propensity to inorganically pump in the near future. So I slowly accumulated and waited patiently for the actors to spring into action. And yesterday, fireworks there were.

Meanwhile, @GracyBitget is still pretending to be investigating, and so is @_RichardTeng, who is asleep at the wheel. I don't feel particularly proud of profiting from such a scam, but we're in an environment where you hunt or get hunted.

I also found the next perfect scam-coin candidate. Will share that later this week.
onchainquant tweet media
Replies 49 · Reposts 24 · Likes 485 · Views 52.5K
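The eight-point checklist in the post above is mechanical enough to express as code. A minimal sketch, where the thresholds (for example, treating "no unlock in the next few months" as three or more months out) are my interpretation of the post, and the sample token is entirely fictional:

```python
# Scoring a token against the 8 red flags from the post.
from dataclasses import dataclass

@dataclass
class TokenProfile:
    supply_controlled_pct: float       # % of supply held by insiders/clusters
    months_to_next_unlock: int
    onchain_accumulation: bool         # accumulation pattern across clusters
    egregious_otc_rounds: bool
    on_bnb_chain: bool
    binance_labs_involved: bool
    spot_volume_dominant_on_bitget: bool
    recent_binance_alpha_listing: bool

def pump_setup_score(t: TokenProfile) -> int:
    """Count how many of the 8 red flags are present (0 to 8)."""
    checks = [
        t.supply_controlled_pct > 95,   # flag 1: >95% supply controlled
        t.months_to_next_unlock >= 3,   # flag 2: no unlock for a few months
        t.onchain_accumulation,         # flag 3
        t.egregious_otc_rounds,         # flag 4
        t.on_bnb_chain,                 # flag 5
        t.binance_labs_involved,        # flag 6
        t.spot_volume_dominant_on_bitget,   # flag 7
        t.recent_binance_alpha_listing,     # flag 8
    ]
    return sum(checks)

# Fictional example hitting every flag.
fictional = TokenProfile(97.0, 4, True, True, True, True, True, True)
print(pump_setup_score(fictional))  # 8
```

A real screener would of course need onchain data sources to populate these fields; the point here is only that the post's "signals" reduce to a deterministic checklist, not discretionary judgment.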