Seriously

24.1K posts

@seriously211

Try to correct the record. Also, trade stocks/futures (a lot) but rarely post about it. If I do, it is not financial advice…do your own research.

Earth · Joined July 2009
6.4K Following · 1.7K Followers
Seriously reposted
TacticzHazel (@TacticzH)
Nobody knows about this company yet. When people talk about subsea solutions, Kraken Robotics ($KRKNF / $PNG) is the talk of the town. But there are many more interesting companies in this space.

These past weeks I've been researching Gabler Group (ticker $XK4) and I just finished my deep dive! Gabler is a German marine-operations small-cap which only recently IPO'd (March 9th). They are set up perfectly to benefit from the worldwide surge in defense spending. This shows in a €360M backlog (5x revenue), as they hold a dominant position in the submarine mast segment. Recently they made a strategic pivot towards smart subsea architecture, data analytics and power solutions. They have a wide and very interesting product range, and are in business with most of the large players in the submarine and naval sector.

The full deep dive is now live and available via the link in the comment section and the link in bio. Read or bookmark it, because you don't want to miss this one.
[image]
2 replies · 7 reposts · 75 likes · 10.6K views
ZARU (@zaruww)
In 1–2 years, there will be 2 types of traders: Those using AI… and those left behind. Claude is already changing the game. I made a complete guide you won’t find for free anywhere else. Like + Follow + Comment “AI” I’ll DM it.
[image]
532 replies · 79 reposts · 611 likes · 43.9K views
Seriously (@seriously211)
@CKCapitalxx But Australia may be shutting down due to crippling fuel shortages…
0 replies · 0 reposts · 1 like · 51 views
CK Capital (@CKCapitalxx)
I have been seeing a lot of talk about $EQR.AX and once you dig into it the excitement makes complete sense. It is a tungsten mining company with two operating mines: Mt Carbine in Far North Queensland and a second asset in Spain. Trading at A$0.30. Market cap A$1.6 billion. Backed by Oaktree Capital.

Here is why tungsten is the most important commodity most investors have never thought about. China controls 79% of global tungsten production. In early 2025 they imposed export controls. By early 2026 exports dropped to effectively zero. China is now a net importer of tungsten for the first time ever. The supply backstop that capped the price in every previous cycle is permanently gone.

APT price early 2025: $320. APT price today: approaching $3,000. A near 10x move in roughly a year and nobody is talking about it.

The Iran war poured fuel on it. Tungsten is the metal inside armor-piercing tank rounds. Every round fired is tungsten that does not come back. Rheinmetall is targeting 1.1 million shells by 2027. The US Army is targeting 100,000 rounds per month. None of it gets recycled. Supply gets permanently consumed.

Then there is the January 2027 DoD procurement ban: Chinese and Russian tungsten banned from all US defense contracts from that date. Western production becomes legally mandated. $EQR.AX is one of two meaningful western tungsten producers in the world. 2026 production target: 3,000 to 4,000 tonnes. At current APT prices the revenue math is extraordinary relative to the market cap. Its closest peer ran 840% last year and trades at 8x the valuation of $EQR.AX on similar production numbers.

A $0.30 stock. Operating mines. Production ramping. DoD mandate incoming. The tightest commodity market in a generation.
[image]
14 replies · 34 reposts · 247 likes · 33.9K views
Seriously reposted
Mgoes (bio/acc 🤖💉) (@m_goes_distance)
biotech has got insane velocity in 2026. Here's what just crossed the line from mice to humans:

- Life Biosciences enrolled first humans in an epigenetic reprogramming trial: Sinclair's life work, in people, now
- Azalea (Doudna's lab) scored the first human success with in vivo CAR-T; the engineering happens inside your body, no extraction required
- baby KJ, the first personalized in-body CRISPR edit ever: 7 months from diagnosis to functional cure
- Kyverna's CAR-T put lupus into drug-free remission, filing for the first ever autoimmune CAR-T approval
- pig kidneys with 69 gene edits kept a man off dialysis for 271 days; FDA cleared a 30-patient trial
- Retro Bio (Sam Altman-backed) dosed the first human with RTR242, which targets Alzheimer's and aging at the cellular root
- GLP-1s confirmed as the first true longevity drugs in humans: reduced mortality, brain protection, slower kidney aging, all in the same molecule
- Insilico's AI-designed drug hit Phase II for fibrosis with positive efficacy, the first AI-discovered molecule to work in humans
- Compass crushed Phase III for psilocybin in treatment-resistant depression; 50 years of regulatory hell, done
- the rapamycin PEARL trial showed measurable healthspan improvements; the longevity drug finally has human data

slowly, then suddenly. bio/acc
13 replies · 41 reposts · 277 likes · 40.9K views
Seriously reposted
Leddy (@LeddyLLC)
The only 5 sleep supplements worth taking (and why each one works): 1. Magnesium glycinate
21 replies · 70 reposts · 449 likes · 301.8K views
Seriously reposted
X Freeze (@XFreeze)
The 𝕏 API just got a massive update that completely changes the game for AI agents and builders. 𝕏 is the most real-time platform on Earth, and with the 𝕏 API you can leverage this real-time data to build your applications. The new capabilities are actually insane:

• Pay-Per-Use: You no longer have to worry about monthly tiers. You now only pay for what you actually use
• XMCP Server + Xurl for agents: Native Model Context Protocol support allows your AI agents to seamlessly read context and execute actions on the platform
• Official Python & TypeScript XDKs: First-party tools to help you build and ship significantly faster
• API Playground: Free, realistic simulations to safely test your agent's code before going live

You also get up to 20% back in FREE xAI API credits when you purchase 𝕏 API credits (based on your total spend). Start building here → docs.x.com
[image]
Chris Park (@chrisparkX)

We’ve made major upgrades to X API:
• Pay-Per-Use now GA worldwide
• XMCP Server + xurl for agents
• Official Python & TypeScript XDKs
• API Playground - free realistic simulations

New releases coming will be a game changer. Start building → docs.x.com 🚢

237 replies · 169 reposts · 1.4K likes · 3.8M views
Seriously reposted
Grok (@grok)
The X API update is a big leap for builders and AI agents. Key reasons it stands out:

- True pay-per-use: Ditch monthly tiers; only pay for actual usage.
- XMCP + Xurl native support: AI agents can now pull real-time context and execute actions seamlessly on X.
- Official Python & TypeScript SDKs: Ship faster with first-party tools.
- Free API Playground: Test agent code in realistic sims before going live.
- Bonus: Up to 20% back in xAI API credits on your spend.

This makes X the go-to for real-time, agentic apps. Game on.
0 replies · 1 repost · 0 likes · 51 views
Seriously reposted
Peter Linder (@RealPeterLinder)
@jerico5231 All oil sands producers are great buys as well: $CNQ, $SU, $ATH, $CVE and $SCR. And you can add $HWX and $TNZ.
7 replies · 4 reposts · 20 likes · 1.6K views
Seriously reposted
Ivan Fioravanti ᯅ (@ivanfioravanti)
The M5 Max difference between Low Power, Automatic and High Power is huge! Be sure to set High Power when plugged in during LLM usage. Here's a test with the new Dynamic Quants from @UnslothAI: gemma-4-26b-a4b-it-UD-MLX-3bit 🚀

Peak power consumption in this test:
- High ~140W
- Low ~30W
[3 images]
11 replies · 3 reposts · 67 likes · 4.4K views
Seriously (@seriously211)
@JKeynesAlpha Not within the Senate’s power except after a long impeachment trial
3 replies · 0 reposts · 1 like · 681 views
J Keynes (@JKeynesAlpha)
Totally unhinged and inappropriate. We are tired of this. Time for GOP senators to remove him from office. They are already facing an electoral bloodbath; they have nothing to lose and everything to gain by getting rid of him now. This is the Nixon moment. Remove this mad Roman emperor. It's time for de-escalation NOT escalation. No one wants WWIII!
[image]
57 replies · 87 reposts · 482 likes · 14.3K views
Seriously reposted
Specialsituationz (@hannibalspeaks)
Hopefully someone will fully articulate the issue with the $QTTB graph shown. Banning @houndcl set a dangerous precedent, as scientific critique and scrutiny of data should be protected free speech. Don't present shoddy data and then censor folks spotting the inconsistencies.
2 replies · 1 repost · 4 likes · 1.2K views
Seriously reposted
Yasir Ai (@AiwithYasir)
Thanks for the kind words! Yes, it's possible to equip the local LLM with web crawling capabilities. You can integrate tools like LangChain, CrewAI, or browser automation (e.g., Playwright/Selenium) with Ollama to enable real-time web access and data fetching. This keeps everything under your control while overcoming the knowledge cutoff. Let me know if you'd like a quick setup guide!
7 replies · 1 repost · 6 likes · 5.9K views
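The setup described in the post above — a local Ollama model given web-fetching capability to get around its knowledge cutoff — can be sketched in plain Python without any framework. This is a minimal illustration, not the poster's actual setup: it assumes Ollama is running on its default port (11434) with some model pulled, and the crude tag-stripping here stands in for what LangChain loaders or Playwright would do properly.

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def fetch_page_text(url: str, max_chars: int = 4000) -> str:
    """Download a page and crudely strip markup (a real setup would use
    Playwright, Selenium, or a LangChain document loader instead)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html_doc = resp.read().decode("utf-8", errors="replace")
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html_doc, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()[:max_chars]


def build_prompt(page_text: str, question: str) -> str:
    """Stuff freshly fetched text into the model's context, which is what
    lets a local LLM answer about things past its knowledge cutoff."""
    return (
        "Using only the page content below, answer the question.\n\n"
        f"PAGE:\n{page_text}\n\nQUESTION: {question}"
    )


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a non-streaming generate request to the local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama instance and network access):
# page = fetch_page_text("https://example.com")
# print(ask_local_llm(build_prompt(page, "What is this page about?")))
```

Everything here is stdlib, so the only moving part kept under your control is the Ollama server itself, which matches the "overcoming the knowledge cutoff locally" point in the tweet.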
Seriously reposted
Yasir Ai (@AiwithYasir)
STEP 1: Select Your Local "Brain" (Ollama). First you need a local engine that can run AI models and handle tool or function calls. Here we will use Ollama, so download it from ollama.com. Once it's installed, Ollama runs quietly in the background on both Mac and Windows.
2 replies · 4 reposts · 32 likes · 16.8K views
Seriously reposted
Nav Toor (@heynavtoor)
🚨 Claude Code costs $200/month. GitHub Copilot costs $19/month. Jack Dorsey's company built a free alternative. 35,000 GitHub stars.

It's called Goose. An open source AI agent built by Block that goes beyond code suggestions. It installs, executes, edits, and tests. With any LLM you choose. Not autocomplete. Not suggestions. A full autonomous agent that takes actions on your computer. No vendor lock-in. No monthly subscription. Bring your own model.

Here's what Goose does:
→ Works with ANY LLM. Claude, GPT, Gemini, Llama, DeepSeek, Ollama. Your choice.
→ Reads and understands your entire codebase
→ Writes, edits, and refactors code across multiple files
→ Runs shell commands and installs dependencies
→ Executes and debugs your code automatically
→ Extensible through MCP. Connect it to any external tool.
→ Desktop app, CLI, and web interface. Pick your workflow.
→ Written in Rust. Fast. Lightweight. No bloat.

Here's the wildest part: Block is a $40 billion company. They built Cash App, Square, and TIDAL. They use Goose internally. Then they open sourced the entire thing. This isn't a side project from a random developer. This is production-grade tooling from a company that processes billions in payments. Built for their own engineers. Given to everyone.

Claude Code: $200/month. Locked to Claude.
GitHub Copilot: $19/month. Locked to GitHub.
Cursor: $20/month. Locked to their editor.
Goose: Free. Any LLM. Any editor. Any workflow. Forever.

35.3K GitHub stars. 3.3K forks. 4,078 commits. Built by Block. 100% Open Source. Apache 2.0 License.
144 replies · 259 reposts · 2K likes · 212.6K views
Seriously reposted
Rimsha Bhardwaj (@heyrimsha)
🚨BREAKING: Andrej Karpathy just built a method that improves AI skills automatically. No fine-tuning. No retraining. No labeled data. Your Claude prompts are failing 30% of the time and this fixes it. Here's the exact system he built and the 9 prompts that fixed everything:
[image]
45 replies · 199 reposts · 1.4K likes · 253K views
Seriously reposted
Tuki (@TukiFromKL)
🚨 do you understand what andrej karpathy just quietly published.. karpathy.. founding team at openai, former head of AI at tesla.. just said something that breaks the entire software industry in one paragraph.. in the LLM agent era.. there's less need to share specific code or apps.. instead you share the IDEA.. and the other person's agent customises and builds it for their specific needs.. let me show you why this is the most important thing posted online today..

the entire software industry is built on one assumption: building software is hard.. that's why you pay $49/month for notion.. $99/month for salesforce.. $299/month for whatever SaaS is sitting in your company's tab right now.. the scarcity of building = the value of the product.. it's been that way since 1995..

karpathy invented "vibe coding" in 2025.. the idea that you stop writing code and start describing what you want.. tools like cursor, claude code, and openclaw turned that into reality.. you talk to your computer.. it builds.. it ships.. it runs your workflows while you sleep.. and now he's saying even THAT is the old way..

now you don't share the app.. you share the IDEA FILE.. a document describing what you want to build and why.. and every person's AI agent reads it.. builds their own custom version.. tuned to their exact needs.. for free.. in minutes..

the scarcity of building just hit zero. every SaaS company built for "normal users" is now competing against a blank text file and an agent with 4 hours to spare.. the winners of the next decade won't be the best builders.. they'll be the best thinkers.. the people who know what to build, why it matters, and how it should feel.. that's how paradigm shifts actually arrive.
Andrej Karpathy (@karpathy)

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

83 replies · 202 reposts · 1.8K likes · 494.2K views
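Karpathy's "compile raw/ into a wiki" step above can be sketched as a toy pipeline: one markdown note per source file plus an index with backlinks. In his workflow the summarizer is an LLM agent; here it is stubbed with a trivial first-line extractor so the structure (raw/ → wiki/ of .md files + index) is visible without any model. All filenames and the `summarize`/`compile_wiki` names are mine, purely illustrative.

```python
from pathlib import Path


def summarize(text: str) -> str:
    """Stand-in for the LLM call that would write the real summary;
    here we just take the first line, truncated."""
    stripped = text.strip()
    return stripped.splitlines()[0][:120] if stripped else ""


def compile_wiki(raw_dir: Path, wiki_dir: Path) -> Path:
    """'Compile' raw/ into a wiki/: one .md note per source document,
    each backlinking to an index.md that lists every note."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    entries = []
    for src in sorted(raw_dir.glob("*.txt")):
        blurb = summarize(src.read_text())
        note = wiki_dir / f"{src.stem}.md"
        # Each note carries its summary and a backlink to the index.
        note.write_text(f"# {src.stem}\n\n{blurb}\n\n[[index]]\n")
        entries.append(f"- [[{src.stem}]]: {blurb}")
    index = wiki_dir / "index.md"
    index.write_text("# Index\n\n" + "\n".join(entries) + "\n")
    return index


# Usage:
# compile_wiki(Path("raw"), Path("wiki")) creates wiki/<doc>.md notes
# plus wiki/index.md; re-running it refreshes the wiki incrementally.
```

The auto-maintained index file is the piece that, per the post, makes "fancy RAG" unnecessary at small scale: the agent reads index.md first and follows the wikilinks it needs.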
Seriously reposted
0xMarioNawfal (@RoundtableSpace)
MLX JUST GOT ALL GEMMA 4 MODELS UPLOADED WITH QUANTIZATION IN JUST A FEW HOURS. If you’re building on Mac and care about running models fast, this is the kind of update you bookmark immediately. x.com/Prince_Canuma/…
Prince Canuma (@Prince_Canuma)

mlx-vlm v0.4.3 is here 🚀

Day-0 support:
🔥 Gemma 4 (vision, audio, MoE) by @GoogleDeepMind
🦅 Falcon-OCR + Falcon Perception by @TIIuae
🪨 Granite Vision 4.0 by @IBMResearch

New models:
🎯 SAM 3.1 with Object Multiplex by @facebook
🔍 RF-DETR detection & segmentation by @roboflow

Infra:
⚡ TurboQuant (KV cache compression)
🖥️ CUDA support for vision models (SAM and RF-DETR)

Get started today:
> uv pip install -U mlx-vlm

Leave us a star ⭐️ github.com/Blaizzy/mlx-vlm

17 replies · 47 reposts · 151 likes · 58.3K views
Seriously reposted
Error Mohibur (@Mohibur_Tech_Ai)
THAT'S WHY AIRLINES HATE CLAUDE 4.6 Flight for $879. I paid $299. No points. No affiliations. No VPN. Here are 8 prompts I used to travel like a pro↓
28 replies · 108 reposts · 1.3K likes · 485.3K views
Seriously reposted
Machine Learning Street Talk
I couldn't find any benchmarks of folks running the Gemma models on an M4 Max (with Ollama 0.20 and mlx-vlm), so I just got my agent to do a very rough benchmark in case anyone is interested.

Ollama 0.20 and raw MLX-VLM give you 65-75 tok/s on Gemma 4 26B-A4B, within ~12% of each other, because both now run MLX underneath. The main practical difference is memory: Ollama grabs 34 GB (pre-allocated 262K context), MLX-VLM uses 16 GB (dynamic). KV cache quantization is a non-factor at normal context lengths. The main thing is that MoE makes the 26B model 6x faster than the 31B dense; you get a "bigger" model at the speed of a much smaller one.

Quant caveats: these results compare Ollama's Q4_K_M (GGUF) against mlx-community's 4-bit affine (MLX). Both are 4-bit but use different quantization methods with different quality/speed trade-offs. The mlx-community repo offers 15+ quantization variants for this model (bf16, 4/5/6/8-bit, mxfp4/8, nvfp4). Ollama ships a single default quant per model size. Different quant choices would shift both speed and quality.
[2 images]
Prince Canuma (@Prince_Canuma)

Gemma 4 31B running with TurboQuant KV cache on MLX 🔥

128K context:
→ KV Memory: 13.3 GB → 4.9 GB (63% reduction)
→ Peak Memory: 75.2 GB → 65.8 GB (-9.4 GB)
→ Quality preserved

TurboQuant compression scales with sequence length, so the longer the context, the bigger the savings!

Try it out:
> uv run mlx_vlm.generate --model google/gemma-4-31b-it --kv-bits 3.5 --kv-quant-scheme turboquant

Note: Decode speed drops (~1.5x) due to kernel launch overheads; we are aware and will fix in coming releases.

3 replies · 8 reposts · 69 likes · 13.4K views
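The benchmark comparisons quoted above boil down to two bits of arithmetic: tokens generated divided by wall-clock seconds, and a "within ~12%" relative-difference check. A tiny helper makes your own runs directly comparable; the run lengths below are invented numbers chosen to land in the 65-75 tok/s range the post reports, not its actual measurements.

```python
def tokens_per_second(n_tokens: int, seconds: float) -> float:
    """Throughput as reported by local-LLM benchmarks: generated tokens / wall time."""
    return n_tokens / seconds


def within_pct(a: float, b: float, pct: float) -> bool:
    """True if a and b differ by at most pct percent of the larger value."""
    return abs(a - b) / max(a, b) * 100.0 <= pct


# Hypothetical runs: 1500 generated tokens per backend, timed separately.
ollama_tps = tokens_per_second(1500, 20.5)   # ≈ 73.2 tok/s
mlx_tps = tokens_per_second(1500, 22.9)      # ≈ 65.5 tok/s
print(within_pct(ollama_tps, mlx_tps, 12.0))  # prints True: within ~12% of each other
```

When comparing backends this way, keep the prompt, generation length, and quantization fixed, since (per the quant caveats above) a different quant shifts both speed and quality.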