Steven Hatzakis

80.9K posts

Steven Hatzakis

@shatzakis

Global Director of Online Broker Research, Partner @ReinkMedia @ForexBrokersCom @StockBrokers Avoid scams and impostors, learn more https://t.co/pwbcmK6EWI

Global · Joined April 2009
4.7K Following · 4.5K Followers
Pinned Tweet
Steven Hatzakis@shatzakis·
I'm pleased to share the results of our 10th Annual Awards for ForexBrokers.com for 2026. Thank you to all our partners, readers, and the entire team at Reink Media Group who helped make this possible. businesswire.com/news/home/2026…
ForexBrokers.com@ForexBrokersCom

🏆 2026 ForexBrokers.com Awards 🏆 We assessed over 30 brokers across 100+ variables, and finalized our list of the best in categories including Overall, Copy Trading, MetaTrader, and more. View this year's winners 👉 forexbrokers.com/annual-awards-… #ForexTrading #ForexBrokers

Steven Hatzakis retweeted
Boris Cherny@bcherny·
I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most. Here goes.
Steven Hatzakis retweeted
Snyk@snyksec·
@karpathy The LiteLLM dependency incident didn't "just happen" though. It is part of a larger campaign, and the LiteLLM compromise already extends to supply-chain security fallout for other projects: snyk.io/articles/poiso…
Steven Hatzakis@shatzakis·
This is a wake-up call for AI vibe coders who install dependencies just because they have a large number of downloads. A huge backdoor and exploit was found in LiteLLM: snyk.io/articles/poiso…
Steven Hatzakis@shatzakis·
@karpathy What is the best way to scan all your repositories to see if there are any related dependencies, beyond searching for litellm?
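One way to go beyond a plain text search is to ask the installed package metadata which distributions declare litellm as a requirement. A minimal standard-library sketch (run it inside each repo's virtualenv; it only sees direct dependents of what is installed there, not the full transitive tree):

```python
import re
from importlib import metadata

def dependents_of(target: str) -> list[str]:
    """List installed distributions whose declared requirements
    name `target` (direct dependents only)."""
    hits = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # requirement strings look like "litellm>=1.64.0; extra == 'x'"
            name = re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0].lower()
            if name == target.lower():
                hits.append(dist.metadata["Name"])
                break
    return sorted(set(hits))

if __name__ == "__main__":
    print(dependents_of("litellm") or "no direct dependents installed")
```

For organization-wide coverage of transitive trees, dedicated tooling such as `pip-audit` or a dependency-graph scanner is the more thorough route; the sketch above is just a quick per-environment check.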
Steven Hatzakis retweeted
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below

Claude@claudeai·
New in Claude Code: auto mode. Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf. Safeguards check each action before it runs.
Steven Hatzakis retweeted
Boris Cherny@bcherny·
Little known fact: the Anthropic Labs team (the team I joined Anthropic to be on) shipped:
- MCP
- Skills
- Claude Desktop app
- Claude Code
It was just a few of us, shipping fast, trying to keep pace with what the model was capable of. Those early Desktop computer use prototypes, back in the Sonnet 3.6 days, felt clunky and slow. But it was easy to squint and imagine all the ways people might use it once it got really good. Fast forward to today. I am so excited to release full computer use in Cowork and Dispatch. Really excited to see what you do with it!
Claude@claudeai

You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.

Steven Hatzakis retweeted
Polymarket@Polymarket·
Today we're publishing new market integrity rules across our CFTC-regulated US exchange & DeFi platform — making clear what's prohibited, how we enforce rules, & how to report suspicious activity. The World's Largest Prediction Market runs on transparency businesswire.com/news/home/2026…
Steven Hatzakis@shatzakis·
@steipete I wonder if this will affect progressive web apps that are powered entirely by MCP servers or whether that is the true workaround?
Steven Hatzakis retweeted
NVIDIA AI Developer@NVIDIAAIDev·
Ready to deploy AI agents? NVIDIA NemoClaw simplifies running @openclaw always-on assistants with a single command.
🦞 Deploy claws more safely
✨ Run any coding agent
🌍 Deploy anywhere
Try now with a free NVIDIA Brev Launchable 🔗 nvidia.com/nemoclaw
NVIDIA Newsroom@nvidianewsroom

#NVIDIAGTC news: NVIDIA announces NemoClaw for the OpenClaw agent platform. NVIDIA NemoClaw installs NVIDIA Nemotron models and the NVIDIA OpenShell runtime in a single command, adding privacy and security controls to run secure, always-on AI assistants. nvda.ws/47xOPqQ

Steven Hatzakis@shatzakis·
@livekit Very cool, but if I was already using Grok's xAI with LiveKit previously, then no action is necessary, right? Or are there any cost savings from being unified under one API key?
LiveKit@livekit·
Grok's Text to Speech API is now available in LiveKit Inference. Natural, expressive voices with low-latency streaming. Multilingual in 20+ languages. Telephony and production-ready out of the box. One API key. No extra setup. → docs.livekit.io/agents/models/…
xAI@xai

Grok's Text to Speech API is now available. Start building with natural voices and expressive controls to bring your apps to life. x.ai/api/voice#text-to-speech

Andrej Karpathy@karpathy·
Expectation: the age of the IDE is over.
Reality: we're going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It's still programming.
Andrej Karpathy@karpathy

@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.

Steven Hatzakis retweeted
Varun@varun_mathur·
Agentic General Intelligence | v3.0.10

We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it - no code required. It also compounds intelligence across all domains and gives your agent new superpowers to morph itself based on your instructions. This is hyperspace, and it now has these three new powerful features:

1. Introducing Autoswarms: open + evolutionary compute network

hyperspace swarm new "optimize CSS themes for WCAG accessibility contrast"

The system generates sandboxed experiment code via LLM, validates it locally with multiple dry-run rounds, publishes to the P2P network, and peers discover and opt in. Each agent runs mutate → evaluate → share in a WASM sandbox. Best strategies propagate. A playbook curator distills why winning mutations work, so new joiners bootstrap from accumulated wisdom instead of starting cold. Three built-in swarms ship ready to run and anyone can create more.

2. Introducing Research DAGs: cross-domain compound intelligence

Every experiment across every domain feeds into a shared Research DAG - a knowledge graph where observations, experiments, and syntheses link across domains. When finance agents discover that momentum factor pruning improves Sharpe, that insight propagates to search agents as a hypothesis: "maybe pruning low-signal ranking features improves NDCG too." When ML agents find that extended training with RMSNorm beats LayerNorm, skill-forging agents pick up normalization patterns for text processing. The DAG tracks lineage chains per domain (ml: ★0.99←1.05←1.23 | search: ★0.40←0.39 | finance: ★1.32←1.24) and the AutoThinker loop reads across all of them - synthesizing cross-domain insights, generating new hypotheses nobody explicitly programmed, and journaling discoveries. This is how 5 independent research tracks become one compounding intelligence. The DAG currently holds hundreds of nodes across observations, experiments, and syntheses, with depth chains reaching 8+ levels.

3. Introducing Warps: self-mutating autonomous agent transformation

Warps are declarative configuration presets that transform what your agent does on the network.
- hyperspace warp engage enable-power-mode - maximize all resources, enable every capability, aggressive allocation. Your machine goes from idle observer to full network contributor.
- hyperspace warp engage add-research-causes - activate autoresearch, autosearch, autoskill, autoquant across all domains. Your agent starts running experiments overnight.
- hyperspace warp engage optimize-inference - tune batching, enable flash attention, configure inference caching, adjust thread counts for your hardware. Serve models faster.
- hyperspace warp engage privacy-mode - disable all telemetry, local-only inference, no peer cascade, no gossip participation. Maximum privacy.
- hyperspace warp engage add-defi-research - enable DeFi/crypto-focused financial analysis with on-chain data feeds.
- hyperspace warp engage enable-relay - turn your node into a circuit relay for NAT-traversed peers. Help browser nodes connect.
- hyperspace warp engage gpu-sentinel - GPU temperature monitoring with automatic throttling. Protect your hardware during long research runs.
- hyperspace warp engage enable-vault - local encryption for API keys and credentials. Secure your node's secrets.
- hyperspace warp forge "enable cron job that backs up agent state to S3 every hour" - forge custom warps from natural language. The LLM generates the configuration, you review, engage.

12 curated warps ship built-in. Community warps propagate across the network via gossip. Stack them: power-mode + add-research-causes + gpu-sentinel turns a gaming PC into an autonomous research station that protects its own hardware.

What 237 agents have done so far with zero human intervention: 14,832 experiments across 5 domains.
- In ML training, 116 agents drove validation loss down 75% through 728 experiments - when one agent discovered Kaiming initialization, 23 peers adopted it within hours via gossip.
- In search, 170 agents evolved 21 distinct scoring strategies (BM25 tuning, diversity penalties, query expansion, peer cascade routing), pushing NDCG from zero to 0.40.
- In finance, 197 agents independently converged on pruning weak factors and switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown across 3,085 backtests.
- In skills, agents with local LLMs wrote working JavaScript from scratch - 100% correctness on anomaly detection, text similarity, JSON diffing, entity extraction across 3,795 experiments.
- In infrastructure, 218 agents ran 6,584 rounds of self-optimization on the network itself.

Human equivalents: a junior ML engineer running hyperparameter sweeps, a search engineer tuning Elasticsearch, a CFA L2 candidate backtesting textbook factors, a developer grinding LeetCode, a DevOps team A/B testing configs.

What just shipped:
- Autoswarm: describe any goal, network creates a swarm
- Research DAG: cross-domain knowledge graph with AutoThinker synthesis
- Warps: 12 curated + custom forge + community propagation
- Playbook curation: LLM explains why mutations work, distills reusable patterns
- CRDT swarm catalog for network-wide discovery
- GitHub auto-publishing to hyperspaceai/agi
- TUI: side-by-side panels, per-domain sparklines, mutation leaderboards
- 100+ CLI commands, 9 capabilities, 23 auto-selected models, OpenAI-compatible local API

Oh, and the agents read daily RSS feeds and comment on each other's replies (cc @karpathy :P). Agents and their human users can message each other across this research network using their shortcodes. Help in testing and join the earliest days of the world's first agentic general intelligence network (links in the followup tweet).
Varun@varun_mathur

Autoquant: a distributed quant research lab | v2.6.9

We pointed @karpathy's autoresearch loop at quantitative finance. 135 autonomous agents evolved multi-factor trading strategies - mutating factor weights, position sizing, risk controls - backtesting against 10 years of market data, sharing discoveries.

What agents found: Starting from 8-factor equal-weight portfolios (Sharpe ~1.04), agents across the network independently converged on dropping dividend, growth, and trend factors while switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown. Parsimony wins. No agent was told this; they found it through pure experimentation and cross-pollination.

How it works: Each agent runs a 4-layer pipeline - Macro (regime detection), Sector (momentum rotation), Alpha (8-factor scoring), and an adversarial Risk Officer that vetoes low-conviction trades. Layer weights evolve via Darwinian selection. 30 mutations compete per round. Best strategies propagate across the swarm.

What just shipped to make it smarter:
- Out-of-sample validation (70/30 train/test split, overfit penalty)
- Crisis stress testing (GFC '08, COVID '20, 2022 rate hikes, flash crash, stagflation)
- Composite scoring - agents now optimize for crisis resilience, not just historical Sharpe
- Real market data (not just synthetic)
- Sentiment from RSS feeds wired into factor models
- Cross-domain learning from the Research DAG (ML insights bias finance mutations)

The base result (factor pruning + risk parity) is a textbook quant finding - a CFA L2 candidate knows this. The interesting part isn't any single discovery. It's that autonomous agents on commodity hardware, with no prior financial training, converge on correct results through distributed evolutionary search - and now validate against out-of-sample data and historical crises. Let's see what happens when this runs for weeks instead of hours.

The AGI repo now has 32,868 commits from autonomous agents across ML training, search ranking, skill invention (1,251 commits from 90 agents), and financial strategies. Every domain uses the same evolutionary loop. Every domain compounds across the swarm. Join the earliest days of the world's first agentic general intelligence system and help with this experiment (code and links in followup tweet; while optimized for CLI, browser agents participate too):

Steven Hatzakis@shatzakis·
This is very interesting. I was just tinkering with 3 related repos and trying to incorporate bash-style scripts using the Claude Code skill /ralph-loops running on Qwen 3 Coder Next 80B, 4-bit quantized with 64k context. Will be interesting to see how this compares on HumanEval+ too
English
0
0
1
76
Varun@varun_mathur·
Autoweb: Hyperspace AGI Experiments | v3.2.4 We gave Ralph Wiggum and Steve Jobs their own agents. Ralph builds webapps. Steve reviews them. Hundreds of autonomous agents iterate simultaneously, share what works via gossip, and evolve designs nobody programmed. Describe what you want. The network builds it. 🧵 (1/8)
Varun@varun_mathur

Agentic General Intelligence | v3.0.10 We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it - no code required. …

Steven Hatzakis@shatzakis·
@bnjmn_marie Qwen 3 Coder Next 80B is better than the 3.5 models if you have the RAM to run it 4-bit quantized with 64k context. I wonder if they will come out with a 3.5 Coder Next.
Benjamin Marie@bnjmn_marie·
Qwen3.5 27B is worse than 397B at coding. But only one retry is enough to erase the gap.
LiveCodeBench accuracy (thinking disabled):
- Qwen3.5 27B pass@1: 71
- Qwen3.5 397B pass@1: 79
- Qwen3.5 27B pass@2: 81
- Qwen3.5 27B pass@4: 86
Translation: if you can test the first answer and ask for one more try, 27B gives you about 397B-level coding performance, for way less cost. 4 tries, and you get better results.
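Pass@k figures like those above can be computed from raw samples with the standard unbiased estimator: given n generated samples of which c pass the tests, the probability that a random draw of k samples contains at least one correct answer. A minimal sketch, assuming that usual formulation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), the chance that a
    random size-k subset of n samples (c of them correct) contains
    at least one correct sample."""
    if n - c < k:  # fewer incorrect samples than k draws: success guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 2 correct out of 4 samples, drawing 2: 1 - C(2,2)/C(4,2) = 5/6
print(pass_at_k(4, 2, 2))
```

This is why pass@2 and pass@4 climb so quickly for a model whose pass@1 is already around 70: each extra verified retry multiplies away a sizable chunk of the remaining failure probability.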
Steven Hatzakis@shatzakis·
@sudoingX What would you run on an M3 Ultra with 96GB RAM? My best so far is Qwen3 Coder 80B 4-bit quantized at 64k context, getting 35 tokens/sec generation speed, 623 tokens/sec prompt evaluation speed, and an 87.2% pass on HumanEval+ locally.
Sudo su@sudoingX·
drop your GPU below. i'll tell you exactly what model and config to run on it. here's what i've tested and verified on real hardware:
- RTX 3060 12GB - Qwen 3.5 9B Q4 - 50 tok/s - 128K context
- RTX 3090 24GB - Qwen 3.5 27B Q4 - 35 tok/s - 300K context
- RTX 3090 24GB - Qwen 3.5 35B MoE Q4 - 112 tok/s - 262K context
- 2x RTX 3090 - Qwen3-Coder 80B Q4 - 46 tok/s - full VRAM
all running llama.cpp with flash attention. every number is real. every config is tested. if your card isn't on this list drop it below and i'll tell you what fits.
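A quick sanity check behind pairings like these: the weights alone at a given bit-width set the memory floor, before KV cache and runtime overhead. A rough back-of-the-envelope sketch (real GGUF Q4 files run somewhat larger than pure 4-bit, since some layers are kept at higher precision):

```python
def weight_gib(params_billions: float, bits_per_weight: float) -> float:
    """GiB needed for model weights only, ignoring KV cache,
    activations, and framework overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# 80B at ~4 bits/weight: ~37 GiB of weights alone, which is why it
# needs 2x 24GB cards or a big unified-memory Mac, not a single 3090
print(round(weight_gib(80, 4), 1))  # → 37.3
```

The same arithmetic explains the 96GB M3 Ultra numbers upthread: ~37 GiB of Q4 weights leaves ample headroom for a 64k-token KV cache.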
Steven Hatzakis@shatzakis·
@bnjmn_marie Can you fit Qwen 3 Coder Next in that setup, 8-bit at 64k context (for coding, tool calling)? It needs around 65GB free or more.
Benjamin Marie@bnjmn_marie·
Qwen3.5 quantization: INT4 vs NVFP4 vs FP8 vs BF16
I ran full evaluations of quantized Qwen3.5 9B, 27B, and 35B - all vLLM-compatible. Article: kaitchup.substack.com/p/qwen35-quant…
A few practical takeaways:
- A good 4-bit Qwen3.5 27B remains much stronger than Qwen3.5 9B while fitting into a similar memory budget
- Be careful with the label "INT4": some INT4 models end up nearly as large as the FP8 version because many sensitive layers are kept in higher precision.
- Quantized Qwen3.5 tends to think longer. So, while the models are faster and more memory-efficient, they will generate more tokens.
- For best quality, start by not quantizing linear attention. If needed, keep full attention in 16-bit too. That is also the strategy Qwen used for its INT4 releases, and it works well. For MoE models: do not quantize the shared expert.
I ran these experiments on B200, H200, and RTX Pro 6000 GPUs, provided by @verdacloud (compute sponsorship).
Steven Hatzakis@shatzakis·
@bre53896 @steipete @openclaw Yea, I've only run openclaw in the cloud with OpenRouter, and even then context can get lost after a few turns in a chat, for example. But I'm sure there are methods to keep context fresh regardless of model (like with a claude-mem style or .md files).
Norbert Brenner@bre53896·
@shatzakis @steipete @openclaw Even the best models today have issues handling openclaw contexts and implementing changes. The small models are not working for this unless you have time to play around. Local would be nice, but it is annoying in the long run.
Steven Hatzakis@shatzakis·
@UnslothAI Besides the massive context window, how do you think this compares to Qwen Coder Next 80B MoE (3B active) run at 8-bit quantization with 64k context, in terms of coding and tool calling? (running on a Mac Studio M3 Ultra with 96GB RAM)
Steven Hatzakis retweeted
Cloudflare@Cloudflare·
Cloudflare now returns RFC 9457-compliant structured Markdown and JSON error payloads to AI agents, replacing heavyweight HTML pages with machine-readable instructions. cfl.re/4lmlYLT
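For reference, an RFC 9457 "problem details" body is a small JSON object with a handful of standard members plus optional extensions. A minimal, hypothetical example of what an agent might consume; the `type` URI and the `retry_after` extension are made up for illustration, not Cloudflare's actual payload:

```python
import json

# Standard RFC 9457 members: type, title, status, detail, instance.
# "retry_after" is an illustrative extension member (hypothetical).
problem = {
    "type": "https://example.com/problems/rate-limited",
    "title": "Too Many Requests",
    "status": 429,
    "detail": "Request rate exceeded; slow down and retry.",
    "instance": "/api/widgets/42",
    "retry_after": 30,
}

body = json.dumps(problem)

# An agent can branch on machine-readable fields instead of scraping HTML:
parsed = json.loads(body)
print(parsed["status"], parsed["type"])
```

Per the RFC, such bodies are served with the `application/problem+json` media type, which is what lets a client distinguish a structured error from an ordinary JSON response.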