CLICK
@sharetoCLICK · 2.2K posts

CLICK lets you automatically claim all your digital property rights online. Get paid as your data is automatically licensed to AI

metaverse · Joined October 2021
1.7K Following · 2.1K Followers
CLICK retweeted
Felix Rieseberg @felixrieseberg
Today, we’re releasing a feature that allows Claude to control your computer: Mouse, keyboard, and screen, giving it the ability to use any app. I believe this is especially useful if used with Dispatch, which allows you to remotely control Claude on your computer while you’re away.
CLICK retweeted
Alex Cheema @alexocheema
The new M5 Pro/Max MacBooks have 3 Thunderbolt 5 ports, enabling you to create RDMA clusters with up to 4 MacBooks. The latency of RDMA over Thunderbolt is single-digit microseconds, fast enough for tensor parallelism with close-to-linear scaling.
Alex Cheema tweet media
Guybrush Threepwood @twistedmatrices

PSA: If you have multiple MacBooks that support RDMA, you can cluster them using @exolabs and run 30B+ models at 70 tok/s over Thunderbolt 5. Tensor parallelism on consumer hardware is a solved problem. You are renting GPUs that are worse than the laptop on your couch. 2x M4 Max (64GB each) running mlx-community/Qwen3-30B-A3B-4bit @ 70 TPS.

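The two claims above (single-digit-microsecond links, ~70 tok/s on a pair of M4 Max machines) can be sanity-checked with back-of-envelope arithmetic. Every constant below is an assumption of mine, not from the posts: ~546 GB/s M4 Max memory bandwidth, ~3B active parameters per token for the MoE model, 4-bit weights, and a guessed number of all-reduces per token.

```python
# Back-of-envelope sketch; all constants are assumptions, not measured values.
BANDWIDTH_GBS = 546        # GB/s, M4 Max unified memory bandwidth (assumed)
ACTIVE_PARAMS = 3e9        # params touched per token (Qwen3-30B-A3B active set)
BYTES_PER_PARAM = 0.5      # 4-bit quantization

# Decoding is memory-bandwidth bound: each token streams the active weights once.
bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM      # 1.5 GB per token
ceiling_tps = BANDWIDTH_GBS * 1e9 / bytes_per_token    # ~364 tok/s upper bound

# Tensor-parallel communication cost over Thunderbolt RDMA: a few
# microsecond-scale all-reduces per layer is tiny next to the per-token
# budget implied by 70 tok/s.
token_budget_s = 1 / 70                # ~14 ms per token
comm_s = 48 * 2 * 5e-6                 # 48 layers x 2 all-reduces x 5 us (assumed)

print(round(ceiling_tps), round(token_budget_s * 1e3, 1), comm_s * 1e3)
```

So 70 tok/s sits comfortably under the single-machine bandwidth ceiling, and sub-millisecond total link time per token is why scaling can stay near-linear.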
CLICK retweeted
Polymarket @Polymarket
JUST IN: Wikipedia bans prolific editor after investigation reveals he was responsible for over a million pro-Hamas edits on the site.
CLICK retweeted
Peter Steinberger 🦞
I feel my main velocity limitation lately isn't token speed anymore; it's compute. Running tests in parallel is taxing; can't wait for better cloud worker integration.
CLICK retweeted
God of Prompt @godofprompt
🚨 BREAKING: Someone just open-sourced a full offline survival computer with AI, Wikipedia, and maps built in.

Project N.O.M.A.D. is an open-source offline survival computer. Self-contained. Zero internet required after install. Zero telemetry. Everything runs locally on your hardware.

What it includes:
→ Full Wikipedia archives via Kiwix
→ Offline maps via OpenStreetMap
→ Local AI models via Ollama + Open WebUI
→ Calculators, reference tools, resource libraries
→ A management UI to control everything from a browser

One curl command installs the entire system on any Debian-based machine. Runs headless as a server so any device on your local network can access it.

Minimum specs to run the base system: dual-core processor, 4GB RAM, 5GB storage. To run local LLMs offline, you want 32GB RAM and an NVIDIA RTX 3060 or better.

No accounts. No authentication by default. No cloud dependency. No phone-home behavior. Built to function when nothing else does.

The grid, the cloud, the API you depend on: none of it is guaranteed. The people building local-first systems right now are the ones who won't be asking for help when access disappears.
God of Prompt tweet media
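The install command itself isn't given above, but the named components are all independently self-hostable. A minimal sketch of wiring them together with Docker Compose (my assumption, not the N.O.M.A.D. installer; the volume paths are placeholders you'd point at your own .zim archives and model store):

```yaml
# Hedged sketch only: the projects' published container images, not N.O.M.A.D.
services:
  kiwix:                                  # offline Wikipedia archives
    image: ghcr.io/kiwix/kiwix-serve
    command: "*.zim"                      # serve every .zim file in /data
    volumes: ["./zim:/data"]
    ports: ["8080:8080"]
  ollama:                                 # local AI models
    image: ollama/ollama
    volumes: ["./ollama:/root/.ollama"]
    ports: ["11434:11434"]
  open-webui:                             # browser UI in front of Ollama
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports: ["3000:8080"]
    depends_on: [ollama]
```

Any device on the LAN can then reach the wiki on port 8080 and the AI UI on port 3000, with no outbound network dependency after the images and archives are pulled.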
CLICK retweeted
Ava @noampomsky
friend is in the stage of claude psychosis where he asks claude to send him newspapers about what claude is doing for him
Ava tweet media
CLICK retweeted
Lee Robinson @leerob
Here's confirmation from the Kimi team that the license is correct. Agree with the feedback that we should have mentioned the base up front; we will do that for the next model! x.com/Kimi_Moonshot/…
Kimi.ai@Kimi_Moonshot

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via Fireworks' hosted RL and inference platform as part of an authorized commercial partnership.

CLICK retweeted
Lee Robinson @leerob
Since people really want me to say this: "KIMI K2.5" ‼️ Yes, that is the base we started from. And we are following the license through inference partner terms (e.g. Fireworks). I'm thankful for OSS models personally; good for the ecosystem.
CLICK retweeted
Wei Dai @_weidai
Andrej Karpathy on autoresearch with an untrusted pool of workers: "My designs that incorporate an untrusted pool of workers (into autoresearch) actually look a little bit like a blockchain. Instead of blocks, you have commits, and these commits can build on each other and contain changes to the code as you're improving it. The proof of work is basically doing tons of experimentation to find the commits that work."

The idea that distributed & permissionless autoresearch ~= proof-of-useful-work remains a high-level intuition for now, but it is extremely intriguing to say the least. Someone needs to take this further. See QT for more on what's missing.
Wei Dai@_weidai

Is it possible to build "proof-of-useful-work" on top of autoresearch? There's already great compute-versus-verification asymmetry that is tunable. Would need a reliable way to generate fresh & independent puzzles (that are still useful). Maybe a dead end, but someone should look into whether decentralized consensus with useful work is possible on top of autoresearch. Let me know if you solve this.

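The commit-DAG idea above can be caricatured in a few lines. This is a purely illustrative toy of mine (every name is invented, and the real verification asymmetry that makes it interesting is absent here): untrusted workers propose parameter patches as commits, the "proof" is a measured improvement over the parent, and anyone can cheaply re-run the evaluation to verify a commit before building on it.

```python
# Toy sketch of "commits whose proof-of-work is experimentation".
# Illustrative only; not a real consensus protocol.
import hashlib
import random

def evaluate(params):
    """Stand-in for 'tons of experimentation': find params near 0.7."""
    return -sum((p - 0.7) ** 2 for p in params)

def commit_id(parent, params):
    return hashlib.sha256(f"{parent}:{params}".encode()).hexdigest()[:12]

# Genesis commit; the chain only ever extends with verified improvements.
chain = [{"id": "genesis", "params": [0.0, 0.0], "score": evaluate([0.0, 0.0])}]

random.seed(0)
for _ in range(200):                       # untrusted workers propose commits
    tip = chain[-1]
    cand = [p + random.gauss(0, 0.1) for p in tip["params"]]
    score = evaluate(cand)                 # the "proof": measured improvement
    if score > tip["score"]:               # anyone can re-run evaluate() to check
        chain.append({"id": commit_id(tip["id"], cand),
                      "params": cand, "score": score})

print(len(chain), round(chain[-1]["score"], 3))
```

The open problem flagged in the quote tweet is exactly what this toy dodges: here `evaluate` is as cheap to run as to verify, and the puzzles are neither fresh nor independent.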
CLICK retweeted
Zixuan Li @ZixuanLi_
Me introducing M2.7💯
Zixuan Li tweet media
CLICK retweeted
0xSero @0xSero
I'm not the only one doing this.

- karpathy: Best thought leader, best person to learn from imo. Nanochat is the best way to get into training LLMs; it's the simplest and most digestible source for building your first AI model.
- steipete: This guy's GitHub is a national treasure, and his writing is also very strong. Peekaboo, summarize.sh, openclaw, oracle, just talk to it, etc., all unique and very useful.
- badlogicgames: Mario's Pi is a staple AI engine and possibly the best, simplest open-source agentic loop to learn from. Despite what people say about his methods, I think he's going to set some new standards for open-source contribution. Big respect.
- TheAhmadOsman: This man is the GPU king: giveaways and lots of dense educational content around self-hosting and home inference. He's also tight with pretty much all the open-weight labs and has them on for interviews regularly.
- sudoingX: An up-and-comer who will change the game; he's pushing the limits of what a single GPU can do.
- Ex0byt: I can confidently say this man will be fundamental in making local inference on massive models possible.
- alexinexxx: I genuinely feel motivated by her drive. She's a real hard worker learning about GPU kernel programming. Also good aesthetics.
- gospaceport: I would not have gotten into building my own hardware without this man's hard work. He's taught me so much about hardware and the economics of this. He also has the most impressive homelabs I've ever seen.
- alexocheema: The founder of Exolabs, pioneering Apple hardware inference. He's also very engaged in the community and a good guy all around. If you are interested in Mac minis and Mac Studios, this is your guy.
- nummanali: This guy is so prolific; he's made tons of CLI tools for managing LLM subscription budgets, using Claude Code with alternative models, etc.
- thdxr: The entire Opencode team is wonderful, but Dax specifically is a good writer. More anti-doomer content to soothe your anxieties.
- juliarturc: If you are interested in the science, Julia's channel is where it's at. Almost everything I've learned about LLM compression has been from her.
- Teknium: The Nous Research & Prime Intellect teams are both some of the most hard-working and principled people around. Tough fight in an industry so aggressive.
- victormustar: Head of Product for Hugging Face, enabling us all to publish our work.
- louszbd: Head of community at Z.AI, maker of some of the top open-weight LLMs available right now. They supercharged the movement.
- SkylerMiao7: Making frontier intelligence fit on 10k USD of hardware, via MiniMax.
- crystalsssup: Building the best open-weight model on the market, and releasing their latest research before their next-gen model.

Believe it or not, these people are carrying the entire industry and giving us a fighting chance.
0xSero tweet media
CLICK retweeted
Kimi.ai @Kimi_Moonshot
Introducing Attention Residuals: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

🔗 Full report: github.com/MoonshotAI/Att…
Kimi.ai tweet media
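The core mechanism described above can be sketched in plain Python. This is my own toy reading of the idea, not Moonshot's implementation: instead of the fixed residual h_l = h_{l-1} + f(h_{l-1}), each layer forms input-dependent softmax weights over the outputs of all preceding layers and aggregates them before applying its block.

```python
# Illustrative toy of attention over preceding layers (not Moonshot's code).
import math
import random

random.seed(0)
D = 4                                            # toy hidden size

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def block(h):                                    # stand-in for the layer's MLP
    return [math.tanh(x + 0.1) for x in h]

def attn_residual(history):
    """Input-dependent attention over all preceding layers' outputs:
    the latest state acts as the query, so the mix varies with the input."""
    q = history[-1]
    w = softmax([dot(q, h) / math.sqrt(D) for h in history])
    return [sum(wi * h[i] for wi, h in zip(w, history)) for i in range(D)]

history = [[random.gauss(0, 1) for _ in range(D)]]   # layer-0 output
for _ in range(6):
    agg = attn_residual(history)                 # replaces uniform accumulation
    history.append([a + b for a, b in zip(agg, block(agg))])

print(len(history), len(history[-1]))
```

Because the weights are a softmax rather than a fixed sum, deep layers can selectively retrieve an early representation instead of receiving every prior state with equal weight, which is the "mitigating dilution" claim in the post.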
CLICK retweeted
Derya Unutmaz, MD @DeryaTR_
In my new quest to train as a plumber (one of the most coveted jobs now), I'm creating plumbing videos & lessons using @NotebookLM. Here is an amazing short video! Turns out to be more interesting than I thought! Thanks to @GeminiApp, we are making plumbing great again (MPGA)! 😅