Cellhasher

265 posts


@Cellhasher

The smartphone box for high-performance mobile computing 📲 Check our docs out here: https://t.co/hqXbNOve7S

United States · Joined December 2023
27 Following · 1.1K Followers
Cellhasher retweeted
Acurast
Acurast@Acurast·
Something is coming to Acurast. Every block. Every deployment. Every transaction. Fully visible. Soon. 👀
104 replies · 109 reposts · 249 likes · 4.3K views
Filipe Silva
Filipe Silva@crusnikpt·
@Acurast Efficiency is everything. Acurast phone farms are pushing the boundaries of distributed networks. I'd love it if @Cellhasher gifted me one of their babies :)
1 reply · 0 reposts · 1 like · 23 views
Acurast
Acurast@Acurast·
Phone farm check 📱👀 This is what decentralized infrastructure looks like. Not a data center. Not a server rack. A shelf. Some cables. Real phones. Real compute. Show us YOUR setup. #CloudRebellion #MyPhoneFarm 💚
101 replies · 98 reposts · 216 likes · 18.8K views
Cellhasher
Cellhasher@Cellhasher·
Wait till everyone finds out that with @Cellhasher you can deploy your 20-phone cluster as a full R&D lab with a variety of agents: Research, Security, Audit, Routing, Senior Engineer, Vision, func-call, dev-01 developer, dev-02 developer, dev-03 developer, gateway, developer1, guard developer, senses developer, reviewer, meta_engineer, research, devops, data-analyst, multilingual writer, scout, reasoner, nanbeige, hot-spare guard
0 replies · 1 repost · 6 likes · 413 views
Cellhasher retweeted
Luke Wright
Luke Wright@lukewrightmain·
Imagine what happens when you use 20 phones all cooking at 10–20 tok/s, running 24/7, working for you. Here is a taste of one Android phone: a simple deploy of Qwen3.5 running on a 5-year-old Android. @Cellhasher
1 reply · 1 repost · 15 likes · 1.1K views
Cellhasher
Cellhasher@Cellhasher·
@lukewrightmain Low-end compute is amazingly useful, even in an ever-advancing tech sector. Phones have their place!
0 replies · 0 reposts · 2 likes · 104 views
Cellhasher retweeted
Luke Wright
Luke Wright@lukewrightmain·
Here are Android AI @Cellhasher Qwen3.5 + DeepSeek 33B benchmarks.

TL;DR: it's actually worth it if you have old Androids, given the cheap <5 W most of these phones operate at. Especially as agents become more deployable, 24/7, and set-and-forget. AI model companies will eventually fine-tune even further to get models into more of an MoE style, tuned directly to the use case with routers routing each request; Cellhasher is working on this as well. As demand for compute and energy keeps growing, companies will stop chasing ever-bigger models (no company can keep pace with the big ones) and will focus on small local models for retail. We'll start with the 5-year-old chipset and then bring you up to speed with some wild results on a newest-generation chipset.

Device: Snapdragon 888 (5-year-old chip), CPU only
Non-root: 28 GB/s memory cap. Rooted devices (56 GB/s) scale ~1.8–1.9x. (Using the fastest 4 cores actually outperforms using all 8 cores.)

Qwen3.5-0.8B
CPU (4 big cores): 12.54–13.01 tok/s
Rooted (1.8–1.9x): 22.6–24.7 tok/s
Vulkan GPU (NGL=24): 1.60–1.78 tok/s (7–8x slower)

Qwen3.5-2B
CPU (4 big cores): 9.15–9.36 tok/s
Rooted: 16.5–17.8 tok/s
Vulkan GPU (NGL=24): 0.78–0.86 tok/s (11–12x slower)

Large models (non-root); "#p" = number of phones in a parallel pipeline ring made by Cellhasher Swarm AI
DeepSeek 33B → 5.89 tok/s average (best was 7.8 tok/s) (12p, d=16, 81% accept)
Qwen3.5-35B-A3B → 3.75 tok/s (best was 5.1 tok/s) (3p, d=8, 71.5% accept)
Qwen3.5-32B (Coder) → 2.85 tok/s (best 4.8 tok/s) (7p, same-family draft)

Rooted estimate (56 GB/s)
DeepSeek 33B → ~10–11 tok/s
Qwen3.5-35B → ~6.5–7 tok/s
Qwen3.5-32B → ~5–5.5 tok/s

Snapdragon 8 Elite Gen 5 plus + 24 GB RAM (Android phone): ~75–85 GB/s memory bandwidth, INT4/INT8 NPU usable, well-tuned pipeline
Qwen3.5-0.8B: CPU only 30–45 tok/s; CPU + NPU 70–100+ tok/s
Qwen3.5-2B: CPU only 18–25 tok/s; CPU + NPU 40–60 tok/s
DeepSeek 33B: CPU only 15–20 tok/s; CPU + NPU (blended) 20–28 tok/s (30 tok/s possible with ideal tuning)
Qwen3.5-35B-A3B: CPU only 11–15 tok/s; CPU + NPU 16–22 tok/s
Qwen3.5-32B (Coder): CPU only 10–14 tok/s; CPU + NPU 15–20 tok/s

(CPU+NPU is slightly tricky, and prefill along with the ring pipeline can determine a lot; KV-cache caching is something I haven't played around with yet.)

33B class: ~2.5–3.5x over Snapdragon 888. Small models see major NPU uplift. Memory bandwidth remains the limiter on large models.

@Cellhasher has come a long way, drawing inspiration from @exolabs over the last few months, although things needed to change to be managed correctly on Android; really none of the EXO features are used now. We have our own modification to llama.cpp that offers this ring-pipelined parallelism for running LARGE models across however many phones it takes. What's hard is that fewer phones doesn't always mean higher tokens; some would think fewer hops will do the trick, and in some cases it does, but in other cases, like spec drafting, it sometimes doesn't.

Regardless, for autonomously running agents 24/7: if I can run an 80B model at 1–5 tok/s on 5-year-old Android hardware drawing 5–15 W depending on the number of phones used, I'll take it. If you upgrade to the latest generation of phones, you get the breakthrough of NPU sync and bandwidth optimization that delivers that 20–70 tok/s feel, depending on the model, for everyday use. And now I'm thinking about taking the plunge into 20 of these new phones: $20k for 160 cores and 480 GB RAM, and I could possibly run the latest and greatest at maybe 10 tok/s... who knows, let me know if you want me to try!

(All phones benchmarked on Ethernet, as it offers better latency than Wi-Fi.) (Wi-Fi still works, just slower.)
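The "memory bandwidth remains the limiter" point lends itself to a quick sanity check. Here is a minimal back-of-envelope sketch: the 28 GB/s (non-root) and 56 GB/s (rooted) Snapdragon 888 figures come from the thread, while the 4-bit weight assumption and the 0.35 efficiency factor are my own, chosen so the 2B estimate lands near the measured 9.1–9.4 tok/s.

```python
# Back-of-envelope decode throughput for a memory-bandwidth-bound LLM.
# Thread-sourced numbers: 28 GB/s non-root, 56 GB/s rooted (Snapdragon 888).
# My assumptions: 4-bit weights, 0.35 effective-bandwidth efficiency.

def decode_tok_per_s(params_b: float, bandwidth_gb_s: float,
                     bits_per_weight: float = 4.0,
                     efficiency: float = 0.35) -> float:
    """Every decoded token streams all weights from memory once, so
    tok/s ~= usable_bandwidth / bytes_per_token."""
    bytes_per_token = params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 * efficiency / bytes_per_token

print(f"2B non-root: {decode_tok_per_s(2.0, 28):.1f} tok/s")   # ~9.8
print(f"2B rooted:   {decode_tok_per_s(2.0, 56):.1f} tok/s")   # ~19.6
print(f"33B rooted:  {decode_tok_per_s(33.0, 56):.1f} tok/s")  # ~1.2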
Qwen@Alibaba_Qwen

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B
✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation — native multimodal, improved architecture, scaled RL:
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models
And yes — we're releasing the Base models as well. We hope this better supports research, experimentation, and real-world industrial innovation.
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…

3 replies · 6 reposts · 21 likes · 2.6K views
Cellhasher
Cellhasher@Cellhasher·
@lukewrightmain @danieldalen @claudeai Our software supports this. A few more updates for 1–2-click deploys will come this week as well! Fun times, helping people who are just getting started.
0 replies · 0 reposts · 2 likes · 38 views
Luke Wright
Luke Wright@lukewrightmain·
@danieldalen @claudeai Bro, you need help with local stuff! What's your specs? Try a MoE model for the tasks you want. LMK if you need help. Also, picoclaw is lighter weight and faster than OpenClaw and can use all the same skills and stuff… been doing this for years, actually. Here's 80 running
2 replies · 0 reposts · 5 likes · 991 views
Cellhasher
Cellhasher@Cellhasher·
What it feels like to have phone compute in a market turn like today!
1 reply · 0 reposts · 3 likes · 565 views
rukasufall
rukasufall@rukasufall·
@tengyanAI @openclaw Installing OpenClaw via Termux isn’t as simple as it looks. A lot of things in Termux are only emulated, and many conflicts come up. I’ve already managed to launch OpenClaw on my Motorola E22, but it still can’t respond, the API connection drops. For now, I’m stuck at this step.
6 replies · 0 reposts · 17 likes · 4.9K views
Teng Yan · Chain of Thought AI
Your old Android phones are a better agent server than a Mac mini. While most users spend $600+ on dedicated hardware for @openclaw, the real efficiency is hiding in your junk drawer. I saw some developers use Termux and Node.js to turn 3-watt devices into constant research hubs. By running npm install -g clawdbot, these discarded screens handle market monitoring and Telegram summaries without a break. 3 phones can roughly match the output of a Mac mini for almost zero cost. This setup runs Clawdbot 24/7 to pipe private signal alerts directly to a primary device. I suspect the hardware bottleneck for autonomous agents is already dead. Compute is now so cheap that our junk phones are sufficient!
Chip.hl // Evgeny Yurchenko@chip1cr

I run Clawdbot on 3 old Android phones. Twitter research. Market monitoring. Daily summaries of Telegram chats. Private-group signals flashing to my main phone. Running on cheap models like glm-4-flash when that's OK, to save on the Claude sub. Combined power: same as one Mac mini. Cost: $0 total. That old phone in your drawer? Turn it into a 24/7 AI server:
Download Termux from F-Droid. Run:
pkg install nodejs-lts git
npm install -g clawdbot
clawdbot gateway start
Remote access from your computer or main phone. Deploy skills. Cron jobs. Telegram bot. Code. 3–5 W per device. Built-in UPS lmao! 🔥
Don't have one? Buy a Redmi Note 10 Pro ($60). Pixel 4a ($80). Galaxy A52 ($100). Mac mini energy. Junk-drawer budget. VPS companies hate this setup too.
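The tweet's steps collapse into one Termux provisioning script. A sketch only: the clawdbot package name and the `gateway start` subcommand are quoted from the tweet, not independently verified.

```shell
# Run inside Termux (install it from F-Droid first).
# Package name `clawdbot` and `gateway start` are as stated in the tweet.
pkg install -y nodejs-lts git   # Node.js LTS runtime + git
npm install -g clawdbot         # the agent/gateway CLI
clawdbot gateway start          # start the 24/7 gateway on the phone
```

From there the phone is reachable remotely, as the tweet describes, for skills, cron jobs, and Telegram bots.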

133 replies · 253 reposts · 2.7K likes · 294.4K views
Cellhasher
Cellhasher@Cellhasher·
@d3b0m4n @Acurast Estimating in the next week or two! (Rackmount +20 Phones is still available at the moment though)
0 replies · 0 reposts · 1 like · 16 views
d3b0m4n
d3b0m4n@d3b0m4n·
@Cellhasher @Acurast When will you be restocking the Cellhasher Classic +20 Phones? Desiring minds would like to know...
1 reply · 0 reposts · 0 likes · 12 views