Cactus

16 posts

Cactus
@cactuscompute

The fastest way to deploy mobile AI

San Francisco, CA · Joined October 2025
7 Following · 156 Followers
Cactus retweeted
Henry Ndubuaku @Henry_Ndubuaku ·
Gemma 4 on Cactus can run realtime language, vision and speech on a mobile device...with multiple apps running in the background. Cactus routing algorithms give Gemma the capacity to forward tasks to frontier models like Gemini and Claude when it's confused. Typical on-device AI demos are too structured, but this is probably the closest we've come to a Jarvis-like AI assistant. @googlegemma @GeminiApp @cactuscompute
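The routing idea in the post above - answer locally, escalate to a frontier model when the on-device model is unsure - can be sketched as a simple confidence threshold. This is an illustrative sketch only, not Cactus's actual routing algorithm: the `runLocal` scorer, the log-probability signal, and the threshold value are all assumptions made for the demo.

```typescript
// Illustrative confidence-threshold router - NOT Cactus's actual algorithm.
// Assumption: the local model can report an average token log-probability.

type LocalResult = { text: string; avgLogProb: number };

// Placeholder local inference - stands in for an on-device model call.
// Toy heuristic: short prompts are "easy" and score confidently.
function runLocal(prompt: string): LocalResult {
  const confident = prompt.length < 50;
  return confident
    ? { text: `local answer to "${prompt}"`, avgLogProb: -0.2 }
    : { text: `local guess for "${prompt}"`, avgLogProb: -2.5 };
}

// Route: keep the local answer when the model looks confident,
// otherwise forward the task to a frontier model (Gemini, Claude, ...).
function route(
  prompt: string,
  threshold = -1.0
): { answer: string; routedToCloud: boolean } {
  const local = runLocal(prompt);
  if (local.avgLogProb >= threshold) {
    return { answer: local.text, routedToCloud: false };
  }
  // In a real app this branch would be an API call to the frontier model.
  return { answer: `[frontier model handles: ${prompt}]`, routedToCloud: true };
}

console.log(route("What time is it?").routedToCloud);  // false: stays on-device
console.log(
  route("Summarise this 40-page contract and draft a reply to the counterparty")
    .routedToCloud
);                                                     // true: escalated
```

A production router would replace the toy heuristic with a real uncertainty signal (token log-probs, a verifier model, or task classification), but the control flow is the same.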
Cactus @cactuscompute ·
Proud to power on-device AI for Pebble – the product that pioneered the entire smartwatch category – and now innovating on new wearable form factors.
Eric Migicovsky @ericmigi

Haven't posted much about @Pebble Index 01 recently (been...uh a bit in the weeds on PT2 shipping). But lots of progress has been made! Still need to order one? You can do that here 😉 repebble.com/index

We're in PVT2! Yes, 2 :) The first Production Verification Test (PVT) at end of March exposed two main problems: 1) possible ESD damage during assembly was blowing out the BLE amp on some units, 2) a change we made to the mic waterproof membrane caused some units to fail audio testing. We fixed both with modifications to the assembly line. Things seem to be on a good track now with PVT2!

When will my Index 01 ship? Our goal is to manufacture the first 2k units by mid-May and start shipping them out. It will take us roughly 3 months to manufacture and ship out all pre-orders, meaning we should be finished by July. As always, these are estimates. Delays may happen!

Brushed Silver: We are switching from offering a polished silver to a brushed silver finish! It looks great - photo below, with lots more to come. Head to orders.rePebble.com to switch colours.

Alpha testers: Thanks to some brave souls who've been using early versions of Index 01 and reporting bugs, we've crushed a ton of software issues over the last 3 months! Performance and reliability are starting to be really smooth on iOS and Android.

Software improvements - all open source, of course! github.com/coredevices/mo…
- We added fully encrypted cloud backup (optional)
- Local speech-to-text is working well, thanks to @cactuscompute + parakeet-tdt-0.6b-v3!
- You can optionally route audio or transcriptions to a webhook - enabling you to pipe recordings directly from your Index 01 to an agent like OpenClaw
- Our lead engineer also added Home Assistant support (because she wanted it herself!!)
- Added Beeper (text directly from Index 01!), music control (Android only), Notion, Apple Reminders and Google Tasks integration
- Bring your own MCP server

Still working on a lot of stuff - dramatically improved UI, more reminder app integrations (Todoist, Tasks.org), and more. Tell us - what are you excited to do with your Index 01? Do you have a favorite reminder/todo/notes app that you would like to use with it?

Cactus retweeted
Dominik Sobe ツ @sobedominik ·
Anyone successfully using a local LLM on their iPhone? I have tested a few a year ago on my iP14 Pro but they all made my battery extremely hot and the UI sucked. Now with the iP17 Pro I’d love to give it another try. What app/model should I use?
Cactus retweeted
Samuel Donkor @SAMADON_ ·
@cactuscompute @nothing @huggingface Excited to share that our team placed 2nd at the Cactus (YC S25) x Nothing x Hugging Face Mobile AI Hackathon. We were up against teams from MIT, Stanford, and builders from around the world. Grateful to have had the chance to build and compete alongside so many talented people.
Cactus retweeted
Henry Ndubuaku @Henry_Ndubuaku ·
1.6B INT8 VLM by @liquidai on Cactus (YC S25) never exceeds 231MB of peak memory usage at any context size.

1. Cactus is aggressively optimised to run on budget devices with minimal resources - efficient, negligible pressure on your phone, and it passes your OS safety mechanisms.
2. Notice how 1.6B INT8 CPU reaches 95 toks/sec on Apple M4 Pro, faster than your eyes could process. Our INT4 will almost 2x the speed when merged. Expect up to 180 toks/sec decode speed.
3. The prefill speed reaches 513 toks/sec. Our NPU kernels will 5-11x that once merged. Expect up to 2500 - 5500 toks/sec. The time to first token of your large-context prompt will take less than 1 sec.
4. LFM2-1.2B-INT8 in the Cactus compressed format takes only 722mb. This means INT4 will shrink it to ~350mb - almost half as much as GGUF, ONNX, Executorch, LiteRT etc.
5. Once done, we will start recommending 1B models to our users, because your Grandma's phone will run them. Stay tuned! github.com/cactus-compute…
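The projections in the post above are straightforward scaling arithmetic from the measured INT8 numbers. A quick sanity check, using only figures stated in the post (the 2000-token prompt in the time-to-first-token line is an assumed example size, not a number from the post):

```typescript
// Sanity-check the scaling claims in the post; all other inputs are the post's figures.

const int8WeightsMb = 722;               // LFM2-1.2B-INT8, Cactus compressed format
const int4WeightsMb = int8WeightsMb / 2; // INT4 uses half the bits of INT8 → 361 MB ("~350mb")

const int8DecodeTps = 95;                // decode toks/sec on Apple M4 Pro CPU
const int4DecodeTps = int8DecodeTps * 2; // "almost 2x" → 190 toks/sec ("up to 180")

const cpuPrefillTps = 513;               // measured prefill toks/sec
const npuPrefillLow = cpuPrefillTps * 5;   // 2565 toks/sec
const npuPrefillHigh = cpuPrefillTps * 11; // 5643 toks/sec → the "2500 - 5500" range

// Time to first token for a large prompt - e.g. 2000 tokens at the LOW NPU estimate:
const ttftSeconds = 2000 / npuPrefillLow;  // ≈ 0.78 s, i.e. under 1 second

console.log({ int4WeightsMb, int4DecodeTps, npuPrefillLow, npuPrefillHigh, ttftSeconds });
```

So each headline number ("up to 180 toks/sec", "2500 - 5500 toks/sec", "~350mb", "less than 1 sec") is consistent with the stated multipliers applied to the measured INT8 baselines.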
Cactus retweeted
Jakub Mroz @jakmroo ·
We just shipped the Cactus React Native SDK 🌵 - the fastest and most efficient on-device AI inference engine for React Native. ⚡️ Lightweight, insanely fast, and built for mobile devices from the ground up. 🚀
Cactus retweeted
Sélim @SelimBenayat ·
Hackathon alert! London, SF, Boston. This Friday! 👀 @nothing is teaming up with @cactuscompute and @huggingface to hack on redefining on-device AI experiences! Come build something memorable, meet the teams, and ship in 24 hours! Signups are wild so far 🔥
Cactus @cactuscompute ·
@_iamEtornam thanks for building with us, Etornam! 🫶🏼🌵
Cactus @cactuscompute ·
Cactus React Native v1 is live! Deploy AI on-device with text inference, tool calling, embeddings and more – powered by the fastest edge inference engine 🌵 Our React Native bindings run on @margelo_com's Nitro Modules, yielding the fastest mobile inference we've seen so far.