RunAnywhere (YC W26)

227 posts


@RunAnywhereAI

RunAnywhere: The default way of running on-device AI at scale. Backed by @ycombinator

Joined July 2025
8 Following · 1.3K Followers
RunAnywhere (YC W26) reposted
Yohei @yoheinakajima
just tried this out and it one-shotted* this video: "before the agent does anything"

*i generated the narrative using chatgpt and used that as a prompt.

featuring: @e2b @runanywhereai @composio @mem0ai @firecrawl @browser_use @agentmail @covenantlabsai

some thoughts:
- i clearly tried to stick too much into 30 seconds; they talk very fast and some content got lost, which breaks the logic
- character consistency is strong; i uploaded a single screenshot from my prior video as reference
- voice consistency was not automatic; you notice the unicorn switch from a female to a male voice partway through
- the agent gives you an editor with the generated scenes broken up, but i don't see a way to regenerate a single section in the UI (which would be nice)
- it is definitely a much better experience to have the agent stitch videos together than doing it yourself (i was using canva). i was trying @flymy_ai's media agent api for this over the weekend, which also works well and with other models
Runway@runwayml

Meet Runway Agent. Your new AI creative partner that helps you ideate and execute fully finished, sound-designed, and edited videos. All with just a simple conversation. From ads to shorts to content for social, Runway Agent makes it easy to make more of what you need. Get started on web at the link below.

RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
Ambrosia is an excellent on-device AI journaling application that runs fully locally for complete privacy, speed, and reliability. It enables seamless offline tracking and AI-powered insights directly on the device. Integration was handled using the @RunAnywhereAI SDKs. Thanks to @_amankishore for building and sharing this.
Aman@_amankishore

x.com/i/article/2047…

RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
David Friedberg speaking straight FACTS on @ChrisWillx's latest pod:

"This whole thing of data centers needs to be stopped. I actually don't think that data centers are going to have much to do with the benefits we're going to realize. So much of AI is going to sit at the edge. It's going to sit in embedded devices. It's going to sit on your desktop computer. It's going to sit on your iPhone. It's going to be ubiquitous. It's going to be everywhere."

Thanks, @friedberg. That's exactly why we're building @RunAnywhereAI: the on-device AI platform that actually lets models run blazing fast on the hardware people already own.

(And yes, that's me in the photo with the legend himself at the All-In holiday party. Vision officially locked in.)

Edge AI isn't coming. It's here.

#runanywhere #modernwisdom #allinpod
RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
Launching Inference Radar: our new weekly newsletter that tracks the top 130+ inference repositories, monitoring every commit, release, code change, and emerging trend across the ecosystem, and distilling it all into one clear briefing.
RunAnywhere (YC W26)@RunAnywhereAI

x.com/i/article/2042…

RunAnywhere (YC W26) reposted
Qualcomm @Qualcomm
This Week in #AI:
🔵 @Snapdragon platforms now have day-zero support for the @Googlegemma v4 model: bit.ly/4e1MSqL
🔵 Smart glasses edge closer to scale, with @barronsonline highlighting Qualcomm as a key driver of personal AI wearables: bit.ly/4sopdEI
🔵 Health Passport wins the Nexa AI x Qualcomm Bounty Program with an app that turns photos of health reports into structured records: bit.ly/4e1FeMY
🔵 @liquidai's LFM-2.5-350M model has day-zero support on Qualcomm Hexagon NPUs, enabled with @Zeticai_, @RunAnywhereAI, and @lmstudio: bit.ly/3PFXemh
RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
Last week's @ycombinator Demo Day still feels surreal. Standing in front of that welcome sign turned three intense months of W26 into a lifetime of growth.

@ShubhamMal72313 and I are proud of what we built at @RunAnywhereAI, and we somehow managed to present it to a room full of top investors without tripping over our words. The three months were intense but full of shared learning and YC community support.

Super grateful to our partners @sdianahu and @vivianmshen for their guidance and belief in us from day one, and to the entire YC community for the constant support and vibes.

In the end, we're finally crossing the "1mm mark of the marathon"… and we're just getting started. Here's to what comes next, together.

#runanywhere #ycombinator
RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
Pumping us up as always, @raphaelschaad the 🐐. Behind the scenes at @ycombinator Demo Day! #runanywhere #ycombinator
Raphael Schaad@raphaelschaad

Welcome to @ycombinator W26 Demo Day — nothing like this in-person energy. Fun fact: my own Demo Day was supposed to be in March 2020 (!)

Startups this batch have grown revenues 14% WoW on average, the fastest ever. Ten percent used to be best-in-class when I was a founder.

We have founders building the supply chain for robot parts, payment infra for AI agents, and a new TCP/IP. Seeing a glimpse of the future today.

RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
At @RunAnywhereAI we just extended MetalRT with S2S support: beating @Apple at their own game once again and delivering the FASTEST Speech-to-Speech engine on Apple Silicon right now, from the ONLY truly multimodal inference provider on the market.

- 1.68s best latency
- 1.52x faster than mlx-audio
- 123 tok/s generation

We crushed mlx-audio across short, medium, and long audio clips on @liquidai's LFM2.5-Audio-1.5B, 8-bit quantized, on a single M4 Max. Multimodal inference just hit warp speed; full voice-video-text fusion coming soon.

#ycombinator #runanywhere #ondeviceai #applesilicon #metalrt #S2S
Sanchit monga@sanchitmonga22

At @RunAnywhereAI we just extended MetalRT with 👀 support: beating @Apple at their own game once AGAIN and delivering the FASTEST VLM decode engine on the market for Apple Silicon right now.

- 279 tok/s vision decode
- 1.22× faster than mlx-vlm

We crushed mlx-vlm and llama.cpp across every configuration tested, on Qwen3-VL-2B-Instruct 4-bit quantized, across multiple image resolutions on a single M4 Max. Vision decode just hit warp speed! Video analysis coming soon :)

#ycombinator #runanywhere #metalrt #applesilicon #vlm #ondeviceai

RunAnywhere (YC W26) reposted
Sanchit monga @sanchitmonga22
At @RunAnywhereAI we just made VLM analysis warp-speed easy with MetalRT in RCLI. Grabbed a live $NVDA chart from the web, took a screenshot, and boom: Qwen3-VL-2B crushes the breakdown on my M4 Max in seconds. Trend spotting, levels, buy signals, all on-device. Vision decode at 279 tok/s changes everything. #ycombinator #runanywhere #ondeviceai #applesilicon #vlm #metalrt
Sanchit monga@sanchitmonga22

At @RunAnywhereAI we just extended MetalRT with 👀 support: beating @Apple at their own game once AGAIN and delivering the FASTEST VLM decode engine on the market for Apple Silicon right now.

- 279 tok/s vision decode
- 1.22× faster than mlx-vlm

We crushed mlx-vlm and llama.cpp across every configuration tested, on Qwen3-VL-2B-Instruct 4-bit quantized, across multiple image resolutions on a single M4 Max. Vision decode just hit warp speed! Video analysis coming soon :)

#ycombinator #runanywhere #metalrt #applesilicon #vlm #ondeviceai
