OpenInfer
20 posts
@openInfer

OpenInfer, an AI Agent Engine with a cross-hardware OS, democratizing real-time intelligence, efficiency, and privacy

San Francisco, CA, USA · Joined November 2024
31 Following · 81 Followers
OpenInfer @openInfer
The "wow" phase of AI is over. We’ve entered the era of 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻. I’m excited to share my latest interview with @technewsworld. Special thanks to @jpmello for the great conversation on @OpenAI's 2026 strategy.

Key focuses:
𝗔𝗜 𝗮𝘀 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲: Moving from novelty to a foundational operating layer.
𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗙𝘂𝘁𝘂𝗿𝗲: AI agents solving real-world problems in health and science.
𝗗𝗲𝗹𝗶𝘃𝗲𝗿𝗶𝗻𝗴 𝗥𝗢𝗜: Scaling to meet global enterprise needs.

Now the real work of transforming how the world functions begins, driven by the need for transformational infrastructure, with @openInfer 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗶𝗻𝗳𝗿𝗮 to support this new era.

Full interview: technewsworld.com/story/openai-c…
#OpenAI #openInfer #AI #TechTrends #openinfer @TechNewsWorld
OpenInfer @openInfer
RT @bastani_behnam: 🎙️ New Podcast Episode I joined The Software Leaders UNCENSORED Podcast to talk about why the future of AI is at the e…
OpenInfer reposted
Behnam Bastani @bastani_behnam
Bringing inference to the edge requires massive innovation in the memory system. By restructuring how inference on the edge is run, we are sharing a capability that addresses the lack of meaningful on-device memory. Our latest release lets models hold persistent context, reason over larger spans, and collaborate intelligently, all running locally on the OpenInfer engine. This is how we break past edge memory limits.

🔗 openinfer.io/demos/mementos/
#edgeAi #openinfer #mementos #inference
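[Editor's illustration] The tweet above describes persistent, user-controlled on-device memory. As a rough sketch of the local-first idea only (the OpenInfer engine's actual API is not shown in this thread, so `MementoStore`, `remember`, and `recall` are hypothetical names): context is written to a local file, so it survives restarts without a cloud round-trip, and private entries are surfaced only when the user explicitly allows it.

```python
import json
import tempfile
from pathlib import Path

class MementoStore:
    """Persist conversation context to local disk so a model can
    reload it across sessions, with no cloud round-trip."""

    def __init__(self, path: Path):
        self.path = path
        self.mementos = json.loads(path.read_text()) if path.exists() else []

    def remember(self, text: str, private: bool = False) -> None:
        # The user decides what is stored and whether it is private.
        self.mementos.append({"text": text, "private": private})
        self.path.write_text(json.dumps(self.mementos))

    def recall(self, include_private: bool = False) -> list[str]:
        # Assistants see private entries only when explicitly allowed.
        return [m["text"] for m in self.mementos
                if include_private or not m["private"]]

# Usage: context lives in a local file the user controls.
store = MementoStore(Path(tempfile.mkdtemp()) / "mementos.json")
store.remember("User prefers metric units")
store.remember("Home address", private=True)
print(store.recall())  # private entries are withheld by default
```

A real engine would pair this kind of store with retrieval over larger context spans; the sketch only shows the ownership model: what is remembered, what stays private, and when an assistant may read it.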
OpenInfer @openInfer
Here’s how it works:
1️⃣ Submit a one-pager idea by Oct 3, 2025 → hackathon@openinfer.io
2️⃣ We review & select top concepts
3️⃣ Finalists present live in San Mateo
4️⃣ Winners pitch to top VCs + access OpenInfer early!
OpenInfer @openInfer
This is your chance to:
✅ Shape the future of edge-first, private AI
✅ Get in front of top-tier investors
✅ Turn your product vision into reality with our support
OpenInfer @openInfer
𝗧𝗼𝗱𝗮𝘆 𝘄𝗲’𝗿𝗲 𝘀𝗵𝗮𝗿𝗶𝗻𝗴 𝗮 𝗰𝗼𝗻𝗰𝗲𝗽𝘁 𝗱𝗲𝗺𝗼 𝗼𝗳 Local First 𝗠𝗲𝗺𝗲𝗻𝘁𝗼𝘀. See what it means when 𝘆𝗼𝘂 control your AI memories — what is remembered, what stays private, and when assistants can access it. 𝗧𝗿𝘆 𝗶𝘁 𝗼𝘂𝘁. 𝗙𝗲𝗲𝗹 𝘁𝗵𝗲 𝗽𝗼𝘄𝗲𝗿 𝗼𝗳 𝗹𝗼𝗰𝗮𝗹-𝗳𝗶𝗿𝘀𝘁. 🔗studio.openinfer.io/mementos #OpenInfer #LocalFirstAI #Mementos #AIprivacy
OpenInfer @openInfer
𝗠𝗲𝗺𝗲𝗻𝘁𝗼𝘀 show what it means to be in control of your AI memories. 𝗬𝗼𝘂 decide what is remembered, what stays private, and when assistants can access it. 𝗧𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗶𝘀 𝗹𝗼𝗰𝗮𝗹-𝗳𝗶𝗿𝘀𝘁. #OpenInfer #LocalFirstAI #AIprivacy #Mementos
OpenInfer @openInfer
Programs like this accelerate breakthroughs in AI. At @openInfer, we build the local-first AI inference system that turns innovation into deployable solutions.
OpenInfer @openInfer
We got Llama 4 Scout running locally — where it normally would not fit. Thanks to OpenInfer Studio, large models now deploy smoothly on the edge. 🚀 No heavy lifting. Just fast, scalable inference. 🎥👇 #AI #EdgeAI #OpenInfer
OpenInfer @openInfer
OpenInfer has closed an oversubscribed $𝟖 𝐦𝐢𝐥𝐥𝐢𝐨𝐧+ 𝐬𝐞𝐞𝐝, our first round of funding. We’re building an inference engine optimized for edge AI, making large-scale AI inference seamless across all hardware surfaces. Read more about what we’re doing in @VentureBeat:
OpenInfer @openInfer
🚀 Bringing AI to the Edge with Zero-Rewrite Inference 🚀 AI inference shouldn’t require a rewrite. OpenInfer is a high-performance AI engine that seamlessly integrates into your stack, delivering datacenter-scale efficiency on edge devices. Here’s what OpenInfer enables: ✅
OpenInfer @openInfer
🚀 𝐎𝐩𝐞𝐧𝐈𝐧𝐟𝐞𝐫 𝐉𝐮𝐬𝐭 𝐆𝐨𝐭 𝐅𝐚𝐬𝐭𝐞𝐫: 𝟐-𝟑𝐱 𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 𝐁𝐨𝐨𝐬𝐭 𝐎𝐯𝐞𝐫 𝐋𝐥𝐚𝐦𝐚.𝐜𝐩𝐩 / 𝐎𝐥𝐥𝐚𝐦𝐚

Why it matters: OpenInfer runs 2-3x faster than Llama.cpp & Ollama on 𝐃𝐢𝐬𝐭𝐢𝐥𝐥𝐞𝐝 𝐃𝐞𝐞𝐩𝐒𝐞𝐞𝐤-𝐑𝟏 (𝐐𝐰𝐞𝐧𝟐 𝟏.𝟓𝐁, 𝐐𝐰𝐞𝐧𝟐 𝟕𝐁, 𝐚𝐧𝐝 𝐋𝐥𝐚𝐦𝐚 𝟖𝐁).
🔹 Benchmarked on an AMD Ryzen 9 7845HX, with significantly higher performance on Distilled DeepSeek-R1 models
🔹 Optimizations: smarter quantization, faster memory access, model-specific tuning

Next up: 𝐆𝐏𝐔 𝐨𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐬 for even greater speed.

#AIInference #EdgeAI #MachineLearning #Performance #OpenInfer #DeepSeek #Llama3 #AIModels #Benchmarking #EfficiencyMatters
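[Editor's illustration] For readers curious how a tokens-per-second comparison like the one above is typically measured: time a fixed-length generation and divide. A minimal sketch, assuming a callable engine interface; `run_engine` and `fake_engine` are hypothetical stand-ins, not the OpenInfer, llama.cpp, or Ollama APIs, and the number printed here says nothing about real hardware.

```python
import time

def tokens_per_second(run_engine, prompt: str, n_tokens: int) -> float:
    """Time a fixed-length generation and report decode throughput."""
    start = time.perf_counter()
    run_engine(prompt, n_tokens)  # generate exactly n_tokens tokens
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

def fake_engine(prompt: str, n_tokens: int) -> None:
    # Placeholder workload standing in for real token generation.
    time.sleep(0.001 * n_tokens)

# A "2-3x faster" claim means this figure is 2-3x higher for the
# same model, quantization, and hardware.
print(f"{tokens_per_second(fake_engine, 'Hello', 32):.1f} tokens/s")
```

In practice, fair comparisons also pin the prompt, the quantization format, and the thread count, and separate prefill from decode throughput.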