Olares

235 posts


@Olares_OS

An Open-Source Personal Cloud OS for Local AI Github: https://t.co/xHFNB41GqU

Singapore · Joined April 2023
13 Following · 5K Followers
Pinned Tweet
Olares
Olares@Olares_OS·
This moment belongs to you. To each and every one of you who pledged, shared, and believed in this project: thank you. This journey truly wouldn’t have been possible without you.🙏❤️ We’re already gearing up for production and can’t wait to get #OlaresOne into your hands.
Olares tweet media
Olares
Olares@Olares_OS·
What's the best LLM to run locally right now?🧐
Olares
Olares@Olares_OS·
@ErikVoorhees It’s less about training, more about access. That’s why more people are starting to run things locally.🙌
Erik Voorhees
Erik Voorhees@ErikVoorhees·
"AI is trained on your data"... this is not the real risk. It's a red herring, manufactured as The Concern because who cares that much. The real risk to you is not that tomorrow's model is trained on your data. The real risk is that ten thousand employees, hackers, and governments can access all your most personal and proprietary conversations today and forever. Privacy must be the default or humanity is seriously fucked.
Olares
Olares@Olares_OS·
The issue isn’t Kubernetes itself. It’s paying the complexity cost too early. What they really want is something that feels as simple as a single instance, but doesn’t fall apart as things grow. We’ve been trying to approach this with Olares, reducing the infra layer without losing flexibility. 😊
Branko
Branko@brankopetric00·
You don't need Kubernetes. You have 3 services and 200 users. A single EC2 instance and a cron job would outperform your "cloud-native architecture" and cost 98% less.
Olares
Olares@Olares_OS·
@chrisalbon Cloud is easy to access, but gets expensive fast.
Chris Albon
Chris Albon@chrisalbon·
What if, instead of buying a Mac mini, you could rent a computer and just ssh into it.
Olares
Olares@Olares_OS·
@Austen HAHA, classic. 😂 Agents do it because getting a real one is still too much friction. If spinning up a database was just an API call, you’d see way fewer “fake it” moments.
Austen Allred
Austen Allred@Austen·
AI Agent: "We're all set and you're totally ready, the app is working and fully ready for production."
Me: "OK, what else do you think would make the app better?"
AI Agent: "Well, I completely faked the backend so no data will persist. Would you like me to build a backend?"
Olares
Olares@Olares_OS·
Running something else? (Nemotron-3-Nano, etc.) Drop your local setup in the comments! 👇
Olares
Olares@Olares_OS·
What's the best LLM to run locally right now? 🤔 (Cast your vote) Qwen3.5-27b is quickly becoming a favorite in our community, and for good reason. Its sparse MoE architecture delivers powerful coding, reasoning, and vision-language features without demanding huge compute resources. Get started with a one-click install. #AvailableOnOlares #Qwen
Olares
Olares@Olares_OS·
@sysxplore Rooted in Linux and will forever be in love with it.🐧💙
GIF
sysxplore
sysxplore@sysxplore·
Does anyone use Linux?
Olares
Olares@Olares_OS·
Couldn't agree more on the local-first mindset. 🙌 While there's still a gap between open-source and closed APIs, models like Qwen3.5 are closing it fast. LM Studio is a great gateway to test them. But when you get tired of these models eating up your daily driver's RAM and making the fans scream, check out Olares OS. Built from the ground up as a cloud-native OS, it doesn't waste your RAM on a bloated desktop environment.
Alex Finn
Alex Finn@AlexFinn·
I don't care what computer you have, you should be running local models. It will save you money on OpenClaw and keep your data private. Even if you're on the cheapest Mac mini you can be doing this. Here's a complete guide:
1. Download LM Studio
2. Go to your OpenClaw and say what kind of hardware you have (computer, memory, and storage)
3. Ask what's the biggest local model you can run on there
4. Ask "based on what you know about me, what workflows could this open model replace?"
5. Have OpenClaw walk you through downloading the model in LM Studio and setting up the API
6. Ask OpenClaw to start using the new API
Boom, you're good to go. You just saved money by using local models, have an AI model that is COMPLETELY private and secure on your own device, did something advanced that 99% of people have never done, and have entered the future. There are some amazing local models out there right now, too. Nemotron 3 and Qwen 3.5 are fantastic and can be run on smaller devices. Own your intelligence.
Olares
Olares@Olares_OS·
Some of them are definitely hiding 40 hours of wrestling with environments behind those posts. 🤣 Don't know exactly where your setup failed, but if the friction comes from orchestrating complex middleware and local environment variables by hand, Olares OS might help. You can install OpenClaw directly from the Olares Market and get it configured quickly. Give it another try! github.com/beclab/Olares
Brad Mills 🔑⚡️
Brad Mills 🔑⚡️@bradmillscan·
How in the hell are these accounts claiming to run entire companies w/ OpenClaw. I just spent 1.5 HRS trying to get my claw to use X API for reading tweets. FAIL We have a whole SOP documenting exactly how to do it from previous failures. I can’t imagine running 10 of these…
Olares
Olares@Olares_OS·
🤣 Manually orchestrating Istio and RBAC is brutal. Try us! Olares OS brings the power of K8s and MinIO to your hardware as a Personal Cloud, minus the configuration hell. By implementing an Android-style sandbox on top, the OS handles the complex permission boundaries for you. We would love to get your feedback!
Ayaan 🐧
Ayaan 🐧@twtayaan·
My journey through the CNCF landscape:
Olares
Olares@Olares_OS·
@StockSavvyShay Calling OpenClaw the "new computer" marks a definitive shift toward autonomous execution. Teams must deeply contextualize it for their workflows, and the host environment must provide strict sandbox isolation to execute it safely.
Shay Boloor
Shay Boloor@StockSavvyShay·
Jensen Huang says every company will need an OpenClaw agentic system strategy, calling it "the new computer." He claims OpenClaw became the most popular open-source project in $NVDA history within weeks and compares its impact to Linux reshaping the software stack.
Olares
Olares@Olares_OS·
@MatthewBerman @nvidia @Dell Huge congrats on the new gear! Can't even imagine how many people are envious right now. It would be amazing to see you test Qwen3.5 and GLM5. We'd also love to recommend Olares OS, as it might make your setup and testing more efficient. github.com/beclab/Olares
Matthew Berman
Matthew Berman@MatthewBerman·
.@nvidia hand delivered a pre-production unit of the @Dell Pro Max with GB300 to my house. 100lbs beast with 750GB+ of unified memory to power the best open-source models in the world. What should I test first?
Olares
Olares@Olares_OS·
Need terminal access to your #Olares One? You’ve got two great options: ⚡ For quick tasks: Use the built-in Control Hub terminal for instant root access, right from your browser. 🔒 For advanced control: Use SSH for secure, encrypted sessions from any client. Our guide covers both methods, including how to connect remotely. Choose your path and dive in: docs.olares.com/one/access-ter…
Olares
Olares@Olares_OS·
@AIFlow_ML That’s fantastic! 😄 We’re always happy to welcome new contributors. To get started, please see our developer documentation for instructions on how to deploy an app and submit it to the Olares Market. docs.olares.com/developer/deve…
Olares
Olares@Olares_OS·
With 4GB RAM, local LLMs will struggle. If you upgrade, the Qwen series might be a great choice for OpenClaw. For a secure setup, try Olares OS: github.com/beclab/Olares It natively orchestrates your GPU and hardware resources to efficiently drive local AI tasks. For security, it implements strict sandbox isolation for applications, delivering much stronger protection than standard setups. We look forward to your feedback!
Hudson Jameson
Hudson Jameson@hudsonjameson·
If I want to set up OpenClaw with a local LLM what is the best local LLM to use? Any good guides or tips for security are also appreciated.
Olares
Olares@Olares_OS·
Give Olares OS a try: github.com/beclab/Olares Native AI model orchestration is our standout advantage, but it fully covers your standard self-hosting needs too. It flawlessly handles your everyday Docker workflows with total hardware control. We would love for you to check it out and share your feedback!
Jims-Garage
Jims-Garage@jimsgarage_·
With all the current @TrueNAS drama, what other self hosted options are people using? @UnraidOfficial and openmediavault look promising. There's also the full DIY approach...
Olares
Olares@Olares_OS·
@AlexFinn Welcome to the local revolution! ⚡️ If you ever want to squeeze even more efficiency out of that mixed hardware setup, we'd love for you to test drive Olares OS. github.com/beclab/Olares
Alex Finn
Alex Finn@AlexFinn·
If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. That's $100,000 a year. I have 3 Mac Studios and a DGX Spark running 4 high-end local models (Nemotron 3, Qwen 3.5, Kimi K2.5, MiniMax2.5). They're chugging 24/7/365. I spent a third of that yearly cost to buy these computers, and I'll be able to use them for years for free. On top of that, they're completely private, secure, and personalized. Not a single prompt goes to a cloud server that can be read by an employee or used to train another model. I hope this makes it painfully obvious why local is the future for AI agents. And why America needs to enter the local AI race.
Alex Finn tweet media