ollama

7.6K posts

ollama
@ollama

https://t.co/1JpLwJ93nX

California, USA · Joined August 2023
10 Following · 133.9K Followers
ollama
ollama@ollama·
@MrRemKing The moment the GLM team launches it open-source! ❤️❤️❤️
English
1
0
2
29
ollama
ollama@ollama·
@k_k_kaundal That you'd have to ask Docker, since that isn't our product. You can use Ollama directly, or run Ollama inside Docker.
English
1
0
1
72
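
For context, one way to run Ollama inside Docker, sketched from the official ollama/ollama image; the model name below is only an example:

  # Start the Ollama server in a container, persisting models in a named volume
  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

  # Pull and chat with a model inside that container (model name is an example)
  docker exec -it ollama ollama run qwen3
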
K. K. 💫
K. K. 💫@k_k_kaundal·
@ollama Most of the time I use these models with an Ollama-based config, so can I use Docker models in VS Code with this update?
K. K. 💫 tweet media
English
1
0
0
70
ollama
ollama@ollama·
Visual Studio Code now integrates with Ollama via GitHub Copilot. If you have Ollama installed, any local or cloud model from Ollama can be selected for use within Visual Studio Code.
ollama tweet media
English
114
483
4.4K
318.3K
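
A rough sketch of the local setup implied here: with Ollama installed, pull a model and it can then be selected from the model picker in VS Code (the model name is just an example, not something stated in the post):

  # Pull a local model so it shows up as a selectable Ollama model
  ollama pull qwen3

  # Confirm the server is running and the model is available
  ollama list
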
ollama
ollama@ollama·
@tammireddy so sorry to hear that. May I ask how you are deploying it?
English
0
0
1
841
Krishna Tammireddy
Krishna Tammireddy@tammireddy·
@ollama After 70+ production deployments, the failure mode is always the same -- great in demo, wrong at 9am on a Tuesday. The adapt-to-your-workflow part is the only spec that actually matters.
English
1
0
1
1.1K
ollama
ollama@ollama·
ollama launch pi --model kimi-k2.5:cloud

Ollama can now launch Pi, the coding agent that powers OpenClaw. Designed to be a minimal coding harness that can be adapted to your workflows to create your own coding agent. Comes bundled with powerful primitives to build on, and can be extended with extensions, skills, prompt templates, and themes. All Pi packages work with Pi & Ollama, making it infinitely customizable for different tasks and use cases.
ollama tweet media
English
53
137
980
72.6K
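
For illustration, the same launch could presumably be pointed at a locally pulled model instead of a cloud one; whether --model accepts local model names is an assumption, and the model name is an example:

  # Pull a local model first, then hand it to Pi (assumption: --model accepts local tags)
  ollama pull qwen3
  ollama launch pi --model qwen3
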
ollama
ollama@ollama·
@k_k_kaundal What do you mean? Ollama can run models inside Docker.
English
1
0
1
309
K. K. 💫
K. K. 💫@k_k_kaundal·
@ollama So if I use Docker for the model, will it work?
English
1
0
0
335
ollama
ollama@ollama·
Customize Pi

All Pi packages work with Pi & Ollama, making it infinitely customizable for different tasks and use cases.

pi.dev/packages
English
1
2
31
7.7K
ollama
ollama@ollama·
Download Ollama: ollama.com/download

Try it directly with a cloud model or a model you have locally:

ollama launch pi --model kimi-k2.5:cloud
English
2
1
42
9.2K
Z.ai
Z.ai@Zai_org·
GLM-5.1 is available to ALL GLM Coding Plan users! z.ai/subscribe
Z.ai tweet media
English
350
561
5.5K
1.2M
ollama
ollama@ollama·
@LyalinDotCom More to come on this. Going to update Qwen 3.5 for Ollama to be the fastest
English
3
4
58
6.5K
Dmitry Lyalin
Dmitry Lyalin@LyalinDotCom·
I have a new hobby. Time for me to start sharing open model information without the hype and BS. I'm so tired of it (the BS on here), so I'm going to get informed first-hand and share everything I learn.

First, to ground me for the future, my test machine is an Apple MacBook Pro, the M5 Max (128GB RAM, 2TB drive, running macOS 26). I'm going to be using @opencode & @ollama since they work great with open models already. I might build a specific test harness, but let's see how it goes.

These will be just my real experiences, and often not super scientific. But I bet it will be more honest than most of the stuff you see here.

Today for example I'm testing qwen3-coder-next, the 79.7B parameter version (Q4_K_M quantization, 51GB on disk) with 262K context length. More info to come.
English
13
4
122
13.7K
OpenAgents
OpenAgents@OpenAgentsInc·
@TheCesarCross @ollama Yep and tool use, now preparing to bench Hermes agent doing real work via Psionic vs Ollama/llama.cpp, should be up tonight/tomorrow
English
2
1
8
372
OpenAgents
OpenAgents@OpenAgentsInc·
Episode 217: Psionic: Fast Qwen 3.5

We add Qwen 3.5 (0.8B/2B/4B/9B) support to Psionic and beat @ollama's inference speed across all four models.

Tokens per second on one NVIDIA 4080:
🏆 0.8B: Psionic 523.20, Ollama 328.72
🏆 2B: Psionic 247.21, Ollama 205.24
🏆 4B: Psionic 166.75, Ollama 141.62
🏆 9B: Psionic 102.68, Ollama 94.62

Thank you @Alibaba_Qwen for the awesome model and @OpenAIDevs for Codex's help to pretend we are ML engineers. 😆

Analysis & instructions to reproduce: github.com/OpenAgentsInc/…

We are happy to take more feature or model requests for Psionic, the worst and best ML library ever!
OpenAgents tweet media
OpenAgents@OpenAgentsInc

Episode 216: Psionic

"Python sucks. It's time to get the ecosystem off of Python and onto proper languages like Rust. We're going to rewrite PyTorch and everything relevant from Python land in Rust."

We introduce Psionic, our open-source Rust ML framework. It outperforms llama.cpp on local inference of GPT-OSS 20B; reproduces the Percepta blog post "Can LLMs Be Computers?"; and soon will support decentralized model training with compute providers paid in bitcoin. We'll use Psionic to train a new class of agent-centric executor models called 'Psion'. Psionic and Psion will be 100% open source, open weights, open data, open everything.

Issues & insults welcome: github.com/OpenAgentsInc/…

English
3
6
20
3.2K
Bharath
Bharath@Bharath_021·
@Zai_org @ollama I heard it's open source, so when can we use it on Ollama?
English
1
0
0
135
ollama
ollama@ollama·
@rohit_p_shirke Personal hardware is rapidly improving. We also offer Ollama’s cloud for anyone wanting to try the big models without a super fast computer locally. Choice is good for everyone! ❤️
English
2
0
14
2.5K
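
As a rough sketch of the two paths mentioned above, using the cloud model tags that appear elsewhere in this thread (exact tags are examples, and cloud models may require signing in first, if your Ollama version supports it):

  # Run a big model on Ollama's cloud instead of local hardware
  ollama signin
  ollama run kimi-k2.5:cloud

  # Or run a smaller model locally
  ollama run qwen3
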
Rohit Shirke
Rohit Shirke@rohit_p_shirke·
@ollama You forgot to add, needs only 256 gigs of ram for efficient processing! 😅
English
2
0
6
2.7K
ollama
ollama@ollama·
@Lulu38295199 Local on your computer or via Ollama's cloud
English
1
0
15
7.3K
Lulu
Lulu@Lulu38295199·
@ollama So, does VS Code auto-detect remote Ollama models over LAN?
English
1
0
0
8.8K
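
Not answered directly in the thread, but the usual way to point Ollama clients at a remote server over LAN is the OLLAMA_HOST environment variable; a sketch with a placeholder IP address, and whether VS Code picks this up automatically is not confirmed here:

  # On the machine hosting the models: listen on all interfaces, not just localhost
  OLLAMA_HOST=0.0.0.0:11434 ollama serve

  # On the client machine: point Ollama clients at the remote server
  export OLLAMA_HOST=http://192.168.1.50:11434
  ollama list
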
ollama
ollama@ollama·
@FoulkrodLarry sorry about that. We are improving the performance
English
0
0
0
44
Larry Foulkrod
Larry Foulkrod@FoulkrodLarry·
@ollama hi, minimax m2.7:cloud is just a lot slower than some of the other models like kimi and glm.
English
1
0
0
57
Radu
Radu@RamanduLight·
If you're not happy with the way Claude code is throttling during "peak" hours, I suggest you look into Ollama's Pro Cloud plan - I'm testing it with Kimi 2.5 and I'm having a blast. ollama.com/library/kimi-k…
English
6
0
20
1.8K
atomicbot.ai
atomicbot.ai@atomicbot_ai·
Run OpenClaw locally with @ollama 🦙
– Open source models: MiniMax, Qwen, Kimi and more
– One click install on macOS and Windows
– Local & cloud Ollama models connected

@atomicbot_ai where llamas meet claws 🦞
English
74
76
530
52.6K