Daniel Bevenius

341 posts

@dbevenius

Sweden · Joined March 2009
174 Following · 157 Followers
Daniel Bevenius retweeted
Georgi Gerganov @ggerganov
llama.cpp at 100k stars

Now that 90% of the code worldwide is being written by AI agents, I predict that within 3-6 months, 90% of all AI agents will be running locally with llama.cpp 😄

Jokes aside, I am going to use this small milestone as an opportunity to reflect a bit on the project and the state of AI from the perspective of local applications.

There is a lot to say and discuss, and yet it feels less and less important to try to make a point. Opinions about the viability of local LLMs are strongly polarized, details are overlooked, and the scientific approach is lacking. Arguments are predominantly based on vibes and hype waves. One thing is clear though: local LLMs are used more and more. I expect this trend to continue, and 2026 will likely end up being one of the most important years for the local AI movement.

I admit that I didn't expect the agentic era to come so quickly to the local LLM space. One year ago, the available models were too computationally expensive for long-context tasks. There wasn't an obvious path towards meaningful agentic applications; the memory and compute requirements were huge. Last summer, with the release of gpt-oss, things started to change. It was the first time we saw a glimpse of tool calling that actually works well within the resource constraints of our daily devices. Later in the year, even better models were released, and by now useful local agentic workflows are a reality.

Comparing local vs hosted capabilities at a given moment in time is pointless. To try to put things into perspective:

- We don't need frontier intelligence to automate searches and send emails
- We don't need trillion-parameter models to summarize articles or technical documents
- We don't need massive GPU data centers to control our home appliances or turn the lights off in the garage

I believe that there is a certain level of intelligence we as humans can comprehend and meaningfully utilize to improve our working process. Beyond that level, access to more intelligence becomes unnecessary at best and counterproductive at worst. I also believe that that level of useful artificial intelligence is completely within reach locally, and it has always been just a matter of implementing the right software stack to bring it to the end user. With llama.cpp, I am confident that we continue to be on the right track of building that software stack!

The llama.cpp project is going stronger than ever. With more than 1500 contributors, the project keeps growing steadily. From a technical point of view, I think that llama.cpp + ggml is the only solution that actually makes sense. That is, the software stack must run efficiently on every possible device, hardware, and operating system. The technology is too important to be vendor-locked. It has to be developed in the open, by the community, together with the independent hardware vendors. This is the only right way to build something that will truly make a difference in the long run.

I won't try to convince you about what is currently and will be possible with local AI. We will just continue to build as usual. I am confident that after the smoke clears and we look objectively at what we have built together, the benefits will be obvious to everyone.

Big shoutout to all llama.cpp maintainers. I feel extremely lucky to be able to work together with so many talented contributors. Every day I learn something new, and I feel there is so much more cool stuff that we are going to build. Also, I am really thankful that the project continues to have reliable partners to support it! Cheers!
Daniel Bevenius retweeted
Georgi Gerganov @ggerganov
In collaboration with NVIDIA, we announce support for the new NVIDIA Nemotron 3 Super model in llama.cpp. NVIDIA Nemotron 3 Super is a 120B open MoE model that activates just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
Daniel Bevenius retweeted
Xuan-Son Nguyen @ngxson
Qwen3-Coder-Next and Minimax-M2.1 are available on HF inference endpoints at $2.5/hr and $5/hr respectively. With context fitting supported, you can now utilize the largest context length possible for a given hardware setup. No more manually tuning the -c option!
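For context, -c is llama.cpp's context-size flag, and before automatic context fitting it had to be tuned by hand so the KV cache fit into available memory. A minimal sketch of the manual workflow the tweet above says is no longer needed (the model path, port, and context value are illustrative placeholders, not details from the tweet):

```shell
# Start llama-server with an explicitly chosen context size.
# Too large a -c value can fail to allocate the KV cache on the device;
# too small a value truncates long prompts. Context fitting removes this guesswork.
llama-server \
  -m ./models/my-model-Q4_K_M.gguf \
  -c 24576 \
  --port 8080
```

With context fitting, the server can instead pick the largest usable context for the hardware automatically.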
Daniel Bevenius retweeted
Georgi Gerganov @ggerganov
Introducing LlamaBarn, a tiny macOS menu bar app for running local LLMs. Open source, built on llama.cpp.
Daniel Bevenius retweeted
Xuan-Son Nguyen @ngxson
Hugging Face Inference Endpoints now support deploying GLM-4.7-Flash via llama.cpp for as cheap as $0.8/hr. Using Q4_K_M and a 24k-token context length - should be enough for most use cases!
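To try a similar Q4_K_M setup locally instead of on a hosted endpoint, recent llama.cpp builds can fetch a GGUF directly from the Hugging Face Hub. This is a hedged sketch: the -hf shorthand is assumed from current llama.cpp tooling (check `llama-server --help`), and the repo and quant names are placeholders, not a confirmed GLM-4.7-Flash repository:

```shell
# Assumed behavior: llama-server resolves <user>/<repo>:<quant> against the
# Hugging Face Hub, downloads the matching GGUF, and serves it on the given port.
llama-server -hf <user>/<model>-GGUF:Q4_K_M -c 24576 --port 8080
```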
Daniel Bevenius retweeted
Georgi Gerganov @ggerganov
Recent contributions by NVIDIA engineers and llama.cpp collaborators have resulted in significant performance gains for local AI.
Daniel Bevenius retweeted
Georgi Gerganov @ggerganov
HuggingFace just shipped in-browser GGUF editing. It allows you to edit GGUF metadata in the comfort of your browser, without even having to download the full model. This feature is enabled by the Xet technology that makes partial file updates possible.
Daniel Bevenius retweeted
Drogue IoT @DrogueIoT
Join us at the @EclipseCon Hacker Day! You'll get to program @microbit_edu with @rustembedded and connect them to the internet! Write @QuarkusIO applications that process the data (tweet when you jump?) and send commands back (play smoke on the water?). Anything is possible!
Lance Ball @lanceball
Just got home from the vet where my sweet sweet Jack was given two months time. I’m devastated. 😥 It has been a tough year so far.
Daniel Bevenius @dbevenius
I’m manually loading pez containers...there must be a better way 😔