Harm Aarts

6.7K posts

@haarts

Not really a programmer

Utrecht · Joined July 2007
1.1K Following · 1.7K Followers
Harm Aarts @haarts ·
@NousResearch Is that self-improvement feature the same as `RL Training (Tinker-Atropos training tools)` in the setup wizard?
2 replies · 0 reposts · 0 likes · 150 views
Harm Aarts @haarts ·
@SoveyX How much torque can it apply? Genuinely curious.
0 replies · 0 reposts · 0 likes · 8 views
Sovey @SoveyX ·
AI is gonna take your job and your girl.
1.6K replies · 2.6K reposts · 26.4K likes · 3.1M views
Nour Eddine Hamaidi @NOOROU ·
@Teknium Using it with Paperclip and GLM 5 is really enjoyable. Thanks for this great piece of software.
3 replies · 0 reposts · 14 likes · 1.2K views
Harm Aarts reposted
Teknium (e/λ) @Teknium ·
Hermes Agent v0.4.0 — 300 merged PRs this week. Biggest release we've done. Background self-improvement, OpenAI Responses API endpoint for your agent, new messaging platforms, new providers, MCP server management, and a lot more.
[image]
68 replies · 54 reposts · 716 likes · 117.8K views
Harm Aarts @haarts ·
@DVATW What a drama queen. Sure, perhaps not the best decision in hindsight. But 'suicide'? Are you just fishing for clicks?
0 replies · 0 reposts · 2 likes · 117 views
David Vance @DVATW ·
This is what economic suicide looks like:
1.2K replies · 12.6K reposts · 66K likes · 4.5M views
John Heritage @johnrheritage ·
@SSCryptoCurrent @heygurisingh The 4T means trained on 4 trillion tokens - but so far they've only published up to an 8 to 10B parameter model, according to this site.
2 replies · 0 reposts · 4 likes · 275 views
Guri Singh @heygurisingh ·
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible.

No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: Accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine.

27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.

882 replies · 2.7K reposts · 15.4K likes · 2.2M views
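The ternary trick the post describes is easy to sketch. Below is a toy absmean-style quantizer in pure Python: round each weight against the tensor's mean absolute value, clamp to {-1, 0, +1}, and keep that single scale for dequantization. The function names, the 1e-8 epsilon, and the toy data are illustrative assumptions; this is not Microsoft's bitnet.cpp code, which additionally packs the ternary values at ~1.58 bits each and runs integer kernels.

```python
import random

def ternary_quantize(w):
    """Quantize a list of float weights to {-1, 0, +1} plus one shared scale.

    Absmean scheme: divide by the mean absolute weight, round, then clamp
    to the ternary range.
    """
    scale = sum(abs(x) for x in w) / len(w) + 1e-8  # per-tensor absmean scale
    q = [max(-1, min(1, round(x / scale))) for x in w]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from ternary values and the scale."""
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0.0, 0.02) for _ in range(1024)]
q, scale = ternary_quantize(weights)
print(set(q))                  # only values from {-1, 0, 1} appear
print(len(q) == len(weights))  # True
```

The one shared float scale per tensor is why accuracy holds up better than naive rounding: the matrix multiply becomes integer adds and subtracts, with a single multiply at the end.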
Harm Aarts @haarts ·
@johncodes Sorry if I misunderstand, but the agent needs to write somewhere too, right? How does persistence and agent memory work?
1 reply · 0 reposts · 2 likes · 624 views
John McBride @johncodes ·
🧃 Introducing stereOS: a Linux-based operating system hardened and purpose-built for AI agents.

It's clear that agents need an ACTUAL operating system (not what people are calling an "OS") to witness the full breadth and depth of their capabilities while mitigating the blast radius of autonomous, untrusted actors. But there are so many problems with AI sandboxes today:
* Going out to the Apple store and buying a Mac mini will never scale and is way too expensive (obviously)
* Running in Docker is too restrictive (agents can't stand up their own container infrastructure, no sub-virtualization, docker-in-docker is very broken)
* Firecracker strips all the hardware, so GPU PCIe passthrough, secure boot, FIPS, etc. is out of the question.
* Native VMs are too fat, and the overhead of 1 agent per VM is too much.

stereOS takes a different approach: it's a full NixOS system that you boot and then kick off agent sandboxes inside with gVisor + /nix/store namespace mounting. Each agent gets their own kernel, and the /nix/store is read-only by nature. Even if the agent was somehow able to escape the gVisor virtual kernel, they'd land on the NixOS system as the "agent" user! Not your actual hardware!!

If you want to take a defense-in-depth approach, we support "native" agents that run at the system level, kicked off by our `agentd` utility. These agents, on their own, can manage and kick off other sub-agents using the internal sandboxing mechanisms.

Today, we're open sourcing all of this:
* stereOS: our purpose-built Linux OS - github.com/papercomputeco…
* masterblaster: client utility to launch, manage, and orchestrate agents - github.com/papercomputeco…
* stereosd: the stereOS system control plane daemon - github.com/papercomputeco…
* agentd: the stereOS system agent management daemon - github.com/papercomputeco…

Give it a try, throw us a star, and let me know what you think 🧃⭐️

160 replies · 244 reposts · 1.9K likes · 148.6K views
Harm Aarts @haarts ·
Who IS this clown?
Quoting Nick Sortor @nicksortor:
🚨 #BREAKING: Justin Trudeau has just announced a retaliatory 25% tariff on American goods. America has SEVERAL states with larger economies than the entire country of Canada. You’re about to FAFO, Little Castro. You will not win this.
0 replies · 0 reposts · 0 likes · 257 views
Wall St Engine @wallstengine ·
Coinbase $COIN is bringing back Bitcoin-backed loans for U.S. customers, excluding New York. Users can borrow against their Bitcoin without selling, marking a significant step into decentralized finance (DeFi).

This time, Coinbase isn’t acting as the lender. The loans are powered by Morpho, a DeFi protocol with $3.7B in deposits. Borrowed funds are secured by converting BTC into Coinbase's cbBTC, a DeFi-compatible token.

The move comes after Coinbase shut down a similar program in 2023 due to SEC scrutiny and reduced demand. Unlike previous crypto lending platforms that collapsed during the 2022 crypto winter, Coinbase says this program ensures user custody and leverages DeFi principles like overcollateralization to manage risks.

While users gain access to DeFi lending without the complexity, they still bear the risks of liquidation and variable interest rates. With billions in BTC on Coinbase, this could inject significant liquidity into the DeFi ecosystem.

[image]
18 replies · 32 reposts · 145 likes · 99.4K views
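The overcollateralization mechanics the post refers to can be illustrated in a few lines: a loan must stay below some fraction of the collateral's dollar value, and the position becomes liquidatable once its loan-to-value (LTV) ratio crosses a threshold. The 50% max LTV and 86% liquidation LTV below are made-up illustrative parameters, not Coinbase's or Morpho's actual terms:

```python
def max_borrow_usd(btc_collateral, btc_price_usd, max_ltv=0.50):
    """Largest loan an overcollateralized position allows (hypothetical 50% max LTV)."""
    return btc_collateral * btc_price_usd * max_ltv

def is_liquidatable(debt_usd, btc_collateral, btc_price_usd, liq_ltv=0.86):
    """A position is liquidatable once loan-to-value reaches the threshold."""
    ltv = debt_usd / (btc_collateral * btc_price_usd)
    return ltv >= liq_ltv

# Borrow $30,000 against 1 BTC priced at $100,000.
debt = 30_000.0
print(max_borrow_usd(1.0, 100_000.0))         # 50000.0
print(is_liquidatable(debt, 1.0, 100_000.0))  # False: LTV = 0.30
print(is_liquidatable(debt, 1.0, 34_000.0))   # True: LTV ≈ 0.88 after a price drop
```

Note that the debt stays fixed while the collateral's price moves, which is exactly the liquidation risk the post warns borrowers still bear.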
Harm Aarts @haarts ·
I've been trying to coax my Sonos speakers into playing something, anything, for the last 30 minutes. I'm glad New Glenn launched successfully; otherwise I'd be very annoyed.
0 replies · 0 reposts · 2 likes · 211 views
Harm Aarts @haarts ·
@BobMcElrath El Salvador yielded to the demands of the IMF. They wanted the loan more than they wanted their Bitcoin economy.
0 replies · 0 reposts · 0 likes · 17 views
Bob McElrath @BobMcElrath ·
@haarts IMF loans can be extremely predatory, and countries don't usually have another option. Bitcoin is a new bargaining chip.
1 reply · 0 reposts · 0 likes · 32 views
Harm Aarts @haarts ·
Are they also going to tell us what those #grenscontroles (border controls) actually delivered? No? Surprising.
0 replies · 0 reposts · 0 likes · 196 views
Harm Aarts @haarts ·
@fsiefken Imagine the people fooled by that theatre... But you are probably right on the first count.
0 replies · 0 reposts · 0 likes · 63 views
Harm Aarts reposted
Dick Schoof Minister-President @MPDickSchoof ·
Once again, people: don't share these kinds of videos where Geert Wilders gets exposed. It could easily cost him another 5 seats. We don't want that, do we? #WegMetWilders
88 replies · 711 reposts · 2K likes · 185.6K views
Harm Aarts @haarts ·
@elonmusk They are pricing in externalities. As one should.
0 replies · 0 reposts · 0 likes · 14 views