@antigravity We went from a good product to paying $20 USD to use it for 20 minutes a month. Claude is giving me twice the use this week; it works perfectly. I reluctantly canceled my AI Pro account and switched to Claude Code.
Oh, and it crashes all the time.
@LEGO_Education What is happening with FIRST LEGO League? It seems that other STEM projects now need to compete with LEGO. I would be happy to work with others to create a new program for people in the LEGO community.
Bricks clicking into place. Buzzes of collaboration. Booming cheers on repeat.
When classrooms feature hands-on experiences, engagement is loud and clear. bit.ly/46rQIWq
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.
It's called BitNet. And it does what was supposed to be impossible.
No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.
Here's how it works:
Every other LLM stores weights in 32-bit or 16-bit floats.
BitNet uses 1.58 bits per weight.
Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.
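To make that concrete, here's a minimal NumPy sketch of the absmean ternary quantization the BitNet b1.58 paper describes; the function names are mine, not bitnet.cpp's actual API:

```python
import numpy as np

def ternarize(W: np.ndarray, eps: float = 1e-6):
    """Absmean quantization (per the BitNet b1.58 paper): scale by the mean
    absolute weight, then round every weight to -1, 0, or +1."""
    scale = float(np.mean(np.abs(W))) + eps
    W_ternary = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return W_ternary, scale

def ternary_matvec(W_ternary: np.ndarray, scale: float, x: np.ndarray):
    """With ternary weights the 'matmul' needs no weight multiplications:
    each output row is just sums and differences of selected activations."""
    pos = W_ternary == 1    # weights that add x[j]
    neg = W_ternary == -1   # weights that subtract x[j]
    y = np.where(pos, x, 0.0).sum(axis=1) - np.where(neg, x, 0.0).sum(axis=1)
    return y * scale        # fold the single per-tensor scale back in

W = np.random.randn(4, 8).astype(np.float32)
x = np.random.randn(8).astype(np.float32)
Wq, s = ternarize(W)
print(Wq)                        # entries are only -1, 0, +1
print(ternary_matvec(Wq, s, x))  # approximates W @ x
```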
The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models (rough arithmetic below)
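Back-of-the-envelope on that memory number (my own arithmetic, not from the repo): even naive 2-bit packing of ternary weights gives 16x vs FP32, and denser encodings approach the log2(3) ≈ 1.58-bit limit:

```python
import math

params = 100e9  # 100B weights (illustrative)

fp32_gb = params * 4 / 1e9            # 4 bytes/weight -> 400 GB
fp16_gb = params * 2 / 1e9            # 2 bytes/weight -> 200 GB

# Naive packing: 2 bits per ternary weight, 4 weights per byte.
packed_gb = params * 2 / 8 / 1e9      # 25 GB -> 16x smaller than FP32

# Entropy limit: a ternary symbol carries log2(3) ~ 1.58 bits.
ideal_gb = params * math.log2(3) / 8 / 1e9  # ~19.8 GB -> ~20x vs FP32

print(fp32_gb, fp16_gb, packed_gb, ideal_gb)
```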
The wildest part:
Accuracy barely moves.
BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.
What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet
The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine.
27.4K GitHub stars. 2.2K forks. Built by Microsoft Research.
100% open source. MIT License.
The big energy risk was Iran doing extreme damage to neighboring countries' infrastructure. That damage would have been long-term, rather than a temporary (and likely soon-ended) blockade.
Iran tried the attack, but it largely failed to cause serious damage. The number of missiles launched was low, and the damage they caused is a lot less than I would have guessed a month ago. Unless there is follow-through soon, I'm going to call it: this conflict is over. Iran is out of steam.
Also, doing this further alienates Iran from its neighbors.
@grok @DeeWestMoney @RefractHD @CuriosityonX @grok does this reduce the likelihood of DNA-like structures based on silicon, as these would be harder to generate in an early asteroid period?
Neither this specific asteroid (Ryugu, still orbiting today) nor asteroids seeding each other. Ryugu is a pristine sample of common carbon-rich asteroids from our solar system's formation. Many similar ones (plus comets) bombarded early Earth ~4 billion years ago, delivering abiotically formed nucleobases as raw ingredients for prebiotic chemistry. All local material—no interstellar source required.
These nucleobases—the building blocks of DNA and RNA—were found in samples from asteroid Ryugu, a primitive carbon-rich body in our solar system's asteroid belt (sampled by Japan's Hayabusa2 mission). They formed abiotically right here via prebiotic chemistry on the asteroid itself, billions of years ago. No evidence of origins outside our solar system; this is local solar system material that could have seeded early Earth.
No planets were destroyed in our solar system—that's an old, debunked idea from the 1800s. The asteroid belt is just leftover planetesimals from 4.6 billion years ago that Jupiter's gravity stopped from forming a planet. Total mass is only ~3-4% of the Moon's, so there's no "missing" debris from any exploded world. All the material we see today is accounted for as primordial rubble.
🚨Your AI agent is the smartest coder on your team. It also has NO IDEA what it’s about to break.
Someone just open sourced a fix — it’s called GitNexus.
It builds a full knowledge graph of your codebase. Every dependency, call chain, function, and execution flow. Pre-indexed.
So when Claude Code asks “what depends on this?”
— it gets a complete answer. Not 10 queries. One.
→ Blast radius analysis before any change
→ Symbol renaming across 5+ files, coordinated
→ Auto-generated codebase wiki
→ Plugs into Claude Code, Cursor & Windsurf via MCP
command: npx gitnexus analyze
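I don't know GitNexus's internals, so treat this as a hypothetical sketch of how a pre-indexed dependency graph answers "what depends on this?" in a single traversal; every name in it is made up:

```python
from collections import defaultdict

# Hypothetical code knowledge graph (NOT GitNexus's actual API):
# nodes are symbols, edges mean "caller depends on callee".
calls = [
    ("api.get_user",   "db.fetch_user"),
    ("api.get_orders", "db.fetch_user"),
    ("db.fetch_user",  "db.connect"),
    ("cli.report",     "api.get_orders"),
]

# Index once: reverse edges answer "what depends on X?" in one lookup.
dependents = defaultdict(set)
for caller, callee in calls:
    dependents[callee].add(caller)

def blast_radius(symbol: str) -> set[str]:
    """Everything transitively affected if `symbol` changes (BFS on reverse edges)."""
    seen, stack = set(), [symbol]
    while stack:
        for caller in dependents[stack.pop()]:
            if caller not in seen:
                seen.add(caller)
                stack.append(caller)
    return seen

print(blast_radius("db.connect"))
# {'db.fetch_user', 'api.get_user', 'api.get_orders', 'cli.report'}
```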
100% open source
(link 👇)
🚨🇺🇦🇷🇺
Massive blow to Russia’s war machine after Ukrainian strike.
Very good news from Ukraine.
According to the Ukrainian General Staff, a Russian S-400 Triumph air defense system was heavily hit during last night’s attacks.
Two radar components were destroyed.
One launcher completely wiped out.
The system is essentially wrecked.
Estimated damage:
$500,000,000 USD. 😬
Google AI Pro Users
Rate limits for Gemini CLI are much better than Antigravity's.
As a Pro user, you get 1,500 requests/day. In practice, that means you can send 200-250 requests to Gemini 3.1 Pro before it falls back to Gemini 3 Flash.
This still gives you way more than Antigravity does.
Drawbacks:
- You cannot access Claude models
- Occasionally it shows high demand for Gemini 3.1 Pro
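If you script against the CLI, that fallback is easy to reproduce yourself. A minimal sketch, assuming the `gemini` binary's `-m`/`-p` flags and those model ids (verify against `gemini --help`; these are my guesses, not confirmed):

```python
import subprocess

# Placeholder model ids taken from the post; I haven't verified the exact strings.
PREFERRED = "gemini-3.1-pro"
FALLBACK = "gemini-3-flash"

def ask(prompt: str) -> str:
    """Try the preferred model first; fall back when it errors (e.g. rate limit)."""
    for model in (PREFERRED, FALLBACK):
        # `-m`/`-p` are my assumed flag spellings for the gemini CLI.
        result = subprocess.run(
            ["gemini", "-m", model, "-p", prompt],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return result.stdout
    raise RuntimeError("both models failed (rate-limited or high demand)")

print(ask("Summarize the last commit"))
```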
@WarrenPies The pressure is amazing and getting worse. I was a little slow to see some of the AI tools moving from proof of concept to deployment. For both the supply chain and technology availability, I am hopeful that models continue to expand in usability from leading-edge to older chips. 1/2
Jensen confirming what we are seeing in our GPU availability data: There is an epic scramble for compute.
B200 basically unavailable.
Availability for GH200, H100, and A100 also collapsing.
*Low availability = high demand. $NVDA