
Pyre (@Mad_dev)
building AI & robots; created consumer/enterprise products for 10+ million users; built data and AI teams at two Fortune 500 companies; health startup; biology PhD

@OpenAI o3-mini-high is the only model that can solve the 6×6 Einstein's Riddle (aka the Zebra Logic Puzzle) so far: not DeepSeek R1, not Google Gemini 2.0 Flash Thinking Experimental 01-21.
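For readers unfamiliar with the puzzle class: a Zebra/Einstein puzzle assigns attributes (colors, pets, drinks, …) to positions under a list of clues, and the 6×6 variant means six houses with six attribute categories. A minimal brute-force sketch of a toy 3-house instance, with clues invented here for illustration (this is not the benchmark puzzle from the post):

```rust
// Toy 3-house Zebra-style puzzle, brute-forced over permutations.
// colors[c] = house index of color c (0 = red, 1 = green, 2 = blue)
// pets[p]   = house index of pet p   (0 = dog, 1 = cat, 2 = fish)
// Clues (invented): red is immediately left of green; the dog lives
// in the blue house; the cat lives in the first house.

const PERMS: [[usize; 3]; 6] = [
    [0, 1, 2], [0, 2, 1], [1, 0, 2],
    [1, 2, 0], [2, 0, 1], [2, 1, 0],
];

fn solve() -> Option<([usize; 3], [usize; 3])> {
    for colors in PERMS {
        for pets in PERMS {
            let clue1 = colors[0] + 1 == colors[1]; // red immediately left of green
            let clue2 = pets[0] == colors[2];       // dog in the blue house
            let clue3 = pets[1] == 0;               // cat in the first house
            if clue1 && clue2 && clue3 {
                return Some((colors, pets));
            }
        }
    }
    None
}

fn main() {
    if let Some((colors, pets)) = solve() {
        println!("colors (red, green, blue) -> houses {colors:?}");
        println!("pets   (dog, cat, fish)   -> houses {pets:?}");
    }
}
```

The 3-house version has 6 × 6 = 36 candidate assignments; the 6×6 version has (6!)⁶ ≈ 1.4 × 10¹⁷, which is why models have to actually reason through clue propagation rather than enumerate.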






I've been working on this humble Claude Code alternative. In a nutshell: containerized by default, multi-provider (Anthropic, OpenAI, Gemini & Grok so far), self-building dev environments, 100% open-source, and 100% Go. The repo is brand new, only 1 ⭐️ 🥲.


🧵 I just reverse-engineered the binaries inside Claude Code's Firecracker MicroVM and found something wild: Anthropic is building their own PaaS platform called "Antspace" (Ants + Space). It's a full deployment pipeline — hidden in plain sight inside the environment-runner binary. Here's what I found 👇
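The thread doesn't show the tooling, but the usual first step in this kind of reverse engineering is a `strings(1)`-style scan of the binary for embedded identifiers and paths. A minimal sketch of that scan in Rust, run against a synthetic byte buffer since the actual environment-runner binary isn't available here (the "antspace" marker comes from the claim above; the other embedded strings are invented):

```rust
/// Extract printable-ASCII runs of at least `min_len` bytes,
/// the same heuristic the classic `strings(1)` tool uses.
fn extract_strings(data: &[u8], min_len: usize) -> Vec<String> {
    let mut out = Vec::new();
    let mut run: Vec<u8> = Vec::new();
    // Chain a trailing 0 so the final run gets flushed.
    for &b in data.iter().chain(std::iter::once(&0u8)) {
        if (0x20..0x7f).contains(&b) {
            run.push(b);
        } else {
            if run.len() >= min_len {
                out.push(String::from_utf8(run.clone()).unwrap());
            }
            run.clear();
        }
    }
    out
}

fn main() {
    // Synthetic stand-in for a compiled binary: non-printable bytes
    // surrounding a few embedded strings (all invented except the marker).
    let mut blob: Vec<u8> = vec![0x7f, b'E', b'L', b'F', 0x02, 0x01];
    blob.extend_from_slice(b"antspace/deploy/pipeline");
    blob.extend_from_slice(&[0x00, 0x01]);
    blob.extend_from_slice(b"https://example.internal/api"); // hypothetical
    blob.push(0xff);

    for s in extract_strings(&blob, 6) {
        if s.contains("antspace") {
            println!("hit: {s}"); // prints: hit: antspace/deploy/pipeline
        }
    }
}
```

Against a real MicroVM image you would point this (or plain `strings` piped to `grep`) at the extracted binary instead of a synthetic buffer.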






I wanted to understand world models from first principles, so I built JEPA primitives in Rust with Burn. Are LLMs a thing of the past already, and are world models the future? I don't know yet. But I like learning by building.

Lately I've been getting increasingly interested in world models (yes, mostly because of all the noise around @amilabs), so instead of only reading papers and hot takes, I built jepa-rs: a Joint Embedding Predictive Architecture library for world models, written in Rust. It provides modular, backend-agnostic building blocks for I-JEPA (images), V-JEPA (video), and hierarchical world models, built on top of the Burn deep learning framework. It includes a CLI and interactive TUI dashboard, safetensors checkpoint loading, ONNX metadata inspection, and a pretrained model registry for Meta Research models.

Repo: github.com/AbdelStark/jep…

For people not familiar with world models: an LLM predicts the next token in text. A world model tries to predict the latent state and dynamics of an environment, the way an animal (or a human baby) does. Text is one domain; the world is another. A world model learns abstract representations that capture what matters and ignore what doesn't. It doesn't need to predict every leaf on a tree; it needs to understand that trees sway in the wind. This is what AMI Labs means when it says: "Real intelligence does not start in language. It starts in the world."

World models are not a replacement for what LLMs are good at. For text, coding, and many other things, LLMs are amazing and won't be replaced by world models. But there are many areas where world models can enable interesting applications, robotics being the obvious one.

I have a strong intuition that there may be an interesting bridge between world models, safety, and verifiable computation: a new area where I can explore the use of STARK technology ;)
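To make the JEPA idea concrete: a context encoder embeds the visible part of the input, a predictor guesses the latent embedding of a masked target region, and the target encoder is an exponential-moving-average (EMA) copy of the context encoder, so the loss lives entirely in embedding space rather than pixel space. A dependency-free sketch of that training step in plain Rust, with trivial linear "encoders" and invented numbers; jepa-rs's actual Burn-based API will differ:

```rust
// Minimal JEPA-style training step in latent space (no autograd, no Burn):
// context encoder f, EMA target encoder g, predictor p.
// Encoders are single dot products for readability; real models are deep nets.

fn encode(w: &[f32], x: &[f32]) -> f32 {
    w.iter().zip(x).map(|(wi, xi)| wi * xi).sum()
}

/// Squared error between predicted and target latents: the JEPA loss
/// is computed on embeddings, never on raw pixels.
fn latent_loss(pred: f32, target: f32) -> f32 {
    (pred - target).powi(2)
}

/// EMA update: the target encoder slowly tracks the context encoder,
/// which prevents the trivial "everything maps to zero" collapse.
fn ema_update(target_w: &mut [f32], context_w: &[f32], tau: f32) {
    for (t, c) in target_w.iter_mut().zip(context_w) {
        *t = tau * *t + (1.0 - tau) * c;
    }
}

fn main() {
    let context_patch = vec![0.5, -1.0, 0.25]; // visible region (invented)
    let target_patch = vec![0.4, -0.9, 0.30];  // masked region (invented)

    let mut f_w = vec![0.1, 0.2, 0.3]; // context encoder weights
    let mut g_w = f_w.clone();         // target encoder starts as a copy
    let p_w = 1.0f32;                  // trivial scalar predictor

    for step in 0..3 {
        let ctx = encode(&f_w, &context_patch);
        let pred = p_w * ctx;                  // predicted target latent
        let tgt = encode(&g_w, &target_patch); // no gradient flows here
        let loss = latent_loss(pred, tgt);
        println!("step {step}: loss = {loss:.6}");

        // Toy gradient step on the context encoder only.
        let grad = 2.0 * (pred - tgt);
        for (w, x) in f_w.iter_mut().zip(&context_patch) {
            *w -= 0.1 * grad * p_w * x;
        }
        ema_update(&mut g_w, &f_w, 0.99);
    }
}
```

The key design choice this sketch illustrates is that the target side never receives gradients; it only moves via the EMA, which is what lets the model learn abstract representations instead of memorizing pixels.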

