

samjay
@BlackburnSamson
AI Architect MAG7 Fmr/InfoSec, Retweets may not be agreement. 3x Australian AI Awards Finalist. Phil Theo Dbl Ba / Comp Sci Dip. MSP exp, 6yrs IT


THIS GUY VIBE CODED A TOOL THAT TURNS ANY SVG INTO A 3D OBJECT YOU CAN SPIN, ANIMATE, AND EMBED. Drag in an SVG, type some text, or draw pixel art, and it becomes a 3D object instantly. Spin it around, animate it, and embed it on your site. Export as a 4K image or video. It runs entirely in your browser: nothing gets uploaded to any server, and no account is needed. 100% free AND it's open source. This is one of those tools you didn't know you needed until you see it.



🚨 Let me break down what Andrej Karpathy just said, because I don't think people understand how big this is. There are two AIs now: the free one that fumbles "should I drive or walk to the carwash" on your Instagram reels, and the $200/month one that can restructure an entire codebase in an hour and find security vulnerabilities in computer systems. The people laughing at AI and the people losing sleep over it are using two completely different products, and both are right about what they're seeing. The free version isn't broken by accident; companies aren't fixing it because it doesn't make money. The breakthroughs are in coding, math, and research, the stuff corporations pay for. Writing, search, and advice, the stuff regular people use, barely moved. AI has a class system, not an intelligence problem. The best version goes to whoever can afford it, and everyone else gets the version that's just good enough to keep you subscribed but never good enough to change your mind.

Here’s a secret that every genuinely original thinker knows: there are no original ideas. There are only original combinations. Every “breakthrough” is two existing ideas from different domains meeting for the first time inside someone’s head. The person who reads only within their field will only ever have ideas that their field has already had. This is why the most interesting people are almost always polymaths. Go wider. Read the thing that has nothing to do with your work. Talk to the person who has nothing in common with you. Visit the place that makes no sense on your itinerary. The irrelevant input is the one that will combine with everything else and produce something nobody’s ever seen.



Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing Like a State" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, freedom of information act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise:
* Detailed accounting of spending and budgets
* Diff tracking of legislation
* Individual voting trends w.r.t. stated positions or speeches
* Lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation)
* Procurement and contracting
* Regulatory capture warning lights
* Judicial and legal patterns
* Campaign finance

Local governments might be even more interesting because the governed population is smaller, so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
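The "diff tracking of legislation" example above is already mechanical at its core: comparing two versions of a bill is a text-diff problem. A minimal sketch using Python's standard-library difflib, on hypothetical bill excerpts (the bill text here is invented for illustration):

```python
import difflib

# Hypothetical excerpts from two versions of a bill (illustrative only).
old_bill = """Sec. 101. Appropriations.
The Secretary may allocate $50,000,000 for rural broadband.
Sec. 102. Oversight.
Quarterly reports shall be submitted to Congress.""".splitlines()

new_bill = """Sec. 101. Appropriations.
The Secretary may allocate $175,000,000 for rural broadband.
Sec. 102. Oversight.
Annual reports shall be submitted to Congress.""".splitlines()

# unified_diff yields headers plus context and +/- change lines.
diff = list(difflib.unified_diff(old_bill, new_bill, lineterm=""))

# Keep only the substantive additions/removals (skip the ---/+++ headers).
changed = [
    line for line in diff
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
for line in changed:
    print(line)
```

An AI layer on top of this would summarize and contextualize each hunk ("appropriation increased 3.5x"), but the raw diff extraction needs nothing beyond the standard library.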


The real story is the 14x compression ratio and what it means if it scales up. Every single weight in this model is one bit. Zero or one. That's it. 8.2 billion parameters stored in 1.15 GB of memory. A standard 8B model at full precision takes 16 GB. Bonsai 8B fits on your phone with room left over for your photo library.

The benchmarks are the part that shouldn't be possible. On standard evals, a model that's 1/14th the size of Qwen3 8B and Llama3 8B is trading punches with both of them. The intelligence density score, capability per GB, is 1.06/GB versus Qwen3 8B at 0.10/GB. That's a 10x gap in how much thinking you get per unit of storage.

Now zoom out. Big Tech collectively spent over $320 billion on data center capex last year. Amazon alone dropped $85.8 billion, up 78% year over year. Google committed $75 billion for 2025. The US power grid is buckling under AI demand. Data centers now consume 4.4% of all US electricity. Virginia, where most of them sit, saw electricity prices spike 267% over five years. Residential customers in Ohio are watching their bills climb 60% because utilities are spending billions on transmission infrastructure to feed server farms.

The entire AI scaling thesis runs on one assumption: intelligence requires massive compute. PrismML just published a proof point that the assumption might be wrong. Their CEO, Babak Hassibi, is a Caltech professor who spent years on the mathematical theory of neural network compression. The founding team is four Caltech PhDs. Khosla Ventures backed it. So did Cerberus, whose Amir Salek built the TPU program at Google. The 1.7B model runs at 130 tokens per second on an iPhone 17 Pro Max at 0.24 GB. The 4B hits 132 tokens per second on M4 Pro at 0.57 GB. These aren't research demos. They shipped llama.cpp forks with custom 1-bit kernels for CUDA and Metal. Apache 2.0 license. You can download and run it right now.

The trillion-dollar question: what happens to the economics of a $75 billion data center budget when the same intelligence fits in 1/14th the space and runs on 1/5th the energy?
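The headline sizes are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch (using decimal GB; the gap between the raw 1-bit figure and the reported 1.15 GB would be metadata/overhead, which is an assumption on my part):

```python
# Back-of-the-envelope check of the compression numbers quoted above.
params = 8.2e9   # 8.2B parameters
GB = 1e9         # decimal gigabytes

one_bit_gb = params / 8 / GB   # 1 bit per weight -> bytes -> GB
fp16_gb = params * 2 / GB      # fp16 baseline: 2 bytes per weight

print(f"raw 1-bit size: {one_bit_gb:.2f} GB")   # vs 1.15 GB reported (incl. overhead)
print(f"fp16 size:      {fp16_gb:.1f} GB")      # vs '16 GB' quoted
print(f"ratio vs reported 1.15 GB: {fp16_gb / 1.15:.0f}x")
```

The raw weights come to about 1.03 GB, and dividing the ~16.4 GB fp16 footprint by the reported 1.15 GB gives roughly 14x, matching the claimed compression ratio.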

Thank you @DarioAmodei, CEO of @AnthropicAI, for the opportunity to meet and discuss more investment in Australia and ways to maximise the economic benefits of AI while minimising the risks to people, creators and communities.


This guy is fascinating. Here’s Isaacs' "Knowledge Box" (1962). A 12-foot lightproof cube with 24 slide projectors mounted from the outside. Projections covered every inside surface with maps, numbers, words, photos, etc. in rapid succession. The idea was to overwhelm the mind into making connections. “[…] structured so that the student becomes the agent of synthesis rather than the hitchhiker the lecture system made of him.”




Today we're launching ARC-AGI-3: 135 novel environments (nearly 1K levels) we built by hand. It is the only unsaturated agent benchmark in the world. Each game is 100% human solvable; AI scores <1%. This gap between human and AI performance proves we do not have AGI. Agents today need human handholding. Agents that beat V3 will prove they don’t need that level of supervision.

Agents that beat V3 will demonstrate:
* Continual learning - Each level builds on top of the last. You can’t beat level 3 without carrying forward what you learned in levels 1 and 2.
* World modeling - Many of the environments require planning many actions ahead. AI will have no choice but to build an internal world model for how the environment works, run simulations “in its head”, and proceed with an action.

In our early testing, we’ve seen a few clear failure modes of AI:
* Anticipation of future events - If an environment requires that AI set up a scene, and then carry out a scenario (like in sp80), it starts to break down.
* Anchoring on an early hypothesis - Early in a game it comes up with a hypothesis (even if wrong) and refuses to update its beliefs later.
* Thinking it’s playing another game - AI thinks it’s playing chess or Pac-Man. The training data holds hard!

One major problem is there is too much data to carry forward in a single context. Models must learn what to remember and what to forget.

The agent that beats ARC-AGI-3 will have demonstrated the most authoritative evidence of progress towards general intelligence to date. We're excited to get this out and excited to see what you think.
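The "what to remember and what to forget" constraint at the end is, at its simplest, a bounded-memory selection problem. A toy Python sketch of that idea (everything here, including the salience scores and level facts, is hypothetical and not ARC-AGI-3 code; a real agent would have to learn the salience function itself):

```python
import heapq

class BoundedMemory:
    """Keeps only the `capacity` highest-salience facts; the rest are forgotten."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []      # min-heap of (salience, counter, fact)
        self._counter = 0    # tie-breaker so tuples never compare on `fact`

    def remember(self, fact: str, salience: float) -> None:
        item = (salience, self._counter, fact)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif salience > self._heap[0][0]:
            # Evict the least salient fact to make room (i.e., forget it).
            heapq.heapreplace(self._heap, item)

    def recall(self) -> list[str]:
        # Most salient first.
        return [fact for _, _, fact in sorted(self._heap, reverse=True)]

# Facts accumulated across levels; only the useful ones survive.
mem = BoundedMemory(capacity=3)
mem.remember("level 1: blue key opens blue door", salience=0.9)
mem.remember("level 1: background is grey", salience=0.1)
mem.remember("level 2: keys persist between rooms", salience=0.8)
mem.remember("level 2: music tempo changes", salience=0.2)
mem.remember("level 3: doors re-lock after 10 moves", salience=0.7)
print(mem.recall())
```

The hard part the benchmark targets is, of course, not the data structure but assigning salience before you know which facts level 3 will need.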

why can't they just use claude?

