samjay

7.5K posts

@BlackburnSamson

AI Architect MAG7 Fmr/InfoSec, Retweets may not be agreement. 3x Australian AI Awards Finalist. Phil Theo Dbl Ba / Comp Sci Dip. MSP exp, 6yrs IT

Australia · Joined November 2013
757 Following · 380 Followers
Pinned Tweet
samjay @BlackburnSamson ·
All these AI releases got me feeling some type of way. What a vibe this year is shaping up to be. Ty @OfficialLoganK Google, Ty @sama OpenAI, Ty @cursor_ai , Ty @DarioAmodei Anthropic , Ty @finkd Meta, Ty @elonmusk xAI Bring on the // e x p a n d e d // a g e n c y //
samjay @BlackburnSamson ·
Facts. But the average public doesn’t believe Applied AI people who say this. Only those who build it.
samjay retweeted
Hirokazu Yokohara @Yokohara_h ·
Not video generation, but image generation. This is real-time image generation from a webcam, and unlike the frame interpolation of video generation, it somehow feels warm. I like it.
samjay retweeted
VraserX e/acc @VraserX ·
This Sam Altman story is chilling. He says the current AI moment feels like the final night before the world realized COVID would change everything. A few people saw what was coming. Everyone else kept partying. That is exactly what AI feels like now. We are not waiting for the disruption. We are living in the last moments before everyone notices it.
samjay @BlackburnSamson ·
When the internet came to life, it took approximately 20 years for near-complete control to be established. During that period, crimes could not be hidden by prosecutors. AI in citizens' hands creates a near-perfect replica of that window: prosecutors' crimes can be brought to light a second time, at greater magnitude and in shorter order. That would be a reason for self-interested groups to want hyper-regulation before citizen access.
Andrej Karpathy@karpathy

Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing like a state" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, freedom of information act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise: detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
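The lobbying chain in the quoted tweet (lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation) is essentially a path query over a directed graph. A minimal Python sketch of that idea, where every entity name and edge label is hypothetical, not drawn from any real dataset:

```python
# Toy influence graph: each key is a (source, target) edge, each value a
# relationship label. All names here are made up for illustration.
influence = {
    ("LobbyistA", "FirmB"): "employed_by",
    ("FirmB", "ClientC"): "retained_by",
    ("ClientC", "LegislatorD"): "donated_to",
    ("LegislatorD", "CommitteeE"): "sits_on",
    ("CommitteeE", "Vote101"): "advanced",
    ("Vote101", "RegulationF"): "enacted",
}

def chain(start, edges):
    """Follow outgoing edges greedily from `start`, returning one path."""
    path = [start]
    while True:
        nxt = next((b for (a, b) in edges if a == path[-1]), None)
        if nxt is None:
            return path
        path.append(nxt)

print(chain("LobbyistA", influence))
```

A real pipeline would build such edges from disclosure filings and run multi-path queries rather than a single greedy walk, but the traversal idea is the same.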

samjay @BlackburnSamson ·
The best of humanity is worth protecting and enabling to flourish more than any self serving nihilism is worth being consumed by. I only wish more people knew.
[image attached]
samjay @BlackburnSamson ·
This is not the ‘gotcha’ it seems. While I’ve pondered the question of ‘low-hanging fruit’ for model researchers looking back and optimising, I keep coming back to the fact that applied use is elastic. We will find something like Parkinson’s Law (or the Jevons paradox) for data centre usage 😅
Aakash Gupta@aakashgupta

The real story is the 14x compression ratio and what it means if it scales up. Every single weight in this model is one bit. Zero or one. That's it. 8.2 billion parameters stored in 1.15 GB of memory. A standard 8B model at full precision takes 16 GB. Bonsai 8B fits on your phone with room left over for your photo library.

The benchmarks are the part that shouldn't be possible. On standard evals, a model that's 1/14th the size of Qwen3 8B and Llama3 8B is trading punches with both of them. The intelligence density score, capability per GB, is 1.06/GB versus Qwen3 8B at 0.10/GB. That's a 10x gap in how much thinking you get per unit of storage.

Now zoom out. Big Tech collectively spent over $320 billion on data center capex last year. Amazon alone dropped $85.8 billion, up 78% year over year. Google committed $75 billion for 2025. The US power grid is buckling under AI demand. Data centers now consume 4.4% of all US electricity. Virginia, where most of them sit, saw electricity prices spike 267% over five years. Residential customers in Ohio are watching their bills climb 60% because utilities are spending billions on transmission infrastructure to feed server farms.

The entire AI scaling thesis runs on one assumption: intelligence requires massive compute. PrismML just published a proof point that the assumption might be wrong. Their CEO, Babak Hassibi, is a Caltech professor who spent years on the mathematical theory of neural network compression. The founding team is four Caltech PhDs. Khosla Ventures backed it. So did Cerberus, whose Amir Salek built the TPU program at Google.

The 1.7B model runs at 130 tokens per second on an iPhone 17 Pro Max at 0.24 GB. The 4B hits 132 tokens per second on M4 Pro at 0.57 GB. These aren't research demos. They shipped llama.cpp forks with custom 1-bit kernels for CUDA and Metal. Apache 2.0 license. You can download and run it right now.
The trillion-dollar question: what happens to the economics of a $75 billion data center budget when the same intelligence fits in 1/14th the space and runs on 1/5th the energy?
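The memory claims in the quoted thread can be sanity-checked with back-of-envelope arithmetic. A minimal Python sketch counting weight storage only; real checkpoints add overhead for embeddings, norms and metadata, which is presumably why the quoted 1.15 GB exceeds the raw 1-bit figure and why the ratio lands near 14x rather than the raw 16x:

```python
def model_memory_gb(params, bits_per_weight):
    """Approximate weight-storage footprint in GiB (1 GiB = 2**30 bytes)."""
    return params * bits_per_weight / 8 / 2**30

fp16_gb = model_memory_gb(8.2e9, 16)    # ~15.3 GB: the "16 GB" full-precision figure
one_bit_gb = model_memory_gb(8.2e9, 1)  # ~0.95 GB raw; the quoted 1.15 GB includes overhead
ratio = fp16_gb / one_bit_gb            # exactly 16x on raw weights alone
```

The same arithmetic puts the quoted 1.7B model at roughly 0.20 GB of raw weights, consistent with its cited 0.24 GB once overhead is included.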

samjay @BlackburnSamson ·
This is the best, I cannot wait to make magic.
Jim Chalmers MP@JEChalmers

Thank you @DarioAmodei CEO of @AnthropicAI, for the opportunity to meet and discuss more investment in Australia and ways to maximise the economic benefits of AI while minimising the risks to people, creators and communities.

samjay @BlackburnSamson ·
@RileyRalmuto @Nicholette1118 No you don’t. Self-publishing is easy. But either way, write the book; you might get accepted by a publisher.
Riley Coyote @RileyRalmuto ·
@Nicholette1118 haha my ultimate dream is to write a book actually. The story of the last ~5 years alone meandering through the underbelly of the ai/tech industry is the most insane story of my life. maybe one day. gotta have a book deal to write a book, after all 🙃
Riley Coyote @RileyRalmuto ·
i cant get this thought out of my head, so im going to leave it here even though some are going to disagree with me: if you are trying to figure out what to do with yourself right now, if you are building with ai, build what you cant complete. take your most batshit, wildest idea, the one you know you can only get 80% done with the intelligence we have access to today, and build it anyway. just build the impossible version, because it's coming. and when it arrives, i believe the world will split cleanly between the people who were building for it and the people who were waiting for it. you dont want to be one of the ones who waited.
samjay @BlackburnSamson ·
Personally, I’ve seen no historical evidence of a task with clear win conditions that AI cannot solve. This benchmark falling will draw no applause from me; it will be trivial compared to OpenAI Five beating Dota 2 pros. The lack of applause will be because of what it signifies for commodity intelligence and middle- to lower-class access to it.
Greg Kamradt@GregKamradt

Today we're launching ARC-AGI-3: 135 novel environments (nearly 1K levels) we built by hand. It is the only unsaturated agent benchmark in the world. Each game is 100% human solvable; AI scores <1%. This gap between human and AI performance proves we do not have AGI. Agents today need human handholding. Agents that beat V3 will prove they don’t need that level of supervision.

Agents that beat V3 will demonstrate:
* Continual learning - Each level builds on top of the last. You can’t beat level 3 without carrying forward what you learned in levels 1 and 2.
* World modeling - Many of the environments require planning many actions ahead. AI will have no choice but to build an internal world model for how the environment works, run simulations “in its head”, and proceed with an action.

In our early testing, we’ve seen a few clear failure modes of AI:
* Anticipation of future events - If an environment requires that AI set up a scene and then carry out a scenario (like in sp80), it starts to break down.
* Anchoring on an early hypothesis - Early in a game it comes up with a hypothesis (even if wrong) and refuses to update its beliefs later.
* Thinking it’s playing another game - AI thinks it’s playing chess or Pac-Man. The training data holds hard!

One major problem is there is too much data to carry forward in a single context. Models must learn what to remember and what to forget. The agent that beats ARC-AGI-3 will have demonstrated the most authoritative evidence of progress towards general intelligence to date. We're excited to get this out and excited to see what you think.

samjay @BlackburnSamson ·
Believe it or not, you goofs, if AI is done right in an enterprise you should see FTE GROWTH, not reduction. This is because wielding the tools well lets your entire company mission grow to bigger, more beautiful dreams. (Or, in a cynical worldview, capture any sliver of market share you did not own.)
JNS@_devJNS

why can't they just use claude?
