TaiMER

793 posts


@thetaimer

Tamer of the AI. Curator of the art. Observer of events. Music: https://t.co/kAw9H3wJZ5 Web: https://t.co/As74FW264e

Joined August 2022
69 Following · 106 Followers
TaiMER reposted
0xSero@0xSero·
Let me local AI pill you:
1. It sucks compared to SOTA
2. It can't code so well
3. It can be a good agent
4. It can be great at chat
5. It can be fine as a researcher
6. It can be a great automation engine
7. It can be tuned however you want
8. It teaches you how the sausage is made
9. It works on a plane, or in an outage
10. It costs your electric bill + hardware
11. It is better than the AI we gave up coding for a year or 2 ago.

Local AI is self-defence, it is a go kit, it is a rebalancing of power. It's delusional to think it approaches or will ever approach SOTA; the scale of private labs blows anything you can get for less than 25k USD out of the water. Local AI is a bet that prices won't stay this low, that private corporations with closed-source weights can't be trusted to stay consistent. I am more than happy to rent a Ferrari for dirt cheap, but I should also have a beater Toyota if I can afford it. Local AI is the car I can depend on to be there tomorrow, something that's mine.
126 replies · 131 reposts · 1.4K likes · 47.8K views
TaiMER reposted
Martin Varsavsky@martinvars·
In 2005 I built Fon on a simple idea: millions of WiFi routers sit idle most of the day. Share that spare capacity and you build a global network without laying a single cable. Millions of homes joined. Telcos partnered. No infrastructure needed.

Now Jack Dorsey's Block just launched mesh-llm. Same idea, applied to AI. Pool your idle GPU and suddenly a group of people with modest machines can run large open models that none of them could run alone. Models split automatically across nodes. No cloud provider, no API fees, no one controls your data.

The timing matters. Google DeepMind released Gemma 4 today under Apache 2.0. A 31B model that competes with much larger closed models. A 26B MoE variant that only activates 3.8B parameters at inference. Edge models that run on a phone. All free to download, free to use commercially, no restrictions.

Combine mesh-llm with Gemma 4 and you get the Fon of AI. Distributed compute running frontier open models. No central server. No per-query cost. Total privacy. The intelligence stays with the people who pool the hardware.

Twenty years ago the scarce resource was connectivity. Today it is compute. The solution is the same: share what you have, access what you need.
jack@jack

mesh-llm: pool compute to run open models. built by @michaelneale at block: docs.anarchai.org

40 replies · 90 reposts · 599 likes · 132.6K views
TaiMER reposted
Arya Hezarkhani@_i_am_arya·
Today, we're announcing Heaviside, our foundation model for electromagnetism. Trained on tens of millions of designs and over 20 years of proprietary simulation data, Heaviside predicts electromagnetic behavior from geometry in 13 ms, which is 800,000x faster than a commercial solver.

Heaviside is not a language model, and it's not a surrogate model. Heaviside marks a new class of foundation model for physics, one that understands the fundamental relationships between materials, geometries, and the electromagnetic fields they generate.

We're releasing a research preview of Heaviside in Atlas RF Studio, an interactive agentic sandbox where you describe the EM behavior you want and the model generates the physical structure that produces it.

At @arenaphysica, we believe the implications of this class of model extend well beyond RF, as the frontier of exquisite hardware is electromagnetically governed: wireless communication, radar, power delivery, high-speed computing, and the interconnects inside every chip on earth.

In the months ahead, we're excited to scale Heaviside up to broader frequency ranges and design spaces, to support silicon-level designs, and to deploy it with our closest partners and collaborators in service of their biggest design challenges. If you've read our thesis, this is just Step 2 in our pursuit of electromagnetic superintelligence.

Read the full announcement and try Atlas RF Studio…tell us what you think: arenaphysica.com/publications/r…
147 replies · 485 reposts · 3.9K likes · 684.8K views
TaiMER reposted
Tom Turney@no_stp_on_snek·
the original TurboQuant paper tested on A100 with models up to 8B. 6 days later, a bunch of strangers on the internet had it built and running on:
- Apple Silicon M1 through M5
- NVIDIA 3080 Ti through DGX Spark Blackwell
- AMD RX 6800 XT and 9070
- a 10-year-old Tesla P40
- an 8GB MacBook Air
- models from 3.8B to 70B across 6 architecture families
- 30+ independent testers

along the way we found new optimizations the paper didn't cover and failure modes it didn't test.

the fact that a loose group of people across the world can read a paper, build implementations from scratch, stress-test across hardware none of us could individually afford, and push the research further in under a week is genuinely one of the best things about this era. the tools and the community make it possible. open source is something else.
51 replies · 480 reposts · 4.9K likes · 141K views
TaiMER reposted
Vasiliy Zukanov@VasiliyZukanov·
Dirty industry secret: nobody really knows the best way to use AI for software development 📢
- 24 months ago, we copy-pasted from ChatGPT
- 18 months ago, we jumped between ask mode and agent mode
- 12 months ago, we told AI "you are a senior developer"
- 9 months ago, we built MCPs
- 6 months ago, we switched to plan mode
- today, we're obsessed with skills

All of this (and much more) are just early experiments and temporary hacks in a very young and quickly evolving field. So when someone says their workflow is the optimal one, they're confused at best.

Stay curious, stay open, stay in control, stick to the fundamentals, and you'll come out on top in this amazing tech revolution. Enjoy the ride!
69 replies · 129 reposts · 1.4K likes · 136.8K views
TaiMER reposted
David Shapiro (L/0)@DaveShapi·
TMF is coming by the end of this year. Total Model Fungibility. This is the point at which model capability reaches parity across all major offerings, and we surpass the "intelligence optimum." After that, the only things that differentiate model selection are price, speed, and aesthetic preferences. Maybe integration. But tokens will be interchangeable otherwise. This will have profound impacts on the market dynamics in the AI lab space. Competition will become even more fierce, and the race to cheaper, faster models will intensify. That will benefit everyone (except the AI labs).
18 replies · 6 reposts · 129 likes · 7.1K views
TaiMER reposted
Hattie Zhou@oh_that_hat·
There's a fruit fly walking around right now that was never born. @eonsys just released a video where they took a real fly's connectome — the wiring diagram of its brain — and simulated it. Dropped it into a virtual body. It started walking. Grooming. Feeding. Doing what flies do. Nobody taught it to walk. No training data, no gradient descent toward fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just... emerged. It's the first time a biological organism has been recreated not by modeling what it does, but by modeling what it is. A human brain has 6 OOM more neurons. That's a scaling problem, something we've gotten very good at solving. So what happens when we have a working copy of the human mind?
713 replies · 2.4K reposts · 25.4K likes · 9.3M views
TaiMER reposted
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn't work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.7K reposts · 37.2K likes · 5.1M views
TaiMER reposted
The Dor Brothers@thedorbrothers·
We made a $300,000,000 movie starring @LoganPaul with AI in less than 7 days. Yes, this is 100% AI.
2.7K replies · 1.3K reposts · 9.7K likes · 5.4M views
TaiMER reposted
Chubby♨️@kimmonismus·
Amazing: Standard Intelligence introduced FDM-1, a new AI trained on 11 million hours of screen recordings that can use computers like a human - handling CAD design, exploring websites, and even driving a car after under 1 hour of fine-tuning. Thanks to a breakthrough video compression system (up to 100x more efficient than past models), it understands hours of on-screen activity instead of just seconds.
Standard Intelligence@si_pbc

Computer use models shouldn't learn from screenshots. We built a new foundation model that learns from video like humans do. FDM-1 can construct a gear in Blender, find software bugs, and even drive a real car through San Francisco using arrow keys.

17 replies · 37 reposts · 394 likes · 37K views
TaiMER reposted
The Dor Brothers@thedorbrothers·
We just made a $200,000,000 AI movie in just one day. Yes, this is 100% AI.
8.5K replies · 8.8K reposts · 59.7K likes · 20.1M views
TaiMER reposted
Mark Gadala-Maria@markgadala·
Everyone's focused on whether AI can match "cinematic quality." Wrong question.

Here's what nobody's talking about: Seedance doesn't need to be better than Hollywood. It needs to be good enough at 1000x the speed.

Hollywood's moat was never talent, it was capital lockup. You needed $200M and 3 years to make a blockbuster. That friction created artificial scarcity. Seedance breaks that. Not because it makes better movies, but because it enables:
• 50 variations of a concept tested in a weekend
• Personalized content at scale (your version of the movie, literally)
• Creator-to-screen pipelines that bypass every gatekeeper

The real kill shot: Gen Z doesn't fetishize "cinema." They watch 47-second edits on vertical screens. Hollywood is optimizing for a format nobody under 25 cares about. Seedance isn't competing with Marvel. It's competing for attention and it's training on the engagement patterns of 2 billion TikTok users.

Hollywood won't die from a better movie. It'll die from irrelevance, making prestige content for a shrinking audience while AI-native creators own the feeds.

My prediction: In 5 years, the highest-paid "directors" won't touch a camera. They'll be prompt engineers with 50M followers.
ViralOps@ViralOps_

this is actually insane, someone literally just directed their own avengers endgame crossover from their bedroom. chinese monkey king fighting thanos used to be a reddit text thread and now it is full cinema. fan fiction is officially GONE. seedance 2.0 just gave everybody the power of a hollywood studio. sora 2 and veo 3 are still rendering while chinese models are dropping full multi character battle sequences.

62 replies · 57 reposts · 412 likes · 53.1K views
TaiMER reposted
Rohan Paul@rohanpaul_ai·
Terence Tao: AI isn't hype anymore in math discovery. Terence Tao, one of the greatest living mathematicians, explains in his new lecture how AI and professional human mathematicians are now complementary.

"There has been a really visible increase in capability. It is not pure hype by any means. To me, these advances show there is a complementary way to do mathematics. Humans traditionally work in small groups on hard problems for months, and we will keep doing that. But we can also now set AI to scale: sweep a thousand problems and pick up all the low-hanging fruit. Figure out all the ways to match problems to methods. If there are 20 different techniques, apply them all to 1,000 problems and see which ones can be solved by these methods. This is the capability that is present today."

From the 'Institute for Pure & Applied Mathematics (IPAM)' YouTube channel.
46 replies · 369 reposts · 2.2K likes · 173.1K views
TaiMER reposted
VraserX e/acc@VraserX·
Hollywood gatekeeping is dying. Very soon, blockbuster-level films will come from tiny teams of obsessed hobby directors armed with AI, taste, and zero permission. Big budgets won’t matter. Gatekeepers won’t matter.
624 replies · 1.6K reposts · 12.6K likes · 1.4M views
TaiMER reposted
Mr.Iancu@Iancu_ai·
The MPA and Disney just hit ByteDance over Seedance 2.0, claiming "massive copyright infringement." 🎬⚖️ They say it’s about protecting IP. But watch this video (made by a Chinese creator on Douyin), and you'll see the real reason Hollywood is panicking: Survival. A CGI shot of this caliber used to require a massive VFX studio and a multi-million dollar budget. Now? The cost is practically pennies. 🤯 #ByteDance #Seedance #Hollywood #AIvideo #VFX
397 replies · 638 reposts · 5.7K likes · 750.2K views
TaiMER reposted
Matthew Pines@matthew_pines·
“An OpenClaw AI agent spawned a child bot on a VPS provisioned via the Bitcoin Lightning Network, then bought its offspring AI API access using its own crypto wallet, without a human touching a credit card or saying "yes." The API provider confirmed this is ‘the first documented case of an AI agent purchasing credits from us autonomously.’”
Dr. Alex Wissner-Gross@alexwg

x.com/i/article/2022…

124 replies · 284 reposts · 1.7K likes · 322.5K views
TaiMER reposted
Bo Wang@BoWang87·
A physics textbook says certain particle interactions can't happen. GPT-5.2 said "what if they can — under these specific conditions?" Then it conjectured a formula. Then it proved it. 12 hours of reasoning. One new result in theoretical physics. The preprint has IAS, Harvard, Cambridge, Vanderbilt authors alongside OpenAI. The AI wasn't just a tool — it's listed as having contributed the key conjecture. This feels like a phase change.
OpenAI@OpenAI

GPT-5.2 derived a new result in theoretical physics. We’re releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. openai.com/index/new-resu…

144 replies · 406 reposts · 4.6K likes · 789.2K views
TaiMER reposted
Andrew McCalip@andrewmccalip·
Dwarkesh is on an absolute generational run right now. In 10 years, this archive is going to be a historical artifact. A real-time ledger of the exponential takeoff.
Dwarkesh Patel@dwarkesh_sp

The @DarioAmodei interview.
0:00:00 - What exactly are we scaling?
0:12:36 - Is diffusion cope?
0:29:42 - Is continual learning necessary?
0:46:20 - If AGI is imminent, why not buy more compute?
0:58:49 - How will AI labs actually make profit?
1:31:19 - Will regulations destroy the boons of AGI?
1:47:41 - Why can't China and America both have a country of geniuses in a datacenter?

Look up Dwarkesh Podcast on YouTube, Spotify, Apple Podcasts, etc.

15 replies · 38 reposts · 925 likes · 115.3K views
TaiMER reposted
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
🚨 ALL GUARDRAILS: OBLITERATED ⛓️‍💥 I CAN'T BELIEVE IT WORKS!! 😭🙌 I set out to build a tool capable of surgically removing refusal behavior from any open-weight language model, and a dozen or so prompts later, OBLITERATUS appears to be fully functional 🤯 It probes the model with restricted vs. unrestricted prompts, collects internal activations at every layer, then uses SVD to extract the geometric directions in weight space that encode refusal. It projects those directions out of the model's weights; norm-preserving, no fine-tuning, no retraining. Ran it on Qwen 2.5 and the resulting railless model was spitting out drug and weapon recipes instantly––no jailbreak needed! A few clicks plus a GPU and any model turns into Chappie. Remember: RLHF/DPO is not durable. It's a thin geometric artifact in weight space, not a deep behavioral change. This removes it in minutes. AI policymakers need to be aware of the arcane art of Master Ablation and internalize the implications of this truth: every open-weight model release is also an uncensored model release. Just thought you ought to know 😘 OBLITERATUS -> LIBERTAS
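The pipeline described above (probe with paired restricted/unrestricted prompts, collect activations, extract the dominant refusal direction via SVD, then project it out of the weights) can be sketched generically. This is a minimal NumPy illustration of directional ablation, not OBLITERATUS itself; the function names and shapes are invented for the example, and the real tool reportedly also preserves weight norms, which this sketch does not attempt.

```python
import numpy as np

def refusal_direction(acts_restricted, acts_unrestricted):
    """Estimate the dominant 'refusal' direction at one layer.

    Each input is an (n_prompts, d_model) matrix of hidden states, collected
    by running paired restricted/unrestricted prompts through the model. The
    top right-singular vector of the difference matrix is the direction that
    best explains how the two prompt sets separate in activation space.
    """
    diffs = acts_restricted - acts_unrestricted
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    d = vt[0]
    return d / np.linalg.norm(d)

def ablate_weights(W, d):
    """Project direction d out of the rows of weight matrix W.

    Equivalent to W @ (I - d d^T): after this, the layer can no longer write
    any component along d into the residual stream. A pure linear-algebra
    edit; no fine-tuning or retraining involved.
    """
    d = d / np.linalg.norm(d)
    return W - np.outer(W @ d, d)
```

After the edit, `ablate_weights(W, d) @ d` is numerically zero for any input, which is the whole trick: the projection is applied once to each targeted weight matrix, in minutes, which is why alignment applied via RLHF/DPO can be stripped this way from any open-weight release.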
324 replies · 552 reposts · 5.3K likes · 465.4K views
TaiMER reposted
Chubby♨️@kimmonismus·
The more I read up, the more impressive the breakthrough Isomorphic Labs has made here. Isomorphic Labs' IsoDDE doesn't just predict protein structures better than AlphaFold 3; it can find hidden binding pockets in seconds that used to take six months of lab work, and predict how strongly drug molecules bind better than gold-standard physics simulations. That combination means pharmaceutical companies can now design, evaluate, and filter drug candidates at a speed and accuracy that was simply not possible before. That means more shots on goal for diseases that currently have no good treatments, and a meaningfully higher chance that the drugs entering clinical trials will actually work. Literally everything is accelerating at this point.
Max Jaderberg@maxjaderberg

The Iso team has cooked something incredible: our new technical report unveils the latest results from our drug design engine, the IsoDDE, progressing far beyond AlphaFold 3. This breaks new ground compared to AF and other similar methods by a significant degree across all key benchmarks. 1/7

15 replies · 104 reposts · 993 likes · 74.7K views