P
@phjlljp
5.6K posts

Head of Media at @Swan | AI Maximalist | Quant at @MemeFactoryTM

Occult Ritual #6102 · Joined June 2011
1.6K Following · 15.5K Followers
Aaron Wise@AaronWise5147·
Twitter is so dead that the #1 topic on CT today is Japanese porn.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 174
Aaron Wise@AaronWise5147·
How many tens of billions of dollars did Zuckerberg incinerate in Horizon Worlds, which is now being shut down entirely 😂🤣🤣
Replies: 2 · Reposts: 0 · Likes: 2 · Views: 147
Jonathan Sawyer@Jonatha14410794·
Don’t listen to influencers like this.

KLAUS SCHWAB COULD NOT HAVE WRITTEN A BETTER AI BILL THAN THE ONE REPUBLICANS JUST DROPPED

The "TRUMP AMERICA AI Act" is 300 pages of centralized AI control disguised as innovation policy.

1/ Preempts state AI laws. Your state can no longer protect you. One federal rulebook controlled by Washington replaces 50 state legislatures overnight.

2/ Creates a mandatory "duty of care" enforced by the FTC. Unelected bureaucrats now decide what AI can and cannot say.

3/ Requires frontier AI companies to report to the Department of Homeland Security and pass Department of Energy evaluations BEFORE deployment. Government permission to innovate.

4/ Mandates quarterly job displacement reports to the Department of Labor. They're not tracking losses to help you. They're building a workforce surveillance database.

5/ Sunsets Section 230 in two years. Every platform becomes legally liable for user speech. The largest speech suppression mechanism ever passed by a Republican Congress.

This is not deregulation. It's the Great Reset wearing a red hat.
Replies: 2 · Reposts: 0 · Likes: 1 · Views: 69
Ejaaz@cryptopunk7213·
this is a huge deal. massive win for AI labs, founders and builders in the USA. Trump's new AI legal framework doesn't fuck around, gloves are off:

- U.S. *does NOT* believe AI trained on copyrighted material constitutes copyright theft. MASSIVE win for anthropic, openai who have used copyrighted material.
- data centers: full-speed ahead to build them. any increased costs for people should be subsidized.
- Trump intends to override state AI laws that create "undue burdens" aka if it prevents USA from beating china - it gets killed.
- NO new ai regulators - trump specifically told congress not to spin up further oversight. let the AI spice flow.
- no censorship of AI by government. very interesting given the recent pentagon anthropic drama.

so basically if you want to build crazy ai shit - the US isn't going to be the one to stop you. huge 180 from their stance last year. amazing work @DavidSacks and whoever else worked on this
[image]
David Sacks@DavidSacks

In December, President Trump signed an Executive Order tasking us with the development of a national framework for AI, what he called “One Rulebook.” This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race. Today we are releasing that framework. It will help parents safeguard their children from online harm, shield communities from higher electric bills, protect our First Amendment rights from AI censorship, and ensure that all Americans benefit from this transformative technology. We look forward to working with our colleagues in Congress to turn the principles we are announcing today into legislation. whitehouse.gov/articles/2026/…

Replies: 131 · Reposts: 222 · Likes: 1.7K · Views: 174.4K
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
Put together a /w command for Claude Code.

The Problem: you know you worked on something before but you can't remember which session it was in.

/w that one thing that one time

Searches your transcripts, sessions, git, and finds it so you can resume. github.com/danielmiessler…
[image]
Replies: 53 · Reposts: 58 · Likes: 776 · Views: 62.7K
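The search described above can be sketched as a small script. This is a hypothetical stand-in for the /w command, not Miessler's actual implementation: the directory layout and the `{"role": ..., "content": ...}` record shape are assumptions, and `search_sessions` is an invented name.

```python
# Minimal sketch of a "/w"-style transcript search (hypothetical layout).
# Assumes each past session is a .jsonl file of {"role": ..., "content": ...} records.
import json
from pathlib import Path

def search_sessions(query: str, root: str) -> list[tuple[str, str]]:
    """Return (session filename, matching message) pairs for a case-insensitive query."""
    hits = []
    for path in sorted(Path(root).glob("*.jsonl")):
        for line in path.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed transcript lines
            text = str(record.get("content", ""))
            if query.lower() in text.lower():
                hits.append((path.name, text))
    return hits

if __name__ == "__main__":
    # Write a tiny fake session, then find it.
    root = Path("sessions")
    root.mkdir(exist_ok=True)
    (root / "2026-01-05.jsonl").write_text(
        json.dumps({"role": "user", "content": "refactor the billing cron job"}) + "\n"
    )
    print(search_sessions("billing", "sessions"))
```

A real version would also grep `git log` and rank hits by recency; the core idea is just a case-insensitive scan over stored session records.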
P reposted
Todd Saunders@toddsaunders·
The token cost to build a production feature is now lower than the meeting cost to discuss building that feature.

Let me rephrase. It is literally cheaper to build the thing and see if it works than to have a 30 minute planning meeting about whether you should build it. It’s wild when you think about it.

This completely inverts how you should run a software organization. The planning layer becomes the bottleneck because the building layer is essentially free. The cost of code has dropped to essentially 0.

The rational response is to eliminate planning for anything that can be tested empirically. Don’t debate whether a feature will work. Just build it in 2 hours, measure it with a group of customers, and then decide to kill or keep it.

I saw a startup operating this way and their build velocity is up 20x. Decision quality is up because every decision is informed by a real prototype, not a slide deck and an expensive meeting.

We went from “move fast and break things” to “move fast and build everything.” The planning industrial complex is dead. Thank god.
Replies: 373 · Reposts: 565 · Likes: 5.5K · Views: 468.1K
P@phjlljp·
How Bitcoiners see election cycles
Replies: 1 · Reposts: 1 · Likes: 11 · Views: 617
Brad Mills 🔑⚡️@bradmillscan·
You can now replace your OpenClaw Agent's aggressive compaction process with a DAG to supercharge its memory!

Remember DAG shitcoins in crypto? directed acyclic graph ... alternate architecture to bitcoin's blockchain ... they sacrifice decentralization and security for higher throughput. Finally, a DAG has a use for a Bitcoiner :)

IOTA had the tangle with coordinators. RaiBlocks/Nano was a block-lattice DAG. Hashgraph used a gossip-about-gossip consensus with a permissioned governance council. ByteBall/Obyte used a DAG with witness nodes.

Strip the shitcoins and governance nonsense away and you have something that's actually useful for AI agent memory enhancement.

I hacked a whole skill together (SoulKeep) for my agent to stay in a session as long as possible, because usually you want your agent to have as much context as possible for as long as possible.

Josh & team put the DAG to work brilliantly, replacing the default compaction process with rolling summarization nodes as a novel way of holding as much valuable context as possible in the session for as long as possible. It also has some tools to trawl the session context, which they call "walking the DAG", using a bounded subagent to keep token costs down and performance up.

With the latest openclaw release they allow for compaction plug-ins like lossless claw.

This isn't meant to be a replacement for QMD, your obsidian vault or any other extended long-term memory / system-of-record enhancements you're using. It's meant to be used in parallel with those strategies to help your agent have better context for longer.

I'm seriously considering switching to this! losslesscontext.ai
[image]
Josh Lehman@jlehman_

You don't need an agent memory system, you need context that doesn't reset. Update and try lossless-claw!

Replies: 17 · Reposts: 13 · Likes: 160 · Views: 49.4K
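The rolling-summarization idea above can be sketched in a few lines. Everything here is invented for illustration (`RollingDAG`, `Node`, `walk`); it is not the SoulKeep or lossless-claw implementation, and the `summarize` function is a trivial stand-in for an LLM summarization call. The shape is the point: old turns are folded into summary nodes whose parent edges form a DAG, and "walking the DAG" recovers detail behind a summary under a depth bound.

```python
# Toy sketch of DAG-based context compaction: instead of discarding old turns,
# fold them into summary nodes whose parent edges point back at the originals.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    parents: list["Node"] = field(default_factory=list)  # edges into earlier context

def summarize(turns: list[str]) -> str:
    # Stand-in for an LLM summarizer: keep the first few words of each turn.
    return " | ".join(t[:20] for t in turns)

class RollingDAG:
    def __init__(self, window: int = 4):
        self.window = window          # raw turns kept verbatim in context
        self.raw: list[Node] = []
        self.summaries: list[Node] = []

    def add_turn(self, text: str) -> None:
        self.raw.append(Node(text))
        if len(self.raw) > self.window:
            # Compact the oldest half of the window into one summary node.
            old, self.raw = self.raw[: self.window // 2], self.raw[self.window // 2 :]
            node = Node(summarize([n.text for n in old]), parents=old)
            if self.summaries:
                node.parents.append(self.summaries[-1])  # chain rolling summaries
            self.summaries.append(node)

    def walk(self, node: Node, depth: int = 2) -> list[str]:
        # "Walking the DAG": expand a summary back into its sources,
        # bounded by depth to keep token costs down.
        out = [node.text]
        if depth > 0:
            for p in node.parents:
                out.extend(self.walk(p, depth - 1))
        return out
```

The depth bound plays the role of the bounded subagent: you never pay for the whole history, only for the slice of the graph behind the summary you care about.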
P reposted
Rohan Paul@rohanpaul_ai·
If your entire product relies on adding guardrails to models that hallucinate, or on patching the context window limits of current models, you are basically renting time until the base model learns to self-correct or expands its native memory limits to infinity. Then the temporary fix becomes permanently obsolete. Creating complex adapter layers between different AI agents makes money today, but the massive foundation models will eventually handle multi-agent routing natively and crush that entire middle market.
[image]
Paul Buchheit@paultoo

Many startups are growing fast and creating real value by building workflows, adaptors, and guardrails around today's AI models. But future models won't need all that, and then the big AI companies will eat them for lunch. We call these companies "Turkey graph startups"

Replies: 15 · Reposts: 5 · Likes: 47 · Views: 7.4K
P@phjlljp·
Peak AI cinema. Hollywood is cooked.
Replies: 2 · Reposts: 1 · Likes: 12 · Views: 1K
P@phjlljp·
@Pledditor His star continues to rise.
Replies: 0 · Reposts: 0 · Likes: 5 · Views: 167
P@phjlljp·
@rodarmor You crazy motherfucker 😂
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 126
Casey@rodarmor·
root claude just hits different
[image]
Replies: 12 · Reposts: 5 · Likes: 118 · Views: 10.3K
P reposted
typedfemale@typedfemale·
presenting: big jeff's trainium hell
Replies: 112 · Reposts: 557 · Likes: 4.6K · Views: 631.7K
P reposted
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.

It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
Replies: 1.6K · Reposts: 4.8K · Likes: 37.2K · Views: 5.1M
P reposted
Robert Youssef@rryssf_·
Google DeepMind just used AlphaEvolve to breed entirely new game-theory algorithms that outperform ones humans spent years designing. the discovered algorithms use mechanisms so non-intuitive that no human researcher would have tried them. here's what actually happened and why it matters:
[image]
Replies: 16 · Reposts: 106 · Likes: 662 · Views: 44.1K
Aaron Wise@AaronWise5147·
Use the AI or die.
Replies: 3 · Reposts: 0 · Likes: 3 · Views: 176
P@phjlljp·
ME: Time to fall asleep MY BRAIN:
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 407