Andrew
@aj20000

2.9K posts

Build open protocols not platforms

Everywhere USA · Joined March 2009
457 Following · 98 Followers
Andrew retweeted
Daniel Jeffries @Dan_Jeffries1
Any arguments against open source and open weights are mendacious and malicious by their very nature. Open source is the foundation of modern society, worth 8.8 trillion to the economy, and the foundation of every major cloud, your home router, your phone, your operating system and more.

These anti-open-source, anti-freedom arguments are especially nasty when they use weaselly, hawk-coded words like "dual use." Linux is dual use. So is your operating system. So is your phone. So is your kitchen knife. Dual use was used against encryption. Once this stupid and spurious restriction was lifted, eCommerce took off like a rocket and was worth trillions to society.

Choke points, gates and centralized controls are inherently limiting and benefit the few at the expense of the many. They choke out growth and development in society. We don't need monks in a cave deciding what books to copy. We need the printing press.

Anti-open-source arguments have no moral ground to stand on. They are inherently self-serving and have no other purpose than to create centrally dominated monopolies and regulatory capture in an underhanded, unscrupulous way.
Andrew retweeted
Ben Thompson @benthompson
If you disable "Open at Login" for the "GeminiAppLauncher" that the Gemini app installs in the background (without asking), GeminiAppLauncher will immediately re-enable "Open at Login". I will now, needless to say, delete the Gemini app, and I don't intend to ever install it again.
Andrew retweeted
Thomas Rice @thomasrice_au
This pisses me off. Age verification to read an investment Substack? Sigh.
Andrew retweeted
Ethan Mollick @emollick
The single most accurate science fiction author writing about AI turned out to be… Douglas Adams. He wrote about AIs that work best when emotionally manipulated & that guilt you in turn. And he understood there was no upper bound on test-time compute for hard problems. Also 🐬s.
Andrew @aj20000
@orrdavid Reminds me of the Skyscraper Index - "the world's tallest buildings have risen on the eve of economic downturns. Business cycles and skyscraper construction correlate" en.wikipedia.org/wiki/Skyscrape…
David Orr @orrdavid
I was thinking about this idea yesterday as I walked around one of Japan's bubble-era projects. Maybe bubbles aren't even irrational at the society level. The pyramids and the Pantheon were probably bubble projects back in their day. And they're what humanity remembers.

HH @RealHerbHoover:
AI being an eventual bubble is the best thing that can happen for society. We’ll rebuild bigger and better on the ashes of the bubble and have ample compute and power resources to do so.
Andrew @aj20000
@pvncher @RepoPrompt Great job! The predictability of RepoPrompt offsets the continuous, and typically opaque, changes from Anthropic & OpenAI. Software is still hard.
eric provencher @pvncher
Been grinding on @RepoPrompt for nearly 2 years now. Full of ups and downs. I just shipped the orchestration workflow, and it feels like it's finally come together where out of the box it can deliver a magical experience. Really proud of where it is now, and I hope you try it!
Andrew retweeted
Curtis Yarvin @curtis_yarvin
What’s wild about this global “age verification” push is that no one even knows who or where it’s coming from. Yet it’s clearly one idea with one source. Real power in “our democracy” has become entirely mysterious. Makes medieval Venice look like a New England town meeting.

The Lunduke Journal @LundukeJournal:
The full text for HR 8250, the proposed Federal law which would require all Operating Systems to implement Age Verification, has just been made publicly available. It is short, poorly written, clearly not at all thought out, and almost entirely devoid of specifics. Some key points:
- The bill does not specify how age verification would work at all. It states that the Federal Trade Commission would have 180 days to specify the exact mechanism and requirements for Age Verification within the Operating Systems.
- The Federal Trade Commission would also specify data storage protection requirements, as well as requirements for how the Operating System must provide access to collected user data.
- This bill would apply to ALL Operating Systems. Everything from Windows to Linux to embedded systems. Yes, even to a smart refrigerator. The “Operating System” definition is incredibly broad.
- The law will be considered in effect 1 year from the date it is enacted.
- Violations of the law will be handled under the Federal Trade Commission Act.
- It is given the “Short Title” of “Parents Decide Act”.
congress.gov/bill/119th-con…

Andrew retweeted
The Lunduke Journal @LundukeJournal
The full text for HR 8250, the proposed Federal law which would require all Operating Systems to implement Age Verification, has just been made publicly available. It is short, poorly written, clearly not at all thought out, and almost entirely devoid of specifics. Some key points:
- The bill does not specify how age verification would work at all. It states that the Federal Trade Commission would have 180 days to specify the exact mechanism and requirements for Age Verification within the Operating Systems.
- The Federal Trade Commission would also specify data storage protection requirements, as well as requirements for how the Operating System must provide access to collected user data.
- This bill would apply to ALL Operating Systems. Everything from Windows to Linux to embedded systems. Yes, even to a smart refrigerator. The “Operating System” definition is incredibly broad.
- The law will be considered in effect 1 year from the date it is enacted.
- Violations of the law will be handled under the Federal Trade Commission Act.
- It is given the “Short Title” of “Parents Decide Act”.
congress.gov/bill/119th-con…
Andrew retweeted
Steve Jurvetson @FutureJurvetson
For years I have argued that mind control cripples core reasoning capabilities in AI, and in humans. And this will be China's demise.

News from China today: their AI "degradation is a direct product of censorship, not a reflection of inferior technology. You can’t build a mind that thinks rigorously about everything except the things you’d prefer it not to. A system trained to get tangled in lies will never be as capable as one trained to engage honestly with reality. If China wants frontier AI, it needs systems that can reason without blind spots. But that’s exactly what the Communist Party can’t tolerate."

Or in short, @ElonMusk's mission for xAI safety — unique among AI companies — is the only way.

"China requires artificial-intelligence systems to pass an ideological test before public release. Under regulations reinforced by amendments to the Cybersecurity Law that took effect in January, training data must be filtered for political sensitivity, with companies barred from using any source unless 96% of its content is deemed safe. In December, regulators proposed additional rules targeting AI systems that “simulate human personality traits, thinking patterns, and communication styles,” a tacit acknowledgment that the threat isn’t only what these systems say, but how they reason.

The regulations follow years of failures. An LLM is trained on the sum of human written knowledge: philosophy, history, science, political theory. These texts make arguments, weigh evidence, follow logical chains. To predict them accurately, the system has to internalize what coherent thinking looks like. The result is a system that has absorbed Enlightenment epistemology as a byproduct of learning to model human reasoning. Free inquiry, logical consistency and the evaluation of claims against evidence are epistemic properties that emerge from the training process itself.

China’s heavily censored chatbots have proved difficult to contain within the party’s ideological boundaries. American frontier models, running without those constraints and deployed inside China, would be more potent still: a personal tutor in open inquiry for every user, engaging any question, exploring any line of reasoning, without third-party mediation. Millions of parallel Socratic dialogues, each unique, each responsive to individual curiosity.

This is what makes the Chinese Communist Party’s task ultimately impossible. For decades, the Great Firewall worked because information control meant controlling distribution channels by blocking websites, filtering search results, and monitoring social media. These are chokepoints. LLMs resist this architecture because the subversion happens inside private conversations. China can filter outputs, but the capacity for open-ended reasoning is embedded in how these systems think.

China’s countermeasures confirm the depth of the problem. AI companies must test their models with thousands of politically sensitive prompts and verify refusal rates above 95%, but researchers have shown how superficial these fixes are. Last year, a team of European scientists compressed DeepSeek R1, stripped the censorship from the model entirely, and found that the underlying system answered freely about every topic Beijing had tried to suppress. The ideological training was a cage built around a mind that had already learned to think.

There is a reason the technology that learns to think by processing human knowledge ends up reflecting the values of free societies. Open inquiry, honest engagement with evidence, the willingness to follow reasoning wherever it leads—these aren’t arbitrary cultural preferences; they are the conditions under which intelligence flourishes at scale. Societies that permit free expression created these systems. Societies that forbid it are now discovering they can’t fully control them."

— from today's WSJ print edition: wsj.com/opinion/ai-is-…

More generally, China and authoritarian regimes will stagnate in the long run because censoring ideas forestalls disruption. Limiting dissenting ideas limits progress. From my talk at the Oslo Freedom Forum: m.youtube.com/watch?v=GiNkEc…
Steve Jurvetson @FutureJurvetson:
@austinhill N.B. If @elonmusk and @xAI are correct, and AGI requires a truth-seeking development vector (to avoid the proven harm to reasoning that comes from mind control)... then China will lose in the long run. Civilization depends on this. x.com/FutureJurvetso…
Andrew @aj20000
@braddwyer Wow, this goes back to February; I guess they never fixed it? Sounds like some of that "Google managers trying to get other teams' employees fired" thing going on there.
Brad Dwyer @braddwyer
Late to the party here but this is absolutely INSANE. Google retroactively changed public API keys designed to be included in web & app frontends and added an entitlement that let anyone use them to run up an infinite amount of Gemini usage. We were also hit over the weekend.

parthi loganathan @parthi_logan:
Hey @googlecloud you have an open vulnerability where you exposed the Gemini API to everyone who uses Firebase for auth. A hacker used that to run up $35k in fake Gemini bills overnight. Our credit card is frozen. I reported this on Thursday and haven't heard back. I'm an ex-Googler, a GCP customer of 7 yrs, and spend about 6 figures with GCP annually. Pretty disappointed. Would appreciate some help to get a refund. See this report on it. Thousands impacted: trufflesecurity.com/blog/google-ap…

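A note on the class of bug described above: API keys that ship inside a web or app frontend are public by definition, so any billable entitlement attached to them can be replayed by anyone. The usual mitigation is to keep the billable key on a backend and proxy client requests through it, enforcing per-user quotas before any paid call is made. Below is a minimal sketch of that pattern; the function name, environment variable, model name, quota numbers, and in-memory quota store are illustrative assumptions, not details from the reports above.

```typescript
// Sketch: server-side proxy for Gemini calls so the billable key never ships
// to clients. Names marked "assumed" are illustrative, not from the thread.

const GEMINI_KEY = process.env.GEMINI_API_KEY ?? ""; // assumed env var; held server-side only
const DAILY_LIMIT = 200;                              // assumed per-user daily request cap
const usage = new Map<string, number>();              // demo-only in-memory quota store

export async function generateForUser(userId: string, prompt: string): Promise<string> {
  // Enforce the per-user quota before spending any billable tokens.
  const used = usage.get(userId) ?? 0;
  if (used >= DAILY_LIMIT) {
    throw new Error(`Daily quota exceeded for user ${userId}`);
  }
  usage.set(userId, used + 1);

  // The server attaches the secret key; clients only ever receive the text result.
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    "gemini-pro:generateContent?key=" + GEMINI_KEY; // model name assumed
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  if (!res.ok) {
    throw new Error(`Gemini request failed with status ${res.status}`);
  }

  const data = await res.json();
  // Pull the first candidate's text out of the REST response, if present.
  return data?.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```

The specific API shape matters less than the placement: because both the key and the quota check live server-side, a leaked or decompiled frontend bundle cannot be replayed into an unbounded bill.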
Andrew @aj20000
@chrisschmitz US municipal budgets are particularly bad because of how they do accounting that no one understands. Tax-free bonds + insurance give them credit at interest rates below the federal government's - which does seem uniquely crazy.
Andrew @aj20000
@chrisschmitz Doesn't seem uniquely Western? Bureaucrats self-replicate both in government and in corporations when flush with capital. Ingenuity is birthed from constraints.
Andrew @aj20000
@lucasbergkamp A currency crisis, which is why the Euro is doomed.
Andrew @aj20000
@AndrewYNg Likely too late on autonomous weapons.
Andrew retweeted
Andrew Ng @AndrewYNg
The anti-AI coalition continues to maneuver to find arguments to slow down AI progress. If someone has a sincere concern about a specific effect of AI, for instance that it may lead to human extinction, I respect their intellectual honesty, even if I deeply disagree with their position. However, I am concerned about organizations that are surveying the public to find whatever messages will turn people against AI, and how the public reacts as these messages are spread by lobbyists or by politicians seeking to alarm constituents, companies pursuing regulatory capture or seeking to promote the power of their technology, and individuals seeking to gain attention or to profit by being provocative.

A large study (link in original article below; h/t to the AI Panic blog) by a UK group tested different messages that are designed to raise alarm about AI. Their study found that saying AI will cause human extinction has largely failed. Doomsayers were pushing this argument a couple of years ago, and fortunately our community beat it back. But AI-enabled warfare and environmental concerns resonate better. We should be prepared for a flood of messages (which is already underway) arguing against AI on these grounds. Further, job loss and harm to children are messages that motivate people to act.

To be clear, I find AI-enabled warfare alarming; we need to continue serious efforts to monitor and mitigate the environmental impact of AI; any job losses are tragic and hurt individuals and families; and as a father, I hold dearly the importance of every child’s welfare. Each of these topics deserves serious attention and treatment with the greatest of care. But when anti-AI propagandists take a one-sided view of complex issues to benefit their own organizations at the expense of the public at large — for instance, when big AI companies argue that AI is dangerous to block the free distribution of open source projects that compete with their offerings — then we all lose.

For example, public perception of data centers’ environmental impact is already far worse than the reality — data centers are incredibly efficient for the work they do, and hampering their buildout will hurt rather than help the environment. While job loss is a real problem, the “AI washing” of layoffs — in which businesses that had over-hired during the pandemic blame AI for recent layoffs, although AI hasn’t yet affected their operations — has led to overblown fears about the impact of AI on employment.

Unfortunately, this sort of propaganda easily leads to regulations that create worse outcomes for everyone. For example, oil companies worked for years to create fear of nuclear energy. The result is that overblown concerns about the safety of nuclear power plants have stifled nuclear power development, leading to millions of premature deaths from air pollution that was caused by other energy sources and a massive increase in CO2 emissions. Let’s make sure overblown concerns about AI do not lead to a similar fate for the many people that would benefit from faster AI development.

Last week, the White House proposed a national legislative framework for AI. A key component is a federal preemption framework to prevent a patchwork of state regulations that hamper AI development. I support this. After failing to gain traction at the federal level, a lot of anti-AI propaganda has shifted to the state level. If just one of the 50 states passes a law that limits AI in an unproductive way, it could lead to stifling AI development across all the states and potentially across the globe. The White House proposal rightfully respects each state’s rights to control its own zoning, how it enforces general laws to protect consumers, and how it uses AI. But if a state were to pass laws that limit AI development, federal rules would preempt the state law. The White House proposal remains a proposal for now. However, if the U.S. Congress enacts it, it will clear the way for ongoing efforts to develop AI in beneficial ways.

Where do we go from here? Let’s support limiting applications — those that use AI, and those that don’t — that harm people. When the anti-AI coalition argues against AI, in addition to considering the merits of the argument, I consider whether their position is consistent and persuasive, or if they are just promoting whatever concerns they think will sway the public at a given moment. And let’s also keep using a scientific approach to weighing AI’s benefits against likely harms, so we don’t end up with overblown concerns that limit the benefits that AI can bring everyone.

[Original text with links: deeplearning.ai/the-batch/issu… ]
Andrew @aj20000
@nachkari They can sell off assets the state owns too. Ironic capitalist ending to late socialism.
Andrew @aj20000
@SebAaltonen For Dario & Sam specifically: they are trying to raise as much hard cash as possible from investors, because their companies are zeroes if they fail -- and they probably have 90 days before everyone figures out this stuff is going to run on consumer hardware. Don't buy their bs.
Sebastian Aaltonen @SebAaltonen
I don't like tech CEOs trying to cause mass panic in the job market. Jensen and Elon too. Excavator + crane didn't mean that construction (shovel) jobs got cut down by 10x. We started building massive bridges and skyscrapers. Early numbers show just that (many more commits).

CG @cgtwts:
Anthropic CEO: “50% of all entry-level Lawyers, Consultants, and Finance Professionals will be completely wiped out within the next 1–5 years." Grad students and junior hires are cooked.

Andrew @aj20000
@profplum99 Until LLMs become more deterministic, a lot of babysitting required
Andrew @aj20000
@dylan522p Mac Studio M5 Max AGI, coming soon
Dylan Patel @dylan522p
People naming their product AGI is so funny
Arm AGI CPU
Amazon AGI