XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤

11.8K posts

@XQTStrategy

Accelerating execution with people data collaboration @tapirxyz, institutional web3 adoption @rcktstrs, ⿻ 數位 Plurality @RadxChange. Art @Totemical @MattKaneArtist

EU · Joined June 2011
4.3K Following · 3.2K Followers
Erik Brynjolfsson @erikbryn
The @nytimes piece today by @ByrneEdsal13590 highlights a concern I share: “If we stay on the current path, the risk of extreme concentration — both economic and political — is very real.” In work with @zhitzig, we ask why AI may shift the balance between dispersed knowledge and centralized control.
8 replies · 61 retweets · 226 likes · 103.8K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤
AI supercharges the centralisation tendency of digital technologies caused by increasing returns (especially data network effects). 👾 It is the endgame of governance! 🔥 We will soon publish a policy paper in which we try to define „⿻ plural protocol ecosystems“ as an alternative, a third way between American platform capitalism and Chinese state authoritarianism. 🖖 @RadxChange
0 replies · 0 retweets · 2 likes · 725 views
Dankrad Feist @dankrad
EF, last year: Hey, we want to listen to you users to make Ethereum better.
EF, now: Jk, we looked at the real world. We don't like building for it after all; we'll go back to building cypherpunk stuff only.
This is the EF going back to its old ways, undoing the changes from last year. I had feared this would happen because Vitalik's heart clearly wasn't in it. But whatever they say about the "ecosystem" being able to take care of this, the fundamental problems remain:
- there are very few voices in ACD caring about real-world Ethereum usage
- there is nobody doing Ethereum BD (everyone else who is doing this also has their own separate interests)
Ethereum Foundation@ethereumfndn

Today, the Foundation’s Board released the EF Mandate. This document, which was first intended for EF members, reaffirms the promise of Ethereum, and the role of EF within this ecosystem.

118 replies · 41 retweets · 555 likes · 203.9K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Ethereum Foundation @ethereumfndn
Today, the Foundation’s Board released the EF Mandate. This document, which was first intended for EF members, reaffirms the promise of Ethereum, and the role of EF within this ecosystem.
198 replies · 382 retweets · 2K likes · 1.7M views
RYAN SΞAN ADAMS - rsa.eth 🦄
A truth bomb for you. ETH will never earn fees. Ok, never is a strong word - let me rephrase - ETH won't earn fees anytime soon or in sustained amounts necessary to justify a centibillion-dollar asset.

The reason is written in the roadmap. Ethereum intends to massively increase blockspace supply in the coming years. If we get to Justin Drake's gigagas in 5 years, that's a 200x increase in blockspace supply. ETH only generates fees when demand exceeds supply - demand won't outstrip supply during this rapid expansion era, and that means low fees.

So if your reason for holding ETH is fee generation, sell now - send it to zero. Or...reconsider how to value ETH.

Consider what the market is already telling you. What assets don't earn fees but are worth trillions? Gold. Silver. Oil. Bitcoin. Together worth $170 trillion in value. Commodity money and store-of-value assets aren't priced on their ability to generate fees. They're priced on consumptive usage, and store-of-value demand relative to their scarcity.

ETH is scarce. Lower annual issuance than gold or bitcoin. ETH has store-of-value demand. A censorship-resistant digital money, a cyberpunk money, native to AI and the internet, economic bandwidth for DeFi.

You can try to value ETH as a fee-generating DCF asset and continue to be confused, or you can value it as the market already does. ETH is an emerging commodity money.
RYAN SΞAN ADAMS - rsa.eth 🦄@RyanSAdams

@MikeIppolito_ > However, if ETH is going to go up, it must earn fees. Send it to zero then. It ain't earning fees.

108 replies · 64 retweets · 615 likes · 108.3K views
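The post above turns on the claim that "ETH only generates fees when demand exceeds supply." Mechanically, that is EIP-1559: the burned base fee rises only when blocks are fuller than the gas target and decays by up to 12.5% per block otherwise, so a long stretch of under-target blocks drives fee revenue toward zero. A minimal sketch of that mechanic (the gas target, starting fee, and utilisation below are illustrative assumptions, not figures from the post):

```python
# Minimal sketch of the EIP-1559 base-fee update rule. The base fee (the burned
# component of transaction fees) rises only when blocks are fuller than the gas
# target and falls by up to 12.5% per block otherwise. All numbers below are
# illustrative assumptions, not figures from the post.

def next_base_fee(base_fee: float, gas_used: int, gas_target: int) -> float:
    """One EIP-1559 step: adjust by at most 1/8 per block, driven by utilisation."""
    delta = (gas_used - gas_target) / gas_target / 8
    return base_fee * (1 + delta)

GAS_TARGET = 15_000_000   # assumed target gas per block (half of a 30M gas limit)
base_fee = 20.0           # assumed starting base fee, in gwei

# Scenario: blockspace supply outpaces demand, so blocks run at 40% of target.
for _ in range(100):
    base_fee = next_base_fee(base_fee, gas_used=int(0.4 * GAS_TARGET), gas_target=GAS_TARGET)

print(f"base fee after 100 under-target blocks: {base_fee:.4f} gwei")
# ~0.0082 gwei, i.e. fee revenue collapses once supply runs ahead of demand.
```

Under the post's scenario of a ~200x blockspace expansion, utilisation sits below target for long stretches, which is exactly the regime where this decay dominates.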
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Outsider Aesthetics @outsidercore1
[image]
13 replies · 122 retweets · 934 likes · 13.2K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Gavin Newsom @GavinNewsom
We noticed.
3.3K replies · 8.6K retweets · 82.8K likes · 1.3M views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Anish Moonka @AnishA_Moonka
Marc Andreessen just dropped ~105 mins on Lenny's Podcast covering AI, jobs, careers, and why everyone is panicking about the wrong thing. Just the clearest macro framework I've heard on where AI actually lands. My notes:

𝟭. 𝗔𝗜 𝗶𝘀 𝗮𝗿𝗿𝗶𝘃𝗶𝗻𝗴 𝗮𝘁 𝘁𝗵𝗲 𝗲𝘅𝗮𝗰𝘁 𝗺𝗼𝗺𝗲𝗻𝘁 𝗵𝘂𝗺𝗮𝗻𝗶𝘁𝘆 𝗻𝗲𝗲𝗱𝘀 𝗶𝘁. US productivity growth has been running at half the rate of the 1940-1970 era and a third the rate of 1870-1940. The global population is declining below replacement in dozens of countries, including China. Without AI, we would be panicking about economies shrinking from depopulation, not job loss. The timing is almost miraculous. This is what Andreessen means when he says the real boom has not started yet. We have been in a 50-year productivity drought, and most people do not even realize it.

𝟮. 𝗔𝗜 𝗶𝘀 𝘁𝗵𝗲 𝗽𝗵𝗶𝗹𝗼𝘀𝗼𝗽𝗵𝗲𝗿'𝘀 𝘀𝘁𝗼𝗻𝗲. Isaac Newton spent decades trying to transmute lead into gold and never succeeded. AI does something more powerful: it converts sand (silicon) into thought. The most common material in the world is the rarest output. This one metaphor reframes the entire AI conversation. You do not have a job loss problem. You have a philosopher's stone sitting on your desk that you are not using enough.

𝟯. 𝗔𝗜 𝗺𝗮𝗸𝗲𝘀 𝗴𝗼𝗼𝗱 𝗽𝗲𝗼𝗽𝗹𝗲 𝘃𝗲𝗿𝘆 𝗴𝗼𝗼𝗱, 𝗮𝗻𝗱 𝘃𝗲𝗿𝘆 𝗴𝗼𝗼𝗱 𝗽𝗲𝗼𝗽𝗹𝗲 𝘀𝗽𝗲𝗰𝘁𝗮𝗰𝘂𝗹𝗮𝗿𝗹𝘆 𝗴𝗿𝗲𝗮𝘁. The best coders right now are not reporting 2x productivity. They are reporting 10x. The gap between "pretty good with AI" and "elite with AI" is widening, not narrowing. This is the most important signal for career planning right now. If you are just using AI to do the same job slightly faster, you are leaving the real leverage on the table.

𝟰. 𝗧𝗵𝗲𝗿𝗲'𝘀 𝗮 𝗠𝗲𝘅𝗶𝗰𝗮𝗻 𝘀𝘁𝗮𝗻𝗱𝗼𝗳𝗳 𝗯𝗲𝘁𝘄𝗲𝗲𝗻 𝗣𝗠𝘀, 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀, 𝗮𝗻𝗱 𝗱𝗲𝘀𝗶𝗴𝗻𝗲𝗿𝘀. Every engineer now thinks they can be a PM and designer. Every PM thinks they can code and design. Every designer knows they can do both. And they are all correct, because AI enables each role to absorb the tasks of the other two. I have seen this firsthand in the investing world. The analyst who can build models and write narratives is 5x more valuable than someone who can do only one. The same convergence is happening in the product.

𝟱. 𝗙𝗼𝗿𝗴𝗲𝘁 𝗧-𝘀𝗵𝗮𝗽𝗲𝗱. 𝗕𝘂𝗶𝗹𝗱 𝗮𝗻 𝗘-𝘀𝗵𝗮𝗽𝗲𝗱 𝗰𝗮𝗿𝗲𝗲𝗿. Scott Adams could not have created Dilbert by being the world's best cartoonist or the world's best business mind. He needed both. The additive effect of two skills is more than double. Three skills are more than triple. Larry Summers puts it differently: don't be fungible. The person who can code, design, and ship a product is no longer a unicorn. They are the new baseline for "extremely valuable." If you are only one of those three things, you are increasingly replaceable.

𝟲. 𝗝𝗼𝗯𝘀 𝗮𝗿𝗲 𝗯𝘂𝗻𝗱𝗹𝗲𝘀 𝗼𝗳 𝘁𝗮𝘀𝗸𝘀. 𝗧𝗮𝘀𝗸𝘀 𝗰𝗵𝗮𝗻𝗴𝗲. 𝗝𝗼𝗯𝘀 𝗽𝗲𝗿𝘀𝗶𝘀𝘁. Executives never typed their own emails in the 1970s. Secretaries printed incoming emails and hand-delivered them. Both roles survived the transition, just with different task sets. The same will happen with AI and coding, PM work, and design. Everyone obsessing over "will my job disappear" is asking the wrong question. The right question is: which tasks in my job are about to rotate, and am I ready to pick up the new ones?

𝟳. 𝗔𝗜 𝗰𝗼𝗱𝗶𝗻𝗴 𝗶𝘀 𝗷𝘂𝘀𝘁 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝗮𝗯𝘀𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻 𝗹𝗮𝘆𝗲𝗿. We went from human calculators to machine code to assembly to C to scripting languages. Each layer was dismissed by the previous generation. Each time, the new layer won, and total coding employment grew. AI coding is the same pattern, not a rupture. The Perl programmers of 2005, laughing at JavaScript, are the C programmers of 1995, laughing at scripting. History rhymes, and it always rewards the people who adopt the next abstraction first.

𝟴. 𝗔𝗜 𝘁𝘂𝘁𝗼𝗿𝗶𝗻𝗴 𝗱𝗲𝗺𝗼𝗰𝗿𝗮𝘁𝗶𝘇𝗲𝘀 𝗲𝗹𝗶𝘁𝗲 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻. One-on-one tutoring is the only method proven to move a student from the 50th to the 99th percentile (Bloom's two sigma effect). It used to require being born into royalty. Alexander the Great was tutored by Aristotle. Now, any kid with a phone can access the same quality of personalized instruction. This is the most under-discussed consequence of AI. Every parent reading this should be supplementing their kid's education with structured AI tutoring right now. Not next year. Now.

𝟵. 𝗣𝗲𝘁𝗲𝗿 𝗧𝗵𝗶𝗲𝗹 𝘄𝗮𝘀 𝗺𝗼𝗿𝗲 𝗿𝗶𝗴𝗵𝘁 𝘁𝗵𝗮𝗻 𝗔𝗻𝗱𝗿𝗲𝗲𝘀𝘀𝗲𝗻 𝗼𝗿𝗶𝗴𝗶𝗻𝗮𝗹𝗹𝘆 𝗮𝗱𝗺𝗶𝘁𝘁𝗲𝗱. Progress in bits masked stagnation in atoms. The built world is barely different from 50 years ago. Same bridges from the 1930s, same dams from the 1910s. Cartels, monopolies, unions, and regulations prevent the rate of change that people had 100 years ago. This is also why AI will not transform everything overnight. Institutional sclerosis is real. Healthcare alone could take a generation. If you are building in atoms, budget for a war of attrition, not a blitzkrieg.

𝟭𝟬. 𝗠𝗼𝗮𝘁𝘀 𝗶𝗻 𝗔𝗜 𝗮𝗿𝗲 𝗴𝗲𝗻𝘂𝗶𝗻𝗲𝗹𝘆 𝘂𝗻𝗸𝗻𝗼𝘄𝗻. Within a year of ChatGPT's launch, five American companies, five Chinese companies, and open-source all had roughly equivalent models. DeepSeek emerged from a hedge fund in China and basically replicated the American labs' work. The smartest AI insiders privately admit there aren't many real secrets among the big labs. This is the most honest take I have heard from a top-tier VC. No one knows if the value accrues to models, apps, or infrastructure. Anyone who tells you otherwise is selling you certainty they do not have.

𝟭𝟭. 𝗔𝗜 𝗜𝗤 𝘄𝗶𝗹𝗹 𝗯𝗹𝗼𝘄 𝗽𝗮𝘀𝘁 𝗵𝘂𝗺𝗮𝗻 𝗹𝗶𝗺𝗶𝘁𝘀. Human IQ caps around 160 because of biology. Current AI models test around 130-140. There is no theoretical ceiling stopping AI from reaching 200, 250, or 300. The concept of AGI as a "human equivalent" will be a footnote because AI will race past that threshold. This is the frame that makes the "will AI take my job" debate feel small. We are not building a replacement for human thought. We are building something that will be better than the best human thought has ever been.

𝟭𝟮. 𝗧𝗵𝗲 𝗯𝗲𝘀𝘁 𝗳𝗼𝘂𝗻𝗱𝗲𝗿𝘀 𝗮𝗿𝗲 𝗿𝗲𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝘄𝗵𝗮𝘁 𝗮 𝗰𝗼𝗺𝗽𝗮𝗻𝘆 𝗲𝘃𝗲𝗻 𝗶𝘀. Layer one: AI redefines products. Layer two: AI redefines jobs within companies. Layer three, which has not dropped yet: AI redefines the very concept of having a company. The holy grail is the one-person, billion-dollar outcome, and the best founders are chasing it. Satoshi did it with Bitcoin. Instagram and WhatsApp came close with tiny teams. The question is no longer if this is possible with software. The question is how many of these we will see in the next five years.

AI is the philosopher's stone. The question is whether you pick it up. The full podcast is worth your time. Link in replies.
91 replies · 569 retweets · 3K likes · 826K views
Draceau▐┛ @SMI_NapoleonIII
“Germany and France are too weak to rule Europe” - Minister of a nation that is actively leeching off France and Germany
TVP World@TVPWorld_com

“Germany and France today are too weak to rule Europe,” Polish Foreign Minister Radosław Sikorski [@radeksikorski] said in an interview for the German daily Spiegel. Sikorski added that “Europe must breathe with both lungs: the eastern and the western one,” and Poland – as the world’s 20th largest economy – aims to be the representative for the interests of Central and Eastern European countries, which joined the EU in 2004 or later.

213 replies · 283 retweets · 6.7K likes · 813.7K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤
Don’t AI-out yourself!
Guri Singh@heygurisingh

🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies, not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States. The results are worse than you think.

Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon, which also run search engines, social media platforms, e-commerce sites, and cloud services, your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that the researchers, people whose job is reading these documents, found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.

0 replies · 0 retweets · 0 likes · 24 views
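One of the three remedies the post says the researchers call for, automatic filtering of personal information from chat inputs before they reach a training pipeline, can at least be approximated on the user's side before a prompt is ever sent. A minimal sketch of that idea (the patterns and names below are illustrative assumptions, not anything from the Stanford paper; real PII detection needs more than regexes):

```python
import re

# Minimal sketch of client-side redaction of obvious personal identifiers before
# a prompt is sent to any chat service. The patterns below are illustrative
# assumptions, not the filtering the Stanford HAI paper proposes; robust PII
# detection needs entity recognition and context, not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "My cardiologist is dr.smith@example.com, call me at (415) 555-0100."
print(redact(prompt))
# => "My cardiologist is [EMAIL], call me at [PHONE]."
```

A crude filter like this obviously misses the harder case the post describes, where the model infers a health condition from an innocuous question, but it illustrates what "filtering before the training pipeline" would mean in practice.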
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Guri Singh @heygurisingh
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. (Full text quoted in the post above.)
329 replies · 3.9K retweets · 8.6K likes · 1.7M views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Bull Theory @BullTheoryio
This is a HISTORIC moment in AI. The US Pentagon wanted a $200M deal to use Anthropic’s Claude with zero restrictions, including mass surveillance of U.S. citizens and fully autonomous weapons. Anthropic said “NO” because those uses cross hard red lines on safety, ethics, and reliability. CEO Dario said: “We cannot in good conscience accede.” This led to an immediate federal ban on all Anthropic tech (a 6-month DoD phase-out) plus labeling them a “supply chain risk,” a designation usually reserved for adversarial foreign firms. It is AI ethics vs national security priorities.
Bull Theory@BullTheoryio

🇺🇸 PRESIDENT TRUMP JUST NOW: "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology" This is getting serious.

80 replies · 88 retweets · 803 likes · 194.2K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Psyche Wizard @PsycheWizard
[image]
6 replies · 113 retweets · 771 likes · 15.8K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
The Footy Section @FTBLsection
Miroslav Klose: "I stopped playing football because I no longer recognised it. Today, young players think about other things. As a child, I only thought about training and becoming someone in this sport that I always loved. At Lazio and in the national team, after each training session, I put myself in a bathtub full of ice to avoid injuries. But the young players on the team systematically refused. When they saw me picking up the bags of balls to put them away at the end of training, they said to me 'But who tells you to do that?' At that moment, I said to myself: 'You're 20 years old and you can't help a 60-year-old worker?' They care more about whether their boots go with their socks. That's why I said stop. The football I knew no longer exists. Today's young players think first of cars, contracts with their sponsors, and their new boots. It is only after all these things that football comes. For them, their image is the most important thing. Whereas for me, all that mattered was football in its purest form."
238 replies · 1K retweets · 7K likes · 416.1K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
🍂 @Lovandfear
— Albert Camus
[image]
130 replies · 3.2K retweets · 14.4K likes · 876.3K views
_gabrielShapir0 @lex_node
Am I wrong to read this blog as signaling that Base is unlikely to become a stage 2 rollup and is likely to become more L1-like, creating its own decentralization and economic security rather than needing to 'borrow' Ethereum's? (Lots of references to decentralization, new economic mechanisms, independent client diversity, etc.)
39 replies · 0 retweets · 66 likes · 8.4K views
XQTStrategy.bsky.social ⿻ 🚀⭐️👨‍🎤 retweeted
Barchart @Barchart
BREAKING 🚨: The World reaches its highest level of uncertainty in history, surpassing Covid, the Global Financial Crisis, and the Dot Com Bubble 👻🤯👀
724 replies · 2.2K retweets · 12.2K likes · 2.3M views