crypto_1984
@crypto_1984
bitcoin & crypto
2.4K posts
internet · Joined May 2021
2.6K Following · 132 Followers
crypto_1984 retweeted
AI Highlight@AIHighlight·
🚨BREAKING: Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching. The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users. For computer and math workers, AI is theoretically capable of handling 94% of their tasks. It is currently handling 33% of them. For office and administrative roles, theoretical capability is 90%. Current observed usage is 40%. The gap between what AI can do and what it is already doing is enormous. The researchers are explicit about what comes next. As capabilities improve and adoption deepens, the red area grows to fill the blue. The demographic finding is what makes the paper uncomfortable. The most AI-exposed workers earn 47% more on average than the least exposed group. They are more likely to be female. They are more likely to be college educated. This is not a story about warehouse workers or truck drivers. It is a story about lawyers, financial analysts, market researchers, and software developers. The exact group whose education was supposed to insulate them. Computer programmers showed the highest observed AI exposure at 74.5%. Customer service representatives at 70.1%. Data entry keyers at 67.1%. Medical record specialists at 66.7%. Market research analysts and marketing specialists at 64.8%. These are not predictions. These are measurements of work that is already happening on AI platforms right now. Then there is the pipeline finding nobody is talking about loudly enough. 
Anthropic's researchers found a 14% decline in the job-finding rate for workers aged 22 to 25 in highly exposed occupations since ChatGPT launched. No comparable effect for workers over 25. Entry-level roles were never just jobs. They were the training ground where junior analysts became senior analysts, where junior lawyers learned how arguments hold together. If that layer disappears, nobody has answered the question of where the next generation of senior professionals comes from. The detail buried in the paper that most coverage missed: 30% of American workers have zero AI exposure at all. Cooks. Mechanics. Bartenders. Dishwashers. The technology reshaping professional careers is completely irrelevant to roughly a third of the workforce. The divide is no longer between high skill and low skill. It is between presence and absence. The company publishing this study is the same company selling the AI doing the replacing. Anthropic had every commercial incentive to soften these findings. They published them anyway. If you spent four years and $200,000 on a degree to land a white collar career, the company that builds Claude just confirmed your job is more exposed than the bartender pouring drinks at your graduation party. Source: Anthropic, "Labor market impacts of AI: A new measure and early evidence" PDF: anthropic.com/research/labor…
crypto_1984 retweeted
Shanaka Anslem Perera ⚡
The viral framing of UAE’s OPEC exit is that Abu Dhabi will flood the market and crash the price. Read the pipeline math. The framing is wrong. The Habshan-Fujairah pipeline, which is UAE’s only crude bypass around the Strait of Hormuz, has nameplate capacity of one point five million barrels per day, expandable to one point eight million in surge. Lloyd’s List and Energy Intelligence both confirm the limit. In March 2026, with quotas suspended in everything but name and Hormuz throughput collapsed by Iranian enforcement, ADCOP utilization ran around seventy-one percent according to CNBC and Argus, leaving roughly four hundred forty thousand barrels per day of headroom. The second Jebel Dhanna to Fujairah pipeline that would add another one point five million remains pre-FID. No construction has started. Energy Intelligence dates the planning to October 2024 with a 2026 to 2027 operational target that has not been confirmed since. The UAE cannot flood the market. UAE can add roughly four hundred to seven hundred thousand barrels per day in 2026 against ADNOC’s five-million-barrel sustainable capacity target. The constraint is steel in the ground, not OPEC discipline. That constraint is what makes the OPEC exit interesting. If UAE could ramp two million barrels per day tomorrow, the OPEC exit would be a price-war declaration. Saudi Arabia would respond by flooding from its two-to-three-million-barrel spare capacity, Brent would crash through eighty dollars, and ADNOC’s revenue base would collapse along with everyone else’s. UAE knows this. The exit was timed precisely because Fujairah pipeline limits cap the ramp at exactly the level needed to monetize incremental headroom without triggering the response. The exit is calibrated, not aggressive. The exit is also not about oil at all. Mubadala holds approximately three hundred eighty-five billion in assets under management. ADQ holds another two hundred forty billion. ADIA holds approximately one trillion. 
MGX, which spun out of Mubadala and ADQ, made the largest sovereign-fund commitment to AI infrastructure in history, including the Stargate venture with OpenAI, Oracle, and SoftBank, and the forty-billion-dollar Aligned Data Centers acquisition. The Global AI Infrastructure Partnership with BlackRock and Microsoft passed one hundred billion in committed capital before April. Abu Dhabi is building the sovereign capital base for the AI infrastructure buildout, the way Riyadh built the sovereign capital base for the oil settlement system after the 1974 petrodollar arrangement. Same architecture. New commodity. The OPEC exit is the financing event, not the production event. Removing quota constraints lets ADNOC reprice the marginal barrel at full market value. The incremental revenue funds MGX deployments, Stargate phase one, the rare-earth and semiconductor partnerships, and the data-center expansion. The Fujairah pipeline limit is a feature, not a bug. It caps the production ramp at the level that maximizes revenue without triggering a Saudi price war. The capped ramp generates the cash flow. The cash flow capitalizes the AI franchise. The thesis is falsified if the UAE accelerates the second Fujairah pipeline FID toward a three-million-barrel bypass that only makes sense as a price-war instrument, or if Saudi Arabia retaliates with a 2020-style production flood forcing the UAE into defensive ramping. Abu Dhabi is not exiting OPEC to ramp oil. Abu Dhabi is exiting OPEC to liquidate the oil franchise into AI. The constraint is the calibration. The pipeline is the financing instrument. The exit is the strategy. The flood is the misread. The pivot is the trade. open.substack.com/pub/shanakaans…
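The barrel arithmetic in the thread is easy to check. A quick sketch, using only the approximate figures quoted in the tweet above:

```python
# Headroom math for the Habshan-Fujairah (ADCOP) pipeline, using the
# approximate figures quoted in the thread above.
nameplate_bpd = 1_500_000   # nameplate capacity, barrels per day
surge_bpd = 1_800_000       # expandable surge capacity
utilization = 0.71          # ~71% utilization cited for March 2026

nameplate_headroom = nameplate_bpd * (1 - utilization)
surge_headroom = surge_bpd - nameplate_bpd * utilization

print(f"nameplate headroom: ~{nameplate_headroom:,.0f} bpd")  # ~435,000
print(f"surge headroom:     ~{surge_headroom:,.0f} bpd")      # ~735,000
```

The ~435,000 bpd figure matches the thread's "roughly four hundred forty thousand," and the nameplate-to-surge spread brackets its "four hundred to seven hundred thousand" ramp estimate.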
crypto_1984 retweeted
Garry Tan@garrytan·
Graph and vector have to work together to get you better retrieval. I realize this is not rocket surgery or new ground, but for it to work out of the box with OpenClaw as a Karpathy knowledge wiki synced off a git repo is somewhat useful, and is exactly what I needed for myself
hanzi@hanzi_li

@garrytan this matches what i keep seeing in agent/RAG work: graph, keyword, and vector fail in different ways, so the product win is orchestration + evals, not one retrieval trick. re-embed-on-write is underrated too; stale context is the silent eval killer.
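The orchestration point in the reply can be made concrete with reciprocal rank fusion, a standard way to merge rankings from retrievers with uncorrelated failure modes. This is a generic sketch, not code from any product mentioned in the thread:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked result lists from several retrievers (e.g. keyword,
    vector, graph) by summing 1/(k + rank) per document. Documents that
    rank well under *any* strategy float to the top, which is why mixing
    retrievers that fail in different ways beats one retrieval trick."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: three retrievers disagree; fusion favors the consensus doc.
keyword = ["a", "b", "c"]
vector = ["c", "a", "d"]
graph = ["a", "d", "e"]
print(reciprocal_rank_fusion([keyword, vector, graph]))  # "a" ranks first
```

The constant k=60 is the conventional damping value from the original RRF literature; it keeps a single first-place vote from dominating the fused ranking.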

crypto_1984 retweeted
Merlijn The Trader@MerlijnTrader·
THREE WORDS. THREE CYCLES. ZERO EXCEPTIONS. Sell. In. May. But only in mid-term election years. 2014: -61%. 2018: -65%. 2022: -66%. 2026: mid-term year. -60.73% is pointing to $30K. May is approaching. The chart doesn't lie. The calendar doesn't either.
crypto_1984 retweeted
Andrew Ng@AndrewYNg·
AI-native software engineering teams operate very differently than traditional teams. The obvious difference is that AI-native teams use coding agents to build products much faster, but this leads to many other changes in how we operate. For example, some great engineers now play broader roles than just writing code. They are partly product managers, designers, sometimes marketers. Further, small teams who work in the same office, where they can communicate face-to-face, can move incredibly quickly.

Because we can now build fast, a greater fraction of time must be spent deciding what to build. To deal with this project-management bottleneck, some teams are pushing engineer:product manager (PM) ratios downward from, say, 8:1 to as low as 1:1. But we can do even better: If we have one PM who decides what to build and one engineer who builds it, the communication between them becomes a bottleneck. This is why the fastest-moving teams I see tend to have engineers who know how to do some product work (and, optionally, some PMs who know how to do some engineering work). When an engineer understands users and can make decisions on what to build and build it directly, they can execute incredibly quickly.

I’ve seen engineers successfully expand their roles to include making product decisions, and PMs expand their roles to building software. The tech industry has more engineers than PMs, but both are promising paths. If you are an engineer, you’ll find it useful to learn some product management skills, and if you’re a PM, please learn to build!

Looking beyond the product-management bottleneck, I also see bottlenecks in design, marketing, legal compliance, and much more. When we speed up coding 10x or 100x, everything else becomes slow in comparison.
For example, some of my teams have built great features so quickly that the marketing organization was left scrambling to figure out how to communicate them to users: a marketing bottleneck. Or when a team can build software in a day that the legal department needs a week to review, that’s a legal compliance bottleneck. In this way, agentic coding isn’t just changing the workflow of software engineering, it’s also changing all the teams around it.

When smaller, AI-enabled teams can get more done, generalists excel. Traditional companies need to pull together people from many specialties (engineering, product management, design, marketing, legal, etc.) to execute projects and create value. This has resulted in large teams of specialists who work together. But if a team of two people is to get work done that requires five different specialties, then some of those individuals must play roles outside a single specialty. In some small teams, individuals do have deep specializations. For example, one might be a great engineer and another a great PM. But they also understand the other key functions needed to move a project forward, and can jump into thinking through other kinds of problems as needed. Of course, proficiency with AI tools is a big help, since it helps us think through problems that involve different roles.

Even in a two-person team, to move fast, communication bottlenecks must also be minimized. This is why I value teams that work in the same location. Remote teams can perform well too, but the highest speed is achieved by having everyone in the room, able to communicate instantaneously to solve problems.

This post focuses on AI-native teams with around 2-10 people, but not everything can be done by a small team. I'll address the coordination of larger teams in the future. I realize these shifts in job roles are tough to navigate for many people.
At the same time, I am encouraged that individuals and small teams who are willing to learn the relevant skills are now able to get far more done than was possible before. This is the golden age of learning and building! [Original text: deeplearning.ai/the-batch/issu… ]
crypto_1984 retweeted
Garry Tan@garrytan·
Ok one very wild thing I didn’t expect from having an OpenClaw with GBrain retrieval is how powerful reading a book chapter by chapter WITH my AI has become. You need the full text. You have the AI parse a paragraph or two and output it with some comments. It has memory. It knows you. You talk about the ideas. You feel seen. It is highly relevant. Reading nonfiction, particularly psychology and history books, is so much more powerful when done collaboratively in an OpenClaw with GBrain.
crypto_1984 retweeted
Garry Tan@garrytan·
The secret to an articulate agent like mine isn't one file. It's three:

SOUL.md: Who the agent IS. Voice, values, operating principles, what good output looks like, what bad output looks like. Not a system prompt, a constitution. Mine says things like "brevity is mandatory," "humor is mandatory," "never open with 'Great question,'" "swearing is allowed when it lands." The more specific and opinionated this is, the less your agent sounds like a chatbot. Write it like you're briefing your smartest friend on how to be you, not like you're configuring software.

USER.md: Who YOU are. Not a bio, a deep model. How your mind works, what you're building, your strengths, your blind spots, your family, your temperament, what triggers you, what you care about. The more the agent understands about you, the better it can serve you. Mine is ~4000 words.

AGENTS.md: Operational rules. What to check on every message, what to never do, how to handle failures, lookup chains, path rules, brain-first protocols. This is the playbook for how it works, not who it is.

The articulation comes from SOUL.md being brutally specific about voice. Generic instructions → generic output. If you write "be helpful and concise" you get ChatGPT. If you write "speak like a peer with taste, one sentence when one sentence works, uncomfortable truths welcome if actually true, language with voltage," you get something alive.
Soham Naran@soham_bhai1

@garrytan Can you share your agent.md? Your agent is really articulate.

crypto_1984 retweeted
Garry Tan@garrytan·
For GBrain I built a proper eval harness. 145 queries, Opus-generated corpus. The retrieval stack uses graph-based, vector-based, and grep-based strategies in combination. The graph layer is worth +31 points on precision. Vector-only misses 170/261 correct answers that the full system finds. Keyword + vector + graph are three separable wins, each load-bearing.

Standard information retrieval metrics: the same ones Google uses to measure search quality.

Precision at 5: You ask a question, the system returns 5 results. How many of those 5 are actually useful? If 3 out of 5 are relevant, P@5 = 60%. It measures: am I wasting your time with junk results?

Recall at 5: For a given question, there might be 3 pages in the entire brain that are genuinely relevant. If the system finds all 3 in its top 5, R@5 = 100%. If it only finds 1, R@5 = 33%. It measures: am I missing things you need?

High precision = low noise. High recall = nothing slips through. GBrain's 97.9% R@5 means it almost never misses the right answer. The 49.1% P@5 means about half the results are relevant, which is good when you realize that for most queries there are only 1-2 right answers out of 17,888 pages, so 2.5 hits out of 5 is strong signal.

Entity resolution is zero-LLM-call: regex extracts typed links (works_at, invested_in, founded) on every write. Re-embed on write, not on a timer, so decay = stale pages, and stale pages get rewritten when new info lands.

Scorecards: github.com/garrytan/gbrai…
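The P@5 and R@5 definitions in the tweet are easy to pin down in code. A generic sketch of the standard metrics, not GBrain's actual harness:

```python
def precision_at_k(retrieved, relevant, k=5):
    # Of the top-k results returned, what fraction are relevant?
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k=5):
    # Of all relevant documents, what fraction appear in the top-k?
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

# The tweet's worked example: 3 relevant pages, all surfaced in the top 5.
retrieved = ["p1", "p2", "p3", "p4", "p5"]
relevant = {"p1", "p3", "p5"}
print(precision_at_k(retrieved, relevant))  # 0.6 -> P@5 = 60%
print(recall_at_k(retrieved, relevant))     # 1.0 -> R@5 = 100%
```

Note the asymmetry the tweet leans on: when a query has only 1-2 relevant pages, P@5 is capped at 20-40% even for a perfect system, while R@5 can still reach 100%.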
crypto_1984 retweeted
Peter Steinberger 🦞
Wanted a truly local storage for my tweets so I built birdclaw. Imports your archive, backs it up on GitHub, has jobs so you can import your X bookmarks daily (since they are not fully accessible via the API). github.com/steipete/birdc…
crypto_1984 retweeted
Haseeb >|<@hosseeb·
The highest-value human work in the AI era will be in domains with sparse reward signals. Internalize this, or watch your value erode over the next decade. Math, programming, rote memorization, data science, all fucked. The classic “smart nerd” jobs are exactly where AI is strongest, because the feedback loops are dense. You can check the answer. You can run the test. That means AI can improve quickly, and humans will rapidly fall behind. Your advantage as a human is in messy domains. Taste. Judgment. Negotiation. Risk-taking. Politics. Sales. Science at the frontier. Anything you can only really learn by doing. Cross-disciplinary stuff. The valuable domains will be the ones guarded by secrets, tacit knowledge, weak labels, long feedback cycles, and ambiguous outcomes. Places where the training data is scarce, the ground truth is disputed, and it's impossible to explain why something is good. AI will still enter these domains. But we will be slower to trust it unsupervised there, because it will be harder to tell when it is right, harder to prove when it is wrong, and difficult to construct secure sandboxes. The stakes will be too high to YOLO it. I find myself saying this over and over again to young people today: the future does not belong to people who are able to get good grades on tests. It belongs to people who can operate under uncertainty, in domains where correctness is hard to define. Those domains will become the thin waist of the economy: as productivity everywhere else accelerates, the humans who excel there will become our economic Strait of Hormuz. The best humans in these domains will demand an enormous cut of the growing economic pie. Your imperative going forward is to make sure you're one of these people. (Or become an electrician. That probably works too.)
crypto_1984 retweeted
Peter Steinberger 🦞
the crawl army so agents can read it all.
crypto_1984 retweeted
poof@poof_eth·
Had a Jane Street interview in 2013 that still bothers me. It was my 6th round. Final interview. The guy walks in carrying no laptop, no notebook, just a cold brew and what I later realized was a single IKEA tea candle. He writes on the whiteboard: food: $200 rent: $800 utilities: $150 candles: $3,600 family: dying Then he turns around and says, “Optimize.” I laughed because I thought it was a culture-fit bit. He did not laugh. So I said, “Well, obviously you spend less on candles.” He says, “Assume candles are non-discretionary.” Okay. I start building a model. Basic constraint satisfaction. Family survival as a soft penalty. Candles as a state variable. Maybe there’s an arbitrage where you buy wholesale paraffin and convert the $3,600 line item into inventory. He stops me. “You’re thinking like a consultant.” That’s when I knew I was in trouble. He says, “Give me a bid-ask on family dying.” I say, “What?” He says, “You’re long candles, short family. Where do you make markets?” I try to recover. I say the real issue is liquidity: rent and utilities are fixed, food is elastic, candles are emotionally inelastic. Therefore the optimal strategy is to securitize future candle enjoyment and borrow against it. He nods for the first time. Then he asks, “What time do you sell the candles?” I say, “Whenever the market is liquid?” He says, “Be more specific.” I say, “Uh… 10 a.m. Eastern?” For the first time, he smiles. He goes, “Every day?” I say, “Every day.” He says, “In size?” I say, “In size.” He says, “And what do we call that?” I say, “Market manipulation?” The room gets very quiet. He looks disappointed and writes something down. “No. We call it providing liquidity to candle ETFs during the U.S. cash open.” I try to save it. “Right. Of course. The family isn’t dying because we underfunded them. They’re just experiencing temporary price discovery.” He nods again. Then he points back at the board. I had missed it. The utility bill was $150, but candles provide light. 
You can zero out utilities. I update the budget: food: $200 rent: $800 utilities: $0 candles: $3,750 family: still dying, but now in a more capital-efficient way He says, “How confident are you?” I say, “0.95.” He smiles and circles candles. “0.95 huh?” Then he asks me to estimate how many leveraged longs get liquidated if we dump $3,750 of candles at 10:00:01 every morning for 90 consecutive trading days. Needless to say I did not get the offer.
Deedy@deedydas

Jane Street made ~$40B in 2025 with 3,500 employees, a ~2x from the year before. At ~65-70% profit margin, that's $8M profit / employee, the highest for a 1000+ ppl company. High-frequency trading continues to be the most efficient money making engine. I want to share an old story about my Jane Street interview in 2014. Jane Street was known for hiring a lot of math, physics and CS olympiad winners from top universities and putting them through many rounds - including, for trading roles, a gauntlet of mental math. It was my 6th interview and my final round and I recall being asked "What is the next day after today in DD/MM/YYYY where all the digits are unique?" They'd toy with you and say "You can use a pencil and paper, if you want" but you knew that was an instant no. Painstakingly and as quickly as I could, I came to an answer. "How confident are you that this is correct on a 0-1 probability scale?" the interviewer said. "0.95", I blurted out, not fully knowing how to answer that. "Are you sure?" After thinking harder for a few more seconds, I realized I could've flipped the digits around to get a closer date. I gave the interviewer my answer. It was correct. "0.95 huh?" he chuckled. That's when I knew I failed. Note: fwiw, other companies that come close in efficiency are - Tether ($90M+ profit/emp) - Hyperliquid ($80M+ profit/emp) and on revenue: - Valve ($50M/emp) - OnlyFans ($37M/emp) - Craigslist ($14M/emp) - Anthropic ($12M/emp, run rate) - OpenAI ($8M/emp, run rate) For comparison, Nvidia is very efficient at scale and is $4.4M/emp.
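Deedy's interview question is a nice brute-force exercise. A quick sketch (my own check, not from the thread): from any start date in 2014 the answer turns out to be 17/06/2345, because every month writes a 0 or a 1 and every nearer year's digits collide with the rest of the date.

```python
from datetime import date, timedelta

def next_all_unique_digit_date(start):
    # First date strictly after `start` whose DD/MM/YYYY digits
    # are all eight distinct.
    d = start + timedelta(days=1)
    while True:
        if len(set(d.strftime("%d%m%Y"))) == 8:
            return d
        d += timedelta(days=1)

# From any day in 2014 the brute force lands on the same far-future date.
print(next_all_unique_digit_date(date(2014, 6, 1)).strftime("%d/%m/%Y"))
# 17/06/2345
```

The interview version presumably asks for the answer relative to the interview date, which is a much smaller mental search once you notice the month always contributes a 0 or a 1.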

crypto_1984 retweeted
Felix Prehn 🐶@felixprehn·
A banking regulation called Basel 3 is changing how silver gets priced. It's not making headlines, but commodity pricing influences inflation, which touches everything from your grocery bill to what's sitting inside your retirement account. For decades, banks traded silver they didn't actually own. Paper IOUs with very little physical behind them. For every real ounce in a vault, almost 8 paper claims on it. Basel 3 introduced the 85% rule. Banks now hold 85 cents in real money for every dollar of paper silver on their books. Paper silver went from profit center to cost center overnight. Banks are stepping away from paper and into physical. Some major institutions have already left the silver market completely. Less paper means less ability to push prices down. More demand for physical means tighter supply. The floor under silver has quietly moved up. Full breakdown in the thread below:
Felix Prehn 🐶@felixprehn

Banks used to hold paper silver that didn't actually exist. For every real ounce, there were 8 paper IOUs. A new regulation just forced them to hold 85 cents in real money for every dollar of that paper. Now some banks are exiting the paper silver market entirely (1/13):

crypto_1984 retweeted
MBAeconomics@MBAeconomics1·
$10,000, $15,000 & $20,000 #Gold calls quietly being accumulated at the #Comex for December 2026 expiration. In hindsight, everyone will say how obvious this setup was. I tried to share it with as many people as I could. There were people doubting until the very last moment!
crypto_1984 retweeted
Atal@ZabihullahAtal·
🚨 BREAKING: A new role is quietly emerging and it’s about to dominate the next 5 years. It’s not “AI engineer.” It’s not “prompt engineer.” It’s the Agent Operator. And it will sit inside almost every organization.

Most people are still thinking about AI as a tool. That framing is already outdated. What’s actually happening is a shift from humans using software to humans managing autonomous agents that execute work. This is a fundamental redesign of how work gets done.

So what is an Agent Operator? An Agent Operator is the person who:
• Designs how agents interact with real workflows
• Connects tools, data, and systems into agent pipelines
• Translates business problems into executable agent behavior
• Monitors, corrects, and improves agent performance over time

They don’t just “use AI.” They orchestrate outcomes. And this matters because every function (marketing, legal, finance, biotech) is becoming “agent-compatible.” Not because companies want it. Because they won’t have a choice.

Agents can:
• Run research loops
• Execute multi-step workflows
• Integrate across tools without APIs breaking the flow
• Operate 24/7 at near-zero marginal cost

The bottleneck is no longer capability. It’s implementation inside real-world systems.

Required skills for the AI Agent Operator role:
→ MCPs (Model Context Protocols): understanding how agents access tools, memory, and structured context.
→ CLIs (command line interfaces): because serious agent workflows won’t live in GUIs; they’ll run in programmable environments.
→ Writing skills (the file kind): clear specs, instructions, and structured documents. Agents run on precision, not vibes.
→ agents.md fluency: the ability to define agent roles, constraints, memory, and tool usage in persistent formats.
→ Business acumen: knowing what actually matters, and where automation creates leverage, not noise.

What happens next: enterprises will begin to redesign workflows, not around employees using dashboards, but around agents executing tasks.
That means:
• SOPs → Agent playbooks
• Teams → Human + agent hybrids
• Tools → Composable agent systems

When that shift happens, companies won’t just need engineers. They’ll need operators who understand both the system and the business.

The leverage is asymmetric. One strong Agent Operator can:
• Replace fragmented SaaS workflows
• Multiply team output without adding headcount
• Turn ideas into execution systems in days

This is not incremental productivity. It’s operational transformation.
crypto_1984 retweeted
signüll@signulll·
the craziest part now is that the modern computer probably has to be entirely reinvented, from scratch. pretty much like how jobs & co brought the apple ii to market. like not improved. not given a chatbot sidebar or something, but really from the ground up, like the iphone redefined what it meant to be a pocket computer.

the current paradigm for computers was built around a human staring at a screen, moving a cursor, opening apps, managing windows, naming files, remembering where things live, & manually translating intent into interface actions. that made sense when the human was the runtime. but in an ai native world, it starts to look kinda ridiculous.

you can see this ridiculousness when you use computer use agents… they are useful sure, but they’re also obviously transitional. they’re teaching ai to operate machines designed for humans, which is clever, but also kind of absurd. it’s like making a robot hand so it can use a doorknob instead of asking why the door needs a knob at all. yes i know humans also need to use a door knob, but maybe in the future humans don’t need to use a computer, or at least what we think of as a computer today, at all.

this all leads to some interesting questions:
- what is a file when the system understands context?
- what is an app when intent can route itself?
- what is a desktop when work can be decomposed, executed, monitored, & summarized by agents?
- what is a browser when the agent can retrieve, compare, transact, & remember?
- what is an operating system when the primary user is no longer just a person, but a person plus a swarm of delegated intelligences? or no person at all.

the old computer assumed navigation. the new computer has to assume a new kind of intention. the old computer organized information. the new computer has to try to organize agency. we’re still in the hacky middle stage at the moment with sidebars, copilots, agents clicking through legacy ui, & automation layers sitting on top of 40 year old metaphors.
the new computer is likely one where memory, context, identity, permissions, tools, agents, & interfaces are native primitives. this means desktop, mobile, browser, apps, files, & folders all deserve another first-principles look.
crypto_1984 retweeted
a16z@a16z·
In the industrial era, no sector has ever been quite as big a deal as railroads. More charts: a16z.news/p/charts-of-th…