d!giD0F (@DigiDOF) · 27.2K posts

Gen-X The Link Experienced A-trax to A.i. from The X-Men 2 Ox/Hex 2 Quantum Leap, late70s 80s - 2G 5G to π Learning the Learner. Philosophy i n i

Worlds Dimensions · Joined February 2011
6.8K Following · 685 Followers
d!giD0F reposted
Furkan Gözükara @FurkanGozukara
Absolute bombshell. A US Senator casually confirms the Iran War is underway and reveals Iran is successfully hacking American medical systems and targeting US satellites. A top General admits the US space enterprise is highly vulnerable. Trump's war is a total disaster.
52 replies · 2.2K reposts · 6.1K likes · 160.8K views
d!giD0F reposted
elvis @omarsar0
NEW AI report from Google. Every prior intelligence explosion in human history was social, not individual.

These authors make the case that the AI "singularity" framed as a single superintelligent mind bootstrapping to godlike intelligence is fundamentally wrong. This is directly relevant to anyone designing multi-agent systems.

They observe that frontier reasoning models like DeepSeek-R1 spontaneously develop internal "societies of thought," multi-agent debates among cognitive perspectives, through RL alone.

The path forward is human-AI configurations and agent institutions, not bigger monolithic oracles. This reframes AI scaling strategy from "build bigger models" to "compose richer social systems." It argues governance of AI agents should follow institutional design principles, checks and balances, role protocols, rather than individual alignment.

Paper: arxiv.org/abs/2603.20639

Learn to build effective AI agents in our academy: academy.dair.ai
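[Editor's sketch] The "multi-agent debate" pattern this post describes can be outlined as a simple loop: several role-bound perspectives take turns responding to a shared transcript. Everything here (the roles, the round structure, the lambda stand-ins for model calls) is invented for illustration; the paper's actual protocol may differ.

```python
# Hypothetical skeleton of a multi-agent debate: role-bound agents extend a
# shared transcript over several rounds. `respond` stands in for an LLM call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    respond: Callable[[str], str]  # placeholder for a real model call

def debate(question: str, agents: list[Agent], rounds: int = 2) -> str:
    """Run a fixed number of rounds; each agent sees the full transcript."""
    transcript = question
    for _ in range(rounds):
        for agent in agents:
            transcript += f"\n[{agent.role}] {agent.respond(transcript)}"
    return transcript

# Toy roles, purely illustrative:
agents = [
    Agent("skeptic", lambda t: "What evidence supports the last claim?"),
    Agent("synthesizer", lambda t: "Combining the points raised so far."),
]
log = debate("Is a monolithic oracle the right scaling target?", agents)
```

In a real system each `respond` would be a separate model call with its own role prompt; the point of the sketch is only the social structure, turns and roles, rather than a single monolithic completion.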
130 replies · 343 reposts · 1.7K likes · 192.8K views
d!giD0F reposted
Unredacted 🗽 @unredacted_org
We've completed our deployment of nearly 100 additional Tor exit relays (totaling 123). We now have nearly 500 CPU cores and 1TB of RAM dedicated to relaying traffic on the Tor network, a huge milestone for supporting Internet freedom. Real infrastructure, not vaporware. We've shared some pictures of our work and hope you enjoy the purple aesthetic, matching Tor's primary color.
Unredacted 🗽@unredacted_org

We're laying the wiring for anti-censorship infrastructure

59 replies · 218 reposts · 2.5K likes · 151K views
d!giD0F reposted
TFTC @TFTC21
A hacker group just compromised one of the most widely used security scanners in the world, and used it to steal half a million credentials from companies that trusted it to keep them safe.

On March 19, a threat actor group called TeamPCP injected credential-stealing malware into Trivy, a popular open-source vulnerability scanner maintained by Aqua Security. Trivy is used by thousands of companies to scan their code and infrastructure for security flaws. The attackers compromised 75 GitHub Action tags, the Trivy Docker images, and related CI/CD pipelines, meaning every company running automated security scans through Trivy was unknowingly executing the attackers' code.

The malware harvested SSH keys, cloud credentials, Kubernetes secrets, cryptocurrency wallets, and .env files from every environment it touched. The stolen data was encrypted and exfiltrated to attacker-controlled servers.

But the attack didn't stop there. Using credentials stolen from Trivy's CI/CD pipeline, TeamPCP then backdoored LiteLLM, a widely used Python framework for managing AI model APIs. Two malicious versions (1.82.7 and 1.82.8) were pushed to PyPI, the main Python package repository. The second version was designed to execute automatically on every Python process startup in the environment, no user interaction required. From there, it deployed privileged pods across entire Kubernetes clusters and installed persistent backdoors on every node.

The attackers also pushed compromised Docker images of Trivy (versions 0.69.4, 0.69.5, 0.69.6) to Docker Hub and compromised dozens of npm packages with a self-spreading worm called CanisterWorm. They even defaced 44 internal Aqua Security repositories in a scripted 2-minute burst, renaming them all with "TeamPCP Owns Aqua Security."

According to the International Cyber Digest, which is in direct contact with the attackers, TeamPCP claims to have exfiltrated 300 GB of compressed credentials and is actively working through them. The LiteLLM compromise alone reportedly yielded half a million stolen credentials. The group says it is currently extorting several multi-billion-dollar companies.

Each compromised environment yielded credentials that unlocked the next target. The pivot from CI/CD pipelines to production Python packages running in Kubernetes clusters was deliberate escalation. Security researchers say this campaign is "almost certainly not over."

This is what a modern supply chain attack looks like. The tools companies trust to secure their infrastructure become the attack vector. The irony is brutal: the security scanner was the vulnerability.
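[Editor's sketch] One immediate response to a disclosure like this is checking whether an environment has one of the named versions installed. A minimal check, using only the standard library and treating the version numbers from the post as illustrative rather than an authoritative indicator-of-compromise list:

```python
# Flag installed packages whose versions match the compromised releases
# named in the post above. The version set is illustrative only.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {
    "litellm": {"1.82.7", "1.82.8"},  # malicious PyPI releases per the post
}

def check(pkg: str) -> str:
    """Report whether an installed package matches a known-bad version."""
    try:
        v = version(pkg)
    except PackageNotFoundError:
        return f"{pkg}: not installed"
    status = "COMPROMISED" if v in COMPROMISED.get(pkg, set()) else "ok"
    return f"{pkg} {v}: {status}"

if __name__ == "__main__":
    for pkg in COMPROMISED:
        print(check(pkg))
```

Note this only inspects the current environment; it says nothing about container images, CI runners, or lockfiles, which the post describes as the actual infection paths.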
32 replies · 205 reposts · 702 likes · 74.1K views
d!giD0F reposted
Elon Musk @elonmusk
Caveat emptor
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
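[Editor's sketch] The transitive-dependency risk described here can be made concrete by inverting an environment's dependency graph: for a given package, which installed packages would pull it in? A standard-library sketch (the requirement-string parsing is simplified and illustrative):

```python
# Build a reverse dependency map of the current environment, so you can ask
# "what would pull in package X if X were poisoned?".
from importlib.metadata import distributions
from collections import defaultdict
import re

def reverse_deps() -> dict[str, set[str]]:
    """Map each required package name -> installed packages that require it."""
    rdeps: defaultdict[str, set[str]] = defaultdict(set)
    for dist in distributions():
        name = dist.metadata["Name"]
        for req in dist.requires or []:
            # Requirement strings look like "litellm>=1.64.0; extra == 'x'";
            # keep only the leading distribution name (simplified parse).
            dep = re.split(r"[ ;<>=!~\[(]", req, maxsplit=1)[0].lower()
            if dep:
                rdeps[dep].add(name)
    return dict(rdeps)

# Example: which installed packages depend on "litellm"? (In the post's
# example, dspy depended on litellm>=1.64.0.)
dependents = reverse_deps().get("litellm", set())
```

This only sees one environment and direct edges; walking the map recursively would give the full transitive blast radius the post is worried about.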

1.9K replies · 3.3K reposts · 30.5K likes · 55.6M views
d!giD0F reposted
Robert Youssef @rryssf_
Your AI has been quietly forgetting everything you told it. Not randomly. Not loudly. Systematically. Starting with the decisions that matter most.

The constraint you set three months ago: "never use Redis, the client vetoed it after a production incident." Gone. The GDPR deployment region restriction. Gone. The retry limit you tested empirically after the cascade failure. Gone.

The model never told you. It just started using defaults.

This is called context rot. And Cambridge and independent researchers just quantified exactly how bad it is.

Every production AI system that runs long enough will eventually compress its context to make room for new information. That compression is catastrophically lossy. They tested it directly: 2,000 facts compressed at 36.7× left 60% of the knowledge base permanently irrecoverable. Not hallucinated. Not wrong. Just gone. The model honestly reported it didn't have the information anymore.

Then they tested something worse. They embedded 20 real project constraints into an 88-turn conversation, the kind of constraints that emerge naturally in any long-running project, then applied cascading compression exactly like production systems do. After one round: 91% preserved. After two rounds: 62%. After three rounds: 46%.

The model kept working with full confidence the entire time. Generating outputs that violated the forgotten constraints. No error signal. No warning. Just silent reversion to reasonable defaults that happened to be wrong for your specific situation.

They tested this across four frontier models: Claude Sonnet 4.5, Claude Sonnet 4.6, Opus, GPT-5.4. Every single one collapsed under compression. This isn't a model problem. It's architectural.

→ 60% of facts permanently lost after a single compression pass
→ 54% of project constraints gone after three rounds of cascading compression
→ GPT-5.4 dropped to 0% accuracy at just 2× compression
→ Even Opus retained only 5% of facts at 20× compression
→ In-context memory costs $14,201/year at 7,000 facts vs $56/year for the alternative

The AI labs know this. Their solution is bigger context windows. A 10M-token window is a larger bucket. It's still a bucket. Compaction is inevitable for any long-running system. The window size only determines when the forgetting starts, not whether it happens.
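[Editor's sketch] The cascading-compression arithmetic above can be made concrete with a toy model: treat each compression round as an independent chance for any given fact to survive, with per-round survival rates back-solved from the post's cumulative retention figures (91% → 62% → 46%). This is purely illustrative; the cited study's actual mechanism is semantic summarization, not random dropout.

```python
# Toy simulation of cascading context compression over 20 constraints.
# Per-round survival rates are derived from the post's retention figures.
import random

reported = [0.91, 0.62, 0.46]  # cumulative retention after rounds 1..3
per_round = [reported[0]] + [
    reported[i] / reported[i - 1] for i in range(1, len(reported))
]

random.seed(0)
facts = set(range(20))  # the 20 embedded project constraints
for p in per_round:
    # each surviving fact independently survives this round with prob. p
    facts = {f for f in facts if random.random() < p}

print(f"{len(facts)} of 20 constraints survive after 3 rounds")
```

The back-solved rates (≈0.91, ≈0.68, ≈0.74 per round) show the striking part of the numbers: no single round looks catastrophic, yet the compounding product loses over half the constraints.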
28 replies · 56 reposts · 220 likes · 11.8K views
d!giD0F reposted
Shanaka Anslem Perera ⚡ @shanaka86
Your paracetamol is made from oil. The phenol comes from a cumene process that starts with naphtha. The naphtha comes from a refinery. The refinery’s feedstock transits the Strait of Hormuz. Ninety-nine percent of pharmaceutical feedstocks, solvents, reagents, and packaging are petrochemical-derived. The American Gas Association confirmed it. The medicine cabinet is the sixth layer of the Hormuz crisis and nobody is talking about it.

The war started with uranium. It moved to oil. Then fertiliser. Then water. Then plastic. Now medicine.

Paracetamol is 100 percent petrochemical. Phenol from cumene, converted to para-aminophenol, then acetylated. Ibuprofen is 100 percent petrochemical. Isobutylbenzene plus propionic acid derivatives. Metformin, the most prescribed diabetes drug on Earth, is 80 to 90 percent petrochemical. Dicyandiamide from natural gas derivatives. Antibiotics like amoxicillin and ciprofloxacin require methanol, acetone, and dichloromethane as solvents for extraction and crystallisation. Oncology drugs need cold-chain energy and plastic packaging. Every blister pack, every pill bottle, every syringe is PE, PP, or PET from Gulf naphtha.

India makes 40 to 47 percent of American generic medicines by volume. It imports $4.35 billion in active pharmaceutical ingredients annually, 74 percent from China. But the critical precursors, the methanol and ethylene glycol that feed Indian API synthesis, are 87.7 percent and roughly 100 percent Hormuz-dependent respectively. The Indian government has prioritised household LPG over industrial petrochemical feedstock, starving the downstream pharmaceutical chain. API costs have surged 30 percent in the last two weeks. The typical buffer is two to three months of inventory. The war is nineteen days old. The clock started before the buffer was designed for this scenario.

A diabetic in Ohio takes metformin every morning. The dicyandiamide that becomes the active ingredient traces back through a Chinese intermediate to a natural gas derivative that originated in the Gulf. The methanol used to crystallise the compound in a Hyderabad factory was shipped from a terminal that now sits behind the same strait controlled by provincial commanders with sealed orders. The blister pack was moulded from polyethylene derived from naphtha that loaded at a facility the IRGC published satellite targeting images of yesterday. One pill. Four petrochemical dependencies. One chokepoint.

The farmer in Iowa cannot plant corn because nitrogen costs $610. The diabetic in Ohio may not be able to fill a prescription because methanol costs whatever the strait permits. Both crises trace to the same 21 miles of water. Both are governed by the same sealed packets. Both operate on biological clocks that do not negotiate with doctrine.

Nitrogen decides whether the food grows. Methanol decides whether the medicine is synthesised. Polyethylene decides whether it reaches the shelf in a blister pack. Energy decides whether the cold chain holds for oncology and biologics. Every molecule in the pharmaceutical supply chain is now compromised by the same chokepoint that trapped the fertiliser, the gas, the plastic, and the water.

Europe said Iran is not their war. Their existing drug shortages, 400 to 1,500 medicines depending on the country, will deepen regardless. Bangladesh, Egypt, and sub-Saharan Africa depend on Indian generics for infectious disease and maternal health. The API depletion clock runs for everyone.

The strait does not distinguish between a urea molecule and a methanol molecule. Both are gated. Both are biological. And both determine whether human beings survive the next quarter.

Full analysis - open.substack.com/pub/shanakaans…
Shanaka Anslem Perera ⚡@shanaka86

Your paracetamol is 100 percent petrochemical. Phenol from the cumene process, converted to p-aminophenol, acetylated to the tablet in your bathroom cabinet. Your ibuprofen is 100 percent petrochemical. Isobutylbenzene and propionic acid derivatives. Your metformin, the most prescribed diabetes drug on Earth, is 80 to 90 percent petrochemical. Dicyandiamide from natural gas derivatives.

The naphtha that makes these drugs transits the Strait of Hormuz. The strait is mined, uninsured, and unescorted. The war just reached the medicine cabinet. Nobody is covering this.

Ninety-nine percent of pharmaceutical feedstocks and reagents are petrochemical-derived according to the American Gas Association. Not 50 percent. Not 70. Ninety-nine. The pills are made of oil. The same oil the same strait carries. The same naphtha that becomes polyethylene for a bread bag becomes phenol for a paracetamol tablet. When the petrochemical cracker shuts, both products vanish.

The crackers are shutting. Chandra Asri declared force majeure on March 3rd. Yeochun NCC on March 4th. PCS Singapore on March 5. CNOOC-Shell Huizhou is planning shutdown of its 1.2-million-tonne facility. These are not contained within the plastics industry. They cascade into pharmaceuticals because the feedstocks are identical.

India is the pressure point. Twenty percent of the world’s generic drugs. Forty percent of US generic demand. And India’s methanol supply, a key solvent in API manufacturing, has 87.7 percent exposure to the Hormuz corridor. The Indian government has prioritised household LPG over industrial petrochemical feedstock, starving downstream pharmaceutical supply chains of the naphtha derivatives they need. Indian pharma companies hold three to six months of finished product stock. The buffer exists. It is depleting at an accelerating rate as raw material pipelines empty.

The Serum Institute of India, the world’s largest vaccine manufacturer supplying 40 to 50 percent of global doses in key categories, runs on the same petrochemical chain. mRNA vaccines require petrochemical-derived lipid nanoparticles and solvents. Traditional vaccines use petrochemical intermediates for adjuvants and stabilisers. Every vial is plastic. Every syringe is plastic. Every cold-chain packaging film is plastic. The force majeures that shut the crackers are not just a packaging story. They are a vaccine story.

The developing world’s access to affordable antibiotics, diabetes medication, cardiovascular drugs, and childhood vaccines runs through Indian manufacturing plants that run on petrochemical feedstocks that run through a 21-mile waterway currently seeded with Iranian mines.

This is the fourth domino. The first was energy. The second was fertiliser. The third was packaging. The fourth is the one that converts an economic crisis into a humanitarian one, because you can find an alternative bread wrapper. You cannot find an alternative to metformin for 537 million diabetics worldwide. You cannot find an alternative to amoxicillin for a child with pneumonia. You cannot find an alternative to the vaccines that prevent diseases we spent decades eliminating.

The Fed meets tomorrow to assess inflation driven by energy, fertiliser, packaging, and now pharmaceutical inputs. All repricing through the same chokepoint. Four dominoes. One strait. And the fourth, the medicine, is the one the market has not priced because it does not appear on any commodity index. It appears on a doctor’s prescription.

Full analysis: open.substack.com/pub/shanakaans…

81 replies · 1.8K reposts · 3.9K likes · 787.6K views
d!giD0F reposted
Roger Marques @rogeriomarquest
@markgadala Once humans turned their thinking to catch Pokémon in the hope that they would become the very best like no one ever was. But that only permitted others with machines to profit handsomely by sending delivery robots to them.
5 replies · 110 reposts · 827 likes · 127.1K views
d!giD0F reposted
Mark Gadala-Maria @markgadala
This is wild. 143 million people thought they were catching Pokémon. They were actually building one of the largest real-world visual datasets in AI history.

Niantic just disclosed that photos and AR scans collected through Pokémon Go have produced a dataset of over 30 billion real-world images. The company is now using that data to power visual navigation AI for delivery robots.

Players didn't just walk around with their phones. They scanned landmarks, storefronts, parks, and sidewalks from every angle, at every time of day, in lighting and weather conditions that staged photography would never capture. They documented the physical world at a scale no mapping company with a fleet of vehicles could have replicated on the same timeline or budget. Niantic collected this systematically, data point by data point, across eight years, while users thought the only thing at stake was catching a rare Charizard.

The most valuable AI training datasets in the world aren't being assembled in data centers. They're being built by people who have no idea they're building them.
NewsForce@Newsforce

POKÉMON GO PLAYERS TRAINED 30 BILLION IMAGE AI MAP

Niantic says photos and scans collected through Pokémon Go and its AR apps have produced a massive dataset of more than 30 billion real-world images. The company is now using that data to power visual navigation for delivery robots, letting them identify exact locations on city streets without relying on GPS.

Source: NewsForce

2.2K replies · 24.2K reposts · 106.8K likes · 14M views
d!giD0F reposted
Alex Prompter @alex_prompter
🚨 BREAKING: Berkeley just proved that AI doesn’t save you time. It makes you work MORE.

Researchers Aruna Ranganathan and Xingqi Maggie Ye from Berkeley’s Haas School of Business spent 8 months embedded inside a 200-person tech company. Twice-weekly observations. 40+ deep interviews across engineering, product, design, and operations. This wasn’t a survey. They watched what actually happens when a company gives everyone AI tools and says “go.”

What they found contradicts everything AI vendors have been selling you. Employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day. Nobody asked them to. The company didn’t even mandate AI use. People just voluntarily did more because AI made “doing more” feel possible.

One employee put it perfectly: “You had thought that maybe you save some time, you can work less. But then really, you don’t work less.” That quote should be taped to every monitor running Cursor, Claude, or ChatGPT right now. And a 2024 Upwork study backs it up: 77% of employees using AI said the tools had actually INCREASED their workload. Nearly half didn’t even know how to achieve the productivity gains their employers expected.

The researchers found 3 patterns destroying work-life balance.

First, task expansion. Product managers started writing code. Researchers took on engineering work. The scope of “my job” widened because AI made everything feel doable. Hiring got postponed because employees absorbed work that would have justified new headcount.

Second, blurred boundaries. Workers sent prompts during lunch, before meetings, at 9pm. AI dropped the friction of starting any task to near zero, and natural stopping points in the workday just dissolved.

Third, cognitive overload. People ran multiple AI agents simultaneously while reviewing code, drafting docs, and sitting in meetings. Both human and machine constantly in motion.

Here’s the cycle that traps you. AI speeds up a task → expectations for speed rise → you rely more on AI → you take on wider scope → workload intensifies → repeat. The researchers call it “workload creep.” No manager told anyone to work harder. The tools just made doing more feel accessible and rewarding. So people kept going until they couldn’t.

The most dangerous part: in the moment, it feels amazing. Workers described momentum, expanded capability, the thrill of building things they never could before. But when they stepped back and looked at the full picture, they felt busier, more stretched, unable to disconnect. By month 6 of the study, reports of burnout, anxiety, and decision paralysis had spiked. Short-term momentum. Long-term strain.

There’s also a competitive dynamic nobody talks about. When your colleague uses AI to take on extra responsibilities, standing still feels like falling behind. Nobody formally raises expectations. But informal norms shift fast. Within months, doing what AI makes possible becomes what’s expected. The people who set healthy boundaries start looking like underperformers. That’s a toxic dynamic where sustainable work becomes career-limiting.

The researchers propose something they call “AI Practice.” Not “use AI more” or “use AI less.” Intentional habits. Structured reflection intervals built into workflows, not “take breaks when you need to” because nobody does. Scheduled reviews where teams assess if AI-enabled expansion has crossed sustainable limits. Clear guidelines on when NOT to use AI and which tasks shouldn’t expand just because they can.

I felt this in my own workflow. AI gives you superpowers. But superpowers without discipline just mean you never stop working. The fix isn’t to stop using AI. It’s to stop letting AI decide how much work you do. Set the scope BEFORE you prompt. Define “done” BEFORE the tool makes everything feel possible.
119 replies · 612 reposts · 2K likes · 175.1K views
d!giD0F reposted
U.S. Central Command @CENTCOM
The Iranian regime's reckless use and proliferation of ballistic missiles have been a dangerous threat for decades. Now, at the President's direction, U.S. forces are eliminating the threat.
1.2K replies · 4.5K reposts · 30K likes · 1M views
d!giD0F reposted
Massimo @Rainmaker1973
How to make your own earth ground tester circuit
5 replies · 26 reposts · 195 likes · 37.4K views
d!giD0F reposted
Documenting Saylor @saylordocs
Charlie Munger’s 1998 Harvard speech is the ultimate cheat code for life. He compressed 74 years of billionaire wisdom into just 30 minutes. Most people spend 4 years in college and learn less than what’s in this video. Save this video, you will come back to this.
58 replies · 2.1K reposts · 6.9K likes · 758.4K views
d!giD0F @DigiDOF
We arrre alll beings trained #ai #LLMs
Alex Prompter@alex_prompter

🚨 Holy shit… Stanford just exposed that every major AI company is using your private conversations to train their models by default.

They analyzed the privacy policies of OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon. Reviewed 28 separate documents across all 6 companies. The findings are worrisome. Every prompt you type. Every file you upload. Every personal detail you share. All of it feeds directly into model training the moment you hit send. That health question you asked ChatGPT at 2am? Training data. Legal situation you described to Claude? Training data. The photo you uploaded to Gemini? Training data.

Some companies retain your conversations INDEFINITELY. Amazon, Meta, and OpenAI have no confirmed deletion timeline for certain chat data. Your most private conversations could sit on their servers forever.

It gets worse for kids. Four out of six companies allow children aged 13-18 to use their chatbots, and most don’t treat children’s data any differently. Kids’ conversations are likely getting fed into model training by default. Kids who can’t legally consent to it.

Here’s something most people missed: enterprise customers are opted OUT of training by default. You, the consumer paying $20/month? Opted IN. Companies paying thousands? Protected automatically. There’s a two-tiered privacy system and you’re on the wrong side of it.

OpenAI even frames the opt-in with guilt. Their settings page says “Improve the model for everyone.” Stanford’s researchers flagged this as a textbook dark pattern designed to make you feel bad for protecting your own data.

Meta’s contractors told reporters they routinely see identifiable personal information in the chat data they review. Journalists were able to positively identify at least one real person from chat transcripts shared with them.

The privacy policies themselves? Stanford had to dig through 6 separate documents just for OpenAI alone. Most real disclosures were buried in sub-policies no normal person would ever find. The researchers said it was challenging for THEM to piece it together. For consumers? “Practically impossible.”

Only Microsoft explicitly stated they try to remove personal data like names, phone numbers, and addresses before training. The rest are either vague about it or completely silent.

0 replies · 0 reposts · 0 likes · 7 views
🧬Maxpein🧬 @maximumpain333
LOOK AT A HUMAN CELL UNDER A MICROSCOPE AND YOU’RE SEEING A BIO-ELECTRIC QUANTUM COMPUTER RUNNING ON DIVINE MATHEMATICS. You operate on electricity. Your neurons fire at 70 millivolts. Your heart generates electromagnetic fields detectable 3 feet away. Your mitochondria produce ATP through electron transfer chains—literal bioelectricity powering 37 trillion cells processing 400 billion bits of information per second. Your DNA isn’t just genetic code—it’s a crystalline antenna. Russian researchers found DNA can be reprogrammed by words and frequencies. Epigenetics proves thoughts and environment change genetic expression without altering DNA sequence. You’re reprogramming yourself constantly. Quantum biology reveals your body operates beyond classical physics. Electrons tunnel through barriers during cellular respiration. Enzymes use quantum mechanics to speed reactions. Your sense of smell may detect molecular vibrations through quantum effects. Every breath creates cascading quantum events through trillions of cells. Your physical state determines consciousness. Chronic inflammation clouds cognition. Dehydration impairs decision-making. Gut bacteria influence mood through the vagal nerve. Poor mitochondrial function reduces cellular energy, limiting your capacity to process reality. Optimal biology creates optimal perception. When cells function at peak bioelectric efficiency, you access higher cognitive states, sharper awareness, better intuition. Treat your body like the quantum computing vessel it is. How you use your vessel determines which timeline you experience. The choice is yours. ✨🙌🏾💫
70 replies · 723 reposts · 2.5K likes · 68.2K views
d!giD0F @DigiDOF
@XFreeze Did Grok predict or did we listen to Grok? Don't worry about the vase, what vase? The broken vase. Would you have broken the vase if I hadn't said anything...
0 replies · 0 reposts · 0 likes · 7 views
X Freeze @XFreeze
Grok predicted the future accurately 🤯

On Feb 28 - the exact date Grok predicted - Israel & the US struck Iran.

This wasn't a lucky guess. When pushed to predict, Grok analyzed geopolitical signals, Geneva talk outcomes, and real-time data to pinpoint the day.

Grok knows what the world thinks
2.1K replies · 2.3K reposts · 13.9K likes · 32.8M views