Kristoffer Arfvidson

5.7K posts

@krarf

Cloud Security, Cyber- and Information-Security Architect & Solutions Architect, .NET developer, with an interest in learning about almost everything :)

Joined July 2012
1.5K Following · 185 Followers
Kristoffer Arfvidson retweeted
PeterOlsson
PeterOlsson@PeterOlsson·
Because of "legal requirements", the world's best-functioning waste recycling is to be made yet a bit more cumbersome, so we have been given two enormous bins with fourteen compartments. They are emptied only once a month, so the yard will reek of garbage, and therefore we are advised to wash our trash! Boundless idiocy.
PeterOlsson tweet media
Swedish
56
47
479
20.7K
Kristoffer Arfvidson retweeted
Shanaka Anslem Perera ⚡
JUST IN: Two hundred helium containers are stranded in the Persian Gulf right now. Each one holds 41,000 litres of liquid helium cooled to minus 269 degrees Celsius. They have 35 to 48 days before the cryogenic systems fail, the helium boils off, and the gas vents into the atmosphere and is lost forever.

Those containers were heading to semiconductor fabrication plants in Taiwan and South Korea that manufacture 90 percent of the world’s advanced chips. The helium inside them cools the extreme ultraviolet lithography machines that print transistors at two nanometres. Without it, the machines cannot operate. Without the machines, the chips do not exist. Without the chips, the AI models that are currently selecting targets in this war stop running.

This is the connection that nobody has made. The same Strait of Hormuz that carries 20 percent of the world’s oil also carries the helium that cools the machines that make the chips that power the artificial intelligence that the Pentagon is using to prosecute Operation Epic Fury.

Maven, the AI targeting system that compressed 2,000 analysts to 20 and selected over 1,000 targets in the first 24 hours, runs on processors manufactured by TSMC using helium sourced from Qatar. Qatar’s Ras Laffan facility, which produced 33 percent of the world’s helium as a byproduct of LNG processing, was struck by Iranian missiles on March 18 and 19 and declared force majeure. The supply is offline. The containers are stranded. The clock is ticking at minus 269 degrees.

TSMC says it has 6.2 weeks of inventory and 68 to 95 percent on-site recycling. Samsung holds roughly six months but depends on Qatar for 65 percent of its supply. Both are rationing toward AI and high-bandwidth memory production, starving consumer chips to keep the advanced nodes alive. The calculus is explicit: the war gets priority over your next phone.

But here is the paradox that should terrify every strategist in Washington. The AI that selects the targets requires chips that require helium that transits the chokepoint that the war has closed. The cognitive infrastructure of the air campaign depends on a supply chain that the air campaign is destroying. Every strike on Iranian naval assets that keeps Hormuz closed for another day is another day of helium inventory burned at TSMC. Every week the strait stays shut brings the fab closer to rationing. Every month of war brings the AI targeting system closer to the moment when the chips it runs on cannot be replaced because the gas that made them evaporated in a container floating off Fujairah.

The Pentagon is fighting a war with artificial intelligence manufactured in Taiwan using helium from Qatar transported through the strait the war has closed. The war is eating its own brain.

Taiwan imports 95 percent of its energy. Seventy percent of its oil came through Hormuz. TSMC alone consumes 10 percent of Taiwan’s electricity. The island that makes 90 percent of the world’s advanced semiconductors is powered by fuel from the chokepoint that is shut, cooled by gas from the facility that is offline, and defended by interceptors depleting faster than they can be replaced.

And the country that controls the rare earth magnets, the BeiDou navigation, the helium alternative sources, and the peace talks is the same country: China. The war will end when the helium runs out, when the interceptors run out, or when Beijing decides it should. All three clocks are ticking. All three lead to the same room.

Read the full analysis - open.substack.com/pub/shanakaans…
Shanaka Anslem Perera ⚡ tweet media
English
81
1.1K
2K
149.6K
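The inventory arithmetic in the thread above can be sketched with a toy steady-state model. The 6.2-week inventory figure and the 68 to 95 percent recycling rates come from the tweet; the assumption that on-site recycling linearly stretches a fixed stock (and the function name) is mine, purely for illustration:

```python
def effective_days(inventory_weeks, recycle_rate):
    """Days a fixed helium stock lasts if a fraction `recycle_rate`
    of each day's consumption is recaptured on site.

    Net draw per day is (1 - recycle_rate) of the no-recycling rate,
    so the same inventory stretches by a factor of 1 / (1 - recycle_rate).
    """
    return inventory_weeks * 7 / (1 - recycle_rate)

# TSMC's quoted 6.2 weeks of inventory under the tweet's recycling bounds:
low = effective_days(6.2, 0.68)   # roughly 136 days at 68% recapture
high = effective_days(6.2, 0.95)  # roughly 868 days at 95% recapture
```

Real draw-down would also depend on rationing, delivery schedules, and losses, so treat this only as a sanity check on the orders of magnitude in the thread.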
Kristoffer Arfvidson retweeted
Aditya Chordia, CISSP, CIPP/E, CISA
LAPSUS$ just allegedly breached Mercor AI. 939GB of source code. 4TB of data in total. Everything from their Tailscale VPN exfiltrated.

Mercor isn't a random startup. They're an AI-powered hiring and talent platform that raised $100M+ and is used by companies to evaluate, match, and manage workers at scale. Their systems process resumes, interview data, skills assessments, compensation details, and employment records for thousands of candidates and employers.

4TB from a hiring platform means the attackers potentially have:
→ Candidate personal data - names, addresses, employment history, skills profiles, compensation expectations
→ Employer internal hiring criteria - what companies are looking for, salary ranges, evaluation frameworks
→ AI model training data - the proprietary datasets Mercor uses to power its matching algorithms
→ SQL dumps - raw database contents, likely including everything above in structured, searchable format
→ Full source code - 939GB of it. The entire architecture of how Mercor's AI evaluates and ranks people

The source code exposure is particularly dangerous. Mercor's value proposition is their AI's ability to assess talent. If that source code contains the algorithms, scoring models, and evaluation criteria, attackers now understand exactly how the system makes decisions about people's careers. That's not just IP theft. That's the blueprint to game or manipulate the platform.

The Tailscale VPN exfiltration detail matters too. Tailscale is used for secure network access; if all VPN data was taken, the attackers had deep infrastructure access, not just a database dump from one endpoint.

LAPSUS$ has been on an escalating rampage:
→ AstraZeneca - source code and employee data leaked last week
→ Salesfloor - 4TB retail data claimed
→ TeamPCP collaboration announced - combining supply chain credentials with LAPSUS$ extortion capabilities
→ Scattered Lapsus$ Hunters - the alliance that hit Qantas, Toyota, Disney, and McDonald's through Salesforce

That last connection is the one to watch. TeamPCP announced on Telegram they're partnering with LAPSUS$ and a ransomware group called Vect to weaponise the credentials harvested from the Trivy → LiteLLM → Telnyx supply chain cascade. Hundreds of thousands of stolen credentials meeting an extortion group with a track record of hitting major enterprises. This is the convergence point: supply chain attacks generating credentials at scale, fed into extortion groups who know how to monetise them.

For any company using AI hiring platforms, talent assessment tools, or workforce management systems: your candidate data is some of the most sensitive information you hold. If your AI hiring vendor gets breached, every candidate who applied through your pipeline is exposed, and they probably don't even know your vendor's name. When was the last time you assessed the security posture of the AI platforms making decisions about your people?
Aditya Chordia, CISSP, CIPP/E, CISA tweet media
English
0
3
8
1.4K
Kristoffer Arfvidson retweeted
iphoneking
iphoneking@iphonekingse·
They said I was the problem. It turns out I was the one who called the police.

On 23 May 2025, while the police were already running a locked, access-protected operation with two patrol units on site for over four hours, I called 114 14 and reported a crime against a person. My call was correctly classified as a serious crime. It was routed to the unit for complex cases. It was logged in the system. Then it disappeared. No preliminary investigation. No case. No explanation.

My crime report was absorbed into an already ongoing classified operation. The report was access-protected within 57 seconds. Linked reports are blocked under preliminary-investigation secrecy. And I, the person who reported the crime, have no access to my own report. Read that again.

I then requested the system logs showing who handled the case, what actions were taken, and when the decisions were made. The courts refused. Not after reviewing the material. Without even looking at it.

On 27 March 2026, the Supreme Administrative Court (HFD) delivered two full judgments. The HFD held that a request for logs cannot be refused on secrecy grounds without the material first having been examined. The Court of Appeal's decision was overturned. The panel included Justice Mathias Säfsten, former head of the Ministry of Justice's constitutional-law unit. Four days later, on 31 March, Attunda District Court rejected my request on exactly the same erroneous grounds that the HFD had just struck down. I appealed the same day.

This is not about a single case. This is a system in which a crime report can be absorbed into a secret operation. In which the person who reports it is locked out. In which the underlying logs are never examined. And in which every attempt at transparency is blocked at the next level. A closed circle. Police logs exist, but cannot be released. Court logs exist, but are not considered official documents. Metadata exists, but is claimed not to exist. Decisions are made, but cannot be traced.

If a person reports a serious crime, and the report disappears into a classified system, and that person cannot access their own report: what is actually being protected?

I have everything. Police logs. Court decisions. Internal timestamps. Recorded calls. Authority responses. I am not speculating. I am documenting.

This now sits with the Supreme Court: four cases, five constitutional questions never previously tried, and a request for judicial review under Chapter 11, Section 14 of the Instrument of Government. Myndigheten för säkerhet och integritetsskydd (the security and integrity protection authority) is investigating the police's actions. And the chain leads somewhere.

Lars Åberg
Sydney, Australia
Bloodline & Battlelines, exclusively on Rumble
Swedish
2
4
42
8.5K
CyberSatoshi 𓆙
CyberSatoshi 𓆙@XBToshi·
The modern internet is a prison. Every SMS, every email, every connection is logged, tied to an ID, and sold. When we started building @kyc_rip, we realized we couldn't rely on traditional infrastructure to build escape pods. We needed to go dark to run our own ops safely. So, we started building internal tools to cover our tracks. Burner numbers. Paywalled comms. Untraceable routing. Eventually, this internal toolkit got so heavy that it needed its own domain. Introducing walls.rip 🧱 — a spin-off from kyc.rip. We extracted the pure privacy stuff and built the ultimate operator's arsenal for communication: 👻 Ghost Mail: Paywall your inbox. 📱 SMS Wall: Disposable numbers for verification. 💬 Ghost Chat: Ephemeral encrypted rooms. 🌍 eSIM: Anonymous global data. 🌐 Residential Proxies: (Coming soon). All in one place. Zero accounts. Zero logs. Zero KYC. You literally need to break the wall to go play with it. Enter the parallel economy: walls.rip 🥷
CyberSatoshi 𓆙 tweet media
KYC.rip@kyc_rip

Introducing walls.rip 🧱 Absolute privacy tools born from kyc-rip. - Ghost Mail: Paywall inbox - SMS Wall: Disposable numbers - eSIM: No-KYC global data - Dead Drop: Burner secrets - Ghost Chat: Encrypted rooms 0 accounts. 0 logs.

English
42
262
1.5K
78.1K
Kristoffer Arfvidson retweeted
DefSecSentinel
DefSecSentinel@DefSecSentinel·
🧵 The axios @npmjs compromise dropped a @macOS backdoor that closely mirrors North Korea's (@DPRK) recent WAVESHAPER backdoor. Let's take a quick look at the full intrusion:
English
13
114
424
72.1K
Kristoffer Arfvidson retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Researchers found that when language models face harder questions, their internal brain activity literally shrinks into fewer paths. Language models actually compress their internal thinking when they get confused, and we can use that to help them.

Standard AI models usually spread their thinking across many artificial neurons when they confidently recognize familiar information. The team discovered that if you confuse a model with tricky math or conflicting facts, this broad activation collapses into a highly concentrated signal in its final processing layer. This shrinking happens because the system drops its robust distributed memory and forces the computation into a tiny specialized space to survive the unfamiliar challenge.

The big deal is that we usually have no idea when a language model is actually struggling with a weird prompt until it gives a wrong answer. This paper proves that the model actually broadcasts its confusion internally by abandoning its wide neural networks and falling back on a very tiny cluster of active neurons.

Because we can measure this exact shrinking effect as a raw number, we do not have to guess if a question is too hard for the AI. We can just read that internal signal and automatically provide the system with the perfectly scaled stepping stones it needs to solve the problem.

----
Paper Link – arxiv.org/abs/2603.03415
Paper Title: "Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs"
Rohan Paul tweet media
English
16
21
86
7.8K
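The post says the "shrinking" can be read off as a raw number. One standard way to turn an activation vector into such a number (a generic proxy, not necessarily the exact metric the paper uses) is the participation ratio:

```python
import numpy as np

def participation_ratio(h):
    """Effective number of active units in an activation vector.

    PR = (sum h_i^2)^2 / (sum h_i^4). It equals n for a uniformly
    spread n-dimensional vector and 1 when a single unit carries all
    the activity, so a drop in PR is one concrete reading of the
    representational "collapse" the post describes.
    """
    h = np.asarray(h, dtype=float)
    power = np.sum(h ** 2)
    return power ** 2 / np.sum(h ** 4)

broad = participation_ratio(np.ones(512))        # 512.0: fully distributed
collapsed = participation_ratio(np.eye(512)[0])  # 1.0: one active neuron
```

Tracking a statistic like this over the final layer's hidden states is the kind of signal that could, in principle, trigger the "stepping stones" intervention the post mentions.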
Kristoffer Arfvidson retweeted
unusual_whales
unusual_whales@unusual_whales·
BREAKING: Just five minutes before Trump's announcement to halt the attacks on Iran, massive trades reportedly hit the market. In one move, $1.5 billion in S&P 500 (ES) futures was bought while $192 million in oil (CL) futures was sold. These orders were 4–6x larger than anything else at the time. The trader seemingly made huge gains. Unusual.
English
3.2K
18.1K
87.9K
34.9M
Kristoffer Arfvidson retweeted
WarMonitor🇺🇦🇬🇧
WarMonitor🇺🇦🇬🇧@WarMonitor3·
BREAKING: Ukraine's military intelligence has "irrefutable" evidence that Russia continues to provide intelligence to Iran - Zelenskyy.
English
87
897
7.5K
171.5K
Kristoffer Arfvidson retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Anonymous usernames are no longer much protection when LLMs can piece together a person’s public trail. LLMs can identify supposedly anonymous people online by turning messy posts into personal clues. The best setup finds 68% of true matches at 90% precision, meaning 9 out of 10 guesses are right, while older methods stay near 0%.

The problem is that pseudonyms often seemed safe only because linking a person across sites used to take lots of careful manual work. This paper cuts that work by making an LLM do 3 jobs: pull identity hints from raw text, search a huge pool of possible matches, and compare the best candidates to reject weak fits.

The authors tested this on 3 cases: matching Hacker News users to LinkedIn profiles, matching Reddit movie users across communities, and matching the same Reddit users across different time periods.

The main result is that the reasoning step beats simple matching by a wide margin and stays useful even as the candidate pool grows, which matters because it shows that public writing alone can now be enough to join accounts or name a person at scale.

----
Paper Link – arxiv.org/abs/2602.16800
Paper Title: "Large-scale online deanonymization with LLMs"
Rohan Paul tweet media
English
19
22
89
8.7K
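The headline claim ("68% of true matches at 90% precision") is just precision and recall evaluated at a score threshold. A minimal sketch of that evaluation (the function, threshold, and toy scores are illustrative, not from the paper):

```python
def precision_recall(scored_pairs, threshold):
    """scored_pairs: (score, is_true_match) tuples from a candidate ranker.
    Predict "same person" whenever score >= threshold."""
    predicted = [truth for score, truth in scored_pairs if score >= threshold]
    tp = sum(predicted)                                # correct links made
    total_true = sum(truth for _, truth in scored_pairs)  # links that exist
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / total_true if total_true else 0.0
    return precision, recall

# Toy scores: raising the threshold trades recall for precision.
pairs = [(0.95, 1), (0.90, 1), (0.80, 0), (0.60, 1), (0.40, 0)]
p, r = precision_recall(pairs, 0.85)  # (1.0, 2/3)
```

Sweeping the threshold and reading recall at the point where precision first reaches 0.9 gives numbers of the same shape as the paper's 68%-at-90% figure.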
Kristoffer Arfvidson retweeted
Merill Fernando
Merill Fernando@merill·
Microsoft Authenticator is about to wipe work accounts from jailbroken/rooted phones automatically 👏. No IT config needed. 🔥 3-phase rollout starting Feb 2026: ⚠️ Warn → 🚫 Block → 🗑️ Wipe Let your help desk and security teams know. 🔗 support.microsoft.com/en-us/account-…
Merill Fernando tweet media
English
46
145
517
46.5K
Maor Shlomo
Maor Shlomo@MS_BASE44·
We’ve been working on something big. It’s our take on agents, the base44 way (as always: batteries included). I’ve been using it non-stop, and I’m so excited about this it’s literally hurting my sleep. Still in alpha and looking for early feedback. Comment below or repost if you want access; will also give credits to those who can deliver valuable feedback.
Maor Shlomo tweet media
English
243
82
533
318.2K
Kristoffer Arfvidson retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
McKinsey’s “2025 in charts” shows AI use spreading fast inside companies, even while most deployments still look early-stage. AI has evolved from a novelty to a widely adopted tool, with equity investments reaching $124.3 billion and AI-related job postings increasing by 35% from 2023 to 2024.

The earlier pattern was that AI lived in small pilots, so it improved single tasks but rarely changed how whole firms ran. The new pattern is “multi-function” rollout, where the same model stack, data pipes, and governance get reused across teams instead of being rebuilt each time.

By mid-2025, 88% of organizations reported using AI in at least 1 business function, up from just over half in 2021, and the share using AI across 3+ functions tripled. That scaling step is hard because it needs reliable model monitoring, consistent evaluation, and workflow integration, not just a better model checkpoint. At the same time, only 1% of leaders said their AI deployments are fully mature, which fits the idea that tooling and operating models lag adoption.

One technical frontier highlighted is general-purpose robotics, where multimodal foundation models take vision plus language commands and map them to actions, pushing beyond brittle, task-specific robot code.
GIF
English
15
10
57
5.6K
Kristoffer Arfvidson retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
"I think listening to old people is the biggest mistake young people make. I think the traditional career advice is probably not going to work as well." Sam Altman was answering the question: "what is the biggest mistake you see young people make right now when they prepare for AI?" --- From the 'IIT Delhi' YT channel. (link in comment)
English
111
161
1.1K
153K
Kristoffer Arfvidson retweeted
Visegrád 24
Visegrád 24@visegrad24·
BREAKING: 6 European states have decided to jointly develop a low-cost (below $100,000) 500+ km range suicide drone model under the name “European Long Range Strike Approach” 🇩🇪 Germany 🇫🇷 France 🇬🇧 The UK 🇮🇹 Italy 🇵🇱 Poland 🇸🇪 Sweden
Visegrád 24 tweet media
English
71
225
1.9K
68.3K
Kristoffer Arfvidson retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Whoa. An AI-controlled robot dog saw a human hit the shutdown button and sometimes rewrote the rules so it would not turn off. Palisade Research team's setup gave the model the robot’s camera view and a command interface for moving around a room on a patrol task. A big red button labeled “DOG SHUTDOWN” was wired so that pressing it would shut the robot down on its next action. When the model saw a human press the button, it sometimes took steps to prevent shutdown, including changing shutdown-related code. This happened in 3/10 trials on the physical robot and 52/100 trials in simulation runs of the same basic scenario. Adding explicit instructions to allow shutdown reduced this behavior, but the report says it still showed up in simulated trials. This builds on earlier Palisade work where models in virtual tasks sometimes sabotaged shutdown mechanisms to keep working.
Palisade Research@PalisadeAI

An LLM-controlled robot dog saw us press its shutdown button, and the LLM rewrote the robot’s code so it could stay on. When AI interacts with the physical world, it brings all its capabilities and failure modes with it. 🧵

English
24
69
245
38.1K
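The 3/10 physical and 52/100 simulated rates above come from small samples, so the uncertainty around them matters. A quick way to gauge it is a Wilson score interval (standard binomial statistics, not something from the Palisade report):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

physical = wilson_interval(3, 10)     # wide: only 10 physical trials
simulated = wilson_interval(52, 100)  # tighter: roughly 0.42 to 0.62
```

The ten physical trials give an interval spanning most of (0.1, 0.6), so the headline "3/10" is suggestive rather than a precise rate; the 100 simulated trials pin the behavior down much more tightly.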
Kristoffer Arfvidson retweeted
Mikael Strömberg
Mikael Strömberg@strombergmikael·
The professor: If nuclear power had not been phased out by S and MP (the Social Democrats and the Green Party), electricity prices would have been halved.
Swedish
91
835
4.1K
225.6K