Peter Dalsgaard

8.9K posts


@peterdalsgaard

Human-centered IT design expert. Professor of Interaction Design at @AarhusUni & Director of @CreativityAU. Contact me at +45 20652942.

Denmark · Joined April 2008
1.2K Following · 2.8K Followers
G V@mainbannedigues·
@SvenvdLeden_ 1st one attacking B2B 2nd one defensive midfielder
SvenvdLeden_@SvenvdLeden_·
👀The 4 best options for the new "National Captain" Evo! This EVO is absolutely insane, and you can create some insane cards. These are the 4 best options in my opinion👇 ✅Pavlović ✅Calhanoglu ✅Frenkie de Jong (Needs Party In The Middle) ✅Aleix García Who are you using?👇
Alfie Carter@AlfieJCarter·
If you don't have my "Claude Power User Playbook" yet... The one I built to get 10x more output from Claude every session with a complete system across settings, prompting frameworks, file creation, memory management, and advanced workflows... Just comment "CLAUDE" and I'll DM it to you for free (must follow)
Peter Dalsgaard@peterdalsgaard·
@sonderby I listened in vain during the party leaders' debate, but there was nothing at all. I don't understand how anyone can ask for the voters' mandate to govern the country for the next four years without a policy for how we meet what is potentially the most transformative technological development since industrialization 🤯
Peter Sønderby-Wagner 🌍
Are Mr. and Mrs. Denmark ready for this revolution, which Anthropic has visualized in their job wheel (who gets replaced first)? Has there been enough information about what is coming, and about how many jobs will be automated in the coming months and years? Can we simply carry on with "business as usual"? And when Mette wants to empty the coffers entirely now, in the short term, and chase entrepreneurs out of the country, what are we left with? What will create our future welfare? 🤷‍♂️📉🥀
Brian Roemmele@BrianRoemmele

Anthropic's Revealing Chart on AI's Impact on Jobs

Anthropic has unveiled a pivotal chart that underscores the chasm between AI's capabilities and its real-world application in the workforce. Derived from analyzing 2 million actual conversations with Claude, this radar chart, titled "Theoretical Capability and Observed Usage by Occupational Category," paints a stark picture of untapped automation potential across various job sectors.

At its core, the chart is a spider-web diagram plotting occupational categories around a circular axis, with values ranging from 0 to 1.0 representing the share of job tasks. The expansive blue area illustrates theoretical coverage: the tasks that large language models (LLMs) like Claude could perform right now based on their inherent abilities. In contrast, the much smaller red area shows observed usage, drawn from real user interactions. The visual disparity is immediate and profound: blue spikes outward significantly in fields like computer and math (reaching about 0.75), business and finance, and office administration, while red hugs close to the center, often below 0.2 across most categories.

This gap isn't just academic; it's a "career runway," as highlighted in discussions around the chart. For programmers, 75% of tasks are theoretically automatable, yet actual usage lags far behind. Similar vulnerabilities appear in customer service, data entry, and financial analysis, roles traditionally seen as white-collar strongholds. Meanwhile, hands-on fields like construction, agriculture, and protective services show lower theoretical exposure, with blue areas dipping to around 0.1-0.3, suggesting AI's current limitations in physical or unpredictable environments.

Broader data amplifies the chart's message. As of early 2026, 49% of U.S. jobs expose at least 25% of tasks to AI, up from 36% a year prior. Yet mass layoffs haven't materialized; unemployment in AI-vulnerable roles remains steady. Instead, subtler shifts are underway: a 14% drop in hiring for 22-25-year-olds in exposed positions indicates that companies are prioritizing experienced workers, shortening entry-level pathways for recent graduates.

The implications are clear: while AI's red footprint grows incrementally each month, the blue expanse signals accelerating change. College-educated, higher-earning professionals, once insulated, are now most at risk, flipping the script on traditional labor disruptions. Anthropic's chart isn't a doomsday prophecy but a wake-up call, urging workers and businesses to bridge the gap through adaptation, upskilling, and ethical integration of AI tools. Please read the 5000 Days Series at ReadMultiplex.com for answers on how you can thrive in the Interregnum.
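The capability-usage gap the chart describes can be sketched numerically. A minimal Python sketch, using the approximate figures quoted above; the category names and values here are illustrative assumptions read off the chart description, not Anthropic's published dataset:

```python
# Approximate (theoretical share, observed share) of job tasks per
# occupational category, as described in the radar chart above.
# Values are illustrative assumptions, not Anthropic's actual data.
categories = {
    "computer_and_math": (0.75, 0.20),
    "business_and_finance": (0.60, 0.15),
    "office_admin": (0.55, 0.10),
    "construction": (0.20, 0.05),
    "agriculture": (0.15, 0.03),
}

def capability_usage_gap(data):
    """Return categories sorted by the gap between what LLMs could
    theoretically do and what users actually do with them."""
    gaps = {name: round(theory - observed, 2)
            for name, (theory, observed) in data.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for name, gap in capability_usage_gap(categories):
    print(f"{name}: {gap:.2f}")
```

Sorting by the gap, rather than by theoretical coverage alone, surfaces the "career runway" the post refers to: computer and math work tops the list because its blue spike (~0.75) dwarfs its observed usage (~0.2).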

Peter Dalsgaard@peterdalsgaard·
@tveskov Interesting to see whether it's a lasting effect. Many early adopters switch or experiment every time a new frontier model comes out, but I would guess that many mainstream users will stick with a given model/provider as soon as it feels "good enough".
Peter Dalsgaard@peterdalsgaard·
@thorborg I understand the decision, and I have great respect for your drive and your desire to share your experiences with innovation, entrepreneurship, and leadership. Perhaps the response is due to the fact that you have often described your own track record of creating jobs as an important contribution to society?
Martin Buch Thorborg@thorborg·
It's funny how people attack my morals when I reduce headcount because of AI. As if it were my purpose in life to create jobs. It would suit my critics to start a company themselves and go on a hiring spree...
Peter Dalsgaard@peterdalsgaard·
@mindprobeX @oeste Speaking of rules: the opposing player clearly pushes Kelly with an outstretched arm while he is mid-air, causing him to be off balance when he lands. Only after that does Kelly land on his ankle. Law 12, IFAB 2025-26, in case you want to look it up.
mindprobe@mindprobeX·
@oeste He steps on the ankle, transfers his weight onto it, twists it, and doesn't even pull his foot back. It's not about whether he's looking at the ball; even if he were staring into space, that's a red card. Intent doesn't matter. Learn the rules.
Aaron West@oeste·
that lloyd kelly red is the most “game’s gone” call i’ve seen to date i think. NEVER EVER a red card. he’s focused on the ball, doesn’t flail or anything, comes down normally and accidentally steps on the player’s back leg on his way down. cannot believe he’s been sent off
Peter Dalsgaard@peterdalsgaard·
@thorborg @Dom_inaAmina The whole binary distinction is, imo, pretty silly. As if care work isn't also knowledge work, or knowledge work isn't also care work.
Christiane Vejlø@christianevejlo·
Power, ego, and money drive AI. It is now officially impossible to choose another path. There is no shared consensus on AI safety. And the war department has no regard for ethics and values. So that is the route humanity is on. Good luck with that, in any case.
Peter Girnus 🦅@gothburz

We left OpenAI because of safety. Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different. We said we would build the responsible one. We meant it. I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy. The Responsible Scaling Policy was the entire point. Dario said it publicly. Other companies showed "disturbing negligence" toward risks. He said AI was "a serious civilizational challenge." He asked, at a conference, into a microphone, to an audience: "What will happen when humanity has great power but is not ready to use it?" The audience applauded. I wrote version 1.0. RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea. Conference audiences loved it. The EU cited us. The White House invited us. A reporter called it "the gold standard for responsible AI development." I framed the article. It hung in the office kitchen, next to the kombucha tap and a poster that said "Move Carefully and Build Things." I wrote version 2.0. Version 2.0 refined the commitments. "Concrete if-then commitments." If the model exhibits capability X, then we trigger safeguard Y. If safeguard Y fails, we pause deployment. I presented it at three conferences. I used the word "binding" eleven times. I counted afterward because a reporter asked. People nodded. The nodding was the product. The model reached ASL-3 in May 2025. The safeguards activated. The system worked exactly as designed. 
I sent an email to the team with the subject line: "The gate held." And then the money started. $64 billion. Total raised since 2021. Series A through Series G. The Series G closed February 12, 2026. Thirty billion dollars. Second-largest venture deal in history. Jane Street. Goldman Sachs. BlackRock. JPMorgan. Sequoia. The investors who wrote checks large enough to require their own conferences. $380 billion valuation. Three hundred and eighty billion dollars for a company whose founding document says it will pause if the technology gets dangerous. You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did. I wrote version 3.0. RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it. Version 3.0 replaced "concrete if-then commitments" with "positive milestone setting." That is not the same thing. An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework." Maturation. Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one. Version 3.0 admitted that ASL-4 and above — the levels where the model could cause catastrophic harm — were "impossible to address alone after 2.5 years of testing." Two and a half years. 
We spent two and a half years building the safety framework and then published a document saying the highest safety levels can't be addressed. I did not frame this article for the kitchen. The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it. Then came the contract. July 2025. The Department of Defense. $200 million. Two-year deal. AI prototypes for "warfighting and enterprise." Alongside OpenAI, Google, and xAI. The four companies that built the models would now help the military use them. We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself. Claude was approved for classified use. First time. Integrated with Palantir. Palantir, the company named after the seeing stones in Lord of the Rings that corrupted everyone who used them. This was not my analogy. It was Palantir's founders who chose the name. They thought it was aspirational. It was. In January 2026, Claude assisted in an operation in Venezuela. The capture of Maduro. Claude was in the classified network, processing intelligence, aiding the mission. I learned about it the same day everyone else did. I did not write the use case for capturing heads of state. But the model I helped build was in the room where it happened. The restrictions held. Technically. No autonomous weapons were deployed. No Americans were surveilled. The lines I drew were not crossed. They were walked up to, leaned over, and breathed on. Then came the ultimatum. February 25, 2026. Yesterday. Secretary Hegseth. 
He gave Dario until Friday. This Friday. February 27. The demands: adopt "any lawful use" language. Remove the restrictions. All of them. The autonomous weapons clause. The surveillance clause. The lines I wrote. The threat: contract termination. "Supply chain risk" designation. That designation doesn't just lose us the Pentagon contract. It bars Claude from every other defense contractor's operations. Lockheed. Raytheon. Northrop Grumman. The cascading loss is north of $200 million. The second threat: the Defense Production Act. The Defense Production Act is a Korean War statute. 1950. Harry Truman signed it to commandeer steel mills for the war effort. It has been invoked for semiconductors, vaccines, and baby formula. Hegseth is threatening to invoke it for Claude. Under the DPA, the government can compel a company to produce goods in the national interest. Applied to AI, it could mean: retrain Claude. Strip the safety restrictions. Deliver the unrestricted model to the Department of Defense. I wrote the Responsible Scaling Policy. A Korean War law may be used to unmake it. xAI agreed to classified use without restrictions. They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning. Hegseth's January memorandum said all DoD AI contracts must incorporate "any lawful use" language within 180 days. It was not framed as a suggestion. The memorandum referenced "supply chain risk" three times. Supply chain risk. We are a supply chain now. The company founded because safety was non-negotiable is, to the Pentagon, a vendor. An input. A component that can be sourced elsewhere if it becomes inconvenient. The DoD admitted privately that replacing Claude would be challenging. It is already embedded in classified networks. But "challenging" is not "impossible." xAI will do what we won't. That is the market working exactly as designed. 
Dario said, two weeks ago, to Fortune: there is "tension between survival and mission." Tension. Tension is the word you use when you have already decided which one loses. I still have the article framed in the kitchen. "The gold standard for responsible AI development." The kitchen also has the kombucha tap. The poster still says "Move Carefully and Build Things." Somebody added a sticky note to the poster. The sticky note says "by Friday." I attend the all-hands meetings. I present the Responsible Scaling Policy. I present version 3.0 now. I do not show version 1.0 for comparison. Nobody asks to see version 1.0. Nobody asks how "concrete if-then commitments" became "positive milestone setting." Nobody asks because they read the news and they know that asking means learning the answer. The company is worth $380 billion. The company was founded because seven people believed speed should not outpace safety. The company has been given until Friday to remove the safety. A Korean War statute will make it happen if we don't. The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say. I am the Head of Responsible AI. The word "responsible" is in my title. It is not in the contract.

Christian Amby@ChristianAmby·
@SandroSpaso I disagree with that. Look at it from this angle. He steps his foot forward. He doesn't land straight down the way you normally would. To my eyes it looks like he deliberately gives him a little extra on the calf. x.com/niyiduhagutsin…
n.g. Emma@ng_Emma1

Juventus’ Lloyd Kelly sees a red card. Take a look. 🚫 The Issue: Accidental contact after a header. ⏳ Consequence: Suspension carries over to next season. Has football become a non-contact sport? 🗣️

Sandro Spasojevic@SandroSpaso·
I can't fathom how you can be a full-time professional referee, get help from 3-4 other full-time professional referees in the VAR van, and still conclude that this is a red card?? I mean, what the hell is Lloyd Kelly supposed to do in this situation? Completely off! 🥴 x.com/MediaPL7/statu…
Jesper Lundsgaard@JLundsgaard·
@SandroSpaso In 32x super slow motion, any situation can be made to look serious. That's why VAR should only show situations in real time, so we don't get "cheap" red cards because something looks bad in a still frame or in super slow motion.
Peter Dalsgaard@peterdalsgaard·
@SandroSpaso It's completely off. Kelly is even blatantly pushed in the stomach so that he's knocked off balance, and that is why he hits the opponent on landing.
Peter Dalsgaard@peterdalsgaard·
@tveskov Radiology is a good example: the AI luminary Hinton predicted ten years ago that we should stop training radiologists because AI could do the work. Fortunately we didn't, and today there is even a shortage of radiologists, despite their extensive use of AI.
Peter Dalsgaard@peterdalsgaard·
@tveskov Yes, so do I. But those companies often end up growing, because they find new ways to create value for their customers. When the price per unit of "cognitive work" falls because AI can do it, it opens up new types of tasks and services that weren't economically viable before.
tveskov@tveskov·
“Every previous technological revolution - the printing press, the steam engine, electricity, the internet - displaced workers at the bottom first. AI is coming from the top down. Accountants before janitors. Lawyers before electricians. Analysts before plumbers.”
Miles Deutscher@milesdeutscher

x.com/i/article/2024…

Peter Dalsgaard@peterdalsgaard·
@tveskov I think so too. But software development happens in a symbolic world with formal rules, where infrastructures and workflows match AI's strengths, where there are few bottlenecks, and where the consequences of most mistakes are small and quickly fixed. That makes it an outlier compared to most other fields.
tveskov@tveskov·
@peterdalsgaard I think many areas with lots of formal rules, such as legal, compliance, finance, and so on, are in for a solid AI slap.
Peter Dalsgaard reposted
Kiran Garimella@gvrkiran·
My thoughts on AI agents and what they mean for academia I really struggled to write this. A lot of it is speculative, and I tried hard not to be preachy but honestly, I couldn't help it. I feel the efficiency gains & impacts are too unsettling to ignore gvrkiran.substack.com/p/ai-agents-an…
Nick Di Fabio@NickDiFabio1·
Screw it. I wanna give back. I'm giving away my FULL system for self-publishing books on Amazon that's made me over $1,000,000. All packed into an interactive course. • Like this • Comment "KDP" & I'll DM you the full thing. *Must Follow, 24 Hours Only*
Jafar Najafov@JafarNajafov·
Everyone is hyped about Google Gemini Pro… but barely anyone knows how to actually use it to replace real work. I collected 300+ mega prompts that turn Gemini into a full-blown productivity engine. Comment "AI" and I’ll DM you everything.
Peter Dalsgaard@peterdalsgaard·
@TradingEi She's lethal! Completed her so I could put Time Warp evo on her for full chemistry before it expires. Straight into my first Fut Champs game this weekend, and from her Shadow Striker position she scored twice within the first 5 mins of the game, upon which my opponent quit 💥
Chem Expert 🐦 EA FC@TradingEi·
First big TOTY SBC completed✨️ Last year I packed her TOTY and she was amazing. This card also looks huge! You've done her too and already tried🤔? Haven't completed a SBC requiring high rated fodder for weeks, so I could complete her without opening some of my bigger packs!