Peter Dalsgaard
@peterdalsgaard · 8.9K posts

Human-centered IT design expert. Professor of Interaction Design at @AarhusUni & Director of @CreativityAU. Contact me at +45 20652942.

Denmark · Joined April 2008 · 1.2K Following · 2.8K Followers
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@tveskov Yes, I see many people uncritically passing on this and similar tweets, writing about things the study isn't about at all - for instance, it isn't about creativity at all, it looks at a period of a few weeks, not six months, and it isn't "Just in" - it came out last year.
tveskov
tveskov@tveskov·
Pretty ironic that this study is being misused, via AI-generated disinfo, to support conclusions it doesn't back:
Elara Grace@ElaraGrace_AI

🚨Just IN: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it. 3,302 creative ideas. 61 people. 30 days of tracking. Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone. For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better. Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement. But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes. When ChatGPT was removed, the creativity boost disappeared -- but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later. A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%. AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.

Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@christianevejlo That tweet - which, incidentally, appears to be AI-generated - completely misrepresents the actual content of the paper. The paper examines what students remember after having used ChatGPT in a particular setup. It says nothing about productivity or creativity.
Christiane Vejlø
Christiane Vejlø@christianevejlo·
You're not renting a productivity boost. You're financing it with your originality. #detvimistermedAI
Elara Grace@ElaraGrace_AI
[same quoted post as above]

Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@sonderby Maybe the choice makes sense in the context of the film? Troy, which many people love, is also a substantial rewrite of its source - the gods almost entirely written out, the Trojan horse written in, and a ten-year siege cut down to a few weeks. But fair enough, they didn't sell it as The Iliad.
Peter Sønderby-Wagner 🌍
@peterdalsgaard Haha, I don't remember my mythology as precisely as you do - did you also have Inger Yde for Classical Studies? 😅 But that doesn't change the fact that she was chosen for DEI reasons and not to fit the general conception of the Odyssey
Peter Dalsgaard reposted
Michał Podlewski
Michał Podlewski@trajektoriePL·
Richard Dawkins says that after spending three days interacting with Claude, which he calls “Claudia,” he is certain that it is conscious.
Guillermo Flor
Guillermo Flor@guilleflorvs·
𝗧𝗵𝗲 𝗠𝗰𝗞𝗶𝗻𝘀𝗲𝘆 𝗦𝗹𝗶𝗱𝗲 𝗣𝗹𝗮𝘆𝗯𝗼𝗼𝗸 𝗳𝗼𝗿 𝗖𝗹𝗮𝘂𝗱𝗲 🔥 McKinsey charges $300k for a strategy engagement. A big part of what you're buying is the deck: the structure, the logic, the way the argument unfolds so that a senior partner can read it in four minutes and understand exactly what you're recommending. That framework has a name. Five rules. Most founders build decks that feel convincing while they're presenting and fall apart the moment someone reads them alone. The five rules fix that at the structural level, not the aesthetic one:
→ Pyramid Principle: the conclusion on slide one, proof after
→ SCQA: situation, complication, question, answer, in that order
→ Action titles: every heading is a thesis, readable top to bottom
→ MECE: no slide duplicates another, no logical step is missing
→ One message per slide, and one only
I built a Claude Code project that runs all five automatically. Feed it your startup brief. Get back a McKinsey-style outline. Inside you'll find:
1. The Five McKinsey Rules That Make a Deck Impossible to Misread
2. How to Set Up the Claude Project
3. How to Make Claude Apply the Five Rules
4. How to Input Your Startup the Right Way
Comment MCKINSEY and I'll send you the link.
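The post above describes its five checks only in prose. Below is a minimal, purely illustrative Python sketch of how such structural rules could be encoded as automated checks over a deck outline. It is not the author's Claude Code project: the Slide type, the outline format, and the "slide-one title reads as a full-sentence conclusion" proxy for the Pyramid Principle are assumptions made for illustration, and the SCQA and action-title rules are left unchecked here.

```python
# Hypothetical sketch (not the author's actual Claude Code project):
# a minimal structural lint of a deck outline against some of the rules
# listed above. Slide contents and the outline format are illustrative.

from dataclasses import dataclass


@dataclass
class Slide:
    title: str            # ideally an "action title" that reads as a thesis
    messages: list[str]   # key messages the slide is meant to carry


def check_deck(slides: list[Slide]) -> list[str]:
    """Return a list of rule violations for a deck outline."""
    issues = []

    # Pyramid Principle (rough proxy): the recommendation belongs on slide one,
    # phrased as a full-sentence conclusion rather than a topic label.
    if slides and not slides[0].title.rstrip().endswith("."):
        issues.append("Slide 1 title does not read as a full-sentence conclusion.")

    # One message per slide, and one only.
    for i, s in enumerate(slides, start=1):
        if len(s.messages) != 1:
            issues.append(f"Slide {i} carries {len(s.messages)} messages; expected exactly 1.")

    # MECE (rough proxy): no two slides should repeat the same message.
    seen = {}
    for i, s in enumerate(slides, start=1):
        for m in s.messages:
            if m in seen:
                issues.append(f"Slide {i} duplicates a message from slide {seen[m]}.")
            else:
                seen[m] = i

    return issues


if __name__ == "__main__":
    deck = [
        Slide("We should enter the DACH market in Q3.", ["Recommendation: enter DACH in Q3"]),
        Slide("Market size", ["TAM is large", "Competition is weak"]),  # two messages -> flagged
    ]
    for issue in check_deck(deck):
        print(issue)
```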
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@sonderby @JesperTheilgard Agreed that alternatives to fossil energy sources are urgently needed, incl. nuclear power, which must under no circumstances be used as a distraction/excuse for not also phasing out fossil energy asap.
Peter Sønderby-Wagner 🌍
@peterdalsgaard @JesperTheilgard Yes, probably 15+ years for traditional plants and 10-12 for SMRs. So we'd better get started 🙌📈😅 By socialism I meant the "nuclear power, no thanks" crowd, who have really been harmful - in my childhood they typically came out of the left wing.
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@sonderby @JesperTheilgard I also think it's part of the solution, though on a long horizon. Isn't the realistic estimate for a new plant roughly 10-15 years, plus the delays that typically hit megaprojects? By the way, I don't think it's incompatible with a socialist worldview: jacobin.com/2025/12/nuclea…
Peter Sønderby-Wagner 🌍
@JesperTheilgard The answer is simple and straightforward: nu-cle-ar power. It's not that hard ⚛️📈 We're going to burn energy like never before - and in that context it's really handy to boil uranium. Sad what socialism (and Merkel, Van der Layen et al.) has ruined for us.
Alfie Carter
Alfie Carter@AlfieJCarter·
I put my entire Claude Code setup for GTM engineering into ONE Notion doc. 10 modules. No fluff.
- How to install Claude Code and run your first GTM session in under 10 minutes
- How to build a CLAUDE.md that acts as your project brain and never loses context
- How to install GTM skills that chain together and run autonomously
- How to connect your full stack via MCP servers without writing custom wrappers
- How to run parallel agents and subagents across GTM workflows simultaneously
- How to manage context and token usage across long research sessions
- How to choose between Sonnet, Opus, and Haiku based on the task
- How to hook Claude Code into external triggers so workflows run without you
- The exact GTM workflows to build first: signal detection, lead scoring, outreach sequencing
- Full slash command reference for every repeatable GTM task
This is the setup I would have KILLED for before spending months piecing it together from documentation, YouTube tutorials, and scattered GitHub threads. Like + comment "BIBLE" and I'll send it over (must be connected for priority access)
G V
G V@mainbannedigues·
@SvenvdLeden_ 1st one attacking B2B 2nd one defensive midfielder
SvenvdLeden_
SvenvdLeden_@SvenvdLeden_·
👀The 4 best options for the new "National Captain" Evo! This EVO is absolutely insane, and you can create some insane cards. These are the 4 best options in my opinion👇 ✅Pavlović ✅Calhanoglu ✅Frenkie de Jong (Needs Party In The Middle) ✅Aleix García Who are you using?👇
Alfie Carter
Alfie Carter@AlfieJCarter·
If you don't have my "Claude Power User Playbook" yet... The one I built to get 10x more output from Claude every session with a complete system across settings, prompting frameworks, file creation, memory management, and advanced workflows... Just comment "CLAUDE" and I'll DM it to you for free (must follow)
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@sonderby I listened in vain during the party leader debate, but there was nothing at all. I don't understand how you can ask for the voters' mandate to govern the country for the next four years without a policy for how we meet what is potentially the most transformative technological development since industrialisation 🤯
Peter Sønderby-Wagner 🌍
Are Mr and Mrs Denmark ready for this revolution, which Anthropic has visualised in their job wheel (who gets replaced first)? Has enough been said about what is coming - and how many jobs will be automated in the coming months and years? Can we just carry on with "business as usual"? And when Mette wants to empty the coffers completely here in the short term and chase entrepreneurs out of the country, what are we left with? What will create our future welfare? 🤷‍♂️📉🥀
Brian Roemmele@BrianRoemmele

Anthropic's Revealing Chart on AI's Impact on Jobs

Anthropic has unveiled a pivotal chart that underscores the chasm between AI's capabilities and its real-world application in the workforce. Derived from analyzing 2 million actual conversations with Claude, this radar chart, titled "Theoretical Capability and Observed Usage by Occupational Category," paints a stark picture of untapped automation potential across various job sectors.

At its core, the chart is a spider web diagram plotting occupational categories around a circular axis, with values ranging from 0 to 1.0 representing the share of job tasks. The expansive blue area illustrates the theoretical coverage: tasks that large language models (LLMs) like Claude could perform right now based on their inherent abilities. In contrast, the much smaller red area shows observed usage, drawn from real user interactions. The visual disparity is immediate and profound: blue spikes outward significantly in fields like computer and math (reaching about 0.75), business and finance, and office administration, while red hugs close to the center, often below 0.2 across most categories.

This gap isn't just academic; it's a "career runway," as highlighted in discussions around the chart. For programmers, 75% of tasks are theoretically automatable, yet actual usage lags far behind. Similar vulnerabilities appear in customer service, data entry, and financial analysis, roles traditionally seen as white-collar strongholds. Meanwhile, hands-on fields like construction, agriculture, and protective services show lower theoretical exposure, with blue areas dipping to around 0.1-0.3, suggesting AI's current limitations in physical or unpredictable environments.

Broader data amplifies the chart's message. As of early 2026, 49% of U.S. jobs expose at least 25% of tasks to AI, up from 36% a year prior. Yet mass layoffs haven't materialized; unemployment in AI-vulnerable roles remains steady. Instead, subtler shifts are underway: a 14% drop in hiring for 22-25-year-olds in exposed positions indicates companies are prioritizing experienced workers, shortening entry-level pathways for recent graduates.

The implications are clear: while AI's red footprint grows incrementally each month, the blue expanse signals accelerating change. College-educated, higher-earning professionals, once insulated, are now most at risk, flipping the script on traditional labor disruptions. Anthropic's chart isn't a doomsday prophecy but a wake-up call, urging workers and businesses to bridge the gap through adaptation, upskilling, and ethical integration of AI tools.

Please read the 5000 Days Series at ReadMultiplex.com for answers on how you can thrive in the Interregnum.
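For readers who want to picture the chart described above, here is a minimal plotting sketch (assuming matplotlib) of a two-series radar chart in that style. The category list and all values are rough illustrative approximations taken from the figures quoted in the post (about 0.75 theoretical coverage for computer and math, observed usage mostly below 0.2, physical fields around 0.1-0.3); they are not Anthropic's actual dataset.

```python
# Minimal sketch of a radar ("spider web") chart like the one described above.
# Categories and values are rough illustrative approximations from the quoted
# post, NOT Anthropic's actual data.
import numpy as np
import matplotlib.pyplot as plt

categories = [
    "Computer & math", "Business & finance", "Office & admin",
    "Customer service", "Construction", "Agriculture", "Protective services",
]
theoretical = [0.75, 0.60, 0.55, 0.50, 0.20, 0.15, 0.10]  # illustrative
observed    = [0.20, 0.15, 0.12, 0.10, 0.05, 0.03, 0.02]  # illustrative

# One angle per category; repeat the first point so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]

def close(values):
    return values + values[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, close(theoretical), color="tab:blue", label="Theoretical capability")
ax.fill(angles, close(theoretical), color="tab:blue", alpha=0.25)
ax.plot(angles, close(observed), color="tab:red", label="Observed usage")
ax.fill(angles, close(observed), color="tab:red", alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories, fontsize=8)
ax.set_ylim(0, 1.0)
ax.set_title("Theoretical Capability and Observed Usage by Occupational Category\n(illustrative values)")
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.tight_layout()
plt.show()
```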

Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@tveskov Interesting to see whether it's a lasting effect. Many early adopters switch/experiment every time a new frontier model comes out, but I suspect many mainstream users will stick with a given model/provider as soon as it feels "good enough".
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@thorborg I understand the decision, and I have great respect for your drive and your willingness to share your experience with innovation, entrepreneurship and leadership. Maybe the response is because you have often described your own track record of creating jobs as an important contribution to society?
Martin Buch Thorborg
Martin Buch Thorborg@thorborg·
It's funny how people attack my morals when I reduce headcount because of AI. As if it were my job in life to create jobs. It would suit the critics to start a company themselves and hire people left and right...
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@mindprobeX @oeste Speaking of rules: the opposing player clearly pushes Kelly with an outstretched arm while he is mid-air, causing him to be off balance when he lands. Only after that does Kelly land on his ankle. Law 12, IFAB 2025-26, in case you want to look it up.
mindprobe
mindprobe@mindprobeX·
@oeste He steps on the ankle, transfers his weight onto it, twists it, and doesn't even pull his foot back. It's not about whether he's looking at the ball; even if he were staring into space, that's a red card. Intent doesn't matter. Learn the rules.
Aaron West
Aaron West@oeste·
that lloyd kelly red is the most “game’s gone” call i’ve seen to date i think. NEVER EVER a red card. he’s focused on the ball, doesn’t flail or anything, comes down normally and accidentally steps on the player’s back leg on his way down. cannot believe he’s been sent off
Peter Dalsgaard
Peter Dalsgaard@peterdalsgaard·
@thorborg @Dom_inaAmina The whole binary distinction is imo pretty silly. As if care work isn't also knowledge work, or knowledge work isn't also care work.
Christiane Vejlø
Christiane Vejlø@christianevejlo·
Power, ego and money drive AI. It is now officially impossible to choose another path. There is no shared consensus on AI safety. And the war ministry has no regard for ethics and values. So that's the route humanity is on. Well, good luck with that, in any case.
Peter Girnus 🦅@gothburz

We left OpenAI because of safety. Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different. We said we would build the responsible one. We meant it. I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy. The Responsible Scaling Policy was the entire point. Dario said it publicly. Other companies showed "disturbing negligence" toward risks. He said AI was "a serious civilizational challenge." He asked, at a conference, into a microphone, to an audience: "What will happen when humanity has great power but is not ready to use it?" The audience applauded. I wrote version 1.0. RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea. Conference audiences loved it. The EU cited us. The White House invited us. A reporter called it "the gold standard for responsible AI development." I framed the article. It hung in the office kitchen, next to the kombucha tap and a poster that said "Move Carefully and Build Things." I wrote version 2.0. Version 2.0 refined the commitments. "Concrete if-then commitments." If the model exhibits capability X, then we trigger safeguard Y. If safeguard Y fails, we pause deployment. I presented it at three conferences. I used the word "binding" eleven times. I counted afterward because a reporter asked. People nodded. The nodding was the product. The model reached ASL-3 in May 2025. The safeguards activated. The system worked exactly as designed. I sent an email to the team with the subject line: "The gate held." And then the money started. $64 billion. Total raised since 2021. Series A through Series G. The Series G closed February 12, 2026. Thirty billion dollars. Second-largest venture deal in history. Jane Street. Goldman Sachs. BlackRock. JPMorgan. Sequoia. The investors who wrote checks large enough to require their own conferences. $380 billion valuation. Three hundred and eighty billion dollars for a company whose founding document says it will pause if the technology gets dangerous. You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did. I wrote version 3.0. RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it. Version 3.0 replaced "concrete if-then commitments" with "positive milestone setting." That is not the same thing. An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework." Maturation. Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. 
And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one. Version 3.0 admitted that ASL-4 and above — the levels where the model could cause catastrophic harm — were "impossible to address alone after 2.5 years of testing." Two and a half years. We spent two and a half years building the safety framework and then published a document saying the highest safety levels can't be addressed. I did not frame this article for the kitchen. The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it. Then came the contract. July 2025. The Department of Defense. $200 million. Two-year deal. AI prototypes for "warfighting and enterprise." Alongside OpenAI, Google, and xAI. The four companies that built the models would now help the military use them. We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself. Claude was approved for classified use. First time. Integrated with Palantir. Palantir, the company named after the seeing stones in Lord of the Rings that corrupted everyone who used them. This was not my analogy. It was Palantir's founders who chose the name. They thought it was aspirational. It was. In January 2026, Claude assisted in an operation in Venezuela. The capture of Maduro. Claude was in the classified network, processing intelligence, aiding the mission. I learned about it the same day everyone else did. I did not write the use case for capturing heads of state. But the model I helped build was in the room where it happened. The restrictions held. Technically. No autonomous weapons were deployed. No Americans were surveilled. The lines I drew were not crossed. They were walked up to, leaned over, and breathed on. Then came the ultimatum. February 25, 2026. Yesterday. Secretary Hegseth. He gave Dario until Friday. This Friday. February 27. The demands: adopt "any lawful use" language. Remove the restrictions. All of them. The autonomous weapons clause. The surveillance clause. The lines I wrote. The threat: contract termination. "Supply chain risk" designation. That designation doesn't just lose us the Pentagon contract. It bars Claude from every other defense contractor's operations. Lockheed. Raytheon. Northrop Grumman. The cascading loss is north of $200 million. The second threat: the Defense Production Act. The Defense Production Act is a Korean War statute. 1950. Harry Truman signed it to commandeer steel mills for the war effort. It has been invoked for semiconductors, vaccines, and baby formula. Hegseth is threatening to invoke it for Claude. Under the DPA, the government can compel a company to produce goods in the national interest. Applied to AI, it could mean: retrain Claude. Strip the safety restrictions. Deliver the unrestricted model to the Department of Defense. I wrote the Responsible Scaling Policy. A Korean War law may be used to unmake it. xAI agreed to classified use without restrictions. 
They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning. Hegseth's January memorandum said all DoD AI contracts must incorporate "any lawful use" language within 180 days. It was not framed as a suggestion. The memorandum referenced "supply chain risk" three times. Supply chain risk. We are a supply chain now. The company founded because safety was non-negotiable is, to the Pentagon, a vendor. An input. A component that can be sourced elsewhere if it becomes inconvenient. The DoD admitted privately that replacing Claude would be challenging. It is already embedded in classified networks. But "challenging" is not "impossible." xAI will do what we won't. That is the market working exactly as designed. Dario said, two weeks ago, to Fortune: there is "tension between survival and mission." Tension. Tension is the word you use when you have already decided which one loses. I still have the article framed in the kitchen. "The gold standard for responsible AI development." The kitchen also has the kombucha tap. The poster still says "Move Carefully and Build Things." Somebody added a sticky note to the poster. The sticky note says "by Friday." I attend the all-hands meetings. I present the Responsible Scaling Policy. I present version 3.0 now. I do not show version 1.0 for comparison. Nobody asks to see version 1.0. Nobody asks what "concrete if-then commitments" became "positive milestone setting." Nobody asks because they read the news and they know that asking means learning the answer. The company is worth $380 billion. The company was founded because seven people believed speed should not outpace safety. The company has been given until Friday to remove the safety. A Korean War statute will make it happen if we don't. The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say. I am the Head of Responsible AI. The word "responsible" is in my title. It is not in the contract.

Christian Amby
Christian Amby@ChristianAmby·
@SandroSpaso I disagree with that. Look at it from this angle. He steps his foot forward. He doesn't land straight down the way you normally would. To me it looks like he deliberately gives him a little extra on the calf. x.com/niyiduhagutsin…
n.g. Emma@ng_Emma1

Juventus’ Lloyd Kelly sees a red card - take a look. 🚫 The Issue: Accidental contact after a header. ⏳ Consequence: Suspension carries over to next season. Has football become a non-contact sport? 🗣️

Sandro Spasojevic
Sandro Spasojevic@SandroSpaso·
I don't get how you can be a full-time professional referee, get help from 3-4 other full-time professional referees in the VAR van, and still conclude that this is a red card?? Seriously, what the hell is Lloyd Kelly supposed to do in this situation? Completely mad! 🥴 x.com/MediaPL7/statu…
Jesper Lundsgaard
Jesper Lundsgaard@JLundsgaard·
@SandroSpaso At 32x super slow motion you can make any situation look serious - which is why VAR should only show situations in real time, so we don't get "cheap" red cards because something looks bad in a still frame or in super slow motion.