Writing For Research
@Write4Research

26.7K posts

Creative research writing is hard. Patrick Dunleavy, political science LSE, & Editor in Chief @LSEPress suggests advice https://t.co/iYi0CJbbMP

London · Joined November 2012
34.1K Following · 67.4K Followers
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
Our book launch on Wed 20 May is the final slot in a great whole day of open science discussion as well. To see other sessions and register go to civica.eu/news-events/ev…
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
If in London on Wed 20 May I’m launching my new OA book with Tim Monteath “Doing Open Social Science” at 16.00 in LSE’s Alumni theatre (in Cheng Kin Ku Building on Lincoln’s Inn Fields). All welcome & free drinks at 5pm too. Hope you can come! (Please note corrected times)
[image]
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
If you are in London on Wednesday 20 May I’m launching my new open access book with Tim Monteath “Doing Open Social Science” at 17.00 in LSE’s Alumni lecture theatre (in Cheng Kin Ku Building on Lincoln’s Inn Fields). All welcome and free drinks at 6pm too. Hope you can come!
[image]
Writing For Research @Write4Research
Very exciting to get the first paperback copies of my new book with Tim Monteath called “Doing Open Social Science - A Guide for Researchers”, published free in OA digital form by @LSEPress on 14 May here press.lse.ac.uk/books/e/10.313… Physical copies cost £30 in UK or $42, for 400 pages.
[image]
Writing For Research @Write4Research
Elon Musk’s Department of Government Efficiency “blatantly used” race, gender & other protected characteristics to execute the largest mass termination of federal grants in the history of the National Endowment for the Humanities, a US federal judge ruled. abcnews.com/Politics/judge…
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
I guess this is another example of “submerged social science” where people with STEMM science backgrounds can get generous state or corporate funding to do amateur social science work (“how hard can it be?”) that has zero theoretical or evidential credibility from the outset.
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
@MarcHvidkjaer A great question. Most “experiments” I’ve seen in political science involve academics dreaming up questions or prompts that they expect to have result X, then assembling 50 students in a lab or 500 people on “Mechanical Turk” (often faking their demographics) to assess ‘impacts’.
Writing For Research retweeted
Marc Sabatier Hvidkjær @MarcHvidkjaer
@PJDunleavy Is the scientific value of experiments generally 0 in the social sciences?
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
This is ludicrous. A 20 minute exposure to an LLM cannot change a single thing about a single person's views, let alone their well-being. Anyone who wrote this or takes it seriously needs to do some serious social science thinking. The scientific value of such "research" is zero.

Jay Van Bavel, PhD @jayvanbavel
Most participants who had a 20-minute discussion with AI chatbots about health, careers or relationships followed its advice. However, 2-3 weeks later, participants receiving advice from AI showed no sustained well-being benefits. These findings reveal that LLMs exert substantial influence over real-world personal decisions without delivering measurable psychological benefits. arxiv.org/abs/2511.15352
Writing For Research retweeted
Writing For Research @Write4Research
Important. Beyond outright hallucinations & inaccurate citations from LLMs, all these programs come up with legitimate-seeming sources actually irrelevant to what the user asked about but just containing vaguely similar terms or phrases. Only re-cite sources you’ve actually read.

Anand Shah @avshah99
We were blessed with citation hallucination as the first problem. The harder problem was always citation "stretching". The model cites a real paper, but not in the _way_ an expert would. Costly to govern at scale without somehow tuning a verifier to experts.
Writing For Research @Write4Research
@PJDunleavy I guess this is another example of “submerged social science” where people with STEMM science backgrounds can get generous state or corporate funding to do amateur social science work (“how hard can it be?”) that has zero theoretical or evidential credibility from the outset.
Writing For Research retweeted
Patrick Dunleavy @PJDunleavy
This ludicrous post just shows how bizarre it is to have the UK’s AI Security Institute designing social science experiments expecting to get anything more than ‘chaff’ meaningless results from a 20 minute exposure to an LLM. Nothing reported here has a shred of scientific value.

Nav Toor @heynavtoor
The UK government ran one of the biggest experiments on AI advice ever done. Then weeks later, scientists checked with people to see if the advice actually worked. Six thousand four hundred and seventy four of them. Three popular chatbots. Health, careers, relationships. The results should scare you.

The UK AI Security Institute, the British government body that vets frontier AI, gave 6,474 representative UK adults a 20 minute conversation with one of three chatbots: ChatGPT (GPT-4o), Meta's Llama 3.3, or Google's Gemini 3 Pro. Participants picked one personal topic. Their actual health. Their actual career. Their actual relationship. They talked it through with the AI. Then they went home. Two to three weeks later, the researchers checked in. 79 percent of them had followed the AI's advice. Read that again. Eight in ten people did what a chatbot told them to do about their health, their job, or the person they sleep next to. After one 20 minute conversation. With software they had never spoken to before.

It gets darker. The researchers split the advice into stakes. Low stakes was things like "drink more water." High stakes was things like quitting a job, ending a relationship, changing a medication. Advice following stayed above 60 percent even on the high stakes recommendations. People did not slow down for the decisions that could not be reversed. They followed the AI through the door.

It gets darker still. The researchers measured well-being before the chat, right after, and two to three weeks later. Both groups got a small mood boost from the chat itself. But weeks later, the people who got AI life advice were no better off than the people who had chatted about hobbies. The advice group did not gain anything sustained. No measurable improvement in mood. No measurable improvement in well-being. Talking to ChatGPT about your career did the same thing for your life as talking to it about pottery. The AI changed their behaviour. It did not improve their lives.

Gemini 3 Pro was the most influential of the three. People followed its advice more often than ChatGPT or Llama. The paper does not say why.

This is what hundreds of millions of people are doing right now. They are asking a stranger that has no memory of them, no stake in their future, and no licence to advise on anything to make decisions about the most important parts of their life. Most of them are doing what it says. None of them are getting better. This paper is a must read.