

Writing For Research
@Write4Research
Creative research writing is hard. Patrick Dunleavy, political scientist at LSE and Editor-in-Chief of @LSEPress, offers advice: https://t.co/iYi0CJbbMP

AI may seem to have democratised writing, but the truth is it has industrialised mediocrity. Literature, after all, was never meant to be smooth and easy. It was supposed to be bumpy, argumentative and, more importantly, human.

My new OA book with Tim Monteath, "Doing Open Social Science: A Guide for Researchers", is out from @LSEPress on *14 May*: press.lse.ac.uk/books/e/10.313… It covers how 'open science' approaches can now be fully implemented, in practical ways, by both qualitative and quantitative researchers.

Most participants who had a 20-minute discussion with an AI chatbot about health, careers, or relationships followed its advice. However, 2-3 weeks later, participants who had received AI advice showed no sustained improvement in well-being. These findings reveal that LLMs exert substantial influence over real-world personal decisions without delivering measurable psychological benefits. arxiv.org/abs/2511.15352

We were blessed with citation hallucination as the first problem. The harder problem was always citation "stretching": the model cites a real paper, but not in the _way_ an expert would. That is costly to govern at scale without somehow tuning a verifier to expert judgement.


The UK government ran one of the biggest experiments on AI advice ever done. Then, weeks later, scientists checked whether the advice had actually worked. 6,474 people. Three popular chatbots. Health, careers, relationships. The results should scare you.

The UK AI Security Institute, the British government body that vets frontier AI, gave 6,474 representative UK adults a 20-minute conversation with one of three chatbots: ChatGPT (GPT-4o), Meta's Llama 3.3, or Google's Gemini 3 Pro. Participants picked one personal topic. Their actual health. Their actual career. Their actual relationship. They talked it through with the AI. Then they went home.

Two to three weeks later, the researchers checked in. 79 percent of them had followed the AI's advice. Read that again. Eight in ten people did what a chatbot told them to do about their health, their job, or the person they sleep next to. After one 20-minute conversation. With software they had never spoken to before.

It gets darker. The researchers split the advice by stakes. Low-stakes advice was things like "drink more water." High-stakes advice was things like quitting a job, ending a relationship, or changing a medication. Advice-following stayed above 60 percent even on the high-stakes recommendations. People did not slow down for the decisions that could not be reversed. They followed the AI through the door.

It gets darker still. The researchers measured well-being before the chat, right after it, and two to three weeks later. Both groups got a small mood boost from the chat itself. But weeks later, the people who got AI life advice were no better off than the people who had chatted about hobbies. The advice group gained nothing sustained. No measurable improvement in mood. No measurable improvement in well-being. Talking to ChatGPT about your career did the same thing for your life as talking to it about pottery. The AI changed their behaviour. It did not improve their lives.
Gemini 3 Pro was the most influential of the three. People followed its advice more often than ChatGPT's or Llama's. The paper does not say why.

This is what hundreds of millions of people are doing right now. They are asking a stranger that has no memory of them, no stake in their future, and no licence to advise on anything to make decisions about the most important parts of their life. Most of them are doing what it says. None of them are getting better. This paper is a must-read.

