UCD Digital Policy
@DigitalPolicyIE
The Centre for Digital Policy in @UCD_iSchool - helping to build digital policy expertise in the public & private sectors. #UCDdigitalpolicy

Thrilled to announce the publication of my book, 'Blogging and Gender Activism in Nigeria', with Palgrave! It challenges us to rethink activism itself, reminding us that not all voices are equally heard—even in spaces created for change. Grab one: lnkd.in/eGMNmHfT


🤖🏳️🌈 ChatGPT’s LGBTQIA+-related answers up to 84% less accurate than Google searches, study finds... 🧵

🕵️♀️ Led by researchers at the UCD School of Information and Communication Studies, the study explored how ChatGPT may reinforce incorrect beliefs.

🇮🇳 Participants from Ireland and India used GPT-3.5 to find info on LGBTQIA+ elected representatives.

🔎 One group used ChatGPT, and another used Google. All questions showed a notably higher accuracy rate when answered by Google than by ChatGPT.

❌ ChatGPT mistakenly labelled some politicians as homosexual and hallucinated the names of non-existent politicians.

🤷♀️ Despite the inaccuracies, people trusted ChatGPT's info more than Google's search results.

🗣️ According to Assistant Professor Marco Bastos, who led the study, the problem is unlikely to be fixed even as ChatGPT advances in capability.

“GPT-4o, the current version of ChatGPT, is much more capable than GPT-3.5, which was used in our study, at retaining the words of a chat for context. Conversations and analysis with the tool feel much more natural.

“However, the problem we identified in our study is triggered by proattitudinal information and the reinforcement of previously held beliefs, which go unchecked and unverified by users. This problem is not likely to be solved in current or future generations of ChatGPT, because hallucinations – the generation of inaccurate or fabricated information – remain a perennial problem of large language models like ChatGPT.

“If anything, the more these tools are perceived to be accurate, the less likely users are to perform cross-checks on the information they receive.”

Our experimental study is finally out: it examines how ChatGPT can reinforce beliefs, as incorrect information provided by the service goes unchecked and unverified by users. journals.sagepub.com/doi/10.1177/20…
Delighted to present @BradSmi with the @IDAIRELAND Special Recognition Award, acknowledging the longstanding partnership @Microsoft has had with Ireland for the past 40 years. Your steadfast commitment to Ireland is greatly appreciated.