SALIS at DCU retweeted

🚨 We’re putting “AI” into everything. But how reliable, safe and trustworthy is it really?
AI-powered language technologies now translate emergency bulletins, mediate everyday conversations, and even support clinical workflows. The fluency is impressive… and that’s precisely the problem: fluency can hide unreliability, unpredictable errors, and uneven performance across languages and user groups.
That’s why Sharon O'Brien and I introduce the Human-Centered AI Language Technology (HCAILT) model in a new, open-access paper: an empathetic design framework for moving from ethical aspiration to deployable practice in multilingual communication, building on Ben Shneiderman's HCAI paradigm.
HCAILT operationalises three non-negotiables across the full language-tech pipeline:
✅ Reliability (consistent, domain-appropriate outputs)
✅ Safety culture (governance, AI literacy, reporting loops, bias audits, privacy-by-design, not just disclaimers)
✅ Trustworthiness (interfaces that make uncertainty visible and decisions contestable)
What drives HCAILT? Two drivers keep the focus where it belongs:
1️⃣ Augment human cognition → reduce cognitive load and support accurate decision-making in multilingual settings.
2️⃣ Augment information dissemination → deliver rapid, context-appropriate multilingual communication in both routine and life-critical scenarios.
Two blueprint cases (where failure is not an option):
🏥 Healthcare communication: language barriers affect diagnosis, consent, rapport, and adherence. "Good enough" translation can still be unsafe.
🌪️ Crisis communication: misinformation, ambiguity, or latency can cost lives, and multilingual delivery must work under pressure (and often under connectivity constraints).
A call to the community:
We’re inviting industry, researchers, practitioners, and policymakers to test HCAILT across real multilingual communication settings. Importantly, “reliability”, “safety”, and “trustworthiness” aren’t abstract or utopian ideals; they’re context-dependent design requirements. So, in your specific use case:
- What does reliability actually mean? (Consistency? Domain fit?)
- What counts as safety? (Harm thresholds? Escalation routes? Accountability?)
- How do you define trustworthiness? (Transparency? Contestability? User control?)
And the key question: how can we design for those properties from the very beginning? If you’re working with multilingual AI in any setting (public services, crisis response, education, legal, business…), we’d love to hear what your “non-negotiables” are.
