Rachel Vandernick
3.8K posts

@VandernickR
Digital marketing for internet obsessed brands. Freelancer for hire. Occasional adjunct & writer. ✡️ Tweeting marketing & freelancing.

I keep seeing posts all over the place where people share how they use AI (LLMs) to do this or that, when in reality LLMs can't actually do the thing being suggested. In my opinion, the main problem here, and it's been a problem since day one of these tools becoming available to the public, is that they rarely say when they don't know something or can't do something; they usually give a confident answer anyway.

It can feel exciting for someone new or unfamiliar with AI to ask ChatGPT to "act like a heart surgeon" or "pretend to be a tax expert" and get a confident, plausible response. Given the media frenzy around AI, it seems fair for most people to assume this must be the revolutionary, life-changing technology we were promised. But in reality, those answers might be entirely hallucinated and full of inaccurate or even dangerous information, and the user often can't tell the difference. I'm not immune to this either: I've been alarmed more than once to discover, only after fact-checking with Google, that ChatGPT's answer was wrong.

I don't blame the general public for being misled by LLMs; it's not like there's an exam required to use a technology we're now learning can be genuinely dangerous. The real fault lies with AI companies for failing to teach users what these tools can and can't do (though that would obviously clash with their extreme growth goals). Now that the cat is out of the bag, it's worrying to think how much incorrect information is circulating around the internet and in society, and how many people are making mistakes that range from a harmless wrong turn to buying the wrong medicine for their needs.
They'll never do this, but it would be great if these AI companies took bigger steps to:
- train users on the best way to use these tools (or not use them)
- flag when certain topics are dangerous
- improve the tools' ability to say when they aren't confident in a response (or add some type of confidence meter??)

We could also, IDK, have laws regulating this stuff. ¯\_(ツ)_/¯

I also highly recommend following @BritneyMuller to learn more about the best ways to use LLMs, plus the risks and dangers of using them for the wrong things.

