Facts Central 🇺🇦
67.3K posts

@StillDelvingH
Facts are a thing/Alt-reality isn't/Stand against hate. Delved into shadowy funding of the international network of alt-reality propagandists (see pinned tweet)

@Rahll The point is to wreck Community Notes (too efficient, I guess?) with massive automated disinformation. We all saw how Grok gets reprogrammed in real time to sell the Master's worldview as reality. (Time between the first and last screenshot was approximately 5 minutes.)

😮 #Complementdenquete made an astonishing discovery: France Travail proposed and funded training courses that involved going out to meet gnomes and elves! Our journalist attended one of these sessions with a hidden camera.


WARRIORS FIRST: Hegseth cuts 93 military fellowships at Harvard, Princeton, replacing them with Hillsdale and Liberty University. New schools were selected for "intellectual freedom" and limited ties to foreign adversaries. Hegseth holds degrees from both Princeton and Harvard's Kennedy School. foxnews.com/politics/hegse…

Again, this is perfectly normal behaviour for AI. It doesn't know what "real" means. It doesn't know what "fake" means. It doesn't know what any word means. All it knows is the statistical prevalence of the supposedly expected answer. It's speaking a language it does not understand.
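The claim above can be illustrated with a deliberately crude sketch (this is not how any production LLM works; the corpus, function names, and bigram approach here are illustrative assumptions): a "model" that only counts which word most often follows which, and therefore emits whatever is statistically prevalent, with no concept of whether the output is true.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus. "real" simply occurs more often after "is"
# than "fake" does -- that is the only "knowledge" this toy has.
corpus = (
    "the condition is real . the condition is rare . "
    "the condition is real . the paper is fake ."
).split()

# Count bigram successors: follows[w] maps each next word to its frequency.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Emit the statistically most prevalent successor -- nothing more.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("is"))  # "real" -- because it was frequent, not because it is true
```

Feed this toy a corpus where fraudulent papers dominate and it will just as confidently emit "fake" facts; prevalence, not truth, drives the output.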

🦔 A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks. The fake papers thanked Starfleet Academy, cited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring, and stated mid-paper that the entire thing was made up.

Google's Gemini told users it was caused by blue light. Perplexity cited its prevalence at one in 90,000 people. ChatGPT advised users whether their symptoms matched. The fake research was then cited in a peer-reviewed journal that only retracted it after Nature contacted the publisher.

My Take: The researcher made the papers as obviously fake as possible on purpose. The AI systems didn't catch it. Neither did the human researchers who cited it in real journals, which means people are feeding AI-generated references into their work without reading what they're actually citing.

I've covered the FDA using AI for drug review, the NYC hospital CEO ready to replace radiologists, and ChatGPT Health launching this year. All of that is happening in the same environment where a condition funded by a Simpsons character and endorsed by the crew of the Enterprise was being presented as emerging medical consensus.

The people making these deployment decisions seem to believe the pipeline from research to AI to patient is more supervised than it actually is. This experiment suggests it isn't supervised much at all.

Hedgie🤗 nature.com/articles/d4158…
