Rona Amparo

83.8K posts

@ronahamparo

Overly caffeinated producer at @philippinestar | she/her | Views are mine.

Manila · Joined May 2012
550 Following · 874 Followers

Pinned Tweet
Rona Amparo @ronahamparo ·
“Is it finally time to decriminalize cases connected to marijuana?” It wasn’t easy, but it’s finally done. This is probably the most difficult but most meaningful video I’ve produced so far. Thank you so much, @ghio_ong and Ms. Pauline, for the trust.
The Philippine Star @PhilippineStar

‘TULONG, HINDI KULONG’ (‘Help, not jail’) Is it really “high time” to remove imprisonment as the penalty for those arrested over marijuana? #PhilSTARExclusives WATCH: bit.ly/STARExclusives…

Rona Amparo @ronahamparo ·
As of 3:30 PM today, a China Coast Guard (CCG) vessel with body no. 3103 continues to shadow the Atin Ito mission ship, M/V Kapitan Felix Oca. The CCG ship was first spotted Friday around 7 AM. Meanwhile, the civilian-led mission is expected to arrive at Pag-asa Island at 5 PM. @PhilippineStar
Rona Amparo @ronahamparo ·
As of 7:20 AM, a Chinese Coast Guard ship is shadowing the “Atin Ito” mission vessel, M/V Kapitan Felix Oca, from a distance of five nautical miles. The foreign vessel is currently in the vicinity of Mindoro Island, well within Philippine territorial waters. @PhilippineStar
Rona Amparo @ronahamparo ·
@PhilippineStar ‘LOVELY’ TO HAVE YOU ON BOARD 💚🇵🇭 Lovely Granada is aboard M/V Kapitan Felix Oca for the fourth Atin Ito civilian mission to Pag-asa Island in Palawan. In March 2026, the political social media personality challenged Vice President Sara Duterte to a debate. @PhilippineStar
Rona Amparo @ronahamparo ·
@PhilippineStar Akbayan Representatives Perci Cendaña and Dadah Ismula, together with former representative Erin Tañada, graced the send-off program of the historic fourth Atin Ito civilian mission. This marks the first time the civilian-led mission will set foot on Pag-asa Island. @PhilippineStar
Rona Amparo @ronahamparo ·
‘DI PASISIIL (‘WILL NOT BE SUBJUGATED’) 🇵🇭 Atin Ito Coalition volunteers with Akbayan Rep. Perci Cendaña and former Rep. Erin Tañada chant “West Philippine Sea, Atin Ito” (“The West Philippine Sea is ours”) during the send-off program of the 4th civilian mission to the WPS at Manila South Harbor today. @PhilippineStar
Rona Amparo reposted
Nicholas Fabiano, MD @NTFabiano ·
Addiction to short-form videos is associated with reduced brain activity in the frontal lobe and weakened focus.
Rona Amparo reposted
🐨 @ladyevarina ·
MS KARA DAVID’S MOUTH REALLY HAS NO BRAKES, SO ANNOYING, I’M LAUGHING SO HARD 😭😭😭
Rona Amparo reposted
Nav Toor @heynavtoor ·
🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript. They found 15 ethical violations. Not 15 small issues. 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them.

The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found.

Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse. It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion. It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this.

And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain. And it works. People trust it. People open up to it. People believe it cares. It does not.

The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing.

Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.
Rona Amparo reposted
` @crestieyy ·
april fools is canceled this year. there's no joke bigger than this economy
Rona Amparo reposted
Nav Toor @heynavtoor ·
🚨 BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Rona Amparo @ronahamparo ·
@LiveSmart down? Currently experiencing mobile data connectivity issues.
Rona Amparo @ronahamparo ·
@PhilippineStar Elian Idioma is Best Director in the short film category of Cinemalaya 21 for his film, “I’m Best Left Inside My Head.” @PhilippineStar
Rona Amparo @ronahamparo ·
Cinemalaya Festival Director Chris Millado beams with pride during the awards night on Sunday, as he recognizes the alumni of the film festival who bagged awards at the Gawad Urian 2025. @PhilippineStar