Erika Legara

19.1K posts

@eflegara

PH · Joined April 2012
118 Following · 4.2K Followers
Erika Legara reposted
Khoa Vu @KhoaVuUmn
Before AI, I could only have about 5 unfinished papers and 1 polished paper. AI boosted my productivity so much that I now have 136 unfinished papers and 1 polished paper.
Erika Legara @eflegara
How should boards govern AI systems that are already making decisions about credit, pricing, and hiring? Wrote about this for @bworldph. Short answer: Apply the same governance discipline you already use for financial models and cybersecurity programs. bworldonline.com/opinion/2026/0…
Kris Ablan @kris_ablan
How's your Monday going, besties?
Erika Legara @eflegara
We can’t just keep buying flashy tech from vendors. If users aren’t equipped to use it, it’s wasted investment. AI and other tech look great in demos, but real impact comes from operationalizing them. How do they work on the ground? How do we ensure adoption? Literacy and change management matter.
Erika Legara reposted
DepEd @DepEd_PH
"This is the latest 'love team'—AI and education. DepEd is a good home for it. AI can really fast-track solutions and jumpstart a lot of things in the education system." IN PHOTOS: The Department of Education, through the initiative of Secretary Sonny Angara, launched the Education Center for AI Research (E-CAIR). E-CAIR is a strategic initiative under DepEd that will tap AI technologies to solve long-standing challenges, particularly in distributing the voucher program efficiently, gathering health information of learners, improving client relations through a quick response tool for queries and concerns of the public, and mapping out schools that need additional facilities and learning materials. #DepEdPhilippines #BagongPilipinas
Erika Legara @eflegara
Yesterday, I had the honor of addressing over 150 bank compliance officers at the ABCOMP 2025 Annual Conference to talk about #AIgovernance. Sharing some of my thoughts from the session 👇🏻 AI Governance and Compliance: Who’s Accountable? eflegara.github.io/blog/20250212/
Erika Legara @eflegara
Thank you, Raffy!
Erika Legara @eflegara
Better late than never 🖤 Thank you, Tatler Philippines, for the thoughtful gesture of delivering my trophy 🏆 Now the Tatler Impact Award for Science & Innovation feels truly real! 🥰 Also grateful to @TatlerAsia for the Most Influential award. (I missed the Tatler Ball due to health reasons, and it truly pained me not to be there to celebrate such a special, special moment.)
Erika Legara @eflegara
The future of work isn't about saving every job 🤷🏻‍♀️; that's simply not possible. We must prepare ourselves for inevitable changes. But with institutional support plus personal initiative, we can better navigate this new industrial revolution, regardless of whether we call it the 4th, 5th, or 6th.
Erika Legara @eflegara
Recently, I’ve been more candid about the importance of upskilling and reskilling to protect workers amid rapid tech changes, especially AI. Three ideas in a thread. 👇 #FutureOfWork #AI #Upskilling
Erika Legara @eflegara
@ljvmiranda I love how you’re so deeply immersed and really getting your hands dirty pushing for algorithmic innovations! Congrats, LJ! 👏🏼
Lj V. Miranda @ljvmiranda
My favorite part about this release is that we were able to replicate our findings from the Tülu 3 post-training recipe here (e.g., on-policy preferences, RLVR) and found significant performance gains in our -DPO and -Instruct models! Find all artifacts here: huggingface.co/collections/al…
Ai2 @allen_ai

Meet OLMo 2, the best fully open language model to date, including a family of 7B and 13B models trained up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B — As always, we released our data, code, recipes and more 🎁
