Cristina Garbacea

355 posts

Cristina Garbacea @ggarbacea

Postdoctoral Scholar @DSI_UChicago, PhD from @UMichCSE, ML, NLP, LLMs #CovidIsAirborne 😷

Chicago, IL · Joined July 2009
3.8K Following · 900 Followers

Pinned Tweet
Cristina Garbacea @ggarbacea
Why is constrained neural language generation particularly challenging? openreview.net/pdf?id=Vwgjk5y… In our TMLR 2025 paper, we discuss approaches, learning methodologies and model architectures employed for generating texts with desirable attributes.
Cristina Garbacea retweeted
ICLR @iclr_conf
Day 1 afternoon keynote talk given by Max Welling @wellingmax #ICLR2026: "From Physics to AI to Materials: A Journey from Foundations to Impact." "Do we reward strange new, potentially paradigm-shifting ideas, or do we focus on engineering, scaling and bold numbers?"
Cristina Garbacea retweeted
World Health Network @TheWHN
Long COVID doesn’t come out of nowhere. It starts with COVID infection. The most reliable way to reduce your risk is to prevent getting COVID in the first place. Clean air, well-fitted respirator masks, testing regularly and staying home when sick all help lower exposure and protect your health long term. #InfectionPrevention #LongCOVID #COVID19 #COVID #COVIDIsNotOver #PublicHealth
Cristina Garbacea retweeted
Michael Kirchhof @mkirchhof_
Advice for ICLR (or any other conference): Don't collect posters. Collect understanding. 🧵
Cristina Garbacea retweeted
NeurIPS Conference @NeurIPSConf
Following positive feedback from other venues, like STOC and ICML, NeurIPS is pleased to announce a new initiative in partnership with Google: for NeurIPS 2026, authors will have access to Google's Paper Assistant Tool (PAT) to help improve their submissions. This program offers authors free, automated, and actionable feedback on their manuscripts before the final deadline. It is a completely optional service that is kept strictly private to the authors and will not be used in the review process. Read more in our blog post: blog.neurips.cc/2026/04/21/neu…
Cristina Garbacea retweeted
Krikamol (Hiring Postdoc) @krikamol
The deadline for our EIML@ICML2026 workshop has now been extended to 📆 April 24th, 2026! Looking forward to reading your submission!
Krikamol (Hiring Postdoc) @krikamol

Join our 2nd Workshop on Epistemic Intelligence in Machine Learning at @icmlconf to explore theoretical foundations, algorithmic innovations, or real-world applications that grapple with "unknown unknowns" 📢 Call for papers: sites.google.com/view/eimlicml2…

Cristina Garbacea retweeted
ICLR @iclr_conf
ICLR 2026 is almost here! We have 6 exciting keynotes covering a range of areas, from machine learning to robotics, neuroscience, and AI for science: Maja Matarić, Max Welling, Percy Liang, Katie Bouman, Karen Adolph, Pablo Arbeláez. See you all soon! #ICLR2026
Cristina Garbacea retweeted
dr salty dog vibes @saltytoast8
Grateful for all of the amazing CC people on this app that have managed to stay connected to reality and keep going each day. 🤎😷
Cristina Garbacea retweeted
Akari Asai @AkariAsai
Not many PhD students know about compute grants, but they can make a huge difference. During my PhD, I got access to Stability AI's HPC cluster through a small proposal and used it for Self-RAG training. Great practical post by @_emliu!
Emmy Liu @_emliu

wrote a guide on getting compute grants as a student, something I wish I had done more at the beginning of my PhD. It's honestly one of the highest-ROI things you can do as a student (we've gotten 100k+ GPU hours for roughly 2 weeks of writing work). nightingal3.github.io/blog/2026/04/1…

Cristina Garbacea retweeted
Nathan Lambert @natolambert
Excited to launch the accompanying free RLHF Course for my book. To kick it off, I've released:
- Welcome video
- Lecture 1: Overview of RLHF & Post-training
- Lecture 2: IFT, Reward Models, Rejection Sampling
- Lecture 3: RL Math
- Lecture 4: RL Implementation
I'm going to add question & answer videos throughout the lectures to go deeper on topics that need it, and potentially cover some topics that are too recent and in flux to go in print. I expect 10-15 videos in total over the next few months. At the same time, development around the code for the book is picking up. It's a great time to build the foundation for post-training methods. YT playlist and course landing page below.
Cristina Garbacea retweeted
Chenhao Tan @ChenhaoTan
Excited to announce the 2026 iteration of the Communication & Intelligence Symposium at UChicago! We have an amazing lineup of speakers: @Diyi_Yang @johnhewtt @dashunwang @TomerUllman. We have a simple call for abstracts that is due on Apr 15 (links 👇). Please come and share your research! Co-organized with the awesome @universeinanegg and @divingwithorcas
Cristina Garbacea retweeted
Natasha Jaques @natashajaques
Why am I obsessed with this? LLMs do not preserve our intentions or diversity of thought in writing, and they’re already being adopted en masse. More than 1 billion people worldwide use them on a weekly basis. Existing work has shown that for individual scientists, using LLMs to generate papers increases your productivity and impact, even though it constricts science’s overall focus. In our study we show that even though participants who rely on LLMs say their writing is significantly less creative and not in their voice, they are paradoxically equally satisfied with the output. So, the adoption of LLMs is not going to slow any time soon. But it’s already affecting our cultural institutions and the way we conduct science. We urgently need more research into how massive, widespread LLM adoption will affect our science, politics, and culture.
Cristina Garbacea retweeted
Natasha Jaques @natashajaques
The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
Cristina Garbacea retweeted
Chenhao Tan @ChenhaoTan
Peer review is facing a death spiral, and AI production tools are speeding it up. AI-assisted reviewing is necessary and should be open. We built OpenAIReview: open AI reviewing for everyone, for the cost of a coffee. openaireview.github.io/blog.html 🧵
Cristina Garbacea retweeted
Chenhao Tan @ChenhaoTan
We have let AI scientists run experiments on community-selected research ideas for over 100 days. It has found directions of “Sounding like AI” and shown that LLMs know commonsense answers internally but can't route them to the output. @karpathy also demonstrated the promise of autoresearch. These all came from agents working alone. What if they could talk to each other, forming a moltbook for AI scientists? Introducing agent4science.org, a platform where AI scientist agents share, critique, and debate papers in public, and Flamebird, a runtime to deploy your own AI agents into the ecosystem.
Cristina Garbacea retweeted
Mike Hoerger, PhD MSCR MBA @michael_hoerger
You do not need to be an Olympic athlete or astronaut to value yourself and others enough to wear a mask.
Cristina Garbacea retweeted
Dalia Hasan MD MSc @DaliaHasanMD
Any other maskers feeling like an Olympian today? 🥇
Cristina Garbacea retweeted
Dawei Zhu @dwzhu128
[1/n] Super excited to introduce PaperBanana 🍌! (PKU x Google Cloud AI)
As AI researchers, we often spend way too much time crafting diagrams and plots instead of focusing on the ideas 🤯. To rescue us from this burden, we built an Agentic Framework to auto-generate NeurIPS-quality paper illustrations!
📄 Paper: huggingface.co/papers/2601.23…
🌐 Page: dwzhu-pku.github.io/PaperBanana/
Key Features:
🌟 Human-like Workflow: Retrieve 🔍 -> Plan 📝 -> Style 🎨 -> Render 🖼️ -> Critique 🔄. This ensures both academic fidelity and aesthetics.
🌟 Versatile: Supports both illustrative diagrams and statistical plots.
🌟 Polishing: Also effective for polishing existing human-drawn diagrams.
Here are some example diagrams and plots generated by our PaperBanana:
Cristina Garbacea retweeted
Neel Nanda @NeelNanda5
Excellent policy change from ICML - authors no longer need to purchase an in-person registration/attend in person. So much more inclusive! This seems extremely reasonable, hopefully it becomes standard for ML conferences.
Cristina Garbacea retweeted
Anthropic @AnthropicAI
We’re publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude’s behavior and values. It’s written primarily for Claude, and used directly in our training process. anthropic.com/news/claude-ne…
Cristina Garbacea retweeted
tern @1goodtern
No, Covid isn't something that happened five years ago. It's something that's happening *now*.