Computer Science at UT Austin

4.3K posts

Computer Science at UT Austin

@UTCompSci

UTCS is a recognized leader in creating the scientific knowledge and practical technologies exemplifying the digital revolution that defines the 21st century.

Austin, TX · Joined December 2011
345 Following · 5.9K Followers
Pinned Tweet
Computer Science at UT Austin
@UTAustin is launching a new School of Computing in fall 2026! With Information and Statistics & Data Science, we’ll expand student opportunities, accelerate research, and strengthen pathways to high-impact careers and grad study. Read more: utex.as/3OQFrIh
Computer Science at UT Austin retweeted
Isil Dillig@IsilDillig·
📢 I’m looking to hire a postdoc to work closely with me and my research group at UT Austin on exciting topics in core PL/FM, as well as applications of PL/FM ideas to other areas. If you are interested, or know someone who might be a great fit, please DM me!
Computer Science at UT Austin retweeted
Joykirat@joykiratsingh·
🚨 Excited to announce Agent-BRACE! LLM agents in long-horizon POMDPs either blow up their context with raw history or summarize it, discarding uncertainty by collapsing belief into a point estimate. Agent-BRACE decouples the agent into belief state + policy models, jointly trained via RL. Key takeaways:
1️⃣ 🎯 The belief state model produces a structured approximation of the belief distribution as a set of atomic natural-language claims with ordinal verbalized certainty labels ranging from certain to unknown. The policy conditions on this compact belief rather than the full history.
2️⃣ 📈 Outperforms strong RL baselines on long-horizon, partially observable embodied language environments while maintaining a near-constant context window independent of episode length.
3️⃣ 🔄 The learned belief becomes increasingly calibrated as evidence accumulates, and epistemic uncertainty decreases over time: the proportion of claims the agent holds at the strongest level of belief grows from 21% → 52% over an episode. 👇🧵
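The belief representation in takeaway 1️⃣ can be caricatured in a few lines. Everything below is an illustrative guess, not the paper's interface: the certainty label set, claim texts, and function names are all made up to show the shape of the idea (a compact, labeled claim set that the policy conditions on instead of the full history).

```python
from dataclasses import dataclass

# Ordinal certainty labels, strongest to weakest (an assumed vocabulary,
# not necessarily the paper's exact label set).
CERTAINTY = ["certain", "likely", "uncertain", "unknown"]

@dataclass
class Claim:
    """One atomic natural-language claim with a verbalized certainty label."""
    text: str
    certainty: str  # one of CERTAINTY

def serialize_belief(claims: list[Claim]) -> str:
    """Render the belief state as the compact prompt the policy would
    condition on, instead of the full interaction history."""
    return "\n".join(f"- [{c.certainty}] {c.text}" for c in claims)

def strongest_belief_fraction(claims: list[Claim]) -> float:
    """Fraction of claims held at the strongest certainty level (the
    quantity the thread reports growing from 21% to 52% in an episode)."""
    return sum(c.certainty == "certain" for c in claims) / len(claims)

belief = [
    Claim("The keys are in the kitchen.", "likely"),
    Claim("The front door is locked.", "certain"),
    Claim("Someone else is in the house.", "unknown"),
]
print(serialize_belief(belief))
print(strongest_belief_fraction(belief))  # 1 of 3 claims is 'certain'
```

The point of the serialization is that its length scales with the number of live claims, not with episode length, which is how the near-constant context window in takeaway 2️⃣ would arise.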
Computer Science at UT Austin retweeted
Caroline Wang@CarolineWang98·
[1/n] Just wrapped up 7 months interning with @pcastr at DeepMind and I'm so excited to share our work: arxiv.org/abs/2602.10324. TLDR: We used LLM-powered program synthesis to automatically model and discover differences between human and LLM strategic behavior
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
Congratulations to the Class of 2026! 🎓🧡 We can't wait to see how you change the world 🤘
Computer Science at UT Austin retweeted
UT Dean of Students@UTDoS·
Congratulations, Class of 2026! 🎓 We're excited to celebrate this unforgettable achievement with you at commencement tomorrow! For important updates, text UTGRAD26 to 888777 to receive notifications about parking, traffic, weather and ceremony updates. Hook 'em forever 🤘🧡
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
The clear bag policy is in place for ALL COMMENCEMENT EVENTS. Guests and graduates will not be allowed to enter with prohibited bags, including umbrellas, signs, and strollers. Review the guidelines before arriving on campus: utex.as/4u71muU #UT26
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
Share your photos with us for a chance to be featured! #UT26
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
Can't join us in person at DKR? Watch the #UT26 University-wide Commencement ceremony live on the @LonghornNetwork app or the Commencement website. Longhorn Network: utex.as/4fbvm3P Commencement website: utex.as/4tZVVxL
Computer Science at UT Austin retweeted
UT Austin Parking@utaustinparking·
🚗🎓 Commencement Traffic Tips for Today 🎓🚗
🔸 Come early
🔸 Stay late to celebrate
If traffic is backed up in the garage, try an alternate exit:
🅿️ SAN JACINTO Exits - Level 1 & 3
🅿️ TRINITY Exits - Level 1 & 4
More at parking.utexas.edu/news/reminder-…
Computer Science at UT Austin retweeted
UT Austin Parking@utaustinparking·
🚗🎓 Commencement Traffic Tips for Today 🎓🚗
🔸 Come early
🔸 Stay late to celebrate
If traffic is backed up in the garage, try an alternate exit:
🅿️ SAN JACINTO Exits - Level 1 & 3
🅿️ MANOR Exits - Level 1 & 4
🅿️ TRINITY Exits - Level 1 & 4
More at parking.utexas.edu/news/campus-tr…
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
We’re getting ready to celebrate the Class of 2026, and so are our Convocation speakers! 🎓 From a Paralympic gold medalist to a Tony-nominated playwright, this year’s speaker lineup is set to inspire our Longhorns. Learn more about who is taking the stage here - utex.as/4d53Jqx 🤘
Computer Science at UT Austin retweeted
UT Austin Parking@utaustinparking·
🚧 Heads up, Longhorns! 🚧 Robert Dedman Dr. (between MLK & DeLoss Dodds) will be closed TOMORROW from 9 a.m. to 10 p.m. for #UTGrad26 🎓 Plan ahead and check out details here: 👉 parking.utexas.edu/news/campus-tr…
Computer Science at UT Austin retweeted
UT Austin@UTAustin·
The clear bag policy is in place for ALL COMMENCEMENT EVENTS. Guests and graduates will not be allowed to enter with prohibited bags, including umbrellas, signs, and strollers. Review the guidelines before arriving on campus: utex.as/4dash1h #UT26
Computer Science at UT Austin retweeted
AAAI@RealAAAI·
We are thrilled to present a detailed report describing the system built for the AAAI-26 AI review pilot, the survey results, and a new benchmark that was created to assess the capabilities of the system. Read the full article: arxiv.org/pdf/2604.13940
Computer Science at UT Austin retweeted
UT Office of Emergency Management
Sign Up for Commencement 2026 Updates 📲
Text UTGRAD26 to 888777 for alerts on parking, traffic & weather.
On campus longer? Text UTGUEST to 888777 for more updates.
Text STOP to opt out anytime.
Computer Science at UT Austin retweeted
Giannis Daras@giannis_daras·
A central concept in diffusion is the iterative refinement of model predictions. But what if we could gradually refine our dataset too? 🤔 Introducing Ambient Dataloops (ICML 2026): a new paradigm for co-evolving datasets and generative models. As the generator becomes better, so does the dataset we train it on!
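The loop structure the tweet describes (model trained on a dataset that the model itself then improves) can be sketched with a deliberately tiny caricature. Nothing here is the paper's method: the "generator" below is just a running mean estimate and "refining the dataset" nudges noisy samples toward it; the sketch only shows the alternating train-then-refine structure.

```python
import random

random.seed(0)

# A noisy "dataset" around an unknown true signal of 5.0.
true_signal = 5.0
dataset = [true_signal + random.gauss(0, 2.0) for _ in range(1000)]

for _ in range(5):
    # "Train" the toy generator: here, just fit the mean of the data.
    model = sum(dataset) / len(dataset)
    # Let the current model refine the dataset: pull each sample halfway
    # toward the model's estimate, shrinking noise round after round.
    dataset = [0.5 * x + 0.5 * model for x in dataset]

print(round(model, 2))
```

Each round the refined dataset has less noise, so the next round's model is fit to cleaner data, which is the co-evolution intuition (in the real work the generator is a diffusion model and the refinement is far more principled than this averaging).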
Computer Science at UT Austin retweeted
Elias Stengel-Eskin@EliasEskin·
🎉 Happy to share that our paper introducing GCMs for confidence estimation based on historical predictions has been accepted to #ICML2026! We find that, given training, models are no better at predicting their own correctness than the correctness of other models, i.e., they have no privileged self-access. Training smaller models to predict the correctness of many models generalizes, and leads to better calibration than self-reported confidence from much larger models! Check out the 🧵 for more
Elias Stengel-Eskin@EliasEskin

🚨 Announcing Generalized Correctness Models (GCMs) 🚨 Finding that LLMs have little self-knowledge about their own correctness, we train an 8B GCM to predict the correctness of many models, which is more accurate than training model-specific CMs, and outperforms a larger Llama-3-70B's self-emitted confidences in downstream selective prediction tasks. We motivate GCMs and analyze them by answering 2 questions:
❓ RQ1: Are LLMs better than other LLMs at predicting their own correctness? We find that they are not; instead, historical information (past LLM outputs and their correctness) drives performance, motivating cross-model transfer and the training of GCMs!
❓ RQ2: How can we use historical information from multiple models for correctness prediction? Within RQ2, we explore 3 further subquestions, informing the design of GCMs:
1⃣ How does confidence prediction generalize across models? GCMs transfer strategies across models and datasets, even beating models trained directly on OOD datasets.
2⃣ What information should GCMs condition on? The exact way an LLM phrases an answer is a strong predictor of correctness, and strategies leveraging world knowledge seem to drive generalization.
3⃣ How do alternative methods for encoding history (e.g. post hoc calibration, ICL) compare? Including historical information in-context can aid larger models in predicting correctness but underperforms GCMs, and post hoc calibration can complement GCMs to reduce calibration error. 🧵👇
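The core GCM idea, predicting correctness from historical (answer, correctness) records pooled across many models rather than from each model's self-reported confidence, can be shown with a toy frequency table. This is a heavily simplified stand-in: the single "does the answer hedge" feature, the record format, and the model names are all invented for illustration; the actual GCM is an 8B LLM conditioning on much richer answer-phrasing signals.

```python
from collections import defaultdict

# Historical records pooled across several models:
# (model name, does the answer hedge?, was the answer correct?)
history = [
    ("model_a", False, True), ("model_a", True, False), ("model_a", False, True),
    ("model_b", False, True), ("model_b", True, False), ("model_b", True, False),
    ("model_c", False, True), ("model_c", True, True),  ("model_c", False, True),
]

def train_gcm(records):
    """Estimate P(correct | feature) from history pooled over ALL models,
    ignoring which model produced each answer (cross-model training)."""
    stats = defaultdict(lambda: [0, 0])  # feature -> [n_correct, n_total]
    for _model, hedges, correct in records:
        stats[hedges][0] += correct
        stats[hedges][1] += 1
    return {feat: n_correct / n_total
            for feat, (n_correct, n_total) in stats.items()}

gcm = train_gcm(history)
# A hedged answer from a NEVER-SEEN model gets a low predicted correctness,
# because the hedging signal transfers across models.
print(gcm[True], gcm[False])  # 0.25 1.0
```

The cross-model transfer claim in the thread corresponds to the fact that the table is keyed only by answer features, never by model identity, so it applies unchanged to a new model's outputs.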

Computer Science at UT Austin retweeted
Hongli Zhan@HongliZhan·
New paper! 🏁 My final one from my PhD at UT Austin. 🦜LLMs sound empathic, but they keep saying the same thing over and over. Not just the same words, the same discourse moves, turn after turn. We found that LLMs repeat the same discourse moves at nearly 2x the rate of human supporters across a multi-turn conversation, and existing metrics don’t catch this. So we built MINT 🌿 (Multi-turn Inter-tactic Novelty Training), the first RL framework to optimize discourse move diversity in multi-turn empathic dialogue. +25% empathy, −26% repetition. w/ @jessyjli @_desmond_ong et al. 📄 arxiv.org/abs/2604.11742
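The repetition the tweet measures, and the kind of diversity signal an RL trainer could optimize, can be sketched in a few lines. The discourse-move labels and the reward shape below are illustrative assumptions only; they are not MINT's taxonomy or its actual training objective.

```python
def repetition_rate(moves: list[str]) -> float:
    """Fraction of turns that reuse the previous turn's discourse move."""
    repeats = sum(a == b for a, b in zip(moves, moves[1:]))
    return repeats / (len(moves) - 1)

def novelty_reward(moves: list[str]) -> float:
    """A minimal diversity-style reward an RL framework could maximize."""
    return 1.0 - repetition_rate(moves)

# Hypothetical move sequences for one 5-turn supportive conversation.
llm_moves   = ["validate", "validate", "validate", "suggest", "validate"]
human_moves = ["validate", "question", "reflect", "suggest", "validate"]
print(repetition_rate(llm_moves), repetition_rate(human_moves))  # 0.5 0.0
```

In this toy example the LLM-style sequence repeats a move on half of its turn transitions while the human-style sequence repeats none, mirroring the roughly 2x repetition gap the tweet reports at the level of discourse moves rather than words.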