Poorva Garg

21 posts

@PoorvaGarg11

Computer Science Ph.D. student at UCLA

Los Angeles, CA · Joined July 2022
126 Following · 183 Followers
Poorva Garg retweeted
Arabdha Biswas @ArabdhaBiswas
@sandyskim and I will be at #cshldata24! Sandy’s giving a talk tomorrow on hierarchical modeling of CRISPR screens, and I’ll be presenting a poster on integrating polygenic risk to improve LDL-C measurement. Come check us out :) @hjpimentel @cshlmeetings
Poorva Garg retweeted
Yuchen Cui @YuchenCui1
🚀 I am recruiting PhD students for Fall 2025 at the UCLA Robot Intelligence Lab! 🤖 If you are interested in robot learning and human-robot interaction, mark me as a potential advisor when you apply to the UCLA CS PhD program! #PhD #Robotics @CS_UCLA
Poorva Garg retweeted
Christina Chance @christinachanc
1/n @uclanlp is researching how Black, LGBTQIA+, & women communities perceive and are affected by content moderation of English-language social media content that uses reclaimed language. As part of this, we are recruiting annotators (forms.gle/KP6F9gDCo8Skjs…) …
Poorva Garg retweeted
Konstantinos Kallas @KonsKallas
I am looking for 1-2 PhD students interested broadly in computer systems, compilers, and/or PL! If you would like to do your PhD in a vibrant city with great weather, next to the sea 🌊, and the mountains 🏔️, make sure to apply to UCLA and mark my name as a potential advisor!
Poorva Garg retweeted
Anji Liu @liu_anji
[1/n] 🚀Diffusion models for discrete data excel at modeling text, but they need hundreds to thousands of diffusion steps to perform well. We show that this is caused by the fact that discrete diffusion models predict each output token *independently* at each denoising step.
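A minimal sketch of the independence issue this thread describes: at each denoising step the model outputs one categorical distribution per position and samples every still-masked position independently, so within-step dependencies between tokens are lost. The toy `denoiser` and vocabulary below are hypothetical stand-ins, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, LENGTH = 5, 0, 4  # toy sizes; token 0 doubles as [MASK]

def denoiser(x):
    """Hypothetical stand-in for the learned network: returns one
    independent categorical distribution per position ([LENGTH, VOCAB])."""
    logits = rng.normal(size=(LENGTH, VOCAB))
    return np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

x = np.full(LENGTH, MASK)
for step in range(3):  # a few denoising steps
    probs = denoiser(x)
    # The crux of the thread: every still-masked position is sampled
    # *independently*, so dependencies between tokens revealed in the
    # same step are ignored -- hence the need for many small steps.
    for i in range(LENGTH):
        if x[i] == MASK:
            x[i] = rng.choice(VOCAB, p=probs[i])
print(x)
```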
Poorva Garg retweeted
Renato Lui Geh @renatogeh
Where is the signal in LLM tokenization space? Does it only come from the canonical (default) tokenization? The answer is no! By looking at other ways to tokenize the same text, we get a consistent boost to LLM performance! arxiv.org/abs/2408.08541 1/5
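A hedged sketch of the intuition: a language model assigns probability to token sequences, and the same string usually admits many tokenizations, so the probability of the string itself is a sum over tokenizations rather than the canonical one alone. The tokenizations and scores below are invented for illustration; the actual method and results are in the linked paper.

```python
import math

def string_logprob(lm_logprob, tokenizations):
    """Aggregate a string's score over several tokenizations of the
    same text: log sum_t P(tokens_t), not just log P(canonical)."""
    return math.log(sum(math.exp(lm_logprob(toks)) for toks in tokenizations))

# Hypothetical alternative tokenizations of the string "unhappy":
tokenizations = [["un", "happy"], ["unh", "appy"], ["u", "n", "happy"]]

# Stand-in scorer; a real one would run the LM on each token sequence.
fake_scores = {("un", "happy"): -2.0, ("unh", "appy"): -5.0,
               ("u", "n", "happy"): -7.0}
lm_logprob = lambda toks: fake_scores[tuple(toks)]

print(string_logprob(lm_logprob, tokenizations))  # > -2.0: mass beyond canonical
```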
Poorva Garg @PoorvaGarg11
HyBit uses the above theoretical results to bit-blast arbitrary discrete-continuous probabilistic programs. We scale with discrete structures in hybrid probabilistic programs better than state-of-the-art inference algorithms on a comprehensive suite of benchmarks.
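The thread doesn't show HyBit's syntax, so the following is a generic, hypothetical illustration of what bit-blasting means here: a continuous variable is discretized into K bits, after which inference over a mixed discrete-continuous query reduces to exact enumeration over bit vectors.

```python
from itertools import product

K = 4  # number of bits; the discretization gets finer as K grows

def bits_to_unit(bits):
    """Map a K-bit vector to the midpoint of its bin in [0, 1): a
    bit-blasted stand-in for a Uniform(0, 1) continuous variable."""
    v = sum(b << i for i, b in enumerate(bits))
    return (v + 0.5) / 2**K

# Query: P(x > 0.5 AND coin) for x ~ Uniform(0,1), coin ~ Bernoulli(0.3),
# computed by exact enumeration over the bit-blasted state space.
p_event = 0.0
for bits in product([0, 1], repeat=K):
    x = bits_to_unit(bits)
    weight = 1 / 2**K            # each bit pattern is equally likely
    if x > 0.5:
        p_event += weight * 0.3  # multiply in P(coin = heads)
print(p_event)  # -> 0.15, exact here since the bins align with 0.5
```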
Poorva Garg @PoorvaGarg11
Are you looking for an inference algorithm that supports your discrete-continuous probabilistic program? Look no further! We have developed a new probabilistic programming language (PPL) called HyBit that provides scalable support for discrete-continuous probabilistic programs.
Poorva Garg retweeted
Honghua Zhang @HonghuaZhang2
Proposing Ctrl-G, a neurosymbolic framework that enables arbitrary LLMs to follow logical constraints (length control, infilling …) with 100% guarantees. Ctrl-G beats GPT4 on the task of text editing with a >30% higher satisfaction rate in human eval. arxiv.org/abs/2406.13892
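For intuition only, here is a much simpler baseline than Ctrl-G: per-step logit masking, which can enforce purely local lexical constraints. Ctrl-G's guarantee is stronger, covering constraints over the entire output, which per-step masking cannot look ahead to satisfy. The vocabulary and constraint below are toy assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = ["the", "cat", "sat", "END"]
FORBIDDEN = {"cat"}  # toy logical constraint: never emit "cat"

def lm_logits(prefix):
    """Stand-in for an LLM forward pass over the toy vocabulary."""
    return rng.normal(size=len(VOCAB))

prefix = []
while len(prefix) < 5 and (not prefix or prefix[-1] != "END"):
    logits = lm_logits(prefix)
    # Naive per-step masking: rule out tokens that violate the constraint.
    # This handles lexical constraints, but unlike Ctrl-G it cannot
    # guarantee whole-sequence properties (e.g., "a keyword must appear
    # somewhere before END").
    for i, tok in enumerate(VOCAB):
        if tok in FORBIDDEN:
            logits[i] = -np.inf
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    prefix.append(VOCAB[int(rng.choice(len(VOCAB), p=probs))])
print(" ".join(prefix))
```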
Poorva Garg retweeted
Siyan Zhao @siyan_zhao
🚨LLM RESEARCHERS🚨Want a free boost in speed and memory efficiency for your HuggingFace🤗LLM with ZERO degradation in generation quality? Introducing Prepacking, a simple method to obtain up to 6x speedup and 16x memory efficiency gains in prefilling prompts of varying lengths. arxiv.org/pdf/2404.09529… w/ @danielmisrael @guyvdb @adityagrover_
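A rough sketch of the bookkeeping behind packing prompts of varying lengths, under stated assumptions (this is not the paper's code, and the real method also bin-packs prompts across multiple rows): concatenate prompts into one sequence, restart position ids at every prompt boundary, and restrict attention with a block-diagonal causal mask so prompts cannot see each other.

```python
import numpy as np

# Hypothetical batch of tokenized prompts with very different lengths.
prompts = [[11, 12, 13, 14, 15], [21, 22], [31, 32, 33]]

# Pack all prompts into one row instead of padding each to the max length.
packed = [t for p in prompts for t in p]

# Restart position ids at each prompt boundary so every prompt is
# positioned as if it were alone in the batch.
position_ids = [i for p in prompts for i in range(len(p))]

# Block-diagonal causal mask: tokens attend only within their own prompt.
n = len(packed)
mask = np.zeros((n, n), dtype=bool)
start = 0
for p in prompts:
    end = start + len(p)
    mask[start:end, start:end] = np.tril(np.ones((len(p), len(p)), dtype=bool))
    start = end

print(packed, position_ids, sep="\n")
print(mask.astype(int))
```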
Poorva Garg retweeted
Kareem Ahmed @KareemYousrii
Neuro-Symbolic (NeSy) methods inject constraints into NNs, but do not support autoregressive models, e.g., transformers. We propose Pseudo-Semantic loss, a NeSy framework for injecting arbitrary logical constraints into autoregressive models. At #NeurIPS2023! openreview.net/pdf?id=hVAla2O…
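As a reference point for what such a loss computes, here is a semantic-loss-style calculation under a fully factorized distribution, the kind of local approximation the pseudo-semantic loss builds around a model sample; the paper's exact construction differs, and the sizes and constraint below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-position token distributions, assumed fully independent: the
# factorized surrogate that stands in for the autoregressive model.
logits = rng.normal(size=(4, 6))                  # 4 positions, 6 tokens
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

BAD = 0  # toy constraint: token id 0 must never appear in the output

# Under independence, P(constraint holds) factorizes exactly over positions.
p_constraint = np.prod(1.0 - probs[:, BAD])

# Semantic-loss-style objective: -log P(constraint holds).
loss = -np.log(p_constraint)
print(p_constraint, loss)
```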
Poorva Garg retweeted
Zhe Zeng @zhezeng0908
Uncertainty quantification for neural networks via Bayesian model average is compelling, but uses just a few samples in practice😥 We propose 👾CIBER, a collapsed sampler to aggregate infinite NNs via volume computations, w/ better accuracy & calibration! arxiv.org/abs/2306.09686
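CIBER's volume computations can't be reconstructed from the tweet; for context, this sketches the baseline it improves on, a Monte Carlo Bayesian model average that relies on only a few posterior samples (the "few samples in practice" the thread calls out). The linear stand-in network and the posterior below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, x):
    """Stand-in network: a linear model with a sigmoid head."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

x = np.array([0.5, -1.0, 2.0])
posterior_mean, posterior_std = np.zeros(3), np.ones(3)

# Naive Bayesian model average: draw a handful of posterior weight
# samples and average their predictions -- the small-sample
# approximation that a collapsed sampler aims to do better than.
S = 5
samples = rng.normal(posterior_mean, posterior_std, size=(S, 3))
bma = np.mean([predict(w, x) for w in samples])
print(bma)
```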