Robert Pless
@rbpless
1.5K posts
Professor at GWU, Developer of @traffickcam and @projectRephoto, mercenary interest in glitter and contemporary art. BlueSky: @pless.bsky.social
Washington DC · Joined June 2009
787 Following · 982 Followers

Robert Pless @rbpless
#sundog & more in the Schefferville airport webcam today
[image]
0 replies · 0 reposts · 3 likes · 155 views

Robert Pless @rbpless
LOL, Amazon recommending shoe size based on customer reviews...
[image]
1 reply · 0 reposts · 2 likes · 128 views

Robert Pless @rbpless
I'm excited to share that GWU is hosting a World Bank Symposium on AI & the Future of Human Capital. For those not steeped in development work, Human Capital == people's skills, knowledge, and health. worldbank.org/en/events/2025… Some flexibility in the deadline may be possible.
0 replies · 1 repost · 4 likes · 284 views

Robert Pless reposted
Bray Falls @astrofalls
This is what I've been up to the last year! Building the largest remote observatory in the world (by quantity of scopes)
[image]
443 replies · 927 reposts · 12.6K likes · 8M views

Robert Pless @rbpless
I'm an Area Chair for NeurIPS this year. I spent most of today reviewing all the reviews for my 13 assigned papers to highlight insufficient reviews. A few reviews were incredibly shallow and some had hallmarks of GPT, and this was a chance to reject those. (1/4)
1 reply · 0 reposts · 5 likes · 940 views

Robert Pless @rbpless
In the past, as an area chair I would end up with a collection of reviews, some of which sucked, and would somewhat throw up my hands and say "these reviews aren't great and I have to decide based on them, but what can I do ... there are too many papers to do much else". (2/4)
2 replies · 0 reposts · 0 likes · 278 views

Robert Pless @rbpless
This step, where the executive decision maker is obliged to object to or approve of the reviews, aligns the authority to make the decisions with the responsibility to get good reviews. Additionally, the Area Chairs see the name of the reviewer (I'm looking at you, reviewer #2). (3/4)
0 replies · 0 reposts · 0 likes · 241 views

Robert Pless @rbpless
Finally, papers are routed to specific Area Chairs based on expertise. The research community isn't so big once you are in a specific area. I notice the community members that put extra effort into writing reviews, and those that don't. Thank you to those that do! (4/4)
0 replies · 0 reposts · 1 like · 104 views

Andrej Karpathy @karpathy
Mildly obsessed with what the "highest grade" pretraining data stream looks like for LLM training, if 100% of the focus was on quality, putting aside any quantity considerations. Guessing something like textbook content, in markdown? Or possibly samples from a really giant model?

Curious what the most powerful e.g. 1B param model trained on a dataset of 10B tokens looks like, and how far "micromodels" can be pushed. As an example, (text)books are already often included in pretraining data mixtures, but whenever I look closely the data is all messed up - weird formatting, padding, OCR bugs, figure text weirdly interspersed with main text, etc. The bar is low. I think I've never come across a data stream that felt *perfect* in quality.
332 replies · 313 reposts · 4.4K likes · 546.2K views

Rohit Gupta @rohitgUCF
@rbpless @karpathy Would you say the kind of filtering they do in FineWeb exceeds that threshold?
1 reply · 0 reposts · 0 likes · 38 views

Robert Pless @rbpless
@rohitgUCF @karpathy I think Andrej's point is that even good data (like textbooks) is full of "crap" like little formatting snippets. I worry that dumb heuristics to fix things lose interesting content. FineWeb (afaik) is mostly filtering/dedup, so probably not losing diversity.
0 replies · 0 reposts · 1 like · 26 views
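
For context, a minimal sketch of the kind of blunt, rule-based quality filter being discussed here. The thresholds and rules are illustrative guesses, not FineWeb's actual pipeline (which also relies on large-scale deduplication):

```python
# Illustrative, rule-based quality filter of the kind used in web-scale
# pretraining pipelines. Thresholds are guesses for illustration, not
# FineWeb's actual parameters.
def passes_quality_heuristics(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                       # too short to be useful prose
        return False
    mean_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_len <= 10:               # token soup or OCR junk
        return False
    alpha_frac = sum(c.isalpha() for c in text) / max(len(text), 1)
    if alpha_frac < 0.7:                      # heavy markup/formatting residue
        return False
    return True

# Usage: keep only documents that pass every rule.
corpus = ["example document ...", "another document ..."]  # placeholder texts
clean = [doc for doc in corpus if passes_quality_heuristics(doc)]
```

Rules this blunt would also discard tables, code, and math-heavy pages, which is exactly the loss-of-interesting-content worry raised in the reply above.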

Robert Pless @rbpless
Question: Does anyone have pre-computed FAISS indices for large datasets (e.g. LAION 2B, 5B) that have been used to train CLIP-like models, and would be willing to share the index or access for queries? The one below doesn't seem to work anymore: knn5.laion.ai/knn-service
1 reply · 0 reposts · 0 likes · 167 views
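
For readers unfamiliar with the setup, a minimal sketch of what such an index involves: nearest-neighbor search over CLIP embeddings with FAISS. The file names are hypothetical stand-ins for a precomputed embedding dump, and the flat-index choice is for illustration; a LAION-scale index would use a compressed variant (e.g. IVF-PQ) instead:

```python
# Illustrative sketch, not a real published index: cosine-similarity
# search over CLIP image embeddings. "embeddings.npy" and "query.npy"
# are hypothetical files.
import numpy as np
import faiss  # pip install faiss-cpu

d = 512  # CLIP ViT-B/32 embedding dimension
xb = np.load("embeddings.npy").astype("float32")  # (N, d) database vectors
faiss.normalize_L2(xb)        # unit-normalize so inner product = cosine similarity

index = faiss.IndexFlatIP(d)  # exact search; billions of vectors need IVF/PQ
index.add(xb)

xq = np.load("query.npy").astype("float32").reshape(1, d)
faiss.normalize_L2(xq)
scores, ids = index.search(xq, 5)  # top-5 nearest neighbors
print(ids[0], scores[0])
```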

Robert Pless @rbpless
I'm lucky to get to work on important problems with really great researchers. @abby621 is talking about our research on image search tools to support sex trafficking investigations tomorrow (Wednesday) at noon Eastern time, link in the following post: bsky.app/profile/astyli…
0 replies · 0 reposts · 1 like · 170 views

Robert Pless @rbpless
Birds (and bugs) flying over DC last night
[image]
0 replies · 0 reposts · 2 likes · 249 views

Jie Zhou @__jiezhou
I'm super excited to start as an assistant professor in the Computer Science department at the George Washington University today! Thanks again for all the help along the way. Looking forward to everything ahead!
6 replies · 6 reposts · 125 likes · 12.3K views

Yoshitomo Matsubara @yoshitomo_cs
As #CVPR2024 concluded yesterday, I think my technical chair role came to an end. It was my pleasure and a great experience to work closely with many chairs, a senior advisor, and the @openreviewnet team for @CVPR! Thank you all!
6 replies · 2 reposts · 37 likes · 5.6K views

Robert Pless @rbpless
At the AI Aspirations event in the old Newseum building. I love being in DC @GWtweets and finding ways to understand what AI research is most important.
[image]
0 replies · 0 reposts · 3 likes · 297 views

Robert Pless reposted
Davide Scaramuzza @davsca1
We are thrilled to share our groundbreaking paper published today in @Nature: "Low Latency Automotive Vision with Event Cameras."

Paper: nature.com/articles/s4158…
Video: youtu.be/dwzGhMQCc4Y
Code & Dataset: github.com/uzh-rpg/dagr

Frame-based sensors such as the RGB cameras used in the automotive industry face a bandwidth–latency trade-off: higher frame rates reduce perceptual latency but increase bandwidth demands, whereas lower frame rates save bandwidth at the cost of missing vital scene dynamics due to increased perceptual latency (see Fig. 1a of the paper).

Event cameras have emerged as alternative vision sensors to address this trade-off. Event cameras measure changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements. Despite these advantages, event-camera-based algorithms are either highly efficient but lag behind image-based ones in accuracy, or sacrifice the sparsity and efficiency of events to achieve comparable results.

To overcome this, we propose a hybrid event- and frame-based object detector based on Deep Asynchronous GNNs, which preserves the advantages of each modality and thus does not suffer from this trade-off. Our method exploits the high temporal resolution and sparsity of events and the rich but low-temporal-resolution information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency. In doing so, it emulates the slow-fast pathways in biological neural networks and uses them to its advantage.

We show that using a 20-Hz RGB camera plus an event camera achieves the same latency as a 5,000-Hz camera with the bandwidth of a 50-Hz camera, i.e., an over 100-fold bandwidth reduction, without compromising accuracy. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras. We release the code and the dataset (DSEC-Detection) to the public.

Kudos to @DanielGehrig6, who, with this work, also received the UZH Annual Award for the Best PhD thesis!

Reference:
Daniel Gehrig, Davide Scaramuzza
Low Latency Automotive Vision with Event Cameras
Nature, May 29, 2024. DOI: 10.1038/s41586-024-07409-w
PDF (Open Access): nature.com/articles/s4158…
Video (Narrated): youtu.be/dwzGhMQCc4Y
Code & Datasets: github.com/uzh-rpg/dagr

@UZH_en @UZH_Science @UZHspacehub @ERC_Research @nccrrobotics
[video] [image]
13 replies · 71 reposts · 468 likes · 35.2K views
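
A back-of-the-envelope check of the bandwidth claim in the post above, using assumed numbers (VGA resolution, 1 byte per pixel); the paper's exact figures may differ, but the 100-fold ratio follows directly from the 5,000 Hz vs. 50 Hz comparison:

```python
# Hypothetical worked example of the post's bandwidth claim: a hybrid
# 20 Hz RGB + event camera system matching the latency of a 5,000 Hz
# camera while consuming the bandwidth of a 50 Hz camera.
frame_bytes = 640 * 480 * 1    # assumed: VGA frames, 1 byte/pixel

bw_fast = 5000 * frame_bytes   # bytes/s for a 5,000 Hz frame camera
bw_hybrid = 50 * frame_bytes   # bytes/s equivalent for the hybrid system

print(f"5,000 Hz camera: {bw_fast / 1e6:.0f} MB/s")     # ~1536 MB/s
print(f"hybrid system:   {bw_hybrid / 1e6:.1f} MB/s")   # ~15.4 MB/s
print(f"bandwidth reduction: {bw_fast / bw_hybrid:.0f}x")  # -> 100x
```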