TJ Torres
@Teejosaur

354 posts

Machine learning for work. Pilot, snowboardist, and dog lover.

Oakland, CA · Joined May 2009
259 Following · 406 Followers
TJ Torres@Teejosaur·
I have been spending the past few months developing a genAI project that enables direct prompt -> avatar creation in Roblox. It has been a really fun project to work on, and the preview experience released today, so I can now share. devforum.roblox.com/t/generate-ava…
TJ Torres retweeted
Kevin Patrick Murphy@sirbayes·
I am delighted to announce that the camera-ready version of my new book, "Machine Learning: Advanced Topics", is finally available online for free at probml.github.io/book2 (@mitpress will publish the hard copy in 2023.)
TJ Torres retweeted
Erik Bernhardsson@bernhardsson·
This vuln exploit is insane. iMessage pre-renders gifs of any received messages outside a sandbox – trick it to render an obscure Turing complete file format made for printers in the 90s, and trigger a buffer overflow: googleprojectzero.blogspot.com/2021/12/a-deep…
TJ Torres retweeted
Akshay Agrawal@akshaykagrawal·
New paper: Minimum-Distortion Embedding paper: arxiv.org/abs/2103.02559 We introduce a framework (generalizing spectral, PCA, MDS, UMAP ...) that leads to new embeddings code: pymde.org PyMDE lets you fit custom embeddings w/constraints on a GPU via @PyTorch
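The minimum-distortion idea generalizing MDS and friends can be sketched without the PyMDE API: pick coordinates that minimize a sum of per-pair distortion penalties. A toy 1-D version with quadratic penalties (the target distances, learning rate, and iteration count below are all invented for illustration):

```python
import random

random.seed(0)

# Toy 1-D minimum-distortion embedding (not the PyMDE API): choose
# positions x_i minimizing sum over pairs of (|x_i - x_j| - d_ij)^2.
# Target distances are invented and realizable on a line.
targets = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0}
x = [random.uniform(-1, 1) for _ in range(3)]

lr = 0.05
for _ in range(2000):
    grad = [0.0] * len(x)
    for (i, j), d in targets.items():
        diff = x[i] - x[j]
        dist = abs(diff) + 1e-12            # avoid division by zero
        g = 2 * (dist - d) * (diff / dist)  # d/dx_i of this pair's penalty
        grad[i] += g
        grad[j] -= g
    x = [xi - lr * gi for xi, gi in zip(x, grad)]

distortion = sum((abs(x[i] - x[j]) - d) ** 2 for (i, j), d in targets.items())
```

Since the targets are realizable on a line, the distortion drives to near zero; PyMDE's actual objectives and constraint handling are richer than this sketch.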
TJ Torres@Teejosaur·
Deciding to share some exciting career news: As of next Monday I’ll be joining the @Roblox Data Science team leading efforts for Trust and Safety. I’m really looking forward to taking on the challenge of keeping the platform safe and enjoyable for all players.
TJ Torres@Teejosaur·
@emollick This makes a lot of sense, but worth noting that adopting a strategy of more frequent, smaller trials contrasts with the way we reward progress in industry right now. PMs are often judged on % successful outcomes, not volume of experimentation. It would require a larger shift in mindset.
Ethan Mollick@emollick·
Finding breakthroughs at large companies: A/B tests at Bing show most tweaks have tiny benefits requiring large tests to identify. Instead, run more small & less accurate tests: the top 2% of ideas are responsible for 74.8% of the gains. You can screen for radical ideas quickly!
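The arithmetic behind "run more, smaller tests" can be sketched with a toy simulation, assuming idea payoffs are heavy-tailed (the lognormal parameters below are invented, not fit to the Bing data):

```python
import random

random.seed(0)

# Toy model: each idea's payoff is drawn from a heavy-tailed lognormal.
# Under heavy tails, a handful of ideas carries most of the total gain,
# which favors screening many ideas cheaply over testing few precisely.
ideas = sorted((random.lognormvariate(0, 2.5) for _ in range(10_000)), reverse=True)

top = ideas[: len(ideas) // 50]          # the top 2% of ideas
share = sum(top) / sum(ideas)
print(f"top 2% of ideas hold {share:.0%} of total gains")
```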
TJ Torres retweeted
Google AI@GoogleAI·
Check out a new approach to vector similarity search that compresses dataset vectors to enable fast approximate distance computation while significantly boosting the accuracy of database queries compared to prior approaches. Learn about it here: goo.gle/30U126j
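A minimal sketch of the compression idea, assuming plain nearest-codeword quantization (the post's method learns an anisotropic codebook; the data and codebook size here are made up): distances are computed once per codeword, then looked up per database vector.

```python
import math
import random

random.seed(1)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy database and codebook: the first 8 vectors stand in as "centroids".
db = [(random.random(), random.random()) for _ in range(200)]
codebook = db[:8]
codes = [min(range(len(codebook)), key=lambda k: dist(v, codebook[k])) for v in db]

query = (0.5, 0.5)
# Asymmetric distance computation: 8 exact query-to-codeword distances,
# then a table lookup per database vector instead of a full distance.
table = [dist(query, c) for c in codebook]
approx = [table[k] for k in codes]
exact = [dist(query, v) for v in db]
```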
TJ Torres retweeted
Google DeepMind@GoogleDeepMind·
Moving away from negative pairs in self-supervised representation learning: our new SotA method, Bootstrap Your Own Latent (BYOL), narrows the gap between self-supervised & supervised methods simply by predicting previous versions of itself. See here: bit.ly/30MCugQ
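The "predicting previous versions of itself" part comes down to a target network kept as an exponential moving average of the online network. A minimal numeric sketch, with scalar stand-ins for the weights and an assumed decay rate:

```python
# Scalar sketch of BYOL's target network: an exponential moving average
# (EMA) of the online network. The online network's own gradient step
# (minimizing a prediction loss against the target's output) is omitted.
tau = 0.99        # EMA decay (assumed; the paper uses a schedule)
online_w = 1.0    # stand-in for online network parameters
target_w = 0.0    # stand-in for target network parameters

for step in range(100):
    # ...gradient step on online_w would go here...
    target_w = tau * target_w + (1 - tau) * online_w  # no gradients flow here
```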
TJ Torres retweeted
François Chollet@fchollet·
If you are a Black ML developer or data scientist in the US, show me your open-source projects / portfolio, and I will retweet. You deserve attention, respect, and opportunity.
TJ Torres retweeted
Google DeepMind@GoogleDeepMind·
For students and others interested in expanding their knowledge of AI during this period, we thought it might be helpful to ask our researchers what they consider to be the most impactful and insightful resources available to use #AtHomeWithAI (1/9)
TJ Torres@Teejosaur·
Really nice discussion on the intricacies of training in high-cardinality multi-task settings.
Eric Jang@ericjang11

This talk by @karpathy youtu.be/IHH47nZ7FZU has convinced me that Tesla is several years ahead of most CV labs in regards to pushing the limits of DL. Commonplace questions like "how do you do early stopping for a multi-task model?" are non-trivial when at scale.

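A sketch of the bookkeeping the "early stopping for a multi-task model" question implies: each task head tracks its own best validation loss and patience, and is frozen independently of the others. All task names and numbers below are invented:

```python
# Per-task early stopping for a multi-task model: a head stops training
# once its validation loss fails to improve for `patience` evaluations,
# without halting the rest of the model.
def update_early_stopping(state, task, val_loss, patience=3):
    best, waited, frozen = state.get(task, (float("inf"), 0, False))
    if frozen:
        return state
    if val_loss < best:
        state[task] = (val_loss, 0, False)  # improvement: reset patience
    else:
        state[task] = (best, waited + 1, waited + 1 >= patience)
    return state

state = {}
for loss in [1.0, 0.8, 0.85, 0.9, 0.9]:
    update_early_stopping(state, "detection", loss)
```

At scale this interacts with shared backbones (a frozen head still receives gradients through shared layers unless detached), which is part of why the question is non-trivial.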
TJ Torres retweeted
Ankur Handa@ankurhandos·
PointRend: Image Segmentation as Rendering arxiv.org/abs/1912.08193 They use adaptive sampling around regions of uncertainty to refine labels, going from coarse to fine resolution, inspired by graphics-based rendering techniques. code: github.com/facebookresear…
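The adaptive-sampling step can be sketched in a few lines: pick the points whose coarse mask probability sits closest to 0.5 (most uncertain) and refine only those. The probabilities below are made up:

```python
# Sketch of PointRend-style point selection: refine the k points where
# the coarse foreground probability is nearest 0.5, i.e. where the
# coarse prediction is least confident.
coarse_probs = [0.95, 0.51, 0.10, 0.48, 0.88, 0.52, 0.03, 0.60]

def most_uncertain(probs, k):
    return sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))[:k]

refine = most_uncertain(coarse_probs, 3)
```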
TJ Torres retweeted
John McDonnell@johnvmcdonnell·
New post, how a monkey can beat the S&P 500… I'm honestly still trying to wrap my head around why cap weighting is so bad, but at least on the historical record it's pretty clear that it's a losing strategy. jvmcdonnell.com/2020/02/03/how…
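One way to poke at the cap-weighting claim is a toy simulation (returns, asset count, and horizon all invented): compare buy-and-hold of the whole market with an equal-weight "monkey" portfolio that rebalances every period.

```python
import random

random.seed(7)

# Toy market: 20 assets, 120 periods of i.i.d. returns.
n_assets, n_periods = 20, 120
prices = [[1.0] for _ in range(n_assets)]
for _ in range(n_periods):
    for p in prices:
        p.append(p[-1] * (1 + random.gauss(0.005, 0.08)))

def cap_weight_growth(prices):
    # Buy-and-hold of the whole market: weights drift toward winners.
    return sum(p[-1] for p in prices) / sum(p[0] for p in prices)

def equal_weight_growth(prices):
    # Rebalance to equal weights every period: wealth compounds the
    # cross-sectional average return.
    wealth = 1.0
    for t in range(1, len(prices[0])):
        wealth *= sum(p[t] / p[t - 1] for p in prices) / len(prices)
    return wealth

cw, ew = cap_weight_growth(prices), equal_weight_growth(prices)
```

With i.i.d. returns the equal-weight portfolio picks up a rebalancing effect; whether that explains the historical record is exactly what the post is debating.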
TJ Torres retweeted
Ned 'no longer here' Potter@ned_potter·
Cows make milk. They milk themselves. Other cows check the milk (for free). Cows - get this - PAY THE FARMER to take the milk away. Then the farmer (you won't believe this, honestly) sells the milk *back to the cows.* #academicpublishing
TJ Torres retweeted
Chris Said@Chris_Said·
I just wrote a 3-part blog post on optimizing sample sizes in A/B testing. It's kind of a longread, so if you just want the elevator pitch, here's a tweetstorm (1/20): chris-said.io/2020/01/10/opt…
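For reference, the standard two-sample normal-approximation sample size per arm, for detecting a mean difference delta at a given significance level and power (the textbook formula, not necessarily the post's exact variant):

```python
from math import ceil
from statistics import NormalDist

# Per-arm sample size for a two-sided test of a difference in means
# delta, with common standard deviation sigma:
#   n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
def sample_size(delta, sigma, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance term
    z_b = NormalDist().inv_cdf(power)           # power term
    return ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

n = sample_size(delta=0.1, sigma=1.0)  # detecting a 0.1-sigma effect
```

The quadratic dependence on sigma/delta is the whole game: halving the detectable effect quadruples the required sample.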
TJ Torres@Teejosaur·
@jxnlco 😂 I think last time for me was in E&M with multipole expansions.
jason liu@jxnlco·
@Teejosaur I have not heard that word in years.
San Francisco, CA 🇺🇸
TJ Torres@Teejosaur·
Ok this is a pretty novel idea (at least to me) and seems to have some really nice results in terms of long term dependence.
hardmaru@hardmaru

Found this gem @ #Neurips2019! Using the orthogonality of Legendre polynomials, Legendre Memory Units (LMU) can efficiently handle temporal dependencies spanning 100k timesteps, converge rapidly, and use fewer internal state variables compared to LSTMs. papers.nips.cc/paper/9689-leg…

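The orthogonality the LMU construction leans on is easy to check numerically via the Bonnet recurrence (the integration step count below is an arbitrary choice):

```python
def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

def inner(m, n, steps=100_000):
    # Midpoint-rule integral of P_m(x) * P_n(x) over [-1, 1].
    h = 2.0 / steps
    return sum(
        legendre(m, t) * legendre(n, t) * h
        for t in (-1 + (i + 0.5) * h for i in range(steps))
    )

off_diag = inner(3, 5)  # orthogonal: should be ~0
diag = inner(4, 4)      # normalization: should be 2 / (2*4 + 1)
```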
TJ Torres retweeted
hardmaru@hardmaru·
Deep Equilibrium Models Their method is equivalent to running an infinite depth (weight-tied) feedforward net, but has the notable advantage that they can analytically backpropagate through the equilibrium point using implicit differentiation. Cool work! arxiv.org/abs/1909.01377
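The implicit-differentiation trick is visible in a scalar toy version: iterate z = tanh(w*z + x) to its fixed point, then differentiate through the fixed point with the implicit function theorem instead of unrolling the iterations (the values of w and x are arbitrary):

```python
import math

# Scalar deep-equilibrium sketch: the "network output" is the fixed
# point z* of z = tanh(w*z + x).
w, x = 0.5, 0.3

z = 0.0
for _ in range(100):                  # forward pass: solve for z*
    z = math.tanh(w * z + x)

# Implicit function theorem: z* = f(z*, x)  =>  dz*/dx = f_x / (1 - f_z)
s = 1 - math.tanh(w * z + x) ** 2     # derivative of tanh at the fixed point
dz_dx = s / (1 - w * s)

# Finite-difference check of the same derivative
eps = 1e-6
z2 = 0.0
for _ in range(100):
    z2 = math.tanh(w * z2 + x + eps)
fd = (z2 - z) / eps
```

The memory advantage in the paper comes from exactly this: the backward pass needs only the equilibrium point, not the iteration history.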