Thomas O'Brien
@listenaddress

Making research tools

Minnesota · Joined April 2017
190 Following · 221 Followers

665 posts
Thomas O'Brien @listenaddress
Good looking out, Dean
[image]
0 replies · 0 reposts · 2 likes · 750 views
Thomas O'Brien retweeted
Arran Davis @Arran_Davis
Very lucky to have worked with @bbcideas and @UniofOxford to create a short video about our research! We explain why exercising with friends can reduce fatigue and improve performance, while also helping us to build and maintain the social bonds we need to be happy and healthy.
University of Oxford @UniofOxford

How exercising with family and friends can boost physical performance as well as improving mental health 🏃 @oxsocsci's Dr @Arran_Davis shares the science behind the benefits of exercising together. 🎬 | @bbcideas

Oxford, England 🇬🇧
4 replies · 7 reposts · 33 likes · 4.3K views
Thomas O'Brien @listenaddress
One way to (get AI to) be creative is to take something interesting and translate it in a new direction. That's what our translator helps you do; try it out at levinlab.dev/fieldshift. Inspired by @drmichaellevin’s research and @thesephist’s work on exploring latent space.
0 replies · 0 reposts · 5 likes · 810 views
Thomas O'Brien retweeted
Michael Levin @drmichaellevin
Excellent; here's how their paper's abstract looks when FieldShift (levinlab.dev/fieldshift) pivots that idea into non-brain-based body intelligence:

A critical mystery in developmental biology lies in determining how molecular structure impacts the complex morphogenetic dynamics of body tissues. How do large-scale gene expression patterns in body tissues constrain states of bioelectrical activity and transitions between those states? We address these questions using a maximum entropy model of morphogenesis informed by spatial gene expression profiling. We demonstrate that the most probable anatomical states – characterized by minimal energy – display common bioelectric profiles across body areas: local spatially-contiguous sets of body regions reminiscent of morphogenetic systems are co-activated frequently. The predicted activation rate of these systems is highly correlated with the observed activation rate measured in a separate resting state voltage reporter dye data set, validating the utility of the maximum entropy model in describing cell bioelectrical dynamics. This approach also offers a formal notion of the energy of activity within a system, and the energy of activity shared between systems. We observe that within- and between-system energies cleanly separate morphogenetic systems into distinct categories, optimized for differential contributions to integrated versus segregated function. These results support the notion that energetic and transcriptional constraints circumscribe body dynamics, offering insights into the roles that morphogenetic systems play in driving whole-body bioelectric patterns.
3 replies · 5 reposts · 33 likes · 2.1K views
Thomas O'Brien @listenaddress
Also pushed some updates to our domain translator (levinlab.dev/fieldshift):
- Simplified UI
- Input topic is detected and displayed
- Output to any topic you want
- Translation history
- Share feature/page
- Translates a lot faster
0 replies · 1 repost · 2 likes · 330 views
Mattia Rüfenacht @rufenmatt
Wondering if someone has already tried to integrate downstream models for hypothesis testing in certain areas.
1 reply · 0 reposts · 0 likes · 28 views
Mattia Rüfenacht @rufenmatt
Nobel Prizes for AI still have some way to go. But systems like this, even if still very simple, are already helping scientists overcome obstacles between domains and develop new ideas and hypotheses through comprehensive symmetry discoveries. pubs.rsc.org/en/content/art…
1 reply · 0 reposts · 1 like · 59 views
alexander @4tt4r
I won’t be making it to Devcon this year because on October 11th I shipped the most important product of my life to date: my first-born son 🙂 I will certainly miss the reunions, exciting talks, and new connections I always find at Devcon, but I know ethereum and y’all will be great the whole time without me this year. I look forward to catching up on what I can from afar. For now, my biggest priority is my family ❤️
2 replies · 0 reposts · 11 likes · 226 views
Thomas O'Brien @listenaddress

dinos @din0s_

📚 Awesome Information Retrieval 🔍 I’ve compiled a list of some of my favorite IR papers from the past few years. If you’re new to the field and want to understand how Transformer-based retrieval models work before building your RAG application, this should serve as a great starting point.

You'll read about techniques like:
- Late interaction with ColBERT
- Hard negative mining with ANCE
- Knowledge distillation with MarginMSE
- Sparse lexical expansion with SPLADE
- Synthetic query generation with InPars
- Generative Information Retrieval with DSI
- Masked auto-encoder pre-training with RetroMAE
- Instruction tuning with TART
- Large-scale IR pre-training with E5
- Multi-purpose models with BGE-M3
- LLM adaptation techniques with LLM2Vec
... to name a few.

It was a challenge, but I narrowed it down to 16 essential papers, with many more included in the extended version. Without further yapping, here is the list:
- ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT (Khattab et al., 2020) arxiv.org/abs/2004.12832
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020) arxiv.org/abs/2005.11401
- Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval (Xiong et al., 2020) arxiv.org/abs/2007.00808
- Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation (Hofstätter et al., 2020) arxiv.org/abs/2010.02666
- BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models (Thakur et al., 2021) arxiv.org/abs/2104.08663
- SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking (Formal et al., 2021) arxiv.org/abs/2107.05720
- InPars: Data Augmentation for Information Retrieval using Large Language Models (Bonifacio et al., 2022) arxiv.org/abs/2202.05144
- Transformer Memory as a Differentiable Search Index (Tay et al., 2022) arxiv.org/abs/2202.06991
- RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder (Xiao et al., 2022) arxiv.org/abs/2205.12035
- Promptagator: Few-shot Dense Retrieval From 8 Examples (Dai et al., 2022) arxiv.org/abs/2209.11755
- MTEB: Massive Text Embedding Benchmark (Muennighoff et al., 2022) arxiv.org/abs/2210.07316
- Task-aware Retrieval with Instructions (Asai et al., 2022) arxiv.org/abs/2211.09260
- Text Embeddings by Weakly-Supervised Contrastive Pre-training (Wang et al., 2022) arxiv.org/abs/2212.03533
- BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation (Chen et al., 2024) arxiv.org/abs/2402.03216
- Generative Representational Instruction Tuning (Muennighoff et al., 2024) arxiv.org/abs/2402.09906
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders (BehnamGhader et al., 2024) arxiv.org/abs/2404.05961

2 replies · 0 reposts · 3 likes · 286 views
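A minimal sketch of the "late interaction with ColBERT" idea the list above mentions: each query token embedding takes its maximum similarity over all document token embeddings, and those maxima are summed (the MaxSim operator). The function name `maxsim_score` and the toy random embeddings are illustrative, not the ColBERT library's API.

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style late interaction: sum over query tokens of the
    max cosine similarity against all document tokens."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T                      # (n_query_tokens, n_doc_tokens)
    return float(sim.max(axis=1).sum())

rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))        # 4 query tokens, dim 8 (toy data)
doc = rng.normal(size=(12, 8))         # 12 document tokens
print(maxsim_score(query, doc))
```

Because the document-side embeddings can be precomputed and indexed, this scoring is much cheaper at query time than running a full cross-encoder over every query–document pair.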
Delip Rao e/σ @deliprao
If you are an ML researcher who also happens to be very good at solving problems with search, you will not get much ML research done. That’s how powerful search is. I suspect a lot of “ML research” could be avoided by gaining some IR expertise.
14 replies · 16 reposts · 284 likes · 40.8K views
Jerry Tworek @MillionInt
New tshirt pattern just dropped
[image]
5 replies · 12 reposts · 112 likes · 16.5K views
Thomas O'Brien @listenaddress
Dillard on seeing an eclipse. “It began with no ado. It was odd that such a well-advertised public event should have no starting gun, no overture, no introductory speaker. I should have known right then I was out of my depth.” Every time, I can’t believe how good she is.
0 replies · 0 reposts · 3 likes · 744 views
Thomas O'Brien @listenaddress
@utotranslucence ❤️❤️❤️ amazing idea. Would be great to get recommendations for next tasks too. It’d be tricky to do right, but if someone is sick for example, getting recommended healthy food options, vitamin stores, etc. that other folks can order from/run to would be rad.
0 replies · 0 reposts · 1 like · 87 views
•Freyja• @utotranslucence
Hey twitter. I'm making a course/app that walks you through co-ordinating help for a friend or family member in acute need, by recruiting their other friends and family, working out what needs to be done, and negotiating tasks. What would you call an app/course like this?
29 replies · 5 reposts · 84 likes · 9.1K views
Thomas O'Brien @listenaddress
@anotherjesse Really great chatting with you last night, Jesse! Would love to chat more sometime.
1 reply · 0 reposts · 1 like · 60 views
Thomas O'Brien @listenaddress
@allisondman Hey Allison! Starting to work on a v2 right now. Curious: what kinds of things did you imagine wanting to do with a tool like this? And what domains might you want to translate to and from?
0 replies · 0 reposts · 0 likes · 19 views
Allison Duettmann @allisondman
@drmichaellevin I have wished for something like Fieldshift to exist for so long; congrats, @drmichaellevin! Are there any plans to add other fields to translate to apart from developmental bio, philosophy, and finance?
2 replies · 0 reposts · 5 likes · 360 views
Michael Levin @drmichaellevin
Cool talk by Nicholas Humphrey youtube.com/watch?v=9QWaZp… on how consciousness evolved. He emphasizes warm-blooded brains, but processes he mentions are ancient, & predate neurons + pivot from navigating morphogenetic space to 3D behavioral space. levinlab.dev/fieldshift translates:
[YouTube video]
[image]
12 replies · 33 reposts · 202 likes · 17.2K views