DonnyB

71 posts

@donny_bertucci

earthling

somewhere at sometime · Joined October 2021
87 Following · 139 Followers
Pinned Tweet
DonnyB
DonnyB@donny_bertucci·
With millions of data entries, cross-filtering is painfully laggy. Not with our new JavaScript library FalconVis (by me and @domoritz)! With FalconVis, you can cross-filter your visualizations at scale with no interaction delay! Try it out! dig.cmu.edu/falcon-vis/cro… 🧵👇
GIF
English
3
43
274
34.7K
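For context, cross-filtering means a selection in one view (say, a brush over a price histogram) instantly re-filters the aggregates shown in every other view. A minimal sketch of the idea, using plain arrays rather than the FalconVis API (whose actual interface is not shown in the tweet):

```javascript
// Generic illustration of cross-filtering (not the FalconVis API):
// a brush selection on one dimension re-filters counts on another.
const rows = [
  { price: 5, rating: 1 },
  { price: 15, rating: 4 },
  { price: 25, rating: 4 },
  { price: 35, rating: 5 },
];

// Count values of `dim` among rows passing the brush predicate.
function crossFilter(data, predicate, dim) {
  const counts = {};
  for (const row of data) {
    if (!predicate(row)) continue;
    counts[row[dim]] = (counts[row[dim]] ?? 0) + 1;
  }
  return counts;
}

// Brush on the "price" view: keep rows with 10 <= price <= 30,
// then recompute the "rating" histogram.
const ratingCounts = crossFilter(rows, r => r.price >= 10 && r.price <= 30, "rating");
console.log(ratingCounts); // { "4": 2 }
```

This naive linear scan is O(n) per interaction, which is exactly what becomes laggy at millions of rows; the point of a library like FalconVis is to make the same interaction fast at scale, presumably via precomputed index structures rather than rescanning.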
DonnyB
DonnyB@donny_bertucci·
I worked on bioatlas, check it out!
Brandon White@bwhite5290

To replace animal testing with AI, we need MASSIVE human datasets. Today, we're thrilled to share Axiom's new data exploration tool, providing the ability to visually explore the world's largest primary human liver toxicity dataset. Built with Axiom's proprietary wetlab protocols, our dataset includes detailed liver toxicity profiles for over 100,000 distinct molecules.

The key to this dataset is our ability to do high-throughput, multiplexed, high-content screening with primary human liver cells. Traditionally, toxicity assays sacrifice either throughput or biological relevance (using easy-to-grow immortalized cell lines instead of real human cells). We managed to combine throughput, physiological relevance, and multiplexing in one platform.

The assays run in a high-throughput format using automation, meaning thousands of compound-dose conditions can be tested in one experiment. We achieved this using pooled primary human hepatocytes, which are often fragile and expensive. By systematizing our automation and quality control processes, we were able to run over 120 batches on the same donor pool with incredible reproducibility and consistency.

We did this while integrating many readouts per well, whereas many existing toxicity assays only do a single readout. Our multiplexed approach provides far more data per experiment, enabling us to measure 10-20 different toxicity phenotypes such as apoptosis, necrosis, mitochondrial fission, endoplasmic reticulum stress, stress granule formation, microtubules, and more, all from a single well on a 384-well plate!

The combination of scale, high-content information, and data quality is exactly what is needed to train highly accurate AI models in biology. If you're interested, please explore the dataset in the comments below and let me know if you want to chat about the details!

English
0
0
3
1.8K
DonnyB retweeted
Brandon White
Brandon White@bwhite5290·
[Same Axiom dataset announcement as quoted in full above.]
English
9
28
152
25K
λux
λux@novasarc01·
New video from Artem Kirsanov gives a detailed explanation of variational inference and the evidence lower bound.
λux tweet media
English
7
45
534
23.4K
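For reference (standard material, not specific to the video's presentation): variational inference maximizes the evidence lower bound (ELBO), which comes from decomposing the log-evidence with respect to an approximate posterior q(z):

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x,z) - \log q(z)\right]}_{\text{ELBO}}
  + \mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right)
  \;\geq\; \mathbb{E}_{q(z)}\!\left[\log p(x,z) - \log q(z)\right]
```

Since the KL term is nonnegative, maximizing the ELBO over q both tightens the bound on log p(x) and drives q(z) toward the true posterior p(z | x).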
DonnyB retweeted
Will Epperson
Will Epperson@w_epperson·
Excited to share our #CHI2025 paper: "Interactive Debugging and Steering of Multi-Agent AI Systems"! As 🤖 AI agent teams 🤖 powered by LLMs take on complex tasks, debugging and steering these agents becomes crucial. How can developers better understand AI agent behavior?
Will Epperson tweet media
English
2
14
55
8.2K
DonnyB retweeted
Will Epperson
Will Epperson@w_epperson·
Do you work with text data for your job or research? We’d love to talk to you! We're running a paid user study at CMU to understand how people explore and visualize text data while programming—and to gather feedback on our new open-source visualization tool. Details below 👇
English
3
9
12
2.2K
DonnyB retweeted
Nari Johnson
Nari Johnson@narijohnson·
👋 I'm at #NeurIPS2024🇨🇦 workshops this weekend! I'll be presenting new work on participatory AI & public-sector AI governance on Sunday:
10:30am: Oral @ EvalEval
1:15pm: Poster @ EvalEval
2:00pm: Oral @ RegML
English
3
6
29
2.8K
DonnyB
DonnyB@donny_bertucci·
Life update: I quit my PhD. Not sure what's next for me, but I'll keep you all posted.
English
0
0
10
483
DonnyB retweeted
Adam Coscia
Adam Coscia@AdamCoscia·
What happens when you let users integrate data on-the-fly into a #visualization? 🤔 Our #IEEEVIS2024 paper presents guidelines for designing visualization tools that integrate data automatically throughout #analysis! 📊 Let's explore how to combine integration and analysis: 🧵
English
1
3
17
1.2K
DonnyB retweeted
Arpit Narechania
Arpit Narechania@arpitnarechania·
I'm (still) recruiting #PhD students to join our lab @HKUSTCSE @hkust in 🇭🇰 from Fall 2025. I design visual analytics tools to enhance analytics workflows! Check my website: arpitnarechania.github.io If you are at #ieeevis #ieeevis2024, I'd love to chat; email me. More👇
Arpit Narechania@arpitnarechania

📢 I'm joining @HKUSTCSE @HKUST as an assistant professor in Jan'25. I'm recruiting PhD students for Spring/Fall'25 to join my new research lab. Interested in #DataViz, #HCI, #GIS, #NLProc, #AI and to work on interdisciplinary problems? Apply now! Help me spread the word🙏

English
0
2
6
706
DonnyB retweeted
Venkat Sivaraman
Venkat Sivaraman@venkats_14·
D3 animations are amazing for SVG-based vis, but hard to scale to Canvas/WebGL. I made Counterpoint to help me make large animated embedding plots, and now I’m excited to share it as an open-source JS/TS framework. Presenting virtually @ieeevis this Wed! arxiv.org/abs/2410.05645
English
1
2
22
1.8K
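The scaling problem the tweet describes: D3 transitions mutate one SVG node per mark, which breaks down at large point counts, whereas Canvas/WebGL redraws everything each frame, so the animation logic has to compute interpolated attribute values itself. A minimal per-frame interpolation sketch (a generic illustration, not the Counterpoint API):

```javascript
// Generic sketch (not the Counterpoint API): animate many points on a Canvas
// by interpolating attribute values per frame instead of mutating SVG nodes.

// Ease-in-out cubic: maps linear progress t in [0, 1] to eased progress.
function easeCubic(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// Interpolated position of every point at animation progress t in [0, 1].
function interpolateFrame(from, to, t) {
  const k = easeCubic(t);
  return from.map((p, i) => ({
    x: p.x + (to[i].x - p.x) * k,
    y: p.y + (to[i].y - p.y) * k,
  }));
}

// In a browser you would redraw each frame from requestAnimationFrame:
//   ctx.clearRect(0, 0, w, h);
//   for (const p of frame) ctx.fillRect(p.x, p.y, 2, 2);
const frame = interpolateFrame([{ x: 0, y: 0 }], [{ x: 100, y: 50 }], 0.5);
console.log(frame); // [{ x: 50, y: 25 }]
```

Because one draw pass replaces thousands of per-node DOM updates, this pattern scales to far more marks than SVG transitions; a framework's job is mostly to manage these interpolations declaratively.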
DonnyB retweeted
Yu Fu 傅予
Yu Fu 傅予@YuFuTroy·
Excited to be presenting two papers next week–one at #UIST2024 on Wednesday (my first UIST!) and one virtually at #ieeevis on Thursday. Please make sure to check out the amazing work from my labmates @GT_Vis and our collaborators at VIS too!
Yu Fu 傅予 tweet media
English
1
7
25
2K
DonnyB retweeted
VISxAI
VISxAI@VISxAI·
Join us for #VISxAI tomorrow October 13 from 8:30–11:30am ET! 🥳 👀Watch live with us on YouTube youtu.be/UUkftG2KH5o
YouTube video
VISxAI tweet media
English
0
6
12
1.1K
DonnyB
DonnyB@donny_bertucci·
Also made a quick interface to find similar proteins among the 569k Swiss-Prot proteins using the ProteinCLIP embeddings ocular.cc.gatech.edu/DS569k/
English
1
0
0
76
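Finding "similar proteins" with embeddings typically means nearest-neighbor search by cosine similarity over the embedding vectors. A minimal sketch with toy 3-d vectors and hypothetical IDs (real ProteinCLIP embeddings are much higher-dimensional, and an interface over 569k entries would use an approximate-nearest-neighbor index rather than a full scan):

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank database entries by similarity to the query embedding.
function topK(query, db, k) {
  return db
    .map(({ id, vec }) => ({ id, score: cosine(query, vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Toy database; IDs and vectors are made up for illustration.
const db = [
  { id: "P1", vec: [1, 0, 0] },
  { id: "P2", vec: [0.9, 0.1, 0] },
  { id: "P3", vec: [0, 1, 0] },
];
console.log(topK([1, 0, 0], db, 2).map(r => r.id)); // ["P1", "P2"]
```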