Bum Chul Kwon | 권범철 | @bckwon@vis.social

559 posts


@BCKwon

Researcher @IBMResearch. Data Visualization, Visual Analytics, Machine Learning, Health Care, HCI. Views are mine.

Joined April 2009
589 Following · 725 Followers
Bum Chul Kwon | 권범철 | @bckwon@vis.social retweeted
Biology+AI Daily @BiologyAIDaily
STAR-VAE: Latent Variable Transformers for Scalable and Controllable Molecular Generation
1. The vast chemical space of drug-like molecules necessitates powerful generative models. STAR-VAE addresses this by combining a Transformer encoder and autoregressive Transformer decoder, trained on 79 million molecules using SELFIES to ensure syntactic validity. This approach enables both broad distribution learning and conditional generation guided by molecular properties.
2. A key innovation is the principled conditional latent-variable formulation. A property predictor provides a consistent conditioning signal to the latent prior, inference network, and decoder. This allows STAR-VAE to generate molecules with desired properties using limited labeled data, making it highly efficient for property-guided molecular design.
3. Efficiency is further enhanced through low-rank adapters (LoRA) in both the encoder and decoder. This enables fast fine-tuning with minimal data, making the model adaptable to new tasks without extensive retraining. This is crucial for practical applications where labeled data is scarce.
4. STAR-VAE demonstrates strong performance on benchmarks like GuacaMol and MOSES, matching or exceeding existing baselines. In conditional tasks, it shifts docking score distributions toward stronger binding affinities, as shown in the Tartarus protein–ligand design benchmark. This highlights its potential for drug discovery.
5. Latent space analyses reveal smooth, semantically structured embeddings, supporting both unconditional exploration and property-aware steering. This dual capability makes STAR-VAE a versatile tool for navigating the complex landscape of chemical space.
📜Paper: arxiv.org/abs/2511.02769… #MolecularGeneration #Transformer #VAE #DrugDiscovery #AIinChemistry
Bum Chul Kwon | 권범철 | @bckwon@vis.social
🚀 Excited to introduce MMELON—our new multi-view molecular foundation model! By combining graph, image, and text representations, MMELON delivers state-of-the-art performance on prediction and regression tasks. Code: github.com/BiomedSciAI/bi… Preprint: arxiv.org/abs/2410.19704
Biology+AI Daily @BiologyAIDaily

Multi-view biomedical foundation models for molecule-target and property prediction @IBMResearch
• The paper introduces MMELON, a multi-view molecular foundation model combining graph, image, and text views to enhance prediction of molecular properties. Unlike single-view models, MMELON leverages multiple representations for a richer, more versatile molecular embedding.
• The model performs exceptionally well on 18 diverse tasks, including ligand-protein binding, molecular solubility, metabolism, and toxicity, balancing the strengths of each modality. This versatility is critical in drug discovery and computational chemistry.
• MMELON integrates three views—graph, image, and text—to learn comprehensive molecular representations. The image view uses ImageMol (pre-trained on 10 million molecules), while the graph and text views are based on advanced transformer architectures, pre-trained on datasets of 200 million molecules.
• A novel aspect is the “late fusion” of these different modalities, ensuring each modality contributes optimally depending on the downstream task. This approach yields interpretable results and allows for an analysis of how each view supports different predictions.
• For validation, MMELON was applied to screen compounds against a large set of G Protein-Coupled Receptors (GPCRs). Of these, 33 GPCRs related to Alzheimer’s disease were identified, and strong binders were predicted, validated through in silico structure modeling.
• The multi-view model shows strong correlations between predicted and experimental affinities, achieving a Pearson correlation of 0.78 for GPCR binding. This suggests the model’s robust application for identifying new therapeutics.
• Compared to single-view models, MMELON delivers superior performance across classification and regression tasks, making it an essential tool for complex molecular property predictions in drug discovery.
@jamorrone3 @jianying_hu @FeixiongCheng @jeriscience @BCKwon @timrumbell @dplatt_maths @YunguangQiu @diwakarmahajan 💻Code: github.com/BiomedSciAI/bi… 📜Paper: arxiv.org/abs/2410.19704 #biomedicalAI #drugdiscovery #foundationmodel #multiviewlearning #GPCR #Alzheimers #machinelearning #bioinformatics

Bum Chul Kwon | 권범철 | @bckwon@vis.social retweeted
The Nobel Prize @NobelPrize
BREAKING NEWS The 2024 #NobelPrize in Literature is awarded to the South Korean author Han Kang “for her intense poetic prose that confronts historical traumas and exposes the fragility of human life.”
Bum Chul Kwon | 권범철 | @bckwon@vis.social retweeted
Grace @graceguo43
Counterfactuals explain and reduce over-reliance on AI in clinical settings, but how do we create counterfactuals for images like MRIs? And can we ensure their domain relevance? In our new #facct2024 paper (w/ Lifu Deng @ATandonMD @EndertAlex @BCKwon), we present MiMICRI (1/6)
Bum Chul Kwon | 권범철 | @bckwon@vis.social retweeted
Niklas Elmqvist @NElmqvist
It’s Friday, it’s the last day of #ieeevis, and we’re now getting ready for our paper on “Visualization Thumbnails”🎞️🌆🌁🌃 by authors Hwiyeon Kim, Joohee Kim, Yunha Han, Hwajung Hong, Oh-Sang Kwon, Young-Woo Park, myself, @SungahnK, and @BCKwon. Room 109! […]
Bum Chul Kwon | 권범철 | @bckwon@vis.social
Excited to be in Toronto for #ACL2023NLP! Check out the paper, source code, and video of Finspector. I'll be presenting Finspector at the @ibmresearch booth around 9am - 10am Monday & Tuesday and at the main conference hall around 11am - 12:30pm Wednesday. Let's talk 🤩!
Bum Chul Kwon | 권범철 @BCKwon

How can we uncover hidden biases in language models that impact fairness? Our #ACL2023 demo paper introduces Finspector, an interactive visualization widget available as a Python package for Jupyter. Paper, Video, Code: bckwon.com/publication/fi… @nandanamihindu #nlp #fairness

Bum Chul Kwon | 권범철 | @bckwon@vis.social retweeted
Menna El-Assady @melassady
Thank you to #EuroVis and the Early Career Award Committee for the award. I'm honored and humbled to be working in, and recognized by the amazing #ieeevis community. Looking forward to many more years of #visualization research and future collaborations. ☺️ #eurovis2023
Bum Chul Kwon | 권범철 | @bckwon@vis.social
To use Finspector, users i) prepare a test dataset containing sentences and relevant metadata; ii) compute pseudo-log-likelihood scores using pre-trained models like BERT, RoBERTa, and ALBERT; and iii) launch Finspector on Jupyter and visually explore the biases of the models.
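Step ii) above, the pseudo-log-likelihood score, can be sketched as follows. This is a toy illustration of the scoring loop only, not Finspector's code: `toy_mlm_prob` and its unigram table stand in for a real masked language model (in practice you would query a pre-trained model such as BERT, RoBERTa, or ALBERT via Hugging Face transformers and read the softmax probability of the original token at the masked position).

```python
import math

# Hypothetical stand-in for a masked LM: given tokens with one position
# masked, return P(original token at that position). The probabilities
# below are made up for illustration.
UNIGRAM = {"the": 0.30, "doctor": 0.05, "nurse": 0.04, "is": 0.20, "kind": 0.02}

def toy_mlm_prob(tokens, masked_pos):
    return UNIGRAM.get(tokens[masked_pos], 0.01)

def pseudo_log_likelihood(tokens):
    """Mask each token in turn and sum the log-probability the model
    assigns to the original token — the per-sentence score that
    Finspector visualizes across models and sentence groups."""
    return sum(math.log(toy_mlm_prob(tokens, t)) for t in range(len(tokens)))

pll_a = pseudo_log_likelihood(["the", "doctor", "is", "kind"])
pll_b = pseudo_log_likelihood(["the", "nurse", "is", "kind"])
gap = pll_a - pll_b  # a systematic gap across paired templates can signal bias
```

Comparing the scores of template sentences that differ only in one term (as in the pair above) is how such scores surface potential biases visually.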
Nicolas Kruchten @nicolaskruchten
I'm very excited to join @_hex_tech as Visualization Lead this week! 📈📊🗺️🎉