

rishabh ranjan

@_rishabhranjan_
Stanford CS PhD w @jure and @guestrin. Prev. CMU w @zacharylipton, IIT Delhi. I like neural networks.



Thoroughly enjoyed the discussions on PluRel and Relational Foundation Models during the talk! Thanks to the amazing audience at @tempgraph_rg. Slides: drive.google.com/file/d/1oF-hNY… Website: snap-stanford.github.io/plurel/ GitHub: github.com/snap-stanford/…


📚 Today at the Reading Group, Thu, Feb 26, 11am EST, we’re excited to host Vignesh Kothapalli @kvignesh1420 (Stanford University) presenting: PluRel: Synthetic Data Unlocks Scaling Laws for Relational Foundation Models. Zoom link on our website. See you there! 🚀


This Thursday (Feb 19, 11am EST) at the reading group: Rishabh Ranjan (Stanford) presents Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data. Paper & code: github.com/snap-stanford/… Hope to see you there! Zoom link on the website!


Relational Foundation Models face a scaling problem: diverse training datasets are rarely public due to privacy constraints 🔒. 🚀 We are excited to introduce "PluRel": a framework that synthesizes diverse multi-table relational databases from scratch, unlocking scaling laws for RFMs. 🧵 Kudos to the amazing collaborators at @StanfordAILab @Kumo_ai_team , and @SAP : @_rishabhranjan_ @VHudovernik @vijaypradwi @johanneshoffart @guestrin @jure
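To make this concrete, here is a minimal sketch of what synthesizing a small multi-table relational database from scratch can look like. The schema, distributions, and code below are illustrative assumptions, not PluRel's actual pipeline:

```python
# Hypothetical sketch: generate a tiny synthetic relational database
# (two tables linked by a foreign key, with timestamps). Illustrates
# the general idea only; this is NOT PluRel's actual algorithm.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_customers, n_orders = 100, 1_000

customers = pd.DataFrame({
    "customer_id": np.arange(n_customers),
    "segment": rng.choice(["consumer", "business"], size=n_customers),
})
orders = pd.DataFrame({
    "order_id": np.arange(n_orders),
    # foreign key into the customers table
    "customer_id": rng.integers(0, n_customers, size=n_orders),
    "amount": rng.lognormal(mean=3.0, sigma=1.0, size=n_orders).round(2),
    "timestamp": pd.Timestamp("2024-01-01")
                 + pd.to_timedelta(rng.integers(0, 365, size=n_orders), unit="D"),
})
# A synthetic pretraining corpus would sample many such schemas,
# column types, and value distributions at scale.
```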


Transformers are great for sequences, but most business-critical predictions (e.g. product sales, customer churn, ad CTR, in-hospital mortality) rely on highly structured relational data where the signal is scattered across rows, columns, linked tables and time. Excited to finally share what I have been working on over the last year: a Foundation Model architecture that brings the power of Transformers to relational domains, enabling large-scale pretraining and zero-shot generalization in enterprise settings. 🧵1/n
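To illustrate what "bringing Transformers to relational domains" can mean mechanically, here is a hypothetical sketch that turns database cells into tokens a transformer could attend over. The CellToken structure and tokenize_row helper are illustrative assumptions, not the exact scheme from the paper:

```python
# Hypothetical sketch: represent each database cell as one token carrying
# (table, column, value, row) features; foreign keys then define which
# tokens can attend to each other. NOT the paper's exact tokenization.
from dataclasses import dataclass

@dataclass
class CellToken:
    table: str
    column: str
    value: str  # real models would embed values per dtype (number, text, time)
    row_id: int

def tokenize_row(table: str, row_id: int, row: dict) -> list[CellToken]:
    """Turn one database row into a list of cell tokens."""
    return [CellToken(table, col, str(val), row_id) for col, val in row.items()]

tokens = tokenize_row("orders", 7, {"customer_id": 42, "amount": 19.99})
# A transformer would attend over such tokens, with cross-table links
# (e.g. orders.customer_id -> customers) expressed in the attention structure.
```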


🚀 Announcing RelBench V2, a major update to our benchmark for foundation models on relational data! With V2, we are significantly expanding the benchmark’s scope to catalyze further research in Relational Deep Learning (RDL) and Relational Foundation Models (RFMs).
Key features:
🍺 4 new databases, spanning domains from e-commerce and beer reviews to scientific research and clinical healthcare.
🧩 40 new predictive tasks, including 28 autocomplete tasks, across new and existing databases.
🔌 External data integrations: 70+ datasets from CTU, 7 datasets from 4DBInfer, and your own data via a SQL connector, all in RelBench format.
🛠️ Bug fixes and performance improvements.
🔥 Introducing autocomplete tasks: as opposed to forecasting tasks, autocomplete tasks predict existing columns in the database. We found that models need to deeply understand the relational context to autocomplete database fields, a critical capability that expands the scope of real-world RDL applications.
Learn more:
🌐 Website: relbench.stanford.edu
💻 GitHub: github.com/snap-stanford/…
Huge thanks to @justingu32 @_rishabhranjan_ @jakub_peleska @VHudovernik @CKanatsoulis @fengyuli607, Tang Haiming, Alistiq and everyone else who contributed to our GitHub for making this possible!
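For anyone who wants to try the benchmark, here is a minimal usage sketch with the relbench Python package. The dataset and task names ("rel-amazon", "user-churn") and the exact entry points are assumptions based on the public package and may differ across versions:

```python
# Minimal sketch: load a RelBench dataset and one of its predictive tasks.
# Entry points and names are assumptions; check the RelBench docs.
from relbench.datasets import get_dataset
from relbench.tasks import get_task

dataset = get_dataset("rel-amazon", download=True)  # multi-table relational DB
task = get_task("rel-amazon", "user-churn", download=True)

train_table = task.get_table("train")  # entity ids, timestamps, labels
print(train_table.df.head())
```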


