Rishabh Shukla

496 posts


@0_rishabh

Lead AI Engineer at @lifebitAI, previously NLP Research Engineer @factmata, CTO @neuronme | GSoC '14

London, England · Joined May 2012
269 Following · 345 Followers
Rishabh Shukla retweeted
Google AI @GoogleAI
Announcing DataGemma, a set of open models that utilize Data Commons through Retrieval Interleaved Generation (RIG) & Retrieval Augmented Generation (RAG) to ground LLMs in real-world data for fact-checking, responsible AI development & more. Read more at goo.gle/datagemma
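The RAG half of the announcement can be illustrated with a minimal sketch: retrieve the facts most relevant to a query and weave them into the prompt so the model answers from real data. Everything below (the toy corpus, the word-overlap retriever, the prompt format) is an illustrative assumption, not the DataGemma or Data Commons implementation.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query, corpus):
    """Interleave retrieved facts into the prompt so the model can cite them."""
    facts = retrieve(query, corpus)
    context = "\n".join(f"- {f}" for f in facts)
    return (f"Facts:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the facts above.")

corpus = [
    "The population of India was about 1.4 billion in 2023.",
    "Mount Everest is 8849 metres tall.",
    "The Amazon river is in South America.",
]
print(build_grounded_prompt("What was the population of India?", corpus))
```

In a real system the retriever would query a structured source such as Data Commons and the prompt would be fed to the language model; the grounding pattern is the same.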
Rishabh Shukla retweeted
elvis @omarsar0
Mitigating Hallucination in LLMs: this paper summarizes 32 techniques to mitigate hallucination in LLMs. It introduces a taxonomy categorizing methods like RAG, Knowledge Retrieval, CoVe, and more, provides tips on how to apply these methods, and highlights the challenges and limitations inherent in them. arxiv.org/abs/2401.01313
Rishabh Shukla @0_rishabh
Excited to participate in the @PistoiaAlliance Digital Transformation in R&D conference today! I’ll be joining a panel to discuss federated technologies and what @lifebitAI is doing to enable federated data access, analysis and ML across multi-omics, EHR and imaging data.
Rishabh Shukla retweeted
Google DeepMind @GoogleDeepMind
In a major scientific breakthrough, the latest version of #AlphaFold has been recognised as a solution to one of biology's grand challenges - the “protein folding problem”. It was validated today at #CASP14, the biennial Critical Assessment of protein Structure Prediction (1/3)
Rishabh Shukla retweeted
Papers with Code @paperswithcode
🎉 Papers with Code partners with arXiv! Code links are now shown on arXiv articles, and authors can submit code through arXiv. Read more: medium.com/paperswithcode…
Rishabh Shukla retweeted
Marc von Wyl @marc_wyl
@markus_zechner @omarsar0 I recommend web.stanford.edu/~jurafsky/slp3/ on NLP in general. Linguistics-wise, I am reading Emily M. Bender's two books, on morphology and syntax and on semantics and pragmatics. They are not easy to absorb but are very interesting. I'd love to see more linguistics recommendations as well.
Rishabh Shukla retweeted
Douwe Kiela @douwekiela
Excited to announce the Hateful Memes Challenge, a new dataset and competition for vision and language, focusing on multimodal understanding and reasoning: arxiv.org/abs/2005.04790 "Mean" memes shown here as an illustration of the problem. Some highlights: (1/7)
Rishabh Shukla retweeted
David Duvenaud @DavidDuvenaud
Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture. arxiv.org/abs/1912.03263
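The observation in the tweet can be made concrete in a few lines of NumPy: treating -logsumexp(f(x)) as an energy E(x) defines an unnormalised input density p(x) ∝ exp(-E(x)), while the usual softmax p(c|x) is left untouched. The tiny random linear "classifier" below is a stand-in; only the energy definition follows the paper (arxiv.org/abs/1912.03263).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy linear classifier: 3 classes, 4 features

def logits(x):
    return W @ x

def p_y_given_x(x):
    """Ordinary softmax; unchanged by the energy-based reading."""
    z = logits(x)
    e = np.exp(z - z.max())   # max-shift for numerical stability
    return e / e.sum()

def energy(x):
    """E(x) = -logsumexp(f(x)); then p(x) is proportional to exp(-E(x))."""
    z = logits(x)
    return -(z.max() + np.log(np.exp(z - z.max()).sum()))

x = rng.normal(size=4)
print(p_y_given_x(x), energy(x))
```

Note the identity this exposes: p(y|x) = exp(f(x)[y] + E(x)), so the same logits jointly parameterize the classifier and an unnormalised generative model over inputs.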
Rishabh Shukla retweeted
hardmaru @hardmaru
“Now What?” by John J. Hopfield. Hopfield is a theoretical physicist, molecular biologist, neuroscientist, and AI researcher. His broad background led him to publish a 5-page (unrefereed) paper on what is now known as Hopfield Networks, his most cited work. pni.princeton.edu/john-hopfield/…
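For readers unfamiliar with the paper being referenced, here is a minimal sketch of the 1982 Hopfield network idea: binary patterns are stored in a symmetric weight matrix via a Hebbian rule, and a corrupted cue is recovered by repeated sign updates. The patterns and sizes below are arbitrary illustrations.

```python
import numpy as np

patterns = np.array([
    [1, 1, 1, -1, -1, -1],
    [-1, -1, 1, 1, 1, -1],
])

# Hebbian storage: W = sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    """Iterate the sign update until the state settles into a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # synchronous update
    return s

cue = patterns[0].copy()
cue[0] = -1                 # corrupt one bit of the first pattern
print(recall(cue))          # recovers the stored pattern [1 1 1 -1 -1 -1]
```

The stored patterns are fixed points of the update, which is what makes the network usable as an associative (content-addressable) memory.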
Rishabh Shukla retweeted
Papers with Code @paperswithcode
Trends 📈 - track the popularity of deep learning frameworks for paper implementations. Current share in Q3 2019: PyTorch 38% (up 6%), TensorFlow 22% (down 2%), other 39% (down 3%) paperswithcode.com/trends
Rishabh Shukla retweeted
Google DeepMind @GoogleDeepMind
The Hamiltonian formalism is fundamental to physics and possesses many properties useful for machine learning. Our new papers show how to bring these properties to neural networks:
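As a reminder of the formalism the thread builds on, Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q can be integrated with a symplectic scheme that approximately conserves H, one of the properties useful for machine learning. The hand-written harmonic-oscillator Hamiltonian below is an illustrative stand-in for the learned Hamiltonians in the papers.

```python
# H(q, p) = p**2/2 + q**2/2, a unit-mass, unit-frequency harmonic oscillator.
def dH_dq(q):
    return q

def dH_dp(p):
    return p

def H(q, p):
    return 0.5 * (p**2 + q**2)

def leapfrog(q, p, dt=0.01, steps=1000):
    """Symplectic leapfrog integration of Hamilton's equations."""
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q)  # half-step in momentum
        q += dt * dH_dp(p)        # full step in position
        p -= 0.5 * dt * dH_dq(q)  # half-step in momentum
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0)
print(H(q0, p0), H(q1, p1))  # energy is (nearly) conserved
```

A plain Euler integrator would let H drift steadily; the leapfrog scheme keeps the energy error bounded, which is why symplectic structure is attractive as an inductive bias for neural network dynamics models.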