AMAR P. @MeAmarPotdar
140 posts
To make this world a better place, one code at a time. Generative AI, Deep Learning, Computer Vision.
Pune, India · Joined January 2010
828 Following · 20 Followers
AMAR P. retweeted
Sebastian Raschka @rasbt
Just putting the two Gemma 4 variants side by side here for easy reference. sebastianraschka.com/llm-architectu…
[image]
1 reply · 10 reposts · 63 likes · 25K views
Aadhaar @UIDAI
@MeAmarPotdar @ceo_uidai @UIDAIMumbai Dear Individual, regarding your query, we would like to inform you that it may be a temporary issue. As checked, the website is working fine. You are requested to try again by visiting the UIDAI website.
1 reply · 0 reposts · 0 likes · 186 views
AMAR P. @MeAmarPotdar
I am trying to find the nearest Aadhaar enrolment center, but the site to book an appointment is simply not showing any nearby centers. I tried different pincodes and it still won't work. I called 1947 and they shared the same link over SMS. Help resolve this @UIDAI @ceo_uidai @UIDAIMumbai
[image]
1 reply · 0 reposts · 0 likes · 232 views
AMAR P. retweeted
Swapna Kumar Panda @swapnakpanda
When it comes to System Design, these are the top 10 YouTube channels:
[image]
20 replies · 98 reposts · 600 likes · 42.1K views
AMAR P. @MeAmarPotdar
I didn't know about GST either. What is the GST percentage?
0 replies · 0 reposts · 0 likes · 18 views
AMAR P. retweeted
sushma date @sushmadate
A thread on the land grab by ARAI on Reserved Forest land on @VetalTekdi, and why ARAI needs to be shifted to an alternate site that is not in a forest or on a hill. 1/n
[image]
16 replies · 83 reposts · 244 likes · 40.2K views
camenduru @camenduru
😲 LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control 🤯 Jupyter Notebook 🥳 Thanks to Jianzhu Guo ❤ Dingyun Zhang ❤ Xiaoqiang Liu ❤ Zhizhou Zhong ❤ Yuan Zhang ❤ Pengfei Wan ❤ Di Zhang ❤ 🍊 jupyter: please try it 🐣 github.com/camenduru/Live…
15 replies · 98 reposts · 483 likes · 38.5K views
AMAR P. @MeAmarPotdar
I really need to improve my handwriting 😭
0 replies · 0 reposts · 0 likes · 12 views
Alex Reibman 🖇️ @AlexReibman
Mistral is France's answer to OpenAI. And it's all open source. They just threw Paris's largest-ever AI hackathon: 1,000+ hackers applied to build what's possible with open-source LLMs. Here are the finalists from the @MistralAI x @cerebral_valley hackathon in Paris (🧵):
[3 images]
93 replies · 896 reposts · 6.7K likes · 1.7M views
Varun Mayya @waitin4agi_
feel the agi
[image]
630 replies · 25 reposts · 1K likes · 294.1K views
AMAR P. retweeted
elvis @omarsar0
Stanford CS25 - Transformers United. So much fun catching up with these Transformer lectures. There is a lot of content I'm already familiar with, but I always love reviewing material to build on my understanding of complex concepts and learn new ones along the way.

In the field of LLMs there are many different perspectives and interpretations, so it's good to keep an open mind to different takes and explanations. This approach helps strengthen my intuition about LLMs. Pair it with a few coding sessions along the way and it's well worth every minute. At least that is how I've always made good use of these lectures. All the latest lectures are highly recommended.
[image]
10 replies · 315 reposts · 1.6K likes · 139.1K views
AMAR P. retweeted
Dr. Simon @goddek
Until the 1970s, the majority of people were fit as a fiddle! No keto, vegan, or paleo diets. No home aerobics or gym memberships. No fancy fitness tech or wellness influencers. They also weren't drinking protein shakes or counting calories. So, what went wrong? A THREAD 🧵⬇️
[image]
5K replies · 18.3K reposts · 66.7K likes · 11.6M views
AMAR P. retweeted
Andrej Karpathy @karpathy
New YouTube video: 1hr general-audience introduction to Large Language Models youtube.com/watch?v=zjkBMF… Based on a 30min talk I gave recently; it tries to be a non-technical intro and covers mental models for LLM inference, training, finetuning, the emerging LLM OS, and LLM security.
[YouTube video]
530 replies · 2.8K reposts · 16.6K likes · 5.1M views
Westside @WestsideStores
@MeAmarPotdar Hey, we apologize for the trouble. Kindly DM us with the details of the issue for better assistance. ❤️ Westside
1 reply · 0 reposts · 0 likes · 33 views
AMAR P. @MeAmarPotdar
@WestsideStores Worst online/store experience. Received used or tried-on shoes. I will refrain from "wasteside" from here onwards altogether. Since the store didn't have a packed 😒 or new pair, I ordered online, but got the same issue 🙁
[3 images]
2 replies · 0 reposts · 0 likes · 15 views
AMAR P. retweeted
Sergios Karagiannakos @KarSergios
Direct Preference Optimization (DPO): one of the most exciting recent advancements in LLMs.

DPO was introduced as an alternative to Reinforcement Learning from Human Feedback (RLHF). RLHF trains a reward model to align the model's output with human preference, then fine-tunes the LLM with reinforcement learning to maximize this estimated reward without drifting too far from the original model.

DPO, on the other hand, treats the problem as a classification problem, with no need for reinforcement learning. The authors show that we can directly optimize an LLM on preference data. They do so by mapping reward functions to optimal policies, thus transforming the loss function over rewards into a loss function over policies. As a result, we can train a policy network (instead of a reward network) that captures both the LLM and the rewards.

Mathematically, the DPO loss is a binary cross-entropy loss (maximum likelihood objective) between the desired policies (those that adhere to human preference) and the actual policies. During training, the gradient of the loss increases the likelihood of the preferred outputs and decreases the likelihood of the dispreferred outputs. This results in a training algorithm that is more stable and less computationally demanding than RLHF. And most importantly, it seems to be on par with the performance of RLHF.

Original paper: arxiv.org/pdf/2305.18290…
[image]
0 replies · 4 reposts · 13 likes · 1.7K views
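The per-pair loss described in that thread can be sketched in plain Python. This is a minimal illustration of the binary cross-entropy over implicit reward margins, not the paper's full training loop; the function name `dpo_loss` and the toy log-probabilities are invented for the example.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: binary cross-entropy on the implicit reward margin.

    The implicit reward of a response is beta times (policy log-prob minus
    reference log-prob); the loss pushes the chosen response's reward above
    the rejected one's, which raises the likelihood of preferred outputs
    and lowers the likelihood of dispreferred ones.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the chosen response wins by a wide margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: relative to the reference model, the policy already
# leans toward the chosen response, so the loss is below log(2).
loss = dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
                ref_logp_chosen=-13.0, ref_logp_rejected=-14.0)
```

Note how `beta` scales the margin: a larger `beta` makes the same preference gap count for more, shrinking the loss faster.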
AMAR P. @MeAmarPotdar
@Aadhaar_Care Please check DM and respond early. Unable to log in to the myAadhaar portal. Kindly help me resolve the issue.
0 replies · 0 reposts · 0 likes · 7 views
AMAR P. retweeted
Carlos E. Perez @IntuitMachine
1/n Retrieval-augmented generation (RAG) architectures have become popular for knowledge-intensive NLP tasks. However, they face some primary problems:
1. Imperfect retrieval providing irrelevant or distracting passages along with useful context.
2. Full context augmentation leading to in-passage distraction, even from positive passages.
3. Models learning to incorrectly rely on negative passages.
4. Distracting content causing models to hallucinate or utilize spurious correlations.

FILCO addresses these key problems in RAG architectures:
- It filters context within passages to provide only the precise spans needed for generation. This removes distracting content from both positive and negative passages.
- Operating at flexible granularity allows tightly focused context selection from retrieved passages.
- The filtering happens automatically, without needing annotation, reranking, or changing retrieval.
- By focusing augmentation on truly relevant knowledge, FILCO reduces hallucination and reliance on spurious correlations.
- It achieves substantial gains of 1-9 accuracy points across diverse QA, dialog, and fact-checking datasets.
[image]
4 replies · 63 reposts · 375 likes · 116.9K views
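The filtering idea in that thread can be sketched with a toy stand-in. FILCO itself learns which spans to keep; the snippet below substitutes a simple unigram-overlap score just to illustrate the shape of "filter the passage to relevant spans before augmenting the prompt". The function name `filter_context` and the example query/passage are invented for the illustration.

```python
import re

def filter_context(query, passage, max_sentences=2):
    """Keep only the passage sentences most lexically similar to the query.

    FILCO trains a model for this span selection; unigram overlap here is
    just a cheap proxy to show distracting content being dropped before
    the context is handed to the generator.
    """
    q_terms = set(re.findall(r"\w+", query.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    scored = []
    for s in sentences:
        terms = re.findall(r"\w+", s.lower())
        overlap = sum(t in q_terms for t in terms) / max(len(terms), 1)
        scored.append((overlap, s))
    top = sorted(scored, key=lambda x: -x[0])[:max_sentences]
    keep = {s for _, s in top}
    # Preserve the original sentence order in the filtered context.
    return " ".join(s for s in sentences if s in keep)

query = "When was the Eiffel Tower built?"
passage = ("The Eiffel Tower was built between 1887 and 1889. "
           "It is repainted every seven years. "
           "Paris hosts millions of tourists annually.")
context = filter_context(query, passage, max_sentences=1)
# context → "The Eiffel Tower was built between 1887 and 1889."
```

The two off-topic sentences are the "in-passage distraction" from the thread: they arrived inside a positive passage, and filtering removes them before generation.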