Manaal Faruqui

3.7K posts

@manaalfar

Senior Staff Research Scientist @Google Bard. Love eating, movies, travel and politics. Spread love, not war.

New York, NY · Joined March 2010
626 Following · 3.2K Followers
Manaal Faruqui@manaalfar·
@dhillon_p Actually I tried using OCI and the security guard in Bangalore let me through, but in Bhopal he said he needed the passport - so I have been using the passport since :) I think OCI is not a general-purpose ID, also see here: reddit.com/r/nri/comments…
1 reply · 0 reposts · 6 likes · 153 views
Paramveer Dhillon@dhillon_p·
@manaalfar Was curious… what's the correct answer? :) Did the domestic flight let you use OCI?
1 reply · 0 reposts · 0 likes · 102 views
Manaal Faruqui@manaalfar·
While taking a domestic flight in India, I had a genuine question of whether I can use my OCI (overseas citizen of India) card as a valid proof of ID, and so I googled it and got the following conflicting results. Factuality remains a core problem to fix.
[image attached]
2 replies · 0 reposts · 7 likes · 1.6K views
Jiao Sun@sunjiao123sun_·
Today marks my first year at Google Gemini! "The best autorater will give you the best LLM" - this has been my mission at work, and I've been working hard towards this goal! Hopefully I'll get to share with you some research I've been cooking up in Jan! (0/n)
[images attached]
18 replies · 6 reposts · 631 likes · 65.4K views
Manaal Faruqui retweeted
Brian McBride@BrianDMcBride·
This Kamala Harris ad narrated by Viola Davis is a cinematic masterpiece.
443 replies · 10.4K reposts · 39.4K likes · 1.6M views
Manaal Faruqui retweeted
Satyapriya Krishna@SatyaScribbles·
🚀 Excited to share the research I worked on during my summer internship at @GoogleAI! We developed FRAMES (Factuality, Retrieval, And reasoning MEasurement Set), a challenging high-quality benchmark for evaluating retrieval-augmented large language models. FRAMES tests LLMs on retrieving relevant info, reasoning across documents, and providing factual responses to complex questions. 🧵👇 #AI #RAG #Eval
Dataset link: huggingface.co/datasets/googl…
Paper link: huggingface.co/papers/2409.12…
[image attached]
3 replies · 10 reposts · 37 likes · 9.6K views
Manaal Faruqui retweeted
Prateek Yadav@prateeky2806·
Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but aren't sure if it generalizes well? cc: @GoogleAI @GoogleDeepMind @uncnlp 🧵👇
Excited to announce my internship work on large-scale model merging! We explore what happens when you combine larger and larger language models (up to 64B parameters!) and how different factors - model size, base model quality, merging methods, and # of experts - impact held-in performance and generalization.
📰: arxiv.org/abs/2410.03617
[image attached]
6 replies · 86 reposts · 391 likes · 85.7K views
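For readers unfamiliar with the technique in the thread above: the simplest baseline among the merging methods it studies is weighted parameter averaging of checkpoints. A minimal illustrative sketch - `merge_models`, the toy state dicts, and the uniform weights are my own assumptions, not the paper's actual setup:

```python
import numpy as np

def merge_models(state_dicts, weights=None):
    """Merge several checkpoints by weighted parameter averaging,
    the simplest form of model merging (uniform weights by default)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Toy example: two "expert" models with a single weight matrix each.
a = {"layer.w": np.array([[1.0, 2.0], [3.0, 4.0]])}
b = {"layer.w": np.array([[3.0, 4.0], [5.0, 6.0]])}
m = merge_models([a, b])
# m["layer.w"] is the element-wise mean of the two matrices.
```

Real merging methods studied at this scale (task arithmetic, TIES, etc.) add sign resolution and sparsification on top of this averaging core.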
Manaal Faruqui@manaalfar·
What she meant was: This M***f**** 😂🤣
0 replies · 0 reposts · 0 likes · 553 views
Manaal Faruqui retweeted
Acyn@Acyn·
Kaling: The real reason I'm here is that deep down, I truly believe that as a woman of color and a single mother of three, it is incredibly important that I be appointed ambassador to Italy.
67 replies · 475 reposts · 16.1K likes · 1.7M views
Manaal Faruqui retweeted
Seth Abramson@SethAbramson·
I agree with those who say this video should go viral *every single day* until Americans go to the polls in November
1.1K replies · 18.1K reposts · 37.1K likes · 5.3M views
Manaal Faruqui@manaalfar·
It's delightful that we are #1 here, but... we could be #3 in the next few weeks, and then become #1 again sometime. What's really exciting about working on Gemini is that we will bring the best user experience to our products through these models; these metrics are by-products. 😉
Arena.ai@arena

Exciting News from Chatbot Arena! @GoogleDeepMind's new Gemini 1.5 Pro (Experimental 0801) has been tested in Arena for the past week, gathering over 12K community votes. For the first time, Google Gemini has claimed the #1 spot, surpassing GPT-4o/Claude-3.5 with an impressive score of 1300 (!), and also achieving #1 on our Vision Leaderboard. Gemini 1.5 Pro (0801) excels in multi-lingual tasks and delivers robust performance in technical areas like Math, Hard Prompts, and Coding. Huge congrats to @GoogleDeepMind on this remarkable milestone!
Gemini (0801) Category Rankings:
- Overall: #1
- Math: #1-3
- Instruction-Following: #1-2
- Coding: #3-5
- Hard Prompts (English): #2-5
Come try the model and let us know your feedback! More analysis below👇

1 reply · 1 repost · 24 likes · 2.9K views
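Context on the score of 1300 quoted above: Arena ratings come from aggregating pairwise head-to-head votes into a single number. Chatbot Arena now fits a Bradley-Terry-style model over all battles; the classic online Elo update below is only a simplified illustration of the idea, and the K-factor of 32 is an arbitrary conventional choice:

```python
def elo_update(r_a, r_b, outcome, k=32.0):
    """One Elo update after a head-to-head battle between models A and B.
    outcome: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    # Expected score of A given the current rating gap (logistic curve, base 10).
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (outcome - expected_a)
    # Elo is zero-sum: whatever A gains, B loses.
    return r_a + delta, r_b - delta

# A model rated 1300 beating one rated 1286 gains only a few points,
# since it was already slightly favored to win.
ra, rb = elo_update(1300.0, 1286.0, 1.0)
```

With thousands of votes per model pair, these per-battle updates (or the batch Bradley-Terry fit) converge to a stable leaderboard ordering.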
Manaal Faruqui retweeted
Aes🇺🇸@AesPolitics1·
Houston Pastor is all in for Kamala Harris. This speech is fucking legendary.
1.4K replies · 11.5K reposts · 43.9K likes · 2.9M views
Manaal Faruqui retweeted
𝗱𝗮𝗻𝗻𝘆🫧💚@beyoncegarden·
"mr. vice president i'm speaking, I'M speaking. k🙂‍↕️." SHE'S SO MOTHER😭😭😭
1.2K replies · 11.9K reposts · 200.1K likes · 17.9M views
Manaal Faruqui retweeted
Kamala Harris@KamalaHarris·
On behalf of the American people, I thank Joe Biden for his extraordinary leadership as President of the United States and for his decades of service to our country. I am honored to have the President’s endorsement and my intention is to earn and win this nomination.
28K replies · 65K reposts · 564.7K likes · 27.7M views
Manaal Faruqui retweeted
Vinod Khosla@vkhosla·
Hard for me to support someone with no values, who lies, cheats, rapes, demeans women, and hates immigrants like me. He may cut my taxes or reduce some regulation, but that is no reason to accept depravity in his personal values. Do you want a President who will set back climate by a decade in his first year? Do you want his example setting values for your kids?
Elon Musk@elonmusk

@vkhosla @realDonaldTrump @GovWhitmer @GovernorShapiro Come on, Vinod. Trump/Vance LFG!!

3.5K replies · 8K reposts · 80.8K likes · 14.7M views
Manaal Faruqui retweeted
AK@_akhaliq·
Google announces Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key…
[image attached]
6 replies · 171 reposts · 787 likes · 94.5K views
Manaal Faruqui retweeted
Aran Komatsuzaki@arankomatsuzaki·
Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
1B model that was fine-tuned on up to 5K sequence length passkey instances solves the 1M length problem
arxiv.org/abs/2404.07143
[image attached]
26 replies · 242 reposts · 1.1K likes · 205.6K views
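The "bounded memory" claim in the two tweets above rests on a compressive memory: past segments' keys and values are folded into a fixed-size matrix instead of a growing KV cache. Below is a toy sketch of that linear-attention memory update only; the actual Infini-attention layer also runs local softmax attention within each segment and learns a gate to blend the two streams, and all names and shapes here are my own illustration:

```python
import numpy as np

def sigma(x):
    # ELU + 1 nonlinearity commonly used in linear attention (keeps features positive).
    return np.where(x > 0, x + 1.0, np.exp(x))

def stream_with_memory(segments, d):
    """Process segments left to right with a fixed-size associative memory.
    Memory cost is O(d^2) regardless of how many segments stream past."""
    M = np.zeros((d, d))   # associative memory: accumulated sigma(k)^T v outer products
    z = np.zeros(d)        # normalizer: accumulated sum of sigma(k)
    retrieved = []
    for k, v, q in segments:                 # each array has shape (seg_len, d)
        sq = sigma(q)
        # Retrieve from memory built from *previous* segments.
        mem_out = (sq @ M) / (sq @ z + 1e-6)[:, None]
        retrieved.append(mem_out)
        # Fold the current segment's keys/values into the memory.
        sk = sigma(k)
        M += sk.T @ v
        z += sk.sum(axis=0)
    return retrieved

# Stream three random segments of length 4 with model dim 8.
rng = np.random.default_rng(0)
segs = [tuple(rng.normal(size=(4, 8)) for _ in range(3)) for _ in range(3)]
outs = stream_with_memory(segs, 8)
# The first segment sees an empty memory, so its retrieval is all zeros.
```

This is why a model trained on 5K-length passkey instances can, in principle, handle 1M-length inputs: the per-segment compute and the memory footprint never grow with total context length.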
Manaal Faruqui retweeted
Jack Krawczyk@JackK·
Bard with Gemini Pro is now available in over 40 languages and 230+ countries and territories, bringing its top-2 most-preferred status across the world (👀 @lmsysorg).
AND you can now bring your imagination to life with image generation. It's optimized for speed and is available in English across most countries... let your imagination run wild, at no cost.
AND we're also making it easier to corroborate Bard's responses. We're expanding our Double Check feature, which is already used by millions of people in English, to more than 40 languages across the globe.
Thank you, PaLM 2, for all you did to get us here 🫡. Rest easy. Ok back to work... read more here: blog.google/products/bard/…
71 replies · 123 reposts · 538 likes · 141.4K views
Manaal Faruqui retweeted
Yu Su@ysu_nlp·
(This will be the last response, just for the record; this type of engagement is not why I use this app.) 1. The dataset was released along with the paper. Again, eval on a dataset of this scale really doesn't take long, especially for Google. 2. This was a one-off project crowdsourced by a group of researchers with similar interests, not a well-planned-ahead-of-time project funded by any agency or big tech.
8 replies · 17 reposts · 605 likes · 105.7K views
Manaal Faruqui retweeted
Yu Su@ysu_nlp·
Hi @emilymbender, I'm one of the lead authors of MMMU. I can certify that 1) Google didn't fund this work, and 2) Google didn't have early access. They really like the benchmark after our release and worked very hard to get the results. It doesn't take that long to eval on a dataset
@emilymbender.bsky.social@emilymbender

Returning to transparency, I see that they point to MMMU, which was published on arXiv (not peer reviewed) on November 27, 2023. Google must have had early access to this work, which I suspect means that Google funded it, but the paper doesn't acknowledge any funding source. /12

9 replies · 51 reposts · 923 likes · 648.3K views