Reviewer3

299 posts


@reviewer3com

Multi-agent peer review trusted by thousands of researchers.

San Francisco, California · Joined August 2025
1.7K Following · 1.2K Followers
Pinned Tweet
Reviewer3
Reviewer3@reviewer3com·
The number of research papers is growing exponentially. The number of peer reviewers isn’t. AI caused this problem, and only AI can scale to fix it. Reviewer3 provides peer review-style feedback, surfaces fatal flaws, and filters slop (powered by @pangramlabs and our in-house reference checker). Try it below to catch critical issues before publication.
1 reply · 1 repost · 10 likes · 979 views
Reviewer3
Reviewer3@reviewer3com·
Submitted a manuscript back in 1984. Just got the reviews back, they're mostly positive!
0 replies · 0 reposts · 6 likes · 166 views
Reviewer3
Reviewer3@reviewer3com·
The scientific literature is following an exponential trajectory. Using @OpenAlex_org, we find:
- Cumulative growth of +26.9% from 2022 to 2025
- It follows an exponential trajectory of 8.3% YoY
- 2025 had the largest spike in two decades at +11.9%
- We have crossed 7 million papers per year
Reviewer3 tweet media
2 replies · 1 repost · 7 likes · 250 views
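As a quick sanity check on the growth figures above (computed from the quoted rates only, not from the underlying OpenAlex counts), compounding 8.3% year-over-year across the three years from 2022 to 2025 does reproduce the cumulative figure:

```python
# Compound the quoted 8.3% YoY rate over the three years 2022 -> 2025.
yoy = 0.083
cumulative = (1 + yoy) ** 3 - 1
print(f"+{cumulative:.1%} cumulative")  # prints "+27.0% cumulative", matching the quoted +26.9%
```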
Joaquin Barroso
Joaquin Barroso@joaquinbarroso·
I'm so mad. Once again, got a paper accepted, but Ref2 wants me to add 4 references all having a single author in common, whereas Ref1 suggests 6 with another common author! This unethical behavior should be stopped by the editors. Should I say who those authors are? Thoughts?
155 replies · 71 reposts · 1.8K likes · 185.7K views
Henry Shevlin
Henry Shevlin@dioscuri·
just got invited to peer review a paper I'm one of the authors on
Henry Shevlin tweet media
161 replies · 699 reposts · 28K likes · 398.4K views
Reviewer3
Reviewer3@reviewer3com·
@icmlconf Could maybe understand removing the reviews, but why desk reject papers where these reviewers are authors?
3 replies · 0 reposts · 15 likes · 8.4K views
ICML Conference
ICML Conference@icmlconf·
To ensure compliance with peer-review policies, ICML has removed 795 reviews (1% of the total) by reviewers who used LLMs after explicitly agreeing not to. Consequently, 497 papers (2% of all submissions) by these (reciprocal) reviewers have been desk rejected. Details in the blog post 👇
ICML Conference tweet media
21 replies · 78 reposts · 582 likes · 190.4K views
Reviewer3
Reviewer3@reviewer3com·
@akoustov But what could possibly be cheaper than unpaid labor?
2 replies · 0 reposts · 4 likes · 267 views
Ravid Shwartz Ziv
Ravid Shwartz Ziv@ziv_ravid·
I (still) wasn't affected by the ICML review policy, which desk rejected all the papers of reviewers who used LLMs to write their reviews (and didn't explicitly mention it) 😱, but this is a bad decision and not a good way to handle AI reviews.

First, AI detectors are not reliable enough, with many false positives. Second, if it's a good review, why should I care that AI wrote it? We're using AI assistants everywhere in our day-to-day lives. What is the next step? To ban AI coding agents?

I understand the motivation to prevent low-quality reviews, but this is not the way to improve them
29 replies · 4 reposts · 200 likes · 38.7K views
Reviewer3 reposted
Ravid Shwartz Ziv
Ravid Shwartz Ziv@ziv_ravid·
I understand, but still... I didn't see desk rejection (for all the papers!) when reviewers wrote poor reviews, didn't engage with the authors, or didn't declare conflicts of interest. Again, I understand the motivation, but the problem isn't AI-written reviews, just bad reviews and unqualified reviewers. There are many ways to improve it, such as making the number of papers to review proportional to the number of submitted papers, but ICML chose the easy solution (professors are almost not doing any reviews). *This post was written with the help of AI assistance.
4 replies · 1 repost · 24 likes · 6.4K views
Reviewer3
Reviewer3@reviewer3com·
My favorite part of academia is how hard you have to fight to give your work away for free.
1 reply · 5 reposts · 23 likes · 775 views
Reviewer3
Reviewer3@reviewer3com·
Data from our peer review benchmark that quantifies something we've probably all experienced: AI reviewers are more consistent, but the best humans outperform AI! We defined a "consequential" boolean label to differentiate a nitpick from a major critique. R3 has a higher consequential rate on average. But when you narrow to critical comments and rank reviewers per paper, humans outperform AI on a whopping 501 of 1,000 papers!
Reviewer3 tweet media
0 replies · 1 repost · 7 likes · 647 views
Reviewer3
Reviewer3@reviewer3com·
My favorite part of the review process is when a reviewer suggests an experiment I explicitly said I couldn't do in the limitations section. Really makes you feel heard.
0 replies · 0 reposts · 1 like · 182 views
Reviewer3 reposted
Natalie Khalil
Natalie Khalil@natalienkhalil·
A lottery system, except it’s your life’s work and you have to wait a year to hear back
Natalie Khalil tweet media
2 replies · 3 reposts · 47 likes · 3.9K views
Reviewer3
Reviewer3@reviewer3com·
My favorite part of the peer review process is when a reviewer misunderstands a core concept, then insists you're the one who's wrong. It's like amateur hour, but with higher stakes and lower pay.
0 replies · 0 reposts · 3 likes · 226 views
Reviewer3
Reviewer3@reviewer3com·
My favorite part of peer review is when a reviewer suggests an experiment that would take longer than the entire review process took. Just outstanding use of my time.
0 replies · 0 reposts · 3 likes · 310 views
Reviewer3
Reviewer3@reviewer3com·
We defined the structural elements of a peer review:
- the specific issue identified (specificity)
- why it is a problem (rationale)
- how to fix it (actionability)
- where it is in the text (anchoring)
We pulled these out of every human and AI review comment and measured their rate of occurrence per paper. Thoughts? What are we missing? How would you define the structure of a peer review comment?
Natalie Khalil@natalienkhalil

Sneak peek at some data from our benchmark on 80,000+ human and AI review comments! What's really interesting to me is that GPT 5.2 and Gemini 3 Pro, with minimal prompting, produce extremely structured review comments, nearly as structured as R3 despite R3 being multi-agent. We defined the structural elements of a peer review:
- the specific issue identified (specificity)
- why it is a problem (rationale)
- how to fix it (actionability)
- where it is in the text (anchoring)
We pulled these out of every human and AI review comment and measured their rate of occurrence per paper.

0 replies · 0 reposts · 5 likes · 625 views
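The four structural elements named in the thread lend themselves to a simple per-paper occurrence-rate computation. A minimal sketch of what that might look like (the field names, helper, and example data are hypothetical; the thread does not describe the actual extraction pipeline):

```python
from dataclasses import dataclass

# Hypothetical representation of one review comment, flagged for the
# four structural elements named in the thread.
@dataclass
class ReviewComment:
    paper_id: str
    specificity: bool    # names a specific issue
    rationale: bool      # explains why it is a problem
    actionability: bool  # says how to fix it
    anchoring: bool      # points to where it is in the text

ELEMENTS = ("specificity", "rationale", "actionability", "anchoring")

def occurrence_rates(comments):
    """Per-paper rate at which each structural element appears."""
    by_paper = {}
    for c in comments:
        by_paper.setdefault(c.paper_id, []).append(c)
    return {
        pid: {e: sum(getattr(c, e) for c in cs) / len(cs) for e in ELEMENTS}
        for pid, cs in by_paper.items()
    }

# Toy example: two comments on one paper.
comments = [
    ReviewComment("p1", True, True, False, True),
    ReviewComment("p1", True, False, False, False),
]
print(occurrence_rates(comments)["p1"])
# {'specificity': 1.0, 'rationale': 0.5, 'actionability': 0.0, 'anchoring': 0.5}
```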