
q.e.d Science
196 posts

@qedScience
qed is transforming scientific research with AI
Joined April 2025
10 Following · 1.9K Followers

@nivmast @OdedRechavi So many misses. You can't afford another one. qed for grants is live. 🏀

Important announcement!!!🫵💥💫
Would you have a tooth pulled if it helped your chances to get an important grant funded?
Absurd question (obviously), but the situation right now is so bad funding-wise, that I bet some of you actually considered it for a second…
Well, don’t get desperate - we created a new tool that might help! (keep your teeth!)
I’m excited to announce that as of today we are officially releasing “QED for Grants” for everyone. What started off as an extension of our existing paper review platform grew over the last few months into an entirely new design. We’ve been working like crazy on this, and although there are more things we want to add in the (very near) future, we decided to release our AI for grants NOW, earlier than planned. It’s not perfect, no AI is, but for the first time, when I run my own grants through @qedScience, I feel it gets the research, finds real problems, and gives me very useful feedback that I can implement before submission. It’s like sending it to 20 scientists from my domain, knowing they’ll agree to dedicate their entire week to carefully reading and commenting on every line.
It’s very important to write your own grants yourself: it makes you think hard, and you learn a lot from doing it. q.e.d’s system is designed to preserve these positive aspects and augment them. You get feedback on your own writing, we don’t write for you!!
But at the same time, a typical PI spends many months every year writing proposals, and sadly only a tiny fraction gets funded, even if the ideas are good. When you are forced to submit an unreasonable number of grants, the quality of the writing drops and rejection rates increase. Not because the essence is bad. It’s simply too competitive right now (the cuts made it so much worse), and if your proposal is not super clear and tight, and not a perfect fit for the grant you’re submitting, you’re doomed.
Our grant solution is not an authoring, text-generating tool. It gives you constructive feedback on your writing (it comments on the deep things, not grammar and typos). It’s meant to help with the questions that torment you late at night (“Is this a good fit?”, “Is this novel enough?”, “Did I miss something?”). Tens of thousands of you already use q.e.d to improve your manuscripts and critically read papers; we built the grant tool on the same principles (you’ll recognize many of the features you told us you like).
We’ve processed thousands of proposals and learned where things fail, where reviewers get stuck, and why good ideas come out weak. We interviewed hundreds of scientists, as well as experts who work in funding agencies and university research authorities, and implemented their feedback (we’re constantly looking for more). Our AI is always happy to give you constructive (and polite!) critique: it will go through your grant line by line, pushing you to improve clarity, flagging weak points, and raising the whole thing to a higher standard. We study, at scale, what gets funded and what doesn’t, and what the perfect fit is for each type of grant.
So please, use it, pressure-test it, and tell us where it fails, and together we’ll improve it every day to put you in the best position for actually testing your ideas in the real world. As always with q.e.d, the system is completely secure and private, and we are NOT training on your data (see the FAQ on our website).
Please like, retweet, and share with your favorite colleagues! (link to the platform below in the thread👇)

@OdedRechavi So many misses. You can't afford another one. qed for grants is live. 🏀

@OdedRechavi The amount of energy that goes into missing… we had to do something about it.😊 Check qedscience.com

Scientists spend 100+ days a year writing grants. Almost none get funded. Not because the ideas are bad, but because the system is broken and you fall through the cracks.
Today we launch qed for grants, an AI reviewer that makes YOUR proposal stronger before you submit (it doesn’t replace you, you do the writing). Novelty. Logic. Methodology. Fit for the call. Making sure nothing is missed.
Early access open. 🔒🩶 @nivmast @OdedRechavi

"We're partnering with qed science to explore how AI tools might help readers engage with reviews, while keeping human expert judgement central."
@behrenstimb , eLife Editor-in-Chief, in an update to their entire author and user base.
£2.4m from the Wellcome Trust. 3 years of rethinking peer review. Happy to be on this journey.
#OpenScience #PeerReview #AIinScience


New @FrontiersIn report: 53% of reviewers use AI. But 59% use it to draft reports. Only 19% for methodology or statistical analysis. That's not deep use. That's autocomplete with extra steps.
q.e.d Science reposted

If only there were a way to spot gaps in papers immediately instead of waiting 49 years… @qedScience
The Lancet@TheLancet
Retraction—Today, we retract an unsigned 1977 commentary suggesting talc powder containing asbestos was not harmful. The Lancet was informed that the author had undisclosed competing interests and breached publication ethics. /4
q.e.d Science reposted

Big thanks for the fresh goodies @qedScience @OdedRechavi 🙏
The T-shirt is amazing… now I just need the right occasion to pull it off 😄


@NIH award rates just hit historic lows. Every submission has to be sharper. But 70% of researchers still use AI for writing polish. Only 19% for methodology or statistics. The pressure is rising. The tools are stuck at the surface❗😓📉


@DrEmmaZang Hi Emma, did you try our platform? @qedScience does that!

Hot take for the future of peer review: Journals should start asking for a lightweight AI-based replication check (e.g., via Claude) at submission.
Not to replace reviewers, but to catch coding errors, logic inconsistencies, and reproducibility issues before a paper reaches them. At this point, many of these checks are fast, cheap, and automatable. There’s little reason to rely solely on human detection.
Even with restricted data, this is feasible. Authors can generate simulated datasets that preserve structure and run identical pipelines. The goal is just basic verification.
More broadly, we need to rethink how we use reviewer time. Not every submission needs 3 full human reviews. A more efficient pipeline might look like: editorial triage, AI-assisted checks/review, targeted human evaluation where it matters most. If done well, this could raise standards while reducing burden on the system.
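The simulated-data idea above can be made concrete. A minimal Python sketch, assuming a pandas-based analysis: `simulate_like` and `analysis_pipeline` are hypothetical names, and in practice the pipeline would be whatever code the authors actually submitted. The simulator preserves only column names, dtypes, and rough per-column marginals, so restricted data never leaves the authors' hands while the pipeline can still be verified end to end:

```python
import numpy as np
import pandas as pd

def simulate_like(df: pd.DataFrame, n: int = 200, seed: int = 0) -> pd.DataFrame:
    """Synthesize a dataset with the same columns and dtypes as df.
    Numeric columns get Gaussian noise matching the mean/std; other
    columns get draws from the observed categories. No real rows leak."""
    rng = np.random.default_rng(seed)
    sim = {}
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_numeric_dtype(s):
            sim[col] = rng.normal(s.mean(), s.std(ddof=0) or 1.0, n)
        else:
            sim[col] = rng.choice(s.unique(), n)
    return pd.DataFrame(sim)

def analysis_pipeline(df: pd.DataFrame) -> float:
    # Stand-in for the authors' real analysis: any function of the data.
    return df.groupby("group")["outcome"].mean().max()

# The restricted "real" data stays private; only the simulator is shared.
real = pd.DataFrame({"group": ["a", "b"] * 50,
                     "outcome": np.random.default_rng(1).normal(size=100)})
fake = simulate_like(real)
result = analysis_pipeline(fake)  # confirms the pipeline runs end to end
```

The check is deliberately weak: it cannot confirm the scientific result, only that the code executes on data shaped like the original, which is exactly the "basic verification" the tweet describes.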

New preprint! We show that conjugation accelerates the segregation of #plasmid alleles -> horizontal transfer is a route for allele segregation in MGE evolution. Led by Lisa Hartmann, with @MarioSanter @NilsHuelter. Naturally, reviewed by @qedScience!
doi.org/10.64898/2026.…


Amazing! Congratulations @DaganLab
Tal Dagan@DaganLab
New preprint! We show that conjugation accelerates the segregation of #plasmid alleles -> horizontal transfer is a route for allele segregation in MGE evolution. Led by Lisa Hartmann, with @MarioSanter @NilsHuelter. Naturally, reviewed by @qedScience! doi.org/10.64898/2026.…

Not every day an AI gets called polite… especially when our friends at Claude have started signing off with heart emojis. 🤖❤️
A peer-reviewed study in @EMBO just named qed as a platform built for exactly what authors want most: structured criticism that's hard to hear, but impossible to ignore.
One AI vs three human reviewers. The lines ran close.
And that was v1.0.
We're just getting started.
q.e.d Science reposted

After testing @qedScience review, I suspect we may yet remember Reviewer 2 with fondness.
Journal club: I asked students to review a paper of their choice & compare their reports with qed. Early results: qed is stricter and spots more gaps. We’ll summarize it at the end of the semester.

