q.e.d Science
@qedScience

qed is transforming scientific research with AI

196 posts
Joined April 2025
10 Following · 1.9K Followers
Oded Rechavi@OdedRechavi·
Important announcement!!!🫵💥💫

Would you have a tooth pulled if it improved your chances of getting an important grant funded? Absurd question (obviously), but the funding situation right now is so bad that I bet some of you actually considered it for a second… Well, don’t get desperate - we created a new tool that might help! (Keep your teeth!)

I’m excited to announce that as of today we are officially releasing “QED for Grants” for everyone. What started as an extension of our existing paper-review platform grew over the last few months into an entirely new design. We’ve been working like crazy on this, and although we have more things we want to add in the (very near) future, we decided to release our AI for grants NOW, earlier than planned.

It’s not perfect - no AI is - but for the first time, when I run my own grants through @qedscience, I feel it gets the research, finds real problems, and gives me very useful feedback that I can implement before submission. It’s like sending it to 20 scientists from my domain, knowing they’ll agree to dedicate their entire week to carefully reading and commenting on every line.

It’s very important to write your grants yourself - it makes you think hard, and you learn a lot from doing it - and q.e.d’s system is designed to preserve these positive aspects and augment them: you get feedback on your own writing, we don’t write for you!!

But at the same time, a typical PI spends many months every year writing proposals, and sadly only a tiny fraction gets funded, even when the ideas are good. When you are forced to submit an unreasonable number of grants, the quality of the writing drops and rejection rates increase. Not because the essence is bad. It’s simply too competitive right now (the cuts made it so much worse), and if your proposal is not super clear and tight, and not a perfect fit for the grant you’re submitting, you’re doomed.

Our grant solution is not an authoring, text-generating tool. It gives you constructive feedback on your writing (it comments on the deep things, not grammar and typos). It’s meant to help with the questions that torment you late at night (“Is this a good fit?”, “Is this novel enough?”, “Did I miss something?”).

Tens of thousands of you already use q.e.d to improve your manuscripts and critically read papers; we built the grant tool on the same principles (you’ll recognize many of the features you told us you like). We’ve processed thousands of proposals and learned where things fail, where reviewers get stuck, and why good ideas come out weak. We interviewed hundreds of scientists, as well as experts from funding agencies and university research authorities, and implemented their feedback (we’re constantly looking for more).

Our AI is always happy to give you constructive (and polite!) critique: it will go through your grant line by line, flagging weak points, pushing you to improve clarity, and holding the whole thing to a higher standard. We study, at scale, what gets funded and what doesn’t, and what the perfect fit is for each type of grant.

So please, use it, pressure-test it, tell us where it fails, and together we’ll improve it every day to put you in the best position for actually testing your ideas in the real world.

As always with q.e.d, the system is completely secure and private, and we are NOT training on your data (see the FAQ on our website).

Please like, retweet, and share with your favorite colleagues! (Link to the platform below in the thread👇)
13 replies · 38 reposts · 135 likes · 31K views
q.e.d Science@qedScience·
@OdedRechavi So many misses. You can't afford another one. qed for grants is live. 🏀
0 replies · 0 reposts · 2 likes · 256 views
q.e.d Science@qedScience·
Scientists spend 100+ days a year writing grants. Almost none get funded. Not because the ideas are bad, but because the system is broken and you fall through the cracks.

Today we launch qed for grants, an AI reviewer that makes YOUR proposal stronger before you submit (it doesn’t replace you - you do the writing). Novelty. Logic. Methodology. Fit for the call. Making sure nothing is missed.

Early access open. 🔒🩶 @nivmast @OdedRechavi
5 replies · 33 reposts · 162 likes · 23.4K views
q.e.d Science@qedScience·
"We're partnering with qed science to explore how AI tools might help readers engage with reviews, while keeping human expert judgement central." @behrenstimb , eLife Editor-in-Chief, in an update to their entire author and user base. £2.4m from the Wellcome Trust. 3 years of rethinking peer review. Happy to be on this journey. #OpenScience #PeerReview #AIinScience
7 replies · 6 reposts · 18 likes · 43.6K views
Ivano Amelio@ivanoamelio·
Now ready to submit our next paper, with my new T-shirt... thanks to @qedScience 🙂😉
[photo]
1 reply · 1 repost · 15 likes · 578 views
q.e.d Science@qedScience·
New @FrontiersIn report: 53% of reviewers use AI. But 59% of them use it to draft reports. Only 19% for methodology or statistical analysis. That's not deep use. That's autocomplete with extra steps.
0 replies · 1 repost · 3 likes · 2K views
q.e.d Science reposted
Bost_lab@BostLab72311·
Big thanks for the fresh goodies @qedScience @OdedRechavi 🙏 The T-shirt is amazing… now I just need the right occasion to pull it off 😄
[photo]
1 reply · 2 reposts · 18 likes · 6.9K views
q.e.d Science@qedScience·
@NIH award rates just hit historic lows. Every submission has to be sharper. But 70% of researchers still use AI for writing polish. Only 19% for methodology or statistics. The pressure is rising. The tools are stuck at the surface❗😓📉
0 replies · 3 reposts · 8 likes · 3.7K views
q.e.d Science@qedScience·
This. Journals and universities are stuck in a loop neither side has an incentive to break. Until we change how science is actually evaluated, the same shortcuts keep winning. Just getting started 👇
Oded Rechavi@OdedRechavi

I think I’ll start posting about the lessons I’m learning as part of this new thing I’ve been doing (my attempt to change the landscape of scientific publishing, and consequently how science is done).

One lesson I’ve learned (and also unlearned…) is that it’s very convenient to put all the blame on journals. I’ve done it myself for years. And yes, many of the criticisms are valid. They make way too much money at our expense and are often not very good at distinguishing good science from bad science. Some of them (not all of them! There are good journals too!) bring very little value and can even slow scientific progress. They can be inefficient and biased, and journal names are a very poor substitute for quality.

But the more I work on this, the harder it is for me to believe that journals are the only problem (even specifically when it comes to publishing science). Universities are equally at fault. And I don’t just mean that we, the scientists doing the reviewing, are part of the problem (which we are, obviously). I mean the institutions we belong to, and the way they make decisions. Hiring, promotions, funding allocation - these processes are often opaque, subjective, and not particularly scientific. They are slow and inefficient, and they rely on journal brands as a shortcut.

I used to think journals were driving this, but it’s obviously more like a loop. Journals could not stay the way they are if universities changed how they evaluate quality, because they would lose much of their justification to exist. But universities do not evaluate science directly, because there is too much of it and not enough expert time (or money to pay reviewers). So they rely on journal prestige, while journals rely on institutional reputation. Where you do your science ends up mattering more than what you discover, and this affects publication, which affects funding, which determines whether you can even pursue your ideas.

This can be exploited, of course, but I don’t think institutions (or the responsible faculty/management) behave this way because they are evil or greedy. They do it because evaluating science properly is ridiculously hard and time-consuming, and the system does not reward doing it well.

But the important question is: can we change the way our universities work, or is it an impossible task? What I've learned working on this problem is that we can. In addition to engaging with management, we can influence the system in other ways. In many cases we don’t need their approval - we are the ones who form the committees.

I believe we can break the loop if we target the mechanism of science evaluation. Journals will keep their power, shortcuts will keep dominating, and the same biases will keep reproducing themselves unless we change how we evaluate science (how we do review). If we can find ways to critically evaluate science at scale, rigorously and transparently, we can change how decisions are made.

0 replies · 2 reposts · 4 likes · 2.7K views
Emma Zang 臧熙璐@DrEmmaZang·
Hot take for the future of peer review: journals should start asking for a lightweight AI-based replication check (e.g., via Claude) at submission. Not to replace reviewers, but to catch coding errors, logic inconsistencies, and reproducibility issues before a paper reaches them.

At this point, many of these checks are fast, cheap, and automatable. There’s little reason to rely solely on human detection. Even with restricted data, this is feasible: authors can generate simulated datasets that preserve the structure of the real data and run identical pipelines. The goal is just basic verification.

More broadly, we need to rethink how we use reviewer time. Not every submission needs 3 full human reviews. A more efficient pipeline might look like: editorial triage, AI-assisted checks/review, targeted human evaluation where it matters most. If done well, this could raise standards while reducing burden on the system.
13 replies · 13 reposts · 133 likes · 29.9K views
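[Editor's note: the "simulated datasets + identical pipelines" idea in the post above is concrete enough to sketch. Below is a minimal illustration in Python of what such a check could look like. It is a sketch under stated assumptions, not q.e.d's product or any journal's actual pipeline; the schema, column names, and analysis function are all hypothetical. The idea: authors ship fake data that mirrors the restricted dataset's structure, and a verifier re-runs their unmodified analysis script against it to confirm the code executes end to end.]

import numpy as np
import pandas as pd

def simulate_dataset(schema, n_rows, seed=0):
    # Generate fake data that preserves the structure (columns and types)
    # of a restricted dataset, without exposing any real records.
    rng = np.random.default_rng(seed)
    columns = {}
    for name, kind in schema.items():
        if kind == "numeric":
            columns[name] = rng.normal(size=n_rows)
        elif kind == "binary":
            columns[name] = rng.integers(0, 2, size=n_rows)
        elif kind == "category":
            columns[name] = rng.choice(["a", "b", "c"], size=n_rows)
    return pd.DataFrame(columns)

def analysis_pipeline(df):
    # Stand-in for the authors' real analysis script; the point of the
    # check is that this function runs UNCHANGED on the simulated data.
    treated = df.loc[df["treatment"] == 1, "outcome"].mean()
    control = df.loc[df["treatment"] == 0, "outcome"].mean()
    return {"effect": treated - control, "n": len(df)}

schema = {"outcome": "numeric", "treatment": "binary", "region": "category"}
simulated = simulate_dataset(schema, n_rows=1000)
report = analysis_pipeline(simulated)

# The check verifies only that the pipeline executes end to end and emits
# the reported quantities; it says nothing about whether the real-data
# estimates are correct. That is the "basic verification" the post describes.
assert set(report) == {"effect", "n"}
print(report)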
q.e.d Science@qedScience·
Not every day an AI gets called polite… especially when our friends at Claude have started signing off with heart emojis. 🤖❤️ A peer-reviewed study in @EMBO just named qed as a platform built for exactly what authors want most: structured criticism that's hard to hear, but impossible to ignore. One AI vs three human reviewers. The lines ran close. And that was v1.0. We're just getting started.
2 replies · 6 reposts · 19 likes · 4.6K views
q.e.d Science reposted
Olga Heidingsfeld@1_6_30_3_5·
After testing @qedScience review, I suspect we may yet remember Reviewer 2 with fondness. Journal club: I asked students to review a paper of their choice & compare their reports with qed. Early results: qed stricter, spots more gaps. We’ll summarize it at the end of the semester
1 reply · 3 reposts · 4 likes · 1.4K views