Max Callaghan
@MaxCallaghan5
182 posts
Postdoctoral researcher @PIK_Climate on natural language processing and climate science.
Joined February 2020
570 Following · 320 Followers
Max Callaghan @MaxCallaghan5
Incidentally, if you are interested in working with us on how we can responsibly use ML to assist evidence synthesis, we have an open position at PIK: potsdam.pi-asp.de/bewerber-web/?… Today is the last day the position is open, but please get in touch ASAP if you need extra time to apply
Max Callaghan @MaxCallaghan5
There is a lot more to do on improving and evaluating stopping criteria. We set out a blueprint for some of this in the paper, but it requires lots of engagement and further work, some of which is already happening - e.g. in DESTINY destiny-evidence.github.io/website/ @wellcometrust
Max Callaghan @MaxCallaghan5
To help users navigate this landscape, we argue that organisations like @cochranecollab and @CampbellReviews need to update their methodological guidance to help users distinguish between well-justified and ill-justified stopping criteria 12/N
Max Callaghan @MaxCallaghan5
In the paper, we argue that we should prefer the former type of criteria to the latter. I don't think this should be controversial, but again and again when I have argued this, I have met resistance. If you disagree, please tell me why you think we don't need statistics here! 10/N
Max Callaghan @MaxCallaghan5
Some stopping criteria make transparent assumptions and use appropriate statistics to communicate the risk of missing relevant studies, like the one we developed 4 years ago (doi.org/10.1186/s13643…); other promising alternatives are also available :) 8/N
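A criterion of this kind can be sketched as a one-sided hypergeometric test. This is a minimal illustration of the general idea, not the exact method from the paper: the function name is mine, and I simplify by ignoring relevant records found in the random sample itself when counting recall.

```python
from math import comb, floor

def stopping_p_value(r_found, n_remaining, n_sampled, k_sampled,
                     recall_target=0.95):
    """One-sided hypergeometric test for stopping screening.

    After prioritised screening found `r_found` relevant records, we screened
    `n_sampled` of the `n_remaining` records in random order and saw
    `k_sampled` relevant ones. H0: recall < recall_target, i.e. at least
    k_mis relevant records were still un-found when we paused. Returns the
    (conservative, worst-case) p-value; stop when it falls below alpha.
    """
    # Smallest number of missing relevant records consistent with H0:
    # r_found / (r_found + k_mis) < recall_target
    k_mis = floor(r_found * (1 - recall_target) / recall_target) + 1
    if k_mis > n_remaining:
        return 0.0  # H0 impossible: too few records remain to hide k_mis
    # P(X <= k_sampled) for X ~ Hypergeom(n_remaining, k_mis, n_sampled)
    denom = comb(n_remaining, n_sampled)
    return sum(
        comb(k_mis, i) * comb(n_remaining - k_mis, n_sampled - i)
        for i in range(k_sampled + 1)
    ) / denom
```

For example, having found 95 relevant records and then seeing 0 relevant in 500 random draws from 1,000 remaining gives a p-value of about 0.015, so we can stop at a 95% recall target; 0 relevant in only 10 draws gives a p-value near 0.97, so we cannot.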
Max Callaghan @MaxCallaghan5
Other stopping criteria do not do this, but rely on heuristics, like stopping after 50/100/200 consecutive irrelevant records. 9/N
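To see why such heuristics can mislead on their own: when the remaining pool still contains relevant records at low prevalence, long runs of consecutive irrelevant records arise by chance alone. A back-of-the-envelope sketch (treating draws as independent, which is my simplification):

```python
# Chance of n consecutive irrelevant records when a fraction `prevalence`
# of the remaining pool is still relevant (independent-draw approximation).
def p_all_irrelevant(n, prevalence):
    return (1 - prevalence) ** n

# At 2% prevalence, a run of 50 irrelevant records is unremarkable:
print(round(p_all_irrelevant(50, 0.02), 3))   # 0.364
print(round(p_all_irrelevant(200, 0.02), 3))  # 0.018
```

So "50 consecutive irrelevant" happens more than a third of the time even when dozens of relevant records may remain unseen; without analysis of the pool size and prevalence, the threshold communicates nothing about the risk of missing studies.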
Max Callaghan @MaxCallaghan5
This is unrelated to how fancy our model is. Whenever we use an ML-generated prediction, we need ways to manage and communicate the uncertainty that comes with relying on that prediction. This is a *necessary condition* for the *responsible* use of AI/ML 7/N
Max Callaghan @MaxCallaghan5
Stopping criteria offer ways to *estimate* an appropriate time to stop screening, managing the risk of missing relevant studies while hopefully minimising the time spent screening irrelevant studies. We can only ever estimate this, because we don't have all the relevant info 6/N
Max Callaghan @MaxCallaghan5
This means we can stop screening before we have seen all the potentially relevant documents. But to do this, and actually save some work, we need to know when to stop. This is where stopping criteria come in 😎 5/N
Max Callaghan @MaxCallaghan5
These products employ ML-prioritised (or AI-prioritised, if we want to sound fancy) screening: we screen some records by hand, use these to train a model that predicts the relevance of the remaining records, screen the top-ranked of those by hand in descending order of predicted relevance, then retrain 4/N
[image attached]
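The loop described above fits in a few lines. This is an illustrative sketch, not the implementation of any particular product: the function names are mine, and the toy word-overlap scorer stands in for a real classifier.

```python
def prioritised_screening(records, screen_by_hand, train, batch_size=2):
    """ML-prioritised screening loop: label a batch, retrain, re-rank, repeat."""
    labels = {}                              # record index -> True/False
    unlabelled = list(range(len(records)))
    while unlabelled:
        score = train(records, labels)       # retrain on all labels so far
        unlabelled.sort(key=lambda i: -score(records[i]))  # most relevant first
        batch, unlabelled = unlabelled[:batch_size], unlabelled[batch_size:]
        for i in batch:                      # screen top-ranked records by hand
            labels[i] = screen_by_hand(records[i])
    return labels

# Toy stand-in for the ML model: score = word overlap with known-relevant records.
def train(records, labels):
    relevant_words = set()
    for i, is_relevant in labels.items():
        if is_relevant:
            relevant_words.update(records[i].split())
    return lambda text: len(relevant_words & set(text.split()))
```

In practice the loop is stopped early once a stopping criterion is met, rather than run until every record is labelled; that early exit is exactly where the criteria discussed in this thread plug in.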
Max Callaghan @MaxCallaghan5
While I was away on parental leave, our paper was published on the urgent need for well-justified stopping criteria when using ML to speed up screening in systematic reviews: doi.org/10.1186/s13643… 1/N
[image attached]
Max Callaghan @MaxCallaghan5
@srtoolbox Stopping after 50/100 consecutive irrelevant records is - without further analysis, and except in some limited senses - statistically incoherent. Just because we use machine learning, it doesn't mean we should abandon the high-quality methods we otherwise employ in SRs
Systematic Review Toolbox @srtoolbox
Great to see this published, a survey of systematic literature tools, focusing on AI technologies for screening and extraction: doi.org/10.1007/s10462…
Francesco Osborne @FraOsborne

Happy to announce that our survey paper on #AI tools for literature reviews is now published in #OpenAccess by AI Review! We analyse 32 groundbreaking tools and discuss key challenges. #LLMs are revolutionizing the field! Paper: link.springer.com/article/10.100…
