Rune Busk Damgaard

5.5K posts

Rune Busk Damgaard
@rbdamgaard

Associate Professor, Section for Medical Biotechnology, Technical University of Denmark. Studies ubiquitin signalling in inflammation, metabolism, and disease.

Copenhagen, Denmark · Joined December 2015
1.5K Following · 2.1K Followers

Pinned Tweet
Rune Busk Damgaard @rbdamgaard ·
More evidence that LUBAC has functions beyond immune signalling. Our recent study shows that LUBAC and OTULIN regulate metabolic signalling: specifically, they govern AMPK signalling to control metabolic adaptation, autophagy, and cell death during energetic stress. This suggests that the metabolic symptoms observed in patients with LUBAC deficiency and ORAS, including glycogen storage disease, lipodystrophy, and liver disease, may be caused directly by metabolic dysregulation rather than being secondary to the systemic inflammation. nature.com/articles/s4141…
0 replies · 2 reposts · 5 likes · 590 views
Rune Busk Damgaard retweeted
Salim S. Hayek, MD @salimhayek ·
One of my NIH grant reviews came back last year with comments that were clearly LLM-generated. The PMIDs cited against the proposal were hallucinated; the papers did not exist. The score still counted toward the funding decision.
18 replies · 56 reposts · 434 likes · 67.4K views
Rune Busk Damgaard retweeted
Oded Rechavi @OdedRechavi ·
Academia feels increasingly like playing Monkey Island: if you miss something as absurd as picking up a rubber chicken with a pulley in the first scene, you only realize much later that you’re stuck and can’t pass to the next stage.

I was interviewed today by @ScienceMagazine about the funding crisis following the ERC announcement. I argued that when a grant rejection can hinge on something small, and you then have to wait years to resubmit, the cost becomes so high that instead of proposing “high-risk, high-gain” plans, scientists will prepare more cautious, less ambitious proposals. It turns into a perverse game of simplifying your ideas and trying (unsuccessfully) to predict what an overstretched reviewer pool will find easiest to digest.

"Abrupt change to European funder’s rules leaves researchers shut out | Science | AAAS" science.org/content/articl…
3 replies · 17 reposts · 95 likes · 15.8K views
Rune Busk Damgaard @rbdamgaard ·
@WalentekLab @MicrobiomDigest I think it depends on our definitions. You seem to argue based on how easy it is to produce fake data de novo? My argument is that you can just as easily make fraudulent data by manipulating microscopy images, whether by photoshopping, biased selection, or false labelling.
1 reply · 0 reposts · 0 likes · 16 views
Rune Busk Damgaard @rbdamgaard ·
@WalentekLab @MicrobiomDigest Don’t agree. I respect your opinion, but I think it is prejudiced and circular. Sleuths identify many fake/photoshopped tissue sections. And just look at how far AI has come in 2-3 years. Imagine in 10. You have just decided the standard is different for your favourite data.
1 reply · 0 reposts · 0 likes · 22 views
Rune Busk Damgaard @rbdamgaard ·
@WalentekLab @MicrobiomDigest You write “so I stopped believing in them as primary source for validating a concept”. I’m not sure I understand how that is misaligned with what I wrote? In what context do you believe in them, then?
1 reply · 0 reposts · 0 likes · 15 views
Aykut Uz @aykutuz ·
Thanks for the reply. The real reason: scientific work is messy. You try one thing and mess it up, then try another. Sometimes you didn’t set up the environment correctly and the measurement ends up being garbage. The scientist then feels justified in eliminating these data. That’s somewhat understandable.

Sometimes the measurements give results that really don’t support the main thesis, while the majority do. One may think, “Perhaps this is because of the environment setup that I did incorrectly. I shouldn’t let these points sabotage our work, especially since other people are doing the same.” This is problematic. Spherical chickens are fine as assumptions, but this shouldn’t extend into scientific results.

How can we infer such an inclination in “scientific” publications? We don’t see nearly enough “negative”-result papers: “We tried this and it didn’t turn out as we expected.” That is fine; it is science. It is as valid as work that supports a thesis. These papers should have outnumbered the positive ones, but they do not. So we can infer a motive of “sphericalization” of experimental results.

A little on the modus operandi: what happens to these unpublished works? The model is usually updated only until a new set of experiments arises that supports a version. That introduces a subtle, systemic selection bias into scientific work.

In a scientific paper, I demand:
1. an abstract that has a neutral statement vis-à-vis the thesis,
2. which should be submitted before the experiments,
3. experiment data which is messy, and
4. a body that mainly explains the model/thesis rather than attempting to prove it.

As a result, I expect to see many “negative”-result scientific papers. If you agree with any of the above, it is your responsibility to steer the current way “scientific” publication works toward how science should ideally work.
2 replies · 0 reposts · 0 likes · 13 views
Rune Busk Damgaard @rbdamgaard ·
@aykutuz Your demands may work in computer science, but that’s not how it works in biology.
1 reply · 0 reposts · 0 likes · 18 views
Rune Busk Damgaard retweeted
Jess Riedel @Jess_Riedel ·
I'll bet against this. The foundations of science have never been about trust (nullius in verba, etc), and convincing media is not the ground truth of our trust network. That our scientific institutions use certain trust mechanisms for efficiency reasons does not mean the field melts when those specific mechanisms no longer work, nor does it mean we have to learn to survive in a trustless dystopia. We just develop new trust mechanisms. This sort of alarmism is analogous to all the hand-wringing over deep fakes, which were predictably a nothing burger. blog.jessriedel.com/2018/04/23/meh…
Crémieux @cremieuxrecueil

Science is about to get absolutely nuked. Unless we get extremely strict about providing and opening up code and data and documenting lab experiments rigorously, a torrent of credible-looking but fraudulent papers is upon us.

5 replies · 8 reposts · 42 likes · 4.8K views
Rune Busk Damgaard @rbdamgaard ·
@aykutuz And for some data types, it’s a requirement and people share the raw data.
1 reply · 0 reposts · 1 like · 11 views
Rune Busk Damgaard @rbdamgaard ·
@aykutuz There are at least three reasons: 1) it’s not required by most journals, 2) it’s an enormous effort and extra cost, and 3) it provides an opportunity for people working in bad faith to try to undermine the work. And there are probably more reasons than that.
1 reply · 0 reposts · 1 like · 15 views
Aykut Uz @aykutuz ·
@rbdamgaard Let’s assume this is true. Then the “most” should be happy if raw data availability becomes the norm, to make the “minority” fraudsters’ job difficult, right?
2 replies · 0 reposts · 0 likes · 46 views