'None of these software developers has provided information on how effective their algorithm is relative to visual screening by someone with a trained eye for detecting image manipulation. I have written about the need for transparency about the effectiveness of these algorithms, so that users are informed about their capabilities and limitations.'
#ResearchIntegrity
@cyrilpedia One aspect mentioned is the need to archive research data at the institution. Even though this is official policy at our institute, I find it frustratingly hard to chase team members and get them to archive their raw data post-publication (mostly programs and calculations; we are a theory group). Most have to be reminded many, many times.
Any thoughts on this? What are your experiences?
@FMarquardtGroup I think this is a general problem - there are people tinkering with workflows that go from e-lab notebooks to figure source data, which I think are the future. The problem is the transitional period, where the backlog has to be properly archived, but hopefully moving forward this will be a lot less painful. As an editor, I have had to send back papers because the authors could not provide source data requested by reviewers. One open problem is what to do with data that takes up a lot of space, like live imaging and so forth. As with most things in science, I think the quickest way to drive change is to have funders require and monitor data deposition.
@FMarquardtGroup It's absurd how far some of these things get - like the Surgisphere papers in NEJM & the Lancet. Journals need to do a better job, but going back to your original post on this thread, so do institutions.