Hi, Sarven.

You might consider including in your call an explicit reference to
nanopublications [1] as an example of how to address point (5).

About source code, there's a project, SciForge [2], working on the
idea of making scientific software citable.

My two cents...

Andrea

----
[1] http://nanopub.org/
[2] http://www.gfz-potsdam.de/en/research/organizational-units/technology-transfer-centres/cegit/projects/sciforge/

On Tue, Jul 29, 2014 at 2:51 AM, Sarven Capadisli <[email protected]> wrote:
> On 2014-07-29 00:45, Andrea Splendiani wrote:
>>
>> While I agree with you all, I was thinking: is the lack of reproducibility
>> an issue due to the way results are represented?
>> Apart from some fields (e.g., bioinformatics), materials, samples, and
>> experience are probably more relevant and much harder to reproduce.
>
>
> I think that depends on who we ask and how much they care about
> reproducibility.
>
> *IMHO*, the SW/LD research scene is not exactly hard science. It leans more
> on engineering and development than following the pure scientific method.
> The majority of the research coming out of this area focuses on showing
> positive and useful results, and that appears to materialize in ways
> like:
>
> * My code can beat up your code.
> * We have something that is ground breaking.
> * We have some positive results, and came up with a research problem.
>
> How often do you come across negative results in the proceedings, i.e.,
> some *exploration* that ended up at a dead end?
>
> It is common to find the evaluation section of a paper replaced with
> benchmarks. Kjetil pointed at this issue eloquently at ISWC 2013:
> http://folk.uio.no/kjekje/2013/iswc.pdf , emphasizing the need for
> carefully designed experiments where required.
>
> In other cases, one practically needs to run after the authors to get 1) a
> copy of the original paper, 2) the tooling or whatever they built, or 3) the
> data that they used or produced. It is generally assumed that if some text
> is in a PDF, and gets a go ahead from a few reviewers, it passes as science.
> Paper? Code? Data? Environment? Send me an email please.
>
> I am generalizing the situation, of course. So, please put your pitchforks
> down. There is a lot of great work, and solid science, conducted by the SW/LD
> community. But let's not take our eyes off the signal-to-noise ratio.
>
> So, yes, making efforts toward reproducibility is important to redeem
> ourselves. If you think that reproducibility in some other fields is more
> relevant and harder, well, then, I think we should be able to manage things
> on our end, don't you think?
>
> The benefit of having the foundations for reproducibility via LD is that we
> make it possible to query our research process and output, and introduce the
> possibility to compare atomic parts of the experiments, or even detect and
> fix issues.
>
> If we can't handle the technicality that goes into creating "linked
> research", how can we expect the rest of the world to get on board? And we
> are not dealing with a technical problem here. It is blind obedience and
> laziness. There is absolutely nothing stopping us from playing along with
> the archaic industry models and publishing methods temporarily (for a number
> of good and valid reasons), if and only if we first take care of ourselves
> and have complete control over things. Publish on your end, pass a stupid
> fixed copy to the conference/publisher. Then see how quickly the research
> "paper" landscape changes.
>
> As I've stated at the beginning, it all depends on who we ask and how much
> they care. Do we? If so, what are we going to do about it?
>
> -Sarven
> http://csarven.ca/#i
>



-- 
Andrea Perego, Ph.D.
European Commission DG JRC
Institute for Environment & Sustainability
Unit H06 - Digital Earth & Reference Data
Via E. Fermi, 2749 - TP 262
21027 Ispra VA, Italy

https://ec.europa.eu/jrc/

----
The views expressed are purely those of the writer and may
not in any circumstances be regarded as stating an official
position of the European Commission.
