I would like to second the Zenodo recommendation. GitHub is not reliable enough
for reproducible research (your files can disappear at any point, or can
change without notice), which is why Zenodo was created. It assumes that your
package has the list of DOIs to offer, but that should be ideally
I appreciate the welcome! Also - I believe that replying to an email is the
way to respond here, but please let me know if that's not the case.
In any event - passing in a cluster context is an interesting idea. I will
think about that. Also, it seems that despite me telling myself to write
bug-fr
Dear Stephen Abrams,
Welcome to R-package-devel!
On Thu, 13 Feb 2025 22:20:50 -0500,
Stephen Abrams wrote:
> A secondary worry is that even if I resolve this, there might be
> something else causing threads to spin up.
Instead of using detectCores() [*] and creating cluster objects yourself, how
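The suggestion above — letting the caller supply the cluster rather than creating one inside the package — can be sketched roughly as follows. The function name `run_model()` and its arguments are hypothetical, purely for illustration:

```r
library(parallel)

# Hypothetical package function: accepts an optional cluster object
# instead of calling detectCores()/makeCluster() itself.
run_model <- function(data, cl = NULL) {
  if (is.null(cl)) {
    # Sequential fallback when the user supplies no cluster.
    return(lapply(data, function(x) x^2))
  }
  parLapply(cl, data, function(x) x^2)
}

# The user decides how many workers to spawn, and is responsible
# for shutting the cluster down afterwards.
cl <- makeCluster(2L)
res <- run_model(as.list(1:4), cl = cl)
stopCluster(cl)
```

This keeps resource decisions in the user's hands, which also sidesteps CRAN's limits on how many processes a check may spawn.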
Hi - my submission was rejected with the following error in one of my
vignettes.
On Debian GNU/Linux trixie/sid:
Error: processing vignette 'modeling_with_binary_classifiers.Rmd'
failed with diagnostics:
24 simultaneous processes spawned
On Windows:
Error: processing vignette 'modeling_with_bin
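One common workaround for this class of check failure is to cap the worker count when running under CRAN's checks, which set the environment variable `_R_CHECK_LIMIT_CORES_`. A hedged sketch — note this variable is an internal check convention, not a documented API:

```r
# Cap parallelism at 2 workers when CRAN's check environment is detected;
# otherwise use however many cores the machine reports.
chk <- tolower(Sys.getenv("_R_CHECK_LIMIT_CORES_", unset = ""))
num_workers <- if (nzchar(chk) && chk != "false") 2L else parallel::detectCores()

cl <- parallel::makeCluster(num_workers)
# ... run parallel work here ...
parallel::stopCluster(cl)
```

Placing a guard like this in vignettes and examples keeps them under CRAN's two-process limit while still using all cores on users' machines.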
Dear John,
Our approach to an open and reproducible workflow is to publish the data
via Zenodo. https://zenodo.org/ is maintained by CERN.
- The data is freely available.
- Your data is easy to cite.
- Every version gets its own DOI + one stable DOI that always points to the
most recent version.
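Because every version of a Zenodo record gets its own DOI, pinning a record ID pins the exact data version. A minimal sketch of fetching a file this way — the record ID and file name below are hypothetical placeholders:

```r
# Hypothetical versioned Zenodo record; substitute your own record ID
# and file name.
zenodo_url <- "https://zenodo.org/records/1234567/files/mydata.csv"
dest <- file.path(tempdir(), "mydata.csv")

download.file(zenodo_url, destfile = dest, mode = "wb")
dat <- read.csv(dest)
```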
Seconded... have the support for obtaining the desired file be completely
initiated by the user, and explicitly pass the filename into the functions that
use the data. It is also easier to trace which file was used in a past analysis
this way... auto-configuration seems convenient, but it is hard to re
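The explicit-filename pattern described above can be sketched in a few lines; `analyse_data()` is a hypothetical function name:

```r
# The user obtains the file themselves and passes its path explicitly,
# so the provenance of every analysis is recorded in the call itself.
analyse_data <- function(path) {
  stopifnot(file.exists(path))
  read.csv(path)
}

# Usage: the path appears verbatim in scripts, making past analyses traceable.
# dat <- analyse_data("data/mydata_v2.csv")
```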
Not an answer, but a request from someone often working behind firewalls
and/or on machines not connected to the internet. Please provide a way for
the package to search for the data at some user-specified location, such as
a local directory.
Best,
Jan
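Jan's request could be accommodated by checking a user-settable location before attempting any download. A sketch under the assumption of a hypothetical option name (`mypackage.data_dir`) and environment variable (`MYPACKAGE_DATA_DIR`):

```r
# Look for a data file in a user-specified local directory first;
# only suggest downloading if it is not found there.
find_data_file <- function(filename) {
  local_dir <- getOption(
    "mypackage.data_dir",
    default = Sys.getenv("MYPACKAGE_DATA_DIR", unset = "")
  )
  if (nzchar(local_dir)) {
    candidate <- file.path(local_dir, filename)
    if (file.exists(candidate)) return(candidate)
  }
  stop("File '", filename, "' not found locally; ",
       "place it in a directory and set options(mypackage.data_dir = ...).")
}
```

This works offline and behind firewalls, since the user can stage the files by any means available to them.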
On 14-02-2025 15:54, John Clarke wrote:
Thanks so much Rafael, I think piggyback is exactly what I was looking for.
I wonder if it is possible/best practice to include a call to it during the
install.packages('MyPackage') process so that the data is available prior
to running tests in the R CMD build GitHub Action (and also for users to
Hi John,
There are different alternatives for where to host the data (e.g. OSF, a
proprietary server, GitHub, etc.). The solution I've been adopting in most of
my packages is to use a combination of a proprietary server and GitHub.
So the data is first downloaded from our own server and only if our
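The "own server first, hosted fallback second" pattern described above might look like the following sketch; both URLs are placeholders, not real endpoints:

```r
# Try the primary (own) server first; fall back to a GitHub raw URL
# if that download fails. Both base URLs are hypothetical.
fetch_data <- function(filename, dest = file.path(tempdir(), filename)) {
  primary  <- paste0("https://example.org/data/", filename)
  fallback <- paste0("https://raw.githubusercontent.com/user/repo/main/",
                     filename)
  ok <- tryCatch({
    download.file(primary, destfile = dest, mode = "wb", quiet = TRUE)
    TRUE
  }, error = function(e) FALSE, warning = function(w) FALSE)
  if (!ok) {
    download.file(fallback, destfile = dest, mode = "wb", quiet = TRUE)
  }
  dest
}
```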
Hi folks,
I've looked around for this particular question, but haven't found a good
answer. I have a versioned dataset that includes about 6 CSV files totalling
about 15 MB per version. The versions get updated every few years
or so and are used to drive the model, which was written in C++ but
On 11.02.2025 10:40, Rolf Turner wrote:
On Mon, 10 Feb 2025 21:55:07 +
Bernd.Gruber wrote:
Hi,
I have a quick question. I have an older package (dartR) that is now
superseded by a series of new packages.
Still, we noticed that several users have not yet updated and moved to
the new pac