There are a few magics people have built to help with this, though I’m not
sure how actively maintained they are - see the store extension (
https://ipython.readthedocs.io/en/stable/config/extensions/storemagic.html)
or ipycache (https://github.com/rossant/ipycache). Your best approach is
still to cache results to disk yourself, as Jason described.
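If you try the store extension, the usage looks roughly like this in a
notebook cell (results here stands in for whichever variable you want to
keep across kernel restarts):

    # Load the extension once, then persist a variable:
    %load_ext storemagic
    %store results

    # After restarting the kernel, restore it instead of recomputing:
    %store -r results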
If you are interested in a generic solution, hibernating kernels could be
proposed in a Jupyter Enhancement Proposal[1]. However, supporting this
across operating systems may be a challenge (CryoPID or BLCR could work on
Linux, not sure about feasibility on Windows) and would require significant
investment.
The basic idea is to save the results of a long computation to a file and
then skip the computation if that file exists. You can then delete the file
if you want the computation to run again. Many people use the pickle module
to save Python objects to disk, but there are thousands of other ways to
manage it.
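A rough sketch of that pattern - run_long_computation() is just a
placeholder for whatever your expensive step is, and the file name is
arbitrary:

    import os
    import pickle

    CACHE_FILE = "long_computation.pkl"  # delete this file to force a re-run

    if os.path.exists(CACHE_FILE):
        # Result already computed and saved earlier; load it from disk.
        with open(CACHE_FILE, "rb") as f:
            result = pickle.load(f)
    else:
        # Do the expensive work once and save it for next time.
        result = run_long_computation()  # placeholder for your own code
        with open(CACHE_FILE, "wb") as f:
            pickle.dump(result, f)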
Hi Jason,
thank you for your quick answer.
Are there any simple ways to do what you suggest?
Otherwise, could I open the Jupyter Notebook on Google Cloud to avoid the
re-run every time?
Or, are there any other possibilities?
Unfortunately, I have only been working with Jupyter for a few weeks, so I
am still quite new to it.
I don't think this is something that Jupyter can solve. You should likely
investigate how to cache intermediate results to disk in your analysis
pipelines if you want to skip repeating some computations.
Jason
On Thu, May 18, 2023 at 10:38 AM Luca Marconi wrote:
> Hi everybody,
> I am working