Is this still an issue? (I missed the convo due to an overzealous spam
filter; this is the only message I have)
I often use AWS Spot instances (bidding on spare capacity that others
provisioned but put up for auction when they don't need it) to get
results extremely quickly without hearing a fan, or to test changes on
a "large" system.
What do you need and how long (roughly, eg days, hours...)?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose-instances.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/memory-optimized-instances.html
Take your pick. m4.16xlarge is 64 cores and 256 GiB of RAM,
x1e.16xlarge is 64 cores and just shy of 2 TiB of RAM, and
x1e.32xlarge is 128 cores and 3.9 TiB of RAM.
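For reference, here is a minimal boto3 sketch of launching one of those
types as a Spot instance; the AMI ID, key pair name, region, and max
price are placeholders you'd substitute for your own setup:

    # Sketch: launch an x1e.32xlarge as a Spot instance via boto3.
    # ImageId, KeyName, region, and MaxPrice are placeholder values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # substitute a real AMI for your region
        InstanceType="x1e.32xlarge",
        KeyName="my-keypair",        # placeholder key pair name
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "MaxPrice": "4.00",              # bid ceiling, USD/hour (placeholder)
                "SpotInstanceType": "one-time",  # terminate on interruption
            },
        },
    )
    print(response["Instances"][0]["InstanceId"])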
Alec
PS: Migrating what to what? Wasn't the git migration done years ago?
Remember I only have the quoted message!
On 09/07/18 21:03, Eric S. Raymond wrote:
Florian Weimer <f...@deneb.enyo.de>:
* Eric S. Raymond:
The bad news is that my last test run overran the memory capacity of
the 64GB Great Beast. I shall have to find some way of reducing the
working set, as 128GB of DDR4 memory is hideously expensive.
Do you need interactive access to the machine, or can we run the job
for you?
If your application is not NUMA-aware, we probably need something that
has 128 GiB per NUMA node, which might be a bit harder to find, but I'm
sure many of us have suitable lab machines which could be temporarily
allocated for that purpose.
I would need interactive access.
But that's now one level away from the principal problem; there is
some kind of recent metadata damage - or maybe some "correct" but
weird and undocumented stream semantics that reposurgeon doesn't know
how to emulate - that is blocking correct conversion.