Roy Smith <r...@panix.com> wrote:

> Let's say I've got a program which consumes 60 GB of RAM, so I'm renting
> the 2xlarge instance to run it. My software architect could recode the
> program to be more efficient, and fit into just 30 GB, saving me
> $3000/year. How much of his time is it worth to do that? He's costing
> me about $600/day, so if he can do it in a week, it'll take a year to
> recoup my investment.
Exactly. That is why I said "just throw more RAM at it". We see this
in scientific HPC too: what does it cost to optimize software,
compared to just using a bigger computer? It virtually never pays off.

Or, Python related: Python might be slow, but how much should we value
our own time? If a simulation took one week to complete, how long did
it take to write the code? When should we use C++ or Fortran instead
of Python? Ever? There is a reason scientists are running Python on
even the biggest supercomputers today.

Hardware might be expensive, but not compared to human resources, and
that gap only widens as hardware gets cheaper. So we should optimize
for human resources rather than for hardware. With 64 bit that is
finally possible. (It isn't always possible with 32 bit, where the
~4 GB address space caps how much RAM you can throw at a problem, so
that is a different story.)

Sturla
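PS: A minimal sketch of Roy's break-even arithmetic, in Python. The
numbers ($3000/year saved, $600/day rate, a five-day week) are his from
the quote above; the function name is just my own for illustration:

def breakeven_years(annual_savings, day_rate, days_of_work):
    """Years of savings needed to repay a one-off optimization effort."""
    optimization_cost = day_rate * days_of_work
    return optimization_cost / annual_savings

# Roy's numbers: $3000/year saved, $600/day architect, 5-day work week.
print(breakeven_years(annual_savings=3000, day_rate=600, days_of_work=5))
# -> 1.0, i.e. a full year before the optimization pays for itself.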