Just to put a resolution on this. I did some testing and compression
does work, but to get existing tables to compress you have to reimport
your database. So the procedure (sketched roughly below) would be to:
1. Turn on compression in my.cnf following the doc.
2. mysqldump the database you want to compress
3. recreate that database (drop and remake it)
4. reimport the database
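For reference, roughly what that looked like for me (a sketch only, not
exact commands; I'm assuming the default Slurm accounting database name
slurm_acct_db and zlib, so adjust names and paths for your setup, and
note the doc says page compression also needs a filesystem that can
punch holes, e.g. ext4 or xfs):

    # my.cnf -- make new InnoDB tables page-compressed by default
    [mysqld]
    innodb_compression_default = ON
    innodb_compression_algorithm = zlib

    # dump, drop/recreate, and reimport so the tables get rebuilt
    # compressed on the way back in
    mysqldump --single-transaction slurm_acct_db > slurm_acct_db.sql
    mysql -e "DROP DATABASE slurm_acct_db; CREATE DATABASE slurm_acct_db;"
    mysql slurm_acct_db < slurm_acct_db.sql

You'll probably want slurmdbd stopped while the dump and reimport run
so nothing writes to the database partway through.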
This can take a bit if your database is large. However, when I tested
this with our production database it went from 130G on disk to 29G, a
factor of 4.5 improvement (this is using the default settings and zlib).
I haven't had time to actually do it for real on our live system and see
if there is a performance hit in terms of scheduling, but we keep a
sizable buffer in memory so I'm not anticipating any issues.
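If you want to sanity check that the rebuilt tables actually came back
page-compressed, something like this should show it (again just a
sketch, with slurm_acct_db as the assumed database name):

    -- page-compressed tables carry the option in CREATE_OPTIONS
    SELECT TABLE_NAME, CREATE_OPTIONS
      FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = 'slurm_acct_db'
       AND CREATE_OPTIONS LIKE '%page_compressed%';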
My verdict, then, is that if you are going to do it, do it before your
database grows too big, as the dump and reimport will take a while
(for me it was about 4 hours start to finish on my test system).
-Paul Edmon-
On 12/2/2021 1:06 PM, Baer, Troy wrote:
My site has just updated to Slurm 21.08 and we are looking at moving to the
built-in job script capture capability, so I'm curious about this as well.
--Troy
-----Original Message-----
From: slurm-users <slurm-users-boun...@lists.schedmd.com> On Behalf Of Paul
Edmon
Sent: Thursday, December 2, 2021 10:30 AM
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] Database Compression
With the advent of the ability to store job scripts in the slurmdb, our
db is growing at a fairly impressive rate (which is expected). That
said, I've noticed that our database backups are highly compressible
(a factor of 24). Not being a MySQL expert, I hunted around to see if it
could do native compression, and it can:
https://mariadb.com/kb/en/innodb-page-compression/
My question is whether anyone has had any experience using page
compression with MariaDB, and whether there are any hitches I should be aware of.
-Paul Edmon-