On 06/07/2010 12:33 AM, Julien Cigar wrote:
> Stephen Thompson wrote:
>>
>> Hello everyone,
>>
>> We recently attempted a mysql to postgresql migration for our bacula
>> 5.0.2 server. The data migration itself was successful, however we are
>> disappointingly getting either the same or significantly worse
>> performance out of the postgres db.
>>
>> I was hoping that someone might have some insight into this.
>>
>> Here is some background:
>>
>> software:
>> centos 5.5 (64bit)
>> bacula 5.0.2 (64bit)
>> postgresql 8.1.21 (64bit)
>
> Why 8.1 ..? 8.1 is more than 5 years old ...
>
Yes, Alan Brown answered this for me, but, yeah, it's a restriction 
based on our policy of using CentOS 5.5 packages, which ship postgresql 
8.1.  It might be worth compiling the latest release so we can at least 
compare and possibly make an argument for a non-CentOS package in this 
case.


>> (previously... mysql-5.0.77 (64bit) MyISAM)
>>
>> database:
>> select count(*) from File --> 1,439,626,558
>> du -sk /var/lib/pgsql/data --> 346,236,136 /var/lib/pgsql/data
>>
>> hardware:
>> 1Tb EXT3 external fibre-RAID storage
>> 8Gb RAM
>> 2Gb SWAP
>> 2 dual-core [AMD Opteron(tm) Processor 2220] CPUs
>>
>>
>> Some of the postgres tuning that I've attempted thus far (comments are
>> either default or alternatively settings I've tried without effect):
>>
>> #shared_buffers = 1000 # min 16 or max_connections*2, 8KB each
>> shared_buffers = 262144 # 2Gb
>
> This is too large, set shared_buffers to something like 256-512 MB
>

I can try that.
The postgres tuning documents I'd read suggested 1/4 of RAM as a 
starting point, which is how I arrived at 2Gb.


>> #work_mem = 1024 # min 64, size in KB
>> work_mem = 524288 # 512Mb
>
> Don't forget that work_mem is allocated *per-operation* (maybe several
> times per query). 512 MB seems too large for me
>

Thanks, I can try reducing this and the shared_buffers.


>> #maintenance_work_mem = 16384 # min 1024, size in KB
>> maintenance_work_mem = 2097152 # 2Gb
>> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
>> checkpoint_segments = 16
>> #checkpoint_warning = 30 # in seconds, 0 is off
>> checkpoint_warning = 16
>> #effective_cache_size = 1000 # typically 8KB each
>> #effective_cache_size = 262144 # 256Mb
>> effective_cache_size = 6291456 # 6Gb
>
> 6GB seems OK to me
>
>> #random_page_cost = 4 # units are one sequential page fetch cost
>> random_page_cost = 2
>>
>
> only reduce random_page_cost if you have fast disks (SAS, ...)

Thanks, I'll try putting this back to the default of 4.
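Pulling the thread's suggestions together, here is a sketch of what I'll try in postgresql.conf. One caveat worth noting: 8.1 doesn't accept unit suffixes like "MB", so shared_buffers and effective_cache_size are counted in 8KB pages while work_mem and maintenance_work_mem are in KB -- which means my current effective_cache_size = 6291456 is actually 6291456 pages = 48GB, not the 6GB I intended. The values below are my conversions, not tested yet:

```
# postgresql.conf sketch for 8.1 on this 8GB box.
# 8.1 has no unit suffixes:
#   shared_buffers / effective_cache_size are in 8KB pages
#   work_mem / maintenance_work_mem are in KB
shared_buffers = 65536            # 512MB (65536 * 8KB), per Julien's suggestion
work_mem = 32768                  # 32MB -- allocated per sort/hash operation
maintenance_work_mem = 524288     # 512MB for VACUUM, CREATE INDEX
effective_cache_size = 786432     # 6GB (786432 * 8KB); old value 6291456 = 48GB
checkpoint_segments = 16          # unchanged
random_page_cost = 4              # back to the default for this storage
```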

>> Now, as to what I'm 'seeing'. Building restore trees are on par with
>> my previous mysql db, but what I'm seeing as significantly worse are:
>>
>>                                           mysql    postgresql
>> Within Bat:
>> 1) Version Browser (large sample job)     3min     9min
>> 2) Restore Tree (average sample job)      40sec    25sec
>> 3) Restore Tree (large sample job)        10min    8.5min
>> 4) Jobs Run (1000 Records)                10sec    2min
>>
>> Within psql/mysql:
>> 1) select count(*) from File;             1sec     30min
>>
>> Catalog dump:
>> 1) mysqldump/pg_dump                      2hrs     3hrs
>>
>>
>> I get a win on building Restore trees, but everywhere else, it's
>> painfully slow. It makes the bat utility virtually unusable as an
>> interface. Why the win (albeit moderate) in some cases but terrible
>> responses in others?
>>
>> I admit that I am not familiar with postgres at all, but I tried to
>> walk through some of the postgres tuning documents, including the
>> notes in the bacula manual to arrive at the above settings. Also note
>> that I've tried several variants on the configuration above (including
>> the postgres defaults), don't have a detailed play by play of the
>> results, but the time results above seemed typical regardless of what
>> settings I tweaked.
>>
>> Any help would be greatly appreciated!
>> Stephen
>
>

It doesn't sound like I'm doing anything egregiously wrong.

I am still surprised at how slow postgres is compared to mysql on the 
same hardware, given all I've read and heard about postgres's superiority.
Don't get me wrong, I understand its strengths, but for an application 
like Bacula, it doesn't seem like many of its features are really 
needed, and if it runs more slowly...  I may very well continue to run 
with mysql, which is rather disappointing.
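
On the count(*) number specifically, my understanding is that this one isn't really a tuning problem: MyISAM stores an exact row count in the table header, so mysql answers instantly, while postgres (being MVCC) has to scan the table. If an approximate count would do, the planner's estimate can be read from the catalog instead (this works on 8.1, but it's only as fresh as the last ANALYZE/VACUUM, and I'm assuming the table was created with an unquoted name, so it's stored in lowercase as 'file'):

```sql
-- Approximate row count from the planner's statistics
-- (maintained by ANALYZE/VACUUM, so it can lag the true count)
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'file';
```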

thanks!
Stephen
-- 
Stephen Thompson               Berkeley Seismological Laboratory
step...@seismo.berkeley.edu    215 McCone Hall # 4760
404.538.7077 (phone)           University of California, Berkeley
510.643.5811 (fax)             Berkeley, CA 94720-4760

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users