Re: [PERFORM] Linux Filesystems again - Ubuntu this time

2010-07-27 Thread Whit Armstrong
Thanks. But is there no such risk to turning off write barriers? I'm only specifying noatime for xfs at the moment. Did you get a substantial performance boost from disabling write barriers? Like 10x, or more like 2x? Thanks, Whit On Tue, Jul 27, 2010 at 1:19 PM, Kevin Grittner wrote: > "Kev
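For reference, a sketch of the mount options under discussion (device and mount point are placeholders; nobarrier is only reasonable when a battery-backed controller cache already makes barriers redundant, otherwise a power loss can corrupt the filesystem):

```
# /etc/fstab sketch -- device and mount point are hypothetical
/dev/sdb1  /var/lib/postgresql  xfs  noatime,nobarrier  0  0
```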

Re: [PERFORM] Linux Filesystems again - Ubuntu this time

2010-07-27 Thread Whit Armstrong
Kevin, While we're on the topic, do you also disable fsync? We use xfs with battery-backed raid as well. We have had no issues with xfs. I'm curious whether anyone can comment on their experience (good or bad) using xfs/battery-backed-cache/fsync=off. Thanks, Whit On Tue, Jul 27, 2010 at 9:48 A
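For context, the settings in question (a sketch of the options, not a recommendation — a battery-backed cache protects against power loss at the controller, but fsync=off still allows an OS crash to corrupt the cluster):

```
# postgresql.conf sketch
fsync = off                 # unsafe: OS crash can corrupt the cluster even with a BBU
#synchronous_commit = off   # milder alternative: risks losing recent commits, not corruption
full_page_writes = on       # leave on unless storage guarantees atomic page writes
```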

[PERFORM] enum for performance?

2009-06-17 Thread Whit Armstrong
I have a column which only has six states or values. Is there a size advantage to using an enum for this data type? Currently I have it defined as a character(1). This table has about 600 million rows, so it could wind up making a difference in total size. Thanks, Whit -- Sent via pgsql-perfor
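Worth knowing before switching: a Postgres enum value is stored on disk as a fixed 4-byte OID, while character(1) on PG >= 8.3 takes about 2 bytes (a 1-byte short varlena header plus the payload), so an enum would actually make the column larger. A quick back-of-envelope check, ignoring per-row alignment padding:

```shell
rows=600000000
char1=2   # character(1): 1-byte short varlena header + 1 byte of data (PG >= 8.3)
enum=4    # enum values are a fixed 4-byte OID internally
echo $(( rows * (enum - char1) ))   # extra bytes an enum column would add: 1200000000 (~1.2 GB)
```

If size is the only concern, the internal `"char"` type is a single byte with no header.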

Re: [PERFORM] partition question for new server setup

2009-04-30 Thread Whit Armstrong
wrote: > > On 4/29/09 7:28 AM, "Whit Armstrong" wrote: > >> Thanks, Scott. >> >>> I went with ext3 for the OS -- it makes Ops feel a lot better. ext2 for a >>> separate xlogs partition, and xfs for the data. >>> ext2's drawbacks are not relev

Re: [PERFORM] partition question for new server setup

2009-04-30 Thread Whit Armstrong
Thanks, Scott. > I went with ext3 for the OS -- it makes Ops feel a lot better. ext2 for a > separate xlogs partition, and xfs for the data. > ext2's drawbacks are not relevant for a small partition with just xlog data, > but are a problem for the OS. Can you suggest an appropriate size for the x
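One hedged way to size the xlog partition: on 8.x, WAL on disk is bounded by roughly (2 × checkpoint_segments + 1) segments of 16 MB each. The checkpoint_segments value below is a hypothetical example, not a recommendation (the 8.x default is 3):

```shell
checkpoint_segments=16   # hypothetical setting
segment_mb=16            # WAL segment size in MB
echo $(( (2 * checkpoint_segments + 1) * segment_mb ))   # upper bound in MB: 528
```

A few GB of headroom beyond that bound is cheap insurance.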

Re: [PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
Thanks, Scott. So far, I've followed a pattern similar to Scott Marlowe's setup. I have configured 2 disks as a RAID 1 volume, and 4 disks as a RAID 10 volume. So, the OS and xlogs will live on the RAID 1 vol and the data will live on the RAID 10 vol. I'm running the memtest on it now, so we st

Re: [PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
Are there any other xfs settings that should be tuned for postgres? I see this post mentions "allocation groups." Does anyone have suggestions for those settings? http://archives.postgresql.org/pgsql-admin/2009-01/msg00144.php What about raid stripe size? Does it really make a difference? I th
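For the allocation-group and stripe questions, a sketch of the relevant mkfs.xfs knobs (device and values are placeholders; su should match the RAID stripe unit and sw the number of data-bearing disks):

```
# sketch only -- destructive command, all values hypothetical
mkfs.xfs -d agcount=16,su=256k,sw=2 /dev/sdc1
```

Many RAID controllers expose their geometry, in which case mkfs.xfs picks up the stripe alignment automatically.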

Re: [PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
I see. Thanks for everyone for replying. The whole discussion has been very helpful. Cheers, Whit On Tue, Apr 28, 2009 at 3:13 PM, Kevin Grittner wrote: > Whit Armstrong wrote: >>>   echo noop >/sys/block/hdx/queue/scheduler >> >> can this go into /etc/init.

Re: [PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
> echo noop >/sys/block/hdx/queue/scheduler Can this go into /etc/init.d somewhere? Or does that change stick between reboots? -Whit On Tue, Apr 28, 2009 at 2:16 PM, Craig James wrote: > Kenneth Marshall wrote: Additionally are there any clear choices w/ regard to filesystem t
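The sysfs setting does not survive a reboot. Two common ways to persist it (the device name hdx is a placeholder):

```
# /etc/rc.local -- re-apply at every boot
echo noop > /sys/block/hdx/queue/scheduler

# or set the default for all disks via the kernel command line
# in the bootloader config:
#   elevator=noop
```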

Re: [PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
Thanks, Scott. Just to clarify you said: > postgres.  So, my pg_xlog and all OS and logging stuff goes on the > RAID-10 and the main store for the db goes on the RAID-10. Is that meant to be that the pg_xlog and all OS and logging stuff go on the RAID-1 and the real database (the /var/lib/postgr

[PERFORM] partition question for new server setup

2009-04-28 Thread Whit Armstrong
I have the opportunity to set up a new postgres server for our production database. I've read several times in various postgres lists about the importance of separating logs from the actual database data to avoid disk contention. Can someone suggest a typical partitioning scheme for a postgres se
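A minimal sketch of the kind of layout the replies in this thread converge on (mount points are Debian/Ubuntu defaults; the WAL/data split is the point, not the exact sizes):

```
# RAID-1 (2 disks):   OS and WAL
#   /                     ext3
#   /pg_xlog              ext2 -- small partition; $PGDATA/pg_xlog symlinked here
# RAID-10 (4+ disks): the database proper
#   /var/lib/postgresql   xfs (noatime)
```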