Axel Rau wrote:
Some ERP software requires a change of my pgsql cluster from
locale C, encoding UTF-8
to
locale de_DE.UTF-8, encoding UTF-8.
Most of my databases have only ASCII text data (the single-byte range of
UTF-8) in the text columns.
Does the above change influence index performance?
On 11.09.2008 at 11:29, Peter Eisentraut wrote:
What other performance impacts can be expected?
The performance impact is mainly with string comparisons and sorts.
I suggest you run your own tests to find out what is acceptable in
your scenario.
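A minimal sketch of the ordering difference behind that advice (Python used as an analogy with made-up sample words, not a PostgreSQL benchmark): the C locale compares UTF-8 strings byte by byte, while a de_DE.UTF-8 collation runs a locale-aware comparison routine for every pair, which is where the sort slowdown comes from.

```python
# C-locale ordering: compare the raw UTF-8 bytes. In that ordering all
# uppercase ASCII sorts before lowercase ASCII, and multi-byte characters
# (0xC2 and up in UTF-8) sort after every ASCII letter.
words = ["Zebra", "apple", "Ärger"]
c_order = sorted(words, key=lambda w: w.encode("utf-8"))
print(c_order)  # ['Zebra', 'apple', 'Ärger']
# A de_DE.UTF-8 collation would instead interleave case and treat Ä like A,
# giving roughly ['Ärger', 'apple', 'Zebra'] -- at the cost of a
# locale-aware comparison (strcoll) per pair instead of a byte compare.
```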
Axel Rau wrote:
I'm not yet convinced to switch to a non-C locale. Is the following
intended behavior?
With lc_ctype C: select lower('ÄÖÜ'); => ÄÖÜ
With lc_ctype en_US.utf8: select lower('ÆÅË'); => æåë
(Both have server encoding UTF8.)
I would expect exactly that.
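For anyone wondering why the C locale leaves 'ÄÖÜ' untouched: its case mapping covers only the ASCII bytes A-Z. Python's bytes vs. str types show the same split (an analogy for illustration, not PostgreSQL's actual code path):

```python
# bytes.lower() maps only ASCII A-Z, much like lower() under lc_ctype C;
# str.lower() applies the full Unicode case mapping, like a UTF-8 locale.
s = "ÄÖÜ"
ascii_only = s.encode("utf-8").lower().decode("utf-8")
unicode_aware = s.lower()
print(ascii_only)     # ÄÖÜ  (unchanged: no bytes in the A-Z range to map)
print(unicode_aware)  # äöü
```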
I'm about to buy a new server. It will be a Xeon system with two
processors (4 cores per processor) and 16GB RAM. Two RAID extenders
will be attached to an Intel s5000 series motherboard, providing 12
SAS/Serial ATA connectors.
The server will run FreeBSD 7.0, PostgreSQL 8, apache, PHP, mail
On Thu, Sep 11, 2008 at 06:29:36PM +0200, Laszlo Nagy wrote:
> The expert told me to use RAID 5 but I'm hesitating. I think that RAID 1+0
> would be much faster, and I/O performance is what I really need.
I think you're right. I think it's a big mistake to use RAID 5 in a
database server where random write performance matters.
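The RAID 5 objection can be made concrete with rough numbers (a sketch with assumed per-drive figures, not a measurement): a small random write on RAID 5 costs four disk operations, while on RAID 1+0 it costs two.

```python
# RAID 5 small-write penalty: read old data, read old parity,
# write new data, write new parity = 4 disk ops per logical write.
# RAID 1+0: one write to each mirror half = 2 disk ops.
# Assumed figures: 10 drives at ~75 random IOPS each (typical 7200 rpm SATA).
drives, iops_per_drive = 10, 75
raid5_write_iops = drives * iops_per_drive / 4
raid10_write_iops = drives * iops_per_drive / 2
print(raid5_write_iops, raid10_write_iops)  # 187.5 375.0
```

Reads are less affected, which is why RAID 5 can still look fine on read-mostly workloads.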
On Thu, 11 Sep 2008, Laszlo Nagy wrote:
So the basic system will reside on a RAID 1 array, created from two SAS
disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda
320GB SATA (ES 7200) disks for the rest.
That sounds good. Put RAID 1 on the pair, and RAID 1+0 on the rest.
On Thu, Sep 11, 2008 at 06:18:37PM +0100, Matthew Wakeling wrote:
> On Thu, 11 Sep 2008, Laszlo Nagy wrote:
>> So the basic system will reside on a RAID 1 array, created from two SAS
>> disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda
>> 320GB SATA (ES 7200) disks for the rest.
Kenneth Marshall <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 11, 2008 at 06:18:37PM +0100, Matthew Wakeling wrote:
>> On Thu, 11 Sep 2008, Laszlo Nagy wrote:
>>> So the basic system will reside on a RAID 1 array, created from two SAS
>>> disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda
>>> 320GB SATA (ES 7200) disks for the rest.
going to the same drives. This turns your fast sequential I/O into
random I/O with the accompanying 10x or more performance decrease,
unless you have a good RAID controller with battery-backed cache.
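A back-of-the-envelope sketch of that decrease (assumed drive figures: 80 MB/s sequential transfer, 8 ms average seek plus rotational latency, 8 KB random reads -- plausible for the era, not measured):

```python
# Reading 100 MB sequentially vs. as random 8 KB requests.
seq_seconds = 100 / 80                # one long sequential pass
reads = (100 * 1024) // 8             # number of 8 KB random reads
per_read = 0.008 + 8 / (80 * 1024)    # seek/latency + tiny transfer
rand_seconds = reads * per_read
slowdown = rand_seconds / seq_seconds
print(reads, slowdown)                # ~80x slower once every read seeks
```

With two arrays sharing spindles, each stream keeps yanking the heads away from the other, which is how "fast sequential" degrades toward that random-I/O figure.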
All right. :-) This is what I'll have:
Boxed Intel Server Board S5000PSLROMB with
On Thu, Sep 11, 2008 at 10:29 AM, Laszlo Nagy <[EMAIL PROTECTED]> wrote:
> I'm about to buy a new server. It will be a Xeon system with two processors
> (4 cores per processor) and 16GB RAM. Two RAID extenders will be attached
> to an Intel s5000 series motherboard, providing 12 SAS/Serial ATA
> connectors.
On Thu, Sep 11, 2008 at 11:47 AM, Laszlo Nagy <[EMAIL PROTECTED]> wrote:
> I cannot spend more money on this computer, but since you are all talking
> about battery back up, I'll try to get money from the management and buy
> this:
>
> Intel(R) RAID Smart Battery AXXRSBBU3, optional battery back up
Hmm, I would expect this tunable to potentially be rather file system
dependent, and potentially raid controller dependant. The test was using
ext2, perhaps the others automatically prefetch or read ahead? Does it
vary by RAID controller?
Well I went and found out, using ext3 and xfs. I have a
Greg Smith wrote:
The point I was trying to make there is that even under impossibly
optimal circumstances, you'd be hard pressed to blow out the disk's
read cache with seek-dominated data even if you read a lot at each
seek point. That idea didn't make it from my head into writing very
well
Laszlo Nagy wrote:
I cannot spend more money on this computer, but since you are all
talking about battery back up, I'll try to get money from the management
and buy this:
Intel® RAID Smart Battery AXXRSBBU3, optional battery back up for use
with AXXRAK18E and SRCSAS144E. RoHS Compliant.
On Thu, 11 Sep 2008, Laszlo Nagy wrote:
The expert told me to use RAID 5 but I'm hesitating.
Your "expert" isn't--at least when it comes to database performance.
Trust yourself here, you've got the right general idea.
But I can't make any sense out of exactly how your disks are going to be
Drives have their own read-ahead in the firmware. Many can keep track of 2
or 4 concurrent file accesses. A few can keep track of more. This also
plays in with the NCQ or SCSI command queuing implementation.
Consumer drives will often read-ahead much more than server drives
optimized for I/O performance.
Sorry, I forgot to mention the Linux kernel version I'm using, etc:
2.6.18-92.1.10.el5 #1 SMP x86_64
CentOS 5.2.
The "adaptive" read-ahead, as well as other enhancements in the kernel, are
taking place or coming soon in the most recent stuff. Some distributions
offer the adaptive read-ahead as a
On Thu, 11 Sep 2008, Scott Carey wrote:
Drives have their own read-ahead in the firmware. Many can keep track of 2
or 4 concurrent file accesses. A few can keep track of more. This also
plays in with the NCQ or SCSI command queuing implementation.
Consumer drives will often read-ahead much more than server drives optimized for I/O performance.
On Thu, Sep 11, 2008 at 3:36 PM, <[EMAIL PROTECTED]> wrote:
> On Thu, 11 Sep 2008, Scott Carey wrote:
>
>> Drives have their own read-ahead in the firmware. Many can keep track of
>> 2 or 4 concurrent file accesses. A few can keep track of more. This also
>> plays in with the NCQ or SCSI command queuing implementation.
On Thu, 11 Sep 2008, Scott Marlowe wrote:
On Thu, Sep 11, 2008 at 3:36 PM, <[EMAIL PROTECTED]> wrote:
even if it didn't, most modern drives read the entire cylinder into their
buffer, so any additional requests to the drive will be satisfied from this
buffer and not have to wait for the disk.
On Thursday 11 September 2008, [EMAIL PROTECTED] wrote:
> while I agree with you in theory, in practice I've seen multiple
> partitions cause far more problems than they have prevented (due to the
> partitions ending up not being large enough and having to be resized
> after they fill up, etc) so I
On Thu, 11 Sep 2008, Alan Hodgson wrote:
On Thursday 11 September 2008, [EMAIL PROTECTED] wrote:
while I agree with you in theory, in practice I've seen multiple
partitions cause far more problems than they have prevented (due to the
partitions ending up not being large enough and having to be
I also thought that LVM is unsafe for WAL logs and file system journals
when the disk write cache is enabled -- it doesn't flush the disk write
caches correctly and doesn't pass write barriers through.
As pointed out here:
http://groups.google.com/group/pgsql.performance/browse_thread/thread/9dc43991c1887129
by Greg Smith
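For reference, barrier behaviour is partly a mount-option issue. A hedged sketch of the knobs involved (device and mount-point names are made up; and on kernels of that era device-mapper silently dropped barrier requests, so these flags alone did not make LVM safe for the WAL):

```
# /etc/fstab -- ext3 did not enable write barriers by default;
# barrier=1 requests them so journal commits are ordered past the drive cache
/dev/vg0/pgdata   /var/lib/pgsql   ext3   defaults,barrier=1   0 2

# XFS enables barriers by default; mounting with 'nobarrier' (or sitting
# on an LVM layer that drops barrier requests) defeats that protection
```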
On Thu, Sep 11, 2008 at 4:33 PM, <[EMAIL PROTECTED]> wrote:
> On Thu, 11 Sep 2008, Scott Marlowe wrote:
>
>> On Thu, Sep 11, 2008 at 3:36 PM, <[EMAIL PROTECTED]> wrote:
>>> even if it didn't, most modern drives read the entire cylinder into
>>> their buffer, so any additional requests to the drive will be satisfied
>>> from this buffer and not have to wait for the disk.
Moving this thread to Performance alias as it might make more sense for
folks searching on this topic:
Greg Smith wrote:
On Tue, 9 Sep 2008, Amber wrote:
I read something from
http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html
saying that PostgreSQL can't give the correct results.
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> * However, empty row results are occurring consistently
> (in fact Q11 also returned empty for me while it worked in their test)
> Queries: 4, 5, 6, 10, 11, 12, 14, 15
> (ACTION ITEM: I will start separate threads for each of those queries in
> HACKERS)
On Thu, 11 Sep 2008, Alan Hodgson wrote:
LVM plus online resizable filesystems really makes multiple partitions
manageable.
I've seen so many reports blaming Linux's LVM for performance issues that
its manageability benefits don't seem too compelling.
On Thu, Sep 11, 2008 at 11:30 PM, Jignesh K. Shah <[EMAIL PROTECTED]> wrote:
> Moving this thread to Performance alias as it might make more sense for
> folks searching on this topic:
You should be using DBT-3. Similarly, a scale factor of 10 is
pointless. How many data warehouses are only 10GB?