On 30/11/2012 at 17.06, Shaun Thomas wrote:
> On 11/30/2012 09:44 AM, Niels Kristian Schjødt wrote:
>
> Just a note on your iostat numbers. The first reading is actually just a
> summary. You want the subsequent readings.
>
>> The pgsql_tmp dir is not changing at all it's constantly empty (a
Yeah, this area is changing very fast!
I agree - choosing carefully is important, as there are still plenty of
older models around that are substantially slower. Also, the choice of
motherboard chipset can strongly affect overall performance. The 6
Gbit/s ports on Sandy and Ivy Bridge mobos [1]
Well, it seems that my data may be outdated, sorry for that. I've just
checked performance numbers on Tom's Hardware and it seems that the best SSDs
really do 500 MB/s; some others do 100. So, I'd say one must choose wisely
(as always :-) ).
Best regards,
Vitalii Tymchyshyn
1 Dec 2012 00:43, "Mark Ki
Hmm - not strictly true as stated: one SSD will typically do 500 MB/s
sequential read/write, while one HDD will be lucky to get a third of that.
We are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a
RAID1 pair of SSD, as they perform about the same for sequential work
and vastly better at random
On 11/30/2012 09:44 AM, Niels Kristian Schjødt wrote:
Just a note on your iostat numbers. The first reading is actually just
a summary. You want the subsequent readings.
The pgsql_tmp dir is not changing at all; it's constantly empty (a size
of 4.0K).
Good.
Filesystem    1K-blocks    Used    A
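As noted above, iostat's first report is an average since boot, not current activity, so the numbers that matter start with the second report. A small sketch of skipping it, assuming a reasonably recent sysstat (the `-y` flag is not in very old versions):

```shell
# iostat's first report averages I/O since boot and hides current load.
# -y suppresses that first summary report; -x adds extended stats
# (await, %util); -k reports in kB. Five one-second samples:
iostat -y -x -k 1 5
```

With `-x`, the `%util` and `await` columns make the "are the disks saturated?" question in this thread much easier to answer than the plain tps/kB view.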
Hmm, very interesting. Currently I'm running at "medium" load compared to the
very high loads at night.
This is what the CPU I/O on new relic show:
https://rpm.newrelic.com/public/charts/8RnSOlWjfBy
And this is what iostat shows:
Linux 3.2.0-33-generic (master-db) 11/30/2012 _x86_64_
On 11/30/2012 08:48 AM, Niels Kristian Schjødt wrote:
I forgot to run 'sudo mount -a'. I feel so embarrassed now :-( - in
other words, no, the drive was not mounted to the /ssd dir.
Yeah, that'll get ya.
I still see a lot of CPU I/O when doing a lot of writes, so the
question is, what's next. S
On 30/11/2012 at 15.02, Shaun Thomas wrote:
> On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:
>
>> If I do a "sudo iostat -k 1"
>> I get a lot of output like this:
>> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
>> sda 0.00 0.00 0.00
On 11/30/2012 02:37 AM, Vitalii Tymchyshyn wrote:
Actually, what's the point in putting logs to ssd? SSDs are good for
random access and logs are accessed sequentially.
While this is true, Niels' problem is that his regular HDs are getting
saturated. In that case, moving any activity off of t
On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:
If I do a "sudo iostat -k 1"
I get a lot of output like this:
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda 0.00 0.00 0.00 0 0
sdb 0.00 0.00
On 11/30/2012 07:31 AM, Niels Kristian Schjødt wrote:
In theory what difference should it make to the performance, to have
a pool in front of the database, that all my workers and web servers
connect to instead of connecting directly? Where is the performance
gain coming from in that situation?
Okay, so to understand this better before I go with that solution:
In theory what difference should it make to the performance, to have a pool in
front of the database, that all my workers and web servers connect to instead
of connecting directly? Where is the performance gain coming from in tha
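To answer the question above: the gain from a pooler comes from reusing a small set of long-lived backends instead of paying connection setup and per-backend memory for every worker, and from capping concurrency so hundreds of sessions don't contend for the same disks. A minimal PgBouncer sketch - the database name, paths, and sizes here are assumptions, not from the thread:

```ini
; pgbouncer.ini sketch (hypothetical names/paths)
[databases]
cardb = host=127.0.0.1 port=5432 dbname=cardb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; one backend serves many clients, per transaction
default_pool_size = 32     ; far fewer backends than max_connections = 1000
```

Workers and web servers then connect to port 6432, and the server's max_connections can drop from 1000 to something near the pool size.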
Niels Kristian Schjødt wrote:
>> You said before that you were seeing high disk wait numbers. Now
>> it is zero according to your disk utilization graph. That
>> sounds like a change to me.
> Hehe, I'm sorry if it somehow was misleading; I just wrote "a lot
> of I/O" - it was CPU I/O
>>> A lot of
SSDs are not faster for sequential I/O, as far as I know. That's why (with a
BBU or synchronous_commit=off) I prefer to keep logs on regular HDDs.
Best regards
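For reference, the setting mentioned above is spelled as follows in postgresql.conf - a sketch, assuming you can tolerate losing the last few committed transactions on a crash (this risks no data corruption, only recently acknowledged commits):

```ini
# postgresql.conf sketch: commits return before WAL is flushed to disk.
# On a crash, at most ~3 * wal_writer_delay worth of commits can be lost.
synchronous_commit = off
```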
2012/11/30 Willem Leenen
>
> Actually, what's the point in putting logs to ssd? SSDs are good for
> random access and logs are accessed sequential
Actually, what's the point in putting logs to ssd? SSDs are good for random
access and logs are accessed sequentially. I'd put table spaces on ssd and
leave logs on hdd
30 Nov 2012 04:33, "Niels Kristian Schjødt" wrote:
Because SSD's are considered faster. Then you have to put the most p
Oh, yes. I don't imagine DB server without RAID+BBU :)
When there is no BBU, SSD can be handy.
But you know, SSD is worse in linear read/write than HDD.
Best regards, Vitalii Tymchyshyn
2012/11/30 Mark Kirkwood
> Most modern SSD are much faster for fsync type operations than a spinning
> disk
When I try your command sequence I end up with the contents of the new
pg_xlog owned by root. Postgres will not start:
PANIC: could not open file "pg_xlog/000000010000000600000080" (log file
6, segment 128): Permission denied
While this is fixable, I suspect you have managed to leave the xlo
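A sketch of the relocation that avoids the root-owned-files PANIC above - the paths are assumptions (Debian/Ubuntu layout for 9.2), and the server must be stopped first:

```shell
# Move pg_xlog to the SSD and leave a symlink behind.
# PGDATA and SSD are assumptions; run as root.
PGDATA=/var/lib/postgresql/9.2/main
SSD=/ssd

service postgresql stop
mv "$PGDATA/pg_xlog" "$SSD/pg_xlog"
chown -R postgres:postgres "$SSD/pg_xlog"   # the step whose omission
                                            # causes "Permission denied"
ln -s "$SSD/pg_xlog" "$PGDATA/pg_xlog"
service postgresql start
```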
Most modern SSD are much faster for fsync type operations than a
spinning disk - similar performance to spinning disk + writeback raid
controller + battery.
However as you mention, they are great at random IO too, so Niels, it
might be worth putting your postgres logs *and* data on the SSDs an
> Hmm I'm getting suspicious here. Maybe my new great setup with the SSD's
Hmm, I'm getting suspicious here. Maybe my new great setup with the SSDs is not
really working as it should, and maybe New Relic is not monitoring as it
should.
If I do a "sudo iostat -k 1"
I get a lot of output like this:
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
s
On 30/11/2012 at 02.24, "Kevin Grittner" wrote:
> Niels Kristian Schjødt wrote:
>
>> Okay, now I'm done the updating as described above. I did the
>> postgres.conf changes. I did the kernel changes, I added two
>> SSDs in a software RAID1 where the pg_xlog is now located -
>> unfortunately the
Niels Kristian Schjødt wrote:
> Okay, now I'm done the updating as described above. I did the
> postgres.conf changes. I did the kernel changes, I added two
> SSDs in a software RAID1 where the pg_xlog is now located -
> unfortunately the picture is still the same :-(
You said before that y
On 28/11/2012 at 17.54, Shaun Thomas wrote:
> On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:
>
>> https://rpm.newrelic.com/public/charts/h2dtedghfsv
>
> Doesn't this answer your question?
>
> That iowait is crushing your server into the ground. It's no surprise updates
> are taking s
In later kernels these have been renamed:
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)
$ sysctl -a|grep dirty
vm.dirty_background_ratio = 5
vm.dirty_background_bytes = 0
vm.dirty_ratio = 10
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 30
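Given the listing above, the equivalent of the suggested tuning under these names would look like this - the values are illustrative, and the commands need root:

```shell
# There is no vm.dirty_writeback_ratio key; on 3.x kernels the dirty-page
# tunables are named as shown in the sysctl listing above.
sysctl -w vm.dirty_background_ratio=1   # start background writeback at 1% dirty
sysctl -w vm.dirty_ratio=5              # block writers outright at 5% dirty
# To persist across reboots, put the same keys in /etc/sysctl.conf
# and reload with: sysctl -p
```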
Hi, I have started to implement your suggestions. I have a small error so far
though. The "vm.dirty_writeback_ratio = 1" command returns:
error: "vm.dirty_writeback_ratio" is an unknown key
I'm on ubuntu 12.04
On 28/11/2012 at 17.54, Shaun Thomas wrote:
> On 11/28/2012 10:19 AM, Niels Kri
On 11/28/2012 11:44 AM, Niels Kristian Schjødt wrote:
Thanks a lot - on the server I already have one additional SSD 250gb
disk, that I don't use for anything at the moment.
God. An SSD would actually be better for your data, as it follows
more random access patterns, and xlogs are more se
On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:
https://rpm.newrelic.com/public/charts/h2dtedghfsv
Doesn't this answer your question?
That iowait is crushing your server into the ground. It's no surprise
updates are taking several seconds. That update you sent us *should*
execute on the
max_connections = 1000 looks bad... why not put a pooler in place?
Cheers
Bèrto
On 28 November 2012 16:19, Niels Kristian Schjødt
wrote:
> max_connections = 1000
--
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and
Okay guys,
Thanks for all the great help and advice already! Let me just clear some
things, to make my question a little easier to answer :-)
Now my site is a search engine for used cars - not just a car shop with a few
hundred cars.
The update query you look at, is an update that is executed on
I assume that SQL databases (banks? telecom?) can handle a used car shop. No
need for an unstructured data tool.
> +1, SQL databases have a limited number of inserts/updates per second. Even
> with high-end hardware you won't have more than XXX operations per
> second. As Thomas said, you shoul
Niels Kristian Schjødt wrote:
> PS. I'm on Postgres 9.2 on a server with 32 GB RAM, 8 cores and
> two 3 TB disks in a software RAID 1 setup.
In addition to the excellent advice from Shaun, I would like to
point out a few other things.
One query runs on one core. In a test of a single query, the othe
On 28.11.2012 15:07, Shaun Thomas wrote:
> On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:
>
> Before I go crazy, here... you really need to tell us what "not enough"
> means. You didn't provide an explain analyze, so we don't know what your
> actual performance is. But I have my suspici
On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:
Before I go crazy, here... you really need to tell us what "not enough"
means. You didn't provide an explain analyze, so we don't know what your
actual performance is. But I have my suspicions.
So as you can see, it's already pretty optimize
Hi, I have these update queries that run very often and take too long,
preventing us from reaching the throughput we are aiming at. However, the
update query is very simple, and I can't figure out any way to improve the
situation.
The query looks like this:
UPDATE "adverts" SET "last_obser
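Since the thread repeatedly asks for an EXPLAIN ANALYZE, this is the shape of what to post - the column names are hypothetical placeholders, because the original statement is cut off above:

```sql
-- Hypothetical SET/WHERE columns for illustration only;
-- substitute the real truncated statement.
EXPLAIN (ANALYZE, BUFFERS)
UPDATE "adverts"
   SET some_timestamp_column = now()
 WHERE id = 12345;
```

The BUFFERS option shows whether the update reads from shared_buffers or hits disk, which is exactly the HDD-saturation question being debated in this thread.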