Huan,
Resident memory is the part of the process memory that is not swapped out and
is in RAM. It also includes memory shared with other processes, so the sum of
RES for all processes may be greater than total physical memory.
I recommend this article
http://www.depesz.com/2012/06/09/how-much-ram-is-post
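To see this split on a live system, here is a minimal sketch (my addition, assuming Linux and a readable /proc filesystem) that separates a process's resident set into total and shared pages, using the current shell itself as the example process:

```shell
# Resident (RES) vs. shared pages for one process, read from /proc.
# /proc/self/statm fields are in pages: size resident shared text lib data dt
read size resident shared _rest < /proc/self/statm
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
echo "RES:    $(( resident * page_kb )) kB"
echo "shared: $(( shared * page_kb )) kB"  # counted in every sharing process's RES
```

Because the shared portion (for Postgres backends, mostly shared_buffers) appears in every backend's RES, summing RES across backends double-counts it.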
Thanks very much, Glyn, Jeff, and Tom. That was very clearly explained.
A related case: see the following top dump. The Postgres process is using
87GB of resident memory, which I thought was the physical memory consumed by
a process that can't be shared with others. Meanwhile, the free+cached is about
Jeff Janes writes:
> On Thu, Jan 15, 2015 at 3:30 AM, Huan Ruan wrote:
>> I thought 'shared_buffers' sets how much memory that is dedicated to
>> PostgreSQL to use for caching data, therefore not available to other
>> applications.
> While PostgreSQL reserves the right to use 32GB, as long a
On Thu, Jan 15, 2015 at 3:30 AM, Huan Ruan wrote:
> Hi All
>
> I thought 'shared_buffers' sets how much memory that is dedicated to
> PostgreSQL to use for caching data, therefore not available to other
> applications.
>
> However, as shown in the following screenshots, the server (CentOS 6.6
> 6
> From: Huan Ruan
>To: pgsql-performance@postgresql.org
>Sent: Thursday, 15 January 2015, 11:30
>Subject: [PERFORM] shared_buffers vs Linux file cache
>
>
>
>Hi All
>
>
>I thought 'shared_buffers' sets how much memory that is dedicated to
>Pos
Hi All
I thought 'shared_buffers' sets how much memory is dedicated to
PostgreSQL for caching data, and therefore not available to other
applications.
However, as shown in the following screenshots, the server (CentOS 6.6
64bit) has 64GB of RAM, and 'shared_buffers' is set to 32GB, but the
On 01/12/12 11:21, Daniel Farina wrote:
On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian wrote:
Or Debian. Not sure what would justify use of Ubuntu as a server,
except wanting to have the exact same OS as their personal computers.
We have switched from Debian to Ubuntu: there is definitely n
On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian wrote:
> Or Debian. Not sure what would justify use of Ubuntu as a server,
> except wanting to have the exact same OS as their personal computers.
We have switched from Debian to Ubuntu: there is definitely non-zero
value in the PPA hosting (althou
On 11/30/2012 02:38 PM, Bruce Momjian wrote:
Or Debian. Not sure what would justify use of Ubuntu as a server,
except wanting to have the exact same OS as their personal computers.
Honestly not sure why we went that direction. I'm not in the sysadmin
group, though I do work with them pretty
On Fri, Nov 30, 2012 at 02:01:45PM -0600, Shaun Thomas wrote:
> On 11/30/2012 01:57 PM, Ben Chobot wrote:
>
> >Hm, this sounds like something we should look into. Before we start
> >digging do you have more to share, or did you leave it with the "huh,
> >that's weird; this seems to fix it" solutio
On 11/30/2012 01:57 PM, Ben Chobot wrote:
Hm, this sounds like something we should look into. Before we start
digging do you have more to share, or did you leave it with the "huh,
that's weird; this seems to fix it" solution?
We're still testing. We're still on the -31 kernel. We tried the -33
On Nov 30, 2012, at 8:06 AM, Shaun Thomas wrote:
> I say that because you mentioned you're using Ubuntu 12.04, and we were
> having some problems with PG on that platform. With shared_buffers over
> 4GB, it starts doing really weird things to the memory subsystem.
> Whatever it does causes the ker
On Thu, Oct 18, 2012 at 1:50 PM, Jeff Janes wrote:
> On Wed, Oct 10, 2012 at 10:36 PM, Scott Marlowe
> wrote:
>> On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire
>> wrote:
>>>
>>> In my case, if I set it too high, I get impossibly suboptimal plans
>>> when an index scan over millions of rows hi
On Thu, Oct 18, 2012 at 4:23 PM, Jeff Janes wrote:
>> @Claudio So you are basically saying that if I have set effective_cache_size
>> to 10GB and I have 10 concurrent processes which are using 10 different
>> indices which are for example 2GB,
>
> It is the size of the table, not the index, which
On Wed, Oct 10, 2012 at 10:36 PM, Scott Marlowe wrote:
> On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire
> wrote:
>>
>> In my case, if I set it too high, I get impossibly suboptimal plans
>> when an index scan over millions of rows hits the disk way too often
>> way too randomly. The difference
On Wed, Oct 10, 2012 at 1:12 PM, Strahinja Kustudić
wrote:
> @Claudio So you are basically saying that if I have set effective_cache_size
> to 10GB and I have 10 concurrent processes which are using 10 different
> indices which are for example 2GB,
It is the size of the table, not the index, whi
On Thu, Oct 11, 2012 at 11:17 AM, Josh Berkus wrote:
>
>> Does anyone see effective_cache_size make a difference anyway? If so,
>> in what circumstances?
>
> E_C_S, together with random_page_cost, the table and index sizes, the
> row estimates and the cpu_* costs, form an equation which estimates
Jeff,
> Does anyone see effective_cache_size make a difference anyway? If so,
> in what circumstances?
E_C_S, together with random_page_cost, the table and index sizes, the
row estimates and the cpu_* costs, form an equation which estimates the
cost of doing various kinds of scans, particularly
On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire wrote:
> On Wed, Oct 10, 2012 at 7:33 PM, Jeff Janes wrote:
>>> Well, the real question is whether, while traversing the index, if some
>>> of the pages are going to be removed from the cache by other process
>>> cache usage. effective_cache_size i
On Wed, Oct 10, 2012 at 7:33 PM, Jeff Janes wrote:
>> Well, the real question is whether, while traversing the index, if some
>> of the pages are going to be removed from the cache by other process
>> cache usage. effective_cache_size is not figuring the cache will remain
>> between queries.
>
>
On Wed, Oct 10, 2012 at 2:03 PM, Bruce Momjian wrote:
> On Wed, Oct 10, 2012 at 10:12:51PM +0200, Strahinja Kustudić wrote:
>> @Claudio So you are basically saying that if I have set effective_cache_size
>> to
>> 10GB and I have 10 concurrent processes which are using 10 different indices
>> whi
On Wed, Oct 10, 2012 at 7:06 PM, Ondrej Ivanič wrote:
>> Generally going over 4GB for shared_buffers doesn't help.. some of the
>> overhead of bgwriter and checkpoints is more or less linear in the size of
>> shared_buffers ..
>
> Nothing is black or white; it's all shades of Grey :) It depends on
Hi,
On 10 October 2012 19:11, Julien Cigar wrote:
>> shared_buffers = 10GB
>
>
> Generally going over 4GB for shared_buffers doesn't help.. some of the
> overhead of bgwriter and checkpoints is more or less linear in the size of
> shared_buffers ..
Nothing is black or white; It's all shades of G
On Wed, Oct 10, 2012 at 10:12:51PM +0200, Strahinja Kustudić wrote:
> @Bruce Thanks for your articles, after reading them all I don't think
> disabling
> swap is a good idea now. Also you said to see the effective_cache_size I
> should
> check it with free. My question is should I use the value t
On Wed, Oct 10, 2012 at 5:12 PM, Strahinja Kustudić
wrote:
> @Claudio So you are basically saying that if I have set effective_cache_size
> to 10GB and I have 10 concurrent processes which are using 10 different
> indices which are for example 2GB, it would be better to set the
> effective_cache s
@Bruce Thanks for your articles; after reading them all I don't think
disabling swap is a good idea now. Also, you said that to determine
effective_cache_size I should check it with free. My question is: should I
use the value that free is showing as cached, or a little lower one, since
not everything in th
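One way to act on that advice is sketched below (my addition, Linux-only; the 75% margin is an illustrative assumption, not a number from this thread). It reads the page-cache figures that free reports from /proc/meminfo and suggests a slightly lower effective_cache_size:

```shell
# Derive a conservative effective_cache_size from the OS page cache figures.
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
buffers_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
# Take 75% of buffers+cached as a margin, since not all of it serves Postgres.
ecs_mb=$(( (cached_kb + buffers_kb) * 3 / 4 / 1024 ))
echo "suggested effective_cache_size = ${ecs_mb}MB"
```

Since effective_cache_size only influences the planner and reserves nothing, erring a bit low is harmless.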
On Wed, Oct 10, 2012 at 12:12 AM, Strahinja Kustudić
wrote:
> Hi everyone,
>
> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
> 15K SCSI drives which is running Centos 6.2 x64.
How many drives in the RAID?
> This server is mainly used
> for inserting/updating large am
On Wed, Oct 10, 2012 at 3:18 PM, Shaun Thomas wrote:
>> Why does nobody ever mention that concurrent access has to be taken
>> into account?
>
>
> That's actually a good point. But if you have one giant database, the
> overlap of which tables are being accessed by various sessions is going to
> b
On 10/10/2012 12:05 PM, Claudio Freire wrote:
Why does nobody ever mention that concurrent access has to be taken
into account?
That's actually a good point. But if you have one giant database, the
overlap of which tables are being accessed by various sessions is going
to be immense.
Ther
On Wed, Oct 10, 2012 at 02:05:20PM -0300, Claudio Freire wrote:
> On Wed, Oct 10, 2012 at 1:10 PM, Bruce Momjian wrote:
> >> >shared_buffers = 10GB
> >>
> >> Generally going over 4GB for shared_buffers doesn't help.. some of
> >> the overhead of bgwriter and checkpoints is more or less linear in
>
On Wed, Oct 10, 2012 at 1:10 PM, Bruce Momjian wrote:
>> >shared_buffers = 10GB
>>
>> Generally going over 4GB for shared_buffers doesn't help.. some of
>> the overhead of bgwriter and checkpoints is more or less linear in
>> the size of shared_buffers ..
>>
>> >effective_cache_size = 90GB
>>
>> e
On Wed, Oct 10, 2012 at 10:11:30AM +0200, Julien Cigar wrote:
> On 10/10/2012 09:12, Strahinja Kustudić wrote:
> >Hi everyone,
>
> Hello,
>
> >
> >I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM
> >and RAID10 15K SCSI drives which is running Centos 6.2 x64. This
> >server is mainl
On Wed, Oct 10, 2012 at 09:12:20AM +0200, Strahinja Kustudić wrote:
> Hi everyone,
>
> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
> 15K
> SCSI drives which is running Centos 6.2 x64. This server is mainly used for
> inserting/updating large amounts of data via copy
On 10/10/2012 09:35 AM, Strahinja Kustudić wrote:
# sysctl vm.dirty_ratio
vm.dirty_ratio = 40
# sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 10
Och. That looks a lot like an old RHEL or CentOS system. Change
those ASAP. Currently your system won't start writing dirty buffers
On 10/10/2012 09:49 AM, Strahinja Kustudić wrote:
I will change those, but I don't think this is that big of an issue if
most of the IO is done by Postgres, since Postgres has its own
mechanism to tell the OS to sync the data to disk. For example when it's
writing a wal file, or when it's writi
I will change those, but I don't think this is that big of an issue if most
of the IO is done by Postgres, since Postgres has its own mechanism to
tell the OS to sync the data to disk. For example when it's writing a wal
file, or when it's writing a check point, those do not get cached.
Regards,
Shaun,
running these commands:
# sysctl vm.dirty_ratio
vm.dirty_ratio = 40
# sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 10
shows that these values are even higher by default. When you said RAID
buffer size, you meant the controller's cache memory size?
Regards,
Strahinja
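To compare those percentages against a controller cache directly, here is a quick sketch (my addition, Linux-only; the fallback defaults are assumptions used only when /proc/sys is unreadable) that converts the ratios into absolute megabytes for the local machine:

```shell
# Convert vm.dirty_* percentages into absolute sizes for this machine.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
dirty=$(cat /proc/sys/vm/dirty_ratio 2>/dev/null || echo 20)           # 20: common kernel default
dirty_bg=$(cat /proc/sys/vm/dirty_background_ratio 2>/dev/null || echo 10)  # 10: common default
echo "dirty_ratio threshold:            $(( mem_kb * dirty / 100 / 1024 )) MB"
echo "dirty_background_ratio threshold: $(( mem_kb * dirty_bg / 100 / 1024 )) MB"
```

On the 96GB machine in this thread, a dirty_ratio of 40 lets roughly 38GB of dirty pages accumulate before writers block, which dwarfs any RAID controller cache.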
On We
On 10/10/2012 02:12 AM, Strahinja Kustudić wrote:
             total       used       free     shared    buffers     cached
Mem:         96730      96418        311          0         71      93120
Wow, look at all that RAM. Something nobody has mentioned yet, you'll
want to set some additional kernel parameters for this
Thanks for your help everyone.
I set:
shared_buffers = 4GB
effective_cache_size = 72GB
work_mem = 128MB
maintenance_work_mem = 4GB
checkpoint_segments = 64
checkpoint_completion_target = 0.9
random_page_cost = 3.5
cpu_tuple_cost = 0.05
Where can I get the values for random_page_cost and for cpu_t
On 10/10/2012 10:30, Strahinja Kustudić wrote:
Thanks for very fast replies everyone :)
@Laurenz I know that effective cache size is only used for the query
planner, what I was saying is that if I tell it that it can have 90GB
cached items, that is not true, since the OS and Postgres process
Thanks for very fast replies everyone :)
@Laurenz I know that effective cache size is only used for the query
planner, what I was saying is that if I tell it that it can have 90GB
cached items, that is not true, since the OS and Postgres process itself
can take more than 6GB, which would mean 90G
On 10/10/2012 09:12, Strahinja Kustudić wrote:
Hi everyone,
Hello,
I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and
RAID10 15K SCSI drives which is running Centos 6.2 x64. This server is
mainly used for inserting/updating large amounts of data via
copy/insert/update com
Strahinja Kustudic wrote:
>> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
>> 15K SCSI drives
>> which is running Centos 6.2 x64. This server is mainly used for
>> inserting/updating large amounts of
>> data via copy/insert/update commands, and seldom for running sele
On 10.10.2012 09:12, Strahinja Kustudić wrote:
> Hi everyone,
>
> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and
> RAID10 15K SCSI drives which is running Centos 6.2 x64. This server is
> mainly used for inserting/updating large amounts of data via
> copy/insert/update command
Hm, I just noticed that shared_buffers + effective_cache_size = 100GB > 96GB,
which can't be right. effective_cache_size should probably be 80GB.
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 9:12 AM, Strahinja Kustudić
wrote:
> Hi everyone,
>
> I have a Postgresql 9.1 d
Hi everyone,
I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
15K SCSI drives which is running Centos 6.2 x64. This server is mainly used
for inserting/updating large amounts of data via copy/insert/update
commands, and seldom for running select queries.
Here are the rel
On Fri, May 28, 2010 at 5:02 PM, Greg Smith wrote:
> Merlin Moncure wrote:
>>
>> I'm of the opinion (rightly or wrongly) that the prevailing opinions
>> on how to configure shared_buffers are based on special case
>> benchmarking information or simply made up.
>
> Well, you're wrong, but it's OK;
If, like me, you came from the Oracle world, you may be tempted to throw a
ton of RAM at this. Don't. PG does not like it.
On Fri, May 28, 2010 at 4:11 PM, Scott Marlowe wrote:
> On Mon, May 24, 2010 at 12:25 PM, Merlin Moncure
> wrote:
> > *) shared_buffers is one of the _least_ important perfo
On Mon, May 24, 2010 at 12:25 PM, Merlin Moncure wrote:
> *) shared_buffers is one of the _least_ important performance settings
> in postgresql.conf
Yes, and no. It's usually REALLY helpful to make sure it's more than
8 or 24Megs. But it doesn't generally need to be huge to make a
difference.
Merlin Moncure wrote:
I'm of the opinion (rightly or wrongly) that the prevailing opinions
on how to configure shared_buffers are based on special case
benchmarking information or simply made up.
Well, you're wrong, but it's OK; we'll forgive you this time. It's true
that a lot of the earlier
On Fri, May 28, 2010 at 2:57 PM, Greg Smith wrote:
> Merlin Moncure wrote:
>>
>> I would prefer to see the annotated performance oriented .conf
>> settings to be written in terms of trade offs (too low? X too high? Y
>> setting in order to get? Z). For example, did you know that if crank
>> max_l
Merlin Moncure wrote:
I would prefer to see the annotated performance oriented .conf
settings to be written in terms of trade offs (too low? X too high? Y
setting in order to get? Z). For example, did you know that if crank
max_locks_per_transaction you also increase the duration of every
query
2010/5/28 Konrad Garus :
> 2010/5/27 Cédric Villemain :
>
>> Exactly. And the time to browse depends on the number of blocks already
>> in core memory.
>> I am interested by tests results and benchmarks if you are going to do some
>> :)
>
> I am still thinking whether I want to do it on this prod m
2010/5/27 Cédric Villemain :
> Exactly. And the time to browse depends on the number of blocks already
> in core memory.
> I am interested by tests results and benchmarks if you are going to do some :)
I am still thinking whether I want to do it on this prod machine.
Maybe on something less critic
2010/5/27 Konrad Garus :
> 2010/5/27 Cédric Villemain :
>
>> well, that is the projection of file in memory. only projection, but
>> the memory is still acquired. It is ok to rework this part and project
>> something like 128MB and loop. (in fact the code is needed for 9.0
>> because segment can be
2010/5/27 Cédric Villemain :
> well, that is the projection of file in memory. only projection, but
> the memory is still acquired. It is ok to rework this part and project
> something like 128MB and loop. (in fact the code is needed for 9.0
> because segment can be > 1GB, I didn't check what is th
2010/5/27 Konrad Garus :
> 2010/5/27 Cédric Villemain :
>
>> pgmincore() and pgmincore_snapshot() both are able to mmap up to 1GB.
>
> Does it mean they can occupy 1 GB of RAM? How does it relate to amount
> of page buffers mapped by OS?
well, that is the projection of file in memory. only project
2010/5/27 Cédric Villemain :
> pgmincore() and pgmincore_snapshot() both are able to mmap up to 1GB.
Does it mean they can occupy 1 GB of RAM? How does it relate to amount
of page buffers mapped by OS?
--
Konrad Garus
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.or
2010/5/27 Konrad Garus :
> 2010/5/27 Cédric Villemain :
>
>> It works thanks to mincore/posix_fadvise stuff : you need linux.
>> It is stable enough in my own experiment. I did use it for debugging
>> purposes in production servers with success.
>
> What impact does it have on performance?
pgmincore
2010/5/27 Cédric Villemain :
> It works thanks to mincore/posix_fadvise stuff : you need linux.
> It is stable enough in my own experiment. I did use it for debugging
> purposes in production servers with success.
What impact does it have on performance?
Does it do anything, is there any interacti
2010/5/27 Konrad Garus :
> 2010/5/26 Cédric Villemain :
>
>> At the moment where a block is requested for the first time (usually
>> 8kb from postgres, so in fact 2 blocks in OS), you have 'duplicate'
>> buffers.
>> But, depending of your workload, it is not so bad because those 2
>> blocks should n
2010/5/26 Cédric Villemain :
> At the moment where a block is requested for the first time (usually
> 8kb from postgres, so in fact 2 blocks in OS), you have 'duplicate'
> buffers.
> But, depending of your workload, it is not so bad because those 2
> blocks should not be requested until some time
2010/5/24 Konrad Garus :
> 2010/3/11 Paul McGarry :
>
>> I'm basically wondering how the postgresql cache (ie shared_buffers)
>> and the OS page_cache interact. The general advice seems to be to
>> assign 1/4 of RAM to shared buffers.
>>
>> I don't have a good knowledge of the internals but I'm won
On Tue, May 25, 2010 at 5:58 AM, Konrad Garus wrote:
> 2010/5/24 Merlin Moncure :
>
>> *) a page fault to disk is a much bigger deal than a fault to pg cache
>> vs os/ cache.
>
> That was my impression. That's why I did not touch our 2/16 GB setting
> right away. I guess that 2 more gigabytes in O
2010/5/24 Merlin Moncure :
> *) a page fault to disk is a much bigger deal than a fault to pg cache
> vs os/ cache.
That was my impression. That's why I did not touch our 2/16 GB setting
right away. I guess that 2 more gigabytes in OS cache is better than 2
more (duplicated) gigabytes in PG share
On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry wrote:
> Hi there,
>
> I'm after a little bit of advice on the shared_buffers setting (I have
> read the various docs on/linked from the performance tuning wiki page,
> some very helpful stuff there so thanks to those people).
>
> I am setting up a 64b
On May 24, 2010, at 4:25 AM, Konrad Garus wrote:
> Do shared_buffers duplicate contents of OS page cache? If so, how do I
> know if 25% RAM is the right value for me? Actually it would not seem
> to be true - the less redundancy the better.
You can look into the pg_buffercache contrib module.
>
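As a concrete starting point for that suggestion, here is a sketch of the usual pg_buffercache query (my addition; the `postgres` database target is an assumption, and the join on pg_relation_filenode follows the pattern in the contrib documentation), guarded so it degrades gracefully where psql isn't available:

```shell
# Top 10 relations by pages held in shared_buffers, via pg_buffercache.
QUERY="SELECT c.relname, count(*) AS buffers
       FROM pg_buffercache b
       JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
       GROUP BY c.relname ORDER BY buffers DESC LIMIT 10;"
if command -v psql >/dev/null 2>&1; then
  psql -d postgres -c "$QUERY" || echo "query failed: is pg_buffercache installed?"
else
  echo "psql not found; run the query against your database"
fi
```

Comparing this view before and after lowering shared_buffers shows how much of the pool was actually holding popular pages rather than duplicating the OS cache.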
2010/3/11 Paul McGarry :
> I'm basically wondering how the postgresql cache (ie shared_buffers)
> and the OS page_cache interact. The general advice seems to be to
> assign 1/4 of RAM to shared buffers.
>
> I don't have a good knowledge of the internals but I'm wondering if
> this will effectively
Greg Smith writes:
> However, that doesn't actually solve any of the problems I was talking about
> though, which is why I'm not even talking about that part. We need the glue
> to pull out software releases, run whatever testing tool is appropriate, and
> then save the run artifacts in some stan
Dimitri Fontaine wrote:
I still think the best tool around currently for this kind of testing is
tsung
I am happy to say that for now, pgbench is the only actual testing tool
supported. Done; now I don't need tsung.
However, that doesn't actually solve any of the problems I was talking
abo
Greg Smith writes:
> I'm not sure how to make progress on similar ideas about
> tuning closer to the filesystem level without having something automated
> that takes over the actual benchmark running and data recording steps; it's
> just way too time consuming to do those right now with every too
See below:
On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry wrote:
> Hi there,
>
> I'm after a little bit of advice on the shared_buffers setting (I have
> read the various docs on/linked from the performance tuning wiki page,
> some very helpful stuff there so thanks to those people).
>
> I am se
Alvaro Herrera wrote:
Maybe it would make more sense to try to reorder the fsync calls
instead.
The pretty obvious left behind idea from 8.3 spread checkpoint
development was to similarly spread the fsync calls around. Given that
we know, for example, Linux with ext3 is going to dump the
Tom Lane wrote:
> Alvaro Herrera writes:
> > Tom Lane wrote:
> >> That's not going to do anything towards reducing the actual I/O volume.
> >> Although I suppose it might be useful if it just cuts the number of
> >> seeks.
>
> > Oh, they had no problems with I/O volume. It was relation ext
Alvaro Herrera writes:
> Tom Lane wrote:
>> That's not going to do anything towards reducing the actual I/O volume.
>> Although I suppose it might be useful if it just cuts the number of
>> seeks.
> Oh, they had no problems with I/O volume. It was relation extension
> lock that was heavily c
Tom Lane wrote:
> Alvaro Herrera writes:
> > Tom Lane wrote:
> >> Reorder to what, though? You still have the problem that we don't know
> >> much about the physical layout on-disk.
>
> > Well, to block numbers as a first step.
>
> fsync is a file-based operation, and we know exactly zip
Alvaro Herrera writes:
> Tom Lane wrote:
>> Reorder to what, though? You still have the problem that we don't know
>> much about the physical layout on-disk.
> Well, to block numbers as a first step.
fsync is a file-based operation, and we know exactly zip about the
relative positions of dif
Tom Lane wrote:
> Alvaro Herrera writes:
> > Maybe it would make more sense to try to reorder the fsync calls
> > instead.
>
> Reorder to what, though? You still have the problem that we don't know
> much about the physical layout on-disk.
Well, to block numbers as a first step.
However, th
Alvaro Herrera writes:
> Maybe it would make more sense to try to reorder the fsync calls
> instead.
Reorder to what, though? You still have the problem that we don't know
much about the physical layout on-disk.
regards, tom lane
Greg Stark wrote:
> On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
> > "Pierre C" writes:
> >> Does PG issue checkpoint writes in "sorted" order ?
> >
> > No. IIRC, a patch for that was submitted, and rejected because no
> > significant performance improvement could be demonstrated. We don
Greg Stark wrote:
On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
"Pierre C" writes:
Does PG issue checkpoint writes in "sorted" order ?
No. IIRC, a patch for that was submitted, and rejected because no
significant performance improvement could be demonstrated.
If the OS fi
Pierre C wrote:
Actually, I meant that in the case of a seq scan, PG will try to use
just a few buffers (a ring) in shared_buffers instead of thrashing the
whole buffers. But if there was actually a lot of free space in
shared_buffers, do the pages stay, or do they not ?
Pages inserted into t
On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
> "Pierre C" writes:
>> Does PG issue checkpoint writes in "sorted" order ?
>
> No. IIRC, a patch for that was submitted, and rejected because no
> significant performance improvement could be demonstrated. We don't
> have enough information abou
On Tue, Mar 16, 2010 at 1:48 PM, Pierre C wrote:
> Actually, I meant that in the case of a seq scan, PG will try to use just a
> few buffers (a ring) in shared_buffers instead of thrashing the whole
> buffers. But if there was actually a lot of free space in shared_buffers, do
> the pages stay, or
"Pierre C" writes:
> Does PG issue checkpoint writes in "sorted" order ?
No. IIRC, a patch for that was submitted, and rejected because no
significant performance improvement could be demonstrated. We don't
have enough information about the actual on-disk layout to be very
intelligent about thi
I think the logic you are referring to is the clock sweep buffer accounting
scheme. That just makes sure that the most popular pages stay in the
buffers. If your entire db fits in the buffer pool then it'll all get in
there real fast.
Actually, I meant that in the case of a seq scan, PG wil
On Tue, Mar 16, 2010 at 7:24 AM, Pierre C wrote:
>
> I wonder about something, too : if your DB size is smaller than RAM, you
> could in theory set shared_buffers to a size larger than your DB provided
> you still have enough free RAM left for work_mem and OS writes management.
> How does this int
-My warning about downsides related to checkpoint issues with larger
buffer pools isn't an opinion at all; that's a fact based on limitations
in how Postgres does its checkpoints. If we get something more like
Oracle's incremental checkpoint logic, this particular concern might go
away.
Dave Crooke wrote:
There seems to be a wide range of opinion on this I am new to PG
and grew up on Oracle, where more SGA is always a good thing ... I
know people who run Oracle on 2TB Superdomes with titanic SGA sizes
to keep the whole DB in RAM. I'd be using a 40GB+ Oracle SGA on that
There seems to be a wide range of opinion on this I am new to PG and
grew up on Oracle, where more SGA is always a good thing ... I know people
who run Oracle on 2TB Superdomes with titanic SGA sizes to keep the whole
DB in RAM. I'd be using a 40GB+ Oracle SGA on that box of yours.
A lot of
On Thu, Mar 11, 2010 at 5:19 PM, Paul McGarry wrote:
> On 11 March 2010 16:16, Ben Chobot wrote:
>
>> I *can* say a 10GB shared_buffer value is working "well" with my 128GB of
>> RAM. whether or not it's "optimal," I couldn't say without a lot of
>> experimentation I can't afford to do righ
On 11 March 2010 16:16, Ben Chobot wrote:
> I *can* say a 10GB shared_buffer value is working "well" with my 128GB of
> RAM. whether or not it's "optimal," I couldn't say without a lot of
> experimentation I can't afford to do right now. You might have a look at the
> pg_buffercache contri
Hi there,
I'm after a little bit of advice on the shared_buffers setting (I have
read the various docs on/linked from the performance tuning wiki page,
some very helpful stuff there so thanks to those people).
I am setting up a 64bit Linux server running Postgresql 8.3, the
server has 64gigs of m
On Mar 11, 2010, at 12:39 AM, Greg Smith wrote:
>
> Giving all the buffers to the database doesn't work for many reasons:
> -Need a bunch leftover for clients to use (i.e. work_mem)
> -Won't be enough OS cache for non-buffer data the database expects
> cached reads and writes will perform well o
Paul McGarry wrote:
IE when Postgres reads something from disk it will go into both the OS
page cache and the Postgresql shared_buffers and the OS page cache
copy is unlikely to be useful for anything.
That's correct. However, what should happen over time is that the
popular blocks in Post
On Mar 10, 2010, at 6:22 PM, Paul McGarry wrote:
> Hi there,
>
> I'm after a little bit of advice on the shared_buffers setting (I have
> read the various docs on/linked from the performance tuning wiki page,
> some very helpful stuff there so thanks to those people).
>
> I am setting up a 64bit
Hi there,
I'm after a little bit of advice on the shared_buffers setting (I have
read the various docs on/linked from the performance tuning wiki page,
some very helpful stuff there so thanks to those people).
I am setting up a 64bit Linux server running Postgresql 8.3, the
server has 64gigs of m
"George Sexton" wrote:
> Could someone please explain what the role of shared buffers is?
This Wiki page might be useful to you:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
The short answer (from that page) is:
"The shared_buffers configuration parameter determines how
I'm reading the docs for 8.4.2, section 18.4.1 Memory. I'm trying to figure
out what reasonable values for my usage would be.
The doc says:
shared_buffers (integer)
Sets the amount of memory the database server uses for shared memory
buffers.
While circular definitions are always right, I'm
On Apr 14, 2008, at 3:31 PM, Tom Lane wrote:
Gregory Stark writes:
The transition domain where performance drops dramatically as the database
starts to not fit in shared buffers but does still fit in filesystem cache.
It looks to me like the knee comes where the DB no lo