Hi All,
We have installed Postgres 8.4.2 in production.
We have a partitioned table structure for one of the tables.
When I drop the master table, we get the following error:
drop table table_name cascade;
WARNING: out of shared memory
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
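In case it is useful: the usual remedy for that hint is to raise max_locks_per_transaction, since DROP ... CASCADE on a partitioned table takes a lock on every child table within one transaction. A minimal sketch, with an illustrative value rather than a recommendation:

-- each child table dropped by the CASCADE needs a slot in the shared lock table
SHOW max_locks_per_transaction;        -- 64 by default
-- then, in postgresql.conf (a server restart is required for this setting):
--   max_locks_per_transaction = 256   -- illustrative value only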
On Wed, Feb 24, 2010 at 4:31 PM, Dave Crooke wrote:
> This is a generic SQL issue and not PG specific, but I'd like to get
> an opinion from this list.
>
> Consider the following data:
>
> # \d bar
>        Table "public.bar"
>  Column | Type | Modifiers
> --------+------+-----------
On Tue, Mar 9, 2010 at 6:04 PM, Merlin Moncure wrote:
> On Tue, Mar 9, 2010 at 4:38 AM, Vidhya Bondre wrote:
> > Hi All,
> >
> > We have installed postgres 8.4.2 on production.
> >
> > We have a partitioned table structure for one of the tables.
> >
> > When I drop the master table, we get the
On Tue, Mar 9, 2010 at 4:38 AM, Vidhya Bondre wrote:
> Hi All,
>
> We have installed postgres 8.4.2 on production.
>
> We have a partitioned table structure for one of the tables.
>
> When I drop the master table, we get the following error:
>
> drop table table_name cascade;
> WARNING: out of shar
Do keep the postgres xlog on a separate ext2 partition for best
performance. Other than that, xfs is definitely a good performer.
Mike Stone
On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith wrote:
Scott Carey wrote:
For high sequential throughput, nothing is as optimized as XFS on Linux
yet. It has weaknesses elsewhere however.
When files are extended one page at a time (as postgres does)
fragmentation can be pretty high on
"Pierre C" wrote:
> Greg Smith wrote:
>> I'm curious what you feel those weaknesses are.
>
> Handling lots of small files, especially deleting them, is really
> slow on XFS.
> Databases don't care about that.
I know of at least one exception to that -- when we upgraded and got
a newer versio
Cool trick -- I didn't realise you could do this at the SQL level without
a custom max() written in C.
What I ended up doing for my app is just going with straight SQL that
generates the "key" tuples with a SELECT DISTINCT, and then has a dependent
subquery that does a very small index scan to
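For what it's worth, a minimal sketch of that shape of query, assuming a hypothetical table bar(key, ts) with an index on (key, ts) -- the column names here are illustrative, not from the original post:

-- SELECT DISTINCT produces one row per key; the dependent subquery then
-- only needs a very small index scan per key to fetch the maximum
SELECT k.key,
       (SELECT max(b.ts) FROM bar b WHERE b.key = k.key) AS latest_ts
FROM (SELECT DISTINCT key FROM bar) k;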
Pierre C wrote:
On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith wrote:
Scott Carey wrote:
For high sequential throughput, nothing is as optimized as XFS on
Linux yet. It has weaknesses elsewhere however.
When files are extended one page at a time (as postgres does)
fragmentation can
On Tue, 9 Mar 2010, Pierre C wrote:
On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith wrote:
Scott Carey wrote:
For high sequential throughput, nothing is as optimized as XFS on Linux
yet. It has weaknesses elsewhere however.
When files are extended one page at a time (as postgres does) fr
On Tue, Mar 9, 2010 at 8:27 AM, Vidhya Bondre wrote:
>>
>> are you using the same postgresql.conf? have you created more
>> partitions? using advisory locks?
>
> Yes, we are using the same conf files. In a week we create around 5 partitions.
> We are not using advisory locks.
>>
>> In any event, incre
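As an aside, one way to see how close the DROP gets to the shared lock table limit is to watch pg_locks from a second session while it runs; a rough sketch:

-- run from another session while the DROP ... CASCADE is in progress;
-- the shared lock table holds roughly
-- max_locks_per_transaction * (max_connections + max_prepared_transactions) entries
SELECT locktype, count(*)
FROM pg_locks
GROUP BY locktype
ORDER BY count(*) DESC;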
Dear PostgreSQL Creators, I am frequently using the PostgreSQL server to manage
data, but I am now stuck with a problem: deleting large objects works too
slowly. E.g., deleting 900 large objects of 1 MB each takes around
2.31 minutes. This dataset is not the largest one which I am w
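One common approach, offered here only as a sketch rather than as the list's advice, is to unlink the objects in a single transaction instead of committing after each one; my_lobs(lo_oid) below is a hypothetical table tracking the large object OIDs:

-- one transaction for the whole batch avoids per-object commit overhead
BEGIN;
SELECT lo_unlink(lo_oid) FROM my_lobs;   -- my_lobs and lo_oid are illustrative names
DELETE FROM my_lobs;
COMMIT;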
John KEA wrote:
> I am now stuck with a problem of deleting large objects;
> please give me some advice on what I have to do to improve the
> deleting time?
You've come to the right place, but we need more information to be
able to help. Please review this page and repost with the su
On Mar 8, 2010, at 11:00 PM, Greg Smith wrote:
> Scott Carey wrote:
>> For high sequential throughput, nothing is as optimized as XFS on Linux yet.
>> It has weaknesses elsewhere however.
>>
>
> I'm curious what you feel those weaknesses are. The recent addition of
> XFS back into a more ma
On Mar 9, 2010, at 4:39 PM, Scott Carey wrote:
>
> On Mar 8, 2010, at 11:00 PM, Greg Smith wrote:
>
> * At least with CentOS 5.3 and their xfs version (non-Redhat, CentOS extras)
> sparse random writes could almost hang a file system. They were VERY slow.
> I have not tested since.
>
Ju
Scott Carey wrote:
I'm also not sure how up to date RedHat's xfs version is -- there have been
enhancements to xfs in the kernel mainline regularly for a long time.
They seem to follow SGI's XFS repo quite carefully and cherry-pick
bug fixes out of there; not sure how that relates