On Tue, Apr 29, 2008 at 11:16 AM, Gauri Kanekar
<[EMAIL PROTECTED]> wrote:
> Andrew,
>
> Can you explain in detail why you said vacuum full is making things
> worse.
1. VACUUM FULL takes exclusive lock on the table. That makes table
unavailable for read/writes.
2. VACUUM FULL moves live tuples
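The locking difference being described can be sketched as a pair of statements (the table name is taken from the thread; this is an illustrative sketch, not a tuning recommendation):

```sql
-- Plain VACUUM: no exclusive lock; concurrent reads and writes on
-- "table1" keep working. Dead-tuple space is marked reusable via the
-- free space map, but the table file is not shrunk.
VACUUM ANALYZE table1;

-- VACUUM FULL: takes an ACCESS EXCLUSIVE lock, blocking all reads and
-- writes until it finishes, while it moves live tuples around to
-- compact and truncate the table, hence the downtime.
VACUUM FULL table1;
```

If plain VACUUM runs often enough, the table stops growing because new rows reuse the freed space, and the exclusive lock is never needed.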
On Tue, Apr 29, 2008 at 10:41 AM, Gauri Kanekar
<[EMAIL PROTECTED]> wrote:
> But unless we do full vacuum the space is not recovered. That's why we prefer
> full vacuum.
There is no point in recovering the space by moving tuples and
truncating the relation (that's what VACUUM FULL does) because you a
Andrew,
Can you explain in detail why you said vacuum full is making things
worse.
We do vacuum full, as VACUUM VERBOSE ANALYZE doesn't reclaim space for us.
~ Gauri
On Mon, Apr 28, 2008 at 9:52 PM, Andrew Sullivan <[EMAIL PROTECTED]>
wrote:
> On Mon, Apr 28, 2008 at 07:35:37PM +0530, Gauri Ka
"Gauri Kanekar" <[EMAIL PROTECTED]> writes:
> Vacuum requires an exclusive lock on "table1", and if any of the background or
> application processes is on, vacuum doesn't kick off. That's the reason we need to take the
> site down.
As has been pointed out to you repeatedly, "vacuum" hasn't required
exclusive lock si
On Tue, 29 Apr 2008, Gauri Kanekar wrote:
Basically we have some background process which updates "table1" and we
don't want the application to make any changes to "table1" during vacuum.
Vacuum requires an exclusive lock on "table1", and if any of the background or
application processes is on, vacuum doesn't kick
But unless we do a full vacuum the space is not recovered. That's why we prefer
full vacuum.
~ Gauri
On Tue, Apr 29, 2008 at 10:38 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Tue, 29 Apr 2008, Gauri Kanekar wrote:
>
> > Basically we have some background process which updates "table1" and we
> > don
Basically we have some background process which updates "table1" and we
don't want the application to make any changes to "table1" during vacuum.
Vacuum requires an exclusive lock on "table1", and if any of the background or
application processes is on, vacuum doesn't kick off. That's the reason we need to take the
site down.
On Mon, Apr 28, 2008 at 5:37 PM, Adonias Malosso <[EMAIL PROTECTED]> wrote:
> Thank you for the answer. Good to know about this enterprise DB feature.
No problem.
> I'll follow using pgloader.
That's fine. Though, I'd really suggest pg_bulkload, it's quite a bit faster.
--
Jonah H. Harris, Sr
[EMAIL PROTECTED] ("Gauri Kanekar") writes:
> We have a table "table1" which gets inserts and updates daily in high
> numbers, because of which its size is increasing and we have to vacuum
> it every other day. Vacuuming "table1" takes almost 30 min, and
> during that time the site is down. We need to
Jonah,
Thank you for the answer. Good to know about this enterprise DB feature.
I'll follow using pgloader.
Regards.
Adonias Malosso
On Sat, Apr 26, 2008 at 10:14 PM, Jonah H. Harris <[EMAIL PROTECTED]>
wrote:
> On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <[EMAIL PROTECTED]>
> wrote:
> >
Hello,
Executive summary:
Last year I wrote a database benchmark which simulates a forum.
It works on Postgres and MySQL.
It could be useful.
I have no time to touch this, so it is rotting on my hard drive.
Who wants to adopt it? I will put it on pgfoundry.
I can spend a few hours documenting
On Mon, Apr 28, 2008 at 9:38 AM, Gauri Kanekar
<[EMAIL PROTECTED]> wrote:
> All,
>
> We have a table "table1" which gets inserts and updates daily in high numbers,
> because of which its size is increasing and we have to vacuum it every
> other day. Vacuuming "table1" takes almost 30 min, and during th
On Mon, 28 Apr 2008 14:40:25 -0400
Gregory Stark <[EMAIL PROTECTED]> wrote:
> We certainly can pass TPC-C. I'm curious what you mean by 1/4 though?
> On similar hardware? Or the maximum we can scale to is 1/4 as large
> as Oracle? Can you point me to the actual benchmark runs you're
> referring t
Adonias Malosso wrote:
Hi All,
I'd like to know what's the best practice to LOAD a 70 million row, 101
column table
from ORACLE to PGSQL.
The current approach is to dump the data in CSV and then COPY it to
Postgresql.
Uhm. 101 columns you say? Sounds interesting. There are dataloaders
li
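For the dump-to-CSV approach above, the PostgreSQL side can be a single COPY per exported file; the table name and path here are illustrative:

```sql
-- Server-side COPY reads the file from the database host; use psql's
-- \copy variant instead if the CSV lives on the client machine.
COPY big_table FROM '/data/export/big_table.csv' WITH CSV;
```

Dropping indexes on the target table before the load and recreating them afterwards, as discussed elsewhere in the thread, usually speeds this up considerably.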
John Rouillard wrote:
We can't do this as we are backfilling a couple of months of data
into tables with existing data.
Is this a one-off data loading of historic data or an ongoing thing?
The only indexes we have to drop are the ones on the primary keys
(there is one non-primary key index
"Josh Berkus" <[EMAIL PROTECTED]> writes:
> Greg,
>
>> What I was referring to by "passing" TPC-E was the criteria for a conformant
>> benchmark run. TPC-C has, IIRC, only two relevant criteria: "95th percentile
>> response time < 5s" and "average response time < 95th percentile response
>> time".
You got the order slightly wrong, I guess.
1) hardware
Would only come first if your RAM is really too small, or you use RAID 5
on write-heavy tables, or what limits you is transaction fsync (hint:
8.3).
Adding RAM is cheap.
2) rewriting my queries and table structures
On Mon, 28 Apr 2008, John Rouillard wrote:
2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)
so I changed:
checkpoint_segments = 30
checkpoint_warning = 150
That's good, but you might go higher than 30 for a bulk loading operation
like this, particularly on 8.1
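On 8.1, the settings being discussed live in postgresql.conf; a hedged sketch follows (values illustrative, not tuned for this specific hardware):

```
checkpoint_segments = 64   # 16 MB of WAL each; default is 3, far too low for bulk loads
checkpoint_warning = 300   # seconds; log a warning when checkpoints come closer than this
```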
On Mon, Apr 28, 2008 at 06:53:09PM +0100, Heikki Linnakangas wrote:
> John Rouillard wrote:
> >We are running postgresql-8.1.3 under Centos 4
> You should upgrade, at least to the latest minor release of the 8.1
> series (8.1.11), as there has been a bunch of important bug and security
> fixes. O
John Rouillard wrote:
We are running postgresql-8.1.3 under Centos 4
You should upgrade, at least to the latest minor release of the 8.1
series (8.1.11), as there has been a bunch of important bug and security
fixes. Or even better, upgrade to 8.3, which has reduced the storage
size of espec
On Mon, 28 Apr 2008, Gauri Kanekar wrote:
We are doing vacuum full every other day. We also do vacuum analyze
very often. We are currently using version 8.1.3... Have already tried
all the options listed by you; that's why we reached the decision of
having a replication system.
Andrew Sull
Hi all:
We are loading in a number (100+) of sql files that are about 100M in
size. It takes about three hours to load the file. There is very
little load on the database other than the copy from operations.
We are running postgresql-8.1.3 under Centos 4 on a RAID 1/0 array
with 4 disks (so we h
[EMAIL PROTECTED] ("A B") writes:
> So, it is time to improve performance; it is running too slow.
> AFAIK (as a novice) there are a few general areas:
>
> 1) hardware
> 2) rewriting my queries and table structures
> 3) using more predefined queries
> 4) tweak parameters in the db conf files
>
> Of
A B wrote:
So, it is time to improve performance; it is running too slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweak parameters in the db conf files
Of these points:
1) is nothing I can do
On Mon, Apr 28, 2008 at 07:48:48PM +0530, Gauri Kanekar wrote:
> Slony doesn't do automatic failover. And we would appreciate a system with
> automatic failover :(
No responsible asynchronous system will give you automatic failover.
You can lose data that way.
A
--
Andrew Sullivan
[EMAIL PROTECT
On Mon, Apr 28, 2008 at 07:35:37PM +0530, Gauri Kanekar wrote:
> Peter,
>
> We are doing vacuum full every alternate day. We also do vacuum analyze very
> often.
VACUUM FULL is making your problem worse, not better. Don't do that.
> We are currently using 8.1.3 version.
You need immediately to
> 1) hardware
> 2) rewriting my queries and table structures
> 3) using more predefined queries
> 4) tweek parameters in the db conf files
>
> Of these points:
> 1) is nothing I can do about right now, but in the future perhaps.
> 2) will be quite hard right now since there is more code than
I have completed the benchmarks I made with Postgres, which I talked about
some weeks ago on the postgres performance mailing list (see the shared_buffers post).
At the following link you can find a doc that contains the generated graphs.
http://www.mediafire.com/?lk4woomsxlc
Regards
Gaetano Mendola
Greg,
What I was referring to by "passing" TPC-E was the criteria for a conformant
benchmark run. TPC-C has, IIRC, only two relevant criteria: "95th percentile
response time < 5s" and "average response time < 95th percentile response
time". You can pass those even if 1 transaction in 20 takes 10-
So, it is time to improve performance; it is running too slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweak parameters in the db conf files
Of these points:
1) is nothing I can do about right n
Salman,
Slony doesn't do automatic failover. And we would appreciate a system with
automatic failover :(
~ Gauri
On Mon, Apr 28, 2008 at 7:46 PM, salman <[EMAIL PROTECTED]>
wrote:
> Gauri Kanekar wrote:
>
> > Peter,
> >
> > We are doing vacuum full every alternate day. We also do vacuum analyze
That's one of the things to be done in the near future.
But it needs some changes from the application point of view :( ... so it just
got escalated for that reason.
But for now, which one would be a well-suited replication system?
~ Gauri
On Mon, Apr 28, 2008 at 7:43 PM, Brad Nicholson <[EMAIL PROTECTED]>
w
Gauri Kanekar wrote:
Peter,
We are doing vacuum full every other day. We also do vacuum analyze very
often.
We are currently using version 8.1.3.
Autovacuum is already on, but "table1" is so busy that autovacuum doesn't
get a sufficient chance to vacuum it :(.
Have already tried all the opt
On Mon, 2008-04-28 at 19:35 +0530, Gauri Kanekar wrote:
> Peter,
>
> We are doing vacuum full every other day. We also do vacuum
> analyze very often.
> We are currently using version 8.1.3.
> Autovacuum is already on, but "table1" is so busy that autovacuum
> doesn't get a sufficient chance
Peter,
We are doing vacuum full every other day. We also do vacuum analyze very
often.
We are currently using version 8.1.3.
Autovacuum is already on, but "table1" is so busy that autovacuum doesn't
get a sufficient chance to vacuum it :(.
Have already tried all the options listed by you, that
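On 8.1, per-table autovacuum tuning is done by inserting a row into the pg_autovacuum system catalog rather than with storage parameters; the thresholds below are purely illustrative, and the column layout should be checked against the 8.1 documentation:

```sql
-- Columns: target table, enabled, vacuum base threshold, vacuum scale
-- factor, analyze base threshold, analyze scale factor, cost delay,
-- cost limit. Lower thresholds make autovacuum visit "table1" sooner.
INSERT INTO pg_autovacuum
VALUES ('table1'::regclass, true, 200, 0.05, 100, 0.02, 10, 500);
```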
2008/4/28 Gauri Kanekar <[EMAIL PROTECTED]>:
> All,
>
> We have a table "table1" which gets inserts and updates daily in high
> numbers, because of which its size is increasing and we have to vacuum it every
> other day. Vacuuming "table1" takes almost 30 min, and during that time the
> site is down.
All,
We have a table "table1" which gets inserts and updates daily in high numbers,
because of which its size is increasing and we have to vacuum it every
other day. Vacuuming "table1" takes almost 30 min, and during that time the
site is down.
We need to cut down on this downtime. So we thought of havin
Gregory Stark wrote:
TPC-E has more stringent requirements which explicitly require very consistent
response times and I doubt 8.2 would have been able to pass them.
Sure it would. Just not for a very large scale factor ;-).
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
"Josh Berkus" <[EMAIL PROTECTED]> writes:
>> I think TPC-E will make both of these major improvements much more important.
>> I suspect it would be hard to get 8.2 to even pass TPC-E due to the
>> checkpoint
>> dropouts.
>
> You'd be surprised, then. We're still horribly, horribly lock-bound on
Hi,
On Sunday, April 27, 2008, Greg Smith wrote:
> than SQL*PLUS. Then on the PostgreSQL side, you could run multiple COPY
> sessions importing at once to read this data all back in, because COPY
> will bottleneck at the CPU level before the disks will if you've got
> reasonable storage hardwa
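The parallel-COPY idea can be sketched as follows, with each statement issued from its own psql connection so the loads overlap (file names are illustrative; the export would be split beforehand):

```sql
-- Concurrent COPYs into the same table don't conflict, since they
-- only insert rows.
-- connection 1:
COPY big_table FROM '/data/export/part_1.csv' WITH CSV;
-- connection 2:
COPY big_table FROM '/data/export/part_2.csv' WITH CSV;
-- connection 3:
COPY big_table FROM '/data/export/part_3.csv' WITH CSV;
```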