The following update was captured in the database log; the elapsed time
was 1058.956 ms. A later EXPLAIN ANALYZE shows a total run time of 730 ms.
But isn't the actual time to update the row 183 ms? Where is the
other 547 ms coming from? Updating the two secondary indexes?
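For what it's worth, a minimal sketch of how one could try to isolate where the
time goes (the table and column names below are made up, not taken from the
logged statement). Running the statement inside a transaction that is rolled
back lets it be repeated without changing data; EXPLAIN ANALYZE reports trigger
time separately, while index maintenance is folded into the statement's total
time:

BEGIN;
EXPLAIN ANALYZE
UPDATE mytable SET some_col = 'new value' WHERE id = 42;
ROLLBACK;

-- Listing the table's indexes shows how many have to be maintained per row:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'mytable';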
[EMAIL PROTECTED] (Tom Lane) wrote in
news:[EMAIL PROTECTED]:
> Denis <[EMAIL PROTECTED]> writes:
>> The following update was captured in the database log and the elapsed
>> time was 1058.956 ms. A later explain analyze shows total run time
>> of 730 ms. Althou
I've read all the posts in this thread, and as I understand it, in version 9.2
some patches were applied to improve pg_dump speed. I've just installed
PostgreSQL 9.2.1 and I still have the same problem. I have a database with
2600 schemas in it. I try to dump each schema individually, but it takes too
much
Tom Lane-2 wrote
> Denis <socsam@> writes:
>> I've read all the posts in this thread, and as I understood in version 9.2
>> some patches were applied to improve pg_dump speed. I've just installed
>> PostgreSQL 9.2.1 and I still have the same p
Tom Lane-2 wrote
> Denis <socsam@> writes:
>> Here is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it
>> takes from 10 to 15 seconds when I am doing a backup.
>
>> Sort (cost=853562.04..854020.73 rows=183478 width=219) (actual t
Tom Lane-2 wrote
> Denis <socsam@> writes:
>> Tom Lane-2 wrote
>>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's
>>> that you've got 183924 tables. That's going to take some time no matter
We have a web application where we create a schema or a database with a
number of tables in it for each customer. We now have about 2600 clients.
The problem we ran into using a separate DB for each client is that creating
a new DB can take up to 2 minutes, which is absolutely unacceptable. Using
s
Samuel Gendler wrote
> On Thu, Nov 8, 2012 at 1:36 AM, Denis <socsam@> wrote:
>
>>
>> P.S.
>> Not to start a holy war, but FYI: in a similar project where we used MySQL
>> we now have about 6000 DBs and everything works like a charm.
>>
>
Jeff Janes wrote
> On Thu, Nov 8, 2012 at 1:04 AM, Denis <socsam@> wrote:
>>
>> Still I can't understand why pg_dump has to know about all the tables?
>
> Strictly speaking it probably doesn't need to. But it is primarily
> designed for dump
Hello All.
I have a lot of tables and indexes in my database. I need to determine which
indexes are not used, or are used only seldom. I enabled all possible
statistics in the config but I can't understand how to do this.
Thanks.
P.S. For example, I need this to reduce database size and speed up backups.
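For what it's worth, a sketch of the kind of query that answers this from the
statistics collector (it assumes the stats have been accumulating long enough
to be representative; idx_scan counters go back to zero after pg_stat_reset()):

-- Indexes never scanned since the last stats reset, largest first.
-- Unique indexes are excluded because they enforce constraints even when unused.
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;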
Postgres also ships with pgbench, which is a simpler OLTP benchmark that I believe is similar to TPC-B.
--Denis Lussier
CTO
http://www.enterprisedb.com
On 7/21/06, Petronenko D.S. <[EMAIL PROTECTED]> wrote:
Hello, does anybody use OSDB benchmarks for Postgres? If not, which kind of benchma
l" case). I believe the impact was something around a 12% average slowdown for the handful of PLpgSQL functions we tested when this feature is turned on.
Would the community be potentially interested in this feature if we created a BSD Postgres patch of this feature for PLpgSQL (l
the ansi standard for stored procs allows for explicit
transaction control inside of a stored procedure?
--Luss
On 7/27/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Denis Lussier" <[EMAIL PROTECTED]> writes:
> Would the community be potentially interested in this feature if
the DBs you are choosing to test.
--Denis Lussier
CTO
http://www.enterprisedb.com
e simply means that if you buy a Platinum Subscription to our product, then you can keep the source code under your pillow and use it internally at your company however you see fit.
--Denis Lussier
CTO
http://www.enterprisedb.com
On 7/29/06, Luke Lonergan <[EMAIL PROTECTED]> wrote:
cores comparing it to a similarly equipped 1-socket AMD dual-core workstation. I'll keep the data size small to fit entirely in RAM so DBT2 isn't its usual disk-bound dog when you run it the "right" way (according to TPC-C guidelines).
--Denis, dweeb from EnterpriseDB
es are too busy to implement PITR until after a disaster strikes. I know that in the past I've personally been guilty of this on several occasions.
--Denis, EnterpriseDB (yeah, rah, rah...)
On 8/1/06, Merlin Moncure <[EMAIL PROTECTED]> wrote:
On 8/1/06, George Pavlov <[EMAIL PROTECTE
I was kinda thinking that making the block size configurable at initdb time would be a nice & simple enhancement for PG 8.3. My own personal rule of thumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH.
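For reference, the block size a given server build uses is exposed as a
read-only setting, so it can be checked with a one-liner:

SHOW block_size;  -- compiled-in page size in bytes, e.g. 8192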
I have no personal experience with XFS, but I've seen numerous internal edb-postg
of full POSIX compliance did cause some problems for configuring PITR.
--Denis http://www.enterprisedb.com
On 8/3/06, Chris Browne <[EMAIL PROTECTED]
> wrote: Of course, with a big warning sticker of "what is required for Oracle
to work properly is implemented, anything more is not
If the real-world applications you'll be running on the box are Java
(or use lots of prepared statements and no stored procedures)... try
BenchmarkSQL from pgFoundry. It's extremely easy to set up and use.
Like DBT2, it's an OLTP benchmark that is similar to TPC-C.
--Denis Lu
an they are not occasionally extremely useful. All hints are
effectively harmless/helpful suggestions; the planner is free to
ignore them if they are not feasible.
--Denis Lussier
Founder
http://www.enterprisedb.com
On 10/7/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Craig A. James&q
Hi all,
As the author of BenchmarkSQL and the founder of EnterpriseDB, I
can assure you that BenchmarkSQL was NOT written specifically for
PostgreSQL. It is intended to be a completely database-agnostic,
TPC-C-like, Java-based benchmark.
However, as Jonah correctly points out in painstaking
I'm a BSD license fan, but I don't know much about *BSD otherwise (except
that many advocates say it runs PG very nicely).
On the Linux side, unless you're a dweeb, go with a newer, popular & well
supported release for production. IMHO, that's RHEL 5.x or CentOS 5.x. Of
course the latest SLES & UB
should be changed first to improve speed?
* memory?
* ???
Thanks a lot for any advice (I know there are plenty of archived
discussions on this subject, but it's always difficult to know what's very
important and what's general as opposed to specific solutions).
Have a nice day!
Denis
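A sketch of how to inspect the memory-related settings that most tuning advice
starts with (which values are appropriate depends entirely on the machine and
the workload, so these are only the knobs, not recommendations):

SHOW shared_buffers;        -- shared cache for table and index pages
SHOW work_mem;              -- per-sort / per-hash memory for each operation
SHOW effective_cache_size;  -- planner's estimate of memory available for caching
SHOW maintenance_work_mem;  -- memory for VACUUM, CREATE INDEX, etc.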
Grzegorz Jaśkiewicz wrote:
>
>
> On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <dbuche...@hsolutions.ch> wrote:
>
> Dear all,
>
> I need to optimize a database used by approx. 10 people; I don't need to
> have the perfect config, s
> (although heavier near the start of the day).
Ok great, thanks for the advice, I added it at the end of the process...
Denis
Hello Greg,
Greg Smith wrote:
> On Wed, 28 Oct 2009, Denis BUCHER wrote:
>
>> For now, we only planned a VACUUM ANALYZE each night.
>
> You really want to be on a later release than 8.1 for an app that is
> heavily deleting things every day. The answer to most VACUUM
Perhaps making your select explicitly part of a read-only
transaction, rather than letting Java make use of an implicit
transaction (which may be in autocommit mode), would help.
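In plain SQL terms, a minimal sketch of what the driver would issue instead of
relying on the implicit autocommit transaction (the query itself is a made-up
placeholder; with JDBC the same effect comes from Connection.setAutoCommit(false)
together with Connection.setReadOnly(true)):

BEGIN TRANSACTION READ ONLY;
SELECT * FROM some_table WHERE id = 1;  -- stand-in for the application's query
COMMIT;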
On 11/30/09, Waldomiro wrote:
> Hi everybody,
>
> I have a Java application like this:
>
> while ( true ) {
> Thread.slee
Sounds more like a school project than a proper performance question.
On 11/28/09, Reydan Cankur wrote:
> Hi,
>
> I am trying to run postgresql functions with threads by using OpenMP.
> I tried to parallelize the slot_deform_tuple function (src/backend/access/
> common/heaptuple.c) and added below lin
Mathieu Nebra wrote:
Alexander Staubo wrote:
On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra wrote:
This "flags" table has more or less the following fields:
UserID - TopicID - LastReadAnswerID
We are doing pretty much the same thing.
My problem is that every time a user re
Is tsvector_update_trigger() smart enough to not bother updating a
tsvector if the text in that column has not changed?
If not, can I make my own update trigger with something like
if new.description != old.description
return tsvector_update_trigger('fti_all', 'pg_catalog.english',
Dimitri Fontaine wrote:
Hi,
On June 24, 2009 at 18:29, Alvaro Herrera wrote:
Oleg Bartunov wrote:
On Wed, 24 Jun 2009, Chris St Denis wrote:
Is tsvector_update_trigger() smart enough to not bother updating a
tsvector if the text in that column has not changed?
No, you should do the check
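A sketch of how that check can be written on releases that support WHEN clauses
on triggers (9.0+); the table name "docs" is an assumption, while the tsvector
and text column names are taken from the snippet above:

-- Only rebuild the tsvector when the source text actually changed.
CREATE TRIGGER docs_fti_update
BEFORE UPDATE ON docs
FOR EACH ROW
WHEN (old.description IS DISTINCT FROM new.description)
EXECUTE PROCEDURE
  tsvector_update_trigger(fti_all, 'pg_catalog.english', description);

-- INSERT has no OLD row, so it still needs its own unconditional trigger.
CREATE TRIGGER docs_fti_insert
BEFORE INSERT ON docs
FOR EACH ROW
EXECUTE PROCEDURE
  tsvector_update_trigger(fti_all, 'pg_catalog.english', description);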
ual time=19.635..39.824 rows=40018 loops=1)
   Recheck Cond: ((intarr1 && '{0,1}'::integer[]) AND (intarr2 && '{2,4}'::integer[]))
   -> Bitmap Index Scan on test_intarr1_intarr2_idx (cost=0.00..4.26 rows=1 width=0) (actual time=19.38
understanding of the PG internals to
massage pre-existing code... Feel free to message me off-list with pointers if
you think I might be able to help.
- Original Message -
> From: Tom Lane
> To: Denis de Bernardy
> Cc: "pgsql-performance@postgresql.org"
> Sent: Wednesda
> - Original Message -
>> From: Tom Lane
>> To: Denis de Bernardy
>> Cc: "pgsql-performance@postgresql.org"
>
>> Sent: Wednesday, May 4, 2011 4:12 PM
>> Subject: Re: [PERFORM] row estimate very wrong for array type
>>
>>
- Original Message -
> From: Josh Berkus
> To: postgres performance list
> Cc:
> Sent: Thursday, May 5, 2011 2:02 AM
> Subject: Re: [PERFORM] amazon ec2
> So memcached basically replaces the filesystem?
>
> That sounds cool, but I'm wondering if it's actually a performance
> speedup.
I might have misread, but:
> select * from connections where locked_by <> 4711
> union all
> select * from connections_locked where locked_by = 4711;
The first part will result in a seq scan irrespective of indexes, and the
second has no index on locked_by. The best you can do is to eliminate t
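A minimal sketch of the index that second branch is missing (table and column
names as they appear in the quoted query; whether it pays off depends on how
selective locked_by = 4711 is):

CREATE INDEX connections_locked_locked_by_idx
    ON connections_locked (locked_by);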
[big nestloop with a huge number of rows]
You're in an edge case, and I doubt you'll get things to run much faster: you
want the last 1k rows out of an 18M-row result set... It will be slow no matter
what you do.
What the plan is currently doing is going through these 18M rows using a
fo
- Forwarded Message -
>From: Denis de Bernardy
>To: Jens Hoffrichter
>Sent: Tuesday, June 28, 2011 12:59 AM
>Subject: Re: [PERFORM] Getting rid of a seq scan in query on a large table
>
>
>> Hash Cond: (posts.poster_id = posters.poster_id)
>
>>
The thread below has the test case that we were able to use
to reproduce the issue.
http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php
The last messages on this subject are from April of
2005. Have there been any successful ways to significantly reduce the
impact
Dumping a database which contains a table with a bytea
column takes approximately 25 hours and 45 minutes. The database has 26
tables in it. The other 25 tables take less than 5 minutes to dump, so almost
all of the time is spent dumping the bytea table.
prd1=# \d ybnet.ebook_master;
Tab
We are evaluating PostgreSQL for a typical data warehouse application. I
have 3 tables below that are part of a star schema design. The query listed
below runs in 16 seconds on Oracle 9.2 and 3+ minutes on PostgreSQL 7.3.3.
Here are the details.
I'm wondering what else can be done to tune this ty