On Sat, Aug 19, 2017 at 10:37 AM, anand086 wrote:
Your email is very hard to read; the formatting and line wrapping are
heavily mangled. You might want to attach the plans as file attachments
instead of, or in addition to, putting them in the body.
> -> Index Only Scan using ui_nkey_test_tab on
I think your query is a bit confusing and has many subqueries, so I tried
to simplify it.
If you can't have more than one import_num = 0 for the same login, try this:
SELECT count(*)
FROM test_tab tab1
LEFT JOIN test_tab tab2
ON tab1.login = tab2.login AND tab2.import_num = '0'
WHERE
tab2.login IS NULL
Do you have an index on the login column?
If not, try creating an index and taking off those DISTINCTs.
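For illustration, a minimal sketch assuming the test_tab table from the query above (index names are hypothetical):
-- plain index to support the join on login
CREATE INDEX test_tab_login_idx ON test_tab (login);
-- optionally, a partial index if the import_num = '0' filter is common:
CREATE INDEX test_tab_login_imp0_idx ON test_tab (login)
WHERE import_num = '0';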
On Mon, Aug 21, 2017 at 3:33 PM, Justin Pryzby
wrote:
> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:
> [mangled quoted EXPLAIN output omitted]
Any thoughts on this?
I am a Postgres newbie and trying to learn. :) We have a scenario wherein one
of our SQL statements shows a different execution plan depending on the input
value for import_num. As an example, with import_num = '4520440' the execution
plan shows a Nested Loop and takes ~12 secs. With import_num = '4520460'
exec
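A usual way to investigate a plan that flips between parameter values is to compare the planner's estimates for each value; a sketch, assuming the test_tab table named earlier in the thread:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM test_tab WHERE import_num = '4520440';
-- compare estimated vs. actual row counts, then inspect the column stats:
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'test_tab' AND attname = 'import_num';
-- if the estimates are far off, a larger statistics target may help:
ALTER TABLE test_tab ALTER COLUMN import_num SET STATISTICS 1000;
ANALYZE test_tab;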
On Mon, Mar 06, 2017 at 12:17:22PM +0000, Dinesh Chandra 12108 wrote:
> Below is the output of Query SELECT * FROM pg_stats WHERE tablename='point'
> AND attname='domain_class_id' ;
>
>
> schemaname | tablename | attname | inherited | null_frac | avg_width
> | n_distinct | most_common_vals
On Mon, Mar 6, 2017 at 7:20 AM, Piotr Gasidło wrote:
> We are having some performance issues after we upgraded to the newest
> version of PostgreSQL; before that, everything was fast and smooth.
>
> Upgrade was done by pg_upgrade from 9.4 directly to 9.6.1. Now we
> upgraded to 9.6.2 with no improvement.
We are having some performance issues after we upgraded to the newest
version of PostgreSQL; before that, everything was fast and smooth.
Upgrade was done by pg_upgrade from 9.4 directly to 9.6.1. Now we
upgraded to 9.6.2 with no improvement.
Some information about our setup: Freebsd, Solaris (SmartOS),
Database Administrator (Oracle/PostgreSQL) | Cyient Ltd. Noida.
-----Original Message-----
From: Justin Pryzby [mailto:pry...@telsasoft.com]
Sent: 06 March, 2017 10:54 AM
To: Dinesh Chandra 12108
Cc: Nur Agus ; Jeff Janes ;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issue in PostgreSQL
On Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:
> On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108
> wrote:
> > The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> > feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence
> > oe ON p.feature_id =
On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <
dinesh.chan...@cyient.com> wrote:
> Dear Nur,
>
>
>
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT
> feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence
> oe ON p.feature_id = oe.evd_feature_id WHERE
Dinesh Chandra 12108 writes:
> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id
> FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON
> p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND
> (p.modification_time > '2015-05-10 00:06:5
,Noida-Dadri Road, Noida - 201 305,India.
From: Nur Agus [mailto:nuragus.li...@gmail.com]
Sent: 03 March, 2017 5:54 PM
To: Dinesh Chandra 12108
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issue in PostgreSQL server...
Hello Dinesh,
You can try the EXPLAIN tool
psql=>
Hello Dinesh,
You can try the EXPLAIN tool
psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p
INNER JOIN evidence.observation_evidence oe ON p.feature_id =
oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time
> '2015-05-10 00:06:56.056 IST' OR oe.modific
Dear Experts,
I need your suggestions to resolve a performance issue reported on our
PostgreSQL 9.1 production database, which is 1.5 TB in size. I have observed
that some SELECT queries with an ORDER BY clause are taking a lot of time to
execute, forcing applications to give slow responses.
The co
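For slow ORDER BY queries, one common first check is whether an index can supply the ordering directly; a hypothetical sketch (table and column names are illustrative, not from the original report):
-- without a matching index, Postgres must sort the rows it fetches
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC LIMIT 50;
-- a composite index can return the rows already in the requested order:
CREATE INDEX orders_customer_created_idx ON orders (customer_id, created_at DESC);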
Thanks all for the explanation!
On Mon, Jan 2, 2017 at 9:36 PM, Tom Lane wrote:
> Андрей Хозов writes:
> > create table t1 (id serial, str char(32));
>
> > create function f1(line text) returns void as $$
> > begin
> > perform * from t1 where str = line;
> > end;
> > $
Андрей Хозов writes:
> create table t1 (id serial, str char(32));
> create function f1(line text) returns void as $$
> begin
> perform * from t1 where str = line;
> end;
> $$ language plpgsql;
This query is specifying a text comparison (text = text operator).
Si
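One common fix, sketched against the schema quoted above (the thread's actual resolution is cut off here): make the parameter type match the column type, so the comparison can use an index on str:
create index t1_str_idx on t1 (str);
-- f2 is a hypothetical variant of f1 with a matching parameter type
create function f2(line char(32)) returns void as $$
begin
  -- char(32) = char(32) comparison can use the index on t1.str
  perform * from t1 where str = line;
end;
$$ language plpgsql;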
Hi
2017-01-02 15:34 GMT+01:00 Андрей Хозов :
> Hello there!
>
> I have a performance issue with functions and argument types.
>
> Table and data:
> create table t1 (id serial, str char(32));
> insert into t1 (str) select md5(s::text) from generate_series(1, 100)
> as s;
>
> And simple functions:
Hello there!
I have a performance issue with functions and argument types.
Table and data:
create table t1 (id serial, str char(32));
insert into t1 (str) select md5(s::text) from generate_series(1, 100)
as s;
And simple functions:
create function f1(line text) returns void as $$
begin
perform
On Wed, Nov 30, 2016 at 6:26 AM, Marc-Olaf Jaschke
wrote:
> Hi,
>
> I have a performance issue with bitmap index scans on huge amounts of big
> jsonb documents.
>
>
> = Background =
>
> - table with big jsonb documents
> - gin index on these documents
> - queries using index conditions wi
Thanks for the explanation!
Best Regards,
Marc-Olaf
Marc-Olaf Jaschke · Software Developer
shopping24 GmbH
Werner-Otto-Straße 1-7 · 22179 Hamburg
Phone: +49 (0) 40 6461 5830 · Fax: +49 (0) 40 64 61 7879
marc-olaf.jasc...@s24.com · www.s24.com
AG Hamburg HRB 63371
represented by Dr. Björn Schä
> big_jsonb @> '[{"x": 1, "filler": "cfcd208495d565ef66e7dff9f98764da"}]';
> I wonder why bitmap heap scan adds such a big amount of time on top of the
> plain bitmap index scan.
> It seems to me that the recheck is active although all blocks are exact
> [1] and that pg is loading the jsonb for
Hi,
I have a performance issue with bitmap index scans on huge amounts of big jsonb
documents.
= Background =
- table with big jsonb documents
- gin index on these documents
- queries using index conditions with low selectivity
= Example =
select version();
> PostgreSQL 9
In Postgres 9.5.0 I have a partitioned table that collects data by month. I
tried the new Postgres feature of foreign table inheritance and pushed one
month of data to another Postgres server, so I got a foreign table. When I run
my query from my primary server, the query takes 7x more time to exe
On Tue, Aug 4, 2015 at 8:40 PM, Ram N wrote:
>
> Thanks much for responding, guys. I have tried both building multi-column
> indexes and GIST, with no improvement. I have reduced the window from 180
> days to 30 days and below are the numbers
>
> Composite index - takes 30 secs
>
> With Btree ind
Thanks much for responding, guys. I have tried both building multi-column
indexes and GIST, with no improvement. I have reduced the window from 180
days to 30 days and below are the numbers
Composite index - takes 30 secs
With Btree indexing - takes 9 secs
With GIST - takes >30 secs with kind
On Fri, Jul 31, 2015 at 10:55 AM, Ram N wrote:
>
> Thanks Qingqing for responding. That didn't help; it in fact increased the
> scan time. It looks like a lot more time is being spent on the Nested Loop
> join than on index lookups, though I am not sure how to optimize the join.
>
Good news is that optimiz
On Fri, Jul 31, 2015 at 3:06 PM, Matheus de Oliveira <
matioli.math...@gmail.com> wrote:
> CREATE INDEX ON table2 USING gin (tstzrange(start_date, end_date,
> '()'));
The index should be USING GIST, not GIN. Sorry.
--
Matheus de Oliveira
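A sketch of how the corrected index could be exercised, using the table and column names from the quoted query (assuming a.ts is timestamptz, as the literals suggest; the interval join has to be written as a range containment test for the GiST index to apply):
CREATE INDEX ON table2 USING gist (tstzrange(start_date, end_date, '()'));
SELECT count(a.id), a.ts, b.st
FROM table1 a
JOIN table2 b
  ON tstzrange(b.start_date, b.end_date, '()') @> a.ts
WHERE a.ts > '2015-01-01 20:50:44+00'
  AND a.ts < '2015-07-01 19:50:44+00'
GROUP BY a.ts, b.st
ORDER BY a.ts;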
On Thu, Jul 30, 2015 at 4:51 AM, Ram N wrote:
> select sum(a), count(id), a.ts, st from table1 a, table2 b where a.ts >
> b.start_date and a.ts < b.end_date and a.ts > '2015-01-01 20:50:44.00
> +00:00:00' and a.ts < '2015-07-01 19:50:44.00 +00:00:00' group by a.ts,
> st order by a.ts
Yo
Thanks Qingqing for responding. That didn't help; it in fact increased the
scan time. It looks like a lot more time is being spent on the Nested Loop
join than on index lookups, though I am not sure how to optimize the join. I am
assuming it's an in-memory join, so I am not sure why it should take such a lot
of
On Thu, Jul 30, 2015 at 12:51 AM, Ram N wrote:
> " -> Index Scan using end_date_idx on public.table2 b
> (cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274
> rows=403936 loops=181)"
> "Output: b.serial_no, b.name, b.st, b.end_date, b.a,
> b.sta
Hi,
I am trying to see if I can do anything to optimize the following plan.
I have two tables and I am doing a join between them. After joining, it
calculates aggregates (sum and count).
Table 1: timestamp (one per day) for 2 years (730 records)
Table 2: window-based validity records. Window here
Thanks for the feedback, everybody.
I spent a couple of days trying to optimise this.
As mentioned, the increased memory is not an option for me, as this query
is part of a report that can be run by any user on an ad hoc basis.
Allocating the required memory to any session on demand is not feasi
On 08/28/2014 01:50 AM, gmb wrote:
> Can somebody please confirm whether aggregate functions such as GROUP BY
> should use indexes ?
Sometimes. In your case, the index has one more column than the GROUP
BY, which makes it less likely that Postgres will use it (since
depending on the cardinality
2014-08-28 14:29 GMT+03:00 gmb :
> Unfortunately , I don't have a lot of memory available ( 65 connections ,
> work_mem = 64MB in pg conf ).
>
You don't have to change cluster-wide settings here.
You can issue a `SET` command from your client right before running your
query; only your session will
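That is, a minimal sketch:
SET work_mem = '256MB';  -- affects only the current session
-- ... run the reporting query here ...
RESET work_mem;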
Thanks for these suggestions.
Unfortunately, I don't have a lot of memory available (65 connections,
work_mem = 64MB in pg conf).
>> I think the index will be of no help here, as (1) you're reading the whole
>> table anyway and (2) `amount` is not part of your index.
I did not think that the fie
On Thu, Aug 28, 2014 at 11:50 AM, gmb wrote:
> Can somebody please confirm whether aggregate functions such as GROUP BY
> should use indexes ?
Yes, if the planner deems it faster than other approaches. It can make
wrong choices for many reasons, but usually when your planner tunables
like random_
2014-08-28 12:08 GMT+03:00 gmb :
> GroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual
> time=4708.181..6688.699 rows=287268 loops=1)
> Buffers: shared read=23899, temp read=30974 written=30974
> -> Sort (cost=303425.31..306847.34 rows=1368812 width=48) (actual
> time=47
> Can you send `EXPLAIN (analyze, buffers)` for your query instead?
> It'll show exactly what's going on.
GroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual
time=4708.181..6688.699 rows=287268 loops=1)
Buffers: shared read=23899, temp read=30974 written=30974
-> Sort
2014-08-28 11:50 GMT+03:00 gmb :
> It seems as if the planner is not using the PRIMARY KEY as index which was
> my assumption.
>
Can you send `EXPLAIN (analyze, buffers)` for your query instead?
It'll show exactly what's going on.
--
Victor Y. Yegorov
Hi all
I have the following table with 10+ million records:
create table ddetail (
ddet_id serial,
co_id integer,
client_id integer,
doc_no varchar,
line_id integer,
batch_no integer,
amount NUMERIC ,
...,
constraint ddetail_pk PRIMARY KEY ( co_id , client_id , doc_no , line_id, ddet_id )
) ;
When doing
Jenish wrote:
> This server is the dedicated database server.
>
> And I am testing the limit for concurrent active users. When I
> run my test with 400 concurrent users, i.e. active connections,
> I get good performance, but when I run the
> same test with 950 concurr
On 27.6.2011 22:14, Jenish wrote:
> And I am testing the limit for concurrent active users. When I
> run my test with 400 concurrent users, i.e. active connections, I
> get good performance, but when I run the same test
> for 950 concurrent users I am getting very
Hi,
This server is the dedicated database server.
And I am testing the limit for concurrent active users. When I run
my test with 400 concurrent users, i.e. active connections, I get
good performance, but when I run the same test with 950
concurrent users I am getting
On 27.6.2011 17:01, Jenish wrote:
> Hi,
>
> DB : POSTGRES 8.4.8
> OS : Debian
> HD : SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
How much data are we talking about? Does that fit into the shared
buffers or is it significantly larger? Do the trigger
On 27.6.2011 17:58, Jenish wrote:
>
> Hi,
>
> I have already checked all the statements present in the trigger; none
> is taking more than 20 ms.
>
> I am using an 8-processor, quad-core server; CPU utilization is more than
> 90-95% for all cores. (htop result)
So all cores are 95% utilized? Tha
On Mon, Jun 27, 2011 at 9:22 AM, Jenish wrote:
> Hi All,
>
> I am facing a performance issue with inserts into a table.
>
> I am using postgres 8.4.x
>
> The table has 3 before-insert triggers and one after-insert trigger.
>
> With all triggers enabled it is inserting only 4-5 records per seco
Hi,
I have already checked all the statements present in the trigger; none is
taking more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is more than
90-95% for all cores. (htop result)
DB has 960 concurrent users.
io: writing 3-4 MB per second or less (iotop result).
Scen
> Hi,
>
> DB : POSTGRES 8.4.8
> OS : Debian
> HD : SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
>
> The after-insert trigger in turn calls 2 more triggers and inserts records
> into another table depending on a condition.
>
> with all triggers enabled there are 8 inser
Hi,
DB : POSTGRES 8.4.8
OS : Debian
HD : SAS 10k rpm
shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
The after-insert trigger in turn calls 2 more triggers and inserts records in
another table depending on a condition.
With all triggers enabled there are 8 inserts and 32 updates (app
Jenish wrote:
> I am using postgres 8.4.x
With x being what? On what OS and hardware?
> The table has 3 before-insert triggers and one after-insert
> trigger.
>
> With all triggers enabled it is inserting only 4-5 records per
> second.
>
> But if I disable the after-insert trigger it is able t
Hi All,
I am facing a performance issue with inserts into a table.
I am using postgres 8.4.x
The table has 3 before-insert triggers and one after-insert trigger.
With all triggers enabled it is inserting only 4-5 records per second.
But if I disable the after-insert trigger it is able to inser
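One way to see where the insert time goes (table and trigger names below are hypothetical): EXPLAIN ANALYZE on a DML statement reports the time spent in each trigger, and individual triggers can be toggled for testing:
BEGIN;
EXPLAIN ANALYZE INSERT INTO mytable (col1) VALUES ('x');
-- the output ends with one "Trigger <name>: time=... calls=..." line per trigger
ROLLBACK;
ALTER TABLE mytable DISABLE TRIGGER my_after_insert_trg;
-- ... re-run the insert benchmark ...
ALTER TABLE mytable ENABLE TRIGGER my_after_insert_trg;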
I have created two tables. In the first table I am using many fields to
store 3 addresses,
while in table b I am using an array data type to store the 3 addresses.
Would either approach face any performance issue, and which one
would cause the performance problem?
The array is interest
On Mon, Feb 14, 2011 at 5:36 AM, Pavel Stehule wrote:
> Hello
>
> 2011/2/14 dba :
>>
>>
>> create table a( address1 int,address2 int,address3 int)
>> create table b(address int[3])
>>
>> I have created two tables. In the first table I am using many fields to
>> store 3 addresses,
>> as well as in b
Hello
2011/2/14 dba :
>
>
> create table a( address1 int,address2 int,address3 int)
> create table b(address int[3])
>
> I have created two tables. In the first table I am using many fields to
> store 3 addresses,
> as well as in table b I am using an array data type to store the 3 addresses. Is
> there an
create table a( address1 int,address2 int,address3 int)
create table b(address int[3])
I have created two tables. In the first table I am using many fields to
store 3 addresses,
while in table b I am using an array data type to store the 3 addresses. Is
there any issue I would face in performance rel
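For comparison, a minimal sketch of how the two layouts are queried (names from the DDL above; index names are hypothetical):
-- separate columns: plain per-column predicates and indexes
SELECT * FROM a WHERE address1 = 42;
CREATE INDEX a_address1_idx ON a (address1);
-- array: positional access; each position needs its own expression index
SELECT * FROM b WHERE address[1] = 42;
CREATE INDEX b_address1_idx ON b ((address[1]));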
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Dave Dutcher
Sent: Wednesday, June 17, 2009 1:39 PM
To: 'Mark Steben'; pgsql-performance@postgresql.org
Cc: 'Rich Garabedian'
Subject: Re: [PERFOR
>We have two machines. Both running Linux Redhat, both running postgres 8.2.5.
>Both have nearly identical 125 GB databases. In fact we use PITR Recovery to
>replicate from one to the other.
I have to ask the obvious question. Do you regularly analyze the machine
you replicate to?
Dave
2009/6/17 Mark Steben :
> A few details – I can always provide more
Could you send:
1. Exact text of query.
2. EXPLAIN ANALYZE output on each machine.
3. VACUUM VERBOSE output on each machine, or at least the last 10 lines.
...Robert
Hi, sorry about the blank post yesterday - let's try again.
We have two machines. Both running Linux Redhat, both running postgres 8.2.5.
Both have nearly identical 125 GB databases. In fact we use PITR Recovery to
replicate from one to the other. The machine we replicate to runs a query
w
2008/2/5, Ansgar -59cobalt- Wiechers wrote:
> Your planner estimates are way off. Try increasing the statistics target
> for the columns used in this query and re-analyze the tables after doing
> so.
I first set STATISTICS to 1000 for column zoneindex_test and saw no
significant improvement (with
On Feb 5, 2008 12:47 PM, Viviane Lestic <[EMAIL PROTECTED]> wrote:
> Could someone help me analyze this problem?
> I don't manage to see if the problem comes from bad tsearch tuning,
> postgresql configuration, or something else...
Can you try to replace zoneindex_test @@ q with zoneindex_test @@
On 2008-02-05 Viviane Lestic wrote:
> QUERY PLAN
> -
> Sort (cost=2345.54..2345.58 rows=16 width=308) (actual
> time=270638.774..270643.142 row
Hi,
I'm having a performance problem with a query using Tsearch2: the
query strangely takes several minutes.
I've tried to follow the Tsearch tuning recommendations and I've searched
through the archives, but I can't seem to find a solution to my
problem.
The ts_vector field was created using d
Hi all,
I'm having a strange performance issue with two almost similar queries, the
one running as expected, the other one taking far more time. The only
difference is that I have "uniid in (10)" in the normally running query and
"uniid in (9,10)" in the other one. The number of rows resulting fro
Hi Guys,
Following Tom Lane's advice I upgraded to 8.2, and that solved all my
problems. :D
Thank you so much for your input, I really appreciate it.
Kind regards
Willo van der Merwe
---(end of broadcast)---
TIP 6: explain analyze is your fr
Willo van der Merwe <[EMAIL PROTECTED]> writes:
> I have something odd. I have Gallery2 running on PostgreSQL 8.1, and
> recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is
> how do I get PostgreSQL to work with their horrible code. The queries
> they generate look something li
In response to Willo van der Merwe <[EMAIL PROTECTED]>:
> Hi Guys,
>
> I have something odd. I have Gallery2 running on PostgreSQL 8.1, and
> recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is
> how do I get PostgreSQL to work with their horrible code. The queries
> they gen
Hi Guys,
I have something odd. I have Gallery2 running on PostgreSQL 8.1, and
recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is
how do I get PostgreSQL to work with their horrible code. The queries
they generate look something like :
SELECT blah, blah FROM table1, table2 WH
Looks like that was the problem - got a day under the belt now with the
8.2.4 rev and all is back to normal.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Karl Denninger wrote:
Aha!
BIG difference. I won't know for sure until the biz day tomorrow but
the "first blush" look
"Karl Denninger" <[EMAIL PROTECTED]> writes:
> Not sure where to start here. It appears that I'm CPU limited and the problem
> may be that this is a web-served application that must connect to the Postgres
> backend for each transaction, perform its queries, and then close the
> connection down -
Karl Denninger wrote:
> I've got an interesting issue here that I'm running into with 8.2.3
>
> This is an application that has run quite well for a long time, and has
> been operating without significant changes (other than recompilation)
> since back in the early 7.x Postgres days. But now we'r
Aha!
BIG difference. I won't know for sure until the biz day tomorrow but
the "first blush" look is that it makes a HUGE difference in system
load, and I no longer have the stats collector process on the top of the
"top" list..
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Karl Denninger <[EMAIL PROTECTED]> writes:
> Hm, now that's interesting. The stats collector IS accumulating
> quite a bit of runtime. Methinks it's time to go grab 8.2.4.
I think Merlin might have nailed it --- the "stats collector bug" is
that it tries to write out the stats file way m
Hm, now that's interesting. The stats collector IS accumulating
quite a bit of runtime. Methinks it's time to go grab 8.2.4.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Merlin Moncure wrote:
On 7/25/07, Karl Denninger <[EMAIL PROTECTED]> wrote:
Yeah, the problem
On 7/25/07, Karl Denninger <[EMAIL PROTECTED]> wrote:
Yeah, the problem doesn't appear to be there. As I said, if I look at the
PS of the system when it's bogging, there aren't a whole bunch of processes
stuck doing these, so while this does take a second or two to come back,
that's not that ba
Yeah, the problem doesn't appear to be there. As I said, if I look at
the PS of the system when it's bogging, there aren't a whole bunch of
processes stuck doing these, so while this does take a second or two to
come back, that's not that bad.
It's GENERAL performance that just bites - the syst
Karl Denninger <[EMAIL PROTECTED]> writes:
> But here's the query that has a habit of taking the most time
> select forum, * from post where toppost = 1 and (replied > (select
> lastview from forumlog where login='theuser' and forum=post.forum and
> number is null)) is not false AND (rep
I've got an interesting issue here that I'm running into with 8.2.3
This is an application that has run quite well for a long time, and has
been operating without significant changes (other than recompilation)
since back in the early 7.x Postgres days. But now we're seeing a LOT
more load tha
Hi! Thanks for the quick help. I managed to reduce the response time from seemingly taking infinity to a tolerable level, which is by the way a huge improvement. What helped was the functional index on date_trunc('day', yourfieldhere) as Scott suggested.
I tried to disable the geqo, but it didn't make
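The index being described would look roughly like this (table name is hypothetical; note that date_trunc is only immutable, and therefore indexable, on timestamp without time zone):
CREATE INDEX mytable_day_idx ON mytable (date_trunc('day', yourfieldhere));
-- a predicate written with the same expression can then use the index:
SELECT count(*) FROM mytable
WHERE date_trunc('day', yourfieldhere) = '2006-07-27 00:00:00';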
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Joshua D. Drake wrote:
>> Enterprises are not going to compile. They are going to accept the
>> latest support by vendor release.
>>
>> Redhat has a tendency to be incredibly stupid about this particular
>> area of their packaging.
> Stupid how?
Re
Joshua D. Drake wrote:
> >>
> >
> >try turning off genetic query optimization. regarding the rhel4
> >issue...does rhel not come with a c compiler? :)
>
> Enterprises are not going to compile. They are going to accept the
> latest support by vendor release.
>
> Redhat has a tendency to be incr
try turning off genetic query optimization. regarding the rhel4
issue...does rhel not come with a c compiler? :)
Enterprises are not going to compile. They are going to accept the
latest support by vendor release.
Redhat has a tendency to be incredibly stupid about this particular
area
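For reference, the GEQO suggestion amounts to a session-level setting; a sketch:
SET geqo TO off;          -- disable genetic query optimization for this session
-- or, rather than disabling it outright, raise the join-count threshold:
SET geqo_threshold = 20;  -- GEQO is used only above this many FROM items (default 12)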
On 7/27/06, Eliott <[EMAIL PROTECTED]> wrote:
Hi!
I hope I'm sending my question to the right list, please don't flame if it's
the wrong one.
I have noticed that while a query runs in about 1.5 seconds on an 8.x version
postgresql server, on our 7.4.13 it takes around 15-20 minutes. Since we are
On Thu, 2006-07-27 at 09:23, Eliott wrote:
> Hi!
>
> I hope I'm sending my question to the right list, please don't flame
> if it's the wrong one.
>
> I have noticed that while a query runs in about 1.5 seconds on an 8.x
> version postgresql server, on our 7.4.13 it takes around 15-20 minutes.
> Si
Hi! I hope I'm sending my question to the right list, please don't flame if it's the wrong one. I have noticed that while a query runs in about 1.5 seconds on an 8.x version postgresql server, on our 7.4.13 it takes around 15-20 minutes. Since we are using RHEL4 on our server we are stuck with
7.4.13.
To: grupos
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] PERFORMANCE ISSUE ODBC x LIBPQ C++ Application
>
> > Hi !
> >
> > My company is evaluating to compatibilizate our system (developed in
> > C++) to PostgreSQL.
> >
>
> Hi !
>
> My company is evaluating making our system (developed in
> C++) compatible with PostgreSQL.
>
> Our programmer ran a lot of tests and he informed me that the
> performance using ODBC is very similar to using libpq, even with a big
> number of simultaneous connections/queries. Of cour
Hi !
My company is evaluating making our system (developed in
C++) compatible with PostgreSQL.
Our programmer ran a lot of tests and he informed me that the
performance using ODBC is very similar to using libpq, even with a big
number of simultaneous connections/queries. Of course that for
> 20,000 items    takes on average 0.078 seconds/room
> 385,000 items   takes on average 0.11 seconds/room
> 690,000 items   takes on average 0.270 seconds/room
> 1,028,000 items takes on average 0.475 seconds/room
>
> As can be seen the time taken to process each room increas
> My statistics (Athlon 1.8GHz):
>
> 20,000 items    takes on average 0.078 seconds/room
> 385,000 items   takes on average 0.11 seconds/room
> 690,000 items   takes on average 0.270 seconds/room
> 1,028,000 items takes on average 0.475 seconds/room
[snip]
> I a
get rid of any unnecessary indexes?
i've found that dropping indexes and re-creating them isn't usually worth the
effort
mount the disk with the noatime option, which saves you the time involved in
updating the last access time on files
make sure you're doing all the inserts in one transaction...
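A sketch of the single-transaction point (table and column names are hypothetical):
-- one transaction amortizes the per-commit overhead across all inserts
BEGIN;
INSERT INTO item (room_id, name) VALUES (1, 'chair');
INSERT INTO item (room_id, name) VALUES (1, 'desk');
-- ... remaining items ...
COMMIT;
-- for large batches, COPY is faster still (rows follow, ended by \.):
COPY item (room_id, name) FROM STDIN;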
Peter,
One possibility is to drop all the indexes, do the insert and re-add the
indexes.
The more indexes that exist and the more rows that exist, the more costly
the insert.
Regards,
Joseph
At 05:48 PM 9/24/2003 +1200, peter wrote:
Hello,
I have been trying to get my Postgres database to d
Hello,
I have been trying to get my Postgres database to do faster inserts.
The environment is basically a single user situation.
The part that I would like to speed up is when a User copies a Project.
A Project consists of a number of Rooms (say 60). Each room contains a
number of items.
A proje