Hello again,
thanks for all the quick replies.
It seems I wasn't entirely correct in my previous post; I mixed up
some of the times/numbers.
Below are the correct numbers:
MSSQL: SELECT COUNT(*) from JOIN (without insert): 17 minutes
PostgreSQL: SELECT COUNT(*) from JOIN (without insert): 33 minutes
Hi Nicky,
Did you try creating an index to avoid the sequential scans?
Seq Scan on src_faktuur_verrsec t0...
I think you should try
CREATE INDEX src_faktuur_verrsec_codesubstr ON
src.src_faktuur_verrsec (substr(code, 1, 2));
Cheers
Sven.
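Sven's suggestion relies on the planner matching the query's expression against the index's expression exactly. As a self-contained sketch (using Python's bundled sqlite3 rather than PostgreSQL, with made-up table contents), an index on substr(code, 1, 2) can serve an equality filter on that same expression:

```python
import sqlite3

# Minimal stand-in for src.src_faktuur_verrsec: one table with a 'code' column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_faktuur_verrsec (id INTEGER PRIMARY KEY, code TEXT)")
conn.executemany(
    "INSERT INTO src_faktuur_verrsec (code) VALUES (?)",
    [("14AB",), ("15CD",), ("20XY",), ("21ZQ",)],
)

# Expression index on the first two characters of code, as Sven suggests.
conn.execute(
    "CREATE INDEX src_faktuur_verrsec_codesubstr "
    "ON src_faktuur_verrsec (substr(code, 1, 2))"
)

# The predicate must use the *same* expression for the index to be usable.
rows = conn.execute(
    "SELECT code FROM src_faktuur_verrsec WHERE substr(code, 1, 2) = '14'"
).fetchall()
print(rows)  # the single row whose code starts with '14'

# EXPLAIN QUERY PLAN shows whether the index was chosen (SQLite >= 3.9).
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT code FROM src_faktuur_verrsec WHERE substr(code, 1, 2) = '14'"
).fetchall()
print(plan)
```

SQLite's planner details differ from PostgreSQL's, but the matching rule is the same in both: rewrite the filter with any other expression and the index is no longer considered.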
nicky wrote:
Hello a
Hello Sven,
We have the following indexes on src_faktuur_verrsec
CREATE INDEX src_faktuur_verrsec_idx0
ON src.src_faktuur_verrsec
USING btree
(id);
CREATE INDEX src_faktuur_verrsec_idx1
ON src.src_faktuur_verrsec
USING btree
(substr(code::text, 1, 2));
C
> PostgreSQL elects not to use them, I assume because it most
> likely needs to traverse the entire table anyway.
>
> if I change: substr(t0.code,1,2) not in
> ('14','15','16','17')
> to (removing the NOT): substr(t0.code,1,2) in
> ('14','15','16','17')
>
> it uses the i
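Since the NOT IN predicate matches most of the table, an ordinary full-table index is of little help. One possibility (an assumption on my part, not something from the thread, sketched with Python's sqlite3 for a runnable illustration) is a partial index whose WHERE clause repeats the NOT IN condition, so the index contains only the few rows that survive the filter. PostgreSQL supports partial indexes as well, and its planner can use one when the query's predicate matches the index's predicate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_faktuur_verrsec (id INTEGER PRIMARY KEY, code TEXT)")
conn.executemany(
    "INSERT INTO src_faktuur_verrsec (code) VALUES (?)",
    [("14AB",), ("15CD",), ("16EF",), ("17GH",), ("20XY",), ("21ZQ",)],
)

# Partial index covering only the rows the NOT IN query wants.
conn.execute(
    "CREATE INDEX verrsec_code_not1417 ON src_faktuur_verrsec (code) "
    "WHERE substr(code, 1, 2) NOT IN ('14', '15', '16', '17')"
)

# The query predicate matches the index predicate, so scanning the small
# index can replace scanning the whole table.
rows = conn.execute(
    "SELECT code FROM src_faktuur_verrsec "
    "WHERE substr(code, 1, 2) NOT IN ('14', '15', '16', '17') "
    "ORDER BY code"
).fetchall()
print(rows)
```

The table name and data are placeholders; whether this actually beats a sequential scan on the real table depends on how selective the NOT IN filter is there.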
Hi Nick,
I'm not the best person to advise on how to get PostgreSQL to use an index
to get your results faster.
Did you try "not (substr(t0.code,1,2) in ('14','15','16','17'))"?
Cheers
Sven.
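For what it's worth, NOT (x IN (...)) and x NOT IN (...) are defined as the same predicate in SQL, so a planner should treat the two spellings identically. A quick equivalence check (Python's sqlite3 as a stand-in, with toy data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t0 (code TEXT)")
conn.executemany("INSERT INTO t0 (code) VALUES (?)",
                 [("14AB",), ("16EF",), ("20XY",), ("21ZQ",)])

# The two spellings of the same predicate.
q1 = ("SELECT code FROM t0 WHERE substr(code,1,2) NOT IN "
      "('14','15','16','17') ORDER BY code")
q2 = ("SELECT code FROM t0 WHERE NOT (substr(code,1,2) IN "
      "('14','15','16','17')) ORDER BY code")

r1 = conn.execute(q1).fetchall()
r2 = conn.execute(q2).fetchall()
print(r1 == r2, r1)  # True [('20XY',), ('21ZQ',)]
```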
Arjen van der Meijden
wrote:
Here is a graph of our performance measured on PostgreSQL:
http://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png
...
The "perfect" line is based on the "Max" value for 1 core and then just
multiplied by the number of cores to have
On 22-6-2006 15:03, David Roussel wrote:
Surely the 'perfect' line ought to be linear? If the performance was
perfectly linear, then the 'pages generated' ought to be G times the
number of (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the or
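The arithmetic behind a "perfect" linear line is a single multiplication: with gradient G equal to the single-core maximum, N cores should generate G * N pages, which forces the line through the origin. A tiny sketch (the single-core figure here is invented, not taken from Arjen's graph):

```python
# Hypothetical single-core throughput (pages/second); the real number
# would come from the "Max" measurement for 1 core in the graph.
single_core_max = 100.0

def perfect_throughput(cores: int) -> float:
    """Perfect linear scaling: gradient G = single_core_max, intercept 0."""
    return single_core_max * cores

# A perfectly linear line passes through the origin...
assert perfect_throughput(0) == 0.0
# ...and doubling the cores doubles the pages generated.
print([perfect_throughput(n) for n in (1, 2, 4, 8)])  # [100.0, 200.0, 400.0, 800.0]
```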
"soni de" <[EMAIL PROTECTED]> writes:
> I am getting following error while inserting a row into the "abc" table:
> *ERROR: fmgr_info: function 2720768: cache lookup failed*
What PG version is this? (I can tell from the spelling of the error
message that it's older than 7.4.) If it's pre-7.3 the
Arjen van der Meijden wrote:
First of all, this graph has no origin. It's a bit difficult to test with
less than one CPU.
Sure it does. I ran all the tests. They all took infinite time, and I got
zero results. And my results are 100% accurate and reliable. It's perfectly
valid data. :-)
C
Richard Frith-Macdonald <[EMAIL PROTECTED]> writes:
> I have a producer/consumer setup where various producer processes
> insert new records into a table and consumer processes mark those
> records as having been handled when they have dealt with them, but
> leave the records in the table so
Ron St-Pierre <[EMAIL PROTECTED]> writes:
> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
> RHEL, postgres 8.1) and ported our old database over to it (single cpu,
> 2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
> some queries are super slow. O
Peter Wilson <[EMAIL PROTECTED]> writes:
> I'd tweaked a couple of parameters in postgres.conf - the significant one I
> thought being random_page_cost, so I changed this back to default and did a
> 'service postgresql reload' - no difference, but I wasn't sure whether this
> could be changed vi
On Jun 21, 2006, at 13:08, Tom Lane wrote:
There are some reports in the archives of particular usage patterns
where they pretty much suck, because GetDomainConstraints() searches
pg_constraint every time it's called. We do what we can to avoid
calling that multiple times per query, but for som
On Jun 21, 2006, at 18:19, Josh Berkus wrote:
Well, current case-insensitivity hacks definitely aren't compatible
with LIKE as far as "begins with" indexes are concerned.
Yes, currently I use LOWER() for my indexes and for all LIKE, =, etc.
queries. This works well, but ORDER by of course
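The LOWER()-based hack being discussed can be sketched as follows (Python's sqlite3 as a stand-in for PostgreSQL, with invented table and column names): equality through lower() on both sides finds every capitalisation and can use an expression index on lower(name), though sorting on the raw column then follows binary order rather than a case-insensitive one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("Alice",), ("ALICE",), ("Bob",)])

# Expression index on lower(name): the usual case-insensitive-equality hack.
conn.execute("CREATE INDEX users_name_lower ON users (lower(name))")

# Equality through lower() on both sides matches every capitalisation.
# Note the ORDER BY on the raw column: binary order puts 'ALICE' before
# 'Alice', which is the kind of ordering surprise LOWER() doesn't fix.
rows = conn.execute(
    "SELECT name FROM users WHERE lower(name) = lower(?) ORDER BY name",
    ("alice",),
).fetchall()
print(rows)  # [('ALICE',), ('Alice',)]
```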
On Jun 21, 2006, at 19:24, Michael Glaesemann wrote:
I haven't seen it mentioned in this thread yet, but have you looked
at citext?
http://gborg.postgresql.org/project/citext/projdisplay.php
I don't have any experience with it, but perhaps it can do what
you're looking for.
Yes, I've see
How can I speed up the following query? It seems to run forever.
explain SELECT
bilkaib.DB,
CASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,
CASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS
db2objekt,
CASE WHEN dbkonto.objekt3='+' THEN bilkaib.DB3OBJEKT EL
"Andrus" <[EMAIL PROTECTED]> writes:
> How can I speed up the following query? It seems to run forever.
> explain SELECT
> bilkaib.DB,
> CASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,
> CASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS
> db2objekt,
> CASE
On Jun 21, 2006, at 5:53 PM, Ron St-Pierre wrote:
Jim C. Nasby wrote:
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM,
raid 4, RHEL, postgres 8.1) and ported our old database over to
it (single cpu,
RAID *4*?
"Meetesh Karia" <[EMAIL PROTECTED]> writes:
> ... But, once again we ran into the same
> situation where a query that normally executes in ~15ms wouldn't finish. As
> before, there were no ungranted locks and threads weren't waiting on a
> lock. I attached gdb to one of the stuck postgres process
Richard Frith-Macdonald <[EMAIL PROTECTED]> writes:
> What has confused me is why a query using an empty index should be
> slow, irrespective of the state of the table that the index applies to.
Is it actually empty, or have you just deleted-and-not-yet-vacuumed
all the rows in the index?
I had