You are right ... How embarrassing ... Why did I not see this?
I will change the index and check again. I guess that the problem should be
fixed then.
Thanks a lot!
Christian
Deriva GmbH Financial IT and Consulting
Christian Schröder
ull and reindexed.
Can anybody explain the difference? Why is the current table so slow? And what
can we do to improve performance?
Thanks for your help,
Christian
----
Deriva GmbH Financial IT and Consulting
Christian Schröder
Gesc
deriva.de
D-37079 Göttingen
Amtsgericht Göttingen | HRB 3240
Geschäftsführer: Dirk Baule, Christian Schröder
Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer
t of memory
error occurs.
Interestingly, if we change the type specification in the call to
"spi_prepare" from "isin" to "char(12)" the problem no longer occurs.
Can you explain this behavior?
Regards,
Christian
Hi list,
I have the following query:
SELECT *
FROM base
INNER JOIN pt USING (x)
WHERE pt.y IN ('1121', '11411', '11421', '1161', '1162');
"explain analyze" yields the following result:
QUERY PLAN
Tom Lane wrote:
I've applied a patch for this. It will be in 8.3.8, or if you're
in a hurry you can grab it from our CVS server or here:
Thanks a lot for your effort and the quick response!
Regards,
Christian
Grzegorz Jaśkiewicz wrote:
On Wed, Apr 15, 2009 at 1:25 PM, Simon Riggs wrote:
The cost of the query seems accurate, so the absence of
attachment_isins_attachment_idx on the 8.3 plan looks to be the reason.
There's no way it would choose to scan 8115133 rows on the pkey if the
other index was available.
Grzegorz Jaśkiewicz wrote:
set work_mem=24000; before running the query.
postgres is doing merge and sort on disc, that's always slow.
Ok, but why is the plan different in 8.2? As you can see the same query
is really fast in 8.2, but slow in 8.3.
is there an index on column isin ?
There
Hi list,
we have just migrated one of our databases from 8.2.12 to 8.3.7. We now
experience a strange problem: A query that was really fast on the 8.2
server is now much slower on the 8.3 server (1 ms vs. 60 sec). I had a
look at the query plan and it is completely different. Both servers run
Scott Marlowe wrote:
you can run out of memory if too many connections try to use too much
of it at the same time, that's why it is advisable to set work_mem per
connection/query, should the connection/query require more.
Definitely.
I understand why this is advisable; however, is there something that shows
for the individual connections/queries how much memory they consume?
This would help to find out if the value can be changed without running
out of memory.
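As a sketch of what "per connection/query" can look like in practice (the values below are only examples, not recommendations):

-- raise work_mem for the current session only
SET work_mem = '24MB';

-- or for a single query, by scoping the change to one transaction
BEGIN;
SET LOCAL work_mem = '64MB';
SELECT count(*) FROM pg_class;   -- stand-in for the memory-hungry query
COMMIT;

SHOW work_mem;   -- back to the session value after COMMIT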
Regards,
Christian Schröder
Hi list,
if I want to find all records from a table that don't have a matching
record in another table there are at least two ways to do it: Using a
left outer join or using a subselect. I always thought that the planner
would create identical plans for both approaches, but actually they are
quite different.
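For reference, the two formulations in question, written against hypothetical tables a(id) and b(a_id):

CREATE TABLE a (id integer PRIMARY KEY);
CREATE TABLE b (a_id integer);

-- 1) left outer join, keeping only the rows without a match
SELECT a.*
FROM a LEFT JOIN b ON b.a_id = a.id
WHERE b.a_id IS NULL;

-- 2) subselect (NOT EXISTS)
SELECT a.*
FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);

(A NOT IN subselect would be yet a third variant with different NULL semantics, so it is not interchangeable with the two above.)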
Merlin Moncure wrote:
On 1/13/09, Christian Schröder wrote:
Hi list,
I have written a function that returns a setof record. The function has a
table name as a parameter and the resulting records have the same structure
as this table. Is there any easy way to specify this when I call the
Hi list,
I have written a function that returns a setof record. The function has
a table name as a parameter and the resulting records have the same
structure as this table. Is there any easy way to specify this when I
call the function? If the table has many columns then it's annoying to
specify all of them each time.
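Two common ways to deal with this, sketched with hypothetical names (the original function is not shown in this excerpt):

CREATE TABLE mytable (key1 char(12), key2 integer);   -- example table

-- stand-in for the generic function described above
CREATE OR REPLACE FUNCTION dump_table(tbl text) RETURNS SETOF record AS $$
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE 'SELECT * FROM ' || quote_ident(tbl) LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- a) SETOF record forces a column definition list on every call
SELECT * FROM dump_table('mytable') AS t(key1 char(12), key2 integer);

-- b) declaring the table's row type as the result avoids the column list,
--    at the price of losing the generic table-name parameter
CREATE OR REPLACE FUNCTION dump_mytable() RETURNS SETOF mytable AS $$
    SELECT * FROM mytable;
$$ LANGUAGE sql;

SELECT * FROM dump_mytable();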
Christian Schröder wrote:
When I join both tables using key1 and key2 there will be exactly
1630788 rows because for each row in table2 there *must* exist a row
in table1. But the query planner doesn't think so:
# explain analyze select * from table1 inner join table2 using (key1, key2)
Tom Lane wrote:
Hmph ... looks like plperl is shy a few SPI_push/SPI_pop calls.
I've applied a patch for this --- it'll be in the next set of update
releases.
Great. Thanks a lot! The whole PostgreSQL stuff is really amazing! :-)
Regards,
Christian
Hi list,
I have found the following problem: I have declared a domain datatype
with a check constraint. The check constraint uses a plpgsql function:
CREATE FUNCTION domain_ok(value integer) RETURNS boolean AS $$
BEGIN
    RETURN value > 0;
END;
$$ LANGUAGE plpgsql;
CREATE DOMAIN testdomain int CHECK (domain_ok(VALUE));
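Assuming the definition above, the domain can be exercised directly (hypothetical values):

SELECT 5::testdomain;   -- passes the check
SELECT 0::testdomain;   -- fails with a check constraint violation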
Christian Schröder wrote:
in our PostgreSQL 8.2.9 database I have these tables:
create table table1 (
key1 char(12),
key2 integer,
primary key (key1, key2)
);
create table table2 (
key1 char(12),
key2 integer,
key3 varchar(20),
primary key (key1, key2, key3),
Hi list,
I have experienced the following situation: A join between two tables
(one with ~900k rows, the other with ~1600k rows) takes about 20 sec on
our productive database. I have created two tables in our test database
with the same data, but with fewer fields. (I have omitted several
fields.)
Filip Rembiałkowski wrote:
create table table1 (
key1 char(12),
key2 integer,
primary key (key1, key2)
);
create table table2 (
key1 char(12),
key2 integer,
key3 varchar(20),
primary key (key1, key2, key3),
Hi list,
in our PostgreSQL 8.2.9 database I have these tables:
create table table1 (
key1 char(12),
key2 integer,
primary key (key1, key2)
);
create table table2 (
key1 char(12),
key2 integer,
key3 varchar(20),
primary key (key1, key2, key3),
Dilyan Berkovski wrote:
I am using PostgreSQL 8.2, and I am interested in creating a table B that inherits table A, but with all its data!
create table B {a int} inherits A, just adds the structure of table A, not its data.
PostgreSQL's inheritance works the other way around: if table B inherits
from A, then querying A also returns the rows of B; the child does not
automatically receive the parent's data.
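A short sketch of the two directions, using hypothetical tables rather than the poster's schema:

CREATE TABLE parent (a integer);
INSERT INTO parent VALUES (1);
INSERT INTO parent VALUES (2);

CREATE TABLE child () INHERITS (parent);
SELECT count(*) FROM child;    -- 0: the child does not receive the parent's data
INSERT INTO child VALUES (3);
SELECT count(*) FROM parent;   -- 3: querying the parent also scans the child

-- if the goal is simply a new table that starts out with the parent's data:
CREATE TABLE b_copy AS SELECT * FROM parent;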
Craig Ringer wrote:
If I'm wrong about any of this (which is not unlikely, really) then if
anyone else is "lucky" enough to be using Access with PostgreSQL and
knows of a better solution or workaround, please feel free to correct me.
We have been working with the combination of a PostgreSQL backend and an
Access front end for quite some time.
Scott Marlowe wrote:
Serial is a "pseudotype". It represents creating an int or bigint and
a sequence then assigning a default value for the column and setting
dependency in the db so the sequence will be dropped when the table
gets dropped. If you don't want to recreate the table, you can do this by hand.
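A sketch of doing it by hand, with hypothetical names (mytable/id) and assuming the column already exists as a plain integer:

CREATE TABLE mytable (id integer);     -- assumed starting point

CREATE SEQUENCE mytable_id_seq;
ALTER TABLE mytable ALTER COLUMN id
    SET DEFAULT nextval('mytable_id_seq');
-- tie the sequence to the column so it is dropped together with the table
ALTER SEQUENCE mytable_id_seq OWNED BY mytable.id;
-- start the sequence after the largest existing id
SELECT setval('mytable_id_seq', coalesce(max(id), 0) + 1, false) FROM mytable;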
Christian Schröder wrote:
we have a PostgreSQL 8.2 database on an xfs filesystem. By mistake the
first blocks of this filesystem were overwritten. An xfs_repair
reconstructed the superblock and also found many orphaned files and
directories. Actually, all we have on the filesystem now is in
Hi list,
we have a PostgreSQL 8.2 database on an xfs filesystem. By mistake the
first blocks of this filesystem were overwritten. An xfs_repair
reconstructed the superblock and also found many orphaned files and
directories. Actually, all we have on the filesystem now is in
"lost+found". ;-)
Tomasz Ostrowski wrote:
This is wrong. RAID5 is slower than RAID1.
You should go for RAID1+0 for fast and reliable storage. Or RAID0 for
even faster but unreliable.
I did not find a clear statement about this. I agree that RAID10 would
be better than RAID5, but in some situations RAID5 at lea
Christian Schröder wrote:
So I would like to use a faster disk for these temporary files, too,
but I could not find where the temporary files are located. Is there a
separate directory? I have found a "pgsql_tmp" directory inside of the
database directories ("base/<database oid>/pgsql_tmp").
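For reference, the database OID that appears in that path can be looked up like this (a sketch; the path assumes a default 8.2 data directory layout):

SELECT oid, datname FROM pg_database WHERE datname = current_database();
-- the temporary files of this database then live under
--   $PGDATA/base/<that oid>/pgsql_tmp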
Hi list,
I want to optimize the performance of our PostgreSQL 8.2 server. Up to
now the server has a raid1 where the whole database is located
(including the WAL files). We will now move the database to a raid5
(which should be faster than the raid1) and will also move the WAL to a
separate disk.
Tom Lane wrote:
Table accesses done by a view are checked according to the privileges
of the owner of the view, not of whoever invoked the view. It's a
bit inconsistent because function calls done in the view are not handled
that way (though I hope we change them to match, someday).
Phew, som
Albe Laurenz wrote:
User ts_frontend, the owner of the view ts_frontend.v_editors, does not
have the SELECT privilege on the underlying table public."EDITORS".
Because of that neither he nor anybody else can select from the view,
although ts_frontend is able to create the view.
Indeed, you are right.
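The corresponding fix is then a plain grant to the view's owner (object names taken from the thread):

GRANT SELECT ON public."EDITORS" TO ts_frontend;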
Albe Laurenz wrote:
One possibility I see is that there is more than one table
called "EDITORS" and they get confused.
What do you get when you
SELECT t.oid, n.nspname, t.relname
FROM pg_catalog.pg_class t
  JOIN pg_catalog.pg_namespace n ON t.relnamespace = n.oid
WHERE t.relname = 'EDITORS';
Hi list,
yesterday I moved our database from one server to another. I did a full
dump of the database and imported the dump into the new server. Since
then I have a strange problem which I cannot explain ...
I have a table public."EDITORS":
Table "public.EDITORS"
Michael Glaesemann wrote:
On Nov 17, 2007, at 3:53 , Christian Schröder wrote:
Unfortunately, the trick from the docs (chapter 4.2.12) using "case
... then" does not work inside an "if" statement (the "then" of the
"case" is interpreted as belonging
Hi list,
the logical operators "and" and "or" do not guarantee any evaluation order
(they are commutative), i.e. there is no
"short-circuiting". Especially when doing PL/pgSQL development it would
sometimes be very handy to have this short circuiting. Unfortunately,
the trick from the docs (chapter 4.2.12) using "case ... then" does not
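One workaround for the parsing problem described here is to evaluate the guarded expression via CASE into a variable first and only test that variable in the IF; a minimal sketch with a hypothetical function (not taken from the thread):

CREATE OR REPLACE FUNCTION is_positive_ratio(a numeric, b numeric) RETURNS boolean AS $$
DECLARE
    ok boolean;
BEGIN
    -- CASE guarantees that a / b is not evaluated when b = 0
    ok := CASE WHEN b <> 0 THEN a / b > 0 ELSE false END;
    IF ok THEN
        RETURN true;
    END IF;
    RETURN false;
END;
$$ LANGUAGE plpgsql;

SELECT is_positive_ratio(4, 2), is_positive_ratio(4, 0);   -- true, false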
Tom Lane wrote:
Hah, I've got it. (Should have searched Red Hat's bugzilla sooner.)
What you are hitting is a glibc bug, as explained here:
http://sources.redhat.com/ml/libc-hacker/2007-10/msg00010.html
If libpthread is loaded after first use of dcgettext, then subsequent
uses are at risk of hanging.
Tom Lane wrote:
I recompiled the server with debugging symbols enabled and then did the
following experiment: I started a query which I knew would take some
time. While the query executed I disconnected my dial-up line. After
reconnecting the backend process was still there (still SELECTing).
Tom Lane wrote:
Christian Schröder <[EMAIL PROTECTED]> writes:
Although I do not yet have any processes that are stuck inside a
statement, there are some that are idle, but do not respond to SIGINT or
even SIGTERM. Is this sufficient?
Dunno. Have you looked at thei
Tom Lane wrote:
OK. For the moment I confess bafflement. You had offered access
to your system to probe more carefully --- once you've built up
two or three stuck processes again, I would like to take a look.
Although I do not yet have any processes that are stuck inside a
statement, there are some that are idle, but do not respond to SIGINT or
even SIGTERM. Is this sufficient?
Tom Lane wrote:
I don't think you ever mentioned exactly what platform you're running
on; it seems to be some 64-bit Linux variant but you didn't say which.
The machine has two dual-core Xeon 5130 cpus. The os is openSUSE 10.2
(x86-64). The output of uname -a is:
Linux db2 2.6.18.8-0.7-de
Tom Lane wrote:
control has already returned from the kernel. What I think is that the
perl stuff your session has done has included some action that changed
the condition of the backend process ... exactly what, I have no idea.
Can you show us the plperl functions that were used in these sessions?
Tom Lane wrote:
Christian Schröder <[EMAIL PROTECTED]> writes:
I don't want to "kill -9" the processes because the last time
I did this the database was in recovery mode for a substantial amount of
time.
A useful tip on that: if you perform a manual CHECKPOINT just before
killing the processes, the subsequent recovery should be much shorter.
Hi all,
any news about this issue? Anything else that I can do to help you?
Meanwhile there are 4 connections in the same state. (I did not do the
whole investigation on all 4, but since they all do not respond on a
SIGINT I assume that they all have the same problem.)
It may also be interestin
Alvaro Herrera wrote:
Please try "thread apply all bt full" on gdb.
The first lines where the symbols are loaded are of course identical.
The output of the command is in my opinion not very helpful:
(gdb) thread apply all bt full
Thread 1 (Thread 47248855881456 (LWP 7129)):
#0 0x2af
Tom Lane wrote:
* The only place internal_flush would call errmsg is here:
ereport(COMMERROR,
(errcode_for_socket_access(),
errmsg("could not send data to client: %m")));
So why is it unable to send data to the client?
The user
Hi list,
once again I do not understand how the query planner works and why it
apparently does not find the best result.
I have a table with about 125 million rows. There is a char(5) column
with a (non-unique) index. When I try to find the distinct values in
this column using the following sql
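The statement itself is cut off above; as a sketch with a hypothetical stand-in table: on 8.2 a plain SELECT DISTINCT is typically planned as a sort plus unique step, whereas the equivalent GROUP BY can be planned as a HashAggregate, which avoids sorting the whole table when there are only a few distinct values:

CREATE TABLE bigtable (dist_col char(5));          -- stand-in for the 125-million-row table

SELECT DISTINCT dist_col FROM bigtable;            -- sort-based on 8.2
SELECT dist_col FROM bigtable GROUP BY dist_col;   -- may use HashAggregate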
Tom Lane wrote:
What we can be reasonably certain of is that that backend wasn't
reaching any CHECK_FOR_INTERRUPTS() macros. Whether it was hung up
waiting for something, or caught in a tight loop somewhere, is
impossible to say without more data than we have. AFAIR the OP didn't
even mention w
Tom Lane wrote:
Christian Schröder <[EMAIL PROTECTED]> writes:
I have a strange problem: When I connect to one of my databases, the
current_user immediately changes without any interaction from my side.
That's bizarre. Do you have anything in ~/.psqlrc? I'm also wo
Hi list,
I have a strange problem: When I connect to one of my databases, the
current_user immediately changes without any interaction from my side.
This is what I do:
[EMAIL PROTECTED]:~> psql -h db2 testdb
Welcome to psql 8.2.5, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
Tom Lane wrote:
Ok, you wrote "Postgres will recover automatically", but could this take
several minutes?
Yeah, potentially. I don't suppose you have any idea how long it'd been
since your last checkpoint, but what do you have checkpoint_timeout and
checkpoint_segments set to?
I did
Tom Lane wrote:
"Michael Harris" <[EMAIL PROTECTED]> writes:
The tip is ''kill -9' the postmaster', which has two important
differences to the scenario I just described:
1) kill -9 means the OS kills the process without allowing it to clean
up after itself
2) The postmaster is the master post
Ow Mun Heng wrote:
look for the query's procpid and then issue a select
pg_cancel_backend('the_id')
Does it do any harm if I kill (either with signal 9 or signal 15) the
single backend process (and not the postmaster)?
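For completeness, the gentler route from the quoted advice, spelled out with 8.2 column names (the pid is only an example):

SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';

-- cancels the current statement of that backend (equivalent to SIGINT, not SIGKILL)
SELECT pg_cancel_backend(12345);   -- 12345: example pid taken from the list above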
Regards,
Christian
Alvaro Herrera wrote:
Christian Schröder wrote:
I think it is my job as db admin to make the database work the way my users
need it, and not the user's job to find a solution that fits the database's
needs ...
Is there really nothing that I can do?
You can improve the s
John D. Burger wrote:
Christian Schröder wrote:
Or would it be possible to tweak how the planner determines the
selectivity? I have read in the docs (chapter 54.1) that in case of
more than one condition in the where clause, independency is assumed.
In my case ("... where test like
Hi list,
I am still fighting with the really slow database queries (see
http://www.nabble.com/%22like%22-vs-%22substring%22-again-t4447906.html),
and I still believe that the cause of the problem is that the query
planner makes incorrect estimations about the selectivity of the "where"
clauses.
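If the misestimation really comes from the column statistics, one knob worth trying is the per-column statistics target (a sketch with a hypothetical stand-in table; 8.2's default target is 10):

CREATE TABLE mytable (test text);                        -- hypothetical stand-in

ALTER TABLE mytable ALTER COLUMN test SET STATISTICS 100;
ANALYZE mytable;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE test LIKE '123%';   -- re-check the row estimates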
Hi list,
last week I asked a question about a query with several joins and a
"like" operator which was really slow. When I replaced "like" with
"substring" (which was possible because the comparison was simply "bla
like '123%'") the query became extremely faster because the query
optimizer chose a completely different plan.
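For reference, the two spellings of such a condition side by side (table2 and column c are the names that appear in the plans quoted below; the count(*) wrapper is only there to make the statements self-contained):

SELECT count(*) FROM table2 WHERE c LIKE '1131%';
SELECT count(*) FROM table2 WHERE substring(c FROM 1 FOR 4) = '1131';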
Gregory Stark wrote:
Christian Schröder <[EMAIL PROTECTED]> writes:
...
-> Seq Scan on table2 (cost=0.00..186.64 rows=2 width=4) (actual time=0.052..2.259 rows=42 loops=1)
Filter: (c ~~ '1131%'::text)
...
-> Seq Scan on tabl
Hi list,
if you please have a look at the following query:
SELECT DISTINCT a FROM table1
INNER JOIN table2 USING (b)
INNER JOIN view1 USING (a)
WHERE c like '1131%'
AND d IS NOT NULL
AND e IS NOT NULL;
Unfortunately, I have not been able to construct a suitable test case,
so I had to take the q
Side note: Why does Thunderbird send HTML mails albeit being configured
for sending plain text mails? Sorry for that! And sorry for being off-topic.
Regards,
Christian
Mikko Partio wrote:
Isn't 128MB quite low considering the "current standard" of 25%
- 50% of total ram?
I had also read a statement about using this amount of memory as shared
buffers. Exactly that was the reason why I set it to such a high value,
but I am now convinced that this was not a good idea.
Martijn van Oosterhout wrote:
You've got it completely wrong.
Hm, you seem to be right. :(
I have now decreased the "shared_buffers" setting to 128 MB. I have also
found some tuning pages with warnings about not setting the value too
high. I'm sure that I have read these pages before, but I s
Tom Lane wrote:
Ok, I can do this, but why can more memory be harmful?
Because you've left no room for anything else? The kernel, the various
other daemons, the Postgres code itself, and the local memory for each
Postgres process all require more than zero space.
So doe
hubert depesz lubaczewski wrote:
On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
These are the current settings from the server configuration:
shared_buffers = 3GB
this is *way* to much. i would suggest lowering it to 1gig *at most*.
Ok, I can do this, but why can more memory be harmful?
Hi list,
I am struggling with some "out of memory" errors in our PostgreSQL
database which I do not understand. Perhaps someone can give me a hint.
The application which causes the errors runs multi-threaded with 10
threads. Each of the threads performs several select statements on the
database
Alain Roger wrote:
> insert into immense.statususer (statususer_id, statususer_type) values
> (SELECT nextval( 'statususer_statususer_id_seq' ),'customer');
The correct syntax would be:
insert into immense.statususer (statususer_id, statususer_type) values
((SELECT nextval( 'statususer_statususer_id_seq' )),'customer');
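Two simpler alternatives with the same objects (names taken from the quoted statement): call the sequence function directly in the VALUES list, or rely on the column default if statususer_id is a serial column:

INSERT INTO immense.statususer (statususer_id, statususer_type)
VALUES (nextval('statususer_statususer_id_seq'), 'customer');

-- or, if the sequence is already the column's default:
INSERT INTO immense.statususer (statususer_type)
VALUES ('customer');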
Thank you for your tips. I think I will change the tables and use some
minimal date instead of a null value to represent a constraint that is
valid all the time. An additional advantage of this approach is that I
can then make sure that the time intervals (I not only have a start
date, but also an end date) do not overlap.
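A minimal sketch of that idea with a hypothetical table (8.2's date type has no 'infinity', so fixed sentinel dates stand in for "valid from the beginning" and "valid forever"):

CREATE TABLE limits (
    id         integer NOT NULL,
    valid_from date    NOT NULL DEFAULT DATE '0001-01-01',
    valid_to   date    NOT NULL DEFAULT DATE '9999-12-31',
    max_value  double precision
);

-- the lookup then needs no special NULL handling:
SELECT * FROM limits
WHERE id = 1
  AND current_date BETWEEN valid_from AND valid_to;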
Berend Tober wrote:
> Christian Schröder wrote:
>> Peter Eisentraut wrote:
>>
>>> A first step in that direction would be to rethink the apparently
>>> troublesome use of null values.
>> Some of the limits are
>> only valid after a given date.
Peter Eisentraut wrote:
> I submit that you should rethink your database schema and properly
> normalize it. You are attempting to retool the algebra that underlies
>
I don't quite understand why this is a question of normalization. As far
as I can see, my table seems to be normalized as far as possible.
Hi list!
Consider the following table definition:
 Column |       Type       | Modifiers
--------+------------------+-----------
 id     | integer          | not null
 date   | date             |
 value  | double precision |
The id and date field together are some sort of primary key.