More info:
I upgraded the database from 8.4 to 9.1 using pg_upgrade, so I have no
way of running explain using 8.4.
I don't want to do an EXPLAIN ANALYZE because it would bog down the
server for too long. I know what it is doing, it's doing a seqscan.
This is a table with ~ 5.5 million rows
I've tried the upgrade a few times, but I always get one type of failure
or another.
Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories                 ok
Checking cluster versions                                   ok
Checking database user is a
I saw this in the mailing list archives without an answer, so for future
reference:
DECLARE
...
line TEXT;
BEGIN
...
FOR line IN EXECUTE ''EXPLAIN ANALYZE ...'' LOOP
    RAISE NOTICE ''%'', line;
END LOOP;
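A self-contained version of that idea might look like the sketch below (the function name and the query inside the EXECUTE string are made up for illustration; the plan comes back one text line at a time):

CREATE OR REPLACE FUNCTION explain_analyze_notices() RETURNS void AS $body$
DECLARE
    line TEXT;
BEGIN
    -- each line of EXPLAIN ANALYZE output arrives as one row of text
    FOR line IN EXECUTE 'EXPLAIN ANALYZE SELECT count(*) FROM pg_class' LOOP
        RAISE NOTICE '%', line;
    END LOOP;
END;
$body$ LANGUAGE plpgsql;

SELECT explain_analyze_notices();   -- the plan is emitted as NOTICE messages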
What's up with Postgres and array dimensions? The table definition
doesn't even store how many dimensions the array should be.
[local]:playpen=> create table arrtest(
playpen(> id SERIAL PRIMARY KEY,
playpen(> arrone int[],
playpen(> arrtwo int[][]
playpen(> );
NOTICE: CREATE TABLE will
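For what it's worth, a quick demonstration of the point, using the arrtest table above (the values are arbitrary):

INSERT INTO arrtest (arrone, arrtwo) VALUES ('{1,2,3}', '{1,2,3}');               -- 1-D value into the "2-D" column
INSERT INTO arrtest (arrone, arrtwo) VALUES ('{{1,2},{3,4}}', '{{1,2},{3,4}}');   -- 2-D value into the "1-D" column

Both inserts succeed: the declared number of dimensions (and any size limits) is documentation only and is neither stored nor enforced.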
Magnus Hagander wrote:
There has been no change there. From what I can tell, that part of the
code hasn't been touched since 2004. That shouldn't have worked in
8.4.2 either - but it's only checked on server startup, so if you
changed the mode after startup you wouldn't have noticed.
Hmm. I
I upgraded from 8.4.2 to 8.4.3, and I couldn't start postgres. This was
printed on STDERR
FATAL: data directory "path-to-my-data-dir" has group or world access
DETAIL: Permissions should be u=rwx (0700).
This change was not documented in the release notes, and I don't think
the server should refuse to start just because the permission != 700
After adding a date column to a table, I started getting these errors
from my triggers:
ERROR: table row type and query-specified row type do not match
DETAIL: Query has too few columns.
The triggers just did simple UPDATEs on table2 when table1 was
updated/inserted
A db restart solved the
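For context, the triggers were of the usual simple form, roughly like this (table, column, and function names here are illustrative, not the real schema):

CREATE OR REPLACE FUNCTION table1_sync() RETURNS trigger AS $body$
BEGIN
    UPDATE table2 SET updated_at = now() WHERE id = NEW.id;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;

CREATE TRIGGER table1_sync_trig
    AFTER INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_sync();

The error presumably comes from the trigger function's cached plans holding the old row type after the ALTER TABLE; a fresh session (or the restart above) clears them.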
Goboxe wrote:
Hi,
I just received a new server with 5 x 73GB SAS hard disks.
I tried to maximize the total usable space when configuring RAID 5.
What I plan to do is configure all 5 hard disks using RAID 5.
The Windows operating system will also be installed on the same RAID 5 (I'm
going to have c:
Martijn van Oosterhout wrote:
On Sat, Oct 04, 2008 at 08:30:32PM -0400, Joseph S wrote:
In this case, with the function being called over and over again inside
a nested loop, it would be worth the effort. I'm not even talking about
caching the result for the whole transaction block,
Tom Lane wrote:
"Gurjeet Singh" <[EMAIL PROTECTED]> writes:
Shouldn't PG make all efforts to not execute something when the result is
already known?
Not if said effort would cost more than is saved, which would be by far
the most likely result if we tried to cache all function results.
Aren't static functions supposed to be executed only once per transaction?
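For reference, PostgreSQL's knob here is the function's volatility category rather than anything "static"; a minimal sketch with a made-up function:

CREATE FUNCTION sales_tax_rate() RETURNS numeric AS
    'SELECT 0.0825::numeric'        -- constant result
LANGUAGE sql IMMUTABLE;

-- IMMUTABLE lets the planner fold the call to a constant at plan time;
-- STABLE only promises a fixed result within a single statement; and
-- VOLATILE (the default) is re-evaluated for every row.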
I have a query plan that looks like this:
Unique  (cost=441872.58..441875.39 rows=562 width=4)
  ->  Sort  (cost=441872.58..441873.98 rows=562 width=4)
        Sort Key: elog.uid
        ->  Nested Loop  (cost=89.52..441
ws=1 width=0)
Filter: ((sgcode = 156) AND (sacode = ANY ('{2,1}'::smallint[])))
(3 rows)
Time: 0.986 ms
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Can you force it to use the partial index by dropping the other index?
(Use begin; drop index ...;
Tom Lane wrote:
Can you force it to use the partial index by dropping the other index?
(Use begin; drop index ...; explain ...; rollback; to avoid dropping
the index for real.) It's quite unclear at this point whether it
I tried, and it ends up using a seqscan.
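For anyone trying the same trick, the pattern Tom describes is simply (the index name is hypothetical):

BEGIN;
DROP INDEX d2_sgcode_idx;     -- hide whichever competing index the planner prefers
EXPLAIN SELECT * FROM d2 WHERE sgcode = 156 AND sacode > 0;
ROLLBACK;                     -- the DROP is undone, nothing is lost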
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
It seems that postgres can't figure out that it can use the index on
sacode unless I put "d2.sacode > 0" in my where clause.
Works for me ...
regression=# create table d2(sgcode int, sacode int);
CREATE TABLE
regression=
I have a table, d2, that has a field sacode that is almost always null.
In fact the stanullfrac in pg_statistic for this column is 1. I have
this index on my table:
"d2_sgcode_sacode_idx" btree (sgcode, sacode) WHERE sacode IS NOT NULL
AND sacode > 0
The first version of my query wasn't u
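To make the setup concrete, it boils down to something like this (reconstructed from the description above; the query is only an example):

CREATE TABLE d2 (sgcode int, sacode int);
CREATE INDEX d2_sgcode_sacode_idx ON d2 (sgcode, sacode)
    WHERE sacode IS NOT NULL AND sacode > 0;

-- the planner only picks the partial index when the query's WHERE clause
-- provably implies the index predicate, e.g.:
SELECT * FROM d2 WHERE sgcode = 156 AND sacode > 0;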
Andrew Sullivan wrote:
2) How can I use pg_autovacuum table to disable autovac for this table?
The docs are not clear on how to do this.
I think that would be a bad idea, given that autovacuum seems to think
you need to do it.
I don't want to totally disable it, but I want to change the
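On 8.2 those per-table settings live in the pg_autovacuum system catalog; a sketch of raising the vacuum threshold for one table (the table name is made up, and -1 means "use the global default"):

INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('my_log_table'::regclass, true,
        1000000, 0.5,        -- only vacuum after a very large number of dead rows
        -1, -1, -1, -1, -1, -1);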
I'm running 8.2.6. I have a log table that is only ever INSERTed to; no
updates or deletes are ever run on it. For some reason the autovacuum
decided it needed to vacuum this table, and it is slowing down my
production server.
So my questions are:
1) Why vacuum, if this table is never updated
No, not the pg data directory. The pg binaries. My problem was that the
timezone files installed under /usr/local/pgsql/share were not world
readable, due to the way I had the umask of root set when I did the
compile and install.
Erik Jones wrote:
What? Are you actually saying that a PG
Tom Lane wrote:
No, he's complaining that the share/timezone file tree wasn't
world-readable. I think that's plain old pilot error though.
The install script shouldn't be second-guessing the umask it's
given, any more than it second-guesses the file ownerships.
If binaries are made readable a
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
Was there something wrong with the tzdata distributed in 8.2.7?
current_timestamp is still using EST, but the date command run from the
command line is correctly returning EDT
Works for me ... what have you got TimeZone
Was there something wrong with the tzdata distributed in 8.2.7?
current_timestamp is still using EST, but the date command run from the
command line is correctly returning EDT
[local]:owl=# select version(), current_timestamp;
version
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
On one of my servers postgres thinks that we're back on standard time
already:
What's its TimeZone setting?
On the server that is working:
=> show TimeZone;
TimeZone
------------
US/Eastern
(1 row)
On the one tha
On one of my servers postgres thinks that we're back on standard time
already:
[local]:db=# select current_timestamp;
now
---
2007-10-29 15:06:10.049795-05
(1 row)
Time: 0.807 ms
[local]:db=# select version();
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require SELECT privilege, but that
doesn't w
David Fetter wrote:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require SELECT privilege, but that
doesn't work for the other cases, so use owner
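The functions affected, for reference (the object names below are just examples):

SELECT pg_size_pretty(pg_relation_size('my_table'));      -- now requires owning the relation
SELECT pg_size_pretty(pg_database_size('my_db'));         -- now requires owning the database
SELECT pg_size_pretty(pg_tablespace_size('pg_default'));  -- now superuser-only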
How many rows are in this table?
Sanjay wrote:
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
Keep an eye on pg_stat_activity and pg_locks to see if any lock
contention is going on.
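Two quick starting points for that, as plain catalog queries:

SELECT * FROM pg_stat_activity;                -- what every backend is currently doing
SELECT * FROM pg_locks WHERE NOT granted;      -- is anything stuck waiting on a lock?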
mr19 wrote:
I have a process that updates ~ 1500 rows in a table once a second. Every 5
minutes (almost exactly) the update takes ~ 15 seconds (normally < 1). I
have run htop/top on the machine during thi
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
Me too. I don't change my db schema that much, but I experience bloat
in the pg_tables that I don't expect. For instance pg_opclass needs a
VACUUM FULL/REINDEX once a week or I notice the indexes are larger than
the tabl
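Concretely, that weekly maintenance amounts to something like:

VACUUM FULL pg_catalog.pg_opclass;
REINDEX TABLE pg_catalog.pg_opclass;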
Most of the time I just hit 'Reply', and since this list doesn't set the
Reply-To: header the replies go to the OP as well.
Tom Lane wrote:
Joseph S <[EMAIL PROTECTED]> writes:
My pg_shdepend table has a size of 16,384, but
pg_shdepend_depender_index has a size of 19,169,280 and
David Fetter wrote:
== PostgreSQL Weekly News - July 29 2007 ==
- Arrange to put TOAST tables belonging to temporary tables into
special schemas named pg_toast_temp_nnn, alongside the pg_temp_nnn
Does this apply to indexes on temp tables as well? Or is this not even
an issue with indexes
My pg_shdepend table has a size of 16,384, but
pg_shdepend_depender_index has a size of 19,169,280 and
pg_shdepend_reference_index has a size of 49,152. When I try to reindex
the table I get:
ERROR: shared table "pg_shdepend" can only be reindexed in stand-alone mode
So is there any way I c
I just moved one of my desktops and my laptop from Fedora 6 to Ubuntu
7.04 because Fedora lacked hardware support that Ubuntu had, and my Fedora
machines had all sorts of problems like sound dropping out and machines
locking up. (Also the Fedora installers are terrible.)
My small gripes about Ub
If you don't mind the downtime it would be simpler to upgrade to 8.2 and
then worry about Slony. If you do mind, you can use Slony to do the
upgrade, which needs much less downtime since you just switch servers.
Geoffrey wrote:
We plan to upgrade from our current 7.4.17 Postgresql to 8.2.? in the
near
I am planning on segmenting one of my large tables into many child
tables using "CREATE TABLE ... INHERITS (parent)", but I want all the
triggers on my parent table to also fire on UPDATE/INSERT/DELETE on my
child table. As far as I can tell, there is no way to do this with the
CREATE TABLE co
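A sketch of the obvious workaround, since triggers are not inherited (the child table and trigger function names are hypothetical, and the function is assumed to be the one already used on the parent):

CREATE TABLE child_2007 () INHERITS (parent);

CREATE TRIGGER child_2007_audit
    AFTER INSERT OR UPDATE OR DELETE ON child_2007
    FOR EACH ROW EXECUTE PROCEDURE parent_audit();   -- same function the parent's trigger uses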
I set the ulimit for data to 4194304k:
core file size (blocks, -c) 0
data seg size (kbytes, -d) 4194304
file size (blocks, -f) unlimited
pending signals (-i) 1024
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimite
Try doing select * from pg_locks to see how many locks you have out.
Nik wrote:
The reason I am running pg_dumpall is that I would like to create an
exact copy of my database on another server. I cannot use create
database with template command since there are connections to the
original database. The dumpall method seems somewhat cumbersome since
it generates on
Stephen Harris wrote:
I'm vacuuming every night after the inserts are done.
You should vacuum after the deletes and before the inserts, so the
inserts can go into the just reclaimed space.
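In other words, order the nightly job roughly like this (the purge condition and table name are made up):

DELETE FROM my_table WHERE logged_at < now() - interval '30 days';
VACUUM my_table;      -- reclaim the deleted rows' space first
-- ... then run the night's bulk INSERTs, which can reuse the freed space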
Dmitry Koterov wrote:
And the general question - why SELECT queries slowdown during the
CHECKPOINT? I thought that Postgres is a version-based database and read
queries are never blocked...
Because the disk is busy.
Martijn van Oosterhout wrote:
On Wed, Mar 28, 2007 at 02:16:55PM -0400, Joseph S wrote:
I thought the performance penalty came from making the executable
bigger, which means the executor has to skip over the debug symbols
somehow, which eats up cpu. If the symbols are in a different file this
Tom Lane wrote:
Joseph Shraibman writes:
Well 1) I'd like to avoid the performance penalty for including debug
symbols
There is none. If there were, it would certainly apply to debuginfo
as well --- debuginfo is merely moving the symbols over to a different
file after the compiler finishes.
Geoffrey wrote:
Joseph S wrote:
Fedora/Redhat has debuginfo packages that drop files with debug
symbols in /usr/src/debug/, and gdb can use them. Does anyone know
how this works and how I can get those files from the pg tarball?
Where have you found these packages? Is this an rpm you have
Tom Lane wrote:
Joseph S writes:
Fedora/Redhat has debuginfo packages that drop files with debug symbols
in /usr/src/debug/, and gdb can use them. Does anyone know how this
works and how I can get those files from the pg tarball?
You download and install the postgresql-debuginfo RPM that
Fedora/Redhat has debuginfo packages that drop files with debug symbols
in /usr/src/debug/, and gdb can use them. Does anyone know how this
works and how I can get those files from the pg tarball?
How come the slony list isn't on the nntp server?
Martijn van Oosterhout wrote:
There's a mailing list for slony, you might have better luck there.
http://gborg.postgresql.org/mailman/listinfo/slony1-general
Have a nice day,
The serializable transaction *can't* see those rows, they were created
and obsoleted after the start of the transaction. The point of making the
transaction serializable in the first place was to allow VACUUM to
reclaim those rows.
Alvaro Herrera wrote:
Joseph S wrote:
I realize this t
On Thu, Oct 19, 2006 at 04:25:09PM -0400, Joseph S wrote:
The problem is that the "old" transaction can see effects of later
started transactions, so VACUUM can't delete the later stuff either...
How can it see effects of transactions that started after it?
Check the documentation for the diff
But if the tuple in question was created and then deleted after the
transaction, the transaction should still not need to see it.
Martijn van Oosterhout wrote:
On Thu, Oct 19, 2006 at 04:25:09PM -0400, Joseph S wrote:
The problem is that the "old" transaction can see effects of lat
Martijn van Oosterhout wrote:
Sure, don't keep transactions open for so long. Is there a particular
reason you do that?
Because I have a leak somewhere? Even if the transaction isn't that old
it will still cause me some table bloat in the meantime.
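One way to spot a leaked open transaction on releases of that era, where idle-in-transaction backends report their state in the query column:

SELECT * FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';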
The problem is that the "old" transaction