> Possibly. You should be able to tell what's taking so much time by
> keeping an eye on your server during the restore. You could set
> log_statement = 'all' (or just 'ddl' if there is significant other
> activity in other databases) during the restore, and keep an eye on
> pg_stat_activity if you …
One thing you may want to look at (if this is PG 8.4.x) is the number of
large objects in pg_largeobject. If your apps don't use large objects
this is not relevant. If they do, then it may be. I've noticed that
pg_restore no longer reports the restoration of individual LO items. It
used to be …
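If large objects might be the culprit, a quick count against the system catalog could tell you; a sketch for 8.4-era servers (the catalog is pg_largeobject, which stores one row per data page, so distinct loids are counted):

```sql
-- Each large object is stored as several page rows in pg_largeobject,
-- keyed by loid; counting distinct loids gives the number of LOs.
SELECT count(DISTINCT loid) AS large_object_count
FROM pg_largeobject;
```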
I have just restored a database about 347GB in size
postgres=# select * from pg_database_size('dbname');
 pg_database_size
------------------
     346782483256
using this command:
pg_restore -d dbname -O -Fc dbname.backup
It started at 13/5 21:28, and at 14/5 18:47 all user tables were imported.
Have you run analyze on the table?
Yes
How selective are the condition node = '1010101010' and the date range? In
particular, do you have an idea what percentage of the table falls into
that date range?
There are around 1000 nodes and there is data for two years, so it must
be around 1/4
This is the index:
CREATE INDEX idx_stat_date_node_type
ON public.stat
USING btree
(date, node, "type");
explain SELECT * FROM public.stat WHERE node = '1010101010' AND
("date" <= '2008-06-30'::date AND "date" >= '2008-01-01'::date)
Try changing the index order to node, date rather than date, node.
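A sketch of that suggestion, reusing the table and column names from the index quoted above (the new index name is made up):

```sql
-- With the equality column (node) first and the range column (date) second,
-- the planner can descend directly to node = '1010101010' and then scan
-- the contiguous date range within that node.
CREATE INDEX idx_stat_node_date_type
    ON public.stat
    USING btree (node, date, "type");
```

In the original (date, node, "type") order, a six-month range covering roughly a quarter of the table leaves the matching node values scattered throughout the index, which is why the planner may prefer a sequential scan.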
You can prevent postgres from using the index on node by changing the reference
in the WHERE clause to an expression, like so:
SELECT * FROM public.stat WHERE node||'' = '1010101010' AND
("date" <= '2008-06-30'::date AND "date" >= '2008-01-01'::date);
Perhaps this will lead the optimizer to …
I'm wondering why this index is not used for my query.
This is the index:
CREATE INDEX idx_stat_date_node_type
ON public.stat
USING btree
(date, node, "type");
When querying an exact date, it is used:
explain SELECT * FROM public.stat WHERE node = '1010101010' AND date =
'2008-01-01'
"Index Scan …"
\d my.table is showing me the primary key as:
"unitstat_pkey" PRIMARY KEY, btree (id)
But looking at the table structure produced by
pg_dump -s -n my -t table db
I'm getting the primary key shown as:
ADD CONSTRAINT unistat_pkey PRIMARY KEY (id);
That has been the name of it, but I altered it …
and re-index on the affected table.
Sorry, was too fast.
Poul
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
I had a similar problem and overcame it by temporarily setting
zero_damaged_pages, then doing a full vacuum and re-index on the affected table.
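A sketch of that recovery sequence (the table name is a placeholder; zero_damaged_pages makes PostgreSQL zero out pages with invalid headers as it reads them, so the rows on those pages are lost for good, which is why a file-level copy of the data directory should come first):

```sql
-- WARNING: destructive. Pages with invalid headers are zeroed, losing their rows.
SET zero_damaged_pages = on;
VACUUM FULL damaged_table;     -- rewrites the table, zeroing bad pages on read
SET zero_damaged_pages = off;
REINDEX TABLE damaged_table;   -- rebuild indexes so none point at vanished tuples
```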
Thanks, I suppose I need to reindex the table afterwards, or they can
point to non existent data ?
Poul
During some time I have had more problems with invalid data in different
parts of a PostgreSQL database.
Until now it has been pointers to non present clog files and an index
file, but now it's in a data file.
I'm getting this error when doing a backup:
invalid page header in block 5377 of relation …
Try on both servers:
show lc_collate
Unfortunately you have to initdb to set the locale. You can't set it per
database or change it once it's initialized.
Right you are, but I suppose I can do a dump using pg_dumpall, overwrite
the cluster with initdb --locale=en_US and restore it again.
What's the output of the following query on each server?
select name, setting from pg_settings where name ~ '^lc_';
Ahh there's a difference
mydb=# select name, setting from pg_settings where name ~ '^lc_';
    name    | setting
------------+---------
 lc_collate | en_US
 lc_ctype   | en_US
I'm wondering why the sort order on these two servers behaves differently.
The only difference I can see is that one is using a 32-bit Ubuntu and
the other a 64-bit version of Ubuntu.
But why this difference ?
Thanks,
Poul
mydb=# select * from test order by felt1;
 felt1
-------
 test1
I have a java application receiving data from a thousand devices on
periodic basis.
When receiving data the application gets a database connection, inserts
a row and closes the connection again.
If this process takes more than 15 seconds, the device assumes the
connection is dead and makes a new one.
Mike Chiarappa skrev:
Hello Poul,
take a look to thread:
http://forum.sun.com/jive/thread.jspa?forumID=123&threadID=101711
It works for me !!! :o))
Mike
Thanks Mike, had solved that issue the same way :)
Don't know how it works internally, but I guess the JDBC driver is
reporting the is…
Just because it works with Derby doesn't mean it's PostgreSQL's fault.
There's plenty of things that "work" in certain databases that really
shouldn't.
In any case, you might try asking on pgsql-jdbc, since there's more
java-heads over there. I'd also recommend turning query logging on so
you …
I have some troubles getting Sun Java Studio Creator & Sun Java
Application Server to work with
PostgreSQL.
Trying different syntaxes for the SQL statement (schema or no schema) in
the session bean
I get either the error "No columns in table: pmh.tablename" or
"Cannot change transaction isolation …"
Given that you're using duration logging and JDBC, I wonder whether you
didn't trip over this recently-identified bug:
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00815.php
Patch is here:
http://archives.postgresql.org/pgsql-committers/2006-08/msg00278.php
Sorry, but that didn't help. From dmesg:
dmesg
[2425253.737383] postmaster[4792]: segfault at 2aaab6f0e000 rip
2b73795b rsp 7f8c9228 error 4
Any suggestions ?
Do you trust that machine's RAM?
Can you try running memtest86 for some extended period of time?
(just to make sure it's not a hardware issue)
Last night one of my databases temporarily broke down because of a
segmentation fault.
It has only happened this once and the database was fully recovered
afterwards, but I was wondering whether I can do anything to prevent it
from happening again.
It happened while the backup was running (pg_dump …)
Try export TMPDIR=/var/tmp.
Also, unless you have a really good reason to be using -Ft dump format,
I'd recommend -Fc format instead. It's better tested and doesn't suffer
from various limitations of the tar format, notably the need for a temp
file.
Thanks, it takes a much longer time …
/tmp: write failed, filesystem is full
pg_dump: [tar archiver] could not write to tar member (wrote 0,
attempted 101)
Is there a way to make pg_dump use /var/tmp as a temporary filespace
instead ?
Thanks, Poul
Rodrigo Gonzalez wrote:
I answer here so everyone can help you.
Cannot understand what you mean. The index is numbered? Are you talking
about "autoincrement" as it is called in MySQL? Use sequences, please.
Sorry, forgot that your email address was in the reply-to field.
I was seeking a solution on …
I can see it has been discussed before why COUNT(*) tends to be slow on
PostgreSQL compared with e.g. MySQL.
As far as I understood, it has something to do with missing numbering of
the rows in the indexes, and that there are plenty of reasons not
to implement that in PostgreSQL, not that I …
The ## operator is for finding the closest point to one geometric
object from another (e.g., point ## lseg), not for searching an
entire table. In this example you'll have to calculate the distance
to each point and order the results by it:
SELECT position, position <-> '55.4,-6.4'::point AS distance …
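Put together with the table from the question below, the full query might look like this sketch (`<->` between two points returns the euclidean distance, and PostgreSQL allows ordering by an output-column alias):

```sql
SELECT position,
       position <-> '(55.4,-6.4)'::point AS distance
FROM store_point
ORDER BY distance     -- smallest distance first
LIMIT 1;              -- keep only the nearest stored point
```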
I have made the table shown below and inserted two points,
but I can't find the proper syntax for finding the nearest point
create table store_point(position point);
insert into store_point values ('55.512345, -6.5');
insert into store_point values ('55.123456, -6.1');
select * from store_point …
Doing the following query:
explain analyze SELECT * FROM my.tablename WHERE node = '1234567890'
AND date BETWEEN '2005-03-27' AND NOW() ORDER BY id DESC
ORDER BY node,id is forcing the planner to use the right index.
Thanks,
Poul
Doing the following query:
explain analyze SELECT * FROM my.tablename WHERE node = '1234567890' AND
date BETWEEN '2005-03-27' AND NOW() ORDER BY id DESC
takes 1.3 sec. with this result:
Sort  (cost=52971.52..53033.26 rows=24693 width=93) (actual
time=1141.002..1252.995 rows=25109 loops=1)
…
Raj Gupta wrote:
Hi,
We are trying to migrate our database from PostgreSQL 7.4.3 to 8.0.3.
While migrating a table, we got the following error:
ERROR: could not write block 2830 of relation 1663/2276041/4965853:
Operation not permitted
This came when PostgreSQL was trying to create an index …
How do you know the delay isn't in the client's lookup of the server?
You are absolutely right!
I am using the IP address to connect to, so I thought that was it.
But the client is doing NetBIOS queries, as seen with Ethereal.
Thanks to both of you.
Poul
Is it possible that you've got DNS timeouts happening or something
like that? Though I don't think PG does reverse DNS lookups,
especially not by default.
Watching the connection process with 'tcpdump' to see where the stall
happens might be informative.
-Doug
DNS was also my first thought
I have a problem with slow connects to a database server.
In a little Java application I am starting 100 threads, each one making
a database connection. The program is running on my laptop.
When connecting to one server (AMD Opteron 2GHz with Solaris &
pgsql version 8.0.3) the 100 connections take …
10GB of RAM isn't that farfetched nowadays.
However I/O might be a problem. A single drive can typically write/read
about 10MB a second (64KB chunks random access - not sure if you'd want
to bet on getting sequential throughput ;) ).
Anyway, it'll be something interesting to see ;).
Link.
I think you're being horribly optimistic if you actually want 10,000
concurrent connections, with users all doing things. Even if you only
allow 1MB for each connection that's 10GB of RAM you'd want. Plus a big
chunk more to actually cache your database files and do work in. Then,
if you had 10,000 …
Calculated, the values should be:
SEMMNI = 10000 / 16
SEMMNS = (10000 / 16) * 17 "plus room for other applications". How much
should that be?
And where can I change those values on a 2.6 kernel?
I will try to answer myself with another question.
Can it be that it should be changed in
/usr/src/
I'm trying to set up a database for 10,000 concurrent users for a test.
I have a system with 1GB of RAM where I will use 512MB for PostgreSQL.
It is running SuSE 9.3
I have changed SHMMAX & SHMALL
echo "536870912" >/proc/sys/kernel/shmmax
echo "536870912" >/proc/sys/kernel/shmall
and max_connections …
It helps if you spell "privilege" correctly ;-)
Thanks, stupid mistake ... :)
Poul
See the has_table_privilege() function.
Thanks, but how does it work ?
select has_table_privelege('public.mytable', 'select');
ERROR: function has_table_privelege("unknown", "unknown") does not exist
HINT: No function matches the given name and argument types. You may
need to add explicit type casts.
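With the spelling corrected it works; a sketch of both forms (the role name someuser is made up):

```sql
-- Two-argument form: checks the privilege for the current user.
SELECT has_table_privilege('public.mytable', 'SELECT');

-- Three-argument form: checks it for an explicitly named role.
SELECT has_table_privilege('someuser', 'public.mytable', 'SELECT');
```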
I'm wondering if it's possible for an ordinary user to check whether
SELECT is granted on a given table, without doing the select?
I want to use it in an application with a menu structure, where I only
want to enable those options where SELECT on the tables used in the
given menu is granted.
Instead …
To find the nearest value in number_column to some CONSTANT (where you
replace constant with a number), try:
select *,(number_column - CONSTANT)^2 as d from tablename order by d limit
1;
Does that do it for you?
Sean
It does indeed, not that I understood how, but I will find out.
Thank you very much.
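The trick is that squaring the difference makes it non-negative and monotone in the distance from the constant, so ordering by it orders by distance. abs() says the same thing more directly; a sketch with 42 standing in for CONSTANT:

```sql
-- abs(number_column - 42) is the distance from 42; the smallest
-- distance sorts first, and LIMIT 1 keeps only that row.
SELECT *, abs(number_column - 42) AS d
FROM tablename
ORDER BY d
LIMIT 1;
```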
Does anyone know how to find the row with the nearest numeric value, not
necessarily an exact match ?
Thanks,
Poul
You're not there yet: you want what Richard said, namely
I realized that it wasn't optimal for all nodes, namely those with a lot
of rows.
So you are absolutely right, I followed the suggestion of Richard and it
works perfect.
Thank you all, I learned a lesson about indexes today...
Poul
I have a problem creating a usable index for the following simple query:
SELECT * FROM my.table WHERE node = '10' ORDER BY id DESC LIMIT 1
id is a serial, so the query is to find the latest entry to a given
node and id is the primary key.
You're not necessarily getting the latest entry, just the row with the
highest id …
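The fix the thread converges on can be sketched as follows (table and node values are the placeholders from the question, quoted here because "table" is a reserved word; on old releases, spelling out both sort columns helps the planner pick a backward scan of the composite index):

```sql
-- Composite index: equality column first, then the serial id.
CREATE INDEX idx_table_node_id ON my."table" (node, id);

-- A matching ORDER BY lets the planner walk the index backwards
-- and stop after the first (highest-id) row for the node.
SELECT * FROM my."table"
WHERE node = '10'
ORDER BY node DESC, id DESC
LIMIT 1;
```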
I have a problem creating a usable index for the following simple query:
SELECT * FROM my.table WHERE node = '10' ORDER BY id DESC LIMIT 1
id is a serial, so the query is to find the latest entry to a given node
and id is the primary key.
The table contains around 1 million records and the query …
Google is your friend, follow this
http://archives.postgresql.org/pgsql-odbc/2004-02/msg00029.php
Poul
Zlatko Matić wrote:
Hello.
I just read all about TCP/IP,
pg_hba.conf, listen_addresses and similar topics regarding remote TCP/IP
connections,
but where can I find explanations …
Poul Møller Hansen wrote:
I'm using Postgresql version 7.4.7 and jdbc driver version
pg74.215.jdbc3.jar.
Do you have a clue on what's going on ?
No, I don't. Do you have any more information? What is your code doing
when it fails? Just issuing a regular …
This sort of thing has been seen to occur when multiple client-side
threads try to use the same database connection without proper locking
to ensure only one thread uses it at a time. See for example
http://archives.postgresql.org/pgsql-hackers/2004-09/msg00104.php
I am aware of the max_connections and recompiling the kernel (FreeBSD),
but how many connections can one database server handle without
degrading performance too much?
I am in a situation where I can end up handling thousands of
connections, not doing much load each, but they are connected.
…
2005-04-02 00:09:01 ERROR: invalid string enlargement request size
1358954492
2005-04-02 00:09:01 WARNING: AbortTransaction and not in in-progress
state
2005-04-02 00:09:01 FATAL: invalid frontend message type 82
This indicates the client code didn't follow the protocol …
Hi,
I'm experiencing some strange problems with a Java application which I
have made.
Suddently it looses connection with the database, and in the server log
i can find:
2005-04-02 00:09:01 ERROR: invalid string enlargement request size
1358954492
2005-04-02 00:09:01 WARNING: AbortTransaction …