I have this table "replica_sync_log" which is updated thousands of
times a day to reflect the state of various schemas in a database
which acts as an offline secondary to various other databases (each
of the source databases is mapped to its own schema in the
secondary). The table has the f
Are you dumping the whole database or just a single table? If it's
the former, try the latter and see if you still get errors.
If pg_dump is not working, maybe some system table is hosed. What
errors are you getting?
If you can get in via psql, log in as a superuser and execute:
COPY mytab
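For example, something like this (table name from above; the path is
hypothetical) writes the table's contents out where you can inspect it:

-- requires superuser; writes to a file on the server, not the client
COPY mytab TO '/tmp/mytab.dump';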
On Sep 27, 2006, at 7:28 AM, Rafal Pietrak wrote:
Hi,
I fell into the following problem (unfortunately, the database contains
sensitive customer information, so I can publish very little of it).
Currently the postgres process takes close to 100% CPU time.
I've restarted the process a mo
On Sep 27, 2006, at 12:35 PM, Rafal Pietrak wrote:
Thanks Duncan for the analysis.
This happened again, so I'm able to peek at the details you've pointed
out.
On Wed, 2006-09-27 at 09:33 -0700, Casey Duncan wrote:
Sounds like it was blocked (unsure by what). You can use pg_locks to
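For example, an (untested) sketch to spot lock waiters:

-- locks that have been requested but not yet granted
SELECT pid, relation, mode, granted
FROM pg_locks
WHERE NOT granted;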
I have some databases that have grown significantly over time (as
databases do). As the databases have grown, I have noticed that the
statistics have grown less and less accurate. In particular, the
n_distinct values have become many orders of magnitude too small for certain foreign
key columns. Predictabl
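(One partial workaround, sketched here with hypothetical names: raise
the per-column sample size so ANALYZE examines more rows, then re-check
the estimate in pg_stats.)

-- sample more rows for the problem column, then re-analyze
ALTER TABLE child_table ALTER COLUMN parent_id SET STATISTICS 1000;
ANALYZE child_table;
-- inspect the resulting estimate
SELECT attname, n_distinct FROM pg_stats
WHERE tablename = 'child_table' AND attname = 'parent_id';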
On Sep 28, 2006, at 8:51 PM, Tom Lane wrote:
[..]
The information we've seen says that the only statistically reliable
way to arrive at an accurate n_distinct estimate is to examine most of
the table :-(. Which seems infeasible for extremely large tables,
which is exactly where the problem
On Sep 29, 2006, at 9:14 AM, Tom Lane wrote:
[ expanding this thread, as it now needs wider discussion ]
"Paul B. Anderson" <[EMAIL PROTECTED]> writes:
Actually, I was not filling all of the arrays in sequential order. I
added code to initialize them in order and the function seems to be
working
On Oct 6, 2006, at 11:12 AM, John D. Burger wrote:
Richard Huxton wrote:
Should I always cluster the tables? That is, even if no column
jumps out as being involved in most queries, should I pick a
likely one and cluster on it?
Well, you cluster on an index, and if you don't think the ind
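For reference, a CLUSTER command names both the index and its table,
e.g. (hypothetical names, 8.x-era syntax):

-- reorder mytable on disk to match the order of mytable_pkey
CLUSTER mytable_pkey ON mytable;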
select extract(epoch from interval '2 hours')/60;
'epoch' returns the number of epoch seconds that comprise the interval.
That differs from 'seconds', which just returns the "seconds place",
which is zero for 2:00:00, of course.
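An interval with a nonzero seconds place shows the difference:

SELECT extract(epoch from interval '2 hours 30 seconds');  -- 7230
SELECT extract(second from interval '2 hours 30 seconds'); -- 30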
-Casey
On Oct 6, 2006, at 12:22 PM, Chris Hoover wrote:
If I subt
On Oct 18, 2006, at 5:20 AM, Ilja Golshtein wrote:
When starting a database from scratch, it is much faster to import the
data and then create the indexes. The time to create an index on a full
table is less than the cumulative extra time of updating each index on
every insert. The more indexes to update, the
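A sketch of the load-then-index order (hypothetical names and path):

-- bulk load first, with no indexes defined on the table...
COPY bigtable FROM '/tmp/bigtable.csv' WITH CSV;
-- ...then build each index in a single pass over the full table
CREATE INDEX bigtable_col_idx ON bigtable (col);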
On Oct 19, 2006, at 8:19 AM, <[EMAIL PROTECTED]>
<[EMAIL PROTECTED]> wrote:
I want to upgrade a system from 7.3.8 to 8.1.5.
I am new to PostgreSQL and am looking for handy hints.
Any known problems or pitfalls?
You'll need to dump the database and reload it (pg_dump and pg_restore);
8.1 uses a diffe
On Nov 13, 2006, at 1:05 PM, Glen Parker wrote:
Matthew T. O'Connor wrote:
Glen Parker wrote:
I would like a way to run the autovacuum daemon on demand
periodically. Every night at 2 AM, for example.
Anybody know if this is possible? If not, it's a feature
request :-)
Autovacuum can be
When I configure statement_timeout globally, I typically override it
for superusers and other accounts used by DBAs. Just issue:
ALTER USER postgres SET statement_timeout = 0;
Repeat for other superusers (slony, etc.). Then the policy won't apply
to them.
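You can double-check which roles carry overrides:

-- per-role settings show up in the useconfig column
SELECT usename, useconfig FROM pg_user;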
-Casey
On Nov 16, 2006, at 6:46
On Dec 4, 2006, at 1:11 PM, Anton Melser wrote:
Hi,
I am just starting at a company and we are inheriting a previously
built solution. It looks pretty good but my previous experience with
pg is seriously small-time compared with this...
I am very new at the job, and don't know what hd config we
On Dec 5, 2006, at 11:04 PM, Joost Kraaijeveld wrote:
Does PostgreSQL lock the entire row in a table if I update only 1
column?
Know that updating one column actually updates the whole row. So if
one transaction updates column A of a row, it will block another
concurrent transaction that
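For example (hypothetical table; the second session waits on the
first's row lock):

-- session 1:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
-- session 2 blocks here until session 1 commits or rolls back:
UPDATE accounts SET note = 'adjusted' WHERE id = 42;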
On Dec 6, 2006, at 2:34 PM, Wei Weng wrote:
On Tue, 2006-12-05 at 15:56 -0500, Wei Weng wrote:
I have a table that has roughly 200,000 entries and many columns.
The query is very simple:
SELECT Field1, Field2, Field3... FieldN FROM TargetTable;
TargetTable has an index on Field1.
I t
You could use Slony for this in a couple of ways. One is simpler but
more hacky; the other is a bit more involved but perhaps more "correct".
For the simple way, set up the source database as a provider and the
remote replica as a normal Slony subscriber. Don't run the slon
daemons all the t
We have a production system with multiple identical database
instances on the same hardware, with the same configuration, running
databases with the exact same schema. They each have different data,
but the database sizes and load patterns are almost exactly the same.
We are running pg 8.1.
On Feb 15, 2007, at 1:50 PM, Peter Eisentraut wrote:
Casey Duncan wrote:
2007-02-15 00:35:03.324 PST ERROR: could not access status of
transaction 2565134864
2007-02-15 00:35:03.325 PST DETAIL: could not open file "pg_clog/098E": No such file or directory
The first time this h
On Feb 15, 2007, at 1:46 PM, Alvaro Herrera wrote:
Casey Duncan wrote:
We have a production system with multiple identical database
instances on the same hardware, with the same configuration, running
databases with the exact same schema. They each have different data,
but the database sizes
On Feb 15, 2007, at 2:44 PM, Alvaro Herrera wrote:
Casey Duncan wrote:
On Feb 15, 2007, at 1:46 PM, Alvaro Herrera wrote:
[..]
Can you relate it to autovacuum?
Maybe. Here's what I get when I crank up the logging to debug4:
2007-02-15 14:20:48.771 PST DEBUG: StartTransaction
2007-
On Feb 15, 2007, at 5:21 PM, Alvaro Herrera wrote:
Casey Duncan wrote:
Interestingly I can manually vacuum that table in all of the
databases on this machine without provoking the error.
Except template0 I presume? Is this autovacuum running in template0
perchance? I note that 800
On Feb 15, 2007, at 5:50 PM, Alvaro Herrera wrote:
Casey Duncan wrote:
On Feb 15, 2007, at 5:21 PM, Alvaro Herrera wrote:
Casey Duncan wrote:
To fix the problem, set pg_database.datallowconn=true for template0,
then connect to it and do a VACUUM FREEZE. Then set
datallowconn=false
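Spelled out, the fix is roughly (run as superuser; sketch only):

UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
-- reconnect to template0 (e.g. \c template0 in psql), then:
VACUUM FREEZE;
-- finally, disallow connections again
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';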
I have some nightly statistics queries that run against a view which
unions several large tables. Recently one of these queries started
running into out-of-memory errors. This is on PostgreSQL 8.1.8
running on 32-bit Debian Linux.
Here is the error in the log including the query (excluding
Anyone know of a unit testing framework for plpgsql stored procedures?
We are about to convert a bunch of stored procedures from Oracle over
to Postgresql. We currently use the utPLSQL package to unit test
those, so something comparable to it would be optimal.
We'll probably roll our own if nothing
I have this report query that runs daily on a table with several
hundred million rows total using pg 8.1.3 on Debian Linux on hw with
dual opteron processors:
SELECT count(*) FROM webhits
WHERE path LIKE '/radio/tuner_%.swf' AND status = 200
AND date_recorded >= '3/10/2006'
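(One idea worth testing, with a hypothetical index name: a partial
index matching the report's fixed filters, so only the date range is
scanned.)

CREATE INDEX webhits_tuner_idx ON webhits (date_recorded)
WHERE path LIKE '/radio/tuner_%.swf' AND status = 200;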
On Mar 13, 2006, at 9:50 AM, Michael Fuhr wrote:
On Sun, Mar 12, 2006 at 11:36:23PM -0800, Casey Duncan wrote:
SELECT count(*) FROM webhits
WHERE path LIKE '/radio/tuner_%.swf' AND status = 200
AND date_recorded >= '3/10/2006'::TIMESTAMP
AN
On Mar 13, 2006, at 5:25 PM, Casey Duncan wrote:
[..]
If I restart the postmaster, the query will complete in the expected
time.
Does the problem eventually start happening again? If so, after
how long? How did you determine that the restart is relevant? Do
you consistently see different
On Mar 14, 2006, at 1:31 PM, Casey Duncan wrote:
On Mar 13, 2006, at 5:25 PM, Casey Duncan wrote:
[..]
If I restart the postmaster, the query will complete in the expected
time.
Does the problem eventually start happening again? If so, after
how long? How did you determine that the restart
On May 8, 2006, at 3:33 PM, Karen Hill wrote:
What is your favorite front end for end users to interact with your
postgresql db? Is it java, .net, web apache + php, MS-Access, ruby on
rails? Why is it your favorite? Which would you recommend for end
users on multiple OSes?
This is totally d
Trying to drop a database this morning, I ran into the not-so-unusual
error:
dropdb: database removal failed: ERROR: database "test_seg1" is
being accessed by other users
however, when I did "select * from pg_stat_activity" on the pg
server, it showed no connection to that db. Then I looked at the processes:
On May 17, 2006, at 12:34 PM, Tom Lane wrote:
Casey Duncan <[EMAIL PROTECTED]> writes:
however, when I did "select * from pg_stat_activity" on the pg
server, it showed no connection to that db. Then I looked at the
processes:
tmp0% ps ax | grep test_seg1
10317 ?   D
Are there any other partitions on that machine with space available?
If so you could move some files there from your postgres data dir and
symlink to them from their original location. At least then you might
get it to start so you can get a pg_dump to work.
-Casey
On Jun 19, 2006, at 1:43
It seems you were running a pre-8.x PostgreSQL version before; its
data files are not compatible with the new version you have now.
You'll need to find out the version that used to be installed by
looking at the PG_VERSION file in your postgres data directory.
Once you do that, you will need