Hi,
I'm in the unfortunate position of having "invalid page header(s) in
block 58591 of relation 'pg_toast_302599'". I'm well aware that the
hardware in question isn't the most reliable. Nonetheless, I'd
like to restore as much of the data as possible.
A pg_filedump analysis of the file
Hi,
Tom Lane wrote:
Hm, looks suspiciously ASCII-like. If you examine the page as text,
is it recognizable?
Doh! Yup, it is recognizable. It looks like some PHP serialized output:
png%";i:84;s:24:"%InfoToolIconActive.png%";i:85;s:29:"%InfoToolIconHighlighted.png%";i:86;s:26:"%InfoToolIconInact
Hi,
David Fetter wrote:
Very few people actually need synchronous replication, and those who
do buy Oracle's RAC (and curse it) or use DB2's offering (and also
curse it ;). For most purposes, fast asynchronous replication is good
enough.
While this is certainly true, please keep in mind that
Hi,
Markus Schiltknecht wrote:
I've done that (zeroing out the pg_toast table page) and hope
the running pg_dump goes through fine.
Unfortunately, pg_dump didn't go through. I already did some REINDEXing
and VACUUMing. Vacuum fixed something (sorry, I don't recall
Hi,
Matthew wrote:
Hey all, new postgres user here. We are trying to set up/research an
HA/replicated solution with PostgreSQL between a datacenter in LA and a
datacenter in NY.
We have a private LAN link between the two datacenters with a max round-trip
of 150ms.
We will have a web server at each datacenter (
Hi,
Gregory Stark wrote:
Only if your application is single-threaded. By single-threaded I don't refer
to operating system threads but to the architecture. If you're processing a
large batch file, handling records one by one and waiting for each commit
before proceeding, then it's single-threaded.
Hi,
Bill Moran wrote:
I'm curious as to how Postgres-R would handle a situation where the
constant throughput exceeded the processing speed of one of the nodes.
Well, what do you expect to happen? This case is easily detectable, but
I can only see two possible solutions: either stop the node
Hi,
Marko Kreen wrote:
Such a situation is not a problem specific to Postgres-R or to
synchronous replication in general. Asynchronous replication
will break down too.
Agreed, except that I don't consider slowness as 'breaking down'.
Regards
Markus
Hello Bill,
Bill Moran wrote:
It appears as if I miscommunicated my point. I'm not expecting
PostgreSQL-R to break the laws of physics or anything, I'm just
curious how it reacts. This is the difference between software
that will be really great one day, and software that is great now.
Agree
Hi,
Bill Moran wrote:
First off, "clustering" is a word that is too vague to be useful, so
I'll stop using it. There's multi-master replication, where every
database is read-write, then there's master-slave replication, where
only one server is read-write and the rest are read-only. You can
ad
Hi,
Bill Moran wrote:
While true, I feel those applications are the exception, not the rule.
Most DBs these days are the blogs and the image galleries, etc. And
those don't need or want the overhead associated with synchronous
replication.
Uhm.. do blogs and image galleries need replication a
Hi,
Decibel! wrote:
But is the complete transaction information safely stored on all nodes
before a commit returns?
Good question. It depends very much on the group communication system
and the guarantees it provides for message delivery. For certain, the
information isn't safely stored on e
Hi,
Denis Gasparin wrote:
Why not implement connection pooling server-side, as Apache for
example does?
This has certainly been discussed before.
IIRC the real argument against that was that fork() isn't the most
expensive thing to do anymore. And Postgres does lots of other stuff
afte
Hi,
novnov wrote:
OK, this has been very informative and I'd like to thank the three of you.
Asynchronous replication to read-only slaves is something I will look into.
I've never touched postgres replication; and Scott mentioned that he was not
familiar with PGCluster, so there must be some ot
Hello Sharmi Joe,
sharmi Joe wrote:
Is there a way to get Oracle's rank() over partition by queries in
PostgreSQL?
These are known as window functions. AFAIK Gavin Sherry is working on an
implementation for Postgres.
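For reference, the kind of query being asked about looks roughly like
this (a sketch, assuming a hypothetical table emp(dept, salary)):
SELECT dept, salary,
       rank() OVER (PARTITION BY dept ORDER BY salary DESC) AS dept_rank
FROM emp;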
Regards
Markus
Hi,
Ketema Harris wrote:
as expected I can do select * from states and get everything out of the
child table as well. What I can't do is create a FK to the states table
and have it look in the child table as well. Is this on purpose? Is it
possible to have an FK that spans into child tables?
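To illustrate the setup described (a sketch with hypothetical names,
not the poster's actual schema):
CREATE TABLE states (code char(2) PRIMARY KEY, name text);
CREATE TABLE territories (admin text) INHERITS (states);
-- SELECT * FROM states also returns the rows of territories, but a
-- foreign key only checks rows physically stored in states itself:
CREATE TABLE addresses (
    id    serial PRIMARY KEY,
    state char(2) REFERENCES states (code)
);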
Hi,
I've been trying to add a unique constraint on a column and on a function
result of a column, i.e.:
CREATE TABLE test (
id SERIAL PRIMARY KEY,
t1 TEXT NOT NULL,
t2 TEXT NOT NULL,
UNIQUE (t1, lower(t2)));
That fails with a syntax error (on 8.2beta1). While UNIQUE(t1,
Emanuele Rocca wrote:
you'll get a duplicate key error.
Thank you, that solves my problem.
Although it makes me wonder even more why I'm not allowed to define such
a constraint. Looks like all the necessary backend code is there.
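For the record, the workaround is a unique expression index, which uses
that same backend code (a sketch against the test table from the
original post, minus the failing UNIQUE clause; inserting a duplicate
then raises the duplicate key error mentioned):
CREATE UNIQUE INDEX test_t1_lower_t2_key ON test (t1, lower(t2));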
Regards
Markus
Hi,
this is sort of a silly question, but: what's the proper way to
intentionally generate an error? I'm writing tests for pyPgSQL and want
to check its error handling. Currently, I'm using:
SELECT "THIS PRODUCES AN SQL ERROR";
Is there any better way to generate errors? Probably even gener
Hello Matthias,
[EMAIL PROTECTED] wrote:
In PL/pgSQL you could use the RAISE command:
http://www.postgresql.org/docs/8.1/interactive/plpgsql-errors-and-messages.html
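For example, inside a PL/pgSQL function a custom error can be raised
with something like (a minimal sketch):
RAISE EXCEPTION 'test error: %', 42;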
Thank you, good to know. Unfortunately I'm not in a PL/PgSQL function,
just a plain query. Some standard functions which invok
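For the archives, a couple of plain-query ways to provoke a
well-defined error without PL/pgSQL (sketches, any one will do):
SELECT 1/0;                    -- division by zero
SELECT 'not a number'::int;    -- invalid input syntax for integer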
Hello Stefan,
Stefan Sassenberg wrote:
Hello,
I've got a failing SQL script that I execute with the psql command. The
script contains:
I've been unable to reproduce the error with just that snippet (on
debian with PostgreSQL 8.1.4). Can you provide a stripped down test case?
Hi,
One of our PostgreSQL 8.1.5 databases constantly crashed on a certain
query (backend SEGFAULTs). I figured out the crashes were caused by a
very long IN() clause.
You can easily reproduce the crash by feeding the output of the python
script below to your database.
Fortunately, 8.2 (as o
Hi,
thanks for testing; unfortunately I don't have an 8.0 around. And as 8.2
works and is probably coming very soon...
Regards
Markus
Shelby Cain wrote:
I don't get a segfault on 8.0.8 under linux or 8.1.4 under win32. The backend
(correctly I assume) issues a hint to increase max_stack_de
ne in question. I've now set it to 7000 and I also get a warning
instead of a SEGFAULT.
Thank you!
Markus
[1]:
http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html
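(The change amounts to something like this in postgresql.conf; on 8.1
the value is in kilobytes:)
max_stack_depth = 7000    # kB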
Alvaro Herrera wrote:
Markus Schiltknecht wrote:
Hi,
One of our PostgreSQL 8.1.5 databases consta
Hi,
Richard Huxton wrote:
If you can reliably reproduce it (I can't here - Debian on x86) - a
bug-report on the bugs mailing list or the website would probably be
appreciated by the developers. PG version, OS version, method of install
etc.
I've thought about that, but I somehow just *knew*
Hi,
I'm trying to install an SSL certificate. psql correctly shows:
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
How can I check what certificate it has sent? Or what key it uses?
Thank you
Markus
Hi,
I want to convert some large objects to bytea fields on the server.
Searching through the documentation didn't reveal any hints. Am I
missing something or is there really no such thing as a
lo_convert_to_bytea function?
Regards
Markus
this is less
attractive ;-)
Regards
Markus
Dimitri Fontaine wrote:
Hi,
On Tuesday, 14 November 2006 at 14:36, Markus Schiltknecht wrote:
I want to convert some large objects to bytea fields on the server.
Searching through the documentation didn't reveal any hints. Am I
missing something o
Hi,
Tomi N/A wrote:
> When the subselect returns a lot of results, pgsql really takes its
> time.
8.1.something
PostgreSQL 8.2 improved a lot for IN clauses with lots of values. I
think it now performs as well as an equivalent join query.
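(To illustrate: on 8.2 a long list of values can also be written as a
join against a VALUES list, which the planner can treat much like a
regular join; a sketch with made-up names:)
SELECT t.*
FROM some_table t
JOIN (VALUES (1), (2), (3)) AS v(id) ON v.id = t.id;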
Regards
Markus
Hi,
Joost Kraaijeveld wrote:
Does PostgreSQL lock the entire row in a table if I update only 1
column?
Yes. In PostgreSQL, an update is much like a delete + insert. A
concurrent transaction will still see the old row. Thus the lock only
prevents other writing transactions, not readers.
Reg
Hi,
Dave Cramer wrote:
Apparently I've completely misunderstood MVCC then
Probably not. You are both somewhat right.
Jens Schipkowski wrote:
>> That's not right. UPDATE will force a RowExclusiveLock on rows
>> matching the WHERE clause, or all rows if none is specified.
That's almost right, Ro
postgres needs to be able to hold the file at least once in memory.
Any idea on how to speed this up?
Regards
Markus
Dimitri Fontaine wrote:
Hi,
On Tuesday, 14 November 2006 at 14:36, Markus Schiltknecht wrote:
I want to convert some large objects to bytea fields on the server.
Searching through
Hi,
John D. Burger wrote:
Sure, but they won't use PG either, for essentially the same reason,
since =all= PG support is "third party".
Maybe. But at least these third parties can take the source and build
their own product on top of it, without significant limitations.
So one can debate if
Hi,
I've sort of solved the problem for me. I'm now doing one single
lo_read() to fetch the bytea field. Those functions do not operate on
the large object OID directly; one needs to open the object first with
lo_open().
I'm doing another hack to get the size of the large object.
All combined in a sql
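Roughly, the combined function looks like this (a PL/pgSQL sketch, not
necessarily the exact function used; 262144 is INV_READ, and the
server-side lo_* functions are called directly):
CREATE OR REPLACE FUNCTION lo_to_bytea(loid oid) RETURNS bytea AS $$
DECLARE
    fd     integer;
    size   integer;
    result bytea;
BEGIN
    fd     := lo_open(loid, 262144);  -- 262144 = INV_READ
    size   := lo_lseek(fd, 0, 2);     -- seek to the end to learn the size
    PERFORM lo_lseek(fd, 0, 0);       -- back to the start
    result := loread(fd, size);
    PERFORM lo_close(fd);
    RETURN result;
END;
$$ LANGUAGE plpgsql STRICT;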
Hello Dennis,
Dennis wrote:
Is there any feasible way to achieve geographical redundancy of a
PostgreSQL database?
As nobody mentioned it up until now: please check the very nice
documentation about High Availability and Failover here:
http://www.postgresql.org/docs/8.2/static/high-availabil
Hi,
I've just stumbled across the Mimer SQL Validator (commercial product):
http://developer.mimer.com/validator/
Not that I know it...
Anyway, there are different things (like PHP scripts or stored
procedures and such), which do a whole lot of other logic and/or
processing which influences th
Hi,
marcelo Cortez wrote:
Yes I know, but if you define a bytea field and store
bytea in this field, decode doesn't work,
Sure it does:
test=# select encode(E'\\000\\001', 'escape')::text;
  encode
----------
 \000\x01
(1 row)
If you inspect the function, you'll find that encode can *only
Hi,
marcelo Cortez wrote:
Are you sure you tested with a real bytea field?
Yeah, I store bytea using the encode function, as you
say.
I never said 'use encode function to store bytea'. I tried to explain
that encode returns TEXT.
The field of my table is bytea type, and stores real
by
Hi,
Geoffrey wrote:
We are trying to track down an issue with our PostgreSQL application. We
are running PostgreSQL 7.4.13 on Red Hat Enterprise ES 3.
We have a situation where the postgres backend process drops core and
dies.
Are there any log messages from the dying process, especially jus
Hi,
Henrik Zagerholm wrote:
Which takes about 80 seconds to complete.
The hardware is a Pentium 4 2.8GHz with 1GB HyperX memory.
Is this normal? What can I tweak in postgresql.conf to speed up big
to_tsvector()?
Hm.. seems not too unreasonable to me.
Take a look at the stemmers or dictionar
Hi,
I'm fiddling with to_tsvector() and parse() from tsearch2, trying to get
the word position from those functions. I'd like to use the tsearch2
parser and stemmer, but I need to know the exact position of the word as
well as the original, unstemmed word.
What I came up with so far is prett
Hello Teodor,
Teodor Sigaev wrote:
That's not the intended usage... Why do you need that?
Well, long story... I'm still using my own indexing on top of the
tsearch2 parsers and stemming.
However, two obvious cases come to mind:
- autocompletion, where I want to give the user one of the possible
Hi,
Teodor Sigaev wrote:
I'm fiddling with to_tsvector() and parse() from tsearch2, trying to
get the word position from those functions. I'd like to use the
tsearch2 parser and stemmer, but I need to know the exact position of
the word as well as the original, unstemmed word.
It's not suppo
Hi,
Teodor Sigaev wrote:
Word number is used only in ranking functions. If you don't need a
ranking, then you could safely strip positional information.
Huh? I explicitly *want* positional information. But I find the word
number to be less useful than a character number or a simple (byte)
poi
Hello Teodor,
Teodor Sigaev wrote:
The byte offset of a word is useless for ranking purposes
Why is a word number more meaningful for ranking? Are the first 100
words more important than the rest? That seems as ambiguous as saying
the first 1000 bytes are more important, no?
Or does the ranking w
Hi,
Mike Rylander wrote:
No, the first X aren't more important, but being able to determine
word proximity is very important for partial phrase matching and
ranking. The closer the words, the "better" the match, all else being
equal.
Ah, yeah, for word-pairs, that certainly helps.
Thanks.
Re
Hi,
hubert depesz lubaczewski wrote:
I contacted the company some time ago, and the information I got was
that their product is based on query replication.
Yes, AFAIK, their solution is two-phase-commit based, like Sequoia.
Regards
Markus
Hi,
Devrim GÜNDÜZ wrote:
Yes, AFAIK, their solution is two-phase-commit based, like Sequoia.
I thought it was PGCluster. At least this is what I understood from the
drawings.
Uhm, you're right, it looks very similar to PGCluster, not Sequoia. So
it's not two-phase-commit based, right?
Reg
Hi,
tom wrote:
Initially it seems that the WHERE IN (...) approach takes a turn for the
worse when the list gets very large.
What version are you using? PostgreSQL 8.2 had great improvements for that
specific issue. Did you try EXPLAIN?
Regards
Markus
Hi,
David Fetter wrote in the weekly news:
Another PostgreSQL Diff Tool 1.0.0_beta20 released.
http://pgfoundry.org/projects/apgdiff/
Why is it 'another' one? What others exist? (Specifically, are there
any that don't depend on Java?)
Regards
Markus
Hi,
thanks for the links. I've had a quick look at the first two and will
comment on my findings:
Robert Treat wrote:
There's this one, which uses Tcl:
https://sourceforge.net/projects/pgdiff
Seems outdated: 2002, PostgreSQL 7.2, ~1500 lines of code (which I
don't really understand, I simply don't
Hi,
when using LIMIT, how do I tell the planner to only call a function for
rows it returns?
An example: I want to fetch the top five categories. A function
get_category_text_path(cat_id int) returns the textual representation of
the category. For that I do something like:
SELECT id, get_categor
Hello Terry,
Thanks a lot. That's so simple I didn't see it. (The original query is
much more complex.)
The only problem is, rank is not a column of category itself, but of a
joined table. With this solution, the join will have to be performed
twice. But since this doesn't cost that much and because t
On Tue, 2006-05-02 at 14:02 +0200, Martijn van Oosterhout wrote:
> How about:
>
> SELECT id, get_category_text_path(id)
> FROM (SELECT id FROM category
> ORDER BY rank
> LIMIT 5) as x;
Oh that works? Great!
Let me see, with 'rank' from a joined table that looks like:
SELECT id, get_cate
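Sketching it out, assuming a hypothetical table
category_rank(category_id, rank) supplying the rank:
SELECT id, get_category_text_path(id)
FROM (SELECT c.id
      FROM category c
      JOIN category_rank r ON r.category_id = c.id
      ORDER BY r.rank
      LIMIT 5) AS x;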
Hi,
is there an easy way to convert a large object to a bytea field?
Thanks
Markus
Hi,
I was trying to create an updateable view. Suddenly I got foreign key
violations when using nextval('myseq').
As I understand, the rewriter does something similar to a simple text
replacement (I guess copying the plan tree nodes?) so that nextval gets
evaluated again for every rule that appli
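A sketch of the effect described (hypothetical names; the rule system
macro-expands the expression, so nextval() runs once per statement it
ends up in):
CREATE SEQUENCE myseq;
CREATE TABLE parent (id int PRIMARY KEY, a text);
CREATE TABLE child  (parent_id int REFERENCES parent (id), b text);
CREATE VIEW v AS
    SELECT p.id, p.a, c.b
    FROM parent p JOIN child c ON c.parent_id = p.id;
CREATE RULE v_insert AS ON INSERT TO v DO INSTEAD (
    INSERT INTO parent (id, a) VALUES (NEW.id, NEW.a);
    INSERT INTO child (parent_id, b) VALUES (NEW.id, NEW.b)
);
-- NEW.id is substituted textually into both INSERTs, so the sequence
-- advances twice and the child row points at a parent id that was
-- never stored, giving the foreign key violation:
INSERT INTO v (id, a, b) VALUES (nextval('myseq'), 'foo', 'bar');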
On Fri, 2006-05-12 at 15:57 +0200, Martijn van Oosterhout wrote:
> It's a known problem. It's also one of the reasons why triggers are
> recommended over rules. And it's not desirable behaviour.
Well, triggers cannot be used to create writeable views, can they?
> There have been discussions about
Hi Martijn,
On Fri, 2006-05-12 at 18:05 +0200, Martijn van Oosterhout wrote:
> But it can't really. In the example that started this thread, there are
> two separate rules and after rewriting the executor will be presented
> with two separate queries.
Ah, thank you, that explains the difficulties with
Hi Arnaud,
perhaps you can still use Slony-I for replication and have another tool
automatically handle connections (check out PgPool[1] or SQLRelay[2]).
Or go for a middleware replication solution. Check C-JDBC[3], perhaps
there is something similar for ODBC?
LifeKeeper seems to handle replicat
Hi Jonathon,
does the following command succeed?
# psql template1 -c "CREATE DATABASE test;"
The database 'postgres' is a system database which has been added in 8.2
(or 8.1 already, dunno). It should exist if you used the correct initdb
and postmaster.
What does psql -l say?
And did you rech
Hi Jonathon,
Jonathon McKitrick wrote:
: # psql template1 -c "CREATE DATABASE test;"
CREATE DATABASE
is the result.
Looks good. Can you connect to that database then?
: What does psql -l say?
FATAL: database 'postgres' does not exist
As Tom said: check if you are really calling your self
Hi,
I'm getting the following error from my python script, which tries to
insert lots of data in one transaction:
libpq.OperationalError: ERROR: failed to fetch new tuple for AFTER trigger
I have several AFTER triggers in place; which one raises this error? I'm
sure I only INSERT data, no U
something changed so that it's worth trying current CVS? I'll try to
come up with a test case; the problem is not easy to isolate, though.
Regards
Markus
Tom Lane wrote:
Markus Schiltknecht <[EMAIL PROTECTED]> writes:
I'm getting the following error from my python script, whi
On Mon, 2006-07-24 at 14:54 -0400, Tom Lane wrote:
> Right offhand the only way that I could see for the tuple to disappear
> before the trigger fires is if a concurrent VACUUM removed it, which
> should not happen for a tuple inserted by a still-active transaction.
> If you've got autovacuum runni
Hi,
how can I get the database name or OID of the current backend in a SPI
function (in plain C)? I tried including storage/proc.h and accessing
MyProc->databaseId, but that leads to a segfault :-( (and seems like
the wrong way to do it.)
The SPI documentation didn't help.
Thank you
Marku
Whoops, sorry, there was another reason for the segfault. Using
MyProc->databaseId works. Is it the right way to do it, though?
Markus Schiltknecht wrote:
Hi,
how can I get the database name or OID of the current backend in a SPI
function (in plain C)? I tried including storage/proc.h
Hi,
thank you both. I first tried that, but the segfault really irritated
me. It's now working fine with miscadmin.h. Sorry for the noise.
Regards
Markus
Tom Lane wrote:
Actually I'd recommend you use the global MyDatabaseId from
"miscadmin.h". It'll be the same value, but it's always best
Tony Caduto wrote:
http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
Don't know the validity of this DVD order test they did, but the article
claims PostgreSQL only did 120 OPM.
Seems a little fishy to me.
Now, this article really s**ks! First of all, the original contest was
spec
Scott Marlowe wrote:
Was this all the same basic task implemented by different teams then?
Yep.
Can we see the code? hack it? I'm sure someone here could help out.
Sure.
I don't care about the contest, but it would be nice to be able to put
out a version that could compete with MySQL's.