There will be a BOF at LinuxWorld, San Francisco on Wednesday (today),
6:30pm in room B3.
--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup.
Finally figuring that enough is enough, I've been spending the past few
days working on the list archives ...
I've reformatted, so far, the following lists into a cleaner format:
pgsql-hackers
pgsql-sql
pgsql-bugs
pgsql-general
pgadmin-hackers
pga
On 28 Aug 2001, Doug McNaught wrote:
> "Gurunandan R. Bhat" <[EMAIL PROTECTED]> writes:
>
> > Is a postgres function to pack an entire row from a table into a
> > string available? I want something analogous to "serialize()" in php.
[expl. snipped]
>
> What problem are you trying to solve?
Hi,
I'm running 6.5.2 (I plan on upgrading as soon as I get this fixed)
and I get the following error when I try to dump the db:
[kmay@infosport hockey]$ pg_dump gamesheet > gamesheet1.out
dumpRules(): SELECT failed for table attendance. Explanation from
backend: 'ERROR: cache lookup of attri
Hi,
I have a confusing situation that I suspect is system related but would
like to know what needs fixing. Both systems run postgresql-7.1.2 with
perl and tcl enabled. (I've been trying to get pgreplica to work).
==
Host1 is a Pentium
"Gurunandan R. Bhat" <[EMAIL PROTECTED]> writes:
> Is a postgres function to pack an entire row from a table into a
> string available? I want something analogous to "serialize()" in php.
I'm not aware of one. You could certainly, and fairly easily, write a
PL/pgSQL (or PL/Tcl, etc.) function.
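The PL/pgSQL function Tom suggests could be sketched roughly as below. This is a hypothetical example, not from the thread: the table and column names (session, sessionid, usernumber, timestamp) and the ':' delimiter are assumptions, and the old-style quoted function body matches the 7.x releases under discussion.

```sql
-- Hypothetical sketch: concatenate the columns of one "session" row
-- into a single delimited string, loosely analogous to PHP's serialize().
CREATE FUNCTION serialize_session(text) RETURNS text AS '
DECLARE
    rec RECORD;
BEGIN
    SELECT INTO rec * FROM session WHERE sessionid = $1;
    IF NOT FOUND THEN
        RETURN NULL;
    END IF;
    RETURN rec.sessionid || '':'' || rec.usernumber::text
        || '':'' || rec."timestamp"::text;
END;
' LANGUAGE 'plpgsql';
```

A real serialize() equivalent would also need to escape the delimiter inside column values; this sketch ignores that.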
Hi,
I've got a DB where the WAL files were lost. I know I've lost some data, but
is there any way to get what is in the primary DB back out again? PG won't
start with bad WAL files, so... :(
Thanks for the help.
GB
--
GB Clark II | Roaming FreeBSD Admin
[EMAIL PROTECTED] | Genera
I have an application that includes data describing tree-like structures
of varying width and depth. It is important for me to be able to
recover the node-IDs for these data trees. The goal is to be able
to recover the tree node ID list given the root node ID:
Essentially, the tables are of th
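On current PostgreSQL releases this kind of subtree recovery is usually written with WITH RECURSIVE (not available in the 7.x versions discussed on this list; back then one would loop in a PL/pgSQL function instead). A minimal sketch, assuming a hypothetical nodes(id, parent_id) table where parent_id is NULL for a root:

```sql
-- Sketch only: table and column names are assumptions.
-- Recovers the ID of every node in the subtree rooted at node 1.
WITH RECURSIVE subtree AS (
    SELECT id FROM nodes WHERE id = 1
    UNION ALL
    SELECT n.id
    FROM nodes n
    JOIN subtree s ON n.parent_id = s.id
)
SELECT id FROM subtree;
```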
Hi all,
Is a postgres function to pack an entire row from a table into a
string available? I want something analogous to "serialize()" in php.
Thanks and regards,
Gurunandan
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
> Yup. We wrote the client that is accessing the database. It's using
> PHP, and we don't even *use* transactions currently. But that isn't the
> problem. From what I gather so far, the server is under fairly high
> load (6 right now) so vacuuming the database (520MB in files, 5MB dump)
> take
On 28 Aug 2001, Doug McNaught wrote:
> > Maybe stale indexes? Aborted vacuums? What on earth would cause that?
>
> VACUUM doesn't currently vacuum indexes. Yes, it's a serious wart. :(
Ah, now that makes sense. It would also explain why our daily inserts
of many thousands of rows on a fairl
Hi List!
I'm using the libpq C library and I'm looking for a C function
that returns the version (e.g. 7.1.3) of the backend server
I'm connected to.
Does anyone have an idea?
Thanx and greetings
Steve
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
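For the record: libpq later grew PQserverVersion() and PQparameterStatus(conn, "server_version") (both 7.4 and later), but against a 7.1-era backend the usual approach was simply to run SELECT version() and read the result. A minimal sketch; the connection string is an assumption and there is no error-free guarantee without a running server:

```c
/* Sketch: ask the backend for its version string via SQL.
 * Works against old servers too; with libpq >= 7.4 you could instead
 * call PQserverVersion(conn), which returns e.g. 70103 as an int. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* "dbname=template1" is an assumption; adjust to your setup. */
    PGconn *conn = PQconnectdb("dbname=template1");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "SELECT version()");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%s\n", PQgetvalue(res, 0, 0));  /* "PostgreSQL 7.1.3 on ..." */
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```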
Shaun Thomas <[EMAIL PROTECTED]> writes:
> Actually, on a whim, I dumped that 520MB database to its 5MB file, and
> reimported it into an entirely new DB. It was 14MB. We vacuum at least
> once an hour (we have a loader that runs every hour, it may run multiple
> concurrent insert scripts). W
On 28 Aug 2001, Doug McNaught wrote:
> You obviously know what you're doing, but are you absolutely sure one
> of your clients isn't holding a transaction open? That'll hang vacuum
> every time...
Yup. We wrote the client that is accessing the database. It's using
PHP, and we don't even *use*
Shaun Thomas wrote:
...
> The really strange thing is, one of our newer databases has
> started hanging on vacuums. That's a 7.1.1, so the 8k thing shouldn't be
> any kind of issue in the slightest thanks to the new internal structures.
>
> But something is corrupt enough to break vacuum badly.
> Yeah, I know. I was just trying to defend mysql. ^_^ We use both, and so
> far, it's been the smaller headache, so...
That may be true... until you have to implement transactions and/or foreign
keys at the application level.
I have a table called session:
               Table "session"
 Attribute  |           Type           | Modifier
------------+--------------------------+----------
 sessionid  | character(32)            | not null
 usernumber | integer                  | not null
 timestamp  | timestamp with time zone |
On Mon, 27 Aug 2001, Tom Lane wrote:
> The latter is what I'm interested in, since \d doesn't invoke anything
> that I'd consider crash-prone. Could you submit a debugger backtrace
> from the crash?
I should do that. But, since it's the back-end that's crashing, I'd need
to find some way of ge
On Tue, Aug 28, 2001 at 10:34:52PM +0900, Tatsuo Ishii wrote:
>
> > The 'BIG5' is client encoding only. PG can on the fly encode
> > data from some multibyte (unicode, mule_internal) encoding
> > used for server to big5 used on client, but you can't directly
> > use big5 at server (DB).
>
> N
> On Tue, Aug 28, 2001 at 06:03:55PM +0800, Michael R. Fahey wrote:
> > I compiled 7.1.3 with configure --multibyte=UNICODE and
> > --enable-unicode-conversion (Red Hat Linux 6.1, Kernel 2.2.19).
> >
> > Now I'm trying to follow the instructions given by Tatsuo Ishii in his
> > 18 March 2001 pos
Bhuvaneswari <[EMAIL PROTECTED]> writes:
> ERROR: mdopen: couldn't open test1: No such file or directory
Looks like you tried to roll back a DROP TABLE. You can get out of the
immediate problem by doing
touch $PGDATA/base/db1/test1
and then drop the table.
Next: run, do not walk, to an ar
hi,
I am getting the following error while doing vacuumdb,
ERROR: mdopen: couldn't open test1: No such file or directory
vacuumdb: database vacuum failed on db1.
Here 'db1' is the database and 'test1' is a table. When I display the
structure of the table 'test1', it comes out correctly. But I can'