> The web site lists a lot of the limitations for postgresql, but not the
> max number of tables that are allowed in a given db. Anyone care to come
> up with a number?
It is limited by your filesystem's ability to handle large directories.
I have noticed in the past (not with PostgreSQL per se
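Each table is stored as its own file in the database directory, which is why the directory size is the practical ceiling rather than any hard PostgreSQL limit. As a rough check of how many tables a database already holds, one catalog query will do (standard SQL, not from the original post):
SELECT count(*) FROM pg_class WHERE relkind = 'r';  -- ordinary tables (system catalogs included); each is a file on disk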
Tom Lane wrote:
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> >> I'd say you shouldn't release it at all.
>
> > As far as I see I'm not releasing it.
>
> Oh, I hadn't looked closely at reindex_index. Hmm... okay, you are
> holding the lock acquired there.
Yes, reindex_index is guarding itself.
Hiroshi Inoue <[EMAIL PROTECTED]> writes:
>> I'd say you shouldn't release it at all.
> As far as I see I'm not releasing it.
Oh, I hadn't looked closely at reindex_index. Hmm... okay, you are
holding the lock acquired there. But you're still acquiring it way too
late for my taste. All of the
The web site lists a lot of the limitations for postgresql, but not the
max number of tables that are allowed in a given db. Anyone care to come
up with a number?
- Brandon
b. palmer, [EMAIL PROTECTED]
Tom Lane wrote:
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > Where do I release the lock ?
>
> I'd say you shouldn't release it at all.
As far as I see I'm not releasing it.
regards,
Hiroshi Inoue
Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> Where do I release the lock ?
I'd say you shouldn't release it at all. Let it be held until end of
transaction.
rel = heap_open(relid, AccessExclusiveLock);
...
heap_close(rel, NoLock); /* close rel, keep lock till end of xact */
Tom Lane wrote:
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > One backend would be blocked by another one because reindex_relation
> > calls reindex_index and reindex_index grabs an ExclusiveLock on the
> > relation.
> > Am I missing anything ?
>
> It'd be okay if you *held* the lock througho
Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> One backend would be blocked by another one because reindex_relation
> calls reindex_index and reindex_index grabs an ExclusiveLock on the
> relation.
> Am I missing anything ?
It'd be okay if you *held* the lock throughout. Grabbing and releasing
it i
> > > I guess this is because PG really has to lock the hash table entry in
> > > both cases. It does, however, make HASH indices completely useless for
> > > any table that you might want to update.
> > >
> > > Is this a known feature?
> >
> > Yes, I have heard about this problem. Would you te
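If the hash-index locking described above is the bottleneck, the usual workaround is simply a btree index on the same column; a sketch with made-up table and index names:
-- hypothetical names; btree is the default index type and avoids the
-- hash-entry locking behaviour described above
DROP INDEX counters_val_hash_idx;
CREATE INDEX counters_val_idx ON counters USING btree (val);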
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
>
> Alexander Loginov <[EMAIL PROTECTED]> writes:
> > I have only one small question. Can I do REINDEX during inserting
> > of information into tables. Or I must block somehow updating of
> > tables.
>
> Hmmm
On Thu, Aug 16, 2001 at 08:10:41PM -0400, [EMAIL PROTECTED] wrote:
> OK, I set enable_hashjoin and enable_mergejoin to off
> and performance is much, much worse: a job that took just over
> 1 second becomes a one-minute job
>
> Perhaps I should re-check if the database
> gets bigger.
>
> Thanks a lot
>
> On Thu,
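For anyone repeating that experiment, the switches involved are ordinary run-time settings; a sketch of the session:
SET enable_hashjoin TO off;
SET enable_mergejoin TO off;
EXPLAIN SELECT ...;           -- re-plan the query with hash and merge joins disabled
SET enable_hashjoin TO on;    -- restore the defaults afterwards
SET enable_mergejoin TO on;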
> How can I use a Unix timestamp as a data type?
You don't want to. Really.
> I thought that the timestamp data type used the Unix timestamp but
> found out that it didn't.. :(
Right. One-second resolution and limited range were considered a problem.
> Do I need to use an int data type for holding
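If the application really has to exchange Unix epoch values, the usual approach is to keep a timestamp column and convert at the edges; a rough sketch (the table and column names here are invented):
-- timestamp -> Unix epoch seconds
SELECT EXTRACT(EPOCH FROM created_at) FROM events;
-- Unix epoch seconds -> timestamp
SELECT TIMESTAMP 'epoch' + 1000000000 * INTERVAL '1 second';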
Jeremy Hansen writes:
> I have three tables, primary key, bug_id. Bug_id is a foreign key in the
> other two tables, with a cascade on update and delete.
>
> What I've done is created three sequences for each table for the bug_id.
> Sometimes the table with the primary key gets its sequence out
> Any suggestions would be more than appreciated, as always. Is there a
> better way to do what I'm trying to do?
I would recommend using only one sequence for the master table. Then just
reference that sequence value for the two foreign keys. For example:
BEGIN;
INSERT INTO Bugs (bug_date) VALUE
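A sketch of how the rest of that transaction could look, assuming the serial column's sequence is named bugs_bug_id_seq and the two detail tables are called bug_comments and bug_attachments (placeholder names, not from the original message):
BEGIN;
INSERT INTO Bugs (bug_date) VALUES (CURRENT_DATE);
-- currval() returns the bug_id just assigned to this session by the serial column
INSERT INTO bug_comments (bug_id, comment)
    VALUES (currval('bugs_bug_id_seq'), 'first note');
INSERT INTO bug_attachments (bug_id, filename)
    VALUES (currval('bugs_bug_id_seq'), 'trace.log');
COMMIT;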
"Ben-Nes Michael" <[EMAIL PROTECTED]> writes:
> It's not that I care about holes, but if rollback means returning the DB to
> its original state because of a failed SQL command chain, then logically I expect
> that everything will roll back as if the action never occurred
That's the general rule, bu
Andrew Snow wrote:
>
>
> > I have found (and confirmed by studying the PostgreSQL
> > source) that to reliably insert arbitrary binary data into a
> > bytea column there are only 3 characters which need to be
> > escaped: \000, \047 ( ' ), \134 ( \ ). Here's the PHP
> > function that I've been usi
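In plain SQL those three byte values look like this once escaped (using the imagetable/image names that appear later in the thread):
-- octal escapes inside a bytea literal; each backslash is doubled because the
-- string parser strips one level before the bytea input routine sees it
INSERT INTO imagetable (image) VALUES ('\\000\\047\\134');
-- stores exactly three bytes: NUL, single quote, backslash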
[EMAIL PROTECTED] writes:
> Looking at the docs Peter Eisentraut added this note at the bottom of the
>documentation for character data.
> ---
> The maximum value for "n" is 2147483648. The longest possible string is actually
>only about 1 GB, though. The storage size of "(4+n)" is actually inc
> I have found (and confirmed by studying the PostgreSQL
> source) that to reliably insert arbitrary binary data into a
> bytea column there are only 3 characters which need to be
> escaped: \000, \047 ( ' ), \134 ( \ ). Here's the PHP
> function that I've been using:
PostgreSQL, in treatin
It's not that I care about holes, but if rollback means returning the DB to
its original state because of a failed SQL command chain, then logically I expect
that everything will roll back as if the action never occurred
> Why do people care about holes anyway? I've never understood that...
>
-
> If I design a table with SERIAL type and then try to insert few rows, but
> some of the rows are rolled back the roll back process does not roll back
> the counter status.
>
> Is there a way to do a roll back to the counter ?
And I suppose if I deleted some rows you'd want
all rows with higher
Two estimates I understand are quite
good.
select distinct id from body_index where string='book'
returns about 1500 rows. That matches with
the bottom line of the plan.
There are 5139 rows in table item. It is
the same number of rows in the plan for
the sequential scan.
If I were doing a manual join
> I am creating a DB which is going to have to store a lot of small
> images. Nothing will be over 10k. This is with PG7.2. I have a few
> questions: Should I use blob or bytea (neither seem to be well
> documented), and how do I access that binary information? For
> instance, if I have this t
Hey guys,
I'm a little confused by the documentation on the varchar data type in PostgreSQL.
Looking at the docs, Peter Eisentraut added this note at the bottom of the
documentation for character data.
---
The maximum value for "n" is 2147483648. The longest possible string is actually only
about 1
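A small illustration of what that note means in practice; the declared length is optional, and without one only the roughly 1 GB per-value limit applies (table and column names are made up):
CREATE TABLE notes (
    title varchar(200),   -- length checked on input
    body  varchar         -- no modifier: behaves like text, capped only by the ~1 GB limit
);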
On Thu, 16 Aug 2001, Martijn van Oosterhout wrote:
> You can tell the order anyway. The order doesn't change, it's only that
> there may be numbers missing,
Please note the message from "Michael Ansley (UK)". If two or more
connections get a cache from a sequence and the inserts are 'sporadic'
[EMAIL PROTECTED] writes:
> Can someone explain why pg is doing
> a sequential scan on table item with the following
> statement
Looks like a fairly reasonable plan to me, if the rows estimates are
accurate. Are they?
regards, tom lane
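Since the answer hinges on whether the row estimates are accurate, the usual next step is to refresh the statistics and look at the plan again; a sketch using the table names from the query:
VACUUM ANALYZE item;
VACUUM ANALYZE body_index;
EXPLAIN SELECT DISTINCT h.id, i.item, i.heading, i.finish
FROM item i, body_index h
WHERE h.id = i.item AND h.string = 'book'
ORDER BY finish;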
> PostgreSQL 7.1.2 Administrator's Guide says as follows:
>
> Chapter 8. Backup and Restore
> 8.1. SQL Dump
>
> Important: When your database schema relies on OIDs (for instance as
> foreign keys) you must instruct pg_dump to dump the OIDs as well. To do
> this, use the -o c
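The switch mentioned there is just a pg_dump command-line flag; for example (database and file names are placeholders):
pg_dump -o mydb > mydb.dump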
> It would be convenient to have fast access to binary data in
> the database
> that is similar to what other databases do. This would allow more
> applications to run on PostgreSQL.
For small sets of files you're probably right. When you start to reach
millions of files, it makes much more s
You still can tell the order even if you have holes in the key...
Carlos Felipe Zirbes
DBServer Assessoria em Sistemas de Informação
E-mail: [EMAIL PROTECTED]
Phone: (51) 3342-8055
Fax: (51) 3342-4838
-Original Message-
From: Roderick A. Anderson [mailto:[EMAIL PROTECTED]]
Sent: Thursday
Can someone explain why pg is doing
a sequential scan on table item with the following
statement
-
q=> explain select distinct h.id,i.item,i.heading,i.finish from item i ,body_index h
where h.id=i.item and (h.string='book') order by finish;
NOTICE: QUERY PLAN:
Unique (cost=6591.46..6606.5
On Thu, 16 Aug 2001, Martijn van Oosterhout wrote:
> Why do people care about holes anyway? I've never understood that...
The single seat syndrome? (It's my database and I'm the only one using
it.)
Trying to put intelligence into the field? (I can tell the order the
entries were made in the t
From: "Ben-Nes Michael" <[EMAIL PROTECTED]>
> If I design a table with SERIAL type and then try to insert few rows, but
> some of the rows are rolled back the roll back process does not roll back
> the counter status.
>
> Is there a way to do a roll back to the counter ?
No - sequences (as used
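A minimal illustration of that behaviour (the table name is made up):
CREATE TABLE t_demo (id serial PRIMARY KEY, val text);
BEGIN;
INSERT INTO t_demo (val) VALUES ('a');   -- consumes id 1
ROLLBACK;
INSERT INTO t_demo (val) VALUES ('b');   -- gets id 2: nextval() is never rolled back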
* "Eric Ridge" <[EMAIL PROTECTED]> wrote:
|
| Also, if you're not going to be searching the bytes of the file (which
| I'm sure you're not), why put it in the database?
It would be convenient to have fast access to binary data in the database
that is similar to what other databases do. This w
> > Correct me if I'm wrong, but the only thing "lztext" can't store is
> > NULLs, which could be escaped somehow with other (rare) characters.
> > Still overhead, but you shouldn't be storing large pieces of binary
> > data this way, I think.
>
> The problem here is not with storing binary data
> only; the problem can be a 30% growth of the data... (you can use
> "lztext", which is a compressed data type :-).
Alternatively, you can store "pointers" to the images in the database.
Such as a local filepath or url (file:///usr/local/myimages/foo.gif or
ftp://user:passwd@host/pub/myimages/foo.gif). Then you
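A sketch of what the pointer approach could look like as a table (the column names are invented for the example):
-- only metadata lives in the database; the image bytes stay on disk or behind a URL
CREATE TABLE image_refs (
    id       serial PRIMARY KEY,
    url      text NOT NULL,    -- e.g. file:///usr/local/myimages/foo.gif
    mimetype text,
    bytes    integer
);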
Hi all,
Just wondering if anyone knows of or has tested for PostgreSQL buffer
exploits over the various interfaces (JDBC, ODBC, psql, etc) or directly
through socket connections?
Working on a sensitive application at the moment, and I've realised I've
never seen anyone mention testing PostgreSQL
On Thu, Aug 16, 2001 at 06:52:32PM +1000, Andrew SNow wrote:
>
> > Yes, but it is good to encode binary data (an image) into some ASCII-safe
> > encoding like base64, or else you will have problems in INSERT/UPDATE
> > queries.
>
> Correct me if I'm wrong, but the only thing "lztext" can't store is
> NULLs,
> Yes, but it is good to encode binary data (an image) into some ASCII-safe
> encoding like base64, or else you will have problems in INSERT/UPDATE
> queries.
Correct me if I'm wrong, but the only thing "lztext" can't store is
NULLs, which could be escaped somehow with other (rare) characters. Still
overhead,
On Thu, 16 Aug 2001, Karel Zak wrote:
> On Thu, Aug 16, 2001 at 06:02:00AM -, Dr. Evil wrote:
>
> > CREATE TABLE imagetable (
> >
> >    image BYTEA,
> >
> > );
> >
> > and then I do this in PHP:
> >
> > $result = pg_exec($db, "SELECT image FROM imagetable WHERE ..