Rod Taylor wrote:
>An 8K page will hold approximately 140 tuples with your structure. So,
>for every ~100 updates you'll want to run VACUUM (regular, not FULL) on
>the table.
Alas, for this application, that means a vacuum once every 5 seconds or
so. I'll see if I can set up a separate little task t
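A rough sketch of what such a "separate little task" might look like, as a plain shell loop (the database name, table name, and interval are placeholders, not details from this thread):

```shell
#!/bin/sh
# Hypothetical background task: VACUUM only the hot table every few seconds.
# "mydb" and "unique_ids" are stand-ins for the real database and table.
while true; do
    psql -q -d mydb -c 'VACUUM unique_ids;'
    sleep 5
done
```

This keeps the per-run cost small because only the one heavily-updated table is visited, not the whole database.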
On Sat, May 31, 2003 at 17:17:38 -0600,
Dave E Martin XXIII <[EMAIL PROTECTED]> wrote:
> Speaking of which, since the row is locked with select for update (so it
> can't be involved in any other transactions anyway) and the change
> doesn't change the length of the row, can't it just be updated
On Sat, May 31, 2003 at 16:56:56 -0600,
Dave E Martin XXIII <[EMAIL PROTECTED]> wrote:
>
> (ok, experimented a bit more just now)
> Hm, it appears that the degradation occurs with the index as well; I guess
> at the time I created the index, it just initially did better because it
> got to skip al
On Fri, May 30, 2003 at 16:33:55 +0400,
Nick Altmann <[EMAIL PROTECTED]> wrote:
> Hello!
>
> I don't think it's a bug, but it is a difficulty.
>
> I cannot use trusted connections for my localhost, so I use a password. But
> psql, pg_dump, and pg_restore don't accept a password parameter on the
> command line, and I cannot do a nightly backup unless I use trusted
> connections. Can you add a password co
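Although the message is cut off here, there are two long-standing ways to supply a password non-interactively without a command-line flag (the ~/.pgpass file only exists in newer clients, so check which your release supports; database names, users, and paths below are placeholders):

```shell
# Option 1: the PGPASSWORD environment variable, read by libpq clients
# such as psql and pg_dump. Suitable for an unattended nightly dump.
PGPASSWORD='secret' pg_dump -U backup -h localhost mydb > /backup/mydb.sql

# Option 2 (newer clients): a ~/.pgpass file, which must be mode 0600.
# Each line has the form hostname:port:database:username:password
echo 'localhost:5432:mydb:backup:secret' > ~/.pgpass
chmod 600 ~/.pgpass
```

The ~/.pgpass route avoids exposing the password in the process environment, which other local users may be able to read on some systems.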
Hi,
This is my first time running pg_dumpall on PostgreSQL 7.2 on Solaris 8.
As expected, it works well on the test platform, but not on the production one.
I can't find any difference except the user names: pgsql in test,
postgres in production. Their home directories differ too.
But, LD_LIBRAR
I am trying to connect to PostgreSQL through PHP and am
facing lots of problems. I tried to configure PostgreSQL
for that purpose, using the following command to set
PGALLOWTCPIP=yes:
sudo vi /etc/postgresql/postmaster.init
It says: "postgres is not in the sudoers file."
What does it mean?
Can I
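The error means the account you ran sudo from has not been granted any privileges in /etc/sudoers. A sketch of the usual fix (must be done as root; the privilege line is an illustrative example, not a security recommendation):

```shell
# Become root directly instead of relying on sudo:
su -
# Then edit the sudoers file with visudo (never edit /etc/sudoers by hand,
# since visudo checks the syntax before saving), adding a line such as:
#   postgres ALL=(ALL) ALL
visudo
```

Alternatively, skip sudo entirely and edit the file while logged in as root, which is all sudo would have done here anyway.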
POSTGRESQL BUG REPORT TEMPLATE
Your name : Gerhard Dieringer
Your email address : [EMA
Hi,
I have installed PostgreSQL on a Qube2 running NetBSD 1.6. When I try
to init the database, I get this error:
IpcSemaphoreCreate: semget(key=4, num=17, 03600) failed: No space left
on device
This error does *not* mean that you have run out of disk space.
It occurs when either the syste
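On NetBSD of this vintage the System V semaphore limits are compiled into the kernel, so raising them means editing the kernel config and rebuilding; the option names below are my recollection and the values are illustrative guesses, not tuned settings:

```shell
# Kernel config fragment (e.g. /usr/src/sys/arch/cobalt/conf/MYKERNEL):
#   options SEMMNI=32     # max number of semaphore identifiers
#   options SEMMNS=512    # max number of semaphores system-wide
# Then rebuild and install the kernel, and reboot.
#
# Alternatively, reduce PostgreSQL's demand instead: lower max_connections
# in postgresql.conf, since the postmaster allocates one semaphore per
# allowed connection (in sets of 16-17).
```

This is a config fragment rather than a runnable script; consult your kernel's documentation for the exact option names on your release.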
> vacuum verbose bigint_unique_ids;
> INFO: --Relation public.bigint_unique_ids--
> INFO: Index bigint_unique_ids__table_name: Pages 29; Tuples 1: Deleted
> 5354.
> CPU 0.01s/0.04u sec elapsed 0.05 sec.
> INFO: Removed 11348 tuples in 79 pages.
> CPU 0.00s/0.02u sec elapsed 0.02 sec.
>
Tom Lane wrote:
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> It probably has one visible row in it. If it changed a lot, there
> may be lots of deleted tuples in a row. That would explain why an
> index scan speeds things up.
Right, every UPDATE on unique_ids generates a dead row, and a seqscan
has no altern
Robert Creager <[EMAIL PROTECTED]> writes:
>> I'm interested to narrow down exactly what was the issue here.
> shared_buffers was 1024, now 8192
> max_fsm_relations was 1000, now 1
> max_fsm_pages was 2, now 10
> wal_buffers was 8, now 16
> sort_mem was 1024, now 64000
> vacuum_mem was
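The numbers in the snippet above look mangled by the archive. As a hedged illustration only, these knobs all live in postgresql.conf; the values below are examples of plausible settings, not the poster's actual before/after figures:

```shell
# postgresql.conf fragment (illustrative values only):
shared_buffers = 8192        # shared memory buffers, in 8K pages
max_fsm_relations = 1000     # relations tracked by the free space map
max_fsm_pages = 20000        # pages tracked by the free space map
wal_buffers = 16             # WAL buffers, in 8K pages
sort_mem = 64000             # KB available per sort operation
vacuum_mem = 8192            # KB available to VACUUM
```

The two max_fsm_* settings matter most for this thread: if the free space map is too small, pages freed by VACUUM cannot be reused and the table keeps growing.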
On Sat, 31 May 2003, Dave E Martin XXIII wrote:
> select next_id from unique_ids where name=whatever for update;
> update unique_ids set next_id=next_id+1 where name=whatever;
> pass on value of old next_id to other code...
>
> where unique_ids is:
>
> create table unique_ids (
> name text not
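The SELECT ... FOR UPDATE / UPDATE pattern above leaves one dead row behind per ID handed out, which is exactly what forces the constant vacuuming. PostgreSQL's native sequences sidestep the problem because nextval() does not create dead tuples; a minimal sketch (object and database names are placeholders):

```shell
psql -d mydb <<'SQL'
-- A sequence does not generate a dead row on each increment:
CREATE SEQUENCE unique_ids_seq;
-- Each call returns the next value, safe across concurrent sessions:
SELECT nextval('unique_ids_seq');
SQL
```

One caveat: sequence values are non-transactional, so a rolled-back transaction leaves a gap rather than returning its value; whether that matters depends on the application.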