On Thu, Jul 28, 2005 at 03:12:33PM -0400, Greg Stark wrote:
> I think occasionally people get bitten by not having their pg_* tables being
> vacuumed or analyzed regularly. If you have lots of tables and the stats are
> never updated for pg_class or related tables you can find the planner taking a
On Fri, Jul 29, 2005 at 12:05:46AM +0200, Havasvölgyi Ottó wrote:
> create function pidtest_del(_pid integer) returns void as $$
> declare
> row pidtest;
> begin
> perform pid from pidtest where pid>=_pid for update;
> delete from pidtest where pid=_pid;
> for row in select * from pidtest where pid
On Thu, Jul 28, 2005 at 05:48:21PM -0500, Guy Rouillier wrote:
> Jan Wieck wrote:
>
> > Then again, the stats file is only written. There is nothing that
> > actually forces the blocks out. On a busy system, one individual stats
> > file will be created, written to, renamed, live for 500ms and be
>> This is Linux 2.4.26 and an ext3 filesystem.
> With the dir_index feature or without?
With, I believe. It is enabled in the superblock (tune2fs -O dir_index)
but this was not done when the filesystem was created so only new
directories are indexed I think. I don't think there's a way to in
Jan Wieck wrote:
> Then again, the stats file is only written. There is nothing that
> actually forces the blocks out. On a busy system, one individual stats
> file will be created, written to, renamed, live for 500ms and be
> thrown away by the next stat files rename operation. I would assume
> t
Folks,
If I wanted to embed SQL code in a C++ file is there a way to use ecpg
to generate the proper C++ files?
- --
Peter L. Berghold [EMAI
On Thu, Jul 28, 2005 at 09:43:44PM +0200, Peter Wiersig wrote:
> On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
> >
> > This is Linux 2.4.26 and an ext3 filesystem.
>
> With the dir_index feature or without?
Also, with data=ordered, data=writeback or data=journal?
(First one is d
Hi,
Oh, sorry.
CREATE TABLE pidtest (pid integer, szoveg text) WITHOUT OIDS;
INSERT INTO pidtest (pid, szoveg) VALUES (3, 'three');
INSERT INTO pidtest (pid, szoveg) VALUES (4, 'four');
INSERT INTO pidtest (pid, szoveg) VALUES (5, 'five');
INSERT INTO pidtest (pid, szoveg) VALUES (6, 'six');
IN
Josh Berkus wrote:
D-
Anybody interested in contributing to this?
Interested, yes. Free time's a little harder to come by, as you know ...
What is this thing "free time"? I have yet to see or hear of it.
--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.8
D-
> Anybody interested in contributing to this?
Interested, yes. Free time's a little harder to come by, as you know ...
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
---(end of broadcast)---
TIP 3: Have you checked our extensiv
On 07/28/2005-05:55AM, Stefan 'Kaishakunin' Schumacher wrote:
>
> I was a little bit confused about the uniqueness of GUID (esp. using
> only 16 Bit [1]) and read the article about the UUID:
> http://en.wikipedia.org/wiki/Universally_Unique_Identifier
>
16 BYTES, not bits. From the wiki...
A U
Scott Marlowe wrote:
Yeah, I found these three facets of the OP's system a bit disconcerting:
QUOTE ---
This is for a web application which uses a new connection for each CGI
request.
The server doesn't have a particularly high disk bandwidth and this
mysterious activity had been the bottlene
On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
>
> This is Linux 2.4.26 and an ext3 filesystem.
With the dir_index feature or without?
Peter
Hello again,
Just to give a bit of background, in case it is useful: this is my
family tree website, treefic.com. I have a schema for each user, each
with about a dozen tables. In most cases the tables are small, i.e.
tens of entries, but the users I care about are the ones with tens of
tho
On Wed, Jul 27, 2005 at 16:57:21 -0400,
John DeSoi <[EMAIL PROTECTED]> wrote:
>
> On Jul 27, 2005, at 4:46 PM, Scott Marlowe wrote:
>
> >So, how can two databases, not currently talking to one another,
> >guarantee that their GUIDs don't collide? using a large randomly
> >generated name space o
Jan Wieck <[EMAIL PROTECTED]> writes:
> >> PostgreSQL itself doesn't work too well with tens of thousands of tables.
> > Really? AFAIK it should be pretty OK, assuming you are on a filesystem
> > that doesn't choke with tens of thousands of entries in a directory.
> > I think we should put down
[EMAIL PROTECTED] ("Stefan 'Kaishakunin' Schumacher") writes:
> So as I understand it, the GUIDs only apply to the Microsoft Universe[TM], or
> are there any other serious applications using it?
No, the RFC (4122) was written by a combination of people including
Rich Salz, and is an encoding of the DC
[EMAIL PROTECTED] (Scott Marlowe) writes:
> On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
>> On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
>>
>> > I'd create a sequence:
>> >
>> > CREATE SEQUENCE global_unique_id_seq;
>> >
>> > and a function:
>> >
>> > CREATE OR REPLACE FUNCTION newid()
>>
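The quoted function body is cut off above; as a minimal sketch of the sequence-based approach (hypothetical — the 'node1-' prefix and the text return type are assumptions, not Tino's actual code), it might look like:

```sql
-- Hypothetical sketch: prefix a per-server tag onto a sequence value so
-- that IDs generated on different servers cannot collide.
CREATE SEQUENCE global_unique_id_seq;

CREATE OR REPLACE FUNCTION newid() RETURNS text AS '
    SELECT ''node1-'' || nextval(''global_unique_id_seq'')
' LANGUAGE sql;

SELECT newid();  -- e.g. node1-1
```

The per-server prefix is what makes this work across databases that never talk to each other: each node only has to guarantee its own prefix is unique.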
Jan Wieck <[EMAIL PROTECTED]> writes:
> Then again, the stats file is only written. There is nothing that actually
> forces the blocks out. On a busy system, one individual stats file will be
> created, written to, renamed, live for 500ms and be thrown away by the next
> stat files rename operati
On Thu, 2005-07-28 at 13:40, Jan Wieck wrote:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>
> > Jan Wieck <[EMAIL PROTECTED]> writes:
> >> On 7/28/2005 2:03 PM, Tom Lane wrote:
> >>> Well, there's the problem --- the stats subsystem is designed in a way
> >>> that makes it rewrite its entire stats col
[EMAIL PROTECTED] (Chris Travers) writes:
>>At least on Windows, the GUID is derived in part from the computer's
>>primary MAC address. No, it's not a guarantee, but it's pretty unlikely
>>:-)
> The danger is not that the MAC address will be duplicated, but that
> other factors will lead to an MD5
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>> Jan Wieck <[EMAIL PROTECTED]> writes:
>>> PostgreSQL itself doesn't work too well with tens of thousands of
>>> tables.
>>
>> Really? AFAIK it should be pretty OK, assuming you are on a filesystem
>> that doesn't ch
On 7/28/2005 2:28 PM, Tom Lane wrote:
Jan Wieck <[EMAIL PROTECTED]> writes:
On 7/28/2005 2:03 PM, Tom Lane wrote:
Well, there's the problem --- the stats subsystem is designed in a way
that makes it rewrite its entire stats collection on every update.
That's clearly not going to scale well to
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:03 PM, Tom Lane wrote:
>> Well, there's the problem --- the stats subsystem is designed in a way
>> that makes it rewrite its entire stats collection on every update.
>> That's clearly not going to scale well to a large number of tables.
>> Off
wayne schlemitz wrote:
I want to unsubscribe and have tried from the website and
from a web address from this group to no avail. Please
get me off the list.
What didn't work on the website?
You can go here
http://www.postgresql.org/community/lists/subscribe
and choose "unsubscribe" and the list
On 7/28/2005 2:03 PM, Tom Lane wrote:
Phil Endecott <[EMAIL PROTECTED]> writes:
For some time I had been trying to work out why every connection to my
database resulted in several megabytes of data being written to the
disk, however trivial the query. I think I've found the culprit:
global/p
Has anyone had the chance to review this book yet?
Jerry
Phil Endecott <[EMAIL PROTECTED]> writes:
> For some time I had been trying to work out why every connection to my
> database resulted in several megabytes of data being written to the
> disk, however trivial the query. I think I've found the culprit:
> global/pgstat.stat. This is with 7.4.7.
Dear all,
I just installed PostgreSQL 8.0.3 on Fedora Core Linux 3,
and I also installed pgAdmin 1.0.2 on my computer. But I am having trouble
connecting to PostgreSQL using pgAdmin; it shows me the error message: "An
error occurred: Error connecting to the server: could not connect to serve
I want to unsubscribe and have tried from the website and
from a web address from this group to no avail. Please
get me off the list.
Wayne
Bryan Field-Elliot <[EMAIL PROTECTED]> writes:
> We have this simple query:
>
> select status, count(*) from customer group by status;
>
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an "explain" on
> this query
On Thu, 2005-07-28 at 11:19, Bryan Field-Elliot wrote:
> We have this simple query:
>
> select status, count(*) from customer group by status;
>
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an "explain"
> on this
On 7/28/05, Dorine Periolat <[EMAIL PROTECTED]> wrote:
>
Always send a copy to the list... not only because it's better for
getting more answers, but because I actually check my own mail very
little... I just check the lists... ;)
> What is contrib/dblink ?
>
It's a contrib module for making queries that use tables from
David Fetter wrote:
> On Thu, Jul 28, 2005 at 09:19:49AM -0700, Bryan Field-Elliot wrote:
>
>>We have this simple query:
>>
>>select status, count(*) from customer group by status;
>>
>>There is already a btree index on status, but, the customer table is
>>huge, and this query must be executed ver
Hi,
Well, since we don't seem to have the actual code (you seem to have
"paraphrased" it), I can't tell for sure.
However, here (assuming this is sort of the way you actually are using
it):
my $target_sth = $target_dbh->prepare(
q{ SELECT columns
FROM the tabl
Bryan Field-Elliot wrote:
We have this simple query:
select status, count(*) from customer group by status;
There is already a btree index on status, but, the customer table is
huge, and this query must be executed very frequently... an "explain" on
this query shows that it is quite costly (a
# [EMAIL PROTECTED] / 2005-07-27 12:21:34 -0700:
> I found using the shell works but using the phAdminIII GUI is the one
> that gives me problems. I've even tried running it on EMS PostgreSQL
> Manager 3. Same results. Is this normal?
>
> Got a couple more questions regarding cursors.
> 1. When
On Thu, Jul 28, 2005 at 09:19:49AM -0700, Bryan Field-Elliot wrote:
> We have this simple query:
>
> select status, count(*) from customer group by status;
>
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an
> "exp
We have this simple query:
select status, count(*) from customer group by status;
There is already a btree index on status, but, the customer table is huge, and this query must be executed very frequently... an "explain" on this query shows that it is quite costly (and we notice it runs slowl
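The usual workaround for a hot count-by-group query like this is to maintain the counts in a tiny side table. A sketch only — the side-table, function, and trigger names here are made up, and this version ignores UPDATEs of status and statuses not pre-seeded into the side table:

```sql
-- Keep one row per status; read this instead of scanning customer.
CREATE TABLE customer_status_count (status text PRIMARY KEY, n bigint);

CREATE OR REPLACE FUNCTION maintain_status_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE customer_status_count SET n = n + 1 WHERE status = NEW.status;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE customer_status_count SET n = n - 1 WHERE status = OLD.status;
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER customer_status_count_trig
    AFTER INSERT OR DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE maintain_status_count();
```

Note that concurrent transactions touching the same status will serialize on the side-table row lock, so this trades some write concurrency for a cheap read.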
From the 7.4.6 Documentation:
[Begin Quote]
27.9. Notice Processing
Notice and warning messages generated by the server are not returned by the
query execution functions, since they do not imply failure of the query.
Instead they are passed to a notice handling function, and execution
continue
subscribe
Hi everyone,
Which is the best configuration to pg_autovaccum?
Are there benchmarks showing the improvements, after and before the
service had started?
Thanks in advance.
--
Fernando Lujan
Does anyone know how Raise Notice Messages get sent back to the client
when a function is executed?
Thanks,
Tony
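For what it's worth, notices travel in-band on the connection as asynchronous protocol messages, separate from the query result; psql just prints them, and libpq clients receive them through the notice-handling callback (see the 27.9 documentation excerpt quoted elsewhere in this thread). A quick way to see it, as a sketch:

```sql
CREATE OR REPLACE FUNCTION notice_demo() RETURNS integer AS '
BEGIN
    RAISE NOTICE ''inside notice_demo, about to return'';
    RETURN 1;
END;
' LANGUAGE plpgsql;

-- psql prints the NOTICE line before the one-row result set.
SELECT notice_demo();
```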
Dear Postgresql experts,
For some time I had been trying to work out why every connection to my
database resulted in several megabytes of data being written to the
disk, however trivial the query. I think I've found the culprit:
global/pgstat.stat. This is with 7.4.7.
This is for a web app
germ germ wrote:
> Thank you all for your help. I got it working, once.
>
> Right after I made the change and tested it,
> everything worked perfect. Then I had a freak nose
> bleed- (This has to be my 3rd nose bleed in my life
> ever). I frantically started closing windows and
> shells. While i
Thank you all for your help. I got it working, once.
Right after I made the change and tested it,
everything worked perfect. Then I had a freak nose
bleed- (This has to be my 3rd nose bleed in my life
ever). I frantically started closing windows and
shells. While in my frantic state, I deleted m
germ germ <[EMAIL PROTECTED]> writes:
> This is the error: 2005-07-28 08:51:08 ERROR:
> permission denied for sequence requests_req_num_seq
> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextval('requests_req_num_seq
germ germ <[EMAIL PROTECTED]> writes:
> This is the error: 2005-07-28 08:51:08 ERROR:
> permission denied for sequence requests_req_num_seq
>
> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextval('requests_req_num_s
On Thu, Jul 28, 2005 at 07:04:54AM -0700, germ germ wrote:
> This is the error: 2005-07-28 08:51:08 ERROR:
> permission denied for sequence requests_req_num_seq
>
> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextva
On 7/28/05, Dorine Periolat <[EMAIL PROTECTED]> wrote:
> hi,
>
> is it possible to make a trigger on Table T1 on
> a database B1 that makes an action on a Table T2 on
> a database B2 ???
>
> I'm using postgre 7.2
>
> Thanks
> Dorine
No, not unless you use contrib/dblink.
What about upgradi
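A rough sketch of the dblink approach, heavily hedged: function names and availability differ across contrib/dblink versions (check the README shipped with your release), and the column name below is invented:

```sql
-- Trigger on B1's t1 that pushes each insert into t2 on database B2
-- via contrib/dblink.
CREATE OR REPLACE FUNCTION t1_to_t2() RETURNS trigger AS '
BEGIN
    PERFORM dblink_exec(''dbname=B2'',
        ''INSERT INTO t2 VALUES ('' || NEW.some_col || '')'');
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER t1_to_t2_trig AFTER INSERT ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_to_t2();
```

Be aware the remote insert runs in its own connection and transaction, so it is not atomic with the local one.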
Mark,
> I currently have a defunct Debian 3.0 system running PostgreSQL 7.4.1, in
> which I am going to do an entire blob backup dump of all the databases in
> hopes to restore them on my new, more secure system running the latest
> OpenBSD, and PgSQL 8.0.
>
> So my question is: Are there any issue
Timothy Perrigo wrote:
Is there anything similar to pg_get_serial_sequence that will work with
tables that have an inherited serial column? (...)
Sorry, I should have searched before posting, I started a new thread.
Please refer to my solution posted today as "pg_get_serial_sequence and
in
This is the error: 2005-07-28 08:51:08 ERROR:
permission denied for sequence requests_req_num_seq
I've tried these grants, but no luck:
GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
postgres;
GRANT ALL ON FUNCTION nextval('requests_req_num_seq')
TO wwwrun, postgres;
GRANT ALL ON FUNCTION next
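For the record, in 7.x/8.0 sequence privileges are granted on the sequence itself, as if it were a table, not on the nextval() function. A likely fix (assuming wwwrun is the web user):

```sql
-- nextval() requires UPDATE privilege on the sequence relation;
-- currval() additionally wants SELECT.
GRANT SELECT, UPDATE ON requests_req_num_seq TO wwwrun;
```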
This is probably just a general DBI question but I get the following
errors when I run my code:
DBD::Pg::st fetchrow_array failed: no statement executing at ./snap.pl
line 110.
DBD::Pg::st fetchrow_array failed: no statement executing at ./snap.pl
line 110.
Line 110 is a disconnect statement:
$t
Alvaro Herrera wrote:
On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:
Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
Huxton have unsubscribed and resubscribed lately... Funny thing is it
isnt every messages, but maybe half of them. And its not to specifi
hi,
is it possible to make a trigger on Table T1 on
a database B1 that makes an action on a Table T2 on
a database B2 ???
I'm using postgre 7.2
Thanks
Dorine
Hi All;
Actually, as I am remembering how I worked through this problem...
I ended up with a simple shell script, something like:
#!/bin/bash
dumpfile="dump.pgsql"
pg_dump -s dbname > $dumpfile
for a in table1 table2 table3 table4 table5 table6
do
pg_dump -a -t $a >> $dumpfile
done
H
Hi all,
I've come across a problem with pg_get_serial_sequence(table,
serial_column) when dealing with inherited tables if the serial was
defined in the parent table. The function would not return the serial
name in this case if the child table is specified.
To solve it, and improve its u
germ germ wrote:
I have been trying to figure out why I can't insert
into a table and I think it has something to do with
the sequence.
I am able to properly insert into the
table using the shell, but I am not able to insert
using a php script:
INSERT INTO requests (time_stamp, r
Hi Martjin,
That's what I meant.. I would do my pg_dumps using: pg_dump -U postgres -Ft -b dbname > dbname.tar
And restore them via: pg_restore -O -U postgres -d dbname dbname.tar
That should be cool, yes?
Mark
On 7/27/05, Martijn van Oosterhout wrote:
Binary backups are not
I found using the shell works but using the phAdminIII GUI is the one
that gives me problems. I've even tried running it on EMS PostgreSQL
Manager 3. Same results. Is this normal?
Got a couple more questions regarding cursors.
1. When I try to run this statement (Declare curs1 refcursor;) I ge
Hi
I am writing an application that involves a lot of floating point number
crunching.
My data is stored in tables of the form:
TABLE data (
date_id INT,
value FLOAT)
I have just noticed in the documentation that the FLOAT data type is stored
in 8 bytes (i.e. 64 bits) as opposed to the REAL data typ
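To make the intent explicit — in PostgreSQL, bare FLOAT means double precision (8 bytes), while REAL, equivalently FLOAT(p) with p up to 24, is 4 bytes — the table could be declared as a sketch like:

```sql
CREATE TABLE data (
    date_id integer,
    value   real               -- 4 bytes, single precision
    -- value double precision  -- 8 bytes, what bare FLOAT gives you
);
```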
On Thu, Jul 28, 2005 at 05:32:03AM -0700, germ germ wrote:
> I have been trying to figure out why I can't insert
> into a table and I think it has something to do with
> the sequence.
>
> I am able to properly insert into the
> table using the shell, but I am not able to insert
> us
On 7/28/05, germ germ <[EMAIL PROTECTED]> wrote:
I have been trying to figure out why I can't insert into a table and I
think it has something to do with the sequence. I am able to properly
insert into the table using the shell, but I am not able to insert
using a php script: INSERT INTO
I have been trying to figure out why I can't insert
into a table and I think it has something to do with
the sequence.
I am able to properly insert into the
table using the shell, but I am not able to insert
using a php script:
INSERT INTO requests (time_stamp, req_num,
recommended
On Thu, Jul 28, 2005 at 01:22:31PM +0200, Havasvölgyi Ottó wrote:
> Is it normal that when I select a record for update, but don't select all
> the fields, the contents of the fields not selected get deleted:
>
> create table pidtest(pid integer, szoveg text) without oids;
>
> select pid
Havasvölgyi Ottó wrote:
Hi,
Is it normal that when I select a record for update, but don't select
all the fields, the contents of the fields not selected get deleted:
create table pidtest(pid integer, szoveg text) without oids;
select pid from pidtest where pid>5 for update;
After co
Hi,
Is it normal that when I select a record for update, but don't select all
the fields, the contents of the fields not selected get deleted:
create table pidtest(pid integer, szoveg text) without oids;
select pid from pidtest where pid>5 for update;
After committing (autocommit), th
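For reference, SELECT ... FOR UPDATE should only lock the matching rows against concurrent update/delete; it never clears columns, selected or not. A quick check, as a sketch using the table from this thread:

```sql
BEGIN;
SELECT pid FROM pidtest WHERE pid > 5 FOR UPDATE;  -- locks rows, nothing more
COMMIT;

SELECT * FROM pidtest;  -- szoveg should still be intact in every row
```

If szoveg really comes back empty afterwards, something other than the FOR UPDATE is responsible.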
Next time please send a separate e-mail to the list.
I do not want to know your password...
Am 2005-07-28 17:14:27, schrieb Kanaga:
> Hi all
> - Original Message -
> From: <[EMAIL PROTECTED]>
> To: "kanaga" <[EMAIL PROTECTED]>
> Sent: Thursday, July 28, 2005 4:48 PM
> Subject: Welcome t
Hi all
I get the error 'psql: FATAL: Ident authentication failed for user'
when I run the command 'psql -d template1 -U wifidog -W -f
sql/wifidog-postgres-schema.sql'.
If anyone can give me an idea, please do...
With Regards,
kanagavel
- Original Message -
From: <[EMAIL PROTECTED]>
To: "kanaga" <[EM
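That error usually means the matching pg_hba.conf line selects 'ident' and the operating-system user doesn't match the database user (wifidog). One hedged fix, a sketch only — the exact lines depend on your setup, and the postmaster must be reloaded after editing:

```
# pg_hba.conf: switch from ident to password-based auth for local and
# loopback connections (adjust to taste)
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
local   all       all                 md5
host    all       all   127.0.0.1/32  md5
```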
On Wed, Jul 27, 2005 at 08:34:56PM -0700, Joshua D. Drake wrote:
> Josh Berkus wrote:
> >KL-
> >
> >
> >>What I think we need is a C program that dumps directly from MySQL into
> >>PostgreSQL sql.
> >
> >
> >Why C? PerlDBI or JDBC should be able to do this readily enough.
>
> Uhmmm isn't that wh
With the last test I made, the database is swapping, but very slowly...
100K every 10 minutes, and that does not seem to be a problem... in
the sense that the server doesn't slow down...
Today I'll make other tests and let you know.
Thank you,
Denis
Scott Marlowe wrote:
On Wed, 2005-07-27
I picked 260 because 256 is the max clients in the Apache configuration
file.
My problem is that every Apache request needs to make a connection to the
database, so if I have all 256 Apache processes working, I need at
least 256 pgpool processes.
However, with pgpool installed in e