Philip Warner <[EMAIL PROTECTED]> writes:
> At 21:14 19/04/01 -0400, Tom Lane wrote:
>> But you don't really need to look at the index (if it even exists
>> at the time you do the ANALYZE). The extent to which the data is
>> ordered in the table is a property of the table, not the index.
> But t
At 21:14 19/04/01 -0400, Tom Lane wrote:
>Philip Warner <[EMAIL PROTECTED]> writes:
>> I'm not sure you want to know how well sorted it is in general, but you do
>> want to know the expected cost in IOs of reading all records from a given
>> index node, so you can more accurately estimate indexsca
> > > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl
> > > list to which I am subscribed; which one are you using?
> >
> > [EMAIL PROTECTED]
>
> The mailing list you should be on is [EMAIL PROTECTED]
> (see http://lists.oasis-open.org), which is more about docbook process
Greg Hulands <[EMAIL PROTECTED]> writes:
> I am a newbie at developing with PostgreSQL. I have included the header file
> by: #include (I am using 7.1). When I compile it, I get
> the following errors:
> In pgconnection.h
> line 41 postgres_fe.h: No such file or directory
> line 42 libpq-fe.h: No
Philip Warner <[EMAIL PROTECTED]> writes:
> I'm not sure you want to know how well sorted it is in general, but you do
> want to know the expected cost in IOs of reading all records from a given
> index node, so you can more accurately estimate indexscan costs. AFAICS it
> does not require that th
Hi,
I am a newbie at developing with PostgreSQL. I have included the header file
by: #include (I am using 7.1). When I compile it, I get
the following errors:
In pgconnection.h
line 41 postgres_fe.h: No such file or directory
line 42 libpq-fe.h: No such file or directory
The files are located
At 20:48 19/04/01 -0400, Tom Lane wrote:
>
>> This sounds great; can the same be done for clustering. ie. pick a random
>> sample of index nodes, look at the record pointers and so determine how
>> well clustered the table is?
>
>My intention was to use the same tuples sampled for the data histogr
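The clustering estimate being discussed here can be sketched in a few lines. The following is an illustrative Python model, not PostgreSQL code (the function name and sampling scheme are mine): sample row positions, rank the sampled values, and correlate value rank against heap order.

```python
import random

def clustering_estimate(heap, sample_size=1000):
    """Estimate how well a table's physical (heap) order matches the sort
    order of one column, from a random sample of (position, value) pairs.
    Values are assumed distinct here.  Returns a correlation in [-1, 1]:
    ~1.0 for a freshly clustered table, ~0.0 for random row placement."""
    n = len(heap)
    positions = sorted(random.sample(range(n), min(sample_size, n)))
    values = [heap[p] for p in positions]
    # Rank each sampled value, then correlate rank against heap order.
    order = sorted(range(len(values)), key=lambda i: values[i])
    rank = [0] * len(values)
    for r, i in enumerate(order):
        rank[i] = r
    # Both sequences are permutations of 0..m-1, so their variances are
    # equal and the Pearson correlation reduces to cov / var.
    m = len(values)
    mid = (m - 1) / 2.0
    cov = sum((x - mid) * (rank[x] - mid) for x in range(m))
    var = sum((x - mid) ** 2 for x in range(m))
    return cov / var
```

A sorted table scores exactly 1.0, a reversed one -1.0, and a shuffled one hovers near 0.0, which is the signal an indexscan cost model would want.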
At 18:37 19/04/01 -0400, Tom Lane wrote:
>(2) Statistics should be computed on the basis of a random sample of the
>target table, rather than a complete scan. According to the literature
>I've looked at, sampling a few thousand tuples is sufficient to give good
>statistics even for extremely larg
Philip Warner <[EMAIL PROTECTED]> writes:
> At 18:37 19/04/01 -0400, Tom Lane wrote:
>> (2) Statistics should be computed on the basis of a random sample of the
>> target table, rather than a complete scan. According to the literature
>> I've looked at, sampling a few thousand tuples is sufficien
Bruce Momjian <[EMAIL PROTECTED]> writes:
>> that a pure ANALYZE command needs only a read lock on the target table,
>> not an exclusive lock as VACUUM needs, so it's much more friendly to
>> concurrent transactions.
> 7.1 already does the ANALYZE part of VACUUM ANALYZE with lighter
> locking.
R
> (1) The statistics-gathering process should be available as a standalone
> command, ANALYZE [ tablename ], not only as part of VACUUM. (This was
> already discussed and agreed to for 7.1, but it never got done.) Note
> that a pure ANALYZE command needs only a read lock on the target table,
> n
Request for comments:
Overview
The planner is currently badly handicapped by inadequate statistical
information. The stats that VACUUM ANALYZE currently computes are:
per-table:
  number of disk pages
  number of tuples
per-column:
  dispersion
  minimum and m
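Among the richer per-column statistics under discussion are histogram data. As a rough illustration of an equal-height histogram built from a sample (a Python sketch under my own naming, not the actual implementation):

```python
def histogram_bounds(sample, nbuckets=10):
    """Equal-height histogram bounds for a sampled column: nbuckets + 1
    boundary values chosen so that roughly the same number of sampled
    rows falls into each bucket."""
    s = sorted(sample)
    return [s[(i * (len(s) - 1)) // nbuckets] for i in range(nbuckets + 1)]

bounds = histogram_bounds(list(range(101)), nbuckets=10)
# → [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```

With bounds like these, the planner can estimate the selectivity of a range predicate as the fraction of buckets it spans, instead of assuming a uniform distribution between min and max.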
First, let me say that just because Oracle does it this way doesn't make
it better but...
Oracle divides privileges into 2 categories:
Object privileges
System privileges
The Object privileges are the ones you describe. And I agree
fundamentally with your design. Although I would have (a) used
"V. M." <[EMAIL PROTECTED]> writes:
> DBI->connect(dbname=mydb) failed: Backend startup failed
Have you looked in the postmaster log to see *why* this is happening?
Some sort of resource-limitation problem is my bet.
> I'm asking all of you for the CORRECT sequence of actions to do to monitor
> t
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> pg_privilege (
> priobj oid, -- oid of table, column, function, etc.
> prigrantor oid, -- user who granted the privilege
> prigrantee oid, -- user who owns the privilege
What about groups? What a
Mike Mascari writes:
> That looks quite nice. I do have 3 quick questions though. First, I
> assume that the prigrantee could also be a group id?
Yes. It was also suggested making two different grantee columns for users
and groups, but I'm not yet convinced of that. It's an option though.
> S
Tom Lane writes:
> I'm tempted to rip out the configure check for sfio, but if it's only
> broken on some systems and really is useful on others, then we have to
> try to figure out how to tell if it's broken :-(
I just installed sfio here and built PostgreSQL with it and didn't see any
differen
So, this will remove the relacl field from pg_class, making pg_class
a fixed tuple-length table: that might actually speed access: there
are shortcircuits in place to speed pointer math when this is true.
The implementation looks fine to me, as well. How are group privileges
going to be handled w
Peter Eisentraut wrote:
> I have devised something more efficient:
>
> pg_privilege (
> priobj oid, -- oid of table, column, etc.
> prigrantor oid, -- user who granted the privilege
> prigrantee oid, -- user who owns the privilege
>
> pris
try now?
On Thu, 19 Apr 2001, Peter Eisentraut wrote:
> What did you do to the CVS server? It takes hours to update a single
> file, half a day to run cvs diff. This has been like that for about 48
> hours.
>
> --
> Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
>
>
Dear Hackers,
I'm using 7.1 in a production environment; performance is very good, you've
done a very good job.
But there are problems: sometimes a backend "failed to start":
(mandrake 7.2, mod_perl 1.24, apache 1.3.14, Apache::DBI)
(deadlock_timeout=2000
max_connections=300)
DBI->connect(dbname
Oldtimers might recall the last thread about enhancements of the access
privilege system. See
http://www.postgresql.org/mhonarc/pgsql-hackers/2000-05/msg01220.html
to catch up.
It was more or less agreed that privilege descriptors should be split out
into a separate table for better flexibilit
The Hermit Hacker <[EMAIL PROTECTED]> writes:
> I tried to do a 'kill <pid>' like I would have in v7.0.3, doesn't affect
> it ...
Huh? Doesn't kill default to -TERM on your machine? That works fine
for me ...
regards, tom lane
---(end of broadcast)--
Okay, I *swear* I tried both 'kill <pid>' and 'kill -TERM <pid>' this
morning before I sent this out .. just tried it again and it worked :(
*shrug*
On Thu, 19 Apr 2001, Tom Lane wrote:
> The Hermit Hacker <[EMAIL PROTECTED]> writes:
> > I tried to do a 'kill <pid>' like I would have in v7.0.3, doesn't affec
What did you do to the CVS server? It takes hours to update a single
file, half a day to run cvs diff. This has been like that for about 48
hours.
--
Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
"Henryk Szal" <[EMAIL PROTECTED]> writes:
> My typical short transaction runs in 3 seconds (on a heavily loaded system,
> 30 sec.). But without 'timeout on lock' it can run 60-180 minutes because
> someone (a user or administrator) runs a long transaction.
> The timeout value is negligible. I set it to 10 se
On Thu, 19 Apr 2001, Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
> > IMHO there's nothing fundamentally wrong with having pg_dump
> > dumping the constraints as special triggers, because they are
> > implemented in PostgreSQL as triggers. ...
> > The advantage of
On Thu, 19 Apr 2001, Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
> > IMHO there's nothing fundamentally wrong with having pg_dump
> > dumping the constraints as special triggers, because they are
> > implemented in PostgreSQL as triggers. ...
> > The advantage of
I figured out the bug that Alex Pilosov was reporting about constructs
like
create table customers (...)
create function cust_name(customers) ...
select cust_name(a) from customers a, addresses b ...
The problem is that whole-tuple parameters to functions are represented
a
> > > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl
> > > list to which I am subscribed; which one are you using?
> > [EMAIL PROTECTED]
> The mailing list you should be on is [EMAIL PROTECTED]
> (see http://lists.oasis-open.org), which is more about docbook processing
> a
OK, we have it on the TODO list, so it will hopefully be added soon, in
some fashion. I like the SET or the BEGIN TIMEOUT options.
> Hi,
>
> for 10 years I have been developing DB applications using a 'timeout on
> lock' feature (Informix, Ingres, AdabasD, RDB, ...).
> I am thinking about migrating these applications t
> Does anybody know how I can get the difference between two dates (timestamps)
> in seconds, or convert a timestamp to an integer or float type (in any
> acceptable sense)?
lockhart=# select date_part('epoch', timestamp 'today' - timestamp 'yesterday');
 date_part
-----------
     86400
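The same epoch-difference computation can be sketched client-side in Python for anyone working outside SQL (the dates are arbitrary examples):

```python
from datetime import datetime, timedelta

today = datetime(2001, 4, 20)
yesterday = today - timedelta(days=1)

# Same idea as date_part('epoch', t1 - t2): a timestamp difference in seconds.
seconds = (today - yesterday).total_seconds()
# → 86400.0
```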
Tatsuo Ishii writes:
> > Sorry you are seeing trouble. I missed seeing your traffic on the dsssl
> > list to which I am subscribed; which one are you using?
>
> [EMAIL PROTECTED]
The mailing list you should be on is [EMAIL PROTECTED]
(see http://lists.oasis-open.org), which is more about docbook
Jan Wieck <[EMAIL PROTECTED]> writes:
> IMHO there's nothing fundamentally wrong with having pg_dump
> dumping the constraints as special triggers, because they are
> implemented in PostgreSQL as triggers. ...
> The advantage of having pg_dump output these constraints as
>
At 08:42 19/04/01 -0500, Jan Wieck wrote:
>>
>> It's because pg_dump is not designed to dump these constraints *as*
>> constraints. We just need to make pg_dump clever enough to do that.
>
>IMHO there's nothing fundamentally wrong with having pg_dump
>dumping the constraints as special tr
My typical short transaction runs in 3 seconds (on a heavily loaded system,
30 sec.). But without 'timeout
on lock' it can run 60-180 minutes because someone (a user or administrator)
runs a long transaction.
The timeout value is negligible. I set it to 10 sec. because if my two
(3 sec.) transactions are in conf
Oh, forgot to point out that this is v7.0.3 I am running, on linux
kernel 2.0.5
P.
Hi,
I'm trying to see if I can patch this bug myself because we are under some
time constraints. Can anyone give me a tip regarding where in the postgres
source the internal UTF-8 code is converted during a dump?
I believe that the character 0xAD is an ASCII character that looks l
I am in the middle of a rather nasty experience that I hope someone out
there can help solve.
My hard disk partition with the postgres data directory got full. I
tried to shut down postgres so I could clear some space, nothing
happened. So I did a reboot. On restart (after clearing some
pg_sort
On Wednesday, 18 April 2001 at 10:47, you wrote:
(...)
>
> Yes, Oracle supports using an inexact format mask, but Oracle's to_date
> is strictly based on date/time and does not support other things:
>
> SVRMGR> select to_date('333.222.4.1.2001', '333.222.FMdd.FMmm.') from
> dual;
> TO_DATE('
>
Philip Warner wrote:
> At 16:25 18/04/01 -0400, Joel Burton wrote:
> >
> >Do we know if the problem is in pg_dump, or is there no way
> >to pass the tgconstrrelid value in the CREATE CONSTRAINT TRIGGER
> >statement?
> >
>
> It's because pg_dump is not designed to dump these constraints *as*
> cons
This option will be OPTIONAL.
Tom Lane wrote in message <[EMAIL PROTECTED]>...
>Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Added to TODO:
>> * Add SET parameter to timeout if waiting for lock too long
>
>I repeat my strong objection to any global (ie, affecting all locks)
>timeout. Such a "fea
Hi,
For 10 years I have been developing DB applications using a 'timeout on lock'
feature (Informix, Ingres, AdabasD, RDB, ...).
I am thinking about migrating these applications to PostgreSQL; with this
feature I wouldn't need to modify my ready-to-run code specially for
PostgreSQL. This feature guards me against blockin
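The semantics being requested can be sketched with an in-process lock. This is an illustrative Python model of "timeout on lock" behavior (the function name is mine, and this is not how the server-side feature would be implemented): wait a bounded time, then fail loudly instead of blocking behind a long transaction indefinitely.

```python
import threading

def run_with_lock_timeout(lock, work, timeout=10.0):
    """Sketch of 'timeout on lock' semantics: wait at most `timeout`
    seconds for the lock, then raise instead of blocking forever."""
    if not lock.acquire(timeout=timeout):
        raise TimeoutError("lock not acquired within %.1f seconds" % timeout)
    try:
        return work()
    finally:
        lock.release()
```

The short transaction either runs promptly or gets a clean error it can retry, which is exactly the trade-off argued for in this thread.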
Hello Group!
Does anybody know how I can get the difference between two dates (timestamps)
in seconds, or convert a timestamp to an integer or float type (in any
acceptable sense)?
Thank you in advance,
Sergiy.
I tried to do a 'kill <pid>' like I would have in v7.0.3; it doesn't affect
it ... so, how do I get rid of idle processes that have been sitting around
for a long time, without having to shut down the database itself?
pgsql 64484 0.0 1.0 15352 10172 p4- Is Sat08PM 0:00.15 postmaster: hordemgr
horde
On Thu, 19 Apr 2001, Tatsuo Ishii wrote:
> > It actually was pretty quick. The fixes were more cleaning up strange
> > conversion from HTML to LaTeX.
>
> Looks nice, but I'm afraid I have to do all the work above for 489
> HTML files:-)
It's not all that bad. There's really only 486, the other
> > > The only way PG could apply reasonable timeouts would be for the
> > > application to dictate them,
> >
> > That is exactly what we are talking about here.
>
> No. You wrote elsewhere that the application sets "30 seconds" and
> leaves it. But that 30 seconds doesn't have any applicat