On Sun, 2004-10-03 at 22:23, Michael Glaesemann wrote:
> Hello all,
>
> Recently I've been thinking about different methods of managing users
> that log into a PostgreSQL-backed application. The users I'm thinking
> of are not necessarily DBAs: they're application users that really
> shouldn't
Michael Glaesemann wrote:
Hello all,
Recently I've been thinking about different methods of managing users
that log into a PostgreSQL-backed application. The users I'm thinking
of are not necessarily DBAs: they're application users that really
shouldn't even be aware that they are being served
Hello all,
Recently I've been thinking about different methods of managing users
that log into a PostgreSQL-backed application. The users I'm thinking
of are not necessarily DBAs: they're application users that really
shouldn't even be aware that they are being served by the world's most
advanc
Matthew T. O'Connor wrote:
On Sun, 2004-10-03 at 21:01, Gaetano Mendola wrote:
Tom Lane wrote:
Gaetano Mendola <[EMAIL PROTECTED]> writes:
Christopher Browne wrote:
pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
I'm not very familiar at all with appropriate settings for
autovacuum,
but does
On Sun, 2004-10-03 at 21:01, Gaetano Mendola wrote:
> Tom Lane wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> >
> >>Christopher Browne wrote:
> >>pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
> >
> > I'm not very familiar at all with appropriate settings for autovacuum,
> > but
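For anyone trying to decode those switches: as far as I understand contrib/pg_autovacuum, -v/-V set the base and scaling factor of the vacuum threshold and -a/-A do the same for analyze, so the numbers work out roughly like this (sketch only):

-- a table is vacuumed once updates + deletes exceed  -v + (-V * reltuples),
-- and analyzed once inserts + updates + deletes exceed  -a + (-A * reltuples);
-- with -v 300 -V 0.5, a 10,000-row table is vacuumed after about
-- 300 + 0.5 * 10000 = 5300 row changes
SELECT relname, 300 + 0.5 * reltuples AS approx_vacuum_threshold
FROM pg_class
WHERE relkind = 'r';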
Tom Lane wrote:
Gaetano Mendola <[EMAIL PROTECTED]> writes:
Christopher Browne wrote:
Assuming that the tables in question aren't so large that they cause
mass eviction of buffers, it should suffice to do a plain VACUUM (and
NOT a "VACUUM FULL") on the tables in question quite frequently.
This is
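Spelling that advice out as a sketch (the table name here is hypothetical):

-- plain VACUUM takes a lock that blocks neither readers nor writers, so it
-- can run every few minutes against a hot table; VACUUM FULL would take an
-- exclusive lock and rewrite the whole table, which is what we want to avoid
VACUUM ANALYZE sessions;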
> Then, every once in a while, a separate process would go in, see the
> highest value on idfield < 250M, and rewrite the idfield on all of the
> tuples where idfield > 250M. It would be efficient due to the partial
> index. It limits the number of documents to 250M, but I'm sure that
> can be al
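A sketch of the partial-index part of that idea, with made-up names and the 250M ceiling written out; the renumbering rule itself is left open:

-- only the rows above the ceiling are indexed, so the periodic clean-up
-- pass stays cheap no matter how large the table grows
CREATE INDEX docs_idfield_high ON docs (idfield)
    WHERE idfield > 250000000;

-- the rewrite pass finds its candidates through that small index
SELECT idfield FROM docs WHERE idfield > 250000000;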
> Given that they have improved their SysV IPC support steadily over the
> past few Darwin releases, I don't see why you'd expect them to not be
> willing to do this. Having a larger default limit costs them *zero* if
> the feature is not used, so what's the objection?
The objection would be atti
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] ("Scott
Marlowe") transmitted:
> On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
>> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
>> > > Am I correct to assume that SERIAL does not guarantee that a sequence
>> > > won't
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (David Garamond) wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be
> 34)?
What is guaranteed is that sequence values will not be repeate
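The gap is easy to reproduce, because nextval() is never rolled back; a throwaway example:

CREATE TABLE t (id serial, note text);

BEGIN;
INSERT INTO t (note) VALUES ('rolled back');   -- consumes id 1
ROLLBACK;                                      -- the sequence keeps its new value

INSERT INTO t (note) VALUES ('committed');     -- gets id 2, so 1 is skipped
SELECT id, note FROM t;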
On 3 Oct 2004 at 11:24, Scott Marlowe wrote:
> On Sun, 2004-10-03 at 06:33, stig erikson wrote:
> There are a few tools I've seen that will try to convert ASP to PHP, but
> for the most part, they can't handle very complex code, so you're
> probably better off just rewriting it and learning PHP on
On Sun, Oct 03, 2004 at 11:36:20 -0400,
Jean-Luc Lachance <[EMAIL PROTECTED]> wrote:
> I agree, NS or EW long lat should be the same.
> I was just pointing to the wrong figure. Also, if ll_to_earth takes lat
> first, it should report an error for a |lat| > 90...
I disagree with this. Latitudes
On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > > Am I correct to assume that SERIAL does not guarantee that a sequence
> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> > >
> > > Sometimes a business requir
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On Sunday 03 October 2004 10:21 am, Scott Marlowe wrote:
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next migh
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> >
> > Sometimes a business requirement is that a serial sequence never skips,
> > e.g. wh
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Christopher Browne wrote:
>>> Assuming that the tables in question aren't so large that they cause
>>> mass eviction of buffers, it should suffice to do a plain VACUUM (and
>>> NOT a "VACUUM FULL") on the tables in question quite frequently.
> This is
On Sun, 2004-10-03 at 06:33, stig erikson wrote:
> Hello.
> I have a slightly off-topic question, but I hope that somebody might know.
>
> at the moment we have a database on a MS SQL 7 server.
> This data will be transfered to PostgreSQL 7.4.5 or PostgreSQL 8 (when
> it is released). so far so
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>
> Sometimes a business requirement is that a serial sequence never skips,
> e.g. when generatin
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
This is a PGP-signed copy of the checksums for following
PostgreSQL versions:
7.4.5
7.4.4
7.3.7
7.3.6
7.3.5
7.2.5
The latest copy of the checksums for these and other versions, as well
as information on how to verify the files you download for yo
Scott Ribe <[EMAIL PROTECTED]> writes:
>> I have asked Apple about using a saner default for shmmax, but a few
>> more complaints in their bug system wouldn't hurt.
> I suspect it won't help, since their official position is already "don't use
> shmget, use mmap instead"...
Given that they have i
> I have asked Apple about using a saner default for shmmax, but a few
> more complaints in their bug system wouldn't hurt.
I suspect it won't help, since their official position is already "don't use
shmget, use mmap instead"...
--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303)
I agree, NS or EW long lat should be the same.
I was just pointing to the wrong figure. Also, if ll_to_earth takes lat
first, it should report an error for a |lat| > 90...
Michael Fuhr wrote:
On Sat, Oct 02, 2004 at 09:29:16PM -0400, Jean-Luc Lachance wrote:
Maybe it would work with the right lo
> select
> earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
>
> The result I get is this:
>
> 128862.563227506
>
> The distance from Portland to Seattle is not 128862
> miles.
It is 128000m = 128km.
Welcome to the metric system :)
Bye, Chris.
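If I am reading the earthdistance module right, the original call also has the arguments swapped: ll_to_earth() expects latitude first and longitude second (west longitudes negative), and earth_distance() returns meters. A corrected sketch, which should land somewhere near 240 km, i.e. about 150 miles, for Portland to Seattle:

SELECT earth_distance(ll_to_earth(45.513746, -122.55688),
                      ll_to_earth(47.648845, -122.396357)) / 1609.344 AS miles;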
On Fri, Oct 01, 2004 at 01:17:38PM -0700, ben f wrote:
> So I am renaming a table, and the last stumbling block
> that I've met is the associated sequence. I tried the
> commands suggested @
>
> http://mailman.fastxs.net/pipermail/dbmail-dev/2004-August/004307.html
>
> ie:
>
> CREATE SEQUENCE
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO
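When the numbers really must be gapless, the usual workaround (sketched here with a hypothetical invoice_counter table) is to take the number from a single-row table inside the same transaction that creates the invoice: a rollback then undoes the number too, and the row lock serializes concurrent inserts, which is the unavoidable price of gaplessness:

CREATE TABLE invoice_counter (last_number integer NOT NULL);
INSERT INTO invoice_counter VALUES (0);

BEGIN;
UPDATE invoice_counter SET last_number = last_number + 1;  -- locks the row
SELECT last_number FROM invoice_counter;                   -- use as the invoice number
-- INSERT INTO invoices ... ;
COMMIT;   -- ROLLBACK would undo the counter as well, so no gap appears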
You may want to take a look at the ltree and tablefunc contrib
modules. They both allow you to do something like this, and they
abstract away the difficulty of query building. ltree will allow you
to precompute the tree, and the tablefunc module has a connectby()
function for runtime parent-child
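A minimal connectby() call looks roughly like this (table and column names are invented; 0 means no depth limit, and the output columns have to be spelled out in the AS clause):

SELECT *
  FROM connectby('nodes', 'id', 'parent_id', '1', 0)
       AS t(id int, parent_id int, level int);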
I'm running PostgreSQL 8.0 beta 1. I'm using the
earthdistance to find the distance between two
different latitude and longitude locations.
Unfortunately, the result seems to be wrong.
Here is what I'm doing:
select
earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.6
So I am renaming a table, and the last stumbling block
that I've met is the associated sequence. I tried the
commands suggested @
http://mailman.fastxs.net/pipermail/dbmail-dev/2004-August/004307.html
ie:
CREATE SEQUENCE $newseq
SELECT setval('$newseq', max($column)) FROM $table
ALTER TABLE $t
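Unless I am missing something, a simpler route on 7.4/8.0 is to rename the sequence in place and repoint the column default, so the current counter value is kept and no setval() is needed (names below are placeholders):

ALTER TABLE old_table_id_seq RENAME TO new_table_id_seq;

-- the column default still refers to the old sequence name, so repoint it
ALTER TABLE new_table
      ALTER COLUMN id SET DEFAULT nextval('new_table_id_seq');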
--
Software Tree Revs Up JDX OR-Mapper
With Innovative And High-Performance Features
--
Software Tree has announced JDX 4.5, the versatile and patented
Object
[EMAIL PROTECTED] (mike cox) writes:
> I'm running PostgreSQL 8.0 beta 1. I'm using the
> earthdistance to find the distance between two
> different latitude and longitude locations.
> Unfortunately, the result seems to be wrong.
>
> Here is what I'm doing:
> select
> earth_distance(ll_to_earth(
Hi everybody,
I'm doing the following query:
select * from messages order by random() limit 1;
in the table messages I have more than 200 messages and a lot of times,
the message retrieved is the same. Does anybody know how I could do a more
"random" random?
Thank you very much
--
Arnau
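The query itself is fine; with only ~200 rows, repeats are exactly what uniform sampling looks like. A rough birthday-style estimate, attached as a comment to the same query:

-- with n = 200 messages, the chance that k picks contain at least one
-- repeat is about 1 - exp(-k*(k-1)/(2*n)); for k = 20 picks that is
-- already around 60%, so seeing the same message again is expected
SELECT * FROM messages ORDER BY random() LIMIT 1;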
Christopher Browne wrote:
> [EMAIL PROTECTED] (Aleksey Serba) wrote:
>
>> Hello!
>>
>> I have 24/7 production server under high load.
>> I need to perform vacuum full on several tables to recover disk
>> space / memory usage frequently ( the server must be online during
>> vacuum time )
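The usual way to keep a 24/7 box out of VACUUM FULL territory, sketched with made-up names and illustrative values: run plain VACUUM on the hot tables often enough that bloat never accumulates, and give the free space map enough room to remember the reclaimed pages:

-- plain VACUUM runs alongside normal traffic; VACUUM FULL would not
VACUUM VERBOSE busy_table;

-- postgresql.conf (illustrative values only):
--   max_fsm_pages     = 200000
--   max_fsm_relations = 1000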
On Sun, 3 Oct 2004 12:34:57 +0200
Kristian Rink <[EMAIL PROTECTED]> wrote:
> Though not running postgresql for that solution: We are running an
> enterprise-scaled document management system to keep track of
> currently > 2*10^3 documents (mostly *.hpgl and *.plt files, some
^^
Hi there, Joolz;
On Sun, 3 Oct 2004 10:48:25 +0200 (CEST)
"Joolz" <[EMAIL PROTECTED]> wrote:
> Google was contradictory, some people even had performance
> problems when using the filesystem/pointer approach and went to
> BLOBs for that reason. Can anyone tell me (or point me in the
> right dir
Hello everyone,
Sorry if this is a FAQ, but I've groups.googled the subject and
can't find a definite answer (if such a thing exists). I'm working
on a db in postgresql on a debian stable server, ext3 filesystem.
The db will contain files, not too many (I expect somewhere between
10 and 100 files
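For a collection this small, one sketch of the in-database route (the table is hypothetical): a plain bytea column keeps the files transactional and inside the normal dump/restore cycle, at the cost of some client-side escaping, while the filesystem-plus-pointer approach trades that for keeping paths and rows in sync by hand.

CREATE TABLE stored_files (
    id        serial PRIMARY KEY,
    filename  text   NOT NULL,
    mime_type text,
    content   bytea  NOT NULL   -- the file body itself
);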