remi.cura wrote
> What is best practice for input?
There is none; you have options because different scenarios require
different solutions.
> I see 3 solutions:
> - give table name as input: no order unless I use a view; doesn't work
> with CTE and all.
> - give array and use unnest/array_agg
On Tue, Sep 17, 2013 at 2:06 AM, Rémi Cura wrote:
> The only kind of function taking a set of records as input I know of is
> an aggregate function, but it returns only one row, while the output of a
> union can be multiple rows.
This may or may not help (I suggest posting a more complete example of
what y
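One way to get set-like input, per the array option above, is to aggregate the rows into an array, pass the array to the function, and expand it again with unnest(). A minimal sketch; the function name and the filter it applies are illustrative, not from the thread:

```sql
-- Hypothetical example: accept a "set" as an array argument and
-- return a set of rows by unnesting it inside the function.
CREATE OR REPLACE FUNCTION filter_positive(vals integer[])
RETURNS SETOF integer AS $$
    SELECT v FROM unnest(vals) AS v WHERE v > 0;
$$ LANGUAGE sql;

-- Rows in: collapse with array_agg(); rows out: the function's result set.
SELECT * FROM filter_positive(ARRAY[1, -2, 3, -4]);
```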
On Sun, May 19, 2013 at 4:44 AM, Julian wrote:
> On 19/05/13 13:02, Chris Travers wrote:
> >
> > I actually think that bringing some object-oriented principles into
> > database design can result in some very useful things, provided that one
> > remembers that applications are modelling behavior
On 19/05/13 13:02, Chris Travers wrote:
>
> I actually think that bringing some object-oriented principles into
> database design can result in some very useful things, provided that one
> remembers that applications are modelling behavior while databases are
> modelling information (and so the tw
On 17.05.2013 21:21, Alfonso Afonso wrote:
> Hi Frank
>
> Although you are thinking in OOP, SQL is itself a data definition
> model that you should not ignore and, IMHO, you should try to follow
> the normalization rules.
>
> You can build a robust and normalized schema (table primarylocation,
> ta
In general, I find table inheritance most helpful when one wants to re-use
interfaces over multiple independent tables: for example, instead of a
global notes table, a bunch of notes tables attached to things, but with
centralized schema management. In these cases you want
something ide
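The notes-tables pattern described above can be sketched with INHERITS; table and column names here are hypothetical:

```sql
-- Parent table defines the shared "notes" interface.
CREATE TABLE note (
    id      serial PRIMARY KEY,
    subject text NOT NULL,
    body    text
);

-- Independent notes tables attached to specific things.
CREATE TABLE customer_note (customer_id integer NOT NULL) INHERITS (note);
CREATE TABLE invoice_note  (invoice_id  integer NOT NULL) INHERITS (note);

-- Centralized schema management: this change propagates to every child.
ALTER TABLE note ADD COLUMN created_at timestamptz DEFAULT now();
```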
On 17/05/13 22:46, Frank Lanitz wrote:
> Hi folks,
>
> I'm looking for a nice way to build this scenario:
> I've got a lot of locations with some special types. For example I've
> got workplaces, places like real laboratories and virtual places like
> maybe parcel service. For each of the differe
Hi Frank
Although you are thinking in OOP, SQL is itself a data definition model that
you should not ignore and, IMHO, you should try to follow the normalization
rules.
You can build a robust and normalized schema (table primarylocation, table
secondlocation that has an idprimarylocation, etc.) and
Just our experience in LedgerSMB
On Fri, May 17, 2013 at 5:46 AM, Frank Lanitz wrote:
> Hi folks,
>
> I'm looking for a nice way to build this scenario:
> I've got a lot of locations with some special types. For example I've
> got workplaces, places like real laboratories and virtual places
On Fri, 17 Aug 2012 08:53:05 -0400
Moshe Jacobson wrote:
> I do not know of anything that can't be done from within psql.
> We use non-privileged user roles in postgres for day-to-day
> operations. When I need to modify the schema, I become postgres (you
> can do \c - postgres) and do what I need
Frank Lanitz wrote:
> I'm looking for some kind of best practice for a non-privileged postgres
> user. As not all operations can be done within psql you might need
> access to postgres on the command line from time to time. Currently this
> is done via root privileges and >su - postgres< directly on d
I do not know of anything that can't be done from within psql.
We use non-privileged user roles in postgres for day-to-day operations.
When I need to modify the schema, I become postgres (you can do \c -
postgres) and do what I need to do, then revert back to my regular user.
On Fri, Aug 17, 2012
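The workflow described here might look like this inside a single psql session (assuming pg_hba.conf allows reconnecting as postgres; the role name and DDL are placeholders):

```sql
-- Connected as a non-privileged role for day-to-day work.
\c - postgres                                -- become the superuser
ALTER TABLE accounts ADD COLUMN note text;   -- do the schema change
\c - myuser                                  -- revert to the regular role
```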
On 11/19/10 23:14, Andy Colson wrote:
On 11/19/2010 4:04 PM, Ivan Voras wrote:
On 11/19/10 15:49, Andy Colson wrote:
unlogged will only help insert/update performance. Lookup tables sound
read-only for a majority of the time. (I'm assuming lots of reads and every
once in a while updates). I doubt
On 11/19/2010 4:04 PM, Ivan Voras wrote:
On 11/19/10 15:49, Andy Colson wrote:
unlogged will only help insert/update performance. Lookup tables sound
read-only for a majority of the time. (I'm assuming lots of reads and every
once in a while updates). I doubt that unlogged tables would speed up
loo
On 11/19/10 15:49, Andy Colson wrote:
unlogged will only help insert/update performance. Lookup tables sound
read-only for a majority of the time. (I'm assuming lots of reads and every
once in a while updates). I doubt that unlogged tables would speed up
lookup tables.
Are FreeBSD's temp tables st
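For reference, the feature under discussion is declared at table creation time; a minimal sketch (table name hypothetical):

```sql
-- An unlogged table skips WAL, which speeds up inserts/updates but
-- means the table is truncated after a crash. Reads go through the
-- same path as a regular table, hence the doubt above about lookup tables.
CREATE UNLOGGED TABLE session_cache (
    key   text PRIMARY KEY,
    value text
);
```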
On 11/18/2010 4:56 PM, Fredric Fredricson wrote:
Hi,
I have designed a handful of databases but am absolutely no SQL expert. Nor
have I had any formal database training and have never worked with
someone who had. What I know about SQL I have read in the documentation,
found with google, and learned
On 11/18/10 2:56 PM, Fredric Fredricson wrote:
What puzzles me though is that this use is never mentioned in the
discussions, at least as far as I can see. Am I doing something
"strange"? Is this something you should not have to do if you have
"proper" database design?
in certain other enter
On Mon, Mar 29, 2010 at 02:08:23PM +, Edgardo Portal wrote:
> On 2010-03-29, Juan Backson wrote:
> >
> > Hi,
> >
> > I am using Postgres to store CDR data for voip switches. The data
> > size quickly goes abou
On 2010-03-29, Juan Backson wrote:
>
> Hi,
>
> I am using Postgres to store CDR data for voip switches. The data size
> quickly grows to a few TBs.
>
> What I would like to do is to be able to regularly archive the
Hi
Instead of dropping the table, I would like to archive the old table into a
format that can be read and retrieved.
Can I pg_dump each child table?
What is the best way to do it? pg_dump and turn the data into CSV and then
tar.gz it, or back it up into a pg archived format?
thanks,
jb
O
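One hedged sketch of archiving a monthly child table, per the question above; the table name and path are hypothetical, and pg_dump -t would be the alternative that keeps a pg-native format:

```sql
-- Server-side copy of one partition to CSV before removing it.
COPY cdr_2009_12 TO '/archive/cdr_2009_12.csv' WITH CSV HEADER;
DROP TABLE cdr_2009_12;   -- removes the archived month from the partition set
```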
In response to Juan Backson :
> Hi,
>
> I am using Postgres to store CDR data for voip switches. The data size
> quickly
> grows to a few TBs.
>
> What I would like to do is to be able to regularly archive the oldest data so
> only the most recent 6 months of data is available.
>
> All t
On Jan 31, 2010, at 2:46 AM, Joe Kramer wrote:
> Hi,
>
> I need to store a lot of large files (thousands of 10-100 MB files)
> uploaded through my web application and I find that storing them in
> database as bytea field is not practical for backup purposes.
> My database has full backup perfor
On 01/31/2010 04:46 AM, Joe Kramer wrote:
Hi,
I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application and I find that storing them in
database as bytea field is not practical for backup purposes.
My database has full backup performed every 12 hour
On 31/01/2010 6:46 PM, Joe Kramer wrote:
Hi,
I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application and I find that storing them in
database as bytea field is not practical for backup purposes.
My database has full backup performed every 12 hours
Joe Kramer wrote:
Hi,
I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application and I find that storing them in
database as bytea field is not practical for backup purposes.
My database has full backup performed every 12 hours and backup is
encrypte
Very nice. Thanks!
Thanks and Regards
Yan Cheng CHEOK
--- On Tue, 1/12/10, Pavel Stehule wrote:
> From: Pavel Stehule
> Subject: Re: [GENERAL] Best Practice when Encounter Invalid Stored Procedure
> Parameters
> To: "Yan Cheng Cheok"
> Cc: pgsql-general@postg
hello
2010/1/12 Yan Cheng Cheok :
> In C++, whenever we encounter unexpected parameters, here is what we
> usually do (two alternatives):
>
> bool fun(int i) {
>     if (i < 0) {
>         return false;
>     }
>     return true;
> }
>
> void fun(int i) {
>     if (i < 0) {
>         throw std::runtime_error("Invalid parameter");
>     }
better way, though!
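The PL/pgSQL counterpart of the C++ throw above is RAISE EXCEPTION, which aborts the function and the surrounding transaction; a minimal sketch:

```sql
CREATE OR REPLACE FUNCTION fun(i integer) RETURNS void AS $$
BEGIN
    IF i < 0 THEN
        -- Comparable to throwing an exception in the C++ snippet.
        RAISE EXCEPTION 'Invalid parameter: %', i;
    END IF;
END;
$$ LANGUAGE plpgsql;
```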
I'm sure this is only my second of several more hurdles to overcome before I'm
finished with the transition.
Your wisdom will be appreciated!
CG
From: Tom Lane
To: CG
Cc: pgsql-general@postgresql.org
Sent: Wednesday, July 15
From: Arndt Lehmann
To: pgsql-general@postgresql.org
Sent: Thursday, July 16, 2009 5:22:26 AM
Subject: Re: [GENERAL] best practice transitioning from one datatype to another
On Jul 16, 5:46 pm, a.w...@netzmeister-st-pauli.de (Andreas Wenk)
wrote:
> Arndt Lehmann wrote:
>
On Jul 16, 5:46 pm, a.w...@netzmeister-st-pauli.de (Andreas Wenk)
wrote:
> Arndt Lehmann wrote:
>
> > On Jul 16, 8:05 am, t...@sss.pgh.pa.us (Tom Lane) wrote:
> >> CG writes:
> >>> While transitioning from 8.1 to 8.4, I need to transition to the internal
> >>> UUID type in place of the contrib/
Arndt Lehmann wrote:
On Jul 16, 8:05 am, t...@sss.pgh.pa.us (Tom Lane) wrote:
CG writes:
While transitioning from 8.1 to 8.4, I need to transition to the internal UUID
type in place of the contrib/uniqueidentifier module. I've built the database
around uniqueidentifier, so nearly every tab
On Jul 16, 8:05 am, t...@sss.pgh.pa.us (Tom Lane) wrote:
> CG writes:
> > While transitioning from 8.1 to 8.4, I need to transition to the internal
> > UUID type in place of the contrib/uniqueidentifier module. I've built the
> > database around uniqueidentifier, so nearly every table has one co
CG writes:
> While transitioning from 8.1 to 8.4, I need to transition to the internal
> UUID type in place of the contrib/uniqueidentifier module. I've built the
> database around uniqueidentifier, so nearly every table has one column of
> that data type. It's going to be tedious to
> ALTER
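Each such ALTER might look like the following sketch (table and column names are hypothetical); the USING clause casts the old contrib type's text form to the built-in uuid:

```sql
ALTER TABLE accounts
    ALTER COLUMN account_id TYPE uuid
    USING account_id::text::uuid;
```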
Rodrigo De León wrote:
Pg is pretty smart for almost any case.
Which still doesn't say anything about best practice. In the end I found the
relevant part of the SQL spec: the correct forms are HOUR and MINUTE, where
the associated numbers are integers.
--
Mark Morgan Lloyd
markMLl .AT. telemetry
On 9/24/07, Mark Morgan Lloyd <[EMAIL PROTECTED]> wrote:
> In a similar vein, are fractional hours best specified as
>
> select time '11:00' - interval '3 hour 45 minute';
>
> or as
>
> select time '11:00' - interval '3.75 hour';
>
> or with the interval converted to an integer numb
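Side by side, the forms being compared are:

```sql
-- Standard-conforming fields with integer quantities:
SELECT time '11:00' - interval '3 hour 45 minute';
-- Fractional hours (PostgreSQL accepts this spelling too):
SELECT time '11:00' - interval '3.75 hour';
-- Interval scaled from an integer number of minutes:
SELECT time '11:00' - 225 * interval '1 minute';
```

All three subtract the same 3h45m.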
On Thu, Aug 16, 2007 at 01:56:52AM +0800, Phoenix Kiula wrote:
> This is very useful, thanks. This would be "bytea"? Quick questions:
>
> 1. Even if it were bytea, would it work with regular SQL operators
> such as regexp and LIKE?
bytea is specifically designed for binary data; as such, it has al
On Thu, 16 Aug 2007, Phoenix Kiula wrote:
1. Even if it were bytea, would it work with regular SQL operators
such as regexp and LIKE?
2. Would tsearch2 work with bytea in the future as long as the stuff
in it was text?
As far as I know, regexp, [i]like, tsearch2, etc. all require valid text
On 16/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> On 16/08/07, Ben <[EMAIL PROTECTED]> wrote:
> > On Thu, 16 Aug 2007, Phoenix Kiula wrote:
> >
> > > I am not advocating what others should do. But I know what I need my
> > > DB to do. If I want it to store data that does not match puritanical
On 16/08/07, Ben <[EMAIL PROTECTED]> wrote:
> On Thu, 16 Aug 2007, Phoenix Kiula wrote:
>
> > I am not advocating what others should do. But I know what I need my
> > DB to do. If I want it to store data that does not match puritanical
> > standards of textual storage, then it should allow me to...
On Thu, 16 Aug 2007, Phoenix Kiula wrote:
I am not advocating what others should do. But I know what I need my
DB to do. If I want it to store data that does not match puritanical
standards of textual storage, then it should allow me to...
It does allow that: store it as a BLOB, and then treat
On 8/15/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> > What, exactly, does that mean?
> >
> > That PostgreSQL should take things in invalid utf-8 format and just store
> > them?
> > Or that PostgreSQL should autoconvert from invalid utf-8 to valid
> > utf-8, guessing the proper codes?
> >
> > Se
> What, exactly, does that mean?
>
> That PostgreSQL should take things in invalid utf-8 format and just store
> them?
> Or that PostgreSQL should autoconvert from invalid utf-8 to valid
> utf-8, guessing the proper codes?
>
> Seriously, what do you want pgsql to do with these invalid inputs?
PG
On 8/15/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> On 15/08/07, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > Actually I tried smth like $str = @iconv("UTF-8", "UTF-8//IGNORE",
> > $str); when preparing string for SQL query and it worked. There's
> > probably a better way in PHP
On 15/08/07, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Actually I tried smth like $str = @iconv("UTF-8", "UTF-8//IGNORE",
> $str); when preparing string for SQL query and it worked. There's
> probably a better way in PHP to achieve this: simply change default
> values in php.ini for t
On Aug 15, 2007, at 7:41 AM, Ivan Zolotukhin wrote:
What is the best practice to process such broken strings before
passing them to PostgreSQL? Iconv from utf-8 to utf-8, dropping bad
characters?
This rings of GIGO... if your user enters garbage, how do you know
what they wanted? You don'
Hello,
Actually I tried smth like $str = @iconv("UTF-8", "UTF-8//IGNORE",
$str); when preparing string for SQL query and it worked. There's
probably a better way in PHP to achieve this: simply change default
values in php.ini for these parameters:
mbstring.encoding_translation = On
mbstring.subst
Hello,
Well, PostgreSQL is entirely correct; I would post this message to the
-hackers list otherwise :) The question was rather about application
processing of user input, not about changing how the database reacts to
a broken UTF-8 string. But I am 100% sure one should fix the input in
this case since w
On 15/08/07, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Imagine a web application that processes text search queries from
> clients. If one types a text search query in a browser it then sends
> proper UTF-8 characters and application after all needed processing
> (escaping, checks, etc)
On Wed, Aug 15, 2007 at 03:41:30PM +0400, Ivan Zolotukhin wrote:
> Hello,
>
> Imagine a web application that processes text search queries from
> clients. If one types a text search query in a browser it then sends
> proper UTF-8 characters and application after all needed processing
> (escaping, ch
Mason Hale wrote:
> So, I'm fishing to see of there are alternatives to taking our
> production database down for 6+ hours do this upgrade. Can slony or
> wal files be of help here?
Slony can, but WAL files will not work between different versions.
> Also -- due to hardware limitations, we need t
Mason Hale wrote:
From what I've read about postgres 8.2, upgrading from any previous
version requires a full database dump and restore.
I am working with a largish database (~70GB) that I would like to
upgrade to 8.2.
A full database dump currently takes ~3 hrs; I expect a restore to
require a si
On 5/26/06, Jim Nasby <[EMAIL PROTECTED]> wrote:
Only helps if the OP is willing to run on HEAD; grant on sequence is
not in 8.1 (at least not according to the docs).
you can grant on sequences using syntax for tables. works:
([EMAIL PROTECTED]:5810) 08:59:21 [depesz]
# create sequence test;
CREATE SEQ
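Reconstructed, the session shown is essentially this (role name hypothetical); pre-8.2, nextval() requires UPDATE privilege granted via the table syntax:

```sql
CREATE SEQUENCE test;
GRANT UPDATE ON test TO webuser;  -- "table" GRANT syntax, works on sequences
```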
On Thursday 25 May 2006 19:33, Jim Nasby wrote:
>
> Only helps if the OP is willing to run on HEAD; grant on sequence is
> not in 8.1 (at least not according to the docs).
>
> As for triggers, I don't really see how that would make any sense.
A trigger could start some modification in a table wh
On May 24, 2006, at 10:50 AM, Bruno Wolff III wrote:
On Mon, May 22, 2006 at 12:59:06 +0300,
Joe Kramer <[EMAIL PROTECTED]> wrote:
On a related note, which objects need to be GRANTed specifically?
It is said that the following objects can have permissions
GRANTed:
1. TABLE
2. DATABASE
3.
On Mon, May 22, 2006 at 12:59:06 +0300,
Joe Kramer <[EMAIL PROTECTED]> wrote:
> On a related note, which objects need to be GRANTed specifically?
> It is said that the following objects can have permissions GRANTed:
> 1. TABLE
> 2. DATABASE
> 3. FUNCTION
> 4. LANGUAGE
> 5. SCHEMA
> 6. TABLESPA
On a related note, which objects need to be GRANTed specifically?
It is said that the following objects can have permissions GRANTed:
1. TABLE
2. DATABASE
3. FUNCTION
4. LANGUAGE
5. SCHEMA
6. TABLESPACE
What about SEQUENCE and TRIGGER? The PostgreSQL manual has no mention of this.
Thanks.
On 5/2
You can find some helpful grant scripts here:
http://pgedit.com/tip/postgresql/access_control_functions
On 5/19/06, Joe Kramer <[EMAIL PROTECTED]> wrote:
Hello,
I need to grant all privileges on all objects in database. Without
using SUPERUSER.
It's strange that GRANT ALL PRIVILEGES ON DATABASE is usel
> I will ask, though, why use XML/XSL? Why not use a format that lets
> you load the data into tables, where you can then do a huge number of
> tricks with it prior to generating the DDL, not the least of which is
> diff'ing the current structure to see what needs to be changed.
>
Well, XML/XSLT is indeed not th
Csaba Nagy wrote:
[leaving the original text, as it is a reply to an older posting]
On Fri, 2006-03-31 at 21:26, Jim Nasby wrote:
On Mar 29, 2006, at 3:25 AM, Csaba Nagy wrote:
Could somebody explain me, or point me to a resource where I can find
out what i
[leaving the original text, as it is a reply to an older posting]
On Fri, 2006-03-31 at 21:26, Jim Nasby wrote:
> On Mar 29, 2006, at 3:25 AM, Csaba Nagy wrote:
>
> >> Could somebody explain me, or point me to a resource where I can find
> >> out what is the recommended practice when a live db need
On Mar 29, 2006, at 3:25 AM, Csaba Nagy wrote:
Could somebody explain to me, or point me to a resource where I can find
out what is the recommended practice when a live db needs to be replaced
with a new version of it that has a slightly different structure?
Our development infrastructure inc
On Mar 28, 2006, at 8:40 PM, Robert Treat wrote:
Depends on how much data you need to modify. For small tables, I
stick
with ALTER TABLE because it's a lot cleaner/easier. For larger
tables,
you might want to CREATE TABLE AS SELECT ..., or maybe copy out
and copy
back in.
This seems back
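The CREATE TABLE AS route mentioned above could be sketched like this (names and the changed column are hypothetical):

```sql
BEGIN;
-- Build the new shape in one pass instead of rewriting in place.
CREATE TABLE items_new AS
    SELECT id, name, price::numeric(12,2) AS price
    FROM items;
-- Swap it in; indexes, constraints, and grants must be recreated.
DROP TABLE items;
ALTER TABLE items_new RENAME TO items;
COMMIT;
```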
> Could somebody explain to me, or point me to a resource where I can find
> out what is the recommended practice when a live db needs to be replaced
> with a new version of it that has a slightly different structure?
Our development infrastructure includes a development database cluster,
with one
Hello,
> Could somebody explain to me, or point me to a resource where I can find
> out what is the recommended practice when a live db needs to be replaced
> with a new version of it that has a slightly different structure?
>
> What do you usually do in a situation like this?
That's a big problem f
On Tuesday 28 March 2006 17:31, Jim C. Nasby wrote:
> On Wed, Mar 29, 2006 at 12:24:04AM +0200, SunWuKung wrote:
> > In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
> > > On Tue, Mar 28, 2006 at 09:28:12PM +0200, SunWuKung wrote:
> > > > This is going to be an amateur question...
> > > >
> > > >
On Tue, 2006-03-28 at 16:24, SunWuKung wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
> says...
> > On Tue, Mar 28, 2006 at 09:28:12PM +0200, SunWuKung wrote:
> > > This is going to be an amateur question...
> > >
> > > Could somebody explain to me, or point me to a resource where I can
On Wed, Mar 29, 2006 at 12:24:04AM +0200, SunWuKung wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
> says...
> > On Tue, Mar 28, 2006 at 09:28:12PM +0200, SunWuKung wrote:
> > > This is going to be an amateur question...
> > >
> > > Could somebody explain to me, or point me to a resource
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
says...
> On Tue, Mar 28, 2006 at 09:28:12PM +0200, SunWuKung wrote:
> > This is going to be an amateur question...
> >
> > Could somebody explain to me, or point me to a resource where I can find
> > out what is the recommended practice when a live
On Tue, Mar 28, 2006 at 09:28:12PM +0200, SunWuKung wrote:
> This is going to be an amateur question...
>
> Could somebody explain to me, or point me to a resource where I can find
> out what is the recommended practice when a live db needs to be replaced
> with a new version of it that has a sligh
SunWuKung.
> Could somebody explain to me, or point me to a resource where I can find
> out what is the recommended practice when a live db needs to be replaced
> with a new version of it that has a slightly different structure?
Put the new database on a development machine.
Do a dump of the o
On Sat, Dec 11, 2004 at 10:40:42PM -0600, Guy Rouillier wrote:
> Nilesh Doshi wrote:
> > Also, I thought vacuuming will be easier if each oracle schema becomes a
> > database in postgres. For example, in our case each schema is like
> > 80-90 gb, smaller compared to vacuuming 400gb.
>
> I'm very new to
On Fri, 2004-12-10 at 17:49, Nilesh Doshi wrote:
> Hi All,
>
> I'm new to postgres, so I need your help.
>
> We are in the process of migrating from oracle to postgres. DB size is about
> 400gb.
> My question is about schemas in oracle and postgres. Does every schema in
> oracle become a separ
Nilesh Doshi wrote:
> My question is about schemas in oracle and postgres. Does every
> schema in oracle become a separate database in postgres? Or is it
> still like oracle, where all schemas are part of one big database?
You can have multiple schemas in a single PostgreSQL database. Unlike
Orac
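A sketch of the mapping (schema and table names hypothetical): what were separate Oracle schemas become schemas inside one PostgreSQL database, so cross-schema queries still work directly:

```sql
CREATE SCHEMA billing;
CREATE SCHEMA inventory;
-- Qualified, cross-schema access within the same database:
SELECT count(*)
FROM billing.invoices i
JOIN inventory.items t USING (item_id);
```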
> I do #1 - most connections are not persistent, though I have done those
> before as well. Security-wise I've been reviewing the pros and cons
> of this and so far I really can't make much of an argument for #2.
> Just the opposite, in fact. The one thing I will be doing though,
> for more security,
Hi Keith and others,
Personally I am rather humble regarding my ability to write unassailable
programs from a security perspective, so I tend to use individual database
accounts for individual users. I know that under certain environments this
may not scale well and connection pooling may be requ
On Tuesday 13 January 2004 22:13, Keith G. Murphy wrote:
> John Sidney-Woollett wrote:
> > What you could consider is one or more pools which map to the "roles"
> > that your (web) app supports. For example, if a user needs "minimal
> > rights" access to db resources, then your cgi (request handler
Keith Murphy wrote:
At some point, I may try rolling my own PAM module (as Tom Lane
suggested) that uses the user's browser-authenticated username and
password to map to a PostgreSQL username that constitutes a "role"
(assuming that's possible).
One option is to add an extra layer of indirection
scott.marlowe wrote:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
I'm trying to get a feel for what most people are doing or consider best
practice.
Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP
Tom Lane wrote:
"Keith G. Murphy" <[EMAIL PROTECTED]> writes:
Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
addition to ident maps?
ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application. You shouldn't be asking for more
s
"Keith G. Murphy" <[EMAIL PROTECTED]> writes:
> Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
> addition to ident maps?
ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application. You shouldn't be asking for more
security frammis
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
> I'm trying to get a feel for what most people are doing or consider best
> practice.
>
> Given a mod_perl application talking to a PostgreSQL database on the
> same host, where different users are logging onto the web server using
> LDAP for authen
Keith G. Murphy said:
> Perhaps I can answer my own question. I could use ident and a map that
> lists the web server username as able to map to the different "role"
> usernames.
Someone else also mentioned, and I personally agree, that it's better to
authenticate in the application layer (using wh
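The ident-map idea above might be sketched in pg_ident.conf like this (the map name, OS user, and role names are all hypothetical); the map is then referenced by the ident authentication method in pg_hba.conf:

```
# MAPNAME   SYSTEM-USERNAME   PG-USERNAME
webmap      www-data          app_readonly
webmap      www-data          app_readwrite
```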
John Sidney-Woollett wrote:
Keith G. Murphy said:
That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?
Sorry but I can't help you out here, I'm too much of a newbie with
Postgres -
Keith G. Murphy said:
> That sounds like an excellent compromise. How do you typically handle
> the mechanics of authentication from web server to PostgreSQL on the
> connect, using this scheme?
Sorry but I can't help you out here, I'm too much of a newbie with
Postgres - I was hoping that someon
John Sidney-Woollett wrote:
Keith G. Murphy said:
2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.
My experience with java web/ap
Subject: Re: [GENERAL] Best practice
[EMAIL PROTECTED] writes:
> We use Ingres where I work and when setting up a server we always
> try to ensure that the log and data files are on different spindles.
This would be good practice in PG simply for performance reasons.