We are planning to use Postgres for one of our production database
systems. I noticed that when I go to http://www.postgresql.org/download/,
the one-click binary package for Linux is actually an EnterpriseDB
package (http://www.enterprisedb.com/products/pgdownload.do#linux-x64)
I also noticed that Enterp
Hi PostgreSQL,
I'd like to specify a pattern then apply that pattern to match each
element of an array:
rconover=# select 'foobar%' ~~ ANY (ARRAY['bar', 'cat', 'foobar:asdf']);
 ?column?
----------
 f
(1 row)
I'd like the pattern to be evaluated against all of the array
elements, bu
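One way to get that behaviour (a sketch, not from the thread; it assumes a
release with unnest(), i.e. 8.4 or later): 'foobar%' ~~ ANY(...) treats
'foobar%' as the string and each array element as the pattern, which is the
reverse of what is wanted here, so turn the array back into rows and put the
pattern on the right-hand side:

SELECT EXISTS (
    SELECT 1
    FROM unnest(ARRAY['bar', 'cat', 'foobar:asdf']) AS t(elem)
    WHERE t.elem LIKE 'foobar%'   -- returns true: 'foobar:asdf' matches
);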
paulo matadr wrote:
> Do you know of a tool for automatically converting PL/SQL to PL/pgSQL?
> Does such a tool exist?
EnterpriseDB claim that they can do something like this,
but I don't believe that there is any tool which can do
more than assist you.
Yours,
Laurenz Albe
On Friday 12 December 2008 03:59:42 Tom Lane wrote:
> Greg Smith writes:
> > On Thu, 11 Dec 2008, Phillip Berry wrote:
> >> I'm not running PITR and checkpoint_segments is set to 100 as this is
> >> home to a very write intensive app.
> >
> > That's weird then. It shouldn't ever keep around more
On Thu, 2008-12-11 at 19:33 +0000, Simon Riggs wrote:
> On Thu, 2008-12-11 at 11:29 -0800, Joshua D. Drake wrote:
>
> > > As I said before, if you think something is missing, submit a software
> > > or a doc patch and submit it to peer review. Until then, I think it's
> > > misleading to claim that
On Thu, 2008-12-11 at 11:29 -0800, Joshua D. Drake wrote:
> > As I said before, if you think something is missing, submit a software
> > or a doc patch and submit it to peer review. Until then, I think it's
> > misleading to claim that only your magic spice makes replication work
> > correctly and
On Thu, 2008-12-11 at 19:24 +0000, Simon Riggs wrote:
> > > True, we rely on the existence of rsync, scp etc.. and go to great pains
> > > to provide as much choice as possible.
> > >
> > > If you think other things are required you are welcome to contribute
> > > them so they can be verified fau
On Thu, 2008-12-11 at 09:52 -0800, Joshua D. Drake wrote:
> On Thu, 2008-12-11 at 17:37 +0000, Simon Riggs wrote:
> > On Thu, 2008-12-11 at 09:14 -0800, Joshua D. Drake wrote:
> >
> > > I think this statement is misleading. The only thing core contains is
> > > the ability to use a bunch of util
>> I have a question concerning psql. I found that psql has an option
>> '-t' and that it turns off printing of column names and result
>> row count footers, etc.
>>
>> What I am looking for is an option which would turn off the result row
>> count footer but would still print the column names.
>>
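One way that seems to fit (a sketch; the database name below is a placeholder):
psql's "footer" print option can be switched off on its own, which keeps the
column headers but drops the "(n rows)" line. Inside psql:

\pset footer off

or from the shell, for a single command:

psql -P footer=off -d mydb -c 'SELECT 1 AS one'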
Hi guys,
Do you know of a tool for automatically converting PL/SQL to PL/pgSQL?
Does such a tool exist?
Thanks
Paulo Moraes
I installed the Postgres database at work, planning to replace Oracle in the new
IT systems. I have noticed some differences (between Oracle and Postgres) that
I would like to clarify.
In Postgres, I created 2 users and their schemas: schema "User1" owned by
"User1" and schema "User2" owned by "User
"Scott Marlowe" writes:
> On Thu, Dec 11, 2008 at 9:59 AM, Tom Lane wrote:
>> AFAIK the only non-PITR reason for WAL files to not get recycled is if
>> checkpoints were failing. Do you still have the postmaster log from
>> before the original crash, and if so is there anything in there about
>>
On Thu, 2008-12-11 at 17:37 +0000, Simon Riggs wrote:
> On Thu, 2008-12-11 at 09:14 -0800, Joshua D. Drake wrote:
>
> > I think this statement is misleading. The only thing core contains is
> > the ability to use a bunch of utilities (with the exception of
> > pg_standby) that aren't in core to p
2008/12/11 Angel Alvarez <[EMAIL PROTECTED]>:
> Hi all
>
> pgagent.sql creates a new schema for the pgagent stuff, but it shows it as a catalog
>
> What's the difference? It seems they are created almost identically.
They are the same. pgAdmin just classes the pgagent schema as a
catalog so it doesn't get in
On Thu, 2008-12-11 at 09:14 -0800, Joshua D. Drake wrote:
> On Thu, 2008-12-11 at 17:09 +0000, Simon Riggs wrote:
> > On Wed, 2008-12-10 at 18:34 -0500, Rutherdale, Will wrote:
> > > Thanks very much, Steve.
>
> > Yes, everything you need for log shipping has been contributed to the
> > main proj
On Thu, Dec 11, 2008 at 5:30 AM, Thom Brown <[EMAIL PROTECTED]> wrote:
> What do you folks think is the best way to manage deployments to databases?
> This would include things like table/view/function creations/changes and
> possibly static data changes.
The easiest way I've found to do it is to c
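One common pattern (a sketch, not necessarily what the reply above went on to
describe; the table, column, and version number below are made up): keep
numbered SQL migration scripts in version control and record which ones have
been applied in a tracking table.

CREATE TABLE schema_migrations (
    version     integer PRIMARY KEY,                 -- number of the migration script
    applied_at  timestamptz NOT NULL DEFAULT now()
);

-- Each deployment script then runs its changes and records itself:
BEGIN;
ALTER TABLE customers ADD COLUMN phone text;         -- hypothetical change
INSERT INTO schema_migrations (version) VALUES (42);
COMMIT;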
On Thu, 2008-12-11 at 17:09 +0000, Simon Riggs wrote:
> On Wed, 2008-12-10 at 18:34 -0500, Rutherdale, Will wrote:
> > Thanks very much, Steve.
> Yes, everything you need for log shipping has been contributed to the
> main project. If you read things elsewhere, please refer closely to the
> docs w
On Wed, 2008-12-10 at 18:34 -0500, Rutherdale, Will wrote:
> Thanks very much, Steve.
>
> The main (but not only) type of data replication activity I'm interested
> in right now would be the warm standby. Thus it appears from the
> documents you showed me that log shipping is one solution curren
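For reference, a minimal 8.3-era warm-standby setup (a sketch; the archive
path is a placeholder) is just WAL archiving on the primary plus a
restore_command on the standby:

# primary, postgresql.conf
archive_mode = on
archive_command = 'cp %p /mnt/archive/%f'

# standby, recovery.conf
restore_command = 'pg_standby /mnt/archive %f %p %r'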
On Thu, Dec 11, 2008 at 10:09 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> Don't forget that the OP mentioned earlier that he had very long help
> open connections with possible long help open transactions.
Long held. held. not help.
On Thu, Dec 11, 2008 at 9:59 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Greg Smith <[EMAIL PROTECTED]> writes:
>> On Thu, 11 Dec 2008, Phillip Berry wrote:
>>> I'm not running PITR and checkpoint_segments is set to 100 as this is
>>> home to a very write intensive app.
>
>> That's weird then. It sh
Greg Smith <[EMAIL PROTECTED]> writes:
> On Thu, 11 Dec 2008, Phillip Berry wrote:
>> I'm not running PITR and checkpoint_segments is set to 100 as this is
>> home to a very write intensive app.
> That's weird then. It shouldn't ever keep around more than 201 WAL
> segments. I've heard one rep
On Wed, Dec 10, 2008 at 08:41:30PM -0700, Scott Marlowe wrote:
> one of the real time replication. Failover in slony is pretty easy to
> do and happens in seconds. But you do have to resubscribe the master
> as a slave and copy everything over again after a failover to make the
> old master the
I need to add some complex constraints in the DB.
For example: do not allow a line item of inventory to be changed if it does
not result in the same number of joints as were originally shipped.
These will involve several tables.
What is the best approach for this?
Here is what I have been trying.
CREAT
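One common approach for rules that span several tables is a trigger that
raises an error when the invariant is violated; the sketch below uses made-up
table and column names (inventory_line_items, shipments, joints,
joints_shipped) purely for illustration:

CREATE OR REPLACE FUNCTION check_joint_count() RETURNS trigger AS $$
BEGIN
    -- Hypothetical rule: the line item's joint count must still match
    -- what the related shipment says was originally shipped.
    IF NEW.joints <> (SELECT joints_shipped
                      FROM shipments
                      WHERE shipment_id = NEW.shipment_id) THEN
        RAISE EXCEPTION 'joint count % does not match original shipment', NEW.joints;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER line_item_joint_check
    BEFORE UPDATE ON inventory_line_items
    FOR EACH ROW EXECUTE PROCEDURE check_joint_count();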
I'm asking this as a more general question about which will perform better.
I'm trying to get a set of comments and their score/rankings from two
tables.
*comments*
cid (integer, primary key)
title
body
*comment_ratings*
cid (integer, primary key)
uid (integer, primary key)
score
*Option 1* (Single
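The single-query version is presumably along these lines (a sketch based only
on the columns listed above): an outer join with aggregation, so comments
without ratings still appear.

SELECT c.cid,
       c.title,
       c.body,
       COALESCE(avg(r.score), 0) AS avg_score,
       count(r.score)            AS ratings
FROM comments c
LEFT JOIN comment_ratings r ON r.cid = c.cid
GROUP BY c.cid, c.title, c.body
ORDER BY avg_score DESC;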
Hi all
pgagent.sql creates a new schema for the pgagent stuff, but it shows it as a catalog.
What's the difference? It seems they are created almost identically.
for pgagent 'catalog'
CREATE SCHEMA pgagent
AUTHORIZATION postgres;
COMMENT ON CATALOG pgagent IS 'pgAgent system tables';
for public schema
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Maybe we could rephrase it as "whole-database VACUUM"?
> "database-wide VACUUM"?
Yeah, that's probably better, because I think we use that phrase in
the documentation already.
regards, tom lane
Tom Lane wrote:
> Greg Smith <[EMAIL PROTECTED]> writes:
> > Not exactly. What it said was "To avoid a database shutdown, execute a
> > full-database VACUUM". In that context, "full" means you vacuum
> > everything in the database, but only regular VACUUM is needed. VACUUM
> > FULL, as you le
Greg Smith <[EMAIL PROTECTED]> writes:
> Not exactly. What it said was "To avoid a database shutdown, execute a
> full-database VACUUM". In that context, "full" means you vacuum
> everything in the database, but only regular VACUUM is needed. VACUUM
> FULL, as you learned the hard way, is a m
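To spell out the distinction (a sketch, not part of the original reply): the
wraparound warning wants a plain VACUUM run against the whole database, which
is what you get when no table is named.

VACUUM;   -- plain, database-wide: what the hint message is asking for
-- VACUUM FULL would also work but rewrites every table under an exclusive
-- lock, which is far more disruptive than the hint requires.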
What do you folks think is the best way to manage deployments to databases?
This would include things like table/view/function creations/changes and
possibly static data changes.
Any good solutions out there?
Thanks
Thom
On Thu, Dec 11, 2008 at 3:54 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Wed, 10 Dec 2008, Liraz Siri wrote:
>
>> Besides Sun Microsystems hasn't been a financially healthy organization
>> for quite a few years, as evidenced by its rather dismal stock performance:
>> http://finance.google.com/fi
Greg Smith wrote:
> On Wed, 10 Dec 2008, Liraz Siri wrote:
>
>> Linux may still be behind Solaris in a few areas but I'll wager Linux
>> will catch up and make Solaris completely, utterly obsolete in the not
>> too distant future.
I shouldn't have posted this comment. It's flamebait.
> Great, fr
> No, probably not. I mean they are all pretty easy (especially log
> shipping) but it is definitely true they are slow, depending on the size
> of the database.
>
As an alternative, is there a clustering or multi-master replication
scheme that would be useful in a WAN? Preferably with a "preferred