> "JG" == John Gibson <[EMAIL PROTECTED]> writes:
JG> Hi, all.
JG> I need to upgrade my dual Xeon PostgreSQL engine.
JG> Assuming similar memory and disk sub-systems, I am considering a Quad
JG> Xeon system vs. a Dual Itanium for PostgreSQL. I believe that the
Save the money from the dual i
On Wednesday 11 February 2004 19:49, Mark Harrison wrote:
> Apache has a nice feature: it creates copies of all the default
> configuration files, so that it's easy to diff and see what has
> been modified in the config files.
>
> Can this be included in createdb as well?
Postfix has a useful util
I'm converting an application to use the V3 protocol features in the 7.4
libpq. As I need to make a design choice regarding the use of prepared
statements, I'm wondering what resources a prepared statement uses on
the server? If I need to create several hundred in each backend, is there a
big
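For context, a server-side prepared statement holds a parsed and planned query in the backend's memory for the life of the session (or until deallocated). A hedged sketch of the SQL-level equivalent of preparing once via the 7.4 wire protocol and executing with PQexecPrepared; the statement and table names are invented for illustration:

```sql
-- Hypothetical sketch: lifecycle of a server-side prepared statement.
-- "find_item" and "products" are illustrative names, not from the original post.
PREPARE find_item (integer) AS
    SELECT * FROM products WHERE id = $1;

EXECUTE find_item(42);    -- reuses the stored plan; only parameters travel over the wire

DEALLOCATE find_item;     -- frees the plan now; otherwise it lives until the session ends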
Mark Harrison wrote:
> Apache has a nice feature: it creates copies of all the default
> configuration files, so that it's easy to diff and see what has
> been modified in the config files.
>
> Can this be included in createdb as well?
Uh, the defaults are already in /pgsql/share.
--
Bruce Mo
On Wed, Feb 11, 2004 at 09:47:54AM -0600, Arthur Ward wrote:
> >> While we're at it, what about temporary functions?
> ...
> > Whether it's worth the trouble is another question. What's the
> > use-case?
>
> ...
>
> I don't find lack of temporary functions to be a hindrance. Perhaps it's a
> nic
Apache has a nice feature: it creates copies of all the default
configuration files, so that it's easy to diff and see what has
been modified in the config files.
Can this be included in createdb as well?
Thanks,
Mark
Here's a patch:
*** initdb.sh-orig 2004-02-11 11:25:49.0 -0800
--- ini
Here's the link for rules.
http://www.postgresql.org/docs/7.3/static/rules-insert.html
Richard Huxton wrote:
On Wednesday 11 February 2004 15:56, C G wrote:
Dear All,
Could anyone explain why this function does not work? The error
message is
DETAIL: exceptions.RuntimeError: maximum recursi
You could write a rule to trigger a record into a table as well.
Richard Huxton wrote:
On Wednesday 11 February 2004 15:56, C G wrote:
Dear All,
Could anyone explain why this function does not work? The error
message is
DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
CREA
On Tue, 10 Feb 2004, CSN wrote:
>
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_
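Assuming the usual first steps for a slow join like this apply, it is worth checking for an index on the filter column and reading EXPLAIN ANALYZE output; the table and column names below are taken from the query, while the index name and the literal 42 are made up:

```sql
-- Hedged sketch: likely first diagnostics for the query above (index name invented).
CREATE INDEX product_categories_category_id_idx
    ON product_categories (category_id);

ANALYZE product_categories;
ANALYZE products;

EXPLAIN ANALYZE
SELECT p.*
FROM product_categories pc
JOIN products p ON pc.product_id = p.id
WHERE pc.category_id = 42;   -- 42 stands in for $category_id
```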
On Wednesday February 11 2004 9:57, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > Then what scenarios, if any, merit theory (2) over theory (1)?
>
> I'd only consider a large-cache setting on a machine that's dedicated to
> running the database (where "dedicated" means "that's the only
On Wed, 11 Feb 2004, C G wrote:
>
> > > Dear All,
> > >
> > > Could anyone explain why this function does not work? The error
> >message
> > > is
> > > DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
> > >
> > > CREATE FUNCTION testing() RETURNS trigger AS'
> > >
> > > pl
Ashish,
> Thanx Josh. My conceptual difficulty was logging into postgres using perl
> since for sybase I have been using the specialized sybperl. But I guess
> the standard documentation will help me there.
Personally, I use the FreeTDS module:
www.freetds.org
... which may only work for older Sy
Folks,
Since GForge runs on PostgreSQL, I wanted to see if anyone in our community
could give testimony on running it or using it, professionally, or on another
open source project. Please e-mail me off list. Thanks!
--
Josh Berkus
Aglio Database Solutions
San Francisco
On Wednesday February 11 2004 9:18, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > In general, would it be true to say that if one does *not* anticipate
> > contention for kernel disk cache space from non-DB processes (e.g., the
> > dedicated db server), then you probably want to use the
On Wed, 11 Feb 2004, C G wrote:
> Dear All,
>
> Could anyone explain why this function does not work? The error message
> is
> DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
>
> CREATE FUNCTION testing() RETURNS trigger AS'
>
> plan=plpy.prepare(''INSERT INTO t1 values
> Dear All,
>
> Could anyone explain why this function does not work? The error
message
> is
> DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
>
> CREATE FUNCTION testing() RETURNS trigger AS'
>
> plan=plpy.prepare(''INSERT INTO t1 values ($1)'',[''text''])
> plpy.execute(
"Ed L." <[EMAIL PROTECTED]> writes:
> Then what scenarios, if any, merit theory (2) over theory (1)?
I'd only consider a large-cache setting on a machine that's dedicated to
running the database (where "dedicated" means "that's the only thing you
care about performance of", as in your first scenar
Ashish,
> postgresql you said (I saw this on a list):
> > Also, if you have a *running* Sybase database, conversion is a lot
> > easier ... you can use Perl::DBI to read directly from sybase to a COPY
> > file, and then load the COPY file into Postgres.
>
> I am brand new to postgres and do have a
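The Perl::DBI-to-COPY approach described above works in any language; the key detail is escaping rows into PostgreSQL's COPY text format (tab delimiter, \N as the NULL marker). A minimal Python sketch of just the escaping step, with made-up row data:

```python
def to_copy_line(row):
    """Render one row in PostgreSQL COPY text format (tab-delimited, \\N for NULL)."""
    def esc(value):
        if value is None:
            return r"\N"                      # COPY's NULL marker
        s = str(value)
        # backslash must be escaped first, then the delimiter and newlines
        return (s.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n"))
    return "\t".join(esc(v) for v in row)

# Rows as they might come back from the source database (illustrative data)
rows = [(1, "widget", None), (2, "gad\tget", "x\ny")]
copy_text = "\n".join(to_copy_line(r) for r in rows)
print(copy_text)
```

The resulting text can be fed straight to `COPY tablename FROM stdin`.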
C G wrote:
Dear All,
Could anyone explain why this function does not work? The error
message is
DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
CREATE FUNCTION testing() RETURNS trigger AS'
plan=plpy.prepare(''INSERT INTO t1 values ($1)'',[''text''])
plpy.execute(plan,[
"Ed L." <[EMAIL PROTECTED]> writes:
> In general, would it be true to say that if one does *not* anticipate
> contention for kernel disk cache space from non-DB processes (e.g., the
> dedicated db server), then you probably want to use theory (1)? If one
> *does* anticipate such contention (e.g
Iker Arizmendi <[EMAIL PROTECTED]> writes:
> How are function parameters of rowtype specified when
> calling them from a client such as libpq?
Something like
select myfunc(t.*) from tab t where ...
regards, tom lane
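To flesh out that answer with a hedged example (table, function, and column names are invented; the body is single-quoted since dollar quoting postdates 7.4):

```sql
-- Hypothetical sketch of a function taking a rowtype parameter.
CREATE TABLE tab (id integer, name text);

CREATE FUNCTION myfunc(tab) RETURNS text AS '
    SELECT $1.name
' LANGUAGE sql;

-- Pass whole rows from the client using the syntax from the answer above:
SELECT myfunc(t.*) FROM tab t WHERE t.id > 0;
```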
---(end of broadcast)---
"muteki muteki" <[EMAIL PROTECTED]> writes:
> Since we have our systems being deployed to numerous
> remote systems (psql 7.2.3), upgrading the entire database
> (with data migration) will be the least preferable
> solution.
At the very least you should be running 7.2.4. We do not make
dot-releas
C G wrote:
Dear All,
Could anyone explain why this function does not work? The error
message is
DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
CREATE FUNCTION testing() RETURNS trigger AS'
plan=plpy.prepare(''INSERT INTO t1 values ($1)'',[''text''])
plpy.execute(plan,[
On Tuesday February 10 2004 11:17, Tom Lane wrote:
>
> Well, if you go *really* small then you find a lot of CPU time gets
> wasted shuffling data from kernel cache to PG cache. The sweet spot
> for theory (1) seems to be to set shared_buffers in the range of 1000 to
10000 buffers. (Time was th
Dear All,
Could anyone explain why this function does not work? The error message
is
DETAIL: exceptions.RuntimeError: maximum recursion depth exceeded.
CREATE FUNCTION testing() RETURNS trigger AS'
plan=plpy.prepare(''INSERT INTO t1 values ($1)'',[''text''])
plpy.execute(plan,[''blah''])
r
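Although the function body is cut off, this error usually means the trigger's INSERT targets the same table the trigger is attached to, so every insert re-fires the trigger. A plain-Python sketch of that failure mode and the usual fix (all names invented; this only models the control flow, not plpython itself):

```python
# Pure-Python model of why a trigger that inserts into its own table recurses.
t1, t2 = [], []

def insert(table, value, trigger=None):
    table.append(value)
    if trigger:
        trigger(value)              # firing the trigger may re-enter insert()

def on_insert_t1_bad(value):
    # like a trigger on t1 whose body INSERTs back into t1:
    insert(t1, value, trigger=on_insert_t1_bad)

def on_insert_t1_fixed(value):
    t2.append(value)                # fix: write the side effect to a *different* table

try:
    insert(t1, "blah", trigger=on_insert_t1_bad)
except RecursionError:
    print("maximum recursion depth exceeded")   # same failure the plpython error reports

insert(t1, "blah", trigger=on_insert_t1_fixed)  # fine: no re-entry
```

In the real trigger, the equivalent fixes are inserting into a different table, or guarding the INSERT so it cannot fire the same trigger again.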
>> While we're at it, what about temporary functions?
...
> Whether it's worth the trouble is another question. What's the
> use-case?
I have a data-loading script that transforms data from an intermediate
form in work tables to its final resting place in production. Part of this
is a major strin
On Wed, 11 Feb 2004, NTPT wrote:
> Takes 1900 ms. In this case I tried increasing effective_cache_size step
> by step (64, 128, 256, 512, 1024), but increasing effective_cache_size beyond
> 512 had no dramatic impact on performance.
Note that effective_cache_size ONLY affects the query plan chosen. I
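As that reply notes, effective_cache_size is only a planner hint; it allocates nothing itself. A hypothetical 7.x-era postgresql.conf fragment for contrast (values are illustrative, not recommendations; the unit is 8 kB pages):

```
# postgresql.conf sketch -- illustrative values only
shared_buffers = 1000           # real allocation: 1000 x 8 kB of shared memory
sort_mem = 4096                 # per-sort memory in kB (renamed work_mem in 8.0)
effective_cache_size = 65536    # planner hint: ~512 MB of expected kernel disk cache
```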
How are function parameters of rowtype specified when
calling them from a client such as libpq? Is there a syntax
similar to that for arrays? (eg, {x, y, z} )
Thanks,
Iker
On Wednesday 11 February 2004 12:12, Erwin Van de Velde wrote:
> If anyone has built such functions already, I'd gladly accept them, and you
> can earn a line in my acknowledgements ;-)
Dear Erwin,
I built a small centralised database for Ulogd and ran into the same
questions. You can either use impl
On Tue, Feb 10, 2004 at 08:29:50PM -0800, muteki muteki wrote:
> Hi,
>
> I am currently having the corrupted tables issues
> described in the following link (possibly caused by
> power failure, which happens pretty often)
> http://archives.postgresql.org/pgsql-admin/2003-04/msg00012.php.
> Since w
Hi,
I'm building a central logging system for security applications as my master
thesis, but I've run into some troubles:
Different applications make database logs using different formats:
- Timestamps as timestamps or as numeric values
- IP addresses in dotted notation (aaa.bbb.ccc.ddd) or as n
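Normalising those formats before (or while) loading is straightforward; a hedged Python sketch of the two conversions mentioned (dotted-quad IP to a numeric value and back, and a numeric epoch timestamp to ISO form), using only the standard library:

```python
import datetime
import ipaddress

def ip_to_int(dotted):
    """aaa.bbb.ccc.ddd -> 32-bit integer, as some loggers store addresses."""
    return int(ipaddress.IPv4Address(dotted))

def int_to_ip(n):
    """32-bit integer -> dotted-quad string."""
    return str(ipaddress.IPv4Address(n))

def epoch_to_iso(seconds):
    """Numeric timestamp -> ISO 8601 string (UTC)."""
    return datetime.datetime.fromtimestamp(
        seconds, tz=datetime.timezone.utc).isoformat()

print(ip_to_int("192.168.0.1"))   # -> 3232235521
print(int_to_ip(3232235521))      # -> 192.168.0.1
print(epoch_to_iso(0))            # -> 1970-01-01T00:00:00+00:00
```

Whichever representation is chosen for the central schema, converting at load time keeps the queries uniform across source applications.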
On Wednesday 11 February 2004 09:42, NTPT wrote:
>
> It seems that:
>
> 1: Setting memory limits too high, so that the machine starts using swap
> space, is WORSE than giving postgres as little memory as possible.
Swapping is always bad news.
> 2: settings of sort_mem have a bigger impact on performance th