I don't think so -- I followed the instructions here:
http://www.robbyonrails.com/articles/2006/05/29/install-ruby-rails-and-postgresql-on-osx
But looking around, I see there's a pg_ctl in
/usr/local/bin, but 'port contents postgresql8' shows
a pg_ctl in /opt/local/lib/pgsql8/bin.
~ $ ll /opt/lo
CSN <[EMAIL PROTECTED]> writes:
> DETAIL: The database cluster was initialized without
> HAVE_INT64_TIMESTAMP but the server was compiled with
> HAVE_INT64_TIMESTAMP.
> HINT: It looks like you need to recompile or initdb.
Is it possible you have two PG installs on this machine, and you're
tryin
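When two installs are suspected, as above, a quick check is to compare the binaries directly. A diagnostic sketch (paths taken from the post; output will vary per machine):

```shell
# Which pg_ctl is first on the PATH?
which pg_ctl

# Compare versions of the two suspected installs:
/usr/local/bin/pg_ctl --version
/opt/local/lib/pgsql8/bin/pg_ctl --version

# And see which binary the running server was started from:
ps aux | grep postgres
```

If the two pg_ctl binaries report different versions, the HAVE_INT64_TIMESTAMP mismatch in the log above is consistent with a data directory initialized by one build and a server started from the other.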
Hello,
O.k. so how about a phased approach?
1. Contact maintainers to create their new projects on pgfoundry and
begin moving tickets
2. Migrate CVS
3. Migrate mailing lists
Sincerely,
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4
I installed and started PostgreSQL and it worked fine
for days. Then I restarted my computer and now I can't
start PostgreSQL ('pg_ctl -D pgdata -l pgdata/psql.log
start'). Here's what's in my log:
LOG: received immediate shutdown request
WARNING: terminating connection because of crash of
anoth
On 27-Aug-06, at 11:47 PM, Joshua D. Drake wrote:
Greg Sabino Mullane wrote:
I have been looking at the migration of Gborg lately. It looks
like the
only two active projects on that site are Slony, and pljava.
Libpqxx has
recently moved to the
Greg Sabino Mullane wrote:
> > I would like more information on this deficiency and what causes it so I
> > know when to anticipate it. This resulted in a rather nasty bug which
> > took me ages to trac
Tom Lane wrote:
> Martijn van Oosterhout writes:
> > Sure, UNIQUE constraints are not deferrable. With normal constraints
> > you can defer the check until the end of the transaction and be in an
> > inconsistent state for a while. However, PostgreSQL doesn't support this
> > for uniqueness checks.
>
>
On Mon, 2006-08-28 at 17:34, Peter Eisentraut wrote:
> Scott Marlowe wrote:
> > I'm wondering if the source code is available.
>
> http://www.heise.de/ct/dbcontest/teilnehmer.shtml
>
> > My guess is it was full of MySQLisms and the postgresql "port" was
> > written without indexes, no transaction
On Mon, 2006-08-28 at 17:06, Markus Schiltknecht wrote:
> Tony Caduto wrote:
> > http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
> >
> > Don't know the validity of this dvd order test they did, but the article
> > claims Postgresql only did 120 OPM.
> > Seems a little fishy to me.
>
Scott Marlowe wrote:
> I'm wondering if the source code is available.
http://www.heise.de/ct/dbcontest/teilnehmer.shtml
> My guess is it was full of MySQLisms and the postgresql "port" was
> written without indexes, no transactions, and relied on running
> dozens of queries that postgresql could
Brandon Aiken wrote:
> To be fair, that's the fault of the previous designer, not MySQL.
> You don't blame Stanley when your contractor uses 2" plain nails
> when he needed 3" galvanized. The tool isn't to blame just
> because someone used it incorrec
This is a little strange - my response to this post apparently got lost
in the net?? I haven't received it back through the list, nor is it
visible in the archive. Yet, my exim logfile contains an entry indicating
'delivery completed'???
But to the point.
All the EXPLAIN ANALYZE I did on postgres v8.
Tony Caduto wrote:
http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
Don't know the validity of this dvd order test they did, but the article
claims Postgresql only did 120 OPM.
Seems a little fishy to me.
Now, this article really s**ks! First of all, the original contest was
spec
On Mon, 2006-08-28 at 16:02, Joshua D. Drake wrote:
> Tony Caduto wrote:
> > http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
> >
> > Don't know the validity of this dvd order test they did, but the article
> > claims Postgresql only did 120 OPM.
> > Seems a little fishy to me.
> >
>
Chris Mair wrote:
PS: this is, by the way, a few months old; I'm wondering why MySQL
does the press release only now...
Because they don't have anything else to talk about, and are filling a vacuum?
Cheers,
NL
---(end of broadcast)---
TIP 3: H
Tom Lane <[EMAIL PROTECTED]> writes:
> Bruce and some other people thought this was confusing, so it's been
> changed for 8.2.
No kidding. They confused me.
Well, thanks for the explanation.
The new messages are infinitely clearer.
--
greg
Looks like it was a design contest not a benchmark to me. Surprise,
surprise, the team that personally designs a DBMS has the best
performing DBMS. The second place winner, Alexander Burger, is the
author of the solution he used: Pico LISP. The third place team,
MonetDB, used their solution, Mon
In response to Chris Mair <[EMAIL PROTECTED]>:
>
> > http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
> >
> > Don't know the validity of this dvd order test they did, but the article
> > claims Postgresql only did 120 OPM.
> > Seems a little fishy to me.
>
> There was just one submi
> http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
>
> Don't know the validity of this dvd order test they did, but the article
> claims Postgresql only did 120 OPM.
> Seems a little fishy to me.
There was just one submission for PostgreSQL made by one guy who didn't
manage to finish
> I would like more information on this deficiency and what causes it so I
> know when to anticipate it. This resulted in a rather nasty bug which
> took me ages to track down. Is anyone able+willing to explain a little
> here or should I ask in -hac
Tony Caduto wrote:
> http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
>
> Don't know the validity of this dvd order test they did, but the
> article claims Postgresql only did 120 OPM.
The contest evaluated the solutions sent in by whoever wanted to
participate. This doesn't prove any
Tony Caduto wrote:
http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
Don't know the validity of this dvd order test they did, but the article
claims Postgresql only did 120 OPM.
Seems a little fishy to me.
This has got to be a complete joke.
Joshua D. Drake
--
=== The Postgr
Martijn van Oosterhout wrote:
> Sure, UNIQUE constraints are not deferrable. With normal constraints
> you can defer the check until the end of the transaction and be in an
> inconsistent state for a while. However, PostgreSQL doesn't support
> this for uniqueness checks.
Note that even a nondeferred un
Richard Broersma Jr wrote:
> Is this related to the current limitations of "SET CONSTRAINTS"?
Only in a vague and nonspecific way.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Martijn van Oosterhout writes:
> Sure, UNIQUE constraints are not deferrable. With normal constraints
> you can defer the check until the end of the transaction and be in an
> inconsistent state for a while. However, PostgreSQL doesn't support this
> for uniqueness checks.
Actually, what the spec says
> Naz Gassiep <[EMAIL PROTECTED]> writes:
> > I would like more information on this deficiency and what causes it so I
> > know when to anticipate it.
>
> The uniqueness constraint is checked on a row-by-row basis, so if you
> update one row to hold the same value as another row holds, you get an
On Tue, Aug 29, 2006 at 06:17:39AM +1000, Naz Gassiep wrote:
> I would like more information on this deficiency and what causes it so I
> know when to anticipate it. This resulted in a rather nasty bug which
> took me ages to track down. Is anyone able+willing to explain a little
> here or shoul
Naz Gassiep <[EMAIL PROTECTED]> writes:
> I would like more information on this deficiency and what causes it so I
> know when to anticipate it.
The uniqueness constraint is checked on a row-by-row basis, so if you
update one row to hold the same value as another row holds, you get an
error immed
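A minimal sketch of that row-by-row behaviour (table and values invented for illustration; this needs a live PostgreSQL session, so take it as a sketch rather than a verified transcript):

```sql
CREATE TABLE t (x integer UNIQUE);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (3);

-- The unique index is checked as each row is updated, so this can fail:
-- updating 1 -> 2 collides with the not-yet-updated 2.
UPDATE t SET x = x + 1;

-- One common workaround: pass through a range no row occupies.
UPDATE t SET x = -x;
UPDATE t SET x = -x + 1;
```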
http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
Don't know the validity of this dvd order test they did, but the article
claims Postgresql only did 120 OPM.
Seems a little fishy to me.
--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for
I wrote:
> No, I think Bruce fixed this recently. It's just a cosmetic mistake in
> the error message so we didn't back-patch it.
No, strike that, I remember the discussion now. The pre-8.2 code is
correct on its own terms, which is that it's telling you what size
number you tried to put in:
re
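For reference, the limits being discussed can be sketched like this (the DETAIL wording differs between pre-8.2 and 8.2, per Tom's note above):

```sql
CREATE TABLE test (a numeric(12,2));

-- numeric(12,2) leaves 12 - 2 = 10 digits before the decimal point,
-- so the largest storable value is 9999999999.99:
INSERT INTO test VALUES (9999999999.99);      -- fits
INSERT INTO test VALUES (123123123123123.2);  -- numeric field overflow
```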
Naz Gassiep wrote:
No, the subsequent UPDATEs were just there to show you they worked... I
was only interested in the failed update, and why it failed. The DB was
consistent before the query, and it would have been after the query, so
I did not understand why the query failed unless the query m
Gregory Stark <[EMAIL PROTECTED]> writes:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
>> test=> insert into test values (123123123123123.2);
>> ERROR: numeric field overflow
>> DETAIL: The absolute value is greater than or equal to 10^14 for field
>> with precision 12, scale 2.
> Uhm 10^14? What
I would like more information on this deficiency and what causes it so I
know when to anticipate it. This resulted in a rather nasty bug which
took me ages to track down. Is anyone able+willing to explain a little
here or should I ask in -hackers ?
Regards,
- Naz.
Michael Glaesemann wrote:
O
No, the subsequent UPDATEs were just there to show you they worked... I
was only interested in the failed update, and why it failed. The DB was
consistent before the query, and it would have been after the query, so
I did not understand why the query failed unless the query made the DB
inconsis
On Aug 29, 2006, at 4:46 , Peter Eisentraut wrote:
Naz Gassiep wrote:
conwatch=# UPDATE replies SET rgt = rgt + 2 WHERE postid = 18 AND rgt
= 11;
ERROR: duplicate key violates unique constraint "replies_rgt_postid"
This is a well-known deficiency in PostgreSQL. You will have to work
arou
Peter Eisentraut wrote:
Naz Gassiep wrote:
If the violation of the constraint really is being caused
WITHIN the query, doesn't that violate the principle of atomicity?
I.e., operations and entities should be considered a single entire
construct rather than a collection of smaller
Naz Gassiep wrote:
I am getting an error that I think I understand, but that I didn't think
should happen.
Below is the output from psql that I am getting to trigger this error.
If the violation of the constraint really is being caused WITHIN the
query, doesn't that violate the principle of a
Naz Gassiep wrote:
> If the violation of the constraint really is being caused
> WITHIN the query, doesn't that violate the principle of atomicity?
> I.e., operations and entities should be considered a single entire
> construct rather than a collection of smaller, discrete parts.
The principle of
To be fair, that's the fault of the previous designer, not MySQL. You
don't blame Stanley when your contractor uses 2" plain nails when he
needed 3" galvanized. The tool isn't to blame just because someone used
it incorrectly.
MySQL works great for what it does: high speed at a cost of data
inte
"Brandon Aiken" <[EMAIL PROTECTED]> writes:
> Oh, I agree. PostgreSQL is a much more well-behaved RDBMS than MySQL
> ever was. I'm more inclined to select PostgreSQL over MySQL, but I may
> not be able to convince management that it's a better choice no matter
> how technically superior I can sh
On Mon, 2006-08-28 at 13:54, Gregory Stark wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
>
> > test=> create table test (a numeric(12,2));
> > CREATE TABLE
> > test=> insert into test values (123123123123123.2);
> > ERROR: numeric field overflow
> > DETAIL: The absolute value is greater tha
I am getting an error that I think I understand, but that I didn't think
should happen.
Below is the output from psql that I am getting to trigger this error.
If the violation of the constraint really is being caused WITHIN the
query, doesn't that violate the principle of atomicity? I.e., oper
Scott Marlowe <[EMAIL PROTECTED]> writes:
> test=> create table test (a numeric(12,2));
> CREATE TABLE
> test=> insert into test values (123123123123123.2);
> ERROR: numeric field overflow
> DETAIL: The absolute value is greater than or equal to 10^14 for field
> with precision 12, scale 2.
Uh
Oh, I agree. PostgreSQL is a much more well-behaved RDBMS than MySQL
ever was. I'm more inclined to select PostgreSQL over MySQL, but I may
not be able to convince management that it's a better choice no matter
how technically superior I can show it to be.
--
Brandon Aiken
CS/IT Systems Engineer
On Mon, 2006-08-28 at 12:28, Brandon Aiken wrote:
> I'm considering migrating our MySQL 4.1 database (barf!) to PostgreSQL 8
> or MySQL 5.
>
> The guy who originally designed the system made all the number data
> FLOATs, even for currency items. Unsurprisingly, we've noticed math
> errors resul
"Brandon Aiken" <[EMAIL PROTECTED]> writes:
> I'm considering migrating our MySQL 4.1 database (barf!) to PostgreSQL 8
> or MySQL 5.
>
> The guy who originally designed the system made all the number data
> FLOATs, even for currency items. Unsurprisingly, we've noticed math
> errors resulting f
Bjørn T Johansen <[EMAIL PROTECTED]> writes:
> I am trying to create a function but I don't get further than this and
> something is already wrong...
> The function so far looks like this:
> CREATE OR REPLACE FUNCTION getNettoHastighet (INTEGER) RETURNS INTEGER AS '
> DECLARE
I'm considering migrating our MySQL 4.1 database (barf!) to PostgreSQL 8
or MySQL 5.
The guy who originally designed the system made all the number data
FLOATs, even for currency items. Unsurprisingly, we've noticed math
errors resulting from some of the aggregate functions. I've learned
MySQL
I am trying to create a function but I don't get further than this and
something is already wrong...
The function so far looks like this:
CREATE OR REPLACE FUNCTION getNettoHastighet (INTEGER) RETURNS INTEGER AS '
DECLARE
ordreid_val ALIAS FOR $1;
tmprec RECORD;
opplag integer;
t
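For comparison, a minimal complete function in the same quoted-body style. Only the header and DECLARE block come from the post; the body is an invented placeholder. A missing closing quote or LANGUAGE clause is a common cause of "something is already wrong" at this stage:

```sql
CREATE OR REPLACE FUNCTION getNettoHastighet (INTEGER) RETURNS INTEGER AS '
DECLARE
    ordreid_val ALIAS FOR $1;
    tmprec RECORD;
    opplag integer;
BEGIN
    -- placeholder body, just to make the skeleton complete:
    opplag := 0;
    RETURN opplag;
END;
' LANGUAGE 'plpgsql';
```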
Exactly. Sorry for being so careless. Was thinking something else after being bugged up. Done now. Thanks a lot,
~Harpreet
On 8/28/06, Jorge Godoy <[EMAIL PROTECTED]> wrote:
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> I did
> sudo yum install postgresql-plperl*
> and it says dependency perl-base =
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> I did
> sudo yum install postgresql-plperl*
> and it says dependency perl-base = 2:5.8.8 is missing.
> I did yum install perl-base = 2:5.8.8 and its says nothing to do
> Tried sudo yum install perl-base = 2:5.8.8 and says nothing do
> Even tried yu
No, you can make this work just fine if you JOIN right.
Your way is a more concise way of expressing it, though.
Tom's trick
SELECT DISTINCT ON (object_id, object_val_type_id) * from object_val
ORDER BY object_id DESC, object_val_type_id DESC, observation_date
DESC
Runs about twice as fa
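Spelled out side by side, the two approaches look roughly like this (column names from the thread; a sketch, not tested against the poster's schema):

```sql
-- GROUP BY plus a join back to pick the latest row per group:
SELECT o.*
FROM object_val o
JOIN (SELECT object_id, object_val_type_id,
             max(observation_date) AS observation_date
      FROM object_val
      GROUP BY object_id, object_val_type_id) latest
USING (object_id, object_val_type_id, observation_date);

-- DISTINCT ON, reportedly about twice as fast here:
SELECT DISTINCT ON (object_id, object_val_type_id) *
FROM object_val
ORDER BY object_id DESC, object_val_type_id DESC, observation_date DESC;
```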
I did
sudo yum install postgresql-plperl*
and it says dependency perl-base = 2:5.8.8 is missing.
I did yum install perl-base = 2:5.8.8 and it says nothing to do.
Tried sudo yum install perl-base = 2:5.8.8 and it says nothing to do.
Even tried yum install perl-base* and it still says nothing to do.
Don't know what's
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> Can anyone give me the right path to download the postgresql-plperl package for
> fedora core 5 (32 bit) postgresql version 8.1.4.
sudo yum install postgresql-pl
regards, tom lane
---(end of broadcast)
Alban Hertroys <[EMAIL PROTECTED]> writes:
> There's practically no difference between SELECT 1 FROM ... and SELECT *
> FROM ...; the only added costs (AFAIK) are for actually fetching the
> column values and such. Pretty cheap operations.
You're both glossing over exactly the wrong thing, parti
Can anyone give me the right path to download the postgresql-plperl package for Fedora Core 5 (32 bit), PostgreSQL version 8.1.4? The one that I found closest to the needs is not working. Tried a few more but all of them throw the same problem.
What would be the most authentic source? Thanks,
~Harpreet
I have set up a new server to hold my postgres 8.1 database.
When I do a pg_dump to a file mydb.tar, where do I copy it to on the new
server to restore it?
Also, will all the schemas be copied?
On Mon, Aug 28, 2006 at 04:18:12PM +0200, Bjørn T Johansen wrote:
> On Mon, 28 Aug 2006 07:20:02 -0600 Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > select extract(epoch from sum(Til - Fra)) * 1000.0 ...
>
> Do you know if this is supported on older versions of PostgreSQL
> as well? (eg. 7.4.x)
Yes
On Mon, 28 Aug 2006 07:20:02 -0600
Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 28, 2006 at 10:48:47AM +0200, Bjørn T Johansen wrote:
> > select sum(Til - Fra) as total from Log_stop where OrdreID = 3434
> >
> > but I would like the result to be in milliseconds, is this possible? If so,
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> I tried to install the postgresql-plperl-8.1 package.
> It asks for a few dependencies.
> I did yum install of those dependencies but it says "nothing to do"
My guess is that you're trying to install the wrong package, ie an
RPM built for a different distr
"Jasbinder Bali" <[EMAIL PROTECTED]> writes:
> Its because my trigger has to initiate some unix tools and the code for
> the same is already written in Perl.
> So my trigger just needs to call the Perl program that would do the needful
> eventually.
Seems like you should be writing the trigger in
On Mon, Aug 28, 2006 at 10:48:47AM +0200, Bjørn T Johansen wrote:
> select sum(Til - Fra) as total from Log_stop where OrdreID = 3434
>
> but I would like the result to be in milliseconds, is this possible? If so,
> how?
> (the fields Til and Fra are of type Time)
You could use extract(epoch from
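Put together with the original statement, that suggestion reads (a sketch assuming the poster's table):

```sql
SELECT extract(epoch FROM sum(Til - Fra)) * 1000.0 AS total_ms
FROM Log_stop
WHERE OrdreID = 3434;
```

Subtracting two time values yields an interval, and extract(epoch FROM interval) gives seconds, so multiplying by 1000.0 yields milliseconds.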
On Mon, 2006-08-28 at 14:50 +0200, Alban Hertroys wrote:
> Rafal Pietrak wrote:
>
> > But when I look at ANALYSE output of comlog SELECT, I can see, that:
> > 1. the seq-scans is more expensive here: 170ms and 120ms respectively.
> > Any reasons for that?
> > 2. each scan has an additional job of:
On Mon, Aug 28, 2006 at 02:38:07PM +0200, Bobby Gontarski wrote:
> I do:
> pg_dump -Ft mydb > mydb.tar
> pg_restore -d newdb mydb.tar
> and I get tons of errors. Like:
> pg_restore: [archiver (db)] could not execute query: ERROR: relation
> "pg_ts_cfgmap" already exists
Normally you restore on
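"relation ... already exists" errors usually mean the target database wasn't empty. A hedged sketch of the usual sequence (database names from the post):

```shell
pg_dump -Ft mydb > mydb.tar

# Restore into a freshly created, empty database:
createdb newdb
pg_restore -d newdb mydb.tar

# Or let pg_restore issue the CREATE DATABASE itself (-C),
# connecting to an existing database first:
pg_restore -C -d template1 mydb.tar
```

pg_dump without a -n/--schema option dumps all schemas, so they come across in the restore as well.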
Rafal Pietrak wrote:
Well. The logfiles don't have their own indexes but make foreign key
references over brand1/brand2/clty columns. Unique constraints are on the
target tables.
So there's no index on the logfiles then? (A foreign key constraint
doesn't create an index). It doesn't seem like i
I do:
pg_dump -Ft mydb > mydb.tar
pg_restore -d newdb mydb.tar
and I get tons of errors. Like:
pg_restore: [archiver (db)] could not execute query: ERROR: relation
"pg_ts_cfgmap" already exists
Command was: CREATE TABLE pg_ts_cfgmap (
ts_name text NOT NULL,
tok_alias text NOT NULL,
dict_name text[
On Mon, 2006-08-28 at 13:04 +0200, Alban Hertroys wrote:
> Rafal Pietrak wrote:
> > Total runtime: 822.901 ms
> > (7 rows)
> > -
>
> Just to make sure: You do have an appropriate index over the tables in
> that UNION?
Well. The logfiles don't have their ow
Rafal Pietrak wrote:
Thank you All for explanations. Looks like that's what I was looking
for.
UNION ALL is quite satisfactory (830ms).
And yet, somewhere I lose ca. 600ms (as compared to 120ms+80ms of each
respective 'raw' subquery) which as a percentage seems significant.
Does anybody know
Thank you All for explanations. Looks like that's what I was looking
for.
UNION ALL is quite satisfactory (830ms).
And yet, somewhere I lose ca. 600ms (as compared to 120ms+80ms of each
respective 'raw' subquery) which as a percentage seems significant.
Does anybody know where the processing g
I have a statement looking like this...:
select sum(Til - Fra) as total from Log_stop where OrdreID = 3434
but I would like the result to be in milliseconds, is this possible? If so, how?
(the fields Til and Fra are of type Time)
Regards,
BTJ
--
--
On Mon, 2006-08-28 at 10:23 +0200, Rafal Pietrak wrote:
> Hi all,
>
> Is there a way to speed up the query to my 'grand total' logfile,
> constructed as a UNION of smaller (specialised) logfiles?
>
I do not know if this is relevant to your case, but
possibly you can use a UNION ALL instead of a
Silvela, Jaime (Exchange) wrote:
The obvious way to get the latest measurement of type A would be to
join the table against
SELECT object_id, object_val_type_id, max(observation_date)
FROM object_val
GROUP BY object_id, object_val_type_id
I'm not sure this is actually the result you want; doe
On Monday, 28 August 2006 10:23, Rafal Pietrak wrote:
> Is there a way to speed up the query to my 'grand total' logfile,
> constructed as a UNION of smaller (specialised) logfiles?
If it is sufficient for your purpose, you will find UNION ALL to be
significantly faster.
--
Peter Eisentraut
h
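The difference in a sketch (log1/log2 stand in for the specialised logfiles in the question):

```sql
-- UNION must sort both inputs and remove duplicate rows:
SELECT * FROM log1
UNION
SELECT * FROM log2;

-- UNION ALL simply concatenates the results, skipping that step:
SELECT * FROM log1
UNION ALL
SELECT * FROM log2;
```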
Hi all,
Is there a way to speed up the query to my 'grand total' logfile,
constructed as a UNION of smaller (specialised) logfiles?
Access to log1/log2 is quick (if I'm reading the ANALYZE output correctly, it's
ca. 100ms each - and it feels like that, so presumably I'm reading
ANALYZE just OK), but th
I tried to install the postgresql-plperl-8.1 package.
It asks for a few dependencies.
I did yum install of those dependencies but it says "nothing to do".
Can you tell me what's wrong with it?
Thanks,
~Harpreet
On 8/28/06, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote:
I'm a Fedora Core 5 user with PG 8.1
On 8/28/06
It's because my trigger has to initiate some unix tools and the code for
the same is already written in Perl. So my trigger just needs to call the
Perl program that would do the needful eventually.
~Jas
On 8/28/06, Gerald Timothy G Quimpo <[EMAIL PROTECTED]> wrote:
On Mon, 2006-08-28 at 01:29 -0400, Ja