On Fri, 2009-09-18 at 14:05 +1000, Johnson, Trevor wrote:
> Using Moodle with PostgreSQL 8.4 and we get warning messages...
>
> 2009-09-18 13:48:11 EST WARNING: nonstandard use of escape in a string
> literal at character 209
>
> 2009-09-18 13:48:11 EST HINT: Use the escape string syntax for
> escapes, e.g., E'\r\n'.
On Fri, 2009-09-18 at 14:05 +1000, Johnson, Trevor wrote:
> Are these just warnings or is there something we need to fix?
They are just warnings. The application is apparently written assuming
the non-standard quoting for string literals.
> If so is it okay to turn off the warnings with escape_string_warning?
Using Moodle with PostgreSQL 8.4 and we get warning messages...
2009-09-18 13:48:11 EST WARNING: nonstandard use of escape in a string
literal at character 209
2009-09-18 13:48:11 EST HINT: Use the escape string syntax for escapes,
e.g., E'\r\n'.
"standard_conforming_strings" is set to off, i
On Thu, Sep 17, 2009 at 8:37 PM, Jonathan Harahush wrote:
> I do have PostGIS installed and I use it for other things (geoserver), but
> I'm not knowledgeable enough about it to the point where I understand how to
> get it to work with the Google Maps API. I'll look into it. In the
> meantime, I was hoping to create something based off of the GMaps/PHP/MySQL
> example.
I do have PostGIS installed and I use it for other things (geoserver), but
I'm not knowledgeable enough about it to the point where I understand how to
get it to work with the Google Maps API. I'll look into it. In the
meantime, I was hoping to create something based off of the GMaps/PHP/MySQL
example.
On Thu, Sep 17, 2009 at 5:53 PM, John R Pierce wrote:
> afaik, postgresql doesn't 'certify' anything, and certainly most of us on
> this email list do not speak for postgresql.org, we're mostly just users.
However, many of us on the list ARE certifiable. But that's a different story.
A bit out in left field,
Writing your own haversine in Postgres seems a bit like reinventing a wooden
wheel when you can get a free pneumatic one...
Any reason not to just install PostGIS & fully support geometries & projections
in Postgres?
You can build the geometries provided to the function
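As a rough sketch of the PostGIS route, assuming a facilities table with a
geometry column named geom in SRID 4326 (all names here are illustrative,
not from the thread):

-- All facilities within 5 miles (about 8046.7 m) of a point, letting
-- PostGIS do the spherical distance math instead of a hand-rolled haversine:
SELECT id, facility
FROM facilities
WHERE ST_Distance_Sphere(geom,
        ST_SetSRID(ST_MakePoint(-122.35, 47.62), 4326)) < 8046.7;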
Marco Fortina wrote:
Hello there!
A customer of our company would like to create a 2-node cluster
connected to an external NAS. I would like to know if
PostgreSQL supports keeping its data files on this kind of storage and if this
solution is certified.
active/standby type cluster, where
Mike Christensen writes:
> This behavior kinda gets me sometimes too, especially in WHERE clauses..
> I'm a bit curious as to why this is so bad. I could see why it would
> be expensive to do, since your clause wouldn't be indexed - but why is
> the syntax itself not allowed?
It's not logically
It's the whole query as far as I can tell. The app takes input from
the user --- the user enters an address and chooses a radius ("show me
all facilities within 5 miles of this address") and then the latitude
and longitude of the address and the radius is passed into the query
so that the database
This behavior kinda gets me sometimes too, especially in WHERE clauses..
I'm a bit curious as to why this is so bad. I could see why it would
be expensive to do, since your clause wouldn't be indexed - but why is
the syntax itself not allowed? Repeating the clause isn't gonna gain
you any speed,
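The usual workaround is to compute the expression once in a subquery and
filter on the alias one level up. A minimal sketch (table and columns are
invented for illustration):

-- Fails: output-list aliases are not visible in WHERE.
-- SELECT price * qty AS total FROM orders WHERE total > 100;

-- Works: the alias exists by the time the outer WHERE runs.
SELECT *
FROM (SELECT order_id, price * qty AS total FROM orders) AS t
WHERE t.total > 100;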
> An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
> your common queries would help a lot here.
Yes, we are just about to start getting into that sort of thing.
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of "In Defense of Food"
i was able to get it sorted out with the latest release using these initdb and pg_ctl
commands as follows
initdb --pgdata=/pgsql/data
#postgresql.conf contents consist of
# this is a comment
#what port to run on
port = 5432
#hostname or address on which the postmaster is to listen for connections
Jonathan writes:
> Here is my PHP with SQL:
> $query = sprintf("SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
> longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
> ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
> ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS
Alban Hertroys writes:
> I'm seeing something strange with the row-estimates on an empty table.
It's intentional that we don't assume an empty table is empty.
Otherwise you could get some spectacularly awful plans if you
create a table, fill it, and immediately query it.
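A quick way to see this behaviour (illustrative):

CREATE TABLE t (id int);
EXPLAIN SELECT * FROM t;
-- Even though t is empty and unanalyzed, the plan shows a default guess
-- along the lines of: Seq Scan on t (cost=0.00..35.50 rows=2550 width=4)
-- rather than rows=0, so a freshly loaded table doesn't start out with
-- catastrophically bad plans.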
Martin Gainty writes:
> i flipped to my regular account and re-created the db with initdb -D newFolder
> i have noticed that initdb basically deletes everything ..and the reason for
> doing that would be?
Oh? It should refuse to do anything if the target directory is not
empty; and it does act that way.
Hi!
I am looking at the PHP/MySQL Google Maps API store locator example
here:
http://code.google.com/apis/maps/articles/phpsqlsearch.html
And I'm trying to get this to work with PostgreSQL instead of MySQL.
I've (slightly) modified the haversine formula part of my PHP script
but I keep getting
yes
i flipped to my regular account and re-created the db with initdb -D newFolder
i have noticed that initdb basically deletes everything ..and the reason for
doing that would be?
initdb -D dataDir --noclean
allows initdb to retain the data folder and postgresql.conf configuration file
contents
Allan Kamau wrote:
> Hi,
> I do have a query which makes use of the results of an aggregate
> function (for example bit_or) several times in the output column list
> of the SELECT clause; does PostgreSQL simply execute the aggregate
> function only once and provide the output to the other calls to the
> function?
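Whatever the planner does internally, you can make the single evaluation
explicit with a subquery, for example (table and column invented for
illustration):

-- Instead of writing bit_or(flags) in several output expressions:
SELECT agg.b, agg.b & 1 AS low_bit, agg.b >> 1 AS high_bits
FROM (SELECT bit_or(flags) AS b FROM items) AS agg;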
Hello,
2009/9/17 Arnold, Sandra :
> We are in the process of migrating from Oracle to PostgreSQL. One of the
> things that we are needing to find out is what to use in place of Oracle
> supplied functionality such as "DBMS_OUTPUT" and "UTL_FILE". We are
> currently using this type of functionality
On Thu, Sep 17, 2009 at 8:06 AM, Marco Fortina wrote:
>
> Hello there!
>
> A customer of our company would like to create a 2-node cluster connected
> to an external NAS. I would like to know if PostgreSQL supports keeping its
> data files on this kind of storage and if this solution is certified.
Hello there!
A customer of our company would like to create a 2-node cluster connected to
an external NAS. I would like to know if PostgreSQL supports keeping its
data files on this kind of storage and if this solution is certified.
Thanks
Marco Fortina
Senior Consultant
Mobile:+39 348 524673
On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe wrote:
> True, but with a work_mem of 2M, I can't imagine having enough sorting
> going on to need 4G of ram. (2000 sorts? That's a lot) I'm betting
> the OP was looking at top and misunderstanding what the numbers mean,
> which is pretty common rea
On Thu, Sep 17, 2009 at 1:53 PM, Arnold, Sandra wrote:
> We are in the process of migrating from Oracle to PostgreSQL. One of the
> things that we are needing to find out is what to use in place of Oracle
> supplied functionality such as "DBMS_OUTPUT" and "UTL_FILE". We are
> currently using this type of functionality
On Thu, Sep 17, 2009 at 1:16 PM, Jonathan wrote:
> Hi!
>
> I am looking at the PHP/MySQL Google Maps API store locator example
> here:
>
> http://code.google.com/apis/maps/articles/phpsqlsearch.html
>
> And I'm trying to get this to work with PostgreSQL instead of MySQL.
>
> I've (slightly) modified
> I'm gonna make a SWAG that you've got 4 to 4.5G shared buffers, and if
> you subtract that from DRS you'll find it's using a few hundred to
> several hundred megs. Still a lot, but not in the 4G range you're
> expecting. What does top say about this?
I've just added this to my cronjob with "top -
DBMS_OUTPUT is used to either display output or write output to a file.
UTL_FILE is used to open a file and then write data to a file. Most of the
time we use these two packages to create log files from PL/SQL stored
procedures/packages.
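In PL/pgSQL the closest analogue to DBMS_OUTPUT is RAISE; a minimal sketch
(function name and message are invented):

CREATE OR REPLACE FUNCTION log_demo() RETURNS void AS $$
BEGIN
    -- Sent to the client and/or the server log, depending on
    -- client_min_messages and log_min_messages:
    RAISE NOTICE 'processed % rows at %', 42, now();
END;
$$ LANGUAGE plpgsql;

Writing to arbitrary files (the UTL_FILE side) has no trusted built-in
equivalent; it generally means an untrusted procedural language such as
PL/PerlU, or relying on the server log itself.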
On Thu, Sep 17, 2009 at 03:56:09PM -0400, Alan McKay wrote:
> Our databases are pretty big, and our queries pretty complex.
How big is "big" and how complex is "complex"?
An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
your common queries would help a lot here.
--
Sam
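For example, something like this (table and filter invented; remember that
EXPLAIN ANALYSE actually executes the statement):

EXPLAIN ANALYSE
SELECT *
FROM orders
WHERE created_at > now() - interval '1 day';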
On Thu, Sep 17, 2009 at 1:56 PM, Alan McKay wrote:
> On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe
> wrote:
>> True, but with a work_mem of 2M, I can't imagine having enough sorting
>> going on to need 4G of ram. (2000 sorts? That's a lot) I'm betting
>> the OP was looking at top and misunderstanding what the numbers mean,
On Thu, Sep 17, 2009 at 03:53:36PM -0400, Arnold, Sandra wrote:
> We are in the process of migrating from Oracle to PostgreSQL. One of
> the things that we are needing to find out is what to use in place of
> Oracle supplied functionality such as "DBMS_OUTPUT" and "UTL_FILE".
For those of us who
Martin Gainty wrote:
> thanks for the prompt response
> i took the stack of bad calls and placed them in a .sh and they all ran
> flawlessly..
> there is a delta in there somewhere
>
> I have a followup (if i may)
> I am able to get past the initdb but when i run the postgres on the data
> folder I get no postgresql.conf found
We are in the process of migrating from Oracle to PostgreSQL. One of the
things that we are needing to find out is what to use in place of Oracle
supplied functionality such as "DBMS_OUTPUT" and "UTL_FILE". We are currently
using this type of functionality in Stored Procedures and packages.
On Thu, Sep 17, 2009 at 1:35 PM, Scott Marlowe wrote:
> On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran wrote:
>> In response to Scott Marlowe :
>>
>>> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay wrote:
>>> > Is there any way to limit a query to a certain amount of RAM and / or
>>> > certain runtime?
On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran wrote:
> In response to Scott Marlowe :
>
>> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay wrote:
>> > Is there any way to limit a query to a certain amount of RAM and / or
>> > certain runtime?
>> >
>> > i.e. automatically kill it if it exceeds either boundary?
In response to Scott Marlowe :
> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay wrote:
> > Is there any way to limit a query to a certain amount of RAM and / or
> > certain runtime?
> >
> > i.e. automatically kill it if it exceeds either boundary?
> >
> > We've finally narrowed down our system crashes and have a smoking gun,
thanks for the prompt response
i took the stack of bad calls and placed them in a .sh and they all ran
flawlessly..
there is a delta in there somewhere
I have a followup (if i may)
I am able to get past the initdb but when i run the postgres on the data
folder I get no postgresql.conf found
On Thu, Sep 17, 2009 at 1:19 PM, Alan McKay wrote:
>> Generally speaking work_mem limits ram used. What are your
>> non-default postgresql.conf settings?
>
> This cannot be right because we had queries taking 4G and see our
> setting is such :
Are you sure they were using that much memory?
> Generally speaking work_mem limits ram used. What are your
> non-default postgresql.conf settings?
This cannot be right because we had queries taking 4G and see our
setting is such :
work_mem = 2MB # min 64kB
I'll have to find a copy of the default file to figure out
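Rather than hunting down a pristine copy of postgresql.conf, the server can
report its own non-default settings:

SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override');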
On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay wrote:
> Is there any way to limit a query to a certain amount of RAM and / or
> certain runtime?
>
> i.e. automatically kill it if it exceeds either boundary?
>
> We've finally narrowed down our system crashes and have a smoking gun,
> but no way to fix it in the immediate term.
Is there any way to limit a query to a certain amount of RAM and / or
certain runtime?
i.e. automatically kill it if it exceeds either boundary?
We've finally narrowed down our system crashes and have a smoking gun,
but no way to fix it in the immediate term. This sort of limit would
really help
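There is no per-query RAM cap, but runtime can be bounded with
statement_timeout; a sketch (the role name is invented):

-- Abort any statement in this session that runs longer than 5 minutes:
SET statement_timeout = '5min';

-- Or attach it to the application's role so it survives reconnects:
ALTER ROLE app_user SET statement_timeout = '5min';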
On Thu, Sep 17, 2009 at 08:34:20PM +0200, hubert depesz lubaczewski wrote:
> On Thu, Sep 17, 2009 at 01:22:52PM -0500, Peter Hunsberger wrote:
> > You can't have a foreign key that doesn't have relational integrity,
> > it is no longer a foreign key.
>
> you do realize that having foreign key defi
Martin Gainty wrote:
> experiencing weird 14001 errors. No logs, no event log, nothing to go by, so i
> completely rebuilt the cygwin postgres
>
> ran cygwin
> then i ran the initdb as follows
>
> $ ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
> The files belonging to this database system will be owned by user "postgres".
On Thu, Sep 17, 2009 at 01:22:52PM -0500, Peter Hunsberger wrote:
> You can't have a foreign key that doesn't have relational integrity,
> it is no longer a foreign key. If you don't want the delay then don't
> define the key, at least until some point at which you can take the
> delay. If there
Hello all,
I'm seeing something strange with the row-estimates on an empty table.
The table in question is merely a template-table that specialised
tables inherit from; it will never contain any data. Nevertheless,
after importing my creation script and vacuum analyse the result I see
is
On Thu, 2009-09-17 at 12:15 -0600, Scott Marlowe wrote:
> On Thu, Sep 17, 2009 at 12:12 PM, Joshua D. Drake
> wrote:
> > On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
> >> On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake
> >> wrote:
> >> > On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
On Thu, Sep 17, 2009 at 12:44 PM, hubert depesz lubaczewski
wrote:
> On Thu, Sep 17, 2009 at 12:31:14PM -0500, Peter Hunsberger wrote:
>> On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
>> wrote:
>> >
>> > So, since (as we know) foreign keys are not fault-proof, wouldn't it be
>> > good to provide a way to create them without all this time-consuming check?
On Thu, Sep 17, 2009 at 12:12 PM, Joshua D. Drake
wrote:
> On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
>> On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake
>> wrote:
>> > On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
>> >> I'm trying to do a parallel restore with pg_restore -j but I'm only
On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
> On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake
> wrote:
> > On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
> >> I'm trying to do a parallel restore with pg_restore -j but I'm only
> >> seeing one CPU being used really. The file is custom format, but was
On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake
wrote:
> On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
>> I'm trying to do a parallel restore with pg_restore -j but I'm only
>> seeing one CPU being used really. The file is custom format, but was
>> made by pg_dump for pgsql 8.3. Is that a problem? Do I need a backup
On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
> I'm trying to do a parallel restore with pg_restore -j but I'm only
> seeing one CPU being used really. The file is custom format, but was
> made by pg_dump for pgsql 8.3. Is that a problem? Do I need a backup
> made with 8.4 to run parallel restore?
I'm trying to do a parallel restore with pg_restore -j but I'm only
seeing one CPU being used really. The file is custom format, but was
made by pg_dump for pgsql 8.3. Is that a problem? Do I need a backup
made with 8.4 to run parallel restore?
On Thu, Sep 17, 2009 at 12:31:14PM -0500, Peter Hunsberger wrote:
> On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
> wrote:
> >
> > So, since (as we know) foreign keys are not fault-proof, wouldn't it be
> > good to provide a way to create them without all this time-consuming
> > check?
On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
wrote:
>
> So, since (as we know) foreign keys are not fault-proof, wouldn't it be
> good to provide a way to create them without all this time-consuming
> check?
No.
If you don't want the behavior of a foreign key then just don't define one.
experiencing weird 14001 errors. No logs, no event log, nothing to go by, so i
completely rebuilt the cygwin postgres
ran cygwin
then i ran the initdb as follows
$ ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
In response to Neil Saunders :
> Hi all,
>
> I maintain an online property rental application. The main focus of the UI
> is the search engine, which I'd now like to improve by allowing filtering of
> the search results shown on some criteria, but provide a count of the number
> of properties that meet that criteria.
Hi,
would it be possible to add a way to create a foreign key without checking
prior data?
Before you will say it's a bad idea, because then you might get invalid
data - i know. You can get invalid data in a column that is checked by a
foreign key even now - by temporarily disabling triggers and/or
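(Nothing like this existed in 8.4 at the time of this thread; later
releases, 9.1 and up, added exactly this as NOT VALID. A sketch of that
later syntax, with invented table names:)

-- Add the constraint without scanning existing rows:
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id)
    NOT VALID;

-- Check the old data later, at a time of your choosing:
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;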
Neil Saunders wrote:
Hi all,
I maintain an online property rental application. The main focus of the
UI is the search engine, which I'd now like to improve by allowing
filtering of the search results shown on some criteria, but provide a
count of the number of properties that meet that criteria.
Nathaniel writes:
> When using PQputCopyData and PQgetCopyData to send and receive binary data
> from postgres, would you include/expect headers and trailers (as well as the
> tuples themselves) as you would in a binary file named 'file_name' if you
> were executing the SQL "COPY BINARY table_name FROM/TO 'file_name'"?
Hi all,
I maintain an online property rental application. The main focus of the UI
is the search engine, which I'd now like to improve by allowing filtering of
the search results shown on some criteria, but provide a count of the number
of properties that meet that criteria.
For example, we're lo
Hello,
When using PQputCopyData and PQgetCopyData to send and receive binary data from
postgres, would you include/expect headers and trailers (as well as the tuples
themselves) as you would in a binary file named 'file_name' if you were
executing the SQL "COPY BINARY table_name FROM/TO 'file_name'"?
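For context, the binary COPY format is self-describing wherever it is
produced; for example (file path and table name invented):

COPY table_name TO '/tmp/table.bin' WITH BINARY;

-- The output starts with the 11-byte signature header PGCOPY\n\377\r\n\0
-- (plus a flags word and a header-extension length), then one
-- field-count-prefixed tuple after another, and ends with a -1
-- field-count trailer. The same byte stream is what PQputCopyData sends
-- and PQgetCopyData returns during COPY FROM/TO STDIN/STDOUT.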
On Thu, Sep 17, 2009 at 11:23:12AM -0400, Mark Styles wrote:
> On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
> > I can't find a way to do this purely with SQL. Any help would be
> > appreciated.
> >
> > Table 1: urls
> >
> > id | url
> > --
> > 1 | alfa
> > 2 | bravo
On Thu, Sep 17, 2009 at 9:55 AM, Ian Harding wrote:
> I have never had this particular problem in PostgreSQL, it seems to
> "just know" when queries can be "flattened" and indexes used. I know
> that takes tons of work. Thank you for that.
>
> Here's the Oracle question.
>
> http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates
I have never had this particular problem in PostgreSQL, it seems to
"just know" when queries can be "flattened" and indexes used. I know
that takes tons of work. Thank you for that.
Here's the Oracle question.
http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates
On Thu, Sep 17, 2009 at 04:20:57PM +0100, Sam Mason wrote:
> On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
> > I want all the records of the
> > url table, one row for each record, plus the userid field that goes with
> > it, for a specified user (paulf), with NULLs as needed
>
>
On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
> I can't find a way to do this purely with SQL. Any help would be
> appreciated.
>
> Table 1: urls
>
> id | url
> --
> 1 | alfa
> 2 | bravo
> 3 | charlie
> 4 | delta
>
> Table 2: access
>
> userid | url_id
> ---
On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
> I want all the records of the
> url table, one row for each record, plus the userid field that goes with
> it, for a specified user (paulf), with NULLs as needed
Maybe something like this?
SELECT a.userid, u.url
  FROM urls u
  LEFT JOIN access a ON a.url_id = u.id AND a.userid = 'paulf';
Hi,
I'd look into outer joins
http://www.postgresql.org/docs/8.1/static/tutorial-join.html
> I can do *part* of this with various JOINs, but the moment I specify
> userid = 'paulf', I don't get the rows with NULLs
If you want all fields from one table and only those matching from another
use an outer join.
Folks:
I can't find a way to do this purely with SQL. Any help would be
appreciated.
Table 1: urls
id | url
---+---------
 1 | alfa
 2 | bravo
 3 | charlie
 4 | delta
Table 2: access
userid | url_id
-------+-------
paulf  | 1
paulf  | 2
nancyf | 2
nancyf | 3
The access table is related to the urls table
> 2009/9/15 el dorado :
> > Hello!
> > I need PG 8.4 built from source code for WinXP. So I got archive
> > postgresql-8.4.1.tar.gz, unpacked it and built postgres by MinGW.
> > Everything seems to be fine until we tried to test pg_dump. It failed (not
> > always but often).
> > Command:
> > pg_
My standard encoding is UTF-8 on all levels so I don't need this
high-cost call:
plpy.execute("select setting from pg_settings where name =
'server_encoding'");
Additionally I want to get the original cases.
For this purpose my solution still fits my needs. But it is not
the one you
On Thu, Sep 17, 2009 at 12:01:57AM -0400, Alvaro Herrera wrote:
> http://wiki.postgresql.org/wiki/Strip_accents_from_strings
I'm still confused as to why plpython doesn't know the server's encoding
already; seems as though all text operations are predicated on knowing
this and hence all but the mo
Ignore me - I missed the previous thread with the same question.
--
Richard Huxton
Archonet Ltd
sulmansarwar wrote:
> Hi,
>
> I am new to PostgreSQL. I have been trying to restore a compressed(.gz)
> database using
>
> gunzip -c filename.gz | psql dbname
>
> After some 8 or 9 tables are restored the program exits giving error:
> Segmentation Fault.
> Exception Type: EXC_BAD_ACCESS (SIGSEGV)