Mike Christensen wrote:
> However, if the query
> planner can't assume the data is in a certain order on disk, what's the
> point of having this at all?
One benefit is that it reduces the number of pages that must be read
from disk to retrieve all tuples that have a particular value for the
indexed column.
On Tue, Mar 3, 2009 at 11:53 AM, Sanjay Arora wrote:
> Is it possible to host postgreSQL on Amazon's cloud? What are the issues
> involved?
Runs just fine under Ubuntu. We haven't tried it under serious load though.
--
Stuart Bishop
http://www.stuartbishop.net/
Daniel Manesajian, 17.03.2009 04:33:
Hi,
I'm trying to get an advance taste of the window function feature that I
believe is supposed to be in 8.4. I'm running 8.4devel snapshot (dated
Jan-01 which seems kind of old) grabbed from the snapshot page on the
postgresql website.
When I try a si
Windows 2008 is basically Vista with some (unfortunately not enough)
of the cruft removed and some nice additions on the server side that are
totally irrelevant to PostgreSQL.
So as long as it's fine on Vista, it should be fine on 2008. I don't
recall offhand if 8.2 is or if it requires 8.3, but th
On 17 mar 2009, at 05.22, "Joshua D. Drake"
wrote:
On Mon, 2009-03-16 at 23:02 -0400, Tom Lane wrote:
"Raji Sridar (raji)" writes:
I don't mind going to 8.3 or later. I want to ensure that Postgres
is
tested on Windows 2008 and patches created like other platforms.
How do
I enable that
Start by looking here
http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
And if you search the lists you'll find whole discussions on this topic that
have been repeated over and over.
Without generalising too much, for a dedicated machine you
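As a starting point, the memory-related settings the server is currently running with can be inspected with a query along these lines (the parameter list here is only illustrative, not exhaustive):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem');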
Hi there!
If I have a database that has a size of a few GB and run it on a
machine with 48 GB of RAM, what parameters should I set to make
Postgres use all the memory that is available? For a not so very
technical person it is a little cryptic to find out.
Thanks.
On Tue, Mar 17, 2009 at 4:22 AM, Joshua D. Drake wrote:
> On Mon, 2009-03-16 at 23:02 -0400, Tom Lane wrote:
>> "Raji Sridar (raji)" writes:
>> > I don't mind going to 8.3 or later. I want to ensure that Postgres is
>> > tested on Windows 2008 and patches created like other platforms. How do
>> >
Hi all,
I have a little problem,
I would like to execute
select * from table
and I would like to retrieve the number of records without running
select count(*) from table
I could use the table directly instead of a select, and in this
case I'm searching for
something like the reltuples field i
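The field being referred to is presumably pg_class.reltuples, which holds an approximate row count that is only as fresh as the last VACUUM/ANALYZE; a query along these lines (table name hypothetical) reads it:

SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';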
Hello,
I'm currently developing a program for centralizing the vehicle fleet GPS
information -http://openggd.sourceforge.net-, written in C++.
The database should have these requirements:
- The schema for this kind of data consists of several arguments -latitude,
longitude, time, speed, etc.-, no
On Tue, Mar 17, 2009 at 12:25:08PM +0100, Juan Pereira wrote:
> I'm currently developing a program for centralizing the vehicle fleet GPS
> information -http://openggd.sourceforge.net-, written in C++.
>
> The database should have these requirements:
...
> - The database also should create a ta
Hi Juan,
First of all, congratulations on your project :)
We, at MADEIRA GPS, use Postgresql and PostGIS as the corner stone of our
fleet management solution and have tens of *millions* of records in a single
vehicles history table without any visible performance problem (we do however
clean it
Juan Pereira wrote:
> - The database also should create a table for every truck -around 100
> trucks-.
Why?
That's a rather clumsy design that makes it really hard to get aggregate
data across the fleet or do many interesting queries.
You're almost always better off using a single table with a
I am getting correct results with the following query, i.e. binary objects as
they are inserted.
select docimages.* from docimages where docimages.accobjecttype=9 ORDER BY
docimages.docimagename;
but when I created a function as follows, binary objects are not shown,
rather a number representing the lar
Raji Sridar (raji) wrote:
Hi,
I would like to know if Postgres 8.2.x is supported on Windows 2008 (32
bit). If not, what version of Postgres is supported on Windows 2008? Please
let me know.
Thanks
Raji
Hi Raji,
Windows Server 2008 is excellent (Sorry if I upset any *nix fanbois).
And has
About the string "Smith \& Jones".
According to the documentation,
INSERT INTO thing (name) VALUES ('Smith E'\\'& Jones');
must work. But it doesn't. So, double the enclosed quotes:
INSERT INTO thing (name) VALUES ('Smith E''\\''& Jones');
Doesn't.
It works fine, but with a warning, as
INS
Enrico Pirozzi wrote:
> and i would like to retrieve the number of records without make
>
> select count(*) from table
>
> I could use directly the table instead of select, and in this this
> case I'm searching for
> something like the reltuples field in the pg_class table, but I need
> this valu
>
> According to the documentation,
>
> INSERT INTO thing (name) VALUES ('Smith E'\\'& Jones');
>
> must work. But it doesn't. So, double the enclosed quotes:
>
> INSERT INTO thing (name) VALUES ('Smith E''\\''& Jones');
>
> Doesn't.
>
> It works fine, but with a warning, as
>
> INSERT INTO thing
Neanderthelle Jones wrote:
> About the string "Smith \& Jones".
>
> According to the documentation,
>
> INSERT INTO thing (name) VALUES ('Smith E'\\'& Jones');
>
> must work. But it doesn't.
I think you'll find the documentation says to use:
SELECT E'Smith \\& Jones';
Note that the
On Tue, Mar 17, 2009 at 10:35:20PM +1030, Neanderthelle Jones wrote:
> About the string "Smith \& Jones".
>
> According to the documentation,
>
> INSERT INTO thing (name) VALUES ('Smith E'\\'& Jones');
>
> must work. But it doesn't.
You're putting things in the wrong places! The "E" says that
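Presumably the intended statement, with the E prefix on the whole literal and \\ collapsing to a single backslash, is:

INSERT INTO thing (name) VALUES (E'Smith \\& Jones');

which stores the value: Smith \& Jones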
2009/3/17 Thom Brown
>
> I could be wrong, but shouldn't it be:
>
> INSERT INTO thing (name) VALUES ('Smith E'\\& Jones');
>
> I'm not sure why you're including an extra single or double-quote in the
> string.
>
> Regards
>
> Thom
>
Sorry, (damn copy & paste). I meant:
INSERT INTO thing (name)
c k writes:
> I am getting correct results with following query i.e. binary objects as
> they are inserted.
> select docimages.* from docimages where docimages.accobjecttype=9 ORDER BY
> docimages.docimagename;
> but when I created a function as follows, binary objects are not shown,
There is n
Neanderthelle Jones wrote:
About the string "Smith \& Jones".
According to the documentation,
INSERT INTO thing (name) VALUES ('Smith E'\\'& Jones');
must work. But it doesn't. So, double the enclosed quotes:
INSERT INTO thing (name) VALUES ('Smith E''\\''& Jones');
The E can't b
Hi All,
I wanted to know if there is a way to upload kml/gpx data directly to
the DB.
So far, I have been using shp2pgsql to upload data from shape files.
I don't want to convert it to shape and then upload it.
If someone knows of a way to upload without converting to shape file,
please let m
Mike Christensen wrote:
> I'm wondering what the performance advantage is to using a clustered
> index.
In Postgres you don't cluster indexes. You cluster tables. It only
means that the table is written from scratch, following the index order.
So what it gives you is locality of access for qu
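A minimal sketch of the command being described, with hypothetical table and index names (note the physical ordering is a one-off operation and is not maintained by later inserts or updates):

CLUSTER mytable USING mytable_some_idx;   -- 8.3 syntax
CLUSTER mytable_some_idx ON mytable;      -- pre-8.3 syntax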
On Tue, Mar 17, 2009 at 7:47 AM, Craig Ringer
wrote:
> Juan Pereira wrote:
>
>
>> - The database also should create a table for every truck -around 100
>> trucks-.
>
> Why?
>
> That's a rather clumsy design that makes it really hard to get aggregate
> data across the fleet or do many interesting q
Subha Ramakrishnan wrote:
Hi All,
I wanted to know if there is a way to upload kml/gpx data directly to
the DB.
By upload you mean...? You can always just insert the data opaquely as a
raw field or text, or use a large object (lo).
So far, I have been using shp2pgsql to upload data from shape file
Yes, the problem is with pgAdmin and the ODBC connection to MS Access.
The function (given in the first mail) is giving the same results, i.e. only IDs, not
the actual data, from pgAdmin and ODBC.
But for the following query pgAdmin again gives only IDs, while MS Access through
ODBC gives the actual binary data.
selec
On Tue, 2009-03-17 at 09:16 -0400, Alvaro Herrera wrote:
> > I'm quite sure SQL Server doesn't work this way and I'm not sure
> about
> > Oracle. Can someone enlighten me on the exact benefit of this?
> Thanks!!
>
> Yeah, they use a completely different definition of "clustered index"
> from o
Juan,
* Juan Pereira (juankarlos.open...@gmail.com) wrote:
> - The schema for this kind of data consists of several arguments -latitude,
> longitude, time, speed. etc-, none of them is a text field.
I would think you might want *some* text fields, for vehicle
identification, as a separate table a
* Subha Ramakrishnan (su...@gslab.com) wrote:
> So far, I have been using shp2pgsql to upload data from shape files.
> I don't want to convert it to shape and then upload it.
Have you looked at ogr2ogr? It looks to support KML as a format, and
has PostGIS support, though I'm not sure if it can ha
> If you want an accurate, up-to-date count then you'll need to use
> count(*) or have a trigger keep a summary-count for you. A simple
> implementation will reduce concurrency to writes on that table however.
Yes, I solved it with a trigger.
> Lots of discussion in the mailing-list archives on this
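For reference, a minimal sketch of such a trigger (all names hypothetical, and the counted table mytable is assumed to already exist; note that a single summary row serializes concurrent writers on that row):

CREATE TABLE mytable_count (n bigint NOT NULL);
INSERT INTO mytable_count VALUES (0);

CREATE OR REPLACE FUNCTION mytable_count_trig() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE mytable_count SET n = n + 1;
  ELSIF TG_OP = 'DELETE' THEN
    UPDATE mytable_count SET n = n - 1;
  END IF;
  RETURN NULL;   -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count_maint
  AFTER INSERT OR DELETE ON mytable
  FOR EACH ROW EXECUTE PROCEDURE mytable_count_trig();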
John R Pierce wrote:
> Stefan Kaltenbrunner wrote:
>> So in my understanding LVM is safe on disks that have write cache
>> disabled or "behave" as one (like a controller with a battery backed
>> cache).
>
> what about drive write caches on battery backed raid controllers? do
> the controllers ens
Craig Ringer wrote:
>> You're almost always better off using a single table with a composite
>> primary key like (truckid, datapointid) or whatever. If you'll be doing
>> lots of queries that focus on individual vehicles and expect performance
>> issues then you could partition the table by truck
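A rough sketch of the single-table layout being suggested, with column names taken loosely from the attributes Juan listed (all names hypothetical):

CREATE TABLE trucks (
    truck_id    serial PRIMARY KEY,
    description text
);

CREATE TABLE truck_positions (
    truck_id    integer NOT NULL REFERENCES trucks(truck_id),
    recorded_at timestamptz NOT NULL,
    latitude    double precision NOT NULL,
    longitude   double precision NOT NULL,
    speed       double precision,
    PRIMARY KEY (truck_id, recorded_at)
);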
Juan Pereira wrote:
> Craig Ringer wrote:
>
>
> >> You're almost always better off using a single table with a composite
> >> primary key like (truckid, datapointid) or whatever. If you'll be doing
> >> lots of queries that focus on individual vehicles and expect performance
> >> issues then you
Richard Huxton, 17.03.2009 13:26:
Enrico Pirozzi wrote:
and i would like to retrieve the number of records without make
select count(*) from table
I could use directly the table instead of select, and in this this
case I'm searching for
something like the reltuples field in the pg_class table,
Thomas Kellerer writes:
> Can a trigger solution really give an accurate count in a concurrent
> insert/delete scenario?
In principle yes, but AFAIK no one has really coded it up in full
detail. See the design that was hashed out in some previous
mailing-list thread, involving delta-count records.
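A very rough sketch of the delta-count idea (hypothetical names; not the full design from the thread Tom mentions): instead of updating one counter row, the insert/delete triggers append +1/-1 rows, so concurrent writers don't block each other.

CREATE TABLE mytable_count_deltas (delta bigint NOT NULL);

-- readers sum the deltas:
SELECT sum(delta) AS row_count FROM mytable_count_deltas;

-- a periodic maintenance job collapses the accumulated delta rows
-- back into a single row to keep the table small.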
Hi all, executing the following command inside pgAdmin on my Windows
Vista (please avoid comment, I pray you) :
copy anagrafica_import from 'c:\\temp\\anagraficaANIDIs.csv' WITH CSV
I get the following error:
WARNING: nonstandard use of \\ in a string literal
LINE 1: copy anagrafica_import fro
On 17/03/2009 14:45, Ivano Luberti wrote:
> Hi all, executing the following command inside pgAdmin on my Windows
> Vista (please avoid comment, I pray you) :
>
> copy anagrafica_import from 'c:\\temp\\anagraficaANIDIs.csv' WITH CSV
Try putting an 'E' in front of the path, like this:
from E'c:\\temp\\anagraficaANIDIs.csv'
Hello,
I am facing the following problem, nothing tough, I guess (hope)... I think it
should be some syntactical problem:
1. When trying to update two columns
UPDATE myFirstTable SET (fistCol, secCol) = ( SELECT anotherFistCol,
anotherSecondCol FROM mySecondTable )
I am getting: ERROR: s
Thanks but it keeps on not finding the file: the warning has disappeared
ERROR: could not open file "c:\temp\anagraficaANIDIs.csv" for reading:
No such file or directory
On Tue, Mar 17, 2009 at 8:25 AM, Juan Pereira
wrote:
> Craig Ringer wrote:
>
>
>>> You're almost always better off using a single table with a composite
>>> primary key like (truckid, datapointid) or whatever. If you'll be doing
>>> lots of queries that focus on individual vehicles and expect perf
On 17/03/2009 15:04, Ivano Luberti wrote:
> Thanks but it keeps on not finding the file: the warning has disappeared
>
>
> ERROR: could not open file "c:\temp\anagraficaANIDIs.csv" for reading:
> No such file or directory
You haven't said whether the file is on the same machine as the server -
Stephen Frost wrote:
> As mentioned, you might want to eliminate duplicate entries; no sense
> storing information that can be trivially derived.
It's pretty easy to do that with a trigger - and you can add a degree of
noise correction too, so that "wobble" in GPS position doesn't get
recorded -
On Tue, 17 Mar 2009, Sam Mason wrote:
> You're putting things in the wrong places! The "E" says that the
> following literal is using C style escaping. I.e. you want to say:
>
> E'Smith \\& Jones'
Thanks. Now I understand.
Elle.
On Tue, Mar 17, 2009 at 03:55:47PM +0100, adam.slachta wrote:
> 1. When trying to update two columns
>
> UPDATE myFirstTable SET (fistCol, secCol) = ( SELECT anotherFistCol,
> anotherSecondCol FROM mySecondTable )
The more common way of expressing this would be something like:
UPDATE myFirst
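Presumably the formulation being suggested is UPDATE ... FROM with an explicit join condition (the id join column here is hypothetical):

UPDATE myFirstTable f
   SET fistCol = s.anotherFistCol,
       secCol  = s.anotherSecondCol
  FROM mySecondTable s
 WHERE f.id = s.id;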
I'm sorry, you are right, that is the problem.
I had interpreted that as meaning the file should reside on the same machine
where pgAdmin (or another client) runs, not on the server.
Thank you again
Raymond O'Donnell ha scritto:
> On 17/03/2009 15:04, Ivano Luberti wrote:
>
>> Thanks but it keeps on not
adam.slachta wrote:
>
> 1. When trying to update two columns
>
> UPDATE myFirstTable SET (fistCol, secCol) = ( SELECT anotherFistCol,
> anotherSecondCol FROM mySecondTable )
In addition to Sam's reply, what you've got there would never work anyway
unless mySecondTable only contains one row. You'
On 17/03/2009 15:28, Ivano Luberti wrote:
> I'm sorry, you are right that is the problem
> I had interpreted that as the file should reside on the same machine
> where pgAdmin (or another client) runs , not the server.
>
> Thank you again
You're welcome! That actually cost me a half-hour or so of
Hello,
I am creating a small web app that uses Postgres. The number of hits/day
will be small, maybe a 1000 pages/day. I'm planning on running this in a
virtual machine, (Windows Server 2008 with the virtual machine hosting
Postgres running Windows Server 2003 R2 x64, or maybe XP x64). The datab
Richard Sickler wrote:
Hello,
I am creating a small web app that uses Postgres. The number of
hits/day will be small, maybe a 1000 pages/day. I'm planning on
running this in a virtual machine, (Windows Server 2008 with the
virtual machine hosting Postgres running Windows Server 2003 R2 x64,
In response to Richard Sickler :
> Hello,
>
> I am creating a small web app that uses Postgres. The number of hits/day
> will be small, maybe a 1000 pages/day. I'm planning on running this in a
> virtual machine, (Windows Server 2008 with the virtual machine hosting
> Postgres running Windows S
Richard Sickler wrote:
> Does anyone have any experience running Postgres in a virtual machine, or
> recommendations (or could point me to a place where I could read more about
> Postgres and Virtual machines)? Thanks.
Not done it on a Windows VM, but there shouldn't be any special
difficulties.
Juan,
* Juan Pereira (juankarlos.open...@gmail.com) wrote:
> The main reason why we thought of using a table per truck was because of
> concurrent load: if there are 100 trucks trying to write in the same table,
> maybe the performance is worse than having 100 tables, due to the fact that
> the table is
On Mar 17, 2009, at 4:47 AM, Craig Ringer wrote:
The question is: Which DBMS do you think is the best for this kind of
application? PostgreSQL or MySQL?
As you can imagine, PostgreSQL.
My main reasons are that in a proper transactional environment (ie
you're not using scary MyISAM tables) Pg
Merlin,
> I agree though
> that a single table approach is best unless 1) the table has to scale
> to really, really large sizes or 2) there is a lot of churn on the
> data (lots of bulk inserts and deletes).
while agreeing, an additional question: could you please quantify
"really, really large
Harald Armin Massa, 17.03.2009 15:00:
That is: what table size would you or anybody consider really, really
large actually?
I recently attended an Oracle training by Tom Kyte and he said (partially joking though) that a database is only large when the size is measured in terabytes :)
So re
On Tue, 2009-03-17 at 17:44 +0100, Thomas Kellerer wrote:
> Harald Armin Massa, 17.03.2009 15:00:
> > That is: what table size would you or anybody consider really, really
> > large actually?
>
> I recently attended an Oracle training by Tom Kyte and he said (partially
> joking though) that a da
Well, I should have been more specific and not used such a simplified
example, which only led you in the wrong direction. What I am really trying
to solve, instead of
UPDATE myFirstTable SET (fistCol, secCol) = ( SELECT anotherFistCol,
anotherSecondCol FROM mySecondTable )
is this:
x
On Tue, Mar 17, 2009 at 10:00 AM, Harald Armin Massa wrote:
> Merlin,
>
>> I agree though
>> that a single table approach is best unless 1) the table has to scale
>> to really, really large sizes or 2) there is a lot of churn on the
>> data (lots of bulk inserts and deletes).
>
> while agreeing, a
On Tue, Mar 17, 2009 at 05:44:48PM +0100, Thomas Kellerer wrote:
> So really, really large would mean something like 100 petabytes
>
> My personal opinion is that a "large" database has more than ~10 million
> rows in more than ~10 tables.
Surely anything like "large" or "small" is a relative me
Which one should one install? Is there any difference?
They are both free, right? And both have full-text support, right?
Jen
On Tue, 2009-03-17 at 18:23 +0100, Jennifer Trey wrote:
> Which one should one install? Is there any difference?
>
>
> They are both free right? And both has fulltext support, right?
>
It depends on your requirements. I always suggest using a version that
natively supports your operating syste
On Tue, Mar 17, 2009 at 05:57:01PM +0100, adam.slachta wrote:
> Do you think there might be a way how to solve this not leading into
> splitting the update into two separate update statements for each of the two
> columns (max_breach, limit_value) ?
That's quite a query; I'm guessing it's from Ora
Hi Tom,
psql --version returns 8.4devel. I would in fact like to try a newer snapshot,
but I'm not sure how to get one. The devel snapshot 'one-click-installer' link
provided at postgresql.org gives a 2009-01-01 version with no options for
anything newer or older. I was thinking my next ste
This is a question for Juan: have you asked the MySQL mailing list? What do
they say about this?
On Tue, Mar 17, 2009 at 9:05 AM, Erik Jones wrote:
>
> On Mar 17, 2009, at 4:47 AM, Craig Ringer wrote:
>
> The question is: Which DBMS do you think is the best for this kind of
>>> application? Postg
On Tue, 17 Mar 2009, Marco Colombo wrote:
If LVM/dm is lying about fsync(), all this is moot. There's no point
talking about disk caches.
I decided to run some tests to see what's going on there, and it looks
like some of my quick criticism of LVM might not actually be valid--it's
only the p
Greg Smith wrote:
> There are some known limitations to Linux fsync that I remain somewhat
> concerned about, independently of LVM, like "ext3 fsync() only does a
> journal commit when the inode has changed" (see
> http://kerneltrap.org/mailarchive/linux-kernel/2008/2/26/990504 ). The
> way files
Dear all,
Benetl, a free ETL tool for files using postgreSQL, is out in version
2.6 !!!
You can freely download it at : www.benetl.net
You can learn more about ETL tools at:
http://en.wikipedia.org/wiki/Extract,_transform,_load
Thanks for your interest.
Regards,
--
Benoît Carpentier
www.
I would only like this as a feature if the optimizer can really take
advantage of this. Clustering on every insert or update just for the
fun of it won't really give us anything but more expensive writes.
I kinda figured if SQL Server and Oracle have it, they probably take
full advantage of i
On Tue, 2009-03-17 at 15:26 -0700, Mike Christensen wrote:
> I would only like this as a feature if the optimizer can really take
> advantage of this. Clustering on every insert or update just for the
> fun of it won't really give us anything but more expensive writes.
>
> I kinda figured if SQL
On Tue, 17 Mar 2009, Ron Mayer wrote:
I wonder if there should be an optional fsync mode
in postgres should turn fsync() into
fchmod (fd, 0644); fchmod (fd, 0664);
to work around this issue.
The test I haven't had time to run yet is to turn the bug exposing program
you were fiddling with i
Greg Smith wrote:
> On Tue, 17 Mar 2009, Marco Colombo wrote:
>
>> If LVM/dm is lying about fsync(), all this is moot. There's no point
>> talking about disk caches.
>
> I decided to run some tests to see what's going on there, and it looks
> like some of my quick criticism of LVM might not actua
I set up a failover system with one primary server and one standby server.
In the standby server's log, I saw the lines like below:
Command for restore: copy "E:\archive\0001004C"
"pg_xlog\RECOVERYXLOG"
The above line means that: the transaction log file 000100
Ron Mayer wrote:
> Greg Smith wrote:
>> There are some known limitations to Linux fsync that I remain somewhat
>> concerned about, independently of LVM, like "ext3 fsync() only does a
>> journal commit when the inode has changed" (see
>> http://kerneltrap.org/mailarchive/linux-kernel/2008/2/26/9905
On Wed, 18 Mar 2009, Marco Colombo wrote:
If you fsync() after each write you want ordered, there can't be any
"subsequent I/O" (unless there are many different processes cuncurrently
writing to the file w/o synchronization).
Inside PostgreSQL, each of the database backend processes ends up w
If one wishes to use one's own database server
of PostgreSQL for commercial purposes,
does one need to purchase a commercial
license from the
"PostgreSQL Global Development Group"?
or may it be used for commercial purpose,
legally, at no cost?
On Mar 17, 2009, at 8:41 PM, Zachary Mitchell, BCIS wrote:
If one wishes to use one's own database server
of PostGreSQL for commercial purposes,
does one need to purchase a commercial
license from the
"PostgreSQL Global Development Group"?
or may it be used for commercial purpose,
legally
Zachary Mitchell, BCIS wrote:
If one wishes to use one's own database server of PostGreSQL for
commercial purposes,
does one need to purchase a commercial license from the
"PostgreSQL Global Development Group"?
or may it be used for commercial purpose, legally, at no cost?
absolutely any us
On Sat, 2009-03-14 at 15:27 -0700, Christophe wrote:
> Hi,
>
> The video is now available for download! You can find it at:
>
> http://blog.thebuild.com/sfpug/sfpug-unison-20090311.mov
>
It is also on Vimeo:
http://www.vimeo.com/3732938
Joshua D. Drake
> Thanks,
> -- Christophe
>
--
2009/3/18 John R Pierce :
>
> see http://www.postgresql.org/about/licence
>
http://www.fsf.org/licensing/licenses/index_html
and see there for a broad listing of the kinds of licenses
Greetings,
I've gotten myself in a pickle and had a postgresql (8.2) instance
fill its disk completely and shutdown itself down. I've moved the
entire data directory to a new, larger slice however postmaster never
finishes "starting". Despite configuring postgresql.conf for excessive
'verboseness'
On Mar 17, 2009, at 9:57 PM, Joshua D. Drake wrote:
It is also on Vimeo:
http://www.vimeo.com/3732938
Joshua D. Drake
Thanks!
On Tue, 17 Mar 2009, Aaron Glenn wrote:
Despite configuring postgresql.conf for excessive 'verboseness' nothing
gets outputted to syslog or the --log specified file.
You shouldn't trust those destinations for getting really unusual errors
starting the server. Change your log_destination temp
Joshua D. Drake wrote:
On Tue, 2009-03-17 at 18:23 +0100, Jennifer Trey wrote:
Which one should one install? Is there any difference?
They are both free right? And both has fulltext support, right?
It depends on your requirements. I always suggest using a version that
natively supp
Hi,
Thanks for the reply.
I did take a look at ogr2ogr, which can convert KML to shape. But I was
wondering if there's some direct way. :)
And by upload I meant adding geometry data to the DB.
Thanks & regards,
Subha
Stephen Frost wrote:
* Subha Ramakrishnan (su...@gslab.com) wrote:
So fa
On top of what the other poster said, I'm wondering if you're not
getting any kind of "postmaster not cleanly shut down, recovery
initiated" message or something like that when you first start it up. You
don't tend to see a lot of messages after that until recovery is
completed.
What does top and / or vmst