On Tue, Apr 21, 2009 at 9:19 PM, Steve Singer wrote:
> On Tue, 21 Apr 2009, David Fetter wrote:
>
>> On Tue, Apr 21, 2009 at 08:15:00PM +0100, Peter Childs wrote:
>>>
>>> Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
>>
>> Can we get somebody from OSM to talk about this on the record?
On Tue, 21 Apr 2009, David Fetter wrote:
> On Tue, Apr 21, 2009 at 08:15:00PM +0100, Peter Childs wrote:
>> Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
> Can we get somebody from OSM to talk about this on the record?
I've forwarded this request to the OSM talk list.
On Tue, Apr 21, 2009 at 08:15:00PM +0100, Peter Childs wrote:
> Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
Can we get somebody from OSM to talk about this on the record?
Cheers,
David.
--
David Fetter http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: df
Scott Marlowe wrote:
> On Tue, Apr 21, 2009 at 1:15 PM, Peter Childs wrote:
> > Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
>
> On the news blog page it mentioned switching to MonetDB. I saw
> nothing about pgsql there. Do they store it in pgsql for manipulation
> then export to MonetDB?
* Scott Marlowe (scott.marl...@gmail.com) wrote:
> On Tue, Apr 21, 2009 at 1:15 PM, Peter Childs wrote:
> > Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
>
> On the news blog page it mentioned switching to MonetDB. I saw
> nothing about pgsql there. Do they store it in pgsql for manipulation
> then export to MonetDB?
On Tue, Apr 21, 2009 at 1:15 PM, Peter Childs wrote:
> Hmm Interestingly OSM have just switched from MySQL to PostgreSQL.
On the news blog page it mentioned switching to MonetDB. I saw
nothing about pgsql there. Do they store it in pgsql for manipulation
then export to MonetDB?
2009/3/19 Shane Ambler :
> Thomas Kellerer wrote:
>>
>> Harald Armin Massa, 17.03.2009 15:00:
>>>
>>> That is: what table size would you or anybody consider really, really
>>> large actually?
>>
>> I recently attended an Oracle training by Tom Kyte and he said (partially
>> joking though) that a database is only large when the size is measured in
>> terabytes :)
On Monday 23. March 2009, Juan Pereira wrote:
>On March 20, I asked for help in the Newbie MySQL forum, got no
> answers.
>
>Then the forum administrator moved the post to the PostgreSQL MySQL
> forum -a forum that deals with PostgreSQL migration issues-, and
> again no answers.
This kind of suppo
On March 20, I asked for help in the Newbie MySQL forum, got no answers.
Then the forum administrator moved the post to the PostgreSQL MySQL forum -a
forum that deals with PostgreSQL migration issues-, and again no answers.
http://forums.mysql.com/read.php?83,253709,253709#msg-253709
Regards
J
You would get better results if you posted in mysql forums.
http://forums.mysql.com/
Amitabh
Hello
It isn't a fair comparison: MySQL people mainly use the web forums.
regards
Pavel Stehule
2009/3/20 Juan Pereira :
> John Cheng wrote:
>
>>> This is a question for Juan: have you asked the MySQL mailing list?
>
> I'm afraid MySQL general list isn't as dynamic as PostgreSQL general list.
>
> http://lists.mysql.com/mysql/216795
John Cheng wrote:
>> This is a question for Juan: have you asked the MySQL mailing list?
I'm afraid MySQL general list isn't as dynamic as PostgreSQL general list.
http://lists.mysql.com/mysql/216795
MySQL general list: 4 answers in about 48 hours
PostgreSQL general list: 27 answers in about 72 hours
Just to add to this list, I have been using Postgresql to store data
for multiple GPS applications handling more than 150-200 vehicles.
Some of the tables that I have run to 20-25 million rows at the most,
and around 10 million rows on average. I have yet to see a problem from
the database side.
On Thu, Mar 19, 2009 at 11:50 AM, Scott Marlowe wrote:
> On Tue, Mar 17, 2009 at 5:25 AM, Juan Pereira
> wrote:
>> Hello,
>>
>> The question is: Which DBMS do you think is the best for this kind of
>> application? PostgreSQL or MySQL?
>
> Another advantage pgsql has is that many ddl operations on tables do
> NOT require exclusive locks on those tables.
On Tue, Mar 17, 2009 at 5:25 AM, Juan Pereira
wrote:
> Hello,
>
> The question is: Which DBMS do you think is the best for this kind of
> application? PostgreSQL or MySQL?
Another advantage pgsql has is that many ddl operations on tables do
NOT require exclusive locks on those tables. Creating indexes, for
example, can be done concurrently without locking out writes.
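
A minimal sketch of that index case, purely for illustration (the table
and index names are invented, not from Juan's schema):

-- A plain CREATE INDEX blocks writes to the table for the duration of
-- the build; the CONCURRENTLY variant builds the index while normal
-- INSERT/UPDATE/DELETE traffic continues (it takes longer and cannot
-- be run inside a transaction block).
CREATE INDEX CONCURRENTLY idx_positions_truck_time
    ON truck_positions (truck_id, recorded_at);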
Thomas Kellerer wrote:
> Harald Armin Massa, 17.03.2009 15:00:
>> That is: what table size would you or anybody consider really,
>> really large actually?
> I recently attended an Oracle training by Tom Kyte and he said
> (partially joking though) that a database is only large when the size
> is measured in terabytes :)
juankarlos.open...@gmail.com (Juan Pereira) writes:
> Quite interesting! The main reason why we thought using a table per
> truck was because concurrent load: if there are 100 trucks trying to
> write in the same table, maybe the performance is worse than having
> 100 tables, due to the fact that t
Merlin Moncure writes:
> A good rule of thumb for large is table size > working ram. Huge
> (really large) is 10x ram.
Or better yet, large is data > working ram. Very large is data > directly
attached drives... That means that without fairly expensive hardware you
start talking about "very large".
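
If you want to put numbers on that rule of thumb, the built-in size
functions make the comparison against RAM straightforward (a generic
sketch, nothing specific to this thread):

-- Total on-disk size of the current database, and the ten biggest
-- tables including their indexes and TOAST data.
SELECT pg_size_pretty(pg_database_size(current_database()));

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_total_relation_size(oid) DESC
 LIMIT 10;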
John Cheng wrote:
>> This is a question for Juan: have you asked the MySQL mailing list?
Not yet. Admitting my ignorance in databases, I'm trying to understand all
the concepts discussed in this thread.
I will be sure to ask the MySQL list today.
Thanks
2009/3/17 John Cheng
> This is a question for Juan: have you asked the MySQL mailing list?
At 12:05 AM 3/18/2009, Erik Jones wrote:
On Mar 17, 2009, at 4:47 AM, Craig Ringer wrote:
The question is: Which DBMS do you think is the best for this kind of
application? PostgreSQL or MySQL?
As you can imagine, PostgreSQL.
My main reasons are that in a proper transactional environment (i
At 10:00 PM 3/17/2009, Harald Armin Massa wrote:
Merlin,
> I agree though
> that a single table approach is best unless 1) the table has to scale
> to really, really large sizes or 2) there is a lot of churn on the
> data (lots of bulk inserts and deletes).
while agreeing, an additional question: could you please pronounce
"really, really large"? That is: what table size would you or anybody
consider really, really large actually?
This is a question for Juan: have you asked the MySQL mailing list? What do
they say about this?
On Tue, Mar 17, 2009 at 9:05 AM, Erik Jones wrote:
>
> On Mar 17, 2009, at 4:47 AM, Craig Ringer wrote:
>
> The question is: Which DBMS do you think is the best for this kind of
> application? PostgreSQL or MySQL?
On Tue, Mar 17, 2009 at 05:44:48PM +0100, Thomas Kellerer wrote:
> So really, really large would mean something like 100 petabytes
>
> My personal opinion is that a "large" database has more than ~10 million
> rows in more than ~10 tables.
Surely anything like "large" or "small" is a relative measure.
On Tue, Mar 17, 2009 at 10:00 AM, Harald Armin Massa wrote:
> Merlin,
>
>> I agree though
>> that a single table approach is best unless 1) the table has to scale
>> to really, really large sizes or 2) there is a lot of churn on the
>> data (lots of bulk inserts and deletes).
>
> while agreeing, an additional question: could you please pronounce
> "really, really large"?
On Tue, 2009-03-17 at 17:44 +0100, Thomas Kellerer wrote:
> Harald Armin Massa, 17.03.2009 15:00:
> > That is: what table size would you or anybody consider really, really
> > large actually?
>
> I recently attended an Oracle training by Tom Kyte and he said (partially
> joking though) that a database is only large when the size is measured in
> terabytes :)
Harald Armin Massa, 17.03.2009 15:00:
> That is: what table size would you or anybody consider really, really
> large actually?
I recently attended an Oracle training by Tom Kyte and he said (partially
joking though) that a database is only large when the size is measured in
terabytes :)
So really, really large would mean something like 100 petabytes.
My personal opinion is that a "large" database has more than ~10 million
rows in more than ~10 tables.
Merlin,
> I agree though
> that a single table approach is best unless 1) the table has to scale
> to really, really large sizes or 2) there is a lot of churn on the
> data (lots of bulk inserts and deletes).
while agreeing, an additional question: could you please pronounce
"really, really large"? That is: what table size would you or anybody
consider really, really large actually?
On Mar 17, 2009, at 4:47 AM, Craig Ringer wrote:
The question is: Which DBMS do you think is the best for this kind of
application? PostgreSQL or MySQL?
As you can imagine, PostgreSQL.
My main reasons are that in a proper transactional environment (ie
you're not using scary MyISAM tables) Pg
Juan,
* Juan Pereira (juankarlos.open...@gmail.com) wrote:
> The main reason why we thought using a table per truck was because
> concurrent load: if there are 100 trucks trying to write in the same table,
> maybe the performance is worse than having 100 tables, due to the fact that
> the table is
Stephen Frost wrote:
> As mentioned, you might want to eliminate duplicate entries; no sense
> storing information that can be trivially derived.
It's pretty easy to do that with a trigger - and you can add a degree of
noise correction too, so that "wobble" in GPS position doesn't get
recorded.
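
To make that concrete, here is a rough sketch of such a trigger. All
table, column and threshold names are hypothetical, and the distance
test is deliberately crude (a plain lat/long delta rather than a proper
geodesic or PostGIS distance):

-- Assumes a table like:
--   truck_positions(truck_id int, recorded_at timestamptz,
--                   latitude double precision, longitude double precision, ...)
CREATE OR REPLACE FUNCTION suppress_gps_wobble() RETURNS trigger AS $$
DECLARE
    last_pos truck_positions%ROWTYPE;
BEGIN
    -- Fetch the most recent fix recorded for this truck.
    SELECT * INTO last_pos
      FROM truck_positions
     WHERE truck_id = NEW.truck_id
     ORDER BY recorded_at DESC
     LIMIT 1;

    -- If the new fix is within ~0.0001 degrees (roughly ten metres) of
    -- the previous one, drop it: returning NULL from a BEFORE trigger
    -- silently skips the INSERT.
    IF FOUND
       AND abs(NEW.latitude  - last_pos.latitude)  < 0.0001
       AND abs(NEW.longitude - last_pos.longitude) < 0.0001 THEN
        RETURN NULL;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_suppress_wobble
    BEFORE INSERT ON truck_positions
    FOR EACH ROW EXECUTE PROCEDURE suppress_gps_wobble();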
On Tue, Mar 17, 2009 at 8:25 AM, Juan Pereira
wrote:
> Craig Ringer wrote:
>
>
>>> You're almost always better off using a single table with a composite
>>> primary key like (truckid, datapointid) or whatever. If you'll be doing
>>> lots of queries that focus on individual vehicles and expect performance
>>> issues then you could partition the table by truck.
Juan Pereira wrote:
> Craig Ringer wrote:
>
>
> >> You're almost always better off using a single table with a composite
> >> primary key like (truckid, datapointid) or whatever. If you'll be doing
> >> lots of queries that focus on individual vehicles and expect performance
> >> issues then you could partition the table by truck.
Craig Ringer wrote:
>> You're almost always better off using a single table with a composite
>> primary key like (truckid, datapointid) or whatever. If you'll be doing
>> lots of queries that focus on individual vehicles and expect performance
>> issues then you could partition the table by truck.
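
For what it's worth, a minimal sketch of that single-table layout. The
column names and types are guesses at the attributes Juan listed, not
anything from his actual schema:

-- One table for the whole fleet, with a composite primary key instead
-- of a table per truck.
CREATE TABLE truck_positions (
    truck_id     integer          NOT NULL,
    recorded_at  timestamptz      NOT NULL,
    latitude     double precision NOT NULL,
    longitude    double precision NOT NULL,
    speed        real,
    PRIMARY KEY (truck_id, recorded_at)
);

-- Per-vehicle queries are served by the leading column of the PK, e.g.:
--   SELECT * FROM truck_positions
--    WHERE truck_id = 42
--    ORDER BY recorded_at DESC LIMIT 100;

Partitioning by truck can be layered on later without changing the
queries (inheritance/constraint-exclusion partitioning on the releases
current at the time, declarative partitioning on recent versions), if
per-vehicle volume ever demands it.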
Juan,
* Juan Pereira (juankarlos.open...@gmail.com) wrote:
> - The schema for this kind of data consists of several arguments -latitude,
> longitude, time, speed. etc-, none of them is a text field.
I would think you might want *some* text fields, for vehicle
identification, as a separate table.
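
Something along these lines, say (again, purely illustrative names):

-- Descriptive text lives in a small lookup table, keeping the
-- high-volume position table narrow and purely numeric.
CREATE TABLE trucks (
    truck_id     serial PRIMARY KEY,
    registration text NOT NULL UNIQUE,   -- e.g. licence plate
    description  text
);

-- The position table then carries only the integer key, e.g.:
--   ALTER TABLE truck_positions
--     ADD FOREIGN KEY (truck_id) REFERENCES trucks (truck_id);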
On Tue, Mar 17, 2009 at 7:47 AM, Craig Ringer
wrote:
> Juan Pereira wrote:
>
>
>> - The database also should create a table for every truck -around 100
>> trucks-.
>
> Why?
>
> That's a rather clumsy design that makes it really hard to get aggregate
> data across the fleet or do many interesting queries.
Juan Pereira wrote:
> - The database also should create a table for every truck -around 100
> trucks-.
Why?
That's a rather clumsy design that makes it really hard to get aggregate
data across the fleet or do many interesting queries.
You're almost always better off using a single table with a composite
primary key like (truckid, datapointid) or whatever. If you'll be doing
lots of queries that focus on individual vehicles and expect performance
issues then you could partition the table by truck.
Hi Juan,
First of all, congratulations on your project :)
We, at MADEIRA GPS, use Postgresql and PostGIS as the corner stone of our
fleet management solution and have tens of *millions* of records in a single
vehicles history table without any visible performance problem (we do,
however, clean it up periodically).
On Tue, Mar 17, 2009 at 12:25:08PM +0100, Juan Pereira wrote:
> I'm currently developing a program for centralizing the vehicle fleet GPS
> information -http://openggd.sourceforge.net-, written in C++.
>
> The database should have these requirements:
...
> - The database also should create a table for every truck -around 100
> trucks-.
Hello,
I'm currently developing a program for centralizing the vehicle fleet GPS
information -http://openggd.sourceforge.net-, written in C++.
The database should have these requirements:
- The schema for this kind of data consists of several arguments -latitude,
longitude, time, speed, etc-, none of them is a text field.
- The database also should create a table for every truck -around 100
trucks-.