On Wed, 2007-06-13 at 00:17 -0700, womble wrote:
> Thanks for the response Malcolm- comments/follow-up inline below.
> 
> > 
> > You'll need to do manual transaction management, for a start, just like
> > you would with any two processes from remote machines accessing the same
> > data. The only point of coordination is the database, so you need to
> > work through there. We don't do this by default, since it adds a lot of
> > overhead when not needed.
> > 
> > Also, I don't think we have the equivalent of "select for update"
> > transaction locking at the moment (I may be wrong -- I have never had
> > need to use the manual transaction management stuff beyond playing with
> > it once to see what was possible), so you may need to write some extra
> > code for Django to do that. Feel free to discuss any design issues on
> > the django-developers list when you get to that point, since it would be
> > useful to have.
> 
> Ok, do you know of any examples of how to setup/use the transaction
> management facilities?  (This is stretching my experience with dbs, so
> am looking for some examples to see how it's put together.)

Not off the top of my head, no. There are some example outlines in the
transaction documentation, though -- have you seen
http://www.djangoproject.com/documentation/transactions/ ?
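In case a concrete example helps, here's the basic commit/rollback pattern those docs describe, sketched with Python's stdlib sqlite3 module so it runs on its own (the "account" table is made up for the example). In Django itself you'd go through the hooks in django.db.transaction rather than talking to the connection directly, but the shape is the same:

```python
# Manual transaction management in miniature: group related updates so
# that either all of them commit or none do. Illustrated with sqlite3;
# the "account" table is invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")
conn.commit()

try:
    conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
    # Something goes wrong before the matching credit is applied...
    raise RuntimeError("simulated failure mid-transaction")
    # ...so the conn.commit() that would normally go here never runs.
except RuntimeError:
    conn.rollback()  # undo the partial debit

balance = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(balance)  # 100 -- the half-finished transfer was rolled back
```

The point is simply that nothing becomes visible to other connections until the explicit commit, and a rollback throws away everything since the last one.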

[...]
> Sorry, didn't mean my question that way... am well aware that they're
> both fine products.  My (simplistic) understanding is that MySQL is
> faster, and PostgreSQL has more advanced transaction/constraint/stored
> procedure capabilities, and as such, was wondering if that meant there
> was more (advanced?) transaction support if a PostgreSQL backend were used.

At the level Django supports (and even a bit beyond that), I think
they're pretty much equivalent. Both backends have full commit/rollback
support for groups of operations that must be atomic. They also both
have row-level locking for update locking (this is the piece not
supported by Django at the moment).
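If you do go the raw-SQL route for update locking in the meantime, it would look something like the sketch below. The polls_poll table, its columns and the helper function are my invention for illustration; the FOR UPDATE clause itself is standard in both PostgreSQL and MySQL/InnoDB.

```python
# Row-level update locking via raw SQL, since the ORM doesn't expose it.
# FOR UPDATE locks the matched row until the surrounding transaction
# commits or rolls back, so concurrent writers block rather than
# clobbering each other. Table/column names are illustrative only.
LOCK_SQL = "SELECT id, question FROM polls_poll WHERE id = %s FOR UPDATE"

def fetch_poll_locked(poll_id):
    # Imported inside the function so the snippet also loads without a
    # configured Django project.
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute(LOCK_SQL, [poll_id])  # the matched row is now locked
    return cursor.fetchone()
```

Note that FOR UPDATE only means something inside a transaction, so this belongs together with the manual transaction management mentioned earlier.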

Procedural support probably tilts slightly in favour of PostgreSQL,
because it supports stored procedures written in external languages.
However, I may be selling MySQL short there, since I haven't done much
with stored procedures on that side (I'm not a huge fan of them for most
cases), and MySQL's existing support may well be enough even if it isn't
as extensive.

The speed difference between the two isn't large for "many" (in a
slightly undefined sense) situations. There was, however, a
well-publicised benchmark at tweakers.net last year that showed
PostgreSQL being a lot faster than MySQL in their particular setup. On a
CPU-bound test with the database in memory, PostgreSQL scaled a lot
better on some multi-core/multi-CPU machines. I'm not sure that creating
the most blazingly fast single-server machines known to mankind is
always the economical choice, though (disk speed is a big issue, as are
disk space, redundancy and mean time to failure, so you need N of these
monsters in any case).

It used to be that MySQL was much faster if you only had a few (a dozen
or two) simultaneous connections on read-heavy usage. That isn't as
universally true any longer (particularly once you introduce
transactions into the picture because that slows things down a bit).

One can see that the equation isn't totally one-sided by noting
high-performance MySQL users like LiveJournal, Flickr, Slashdot,
Wikipedia and Curses Gaming (the last being a big Django user --
http://www.davidcramer.net/other/43/rapid-development-serving-500000-pageshour.html
is confidence-inspiring). The first four obviously have fairly advanced
setups that you aren't going to get out of the box with something like
Django, but it shows MySQL works.

[Lest I sound like a MySQL fanboy, that's probably over-compensation. My
database of choice is PostgreSQL and I've used it in some pretty
high-traffic setups with reasonably large databases. Again, though, at
the high end of performance you do end up becoming a bit of an expert in
database and system tuning. By then it's a good problem to have,
because needing high performance typically means you're successful.]


> BTW- thanks for the good summary of how to select a db (saves me writing
>  something similar for our co-ops ;-) )- in our case, we want more
> advanced data integrity, and speed is not (as yet) an issue.  We're
> currently using MySQL, but am toying with trying PostgreSQL... another
> reason why we'd like a mature db-layer to help shield us from these
> decisions :-).

If you're using Django for a lot of the front-end stuff, it won't be
hard to switch later if you really need to (it will be painful to move
the data, but Django won't care much -- only places where you're using
custom SQL might need some attention). So you can probably punt a little
bit here (or at least not view it as a life-changing decision).

Wringing every last cycle out of the database is going to be a bit of
work (but possible) whichever one you choose. The feature sets are very
close to each other. Support from the respective communities is good in
both cases. Commercial support is available in both cases. Both are
first-class Django citizens (i.e. we aren't going to stop providing a
backend for either one). Whichever backend you're most comfortable
administering might be the one to go with. If you really have
high-performance/high-volume needs, feeling comfortable with the db is
a big issue.

Regards,
Malcolm


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to django-users@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---