/--- On Tue, Sep 05, 2000 at 11:48:38AM -0400, Dan Sugalski wrote:
| >- two-phase commit handler, rollback coordinator (the above two
| > is
| > connected to this: very simple algorhythm!)
|
| Here's the killer. This is *not* simple. At all. Not even close.
|
| Doing this properly with data sources you completely control in a
| multi-access situation (read: with threads) is *hard*. It requires
| possibly
| unbounded amounts of scratch space, a lot of cooperation with
| anything that
| accesses external data sources (like DBD::Oracle, say), and either
| cooperation or lots of sophisticated programming for things that
| stay
| internal. (We would, for example, have to get very clever for code
| that
| accesses external libraries if they didn't support the transaction
| interface, possibly snapshotting and later replacing all the
| global state
| they keep. Which may not be possible)
All we need to do is something similar to the handling of "local",
plus calling the callbacks.
I don't want the core to handle data sources which are NOT
transaction-enabled. I only want to handle our objects and tied
hashes (keep it simple and stupid). DBD::Oracle is not perl
transaction-enabled, but it can support that if somebody writes the
required callbacks. I don't want the core to handle what
DBD::Oracle itself needs to handle!
The rule is: IF you want to use transactions, USE
transaction-enabled objects and tied interfaces!
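As a sketch of what such a transaction-enabled tied interface could look like — the TIE_PREPARE / TIE_COMMIT / TIE_ROLLBACK names are the callbacks proposed in this thread, not an existing Perl API, and the pending-buffer scheme is my own illustration:

```perl
use strict;
use warnings;

# A tied hash where STOREs go into a pending buffer until the
# (proposed) transaction callbacks decide their fate.  Iteration
# callbacks (FIRSTKEY/NEXTKEY) are omitted to keep the sketch short.
package Tie::TxHash;

sub TIEHASH { bless { data => {}, pending => {} }, shift }
sub STORE   { my ($s, $k, $v) = @_; $s->{pending}{$k} = $v }
sub FETCH   {
    my ($s, $k) = @_;
    exists $s->{pending}{$k} ? $s->{pending}{$k} : $s->{data}{$k};
}
sub EXISTS  {
    my ($s, $k) = @_;
    exists $s->{pending}{$k} || exists $s->{data}{$k};
}
sub DELETE  {
    my ($s, $k) = @_;
    delete $s->{pending}{$k};
    delete $s->{data}{$k};
}

# The three callbacks the core would call at the end of the block:
sub TIE_PREPARE  { 1 }    # purely in-memory, nothing here can fail
sub TIE_COMMIT   {
    my $s = shift;
    %{ $s->{data} }    = ( %{ $s->{data} }, %{ $s->{pending} } );
    %{ $s->{pending} } = ();
}
sub TIE_ROLLBACK { my $s = shift; %{ $s->{pending} } = () }

package main;

tie my %h, 'Tie::TxHash';
$h{count} = 42;              # buffered only
(tied %h)->TIE_ROLLBACK;     # the write is gone
$h{count} = 42;
(tied %h)->TIE_COMMIT;       # now it is permanent
```

A real module (a DBD wrapper, the RFC 130 file handler) would do its external work in TIE_PREPARE and return 0 there on failure; this in-memory version can always answer 1.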
The commit is done with the following algorithm (this runs at the
end of a block where a transaction finishes):
- gather the variables that are transaction-enabled,
- find their TIE_PREPARE and PREPARE callbacks,
- call them,
- if any of them returns 0, call all the TIE_ROLLBACK and ROLLBACK
callbacks,
- if all of them return 1, call all the TIE_COMMIT and COMMIT
callbacks.
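The loop above can be sketched in a few lines of Perl — assuming the callbacks are plain methods named PREPARE/COMMIT/ROLLBACK on the participating objects (the names come from the list above; the Demo::Participant class is made up for illustration):

```perl
use strict;
use warnings;

# A toy participant: it records what happened to it, and can be told
# to veto the prepare phase.  (Illustration only -- real participants
# would be DBD handles, tied hashes and the like.)
package Demo::Participant;
sub new      { my ($class, %args) = @_; bless {%args}, $class }
sub PREPARE  { my $self = shift; $self->{veto} ? 0 : 1 }
sub COMMIT   { $_[0]{state} = 'committed' }
sub ROLLBACK { $_[0]{state} = 'rolled_back' }

package main;

# The two-phase commit loop itself.  Each participant must provide
# PREPARE (returns 1 for "ready", 0 for "abort"), COMMIT and ROLLBACK.
# Returns 1 if the transaction committed, 0 if it rolled back.
sub two_phase_commit {
    my @participants = @_;

    # Phase 1: ask every participant to prepare.
    for my $p (@participants) {
        unless ($p->PREPARE) {
            # A single 0 vote aborts the whole transaction.
            $_->ROLLBACK for @participants;
            return 0;
        }
    }

    # Phase 2: everybody answered 1, so commit everywhere.
    $_->COMMIT for @participants;
    return 1;
}
```

With two clean participants the loop commits both; put `veto => 1` on either one and everything — including the participants that already prepared — gets ROLLBACK instead.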
I don't want to snapshot any module that doesn't support
transactions. That must not be the goal; it's impossible, I agree.
But transaction support can be extended later by modules which
handle transactions themselves.
Look at the very simple example in RFC 130 v5. It is a
file handler (it ties the contents of a file to a hash) and does the
locking, reading, two-phase commit, etc. in perl. (It is not
perfect, because "link -f" is not atomic, but it is very close to that.)
There are companies that build mission-critical systems, and they
will build objects for their own data-storage systems. Not all data
is in databases. Someone may use text files mixed with database
tables (imagine a freemail system: if an error occurs while deleting
a mail from the mailbox, we must not decrease the mail count).
So: we don't want to handle what is impossible. Look how much we
gain even when only the transaction-enabled modules are used.
All we can do, if something doesn't support the
transaction environment, is to emulate rollback and commit with
FETCHes and STOREs. We have no chance to emulate that with opaque
objects. That's the limitation.
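A sketch of that FETCH/STORE emulation, for a plain or tied hash with no callbacks at all (the helper names are made up for illustration):

```perl
use strict;
use warnings;

# Before the transaction: read every key with FETCH and keep a copy.
sub snapshot_hash {
    my ($href) = @_;
    return { map { $_ => $href->{$_} } keys %$href };
}

# On rollback: clear whatever the transaction did, then put the old
# values back with STOREs.  (This is also why the trick cannot work
# for opaque objects: there is no FETCH/STORE interface to replay.)
sub restore_hash {
    my ($href, $snap) = @_;
    %$href = ();
    $href->{$_} = $snap->{$_} for keys %$snap;
}

my %mailbox = ( mails => 3 );
my $snap = snapshot_hash(\%mailbox);
$mailbox{mails} = 2;             # part of a failed transaction
restore_hash(\%mailbox, $snap);  # mails is 3 again
```

Note this only restores what FETCH can see — any side effects a tied STORE already performed externally stay performed, which is exactly the limitation described above.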
| >then all things are very
| >straightforward, aren't they? It is _not_ complicated at all. It
| >is
| >all Perlish!
|
| This is far from straightforward. Keen, yes, but very far from
| straightforward. It would also add a lot of code to the core, and
| possibly
| place some heavy demands on the resources of machines running perl
| code.
If we do everything you imagine, yes; but we only want to implement
in the core what must be in the core, and that is nothing more than
the 2pc algorithm.
\---
dLux
--
This Message is Powered by VI