I would set up a Dropbox shared folder on both machines (there is a
headless dropbox.py client for Linux).

Then create a script which copies the .sqlite into the Dropbox folder.
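Something like this, for example (paths are just placeholders; the backup API
needs Python 3.7+, on older versions you can shell out to sqlite3's ".backup"
command or shutil.copy the file while the app is idle):

```python
# Rough sketch of the copy script on the forest side.
# Uses sqlite's online backup instead of a plain file copy so the
# snapshot stays consistent even if the app is writing at the same time.
import sqlite3

SRC = "/home/forest/web2py/applications/myapp/databases/storage.sqlite"  # placeholder
DST = "/home/forest/Dropbox/sync/storage.sqlite"                          # placeholder

def snapshot():
    src = sqlite3.connect(SRC)
    dst = sqlite3.connect(DST)
    with dst:
        src.backup(dst)   # consistent page-by-page copy
    dst.close()
    src.close()

if __name__ == "__main__":
    snapshot()
```

Run it from cron on the forest machine; Dropbox picks the file up from the shared folder.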

On the city machine, a scheduled task reads the records from the sqlite file
and imports the new ones directly into mysql.

You are going to need a 'signal' field on every record. I use 'N' for
new, 'U' for updated and 'D' for deactivated.

In this scenario it is not a good idea to delete any data, so use an
is_active boolean to mark deactivated records instead.
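A minimal sketch of that import job (table and column names are made up, and I
assume MySQLdb on the city side; any DB-API driver works):

```python
import sqlite3
import MySQLdb  # assumption: the city side talks to mysql with MySQLdb

# Hypothetical "reading" table carrying the signal/is_active fields described above.
sq = sqlite3.connect("/home/city/Dropbox/sync/storage.sqlite")
my = MySQLdb.connect(host="localhost", user="web2py", passwd="secret", db="myapp")
cur = my.cursor()

rows = sq.execute(
    "SELECT id, name, qty, signal FROM reading WHERE signal IN ('N','U','D')"
).fetchall()

for rid, name, qty, signal in rows:
    if signal == 'N':
        cur.execute(
            "INSERT INTO reading (id, name, qty, is_active) VALUES (%s, %s, %s, 1)",
            (rid, name, qty))
    elif signal == 'U':
        cur.execute("UPDATE reading SET name=%s, qty=%s WHERE id=%s",
                    (name, qty, rid))
    else:  # 'D': never delete, just flip the flag
        cur.execute("UPDATE reading SET is_active=0 WHERE id=%s", (rid,))

my.commit()
# After a successful run you would clear the signal flags on the sqlite side
# (or simply wait for the next full snapshot from the forest).
```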

In the other direction, you can have the city machine exporting into a sqlite
database the records that should go back to the forest.
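The export in that direction can be just as simple (again, names and paths are
placeholders):

```python
import sqlite3
import MySQLdb  # assumption, as above

my = MySQLdb.connect(host="localhost", user="web2py", passwd="secret", db="myapp")
cur = my.cursor()
cur.execute("SELECT id, name, qty, signal FROM reading WHERE signal IN ('N','U','D')")

# Dump the flagged rows into a sqlite file dropped in the shared folder.
out = sqlite3.connect("/home/city/Dropbox/sync/to_forest.sqlite")
out.execute("""CREATE TABLE IF NOT EXISTS reading
               (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER, signal TEXT)""")
out.executemany("INSERT OR REPLACE INTO reading VALUES (?, ?, ?, ?)", cur.fetchall())
out.commit()
out.close()
```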

Also, you need a status table (or a plain file) that is read on both sides
before any transaction occurs, to avoid race conditions.
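For the status check, a simple lock file in the shared folder is usually enough,
as long as the two scheduled jobs run on different schedules (Dropbox propagation
is not instantaneous, so treat this as a convention rather than a real lock; file
name and timeout are assumptions):

```python
import os
import time

LOCK = "/home/forest/Dropbox/sync/.busy"   # hypothetical marker file

def acquire(timeout=300):
    """Wait until the other side is done, then claim the folder."""
    start = time.time()
    while os.path.exists(LOCK):
        if time.time() - start > timeout:
            raise RuntimeError("other side still busy, giving up")
        time.sleep(5)
    open(LOCK, "w").close()

def release():
    if os.path.exists(LOCK):
        os.remove(LOCK)
```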

The better solution would be a mysql master-slave setup, but if you have
connection issues you can go with a home-made queue like this.

http://zerp.ly/rochacbruno
On 17/07/2012 08:49, "José Luis Redrejo Rodríguez" <jredr...@debian.org>
wrote:

> I had to do something similar a couple of years ago (between
> several waste water plants and the control center) and ended up using a
> similar approach to what "nick name" said:
> - In the control center I used mysql
> - In the waste water plants I used a sqlite database per day
> (initializing the database every day at 00:00 and backing up the
> previous file in another directory)
> - Every record in the plants had a datetime stamp
> - The plants just sent the sqlite files gzipped (and split into small
> bits because my connection was really bad), and the control center just
> received the bits, joined them, unzipped the sqlite files and imported
> their data into mysql, using the plant + datetime as the key to avoid
> duplicate items.
>
>
> Regards.
> José L.
>
>
> 2012/7/13 nick name <i.like.privacy....@gmail.com>:
> > On Wednesday, July 11, 2012 6:26:00 PM UTC-4, Massimo Di Pierro wrote:
> >>
> >> I am planning to improve this functionality but it would help to know if
> >> it works for you as it is and what problems you encounter with it.
> >
> >
> > I originally used the "export-to-csv", but a few months ago, I switched to
> > just shipping the sqlite files (actually the whole "databases" directory
> > with .table files); that handles everything like types, blobs, fractional
> > seconds in the database, etc., without any conversion. It is also faster
> > when processing the files at the other end - especially if you have
> > indices and have a non-trivial import requirement. It should be opened
> > with auto_import=True on the receiving end, of course.
> >
> > (you'd still need an "export" to a new .sqlite database, or use sqlite's
> > backup command, to make sure you get the database in a consistent state --
> > unless you know that the database is in a fully committed state when you
> > send it).
> >
> > If the connection is not reliable, the classic solution is a queuing
> > system like MSMQ / MQSeries / RabbitMQ (which is often non-trivial to
> > manage), but you could just export (csv, .sqlite, whatever) to a
> > dropbox-or-similar synced directory (e.g. sparkleshare lets you own the
> > repository and not rely on dropbox.com servers), and import it on the
> > server side when the file has changed. Much, much simpler, and it works
> > just as well for one-way communication that does not require the lowest
> > possible latency.
>
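PS: for anyone wanting to copy the gzip + split + reassemble trick José
describes, a rough sketch (chunk size and file names are placeholders; the
datetime-key deduplication happens in the mysql import, not here):

```python
import glob
import gzip
import shutil

CHUNK = 64 * 1024  # small pieces for a flaky link (size is arbitrary)

def compress(path):
    # gzip the sqlite file before shipping it
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    return path + ".gz"

def split(path):
    # sending side: cut the gzipped file into numbered parts
    with open(path, "rb") as f:
        for i, data in enumerate(iter(lambda: f.read(CHUNK), b"")):
            with open("%s.part%04d" % (path, i), "wb") as part:
                part.write(data)

def join(path):
    # receiving side: glue the parts back together, then gunzip and import
    with open(path, "wb") as out:
        for part in sorted(glob.glob(path + ".part*")):
            with open(part, "rb") as p:
                shutil.copyfileobj(p, out)
```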
