Hi there,

First off, read up on the GAE bulk loader:
http://code.google.com/appengine/docs/python/tools/uploadingdata.html
I think the newer releases support CSV, though I have not used it with
CSV myself.
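
If you go the bulk loader route, the basic shape (writing this from
memory, so double-check it against the docs above; 'Song' and the two
columns are just placeholders for one of your kinds) is a small Loader
config module plus an appcfg.py call:

from google.appengine.tools import bulkloader

class SongLoader(bulkloader.Loader):
    def __init__(self):
        # ('csv column name', converter) pairs, in CSV column order
        bulkloader.Loader.__init__(self, 'Song',
                                   [('title', str),
                                    ('year', int)])

loaders = [SongLoader]

# with the remote_api handler enabled in app.yaml, something like:
# appcfg.py upload_data --config_file=song_loader.py \
#     --filename=songs.csv --kind=Song <app-directory>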

Below I'm pasting some old code of mine for uploading and downloading
CSV, including fixup of references.  It is specific to my database, so
you will need to tweak it.  Also, it is about a year old and has not
been used for close to a year, because my dataset quickly grew larger
than could be processed within the 30-second request limit.

# @todo: requires membership of admin
@auth.requires_login()
def export():
    """
    Export the database as a CSV file.  Note that this CSV file format
is
    particular to web2py and will allow upload via L{replace_db} to
this app
    running on any database type that web2py supports.
    """
    import StringIO
    s = StringIO.StringIO()
    db.export_to_csv_file(s)
    response.headers['Content-Type'] = 'text/csv'
    response.headers['Content-Disposition'] = \
        'attachment; filename=rockriver_db_' + str(now) + '.csv'  # 'now' comes from the model (e.g. request.now)
    return s.getvalue()
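
For what it's worth, the dump that export() produces can also be pushed
into a plain (non-GAE) web2py instance straight from the shell, which is
handy for sanity-checking the file before uploading it; roughly (adjust
the app and file names to yours):

python web2py.py -S myapp -M -N
>>> db.import_from_csv_file(open('rockriver_db_dump.csv', 'rb'))
>>> db.commit()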

# @todo: requires membership of admin
@auth.requires_login()
def replace_db():
    """
    Truncate all tables, and replace with data from the uploaded CSV
file.
    Note that this is intended to load data from the web2py formatted
csv file
    as downloaded from L{export}
    """
    id_map = None
    form = FORM(INPUT(_type='file', _name='data'),
                INPUT(_type='submit'))
    if form.accepts(request.vars):
        for table in db.tables:
            db[table].truncate()
        id_map = {}
        db.import_from_csv_file(form.vars.data.file, id_map=id_map,
                                unique=False)
        # @todo: fix up song media references
        songs = db(db.song.id>0).select()
        for song in songs:
            if not song.media_ids:
                continue
            new_media = []
            medias = song.media_ids
            for m in medias:
                # skip ids that did not survive the import
                # (same guard as the recording loop below)
                if m in id_map['media_ids']:
                    new_media.append(id_map['media_ids'][m])
            song.update_record(media_ids=new_media)
        # @todo: fix up recording.media references
        recordings = db(db.recording.id>0).select()
        for r in recordings:
            if not r.media_ids:
                continue
            new_media = []
            medias = r.media_ids
            for m in medias:
                if m in id_map['media_ids']:
                    new_media.append(id_map['media_ids'][m])
            r.update_record(media_ids = new_media)
        # @todo: fix up product.elements references
        products = db(db.product.id>0).select()
        for p in products:
            if not p.elements_ids:
                continue
            new_song = []
            songs = p.elements_ids
            for s in songs:
                if s in id_map['song']:
                    new_song.append(id_map['song'][s])
            p.update_record(elements_ids = new_song)
    return dict(form=form, id_map=id_map)
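
If you have more list:reference fields than I do, the three fixup loops
above are all the same pattern, so you could fold them into a little
helper like this (untested sketch; pass it the id_map entry for whichever
table the field references):

def remap(old_ids, table_map):
    """Translate a list of ids through one table's id_map entry,
    dropping ids that did not survive the import."""
    return [table_map[i] for i in old_ids if i in table_map]

# e.g. song.update_record(media_ids=remap(song.media_ids,
#                                         id_map['media_ids']))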

good luck!

christian

On Nov 21, 5:36 pm, olifante <tiago.henriq...@gmail.com> wrote:
> Hi everybody,
>
> I'm having trouble finding out what is the appropriate way to load
> data into a web2py webapp running on GAE. I created a script that
> parses some files and inserts data into a local web2py instance using
> "web2py -S myapp -M -R myscript.py", but I see no way of doing the
> same either for a local GAE instance (running with dev_appserver) or
> for a deployed GAE instance.
>
> I know that you can export the entire database from a standard web2py
> instance using something like this:
>
> db.export_to_csv_file(open('somefile.csv', 'wb'))
>
> Unfortunately, since you cannot use the web2py shell with GAE, I don't
> see how I can import that database dump either into the local GAE
> instance or the deployed GAE instance.
>
> Can anybody help?
