The problem here basically comes down to: when you change the password, are the tables already created and consistent with your model, or not? It seems to me that they are in your case (you only have to change the password; the tables and columns are already there). If so, just delete everything in /databases and set fake_migrate_all: it should recreate all the needed files. If not, every "instance" (dev --> test --> prod) has its own .table files associated with it (and will migrate accordingly), so it shouldn't be a problem as long as the "databases" folder is not synced when you push your code around.
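A minimal sketch of that recipe, assuming the usual models/db.py and a MySQL connection string (the URI, host, and app names here are just placeholders; inside a web2py model file DAL is already in scope, so no import is needed):

    # models/db.py -- after clearing applications/<app>/databases/,
    # run the app once with fake_migrate_all=True: web2py rebuilds the
    # .table files from your model definitions without issuing any DDL
    db = DAL('mysql://appuser:newpassword@dbhost/appdb',
             fake_migrate_all=True)

    # once the .table files exist again, turn it back off:
    # db = DAL('mysql://appuser:newpassword@dbhost/appdb')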
On Tuesday, August 28, 2012 7:17:02 PM UTC+2, Chris wrote:
>
> Background: web2py keeps track of database definitions using files in each
> app's ./databases directory. The basic concept is, for each table
> definition a pickle file is created storing attributes defined in web2py /
> not necessarily defined in the database; and the name of each pickle file
> incorporates a hash based on the database URI + the table name, so one app
> can access multiple databases and not experience name collisions if tables
> with the same name occur in more than one database. Seems like a good
> approach overall.
>
> The only problem I've had with this approach is that the hash is based on
> the full URI, and therefore it changes when the user name and/or password
> for a database connection changes. If you change a password, then you have
> to change the URI, and the "same" database will have a different hash. The
> next time you run the app, web2py won't be able to identify the .table
> files for that DB, and you end up with a bunch of "Table already exists"
> errors and possibly a failed migration to clean up. Changing passwords
> regularly is good practice, and in my case, where we move code among
> servers as a project moves from dev --> test --> stage --> prod, this
> changes fairly often.
>
> Here's my solution to the problem:
>
> 0. know that the hash is computed as hashlib.md5('uri string').hexdigest()
> 1. get the old URI, run it through MD5 --> oldhash
> 2. get the new URI, run it through MD5 --> newhash
> 3. now assuming you are running bash ...
> 4. cd /to/the/databases/folder
> 5. for i in oldhash*.table; do j=`echo $i | sed 's/oldhash/newhash/g'`;
>    cp "$i" "$j"; done
>
> In other words, copy every existing .table file whose name used the old
> hash to a new name that uses the new hash.
>
> Two questions:
>
> (1) Do you have a better way to deal with this?
>
> (2) I wonder if a change to the current behavior would be better -- change
> the logic to build a hash using all of the URI except the password part?
> Changing the server or DB name feels like a real change to me, and in
> general so does changing just the user ID, since the user may have
> different permissions, views, etc. in the same database. But changing just
> the password should not change the underlying identity of the database
> connection or the database object definitions. (In my view of the world.)
> What do you think?
>
> Cheers
> -- Chris
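For completeness, the rename recipe from the quoted post can also be run from a Python shell instead of bash. This is just a sketch assuming the .table files are named <md5(uri)>_<tablename>.table as described above; the URIs and the app path are placeholders:

    import hashlib
    import os
    import shutil

    # placeholders -- substitute your real connection strings and app folder
    old_uri = 'mysql://appuser:oldpassword@dbhost/appdb'
    new_uri = 'mysql://appuser:newpassword@dbhost/appdb'
    databases_dir = 'applications/myapp/databases'

    # steps 1-2: hash both URIs the same way web2py does
    # (.encode() keeps this working on Python 3; for ASCII URIs the digest
    # matches what web2py computes from the plain string)
    old_hash = hashlib.md5(old_uri.encode('utf8')).hexdigest()
    new_hash = hashlib.md5(new_uri.encode('utf8')).hexdigest()

    # steps 4-5: copy every .table file whose name starts with the old hash
    # to the same name with the new hash
    for name in os.listdir(databases_dir):
        if name.startswith(old_hash) and name.endswith('.table'):
            target = name.replace(old_hash, new_hash, 1)
            shutil.copy(os.path.join(databases_dir, name),
                        os.path.join(databases_dir, target))

The same hashing also shows why question (2) would help: if web2py hashed the URI with the password stripped out, the .table file names would stay stable across password changes.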