Hi,

I'm using havana and recently we ran into an issue with heat related to character sets.

In heat/db/sqlalchemy/api.py in user_creds_get() we call
_decrypt() on an encrypted password stored in the database and then try to convert the result to unicode. Today we hit a case where this errored out with the following message:

UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0: invalid continuation byte
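
To illustrate, here's a trivial reproduction outside of heat: bytes that an
SQL_ASCII database will happily store aren't necessarily valid UTF-8, so
decoding them as UTF-8 fails. The 0xf2 lead byte matches our traceback; the
rest of the value is made up:

    # Bytes that are fine in SQL_ASCII/latin1 but are not valid UTF-8
    raw = b'\xf2abc'

    print(raw.decode('latin-1'))   # works: latin-1 maps every byte to a code point
    try:
        raw.decode('utf-8')
    except UnicodeDecodeError as exc:
        # 'utf-8' codec can't decode byte 0xf2 in position 0: invalid continuation byte
        print(exc)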

We're using postgres and currently all the databases are using SQL_ASCII as the charset.

I see that in icehouse heat will complain if you're using mysql and not using UTF-8. There don't seem to be any checks for other databases, though.
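
For what it's worth, an equivalent check for postgres looks like it would be
pretty small. This is only a sketch of the idea -- the function name and how
it would get hooked into db_sync are my own assumptions, not existing heat
code:

    import logging

    from sqlalchemy import create_engine, text

    LOG = logging.getLogger(__name__)

    def warn_if_not_utf8(connection_url):
        # Hypothetical check mirroring the icehouse mysql warning, for postgres.
        engine = create_engine(connection_url)
        if engine.name != 'postgresql':
            return
        with engine.connect() as conn:
            encoding = conn.execute(text("SHOW server_encoding")).scalar()
        if encoding.upper() not in ('UTF8', 'UTF-8'):
            LOG.warning("Database encoding is %s; UTF8 is recommended for heat",
                        encoding)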

It looks like devstack creates most databases as UTF-8 but uses latin1 for nova/nova_bm/nova_cell. I assume this is because nova expects to migrate the db to UTF-8 later. Given that those migrations only specify a character set for mysql, should we explicitly default to UTF-8 for everything when using postgres?
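
In case it's useful context: even on a cluster that was initdb'd with
SQL_ASCII, individual databases can still be created as UTF-8 by cloning
template0. A rough sketch with psycopg2 (the connection parameters and the
en_US.UTF-8 locale are placeholders for whatever the deployment actually
uses):

    import psycopg2

    # CREATE DATABASE can't run inside a transaction block, hence autocommit.
    conn = psycopg2.connect(dbname='postgres', user='postgres', host='localhost')
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("CREATE DATABASE heat ENCODING 'UTF8' "
                "LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8' "
                "TEMPLATE template0")
    cur.close()
    conn.close()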

Thanks,
Chris

