----- Original Message -----
> I think there is a summit topic about what to do about a good 'oslo.db'
> (not sure if it got scheduled?)

Will look.

> 
> I'd always recommend reconsidering just copying what nova/cinder and a few
> others have for their db structure.
> 
> I don't think that has turned out so well in the long term (a 6000+ line
> file is not so good).
> 
> As for a structure that might be better, in taskflow I followed more of
> how ceilometer does their db api. It might work for you.
> 
> - https://github.com/openstack/ceilometer/tree/master/ceilometer/storage
> -

The Connection / Model object paradigm in Ceilometer was what I assumed was 
"recommended" and was mentally my starting point (it's similar, but not 
identical, to trove, ironic, and heat).  The Ceilometer approach is what I would 
describe as a resource manager class (Connection) that hides the implementation 
(by mapping SQLAlchemy to the Model* objects).  So storage/base.py and 
storage/models.py together define a rough domain model.  Russell, is that what 
you're advocating against (because of the size of the eventual resource manager 
class)?
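
For reference, the shape I have in mind is roughly the following (a minimal 
sketch only; "Application" and the method names are illustrative, not actual 
Solum or Ceilometer code):

```python
# Minimal sketch of the Ceilometer-style Connection/Model split.
# "Application" and the method names here are illustrative only.
import abc


class Model(object):
    """Base domain object: plain attributes, no SQLAlchemy coupling."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


class Application(Model):
    """Example resource in the rough domain model."""


class Connection(abc.ABC):
    """Resource manager that hides the storage implementation.

    A SQLAlchemy backend would map rows to Model objects behind this
    interface; other backends implement the same methods.
    """

    @abc.abstractmethod
    def get_application(self, app_id):
        """Return an Application, or raise if not found."""

    @abc.abstractmethod
    def create_application(self, application):
        """Persist an Application and return it with its id set."""
```

The point being that REST handlers only ever see Model objects and the 
Connection interface, never SQLAlchemy sessions.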

Here are a couple of concrete storage interaction patterns:

  simple application/component/sensor persistence with clean validation back to 
REST consumers
    traditional CRUD; probably 3-8 resources will follow this pattern over time
    best done via object-model interactions followed by a direct persist 
operation
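
Something like the following is what I mean by "validate in the object model, 
then persist directly" (all names here are hypothetical):

```python
# Illustrative sketch of the "object model + direct persist" CRUD pattern;
# ValidationError, Sensor, and create_sensor are hypothetical names.


class ValidationError(Exception):
    """Surfaced to the REST layer as a clean 400-style message."""


class Sensor(object):
    """Domain object that validates itself on construction."""

    def __init__(self, name, target):
        if not name:
            raise ValidationError("sensor name is required")
        self.name = name
        self.target = target


def create_sensor(conn, data):
    # Validation happens in the object model; the handler then does a
    # single direct persist call against the Connection.
    sensor = Sensor(name=data.get('name'), target=data.get('target'))
    return conn.create_sensor(sensor)
```

So the REST consumer gets a validation failure before anything touches 
storage.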

  elaborate a plan description for the application (yaml/json/etc) into the 
object model
    will need to retrieve specific sets of info from the object model
    typically one-way
    may involve asynchronous operations spawned from the initial request to 
retrieve more information
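
The one-way elaboration step might look roughly like this (a sketch; the plan 
schema and "Artifact" name are made up for illustration):

```python
# Hypothetical one-way elaboration of a plan document into domain objects.
# The plan schema and the Artifact class are illustrative, not Solum's.
import json


class Artifact(object):
    """Domain object produced from one plan entry."""

    def __init__(self, name, content_href):
        self.name = name
        self.content_href = content_href


def elaborate_plan(plan_doc):
    """Parse a JSON plan description and build object-model instances."""
    doc = json.loads(plan_doc)
    return [Artifact(entry['name'], entry['content_href'])
            for entry in doc.get('artifacts', [])]
```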

  translate the plan/object model into a Heat template
    will need to retrieve specific sets of info from the object model
    typically one-way
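
Again one-way, roughly of this shape (the resource type emitted is a 
placeholder, not what Solum would actually generate):

```python
# Minimal, illustrative translation from object-model data to a Heat
# template body.  The resource type is a placeholder only.
import json


def to_heat_template(component_names):
    """Build a Heat template from object-model component names."""
    resources = {}
    for name in component_names:
        resources[name] = {
            'type': 'OS::Nova::Server',  # placeholder resource type
            'properties': {'name': name},
        }
    return json.dumps({
        'heat_template_version': '2013-05-23',
        'resources': resources,
    })
```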

  create/update a Heat stack based on changes
    likely will set the stack id into the object model
    might return within milliseconds or seconds

  provision source code repositories
    might return within milliseconds or minutes

  provision DNS
    this can take milliseconds to seconds, though the DNS change is likely 
only visible to an API consumer after minutes.

  trigger build flows
    this may take milliseconds to initiate, but minutes to complete

The more complex operations are likely separate pluggable service 
implementations (read: abstracted) that want to call back into the object model 
in a simple way, possibly via methods exposed specifically for those use cases.
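
In other words, something of this shape (hypothetical names throughout; 
"BuildService" and "set_build_id" are made up to show the narrow callback 
surface):

```python
# Hypothetical shape of a pluggable service implementation that calls
# back into the object model through a purpose-built method.  All names
# here are illustrative.
import abc


class BuildService(abc.ABC):
    """Abstracted long-running service; implementations are pluggable."""

    @abc.abstractmethod
    def trigger_build(self, app_id):
        """Start a build; may take minutes to complete."""


class LocalBuildService(BuildService):
    def __init__(self, conn):
        self.conn = conn  # narrow handle back into the object model

    def trigger_build(self, app_id):
        build_id = 'build-%s' % app_id
        # Call back via a method exposed specifically for this use case,
        # rather than handing the service the whole storage layer.
        self.conn.set_build_id(app_id, build_id)
        return build_id
```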

I *suspect* that Solum will never have the complexity Nova does in its 
persistence model, but that we'll end up with around 20 tables in the first 2 
years.  I would expect the API surface area to be slightly larger than some 
projects', but not equivalent to keystone/nova by any means.

> https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends
> 
> I also have examples of alembic usage in taskflow, since I also didn't
> want to use sqlalchemy-migrate for the same reasons russell mentioned.
> 
> -
> https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends/sqlalchemy
> 
> Feel free to bug me about questions.

Thanks

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
