On Tue, Sep 12, 2017 at 3:53 PM, Boris Pavlovic <bo...@pavlovic.me> wrote:
> Mike,
>
> Great initiative, unfortunately I wasn't able to attend it, however I
> have some thoughts...
> You can't simplify OpenStack just by fixing a few issues, mostly the
> ones described in the etherpad...
This is exactly how one gets started, though, by dragging the skeletons
to light. I, too, was unable to attend due to scheduling, but as PTL of a
project complicated by years of tech debt, before even being an anointed
OpenStack project, this topic is of particular interest to me.

> TC should work on shrinking the OpenStack use cases and moving towards
> a complete product (box) solution instead of a bunch of barely related
> pieces...

I agree and disagree with what you say here. Shrinking use cases misses
the mark by an order of magnitude or three. However, focusing on the
outcome is exactly what needs to happen for everyone to walk away with
the warm fuzzies, upstream and downstream alike.

> Simple things to improve:
> This is going to allow the community to work together, actually get
> feedback in a standard way, and incrementally improve quality.
>
> 1) There should be one and only one:
> 1.1) deployment/packaging (maybe Docker) upgrade mechanism used by
> everybody
> 1.2) monitoring/logging/tracing mechanism used by everybody
> 1.3) way to configure all services (e.g. the k8s etcd way)
> 2) Projects must have a standardized interface that allows them to be
> used in the same way.
> 3) Testing & R&D should be performed only against this standard
> deployment

You keep using that word. This feels like a "you can have it in any
color you like, so long as it's black" argument. This is great for
manufacturing tangible products that sit on a shelf somewhere. Not so
much for a collection of software, already well into the maturation
phase, that is the collective output of hundreds, nay, thousands of
minds. What you propose almost never happens in practice, as nice as it
sounds. The outcome is significantly more important than what people do
to get there. I hereby refer to xkcd #927 on the topic of standards,
only partly in jest.

> Hard things to improve:
>
> OpenStack projects were split in a far from ideal way, which leads to
> the bunch of gaps that we have now:
> 1.1) Code & functional duplication: Quotas, Schedulers, Reservations,
> Health checks, Logging, Tracing, ....

Yup. Large software projects have some duplication; it's natural and
requires occasional love. It takes people to actively battle the tech
debt, and not everyone has the luxury of a fully dedicated team.

> 1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
> data is stored in Cinder, Nova, Neutron....

SQL is SQL, though, so I don't see what you're getting at. I'm sure some
things need some tuning and queries need some optimization, but I hung
up my DBA hat years ago.

> 1.3) Lack of resources (as every project is doing the same work on the
> same parts again and again)

I read that last part to mean people and not so much technical
limitations. If I've correctly read things with my corporate lens on,
that's a universal pain felt by nearly every specialized field of work,
and OpenStack is by no means unique. Downstream consumers of OpenStack
code are only willing to financially support so many specialists, and
they can support more than the Foundation can. If the problem is people,
convince more people to contribute, since we're remaking the universe.

> What we can do:
>
> 1) Simplify internal communication
> 1.1) Instead of AMQP for internal communication inside projects, use
> just HTTP, load balancing & retries.
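For the sake of arguing about the same thing, I read "just HTTP, load
balancing & retries" as roughly the sketch below. It is only an
illustration of the pattern, not anything that exists in any project
today; the load balancer VIP, the endpoint path, and the retry policy
are all made up.

    # Hypothetical "HTTP + load balancing + retries" client for internal
    # service-to-service calls. Placeholder names throughout.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    def build_session(attempts=3, backoff=0.5):
        """Return a requests session that retries transient 5xx responses."""
        retry = Retry(total=attempts,
                      backoff_factor=backoff,            # exponential backoff
                      status_forcelist=(502, 503, 504))  # "try again later" only
        session = requests.Session()
        adapter = HTTPAdapter(max_retries=retry)
        session.mount("http://", adapter)
        session.mount("https://", adapter)
        return session

    session = build_session()
    # internal-vip.example.net stands in for whatever fronts the service.
    resp = session.get("http://internal-vip.example.net/compute/v2.1/servers",
                       timeout=5)
    resp.raise_for_status()

Note that a policy like this only replays responses that say "try again
later"; anything else still surfaces to the caller.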
In my experience, AMQP has mostly sat there in the background until
someone comes along and touches it. We haven't touched the
openstack-ops-messaging cookbook beyond testing enhancements and
deprecations in at least a cycle, because it just works. Retries just
mask an underlying problem. With my operator hat on, I don't want my
client to try N times if the service is intermittently failing.

> 2) Use API Gateway pattern
> 2.1) Provides one IP address with one client for the high-level API
> 2.2) Allows a significant reduction in load on Keystone, because tokens
> are checked only in the API gateway
> 2.3) Simplifies communication between projects (they are now in a
> trusted network, no need to check the token)
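Again, just so we critique the same thing: I take the gateway proposal
to mean a thin proxy that validates the Keystone token once at the edge
and forwards requests to services that then skip their own check. A
rough sketch follows, with placeholder service names and addresses and
none of the caching or error handling a real deployment would need; the
Keystone call is the v3 token validation, everything else is invented
for illustration.

    # Hypothetical API-gateway sketch -- not an existing OpenStack component.
    # The gateway checks the token once, then proxies over a trusted network.
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    KEYSTONE = "http://keystone.internal:5000"   # placeholder
    SERVICE_TOKEN = "gateway-service-token"      # placeholder credential
    BACKENDS = {"compute": "http://nova.internal:8774",
                "volume": "http://cinder.internal:8776"}

    def token_is_valid(token):
        """Ask Keystone (GET /v3/auth/tokens) whether the token is valid."""
        resp = requests.get(KEYSTONE + "/v3/auth/tokens",
                            headers={"X-Auth-Token": SERVICE_TOKEN,
                                     "X-Subject-Token": token},
                            timeout=5)
        return resp.status_code == 200

    @app.route("/<service>/<path:rest>",
               methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(service, rest):
        token = request.headers.get("X-Auth-Token", "")
        if service not in BACKENDS or not token_is_valid(token):
            return Response("unauthorized", status=401)
        # Past this point the backends trust the gateway; no second check.
        upstream = requests.request(request.method,
                                    "%s/%s" % (BACKENDS[service], rest),
                                    data=request.get_data(),
                                    timeout=30)
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(port=8080)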
I don't see this as something OpenStack development teams should be
beholden to implement and maintain, even if people pay for this
functionality or implement it on their own. That's more of a use case,
not a feature request.

> 3) Fix the OpenStack split
> 3.1) Move common functionality to separate internal services:
> Scheduling, Logging, Monitoring, Tracing, Quotas, Reservations (it
> would be even better if this thing had a more or less monolithic
> architecture)

No, please, just... no. A monolithic architecture is fine for dev, but
it falls apart prematurely in the lifecycle when you throw the spurs to
it.

> 3.2) Somehow deal with the defragmentation of resources, e.g. VM,
> Volume and Network data, which is heavily connected.

That's for the implementation phase, not development. You can put volume
storage and VMs on the same machine, if you want/need to do so. This
smells like... another use case!

> 4) Don't be afraid to break things

Maybe it's time for OpenStack 2: blue polka dots with green stripes!
With a racing stripe! And a whipped pony on top.

> In any case, most people provide an API on top of OpenStack for usage
> In any case, there is no standard and easy way to upgrade
>
> So basically we are not losing anything even if we make
> non-backward-compatible changes and completely rethink the architecture
> and API.

Quis custodiet ipsos custodes? Who ensures the usage APIs align with the
service APIs align with the architecture? What happens when one group
responsible for one API doesn't talk to the other because their
employers changed directions? I'm not convinced an "incremental all the
things" approach can benefit anyone, particularly one that demands more
of people.

> I know this sounds like science fiction, but I believe the community
> will appreciate steps in this direction...

I'm going to invoke PHK here and show my roots: *ahem* Quality happens
only when someone is responsible for it. A dramatic sweeping change from
one extreme to the other is just being along for the ride when the
pendulum swings. It's not time to throw in the towel on OpenStack quite
yet. We're all looking for an agreeable, positive outcome that will
benefit all of our employers and their customers, but it doesn't work to
profess a Grand Unified Way when there needn't necessarily be one. I
thought there needed to be one, on my flight back from Boston. Then I
ran for PTL a third cycle. :)

> Best regards,
> Boris Pavlovic
>
> On Tue, Sep 12, 2017 at 2:33 PM, Mike Perez <thin...@gmail.com> wrote:
>>
>> Hey all,
>>
>> The session is over. I'm hanging near registration if anyone wants to
>> discuss things. Shout out to John for coming by on discussions with
>> simplifying dependencies. I welcome more packagers to join the
>> discussion.
>>
>> https://etherpad.openstack.org/p/simplifying-os
>>
>> —
>> Mike Perez
>>
>>
>> On September 12, 2017 at 11:45:05, Mike Perez (thin...@gmail.com) wrote:
>> > Hey all,
>> >
>> > Back in a joint meeting with the TC, UC, Foundation and the Board,
>> > it was decided that an area of OpenStack to focus on was Simplifying
>> > OpenStack. This was intentionally very broad so the community can
>> > kick start the conversation and help tackle some broad feedback we
>> > get.
>> >
>> > Unfortunately, yesterday there was a low turnout in the
>> > Simplification room. A group of people from the Swift team, Kevin
>> > Fox and Swimingly were nice enough to start the conversation and
>> > give some feedback. You can see our initial etherpad work here:
>> >
>> > https://etherpad.openstack.org/p/simplifying-os
>> >
>> > There are efforts happening every day helping with this goal, and
>> > our team has made some documented improvements that can be found in
>> > our report to the board within the etherpad. I would like to take a
>> > step back with this opportunity to have in-person discussions for us
>> > to identify which areas of simplifying are worthwhile. I'm taking a
>> > break from the room at the moment for lunch, but I encourage people
>> > at 13:30 local time to meet at the simplification room, level B, in
>> > the Big Thompson room. Thank you!
>> >
>> > —
>> > Mike Perez

--
Best,
Samuel Cassiba

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev