That sounds good, Syed.

On Jul 27, 2017, at 2:14 PM, Syed Ahmed <sah...@cloudops.com> wrote:
Mike, you are absolutely right. I have added 4 new fields in the disk_offering table. The driver code won't need to change, as I would pass the min and max IOPS after translating them. I am not using a fifth parameter since it is an either-or situation: if you pass IOPS/GB in your API call and also pass an IOPS value, I will error out saying that you can only use one of those.

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:53 PM, Tutkowski, Mike <mike.tutkow...@netapp.com> wrote:

So then, based on the use case you mentioned, you are saying you don't really care about minimum limits, right?

Are the values you specify for the disk offering going to be translated into the standard min and max values that get stored in the volumes table? If that is the case, then the storage driver code won't need to change. You would perform the translation and then pass in the min and max values to the driver as is done today.

In that situation, you would only need four new fields in the cloud.disk_offering table. Perhaps a fifth column saying whether you were using IOPS/GB or the standard way.

On Jul 27, 2017, at 12:45 PM, Syed Ahmed <sah...@cloudops.com> wrote:

Hi Mike,

In the case of min and max IOPS values for a specific offering, there is another use case: we want to offer tiered storage. Right now, if we have a disk offering, there is no way for us to limit the IOPS that the customer can set. We want to have, say, an offering which scales up to 10k IOPS; if they want more IOPS, they must switch to a higher-tiered offering which has its values set to a higher limit.

As for compatibility with existing offerings: you are right, the existing offerings will still work as expected. An IOPS/GB setting will be used independently of the current method (fixed or custom).

Thanks,
-Syed

On Thu, Jul 27, 2017 at 2:34 PM, Tutkowski, Mike <mike.tutkow...@netapp.com> wrote:

Hi Syed,

I have a couple of questions.

What about the minimum number of IOPS a storage provider can support? For example, with SolidFire, in some releases we can go down as low as 100 IOPS per volume and in newer releases as low as 50 IOPS per volume. Perhaps you should just leave it to the storage driver to confine itself to its minimum and maximum values. This would not require such parameters to be passed to the disk offering.

Another question I have is how compatibility will work between this proposed feature and the existing way this works. I assume it will be an either-or situation.

Thanks!
Mike

> On Jul 27, 2017, at 9:34 AM, Syed Ahmed <sah...@cloudops.com> wrote:
>
> Hi All,
>
> I am planning to add 4 new parameters to the disk offering. The use case for this is as follows:
>
> We want to provide a provisioned-IOPS-style offering to our customers with managed storage like SolidFire. The model is similar to GCE, where IOPS scale with the size based on a predefined ratio. For this I want to add two options: minIOPSPerGB and maxIOPSPerGB. Now, depending on what storage you have, there are limits on the highest values for your min and max IOPS, beyond which you don't want to scale your IOPS (SolidFire, for example, can do 10k min IOPS and 20k max IOPS). To support this, I have to add two more parameters: highestMinIOPS and highestMaxIOPS.
>
> This should work with existing disk offerings without problem. I am looking for comments on this approach. Would really appreciate your reviews.
>
> Thanks,
> -Syed
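For anyone following the thread, here is a minimal Java sketch of the behaviour being discussed: translating an IOPS/GB offering into the absolute min/max IOPS values the storage driver already consumes, capping them at highestMinIOPS/highestMaxIOPS, and rejecting API calls that pass both IOPS/GB and a fixed IOPS value. The class and method names are hypothetical illustrations, not actual CloudStack code.

```java
// Illustrative sketch only: class and method names are hypothetical, not the
// actual CloudStack API. It shows how an IOPS/GB offering could be translated
// into the min/max IOPS values the storage driver already consumes.
public class IopsPerGbTranslator {

    /**
     * Translates an IOPS/GB disk offering into absolute min/max IOPS for a
     * volume of the given size, capping at the offering's highest limits
     * (e.g. 10k min / 20k max IOPS for the SolidFire example above).
     */
    public static long[] translate(long sizeGb,
                                   long minIopsPerGb, long maxIopsPerGb,
                                   long highestMinIops, long highestMaxIops) {
        long minIops = Math.min(sizeGb * minIopsPerGb, highestMinIops);
        long maxIops = Math.min(sizeGb * maxIopsPerGb, highestMaxIops);
        return new long[] { minIops, maxIops };
    }

    /**
     * IOPS/GB and fixed IOPS are mutually exclusive, as described above:
     * reject API calls that specify both.
     */
    public static void validate(Long iopsPerGb, Long fixedIops) {
        if (iopsPerGb != null && fixedIops != null) {
            throw new IllegalArgumentException(
                "Specify either IOPS/GB or a fixed IOPS value, not both.");
        }
    }

    public static void main(String[] args) {
        validate(null, 500L); // OK: fixed IOPS only

        // 100 GB volume, 50/100 IOPS per GB, capped at 10k/20k IOPS
        long[] iops = translate(100, 50, 100, 10_000, 20_000);
        System.out.println("minIops=" + iops[0] + " maxIops=" + iops[1]);
        // prints: minIops=5000 maxIops=10000
    }
}
```

Under this scheme the driver interface is untouched; only the offering-to-volume translation step would learn about the four new fields.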