I've started in on making Wei's feature and mine mutually exclusive.
We had talked about using radio buttons to do this in the GUI, but I don't
see radio buttons in use at present in the CS GUI.
Instead, I'm following the current paradigm in the GUI where a combo box
presents options to accomplish this.
OK, I've gone ahead and updated Review Board with a new diff.
Please check it out and let me know what you think.
Thanks!
On Mon, Jun 17, 2013 at 2:52 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Sounds good
>
> I have a little bit of clean-up work to do and then I must figure out how
> to generate a patch (since I was using merge rather than rebase during
> development), so it will probably be tomorrow.
Sounds good
I have a little bit of clean-up work to do and then I must figure out how
to generate a patch (since I was using merge rather than rebase during
development), so it will probably be tomorrow.
On Mon, Jun 17, 2013 at 2:50 PM, John Burwell wrote:
> Mike,
>
> Great news. Let me know when Review Board is updated, and I will perform a
> second round review.
Mike,
Great news. Let me know when Review Board is updated, and I will perform a
second round review.
Thanks,
-John
On Jun 17, 2013, at 4:48 PM, Mike Tutkowski
wrote:
> FYI: I added the IOPS-capacity parameter and related code over the weekend.
>
> The final bit of work for me comes when Wei's code is merged into master
> and I pull it down.
FYI: I added the IOPS-capacity parameter and related code over the weekend.
The final bit of work for me comes when Wei's code is merged into master
and I pull it down.
At that point, I need to add the GUI and API logic to support mutual
exclusion of our features.
On Sat, Jun 15, 2013 at 11:12
On Fri, Jun 14, 2013 at 05:56:27PM -0400, John Burwell wrote:
> Mike,
>
> Awesome. +1.
>
> Thanks for your patience with the back and forth,
> -John
+1 - this now looks like a great approach.
> ...product and project whether
> commercial or open source has a different set of features and objectives.
>
> - Simon
> - Original Message -
>
> From: "John Burwell"
> To: dev@cloudstack.apache.org
> Sent: Friday, June 14, 2013 2:59:47 PM
> Subject: Re: [MERGE] disk_io_throttling to MASTER
- Original Message -
From: "John Burwell"
To: dev@cloudstack.apache.org
Sent: Friday, June 14, 2013 2:59:47 PM
Subject: Re: [MERGE] disk_io_throttling to MASTER
Simon,
Yes, it is CloudStack's job to protect, as best it can, from oversubscribing
resources. I would argue...
...engineers to keep track of IOPS utilization, and plan for spindle
augmentation as required.
Is it really the job of CS to become yet another management layer on top of
this?
- Original Message -
From: "Mike Tutkowski"
To: dev@cloudstack.apache.org
Cc: "John Burwell" , "Wei Zhou"
Sent: Friday, June 14, 2013 1:00:26 PM
Subject: Re: [MERGE] disk_io_throttling to MASTER
1) We want number of IOPS currently supported by the SAN.
2) We want the number of IOPS that are committed (sum of min IOPS for each
volume).
We could do the following to keep track of IOPS:
The plug-in could have a timer thread that goes off every, say, 1 minute.
It could query the SAN for the number of IOPS currently in use.
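A rough sketch of that polling idea in Java (names here are hypothetical;
the real plug-in and SAN client APIs may look quite different):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IopsUsagePoller {
    // Hypothetical stand-in for whatever client the plug-in uses to talk
    // to the SAN.
    public interface SanClient {
        long getCurrentIopsUsage();
    }

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private volatile long iopsInUse; // last value reported by the SAN

    // Poll the SAN once a minute, per the suggestion above.
    public void start(final SanClient san) {
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                iopsInUse = san.getCurrentIopsUsage();
            }
        }, 0, 1, TimeUnit.MINUTES);
    }

    public long getIopsInUse() {
        return iopsInUse;
    }
}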
"As I mentioned previously, I am very reluctant for any feature to come
into master that can exhaust resources."
Just wanted to mention that, worst case, the SAN would fail creation of the
volume before allowing a new volume to break the system.
On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski <
Hi John,
Are you thinking we add a column on to the storage pool table, IOPS_Count,
where we add and subtract committed IOPS?
That is easy enough.
How do you want to determine what the SAN is capable of supporting, IOPS-wise?
Remember, we're dealing with a dynamic SAN here...as you add storage nodes,
the total IOPS capacity changes.
Mike,
Querying the SAN only indicates the number of IOPS currently in use. The
allocator needs to check the number of IOPS committed which is tracked by
CloudStack. For 4.2, we should be able to query the number of IOPS committed
to a DataStore, and determine whether or not the number requested can be
satisfied.
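In sketch form, the check John describes might look like this (field and
method names are hypothetical, not CloudStack's actual allocator API):

public class IopsCapacityChecker {
    /**
     * capacityIops would be an admin-supplied ceiling for the pool;
     * committedIops is the running sum of min IOPS already promised
     * to volumes on that pool.
     */
    public static boolean canAllocate(long requestedMinIops,
                                      long committedIops,
                                      long capacityIops) {
        return committedIops + requestedMinIops <= capacityIops;
    }
}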
Yeah, I'm not sure I could come up with anything near an accurate
assessment of how many IOPS are currently available on the SAN (or even a
total number that are available for volumes). Not sure if there's yet an
API call for that.
If I did know this number (total number of IOPS supported by the SAN)...
Mike,
Please see my comments in-line below.
Thanks,
-John
On Jun 13, 2013, at 6:09 PM, Mike Tutkowski
wrote:
> Comments below in red.
>
> Thanks
>
>
> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell wrote:
>
>> Mike,
>>
>> Overall, I agree with the steps below for 4.2. However, we may want to
>> throw an exception if we cannot fulfill a requested QoS.
Alternatively, we could always add a new table, storage_features. It could
contain three columns: Our typical ID column, a foreign key that maps back
to the storage_pool table, and a column for the feature's name (this string
would map to an enumeration in the codebase).
Ex.
storage_pool
ID
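The example above is cut off, but a minimal sketch of such a mapping as a
CloudStack-style entity might look like this (all names hypothetical):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical storage_features table: an ID, a foreign key back to
// storage_pool, and the feature's name.
@Entity
@Table(name = "storage_features")
public class StorageFeatureVO {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private long id;

    @Column(name = "storage_pool_id")
    private long storagePoolId; // FK to storage_pool.id

    @Column(name = "feature")
    private String feature; // maps to an enum in the codebase
}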
Comments below in red.
Thanks
On Thu, Jun 13, 2013 at 3:58 PM, John Burwell wrote:
> Mike,
>
> Overall, I agree with the steps below for 4.2. However, we may want to
> throw an exception if we cannot fulfill a requested QoS. If the user is
> expecting that the hypervisor will provide a particular QoS, and that is
> not possible, it seems like we should inform them rather than silently
> ignore it.
Mike,
Overall, I agree with the steps below for 4.2. However, we may want to
throw an exception if we cannot fulfill a requested QoS. If the user is
expecting that the hypervisor will provide a particular QoS, and that is not
possible, it seems like we should inform them rather than silently ignore it.
Mike,
See my comments in-line below.
Thanks,
-John
On Jun 13, 2013, at 5:31 PM, Mike Tutkowski
wrote:
> My thinking is, for 4.2, while not ideal, we will need to put some burden
> on the admin to configure a Disk Offering in a way that makes sense. For
> example, if he wants to use storage QoS with supported Min and Max values...
> How about set hypervisorType to Any? Haven't taken a look at the master
> change yet.
>
> *From:* Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> *Sent:* Thursday, June 13, 2013 1:41 PM
> *To:* dev@cloudstack.apache.org
> *Cc:* Edison Su
> *Subject:* Re: [MERGE] disk_io_throttling to MASTER (Second Round)
>
> Actually, I am noticing some new behavior around picking a storage pool for
> zone-wide storage.
How about set hypervisorType to Any? Haven't taken a look at the master
change yet.
From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
Sent: Thursday, June 13, 2013 1:41 PM
To: dev@cloudstack.apache.org
Cc: Edison Su
Subject: Re: [MERGE] disk_io_throttling to MASTER (Second Round)
So, here's my suggestion for 4.2:
Accept the values as they are currently required (four new fields for Wei's
feature or two new fields for mine).
The Add Disk Offering dialog needs three new radio buttons:
1) No QoS
2) Hypervisor QoS
3) Storage QoS
The admin needs to specify storage tags that...
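For what it's worth, the mutual exclusion those three buttons imply could be
captured server-side with something as simple as this (hypothetical sketch):

// Hypothetical: one QoS mode per disk offering, so the two features
// can never be enabled at the same time.
public enum QosType {
    NO_QOS,         // neither feature's fields accepted
    HYPERVISOR_QOS, // Wei's throttling fields (max read/write IOPS and BPS)
    STORAGE_QOS     // storage provisioned-IOPS fields (min/max IOPS)
}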
My thinking is, for 4.2, while not ideal, we will need to put some burden
on the admin to configure a Disk Offering in a way that makes sense. For
example, if he wants to use storage QoS with supported Min and Max values,
he'll have to put in a storage tag that references the SolidFire primary
storage.
Comments below in red.
Thanks
On Thu, Jun 13, 2013 at 2:54 PM, John Burwell wrote:
> Mike,
>
> Please see my comment in-line below.
>
> Thanks,
> -John
>
> On Jun 13, 2013, at 1:22 AM, Mike Tutkowski
> wrote:
>
> > Hi John,
> >
> > I've put comments below in red.
> >
> > Thanks!
> >
> >
> > On Wed, Jun 12, 2013 at 10:51 PM, John Burwell wrote:
Mike,
Please see my comment in-line below.
Thanks,
-John
On Jun 13, 2013, at 1:22 AM, Mike Tutkowski
wrote:
> Hi John,
>
> I've put comments below in red.
>
> Thanks!
>
>
> On Wed, Jun 12, 2013 at 10:51 PM, John Burwell wrote:
>
>> Mike,
>>
>> First and foremost, we must ensure that these two features are mutually
>> exclusive in 4.2.
Actually, I am noticing some new behavior around picking a storage pool for
zone-wide storage.
The current implementation that I've brought down from master no longer
finds storage for me because my plug-in is zone wide and not associated
with a hypervisor.
Edison?
On Thu, Jun 13, 2013 at 1:13
Hi Edison,
I notice after I updated from master that Hypervisor Type is now a required
parameter for creating a storage pool.
Should I just use HypervisorType.Any?
Thanks!
On Thu, Jun 13, 2013 at 12:21 PM, John Burwell wrote:
> Wei,
>
> I published my review. I didn't see any code validating the rate values
> (e.g. values greater than 0, values less than a maximum value). Did I miss
> it?
Wei,
I published my review. I didn't see any code validating the rate values
(e.g. values greater than 0, values less than a maximum value). Did I miss it?
I also noticed that 0 is being used when no value has been specified. I
recommend using the Long type rather than the primitive long in order to
distinguish "not specified" from an explicit 0.
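A sketch of the kind of validation John is describing (the ceiling is
hypothetical; the real limit would come from configuration):

public final class RateValidator {
    private static final long MAX_RATE = 1000000L; // hypothetical upper bound

    /**
     * Long (not primitive long) lets null mean "not specified", so an
     * explicit 0 can be rejected instead of silently standing in for unset.
     */
    public static void validateRate(String name, Long value) {
        if (value == null) {
            return; // not specified; nothing to check
        }
        if (value <= 0 || value > MAX_RATE) {
            throw new IllegalArgumentException(name + " must be between 1 and "
                    + MAX_RATE + ", got " + value);
        }
    }
}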
Yeah, I think it's fine for Wei to merge in his changes. I can then fetch
and merge into my branch and add the additional GUI and API code for mutual
exclusion.
On Thu, Jun 13, 2013 at 9:34 AM, Wei ZHOU wrote:
> John,
>
> Please review the code on https://reviews.apache.org/r/11782
> The storage provisioned IOPS does not affect hypervisor throttled I/O, I
> think.
John,
Please review the code on https://reviews.apache.org/r/11782
The storage provisioned IOPS does not affect hypervisor throttled I/O, I
think.
Mike may change UI and java code for storage provisioned IOPS after the
merge.
-Wei
2013/6/13 John Burwell
> Wei,
>
> There are open questions on the thread regarding mutual exclusion of
> hypervisor throttled I/O and storage provisioned IOPS.
Wei,
There are open questions on the thread regarding mutual exclusion of
hypervisor throttled I/O and storage provisioned IOPS. We need to
understand how and where it will be implemented in both the UI and
service layers. Also, can you resend the Review Board review? My
email search skills have failed me...
If nobody objects, I will merge into master today.
-Wei
2013/6/11 John Burwell
> Mike,
>
> From a CloudStack perspective, it will keep implementation specific
> concepts from the base data model, and provide a great test case to develop
> a mechanism to capture this information in 4.3. Ideally, I want CloudStack
> to exploit these implementation-specific features.
No problem, John. Thanks for taking the time!
I've addressed most issues in the review tonight and should be able to
complete the remaining tomorrow.
Talk to you later!
On Wed, Jun 12, 2013 at 5:21 PM, John Burwell wrote:
> Mike,
>
> I just published my review @ https://reviews.apache.org/r/11479/.
Hi John,
I've put comments below in red.
Thanks!
On Wed, Jun 12, 2013 at 10:51 PM, John Burwell wrote:
> Mike,
>
> First and foremost, we must ensure that these two features are mutually
> exclusive in 4.2. We don't want to find a configuration that contains both
> hypervisor and storage IOPS guarantees that leads to non-deterministic
> operations.
Mike,
First and foremost, we must ensure that these two features are mutually
exclusive in 4.2. We don't want to find a configuration that contains both
hypervisor and storage IOPS guarantees that leads to non-deterministic
operations. Restricting QoS expression to be either hypervisor or storage...
Hi,
Yeah, Alex, I think that's the way we were planning (with storage tags). I
believe John was just throwing out an idea that - in addition to storage
tags - we could look into these allocators (storage QoS being preferred,
then hypervisor QoS if storage QoS is not available, but hypervisor QoS is...
Mike,
I just published my review @ https://reviews.apache.org/r/11479/.
I apologize for the delay,
-John
On Jun 12, 2013, at 12:43 PM, Mike Tutkowski
wrote:
> No problem, John.
>
> I still want to re-review it by myself before coming up with a new patch
> file.
>
> Also, maybe I should first wait for Wei's changes to be checked in and
> merge those into mine before generating a new patch file?
Sent: Wednesday, June 12, 2013 3:31 PM
> To: John Burwell
> Cc: dev@cloudstack.apache.org; Wei Zhou
> Subject: Re: [MERGE] disk_io_throttling to MASTER
>
> We'd also have to recognize that if the Min value is filled in and Storage
> QoS is not available that Hypervisor QoS (rate limiting here) cannot
> satisfy that constraint.
We'd also have to recognize that if the Min value is filled in and Storage
QoS is not available that Hypervisor QoS (rate limiting here) cannot
satisfy that constraint.
On Wed, Jun 12, 2013 at 4:26 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Yeah, this is interesting.
>
> We'll have to wait and see what Wei's thoughts are on this.
Yeah, this is interesting.
We'll have to wait and see what Wei's thoughts are on this.
On Wed, Jun 12, 2013 at 4:17 PM, John Burwell wrote:
> Mike,
>
> Yes. To your point, the appropriate logic would be to check the Volume
> allocated by the StorageAllocator. If it doesn't support a QoS, the
Mike,
Yes. To your point, the appropriate logic would be to check the Volume
allocated by the StorageAllocator. If it doesn't support a QoS, then the VM
allocator would attempt to fulfill the QoS through the hypervisor. Another
question would be -- what would the behavior be in the event...
It looks like Wei has four new fields:
Max Read IOPS, Max Write IOPS, Max Read BPS, and Max Write BPS
I have two:
Max IOPS and Min IOPS
Mine can be set by the admin or the admin can choose to have the end user
fill in these values.
On Wed, Jun 12, 2013 at 4:13 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
As far as compressing read and write IOPS fields into one value (one for
Min and one for Max), that is Wei's area as my feature does not distinguish
between read and write IOPS.
On Wed, Jun 12, 2013 at 4:11 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> I see, John.
>
> That is an interesting idea.
I see, John.
That is an interesting idea.
We'd also have to change the storage allocator(s) to favor QoS-supported
storage if those fields are filled in.
On Wed, Jun 12, 2013 at 4:09 PM, John Burwell wrote:
> Mike,
>
> My thought is that we present the min/max IOPS fields for read/write
> operations for all compute and disk offerings.
Mike,
My thought is that we present the min/max IOPS fields for read/write operations
for all compute and disk offerings. When the VM is allocated, we determine the
best way to fulfill that QoS. It sounds like storage-level guarantees would
always be preferred. If no storage is available to fulfill the QoS, the
hypervisor would attempt to provide it.
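A condensed sketch of that preference order (types and method names are
hypothetical stand-ins, not the actual allocator API):

import java.util.List;

public class QosPlacementSketch {
    // Hypothetical stand-in for the real storage pool type.
    public interface StoragePool {
        boolean supportsStorageQos();
        boolean canCommitIops(long minIops);
    }

    /**
     * Prefer a pool that can guarantee QoS at the storage layer; return null
     * to signal "fall back to hypervisor throttling" (which can cap at a Max
     * but can never honor a Min IOPS floor).
     */
    public static StoragePool chooseStorageQosPool(long minIops,
                                                   List<StoragePool> pools) {
        for (StoragePool pool : pools) {
            if (pool.supportsStorageQos() && pool.canCommitIops(minIops)) {
                return pool;
            }
        }
        return null;
    }
}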
I agree with your "None" radio button point, John.
On Wed, Jun 12, 2013 at 4:03 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> I hate to say it, but I believe Storage QoS with a Min and Max will always
> be optimal over hypervisor rate limiting.
>
> The only time you'd want to use hypervisor rate limiting is if storage QoS
> is not available.
I hate to say it, but I believe Storage QoS with a Min and Max will always
be optimal over hypervisor rate limiting.
The only time you'd want to use hypervisor rate limiting is if storage QoS
is not available.
We currently have no way to know what storage the root or data disk will be
deployed to
Mike,
Please see my comments/questions in-line below.
Thanks,
-John
On Jun 12, 2013, at 5:37 PM, Mike Tutkowski
wrote:
> Hi Wei,
>
> So, not entirely sure I follow.
>
> Will what I wrote earlier work? Here is a copy of what I wrote:
>
> Let's just call these radio buttons 1) Hypervisor QoS and 2) Storage QoS
> for the purpose of this e-mail.
Hi Wei,
So, not entirely sure I follow.
Will what I wrote earlier work? Here is a copy of what I wrote:
Let's just call these radio buttons 1) Hypervisor QoS and 2) Storage QoS
for the purpose of this e-mail.
By default, neither radio button is selected and no QoS fields are visible.
If the...
Mike, John
The Disk I/O Throttling works like:
(1) For User VM,
(1.1) root disks: service offering -> default value in global configuration
(1.2) data disks: disk offering -> default value in global configuration
(2) For System VM
root disks: service offering
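In Java-flavored pseudocode, that resolution order is just (names
hypothetical):

public class ThrottleRateSketch {
    // A rate set on the service/disk offering wins; otherwise the global
    // configuration default applies.
    public static long resolveThrottleRate(Long offeringRate,
                                           long globalDefault) {
        return offeringRate != null ? offeringRate : globalDefault;
    }
}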
-Wei
2013/6/12 John Burwell
> Mike,
OK, we will have to wait for Wei on that one.
My guess (and that's all it is: a guess) is that, since Wei's four new fields
have been added to both the Add Compute Offering and the Add Disk Offering
dialogs, Hypervisor QoS is on a disk-by-disk basis. Otherwise you could have
conflicting information.
Mike,
That is one possibility. The other possibility is that the hypervisor is
going to throttle I/O on all disks attached. Therefore, we need answers to the
following questions:
1. Is I/O throttling applied to the root disk or all disks attached to
the VM?
2. If I/O throttling is...
Hey John,
Perhaps I don't fully understand how Wei's feature works.
I guess I thought if you choose Hypervisor QoS, you do so on Compute
Offerings (root disks) and/or Disk Offerings (data disks).
From my thinking, your root disk could be under Hypervisor QoS, but your
data disk could be under Storage QoS.
Mike,
As I understand these two patches, the throttled I/O settings are applied from
the hypervisor side, and possibly defined in a compute offering, whereas
provisioned IOPS are defined on the storage side through disk offerings. I
don't see how the management server could enforce this mutual exclusion...
Hi John,
So, maybe I'm wrong about this, but what I was thinking is that we'd build
two radio buttons into the Add Disk Offering dialog (we can ignore Compute
Offerings for 4.2 since my feature doesn't yet support them).
Let's just call these radio buttons 1) Hypervisor QoS and 2) Storage QoS
for the purpose of this e-mail.
Mike,
Looking through the code, I am trying to understand how CreateDiskOfferingCmd
would have the context to identify the conflict. Naively, it seems to me that
this rule would need to be enforced when a virtual machine is being deployed.
Looking through the code, it seems like we should add...
Hi John,
So, here's what I was planning to do. Of course feel free to correct me on
this approach.
I think it's OK if Wei merges his code into master and then I can draw from
the main repo and merge master into mine locally.
1) Once I get Wei's code and merge, I plan to add a little GUI code to...
Mike,
Yes, these server-side rails need to be defined and implemented before either
patch can be merged. From my perspective, I would like to see the rule
implemented in the hypervisor as part of the validation of the virtual machine
definition. We also need to make sure that this mutual exclusion...
Currently they are not yet implemented.
We have to make sure they are implemented in the GUI from a usability
standpoint, but the API must check for consistency and throw an exception
if necessary.
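A sketch of the kind of server-side check that implies (parameter names
hypothetical; the real API command classes differ):

public class QosExclusionCheck {
    /**
     * Hypothetical API-layer guard: a disk offering may carry the hypervisor
     * throttling fields or the storage provisioned-IOPS fields, never both.
     */
    public static void validate(Long bytesReadRate, Long bytesWriteRate,
                                Long iopsReadRate, Long iopsWriteRate,
                                Long minIops, Long maxIops) {
        boolean hypervisorQos = bytesReadRate != null || bytesWriteRate != null
                || iopsReadRate != null || iopsWriteRate != null;
        boolean storageQos = minIops != null || maxIops != null;
        if (hypervisorQos && storageQos) {
            throw new IllegalArgumentException("Hypervisor I/O throttling and "
                    + "storage provisioned IOPS are mutually exclusive");
        }
    }
}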
On Wed, Jun 12, 2013 at 11:03 AM, John Burwell wrote:
> Mike,
>
> Are the checks only implemented in the UI?
Mike,
Are the checks only implemented in the UI?
Thanks,
-John
On Jun 12, 2013, at 1:02 PM, Mike Tutkowski
wrote:
> Hi John,
>
> Wei and I have discussed making the two features mutually exclusive. We
> agree with you that only one should be active at a time. We plan to
> implement in the GUI a mechanism (maybe radio buttons) to turn his feature
> on and mine off and vice versa.
Hi John,
Wei and I have discussed making the two features mutually exclusive. We
agree with you that only one should be active at a time. We plan to
implement in the GUI a mechanism (maybe radio buttons) to turn his feature
on and mine off and vice versa.
I was thinking if I wait until he checks in...
Mike,
How have Wei and you resolved the issue of conflicting QoS mechanisms
between the Hypervisor and Storage layers? Have the affected FSs been
updated with that decision?
In terms of merge timing, can you describe the dependencies between the patches?
Thanks,
-John
On Jun 12, 2013, at 12
No problem, John.
I still want to re-review it by myself before coming up with a new patch
file.
Also, maybe I should first wait for Wei's changes to be checked in and
merge those into mine before generating a new patch file?
On Wed, Jun 12, 2013 at 10:40 AM, John Burwell wrote:
> Mike,
>
> I just realized that I forgot to publish my review. I am offline ATM,
> but I will publish it in the next couple of hours.
Mike,
I just realized that I forgot to publish my review. I am offline ATM,
but I will publish it in the next couple of hours.
Do you plan to update your patch in Review Board?
Sorry for the oversight,
-John
On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
wrote:
> Hi Edison, John, and Wei (and whoever else is reading this :) ),
On Wed, Jun 12, 2013 at 12:26:32AM -0600, Mike Tutkowski wrote:
> Thanks for all the work on this everyone!
+1
Hi Edison, John, and Wei (and whoever else is reading this :) ),
Just an FYI that I believe I have implemented all the areas we wanted
addressed.
I plan to review the code again tomorrow morning or afternoon, then send in
another patch.
Thanks for all the work on this everyone!
On Tue, Jun 11,
Sure, that sounds good.
On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU wrote:
> Hi Mike,
>
> It looks like the two features do not have many conflicts in Java code,
> except for the CloudStack UI.
> If you do not mind, I will merge disk_io_throttling branch into master this
> week, so that you can develop based on it.
Hi Mike,
It looks like the two features do not have many conflicts in Java code,
except for the CloudStack UI.
If you do not mind, I will merge disk_io_throttling branch into master this
week, so that you can develop based on it.
-Wei
2013/6/11 Mike Tutkowski
> Hey John,
>
> The SolidFire patch does not depend on the object_store branch.
Will you be on today's GoToMeeting? We can talk about your stuff.
> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Tuesday, June 11, 2013 10:20 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE] disk_io_throttling to MASTER
I am OK with it either way.
Edison, do you still have a preference?
Thanks!
On Tue, Jun 11, 2013 at 11:14 AM, John Burwell wrote:
> Mike,
>
> So my dependency graph below is incorrect. If there is no dependency
> between object_store and solidfire, why wouldn't we merge them separately? I
> ask because the object_store patch is already very large.
My comments are below in *red*.
Thanks!
On Tue, Jun 11, 2013 at 11:01 AM, John Burwell wrote:
> Mike,
>
> Please see my responses in-line below.
>
> Thanks,
> -John
>
> On Jun 10, 2013, at 11:08 PM, Mike Tutkowski
> wrote:
>
> > Let me make sure I follow where we're going here:
> >
> > 1) There should be NO references to hypervisor code in the storage
> > plug-ins code...
Mike,
So my dependency graph below is incorrect. If there is no dependency between
object_store and solidfire, why wouldn't we merge them separately? I ask
because the object_store patch is already very large. As a reviewer trying to
comprehend the changes, a series of smaller patches is easier...
Hey John,
The SolidFire patch does not depend on the object_store branch, but - as
Edison mentioned - it might be easier if we merge the SolidFire branch into
the object_store branch before object_store goes into master.
I'm not sure how the disk_io_throttling fits into this merge strategy.
Perhaps...
Mike,
We have a delicate merge dance to perform. The disk_io_throttling, solidfire,
and object_store appear to have a number of overlapping elements. I understand
the dependencies between the patches to be as follows:
object_store <- solidfire -> disk_io_throttling
Am I correct that...
Mike,
Please see my responses in-line below.
Thanks,
-John
On Jun 10, 2013, at 11:08 PM, Mike Tutkowski
wrote:
> Let me make sure I follow where we're going here:
>
> 1) There should be NO references to hypervisor code in the storage plug-ins
> code (this includes the default storage plug-in, which currently sends
> several commands to the hypervisor in use).
Also, if we are good with Edison merging my code into his branch before
going into master, I am good with that.
We can remove the StoragePoolType.Dynamic code after his merge and we can
deal with Burst IOPS then, as well.
On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
Let me make sure I follow where we're going here:
1) There should be NO references to hypervisor code in the storage plug-ins
code (this includes the default storage plug-in, which currently sends
several commands to the hypervisor in use (although it does not know which
hypervisor (XenServer, ESX, etc.) it is).
"+1 -- Burst IOPS can be implemented while avoiding implementation
attributes. I always wondered about the details field. I think we should
beef up the description in the documentation regarding the expected format
of the field. In 4.1, I noticed that the details are not returned on the
createStoragePool..."
> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Monday, June 10, 2013 4:07 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE] disk_io_throttling to MASTER
>
> That's a good point about the "managed" property being storable in the
> storage_pool_details table.
That's a good point about the "managed" property being storable in the
storage_pool_details table. We don't need another column for it in the
storage_pool table. The current url field is where this kind of
information can be passed along, and it can be stored in the
storage_pool_details table, if need be.
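A sketch of that flow (the key=value;key=value query format is an
assumption; the real URL format is driver-specific):

import java.util.HashMap;
import java.util.Map;

public class UrlDetailsSketch {
    // Hypothetical: pull driver-specific flags such as "managed" out of the
    // storage URL's query part so they can be written to
    // storage_pool_details.
    public static Map<String, String> parseUrlDetails(String url) {
        Map<String, String> details = new HashMap<String, String>();
        int q = url.indexOf('?');
        if (q < 0) {
            return details;
        }
        for (String pair : url.substring(q + 1).split(";")) {
            String[] kv = pair.split("=", 2);
            details.put(kv[0], kv.length > 1 ? kv[1] : "true");
        }
        return details;
    }
}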
Mike,
From a CloudStack perspective, it will keep implementation-specific concepts
out of the base data model, and provide a great test case for developing a
mechanism to capture this information in 4.3. Ideally, I want CloudStack to
exploit these implementation-specific features. I just want to prevent them
from leaking into the base data model.
I am pretty sure it is not a blocker for me to drop Burst IOPS. Let me run
that past Product Management, but I don't think it's a problem.
On Mon, Jun 10, 2013 at 4:01 PM, John Burwell wrote:
> Mike,
>
> Yes. I realize my other reply did not explicitly state leaving the Min
> and Max IOPS fields in the data model, as these seem to be generic terms
> across storage implementations.
I see the unmanaged() method in the life cycle object now (didn't know this
was there before). I also notice it is not being invoked anywhere.
I can make use of this method to help the storage framework determine if it
should send a message to the hypervisor in use to create an SR (or
datastore for VMware).
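A sketch of how that decision might look (only the unmanaged() method comes
from the thread; everything else here is a hypothetical stand-in):

public class ManagedStorageSketch {
    public interface DataStoreLifeCycle { boolean unmanaged(); }
    public interface HypervisorApi { void createStorageRepository(); }

    // Only ask the hypervisor to create an SR/datastore when the framework,
    // not the storage plug-in, is responsible for it.
    public static void prepareStorage(DataStoreLifeCycle lc, HypervisorApi hv) {
        if (lc.unmanaged()) {
            hv.createStorageRepository();
        }
        // Otherwise the plug-in manages provisioning on the SAN itself.
    }
}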
Mike,
Yes. I realize my other reply did not explicitly state leaving the Min and Max
IOPS fields in the data model, as these seem to be generic terms across storage
implementations.
Thanks,
-John
On Jun 10, 2013, at 5:57 PM, Mike Tutkowski
wrote:
> More generally speaking, you're looking to remove Burst IOPS from
> CloudStack for 4.2, but we would keep Min and Max (and they would be
> displayed in the Disk Offering dialog as I've proposed)?
Mike,
I am asking whether or not we can defer the entire notion of Burst IOPS in the
data model (e.g. the driver could default it to Max IOPS when configuring a SAN
volume) for 4.2. In 4.3, we design a facility for extended driver data that
would allow drivers to expose these implementation-specific attributes.
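The deferral John suggests could be as small as this in the driver
(hypothetical sketch):

public class BurstIopsSketch {
    // With no Burst IOPS column in 4.2, the driver simply treats burst as
    // equal to max when configuring a SAN volume.
    public static long effectiveBurstIops(Long burstIops, long maxIops) {
        return burstIops != null ? burstIops : maxIops;
    }
}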
More generally speaking, you're looking to remove Burst IOPS from
CloudStack for 4.2, but we would keep Min and Max (and they would be
displayed in the Disk Offering dialog as I've proposed)?
On Mon, Jun 10, 2013 at 3:52 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Just to make sure I understand your request, are you looking to display Min
> and Max (as long as Wei's feature is not in use), but not display Burst
> IOPS?
Just to make sure I understand your request, are you looking to display Min
and Max (as long as Wei's feature is not in use), but not display Burst
IOPS?
On Mon, Jun 10, 2013 at 3:50 PM, John Burwell wrote:
> Mike,
>
> My concern becomes that we start ballooning the data model and user
> interface with fields that are documented as, "If using SolidFire, then
> Burst IOPS is honored and foo and bar are not."
Mike,
My concern becomes that we start ballooning the data model and user interface
with fields that are documented as, "If using SolidFire, then Burst IOPS is
honored and foo and bar are not. For solution X, Burst IOPS is ignored, but
foo and bar apply." It may have to hold until 4.3, but...
Right...all of that makes sense to me, Edison.
I think John's concern with my patch file (which implements the approach
you outlined in your last paragraph) is that he didn't like the "attach"
command performing logic to create an SR if the SR isn't present.
He wanted a new hypervisor command to...
Hi,
I don't think we can restrict Burst to SolidFire only because there really
is no way to know that the Disk Offering will be serviced by SolidFire.
This really is a flaw with the way CloudStack handles storage tags.
For example, the Disk Offering could use a storage tag that references
SolidFire primary storage...
Mike,
Glad to see this. This is what I have questioned about.
I accepted John's and your opinion that hypervisor IOPS and storage IOPS are
mutually exclusive. The way you mentioned is a good way to go.
A small question is: will Burst IOPS only appear when the storage device is
SolidFire?
-Wei
A major issue for current QoS providers is not being able to utilize over a
Max amount even when it is highly desirable and the storage system can
support it. That's why I'm thinking it will be a feature others attempt to
imitate.
On Mon, Jun 10, 2013 at 2:44 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
My thinking is that Min and Max are industry standard and Burst is a new
concept that could catch on.
On Mon, Jun 10, 2013 at 2:29 PM, John Burwell wrote:
> Wei,
>
> In this case, we can have the hypervisor or storage providing the quality
> of service guarantee. Naively, it seems reasonable to...