OK, thanks for logging it!
On Fri, Jan 10, 2014 at 11:51 AM, Marcus Sorensen wrote:
> too late :-)

too late :-)

On Fri, Jan 10, 2014 at 11:47 AM, Mike Tutkowski wrote:
> Yeah, I agree. I can log a bug for this later and include the contents of
> this e-mail.

Yeah, I agree. I can log a bug for this later and include the contents of
this e-mail.

On Fri, Jan 10, 2014 at 10:33 AM, Marcus Sorensen wrote:
> This should be simple enough to fix by making sure the avoid object
> references pools matching the current tag, and only the current tag,
> every time.

This should be simple enough to fix by making sure the avoid object
references pools matching the current tag, and only the current tag,
every time. This means removing a matching pool from the avoid set if
it exists in the current avoid set, after the state where all pools
are set to avoid.
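The pruning described above can be sketched as a toy model (plain Python for illustration only; this is not CloudStack's actual Java, and the `Pool` class and `allocate()` helper are invented names for the sketch):

```python
class Pool:
    def __init__(self, name, tag):
        self.name, self.tag = name, tag

def allocate(volume_tag, pools, avoid):
    # The proposed fix: before considering candidates, drop any pool
    # matching the *current* tag from 'avoid', so entries left over
    # from a previous volume's pass cannot block this one.
    for pool in pools:
        if pool.tag == volume_tag:
            avoid.discard(pool)
    chosen = None
    for pool in pools:
        if pool in avoid:
            continue
        if pool.tag == volume_tag:
            chosen = pool
        else:
            avoid.add(pool)  # non-matching pools still go into 'avoid'
    return chosen

foo_pool = Pool("ps-foo", "foo")
bar_pool = Pool("ps-bar", "bar")
avoid = set()  # shared across the whole deployment plan
data_pool = allocate("bar", [foo_pool, bar_pool], avoid)  # data disk pass
root_pool = allocate("foo", [foo_pool, bar_pool], avoid)  # root disk pass
print(data_pool.name, root_pool.name)  # ps-bar ps-foo
```

With the prune step, the "foo" pool that the data-disk pass put into avoid is released before the root-disk pass, so both volumes find their pools.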
Yeah, the object 'avoid' in the deployment planner is passed along
throughout the whole chain and added to, so the non-matching data disk
pool ends up in avoid when searching for a root disk pool, and at that
point it will never be chosen. What's kind of interesting as well is
that the opposite is
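The failure mode described above can be modeled in miniature (again, illustrative Python rather than the real deployment-planner code; `Pool` and `allocate()` are made-up names):

```python
class Pool:
    def __init__(self, name, tag):
        self.name, self.tag = name, tag

def allocate(volume_tag, pools, avoid):
    # Mimics one allocator pass: pools whose tag does not match the
    # current volume's tag are added to the shared 'avoid' set and
    # are never reconsidered by later passes.
    chosen = None
    for pool in pools:
        if pool in avoid:
            continue
        if pool.tag == volume_tag:
            chosen = pool
        else:
            avoid.add(pool)
    return chosen

foo_pool = Pool("ps-foo", "foo")
bar_pool = Pool("ps-bar", "bar")
avoid = set()  # one avoid set shared across the whole deployment plan
data_pool = allocate("bar", [foo_pool, bar_pool], avoid)  # picks ps-bar, avoids ps-foo
root_pool = allocate("foo", [foo_pool, bar_pool], avoid)  # ps-foo already avoided
print(data_pool.name, root_pool)  # ps-bar None
```

Because the data-disk pass put the "foo"-tagged pool into the shared avoid set, the later root-disk pass can never choose it, which matches the deploy-together failure described in this thread.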
I added some debug, and do see some weird stuff, like:

2014-01-10 10:04:27,335 DEBUG [storage.allocator.ClusterScopeStoragePoolAllocator] (Job-Executor-14:job-22034 = [ 0946b816-2a5d-433f-a90b-853a465db45a ]) Found pools matching tags: [Pool[484|BSSAN]]
2014-01-10 10:04:27,336 DEBUG [storage.alloc

Well, we can wait and see if anyone disagrees that it's a bug. Maybe if no
one does by end of day tomorrow I can log a bug for it.
On Thu, Jan 9, 2014 at 11:35 PM, Marcus Sorensen wrote:
> Sure, I just wanted to make sure it wasn't expected first. I thought
> perhaps the service offering was supposed to trump all in the case of
> deploy.

Sure, I just wanted to make sure it wasn't expected first. I thought
perhaps the service offering was supposed to trump all in the case of
deploy.
On Thu, Jan 9, 2014 at 11:33 PM, Mike Tutkowski wrote:
> Would you like me to open a bug, Marcus, and use your example or were you
> planning on doing so?

Would you like me to open a bug, Marcus, and use your example or were you
planning on doing so?
Thanks
On Thu, Jan 9, 2014 at 10:56 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
> Now I remember why I didn't log the bug...I wanted to repro it to
> understand the sequence in more detail.

Now I remember why I didn't log the bug...I wanted to repro it to
understand the sequence in more detail.
Sounds like you have it nailed down, though (you have reproduced this and
understand when it doesn't work).
Now that you mention it, I think CS does get confused about this. I may
have seen this last week.
The way it works (depending on if you're using an ISO or a template), CS
acts in a non-intuitive way. In one case (I can't remember if this is for
ISOs or templates), you pick a Compute Offering and a
*allocates the root disk*
On Thu, Jan 9, 2014 at 10:47 PM, Marcus Sorensen wrote:
> To clarify, deploying service offering 'foo' and then
> creating/attaching a disk offering 'bar' afterward works fine, but if
> we try to deploy together, it doesn't interrogate any storage pools
> for capacity on the data disk deploy.

To clarify, deploying service offering 'foo' and then
creating/attaching a disk offering 'bar' afterward works fine, but if
we try to deploy together, it doesn't interrogate any storage pools
for capacity on the data disk deploy. The behavior is as if the code
asks for all storage pools matching 'f
In addition, do check that there is at least one primary storage (PS) for each
of the tags in the same cluster.

Pasting the snippet of logs with the failure would help. (But do paste the
entire snippet for the VM deployment.)

On 09/01/14 8:14 PM, "Mike Tutkowski" wrote:
> Are you saying there was no primary storage tagged "bar"?
No, we have two primary storages, one tagged "foo", one tagged "bar".
We have two disk offerings implementing these. We can verify that data
disks deploy properly to both. Then we have a service offering with
storage tag "foo". We can deploy a VM with a data disk tagged "foo"
but not one named "bar".
Are you saying there was no primary storage tagged "bar"?
If that is the case, I guess I would expect the deployment of the VM to
fail because not all of the criteria could be met.
On Thu, Jan 9, 2014 at 7:08 PM, Marcus Sorensen wrote:
> Is this a bug (perhaps fixed), or expected behavior?