Agree with Sean. A short error name in the response body would be better for
applications that consume OpenStack. To my understanding, the
X-OpenStack-Error-Help-URI proposed by jpipes will be a URI pointing to an
error resolution method. Usually, a consumer application needn't load its
content.
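As a sketch, a response along these lines would let a consumer branch on a short machine-readable name without ever fetching the help URI. The error name, message, and URI below are invented for illustration, not real OpenStack values:

```python
import json

# Invented example values; not a real OpenStack error payload.
headers = {
    "X-OpenStack-Error-Help-URI": "http://docs.example.org/errors/compute-host-not-found",
}
body = json.dumps({
    "error": {
        "name": "ComputeHostNotFound",  # short machine-readable error name
        "message": "Host 'node-7' could not be found.",
    }
})

# A consumer application can branch on the short name alone,
# without loading the help URI's content.
error = json.loads(body)["error"]
print(error["name"])  # ComputeHostNotFound
```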
On Feb 3, 2015 9:28
Dear stackers,
FYI. Eventually I reported this problem to libguestfs. A workaround has been
included in the libguestfs code to fix it. Thanks for your support!
https://bugzilla.redhat.com/show_bug.cgi?id=1123007
On Sat, Jun 7, 2014 at 3:27 AM, Qin Zhao wrote:
> Yuriy,
>
> And I th
Yuriy,
And I think if we use a proxy object from multiprocessing, the green thread
will not switch while we call libguestfs. Is that correct?
On Fri, Jun 6, 2014 at 2:44 AM, Qin Zhao wrote:
> Hi Yuriy,
>
> I read the multiprocessing source code just now. Now I feel it may not solve
>
ethod to calls to this new Manager.
>
>
> On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao wrote:
>
>> Hi Yuriy,
>>
>> Thanks for reading my bug! You are right. Python 3.3 or 3.4 should not
>> have this issue, since they can secure the file descriptor. Before
>
>> http://bugs.python.org/issue7213
>>
>>
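For reference, the later fix in CPython (PEP 446, Python 3.4) makes file descriptors non-inheritable by default, which can be sketched as:

```python
import os

# Since Python 3.4 (PEP 446), file descriptors created by Python are
# non-inheritable by default, so children forked/exec'ed concurrently
# by other threads do not leak them.
fd = os.open(os.devnull, os.O_RDONLY)
inheritable_by_default = os.get_inheritable(fd)  # False on 3.4+

os.set_inheritable(fd, True)  # opt in explicitly when a child needs the fd
inheritable_after_opt_in = os.get_inheritable(fd)

print(inheritable_by_default, inheritable_after_opt_in)
os.close(fd)
```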
>> On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao wrote:
>>
>>> Hi Zhu Zhu,
>>>
>>> Thank you for reading my diagram! I need to clarify that this problem
>
>
> On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao wrote:
>
>> Hi Zhu Zhu,
>>
>> Thank you for reading my diagram! I need to clarify that this problem
>> does not occur during data injection. Before creating the ISO, the driver
>> code will
hope that. 2) The resource tracker of each compute node can do the
calculation according to its settings (or its calculation method, if that is
pluggable), so that resource usage behavior can be tailored to meet each
node's unique requirements.
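A toy sketch of what a pluggable per-node calculation could look like (all names here are invented; Nova's actual resource tracker API is different):

```python
class DefaultRamCalculator:
    """Apply the node's RAM overcommit ratio."""
    def usable_ram_mb(self, total_mb, reserved_mb, overcommit):
        return (total_mb - reserved_mb) * overcommit

class ConservativeRamCalculator(DefaultRamCalculator):
    """A node-specific policy that ignores overcommit entirely."""
    def usable_ram_mb(self, total_mb, reserved_mb, overcommit):
        return total_mb - reserved_mb

def report(calc, total_mb=32768, reserved_mb=2048, overcommit=1.5):
    # The tracker would pick `calc` based on this node's settings.
    return calc.usable_ram_mb(total_mb, reserved_mb, overcommit)

print(report(DefaultRamCalculator()))       # 46080.0
print(report(ConservativeRamCalculator()))  # 30720
```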
--
Qin Zhao
very much. So I do not think that is an acceptable solution.
On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu wrote:
> Hi Qin Zhao,
>
> Thanks for raising this issue and the analysis. According to the issue
> description and the scenario in which it happens (
> https://docs.google
and eventlet. The
situation is a little complicated. Is there any expert who can help me
look for a solution? I will appreciate your help!
--
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi
ons to
> determine what commands
> > to issue to have them sync their contents to disk. If you are unsure how
> to do this,
> > the safest approach is to simply stop these running services normally.
> > "
> > This just pushes all the responsibility to guarantee the consistency of
> the instance to the end user.
> > It's absolutely not convenient and I doubt whether it's appropriate.
>
> Hi Ricky,
>
> I gue
oes it
> negatively impact the service of compute to other tenants/users and will
> not negatively impact the scaling factor of Nova either.
>
> I'm just not as optimistic as you are that once legacy IT folks have
> their old tools, they will consider changing their habits. ;)
>
> Best,
> -jay
>
>
--
Qin Zhao
start each
application again. Everything is there. The user can continue his work very
easily. If the user spawns and boots a new VM, he will need a lot of
time to resume his work. Does that make sense?
On Fri, Mar 7, 2014 at 2:20 PM, Joe Gordon wrote:
> On Wed, Mar 5, 2014 at 11:45 AM, Qi
u, Mar 6, 2014 at 1:33 AM, Joe Gordon wrote:
> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao wrote:
> > Hi Joe,
> > If we assume the user is willing to create a new instance, the workflow
> > you describe is exactly correct. However, what I am assuming is that the
> user
PM, Qin Zhao wrote:
> > Hi Joe, my point is that cloud users may not want to create new
> > instances or new images, because those actions may require additional
> > approval and additional charges. Or, due to instance/image quota limits,
> > they cannot do that
sting
instance will be preferred sometimes. Creating a new instance will be
another story.
On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon wrote:
> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao wrote:
> > I think the current snapshot implementation can be a solution sometimes,
> but
> &g
It may be something worth discussing at the upcoming
> summit.
> Thanks
> Gary
>
> From: Qin Zhao
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, February 25, 2014 2:05
am not quite sure if we can model it in Nova and let the user
create a snapshot chain via the Nova API. Has it been discussed in a design
session or on the mailing list? Does anybody know?
On Tue, Feb 25, 2014 at 6:40 PM, John Garbutt wrote:
> On 25 February 2014 09:27, Qin Zhao wrote:
> > Hi,
> &
Hi,
One simple question about the VCenter driver. I feel the VM snapshot function
of VCenter is very useful and loved by VCenter users. Has anybody thought
about letting the VCenter driver support it?
--
Qin Zhao
two anyway (between the lunch hour in your TZ and
> the fact that it just moves the problem of being at midnight to the
> folks in US Eastern TZ). Also, I think if there is interest that a
> better solution might be to implement something like the Ceilometer
> team does and alternate the time each week.
>
> John
>
--
Qin Zhao