> Ceph-RBD, Gluster, or any choice from a long list of supported/integrated
> backend storage devices.
>
> John
>
and in the meantime drop the VFS cache every once in a while (say every 60 s). That's
roughly the performance you can get when your storage system reaches a
'steady' state (i.e. the object count has outgrown memory size). This will give
you an idea of pretty much the worst case.
> Jonathan
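The cadence described above can be sketched as a small loop. This is a minimal sketch, assuming a Linux host and root privileges; the 60 s interval and the `drop_caches` value of 3 follow the advice in the thread, while the function names and the `iterations` parameter are mine for illustration:

```python
import time

DROP_CACHES = "/proc/sys/vm/drop_caches"  # Linux-only; writing here requires root


def drop_vfs_cache(path=DROP_CACHES):
    """Drop the pagecache plus dentries and inodes (sysctl value '3')."""
    with open(path, "w") as f:
        f.write("3\n")


def cache_dropper(interval=60, iterations=None, path=DROP_CACHES):
    """Drop the VFS cache every `interval` seconds while a benchmark runs.

    iterations=None loops forever (run it alongside the benchmark);
    a finite count is handy for testing.
    """
    n = 0
    while iterations is None or n < iterations:
        drop_vfs_cache(path)
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)
```

Run with the benchmark in the background; forcing the cache cold this way approximates the steady state where the working set has outgrown memory.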
er or later.
> Cheers,
> Jonathan Lu
capacity not set; volume node info collection broken.
> 2013-02-22 01:14:02.783 WARNING cinder.scheduler.manager
> [req-56d30373-c93b-41f8-8fdd-1d9b123f5f40
> d863ce5682954b268cd92ad8da440de7 1c13d4432d5c486dbc0c54030d5ceb00]
> Failed to schedule_create_volume: No valid host was found.
>
>
> Could anyone give me some suggestions? Thanks in advance.
>
>
> --
> Thanks
> Harry Wei
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
--
Regards
Huang Zhiteng
> environment?
> Because I want to make the scheduler service more highly available.
>
> thanks!
> 2013-02-19
> ________
> Wangpan
>
> From: Huang Zhiteng
> Sent: 2013-02-19 10:15
> Subject: Re: [Openstack] [Nova] Question about mul
> Thanks!
> 2013-02-19
>
> Wangpan
>
Git-review is a Python script, so it is possible to install it on a
Windows system; however, it is more convenient to use Linux, even in a
virtual machine.
On Feb 4, 2013 9:32 AM, "Xiazhihui (Hashui, IT)"
wrote:
> Hi All,
>
>
> Can I use a Windows XP machine to submit the code? Git-r
It seems you also have a tgt patch for HLFS. Personally, I'd prefer iSCSI
support over QEMU support, since iSCSI is well supported by almost every
hypervisor.
On Jan 19, 2013 9:23 PM, "harryxiyou" wrote:
> On Sat, Jan 19, 2013 at 7:00 PM, Huang Zhiteng
> wrote:
> >
--
Regards
Huang Zhiteng
For development efforts, it is better to use the openstack-dev list instead of
this general openstack list. You can also join the #openstack-cinder IRC
channel on Freenode for online discussion with Cinder developers.
On Jan 18, 2013 9:27 PM, "harryxiyou" wrote:
> On Fri, Jan 18, 2013 at 8:35 PM, yang,
ing if someone helps us with extra info on that
>
> Maybe I would run some tests ;-)
>
>
>
> --
> Thanks
> Harry Wei
--
Regards
Huang Zhiteng
> future (as is typically done).
>
I'm not saying copying is the right thing to do. I totally agree we
should avoid doing this. Fixing the slowness is also important. Oslo
core devs, please take a look at the review queue; I've patch
> slight modifications to
> fit Cinder's use case that, given a bit of work, could easily be shared.
>
> John
>
> 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 1014, in volume_get
> 2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp raise exception.VolumeNotFound(volume_id=volume_id)
position for Swift. Is it just the image container for the
>> Glance server?
>> 2. Are there any rules for making the choice?
>> 3. What is your preferred choice among so many solutions, and why?
>>
>>
>> --
>> Lei Zhang
>>
>> Blog: http://jeffrey4l.g
it fixes it
> that would be awesome. Also, it would be very helpful if you could report a bug
> for me to reference in my merge proposal. I will see what I can do to write a
> few tests and have a potential fix for multiple schedulers.
>
> Vish
--
Regards
Huang Zhiteng
30
>
> at this point, 12 min later, out of 200 instances 168 are active, 22 are
> errored, and 10 are still "building". Notably, only 23 actual VMs are
> running on "nova-23":
>
> root@nova-23:~# virsh list|grep instance |wc -l
> 23
>
> So tha
On Wed, Oct 31, 2012 at 10:07 AM, Vishvananda Ishaya
wrote:
>
> On Oct 30, 2012, at 7:01 PM, Huang Zhiteng wrote:
>
>> I'd suggest the same ratio too. But besides memory overcommitment, I
>> suspect this issue is also related to how KVM does memory allocation (it
mentation on the standard sized nodes
> for similar reasons.
>
> -Jon
>
> -----Original Message-----
> From: openstack-bounces+philip.day=hp@lists.launchpad.net
> [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of
> Huang Zhiteng
> Sent: 10 October 2012 04:28
> To: Jonathan Proulx
> Cc: openstack@lists.launchpad.net
> Subject: Re: [
--
Regards
Huang Zhiteng
l.
>
> Comments/Feedback welcome!
>
> --
> Thierry Carrez (ttx)
> Release Manager, OpenStack
>
<60 seconds. FYI.
--
Regards
Huang Zhiteng
--
Regards
Huang Zhiteng
--
Regards
Huang Zhiteng
any better.
>
>
>
> --
> +Hugo Kuo+
> tonyt...@gmail.com
> +886 935004793
>
>
> Regards
--
Regards
Huang Zhiteng
> environment. And if something will scale well under the rigors of netperf
> workloads it will probably scale well under "real" workloads. Such scaling
> under netperf may not be necessary, but it should be sufficient.
>
> happy benchmarking,
>
> rick jones
>
wonder whether anybody can give me any help on that?
>
>
> Thanks a lot!
>
>
>
lution in most use cases.
> pick your mirror: /kernel/Documentation/cgroups/memory.txt
>
> would be the best docs I know of.
>
> -Matt
>
appreciate that!
Regards,
HUANG, Zhiteng
Intel SSG/SSD/SOTC/PRC Scalability Lab
o capacity
> planning, bottleneck identification, and trending.
>
> Building up an open, standard, and consistent set will avoid duplicate
> effort as sites deploy to production, and allow us to keep the monitoring up
> to date when the internals of OpenStack change.
--
Regards
Huang Zhiteng
--
Regards
Huang Zhiteng
That'll be very helpful. Thanks
On Wed, Sep 28, 2011 at 11:26 PM, Jay Pipes wrote:
> We should be able to do that, yes. I have to figure out how to do it,
> but I will create a bug for it in Launchpad and track progress.
>
> Cheers,
> jay
>
> On Wed, Sep 28, 2011
--
Regards
Huang Zhiteng
. So I was
wondering if there's some kind of mechanism to limit the resources one compute
node can use, something like the 'weight' in OpenNebula.
I'm using Cactus (with GridDynamics' RHEL package), the default scheduler
policy, and one zone only.
Any suggestions?