applied to VM or to a workload tasks).
Though blkio.weight feature of cgroup is working for normal device (ex-
sda, sdb). I am setting IO schedulers to CFQ for both rbd and normal disks.
Any clue what i am missing.
Regards,
Vikrant
On Mon, Mar 10, 2014 at 12:08 AM, Vikrant Verma wrote:
> Hi
Yes.. it is possible.
Also to cross check you can run the below command on the host
"ps -elf | grep ceph"
you can see all ceph-osd daemon running.. something like "ceph-osd
--cluster= -i -f
On Tue, Mar 11, 2014 at 6:38 PM, Ashish Chandra <
mail.ashishchan...@gmail.com> wrote:
> Hi Zeeshan,
Hi All,
blkio weight sharing is not working on Ceph Block device (RBD),IOPS number
are not getting divided as per the weight proportion but are getting
divided equally.
i am using CFQ IO scheduler for RBD.
I did it getting it worked once but now i am not able to get it worked as i
have upgraded c
Hi All,
Is it possible to map OSDs from different hosts (servers) to a Pool in ceph
cluster?
In Crush Map we can add a bucket mentioning the host details (hostname and
its weight).
Is it possible to configure a bucket which contains OSDs from different
hosts?
if possible please let me know ho
how to provide multiple pools name in nova.conf? Do we need to give the
configuration same as of cinder.conf for multi backend config.
please provide the config details for nova.conf for multi pools.
___
ceph-users mailing list
ceph-users@lists.ceph.com
h
>
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien@enovance.com
> Address : 10, rue de la Victoire - 75009 Paris
> Web : www.enovance.com - Twitter : @enovance
&g
Hi All,
I am using cinder as a front end for volume storage in Openstack
configuration.
Ceph is used as storage back-end.
Currently cinder uses only one pool (in my case pool name is "volumes" )
for its volume storage.
I want cinder to use multiple ceph pools for volume storage
--follow
the configuration/design does not seems to be efficient as you
suggested, rather i am trying now to put QoS on volumes itself.
Thanks for your suggestions.
Regards,
Vikrant
On Thu, Feb 13, 2014 at 7:28 PM, Kurt Bauer wrote:
> Hi,
>
> Vikrant Verma
> 12. Februar 2014 19:03
>
yes, I want to use multiple hard drives with a single OSD.
Is it possible to have it?
Regards,
Vikrant
On Wed, Feb 12, 2014 at 10:14 PM, Loic Dachary wrote:
>
>
> On 12/02/2014 12:28, Vikrant Verma wrote:
> > Hi All,
> >
> > I have one quick question -
> >
&g
Hi All,
I have one quick question -
Is it possible to have One Ceph-OSD-Daemon managing more than one Object
Storage Device in a Ceph Cluster?
Regards,
Vikrant
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ce
Hi All,
I have a cluster of size 3120 GB running, but i noticed that at my monitor
node the log is consuming lot of space and it is increasing very rapidly.
In one day itself it goes to 100 GB.
Please help me to stop or reduce the the logging at monitor.
log location file in monitor node - /var/
ould configure the flavor below
# as 'keystone'.
#flavor=
Regards,
Vikrant
On Tue, Dec 10, 2013 at 7:04 PM, Karan Singh wrote:
> Hi Vikrant
>
> Can you share ceph auth list and your glance-api.conf file output.
>
> What are your plans with respect to configuration , wh
Hi Steffen,
WIth respect to your post as mentioned in the below link
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003370.html
I am facing the same issue, here is my error log from api.log
"2013-12-10 02:47:36.156 32509 TRACE glance.api.v1.upload_utils File
"/usr/lib/python2
Hi All,
I am able to Add a Ceph Monitor (step 3) as per the link
http://ceph.com/docs/master/start/quick-ceph-deploy/ (Setting Up Ceph
Storage Cluster)
But when I am executing the gatherkey command, I am getting the
warnings(highlighted in yellow). Please find the details –
Command – “ceph-
14 matches
Mail list logo