The utilities you specified are for Ubuntu distributions, so you don't need
them.
First, restart httpd and check whether you can reach it over HTTP at your
fully qualified domain name, e.g. http://{fqdn}:80.
If that works, add rgw.conf to the /etc/httpd/conf.d folder, restart
httpd, and do the same check.
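Concretely, the sequence is something like this (paths assume a RHEL/CentOS-style
httpd; adjust to your setup):
  # 1) confirm plain httpd answers on port 80 at the FQDN
  curl -i http://{fqdn}:80
  # 2) then drop in the gateway vhost and restart
  cp rgw.conf /etc/httpd/conf.d/rgw.conf
  service httpd restart
  # 3) repeat the check; radosgw should now answer with an XML
  #    ListAllMyBucketsResult for the anonymous user
  curl -i http://{fqdn}:80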
There could be millions of tenants. Looking deeper at the docs, it looks like
Ceph prefers to have one OSD per disk. We're aiming at having Backblaze-style
pods, so we will be looking at 45 OSDs per machine, many machines. I want to
separate the tenants and separately encrypt their data. The encryption
I'm a bit slow, but I finally stared at the log output for long enough to see
this:
2014-03-10 22:59:12.551103 7fec017fa700 15 calculated
digest=R+4z9J6PyXugdHAYJDKJiLPKpWo=
2014-03-10 22:59:12.551113 7fec017fa700 15
auth_sign=OHAxWvf8U8t4CVWq0pKKwxZ2Xko=
2014-03-10 22:59:12.551114 7fec017fa700
Hi,
We have multiple disks (12) in a single host. Is it possible to run
multiple OSDs on a single host and attach each OSD to a single disk?
I assume each OSD daemon listens on a particular port, which would have to be
changed in the above case.
Any suggestions?
--
Regards
Zeeshan Ali Shah
System Administrator -
>> 2014-03-10 22:59:12.531134 7fec017fa700 20 SCRIPT_URL=/user
>> 2014-03-10 22:59:12.531135 7fec017fa700 20
>> SCRIPT_URI=http://admin..liquidweb.com/user
>> 2014-03-10 22:59:12.531136 7fec017fa700 20 HTTP_AUTHORIZATION=AWS
>> 08V6K45V9KPVK7MIWWMG:OHAxWvf8U8t4CVWq0pKKwxZ2Xko=
>>
>> 2014-03-
Hi Zeeshan, it is possible to run multiple OSDs on a single host, with each
OSD tied to a separate disk.
If you are using ceph-deploy, you can do it like this:
Hostname: ceph-node1
Devices: sdb, sdc, sdd, ...
ceph-deploy osd prepare ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
ceph-deploy osd activate ceph-node1:/dev/sdb1 ceph-node1:/dev/sdc1 ceph-node1:/dev/sdd1
When using "rbd create ... --image-format 2" in some cases this CMD is rejected
by
EINVAL with the message "librbd: STRIPINGV2 and format 2 or later required for
non-default striping"
But, in v0.61.9 "STRIPINGV2 and format 2" should be supported
[root@rx37-3 ~]# rbd create --pool SSD-r2 --size
Please check the kernel version. Only kernel versions 3.10 and above are
supported for creating format 2 images.
On Tue, Mar 11, 2014 at 7:16 PM, Kasper Dieter wrote:
> When using "rbd create ... --image-format 2" in some cases this CMD is
> rejected by
> EINVAL with the message "librbd: STRI
I know that format 2 in rbd.ko is supported with kernel version 3.10 and above.
But if I just want to create an RBD image,
only the Ceph userland services should be involved, shouldn't they?
-Dieter
BTW, the kernel version on the nodes hosting the OSD processes is
2.6.32-358.el6.x86_64,
but I
Of course. The rbd userland utilities let you create images on RADOS as
block storage.
On Tue, Mar 11, 2014 at 7:37 PM, Kasper Dieter wrote:
> I know, that format2 in rbd.ko is supported with kernel version 3.10 and
> above.
>
> But, if I want to create an rbd-image
> only the Ceph Userland se
So, should I open a bug report?
The STRIPINGV2 feature was added in Ceph v0.53, and I'm running v0.61 and using
'--image-format 2' during 'rbd create'.
Regards,
-Dieter
On Tue, Mar 11, 2014 at 03:13:28PM +0100, Srinivasa Rao Ragolu wrote:
>of course. rbd userland utilities provide you create
Hi Dieter,
you have a problem with your command.
You set order = 16, so your RBD objects are going to be 65536 bytes.
Then you tell RBD that your stripe-unit is going to be 65536, which is the size
of your full object.
Either decrease the --stripe-unit (to 8192, for example)
or increase the order
If the stripe size and object size are the same it's just chunking --
that's our default. Should work fine.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Mar 11, 2014 at 8:23 AM, Jean-Charles LOPEZ
wrote:
> Hi Dieter,
>
> you have a problem with your command.
>
> You
Hi Greg,
but our default also has stripe-count = 1, so that no more than one stripe-unit
is included in each 2^order-sized object.
So if you do --order 16 --stripe-unit 65536 --stripe-count 1, it then works.
I'm not sure if this is what you meant.
JC
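To make the two cases concrete (the pool name is taken from Dieter's command;
the image names and size below are made up):
  # default-style layout: stripe-unit equals the 2^order object size, stripe-count 1
  rbd create --pool SSD-r2 --size 1024 --image-format 2 \
      --order 16 --stripe-unit 65536 --stripe-count 1 img-default
  # non-default striping: stripe-unit smaller than the object size, several units per stripe
  rbd create --pool SSD-r2 --size 1024 --image-format 2 \
      --order 16 --stripe-unit 8192 --stripe-count 4 img-striped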
On Mar 11, 2014, at 08:32, Gregory Farnum wrote:
> If
On 03/10/2014 10:30 PM, Pawel Veselov wrote:
> Now I'm getting this. Maybe you have an idea of what can be done to straighten
> this up?
This is weird. Can you please share the steps taken until this was
triggered, as well as the rest of the log?
-Joao
-12> 2014-03-10 22:26:23.748783 7fc0397e5
On Tue, Mar 11, 2014 at 9:15 AM, Joao Eduardo Luis wrote:
> On 03/10/2014 10:30 PM, Pawel Veselov wrote:
>
>>
>> Now I'm getting this. Maybe you have an idea of what can be done to straighten
>> this up?
>>
>
> This is weird. Can you please share the steps taken until this was
> triggered, as well as the
> There could be millions of tenants. Looking deeper at the docs, it looks
> like Ceph prefers to have one OSD per disk. We're aiming at having
> Backblaze-style pods, so will be looking at 45 OSDs per machine, many machines. I want
> to separate the tenants and separately encrypt their data. The enc
Hi,
I'm trying to follow the instructions for QEMU RBD installation at
http://ceph.com/docs/master/rbd/qemu-rbd/
I tried to write a raw QEMU image to the Ceph cluster using the following command:
qemu-img convert -f raw -O raw ../linux-0.2.img rbd:data/linux
The OSD seems to be working, but it seems to
Hey everyone, hopefully you can give me some direction here.
The scenario: we have multiple dual-Xeon servers with 4 x 1 TB hard drives, as
well as servers with 8, 12 and 16 x 1 TB drives. Our plan is to deploy
OpenStack with XCP (Xen) as the hypervisor, hopefully utilizing the direct
attached stor
> 1. Is it possible to install Ceph and the Ceph monitors on the XCP
> (Xen) Dom0, or would we need to install them on the DomU containing the
> OpenStack components?
I'm not a Xen guru but in the case of KVM I would run the OSDs on the
hypervisor to avoid virtualization overhead.
> 2. I
On Tue, Mar 11, 2014 at 1:38 PM, Sushma Gurram
wrote:
> Hi,
>
> I'm trying to follow the instructions for QEMU RBD installation at
> http://ceph.com/docs/master/rbd/qemu-rbd/
>
> I tried to write a raw QEMU image to the Ceph cluster using the following
> command:
>
> qemu-img convert -f raw -O
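A quick way to check whether the image actually made it into the pool (pool and
image names as in your command):
  rbd ls data
  rbd info data/linux
  qemu-img info rbd:data/linux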
Hi guys and gals,
I'm able to do live migration via 'nova live-migration' as long as my
instances are sitting on shared storage. However, when they are not, nova
live-migrate fails due to a shared storage check.
To get around this, I attempted to do a live migration via libvirt directly.
Usi
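For context, a libvirt block migration of a guest that is not on shared storage
looks roughly like this (the domain name and destination URI below are
placeholders; --copy-storage-all is what stands in for shared storage):
  virsh migrate --live --copy-storage-all --verbose \
      instance-0000001a qemu+ssh://dest-host/system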
It seems good with master branch. Sorry about the confusion.
On a side note, is it possible to create/access the block device using librbd
and run fio on it?
Thanks,
Sushma
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Tuesday, March 11, 2014 2:00 PM
To: Sushma
On Tue, Mar 11, 2014 at 2:24 PM, Sushma Gurram
wrote:
> It seems good with master branch. Sorry about the confusion.
>
> On a side note, is it possible to create/access the block device using librbd
> and run fio on it?
...yes? librbd is the userspace library that QEMU is using to access
it to b
Just to be complete, a TCP dump:
Starting tcpick 0.2.1 at 2014-03-11 21:11 UTC
Timeout for connections is 600
tcpick: reading from test.pcap
1 SYN-SENT 10.255.247.241:39729 > 10.30.77.227:http
1 SYN-RECEIVED 10.255.247.241:39729 > 10.30.77.227:http
1 ESTABLISHED 10.255.247.241:39729 >
Hello,
I followed everything in the setup documentation, setting up a test cluster on
an XCP install, and got this:
Invoked (1.3.5): /usr/bin/ceph-deploy admin domUs1 domUs2 domUs3 domUca
Pushing admin keys and conf to domUs1
connected to host: domUs1
detect platform information from remote host
On Mar 10, 2014, at 8:30 PM, Yehuda Sadeh wrote:
>> 2014-03-10 22:59:12.551012 7fec017fa700 10 auth_hdr:
>> GET
>>
>>
>> Mon, 10 Mar 2014 22:59:42 GMT
>> /user
>
> This is related to the issue. I assume it was signed as /admin/user,
> but here we just use /user because that's what's passed in th
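One way to see which resource string the signature was computed over is to redo
the HMAC by hand for both candidates; a rough sketch, with the secret key as a
placeholder:
  # sign the string_to_sign radosgw printed, once with /user and once with /admin/user
  printf 'GET\n\n\nMon, 10 Mar 2014 22:59:42 GMT\n/user' \
      | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64
  printf 'GET\n\n\nMon, 10 Mar 2014 22:59:42 GMT\n/admin/user' \
      | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64
  # whichever output matches the client's auth_sign is what the client signed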
Thanks Greg.
I tried rbd-fuse and its throughput using fio is approx. 1/4 that of the
kernel client.
Can you please let me know how to set up the RBD backend for fio? I'm assuming
this RBD backend is also based on librbd?
Thanks,
Sushma
-Original Message-
From: Gregory Farnum [mailto:g...
Hi, all
I've been running Ceph in a small production environment for a couple of
weeks, and I've learned to use commands like 'ceph osd perf' and 'ceph
--admin-daemon ... perf dump' to find latency information about my
cluster.
However, I don't quite understand the exact meaning of those performan
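Concretely, these are the sort of invocations I mean (the admin socket path
below is just the default):
  ceph osd perf                                                   # per-OSD commit/apply latency summary
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump     # full counter dump for osd.0
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf schema   # per-counter type information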
> I tried rbd-fuse and its throughput using fio is approx. 1/4 that of the
> kernel client.
>
> Can you please let me know how to set up the RBD backend for fio? I'm assuming
> this RBD backend is also based on librbd?
You will probably have to build fio from source since the rbd engine is new:
htt
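Once fio is built with the rbd engine, the invocation is roughly this (the pool
and image names below are made up; the rbd engine talks to librbd directly, so
no kernel mapping is involved):
  # create a throwaway image first, then point fio's rbd engine at it
  rbd create --pool rbd --size 1024 fio-test
  fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based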
Yes, it is possible.
Also, to cross-check, you can run the command below on the host:
"ps -elf | grep ceph"
You should see all the ceph-osd daemons running, something like "ceph-osd
--cluster={cluster} -i {osd-id} -f".
On Tue, Mar 11, 2014 at 6:38 PM, Ashish Chandra <
mail.ashishchan...@gmail.com> wrote:
> Hi Zeeshan,
Hi All,
I am facing a weird situation with the Ceph block device (rbd).
The blkio.weight feature of cgroups seems not to be working on Ceph block
devices (say on /dev/rbd1 and /dev/rbd2): the IOPS numbers reported by the
workload (fio or dd) should be in proportion to the applied weights
(weights appli
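For reference, a typical way to apply the weights for such a test looks like
this (cgroup v1 paths, which differ per distro; device names and fio options are
illustrative, and cgexec comes from libcgroup):
  # two cgroups with different proportional weights
  mkdir -p /sys/fs/cgroup/blkio/high /sys/fs/cgroup/blkio/low
  echo 800 > /sys/fs/cgroup/blkio/high/blkio.weight
  echo 200 > /sys/fs/cgroup/blkio/low/blkio.weight
  # one fio job per cgroup against the two rbd devices
  cgexec -g blkio:high fio --name=high --filename=/dev/rbd1 --rw=randread --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based &
  cgexec -g blkio:low  fio --name=low  --filename=/dev/rbd2 --rw=randread --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based &
  # note: proportional blkio.weight is implemented by the CFQ scheduler, so check
  # the devices' queue/scheduler setting in sysfs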
Hi,
I saw that radosgw-admin doesn't require the radosgw daemon.
I was wondering if, by adding creation of buckets and manipulation of
objects to radosgw-admin (or a library similar to it), Ceph would gain
a great tool between the low and high levels of rados and radosgw, and
simpler than CephFS.
Of course
Hi,
My name is Ashraful Arefeen. I want to use Ceph for testing purposes. Is it
possible to use it on a single machine (I mean on one computer)? If so, what
would be the preferable configuration of the computer, and what software is
required apart from Ceph? I have start
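It is possible. A single-node test setup with ceph-deploy looks roughly like
this (the hostname and disks below are made up; the chooseleaf setting lets
placement reach a clean state with only one host):
  ceph-deploy new node1
  echo "osd crush chooseleaf type = 0" >> ceph.conf
  ceph-deploy install node1
  ceph-deploy mon create-initial
  ceph-deploy osd prepare node1:sdb node1:sdc
  ceph-deploy osd activate node1:/dev/sdb1 node1:/dev/sdc1
  ceph-deploy admin node1
  ceph health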
Hi all,
Is it possible for Ceph to support a Windows client? Currently I can only use
the RESTful API (Swift-compatible) through the Ceph Object Gateway,
but the languages that can be used are Java, Python and Ruby, not C# or C++. Is
there any good wrapper for C# or C++? Thanks.
Thanks & Regards
Li JiaMin
System