Hello,
Ceph is still not compiling when I add the Kinetic support option.
Could you have a look at the log and tell me what's missing?
--
Best regards,
Julien
On 12/02/2014 09:53 AM, Julien Lutran wrote:
It's ok for KeyValueDB.cc now, but I have another problem with
src/os/KineticStore.h :
h
Hi
Why does the command 'rbd list' executed on the monitor get stuck? Any hint
would be appreciated!
Backtrace:
[] futex_wait_queue_me+0xde/0x140
[] futex_wait+0x179/0x280
[] do_futex+0xfe/0x5e0
[] SyS_futex+0x80/0x180
[] system_call_fastpath+0x16/0x1b
[] 0x
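In case it helps to narrow this down, re-running the command with client-side
debugging (a rough sketch; the debug levels and pool name are only suggestions)
usually shows whether the client is stuck waiting on a monitor or an OSD:

  ceph -s                                  # overall cluster health first
  ceph health detail                       # any blocked requests or down OSDs?
  rbd ls --debug-ms 1 --debug-rbd 20 rbd   # re-run the listing with debug output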
Best Regards!
YangBin
--
Hello Manoj
Here are my answers to your queries.
# For testing purposes you can install Ceph on virtual machines (multiple
instances of VirtualBox for multiple MONs and OSDs). It's good to practice Ceph
with multiple MONs and OSDs.
# For real data storage, please use physical servers; virtual servers are
g
Hi,
Since Firefly, Ceph has supported cache tiering.
Cache tiering: support for creating ‘cache pools’ that store hot, recently
accessed objects with automatic demotion of colder data to a base tier.
Typically the cache pool is backed by faster storage devices like SSDs.
I'm testing cache tiering, a
dear list,
Has anyone else run into this?
Every time I restart radosgw, it crashes. (I am using Giant.)
here's the backtrace:
#0 0x003ec020e75d in read () from /lib64/libpthread.so.0
#1 0x0037bb6b554c in read (fd=21, buf=0x7fffcff0e9cc, count=4) at
/usr/include/bits/unis
On Wed, Dec 17, 2014 at 2:07 AM, Kevin Shiah wrote:
> setfattr -n ceph.dir.layout.stripe_count -v 2 dir
>
> And return:
>
> setfattr: dir: Operation not supported
Works for me on master. What ceph version are you using?
John
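A quick sanity check (a sketch only; the paths are placeholders) is whether the
layout vxattrs show up at all on your mount before setting one:

  getfattr -n ceph.file.layout /mnt/cephfs/some_existing_file
  setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/dir
  getfattr -n ceph.dir.layout /mnt/cephfs/dir

If the getfattr calls also fail with the same error, that points at the client
(kernel or ceph version) rather than at the directory.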
On 12/17/2014 11:21 AM, John Spray wrote:
> On Wed, Dec 17, 2014 at 2:07 AM, Kevin Shiah wrote:
>> setfattr -n ceph.dir.layout.stripe_count -v 2 dir
>>
>> And return:
>>
>> setfattr: dir: Operation not supported
>
> Works for me on master. What ceph version are you using?
>
I just tried someth
On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander wrote:
> I just tried something similar on Giant (0.87) and I saw this in the logs:
>
> parse_layout_vxattr name layout.pool value 'cephfs_svo'
> invalid data pool 3
> reply request -22
>
> It resolves the pool to an ID, but then it's unable to s
On 12/17/2014 12:35 PM, John Spray wrote:
> On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander wrote:
>> I just tried something similar on Giant (0.87) and I saw this in the logs:
>>
>> parse_layout_vxattr name layout.pool value 'cephfs_svo'
>> invalid data pool 3
>> reply request -22
>>
>> I r
Hello Loic,
Thanks for your help. I took a look at my crush map, replaced "step
chooseleaf indep 0 type osd" with "step choose indep 0 type osd", and all PGs
were created successfully.
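For anyone who hits the same thing, the decompile/edit/recompile round trip
looks roughly like this (file names are just placeholders):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: replace "step chooseleaf indep 0 type osd"
  #                    with    "step choose indep 0 type osd"
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new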
At.
Italo Santos
http://italosantos.com.br/
On Tuesday, December 16, 2014 at 8:39 PM, Loic Dachary wr
mount reports:
"mount: error writing /etc/mtab: Invalid argument"
fstab entry is:
vnb.proxmox.softlog,vng.proxmox.softlog,vnt.proxmox.softlog:/ /mnt/test
ceph _netdev,defaults,name=admin,secretfile=/etc/pve/priv/admin.secret 0 0
However the mount is successful and an mtab e
Both fuse and kernel module fail to mount,
The mons & mds are on two other nodes, so they are available when this node is
booting.
They can be mounted manually after boot.
my fstab:
id=admin /mnt/cephfs fuse.ceph defaults,nonempty,_netdev 0 0
vnb.proxmox.softlog,vng.proxmox.softlog,vnt.pro
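For comparison, the manual mounts that do work after boot are roughly the
following (monitor name, client name and secret path are assumptions based on
the fstab above):

  ceph-fuse -m vnb.proxmox.softlog:6789 /mnt/cephfs
  mount -t ceph vnb.proxmox.softlog:/ /mnt/test -o name=admin,secretfile=/etc/pve/priv/admin.secret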
Hmm, from a quick google it appears you are not the only one who has
seen this symptom with mount.ceph. Our mtab code appears to have
diverged a bit from the upstream util-linux repo, so it seems entirely
possible we have a bug in ours somewhere. I've opened
http://tracker.ceph.com/issues/10351 t
Can you tell us more about how they fail? Error messages on console,
anything in syslog?
In the absence of other clues, you might want to try checking that the
network is coming up before ceph tries to mount.
John
On Wed, Dec 17, 2014 at 1:34 PM, Lindsay Mathieson
wrote:
> Both fuse and kernel
On Wed, 17 Dec 2014 02:02:52 PM John Spray wrote:
> Can you tell us more about how they fail? Error messages on console,
> anything in syslog?
Not quite sure what to look for, but I did a quick scan for ceph through dmesg
& syslog and nothing stood out.
>
> In the absence of other clues, you might
Cache tiering is a stable, functioning system. Those particular commands
are for testing and development purposes, not something you should run
(although they ought to be safe).
-Greg
On Wed, Dec 17, 2014 at 1:44 AM Yujian Peng
wrote:
> Hi,
> Since firefly, ceph can support cache tiering.
> Cache
Dear All,
We have set up ceph and used it for about one year already.
Here is a summary of the setting. We used 3 servers to run the ceph.
cs02, cs03, cs04
Here is how we set up the ceph:
1. We created several OSDs on three of these servers, using commands like:
> ceph-deploy osd create cs02:
Hey there,
Is there a good workaround if our SSDs are not handling D_SYNC very well? We
invested a ton of money into Samsung 840 EVOs and they are not playing well
with D_SYNC. Would really appreciate the help!
Thank you,
Bryson
Hi all,
I found the content below at
http://ceph.com/docs/master/rados/operations/crush-map :
step choose firstn {num} type {bucket-type}
Description: Selects the number of buckets of the given type. The
number is usually the number of replicas in the pool (i.e., pool size).
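As a concrete illustration (only a sketch; bucket names are taken from a
default-style map), a rule that first chooses racks and then lets chooseleaf
pick one OSD under a host in each of them would look like:

rule example_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type rack
        step chooseleaf firstn 1 type host
        step emit
}

Here "firstn 0" means "as many as the pool size", matching the description above.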
On Tue, Dec 16, 2014 at 6:19 AM, Cyan Cheng wrote:
> Dear All,
>
> We have set up ceph and used it for about one year already.
>
> Here is a summary of the setting. We used 3 servers to run the ceph.
>
> cs02, cs03, cs04
>
> Here is how we set up the ceph:
>
> 1. We created several OSDs on three o
Hi,
We have some problems with "ceph-deploy install node"
This is the error I get when I run the installation:
[mon01][INFO ] Running command: sudo rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[mon01][INFO ] Running command: sudo rpm --import
https://ceph.co
Strange, when I visit https://ceph.com, I get a certificate that
doesn't expire until 10 February 2015.
Perhaps check the clock on your node isn't in the future?
John
On Wed, Dec 17, 2014 at 4:16 PM, Emilio wrote:
> Hi,
>
> We have some problems with "ceph-deploy install node"
>
> This is the e
Hi,
Thanks for the update: good news is much appreciated :-) Would you have time
to review the documentation at https://github.com/ceph/ceph/pull/3194/files ?
It was partly motivated by the problem you had.
Cheers
On 17/12/2014 14:03, Italo Santos wrote:
> Hello Loic,
>
> Thanks for you hel
Yes, sorry, this server's clock was set in the past!
Thx!
On 17/12/14 17:40, John Spray wrote:
Strange, when I visit https://ceph.com, I get a certificate that
doesn't expire until 10 February 2015.
Perhaps check the clock on your node isn't in the future?
John
On Wed, Dec 17, 2014 at 4:16 PM, Emilio w
Hello,
I've taken a look at this documentation (which helped a lot) and, if I
understand it right, when I set a profile like:
===
ceph osd erasure-code-profile set isilon k=8 m=2 ruleset-failure-domain=host
===
And create a pool following the recommendations in the docs, I'll need
(100*16)/2 = 800 PGs,
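(For context, creating the pool with that profile and PG count would be
something along these lines; the pool name is only an example:)

===
ceph osd erasure-code-profile set isilon k=8 m=2 ruleset-failure-domain=host
ceph osd pool create ecpool 800 800 erasure isilon
===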
On 17/12/2014 18:18, Italo Santos wrote:
> Hello,
>
> I’ve take a look to this documentation (which help a lot) and if I understand
> right, when I set a profile like:
>
> ===
> ceph osd erasure-code-profile set isilon k=8 m=2 ruleset-failure-domain=host
> ===
>
> And create a pool following
Loic,
So, if I want to have a failure domain by host, I'll need to set up an erasure
profile where k+m = the total number of hosts I have, right?
Regards.
Italo Santos
http://italosantos.com.br/
On Wednesday, December 17, 2014 at 3:24 PM, Loic Dachary wrote:
>
>
> On 17/12/2014 18:18, Italo Santos
On 17/12/2014 19:22, Italo Santos wrote:
> Loic,
>
> So, if want have a failure domain by host, I’ll need set up a erasure profile
> which k+m = total number of hosts I have, right?
Yes, k+m has to be <= number of hosts.
>
> Regards.
>
> *Italo Santos*
> http://italosantos.com.br/
>
> On W
Understood.
Thanks for your help, the cluster is healthy now :D
Also, using for example k=6,m=1 and a failure domain by host, I'll be able to
lose all the OSDs on the same host, but if I lose 2 disks on different hosts I
can lose data, right? So, is it possible to have a failure domain which allows me to lose a
I am trying to set up a small VM ceph cluster to exercise on before creating a
real cluster. Currently there are two OSDs on the same host. I wanted to create
an erasure coded pool with k=1 and m=1 (yes, I know it's stupid, but it is a
test case). On top of it there is a cache tier (writeback) and I u
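Roughly, the setup amounts to something like this (a sketch; pool names are
placeholders and the k=1/m=1 profile is only for this test):

  ceph osd erasure-code-profile set testprofile k=1 m=1 ruleset-failure-domain=osd
  ceph osd pool create ecbase 64 64 erasure testprofile
  ceph osd pool create cachepool 64 64
  ceph osd tier add ecbase cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay ecbase cachepool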
Hi Max,
On 17/12/2014 20:57, Max Power wrote:
> I am trying to setup a small VM ceph cluster to excersise before creating a
> real
> cluster. Currently there are two osd's on the same host. I wanted to create an
> erasure coded pool with k=1 and m=1 (yes I know it's stupid, but it is a test
> cas
On 17/12/2014 19:46, Italo Santos wrote:
> Understood.
> Thanks for your help, the cluster is healthy now :D
>
> Also, using for example k=6,m=1 and failure domain by host I’ll be able lose
> all OSD on the same host, but if a lose 2 disks on different hosts I can lose
> data right? So, it is p
I have a somewhat interesting scenario. I have an RBD of 17TB formatted using
XFS. I would like it accessible from two different hosts, one mapped/mounted
read-only, and one mapped/mounted as read-write. Both are shared using Samba
4.x. One Samba server gives read-only access to the world fo
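(For reference, the read-only side of such a setup would presumably be mapped
with the read-only flag; the image name below is a placeholder, and whether the
filesystem layer copes with a writer on the other host is a separate question:)

  rbd map --read-only rbd/myimage
  mount -o ro /dev/rbd0 /srv/export-ro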
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
wrote:
> I have a somewhat interesting scenario. I have an RBD of 17TB formatted
> using XFS. I would like it accessible from two different hosts, one
> mapped/mounted read-only, and one mapped/mounted as read-write. Both are
> shared using Sam
On 12/17/2014 03:49 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using XFS. I would like it accessible from two different hosts, one
mapped/mounted read-only, and one mapped/mounted
Hi John,
I am using 0.56.1. Could it be because data striping is not supported in
this version?
Kevin
On Wed Dec 17 2014 at 4:00:15 AM PST Wido den Hollander
wrote:
> On 12/17/2014 12:35 PM, John Spray wrote:
> > On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander
> wrote:
> >> I just tried
Hello,
On Tue, 16 Dec 2014 08:58:23 -0700 Bryson McCutcheon wrote:
> Hey there,
>
> Is there a good work around if our SSDs are not handling D_SYNC very
> well? We invested a ton of money into Samsung 840 EVOS and they are not
> playing well with D_SYNC. Would really appreciate the help!
>
Ba
On Wednesday, December 17, 2014, Josh Durgin
wrote:
> On 12/17/2014 03:49 PM, Gregory Farnum wrote:
>
>> On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
>> wrote:
>>
>>> I have a somewhat interesting scenario. I have an RBD of 17TB formatted
>>> using XFS. I would like it accessible from tw
On 12/17/2014 02:58 AM, Bryson McCutcheon wrote:
Is there a good work around if our SSDs are not handling D_SYNC very
well? We invested a ton of money into Samsung 840 EVOS and they are
not playing well with D_SYNC. Would really appreciate the help!
Just in case it's linked with the recent pe
I've been experimenting with CephFS for running KVM images (Proxmox).
cephfs fuse version - 0.87
cephfs kernel module - kernel version 3.10
Part of my testing involves bringing up a Windows 7 VM and running
CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
both the fuse and
Hi Mikaël,
>
> I have EVOs too, what to you mean by "not playing well with D_SYNC"?
> Is there something I can test on my side to compare results with you,
> as I have mine flashed?
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
described
>>what to you mean by "not playing well with D_SYNC"?
Hi, check this blog:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
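The test from that post boils down to small synchronous writes with dd,
roughly like this (the device name is a placeholder, and this writes directly
to the device, so it will destroy data on it):

  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync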
- Original Message -
From: "Mikaël Cluseau"
To: "Bryson McCutcheon" , "ceph-users"
Sent: Thursday, 18 December 2
Looking at the blog, I notice he disabled the write cache before the
tests: doing this on my M550 resulted in *improved* dsync results (300
IOPS -> 700 IOPS). Still not great obviously, but ... interesting.
So do experiment with the settings to see if you can get the 840s
working better for yo
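(Toggling the drive's volatile write cache for such a test is typically done
with hdparm; the device name is a placeholder:)

  hdparm -W0 /dev/sdX   # disable the write cache
  hdparm -W1 /dev/sdX   # re-enable it afterwards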
The cluster state must be wrong, but how do I recover?
[root@node3 ceph-cluster]# ceph -w
cluster 1365f2dd-b86c-436c-a64f-3318a937f3c2
health HEALTH_WARN 64 pgs incomplete; 64 pgs stale; 64 pgs stuck
inactive; 64 pgs stuck stale; 64 pgs stuck unclean; 8 requests are blocked
> 32 sec
m
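A starting point for digging into the stuck PGs (standard commands; the pg id
is only an example) would be:

  ceph health detail
  ceph pg dump_stuck stale
  ceph pg dump_stuck inactive
  ceph pg 0.5 query
  ceph osd tree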