Re: [ceph-users] Appending to an erasure coded pool

2016-10-17 Thread Tianshan Qu
pool_requires_alignment can get the pool's stripe_width, and you need to write a multiple of that size in each append. stripe_width can be configured with osd_pool_erasure_code_stripe_width, but the actual size will be adjusted by the EC plugin. 2016-10-17 18:34 GMT+08:00 James Norman : > Hi Gregory, > > Many t
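
A minimal librados (C API) sketch of the pattern described above, assuming a pool named "ecpool" and an object "myobj" (both illustrative): check whether the pool requires alignment, round each append up to a multiple of the reported alignment (the EC stripe width), then call rados_append(). The zero-padding is only to keep the sketch short; a real application would normally buffer data until it has a full multiple of the stripe width.

#include <rados/librados.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;

    /* Connect using the default ceph.conf locations and client.admin. */
    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);
    if (rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "ecpool", &io) < 0)   /* illustrative pool name */
        return 1;

    const char *data = "hello";
    size_t len = strlen(data);

    if (rados_ioctx_pool_requires_alignment(io)) {
        /* EC pools: each append must be a multiple of the stripe width. */
        uint64_t align = rados_ioctx_pool_required_alignment(io);
        size_t padded = ((len + align - 1) / align) * align;
        char *buf = calloc(1, padded);            /* zero padding after the data */
        memcpy(buf, data, len);
        int r = rados_append(io, "myobj", buf, padded);
        printf("aligned append of %zu bytes: %d\n", padded, r);
        free(buf);
    } else {
        /* Replicated pools have no alignment requirement. */
        rados_append(io, "myobj", data, len);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}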

[ceph-users] Re: Does anyone know why cephfs does not support EC pools?

2016-10-17 Thread Liuxuan
Hello: Thank you very much for your detailed answer. I have used iozone to test random writes; it reported not supported in , but the OSD did not crash. From: huang jun [mailto:hjwsm1...@gmail.com] Sent: 18 October 2016 13:44 To: Erick Perez - Quadrian Enterprises Cc: liuxuan 11625 (RD); ceph-users@lists.ceph

Re: [ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-17 Thread huang jun
you can look into this: https://github.com/ceph/ceph/pull/10334 https://github.com/ceph/ceph/compare/master...athanatos:wip-ec-cache the community has done a lot of work related to EC for the rbd and fs interfaces. 2016-10-18 13:06 GMT+08:00 Erick Perez - Quadrian Enterprises < epe...@quadrianweb.com>: >

Re: [ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-17 Thread Erick Perez - Quadrian Enterprises
On Mon, Oct 17, 2016 at 9:23 PM, huang jun wrote: > EC only supports write-full and append operations, but not partial writes; > you can try it by doing random writes and see whether the OSD crashes or not. > > 2016-10-18 10:10 GMT+08:00 Liuxuan : > > Hello: > > I have created a cephfs whose data

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Christian Balzer
Hello, On Tue, 18 Oct 2016 00:19:53 +0200 Lars Marowsky-Bree wrote: > On 2016-10-17T15:31:31, Maged Mokhtar wrote: > > > This is our first beta version, we do not support cache tiering. We > > definitely intend to support it. > > Cache tiering in Ceph works for this use case. I assume you mea

Re: [ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-17 Thread huang jun
EC only supports write-full and append operations, but not partial writes; you can try it by doing random writes and see whether the OSD crashes or not. 2016-10-18 10:10 GMT+08:00 Liuxuan : > Hello: > I have created a cephfs whose data pool type is EC and whose metadata pool is replicated. > The cluster reported error
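
To make "partial write" concrete, here is a small, self-contained librados (C API) sketch, again with illustrative pool and object names: a full-object write and an append are the operations an EC pool accepts, while an overwrite at an arbitrary offset is the kind of request that is expected to be rejected on pre-Luminous EC pools (the exact return code is not taken from this thread).

#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[4096];
    memset(buf, 'a', sizeof(buf));

    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0 ||
        rados_ioctx_create(cluster, "ecpool", &io) < 0)     /* illustrative pool */
        return 1;

    /* Accepted on EC pools: replace the whole object, or append to it
     * (appends are still subject to the stripe-width alignment discussed
     * in the appending thread above). */
    printf("write_full: %d\n", rados_write_full(io, "obj", buf, sizeof(buf)));
    printf("append:     %d\n", rados_append(io, "obj", buf, sizeof(buf)));

    /* Not accepted on pre-Luminous EC pools: an overwrite at an offset. */
    printf("overwrite:  %d\n", rados_write(io, "obj", buf, 512, 100));

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}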

[ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-17 Thread Liuxuan
Hello: I have created a cephfs whose data pool type is EC and whose metadata pool is replicated. The cluster reported errors from the MDSMonitor::_check_pool function, but when I skip the pool-type check, the cephfs can write and read data. Does anyone know why cephfs does not support EC pools?
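
For context on the replies in this thread: in the Jewel era the documented way to put CephFS data on erasure-coded storage was to place a replicated writeback cache tier in front of the EC pool rather than using the EC pool directly. A sketch of that setup, with illustrative pool names and PG counts (not a sizing recommendation):

# Jewel-era workaround sketch: EC data pool behind a replicated writeback
# cache tier, then create the filesystem on top. Names and PG counts are
# illustrative only.
ceph osd pool create fs_metadata 128
ceph osd pool create fs_ec_data 128 128 erasure
ceph osd pool create fs_cache 128
ceph osd tier add fs_ec_data fs_cache
ceph osd tier cache-mode fs_cache writeback
ceph osd tier set-overlay fs_ec_data fs_cache
# filesystem creation then targets the tiered data pool
ceph fs new cephfs fs_metadata fs_ec_data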

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-17 Thread Christian Balzer
Hello, As I had this written mostly already and since it covers some points Nick raised in more detail, here we go. On Mon, 17 Oct 2016 16:30:48 +0800 William Josefsson wrote: > Thanks Christian for helping troubleshoot the latency issues. I have > attached my fio job template below. > There's

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Lars Marowsky-Bree
On 2016-10-17T15:31:31, Maged Mokhtar wrote: > This is our first beta version, we do not support cache tiering. We > definitely intend to support it. Cache tiering in Ceph works for this use case. I assume you mean in your UI? Though we all are waiting for Luminous to do away with the need for

Re: [ceph-users] resolve split brain situation in ceph cluster

2016-10-17 Thread Gregory Farnum
On Mon, Oct 17, 2016 at 4:58 AM, Manuel Lausch wrote: > Hi Gregory, > > each datacenter has its own IP subnet which is routed. We simultaneously > created iptables rules on each host which drop all packets incoming from and > outgoing to the other datacenter. After this our application wrote to DC A, > ther

Re: [ceph-users] Appending to an erasure coded pool

2016-10-17 Thread Gregory Farnum
On Mon, Oct 17, 2016 at 3:34 AM, James Norman wrote: > Hi Gregory, > > Many thanks for your reply. I couldn't spot any resources that describe/show > how you can successfully write / append to an EC pool with the librados API > on those links. Do you know of any such examples or resources? Or is i

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Mike Christie
On 10/17/2016 02:40 PM, Mike Christie wrote: > For the (non target_mode approach), everything that is needed for basic Oops. Meant to write for the non target_mod_rbd approach.

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Mike Christie
If it is just a couple of kernel changes you should post them, so SUSE can merge them into target_core_rbd and we can port them to upstream. You will not have to carry them, and SUSE and I will not have to re-debug the problems :) For the (non target_mode approach), everything that is needed for basic I

Re: [ceph-users] radowsg keystone integration in mitaka

2016-10-17 Thread Andrew Woodward
Some config hints here: if you convert your config, you have to unset the admin_token and change the API version to 3; then you can specify the keystone user, password, domain, tenant, etc. You can see what we do for puppet-ceph [1] if you need a reference. [1] https://github.com/openstack/puppet-ce
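
A sketch of what such a converted section of ceph.conf could look like, using the Jewel-era radosgw Keystone option names; the section name, endpoint and credentials are placeholders, and rgw keystone admin token is simply left out rather than set:

# Illustrative radosgw Keystone v3 settings (placeholder values).
[client.rgw.gateway]
rgw keystone url = http://keystone.example.com:5000
rgw keystone api version = 3
rgw keystone admin user = rgw
rgw keystone admin password = secret
rgw keystone admin domain = Default
rgw keystone admin project = service
rgw keystone accepted roles = admin,_member_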

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Somnath Roy
Thanks Wei/Pavan for the response; it seems I need to debug the OSDs to find out what is causing the slowdown. Will update the community if I find anything conclusive. Regards Somnath -Original Message- From: Wei Jin [mailto:wjin...@gmail.com] Sent: Monday, October 17, 2016 2:13 AM To: Som

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Maged Mokhtar
Thank you very much, David, and thank you for the correction. -- From: "David Disseldorp" Sent: Monday, October 17, 2016 5:24 PM To: "Maged Mokhtar" Cc: ; "Oliver Dzombic" ; "Mike Christie" Subject: Re: [ceph-users] new Open Source Ceph based iSC

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Maged Mokhtar
Hi Lars, Yes, I was aware of David Disseldorp's and Mike Christie's efforts to upstream the patches from a while back. I understand there will be a move away from the SUSE target_mod_rbd to more generic device handling, but I do not know what the current status of this work is. We have made

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread David Disseldorp
Hi Maged, Thanks for the announcement - good luck with the project! One comment... On Mon, 17 Oct 2016 13:37:29 +0200, Maged Mokhtar wrote: > if you are referring to clustering reservations through VAAI: We are using > upstream code from SUSE Enterprise Storage which adds clustered support for

Re: [ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Jon Morby (FidoNet)
Thanks Yes … working again … *phew* :) > On 17 Oct 2016, at 14:01, Dan Milon wrote: > > debian/jessie/jewel is fine now. — Jon Morby FidoNet - the internet made simple! tel: 0345 004 3050 / fax: 0345 004 3051 twitter: @fido | skype://jmorby | web: https://www.fido.net

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Lars Marowsky-Bree
On 2016-10-17T13:37:29, Maged Mokhtar wrote: Hi Maged, glad to see our patches caught your attention. You're aware that they are being upstreamed by David Disseldorp and Mike Christie, right? You don't have to uplift patches from our backported SLES kernel ;-) Also, curious why you based this o

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Oliver Dzombic
Hi Maged, sounds very valid. And as soon as we can, we will try it out. Thank you, and good luck with your project! -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Address: IP Interactive UG ( haftungsbeschraenkt ) Zum Sonnenberg 1

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Maged Mokhtar
Hi Oliver, This is our first beta version; we do not support cache tiering. We definitely intend to support it. Cheers /maged -- From: "Oliver Dzombic" Sent: Monday, October 17, 2016 2:05 PM To: Subject: Re: [ceph-users] new Open Source Ceph ba

Re: [ceph-users] Ubuntu repo's broken

2016-10-17 Thread Alfredo Deza
On Mon, Oct 17, 2016 at 4:54 AM, Jon Morby (Fido) wrote: > full output at https://pastebin.com/tH65tNQy > > > cephadmin@cephadmin:~$ cat /etc/apt/sources.list.d/ceph.list > deb https://download.ceph.com/debian-jewel/ xenial main > > > oh and fyi > [osd04][WARNIN] W: > https://download.ceph.com/de

Re: [ceph-users] Missing arm64 Ubuntu packages for 10.2.3

2016-10-17 Thread Alfredo Deza
On Fri, Oct 14, 2016 at 5:42 PM, Stillwell, Bryan J wrote: > On 10/14/16, 2:29 PM, "Alfredo Deza" wrote: > >>On Thu, Oct 13, 2016 at 5:19 PM, Stillwell, Bryan J >> wrote: >>> On 10/13/16, 2:32 PM, "Alfredo Deza" wrote: >>> On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J wrote:

Re: [ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Dan Milon
debian/jessie/jewel is fine now. On 10/17/2016 02:36 PM, Jon Morby (FidoNet) wrote: > Hi Dan > > The repos do indeed seem to be messed up …. it’s been like it for at > least 4 days now (since everything went offline) > > I raised it via IRC over the weekend and also on this list on Saturday … > >

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Oliver Dzombic
Hi Maged, thank you for your clarification! That makes it interesting. I have read that your base is ceph 0.94; in this version using a cache tier is not recommended, if I remember correctly. Does your code modification also take care of this issue? -- Mit freundlichen Gruessen / Best regards O

Re: [ceph-users] resolve split brain situation in ceph cluster

2016-10-17 Thread Manuel Lausch
Hi Gregory, each datacenter has its own IP subnet which is routed. We simultaneously created iptables rules on each host which drop all packets incoming from and outgoing to the other datacenter. After this our application wrote to DC A, where 3 of the 5 monitor nodes are. Now we modified the monmap in B (r

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Maged Mokhtar
Hi Oliver, if you are referring to clustering reservations through VAAI: We are using upstream code from SUSE Enterprise Storage which adds clustered support for VAAI (compare and write, write same) in the kernel as well as in ceph (implemented as atomic OSD operations). We have tested VMware

Re: [ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Jon Morby (FidoNet)
Hi Dan, The repos do indeed seem to be messed up … it’s been like this for at least 4 days now (since everything went offline). I raised it via IRC over the weekend and also on this list on Saturday … All the mirrors seem to be affected too (GIGO, I guess) :( Jon > On 17 Oct 2016, at 11:33, Dan M

Re: [ceph-users] Appending to an erasure coded pool

2016-10-17 Thread James Norman
Hi Gregory, Many thanks for your reply. I couldn't spot any resources that describe/show how you can successfully write / append to an EC pool with the librados API on those links. Do you know of any such examples or resources? Or is it just simply not possible? Best regards, James Norman >

[ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Dan Milon
Hello, I'm trying to install ceph jewel from the debian repository, but it seems to be in a very weird state. ceph, ceph-mon, ceph-osd exist in the pool, but the Packages file does not have any of them. https://download.ceph.com/debian-jewel/dists/jessie/main/binary-amd64/Packages The other days

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-17 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > William Josefsson > Sent: 17 October 2016 10:39 > To: n...@fisk.me.uk > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] RBD with SSD journals and SAS OSDs > > hi nick, I earlier did

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-17 Thread William Josefsson
hi nick, I earlier did cpupower frequency-set --cpu-governor performance on all my hosts, which bumped all CPUs up to almost max speed or more. It didn't really help much, and I still experience 5-10ms latency in my fio benchmarks in VMs with this job description. Is there anything else I can do

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-17 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > William Josefsson > Sent: 17 October 2016 09:31 > To: Christian Balzer > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] RBD with SSD journals and SAS OSDs > > Thx Christian for he

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Wei Jin
On Mon, Oct 17, 2016 at 3:16 PM, Somnath Roy wrote: > Hi Sage et al., > > I know this issue has been reported a number of times in the community and attributed to > either network issues or unresponsive OSDs. > Recently, we are seeing this issue when our all-SSD cluster (Jewel based) is > stressed with larg

Re: [ceph-users] Ubuntu repo's broken

2016-10-17 Thread Jon Morby (Fido)
full output at https://pastebin.com/tH65tNQy cephadmin@cephadmin:~$ cat /etc/apt/sources.list.d/ceph.list deb https://download.ceph.com/debian-jewel/ xenial main oh and fyi [osd04][WARNIN] W: https://download.ceph.com/debian-jewel/dists/xenial/InRelease: Signature by key 08B73419AC32B4E966C1A

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-17 Thread William Josefsson
Thanks Christian for helping troubleshoot the latency issues. I have attached my fio job template below. To eliminate the possibility that the VM is the bottleneck, I've created a 128GB, 32-vCPU flavor. Here's the latest fio benchmark: http://pastebin.ca/raw/3729693 I'm trying to benchmark t
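
The job template referenced here is not reproduced in this digest; purely as an illustration of the kind of random-write job being discussed, a hypothetical fio file might look like the following (device path, block size, queue depth and job count are placeholders, not the poster's actual settings):

[global]
ioengine=libaio
direct=1
time_based=1
runtime=120
group_reporting=1

[randwrite-4k]
filename=/dev/vdb
rw=randwrite
bs=4k
iodepth=32
numjobs=4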

Re: [ceph-users] Ubuntu repo's broken

2016-10-17 Thread Vy Nguyen Tan
Hello, I have the same problem. I am using Debian 8.6 and ceph-deploy 1.5.36. Logs from ceph-deploy: [hv01][INFO ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph-os

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Pavan Rallabhandi
Regarding mon_osd_min_down_reports: I was looking at it recently, and this could provide some insight: https://github.com/ceph/ceph/commit/0269a0c17723fd3e22738f7495fe017225b924a4 Thanks! On 10/17/16, 1:36 PM, "ceph-users on behalf of Somnath Roy" wrote: Thanks Piotr and Wido for the quick respon

Re: [ceph-users] Even data distribution across OSD - Impossible Achievement?

2016-10-17 Thread Christian Balzer
Hello, On Mon, 17 Oct 2016 09:42:09 +0200 (CEST) i...@witeq.com wrote: > Hi Wido, > > thanks for the explanation. Generally speaking, what is the best practice when > a couple of OSDs are reaching near-full capacity? > This has (of course) been discussed here many times. Google is your frien

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Somnath Roy
Thanks Piotr and Wido for the quick response. @Wido, yes, I thought of trying those values, but I am seeing in the log messages that at least 7 OSDs are reporting failure, so I didn't try. BTW, I found that the default mon_osd_min_down_reporters is 2, not 1, and latest master does not have mon_osd_min_down_rep
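
The settings under discussion correspond to ceph.conf options roughly as follows; the names are the Jewel-era ones, the values are only examples to illustrate the direction of the change (not recommendations), and, as the linked commit describes, mon_osd_min_down_reports was later removed:

# Example values only: require more distinct reporters before marking an OSD
# down, and give heartbeats more slack under heavy load.
[mon]
mon osd min down reporters = 10    # default 2

[osd]
osd heartbeat grace = 60           # default 20 seconds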

Re: [ceph-users] Even data distribution across OSD - Impossible Achievement?

2016-10-17 Thread info
Hi Wido, thanks for the explanation. Generally speaking, what is the best practice when a couple of OSDs are reaching near-full capacity? I could set their weight to something like 0.9, but this seems only a temporary solution. Of course I can add more OSDs, but this radically changes my prospe
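
For completeness, the usual short-term levers for a handful of near-full OSDs are the reweight commands below (the OSD IDs, weights and utilization threshold are illustrative); adding OSDs remains the long-term fix:

# Nudge down OSDs more than 10% above the average utilization:
ceph osd reweight-by-utilization 110
# Or lower a specific OSD's CRUSH weight:
ceph osd crush reweight osd.12 1.60
# Or set its temporary 0.0-1.0 override weight:
ceph osd reweight 12 0.9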

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Wido den Hollander
> On 17 October 2016 at 9:16, Somnath Roy wrote: > > > Hi Sage et al., > > I know this issue has been reported a number of times in the community and attributed to > either network issues or unresponsive OSDs. > Recently, we are seeing this issue when our all-SSD cluster (Jewel based) is > stressed wit

Re: [ceph-users] Ubuntu repo's broken

2016-10-17 Thread Wido den Hollander
> On 16 October 2016 at 11:57, "Jon Morby (FidoNet)" wrote: > > > Morning > > It’s been a few days now since the outage, however we’re still unable to > install new nodes; it seems the repos are broken … and have been for at > least 2 days now (so not just a brief momentary issue caused by

[ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Somnath Roy
Hi Sage et al., I know this issue has been reported a number of times in the community and attributed to either network issues or unresponsive OSDs. Recently, we are seeing this issue when our all-SSD cluster (Jewel based) is stressed with large block sizes and very high QD. When lowering QD it is working just f