Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Mahesh Jambhulkar
Thanks for the information Jason! We have a few concerns: 1. Following is our ceph configuration. Is there something that needs to be changed here? #cat /etc/ceph/ceph.conf [global] fsid = 0e1bd4fe-4e2d-4e30-8bc5-cb94ecea43f0 mon_initial_members = cephlarge mon_host = 10.0.0.188 auth_cluster_r
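
For what it's worth, the client-side options most often poked at in qemu-img/rbd import comparisons live under [client] rather than [global]; the values below are illustrative assumptions, not recommendations:

    [client]
        # librbd write-back cache
        rbd cache = true
        rbd cache writethrough until flush = true
        # 0 keeps readahead active regardless of bytes read (tried later in this thread)
        rbd readahead disable after bytes = 0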

[ceph-users] How to install Ceph on ARM?

2017-07-20 Thread Jaemyoun Lee
Dear all, I wonder how to install Ceph on ARM processors. When I executed "$ ceph-deploy install [hosts]" on x86_64, ceph-deploy installed Ceph v10.2.9. However, when it was executed on ARM64, the installation failed. $ ceph-deploy install ubuntu (...) [ubuntu][DEBUG ] add deb repo to /et
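
One workaround sketch, assuming packages for the target release and arm64 actually exist in whichever repository you point at: configure the repository on the ARM node yourself, then tell ceph-deploy not to touch the repo files.

    # on the ARM node: add a Ceph apt repo manually (URL is site-specific), then
    sudo apt-get update
    # from the admin node: install without letting ceph-deploy rewrite the repos
    ceph-deploy install --no-adjust-repos ubuntu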

Re: [ceph-users] Bucket policies in Luminous

2017-07-20 Thread Pritha Srivastava
- Original Message - > From: "Graham Allan" > To: "Pritha Srivastava" , "Adam C. Emerson" > > Cc: "Ceph Users" > Sent: Friday, July 21, 2017 3:17:02 AM > Subject: Re: [ceph-users] Bucket policies in Luminous > > Hmm, I have to admit to major user error here - my .s3cfg file was > poin

[ceph-users] How to remove a cache tier?

2017-07-20 Thread 许雪寒
Hi, everyone. We are trying to remove a cache tier from one of our clusters. However, when we try to issue command "ceph osd tier cache-mode {cachepool} forward" which is recommended in ceph's documentation, it prompted "'forward' is not a well-supported cache mode and may corrupt your data. p
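
For reference, the usual sequence for retiring a writeback cache tier looks roughly like the sketch below (pool names are placeholders); the truncated prompt above is most likely asking for the override flag, or for the better-supported proxy mode instead of forward:

    # stop the cache from absorbing new writes
    ceph osd tier cache-mode cachepool forward --yes-i-really-mean-it
    # flush and evict everything still held in the cache pool
    rados -p cachepool cache-flush-evict-all
    # detach the overlay and then the tier itself
    ceph osd tier remove-overlay basepool
    ceph osd tier remove basepool cachepool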

[ceph-users] Re: Re: calculate past_intervals wrong, lead to choose wrong authority osd, then osd assert(newhead >= log.tail)

2017-07-20 Thread Chenyehua
Reproduce as follows: HOST-A (osd 7), HOST-B (osd 21), HOST-C (osd 11). 1. shutdown HOST-C 2. for a long time, cluster has only HOST-A and HOST-B, write data 3. shutdown HOST-A => then start HOST-C => restart ceph-osd-all on HOST-B about 5 times, at the same time s

Re: [ceph-users] OSDs flapping

2017-07-20 Thread Gregory Farnum
At a glance that looks like the bug fixed by just-merged https://github.com/ceph/ceph/pull/16421 On Thu, Jul 20, 2017 at 1:02 PM Roger Brown wrote: > I'm on Luminous 12.1.1 and noticed I have flapping OSDs. Even with `ceph > osd set nodown`, the OSDs will catch signal Aborted and sometimes > Seg

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-20 Thread Brad Hubbard
On Fri, Jul 21, 2017 at 4:23 AM, Marcus Furlong wrote: > On 20 July 2017 at 12:49, Matthew Vernon wrote: >> Hi, >> >> On 18/07/17 05:08, Marcus Furlong wrote: >>> >>> On 22 March 2017 at 05:51, Dan van der Ster >> > wrote: >> >> >>> Apologies for reviving an old thread

[ceph-users] Fwd: cluster health checks

2017-07-20 Thread Gregory Meno
You might want to know about this change coming "This would be a semi-incompatible change with pre-luminous ceph CLI" cheers, Gregory -- Forwarded message -- From: Sage Weil Date: Tue, Jun 13, 2017 at 12:34 PM Subject: cluster health checks To: jsp...@redhat.com Cc: ceph-de...@v

Re: [ceph-users] Bucket policies in Luminous

2017-07-20 Thread Graham Allan
Hmm, I have to admit to major user error here - my .s3cfg file was pointing at our jewel cluster, not luminous - no wonder the bucket policy didn't work. A bit embarrassing... Having corrected that, I can now set bucket policies without a problem - thanks for the update! If I set a policy with
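
For anyone following along, a minimal sketch of applying a bucket policy with s3cmd against RGW (bucket name, user ARN, and actions below are illustrative assumptions):

    # policy.json - allow a second RGW user read access to the bucket
    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/otheruser"]},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
      }]
    }
    EOF
    s3cmd setpolicy policy.json s3://mybucket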

[ceph-users] Kraken rgw lifeycle processing nightly crash

2017-07-20 Thread Ben Hines
Still having this RGWLC crash once a day or so. I do plan to update to Luminous as soon as that is final, but it's possible this issue will still occur, so I was hoping one of the devs could take a look at it. My original suspicion was that it happens when lifecycle processing at the same time tha

[ceph-users] CephFS: concurrent access to the same file from multiple nodes

2017-07-20 Thread Andras Pataki
We are having some difficulties with cephfs access to the same file from multiple nodes concurrently. After debugging some large-ish applications with noticeable performance problems using CephFS (with the fuse client), I have a small test program to reproduce the problem. The core of the pro
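
The preview cuts off before the test program itself; as a rough stand-in only (not the author's program, paths are assumptions), the same kind of concurrent access to one file can be exercised from two clients with something like:

    # node A: keep appending to a shared file on the CephFS mount
    while true; do dd if=/dev/zero of=/mnt/cephfs/shared.dat bs=1M count=4 \
        oflag=append conv=notrunc status=none; done
    # node B (run concurrently): read the same file repeatedly and time each pass
    while true; do time dd if=/mnt/cephfs/shared.dat of=/dev/null bs=1M status=none; done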

[ceph-users] OSDs flapping

2017-07-20 Thread Roger Brown
I'm on Luminous 12.1.1 and noticed I have flapping OSDs. Even with `ceph osd set nodown`, the OSDs will catch signal Aborted and sometimes Segmentation fault 2-5 minutes after starting. I verified hosts can talk to each other on the cluster network. I've rebooted the hosts. I'm running out of ideas.

[ceph-users] New Ceph Community Manager

2017-07-20 Thread Patrick McGarry
Hey cephers, As most of you know, my last day as the Ceph community lead is next Wed (26 July). The good news is that we now have a replacement who will be starting immediately! I would like to introduce you to Leo Vaz who, until recently, has been working as a maintenance engineer with a focus o

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-20 Thread Marcus Furlong
On 20 July 2017 at 12:49, Matthew Vernon wrote: > Hi, > > On 18/07/17 05:08, Marcus Furlong wrote: >> >> On 22 March 2017 at 05:51, Dan van der Ster > > wrote: > > >> Apologies for reviving an old thread, but I figured out what happened >> and never documented it, so I

Re: [ceph-users] ceph-disk activate-block: not a block device

2017-07-20 Thread Willem Jan Withagen
Hi Roger, Device detection has recently changed (because FreeBSD does not have block devices). So it could very well be that this is an actual problem where something is still wrong. Please keep an eye out, and let me know if it comes back. --WjW On 20-7-2017 at 19:29, Roger Brown wrote: So I

Re: [ceph-users] ceph-disk activate-block: not a block device

2017-07-20 Thread Roger Brown
So I disabled ceph-disk and will chalk it up as a red herring to ignore. On Thu, Jul 20, 2017 at 11:02 AM Roger Brown wrote: > Also I'm just noticing osd1 is my only OSD host that even has an enabled > target for ceph-disk (ceph-disk@dev-sdb2.service). > > roger@osd1:~$ systemctl list-units cep

Re: [ceph-users] Writing data to pools other than filesystem

2017-07-20 Thread David
On Thu, Jul 20, 2017 at 3:05 PM, wrote: > Hello! > > My understanding is that I create on (big) pool for all DB backups written > to storage. > The clients have restricted access to a specific directory only, means > they can mount only this directory. > > Can I define a quota for a specific dire

Re: [ceph-users] ceph-disk activate-block: not a block device

2017-07-20 Thread Roger Brown
Also I'm just noticing osd1 is my only OSD host that even has an enabled target for ceph-disk (ceph-disk@dev-sdb2.service). roger@osd1:~$ systemctl list-units ceph* UNIT LOAD ACTIVE SUB DESCRIPTION ● ceph-disk@dev-sdb2.service loaded failed failed Ceph disk activatio

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-20 Thread Matthew Vernon
Hi, On 18/07/17 05:08, Marcus Furlong wrote: On 22 March 2017 at 05:51, Dan van der Ster mailto:d...@vanderster.com>> wrote: Apologies for reviving an old thread, but I figured out what happened and never documented it, so I thought an update might be useful. [snip detailed debugging] Than

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
Hi Ilya, While trying to reproduce the issue I've found that: - it is relatively easy to reproduce 5-6 minute hangs just by killing the active mds process (triggering failover) while writing a lot of data. Unacceptable timeout, but not the case of http://tracker.ceph.com/issues/15255 - it is hard t

[ceph-users] ceph-disk activate-block: not a block device

2017-07-20 Thread Roger Brown
I think I need help with some OSD trouble. OSD daemons on two hosts started flapping. At length, I rebooted host osd1 (osd.3), but the OSD daemon still fails to start. Upon closer inspection, ceph-disk@dev-sdb2.service is failing to start due to, "Error: /dev/sdb2 is not a block device" This is th
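
A few checks that can help narrow down the "not a block device" failure (a sketch; the device and unit names follow the ones quoted above):

    # confirm the kernel still sees sdb2 as a block device
    lsblk /dev/sdb2
    test -b /dev/sdb2 && echo "block device" || echo "not a block device"
    # see what ceph-disk itself makes of the partitions
    ceph-disk list
    # inspect the failing unit's log
    journalctl -u ceph-disk@dev-sdb2.service --no-pager | tail -n 50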

Re: [ceph-users] Degraded objects while OSD is being added/filled

2017-07-20 Thread Andras Pataki
for backfilling causing degraded objects to appear perhaps? I took a 'pg dump' before and after the change, as well as an 'osd tree' before and after. All these are available at http://voms.simonsfoundation.org:50013/m1Maf76sV1kS95spXQpijycmne92yjm/ceph-20170720/ Al

Re: [ceph-users] unsupported features with erasure-coded rbd

2017-07-20 Thread Ilya Dryomov
On Thu, Jul 20, 2017 at 4:26 PM, Roger Brown wrote: > What's the trick to overcoming unsupported features error when mapping an > erasure-coded rbd? This is on Ceph Luminous 12.1.1, Ubuntu Xenial, Kernel > 4.10.0-26-lowlatency. > > Steps to replicate: > > $ ceph osd pool create rbd_data 32 32 eras

[ceph-users] unsupported features with erasure-coded rbd

2017-07-20 Thread Roger Brown
What's the trick to overcoming unsupported features error when mapping an erasure-coded rbd? This is on Ceph Luminous 12.1.1, Ubuntu Xenial, Kernel 4.10.0-26-lowlatency. Steps to replicate: $ ceph osd pool create rbd_data 32 32 erasure default pool 'rbd_data' created $ ceph osd pool set rbd_data
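
For context, since Luminous an erasure-coded pool is normally used as the data pool of an image whose metadata lives in a replicated pool, and krbd may still need some image features disabled. A sketch (pool names follow the steps above; the image name is an assumption) - note the data-pool feature itself also needs a new enough kernel, so an older kernel may still refuse to map:

    # allow RBD to overwrite objects in the EC pool, keep metadata in the replicated pool
    ceph osd pool set rbd_data allow_ec_overwrites true
    rbd create --size 10G --data-pool rbd_data rbd/ecimage
    # if the kernel client rejects the image, drop the features it doesn't support
    rbd feature disable rbd/ecimage object-map fast-diff deep-flatten
    rbd map rbd/ecimage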

Re: [ceph-users] Writing data to pools other than filesystem

2017-07-20 Thread c . monty
Hello! My understanding is that I create one (big) pool for all DB backups written to storage. The clients have restricted access to a specific directory only, meaning they can mount only this directory. Can I define a quota for a specific directory, or only for the pool? And do I need to define t
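
On the quota part of the question: CephFS quotas are set per directory through extended attributes (and at this point enforced by the FUSE client), not per pool. A minimal sketch - the mount point and limits are assumptions:

    # limit a backup directory to 500 GB and 1M files
    setfattr -n ceph.quota.max_bytes -v 500000000000 /mnt/cephfs/backups/db1
    setfattr -n ceph.quota.max_files -v 1000000 /mnt/cephfs/backups/db1
    # read back the current limit
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/backups/db1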

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Ilya Dryomov
On Thu, Jul 20, 2017 at 3:23 PM, Дмитрий Глушенок wrote: > Looks like I have similar issue as described in this bug: > http://tracker.ceph.com/issues/15255 > Writer (dd in my case) can be restarted and then writing continues, but > until restart dd looks like hanged on write. > > 20 июля 2017 г.,

[ceph-users] Is it possible to get IO usage (read / write bandwidth) by client or RBD image?

2017-07-20 Thread Stéphane Klein
Hi, is it possible to get IO stats (read / write bandwidth) by client or image? I see this thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/042030.html and this script https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl There are better tools / method
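
One era-appropriate approach besides the linked script is to enable an admin socket for librbd clients and read their performance counters; this is per client process rather than per image, and the socket path below is an assumption:

    # on the client, add an admin socket under [client] in ceph.conf, e.g.:
    #   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    # then query the running librbd client's counters
    ceph daemon /var/run/ceph/ceph-client.admin.12345.67890.asok perf dump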

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
Looks like I have a similar issue as described in this bug: http://tracker.ceph.com/issues/15255 Writer (dd in my case) can be restarted and then writing continues, but until restart dd looks hung on write. > On 20 July 2017, at 16:12, Дмитрий Глушенок wrote: > > Hi, > > Repeated the

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Jason Dillaman
Running a similar 20G import test within a single OSD VM-based cluster, I see the following: $ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image (100.00/100%) real 3m20.722s user 0m18.859s sys 0m20.628s $ time rbd import ~/image Importing image: 100% complete...done. real 2m11.9
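
If the local qemu-img build is new enough, two convert options are worth adding to this comparison; treat the flags as assumptions to verify against qemu-img --help, since they only exist in more recent releases:

    # -m raises the number of parallel coroutines, -W allows out-of-order writes
    time qemu-img convert -p -O raw -f raw -m 16 -W ~/image rbd:rbd/image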

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
Hi, Repeated the test using kernel 4.12.0. OSD node crash seems to be handled fine now, but MDS crash still leads to hung writes to CephFS. Now it was enough just to crash the first MDS - failover didn't happen. At the same time a FUSE client was running on another client - no problems with i

Re: [ceph-users] Ceph MDS Q Size troubleshooting

2017-07-20 Thread David
Hi James On Tue, Jul 18, 2017 at 8:07 AM, James Wilkins wrote: > Hello list, > > I'm looking for some more information relating to CephFS and the 'Q' size, > specifically how to diagnose what contributes towards it rising up > > Ceph Version: 11.2.0.0 > OS: CentOS 7 > Kernel (Ceph Servers): 3.10
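
For watching the queue live, the MDS admin socket counters are the usual starting point (a sketch; the MDS name is an assumption):

    # one-line-per-second view of MDS counters, including the request queue
    ceph daemonperf mds.mds01
    # or dump all counters once for closer inspection
    ceph daemon mds.mds01 perf dump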

Re: [ceph-users] Writing data to pools other than filesystem

2017-07-20 Thread David
I think the multiple namespace feature would be more appropriate for your use case. So that would be multiple file systems within the same pools rather than multiple pools in a single filesystem. With that said, that might be overkill for your requirement. You might be able to achieve what you nee

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Oscar Segarra
Hi David, Thanks a lot for your comments. Do you know any document about installing Calamari on CentOS 7? I have not been able to find any... Thanks a lot. 2017-07-20 12:36 GMT+02:00 David Turner : > Luminous is not stable enough yet. I wouldn't consider kraken for > production though. It is

Re: [ceph-users] Re: How's cephfs going?

2017-07-20 Thread David
On Wed, Jul 19, 2017 at 7:09 PM, Gregory Farnum wrote: > > > On Wed, Jul 19, 2017 at 10:25 AM David wrote: > >> On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite < >> blair.bethwa...@gmail.com> wrote: >> >>> We are a data-intensive university, with an increasingly large fleet >>> of scientific in

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread David Turner
Luminous is not stable enough yet, and I wouldn't consider Kraken for production either. It is a stable release, but its update and release cycle isn't what I would suggest for production. More important is testing the version of Ceph you want to use before putting it in production. It doesn't mat

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Oscar Segarra
Hi, Thanks a lot for your answers... I'm preparing a production environment and that is the reason why I'm trying to deploy the kraken version, as it is the latest stable available. Is the mgr dashboard available from kraken? Do I have to upgrade to luminous? Is it stable enough? Thanks a lot for y

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Yair Magnezi
Has anyone used this new dashboard already and can share their experience? On Thu, Jul 20, 2017 at 12:11 PM, Christian Wuerdig < christian.wuer...@gmail.com> wrote: > Judging by the github repo, development on it has all but stalled, the > last commit was more than 3 months ago (https://github

Re: [ceph-users] Writing data to pools other than filesystem

2017-07-20 Thread c . monty
On 19 July 2017 at 17:34, "LOPEZ Jean-Charles" wrote: > Hi, > > you must add the extra pools to your current file system configuration: ceph fs add_data_pool > {fs_name} {pool_name} > > Once this is done, you just have to create some specific directory layout > within CephFS to modify > the na
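
Putting the two steps from that reply together, a minimal sketch (the file system, pool, and directory names are assumptions):

    # attach the extra pool to the file system
    ceph fs add_data_pool cephfs backups_pool
    # route everything created under this directory to the new pool
    setfattr -n ceph.dir.layout.pool -v backups_pool /mnt/cephfs/backups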

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Christian Wuerdig
Judging by the github repo, development on it has all but stalled; the last commit was more than 3 months ago (https://github.com/ceph/calamari/commits/master). Also there is the new dashboard in the new ceph-mgr daemon in Luminous - so my guess is that Calamari is pretty much dead. On Thu, Jul 2
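
For anyone who wants to try the mgr dashboard mentioned here on a Luminous test cluster, enabling it is a one-liner (a sketch; bind address and port are left at their defaults):

    # enable the built-in dashboard module on the active ceph-mgr
    ceph mgr module enable dashboard
    # list the modules to confirm it is on
    ceph mgr module ls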

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Mahesh Jambhulkar
Adding *rbd readahead disable after bytes = 0* did not help. [root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/ snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613- 1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Martin Palma
Hi, Calamari is deprecated, it was replaced by the ceph-mgr [0] from what I know. Bye, Martin [0] http://docs.ceph.com/docs/master/mgr/ On Wed, Jul 19, 2017 at 6:28 PM, Oscar Segarra wrote: > Hi, > > Anybody has been able to setup Calamari on Centos7?? > > I've done a lot of Google but I haven