[ceph-users] restore OSD node after OS failure

2016-05-20 Thread Iban Cabrillo
Hi cephers, could someone tell me the right steps for bringing an OSD server back to life? The data and journal disks seem to be OK, but the dual SD slot holding the OS has failed. Regards, I -- Iban Cabrillo Bartolome Instituto de F
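
If the data and journal partitions really are intact, a minimal sketch of the usual recovery path, assuming GPT/ceph-disk prepared OSDs and that ceph.conf plus the bootstrap keyrings can be copied back from a surviving node:

    # reinstall the OS, install the same ceph release, restore /etc/ceph/ceph.conf
    # and the OSD bootstrap keyring from another node, then:
    ceph-disk activate-all          # rediscovers and starts OSDs on ceph-typed partitions
    ceph-disk activate /dev/sdb1    # or activate a single data partition (device illustrative)
    ceph osd tree                   # confirm the OSDs rejoin under the correct host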

[ceph-users] free krbd size in ubuntu12.04 in ceph 0.67.9

2016-05-20 Thread lin zhou
Hi cephers, we only use krbd with ceph, and it has worked well for nearly two years, but now I face a size problem. I have 7 nodes with 10 3TB OSDs each, running ceph 0.67.9 on ubuntu12.04. I know it is too old, but an upgrade is beyond my control. We are now at 80% usage, so we started to delete historic unneeded data, b
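
For reference, a rough way to see how much of an image is actually allocated in the cluster (pool and image names are illustrative) is to sum the extents reported by rbd diff:

    rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'
    ceph df                         # cluster-wide and per-pool usage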

Re: [ceph-users] Installing ceph monitor on Ubuntu denial: segmentation fault

2016-05-20 Thread Daniel Wilhelm
Hi, I am relieved to have found a solution to this problem. The ansible script for generating the key did not pass the key to the following command line and therefore sent an empty string to this script (see monitor_secret). ceph-authtool /var/lib/ceph/tmp/keyring.mon.{{ monitor_name }} --create-k
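
A minimal sketch of generating the monitor key explicitly and handing it to ceph-authtool, so an empty monitor_secret cannot slip through (path and keyring name are illustrative):

    KEY=$(ceph-authtool --gen-print-key)
    ceph-authtool /var/lib/ceph/tmp/keyring.mon.$(hostname -s) \
        --create-keyring --name=mon. --add-key="$KEY" --cap mon 'allow *'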

Re: [ceph-users] Recommended OSD size

2016-05-20 Thread gjprabu
Hi Christian, thanks for your reply; our performance requirements are as below. It would be very helpful if you could provide details for the scenario below. As of now the data usage is 6 TB; in future it will be 10 TB. Per ceph client read and write Read :- kB/s 57726 Write :-
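
For requirement discussions like this, a quick baseline can be taken with rados bench against a throwaway test pool (pool name and runtime are illustrative):

    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados -p testpool cleanup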

Re: [ceph-users] dd testing from within the VM

2016-05-20 Thread Ketil Froyn
I'm a lurker here and don't know much about ceph, but: If fdatasync hardly makes a difference, then either it's not being honoured (which would be a major problem), or there's something else that is a bottleneck in your test (more likely). It's not uncommon for a poor choice of block size (bs) to
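
As an illustration, the effect usually shows up when comparing a buffered run against synced and direct runs at the same block size (file name and sizes are illustrative):

    dd if=/dev/zero of=testfile bs=4M count=256                  # buffered, optimistic numbers
    dd if=/dev/zero of=testfile bs=4M count=256 conv=fdatasync   # flushes before reporting
    dd if=/dev/zero of=testfile bs=4M count=256 oflag=direct     # bypasses the page cache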

[ceph-users] NVRAM cards as OSD journals

2016-05-20 Thread EP Komarla
Hi, I am contemplating using an NVRAM card for OSD journals in place of SSD drives in our ceph cluster. Configuration: * 4 Ceph servers * Each server has 24 OSDs (each OSD is a 1TB SAS drive) * 1 PCIe NVRAM card of 16GB capacity per ceph server * Both Client &
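
With 24 OSDs sharing a 16GB card, each journal gets roughly 650MB. A hedged sketch of the relevant ceph.conf entries, assuming the card is partitioned per OSD (device path is hypothetical):

    [osd]
    osd journal size = 600          ; MB, sized so 24 journals fit on the 16GB card

    [osd.0]
    osd journal = /dev/nvram0p1     ; hypothetical per-OSD partition on the NVRAM card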

[ceph-users] radosgw hammer -> jewel upgrade (default zone & region config)

2016-05-20 Thread Jonathan D. Proulx
Hi All, I saw the previous thread on this related to http://tracker.ceph.com/issues/15597 and Yehuda's fix script https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone Running this seems to have landed me in a weird state. I can create and get new buckets and objects
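
Independent of the fix script, a few standard radosgw-admin calls show what the jewel gateway currently treats as its default zone and zonegroup (inspection only, not a substitute for the script):

    radosgw-admin zone get --rgw-zone=default
    radosgw-admin zonegroup get --rgw-zonegroup=default
    radosgw-admin period get
    radosgw-admin period update --commit    # only once the zone/zonegroup output looks sane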

Re: [ceph-users] radosgw hammer -> jewel upgrade (default zone & region config)

2016-05-20 Thread Yehuda Sadeh-Weinraub
On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx wrote: > Hi All, > > I saw the previous thread on this related to > http://tracker.ceph.com/issues/15597 > > and Yehuda's fix script > https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone > > Running this seems to hav

Re: [ceph-users] radosgw hammer -> jewel upgrade (default zone & region config)

2016-05-20 Thread Jonathan D. Proulx
On Fri, May 20, 2016 at 09:21:58AM -0700, Yehuda Sadeh-Weinraub wrote: :On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx wrote: :> Hi All, :> :> I saw the previous thread on this related to :> http://tracker.ceph.com/issues/15597 :> :> and Yehuda's fix script :> https://raw.githubusercontent.c

Re: [ceph-users] dense storage nodes

2016-05-20 Thread Anthony D'Atri
[ too much to quote ] Dense nodes often work better for object-focused workloads than block-focused ones, since the impact of delayed operations is simply speed vs. a tenant VM crashing. Re RAID5 volumes to decrease the number of OSDs: This sort of approach is getting increasing attention in that it br

Re: [ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes

2016-05-20 Thread Anthony D'Atri
> Ceph will not acknowledge a client write before all journals (replica > size, 3 by default) have received the data, so losing one journal SSD > will NEVER result in an actual data loss. Some say that all replicas must be written; others say that only min_size, 2 by default, must be written be
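
The two values being debated can at least be read directly off a pool (pool name is illustrative):

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size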

[ceph-users] Can't remove ceph filesystem

2016-05-20 Thread Ravi Nasani
Hi All, I am a newbie here. In latest CEPH (ceph-jewel), I am not able to remove the filesystem, getting error messages. Is this the right way to destroy the cephfs? $ ceph fs rm cephfs1 --yes-i-really-mean-it Error EINVAL: all MDS daemons must be inactive before removing filesystem Thanks, Ra

Re: [ceph-users] Can't remove ceph filesystem

2016-05-20 Thread Oliver Dzombic
Hi Ravi, well, is the answer not already written in there? Are all MDS daemons off? Please send the output of ceph -s :) -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP Interactive UG ( haftungsbeschraenkt ) Zum Sonnenberg 1-3 63571 Gelnhausen HRB 93

[ceph-users] Unsubscribe

2016-05-20 Thread Wade Holler

[ceph-users] OSDs automount all devices on a san

2016-05-20 Thread Andrus, Brian Contractor
All, I have found an issue with ceph OSDs that are on a SAN and multipathed. It may not matter that they are multipathed, but that is the setup in which I found the issue. Our setup has an InfiniBand network which uses SRP to annunciate block devices on a DDN. Every LUN can be seen by every

Re: [ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes

2016-05-20 Thread EP Komarla
So, which is correct, all replicas must be written or only min_size before ack? But for me the takeaway is that writes are protected - even if the journal drive crashes, I am covered. - epk -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of A

Re: [ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes

2016-05-20 Thread Anthony D'Atri
You should be protected against single component failures, yes, that's the point of journals. It's important to ensure that the on-disk volatile cache -- these days in the 8-128MB range -- remains turned off; otherwise it usually presents an opportunity for data loss, especially when power drops. Di
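
A hedged way to check and disable that volatile write cache on SATA drives (device name is illustrative; SAS drives may need sdparm, and behaviour varies by controller):

    hdparm -W /dev/sdb      # query the current write-cache setting
    hdparm -W 0 /dev/sdb    # disable it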

Re: [ceph-users] OSDs automount all devices on a san

2016-05-20 Thread Christian Balzer
Hello, On Fri, 20 May 2016 21:47:43 + Andrus, Brian Contractor wrote: > All, > I have found an issue with ceph OSDs that are on a SAN and Multipathed. > It may not matter that they are multipathed, but that is how our setup > is where I found the issue. > Your problem/issue is that Ceph is
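
To see why udev/ceph-disk grabs a given LUN, the GPT partition type GUID can be inspected directly (device and partition number are illustrative):

    sgdisk --info=1 /dev/mapper/mpatha
    # the "Partition GUID code" line is what the ceph udev rules match on to decide
    # whether a partition is treated as ceph data or journal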

Re: [ceph-users] Can't remove ceph filesystem

2016-05-20 Thread Christian Balzer
On Fri, 20 May 2016 14:14:45 -0700 Ravi Nasani wrote: > Hi All, > > I am a newbie here. In latest CEPH (ceph-jewel), I am not able to > remove the filesystem, getting error messages. > > > Is this the right way to destroy the cephfs? > > $ ceph fs rm cephfs1 --yes-i-really-mean-it > Error EINV
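
For reference, a hedged sketch of the sequence that normally has to precede ceph fs rm on jewel (the filesystem name comes from the question; MDS host and rank handling are illustrative):

    ceph mds stat                             # see which ranks are still active
    systemctl stop ceph-mds@$(hostname -s)    # stop the MDS daemon on each MDS host
    ceph mds fail 0                           # mark rank 0 as failed
    ceph fs rm cephfs1 --yes-i-really-mean-it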

Re: [ceph-users] free krbd size in ubuntu12.04 in ceph 0.67.9

2016-05-20 Thread Christian Balzer
On Fri, 20 May 2016 17:00:03 +0800 lin zhou wrote: > Hi cephers, > we only use krbd with ceph, and it has worked well for nearly two > years, but now I face a size problem. > > I have 7 nodes with 10 3TB OSDs each, running ceph 0.67.9 on > ubuntu12.04. I know it is too old, but an upgrade is beyond my control. > > an
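
If the running kernel supports it, freed filesystem blocks can be handed back to the cluster with a discard pass; on a 12.04-era kernel krbd discard support is unlikely, in which case this is a no-op (mount point is illustrative):

    fstrim -v /mnt/rbdfs
    ceph df                 # compare pool usage before and after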

Re: [ceph-users] dense storage nodes

2016-05-20 Thread Christian Balzer
On Thu, 19 May 2016 10:26:37 -0400 Benjeman Meekhof wrote: > Hi Christian, > > Thanks for your insights. To answer your question the NVMe devices > appear to be some variety of Samsung: > > Model: Dell Express Flash NVMe 400GB > Manufacturer: SAMSUNG > Product ID: a820 > Alright, these appear