[ceph-users] Secure way to wipe a Ceph cluster

2018-07-27 Thread Christopher Kunz
Hello all, as part of deprovisioning customers, we regularly have the task of wiping their Ceph clusters. Is there a certifiable, GDPR compliant way to do so without physically shredding the disks? Best regards, --ck
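
For the mechanical part of such a wipe (separate from the certification question the thread is really about), one hedged approach is to tear down the OSDs and then overwrite or secure-erase the underlying devices. The device path below is only an example:

    # remove Ceph's LVM/partition metadata from an OSD device
    ceph-volume lvm zap --destroy /dev/sdb
    # overwrite the raw device (slow, mainly for spinning disks)
    shred -v -n 1 /dev/sdb
    # or trigger the drive's built-in ATA secure erase, if the hardware supports it
    hdparm --user-master u --security-set-pass Eins /dev/sdb
    hdparm --user-master u --security-erase Eins /dev/sdb

Whether any of this satisfies a specific certification or GDPR requirement is a legal/compliance question rather than a technical one.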

Re: [ceph-users] Secure way to wipe a Ceph cluster

2018-07-27 Thread Robert Sander
Hi,

On 27.07.2018 09:00, Christopher Kunz wrote:
>
> as part of deprovisioning customers, we regularly have the task of
> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
> to do so without physically shredding the disks?

In the past I have used DBAN from https://dban.org/,

Re: [ceph-users] [Ceph-maintainers] download.ceph.com repository changes

2018-07-27 Thread Fabian Grünbichler
On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
> Hi all,
>
> After the 12.2.6 release went out, we've been thinking on better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
>
> The way our repos are structured toda

[ceph-users] Issue with Rejoining MDS

2018-07-27 Thread Guillaume Lefranc
Hi, I am trying to repair a failed cluster with multiple MDS, but the failed MDS crashes on restart and won't stay up. I could not find a bug report for that specific failure. Here are the logs:

    -9> 2018-07-27 10:40:45.591137 7f239ae9a700 5 mds.lift-2 handle_mds_map epoch 3562 from mds.2

[ceph-users] understanding pool capacity and usage

2018-07-27 Thread Anton Aleksandrov
Hello, this might sound strange, but I could not find an answer in Google or the docs; maybe it is called something else. I don't understand the pool capacity policy and how to set/define it. I have created a simple cluster for CephFS on 4 servers, each with a 30GB disk, so 120GB in total. On top I built a replicated
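
As a rough sketch of how usable space relates to raw space (the pool name and replica count below are assumptions, not taken from the thread): with a replicated pool of size 3, 120 GB of raw capacity yields roughly 40 GB of usable space before overhead, and `ceph df` reports the per-pool MAX AVAIL accordingly. A quota can cap a pool explicitly:

    ceph df                               # GLOBAL raw usage plus per-pool USED / MAX AVAIL
    ceph osd pool get cephfs_data size    # replication factor; 3 replicas -> roughly 120 GB / 3 usable
    # optional hard cap on the pool (example value: 40 GiB)
    ceph osd pool set-quota cephfs_data max_bytes $((40 * 1024 * 1024 * 1024))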

Re: [ceph-users] Preventing pool from allocating PGs to OSDs not belonging to the device class defined in crush rule

2018-07-27 Thread Benoit Hudzia
Hi, I am still trying to understand what is really happening under the hood. I did more tests and collected the data. I changed `osd max pg per osd hard ratio` to 16384, but this didn't change anything. Scenario: 4 nodes, 4 disks per node, Ceph 12.2.7.
1. create 4 OSDs with a device class
2. create pool w
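
For reference, a minimal sketch of pinning a pool to a device class via a CRUSH rule (the rule, pool, and class names are placeholders, not taken from this setup):

    # rule that only selects OSDs of class "ssd", one replica per host
    ceph osd crush rule create-replicated ssd-only default host ssd
    # attach the pool to that rule
    ceph osd pool set mypool crush_rule ssd-only
    # the shadow hierarchy shows which OSDs a class-restricted rule can actually use
    ceph osd crush tree --show-shadow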

[ceph-users] VM fails to boot after evacuation when it uses ceph disk

2018-07-27 Thread Eddy Castillon
Hi dear folks, this looks critical to me; take a look at this issue if you are going to evacuate a compute node. Basically, the evacuation process works fine (on the Nova side); however, all the virtual machines show a kernel panic. https://bugs.launchpad.net/nova/+bug/1781878 Regards - Eddy

Re: [ceph-users] [Ceph-maintainers] download.ceph.com repository changes

2018-07-27 Thread Alfredo Deza
On Fri, Jul 27, 2018 at 3:28 AM, Fabian Grünbichler wrote:
> On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installi

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-27 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev wrote:
>
> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote:
> >>
> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev wrote:
> >> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dill

[ceph-users] Converting to dynamic bucket resharding in Luminous

2018-07-27 Thread Robert Stanford
I have a Jewel Ceph cluster with RGW index sharding enabled. I've configured the index to have 128 shards. I am upgrading to Luminous. What will happen if I enable dynamic bucket index resharding in ceph.conf? Will it maintain my 128 shards (the buckets are currently empty), and will it split
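
For context, the Luminous knobs involved look roughly like the sketch below (the values shown are the documented defaults and the bucket name is a placeholder). Dynamic resharding is triggered when a bucket's objects-per-shard count exceeds the threshold, so an empty bucket that already has 128 shards would not be touched until it grows:

    # ceph.conf, [global] or the rgw section
    rgw dynamic resharding = true
    rgw max objs per shard = 100000

    # inspect the shard count and any pending reshard activity
    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin reshard list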

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-27 Thread Alex Gorbachev
On Fri, Jul 27, 2018 at 9:33 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev wrote:
>>
>> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
>> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote:
>> >>
>> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev

Re: [ceph-users] VM fails to boot after evacuation when it uses ceph disk

2018-07-27 Thread Paul Emmerich
Does your keyring have the "profile rbd" capabilities on the mon?

Paul

2018-07-27 13:49 GMT+02:00 Eddy Castillon:
> Hi dear folks,
>
> This looks for me something so critical, take a look on this issue if you
> will evacuate a compute.
>
> Basically, the evacuation process works fine (nova sid

Re: [ceph-users] VM fails to boot after evacuation when it uses ceph disk

2018-07-27 Thread Jason Dillaman
On Fri, Jul 27, 2018 at 10:25 AM Paul Emmerich wrote:
> Does your keyring have the "profile rbd" capabilities on the mon?
>
+1 -- your Nova user will require the privilege to blacklist the dead peer from the cluster in order to break the exclusive lock.
>
> Paul
>
> 2018-07-27 13:49 GMT+02:0
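
A hedged sketch of what that usually looks like (the client name and pool names are assumptions; adjust them to the actual Nova/Cinder setup). The `profile rbd` mon capability includes the permission to blacklist a dead client, which is exactly what breaking the stale exclusive lock requires:

    # inspect the caps the Nova client currently has
    ceph auth get client.nova
    # grant the rbd profiles (the mon profile includes blacklist permission)
    ceph auth caps client.nova \
        mon 'profile rbd' \
        osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images'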

Re: [ceph-users] Issue with Rejoining MDS

2018-07-27 Thread Yan, Zheng
On Fri, Jul 27, 2018 at 4:47 PM Guillaume Lefranc wrote:
>
> Hi,
>
> I am trying to repair a failed cluster with multiple MDS, but the failed MDS
> crashes on restart and won't stay up. I could not find a bug report for that
> specific failure. Here are the logs:
>
> -9> 2018-07-27 10:40:45.

Re: [ceph-users] Secure way to wipe a Ceph cluster

2018-07-27 Thread Daniel Gryniewicz
On 07/27/2018 03:03 AM, Robert Sander wrote:
> Hi,
>
> On 27.07.2018 09:00, Christopher Kunz wrote:
>> as part of deprovisioning customers, we regularly have the task of
>> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
>> to do so without physically shredding the disks?
>
> In the past

Re: [ceph-users] Secure way to wipe a Ceph cluster

2018-07-27 Thread Patrick Donnelly
Hello Christopher,

On Fri, Jul 27, 2018 at 12:00 AM, Christopher Kunz wrote:
> Hello all,
>
> as part of deprovisioning customers, we regularly have the task of
> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
> to do so without physically shredding the disks?

This should

[ceph-users] v13.2.1 Mimic released

2018-07-27 Thread Sage Weil
This is the first bugfix release of the Mimic v13.2.x long term stable release series. This release contains many fixes across all components of Ceph, including a few security fixes. We recommend that all users upgrade.

Notable Changes
---------------
* CVE 2018-1128: auth: cephx authorizer subjec

[ceph-users] cephfs tell command not working

2018-07-27 Thread Scottix
ceph tell mds.0 client ls
2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.89408629 ms_handle_reset on 10.10.1.63:6800/1750774943
Error EPERM: problem getting command descriptions from mds.0

mds log:
2018-07-27 12:32:40.342753 7fc9c1239700 1 mds.CephMon203 handle_command: received command from cl
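
One thing worth checking (a guess based on the EPERM, not a confirmed diagnosis from this thread) is whether the key the `ceph` CLI is using carries an mds capability at all; a key with only mon/osd caps produces exactly this "problem getting command descriptions" error. A sketch, assuming the command runs as client.admin:

    # show the caps attached to the key in use
    ceph auth get client.admin
    # note: "ceph auth caps" replaces the whole cap set, so list every cap the key should keep
    ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'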

Re: [ceph-users] Slack-IRC integration

2018-07-27 Thread Matt . Brown
Hello, Can you please add me to the ceph-storage slack channel? Thanks! - Matt Brown | Lead Engineer | Infrastructure Services – Cloud & Compute | Target | 7000 Target Pkwy N., NCE-0706 | Brooklyn Park, MN 55445 | 612.304.4956

Re: [ceph-users] v13.2.1 Mimic released

2018-07-27 Thread Bryan Stillwell
I decided to upgrade my home cluster from Luminous (v12.2.7) to Mimic (v13.2.1) today and ran into a couple of issues:
1. When restarting the OSDs during the upgrade it seems to forget my upmap settings. I had to manually return them to the way they were with commands like:
   ceph osd pg-upmap-ite
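
For reference, the upmap exceptions can be captured and replayed with something like the sketch below (the PG id and OSD ids are made-up examples):

    # record the current upmap exceptions before restarting OSDs
    ceph osd dump | grep pg_upmap_items > upmap-backup.txt
    # re-apply a single exception: move PG 1.7's replica from osd.4 to osd.12
    ceph osd pg-upmap-items 1.7 4 12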

[ceph-users] ceph lvm question

2018-07-27 Thread Satish Patel
I have a simple question: I want to use LVM with BlueStore (it's the recommended method). If I have only a single SSD disk for the OSD and I want to keep the journal + data on the same disk, how should I create the LVM to accommodate that? Do I need to do the following?
pvcreate /dev/sdb
vgcreate vg0 /dev/sdb
Now I hav
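
A minimal sketch of the single-device case (the VG/LV names and device path are arbitrary examples). Note that BlueStore has no separate journal; if no dedicated WAL/DB device is specified, the RocksDB metadata and WAL simply live on the same data device:

    pvcreate /dev/sdb
    vgcreate ceph-vg0 /dev/sdb
    lvcreate -l 100%FREE -n osd0 ceph-vg0
    # one command creates, prepares and activates the OSD; WAL/DB stay on the same LV
    ceph-volume lvm create --bluestore --data ceph-vg0/osd0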

[ceph-users] Setting up Ceph on EC2 i3 instances

2018-07-27 Thread Mansoor Ahmed
Hello, We are working on setting up Ceph on AWS i3 instances, which have NVMe SSDs as instance storage, to build our own EBS-like service that spans multiple availability zones. We want to achieve better performance compared to EBS with provisioned IOPS. I thought it would be good to reach out to the community to

[ceph-users] rbdmap service failed but exit 1

2018-07-27 Thread xiang . dai
Hi! I found an rbdmap service issue:

[root@dx-test ~]# systemctl status rbdmap
● rbdmap.service - Map RBD devices
   Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled)
   Active: active (exited) (Result: exit-code) since 六 2018-07-28 13:55:01 CST; 11min ago
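
If I read the unit right, rbdmap is a oneshot service that just walks /etc/ceph/rbdmap at start, so "active (exited)" with an exit-code result often points at a malformed map file or a missing keyring rather than at the unit itself. A hedged example of the expected file format (the pool, image, and client names are placeholders):

    # /etc/ceph/rbdmap -- one "pool/image  options" entry per line
    rbd/myimage  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    systemctl restart rbdmap   # re-run the mapping after fixing the file
    rbd showmapped             # verify which images actually got mapped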