Hello all,
as part of deprovisioning customers, we regularly have the task of
wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
to do so without physically shredding the disks?
Best regards,
--ck
Hi,
On 27.07.2018 09:00, Christopher Kunz wrote:
>
> as part of deprovisioning customers, we regularly have the task of
> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
> to do so without physically shredding the disks?
In the past I have used DBAN from https://dban.org/,
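If a software-only wipe is acceptable to your auditors (that part is a
legal question rather than a technical one), the per-disk sequence after
tearing down the cluster would look roughly like this; the OSD id and
device names are placeholders:

  ceph osd purge <osd-id> --yes-i-really-mean-it
  # remove the LVM/partition metadata that ceph-volume created
  ceph-volume lvm zap /dev/sdX --destroy
  # overwrite the raw device; for SSDs, blkdiscard or the drive's
  # built-in secure erase may be a better fit than an overwrite pass
  shred -v -n 1 -z /dev/sdX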
On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
> Hi all,
>
> After the 12.2.6 release went out, we've been thinking on better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
>
> The way our repos are structured toda
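On the client side, one stopgap while the repo question gets sorted out is
to pin the known bad version away; the file name and package glob below
are just examples for Debian/Ubuntu:

  # /etc/apt/preferences.d/ceph-block
  Package: ceph*
  Pin: version 12.2.6*
  Pin-Priority: -1

On yum-based systems an exclude= line in the repo file or the versionlock
plugin can serve the same purpose.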
Hi,
I am trying to repair a failed cluster with multiple MDS daemons, but the
failed MDS crashes on restart and won't stay up. I could not find a bug
report for that specific failure. Here are the logs:
-9> 2018-07-27 10:40:45.591137 7f239ae9a700 5 mds.lift-2
handle_mds_map epoch 3562 from mds.2
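To capture more than the short excerpt above, it is usually worth raising
the MDS debug level in ceph.conf on that host before the next restart
attempt (these are the generic "turn it up" values, nothing specific to
this crash):

  [mds]
      debug mds = 20
      debug ms = 1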
Hello,
Might sound strange, but I could not find an answer on Google or in the
docs; it might be called something else.
I don't understand the pool capacity policy and how to set/define it. I
have created a simple cluster for CephFS on 4 servers, each with a 30 GB
disk, so 120 GB in total. On top of that I built a replicated
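As a rough rule, usable space is the raw capacity divided by the replica
count, so with 4 x 30 GB and, say, size=3 you end up with about 40 GB
before overhead. If you want an explicit cap on top of that, pools can be
given quotas; the pool name and numbers below are just examples:

  # limit the data pool to ~30 GB or 1M objects
  ceph osd pool set-quota cephfs_data max_bytes 32212254720
  ceph osd pool set-quota cephfs_data max_objects 1000000
  # "ceph df" shows per-pool usage and MAX AVAIL
  ceph df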
Hi, still trying to understand what is really happening under the hood, I
did more tests and collected the data.
I changed `osd max pg per osd hard ratio` to 16384, but this didn't
change anything.
Scenario: 4 nodes, 4 disks per node, Ceph 12.2.7
1. create 4 OSDs with a device class
2. create pool w
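For reference, a minimal version of steps 1-2 would look like this; the
device class, rule and pool names are assumed, not taken from the actual
test:

  ceph osd crush set-device-class ssd osd.0 osd.1 osd.2 osd.3
  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd pool create testpool 128 128 replicated ssd-rule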
Hi dear folks,
This looks like something critical to me; take a look at this issue if you
plan to evacuate a compute node.
Basically, the evacuation process works fine (nova side); however, all the
virtual machines show a kernel panic.
https://bugs.launchpad.net/nova/+bug/1781878
Regards
- Eddy
On Fri, Jul 27, 2018 at 3:28 AM, Fabian Grünbichler
wrote:
> On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installi
On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev wrote:
>
> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> > wrote:
> >>
> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> >> wrote:
> >> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dill
I have a Jewel Ceph cluster with RGW index sharding enabled. I've
configured the index to have 128 shards. I am upgrading to Luminous. What
will happen if I enable dynamic bucket index resharding in ceph.conf? Will
it maintain my 128 shards (the buckets are currently empty), and will it
split
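As far as I understand it (worth double-checking against the Luminous
docs for your exact version), dynamic resharding only kicks in once a
bucket exceeds the per-shard object threshold, so pre-sharded empty
buckets are left alone until they grow. The relevant settings and checks
would look roughly like:

  # ceph.conf on the RGW nodes
  rgw dynamic resharding = true
  rgw max objs per shard = 100000

  radosgw-admin bucket limit check
  radosgw-admin reshard list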
On Fri, Jul 27, 2018 at 9:33 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev
> wrote:
>>
>> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
>> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
>> > wrote:
>> >>
>> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
Does your keyring have the "profile rbd" capabilities on the mon?
Paul
2018-07-27 13:49 GMT+02:00 Eddy Castillon :
> Hi dear folks,
>
> This looks like something critical to me; take a look at this issue if you
> plan to evacuate a compute node.
>
> Basically, the evacuation process works fine (nova sid
On Fri, Jul 27, 2018 at 10:25 AM Paul Emmerich
wrote:
> Does your keyring have the "profile rbd" capabilities on the mon?
>
+1 -- your Nova user will require the privilege to blacklist the dead peer
from the cluster in order to break the exclusive lock.
>
>
> Paul
>
> 2018-07-27 13:49 GMT+02:0
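For reference, the blacklist permission is part of the rbd mon profile, so
the caps would need to look something like this (the client name and pool
names are from a typical OpenStack setup, not taken from this report):

  ceph auth caps client.nova \
      mon 'profile rbd' \
      osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd-read-only pool=images'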
On Fri, Jul 27, 2018 at 4:47 PM Guillaume Lefranc
wrote:
>
> Hi,
>
> I am trying to repair a failed cluster with multiple MDS, but the failed MDS
> crashes on restart and won't stay up. I could not find a bug report for that
> specific failure. Here are the logs:
>
> -9> 2018-07-27 10:40:45.
On 07/27/2018 03:03 AM, Robert Sander wrote:
> Hi,
>
> On 27.07.2018 09:00, Christopher Kunz wrote:
>> as part of deprovisioning customers, we regularly have the task of
>> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
>> to do so without physically shredding the disks?
>
> In the past
Hello Christopher,
On Fri, Jul 27, 2018 at 12:00 AM, Christopher Kunz
wrote:
> Hello all,
>
> as part of deprovisioning customers, we regularly have the task of
> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
> to do so without physically shredding the disks?
This should
This is the first bugfix release of the Mimic v13.2.x long term stable release
series. This release contains many fixes across all components of Ceph,
including a few security fixes. We recommend that all users upgrade.
Notable Changes
---------------
* CVE 2018-1128: auth: cephx authorizer subjec
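For anyone upgrading, "ceph versions" shows which daemons are still on the
old release; once every daemon reports 13.2.1, the usual last step of the
upgrade is:

  ceph osd require-osd-release mimic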
ceph tell mds.0 client ls
2018-07-27 12:32:40.344654 7fa5e27fc700 0 client.89408629 ms_handle_reset
on 10.10.1.63:6800/1750774943
Error EPERM: problem getting command descriptions from mds.0
mds log
2018-07-27 12:32:40.342753 7fc9c1239700 1 mds.CephMon203 handle_command:
received command from cl
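Error EPERM from "ceph tell mds.<id> ..." is usually a caps problem on the
keyring the command runs with; checking and, if needed, widening the caps
would look roughly like this (client name assumed):

  ceph auth get client.admin
  ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'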
Hello,
Can you please add me to the ceph-storage slack channel? Thanks!
- Matt Brown | Lead Engineer | Infrastructure Services – Cloud & Compute |
Target | 7000 Target Pkwy N., NCE-0706 | Brooklyn Park, MN 55445 | 612.304.4956
I decided to upgrade my home cluster from Luminous (v12.2.7) to Mimic
(v13.2.1) today and ran into a couple of issues:
1. When restarting the OSDs during the upgrade it seems to forget my upmap
settings. I had to manually return them to the way they were with commands
like:
ceph osd pg-upmap-ite
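For reference, upmap entries can only be set once the cluster requires
luminous-or-later clients, and re-applying one by hand looks like this (pg
and OSD ids are placeholders); the balancer module can also manage the
entries for you:

  ceph osd set-require-min-compat-client luminous
  ceph osd pg-upmap-items 1.7 3 5    # move pg 1.7's copy from osd.3 to osd.5
  ceph balancer mode upmap
  ceph balancer on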
I have a simple question: I want to use LVM with BlueStore (it's the
recommended method). If I have only a single SSD disk for an OSD and want
to keep the journal + data on the same disk, how should I create the LVM
volumes to accommodate that?
Do I need to do the following?
pvcreate /dev/sdb
vgcreate vg0 /dev/sdb
Now I hav
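With BlueStore there is no separate journal to carve out (data, DB and WAL
all live on the one device unless you deliberately split them), so a
single LV is enough. A sketch with assumed names:

  pvcreate /dev/sdb
  vgcreate vg0 /dev/sdb
  lvcreate -l 100%FREE -n osd0 vg0
  ceph-volume lvm create --bluestore --data vg0/osd0

ceph-volume can also be pointed straight at /dev/sdb, in which case it
creates the VG and LV itself.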
Hello,
We are working on setting up Ceph on AWS i3 instances, which have NVMe
SSDs as instance store, to build our own EBS-like block storage that
spans multiple availability zones. We want to achieve better performance
than EBS with provisioned IOPS.
I thought it would be good to reach out to the community to
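One part of the multi-AZ design that has to be spelled out is telling
CRUSH which zone each OSD host sits in, so replicas land in different AZs;
a sketch with assumed names, using "rack" as the stand-in for an AZ:

  # ceph.conf on each OSD host
  [osd]
      crush location = root=default rack=us-east-1a host=i3-node-1

  ceph osd crush rule create-replicated az-spread default rack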
Hi!
I found an issue with the rbdmap service:
[root@dx-test ~]# systemctl status rbdmap
● rbdmap.service - Map RBD devices
Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset:
disabled)
Active: active (exited) (Result: exit-code) since Sat 2018-07-28 13:55:01 CST;
11min ago
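The (Result: exit-code) part suggests the rbdmap script itself returned a
non-zero status, and the first thing to check is /etc/ceph/rbdmap, which
has to list the images to map. A minimal example (pool, image and keyring
are placeholders):

  # /etc/ceph/rbdmap
  # poolname/imagename  id=client,keyring=/path/to/keyring
  rbd/myimage  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

then "systemctl restart rbdmap" and "rbd showmapped" will show whether the
mapping actually happened.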