Dear Cephalopodians,
I performed several consistency checks now:
- Exporting an RBD snapshot before and after the object map rebuilding.
- Exporting a backup as raw image, all backups (re)created before and after the
object map rebuilding.
- md5summing all of that for a snapshot for which the re
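As a rough sketch of that kind of check, here is roughly what it might look
like with the Python rbd bindings; the pool, image and snapshot names are
placeholders, and it simply checksums the snapshot contents the same way one
would md5sum an `rbd export`:

#!/usr/bin/env python
# Rough sketch: checksum an RBD snapshot through librbd, comparable to
# md5summing the output of `rbd export`. Pool/image/snapshot names are
# placeholders.
import hashlib
import rados
import rbd

POOL, IMAGE, SNAP = 'rbd', 'vm-disk', 'before-rebuild'   # placeholders
CHUNK = 4 * 1024 * 1024                                  # 4 MiB reads

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    image = rbd.Image(ioctx, IMAGE, snapshot=SNAP, read_only=True)
    try:
        md5 = hashlib.md5()
        size, offset = image.size(), 0
        while offset < size:
            data = image.read(offset, min(CHUNK, size - offset))
            md5.update(data)
            offset += len(data)
        print('%s/%s@%s  %s' % (POOL, IMAGE, SNAP, md5.hexdigest()))
    finally:
        image.close()
    ioctx.close()
finally:
    cluster.shutdown()

Identical digests before and after `rbd object-map rebuild` would indicate
the data itself is untouched.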
On Wed, Jan 9, 2019 at 5:17 PM Kenneth Van Alstyne
wrote:
>
> Hey folks, I’m looking into what I would think would be a simple problem, but
> is turning out to be more complicated than I would have anticipated. A
> virtual machine managed by OpenNebula was blown away, but the backing RBD
> im
Hi Bryan,
I think this is the old hammer thread you refer to:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013060.html
We also have osdmaps accumulating on v12.2.8 -- ~12000 per osd at the moment.
I'm trying to churn the osdmaps like before, but our maps are not being trimm
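As a rough way to see how many epochs each local OSD is still holding, a
sketch like the following can help; it assumes the admin sockets live under
/var/run/ceph with the default cluster name, and that `status` reports
oldest_map/newest_map (as it does on Luminous here):

#!/usr/bin/env python
# Rough sketch: for each local OSD, report how many osdmap epochs it is
# still holding, via the admin socket. Assumes default socket paths and
# that `status` reports oldest_map/newest_map.
import glob
import json
import re
import subprocess

for sock in sorted(glob.glob('/var/run/ceph/ceph-osd.*.asok')):
    osd_id = re.search(r'ceph-osd\.(\d+)\.asok', sock).group(1)
    status = json.loads(subprocess.check_output(
        ['ceph', '--admin-daemon', sock, 'status']))
    held = status['newest_map'] - status['oldest_map']
    print('osd.%s: %d epochs held (%d .. %d)' % (
        osd_id, held, status['oldest_map'], status['newest_map']))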
Hi,
for support reasons we're still running firefly (part of MCP 6). In our
grafana monitoring we noticed that two out of 128 OSD processes show
significantly higher outbound IO than all the others, and this is
constant (can't see the first occurrence of this anymore; Grafana only has 14
days backlo
On 1/10/19 12:59 PM, Marc wrote:
> Hi,
>
> for support reasons we're still running firefly (part of MCP 6). In our
> grafana monitoring we noticed that two out of 128 OSD processes show
> significantly higher outbound IO than all the others, and this is
> constant (can't see the first occurrence of th
Hi all,
I wanted to test Dan's upmap-remapped script for adding new OSDs to a
cluster (then letting the balancer gradually move PGs to the new OSDs
afterwards).
I've created a fresh (virtual) 12.2.10 4-node cluster with very small disks
(16 GB each), 2 OSDs per node.
Put ~20GB of data on the clus
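For reference, the balancer prerequisites on such a 12.2.x test cluster might
look roughly like the sketch below; these are plain ceph CLI calls wrapped in
Python, run somewhere with admin credentials, and this is not the
upmap-remapped script itself:

#!/usr/bin/env python
# Rough sketch: enable the upmap balancer on a Luminous (12.2.x) test
# cluster so it can gradually move PGs after new OSDs come in.
import subprocess

def ceph(*args):
    cmd = ['ceph'] + list(args)
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

# upmap requires all clients to speak at least the luminous protocol
ceph('osd', 'set-require-min-compat-client', 'luminous')
# enable the mgr balancer module (if not already on) and switch to upmap
ceph('mgr', 'module', 'enable', 'balancer')
ceph('balancer', 'mode', 'upmap')
ceph('balancer', 'on')
ceph('balancer', 'status')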
Hello, Ceph users,
I am not sure where to report the issue with the ceph.com website,
so I am posting to this list:
The https://ceph.com/use/ page has an incorrect link for getting
the packages:
"For packages, see http://ceph.com/docs/master/install/get-packages";
- the URL should be ht
Hi Caspar,
On Thu, Jan 10, 2019 at 1:31 PM Caspar Smit wrote:
>
> Hi all,
>
> I wanted to test Dan's upmap-remapped script for adding new OSDs to a
> cluster (then letting the balancer gradually move PGs to the new OSDs
> afterwards).
Cool. Insert "no guarantees or warranties" comment here.
An
Hi Mohamad!
On 31/12/2018 19:30, Mohamad Gebai wrote:
> On 12/31/18 4:51 AM, Marcus Murwall wrote:
>> What you say does make sense, though, as I also get the feeling that the
>> OSDs are just waiting for something. Something that never happens, and
>> the requests finally time out...
>
> So the OSDs a
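One way to see what such an OSD is actually waiting on is to dump its
in-flight and recent ops over the admin socket; a rough sketch, where
'osd.3' is a placeholder id and the script runs on the node hosting it:

#!/usr/bin/env python
# Rough sketch: dump what an apparently stuck OSD is waiting on, using
# its admin socket. 'osd.3' is a placeholder id.
import json
import subprocess

OSD = 'osd.3'   # placeholder
for cmd in ('dump_ops_in_flight', 'dump_historic_ops'):
    out = subprocess.check_output(['ceph', 'daemon', OSD, cmd])
    print('== %s %s ==' % (OSD, cmd))
    print(json.dumps(json.loads(out), indent=2))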
Hello list,
there are two config options for mon/OSD interaction that I don't fully
understand. Maybe one of you could clarify them for me.
mon osd report timeout
- The grace period in seconds before declaring unresponsive Ceph OSD
Daemons down. Default: 900
mon osd down out interval
- The
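As a quick way to see which values are actually in effect, both options can
be read back from a monitor's admin socket; a small sketch, where 'mon.a' is
a placeholder daemon name and the script runs on the host carrying that mon:

#!/usr/bin/env python
# Rough sketch: read back the values currently in effect on a monitor
# for the two options above. 'mon.a' is a placeholder daemon name.
import json
import subprocess

MON = 'mon.a'   # placeholder
for opt in ('mon_osd_report_timeout', 'mon_osd_down_out_interval'):
    out = subprocess.check_output(
        ['ceph', 'daemon', MON, 'config', 'get', opt])
    print('%s = %s' % (opt, json.loads(out)[opt]))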
I wanted to expand the usage of the Ceph cluster and use a CephFS mount
to archive mail messages. Only the 'Archive' tree (below) is going to be
on this mount; the default folders stay where they are. Currently mbox
is still being used. I thought about switching storage from mbox to
mdbox.
I
Hello,
I have the same issue as mentioned here, namely
converting/migrating a replicated pool to an EC-based one. I have ~20 TB
so my problem is far easier, but I'd like to perform this operation
without introducing any downtime (or possibly just a minimal one, to
rename pools).
I am u
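Whatever migration strategy ends up being used, the destination EC pool has
to exist first; a rough sketch of that step, where the profile name, k/m,
failure domain and PG counts are placeholders rather than a recommendation,
and allow_ec_overwrites assumes BlueStore OSDs:

#!/usr/bin/env python
# Rough sketch: create the destination EC pool before any migration.
# Profile name, k/m, failure domain and PG counts are placeholders;
# allow_ec_overwrites is only needed if RBD/CephFS data will live
# directly on the EC pool.
import subprocess

def ceph(*args):
    subprocess.check_call(['ceph'] + list(args))

ceph('osd', 'erasure-code-profile', 'set', 'ec42',
     'k=4', 'm=2', 'crush-failure-domain=host')
ceph('osd', 'pool', 'create', 'mypool-ec', '128', '128', 'erasure', 'ec42')
ceph('osd', 'pool', 'set', 'mypool-ec', 'allow_ec_overwrites', 'true')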
Hey,
After upgrading to centos7.6, I started encountering the following kernel
panic
[17845.147263] XFS (rbd4): Unmounting Filesystem
[17846.860221] rbd: rbd4: capacity 3221225472 features 0x1
[17847.109887] XFS (rbd4): Mounting V5 Filesystem
[17847.191646] XFS (rbd4): Ending clean mount
[17861.66
On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> I performed several consistency checks now:
> - Exporting an RBD snapshot before and after the object map rebuilding.
> - Exporting a backup as raw image, all backups (re)created before and after
> the object ma
Dear Jason and list,
Am 10.01.19 um 16:28 schrieb Jason Dillaman:
On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
wrote:
Dear Cephalopodians,
I performed several consistency checks now:
- Exporting an RBD snapshot before and after the object map rebuilding.
- Exporting a backup as raw imag
On Thu, Jan 10, 2019 at 10:50 AM Oliver Freyermuth
wrote:
>
> Dear Jason and list,
>
> Am 10.01.19 um 16:28 schrieb Jason Dillaman:
> > On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
> > wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> I performed several consistency checks now:
> >> - Exportin
I just had this question as well.
I am interested in what you mean by fullest: is it percentage-wise or raw
space? If I have an uneven distribution and adjusted it, would it potentially
make more space available?
Thanks
Scott
On Thu, Jan 10, 2019 at 12:05 AM Wido den Hollander wrote:
>
>
> On 1
On Thu, Jan 10, 2019 at 4:07 PM Scottix wrote:
> I just had this question as well.
>
> I am interested in what you mean by fullest: is it percentage-wise or raw
> space? If I have an uneven distribution and adjusted it, would it potentially
> make more space available?
>
Yes - I'd recommend usin
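To see both views of "fullest" side by side, per-OSD fullness can be pulled
from `ceph osd df`; a small sketch, assuming the JSON output carries 'kb',
'kb_used' and 'utilization' for each entry under 'nodes':

#!/usr/bin/env python
# Rough sketch: show per-OSD fullness both as a percentage and as raw
# usage, from `ceph osd df`. Field names are assumed as noted above.
import json
import subprocess

out = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
for n in sorted(json.loads(out)['nodes'],
                key=lambda n: n['utilization'], reverse=True):
    print('%-8s %6.2f%% full   %8.1f GiB used of %8.1f GiB' % (
        n['name'], n['utilization'],
        n['kb_used'] / 1048576.0, n['kb'] / 1048576.0))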
On 09.01.2019 17:27, Matthew Vernon wrote:
Hi,
On 08/01/2019 18:58, David Galloway wrote:
The current distro matrix is:
Luminous: xenial centos7 trusty jessie stretch
Mimic: bionic xenial centos7
Thanks for clarifying :)
This may have been different in previous point releases because, as G
Thanks for the reply — I was pretty darn sure, since I live migrated all VMs
off of that box and then killed everything but a handful of system processes
(init, sshd, etc) and the watcher was STILL present. In saying that, I halted
the machine (since nothing was running on it any longer) and th
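For completeness, listing who still watches the image header is a one-liner
around `rbd status`; a tiny sketch, where the pool and image names are
placeholders:

#!/usr/bin/env python
# Rough sketch: list the watchers still registered on an RBD image
# header. Pool/image names are placeholders. If a watcher is stale,
# `ceph osd blacklist add <addr>` is the usual (heavy-handed) way to
# evict it.
import subprocess

POOL, IMAGE = 'one', 'one-123'   # placeholders
subprocess.check_call(['rbd', 'status', '%s/%s' % (POOL, IMAGE)])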
> Could I suggest building Luminous for Bionic
+1 for Luminous on Bionic.
Ran into issues with Bionic upgrades, and eventually had to revert from the
Ceph repos to the Ubuntu repos, where they have 12.2.8, which isn't ideal.
Reed
> On Jan 9, 2019, at 10:27 AM, Matthew Vernon wrote:
>
> Hi,
>
Hi everyone, I have some questions about encryption in Ceph.
1) Are RBD connections encrypted or is there an option to use encryption
between clients and Ceph? From reading the documentation, I have the
impression that the only option to guarantee encryption in transit is to
force clients to encry
Hi,
AFAIK, there is no encryption on the wire, either between daemons or
between a daemon and a client.
The only encryption available in Ceph is at rest, using dmcrypt (i.e.
your data are encrypted before being written to disk).
Regards,
On 01/10/2019 07:59 PM, Sergio A. de Carvalho Jr. wrote:
> Hi
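For what it's worth, at-rest encryption is selected when the OSD is created;
a sketch of what that looks like with ceph-volume, where the device path is a
placeholder and pre-Luminous releases used ceph-disk's equivalent --dmcrypt
flag instead:

#!/usr/bin/env python
# Rough sketch: deploy an OSD on a dm-crypt volume (encryption at rest).
# The device path is a placeholder.
import subprocess

subprocess.check_call([
    'ceph-volume', 'lvm', 'create',
    '--bluestore', '--data', '/dev/sdX',   # placeholder device
    '--dmcrypt',
])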
On Fri, Jan 11, 2019 at 12:20 AM Rom Freiman wrote:
>
> Hey,
> After upgrading to centos7.6, I started encountering the following kernel
> panic
>
> [17845.147263] XFS (rbd4): Unmounting Filesystem
> [17846.860221] rbd: rbd4: capacity 3221225472 features 0x1
> [17847.109887] XFS (rbd4): Mounting
I think Ilya recently looked into a bug that can occur when
CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
through the loopback interface (i.e. co-located OSDs and krbd).
Assuming that you have the same setup, you might be hitting the same
bug.
On Thu, Jan 10, 2019 at 6:46 PM Br
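A quick way to check whether the running kernel was built with that option is
to grep the shipped kernel config; a small sketch assuming the distro installs
it under /boot, as CentOS does:

#!/usr/bin/env python
# Rough sketch: check whether the running kernel was built with
# CONFIG_HARDENED_USERCOPY. Assumes the config lives at
# /boot/config-<release>.
import os

path = '/boot/config-%s' % os.uname()[2]
with open(path) as cfg:
    for line in cfg:
        if line.startswith('CONFIG_HARDENED_USERCOPY'):
            print(line.rstrip())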
On Fri, Jan 11, 2019 at 9:57 AM Jason Dillaman wrote:
>
> I think Ilya recently looked into a bug that can occur when
> CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
> through the loopback interface (i.e. co-located OSDs and krbd).
> Assuming that you have the same setup, you m
>>1) Are RBD connections encrypted or is there an option to use encryption
>>between clients and Ceph? From reading the documentation, I have the
>>impression that the only option to guarantee encryption in transit is to
>>force clients to encrypt volumes via dmcrypt. Is there another option?
Hi,
as others pointed out, traffic in Ceph is unencrypted (internal traffic
as well as client traffic). I usually advise setting up IPsec or, nowadays,
WireGuard connections between all hosts. That takes care of any traffic
going over the wire, including Ceph's.
Cheers,
Tobias Florek