Re: [ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Jacob DeGlopper
You have LVM data on /dev/sdb already; you will need to remove that before you can use ceph-disk on that device. Use the LVM commands 'lvs', 'vgs', and 'pvs' to list the logical volumes, volume groups, and physical volumes defined.  Once you're sure you don't need the data, lvremove, vgremove,
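The inspect-then-remove sequence described above might look like the following sketch; /dev/sdb is taken from the message, but the volume group name is a placeholder you would read from the `pvs`/`vgs` output:

```shell
# Inspect first (read-only, safe to run):
pvs      # physical volumes -- shows which VG claims /dev/sdb
vgs      # volume groups
lvs      # logical volumes

# Once you are certain the data is disposable, remove in LV -> VG -> PV order.
# <vg_name> is whatever group pvs reported on /dev/sdb.
lvremove <vg_name>       # prompts for each LV in the group
vgremove <vg_name>
pvremove /dev/sdb

# Optionally clear leftover signatures before handing the disk to ceph-disk:
wipefs --all /dev/sdb
```

These commands are destructive and need root; double-check the device name against the `pvs` output before running the removal steps.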

Re: [ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Jacob DeGlopper
Also, looking at your ceph-disk list output, the LVM is probably your root filesystem and cannot be wiped.  If you'd like to send the output of a 'mount' and 'lvs' command, you should be able to tell.     -- jacob On 07/13/2018 03:42 PM, Jacob DeGlopper wrote:

[ceph-users] ceph-container - rbd map failing since upgrade?

2018-08-21 Thread Jacob DeGlopper
I'm seeing an error from the rbd map command running in ceph-container; I had initially deployed this cluster as Luminous, but a pull of the ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1. [root@nodeA2 ~]# ceph version ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d7
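One way to avoid this kind of unplanned major-version jump is to pin the container image to a release-specific tag or an immutable digest instead of a floating tag. A sketch, with the digest shown as a placeholder:

```shell
# Floating tag: a fresh pull can silently move you to the next major release
docker pull ceph/daemon:latest

# Pin to a release-specific tag instead (tag name illustrative):
docker pull ceph/daemon:latest-luminous

# Or pin by immutable digest, which can never change underneath you:
docker pull ceph/daemon@sha256:<digest>
```

Whatever tag or digest you choose also needs to go into the unit files or compose definitions that start the daemons, so a restart cannot pull something newer.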

[ceph-users] Safe to use RBD mounts for Docker volumes on containerized Ceph nodes

2018-09-06 Thread Jacob DeGlopper
I've seen the requirement not to mount RBD devices or CephFS filesystems on OSD nodes.  Does this still apply when the OSDs and clients using the RBD volumes are all in Docker containers? That is, is it possible to run a 3-server setup in production with both Ceph daemons (mon, mgr, and OSD) i

[ceph-users] Does ceph-ansible support the LVM OSD scenario under Docker?

2018-04-26 Thread Jacob DeGlopper
Hi - I'm trying to set up our first Ceph deployment with a small set of 3 servers, using an SSD boot drive each and 2x Micron 5200 SSDs per server for OSD drives.  It appears that Ceph under Docker gives us an allowable production config using 3 servers rather than 6.  We are using CentOS 7.4 a

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Jacob DeGlopper
Thanks, this is useful in general.  I have a semi-related question: Given an OSD server with multiple SSDs or NVME devices, is there an advantage to putting wal/db on a different device of the same speed?  For example, data on sda1, matching wal/db on sdb1,  and then data on sdb2 and wal/db on

Re: [ceph-users] [PROBLEM] Fail in deploy do ceph on RHEL

2018-05-18 Thread Jacob DeGlopper
Hi Antonio - you need to set !requiretty in your sudoers file.  This is documented here: http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/   but it appears that section may not have been copied into the current docs. You can test this by running 'ssh sds@node1 sudo whoami' from your adm
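A minimal sudoers fragment for this, assuming the deploy user is `sds` as in the thread; edit it with visudo so a syntax error cannot lock you out:

```shell
# /etc/sudoers.d/ceph-deploy  (create with: visudo -f /etc/sudoers.d/ceph-deploy)
# Let the deploy user sudo without a TTY and without a password prompt:
Defaults:sds !requiretty
sds ALL = (root) NOPASSWD:ALL
```

With that in place, the non-interactive test from the admin node ('ssh sds@node1 sudo whoami') should print `root` instead of failing with "sudo: sorry, you must have a tty to run sudo".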

Re: [ceph-users] lacp bonding | working as expected..?

2018-06-21 Thread Jacob DeGlopper
Consider trying some variation in source and destination IP addresses and port numbers - unless you force it, iperf3 at least tends to pick only even port numbers for the ephemeral source port, which leads to all traffic being balanced to one link. In your example, where you see one link being
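A way to test this with iperf3 is to run parallel streams while forcing distinct client source ports with `--cport`, so a layer3+4 bond hash sees different tuples; hostname and port numbers below are placeholders:

```shell
# On the server: one listener per port (iperf3 handles one test at a time per port)
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# On the client: two streams with deliberately different source/destination ports,
# mixing odd and even values so the hash cannot collapse them onto one link.
iperf3 -c server1 -p 5201 --cport 40001 -t 30 &
iperf3 -c server1 -p 5202 --cport 40002 -t 30 &
wait
```

If the bond is hashing on layer3+4, the two streams should land on different member links; watching per-interface counters (e.g. with `ip -s link`) during the test confirms it.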

Re: [ceph-users] lacp bonding | working as expected..?

2018-06-21 Thread Jacob DeGlopper
your reply. But I'm not sure I completely understand it. :-) On 06/21/2018 09:09 PM, Jacob DeGlopper wrote: In your example, where you see one link being used, I see an even source IP paired with an odd destination port number for both transfers, or is that a search and replace issue? We

Re: [ceph-users] DockerSwarm and CephFS

2019-01-31 Thread Jacob DeGlopper
Hi Carlos - just a guess, but you might need your credentials from /etc/ceph on the host mounted inside the container.     -- jacob Hey guys! First post to the list and new Ceph user so I might say/ask some stupid stuff ;) I've setup a Ceph Storage (and crashed it 2 days after), with 2 cep
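The guess above amounts to bind-mounting the host's Ceph config and keyring into the container; a sketch, where the image name is a placeholder for whatever Carlos is running:

```shell
# Make ceph.conf and the client keyring from the host visible inside the
# container, read-only, so the in-container client can authenticate:
docker run -d \
  -v /etc/ceph:/etc/ceph:ro \
  <your-cephfs-client-image>
```

Without that mount, the client inside the container has no monitor addresses and no credentials, which typically shows up as authentication or "unable to find a keyring" errors.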

Re: [ceph-users] Experiences with the Samsung SM/PM883 disk?

2019-02-22 Thread Jacob DeGlopper
What are you connecting it to?  We just got the exact same drive for testing, and I'm seeing much higher performance, connected to a motherboard 6 Gb SATA port on a Supermicro X9 board. [root@centos7 jacob]# smartctl -a /dev/sda Device Model: Samsung SSD 883 DCT 960GB Firmware Version: HXT

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Jacob DeGlopper
The ansible deploy is quite a pain to get set up properly, but it does work to get the whole stack working under Docker.  It uses the following script on Ubuntu to start the OSD containers: /usr/bin/docker run \   --rm \   --net=host \   --privileged=true \   --pid=host \   --memory=64386m \