You have LVM data on /dev/sdb already; you will need to remove that
before you can use ceph-disk on that device.
Use the LVM commands 'lvs', 'vgs', and 'pvs' to list the logical volumes,
volume groups, and physical volumes defined. Once you're sure you don't
need the data, use lvremove, vgremove, and pvremove to remove them.
Also, looking at your ceph-disk list output, the LVM is probably your
root filesystem and cannot be wiped. If you'd like to send the output
of the 'mount' and 'lvs' commands, we should be able to tell.
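For example, something along these lines (an untested sketch; 'vg_data' and
'lv_data' are placeholders - substitute the actual names reported by
lvs/vgs/pvs, and only run the remove commands once you're certain nothing on
/dev/sdb is needed or mounted):

# list what LVM currently knows about
lvs
vgs
pvs

# confirm nothing from /dev/sdb is mounted
mount | grep sdb

# remove the logical volume, volume group, and physical volume
lvremove /dev/vg_data/lv_data
vgremove vg_data
pvremove /dev/sdb

# optionally clear the partition table before handing the disk to ceph-disk
ceph-disk zap /dev/sdb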
-- jacob
On 07/13/2018 03:42 PM, Jacob DeGlopper wrote:
I'm seeing an error from the rbd map command running in ceph-container;
I had initially deployed this cluster as Luminous, but a pull of the
ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1.
[root@nodeA2 ~]# ceph version
ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d7
I've seen the requirement not to mount RBD devices or CephFS filesystems
on OSD nodes. Does this still apply when the OSDs and clients using the
RBD volumes are all in Docker containers?
That is, is it possible to run a 3-server setup in production with both
the Ceph daemons (mon, mgr, and OSD) and the RBD clients running in
containers on the same hosts?
Hi - I'm trying to set up our first Ceph deployment with a small set of
3 servers, using an SSD boot drive in each and 2x Micron 5200 SSDs per
server for OSD drives. It appears that Ceph under Docker gives us an
allowable production config using 3 servers rather than 6. We are using
CentOS 7.4 and Docker.
Thanks, this is useful in general. I have a semi-related question:
Given an OSD server with multiple SSDs or NVMe devices, is there an
advantage to putting wal/db on a different device of the same speed?
For example, data on sda1, matching wal/db on sdb1, and then data on
sdb2 and wal/db on sda2?
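To make that concrete, the cross-wired layout described above would look
roughly like this with ceph-volume (a sketch only; the partition names are
placeholders, and whether it actually buys anything on same-speed devices is
exactly the open question):

# OSD 0: data on the first SSD, wal/db on the second
ceph-volume lvm create --bluestore --data /dev/sda1 --block.db /dev/sdb1

# OSD 1: data on the second SSD, wal/db back on the first
ceph-volume lvm create --bluestore --data /dev/sdb2 --block.db /dev/sda2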
Hi Antonio - you need to set !requiretty in your sudoers file. This is
documented here:
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/ but it
appears that section may not have been copied into the current docs.
You can test this by running 'ssh sds@node1 sudo whoami' from your admin node.
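The change itself is one line, added with visudo (this assumes the
ceph-deploy user is 'sds', as in your example):

# in visudo, for the deploy user:
Defaults:sds !requiretty

# then, from the admin node, this should print 'root' without the
# "sorry, you must have a tty to run sudo" error:
ssh sds@node1 sudo whoami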
Consider trying some variation in source and destination IP addresses
and port numbers - unless you force it, iperf3 at least tends to pick
only even port numbers for the ephemeral source port, which leads to all
traffic being balanced to one link.
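For example, something like this forces the two streams onto an even and an
odd source port so a layer3+4 hash has a chance to split them across links
(addresses and ports are placeholders; --cport sets the client-side source
port):

# on the receiver, one server per port:
#   iperf3 -s -p 5201
#   iperf3 -s -p 5202

# on the sender, two parallel transfers with different source ports:
iperf3 -c 10.0.0.2 -p 5201 --cport 40000 -t 30 &
iperf3 -c 10.0.0.2 -p 5202 --cport 40001 -t 30 &
wait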
Thanks for your reply. But I'm not sure I completely understand it. :-)
On 06/21/2018 09:09 PM, Jacob DeGlopper wrote:
In your example, where you see one link being used, I see an even
source IP paired with an odd destination port number for both
transfers, or is that a search and replace issue?
Hi Carlos - just a guess, but you might need your credentials from
/etc/ceph on the host mounted inside the container.
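Something along these lines on the docker run line, as a guess (the image
name is a placeholder, and it assumes the host's /etc/ceph already has a
working ceph.conf plus the client keyring):

docker run --rm --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    your-client-image rbd ls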
-- jacob
Hey guys!
First post to the list and new Ceph user so I might say/ask some
stupid stuff ;)
I've set up a Ceph storage cluster (and crashed it 2 days later), with 2
cep
What are you connecting it to? We just got the exact same drive for
testing, and I'm seeing much higher performance, connected to a
motherboard 6 Gb/s SATA port on a Supermicro X9 board.
[root@centos7 jacob]# smartctl -a /dev/sda
Device Model: Samsung SSD 883 DCT 960GB
Firmware Version: HXT
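For comparison, a quick fio run along these lines gives a rough like-for-like
number (a sketch only; /dev/sdX is a placeholder, and it writes to the raw
device, so only point it at a blank test drive):

# 4k synchronous writes at queue depth 1, similar to Ceph journal/WAL traffic
fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting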
The ansible deploy is quite a pain to get set up properly, but it does
work to get the whole stack working under Docker. It uses the following
script on Ubuntu to start the OSD containers:
/usr/bin/docker run \
--rm \
--net=host \
--privileged=true \
--pid=host \
--memory=64386m \
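(The excerpt is cut off above. A complete invocation of that sort typically
continues with device and config mounts and the OSD device to use - the
mounts, OSD_DEVICE value, and image here are assumptions, not the exact flags
from the ansible-generated script:)

/usr/bin/docker run \
    --rm \
    --net=host \
    --privileged=true \
    --pid=host \
    -v /dev:/dev \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -e OSD_DEVICE=/dev/sdb \
    ceph/daemon osd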