Quoting Mehmet (c...@elchaka.de):
> Hey Ceph people,
>
> need advice on how to move a Ceph cluster from one datacenter to another
> without any downtime :)
How is your networking set up? Moving your cluster around is easy if the
network is "stretched" across the two datacenters, and / or if you'v
Quoting Damian Dabrowski (scoot...@gmail.com):
> Hello,
>
> When I map an rbd image with -o queue_depth=1024 I can see a big improvement,
> mostly on writes (random write IOPS go from 3k with the standard
> queue_depth to 24k with queue_depth=1024).
>
> But is there any way to attach rbd di
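For reference, with the krbd client the deeper queue is requested at map time;
a minimal sketch (pool/image names here are made-up examples):

  # map an RBD image with a larger blk-mq queue depth than the default
  rbd map mypool/myimage -o queue_depth=1024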
Hi! In my organisation we are using OpenNebula as our cloud platform.
Currently we are testing the High Availability (HA) feature with a Ceph
cluster as our storage backend. In our test setup we have 3 systems with
front-end HA already successfully set up and configured, with a floating IP
between them.
I have Ceph set up and running RGW. I want to use multisite (
http://docs.ceph.com/docs/jewel/radosgw/multisite/), but I don't want to
delete my pools or lose any of my data. Is this possible, or do the pools have
to be recreated when changing a cluster's zone/zonegroup to multisite?
Hi Stefan, thanks for the reply.
Unfortunately it didn't work.
disk config:
ce247187-a625-49f1-bacd-fc03df215395
Controller config:
benchmark command: fio --randrepeat=1 --
You can add multisite. Just skip ahead in the instructions to the point after
the initial zone is created. You might need to create the master keys to be
able to continue, but other than making sure they exist, there's no
complication there.
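Roughly, the single-site-to-multisite steps from that doc look like this
(realm/zonegroup/zone names below are placeholders, and the system user/keys
mentioned above still need to exist before committing the period):

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
  radosgw-admin zone rename --rgw-zone default --zone-new-name=us-east-1 --rgw-zonegroup=us
  radosgw-admin zonegroup modify --rgw-zonegroup=us --master --default
  radosgw-admin zone modify --rgw-zone=us-east-1 --master --default
  radosgw-admin period update --commit

The existing pools and their data are left in place; the rename only touches
the zone/zonegroup metadata.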
On Tue, Jun 26, 2018, 7:51 AM Robert Stanford
wrote:
>
> I have
> On 26 Jun 2018, at 14.04, Damian Dabrowski wrote:
>
> Hi Stefan, thanks for the reply.
>
> Unfortunately it didn't work.
>
> disk config:
>
> <driver ... discard='unmap'/>
> <source ... name='volumes-nvme/volume-ce247187-a625-49f1-bacd-fc03df215395'>
Hi,
Is anyone having the following issue?
We are randomly getting slow requests and rw-lock errors on the KVM volume.
The slow requests eventually clear off and the cluster goes back to normal.
The errors hit multiple OSDs across all 4 nodes.
We have checked the following:
1) Disks: all disks are OK
2) Netw
ceph daemon osd.1 perf dump | grep bluestore | grep compress
"bluestore_compressed": 0,
"bluestore_compressed_allocated": 0,
"bluestore_compressed_original": 0,
"bluestore_extent_compress": 35372,
I filled up an RBD in a compressed pool (aggressive) in my test clust
Hi,
Zeros are not a great choice of data for testing a storage system unless
you are specifically testing what it does with zeros. Ceph knows that
higher layers in the storage stack use zero-fill for certain things and
will probably optimise for it. E.g., it's common for thin-provisioning
sy
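If fio is the benchmark in play, the buffers can be randomized so the
zero-fill fast path isn't what ends up being measured; a sketch (device path
and sizes are examples):

  fio --name=randwrite --filename=/dev/rbd0 --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 \
      --randrepeat=0 --refill_buffers --size=10G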
After I started using multipart uploads to RGW, Ceph automatically created
a non-ec pool. It looks like it stores object pieces there until all the
pieces of a multipart upload arrive, then moves the completed piece to the
normal rgw data pool. Is this correct?
Hello,
We have a small cluster, initially on 4 hosts (1 OSD per host, 8 TB each)
with erasure coding for the data pool (k=3, m=1).
After some time I added one more small host (1 OSD, 2 TB). Ceph
synced fine.
Then I powered off one of the first 8 TB hosts and terminated it. Also
removed fro
Not quite. Only 'multipart meta' objects are stored in this non-ec pool
- these objects just track a list of parts that have been written for a
given multipart upload. This list is stored in the omap database, which
isn't supported for ec pools. The actual object data for these parts are
writte
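For the curious, those meta objects and their part lists can be inspected
with rados (the pool name below is the usual default and may differ per
setup):

  rados -p default.rgw.buckets.non-ec ls
  rados -p default.rgw.buckets.non-ec listomapkeys <some-multipart-meta-object>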
In the same pool with compression enabled, I have a 1TB RBD filled with a
10GB /dev/urandom file repeating through the entire RBD. Deleting both of
these RBDs didn't change the number of bluestore_extent_compress. I'm
also pretty certain that's the same number I saw there before starting
these t
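In case someone wants to reproduce this, the pool-level compression knobs
involved look roughly like this (pool name is a placeholder):

  ceph osd pool set mypool compression_mode aggressive
  ceph osd pool set mypool compression_algorithm snappy
  # then re-check the per-OSD counters as above
  ceph daemon osd.1 perf dump | grep compress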
Hi Brad,
Here is the output of the "ceph auth list" command (I have removed the key:
line, which was present in every single entry, including osd.21):
# ceph auth list
installed auth entries:
mds.arh-ibstorage1-ib
caps: [mds] allow
caps: [mgr] allow profile mds
caps:
On Sun, Jun 24, 2018 at 12:59 AM, Enrico Kern
wrote:
> Hello,
>
> We have two ceph luminous clusters (12.2.5).
>
> recently one of our big buckets stopped syncing properly. We have one
> specific bucket which is around 30 TB in size, consisting of a lot of
> directories, each one having files
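For anyone debugging the same thing, the per-bucket sync state can be dumped
on each side, e.g. (the bucket name is a placeholder):

  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=mybucket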
NFS v4 works like a charm, no issues for Linux clients, but when trying to
mount on a Mac OS X client it doesn't work - likely due to 'mountd' not
being registered in RPC by ganesha when it comes to v4.
So I tried to set up v3, no luck:
# mount -t nfs -o rw,noatime,vers=3 ceph-dev:/ceph /mnt/ceph
mount.n
I tried enabling RDMA support in the Ceph Luminous release following this [1]
guide.
I used the released Luminous bits, and not the Mellanox branches mentioned in
the guide.
I could see some RDMA traffic in the perf counters, but the ceph daemons were
still complaining that they were not able to
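For reference, the async+rdma messenger is switched on via ceph.conf along
these lines (the device name is an example and depends on the HCA):

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0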
Hi Anton,
With erasure coding, the min_size of a pool (the minimum number of
shards/replicas needed to allow IO) is K+1 (in your case 4), so a single OSD
failure already triggers an IO freeze (because k=3, m=1). If you have 5 equal
hosts, Ceph 'should' get back to HEALTH_OK automatically (it will be
backfi
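The value can be checked (and, as a temporary emergency measure only, lowered
back to K) per pool, e.g. with a placeholder pool name:

  ceph osd pool get mypool min_size
  ceph osd pool set mypool min_size 3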
Hi,
I am playing with Ceph Luminous and getting confusing information about the
use of the WAL vs RocksDB.
I have a 2 TB NVMe drive which I want to use for the WAL/RocksDB and have
5 2 TB SSDs for OSDs.
I am planning to create 5 30 GB partitions for RocksDB on the NVMe drive; do I
need to create partitions of Wa
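In case it helps: with ceph-volume, if only a DB device is given the WAL is
kept inside the DB, so separate WAL partitions are usually unnecessary; a
sketch with example device names:

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1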
Hi,
In my test setup I have a ceph iscsi gateway (configured as in
http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ )
I would like to use this with a FreeBSD (11.1) initiator, but I fail to
make a working setup in FreeBSD. Is it known whether the FreeBSD initiator
(with gmultipath) can work with
Try setting the osd caps to 'allow *' for client.admin, or run the
command using an id that has that access, such as
mgr.arh-ibstorage1-ib.
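i.e. something along these lines (adjust the entity and caps to your needs):

  ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'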
On Wed, Jun 27, 2018 at 1:32 AM, Andrei Mikhailovsky wrote:
> Hi Brad,
>
> Here is the output of the "ceph auth list" command (I have removed the key:
> l
Hi Cephers,
One of our cluster's OSDs cannot start because a PG on the OSD cannot load
infover_key from RocksDB; the log is below.
Could someone comment on this? Thank you!
Log:
2018-06-26 15:09:16.036832 b66c6000 0 osd.41 3712 load_pgs
2056114 2018-06-26 15:09:16.0369
>
>
> On Wed, Jun 27, 2018 at 4:02 AM, Anthony D'Atri
wrote:
> Have you dumped ops-in-flight to see if the slow requests happen to
> correspond to scrubs or snap trims?
>
>
Hi Anthony,
Yes, we have tried dumping ops-in-flight; what we get is osd_op with
flag_point=delayed and the events 'initiated' and 'queued'
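For anyone following along, that dump comes from the OSD admin socket, e.g.
(the OSD id is an example):

  ceph daemon osd.12 dump_ops_in_flight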
Conceptually, I would assume it should just work if configured correctly w/
multipath (to properly configure the ALUA settings on the LUNs). I don't
run FreeBSD, but is there any particular issue you are seeing?
On Tue, Jun 26, 2018 at 6:06 PM Frank de Bot (lists)
wrote:
> Hi,
>
> In my test setup I have
List,
Had a failed disk behind an OSD in a Mimic cluster (13.2.0), so I tried
following the docs on removing an OSD.
I did:
# ceph osd crush reweight osd.19 0
waited for rebalancing to finish and cont.:
# ceph osd out 19
# systemctl stop ceph-osd@19
# ceph osd purge 19 --yes-i-really-mean-it
ve
Mons have exactly one fixed IP address. A mon cannot use a floating
IP, otherwise it couldn't find its peers.
Also, the concept of a floating IP makes no sense for mons - you simply
give your clients a list of mon IPs to connect to.
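i.e. in ceph.conf the clients just get something like (addresses are
examples):

  [global]
  mon_host = 192.168.0.11, 192.168.0.12, 192.168.0.13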
Paul
2018-06-26 10:17 GMT+02:00 Rahul S :
> Hi! In my org
NFSv3 does not normally use pseudo paths. You can enable
the Mount_Path_Pseudo option in NFS_CORE_PARAM to
allow use of the pseudo FSAL for NFSv3 clients. (Note that
NFSv3 clients cannot mount the pseudo root itself, but
only subdirectories, due to limitations in the inode size.)
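A minimal ganesha.conf fragment for that would be something like:

  NFS_CORE_PARAM {
      Mount_Path_Pseudo = true;
  }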
Paul
2018-06-26 18
You are running into https://tracker.ceph.com/issues/24423
I've fixed it here: https://github.com/ceph/ceph/pull/22585
The fix has already been backported and will be in 13.2.1
Paul
2018-06-27 8:40 GMT+02:00 Steffen Winther Sørensen :
> List,
>
> Had a failed disk behind an OSD in a Mimic Clu