Hello all,
One of our clusters running nautilus release 14.2.15 is reporting health
error. It reports that there are inconsistent PGs. However, when I inspect
each of the reported PGs, I don't see any inconsistencies. Any inputs on
what's going on?
$ sudo ceph health detail
HEALTH_ERR 3 scrub errors
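A common way to drill further into this kind of report (the PG ID below is a placeholder) is to list the inconsistent objects for each flagged PG and re-scrub if the listing comes back empty:
$ rados list-inconsistent-obj <pgid> --format=json-pretty
$ ceph pg deep-scrub <pgid>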
/placement/ and other related docs.
-Shridhar
On Wed, 11 Nov 2020 at 09:58, Bill Anderson wrote:
>
> Thank you for that info.
>
> Is it possible for an S3 RGW client to choose a pool, though?
>
>
>
> On Wed, Nov 11, 2020 at 10:40 AM Void Star Nill
> wrot
You can do this by creating 2 different pools with different replication
settings. But your users/clients need to choose the right pool while
writing the files.
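A rough sketch of the pool side (pool names and PG counts here are just examples; the RGW side would still need placement targets so that buckets land in the intended pool):
$ ceph osd pool create objstore-3rep 128 128 replicated
$ ceph osd pool set objstore-3rep size 3
$ ceph osd pool create objstore-2rep 128 128 replicated
$ ceph osd pool set objstore-2rep size 2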
-Shridhar
On Tue, 10 Nov 2020 at 12:58, wrote:
> Hi All,
>
> I'm exploring deploying Ceph at my organization for use as an object
> s
I have a similar setup and have been running some large concurrent
benchmarks. I am seeing that running multiple OSDs per NVMe doesn't
really make much difference. In fact, it actually increases write
amplification if you have write-heavy workloads, so performance degrades
over time.
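For reference, the multiple-OSDs-per-device layout I am comparing against was created with ceph-volume (the device path is just an example):
$ ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1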
Al
Hello,
I am trying to debug slow operations in our cluster running Nautilus
14.2.13. I am analysing the output of the "ceph daemon osd.N dump_historic_ops"
command.
I am noticing that most of the time is spent between "header_read" and
"throttled" events. For example, below is
configuration/msgr2/
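For example, assuming the ms_bind_msgr2 option documented there (daemons need a restart to drop their v2 bindings):
$ ceph config set global ms_bind_msgr2 false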
>
>
> Quoting Void Star Nill:
>
> > Hello,
> >
> > I am running nautilus cluster. Is there a way to force the cluster to use
> > msgr-v1 instead of msgr-v2?
> >
> > I am debugging an issue and it seems like it could b
Hello,
I am running a Nautilus cluster. Is there a way to force the cluster to use
msgr-v1 instead of msgr-v2?
I am debugging an issue and it seems like it could be related to the msgr
layer, so I want to test it by using msgr-v1.
Thanks,
Shridhar
Hello,
I am running version 14.2.13-1xenial and I am seeing a lot of logs from the msgr2
layer on the OSDs. Attached are some of the logs. It looks like these logs
are not controlled by the standard log level configuration, so I couldn't
find a way to disable these logs.
I am concerned that these logs
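For context, ordinary messenger debug lines are governed by their own debug_ms subsystem rather than the global log level, so if that is what these are, something like this should quiet them (the second form persists the setting across restarts):
$ ceph tell osd.* config set debug_ms 0/0
$ ceph config set osd debug_ms 0/0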
Hello,
What is the necessity for enabling the application on the pool? As per the
documentation, we need to enable an application before using the pool.
However, in my case, I have a single pool running on the cluster, used for
RBD. I am able to run all RBD operations on the pool even if I don't enable
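For reference, the tagging itself is a single command (the pool name is a placeholder), and on recent releases "rbd pool init" takes care of it as well, as far as I know:
$ ceph osd pool application enable mypool rbd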
eate 2
> separate pools, put the read operations in one pool and the writes in another
> one, and magic happened: no slow ops and way higher performance.
> We also asked the DB team to split the reads and writes (as much as they
> can) and the issue was solved (after two weeks).
>
> Thank you
Hello,
I have a ceph cluster running 14.2.11. I am running benchmark tests with
FIO concurrently on ~2000 volumes of 10G each. During the initial warm-up,
FIO creates a 10G file on each volume before it runs the actual
read/write I/O operations. During this time, I start seeing the Ceph
cluste
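The warm-up phase looks roughly like this as a fio job (the engine, path and block size are assumptions, not taken from the actual test):
[global]
ioengine=libaio
direct=1
filename=/mnt/vol/fio.dat
size=10g

[warmup]
rw=write
bs=1m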
Thanks Ilya for the clarification.
-Shridhar
On Mon, 10 Aug 2020 at 10:00, Ilya Dryomov wrote:
> On Mon, Aug 10, 2020 at 6:14 PM Void Star Nill
> wrote:
> >
> > Thanks Ilya.
> >
> > I assume :0/0 indicates all clients on a given host?
>
> No, a blacklist
Thanks Ilya.
I assume *:0/0* indicates all clients on a given host?
Thanks,
Shridhar
On Mon, 10 Aug 2020 at 03:07, Ilya Dryomov wrote:
> On Fri, Aug 7, 2020 at 10:25 PM Void Star Nill
> wrote:
> >
> > Hi,
> >
> > I want to understand the format for `ceph os
Hi,
I want to understand the format for `ceph osd blacklist`
commands. The documentation just says it's the address. But I am not sure
if it can just be the host IP address or anything else. What does
*:0/3710147553* represent in the following output?
$ ceph osd blacklist ls
listed 1 entries
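For reference, blacklist entries use Ceph's entity address form, IP:port/nonce, where the nonce distinguishes individual client instances on the same host. The related commands look like this (the address shown is just an example in the same form, and 3600 is an expiry in seconds):
$ ceph osd blacklist add 192.168.0.101:0/3710147553 3600
$ ceph osd blacklist rm 192.168.0.101:0/3710147553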
they continue to be thin provisioned (allocate as you go based on
> real data). So far I have tried with ext3, ext4 and xfs, and none of them
> allocate all the blocks during format.
>
> -Shridhar
>
>
> On Thu, 9 Jul 2020 at 06:58, Jason Dillaman wrote:
>
> > On T
allocate all
the blocks during format.
-Shridhar
On Thu, 9 Jul 2020 at 06:58, Jason Dillaman wrote:
> On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill
> wrote:
> >
> >
> >
> > On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman
> wrote:
> >>
> >> On
On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman wrote:
> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > My understanding is that the time to format an RBD volume is not
> dependent
> > on its size as the RBD volumes are thin pr
Hello,
My understanding is that the time to format an RBD volume is not dependent
on its size as the RBD volumes are thin provisioned. Is this correct?
For example, formatting a 1G volume should take almost the same time as
formatting a 1TB volume - although accounting for differences in latencie
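A quick way to see this in practice (pool/image names and the device node are examples):
$ rbd create --size 1T testpool/bigvol
$ sudo rbd map testpool/bigvol
$ sudo mkfs.xfs /dev/rbd0
$ rbd du testpool/bigvol
The create returns immediately, mkfs only writes filesystem metadata, and "rbd du" shows provisioned versus actually used space.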
Thanks so much Jason.
On Sun, Jun 28, 2020 at 7:31 AM Jason Dillaman wrote:
> On Thu, Jun 25, 2020 at 7:51 PM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > Is there a way to list all locks held by a client with the given IP
> address?
>
> Negative -- yo
Hello,
Is there a way to list all locks held by a client with the given IP address?
Also, I read somewhere that removing the lock with "rbd lock rm..."
automatically blacklists that client connection. Is that correct?
How do I blacklist a client with the given IP address?
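The closest thing I have found is iterating over the images and checking the locker addresses (the pool name is an example):
for img in $(rbd ls testpool); do
    echo "== $img =="
    rbd lock ls "testpool/$img"
done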
Thanks,
Shridhar
Hello,
Is there a way to get read/write I/O statistics for each rbd device for
each mapping?
For example, when an application uses one of the volumes, I would like to
find out what performance (average read/write bandwidth, IOPS, etc.) that
application observed on a given volume. Is that possible?
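Two angles I am aware of (pool and device names are examples): the mgr-side per-image counters added around Nautilus, and plain block-device statistics on the client for a specific mapping:
$ rbd perf image iotop --pool volumes
$ rbd perf image iostat --pool volumes
$ iostat -x 5 /dev/rbd0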
Th
Thanks Ilya for the quick response.
On Mon, 4 May 2020 at 11:03, Ilya Dryomov wrote:
> On Mon, May 4, 2020 at 7:32 AM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > I wanted to know if rbd will flush any writes in the page cache when a
> > volume is &qu
Thanks Janne. I actually meant that the RW mount is unmounted already -
sorry about the confusion.
- Shridhar
On Mon, 4 May 2020 at 00:35, Janne Johansson wrote:
> On Mon, 4 May 2020 at 05:14, Void Star Nill wrote:
>
>> One of the use cases (e.g. machine learning workloads)
Hello,
I wanted to know if rbd will flush any writes in the page cache when a
volume is "unmap"ed on the host, or if we need to flush explicitly using
"sync" before unmap?
Thanks,
Shridhar
Hello Brad, Adam,
Thanks for the quick responses.
I am not passing any arguments other than "ro,nouuid" on mount.
One thing I forgot to mention is that there could be more than one mount
of the same volume on a host - I don't know how this plays out for xfs.
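For completeness, the per-consumer sequence looks like this (the image spec is an example; norecovery is an option I am considering, not something we pass today):
$ sudo rbd map --read-only volumes/dataset01
$ sudo mount -o ro,nouuid,norecovery /dev/rbd0 /mnt/dataset01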
Appreciate your inputs.
Regards,
Sh
Hello All,
One of the use cases (e.g. machine learning workloads) for RBD volumes in
our production environment is that users could mount an RBD volume in RW
mode in a container, write some data to it, and later use the same volume in
RO mode in a number of containers in parallel to consume the
Shridhar
>
>
>>
>> Is it possible to add custom udev rules to control this behavior?
>>
>> Thanks,
>> Shridhar
>>
>>
>> On Mon, 20 Apr 2020 at 01:19, Ilya Dryomov wrote:
>>
>> > On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill <
other container.
Is it possible to add custom udev rules to control this behavior?
Thanks,
Shridhar
On Mon, 20 Apr 2020 at 01:19, Ilya Dryomov wrote:
> On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > How frequently do RBD device names get
Hello,
How frequently do RBD device names get reused? For instance, when I map a
volume on a client and it gets mapped to /dev/rbd0, and it is later unmapped,
does a subsequent map reuse this name right away?
I ask this question because, in our use case, we try to unmap a volume and
we are thinking
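For what it's worth, the udev rules shipped with the Ceph packages already create stable per-image symlinks, which sidesteps the /dev/rbdN reuse question (names below are examples):
$ readlink -f /dev/rbd/volumes/vol01   # resolves to whichever /dev/rbdN the image is currently mapped as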
her thread, if you always take a lock before
> mapping an image, you could just list the lockers. Unlike a watch,
> a lock will never disappear behind your back ;)
>
> Thanks,
>
> Ilya
>
> On Thu, Apr 9, 2020 at 9:24 PM Void Star Nill
> wrote:
o the orchestration layer to decide when (and how) to break
> them. rbd can't make that decision for you -- consider a case where
> the device is alive and ready to serve I/O, but the workload is stuck
> for some other reason.
>
> Thanks,
>
> Ilya
Hi,
Any thoughts on this?
Regards
Shridhar
On Thu, Apr 9, 2020 at 5:17 PM Void Star Nill
wrote:
> Hi,
>
> I am seeing a large number of connections from ceph-mgr are stuck in
> CLOSE_WAIT state with data stuck in the receive queue. Looks like ceph-mgr
> process is not r
Paul, Ilya, others,
Any inputs on this?
Thanks,
Shridhar
On Thu, 9 Apr 2020 at 12:30, Void Star Nill
wrote:
> Thanks Ilya, Paul.
>
> I don't have the panic traces and probably they are not related to rbd. I
> was merely describing our use case.
>
> On our setup that we
Hi,
I am seeing a large number of connections from ceph-mgr stuck in
CLOSE_WAIT state with data stuck in the receive queue. It looks like the ceph-mgr
process is not reading the data completely off the socket buffers and
terminating the connections properly.
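The affected sockets can be listed on the mgr host with:
$ ss -tnp state close-wait | grep ceph-mgr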
I also notice that the access to dashboar
lock. We need to intervene manually and
resolve such issues as of now. So I am looking for a way to do this
deterministically.
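The manual intervention currently looks roughly like this (image spec, lock ID and client name are placeholders):
$ rbd lock ls volumes/vol01
$ rbd lock rm volumes/vol01 <lock-id> client.XXXX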
Thanks,
Shridhar
On Wed, 8 Apr 2020 at 02:48, Ilya Dryomov wrote:
> On Tue, Apr 7, 2020 at 6:49 PM Void Star Nill
> wrote:
> >
> > Hello All,
; reestablish it, but this doesn't happen immediately.
>
> Thanks,
>
> Ilya
>
> On Tue, Apr 7, 2020 at 8:12 PM Void Star Nill
> wrote:
> >
> > Thanks Jack. Exactly what I needed.
> >
> > Appreciate quick response.
> >
> client.522682726
> cookie=140177351959424
>
> This is the list of clients for that image
> All mapping hosts are in it
>
>
> On 4/7/20 6:46 PM, Void Star Nill wrote:
> > Hello,
> >
> > Is there a way to find out all the clients where the volumes are mapped
Hello All,
Is there a way to specify that a lock (shared or exclusive) on an rbd
volume be released if the client machine becomes unreachable or
unresponsive?
In one of our clusters, we use rbd locks on volumes to provide a
kind of shared or exclusive access - to make sure there are no
Hello,
Is there a way to find out all the clients where the volumes are mapped
from a central point?
We have a large fleet of machines that use ceph rbd volumes. For some
maintenance purposes, we need to find out if a volume is mapped anywhere
before acting on it. Right now we go and query each c
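One central-ish option is asking the cluster for the image's watchers instead of querying every client (the image spec is an example); the Watchers list includes the address of every client that currently has the image open or mapped:
$ rbd status volumes/vol01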