Yes, because you did *not* specify a dedicated WAL device. This is
also reflected in the OSD metadata:
$ ceph osd metadata 6 | grep dedicated
"bluefs_dedicated_db": "1",
"bluefs_dedicated_wal": "0"
Only if you had specified a dedicated WAL device would you see it in
the lvm list output.
You can also test directly with an OSD bench whether the WAL is on the
flash device:
https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/
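A quick way to check this (a sketch; the OSD id, byte counts and device name are just examples) is to run a small-block OSD bench and watch the NVMe utilization while it runs:

$ ceph tell osd.6 bench 12288000 4096     # many small 4 KiB writes, which exercise the WAL
$ iostat -x 1 nvme0n1                     # in a second shell: do the writes land on the NVMe?

If the WAL sits on the flash device, the small writes show up there rather than on the HDD.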
Joachim
Hi,
On 07.07.23 16:52, jcic...@cloudflare.com wrote:
There are two sites, A and B. There are 5 mons, 2 in A, 3 in B. Looking at just
one PG and 4 replicas, we have 2 replicas in site A and 2 replicas in site B.
Site A holds the primary OSD for this PG. When a network split happens, I/O
would
Hello Eugen,
I've tried to specify a dedicated WAL device, but I have only
/dev/nvme0n1, so I cannot write a correct YAML file...
On Mon, Jul 10, 2023 at 09:12:29 CEST, Eugen Block wrote:
> Yes, because you did *not* specify a dedicated WAL device. This is also
> reflected in the OSD metadata:
>
It's fine, you don't need to worry about the WAL device; it is
automatically created on the NVMe if the DB is there. Having a
dedicated WAL device would only make sense if, for example, your data
devices are on HDD, your RocksDB is on "regular" SSDs and you also have
NVMe devices. But since you
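For reference, a minimal cephadm OSD spec for that kind of layout might look like this (a sketch; the service_id, host pattern and device path are placeholders). When only db_devices is given, the WAL automatically lives on the same device as the DB:

service_type: osd
service_id: hdd_with_nvme_db
placement:
  host_pattern: '*'          # adjust to your hosts
spec:
  data_devices:
    rotational: 1            # HDDs carry the data
  db_devices:
    paths:
      - /dev/nvme0n1         # RocksDB (and implicitly the WAL) on the NVMe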
Hello Eugen,
On Mon, Jul 10, 2023 at 10:02:58 CEST, Eugen Block wrote:
> It's fine, you don't need to worry about the WAL device, it is automatically
> created on the nvme if the DB is there. Having a dedicated WAL device would
> only make sense if for example your data devices are on HDD, your ro
Hi,
I got a customer response with a payload size of 4096, and that made things
even worse: the mon startup time was now around 40 minutes. My doubts
about decreasing the payload size seem confirmed. Then I read Dan's
response again, which also mentions that the default payload size could
be too small
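Assuming the knob in question is mon_sync_max_payload_size (with its companion mon_sync_max_payload_keys), raising it rather than lowering it would look roughly like this; the values are only illustrative, not a recommendation:

$ ceph config set mon mon_sync_max_payload_size 4194304
$ ceph config set mon mon_sync_max_payload_keys 4096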
Hi,
In our cluster the monitors' logs grow to a couple of GBs within days. There
are quite a lot of debug messages from rocksdb, osd, mgr and mds. These should
not be necessary on a well-run cluster. How can I turn this logging off?
Thanks,
Ben
Hi,
yes, this is the incomplete multipart uploads problem.
Then, how does an admin delete the incomplete multipart objects?
I mean:
1. Can an admin find incomplete jobs and their incomplete multipart objects?
2. If the first is possible, can an admin delete all of those jobs or objects
at once?
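One possible approach is to go through the S3 API itself (a sketch using the aws CLI; the endpoint, bucket, key and upload id are placeholders, and the admin needs credentials that can access the bucket):

$ aws --endpoint-url http://rgw.example.com s3api list-multipart-uploads --bucket mybucket
$ aws --endpoint-url http://rgw.example.com s3api abort-multipart-upload --bucket mybucket --key bigfile.bin --upload-id <UploadId>

Alternatively, a bucket lifecycle rule with an AbortIncompleteMultipartUpload action can clean them up automatically after a number of days.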
On Mon, Jul 10, 2023 at 10:40 AM wrote:
>
> Hi,
>
> yes, this is the incomplete multipart uploads problem.
>
> Then, how does an admin delete the incomplete multipart objects?
> I mean:
> 1. Can an admin find incomplete jobs and their incomplete multipart objects?
> 2. If the first is possible, can an admin delete all
At what level do you have logging set for your mons? That is a high
volume of logs for the mons to generate.
You can ask all the mons to print their debug logging level with:
"ceph tell mon.* config get debug_mon"
The default is 1/5
What is the overall status of your cluster? Is it healthy?
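To cut down the volume from the subsystems named in the original question (rocksdb, osd, mgr, mds), the levels can be lowered per daemon type; a sketch, with the exact levels being a judgment call (debug_rocksdb defaults to 4/5 and tends to be the chattiest):

$ ceph config set mon debug_rocksdb 1/5
$ ceph config set osd debug_rocksdb 1/5
$ ceph config set mgr debug_mgr 1/5
$ ceph config set mds debug_mds 1/5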
On 6/30/23 18:36, Yuri Weinstein wrote:
> This RC has gone thru partial testing due to issues we are
> experiencing in the sepia lab.
> Please try it out and report any issues you encounter. Happy testing!
I tested the RC (v18.1.2) this afternoon. I tried out the new "read
balancer". I hit asserts
Hi Stefan,
Yes, please create a tracker. I will take a look at the issue,
Thanks,
Laura Flores
On Mon, Jul 10, 2023 at 10:50 AM Stefan Kooman wrote:
> On 6/30/23 18:36, Yuri Weinstein wrote:
>
> > This RC has gone thru partial testing due to issues we are
> > experiencing in the sepia lab.
> >
On Thu, 6 Jul 2023 at 12:54, Mark Nelson wrote:
>
>
> On 7/6/23 06:02, Matthew Booth wrote:
> > On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote:
> >> I'm sort of amazed that it gave you symbols without the debuginfo
> >> packages installed. I'll need to figure out a way to prevent that.
> >> Havi
Hi Jan,
On Sun, Jul 9, 2023 at 11:17 PM Jan Marek wrote:
> Hello,
>
> I have a cluster, which have this configuration:
>
> osd pool default size = 3
> osd pool default min size = 1
>
Don't use min_size = 1 during regular stable operations. Instead, use
min_size = 2 to ensure data safety, and th
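For completeness, those values are set per pool (a sketch; "mypool" is a placeholder):

$ ceph osd pool set mypool size 3        # keep three replicas
$ ceph osd pool set mypool min_size 2    # refuse client I/O below two replicas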
Oh yes, sounds like purging the rbd trash will be the real fix here!
Good luck!
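A sketch of the commands involved (the pool name is a placeholder):

$ rbd trash ls --pool mypool       # list images currently in the trash
$ rbd trash purge --pool mypool    # remove all trashed images whose deferment period has expired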
On Mon, Jul 10, 2023 at 6:10 AM Eugen Block wrote:
> Hi,
> I got a customer response with pa
I'm in the process of adding the radosgw service to our OpenStack cloud
and hoping to re-use keystone for discovery and auth. Things seem to
work fine with many keystone tenants, but as soon as we try to do
something in a project with a '-' in its name everything fails.
Here's an example, usin
Just rechecked: debug_mon is 1/5 by default. The mgr/cephadm log_to_cluster
level has been set from debug to critical. I wonder how to set the others'
levels; I haven't got a clue how to do that.
Thanks,
Ben
On Mon, Jul 10, 2023 at 23:21, Wesley Dillingham wrote:
> At what level do you have logging set to for your mons? That i