Hello,
We did not build the windows client ourselves. The ceph version we are
connecting to is: 14.2.15
We followed the documentation here:
https://docs.ceph.com/en/latest/install/windows-install/
* So we installed Dokan from here:
https://github.com/dokan-dev/dokany/releases (The issue
Hi,
we run a large Octopus S3 cluster with only rotating disks:
1.3 PiB with 177 OSDs, some with an SSD block.db and some without.
We have a ton of spare 2TB disks and we just wondered if we can bring them
to good use.
For every 10 spinning disks we could add one 2TB SSD and we would create
two par
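For reference, a rough sketch of how such a layout might be deployed with
ceph-volume; the device names are only placeholders and --report does a dry
run first:
# ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    /dev/sde --db-devices /dev/nvme0n1 --report
ceph-volume batch then carves the SSD into one block.db LV per HDD OSD.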
(There should not be any issues using rgw for other buckets while
re-sharding.)
If it is, then disabling the bucket access will work, right? Also sync should
be disabled.
Yes, after the manual reshard it should clear the leftovers but in my
situation resharding failed and I got double entries for th
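For context, the manual reshard and leftover cleanup usually look roughly like
this; the bucket name and shard count are placeholders:
# radosgw-admin bucket reshard --bucket=mybucket --num-shards=101
# radosgw-admin reshard stale-instances list
# radosgw-admin reshard stale-instances rm
Note that stale-instances rm is not recommended on multisite setups.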
Hi Stefan,
For a 6:1 or 3:1 ratio we do not have enough slots (I think).
There is some read traffic, but I don't know if this is a lot:
client: 27 MiB/s rd, 289 MiB/s wr, 1.07k op/s rd, 261 op/s wr
Putting them to use for some special rgw pools also came to my mind.
But would this make a lot of diff
Maybe I missed it, but can't I just reshard buckets when they are not
replicated / synced / mirrored (what is the correct ceph terminology for
this)?
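If the goal is to take one bucket out of sync before resharding, a possible
sequence (multisite assumed; the bucket name is a placeholder) would be:
# radosgw-admin bucket sync disable --bucket=mybucket
# radosgw-admin bucket sync status --bucket=mybucket
# radosgw-admin bucket reshard --bucket=mybucket --num-shards=101
# radosgw-admin bucket sync enable --bucket=mybucket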
On Mon, 8 Nov 2021 at 12:28, mhnx wrote:
> (There should not be any issues using rgw for other buckets while
> re-sharding.)
> If it is then d
Hi Dan,
I diffed two maps, but the only difference are the epoch number and the
timestamp.
# diff -u osdmap-183113.txt osdmap-183114.txt
--- osdmap-183113.txt 2021-11-08 12:44:24.421868492 +0100
+++ osdmap-183114.txt 2021-11-08 12:44:28.302027930 +0100
@@ -1,7 +1,7 @@
-epoch 183113
+epoch 183114
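For anyone following along, the two maps can be fetched and dumped for diffing
roughly like this:
# ceph osd getmap 183113 -o osdmap-183113
# osdmaptool --print osdmap-183113 > osdmap-183113.txt
# ceph osd getmap 183114 -o osdmap-183114
# osdmaptool --print osdmap-183114 > osdmap-183114.txt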
Okay.
The default value for paxos_propose_interval seems to be "1.0" not
"2.0". But anyway, reducing it to 0.25 seems to fix this issue on our
testing cluster.
I wanted to test some failure scenarios with this value and had a look
at the osdmap epoch to check how many new maps would be created.
On the
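A rough sketch of how the value and the epoch churn can be checked (the
numbers are the ones from this thread):
# ceph config get mon paxos_propose_interval
# ceph config set mon paxos_propose_interval 0.25
# ceph osd stat        (repeat to watch the epoch counter)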
> That does not seem like a lot. Having SSD based metadata pools might
> reduce latency though.
>
So block.db and block.wal don't make sense? I would like to have a
consistent cluster.
In either case I would need to remove or add SSDs, because we currently
have this mixed.
It does waste a lot of
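If the SSDs were used for RGW metadata pools instead, the usual pattern is a
device-class CRUSH rule, something along these lines (the rule and pool names
are only examples):
# ceph osd crush rule create-replicated rgw-meta-ssd default host ssd
# ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta-ssd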
Hi,
Okay. Here is another case which was churning the osdmaps:
https://tracker.ceph.com/issues/51433
Perhaps similar debugging will show what's creating the maps in your case.
Cheers, Dan
On Mon, Nov 8, 2021, 12:48 PM Manuel Lausch wrote:
> Hi Dan,
>
> I diffed two maps, but the only differe
Hi Dave,
Please take into account that Nautilus reached End of Life, so the first
recommendation would be for you to upgrade to a supported release ASAP.
That said, the Grafana panels come from 2 different sources: Node Exporter
(procfs data: CPU, RAM, ...) and Ceph Exporter (Ceph-intrinsic data:
Hello.
I'm using Nautilus 14.2.16
I have 30 SSDs in my cluster and I use them as Bluestore OSDs for the RGW index.
Almost every week I'm losing (down) an OSD, and when I check the OSD log I see:
-6> 2021-11-06 19:01:10.854 7fa799989c40  1 bluefs _allocate
failed to allocate 0xf4f04 on bdev 1, free 0xb0
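The BlueFS usage on an affected OSD can be inspected with, for example:
# ceph daemon osd.<id> perf dump bluefs
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-<id>
(the second command needs the OSD to be stopped)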
Hi Dan,
thanks for the hint.
The cluster is not doing any changes (rebalance, merging, splitting, or
something like this). Only normal client traffic via librados.
In the mon.log I regularly see the following messages, which seem to
correlate with the osd map "changes":
2021-11-08T14:15:58.915+0100
Hi,
Yeah that is clearly showing a new osdmap epoch a few times per second.
There's nothing in the ceph.audit.log?
You might need to increase the debug levels of the mon leader to see
what is triggering it.
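For example, something like:
# ceph tell mon.<leader> config set debug_mon 10
# ceph tell mon.<leader> config set debug_paxos 10
and watch the mon log, then set the levels back to their defaults afterwards.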
-- dan
On Mon, Nov 8, 2021 at 2:37 PM Manuel Lausch wrote:
>
> Hi Dan,
>
> thanks for
On Mon, Nov 8, 2021 at 6:03 AM Manuel Lausch wrote:
> Okay.
> The default value for paxos_propose_interval seems to be "1.0" not
> "2.0". But anyway, reducing to 0.25 seems to fix this issue on our
> testing cluster.
>
> I wanted to test some failure scenarios with this value and had a look
> to
The Ceph Leadership Team has decided to stop holding DocuBetter meetings.
Documentation-related complaints and requests will now be heard at the
monthly "User + Dev" meetings.
The first of the following links is the meeting itself.
The second is the agenda for t
Hi Benoît,
On Mon, Nov 8, 2021 at 4:31 PM Benoît Knecht wrote:
>
> Hi Dan,
>
> On Thursday, November 4th, 2021 at 11:33, Dan van der Ster
> wrote:
> > - Are we running the same firmware as you? (We have 0104). I wonder if
> > Toshiba has changed the implementation of the cache in the meantime
> On 8 Nov 2021, at 19:08, Boris Behrens wrote:
>
> And does it make a different to have only a block.db partition or a
> block.db and a block.wal partition?
I think having only a block.db partition is better if you don’t have 2 separate
disks for them. WAL will be placed in the DB partition if you don
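A sketch of what that looks like with ceph-volume (the device names are
placeholders):
# ceph-volume lvm create --bluestore --data /dev/sdf --block.db /dev/nvme0n1p1
Without an explicit --block.wal, the WAL simply lives inside the DB device.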
Hi Franck,
I totally agree with your point 3 (also with 1 and 2 indeed). Generally
speaking, the release cycle of many software projects tends to become faster
and faster (not only for ceph, but also openstack etc...) and it's really
hard and tricky to keep an infrastructure up to date in such
co
I have the idea that the choice for cephadm and the release schedule is more
fuelled by market acquisition aspirations. How can you reason with that?
In addition to what the others said - generally there is little point
in splitting block and wal partitions - just stick to one for both.
What model are your SSDs and how well do they handle small direct
writes? Because that's what you'll be getting on them and the wrong
type of SSD can make things
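A common quick check for this is a single-threaded sync write test, e.g. with
fio (destructive when pointed at a raw device, so use a spare one):
# fio --name=sync-write --filename=/dev/sdX --direct=1 --sync=1 --rw=write \
    --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
Drives with power-loss protection typically sustain thousands of these IOPS,
while consumer drives often drop to a few hundred.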
Hi,
Still in the testing phase, but I wonder how to disable configuration that was
chosen upon cephadm cluster creation.
I used the --all-available-devices flag.
But I also applied a custom OSD YAML file, and can't find how to remove such a
definition later.
Best Regards,
Arūnas
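A possible approach (a sketch; the spec/service names below are whatever
ceph orch ls osd shows for your cluster, not necessarily these):
# ceph orch ls osd --export                        (lists the applied OSD specs)
# ceph orch apply osd --all-available-devices --unmanaged=true
# ceph orch rm osd.my_custom_spec                  (removes a spec by service name)
Removing a spec does not remove the OSDs it already created.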
+1 to supporting an LTS release with stability as the MAJOR goal. Preferably
LTS releases are maintained for a minimum of five or even ten years, like the
Linux kernel.
huxia...@horebdata.cn
From: Francois Legrand
Date: 2021-11-08 17:59
To: Frank Schilder; ceph-users
Subject: [ceph-users] Re: Why you mig
Hi folks,
having an LTS release cycle could be a great topic for an upcoming "Ceph
User + Dev Monthly meeting".
The first one is scheduled for November 18, 2021, 14:00-15:00 UTC
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
Any volunteers to extend the agenda and advocate the idea?
Thank
Theoretically you should be able to reshard buckets which are not in sync. That
would produce new .dir.new_bucket_index objects inside your bucket.index pool,
putting the omap key/values into new shards (.dir.new_bucket_index).
The objects themselves would be left intact, as the marker id is not changed.
When resharding is performed, I believe it is considered a bucket operation and
goes through updating the bucket stats: a new bucket shard is created and it
may increase the number of objects within the bucket stats. If it was
broken during resharding, you could check the current bucket i
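The checks would be along these lines (the bucket name is a placeholder):
# radosgw-admin bucket stats --bucket=mybucket     (marker, bucket id, num_objects)
# radosgw-admin reshard status --bucket=mybucket
# radosgw-admin bi list --bucket=mybucket | head   (raw bucket index entries)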
Are those problematic OSDs getting almost full? I do not have an Ubuntu account
to check their pastebin.
Sent from a Galaxy device
Original message From: mhnx
Date: 08.11.21 15:31 (GMT+02:00) To: Ceph Users Subject:
[ceph-users] allocate_bluefs_freespace failed to alloc
I was trying to keep things clear and I was aware of the login issue.
Sorry. You're right.
OSDs are not full. They need balancing, but I can't activate the balancer
because of the issue.
ceph osd df tree | grep 'CLASS\|ssd'
ID  CLASS  WEIGHT  REWEIGHT  SIZE  RAW USE  DATA  OMAP  META  AVAIL  %USE
Hi fellow ceph users,
I did an upgrade from 14.2.23 to 16.2.6, not knowing that the current
minor version had this nasty bug! [1] [2]
We were able to resolve some of the omap issues in the rgw.index pool,
but still have 17 PGs to fix in the rgw.meta and rgw.log pools!
I have a couple of questions:
Hi,
I have a 6-node cluster running Pacific 16.2.6 with 54 x 10TB HDD and 12 x
6.4 TB NVME drives. By default, the autoscaler appears to scale down each
pool to 32 PGs, which causes a very uneven data distribution and somewhat
lower performance.
Although I knew that previously there was a neat pu
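If the autoscaler's estimates are the issue, the usual knobs are per-pool
pg_num_min, target_size_ratio, or turning it off for a pool, e.g.:
# ceph osd pool autoscale-status
# ceph osd pool set <pool> pg_num_min 128
# ceph osd pool set <pool> target_size_ratio 0.2
# ceph osd pool set <pool> pg_autoscale_mode off
(the numbers here are only examples)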