Hi Igor,
thanks for your reply.
> To work around it you might want to switch both bluestore and bluefs
> allocators back to bitmap for now.
Indeed, setting both allocators to bitmap brought the OSD back online and the
cluster recovered.
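For anyone else hitting the same thing, the switch amounted to something like
this (a sketch assuming the centralized config database; adjust the restart
step to however your deployment runs its OSDs):

# ceph config set osd bluestore_allocator bitmap
# ceph config set osd bluefs_allocator bitmap

followed by a restart of the affected OSDs (e.g. systemctl restart
ceph-osd@<id> on a non-containerized host).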
You rescued my cluster. ;-)
Cheers
Stephan
Hi,
> [client]
> rbd cache = false
> rbd cache writethrough until flush = false
this is the rbd client's config, not the global MON config you're
reading here:
> # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` config
> show |grep rbd_cache
> "rbd_cache": "true",
If you want to ch
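To illustrate the distinction (the socket path and client name below are
hypothetical, and the client needs an admin socket enabled for the second
command to work): the client-side ceph.conf carries

[client]
rbd cache = false
rbd cache writethrough until flush = false

while the value the client actually uses can be read from its own admin
socket rather than the MON's:

# ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache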
On Thu, Dec 17, 2020 at 7:22 AM Eugen Block wrote:
>
> Hi,
>
> > [client]
> > rbd cache = false
> > rbd cache writethrough until flush = false
>
> this is the rbd client's config, not the global MON config you're
> reading here:
>
> > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` co
Is this cephcsi plugin under the control of Red Hat?
What is the easiest and best way to migrate a bucket from an old cluster to a
new one?
It's Luminous to Octopus; I'm not sure whether that matters from the data
perspective.
Hi List,
In order to reproduce an issue we see on a production cluster (CephFS
client: ceph-fuse outperforms the kernel client by a factor of 5) we would
like to have a test cluster with the same cephfs "flags" as
production. However, it's not completely clear how certain features
influence the
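For what it's worth, the flags themselves can be compared between clusters
with something like this (the fs name "cephfs" is just an example; the flags
show up as a numeric field in the output):

# ceph fs get cephfs | grep flags
# ceph fs dump | grep flags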
Hello.
Has anyone started using namespaces in real production for
multi-tenancy?
How good are they at isolating tenants from each other? Can tenants see each
other's presence, quotas, etc.?
Is it safe to give access via cephx to (possibly hostile to each other)
users to the same pool with restric
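For context, the kind of restricted cephx capability being asked about looks
roughly like this (the pool, namespace and client name are made up for
illustration):

# ceph auth get-or-create client.tenant-a mon 'allow r' osd 'allow rw pool=mypool namespace=tenant-a'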
Huhhh...
It's unfortunate that every Google search I did for turning off the rbd cache
said "put it in the [client] section".
Doh.
Maybe this would make a good candidate to update the ceph rbd docs?
Speaking of which... what is the *exact* syntax for that command, please?
None of the below work:
On Thu, Dec 17, 2020 at 10:41 AM Philip Brown wrote:
>
> Huhhh...
>
> It's unfortunate that every Google search I did for turning off the rbd cache
> said "put it in the [client] section".
> Doh.
>
> Maybe this would make a good candidate to update the ceph rbd docs?
As an open source project,
I guess I left out of my examples the one where I tried rbd_cache as well, which also failed:
# rbd config global set rbd_cache false
rbd: invalid config entity: rbd_cache (must be global, client or client.)
So, while I am happy to file a documentation pull request... I still need to
find the specific comman
On Thu, Dec 17, 2020 at 11:21 AM Philip Brown wrote:
>
> I guess I left out of my examples the one where I tried rbd_cache as well, which also failed
>
> # rbd config global set rbd_cache false
> rbd: invalid config entity: rbd_cache (must be global, client or client.)
But that's not a valid command -- you at
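As later messages in the thread confirm, the config entity and the option are
separate arguments, so the working form is along these lines (a sketch):

# rbd config global set global rbd_cache false
# rbd config global get global rbd_cache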
On 01.12.20 at 17:32, Peter Lieven wrote:
> Hi all,
>
>
> the rados_stat() function has a TODO in the comments:
>
>
> * TODO: when are these set, and by whom? can they be out of date?
>
> Can anyone help with this? How reliably is the pmtime updated? Is there a
> minimum update interval?
>
> Than
On Thu, Dec 17, 2020 at 3:23 AM Stefan Kooman wrote:
>
> Hi List,
>
> In order to reproduce an issue we see on a production cluster (CephFS
> client: ceph-fuse outperforms the kernel client by a factor of 5) we would
> like to have a test cluster with the same cephfs "flags" as
> production. However
I am happy to say that this seems to have been the solution.
After running
ceph config set global rbd_cache false
I can now run the full 256-thread variant:
fio --direct=1 --rw=randwrite --bs=4k --ioengine=libaio --filename=/dev/rbd0
--iodepth=256 --numjobs=1 --time_based --group_reporting --na
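For completeness, a quick sketch of how to double-check that the setting
actually landed in the config database:

# ceph config dump | grep rbd_cache

New values are picked up when a client (re)opens the image, so remapping the
rbd device after the change is a reasonable way to make sure the test uses it.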
Huhhh..
It seems worthwhile to point out two inconsistencies, then.
1. the "old way", of ceph config set global rbd_cache false
doesnt require this odd redundant "global set global" syntax. It is confusing
to users to have to specify "global" twice.
may I suggest that the syntax for rbd config
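To spell out the two forms being contrasted: the cluster config database way,

# ceph config set global rbd_cache false

versus the rbd-level override, where the first "global" selects the config
level (global vs. pool vs. image) and the second is the config entity
(global, client, or client.<id>):

# rbd config global set global rbd_cache false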
One final word of warning for everyone:
while I no longer have the performance glitch,
I can no longer reproduce it.
Doing
ceph config set global rbd_cache true
does not seem to reproduce the old behaviour, even if I do things like unmap
and remap the test rbd.
Which is worrying, because
On Thu, Dec 17, 2020 at 12:09 PM Philip Brown wrote:
>
> Huhhh..
> It seems worthwhile to point out two inconsistencies, then.
>
> 1. the "old way", of ceph config set global rbd_cache false
>
> doesn't require this odd, redundant "global set global" syntax. It is confusing
> to users to have to s
On Thu, Dec 17, 2020 at 10:27 AM Stefan Kooman wrote:
> > In any case, I think what you're asking is about the file system flags
> > and not the required_client_features.
>
> That's correct. So I checked the file system flags on different clusters
> (some installed luminous, some mimic, some nauti
I was wondering how to change the IPs used for the OSD servers in my new
Octopus-based environment, which uses all those docker/podman images by default.
Limiting the date range to within a year doesn't seem to hit anything.
An unlimited Google search pulled up
http://lists.ceph.com/pipermail/ceph-use
On Thu, Dec 17, 2020 at 11:35 AM Stefan Kooman wrote:
>
> On 12/17/20 7:45 PM, Patrick Donnelly wrote:
>
> >
> > When a file system is newly created, it's assumed you want all the
> > stable features on, including multiple MDS, directory fragmentation,
> > snapshots, etc. That's what those flags a
On 12/17/20 5:54 PM, Patrick Donnelly wrote:
> file system flags are not the same as the "feature" flags. See this
> doc for the feature flags:
> https://docs.ceph.com/en/latest/cephfs/administration/#minimum-client-version
Thanks for making that clear.
Note that the new "fs feature" and "fs requ
On 12/17/20 7:45 PM, Patrick Donnelly wrote:
> When a file system is newly created, it's assumed you want all the
> stable features on, including multiple MDS, directory fragmentation,
> snapshots, etc. That's what those flags are for. If you've been
> upgrading your cluster, you need to turn those on
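For illustration, turning those on after an upgrade looks roughly like this
(the fs name "cephfs" is just an example; check the documentation for your
release before flipping anything):

# ceph fs set cephfs allow_new_snaps true
# ceph fs set cephfs max_mds 2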
This is attempt #3 to submit this issue to this mailing list. I don't
expect this to be received. I give up.
I have an issue with MDS corruption which so far I haven't been able to
resolve using the recovery steps I've found online. I'm on v15.2.6. I've
tried all the recovery steps mentioned here,
What if you just stop the containers, configure the new IP address for that
server, then restart the containers? I think it should just work as long as
this server can still reach the MONs.
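A rough outline of that, assuming a cephadm/podman deployment (the systemd
unit naming below is the usual cephadm pattern; substitute your fsid and OSD
ids, and adjust if your units are named differently):

# systemctl stop ceph-<fsid>@osd.<id>.service
  ... change the host's IP address at the OS level ...
# systemctl start ceph-<fsid>@osd.<id>.service

If the subnet itself changes, public_network (and cluster_network, if set)
would also need updating first, e.g. with an example subnet:

# ceph config set global public_network 10.0.40.0/24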
> On Dec 18, 2020, at 03:18, Philip Brown wrote:
>
> I was wondering how to change the IPs used for the OSD serve
Thanks for this.
Is download.ceph.com more heavily loaded than usual? It's taking more
than 24 hours to rsync this release to our local mirror (and AFAICT
none of the European mirrors have caught up yet).
Cheers, Dan
On Thu, Dec 17, 2020 at 3:55 AM David Galloway wrote:
>
> This is the 16th bac