On Wed, 17 Nov 2021 at 18:41, Dave Hall wrote:
>
> The real point here: From what I'm reading in this mailing list it appears
> that most non-developers are currently afraid to risk an upgrade to Octopus
> or Pacific. If this is an accurate perception then THIS IS THE ONLY
> PROBLEM.
You might
Thanks a ton! That helps a lot!
Thanks,
Xiong
On 17 Nov 2021, at 11:16, 胡 玮文 wrote:
There is an rbytes mount option [1]. Besides, you can use "getfattr -n ceph.dir.rbytes /path/in/cephfs".
[1]: https://docs.ceph.com/en/latest/man/8/mount.ceph/#advanced
Weiwen Hu
On 17 Nov 2021, at 10:26, zxcs wrote:
Hi,
I want to list cephfs
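For reference, a minimal sketch of both approaches above; the monitor address and mount point are placeholders and assume the client keyring is already configured:
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,rbytes   (du/stat then report recursive byte counts)
getfattr -n ceph.dir.rbytes /mnt/cephfs/some/dir             (query the recursive size of a single directory)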
This lab environment runs on:
"ceph version 15.2.14-84-gb6e5642e260
(b6e5642e2603d3e6bdbc158feff6a51722214e72) octopus (stable)": 3
Quoting J-P Methot:
Yes, the single realm config works without issue. Same with the rgw
container. It's with the second realm that everything stop
Docker itself is not the problem, it's super nice. It's just that adm/orch
is yet another deployment tool, and yet again not reliable enough. It's
easy to break, and adds additional errors, as you can see in my
screenshot. I have a collection of them ;).
We are talking about storage, meant to s
Hi all,
We are consistently seeing the MDS_CLIENT_RECALL warning in our cluster. It
seems harmless, but we cannot get HEALTH_OK, which is annoying.
The clients that are reported failing to respond to cache pressure are
constantly changing, and most of the time we have 1-5 such clients out of ~20
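Not a confirmed fix, but the knobs that usually come up for this warning are the MDS recall settings; the values here are commonly suggested starting points, not defaults:
ceph health detail                                  (shows which clients are failing to respond to cache pressure)
ceph config set mds mds_recall_max_caps 30000
ceph config set mds mds_recall_max_decay_rate 1.5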
Gregory,
Thank you for your reply. I do understand that a number of serialized
lookups may take time. However, if 3.25 seconds is OK, 11.2 seconds sounds
long, and I once removed a large subdirectory which took over 20
minutes to complete. I attempted to use the nowsync mount option with kernel
5.15
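For reference, the mount invocation I would expect for testing async dirops; monitor address, client name and path are placeholders:
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=client1,nowsync
With nowsync the kernel client no longer waits for the MDS reply on each unlink/create, so a large recursive delete should pipeline requests instead of serializing on round trips.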
On 11/17/21 8:19 PM, Martin Verges wrote:
There are still alternative solutions without the need for useless
containers and added complexity. Stay away from that crap and you won't
have a hard time. 😜
I don't have a problem with the containers *at all*. And along with me,
probably a lot of users. But
On Wed, Nov 17, 2021 at 6:10 PM Janne Johansson wrote:
>
> > * I personally wouldn't want to run an LTS release based on ... what would
> > that be now.. Luminous + security patches??. IMO, the new releases really
> > are much more performant, much more scalable. N, O, and P are really much
> > mu
> [2]: https://ceph.io/en/community/team/
Is this everyone who is working full time on Ceph?
Hello,
Not sure whether this is perhaps related:
https://bugs.launchpad.net/ubuntu/+source/linux-meta-gcp-5.11/+bug/1948471
Any insight would be appreciated
Thanks,
Marco
On Wed, Nov 17, 2021 at 9:18 AM Marco Pizzolo
wrote:
> Good day everyone,
>
> This is a bit of a recurring theme for us
> > And it looks like I'll have to accept the move to containers even
> though I have serious concerns about operational maintainability due to
> the inherent opaqueness of container solutions.
>
> There are still alternative solutions without the need for useless
> containers and added complexity
> And it looks like I'll have to accept the move to containers even though
I have serious concerns about operational maintainability due to the
inherent opaqueness of container solutions.
There are still alternative solutions without the need for useless
containers and added complexity. Stay away
>
> a good choice. It lacks RBD encryption and read leases. But for us
> upgrading from N to O or P is currently not
>
What about just using OSD encryption with N?
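If that route is taken, a hedged sketch of at-rest OSD encryption on Nautilus via ceph-volume (device path is a placeholder):
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdX
Worth keeping in mind that --dmcrypt only encrypts data at rest on the OSD; unlike RBD image encryption it gives you nothing in flight and no per-image keys.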
Hello Dave,
> The potential to lose or lose access to millions of files/objects or
petabytes of data is enough to keep you up at night.
> Many of us out here have become critically dependent on Ceph storage, and
probably most of us can barely afford our production clusters, much less a
test cluste
On 17.11.21 at 18:09, Janne Johansson wrote:
>> * I personally wouldn't want to run an LTS release based on ... what would
>> that be now.. Luminous + security patches??. IMO, the new releases really
>> are much more performant, much more scalable. N, O, and P are really much
>> much *much* better
> * I personally wouldn't want to run an LTS release based on ... what would
> that be now.. Luminous + security patches??. IMO, the new releases really
> are much more performant, much more scalable. N, O, and P are really much
> much *much* better than previous releases. For example, I would not
On Wed, 17 Nov 2021 at 17:16, Szabo, Istvan (Agoda) wrote:
>
> Hi,
>
> I'm curious, when you put more OSDs per NVMe or SSD, how do you calculate the PGs?
> Do you still consider each OSD as a separate disk, so in the case of 4 OSDs/NVMe
> your disk actually holds ~400 PGs?
> Or do you just have 4 OSDs/NVMe but you don
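For what it's worth, the usual rule of thumb treats every OSD as its own device no matter how many share one NVMe. A hedged worked example (the node/device counts are made up):
total PGs ≈ (OSD count × ~100 target PGs per OSD) / replica size, rounded to a power of two
e.g. 10 NVMe × 4 OSDs = 40 OSDs → 40 × 100 / 3 ≈ 1333 → 1024 or 2048
So each OSD still aims at roughly 100 PGs, and a 4-OSD NVMe does end up hosting around 400.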
No, our OSDs are HDD (no SSD) and we have everything (data and metadata)
on them (no NVMe).
On 17/11/2021 at 16:49, Arthur Outhenin-Chalandre wrote:
Hi,
On 11/17/21 16:09, Francois Legrand wrote:
Now we are investigating this snapshot issue and I noticed that as
long as we remove one snaps
>
> > Yeah, generally there is not much enthusiasm about supporting that
> among developers.
>
> I guess it's because none of them is administering any large production
> installation
Exactly!
> The actual implied upgrade period is every 2 years and every
> 4 years as an exception. For storage
> features" per se -- one which comes to mind is the fix related to
> detecting
> network binding addrs, for example -- something that would reasonably
> have
> landed in and broken LTS clusters.)
> * I personally wouldn't want to run an LTS release based on ... what
> would
> that be now.. Luminou
On 17/11/2021 15:19, Marc wrote:
The CLT is discussing a more feasible alternative to LTS, namely to
publish an RC for each point release and involve the user community to
help test it.
How many users even have the availability of a 'test cluster'?
The Sanger has one (3 hosts), which was a re
On Sat, Nov 13, 2021 at 5:25 PM Sasha Litvak
wrote:
>
> I continued looking into the issue and have no idea what hinders the
> performance yet. However:
>
> 1. A client operating with kernel 5.3.0-42 (ubuntu 18.04) has no such
> problems. I delete a directory with hashed subdirs (00 - ff) and tot
Hi,
On 11/17/21 16:09, Francois Legrand wrote:
Now we are investigating this snapshot issue and I noticed that as long
as we remove one snapshot alone, things seem to go well (only some PGs
in "unknown state" but no global warning, slow ops, OSDs down, or
crashes). But if we remove several sn
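Not a confirmed fix, but the throttles usually suggested for snaptrim-induced slow ops look like this (the values are only starting points to experiment with):
ceph config set osd osd_snap_trim_sleep 1
ceph config set osd osd_pg_max_concurrent_snap_trims 1
The intent is just to slow snapshot trimming down so it stops starving client I/O; whether it also explains the PGs in "unknown state" is a separate question.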
>
> The demand for LTS - at least in our case - does not stem from
> unprofessionalism or biased opinion.
> It's the desire to stay up to date on security patches as much as
> possible while maintaining a well tested and stable environment.
Is this not the definition of Long Term Stable? ;)
> Bo
> The CLT is discussing a more feasible alternative to LTS, namely to
> publish an RC for each point release and involve the user community to
> help test it.
How many users even have the availability of a 'test cluster'?
Hi,
Yes, the hosts have internet access and other Ceph commands work successfully.
Every host we have tried has worked for bootstrap, but adding another node to
the cluster isn't working. We've also tried adding intentionally bad hosts and
get expected failures (missing SSH key, etc).
Here's s
Just as a friendly reminder:
1) No one prevents you from hiring developers to work on Ceph in a way you
like.
2) I personally dislike the current release cycle and would like to change
it a bit.
3) There is a reason companies like our own prevent users from using the latest
as "production"; we tag them
Hello,
We recently upgraded our ceph+cephfs cluster from nautilus to octopus.
After the upgrade, we noticed that removal of snapshots was causing a
lot of problems (lots of slow ops, OSDs marked down, crashes, etc.), so we
suspended the snapshots for a while so the cluster could get stable again for
m
> >
> > But since when do developers decide? Do you know any factory where
> factory workers decide what product they are going to make and not the
> product management???
>
> You might want to check out [1] and [2]. There are different
> stakeholders with different interests. All these parties ha
My 2 cents:
* the best solution to ensure ceph's future rock solid stability is to
continually improve our upstream testing. We have excellent unit testing to
avoid regressions on specific bugs, and pretty adequate upgrade testing,
but I'd like to know if we're missing some high level major upgrade
Oh yes, I have been telling Gillette for years to stop producing so many
different plastic model razors, but I still see racks full of them. I also have
been telling BMW not to share so many parts between models, but they are still
doing this. I also have been telling Microsoft about the many s
We have had excellent results with multi-MDS - *after* we pinned every
directory. Directory migrations caused so much load that it was
frequently no faster than a single MDS. This was on Nautilus at the
time. The hard limit on strays is also per-MDS, so we ended up
splitting to more MDSes to buy som
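For anyone trying the same thing, pinning is just an extended attribute on the directory; a minimal sketch with placeholder paths:
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects/a   (subtree handled by MDS rank 0)
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects/b   (subtree handled by MDS rank 1)
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects/a  (unpin, back to the balancer)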
Good day everyone,
This is a bit of a recurring theme for us on a new deployment performed at
16.2.6 on Ubuntu 20.04.3 with HWE stack.
We have had good stability over the past 3 weeks or so copying data, and we
now have about 230M objects (470TB of 1PB used) and we have had 1 OSD drop
from each o
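Not sure it applies here, but when OSDs drop like that the first thing I would pull is the crash reports (the crash module is on by default in Pacific):
ceph crash ls
ceph crash info <crash-id>   (crash-id taken from the ls output)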
The demand for LTS - at least in our case - does not stem from
unprofessionalism or biased opinion.
It's the desire to stay up to date on security patches as much as possible
while maintaining a well tested and stable environment.
Both Pacific and Octopus (we’re currently on Nautilus) have some
Is there an incompatibility between Nautilus and Octopus? The master is
Octopus, the other one is Nautilus.
On Wed, 17 Nov 2021 at 11:12, Boris Behrens wrote:
> Hi,
> we've set up a non-replicated multisite environment.
> We have one realm, with multiple zonegroups and one zone per group.
>
>
The CLT is discussing a more feasible alternative to LTS, namely to
publish an RC for each point release and involve the user community to
help test it.
This can be discussed at the user-dev meeting tomorrow.
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
(BTW I just restored that etherpad --
First of all, that's open source, so developers tend to have a higher
influence on decision making.
And you can replace "among developers" with "among CLT" in my previous post...
Hopefully this position can be shifted if there is a wide "feature
request" from the field, hence please try to share
But since when do developers decide? Do you know any factory where factory
workers decide what product they are going to make and not the product
management??? IT is becoming such a refuge for undetected unprofessionals.
>
> Yeah, generally there is not much enthusiasm about supporting that a
Yeah, generally there is not much enthusiasm about supporting that among
developers. But it would be nice to hear points from the user side anyway...
Igor
On 11/17/2021 2:29 PM, Peter Lieven wrote:
On 17.11.21 at 12:20, Igor Fedotov wrote:
Hi Peter,
sure, why not...
See [1]. I read it that it
On 17.11.21 at 12:20, Igor Fedotov wrote:
> Hi Peter,
>
> sure, why not...
See [1]. I read it as saying that it is not wanted by upstream developers. If we want it,
the community has to do it.
Nevertheless, I have put it on the list.
Peter
[1]
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thr
Hi Peter,
sure, why not...
Thanks,
Igor
On 11/17/2021 10:48 AM, Peter Lieven wrote:
On 09.11.2021 at 00:01, Igor Fedotov wrote:
Hi folks,
having an LTS release cycle could be a great topic for the upcoming "Ceph User + Dev
Monthly meeting".
The first one is scheduled on November 18, 202
> > having an LTS release cycle could be a great topic for the upcoming "Ceph
> User + Dev Monthly meeting".
> >
> > The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
> >
> > https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
> >
> > Any volunteers to extend the agenda and advocate the
Hi,
we've set up a non-replicated multisite environment.
We have one realm, with multiple zonegroups and one zone per group.
When I try to add a lifecycle policy to a bucket that is not located in the
master zonegroup, I receive 503 errors from the RGW.
s3cmd sometimes just hangs forever or
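For context, the failing call is roughly this (bucket name and policy file are placeholders):
s3cmd setlifecycle lifecycle.xml s3://my-bucket
Presumably the same call against a bucket in the master zonegroup succeeds, which would point at zonegroup forwarding rather than the policy itself.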
Hi,
This is the only logging output we see:
# cephadm shell -- ceph orch host add kvm-mon02 192.168.7.12
Inferring fsid 826b9b36-4729-11ec-99f0-c81f66d05a38
Using recent ceph image
quay.io/ceph/ceph@sha256:a2c23b6942f7fbc1e15d8cfacd6655a681fe0e44f288e4a158db22030b8d58e3
This command hangs in
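While it hangs, the orchestrator's own log is usually the most useful thing to watch; these are the standard cephadm module log commands, nothing specific to this failure:
ceph -W cephadm          (stream cephadm events live from another shell)
ceph log last cephadm    (dump the most recent cephadm log entries)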
Hi,
in this thread [1] Dan gives very helpful points to consider regarding
multi-active MDS. Are you sure you need that?
One of our customers has tested such a setup extensively with
directory pinning because the MDS balancer couldn't handle the high
client load. In order to better utilize