Hello Janne, thank you for your response.
I understand your advice, and rest assured that I have designed enough EC
pools to know the mess. That is not an option for me because I need SPEED.
Please let me describe my hardware first so that we have the same picture.
Server: R620
Cpu: 2 x Xeon E5-2630 v2 @ 2.60GHz
R
Thank you Eugen for looking into it!
In short, it works. I'm using 16.2.10.
What I did wrong was to remove the OSD, which makes no sense.
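For the archives: the non-destructive route after a host reinstall is to let
cephadm re-activate the existing OSDs rather than removing them. Roughly
(a sketch; <host> is a placeholder):

  # check that the reinstalled host's devices are visible to the orchestrator
  ceph orch device ls <host>
  # re-activate the existing OSDs on that host instead of removing/recreating them
  ceph cephadm osd activate <host>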
Tony
From: Eugen Block
Sent: April 28, 2023 06:46 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: import OSD after
If a dir doesn't exist at the moment of snapshot creation, then the
schedule is deactivated for that dir.
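In case it helps, checking whether a schedule got deactivated and turning it
back on looks roughly like this (a sketch; /foo is only an example path):

  # show the schedule and its 'active' flag for a path
  ceph fs snap-schedule status /foo
  # re-activate a schedule that was deactivated (e.g. because the dir was missing)
  ceph fs snap-schedule activate /foo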
On Fri, Apr 28, 2023 at 8:39 PM Jakob Haufe wrote:
> On Thu, 27 Apr 2023 11:10:07 +0200
> Tobias Hachmer wrote:
>
> > > Given the limitation is per directory, I'm currently trying this:
>
> FYI, PR - https://github.com/ceph/ceph/pull/51278
Thanks!
I just applied this to my cluster and will report back. Looks simple
enough, tbh.
Cheers,
sur5r
--
ceterum censeo microsoftem esse delendam.
Hello Angelo
You can try PetaSAN
www.petasan.org
We support scale-out iSCSI with Ceph, and it is actively developed.
/Maged
On 27/04/2023 23:05, Angelo Höngens wrote:
Hey guys and girls,
I'm working on a project to build storage for one of our departments,
and I want to ask you guys and girls f
On Thu, 27 Apr 2023 11:10:07 +0200
Tobias Hachmer wrote:
> > Given the limitation is per directory, I'm currently trying this:
> >
> > / 1d 30d
> > /foo 1h 48h
> > /bar 1h 48h
> >
> > I forgot to activate the new schedules yesterday so I can't say whether
> > it works as expected yet.
>
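For reference, a layout like the one quoted above is usually created as one
schedule plus a retention spec per path, and it only runs once explicitly
activated; roughly (a sketch mirroring the quoted paths and intervals):

  # daily snapshots on /, keep 30 of them
  ceph fs snap-schedule add / 1d
  ceph fs snap-schedule retention add / d 30
  # hourly snapshots on /foo, keep 48 (same pattern for /bar)
  ceph fs snap-schedule add /foo 1h
  ceph fs snap-schedule retention add /foo h 48
  # schedules do not run until activated
  ceph fs snap-schedule activate /foo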
I chatted with Mykola who helped me get the OSDs back up. My test
cluster was on 16.2.5 (and still mostly is); after upgrading only the
MGRs to a more recent version (16.2.10), the activate command worked
and the existing OSDs came back up. Not sure if that's a
bug or something e
Hey Yuval,
No problem. It was interesting to me to figure out how it all fits together
and works. Thanks for opening an issue on the tracker.
Cheers,
Tom
On Thu, 27 Apr 2023 at 15:03, Yuval Lifshitz wrote:
> Hi Thomas,
> Thanks for the detailed info!
> RGW lua scripting was never tested in a
A pleasure. Hope it helps :)
Happy to share if you need any more information Zac.
Cheers,
Tom
On Wed, 26 Apr 2023 at 18:14, Dan van der Ster
wrote:
> Thanks Tom, this is a very useful post!
> I've added our docs guy Zac in cc: IMHO this would be useful in a
> "Tips & Tricks" section of the doc
On 28/04/23 13:51, E Taka wrote:
I'm using a dockerized Ceph 17.2.6 under Ubuntu 22.04.
Presumably I'm missing a very basic thing, since this seems a very simple
question: how can I call cephfs-top in my environment? It is not included
in the Docker Image which is accessed by "cephadm shell".
Hi,
I think I found a possible cause of my PG down but I still don't understand why.
As explained in a previous mail, I set up a 15-chunk (one chunk per OSD) EC
pool (k=9, m=6) but I have only 12 OSD servers in the cluster. To work around
the problem I defined the failure domain as 'osd', with the reasoning that as
I w
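For context, the setup described above corresponds to an erasure-code profile
along these lines (a sketch; profile and pool names are placeholders):

  # 15 chunks (k=9 data + m=6 coding), distributed per OSD rather than per host
  ceph osd erasure-code-profile set ec-k9-m6 k=9 m=6 crush-failure-domain=osd
  ceph osd pool create ecpool 128 128 erasure ec-k9-m6
  # note: with failure domain 'osd', several chunks of a PG can land on the
  # same host, so a host outage can take out more than one chunk at once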
FYI, PR - https://github.com/ceph/ceph/pull/51278
On Fri, Apr 28, 2023 at 8:49 AM Milind Changire wrote:
> There's a default/hard limit of 50 snaps that's maintained for any dir via
> the definition MAX_SNAPS_PER_PATH = 50 in the source file
> src/pybind/mgr/snap_schedule/fs/schedule_client.py.
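As a quick sanity check, the current snapshot count of a scheduled directory
can be read straight from its hidden .snap directory (the mount path below is
only an example):

  # CephFS exposes snapshots of a directory under <dir>/.snap
  ls /mnt/cephfs/somedir/.snap | wc -l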
I found a small two-node cluster to test this on pacific, I can
reproduce it. After reinstalling the host (VM) most of the other
services are redeployed (mon, mgr, mds, crash), but not the OSDs. I
will take a closer look.
Quoting Tony Liu:
Tried [1] already, but got error.
Created no o
I'm using a dockerized Ceph 17.2.6 under Ubuntu 22.04.
Presumably I'm missing a very basic thing, since this seems a very simple
question: how can I call cephfs-top in my environment? It is not included
in the Docker Image which is accessed by "cephadm shell".
And calling the version found in the
Hi Venky,
> Also, at one point the kclient wasn't able to handle more than 400 snapshots
> (per file system), but we have come a long way from that and that is not a
> constraint right now.
Does it mean that there is no longer a limit on the number of snapshots per
filesystem? And, if not, do you k