Hi,
There is a default limit of 1TiB for max_file_size in CephFS. I raised
that to 2TiB, but I have now received a request to store a file of up to 7TiB.
I'd expect the limit to be there for a reason, but what is the risk of setting
that value to, say, 10TiB?
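For what it's worth, raising the limit itself is a single command; a minimal
sketch, assuming the filesystem is named 'cephfs' and 10TiB expressed in bytes:

  # 10 TiB = 10 * 2^40 bytes
  ceph fs set cephfs max_file_size 10995116277760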
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
marker either.
So I'm stuck with that bucket which I would like to remove without
abusing radosgw-admin.
This cluster is running 12.2.13 with civetweb RGWs behind a haproxy
setup. All is working fine, except for this versioned bucket. Can
anyone point me in the right direction to
The RGW does not seem to kick in yet, but I'll keep an eye on that.
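If it helps anyone, this is roughly how I'm checking whether lifecycle
processing has happened at all (run on a node with radosgw-admin access;
the second command forces a run instead of waiting for the schedule):

  # shows per-bucket lifecycle state
  radosgw-admin lc list
  # kick off lifecycle processing right now
  radosgw-admin lc process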
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
On Sat, Apr 24, 2021 at 06:06:04PM +0200, Mark Schouten wrote:
> Using the following command:
> s3cmd setlifecycle lifecycle.xml s3://syslog_tuxis_net
>
> That gave no error, and I see in s3browser that it's active.
>
The RGW does not seem to kick in yet, but I'll keep an eye on that.
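For reference, a minimal lifecycle.xml along the lines of what I used (the
rule ID and the 1-day expiry are just example values):

  <LifecycleConfiguration>
    <Rule>
      <ID>expire-all</ID>
      <Filter><Prefix></Prefix></Filter>
      <Status>Enabled</Status>
      <Expiration><Days>1</Days></Expiration>
      <NoncurrentVersionExpiration><NoncurrentDays>1</NoncurrentDays></NoncurrentVersionExpiration>
    </Rule>
  </LifecycleConfiguration>

applied with:

  s3cmd setlifecycle lifecycle.xml s3://syslog_tuxis_net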
ally, I would 'just' upgrade all Ceph packages on the
monitor nodes and restart the mons, and then the mgrs.
After that, I would upgrade all Ceph packages on the OSD nodes and
restart all the OSDs. Then, after that, the MDSes and RGWs. Restarting
the OSDs will probably take a while.
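On a package-based install that boils down to something like the following
per node (a sketch for Debian/Ubuntu; restart one node at a time and wait
for the cluster to be healthy in between):

  # monitor nodes first, then managers
  apt update && apt full-upgrade -y
  systemctl restart ceph-mon.target
  systemctl restart ceph-mgr.target
  # then each OSD node
  apt update && apt full-upgrade -y
  systemctl restart ceph-osd.target
  # and finally the MDS and RGW nodes
  systemctl restart ceph-mds.target
  systemctl restart ceph-radosgw.target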
On Thu, Apr 29, 2021 at 10:58:15AM +0200, Mark Schouten wrote:
> We've done our fair share of Ceph cluster upgrades since Hammer, and
> have not seen many problems with them. I'm now at the point where I have
> to upgrade a rather large cluster running Luminous and I would like t
This helped me too. However, should I see num_strays decrease again?
I'm running a `find -ls` over my CephFS tree..
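In case it is useful to anyone: I'm watching the stray counters like this
(run on the host where the MDS runs; 'osdnode05' is just the daemon name in
my case):

  # num_strays and num_strays_delayed live under the mds_cache counters
  ceph daemon mds.osdnode05 perf dump | grep -i stray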
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
er, AFAIK. How can I check that?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
the
directories are directories anyone has ever actively put pinning on...
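As far as I can tell, pinning only shows up as a virtual extended attribute
on the directory, so I'm checking it like this (a sketch; /mnt/cephfs/dir is
just an example path on a mounted CephFS):

  # prints -1 (or nothing) when the directory was never explicitly pinned
  getfattr -n ceph.dir.pin /mnt/cephfs/dir
  # pinning a directory to rank 0 would look like this
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/dir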
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote:
> This helped me too. However, should I see num_strays decrease again?
> I'm running a `find -ls` over my CephFS tree..
This helps, the amount of stray files is slowly decreasing. But given
the number of files in the clu
On Mon, May 10, 2021 at 10:46:45PM +0200, Mark Schouten wrote:
> I still have three active ranks. Do I simply restart two of the MDS'es
> and force max_mds to one daemon, or is there a nicer way to move two
> mds'es from active to standby?
It seems (documentation was no longer available, so it took some searching)
that I needed to run ceph mds deactivate $fs:$rank for every MDS I wanted to
deactivate.
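So for a pre-Nautilus cluster the whole sequence comes down to something
like this (a sketch, assuming a filesystem called 'cephfs' with three active
ranks; deactivate the highest rank first):

  ceph fs set cephfs max_mds 1
  # on Luminous/Mimic the extra ranks must be stopped by hand,
  # Nautilus and newer do this automatically once max_mds is lowered
  ceph mds deactivate cephfs:2
  ceph mds deactivate cephfs:1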
On Tue, May 11, 2021 at 02:55:05PM +0200, Mark Schouten wrote:
> On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote:
> > This helped me too. However, should I see num_strays decrease again?
> > I'm running a `find -ls` over my CephFS tree..
>
> This helps, th
On Fri, May 14, 2021 at 09:12:07PM +0200, Mark Schouten wrote:
> It seems (documentation was no longer available, so it took some
> searching) that I needed to run ceph mds deactivate $fs:$rank for every
> MDS I wanted to deactivate.
Ok, so that helped for one of the MDS'es. Trying
that
causes overload and takes a lot of time, while not necessarily fixing
the num_strays.
How do I force the mds'es to process those strays so that clients do not
get 'incorrect' errors?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl
+------+----------+-----------+---------------+-------+-------+
| Rank |  State   |    MDS    |    Activity   |  dns  |  inos |
+------+----------+-----------+---------------+-------+-------+
|  0   |  active  | osdnode05 | Reqs:    0 /s | 2760k | 2760k |
|  1   | stopping | osdnode06 |               |   10  |   11  |
+------+----------+-----------+---------------+-------+-------+
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 |
On Thu, May 27, 2021 at 10:37:33AM +0200, Mark Schouten wrote:
> On Thu, May 27, 2021 at 07:02:16AM +, 胡 玮文 wrote:
> > You may hit https://tracker.ceph.com/issues/50112, which we failed to find
> > the root cause yet. I resolved this by restart rank 0. (I have only 2
> >
I have no clients, and it still does not want to stop rank 1. Funny
thing is, while trying to fix this by restarting MDSes, I sometimes see
a list of clients popping up in the dashboard, even though no clients
are connected.
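To double-check whether the stopping rank really holds no client sessions,
I'm looking at the daemon directly (a sketch; 'osdnode06' is the MDS that is
stuck in stopping here):

  # run on the host where that MDS runs; an empty list means no sessions
  ceph daemon mds.osdnode06 session ls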
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http
On Thu, May 27, 2021 at 12:38:07PM +0200, Mark Schouten wrote:
> On Thu, May 27, 2021 at 06:25:44AM +, Martin Rasmus Lundquist Hansen
> wrote:
> > After scaling the number of MDS daemons down, we now have a daemon stuck in
> > the
> > "up:stopping" state.
Hi,
On 15-05-2021 at 22:17, Mark Schouten wrote:
Ok, so that helped for one of the MDS'es. Trying to deactivate another
mds, it started to release inos and dns'es, until it was almost done.
When it had about 50 left, a client started to complain and be
blacklisted until I res
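When a client does get blacklisted like that, the entry can be listed and
removed again; a sketch (the address below is made up, and on Pacific and
newer the command is spelled 'blocklist'):

  ceph osd blacklist ls
  # remove a single entry, using the addr:nonce exactly as printed by 'ls'
  ceph osd blacklist rm 192.168.0.10:0/123456789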
ecause one monitor has
incorrect time.
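For the record, the clock skew as the monitors see it can be checked
directly, plus the time source on the suspect node itself (assuming chrony
is the NTP client there):

  # per-mon skew/latency as measured by the leader
  ceph time-sync-status
  # and on the monitor node itself
  chronyc sources -v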
Thanks!
--
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mailto:m...@tuxis.nl> | +31 318 200208
proxmox03",
"public_addrs": {
    "addrvec": [
        {
            "type": "v2",
            "addr": "10.10.10.3:3300",
            "nonce": 0
        },
confirm that ancient (2017) leveldb database mons should just
accept 'mon.$hostname' names for mons, as well as 'mon.$id'?
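The JSON above looks like it comes from the monmap; dumping it is the
quickest way I know to see under which name each mon is registered (a
sketch):

  # shows mon names plus their v1/v2 addresses
  ceph mon dump
  ceph mon dump -f json-pretty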
--
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m...@tuxis.nl
-- Original Message --
From "Eugen Block"
To ceph-users@ceph.io
Date 31/01/2024, 13:02:04
Sub
root@proxmox01:~# ceph config get mon mon_warn_on_insecure_global_id_reclaim
true
root@proxmox01:~# ceph config get mon mon_warn_on_insecure_global_id_reclaim_allowed
true
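In case it helps anyone else hitting this: rather than muting the warning,
the usual end state once every client and daemon has been updated is to
disallow the insecure reclaim altogether (a sketch; only do this after all
clients are patched, or they will be unable to reconnect):

  ceph config set mon auth_allow_insecure_global_id_reclaim false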
--
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m...@tuxis.nl
-- Original Message --
From "Eugen Block"
To ceph-users@ceph.io
Date 02/02/2024, 08:30:45
Subject [ceph
Hi Simon,
You can just dist-upgrade the underlying OS. Assuming that you installed
the packages from https://download.ceph.com/debian-octopus/, just change
bionic to focal in all apt-sources, and dist-upgrade away.
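Roughly like this, as a sketch (double-check your own sources files first):

  # switch the Ubuntu and Ceph repositories from bionic to focal
  sed -i 's/bionic/focal/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
  apt update
  apt dist-upgrade -y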
--
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
-- Original Message
to work around this is welcome
:)
--
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
From "Jan Pekař - Imatic"
To ceph-users@ceph.io
Date 1/12/2023 5:53:02 PM
Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper
upgrad
Hi,
Thanks. Someone told me that we could just destroy the FileStore OSD’s
and recreate them as BlueStore, even though the cluster is partially
upgraded. So I guess I’ll just do that. (Unless someone here tells me
that that’s a terrible idea :))
--
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
Hi,
I just destroyed the filestore osd and added it as a bluestore osd.
Worked fine.
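For anyone finding this later, 'destroy and re-add' boils down to something
like the following (a sketch; OSD id 12 and /dev/sdX are just examples, and
wait for the cluster to be healthy before doing the next one):

  ceph osd out 12
  # keep the OSD id and cephx key so the replacement reuses them
  ceph osd destroy 12 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12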
--
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
From "Jan Pekař - Imatic"
To m...@tuxis.nl; ceph-users@ceph.io
Date 2/25/2023 4:14:54 PM
Subject
ot yet converted to
per-pool stats" ?
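If this is about the 'legacy statfs / not per-pool stats' health warning,
the conversion can be done per OSD while it is stopped; a sketch, with
osd.12 as an example:

  systemctl stop ceph-osd@12
  # rewrites the on-disk accounting to per-pool stats
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12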
Thanks!
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
and restart the osd's again
at a more convenient time?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
uld I proceed?
Thanks,
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
Cool, thanks!
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
- Original message -
From: James Page (james.p...@canonical.com)
Date: 28-08-2019 11:02
To: Mark Schouten (m...@tuxis.nl)
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Upgrade procedure on
/F6PLHO7BGRIG2G2KSPPG3PVORQMBH6WP/
--
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m...@tuxis.nl
Hi,
You set the pg_num and the balancer will split a few at a time until
pgp_num reaches pg_num.
At some point, ceph started complaining if you try to set pgp_num manually.
Ok. So the documentation is incorrect, or at least incomplete?
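For reference, the behaviour described above boils down to this (a sketch;
'rbd' is just an example pool name):

  # only pg_num is set; pgp_num is increased gradually by the mgr/balancer
  ceph osd pool set rbd pg_num 256
  # watch both values converge
  ceph osd pool get rbd pg_num
  ceph osd pool get rbd pgp_num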
--
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m