that multiple filesystems in the same cluster are an
experimental feature, and the "latest" version of the same doc makes
the same claim.
What should I believe - the presentation or the official docs?
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
l using the
> above crush rule.
>
> Am I correct about the above statements? How would this work from your
> experience? Thanks.
This works (i.e. guards against host failures) only if you have
strictly separate sets of hosts that have SSDs and that have HDDs.
I.e., there should be n
ight also look at (i.e. benchmark for your workload specifically)
disabling the deepest idle states.
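For example (just a sketch - the exact latency threshold, and whether you
want this at all, depends on your hardware and workload, and the setting
does not survive a reboot):

  # list the available C-states and their exit latencies
  cpupower idle-info
  # disable every idle state with an exit latency above ~10 microseconds
  cpupower idle-set -D 10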
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
at are still
> valid for bluestore or not? I mean the read_ahead_kb and disk scheduler.
>
> Thanks.
>
> On Tue, Nov 3, 2020 at 10:55 PM Alexander E. Patrakov
> wrote:
>>
>> On Tue, Nov 3, 2020 at 6:30 AM Seena Fallah wrote:
>> >
>> > Hi all,
> >
t used for, say, the past week? Or,
what logs should I turn on so that if it is used during the next week,
it is mentioned there?
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
908207535 v1::0/3495403341'
> entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mgr",
> ""]}]: dispatch
>
> Does that help?
>
> Regards,
> Eugen
>
>
> Zitat von "Alexander E. Patrakov" :
>
tion_seconds=0
>
> and attempted to start the OSDs in question. Same error as before. Am I
> setting compaction options correctly?
You may also want this:
bluefs_log_compact_min_size=999G
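If it helps, this is roughly where such an override would go while you try
to bring the OSDs up (the section and value are only an illustration -
remove it again once the OSDs are healthy):

  # /etc/ceph/ceph.conf on the OSD host
  [osd]
      # effectively disable BlueFS log compaction during recovery
      bluefs_log_compact_min_size = 999G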
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
the `inconsistents` key was empty! What is this? Is it a bug in Ceph or..?
>
> Thanks.
--
A
> >
> > Cheers,
> > Simon
wrong. Ceph 15 runs on CentOS 7 just fine, but without the dashboard.
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
he "real" hot data.
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
60 pro (512GB) = 5 Years or 600 TBW - $99
> > >
> >
> > But do these not lack power-loss protection..?
> >
> > We are running the Samsung PM883, as I was told that these would do much
> > better as OSDs.
> >
> > MJ
With Ceph, power loss protection is important not because it protects
the data after a power loss, but because it allows the drive not to
waste time on the fsyncs that Ceph issues for every write - in other
words, it gives better performance.
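If you want to see the effect yourself, a quick fio test that mimics this
pattern (WARNING: destructive, run it only against a blank test device;
/dev/sdX is a placeholder):

  fio --name=synctest --filename=/dev/sdX --rw=randwrite --bs=4k \
      --direct=1 --iodepth=1 --numjobs=1 --fsync=1 \
      --runtime=60 --time_based --group_reporting

Drives without power-loss protection typically collapse to a few hundred
IOPS here, while drives with it stay far higher.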
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
ied, unsuccessfully, to tune their setup, but
our final recommendation (successfully benchmarked but rejected due to
costs) was to create a separate replica 3 pool for new backups.
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
even survive upgrades, but I wouldn't do it at home, simply because
Ceph has never made sense for small clusters, no matter what the
hardware is - for such use cases, you could always build a software
RAID over iSCSI or AoE, with less overhead.
--
Alexander E. Patrakov
CV: http://u.pc.cd/wT8
vision
>
> Bangladesh Export Import Company Ltd.
>
> Level-8, SAM Tower, Plot #4, Road #22, Gulshan-1, Dhaka-1212,Bangladesh
>
> Tel: +880 9609 000 999, +880 2 5881 5559, Ext: 14191, Fax: +880 2 9895757
>
> Cell: +8801787680828, Email: mosharaf.hoss...@bol-online.com, Web:
--
Alexander E. Patrakov
nging a config file (I assume it's /etc/ceph/ceph.conf) on each Node
>
> c) Rebooting the Nodes
>
> d) Taking each Node out of Maintenance Mode
>
> Thanks in advance
>
> Cheers
>
> Dulux-Oz
--
Alexander E. Patrakov
way to change it is to destroy / redeploy the OSD.
> >
> > There was a succession of PRs in the Octopus / Pacific timeframe around
> > default min_alloc_size for HDD and SSD device classes, including IIRC one
> > temporary reversion.
> >
> > However, the osd labe
t system, but you need to know your current and
> future workloads to configure it accordingly. This is also true for any
> other shared filesystem.
>
>
> Best regards,
>
> Burkhard Linke
>
--
Alexander E. Patrakov
> >
> --
> Igor Fedotov
> Ceph Lead Developer
>
> Looking for help with your Ceph cluster? Contact us athttps://croit.io
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> Web:https://croit.io | YouTube:https://goo.gl/PGE1Bx
--
Alexander E. Patrakov
lesystem
> are virtual machine disks. They are under constant, heavy write load. There
> is no way to turn this off.
> On 19/03/2024 9:36 pm, Alexander E. Patrakov wrote:
>
> Hello Thorne,
>
> Here is one more suggestion on how to debug this. Right now, there is
> uncertainty o
p, but I don't presently see
> how creating a new pool will help us to identify the source of the 10TB
> discrepancy in this original cephfs pool.
>
> Please help me to understand what you are hoping to find...?
> On 20/03/2024 6:35 pm, Alexander E. Patrakov wrote:
>
> Thorn
B 231263
> Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
--
Alexander E. Patrakov
> 25 active+remapped+backfilling
> >> 16 active+clean+scrubbing+deep
> >> 1 active+remapped+backfill_wait+backfill_toofull
> >>
> >> io:
> >> client: 117 MiB/s rd, 68 MiB/s wr, 274 o
t option).
7. We store ID mappings in non-AD LDAP and use winbindd with the
"ldap" idmap backend.
I am sure other weird but valid setups exist - please extend the list
if you can.
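For reference, scenario 7 looks roughly like this in smb.conf (the domain
name, LDAP server and ranges below are made up for the example):

  [global]
      security = ads
      workgroup = EXAMPLE
      realm = EXAMPLE.COM

      # fallback backend for everything not matched below
      idmap config * : backend = tdb
      idmap config * : range = 100000-199999

      # ID mappings kept in a non-AD LDAP directory
      idmap config EXAMPLE : backend = ldap
      idmap config EXAMPLE : ldap_url = ldap://idmap.example.com
      idmap config EXAMPLE : ldap_base_dn = ou=idmap,dc=example,dc=com
      idmap config EXAMPLE : ldap_user_dn = cn=idmap-admin,dc=example,dc=com
      idmap config EXAMPLE : range = 200000-999999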
Which of the above scenarios would be supportable without resorting to
the old way of installing SAMBA manual
ate and I
> wanted to get home after a long day. :-)
>
> Is this the solution to my issue, or is there a better way to construct
> the fstab entries, or is there another solution I haven't found yet in
> the doco or via google-foo?
>
> All help and advice greatly appreciat
On Sat, Mar 23, 2024 at 3:08 PM duluxoz wrote:
>
>
> On 23/03/2024 18:00, Alexander E. Patrakov wrote:
> > Hi Dulux-Oz,
> >
> > CephFS is not designed to deal with mobile clients such as laptops
> > that can lose connectivity at any time. And I am not talking ab
n advance
>
> Cheers
>
> Dulux-Oz
--
Alexander E. Patrakov
for Magnetic Resonance DRCMR, Section 714
> Copenhagen University Hospital Amager and Hvidovre
> Kettegaard Allé 30, 2650 Hvidovre, Denmark
>
> > You can attached files to the mail here on the list.
>
> Doh, for some reason I was sure attachments would be stripped. Thanks,
> attached.
>
> Mvh.
>
> Torkil
--
Alexander E. Patrakov
incompatible, as there is
no way to change the EC parameters.
It would help if you provided the output of "ceph osd pool ls detail".
On Sun, Mar 24, 2024 at 1:43 AM Alexander E. Patrakov
wrote:
>
> Hi Torkil,
>
> Unfortunately, your files contain nothing obviously bad or suspic
at appears after the words
"erasure profile" in the "ceph osd pool ls detail" output.
On Sun, Mar 24, 2024 at 1:56 AM Alexander E. Patrakov
wrote:
>
> Hi Torkil,
>
> I take my previous response back.
>
> You have an erasure-coded pool with nine shards but only th
you have a few OSDs that
have 300+ PGs, the observed maximum is 347. Please set it to 400.
On Sun, Mar 24, 2024 at 3:16 AM Torkil Svensgaard wrote:
>
>
>
> On 23-03-2024 19:05, Alexander E. Patrakov wrote:
> > Sorry for replying to myself, but "ceph osd pool ls detail"
query
that PG again after the OSD restart.
On Sun, Mar 24, 2024 at 4:56 AM Torkil Svensgaard wrote:
>
>
>
> On 23-03-2024 21:19, Alexander E. Patrakov wrote:
> > Hi Torkil,
>
> Hi Alexander
>
> > I have looked at the CRUSH rules, and the equivalent rules work on
> longer mentioned but it unfortunately made no difference for the number
> of backfills which went 59->62->62.
>
> Mvh.
>
> Torkil
>
> On 23-03-2024 22:26, Alexander E. Patrakov wrote:
> > Hi Torkil,
> >
> > I have looked at the files that you attached.
> - although, honestly, that seems to be counter-intuitive to me
> considering CERN uses Ceph for their data storage needs.
>
> Any ideas / thoughts?
>
> Cheers
>
> Dulux-Oz
>
> On 23/03/2024 18:52, Alexander E. Patrakov wrote:
> > Hello Dulux-Oz,
> >
> >
do its work and after that
> change the OSDs crush weights to be even?
>
> * or should it otherwise - first to make crush weights even and then
> enable the balancer?
>
> * or is there another safe(r) way?
>
> What are the ideal balancer settings for that?
>
> I'm e
modify_timestamp: Sun Mar 24 17:44:33 2024
> ~~~
>
> On 24/03/2024 21:10, Curt wrote:
> > Hey Mathew,
> >
> > One more thing out of curiosity can you send the output of blockdev
> > --getbsz on the rbd dev and rbd info?
> >
> > I'm u
>
>
> Is it the only way to approach this, that each OSD has to be recreated?
>
> Thank you for reply
>
> dp
>
> On 3/24/24 12:44 PM, Alexander E. Patrakov wrote:
> > Hi Denis,
> >
> > My approach would be:
> >
> > 1. Run "ceph osd metadata"
On Mon, Mar 25, 2024 at 11:01 PM John Mulligan
wrote:
>
> On Friday, March 22, 2024 2:56:22 PM EDT Alexander E. Patrakov wrote:
> > Hi John,
> >
> > > A few major features we have planned include:
> > > * Standalone servers (internally defined us
On Mon, Mar 25, 2024 at 7:37 PM Torkil Svensgaard wrote:
>
>
>
> On 24/03/2024 01:14, Torkil Svensgaard wrote:
> > On 24-03-2024 00:31, Alexander E. Patrakov wrote:
> >> Hi Torkil,
> >
> > Hi Alexander
> >
> >> Thanks for the update. Eve
inux kernels (client side): 5.10 and 6.1
>
> Did I understand everything correctly? is this the expected behavior
> when running rsync?
>
>
> And one more problem (I don’t know if it’s related or not), when rsync
> finishes copying, all caps are freed except the last two (pinned i_caps
> /
> >> PG     OBJECTS  MISPLACED  BYTES         STATE                        UP                              ACTING
> >> 36.4a  221508   89144      951346455917  active+remapped+backfilling  [40,43,33,32,30,38,22,35,9]p40  [27,10,20,7,30,21,1,28,31]p27
On Thu, Mar 28, 2024 at 9:17 AM Angelo Hongens wrote:
> According to 45drives, saving the CTDB lock file in CephFS is a bad idea
Could you please share a link to their page that says this?
--
Alexander E. Patrakov
s increasing. I suspect I just
> >> need to tell the MDS servers to trim faster but after hours of
> >> googling around I just can't figure out the best way to do it. The
> >> best I could come up with was to decrease "mds_cache_trim_decay_rate"
> >> from 1.0 to .8 (to start), based on this page:
> >>
> >> https://www.suse.com/support/kb/doc/?id=19740
> >>
> >> But it doesn't seem to help, maybe I should decrease it further? I am
> >> guessing this must be a common issue...? I am running Reef on the MDS
> >> servers, but most clients are on Quincy.
> >>
> >> Thanks for any advice!
> >>
> >> cheers,
> >> erich
--
Alexander E. Patrakov
gt;>> client: 123 MiB/s rd, 75 MiB/s wr, 109 op/s rd, 1.40k op/s wr
> >>>
> >>> And the specifics are:
> >>>
> >>> # ceph health detail
> >>> HEALTH_WARN 1 MDSs report slow requests; 1 MDSs behind on trimming
> >>> [WRN] MDS_SLOW_REQUEST: 1 MDSs report slow requests
> >>> mds.slugfs.pr-md-01.xdtppo(mds.0): 99 slow requests are blocked >
> >>> 30 secs
> >>> [WRN] MDS_TRIM: 1 MDSs behind on trimming
> >>> mds.slugfs.pr-md-01.xdtppo(mds.0): Behind on trimming (13884/250)
> >>> max_segments: 250, num_segments: 13884
> >>>
> >>> That "num_segments" number slowly keeps increasing. I suspect I just
> >>> need to tell the MDS servers to trim faster but after hours of
> >>> googling around I just can't figure out the best way to do it. The
> >>> best I could come up with was to decrease "mds_cache_trim_decay_rate"
> >>> from 1.0 to .8 (to start), based on this page:
> >>>
> >>> https://www.suse.com/support/kb/doc/?id=19740
> >>>
> >>> But it doesn't seem to help, maybe I should decrease it further? I am
> >>> guessing this must be a common issue...? I am running Reef on the
> >>> MDS servers, but most clients are on Quincy.
> >>>
> >>> Thanks for any advice!
> >>>
> >>> cheers,
> >>> erich
--
Alexander E. Patrakov
ain, the stuck directory in question responds
> > again and all is well. Then a few hours later it started happening
> > again (not always the same directory).
> >
> > I hope I'm not experiencing a bug, but I can't see what would be causing
> > this...
> >
> >> https://www.suse.com/support/kb/doc/?id=19740
> >>
> >> But it doesn't seem to help, maybe I should decrease it further? I am
> >> guessing this must be a common issue...? I am running Reef on the MDS
> >> servers, but most clients are on Quincy.
> >>
> >> Thanks for any advice!
> >>
> >> cheers,
> >> erich
--
Alexander E. Patrakov
tsgericht Munich HRB 231263
> > Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
> >
> >
> >
> >
e=50.8 ms
>
>
> Any guidance would be greatly appreciated.
>
> Regards,
> Mohammad Saif
--
Alexander E. Patrakov
Hello Matthew,
You can inherit the group, but not the user, of the containing folder.
This can be achieved by making the folder setgid and then making sure
that the client systems have a proper umask. See the attached PDF for
a presentation that I conducted on this topic to my students in the
past
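In short, something like this (the group name and path are only an example):

  # files created under the directory inherit the group "students"
  chgrp students /mnt/cephfs/shared
  chmod 2775 /mnt/cephfs/shared    # rwxrwsr-x: setgid bit on the directory

  # on the clients, make newly created files group-writable
  umask 0002
  touch /mnt/cephfs/shared/test    # ends up as <user>:students, mode 0664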
Hello,
In the context of https://tracker.ceph.com/issues/64298, I decided to
do something manually. In the help output of "ceph tell" for an MDS, I
found these possibly useful commands:
dirfrag ls : List fragments in directory
dirfrag merge : De-fragment directory by path
dirfrag split : Fragment directory by path
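They are invoked roughly like this (the MDS rank, the path, and the "0/0"
whole-directory fragment notation are my assumptions - check them against
your own "dirfrag ls" output):

  ceph tell mds.0 dirfrag ls /some/large/dir
  ceph tell mds.0 dirfrag split /some/large/dir 0/0 1   # 1 extra bit -> 2 fragments
  ceph tell mds.0 dirfrag merge /some/large/dir 0/0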
e norebalance flag during the operation.
--
Alexander E. Patrakov
will slowly creep and accumulate and eat disk space
- and the problematic part is that this creepage is replicated to
OSDs.
--
Alexander E. Patrakov
ph-ansible can do it), then upgrade the hosts to EL9 while still
keeping Nautilus, then, still containerized, upgrade to a more recent
Ceph release (but note that you can't upgrade from Nautilus to Quincy
directly, you need Octopus or Pacific as an intermediate step), and then
optionally u
trim_to". Better don't use it, and let your ceph
cluster recover. If you can't wait, try to use upmaps to say that all
PGs are fine where they are now, i.e that they are not misplaced.
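Assuming jq is available, that "ceph pg ls remapped -f json" reports the
PGs under a "pg_stats" key, and that the cluster already allows upmap
(min compat client luminous or newer), something like this prints the
needed commands - review them before running; OSD id 2147483647 means
"none" and is skipped:

  ceph pg ls remapped -f json | jq -r '
    .pg_stats[] | . as $pg
    | [ range(0; ($pg.up | length))
        | select($pg.up[.] != $pg.acting[.] and $pg.acting[.] != 2147483647)
        | "\($pg.up[.]) \($pg.acting[.])" ]
    | select(length > 0)
    | "ceph osd pg-upmap-items \($pg.pgid) " + join(" ")'

Each generated "ceph osd pg-upmap-items" command maps the "up" OSDs of a
remapped PG back to the OSDs currently acting for it, so the PG is no
longer considered misplaced.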
There is a script somewhere on GitHub that does this, but
unfortunately I can't fin
using a different key derived from its name and a per-bucket
master key which never leaves Vault.
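For a bucket you do control, one way to set the default encryption key is
through the S3 API, for example with the AWS CLI - the endpoint, bucket and
key name below are placeholders, and this assumes SSE-KMS with an RGW
release that already supports the PutBucketEncryption call:

  aws --endpoint-url https://rgw.example.com s3api put-bucket-encryption \
      --bucket tenant-bucket \
      --server-side-encryption-configuration '{
        "Rules": [ { "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "tenant-bucket-key" } } ] }'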
Note that users will be able to create additional buckets by
themselves, and they won't be encrypted, so tell them either not to do
that or to encrypt the new buckets sim
--
Alexander E. Patrakov
On Sat, May 27, 2023 at 5:09 AM Alexander E. Patrakov
wrote:
>
> Hello Frank,
>
> On Fri, May 26, 2023 at 6:27 PM Frank Schilder wrote:
> >
> > Hi all,
> >
> > jumping on this thread as we have requests for which per-client fs mount
> > encryption ma
ction-grade
setup. At the very least, wait until this subproject makes it into
Ceph documentation and becomes available as RPMs and DEBs.
For now, you can still use ceph-iscsi - assuming that you need it,
i.e. that raw RBD is not an option.
--
Alexander E. Patrakov
bug for its inclusion into the Zen kernel,
available to Arch Linux users, and the result was that the system
stopped booting for some users. So a proper backport is required, even
though the Cloudflare patch applies as-is.
https://github.com/zen-kernel/zen-kernel/issues/306
https://github.com
0.x.x.12:50024    ESTABLISHED
> 76749/radosgw
>
>
> but client ip 10.x.x.12 is unreachable(because the node was shutdown), the
> status of the tcp connections is always "ESTABLISHED", how to fix it?
Please use this guide:
https://www.cyberciti.biz/tips/cutting-the-tc
t the sync between e.g. Germany and Singapore to catch up
fast. It will be limited by the amount of data that can be synced in
one request and the hard-coded maximum number of requests in flight.
In Reef, there are new tunables that help on high-latency links:
rgw_data_sync_spawn_window, rgw_b
we have 2
> OSDs left (33 and 20) whose checksums disagree.
>
> I am just guessing this, though.
> Also, if this is correct, the next question would be: What is with OSD 20?
> Since there is no error reported at all for OSD 20, I assume that its
> checksum agrees with its data.
.
>
> Note: I tested it on another smaller cluster, with 36 SAS disks and got the
> same result.
>
> I don't know exactly what to look for or configure to have any improvement.
nd if the compaction
> >> process is interrupted without finishing, it may explain that.
> >
> > You run the online compacting for this OSD's (`ceph osd compact
> > ${osd_id}` command), right?
> >
> >
> >
> > k
>
> --
> Jean-Phili
s).
>
> This is an older cluster running Nautilus 14.2.9.
>
> Any thoughts?
> Thanks
> -Dave
--
Alexander E. Patrako
came
standby-replay, as expected.
Is there a better way? Or, should I have rebooted mds02 without much
thinking?
--
Alexander E. Patrakov
cache size needing to be
> bigger? Is it a problem with the clients holding onto some kind of
> reference (documentation says this can be a cause, but now how to check for
> it).
>
> Thanks in advance,
> Pedro Lopes
dding 64 GB
of zram-based swap on each server (with 128 GB of physical RAM in this type
of server).
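If it is useful, this is roughly how such a zram swap device can be set up
by hand (the 64 GB size matches the above, the zstd algorithm is my
assumption; on systemd-based distributions the zram-generator package can
do the same declaratively):

  modprobe zram
  zramctl --find --size 64G --algorithm zstd   # prints the device, e.g. /dev/zram0
  mkswap /dev/zram0
  swapon --priority 100 /dev/zram0             # prefer it over any disk-based swap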
--
Alexander E. Patrakov
Fri, 7 Jan 2022 at 00:50, Alexander E. Patrakov :
> Thu, 6 Jan 2022 at 12:21, Lee :
>
>> I've tried add a swap and that fails also.
>>
>
> How exactly did it fail? Did you put it on some disk, or in zram?
>
> In the past I had to help a customer who hit me
"ceph health detail" output over time, and with/without the
OSDs with injected PGs running. At the very least, it provides a useful
metric of what is remaining to do.
Also an interesting read-only command (but maybe for later) would be: "ceph
osd safe-to-destroy 123" where 123 is the
e default limit. Even Nautilus can do 400 PGs per OSD,
given "mon max pg per osd = 400" in ceph.conf. Of course it doesn't
mean that you should allow this.
--
Alexander E. Patrakov
gt; 2022-02-21T13:49:28.452+0100 7f6fa8a34700 -1 osd.7 4711 *** Immediate
> >>> shutdown (osd_fast_shutdown=true) ***
> >>> 2022-02-21T13:53:40.455+0100 7fc9645f4f00 0 set uid:gid to 64045:64045
> >>> (ceph:ceph)
ctive+clean
io:
client: 290 KiB/s rd, 251 MiB/s wr, 366 op/s rd, 278 op/s wr
cache: 123 MiB/s flush, 72 MiB/s evict, 31 op/s promote, 3 PGs
flushing, 1 PGs evicting
Is there any workaround, short of somehow telling the client to stop
creating new rbds?
--
Alexander E. Patrakov
CV: http
n any case, the following commands (please run as root) would help debugging:
lsblk
lvs -a -o name,lv_tags
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
4% of the data
device.
3) --data on something (then the db goes there as well) and
--block.wal on a small (i.e. not large enough to use as a db device)
but very fast nvdimm.
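As an illustration of option 3 (the device names are hypothetical):

  # the data disk holds data + DB, the NVDIMM namespace holds only the WAL
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/pmem0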
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
t; > 2020-07-06 18:18:36.933+: 3273: debug :
> >> >> > virEventPollCalculateTimeout:369 :
> >> >> > Timeout at 1594059521930 due in 4997 ms
> >> >> > 2020-07-06 18:18:36.933+0000: 3273: info : virEventPollR
> (stable)
>
> # uname -r
> 5.4.52-050452-generic
You could use rbd-nbd
# rbd-nbd map image@snap
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
this. Is the rbd-nbd process running?
I.e.:
# cat /proc/partitions
# ps axww | grep nbd
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
On Fri, Jul 24, 2020 at 7:43 PM Herbert Alexander Faleiros
wrote:
>
> On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote:
> > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros
> > wrote:
> > >
> > > Hi,
> > >
>
ate it.
3. The network MTU.
4. The utilization figures for SSDs and network interfaces during each test.
Also, given that the scope of the project only includes block storage,
I think it would be fair to ask for a comparison with DRBD 9 and
possibly Linstor, not only with Ceph.
--
Alexander
one.
Here is why the options:
--bluestore-block-db-size=31G: ceph-bluestore-tool refuses to do
anything if this option is not set to any value
--bluefs-log-compact-min-size=31G: make absolutely sure that log
compaction doesn't happen, because it would hit "bluefs enospc" again.
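Purely as an illustration of how such overrides are passed on the
ceph-bluestore-tool command line (the subcommand, OSD path and target
device below are hypothetical, not what was actually run):

  ceph-bluestore-tool bluefs-bdev-new-db \
      --path /var/lib/ceph/osd/ceph-12 \
      --dev-target /dev/vg_nvme/osd12_db \
      --bluestore-block-db-size=31G \
      --bluefs-log-compact-min-size=31G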
So 25 Gb/s
may be a bit too tight.
--
Alexander E. Patrakov
CV: http://pc.cd/PLz7