Hi,
You can also try increasing the aggressiveness of the MDS recall but
I'm surprised it's still a problem with the settings I gave you:
ceph config set mds mds_recall_max_caps 15000
ceph config set mds mds_recall_max_decay_rate 0.75
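If it helps, the values the MDS is actually running with can be double-checked before and after changing them (a quick sketch, assuming the settings were applied via the config database as above):
ceph config get mds mds_recall_max_caps
ceph config get mds mds_recall_max_decay_rate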
I finally had the chance to try the more aggressive recall
I have inherited a Ceph cluster and, being new to Ceph, I am trying to understand
what's being stored in the cluster. I can see we have the pools below:
# ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    170T     24553G     146T         85.90
POOLS:
    NAME
On 2019-08-04T13:27:00, Eitan Mosenkis wrote:
> I'm running a single-host Ceph cluster for CephFS and I'd like to keep
> backups in Amazon S3 for disaster recovery. Is there a simple way to
> extract a CephFS snapshot as a single file and/or to create a file that
> represents the incremental diff
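A rough sketch of one possible approach (my own illustration, not from the thread; mount point, snapshot and bucket names are placeholders): CephFS snapshots are exposed under a .snap directory, so a snapshot can be packed into a single archive and pushed to S3 with the AWS CLI:
tar -czf cephfs-snap-2019-08-04.tar.gz -C /mnt/cephfs/.snap/snap-2019-08-04 .
aws s3 cp cephfs-snap-2019-08-04.tar.gz s3://my-backup-bucket/cephfs/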
On Tue, Jul 30, 2019 at 10:33 AM Massimo Sgaravatto
wrote:
>
> The documentation that I have seen says that the minimum requirements for
> clients to use upmap are:
>
> - CentOS 7.5 or kernel 4.5
> - Luminous version
Do you have a link for that?
This is wrong: CentOS 7.5 (i.e. RHEL 7.5 kernel)
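For reference, the feature bits of the clients currently connected can be checked before relying on upmap (a quick sketch, not from the original mail):
ceph features                                     # per-client feature/release summary as seen by the mons
ceph osd set-require-min-compat-client luminous   # must be set before upmap can be used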
With 3 monitors, paxos needs at least 2 to reach consensus about the
cluster status
With 4 monitors, more than half is 3. The only problem I can see here is
that I will have only 1 spare monitor.
Is there any other problem with an even number of monitors?
--
Alfrenovsky
With 4 monitors, if you lose 2 you will fall out of quorum, because quorum
needs a strict majority (N/2 + 1).
Monitors recommended:
1 - 3 - 5 - 7
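For what it's worth, the arithmetic and the current quorum membership can be checked directly on the cluster (a quick sketch):
# quorum = floor(N/2) + 1: 3 mons tolerate 1 failure, 4 mons still only 1, 5 mons tolerate 2
ceph mon stat          # monmap summary and which mons are currently in quorum
ceph quorum_status     # detailed quorum info, including the current leader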
Regards
Manuel
-----Original Message-----
From: ceph-users On behalf of Alfredo
Daniel Rezinovsky
Sent: Monday, August 5, 2019 12:28
To: ceph-users
Subject: [ceph-us
On 2019-08-05T07:27:39, Alfredo Daniel Rezinovsky wrote:
There's no massive problem with even MON counts.
As you note, n+2 doesn't really provide added fault tolerance compared
to n+1, so there's no win either. That's fairly obvious.
Somewhat less obvious - since the failure of any additional M
Hi,
I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12
before I dare to put it into production. I installed the latest tcmu-runner
release (1.5.1) and (like before) I'm seeing that both nodes switch
exclusive locks for the disk images every 21 seconds. tcmu-runner logs
look l
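In case it helps with debugging, the current exclusive-lock holder can also be checked from the Ceph side (a sketch; pool and image names are placeholders):
rbd status iscsi-pool/disk01    # lists the watchers on the image
rbd lock ls iscsi-pool/disk01   # shows which client currently holds the exclusive lock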
On Mon, Aug 5, 2019 at 11:43 AM Ilya Dryomov wrote:
> On Tue, Jul 30, 2019 at 10:33 AM Massimo Sgaravatto
> wrote:
> >
> > The documentation that I have seen says that the minimum requirements
> for clients to use upmap are:
> >
> > - CentOS 7.5 or kernel 4.5
> > - Luminous version
>
> Do you ha
On 8/4/19 7:36 PM, Christian Balzer wrote:
Hello,
On Sun, 4 Aug 2019 06:34:46 -0500 Mark Nelson wrote:
On 8/4/19 6:09 AM, Paul Emmerich wrote:
On Sun, Aug 4, 2019 at 3:47 AM Christian Balzer wrote:
2. Bluestore caching still broken
When writing data with the fios below, it isn't cached
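One way to check whether the BlueStore cache is actually being populated during such a test (my own suggestion, not from the quoted mails; osd.0 is a placeholder):
ceph daemon osd.0 dump_mempools | grep -A 2 bluestore_cache   # cache item and byte counts per mempool
ceph daemon osd.0 config get bluestore_cache_size_hdd         # effective cache size setting on that OSD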
Hi Team,
@vita...@yourcmc.ru, thank you for the information. Could you please
clarify the queries below as well?
1. The average object size we use will be 256KB to 512KB; will there be a
deferred write queue?
With the default settings, no (bluestore_prefer_deferred_size_hdd =
32KB)
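To confirm whether the deferred-write path is being used at all, the threshold and the counters can be inspected on a running OSD (a sketch; osd.0 is a placeholder):
ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd   # writes smaller than this go through the deferred path
ceph daemon osd.0 perf dump | grep deferred                       # deferred write op/byte counters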
Are you su
Another option: if both RDMA ports are on the same card, you can do
RDMA with a bond. This does not work if you have two separate cards.
As far as your questions go, my guess would be that you would want to have
the different NICs in different broadcast domains, or set up Source Based
Routi
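A minimal sketch of source based routing with two NICs (addresses, interfaces and table numbers are made up for illustration):
# send traffic sourced from each NIC's address out through its own routing table
ip route add 192.168.1.0/24 dev eth0 src 192.168.1.10 table 101
ip route add default via 192.168.1.1 dev eth0 table 101
ip rule add from 192.168.1.10/32 table 101
ip route add 192.168.2.0/24 dev eth1 src 192.168.2.10 table 102
ip route add default via 192.168.2.1 dev eth1 table 102
ip rule add from 192.168.2.10/32 table 102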
Thanks for that. Seeing 'health err' so frequently has led to worrisome
'alarm fatigue'. Yup that's half of what I want to do.
The number of copies of a PG defined in the CRUSH map determines how
time-critical, and how dependent on human intervention, the PG repair process is. Having
several copies makes automati
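As a concrete illustration of the commands involved (pool name and PG id are placeholders):
ceph osd pool get <pool> size    # how many copies of each PG the pool keeps
ceph pg repair <pg.id>           # manually trigger repair of an inconsistent PG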
On 08/05/2019 05:58 AM, Matthias Leopold wrote:
> Hi,
>
> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12
> before I dare to put it into production. I installed latest tcmu-runner
> release (1.5.1) and (like before) I'm seeing that both nodes switch
> exclusive locks for th
I'm using it for a NAS to make backups from the other machines on my home
network. Since everything is in one location, I want to keep a copy offsite
for disaster recovery. Running Ceph across the internet is not recommended
and is also very expensive compared to just storing snapshots.
On Sun, Au
On Mon, Aug 5, 2019 at 12:21 AM Janek Bevendorff
wrote:
>
> Hi,
>
> > You can also try increasing the aggressiveness of the MDS recall but
> > I'm surprised it's still a problem with the settings I gave you:
> >
> > ceph config set mds mds_recall_max_caps 15000
> > ceph config set mds mds_recall_m
All;
While most discussion of MONs and their failure modes revolves around the
failure of the MONs themselves, the recommendation for odd numbers of MONs has
nothing to do with the loss of one or more MONs. It's actually in response to
the split-brain problem.
Imagine you have the following (
Hello everyone,
I cannot find the ceph v12.2.12 rpm at
https://download.ceph.com/rpm-luminous/el7/aarch64/
Why is that?
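A quick way to see which versions are actually published there (assuming a plain HTML directory index):
curl -s https://download.ceph.com/rpm-luminous/el7/aarch64/ | grep -o 'ceph-[0-9][^"<]*\.rpm' | sort -u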
Hi All,
I have multiple NVMe SSDs and I wish to use two of them with SPDK as the
BlueStore DB & WAL.
My assumption would be to put the following in ceph.conf under the [osd]
section:
bluestore_block_db_path = "spdk::01:00.0"
bluestore_block_db_size = 40 * 1024 * 1024 * 1024 (40G)
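As far as I know the config parser does not evaluate arithmetic, so the size would need to be written out as a plain byte count, for example:
bluestore_block_db_size = 42949672960   # 40 GiB in bytes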
Then how do I prepare the OSD?
ceph-volu