Martin,
thanks a lot for the information. This is very interesting, and I will contact
you again if we decide to go this way.
best regards,
samuel
huxia...@horebdata.cn
From: Martin Verges
Date: 2020-03-22 20:50
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: Questions on Ceph cluster w
Okay, so I have ceph version 14.2.6 Nautilus on my source cluster and
ceph version 12.2.13 Luminous on my backup cluster.
To be able to mount the mirrored rbd image (without a protected snapshot):
rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1
--cluster backup
I just need to upgrade my backup cluster?
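For reference, a minimal sketch of the full read-only map/mount/unmap cycle (the /dev/nbd0 device path and the /mnt/restore mount point are assumptions; rbd-nbd prints the actual device when mapping):

$ rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1 --cluster backup   # prints the nbd device, e.g. /dev/nbd0
$ mount -o ro /dev/nbd0 /mnt/restore                                    # mount the mapped device read-only
$ umount /mnt/restore && rbd-nbd unmap /dev/nbd0                        # clean up when done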
I am running the very latest version of Nautilus. I will try setting up
an external exporter today and see if that fixes anything. Our cluster
is somewhat large-ish with 1248 OSDs, so I expect stat collection to
take "some" time, but it definitely shouldn't crash the MGRs all the time.
On 21/03/20
Hello Martin,
how much disk space do you reserve for logs in the PXE setup?
Regards
Thomas
Am 22.03.2020 um 20:50 schrieb Martin Verges:
> Hello Samuel,
>
> we from croit.io don't use NFS to boot up servers. We copy the OS directly
> into RAM (approximately 0.5-1 GB). Think of it like a container
Good day!
Mon, Mar 23, 2020 at 05:21:37PM +1300, droopanu wrote:
> Hi Dave,
>
> Thank you for the answer.
>
> Unfortunately the issue is that ceph uses the wrong source IP address, and
> sends the traffic on the wrong interface anyway.
> It would be good if ceph could actually set the source
Hello Thomas,
by default we allocate 1 GB per host on the management node, and nothing on the
PXE-booted server.
This value can be changed in the management container config file
(/config/config.yml):
> ...
> logFilesPerServerGB: 1
> ...
After changing the config, you need to restart the mgmt container.
Hi,
I have upgraded to 14.2.8 and rebooted all nodes sequentially including
all 3 MON services.
However the slow ops are still displayed with increasing block time.
# ceph -s
  cluster:
    id:     6b1b5117-6e08-4843-93d6-2da3cf8a6bae
    health: HEALTH_WARN
            17 daemons have recently crashed
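As a first step in narrowing down the slow ops, something like the following is commonly used (the monitor id is a placeholder):

$ ceph health detail              # shows which daemons report slow ops and for how long
$ ceph daemon mon.<id> ops        # dumps the in-flight ops on that monitor via its admin socket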
Hello Martin,
that is much less disk space than I have seen allocated when something is
wrong with the cluster.
I have defined at least 10 GB, and there were situations (in the past)
when this space was quickly filled by
syslog
user.log
messages
daemon.log
Regards
Thomas
Am 23.03.2020 u
To be able to mount the mirrored rbd image (without a protected snapshot):
rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1
--cluster backup
I just need to upgrade my backup cluster?
No, that only works with snapshots. Although I'm not sure if you can
really skip the protection. I have tw
Am 21.03.20 um 05:51 schrieb Konstantin Shalygin:
> On 3/18/20 10:09 PM, Stolte, Felix wrote:
>> a short question about pool quotas. Do they apply to stats attributes
>> “stored” or “bytes_used” (is replication accounted for or not)?
>
> Quotas are for the total used space of this pool on the OSDs. So this
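For reference, a minimal sketch of setting and inspecting a pool quota (pool name and size are placeholders); which statistic the quota is compared against is exactly the question discussed above:

$ ceph osd pool set-quota mypool max_bytes 107374182400   # 100 GiB quota on the pool
$ ceph df detail                                          # per-pool STORED/USED figures and the configured quotas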
OK, so after some debugging, I've pinned the problem down to
OSDMonitor::get_trim_to:
  std::lock_guard l(creating_pgs_lock);
  if (!creating_pgs.pgs.empty()) {
    return 0;
  }
Apparently creating_pgs.pgs.empty() is not true; do I understand it
correctly that the cluster thinks the list of
https://tracker.ceph.com/issues/44184
Looks similar, maybe you're also seeing other symptoms listed there? In any case
it would be good to track this in one place.
On Mon, Mar 23, 2020 at 11:29:53AM +0100, Nikola Ciprich wrote:
OK, so after some debugging, I've pinned the problem down to
OSDMonit
Hi Lenz,
Yeah, I saw the PR and I still hit the issue today. In the meantime while
@Volker investigates, is there a workaround to bring dashboard back? Or
should I wait for @Volker's investigation?
P.S.: I found this too: https://tracker.ceph.com/issues/44271
Thanks,
Gencer.
Hi all,
I have a large distributed ceph cluster that recently broke with all PGs housed
at a single site getting marked as 'unknown' after a run of the Ceph Ansible
playbook (which was being used to expand the cluster at a third site). Is
there a way to recover the location of PGs in this stat
On 2020-03-23 12:23, Gencer W. Genç wrote:
> Yeah, I saw the PR and I still hit the issue today. In the meantime
> while @Volker investigates, is there a workaround to bring dashboard
> back? Or should I wait for @Volker's investigation?
The workaround is likely to remove the user account, so it
Liviu,
All due respect, the settings I suggested should cause the kernel to
always pick the right source IP for a given destination IP, even when
both NICs are connected to the same physical subnet. Except maybe if you
have a default route on your private interface - you should only have
one
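A quick way to check what the kernel actually does for a given destination (addresses are placeholders):

$ ip route get 10.0.0.42          # shows the outgoing device and the "src" address chosen
$ ip route show                   # verify there is only one default route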
Hi Gencer,
could you please post the output of
$ ceph config-key get "mgr/dashboard/accessdb_v2"
Regards
Volker
Am 22.03.20 um 09:37 schrieb Gencer W. Genç:
> After upgrading from 15.1.0 to 15.1.1 of Octopus I'm seeing this error for
> dashboard:
>
>
>
> cluster:
>
> id: c5233cbc-e9c
On Mon, Mar 23, 2020 at 5:02 AM Eugen Block wrote:
>
> > To be able to mount the mirrored rbd image (without a protected snapshot):
> > rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1
> > --cluster backup
> >
> > I just need to upgrade my backup cluster?
>
> No, that only works with snapshots
Sorry, to clarify, you also need to restrict the clients to mimic or
later to use RBD clone v2 in the default "auto" version selection
mode:
$ ceph osd set-require-min-compat-client mimic
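A small sketch around that step, reading the current value back before changing it:

$ ceph osd dump | grep min_compat_client      # show the current requirement
$ ceph osd set-require-min-compat-client mimic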
Ah, of course, thanks for the clarification.
Zitat von Jason Dillaman :
On Mon, Mar 23, 2020 at 5:02 AM
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs write
caching on, or should it be configured off for failover?
Cheers /Maged
OK, to reply to myself :-)
I wasn't very smart about decoding the output of "ceph-kvstore-tool get ...",
so I added a dump of creating_pgs.pgs into the get_trim_to function.
Now I have the list of PGs which seem to be stuck in the creating state
in the monitors' DB. If I query them, they're active+clean as I wrote.
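For anyone following along, a minimal sketch of the checks involved (the PG id is a placeholder):

$ ceph pg 1.2f query              # detailed state of the PG as reported by its acting set
$ ceph pg ls creating             # list any PGs the cluster still reports as creating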
-- Forwarded message -
From: Abhinav Singh
Date: Mon, Mar 23, 2020 at 7:43 PM
Subject: RGW failing to create bucket
To:
ceph : octopus
JaegerTracing : master
ubuntu : 18.04
When I implement Jaeger tracing, RGW is unable to create a bucket.
(I am using Swift to perform the testing.)
Hi Jan,
yes, I'm watching this TT as well; I'll post an update there
(together with a quick & dirty patch to get more debugging info).
BR
nik
On Mon, Mar 23, 2020 at 12:12:43PM +0100, Jan Fajerski wrote:
> https://tracker.ceph.com/issues/44184
> Looks similar, maybe you're also seeing other symptoms
Hi Volker,
Sure, here you go:
{"users": {"gencer": {"username": "gencer", "password": "",
"roles": ["administrator"], "name": "Gencer Gen\u00e7", "email": "gencer@xxx",
"lastUpdate": 1580029921, "enabled": true, "pwdExpirationDate": null}},
"roles": {}, "version": 2}
Dear All,
We are having problems with a critical osd crashing on a Nautilus
(14.2.8) cluster.
This is a critical failure, as the OSD is part of a PG that is otherwise
"down+remapped" due to other OSDs crashing; we were hoping the PG was
going to repair itself, as there are plenty of free os
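A hedged aside: on 14.2.x the crash module can at least collect details of the failing OSD (the crash id is a placeholder):

$ ceph crash ls                   # list recently recorded daemon crashes
$ ceph crash info <crash-id>      # backtrace and metadata for a specific crash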
I haven't seen any MGR hangs so far since I disabled the prometheus
module. It seems like the module is not only slow, but kills the whole
MGR when the cluster is sufficiently large, so these two issues are most
likely connected. The issue has become much, much worse with 14.2.8.
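For reference, disabling (and later re-enabling) the module is a single mgr command:

$ ceph mgr module disable prometheus
$ ceph mgr module enable prometheus       # once the underlying problem is fixed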
On 23/03/2020 09
I dug up this issue report, where the problem has been reported before:
https://tracker.ceph.com/issues/39264
Unfortunately, the issue hasn't gotten much (or any) attention yet. So
let's get this fixed; the prometheus module is unusable in its current
state.
On 23/03/2020 17:50, Janek Bevendorff wr
Hi everyone,
As we wrap up Octopus and kick off development for Pacific, now it seems
like a good idea to sort out what to call the Q release.
Traditionally/historically, these have always been names of cephalopod
species--usually the "common name", but occasionally a Latin name
(infernalis).
Maybe just call it Quincy and have a backstory? Might be fun...
> On Mar 23, 2020, at 11:11 AM, Sage Weil wrote:
>
> Hi everyone,
>
> As we wrap up Octopus and kick off development for Pacific, now it seems
> like a good idea to sort out what to call the Q release.
> Traditionally/historicall
There's always Quahog here in New England, but I like Quincy.
On Mon, Mar 23, 2020 at 1:13 PM Brian Topping
wrote:
> Maybe just call it Quincy and have a backstory? Might be fun...
>
> > On Mar 23, 2020, at 11:11 AM, Sage Weil wrote:
> >
> > Hi everyone,
> >
> > As we wrap up Octopus and kick
How about the squid-headed alien species from Star Wars?
https://en.wikipedia.org/wiki/List_of_Star_Wars_species_(P%E2%80%93T)#Quarren
On Mon, Mar 23, 2020 at 6:11 PM Sage Weil wrote:
>
> Hi everyone,
>
> As we wrap up Octopus and kick off development for Pacific, now it seems
> like a good id
Quincy - Should be in the context of easily-solved storage failures that can
occur only on Thursday nights between the hours of 2000 and 2100 with a strong
emphasis on random chance for corrective actions and an incompetent local
security group. Possibly not the best associations for a technolog
That has potential. Another, albeit suboptimal idea would be simply
Quid
as in
’S quid
as in “it’s squid”. cf. https://en.wikipedia.org/wiki/%27S_Wonderful
Alternatively, just skip to R, and when someone asks about Q, we say “The first
rule of Ceph is that we don’t talk about Q”.
— aad
>
>
I liked the first one a lot. Until I read the second one.
> On Mar 23, 2020, at 11:29 AM, Anthony D'Atri wrote:
>
> That has potential. Another, albeit suboptimal idea would be simply
>
> Quid
>
> as in
>
> ’S quid
>
> as in “it’s squid”. cf. https://en.wikipedia.org/wiki/%27S_Wonderful
>
Checking the word "Octopus" in different languages, the only one starting
with a "Q" is in Maltese: "Qarnit".
For good measure, here is a Maltese Qarnit stew recipe:
http://littlerock.com.mt/food/maltese-traditional-recipe-stuffat-tal-qarnit-octopus-stew/
Respectfully,
*Wes Dillingham*
w...@wes
+1 Quincy
On Mon, Mar 23, 2020 at 10:11 AM Sage Weil wrote:
>
> Hi everyone,
>
> As we wrap up Octopus and kick off development for Pacific, now it seems
> like a good idea to sort out what to call the Q release.
> Traditionally/historically, these have always been names of cephalopod
> species--u
What about Quasar? (https://www.google.com/search?q=quasar)
It belongs to the universe.
True, there are not that many options for Q.
Liviu;
First: what version of Ceph are you running?
Second: I don't see a cluster network option in your configuration file.
At least for us, running Nautilus, there are no underscores (_) in the options,
so our configuration files look like this:
[global]
auth cluster required = cephx
Please, someone help me.
On Mon, 23 Mar 2020, 19:44 Abhinav Singh,
wrote:
>
>
> -- Forwarded message -
> From: Abhinav Singh
> Date: Mon, Mar 23, 2020 at 7:43 PM
> Subject: RGW failing to create bucket
> To:
>
>
> ceph : octopus
> JaegerTracing : master
> ubuntu : 18.04
>
> When
Tried that:
[client.1]
key = ***
caps mds = "allow rw path=/"
caps mon = "allow r"
caps osd = "allow rw tag cephfs pool=meta_data, allow rw pool=data"
No change.
From: Yan, Zheng
Sent: Sunday, March 22
Wait, your client name is just "1"? In that case you need to specify
that in your mount command:
mount ... -o name=1,secret=...
It has to match your ceph auth settings, where "client" is only a
prefix and is followed by the client's name
[client.1]
Zitat von "Dungan, Scott A." :
Tried
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
> Hello all,
>
> For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs write
> caching on, or should it be configured off for failover ?
>
You can do libcephfs write caching, as the caps would need to be
recalled for any comp
That was it! I am not sure how I got confused with the client name syntax. When
I issued the command to create a client key, I used:
ceph fs authorize cephfs client.1 / r / rw
I assumed from the syntax that my client name is "client.1"
I suppose the correct syntax is that anything after "client
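For anyone hitting the same confusion, a minimal sketch of the whole sequence (monitor address, mount point and secret are placeholders):

$ ceph fs authorize cephfs client.1 / rw            # creates or updates the key for client.1
$ ceph auth get-key client.1                        # the secret to pass to mount
$ mount -t ceph mon1:6789:/ /mnt/cephfs -o name=1,secret=<key>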
Hi Gencer,
you can fix the Dashboard user database with the following command:
# ceph config-key get "mgr/dashboard/accessdb_v2" | jq -cM
".users[].pwdUpdateRequired = false" | ceph config-key set
"mgr/dashboard/accessdb_v2" -i -
Regards
Volker
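P.S. A possible follow-up check (assuming jq is available) to confirm the flag was cleared:

$ ceph config-key get "mgr/dashboard/accessdb_v2" | jq '.users[].pwdUpdateRequired'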
Am 23.03.20 um 16:22 schrieb gen...@gencgiyen.com:
Hi Volker,
Thank you so much for your quick fix for me. It worked. I got my dashboard back
and ceph is in HEALTH_OK state.
Thank you so much again and stay safe!
Regards,
Gencer.
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs write
caching on, or should it be configured off for failover ?
You can do libcephfs write caching, as the caps woul
Hi,
I'm not able to bootstrap an OSD container for a physical device or LVM.
Has anyone been able to bootstrap it?
Sorry if this is not the correct place to post this question. If not, I
apologize, and I will be grateful if anyone can redirect me to the correct
place.
Thanks in advance
Oscar
Evening,
We are running into issues exporting a disk image from Ceph RBD when we
attempt to export an RBD image in a cache-tiered erasure-coded pool on Luminous.
All the other disks are working fine, but this one is acting up. We have a bit
of important data on other disks, so we obviously want to m
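For context, a minimal sketch of the plain export invocation (pool, image and destination are placeholders):

$ rbd export mypool/mydisk /backup/mydisk.img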