> AIT Risø Campus
> Bygning 109, rum S14
>
> ________
> From: Amudhan P
> Sent: Monday, September 16, 2024 6:19 PM
> To: Frank Schilder
> Cc: Eugen Block; ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: Ceph octopus version cluster not starting
>
> Thanks Frank.
" to the daemon's
> command lines to force traditional log files to be written.
>
> Best regards,
> =====
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Amudhan P
> Sent: Monday, September 16, 2024 12:18
o on
> until you have enough up for quorum. Then you can start querying the MONs
> what their problem is.
>
> If none of this works, the output of the manual command maybe with higher
> debug settings on the command line should be helpful.
>
> Best regards,
> =
> Frank Schilder
t monmap
>
> Does it print the expected output?
>
> Quoting Amudhan P:
>
> > No, I don't use cephadm and I have enough space for log storage.
> >
> > When I try to start the mon service on any of the nodes, it just keeps
> > waiting to complete without an
the MONs first. If they don't start, your cluster is
> not usable. It doesn't look like you use cephadm, but please confirm.
> Check if the nodes are running out of disk space, maybe that's why
> they don't log anything and fail to start.
>
>
> Quoting Amudhan P
Hi,
Recently I added one disk to the Ceph cluster using "ceph-volume lvm create
--data /dev/sdX", but the new OSD didn't start. After some time the OSD
services on the rest of the nodes also stopped, so I restarted all nodes in
the cluster. Now, after the restart, the
MON, MDS, MGR and OSD services are not starting. Could
om/issues/17170
> --
> Alex Gorbachev
> Intelligent Systems Services Inc.
> http://www.iss-integration.com
> https://www.linkedin.com/in/alex-gorbachev-iss/
>
>
>
> On Sat, Nov 4, 2023 at 11:22 AM Amudhan P wrote:
>
>> Hi,
>>
>> One of the server
Hi,
One of the servers in the Ceph cluster shut down abruptly due to a
power failure. After restarting, the OSDs are not coming up, and the Ceph
health check shows them as down.
When checking the OSD status I see "osd.26 18865 unable to obtain rotating service
keys; retrying".
Every 30 seconds it just putti
roblematic client could fix the issue if it's acceptable.
>
> Thanks and Regards,
> Kotresh H R
>
> On Thu, Dec 29, 2022 at 4:35 PM Amudhan P wrote:
>
>> Hi,
>>
>> Suddenly facing an issue with Ceph cluster I am using ceph version 16.2.6.
>> I couldn
Hi,
I am suddenly facing an issue with my Ceph cluster (version 16.2.6),
and I couldn't find any solution for the issue below.
Any suggestions?
health: HEALTH_WARN
1 clients failing to respond to capability release
1 clients failing to advance oldest client/flush ti
Hi,
I am trying to configure Ceph (version 15.2.3) mgr alert email using an
Office 365 account, and I get the error below.
[WRN] ALERTS_SMTP_ERROR: unable to send alert email
[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)
I configured the SMTP server and port 587, and
I have followed the documen
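For reference, the mgr alerts module is configured through mgr/alerts/* options; a minimal sketch (the host, addresses and values below are assumptions for illustration, not taken from this thread):
  ceph mgr module enable alerts
  ceph config set mgr mgr/alerts/smtp_host smtp.office365.com
  ceph config set mgr mgr/alerts/smtp_port 587
  ceph config set mgr mgr/alerts/smtp_ssl false
  ceph config set mgr mgr/alerts/smtp_user alerts@example.com
  ceph config set mgr mgr/alerts/smtp_password 'secret'
  ceph config set mgr mgr/alerts/smtp_sender alerts@example.com
  ceph config set mgr mgr/alerts/smtp_destination admin@example.com
A WRONG_VERSION_NUMBER error typically means an implicit-TLS (SSL) connection was attempted against a port that expects plaintext plus STARTTLS, which is usually the case for port 587, so smtp_ssl=true combined with port 587 is a likely culprit here.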
I also have a similar problem: in my case the OSDs start and stop after a few
minutes, and there is not much in the log.
I have filed a bug and am waiting for a reply to confirm whether it's a bug or an issue.
On Fri, Sep 3, 2021 at 5:21 PM mahnoosh shahidi
wrote:
> We still have this problem. Does anybody have any id
ticket
> at tracker.ceph.com with the backtrace and osd log file? We can direct
> that to the RADOS team to check out.
> -Greg
>
> On Sat, Aug 28, 2021 at 7:13 AM Amudhan P wrote:
> >
> > Hi,
> >
> > I am having a peculiar problem with my ceph octopus cluster. 2 we
Hi,
I am having a peculiar problem with my Ceph Octopus cluster. Two weeks ago I
had an issue that started with too many scrub errors, and later random OSDs
stopped, which led to corrupt PGs and missing replicas. Since it's a testing
cluster, I wanted to understand the issue.
I tried to recover the PGs but it did
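For scrub-related PG errors like this, the usual first steps (a generic sketch, not necessarily what was tried here) are:
  ceph health detail                      # lists the inconsistent/damaged PGs
  rados list-inconsistent-obj <pgid>      # shows which objects/replicas are bad
  ceph pg repair <pgid>                   # asks the primary OSD to repair the PG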
; Suresh
>
> On Sat, Aug 14, 2021, 9:53 AM Amudhan P wrote:
>
>> Hi,
>> I am stuck with ceph cluster with multiple PG errors due to multiple OSD
>> was stopped and starting OSD's manually again didn't help. OSD service
>> stops again there is no issue with HD
Hi,
I am stuck with a Ceph cluster with multiple PG errors because multiple OSDs
stopped, and starting the OSDs manually again didn't help: the OSD service
stops again. There is no issue with the HDDs for sure, but for some reason the OSDs
stop.
I am running ceph version 15.2.5 in podman containers.
How do I
]: log_file
/var/lib/ceph/crash/2021-08-11T11:25:47.930411Z_a06defcc-19c6-41df-a37d-c071166cdcf3/log
Aug 11 16:55:48 bash[27152]: --- end dump of recent events ---
Aug 11 16:55:48 bash[27152]: reraise_fatal: default handler for signal 6
didn't terminate the process?
On Wed, Aug 11, 2021
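Since the dump ends up under /var/lib/ceph/crash, the crash module should also have recorded it; a quick way to inspect it (a sketch):
  ceph crash ls                 # list recorded crashes
  ceph crash info <crash-id>    # full metadata and backtrace for one crash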
Hi,
I am using ceph version 15.2.7 in a 4-node cluster. My OSDs keep
stopping, and even if I start them again they stop after some time. I
couldn't find anything in the log.
I have set norecover and nobackfill; as soon as I unset norecover the OSDs start
to fail.
cluster:
id: b6437922-3edf-
Hi,
This issue is fixed now after setting the cluster IP on the OSDs only. The mount works
perfectly fine.
"ceph config set osd cluster_network 10.100.4.0/24"
regards
Amudhan
On Sat, Nov 7, 2020 at 10:09 PM Amudhan P wrote:
> Hi,
>
> At last, the problem fixed for now by adding c
I have run the commands you asked for, and the output below is after
applying all the changes mentioned above.
# ceph config get mon cluster_network
output :
# ceph config get mon public_network
output : 10.100.3.0/24
Still testing more on this to confirm the issue and playing out with my
ceph cluster.
regards
Amudh
ble-check your setup.
>
>
> Quoting Amudhan P:
>
> > Hi Nathan,
> >
> > Kernel client should be using only the public IP of the cluster to
> > communicate with OSD's.
> >
> > But here it requires both IP's for mount to work properly.
> >
Hi Janne,
My OSD's have both public IP and Cluster IP configured. The monitor node
and OSD nodes are co-located.
regards
Amudhan P
On Tue, Nov 10, 2020 at 4:45 PM Janne Johansson wrote:
>
>
> On Tue, 10 Nov 2020 at 11:13, Amudhan P wrote:
>
>> Hi Nathan,
>>
>
e mon but not the OSD?
> It needs to be able to reach all mons and all OSDs.
>
> On Sun, Nov 8, 2020 at 4:29 AM Amudhan P wrote:
> >
> > Hi,
> >
> > I have mounted my cephfs (ceph octopus) thru kernel client in Debian.
> > I get following error in
ceph orch daemon reconfig osd.2
ceph orch daemon reconfig osd.3
and then restarting all daemons.
regards
Amudhan P
On Mon, Nov 9, 2020 at 9:49 PM Eugen Block wrote:
> Clients don't need the cluster IP because that's only for OSD <--> OSD
> replication, no client traffic. But of course to be able to
Hi Frank,
You said only one OSD is down, but ceph status shows more than 20 OSDs
down.
Regards,
Amudhan
On Sun 8 Nov, 2020, 12:13 AM Frank Schilder, wrote:
> Hi all,
>
> I moved the crush location of 8 OSDs and rebalancing went on happily
> (misplaced objects only). Today, osd.1 crashed, r
Hi,
I have mounted my CephFS (Ceph Octopus) through the kernel client on Debian.
I get the following error in "dmesg" when I try to read any file from my mount.
"[ 236.429897] libceph: osd1 10.100.4.1:6891 socket closed (con state
CONNECTING)"
I use public IP (10.100.3.1) and cluster IP (10.100.4.1) in my
up it had only public network. later
added cluster with cluster IP and it was working fine until the restart of
the entire cluster.
regards
Amudhan P
On Fri, Nov 6, 2020 at 12:02 AM Amudhan P wrote:
> Hi,
> I am trying to read file from my ceph kernel mount and file read stays in
> bytes
f8bc7682-0d11-11eb-a332-0cc47a5ec98a
[ 272.132787] libceph: osd1 10.0.104.1:6891 socket closed (con state
CONNECTING)
The Ceph cluster status is healthy with no errors. It was working fine until my
entire cluster went down.
I am using Ceph Octopus on Debian.
Regards
Amudhan P
(con state CONNECTING)"
-- Forwarded message -----
From: Amudhan P
Date: Wed, Nov 4, 2020 at 6:24 PM
Subject: File read are not completing and IO shows in bytes able to not
reading from cephfs
To: ceph-users
Hi,
In my test ceph octopus cluster I was trying to simulate a failu
state
CONNECTING)
What went wrong? Why is this happening?
regards
Amudhan P
0124 TiB
So, the PG number is not the reason for the smaller size shown.
I am also trying other options to see what caused this issue.
On Mon, Oct 26, 2020 at 8:20 PM 胡 玮文 wrote:
>
> On 26 Oct 2020, at 22:30, Amudhan P wrote:
>
>
> Hi Janne,
>
> I agree with you and I was trying to say d
I understand that due to replication
it's showing half of the space.
Why is it not showing the entire raw disk space as available space?
Does the number of PGs per pool play any vital role in the available space shown?
On Mon, Oct 26, 2020 at 12:37 PM Janne Johansson
wrote:
>
>
> On Sun, 25 Oct 2020
> You only have 289 placement groups, which I think is too few for
> your 48 OSDs [2]. If you have more placement groups, the unbalance issue
> will be far less severe.
>
> [1]: https://docs.ceph.com/en/latest/architecture/#mapping-pgs-to-osds
> [2]: https://docs.ceph.com/en/late
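As a rough worked example of the sizing rule usually applied here (targeting about 100 PGs per OSD, and assuming the pools are replicated with size 2 as suggested elsewhere in this thread): 48 OSDs x 100 / 2 = 2400, which rounds to the nearest power of two as 2048 PGs across all pools, versus the 289 currently in use.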
Hi Stefan,
I have started the balancer, but what I don't understand is that there is enough
free space on the other disks.
Why is it not shown as available space?
How do I reclaim the free space?
On Sun 25 Oct, 2020, 2:27 PM Stefan Kooman, wrote:
> On 2020-10-25 05:33, Amudhan P wrote:
&
5 TiB 1.5 TiB 5.2 MiB 2.0 GiB
3.9 TiB 28.32 0.569 up
MIN/MAX VAR: 0.19/1.88 STDDEV: 22.13
On Sun, Oct 25, 2020 at 12:08 AM Stefan Kooman wrote:
> On 2020-10-24 14:53, Amudhan P wrote:
> > Hi,
> >
> > I have created a test Ceph cluster with Ceph Octopus using
Hi Nathan,
Attached is the crushmap output.
Let me know if you find anything odd.
On Sat, Oct 24, 2020 at 6:47 PM Nathan Fish wrote:
> Can you post your crush map? Perhaps some OSDs are in the wrong place.
>
> On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote:
> >
> > Hi,
>
Hi,
I have created a test Ceph cluster with Ceph Octopus using cephadm.
The cluster's total raw disk capacity is 262 TB, but it allows the use of only
132 TB.
I have not set a quota for any of the pools. What could be the issue?
Output from:
ceph -s
cluster:
id: f8bc7682-0d11-11eb-a332-0cc47
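For what it's worth, the numbers are consistent with replication being counted: assuming the pools are replicated with size 2 (as the rest of this thread suggests), 262 TB raw / 2 ≈ 131 TB of usable capacity, which matches the ~132 TB being reported.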
2020 at 6:14 PM Eugen Block wrote:
> Did you restart the OSD containers? Does ceph config show your changes?
>
> ceph config get mon cluster_network
> ceph config get mon public_network
>
>
>
> Quoting Amudhan P:
>
> > Hi Eugen,
> >
> > I did the
gt; ceph orch daemon reconfig mon.host2
> >> ceph orch daemon reconfig mon.host3
> >> ceph orch daemon reconfig osd.1
> >> ceph orch daemon reconfig osd.2
> >> ceph orch daemon reconfig osd.3
> >> ---snip---
> >>
> >> I haven't tried
Hi,
I have installed a Ceph Octopus cluster using cephadm with a single network;
now I want to add a second network and configure it as the cluster address.
How do I configure Ceph to use the second network as the cluster network?
Amudhan
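A sketch of the approach described in the related messages above (the subnet is the example value from those messages; repeat the reconfig for each mon/osd daemon):
  ceph config set osd cluster_network 10.100.4.0/24
  ceph orch daemon reconfig mon.<host>
  ceph orch daemon reconfig osd.<id>
followed by restarting the daemons.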
Hi,
Will future releases of Ceph support ceph-deploy, or will cephadm be the only
choice?
Thanks,
Amudhan
Hi,
Have any of you used the cephadm bootstrap command with a non-root user?
On Wed, Aug 19, 2020 at 11:30 AM Amudhan P wrote:
> Hi,
>
> I am trying to install ceph 'octopus' using cephadm. In bootstrap
> command, I have specified a non-root user account as ssh-user.
> c
Hi,
I am trying to install ceph 'octopus' using cephadm. In bootstrap
command, I have specified a non-root user account as ssh-user.
cephadm bootstrap --mon-ip xx.xxx.xx.xx --ssh-user non-rootuser
When the bootstrap was about to complete, it threw an error stating:
INFO:cephadm:Non-zero exit code 2
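If memory serves, cephadm's documented requirement for a non-root --ssh-user is key-based SSH access plus passwordless sudo on every host; one way to grant that (the user name here is just the example from above) is a sudoers drop-in such as:
  echo 'non-rootuser ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/cephadm-nonroot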
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
On Sat, Jun 13, 2020 at 2:31 PM masud parvez
wrote:
> Could anyone give me the latest version ceph install guide for ubuntu 20.04
Hi,
I am looking for a software suite to deploy Ceph storage nodes and gateway
servers (SMB & NFS), with a dashboard showing the entire cluster status,
individual node health, disk identification and maintenance activity,
and network utilization.
A simple, user-manageable dashboard.
Please suggest any Paid or
s all connected at 10G. In the same setup, copying a 1 GB
file from a Windows client to Samba gets 90 MB/s.
Is there any kernel or network tuning that needs to be done?
Any suggestions?
regards
Amudhan P
Hi,
I have not worked with the orchestrator, but I remember reading somewhere that
the NFS implementation is not supported.
Refer to the cephadm documentation; for NFS you have to configure NFS Ganesha.
You can manage NFS through the dashboard, but for that you need an initial config in
the dashboard, and in NFS Ganesha you hav
's the value of `devices` in the table.
>
> Regards,
> --
> Kiefer Chang (Ni-Feng Chang)
>
>
>
>
> On 2020/6/7, 11:03 PM, "Amudhan P" wrote:
>
> Hi,
>
> I am using Ceph octopus in a small cluster.
>
> I have enabled ceph dashboard and when I g
Hi,
I am using Ceph octopus in a small cluster.
I have enabled the Ceph dashboard, and when I go to the inventory page I can see
only the OSDs running on the mgr node; it does not list the OSDs on the other 3 nodes.
I don't see any issue in the log.
How do I list the other OSDs
Re
I didn't make any changes, but it started working now with jumbo frames.
On Sun, May 24, 2020 at 1:04 PM Khodayar Doustar
wrote:
> So this is your problem, it has nothing to do with Ceph. Just fix the
> network or rollback all changes.
>
> On Sun, May 24, 2020 at 9:05 AM Amudhan
and you mean you changed min_size to 1? I/O paus with
> min_size 1 and size 2 is unexpected, can you share more details like
> your crushmap and your osd tree?
>
>
> Quoting Amudhan P:
>
> > Behaviour is same even after setting min_size 2.
> >
> > On Mon 18
onfigured for MTU 9000
>> it
>> wouldn't work.
>>
>> On Sat, May 23, 2020 at 2:30 PM si...@turka.nl wrote:
>>
>> > Can the servers/nodes ping eachother using large packet sizes? I guess
>> not.
>> >
>> > Sinan Polat
>> >
No, ping with MTU size 9000 didn't work.
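A quick way to verify jumbo frames end to end on Linux is a non-fragmenting ping with a payload sized for a 9000-byte MTU (8972 = 9000 minus 28 bytes of IP/ICMP headers):
  ping -M do -s 8972 <other-node-ip>
If this fails while a normal ping works, something in the path (switch port or NIC) is still limited to 1500-byte frames.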
On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar
wrote:
> Does your ping work or not?
>
>
> On Sun, May 24, 2020 at 6:53 AM Amudhan P wrote:
>
>> Yes, I have set setting on the switch side also.
>>
>> On Sat
.@turka.nl wrote:
>
>> Can the servers/nodes ping eachother using large packet sizes? I guess
>> not.
>>
>> Sinan Polat
>>
>> > On 23 May 2020 at 14:21, Amudhan P wrote the following:
>> >
>> > In OSD logs "heartbeat_
In OSD logs "heartbeat_check: no reply from OSD"
On Sat, May 23, 2020 at 5:44 PM Amudhan P wrote:
> Hi,
>
> I have set Network switch with MTU size 9000 and also in my netplan
> configuration.
>
> What else needs to be checked?
>
>
> On Sat, May 23, 2020
Hi,
I have set the network switch to MTU 9000 and also set it in my netplan
configuration.
What else needs to be checked?
On Sat, May 23, 2020 at 3:39 PM Wido den Hollander wrote:
>
>
> On 5/23/20 12:02 PM, Amudhan P wrote:
> > Hi,
> >
> > I am using ceph Nautilus i
Hi,
I am using Ceph Nautilus on Ubuntu 18.04, working fine with an MTU of 1500
(the default); recently I tried to update the MTU to 9000.
After enabling jumbo frames, running ceph -s times out.
regards
Amudhan P
>
> Quoting Amudhan P:
>
> > Hi,
> >
> > Crush rule is "replicated" and min_size 2 actually. I am trying to test
> > multiple volume configs in a single filesystem
> > using file layout.
> >
> > I have created metadata pool with rep 3 (min_
Node failure is handled properly when having only the metadata pool and
one data pool (rep 3).
After adding an additional data pool to the fs, the single-node failure scenario is
not working.
regards
Amudhan P
On Sun, May 17, 2020 at 1:29 AM Eugen Block wrote:
> What’s your pool configuration wrt min_siz
ica2) pool.
I was expecting reads and writes to continue after a small pause due to the node
failure, but IO halts and never resumes until the failed node is back up.
I remember testing the same scenario before in Ceph Mimic, where IO
continued after a small pause.
regards
use the single
> /etc/ganesha/ganesha.conf file.
>
> Daniel
>
> On 5/15/20 4:59 AM, Amudhan P wrote:
> > Hi Rafael,
> >
> > I have used config you have provided but still i am not able mount nfs. I
> > don't see any error in log msg
> >
> &g
eaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server
Now NOT IN GRACE
Regards
Amudhan P
On Fri, May 15, 2020 at 1:01 PM Rafael Lopez
wrote:
> Hello Amudhan,
>
> The only ceph specific thing required in the ganesha config is to add the
> FSAL block to your export, everything else is standard ganesha con
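For illustration, a minimal EXPORT block with the CEPH FSAL might look like the following (Export_Id, Path and Pseudo are placeholder assumptions):
  EXPORT {
      Export_Id = 1;
      Path = /;
      Pseudo = /cephfs;
      Access_Type = RW;
      FSAL {
          Name = CEPH;
      }
  }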
Hi,
I am trying to set up NFS Ganesha on Ceph Nautilus.
On an Ubuntu 18.04 system I have installed the nfs-ganesha (v2.6) and
nfs-ganesha-ceph packages and followed the steps in the link
https://docs.ceph.com/docs/nautilus/cephfs/nfs/, but I am not able to
export my cephfs volume; there is no error msg in nfs
Will EC-based writes benefit from having both a public network and a cluster network?
On Thu, May 14, 2020 at 1:39 PM lin yunfan wrote:
> That is correct. I didn't explain it clearly. I said that because in
> some write-only scenarios the public network and cluster network will
> both be saturated at the same tim
For Ceph releases before Nautilus, osd_memory_target changes need a
restart of the OSD service to take effect.
I had a similar issue in Mimic and did the same in my test setup.
Before restarting the OSD service, make sure you set the osd nodown and osd noout
flags (or similar) so the restart doesn't trigger OSD-down handling and recove
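A rough sketch of that sequence (the flags and unit name follow the standard tooling; adjust to your deployment):
  ceph osd set noout
  ceph osd set nodown
  # update osd_memory_target in ceph.conf (pre-Nautilus), then per OSD:
  systemctl restart ceph-osd@<id>
  ceph osd unset nodown
  ceph osd unset noout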
Hi,
I am running a small 3 node Ceph Nautilus 14.2.8 cluster on Ubuntu 18.04.
I am testing the cluster to expose a CephFS volume as a Samba v4 share for users
to access from Windows for later use.
Samba Version 4.7.6-Ubuntu and mount.cifs version: 6.8.
When I did a test with DD Write (600 MB/s) and
Hi,
I am running a small Ceph Nautilus cluster on Ubuntu 18.04.
I am testing the cluster to expose a CephFS volume as a Samba v4 share for users to
access from Windows.
When I test with dd, writes are 600 MB/s, and md5sum file reads are
700-800 MB/s from the Ceph kernel mount.
The same volume I have expo
s://goo.gl/PGE1Bx
>
>
> On Sun, 15 Mar 2020 at 14:34, Amudhan P wrote:
>
>> Thank you, All for your suggestions and ideas.
>>
>> what is your view on using MON, MGR, MDS and cephfs client or samba-ceph
>> vfs in a single machine (10 core xeon CPU wit
ect? Kernel module mapping or
> iSCSI targets.
>
>
>
> Another possibilty would be to create an RBD Image containing data and
> samba and use it with QEMU.
>
>
>
> Regards
>
>
>
> Marco Savoca
>
>
>
> *From: *jes...@krogh.cc
> *Sent: *Saturday
Hi,
I am planning to create a new 3 node ceph storage cluster.
I will be using CephFS with Samba for a maximum of 10 clients for upload and
download.
Storage node hardware is a single 8-core Intel Xeon E5 v2 processor, 32 GB RAM, two 10 Gb
NICs, and 24 x 6 TB SATA HDDs per node, with the OS on a separate SSD.
Earlier I ha
Ok, thanks.
On Mon, Oct 21, 2019 at 8:28 AM Konstantin Shalygin wrote:
> On 10/18/19 8:43 PM, Amudhan P wrote:
> > I am getting below error msg in ceph nautilus cluster, do I need to
> > worry about this?
> >
> > Oct 14 06:25:02 mon01 ceph-mds[35067]: 2019-10-14 06:25
Hi,
I am using a Ceph Nautilus cluster, and I found that one of my OSD nodes, running 3
OSD services, suddenly went down and was very slow even for typing commands. I
killed the ceph-osd processes, the system became normal, and I started all the OSD
services again.
After that it was back to normal; I figured out that due to low memory the sys
Hi,
I am getting the error message below in a Ceph Nautilus cluster; do I need to worry
about this?
Oct 14 06:25:02 mon01 ceph-mds[35067]: 2019-10-14 06:25:02.209 7f55a4c48700
-1 received signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds
ceph-osd ceph-fuse
Oct 14 06:25:02 mon01 ceph-mds[35067]:
Memory usage was high even when backfills was set to "1".
On Mon, Sep 23, 2019 at 8:54 PM Robert LeBlanc wrote:
> On Fri, Sep 20, 2019 at 5:41 AM Amudhan P wrote:
> > I have already set "mon osd memory target to 1Gb" and I have set
> max-backfill from 1
Why does it use so much RAM?
I am planning to use only CephFS, no block or object storage; is there a way to
control the memory usage?
On Mon, Sep 23, 2019 at 10:56 AM Konstantin Shalygin wrote:
> On 9/22/19 10:50 PM, Amudhan P wrote:
> > Do you think 4GB RAM for two OSD's is low, e
iness*
>
>
>
> On Fri, 20 Sep 2019 20:41:09 +0800 *Amudhan P >* wrote
>
> Hi,
>
> I am using ceph mimic in a small test setup using the below configuration.
>
> OS: ubuntu 18.04
>
> 1 node running (mon,mds,mgr) + 4 core cpu and 4GB RAM and 1 Gb lan
>
Hi,
I am using ceph mimic in a small test setup using the below configuration.
OS: ubuntu 18.04
1 node running (mon,mds,mgr) + 4 core cpu and 4GB RAM and 1 Gb lan
3 nodes each having 2 osd's, disks are 2TB + 2 core cpu and 4G RAM and 1
Gb lan
1 node acting as cephfs client + 2 core cpu and 4G
Hi,
I am using ceph version 13.2.6 (mimic) on a test setup, trying out CephFS.
My current setup:
3 nodes; one node contains two bricks and the other 2 nodes contain a single brick
each.
The volume is 3-replica; I am trying to simulate a node failure.
I powered down one host and started getting messages on the other sy
I am also getting this error message on one node when the other host is down:
ceph -s
Traceback (most recent call last):
File "/usr/bin/ceph", line 130, in
import rados
ImportError: libceph-common.so.0: cannot map zero-fill pages
On Tue, Sep 10, 2019 at 4:39 PM Amudhan P wrote:
&g
/8f2559099bf54865adc95e5340d04447/system.journal corrupted
> or uncleanly shut down, renaming and replacing.
> [332951.019531] systemd[1]: Started Journal Service.
>
> On Tue, Sep 10, 2019 at 3:04 PM Amudhan P wrote:
>
> Hi,
>
> I am using ceph version 13.2.6 (mimic) on test set
d[6249]: File
/var/log/journal/8f2559099bf54865adc95e5340d04447/system.journal corrupted
or uncleanly shut down, renaming and replacing.
[332951.019531] systemd[1]: Started Journal Service.
On Tue, Sep 10, 2019 at 3:04 PM Amudhan P wrote:
> Hi,
>
> I am using ceph version 13.2.6 (mimic) on test s
Hi,
I am using ceph version 13.2.6 (mimic) on a test setup, trying out CephFS.
My current setup:
3 nodes; one node contains two bricks and the other 2 nodes contain a single brick
each.
The volume is 3-replica; I am trying to simulate a node failure.
I powered down one host and started getting messages on the other sy
R: 0.64/1.33 STDDEV: 2.43
regards
Amudhan P
at 10:49, Amudhan P wrote:
>
>> After leaving it for 12 hours the cluster status is now healthy, but why did it
>> take such a long time to backfill?
>> How do I fine-tune it in case the same kind of error pops up again?
>>
>> The backfilling is taking a while because max_ba
nd
>
> t: (+31) 299 410 414
> e: caspars...@supernas.eu
> w: www.supernas.eu
>
>
> On Thu, 29 Aug 2019 at 14:35, Amudhan P wrote:
>
>> output from "ceph -s "
>>
>> cluster:
>> id: 7c138e13-7b98-4309-b591-d4091a1742b4
>>
Hi,
How do i change "osd_memory_target" in ceph command line.
regards
Amudhan
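A minimal example, assuming Nautilus or later with the centralized config (the 2 GiB value is only illustrative):
  ceph config set osd osd_memory_target 2147483648
or, injected at runtime into running OSDs:
  ceph tell osd.* injectargs '--osd_memory_target 2147483648'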
size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 last_change 75 lfor 0/67 flags
hashpspool stripe_width 0 application cephfs
On Thu, Aug 29, 2019 at 6:13 PM Heðin Ejdesgaard Møller
wrote:
> What's the output of
> ceph osd pool ls detail
>
>
> On Thu, 2019-08-2
ceph -s, could you provide the output of
> ceph osd tree
> and specify what your failure domain is ?
>
> /Heðin
>
>
> On Thu, 2019-08-29 at 13:55 +0200, Janne Johansson wrote:
> >
> >
> > On Thu, 29 Aug 2019 at 13:50, Amudhan P wrote:
> > > H
Hi,
I am using ceph version 13.2.6 (mimic) on a test setup, trying out CephFS.
My ceph health status is showing a warning.
"ceph health"
HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded
(15.499%)
"ceph health detail"
HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects d