Hi, noted and thanks a lot.
Best Rgds
/stwong
-Original Message-
From: Ricardo Dias
Sent: Thursday, July 25, 2019 8:47 PM
To: ST Wong (ITSC) ; Manuel Lausch
; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Please help: change IP address of a cluster
Hi,
The monmaptool has a
:10.0.1.97:6789
The MON map resumed normal with both v1 and v2 for this MON after removing the MON
and adding it again, without running this command after mkfs, then starting the
services as usual.
Thanks a lot.
Rgds
/stwong
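For reference, a minimal sketch of removing and re-adding a MON so it registers both v1 and v2 addresses on Nautilus; the MON id and temporary paths are placeholders, not necessarily the ones used here:
# ceph mon remove <mon-id>
# ceph mon getmap -o /tmp/monmap
# ceph auth get mon. -o /tmp/mon.keyring
# ceph-mon -i <mon-id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
# systemctl start ceph-mon@<mon-id>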
-Original Message-
From: ceph-users On Behalf Of ST Wong (ITSC)
Sent:
luster is quite easy and in my opinion the most secure.
The complete IP change in our cluster worked without outage while the cluster
was in production.
I hope I could help you.
Regards
Manuel
On Fri, 19 Jul 2019 10:22:37 +
"ST Wong (ITSC)" wrote:
> Hi all,
>
> Our cl
Hi all,
Our cluster has to change to a new IP range in the same VLAN: 10.0.7.0/24 ->
10.0.18.0/23, while the IP addresses on the private network for OSDs remain unchanged.
I wonder if we can do that in either one of the following ways:
=
1.
a. Define a static route for 10.0.18.0/23 on each
somewhere, that ID would probably be
the one you just removed.
-Erik
On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich <paul.emmer...@croit.io> wrote:
On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza <ad...@redhat.com> wrote:
On Fri, Jul 5, 2019 at 6:23 AM ST Wong (ITSC)
Hi,
I intended to just run destroy and re-use the ID as stated in the manual, but it
doesn't seem to work.
It seems I'm unable to re-use the ID?
Thanks.
/stwong
From: Paul Emmerich
Sent: Friday, July 5, 2019 5:54 PM
To: ST Wong (ITSC)
Cc: Eugen Block ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users
4:54 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-volume failed after replacing disk
Hi,
did you also remove that OSD from CRUSH and from auth before recreating it?
ceph osd crush remove osd.71
ceph auth del osd.71
Regards,
Eugen
Quoting "ST Wong (ITSC)"
Hi all,
We replaced a faulty disk (one of N OSDs) and tried to follow the steps in
"Replacing an OSD" in
http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/, but got an
error:
# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv0
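For reference, the replacement sequence in the linked Nautilus doc is roughly the following; the device path is only a placeholder:
# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm zap /dev/sdX
# ceph-volume lvm create --bluestore --osd-id 71 --data /dev/sdX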
Hi,
We mounted a CephFS through the kernel module and FUSE. Both work, except that when we do
"df -h", the "Avail" value shown is the MAX AVAIL of the data pool in "ceph df".
I'm expecting it to match the max_bytes quota of the data pool.
An RBD mount doesn't show a similar behaviour.
Is this normal?
Thanks a lot.
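As a hedged aside, df should reflect a CephFS quota when the mounted directory itself carries one; a minimal sketch, with the paths, size and client name as examples only:
# setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/shared   # 100 GiB quota on a subdir
# ceph-fuse -n client.acapp3 -r /shared /mnt/quota                      # mount that subdir
# df -h /mnt/quota                                                      # Size/Avail now follow the quota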
Hi all,
We deployed a new Nautilus cluster using ceph-ansible. We enabled dashboard
through group vars.
Then we enabled pg autoscaler using command line "ceph mgr module enable
pg_autoscaler".
Shall we update group vars and deploy again to make the change permanent?
Sorry for the newbie question.
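A quick sketch of the autoscaler commands involved; the pool name is a placeholder, and whether ceph-ansible's group vars also need updating is a separate question:
# ceph mgr module enable pg_autoscaler
# ceph osd pool set <pool> pg_autoscale_mode on
# ceph osd pool autoscale-status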
mgr daemons do not support
module 'dashboard', pass --force to force enablement". Restarting the mgr
service didn't help.
/st wong
From: Brent Kennedy
Sent: Friday, June 21, 2019 11:57 AM
To: ST Wong (ITSC) ; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] problems after
Hi all,
We recently upgraded a testing cluster from 13.2.4 to 14.2.1. We encountered 2
problems:
1. Got a warning of BlueFS spillover, but the usage is low as it's a
testing cluster without much activity/data:
# ceph -s
  cluster:
    id:     cc795498-5d16-4b84-9584-1788d0458be9
    hea
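A hedged sketch of how one might inspect the spillover; the OSD id is a placeholder:
# ceph health detail                      # lists the OSDs reporting BLUEFS_SPILLOVER
# ceph daemon osd.<id> perf dump bluefs   # compare db_used_bytes with slow_used_bytes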
Update: deployment (ansible 2.6 + ceph-ansible 3.2) completed after cleaning up
everything deployed before.
Thanks a lot.
From: ceph-users On Behalf Of ST Wong (ITSC)
Sent: Tuesday, May 7, 2019 6:22 PM
To: solarflow99 ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops
Instead of deploying mimic, should we deploy luminous with the latest ansible and
ceph-ansible 4.0 or master, then upgrade to mimic?
Thanks a lot.
From: ceph-users On Behalf Of ST Wong (ITSC)
Sent: Tuesday, May 7, 2019 11:48 AM
To: solarflow99
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph
": "0.0.0.0:0/3",
"public_addr": "0.0.0.0:0/3"
},
{
"rank": 4,
"name": "cphmon5b",
"addr": "0.0.0.0:0/4",
"public_ad
yes, we’re using 3.2 stable, on RHEL 7. Thanks.
From: solarflow99
Sent: Tuesday, May 7, 2019 1:40 AM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops
You mention the version of ansible, that is right. How about the branch of
ceph-ansible?
Hi all,
I've a problem deploying mimic using ceph-ansible at the following step:
-- cut here ---
TASK [ceph-mon : collect admin and bootstrap keys] *
Monday 06 May 2019 17:01:23 +0800 (0:00:00.854) 0:05:38.899
fatal: [cphmon3a]: FAILED!
Hi,
Yes, the ansible user has sudo rights but needs a password prompt. Thanks.
Regards,
/st wong
From: Sinan Polat
Sent: Tuesday, April 23, 2019 12:46 PM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-ansible as non-root user
Hi,
Does your ansible user have sudo
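If the sudo user needs a password, one common approach is to prompt for it at run time; a sketch, with the inventory file and user name as examples only:
$ ansible-playbook -i hosts site.yml -u ansible --become --ask-become-pass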
Hi all,
We tried to deploy a new CEPH cluster using the latest ceph-ansible, run as a
non-root user (e.g. ansible), and got the following error while gathering facts:
-- cut here --
TASK [ceph-facts : create a local fetch directory if it does not exist]
Tuesday 23 April 2019
ected
PG) without losing data. You'll also have the certainty that there are always
two replicas per room, no guessing or hoping which room is more likely to fail.
If the overhead is too high, could EC be an option for your setup?
Regards,
Eugen
Quoting "ST Wong (ITSC)":
>
[truncated PG status output: versions 0'0, stamps 2019-02-12 04:47:28.183218 and 2019-02-11 01:20:51.276922]
Fyi. Sorry for the belated report.
Thanks a lot.
/st
From: Gregory Farnum
Sent: Monday, November 26, 2018 9:27 PM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-user
e client host?
Thanks a lot.
/st
-Original Message-
From: Jason Dillaman
Sent: Friday, January 25, 2019 10:04 PM
To: ST Wong (ITSC)
Cc: dilla...@redhat.com; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs
That doesn't appear to be an error -- that's just st
-Original Message-
From: ceph-users On Behalf Of ST Wong (ITSC)
Sent: Friday, January 25, 2019 5:58 PM
To: dilla...@redhat.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs
Hi, It works. Thanks a lot.
/st
-Original Message-
From: Jason Dillaman mailto:jdill...@redh
Hi, It works. Thanks a lot.
/st
-Original Message-
From: Jason Dillaman
Sent: Tuesday, January 22, 2019 9:29 PM
To: ST Wong (ITSC)
Cc: Ilya Dryomov ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs
Your "mon" cap should be "profile rbd" ins
caps osd = "allow rwx pool=2copy, allow rwx pool=4copy"
--- cut here ---
Thanks a lot.
/st
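A hedged sketch of the caps change that advice points at, keeping the pools from the quoted caps; the client name is only an example:
# ceph auth caps client.rbduser mon 'profile rbd' osd 'profile rbd pool=2copy, profile rbd pool=4copy'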
-Original Message-----
From: Ilya Dryomov
Sent: Monday, January 21, 2019 7:33 PM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs
Hi, we're trying mimic on a VM farm. It consists of 4 OSD hosts (8 OSDs) and 3
MONs. We tried mounting as RBD and CephFS (fuse and kernel mount) on
different clients without problems.
Then one day we performed a failover test and stopped one of the OSDs. Not sure if
it's related, but after that test
Hi all,
We've 8 OSD hosts, 4 in room 1 and 4 in room 2.
A pool with size = 3 using the following crush map is created, to cater for room
failure.
rule multiroom {
        id 0
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type
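For reference, a commonly used shape for a 2-room rule with size=3 looks roughly like this; it is a sketch assuming standard room/host buckets, not necessarily the exact rule used here:
rule multiroom {
        id 0
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type room        # pick 2 rooms
        step chooseleaf firstn 2 type host    # up to 2 hosts per room; size=3 gives a 2+1 split
        step emit
}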
Hi all,
We're using mimic and enabled the multiple fs flag. We can do a kernel mount of a
particular fs (e.g. fs1) with the mount option mds_namespace=fs1. However, this
is not working for ceph-fuse:
#ceph-fuse -n client.acapp3 -o mds_namespace=fs1 /tmp/ceph
2018-11-20 19:30:35.246 7ff5653edcc0 -1 i
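If I recall correctly, ceph-fuse takes the filesystem name as a config option rather than via -o; a sketch:
# ceph-fuse -n client.acapp3 --client_mds_namespace=fs1 /tmp/ceph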
Hi,
We're trying to test rbd on a small CEPH cluster running on VMs: 8 OSDs, 3 mon+mgr, using
rbd bench on 2 RBDs from 2 pools with different replication settings:
For pool 4copy:
---
rule 4copy_rule {
        id 1
        type replicated
        min_size 2
        max_size 10
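A sketch of the kind of bench run used for such a comparison; image names and sizes are examples only:
# rbd bench --io-type write --io-size 4K --io-total 1G 4copy/foo
# rbd bench --io-type write --io-size 4K --io-total 1G 2copy/bar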
, shall /etc/ceph/ceph.client.admin.keyring be removed in the ceph-ansible
client deployment task? Thanks.
Best Regards,
/st wong
From: Ashley Merrick
Sent: Friday, November 9, 2018 11:44 PM
To: ST Wong (ITSC)
Cc: Wido den Hollander ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mount rbd
wong
From: Ashley Merrick
Sent: Friday, November 9, 2018 10:51 PM
To: ST Wong (ITSC)
Cc: Wido den Hollander ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mount rbd read only
You could create a keyring that only has perms to mount the RBD and read-only
access to the MONs.
Depends if anyone tha
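A hedged sketch of such a keyring and a read-only map; the client name is an example, the pool is taken from earlier in the thread:
# ceph auth get-or-create client.rbd_ro mon 'profile rbd' osd 'profile rbd-read-only pool=4copy'
# rbd map 4copy/foo --id rbd_ro --read-only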
Rgds
/st wong
-Original Message-
From: ceph-users On Behalf Of Wido den
Hollander
Sent: Thursday, November 8, 2018 8:31 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mount rbd read only
On 11/8/18 1:05 PM, ST Wong (ITSC) wrote:
> Hi,
>
>
>
> We created
Hi,
We created a testing rbd block device image as follows:
--- cut here ---
# rbd create 4copy/foo --size 10G
# rbd feature disable 4copy/foo object-map fast-diff deep-flatten
# rbd --image 4copy/foo info
rbd image 'foo':
        size 10 GiB in 2560 objects
        order 22 (4 MiB object
        "description": "bluefs wal"
    },
    "/var/lib/ceph/osd/ceph-2/block.db": {
        "osd_uuid": "6d999288-a4a4-4088-b764-bf2379b4492b",
        "size": 524288000,
        "btime": "2018-10-18 15:59:06.175997",
Hi all,
We deployed a testing mimic CEPH cluster using bluestore. We can't run
ceph-bluestore-tool on an OSD; we get the following error:
---
# ceph-bluestore-tool show-label --dev *device*
2018-10-31 09:42:01.712 7f3ac5bb4a00 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.admin.keyrin
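For comparison, the invocation whose output appears above points --dev at the OSD's block/block.db path; a sketch, with the path as an example:
# ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-2/block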
Thanks.
/st wong
-Original Message-
From: Wido den Hollander
Sent: Tuesday, October 16, 2018 1:59 AM
To: solarflow99 ; ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] SSD for MON/MGR/MDS
On 10/15/2018 07:50 PM, solarflow99 wrote:
> I think the answer is, yes
Hi all,
We've got some servers with small SSDs but no hard disks other than the
system disks. While they're not suitable for OSDs, will the SSDs be useful for
running MON/MGR/MDS?
Thanks a lot.
Regards,
/st wong
Hi all, we're new to CEPH. We've some old machines redeployed for setting up a
CEPH cluster for our testing environment.
There are over 100 disks for OSDs. We will use replication with 2 copies. We
wonder if it's better to create pools on all OSDs, or to use some OSDs for
particular pools, for b
r span across 3 different
buildings, or be composed of 3 ceph clusters in 3 different buildings? Thanks.
Thanks again for your help.
Best Regards,
/ST Wong
-Original Message-
From: Oliver Freyermuth
Sent: Thursday, September 20, 2018 2:10 AM
To: ST Wong (ITSC)
Cc: Peter Wienemann ; ceph-users
th
Sent: Wednesday, September 19, 2018 5:28 PM
To: ST Wong (ITSC)
Cc: Peter Wienemann ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] backup ceph
Hi,
On 19.09.18 at 03:24, ST Wong (ITSC) wrote:
> Hi,
>
> Thanks for your information.
> May I know more about the backup destin
---
From: c...@jack.fr.eu.org
Sent: Wednesday, September 19, 2018 4:16 PM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] backup ceph
For cephfs & rgw, it all depends on your needs, as with rbd. You may want to
trust Ceph blindly, or you may back up all your data, just i
ssume that you are speaking of rbd only.
Taking snapshots of rbd volumes and keeping all of them on the cluster is fine.
However, this is no backup. A snapshot is only a backup if it is exported
off-site.
On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote:
> Hi,
>
> We're newbie to Ceph.
Hi,
We're newbies to Ceph. Besides using incremental snapshots with RBD to back up
data on one Ceph cluster to another running Ceph cluster, or using backup tools
like backy2, is there any recommended way to back up Ceph data? Someone
here suggested taking snapshots of RBD daily and keeps
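A minimal sketch of the incremental-snapshot approach between two clusters; pool, image, snapshot and cluster names are examples only:
# rbd snap create rbd/vm1@backup-20180918
# rbd export-diff --from-snap backup-20180917 rbd/vm1@backup-20180918 - | rbd --cluster backup import-diff - rbd/vm1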
another.
It's not really relevant to the question at hand, but thank you for satisfying my
curiosity.
On Mon, Apr 2, 2018 at 7:13 PM, ST Wong (ITSC)
mailto:s...@itsc.cuhk.edu.hk>> wrote:
There are multiple disks per server, and will have one OSD for each disk. Is
that okay?
Thanks again.
From:
There are multiple disks per server, and will have one OSD for each disk. Is
that okay?
Thanks again.
From: Donny Davis [mailto:do...@fortnebula.com]
Sent: Tuesday, April 03, 2018 10:12 AM
To: ST Wong (ITSC)
Cc: Ronny Aasen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] split brain case
Friday, March 30, 2018 3:18 AM
To: ST Wong (ITSC); ceph-users@lists.ceph.com
Subject: Re: [ceph-users] split brain case
On 29.03.2018 11:13, ST Wong (ITSC) wrote:
Hi,
Thanks.
> of course the 4 OSDs left working now want to self-heal by recreating all
> objects stored on the 4 split-off
eph.com] On Behalf Of Ronny
Aasen
Sent: Thursday, March 29, 2018 4:51 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] split brain case
On 29.03.2018 10:25, ST Wong (ITSC) wrote:
Hi all,
We put 8 (4+4) OSD and 5 (2+3) MON servers in server rooms in 2 buildings for
redundancy. The
Hi all,
We put 8 (4+4) OSD and 5 (2+3) MON servers in server rooms in 2 buildings for
redundancy. The buildings are connected through a direct connection,
while servers in each building have alternate uplinks. What will happen in
case the link between the buildings is broken (application server
Hi all,
We got some decommissioned servers from other projects for setting up OSDs.
They've 10 2TB SAS disks and 4 2TB SSDs.
We want to test bluestore and hope to place the wal and db devices on SSD.
Need advice on some newbie questions:
1. As there are more SAS than SSD, is it possible/recom
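A hedged sketch of sharing one SSD between several OSDs by carving it into LVs for block.db; device, VG and LV names are examples only:
# vgcreate ceph-db-ssd0 /dev/sdk
# lvcreate -L 300G -n db-sda ceph-db-ssd0
# ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-ssd0/db-sda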
Hi,
I tried to extend my experimental cluster with more OSDs running CentOS 7 but
failed with a warning and an error with the following steps:
$ ceph-deploy install --release luminous newosd1    # no error
$ ceph-deploy osd create newosd1 --data /dev/sdb
cut here --
Hi,
Thanks for your advice. Will try it out.
Best Regards,
/ST Wong
From: Maged Mokhtar [mailto:mmokh...@petasan.org]
Sent: Wednesday, February 14, 2018 4:20 PM
To: ST Wong (ITSC)
Cc: Luis Periquito; Kai Wagner; Ceph Users
Subject: Re: [ceph-users] Newbie question: stretch ceph cluster
Hi
, 2018 at 2:59 PM, Kai Wagner wrote:
> Hi and welcome,
>
>
> On 09.02.2018 15:46, ST Wong (ITSC) wrote:
>
> Hi, I'm new to CEPH and got a task to setup CEPH with kind of DR feature.
> We've 2 10Gb connected data centers in the same campus. I wonder if it's
>
Hi,
Thanks a lot,
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kai
Wagner
Sent: Friday, February 09, 2018 11:00 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Newbie question: stretch ceph cluster
Hi and welcome,
On 09.02.2018 15:46, ST Wong (ITSC
Hi, I'm new to CEPH and got a task to set up CEPH with a kind of DR feature.
We've 2 10Gb connected data centers in the same campus. I wonder if it's
possible to set up a CEPH cluster with the following components in each data center:
3 x mon + mds + mgr
3 x OSD (replication factor=2, between data
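A sketch of a CRUSH rule that keeps replicas spread across both data centers, assuming datacenter buckets exist in the map; it is illustrative only:
rule dc_replicated {
        id 2
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}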