[ceph-users] HDFS with CEPH, only single RGW works with the hdfs

2018-05-23 Thread 한승진
Hello Cephers, Our team is currently trying to replace HDFS with CEPH object storage. However, there is a big problem: the "*hdfs dfs -put*" operation is very slow. I suspect the session handling between RGW and the Hadoop system, because only one RGW node works with Hadoop even though we have 4 RGWs. There see
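A common way to spread S3A traffic across several RGW instances is to put them behind a load balancer and point Hadoop at the virtual address. A minimal haproxy sketch, assuming the four RGWs serve civetweb on port 7480; the addresses are placeholders, not values from the thread:

    # haproxy.cfg fragment (placeholder addresses, assumed RGW port 7480)
    frontend rgw_front
        bind *:80
        default_backend rgw_back
    backend rgw_back
        balance roundrobin
        server rgw1 10.0.0.11:7480 check
        server rgw2 10.0.0.12:7480 check
        server rgw3 10.0.0.13:7480 check
        server rgw4 10.0.0.14:7480 check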

Re: [ceph-users] PG active+clean+remapped status

2018-01-01 Thread 한승진
Are all OSDs the same version? I recently experienced a similar situation. I upgraded all OSDs to the exact same version and re-set the pool configuration like below: ceph osd pool set min_size 5 I have a 5+2 erasure code; the important thing is not the value of min_size but the re-configuration, I think. I ho
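For reference, the min_size command also takes the pool name; a sketch for a hypothetical 5+2 erasure-coded pool named ecpool (the pool name is an assumption):

    # k=5, m=2: the pool keeps serving I/O as long as min_size chunks are up
    ceph osd pool get ecpool min_size        # show the current value
    ceph osd pool set ecpool min_size 5      # allow I/O with only the 5 data chunks available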

[ceph-users] Does ceph pg scrub error affect all of I/O in ceph cluster?

2017-08-03 Thread 한승진
Hi cephers, My ceph status went into HEALTH_ERR because of a pg scrub error. I thought all I/O would be blocked when the status of ceph is Error. However, ceph could operate normally even though ceph was in error status. There are two pools in the ceph cluster, which include separate nodes.(volume
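When only some PGs are inconsistent, the rest of the cluster keeps serving I/O; the usual way to locate and repair the affected PGs looks roughly like this (the PG id 3.1f is a placeholder):

    ceph health detail                                     # lists the inconsistent PGs
    rados list-inconsistent-obj 3.1f --format=json-pretty  # inspect the damaged objects in that PG
    ceph pg repair 3.1f                                    # ask the primary OSD to repair it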

[ceph-users] mon/osd cannot start with RDMA

2017-06-28 Thread 한승진
Hello Cephers! I am testing CEPH over RDMA now. I cloned the latest source code of ceph. I added the configs below to ceph.conf ms_type = async+rdma ms_cluster_type = async+rdma ms_async_rdma_device_name = mlx4_0 However, I get the same error message when I start the ceph-mon and ceph-osd services. The me
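For comparison, a minimal RDMA messenger setup is usually the ceph.conf lines above plus a raised locked-memory limit for the daemons, since the RDMA buffers must be pinned; the drop-in path below is an assumption, not something from the thread:

    # /etc/systemd/system/ceph-osd@.service.d/rdma.conf (same idea for ceph-mon@.service)
    [Service]
    LimitMEMLOCK=infinity

    # then: systemctl daemon-reload && systemctl restart ceph-osd@<id>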

[ceph-users] How to integrate rgw with hadoop?

2017-02-15 Thread 한승진
Hi all, I would like to integrate radosgw with hadoop. We are using hadoop version 2.7.3, so s3a could be a suitable plugin, but I can't find any guide for it. Could anybody help me? Thanks. John Haan
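A minimal sketch of pointing s3a at an RGW endpoint, assuming the hadoop-aws jar is on the classpath; the endpoint, bucket and keys are placeholders (normally these properties would go into core-site.xml rather than on the command line):

    hadoop fs -D fs.s3a.endpoint=http://rgw.example.com:7480 \
              -D fs.s3a.access.key=ACCESS_KEY \
              -D fs.s3a.secret.key=SECRET_KEY \
              -ls s3a://mybucket/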

Re: [ceph-users] node and its OSDs down...

2016-12-07 Thread 한승진
Hi, Because "down" and "out" mean different things to the ceph cluster. The CRUSH map of ceph depends on how many OSDs are in the cluster. The CRUSH map doesn't change when OSDs are down. However, the CRUSH map does change when the OSDs are definitively out. Data locations will also change, and therefore rebalancing starts.
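By default the monitors mark a down OSD out automatically after a grace period (mon_osd_down_out_interval, 600 seconds by default), and that is the point where CRUSH placement changes and rebalancing starts; the related commands look roughly like this (osd.12 is a placeholder):

    ceph osd tree          # shows which OSDs are down and/or out
    ceph osd out 12        # mark osd.12 out by hand, which triggers rebalancing
    ceph osd set noout     # temporarily prevent down OSDs from being marked out
    ceph osd unset noout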

Re: [ceph-users] OpenStack Keystone with RadosGW

2016-11-22 Thread 한승진
s radosgw doesn't support keystone identity version 3 yet. 2016-11-22 15:41 GMT+09:00 한승진 : > Hi All, > > I am trying to implement radosgw with Openstack as an object storage > service. > > I think there are 2 cases for using radosgw as an object storage > > First, Ke

[ceph-users] OpenStack Keystone with RadosGW

2016-11-21 Thread 한승진
Hi All, I am trying to implement radosgw with OpenStack as an object storage service. I think there are 2 cases for using radosgw as an object store. First, Keystone <-> Ceph connect directly, like the guide below: http://docs.ceph.com/docs/master/radosgw/keystone/ Second, use ceph as a back-en
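For the first case (RGW validating tokens against Keystone directly), the Keystone settings live in the rgw section of ceph.conf; a sketch with placeholder values, using the admin-token style from the linked guide (Jewel and later also add options for the Keystone v3 API):

    # ceph.conf, [client.radosgw.<name>] section -- placeholder values
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = admin, Member
    rgw keystone token cache size = 500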

[ceph-users] Is straw2 bucket type working well?

2016-10-31 Thread 한승진
Hi all, I tested the straw / straw2 bucket types. The Ceph documentation says: - the straw2 bucket type fixed several limitations in the original straw bucket - *the old straw buckets would change some mappings that should have changed when a weight was adjusted* - straw2 achieves the ori
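In the Hammer/Jewel era the bucket algorithm is switched by editing the decompiled CRUSH map; a sketch of the round trip (the file names are arbitrary, and clients must support straw2 before the new map is injected):

    ceph osd getcrushmap -o crushmap.bin        # dump the binary CRUSH map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it
    # edit crushmap.txt: change "alg straw" to "alg straw2" in the buckets to convert
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # inject the new map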

[ceph-users] When the kernel support JEWEL tunables?

2016-10-19 Thread 한승진
Hi all, When I try to map an rbd image through KRBD, it fails because of mismatched features. The client's OS is Ubuntu 16.04 and the kernel is 4.4.0-38. My original CRUSH tunables are below. root@Fx2x1ctrlserv01:~# ceph osd crush show-tunables { "choose_local_tries": 0, "choose_local_fallback_tries"
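With a 4.4 kernel the usual workarounds are either to lower the CRUSH tunables profile to one the kernel understands, or to strip the Jewel-era image features that krbd cannot handle; a sketch, with rbd/myimage as a placeholder image:

    # option 1: fall back to an older tunables profile (this triggers some data movement)
    ceph osd crush tunables hammer
    # option 2: disable the image features a 4.4 krbd does not support
    rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock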

Re: [ceph-users] ceph journal system vs filesystem journal system

2016-09-04 Thread 한승진
ite a description of the operation to the journal* and *apply the operation to the filesystem* How should I understand the above documentation? I would really appreciate your help. Thanks. 2016-09-01 19:09 GMT+09:00 huang jun : > 2016-09-01 17:25 GMT+08:00 한승진 : > > Hi all. > > &

[ceph-users] ceph journal system vs filesystem journal system

2016-09-01 Thread 한승진
Hi all. I'm very confused about the ceph journal system. Some people say the ceph journal works like a Linux journaling filesystem. Others say all data is written to the journal first and then written to the OSD data store. Does the Ceph journal write just the metadata of an object, or write all data of ob
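In FileStore the journal receives the full write (data and metadata), and the same operation is then applied to the backing filesystem; whether those two steps run strictly one after the other or in parallel is a FileStore setting. A small sketch of the related ceph.conf knobs, with illustrative values only:

    # [osd] section -- illustrative values
    osd journal size = 5120                 # journal size in MB
    filestore journal writeahead = true     # write to the journal first, then apply to the FS (xfs/ext4)
    filestore journal parallel = false      # btrfs can journal and apply in parallel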

[ceph-users] the reweight value of OSD is always 1

2016-08-31 Thread 한승진
Hi Cephers! The reweight value of an OSD is always 1 when we create and activate an OSD daemon. I use the ceph-deploy tool whenever I deploy a ceph cluster. Is there a default reweight value in the ceph-deploy tool? Can we adjust the reweight value when we activate an OSD daemon? ID WEIGHT TYPE NAME
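Note that ceph tracks two different weights: the CRUSH weight (derived from the disk size at creation) and the reweight column, which always starts at 1 and is only a temporary 0..1 override. A sketch of adjusting each, with osd.3 and the values as placeholders:

    ceph osd tree                        # WEIGHT is the CRUSH weight, REWEIGHT is the 0..1 override
    ceph osd crush reweight osd.3 1.81   # permanent CRUSH weight, usually the disk size in TiB
    ceph osd reweight 3 0.9              # temporary override between 0 and 1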

[ceph-users] Fwd: Ceph OSD suicide himself

2016-07-10 Thread 한승진
Hi cephers. I need your help with some issues. The ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs. I run 1 mon and 48 OSDs on 4 nodes (each node has 12 OSDs). I've experienced one of the OSDs killing itself; it always issues a suicide timeout message. Detailed logs are below. =
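For reference, the suicide timeout is a per-thread watchdog inside the OSD; the thresholds can be raised while the underlying slowness (often btrfs latency in this kind of setup) is investigated. The option names below exist in Jewel; the values are only illustrative:

    # [osd] section -- illustrative values, the defaults are lower
    osd op thread suicide timeout = 300
    filestore op thread suicide timeout = 300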

[ceph-users] Should I restart VMs when I upgrade ceph client version

2016-07-05 Thread 한승진
Hi Cephers, I implemented Ceph with OpenStack. Recently, I upgraded the Ceph servers from Hammer to Jewel. I also plan to upgrade the ceph clients, which are the OpenStack nodes. There are a lot of VMs running on the compute nodes. Should I restart the VMs after upgrading the compute nodes?

[ceph-users] VM shutdown because of PG increase

2016-06-28 Thread 한승진
Hi, Cephers. Our ceph version is Hammer (0.94.7). I implemented ceph with OpenStack; all instances use block storage as a local volume. After increasing the PG number from 256 to 768, many VMs shut down. That was a very strange case for me. Below is the VM's libvirt error log. osd/osd_types.cc: I
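One common mitigation is to raise pg_num in small steps and let pgp_num follow once each step settles, rather than jumping from 256 to 768 at once; a sketch for a hypothetical pool named volumes:

    ceph osd pool set volumes pg_num 384    # small step up
    ceph osd pool set volumes pgp_num 384   # start the actual data movement
    # wait for HEALTH_OK, then repeat towards the target (512, 768, ...)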

[ceph-users] How can I make daemon for ceph-dash

2016-06-15 Thread 한승진
I am using ceph-dash as a dashboard for ceph clusters. There is a contrib directory for apache, nginx, and wsgi in the ceph-dash sources. However, I cannot adapt those files to start ceph-dash as an apache daemon or any other daemon. How do I run ceph-dash as a daemon? Thanks. John Haan
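One simple way to run it outside apache/nginx is a systemd unit that starts the built-in Flask server; this is only a sketch, and the install path, user and entry point below are assumptions:

    # /etc/systemd/system/ceph-dash.service (path, user and entry point are assumptions)
    [Unit]
    Description=ceph-dash dashboard
    After=network.target

    [Service]
    User=ceph
    WorkingDirectory=/opt/ceph-dash
    ExecStart=/usr/bin/python /opt/ceph-dash/ceph-dash.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl enable --now ceph-dash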

[ceph-users] Journal partition owner's not change to ceph

2016-06-09 Thread 한승진
Hi Cephers, I have a Jewel ceph cluster on Ubuntu 16.04. What I am wondering is: whenever I reboot the OSD nodes, the OSD init service fails. The reason is that the owner of the journal partition is not changed. I have a btrfs file system and the devices are sdc1, sdd1, sde1.. and so on. They are a
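On Jewel the daemons run as the ceph user, so the journal partitions must end up owned by ceph:ceph after every boot; two common fixes are tagging the partition with the GPT type code that ceph's udev rules recognise, or adding a small local udev rule. A sketch, with /dev/sdb and partition 1 as placeholders:

    # option 1: set the ceph journal GPT type code so the shipped udev rules chown it at boot
    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb

    # option 2: a local rule, e.g. /etc/udev/rules.d/90-ceph-journal.rules
    KERNEL=="sdb?", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"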

[ceph-users] not change of journal devices

2016-06-09 Thread 한승진
Hi Cephers. I implemented Ceph with 12 HDDs (for OSDs) and 1 SSD (journal). The device map is like below (sdb4 is omitted.) /dev/sdc1 is for OSD.0 and /dev/sdb1 is for the journal /dev/sdd1 is for OSD.1 and /dev/sdb2 is for the journal /dev/sde1 is for OSD.2 and /dev/sdb3 is for the journal . . . /dev/sdn1 is for
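If the question is about re-pointing a journal at a new (or persistent) device: each FileStore OSD finds its journal through the journal symlink in its data directory, and the usual procedure looks roughly like this, with osd.0 and the partuuid as placeholders:

    systemctl stop ceph-osd@0
    ceph-osd -i 0 --flush-journal                  # flush the old journal into the data store
    ln -sf /dev/disk/by-partuuid/<PARTUUID> /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal                      # initialise the new journal
    systemctl start ceph-osd@0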

[ceph-users] Cache pool with replicated pool don't work properly.

2016-06-01 Thread 한승진
Hi All. My name is John Haan. I've been testing the cache pool using the Jewel version on Ubuntu 16.04. I implemented 2 types of cache tiers: the first one is cache pool + erasure pool and the other one is cache pool + replicated pool. I chose writeback cache mode. vdbench and rados bench are
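For reference, a writeback cache tier in front of a replicated or erasure-coded base pool is wired up roughly like this; the pool names and thresholds are placeholders:

    ceph osd tier add basepool cachepool              # attach the cache pool to the base pool
    ceph osd tier cache-mode cachepool writeback      # writeback mode
    ceph osd tier set-overlay basepool cachepool      # route client I/O through the cache
    ceph osd pool set cachepool hit_set_type bloom    # needed before flush/evict works sensibly
    ceph osd pool set cachepool target_max_bytes 100000000000
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4
    ceph osd pool set cachepool cache_target_full_ratio 0.8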