Hello Cephers,
Our team is currently trying to replace HDFS with Ceph object storage.
However, there is a big problem: the "*hdfs dfs -put*" operation is
very slow.
I suspect the RGW sessions with the Hadoop system,
because only one RGW node handles the Hadoop traffic, even though we have 4 RGWs.
There see
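If every Hadoop client is configured against a single RGW hostname, all traffic will land on that one gateway. A common way to spread the load across all 4 RGWs is round-robin DNS or a small haproxy in front of them. A minimal haproxy sketch, assuming the gateways listen on civetweb's default port 7480 and using placeholder addresses:

    frontend rgw_frontend
        bind *:7480
        mode http
        default_backend rgw_backend

    backend rgw_backend
        mode http
        balance roundrobin
        server rgw1 10.0.0.11:7480 check
        server rgw2 10.0.0.12:7480 check
        server rgw3 10.0.0.13:7480 check
        server rgw4 10.0.0.14:7480 check

Hadoop would then point at the haproxy address instead of an individual gateway.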
Are all OSDs the same version?
I recently experienced a similar situation.
I upgraded all OSDs to the exact same version and reset the pool configuration
like below:
ceph osd pool set <pool-name> min_size 5
I have a 5+2 erasure code; I think the important thing is not the min_size
value itself but the re-configuration.
I ho
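A quick sketch, assuming a placeholder pool name of "ecpool", to confirm that the daemons really are on the same version and to look at the pool setting before changing it:

    # report the version of every running OSD daemon
    ceph tell osd.* version

    # inspect and then change min_size on the pool
    ceph osd pool get ecpool min_size
    ceph osd pool set ecpool min_size 5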
Hi cephers,
I saw the ceph status go to HEALTH_ERR because of a PG scrub error.
I thought all I/O would be blocked while ceph is in the error state.
However, ceph kept operating normally even though it was in the error state.
There are two pools in the ceph cluster which include separate
nodes. (volume
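In my experience scrub errors only mark the affected PGs inconsistent; client I/O elsewhere keeps working. A hedged sketch for tracking them down (the PG id 3.1f is a placeholder):

    # show which PGs are inconsistent
    ceph health detail

    # ask the primary OSD to repair a specific PG
    ceph pg repair 3.1f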
Hello Cephers!
I am testing Ceph over RDMA now.
I cloned the latest source code of ceph.
I added the configs below to ceph.conf:
ms_type = async+rdma
ms_cluster_type = async+rdma
ms_async_rdma_device_name = mlx4_0
However, I get the same error message when I start the ceph-mon and ceph-osd
services.
The me
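Two things worth checking before digging into the error itself, sketched below on the assumption that the nodes use Mellanox adapters and systemd: the device name in ceph.conf must match an RDMA device that actually exists on every node, and the daemons need to be allowed to lock enough memory for the RDMA queues.

    # confirm the device name used in ms_async_rdma_device_name
    ibv_devices
    ibv_devinfo -d mlx4_0

    # systemd caps locked memory by default; a drop-in such as
    # /etc/systemd/system/ceph-osd@.service.d/rdma.conf (path is an assumption)
    # with the following content lifts the cap:
    #   [Service]
    #   LimitMEMLOCK=infinity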
Hi all,
I would like to integrate radosgw with Hadoop.
We are using Hadoop version 2.7.3, so s3a seems like a suitable plugin.
However, I can't find any guide for it.
Could anybody help me?
Thanks.
John Haan
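A minimal sketch of a first s3a connectivity test, assuming hadoop-aws-2.7.3.jar and its matching aws-java-sdk jar are on the Hadoop classpath; the endpoint, keys and bucket name are placeholders, and the same properties would normally live in core-site.xml:

    hadoop fs -D fs.s3a.endpoint=http://rgw.example.com:7480 \
              -D fs.s3a.access.key=MYACCESSKEY \
              -D fs.s3a.secret.key=MYSECRETKEY \
              -ls s3a://mybucket/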
Hi
Because "down" and "out" are different to ceph cluster
Crush map of ceph is depends on how many osds are in ths cluster.
Crush map doesn't change when osds are down. However crush map would chage
when the osds are absolutelly out.
Data location also will change, there fore rebalancing starts.
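For planned maintenance the usual trick is to keep down OSDs from being marked out, so no rebalancing is triggered; a short sketch (the OSD id is a placeholder):

    # prevent down OSDs from being marked out, and undo it afterwards
    ceph osd set noout
    ceph osd unset noout

    # mark a single OSD out (or back in) by hand
    ceph osd out 3
    ceph osd in 3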
Radosgw doesn't support Keystone identity version 3 yet.
2016-11-22 15:41 GMT+09:00 한승진 :
> Hi All,
>
> I am trying to implement radosgw with Openstack as an object storage
> service.
>
> I think there are 2 cases for using radosgw as an object storage
>
> First, Ke
Hi All,
I am trying to implement radosgw as an object storage service with OpenStack.
I think there are 2 cases for using radosgw as object storage.
First, Keystone <-> Ceph connected directly,
like in the guide below:
http://docs.ceph.com/docs/master/radosgw/keystone/
Second, use ceph as a back-en
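For the first case (Keystone talking to radosgw directly), a minimal ceph.conf sketch for the gateway section, assuming the admin-token style of configuration; the section name, URL and token are placeholders:

    [client.rgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = admin, Member, _member_
    rgw keystone token cache size = 500
    rgw s3 auth use keystone = true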
Hi all,
I tested the straw / straw2 bucket types.
The Ceph documentation says the following:
- straw2 bucket type fixed several limitations in the original straw
bucket
- *the old straw buckets would change some mapping that should have
changed when a weight was adjusted*
- straw2 achieves the ori
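A hedged sketch of how existing buckets can be switched from straw to straw2 by editing the decompiled CRUSH map; this requires clients that support CRUSH_V4 (hammer tunables or newer) and will move some data:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    sed -i 's/alg straw$/alg straw2/' crush.txt
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new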
Hi all,
When I try to map an rbd image through KRBD, it fails because of mismatched
features.
The client's OS is Ubuntu 16.04 and the kernel is 4.4.0-38.
My original CRUSH tunables are below.
root@Fx2x1ctrlserv01:~# ceph osd crush show-tunables
{
"choose_local_tries": 0,
"choose_local_fallback_tries"
*write a description of the operation to the journal* and *apply
the operation to the filesystem*
How should I understand the document above?
I would really appreciate your help.
Thanks.
2016-09-01 19:09 GMT+09:00 huang jun :
> 2016-09-01 17:25 GMT+08:00 한승진 :
> > Hi all.
> >
Hi all.
I'm very confused about the ceph journal system.
Some people say the ceph journal works like a Linux journaling filesystem.
Others say all data is written to the journal first and then written
to the OSD data store.
Does the Ceph journal write just the metadata of an object, or all the
data of the ob
Hi Cephers!
The reweight value of an OSD is always 1 when we create and activate an OSD
daemon.
I use the ceph-deploy tool whenever I deploy a ceph cluster.
Is there a default reweight value in the ceph-deploy tool?
Can we adjust the reweight value when we activate an OSD daemon?
ID WEIGHT TYPE NAME
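For reference, a sketch of the two knobs involved; the OSD id and values are placeholders. The REWEIGHT column (the override weight, 0.0 to 1.0) defaults to 1 and can be changed once the OSD is up, while the CRUSH weight is normally derived from the disk size in TiB:

    # override weight shown in the REWEIGHT column
    ceph osd reweight 7 0.8

    # CRUSH weight shown in the WEIGHT column
    ceph osd crush reweight osd.7 1.81929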
Hi cephers.
I need your help for some issues.
The ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
I run 1 MON and 48 OSDs across 4 nodes (each node has 12 OSDs).
I've seen one of the OSDs kill itself.
It always issues a suicide timeout message.
Below are the detailed logs.
=
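If the OSD is simply too slow (btrfs can stall badly) rather than broken, raising the suicide timeouts can keep it alive long enough to investigate; the values below are only placeholders and this does not fix the underlying slowness:

    [osd]
    osd op thread suicide timeout = 300
    filestore op thread suicide timeout = 600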
Hi Cephers,
I implemented Ceph with OpenStack.
Recently, I upgraded the Ceph servers from Hammer to Jewel.
I also plan to upgrade the ceph clients, which are the OpenStack nodes.
There are a lot of VMs running on the compute nodes.
Should I restart the VMs after upgrading the compute nodes?
Hi, Cephers.
Our ceph version is Hammer(0.94.7).
I implemented ceph with OpenStack; all instances use block storage as a
local volume.
After increasing the PG number from 256 to 768, many VMs shut down.
That was a very strange case for me.
Below is a VM's libvirt error log.
osd/osd_types.cc: I
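One thing worth trying next time: raise pg_num in small steps and follow each step with pgp_num, waiting for the cluster to settle in between, so the peering/backfill storm stays manageable. A sketch with placeholder pool name and sizes:

    ceph osd pool set volumes pg_num 384
    ceph osd pool set volumes pgp_num 384
    # wait for HEALTH_OK, then continue
    ceph osd pool set volumes pg_num 512
    ceph osd pool set volumes pgp_num 512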
I am using ceph-dash as a dashboard for ceph clusters.
There are contrib directories for apache, nginx, and wsgi in the ceph-dash
sources.
However, I can't adapt those files to start ceph-dash as an apache daemon
or any other kind of daemon.
How to run ceph-dash as a daemon?
thanks.
John Haan
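One simple way, assuming a checkout under /opt/ceph-dash and its bundled standalone script (paths are assumptions), is a small systemd unit wrapping it:

    # /etc/systemd/system/ceph-dash.service
    [Unit]
    Description=ceph-dash dashboard
    After=network.target

    [Service]
    ExecStart=/usr/bin/python /opt/ceph-dash/ceph-dash.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # then
    systemctl daemon-reload
    systemctl enable --now ceph-dash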
Hi Cephers,
I have a Jewel ceph cluster on Ubuntu 16.04.
What I am wondering about is that whenever I reboot the OSD nodes, the OSD init
service fails.
The reason is that the owner of the journal partition is not changed.
I have a btrfs filesystem and the devices are sdc1, sdd1, sde1, and so on.
They are a
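A hedged sketch of the usual fixes, assuming the journals live on /dev/sdb partitions (the actual journal device is not shown above): a one-off chown works until the next reboot, while tagging the partitions with the Ceph journal partition type GUID lets the udev rules shipped with Jewel set the ownership automatically at boot:

    # one-off fix, lost on reboot
    chown ceph:ceph /dev/sdb1 /dev/sdb2 /dev/sdb3

    # persistent fix: set the Ceph journal type GUID on each journal partition
    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    sgdisk --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb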
Hi Cephers.
I implemented Ceph with 12 HDDs (for OSDs) and 1 SSD (for journals).
The device map is like below (sdb4 is omitted):
/dev/sdc1 is for OSD.0 and /dev/sdb1 is for Journal
/dev/sdd1 is for OSD.1 and /dev/sdb2 is for Journal
/dev/sde1 is for OSD.2 and /dev/sdb3 is for Journal
.
.
.
/dev/sdn1 is for
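For reference, a hedged ceph-deploy sketch for one OSD in this layout (the hostname is a placeholder); the same command is repeated for each data disk / journal partition pair:

    ceph-deploy osd create osdnode1:sdc:/dev/sdb1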
Hi All.
My name is John Haan.
I've been testing cache pools using the Jewel version on Ubuntu 16.04.
I implemented 2 types of cache tiers:
the first one is cache pool + erasure pool and the other one is cache pool +
replicated pool.
I chose writeback as the cache mode.
vdbench and rados bench are
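For completeness, a hedged sketch of how such a writeback tier is wired up (pool names and thresholds are placeholders):

    # attach cachepool in front of basepool in writeback mode
    ceph osd tier add basepool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay basepool cachepool

    # basic sizing/flushing knobs on the cache pool
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool target_max_bytes 100000000000
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4
    ceph osd pool set cachepool cache_target_full_ratio 0.8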