Re: [ceph-users] experimental features

2014-12-08 Thread Fred Yang
You will have to consider that in the real world, whoever built the cluster might not document the dangerous option to make support staff or successors aware. Thus any experimental feature considered not safe for production should be included in a warning message in 'ceph health', and logs, either log it p

[ceph-users] Negative number of objects degraded for extended period of time

2014-11-13 Thread Fred Yang
Hi, The Ceph cluster we are running had a few OSDs approaching 95% full 1+ weeks ago, so I ran a reweight to balance it out, in the meantime instructing the application to purge data not required. But after the large data purge was issued from the application side (all OSDs' usage dropped below 20%), the cl
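The reweight step described above can be sketched roughly as follows. The OSD id and weight values are hypothetical examples, and the `run` wrapper only prints each command, so the sketch is safe to execute without a live cluster:

```shell
# Rough sketch of rebalancing near-full OSDs (hypothetical OSD id/weight).
# 'run' prints each command instead of executing it; invoke the commands
# directly on a live cluster.
run() { echo "would run: $*"; }

run ceph osd reweight 12 0.85            # lower the override weight of osd.12
run ceph osd reweight-by-utilization 120 # or reweight OSDs above 120% of average usage
run ceph -s                              # watch recovery until pgs are active+clean
```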

Re: [ceph-users] Ceph RBD

2014-10-20 Thread Fred Yang
Sage, Even with a cluster file system, it will still need a fencing mechanism to allow a SCSI device to be shared by multiple hosts. What kind of SCSI reservation does RBD currently support? Fred Sent from my Samsung Galaxy S3 On Oct 20, 2014 4:42 PM, "Sage Weil" wrote: > On Mon, 20 Oct 2014, Dianis Dimoglo wr
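For context on the fencing question: RBD does not emulate SCSI persistent reservations, but it does provide advisory locks via the `rbd lock` subcommands, which can serve as a coordination primitive. A minimal sketch, with hypothetical pool/image/lock names; the `run` wrapper only prints the commands:

```shell
# Sketch: RBD advisory locking (not SCSI reservations).
# Pool, image, and lock ids below are hypothetical examples.
run() { echo "would run: $*"; }

run rbd lock add rbd/shared-disk host-a-lock   # take an advisory lock on the image
run rbd lock list rbd/shared-disk              # show current locker(s)
# Releasing requires the locker id reported by 'rbd lock list':
# rbd lock remove rbd/shared-disk host-a-lock <locker-id>
```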

[ceph-users] radosgw scalability questions

2014-07-07 Thread Fred Yang
I'm setting up a federated gateway following https://ceph.com/docs/master/radosgw/federated-config/. It seems one cluster can have multiple instances serving multiple zones (be it master or slave), but it's not clear whether I can have multiple radosgw/httpd instances in the same cluster to serve
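One way to express the multiple-instances-per-zone idea is with separate `[client.radosgw.*]` sections in ceph.conf, one per gateway daemon, both pointing at the same zone and fronted by a load balancer. This is a sketch under that assumption; all host, zone, and socket names are hypothetical:

```ini
[client.radosgw.zone1-a]
host = gw-host-1
rgw zone = zone1
rgw socket path = /var/run/ceph/radosgw.zone1-a.sock

[client.radosgw.zone1-b]
host = gw-host-2
rgw zone = zone1
rgw socket path = /var/run/ceph/radosgw.zone1-b.sock
```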

Re: [ceph-users] about rgw region and zone

2014-06-17 Thread Fred Yang
I have been looking for documentation on the DR procedure for the Federated Gateway as well, without much luck. Can somebody from Inktank comment on that? In the event of a site failure, what's the current procedure to switch the master/secondary zone roles? Or does Ceph currently not have that capability yet?

Re: [ceph-users] cephx authentication issue

2014-06-17 Thread Fred Yang
hn Wilkins wrote: > Did you run ceph-deploy in the directory where you ran ceph-deploy new and > ceph-deploy gatherkeys? That's where the monitor bootstrap key should be. > > > On Mon, Jun 16, 2014 at 8:49 AM, Fred Yang > wrote: > >> I'm adding three OSD node

[ceph-users] cephx authentication issue

2014-06-16 Thread Fred Yang
I'm adding three OSD nodes (36 OSDs in total) to an existing 3-node cluster (35 OSDs) using ceph-deploy. After the disks were prepared and the OSDs activated, the cluster re-balanced and shows all pgs active+clean: osdmap e820: 72 osds: 71 up, 71 in pgmap v173328: 15920 pgs, 17 pools, 12538 MB data, 3903
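The reply in this thread points at running ceph-deploy from the directory that holds the bootstrap keys. A minimal sketch of that flow; hostnames and device paths are hypothetical, and the `run` wrapper only prints the commands:

```shell
# Sketch: adding OSDs with ceph-deploy, run from the directory created by
# 'ceph-deploy new' (where ceph.conf and the bootstrap keys live).
# Hostnames/devices are hypothetical examples.
run() { echo "would run: $*"; }

run ceph-deploy gatherkeys mon1            # fetch bootstrap keys from a monitor
run ceph-deploy osd prepare node4:/dev/sdb # prepare a disk on the new node
run ceph-deploy osd activate node4:/dev/sdb1
```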

Re: [ceph-users] Moving Ceph cluster to different network segment

2014-06-13 Thread Fred Yang
't work, this cluster is running on Emperor and not sure whether that will make any difference. Fred On Jun 13, 2014 7:51 AM, "Wido den Hollander" wrote: > On 06/13/2014 01:41 PM, Fred Yang wrote: > >> Thanks, John. >> >> That seems will take care of m

Re: [ceph-users] Moving Ceph cluster to different network segment

2014-06-13 Thread Fred Yang
ur question, but I would > definitely have a look at: > http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address > > There are some important steps in there for monitors. > > > On Wed, Jun 11, 2014 at 12:08 PM, Fred Yang > wrote: > &

[ceph-users] Moving Ceph cluster to different network segment

2014-06-11 Thread Fred Yang
We need to move the Ceph cluster to a different network segment for interconnectivity between mon and osd; does anybody have the procedure for how that can be done? Note that the host name references will be changed, so an OSD host originally referenced as cephnode1 will in the new segment be cephnod
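The reply in this thread points at the "changing a monitor's IP address" doc; the monmap-editing procedure there can be sketched as follows. The monitor id and new address are hypothetical, and the `run` wrapper only prints the commands (the real steps require stopping the monitors before injecting):

```shell
# Sketch of the monmap-edit method for changing a monitor's address.
# Monitor id 'mon1' and the new IP are hypothetical examples.
run() { echo "would run: $*"; }

run ceph mon getmap -o /tmp/monmap                        # export the current monmap
run monmaptool --rm mon1 /tmp/monmap                      # remove the old address
run monmaptool --add mon1 192.168.10.11:6789 /tmp/monmap  # add the new address
# stop the monitor daemon, then inject the edited map and restart:
run ceph-mon -i mon1 --inject-monmap /tmp/monmap
```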

Re: [ceph-users] Migrate whole clusters

2014-05-13 Thread Fred Yang
I have to say I'm shocked to see the suggestion is rbd import/export if 'you care about the data'. This kind of operation is a common use case and should be an essential part of any distributed storage. What if I have a hundred-node cluster running for years and need to do a hardware refresh? There are no c
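For reference, the export/import approach being criticized looks roughly like this per image. Pool, image, and conf paths are hypothetical, and the `run` wrapper only prints the commands:

```shell
# Sketch: copying one RBD image between clusters via export/import.
# Image names and cluster conf paths are hypothetical examples.
run() { echo "would run: $*"; }

run rbd -c /etc/ceph/old-cluster.conf export rbd/vm-disk-1 -  # '-' streams to stdout
run rbd -c /etc/ceph/new-cluster.conf import - rbd/vm-disk-1  # '-' reads from stdin
# On a live system the two are piped together:
# rbd -c old.conf export rbd/vm-disk-1 - | rbd -c new.conf import - rbd/vm-disk-1
```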

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Fred Yang
On May 6, 2014 7:12 AM, "Gandalf Corvotempesta" < gandalf.corvotempe...@gmail.com> wrote: > > 2014-05-06 13:08 GMT+02:00 Dan Van Der Ster : > > I've followed this recipe successfully in the past: > > > > http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_
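The journal-move recipe referenced in this thread follows the usual flush/recreate pattern, which can be sketched as below. The OSD id and init commands are hypothetical examples, and the `run` wrapper only prints the commands:

```shell
# Sketch: replacing an OSD's journal device (hypothetical osd.12).
run() { echo "would run: $*"; }

run service ceph stop osd.12        # stop the OSD first
run ceph-osd -i 12 --flush-journal  # flush pending journal writes to the store
# swap the journal device / fix the journal symlink in the OSD data dir, then:
run ceph-osd -i 12 --mkjournal      # initialize the new journal
run service ceph start osd.12
```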