You will have to consider that, in the real world, whoever built the cluster might
not have documented the dangerous option to make support staff or a successor aware
of it. Thus any experimental feature considered not safe for production should be
flagged by a warning message in 'ceph health' and in the logs, either log it
p
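For context, a minimal sketch of checking what the cluster already surfaces, plus the explicit switch newer releases ended up using to gate experimental features; the config option below may not exist on the version discussed here, and the feature-name value is a placeholder:

    # Show the warnings the cluster currently raises
    ceph health detail

    # Newer releases gate experimental features behind an explicit ceph.conf
    # switch (assumption: not present in all versions; value is a placeholder)
    [global]
    enable experimental unrecoverable data corrupting features = <feature-name>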
Hi,
The Ceph cluster we are running had a few OSDs approaching 95% usage 1+ weeks
ago, so I ran a reweight to balance them out, in the meantime instructing the
application to purge data that was no longer required. But after the large data
purge was issued from the application side (all OSDs' usage dropped below 20%), the
cl
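For anyone following the thread, a sketch of the usual commands for this kind of rebalancing; 'ceph osd df' may not exist on older releases, and the OSD id and weight below are placeholders:

    # Overall and per-pool usage
    ceph df

    # Per-OSD utilization (newer releases only)
    ceph osd df

    # Reweight OSDs whose utilization exceeds 120% of the average
    ceph osd reweight-by-utilization 120

    # Or adjust a single OSD by hand (weight between 0.0 and 1.0)
    ceph osd reweight 12 0.85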
Sage,
Even with a cluster file system, a fencing mechanism is still needed to allow a
SCSI device to be shared by multiple hosts. What kind of SCSI reservation does
RBD currently support?
Fred
Sent from my Samsung Galaxy S3
On Oct 20, 2014 4:42 PM, "Sage Weil" wrote:
> On Mon, 20 Oct 2014, Dianis Dimoglo wr
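For reference, RBD itself only provides advisory locks through the rbd CLI; anything like SCSI persistent reservations would have to come from an iSCSI target layered on top of the image, which is an assumption about the setup here. A sketch with made-up image and client names:

    # Take an advisory lock before mapping the image on a host
    rbd lock add rbd/vmdisk1 host-a

    # See which client currently holds the lock (prints e.g. client.4235)
    rbd lock list rbd/vmdisk1

    # Break the lock of a failed host by hand (manual fencing)
    rbd lock remove rbd/vmdisk1 host-a client.4235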
I'm setting up a federated gateway following
https://ceph.com/docs/master/radosgw/federated-config/. It seems one
cluster can have multiple instances, each serving a different zone (be it master
or slave), but it's not clear whether I can have multiple radosgw/httpd
instances in the same cluster to serve
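As a rough sketch of what multiple radosgw instances in one cluster looked like in ceph.conf under the federated setup of that era, with instance, zone, and host names made up for illustration:

    [client.radosgw.us-east-1]
        host = gateway1
        rgw region = us
        rgw zone = us-east
        rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock
        keyring = /etc/ceph/ceph.client.radosgw.keyring

    [client.radosgw.us-west-1]
        host = gateway2
        rgw region = us
        rgw zone = us-west
        rgw socket path = /var/run/ceph/client.radosgw.us-west-1.sock
        keyring = /etc/ceph/ceph.client.radosgw.keyring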
I have been looking for documentation on the DR procedure for the Federated
Gateway as well, without much luck. Can somebody from Inktank comment on
that?
In the event of a site failure, what is the current procedure for switching the
master/secondary zone roles? Or does Ceph not have that capability
yet?
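Whether this is the complete supported procedure is exactly the open question, but the federated-config docs of that era suggest the failover amounts to making the surviving zone the master in the region map; a hedged sketch with illustrative zone and instance names:

    # On the surviving site, dump the region definition
    radosgw-admin region get --name client.radosgw.us-west-1 > region.json

    # Edit region.json so "master_zone" points at the surviving zone,
    # then push it back and rebuild the region map
    radosgw-admin region set --name client.radosgw.us-west-1 < region.json
    radosgw-admin regionmap update --name client.radosgw.us-west-1

    # Restart the radosgw service so it picks up its new role
    # (init script name varies by distro)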
John Wilkins
wrote:
> Did you run ceph-deploy in the directory where you ran ceph-deploy new and
> ceph-deploy gatherkeys? That's where the monitor bootstrap key should be.
>
>
> On Mon, Jun 16, 2014 at 8:49 AM, Fred Yang
> wrote:
>
>> I'm adding three OSD node
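For anyone hitting the same thing, the point is that ceph-deploy looks for the bootstrap keyrings in the working directory where the cluster was created; a minimal sketch with placeholder hostnames:

    # All of these must run from the same admin working directory
    mkdir my-cluster && cd my-cluster
    ceph-deploy new mon1
    ceph-deploy mon create-initial
    ceph-deploy gatherkeys mon1

    # The bootstrap keyrings land next to ceph.conf in that directory
    ls *.keyring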
I'm adding three OSD nodes (36 osds in total) to an existing 3-node cluster (35
osds) using ceph-deploy. After the disks were prepared and the OSDs activated, the
cluster re-balanced and shows all pgs active+clean:
osdmap e820: 72 osds: 71 up, 71 in
pgmap v173328: 15920 pgs, 17 pools, 12538 MB data, 3903
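For completeness, a sketch of the prepare/activate steps mentioned above, with placeholder hostnames and device paths:

    # Run from the ceph-deploy admin directory, once per new disk
    ceph-deploy disk zap cephnode4:sdb
    ceph-deploy osd prepare cephnode4:sdb
    ceph-deploy osd activate cephnode4:sdb1

    # Watch the rebalance until all pgs report active+clean
    ceph -w
    ceph osd tree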
't work; this cluster is running on Emperor, and I'm not sure whether that will
make any difference.
Fred
On Jun 13, 2014 7:51 AM, "Wido den Hollander" wrote:
> On 06/13/2014 01:41 PM, Fred Yang wrote:
>
>> Thanks, John.
>>
>> That seems like it will take care of m
ur question, but I would
> definitely have a look at:
> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
>
> There are some important steps in there for monitors.
>
>
> On Wed, Jun 11, 2014 at 12:08 PM, Fred Yang
> wrote:
>
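The page linked above boils down to editing the monmap by hand when a monitor's address has to change; a hedged sketch with a made-up monitor id and address:

    # Grab the current monmap from the cluster
    ceph mon getmap -o /tmp/monmap

    # Remove the old entry and re-add the monitor with its new address
    monmaptool --rm a /tmp/monmap
    monmaptool --add a 10.1.2.10:6789 /tmp/monmap

    # Stop the monitor, inject the edited map, update ceph.conf, restart
    ceph-mon -i a --inject-monmap /tmp/monmap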
We need to move the Ceph cluster to a different network segment for
interconnectivity between the mons and osds; does anybody have a procedure for
how that can be done? Note that the host name references will change, so an
osd host originally referenced as cephnode1 will, in the new segment,
be cephnod
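On the host-name side specifically, the references live in the CRUSH map, so one hedged approach (bucket names are illustrative) is to edit and re-inject it:

    # Export and decompile the CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # Rename the host buckets in crushmap.txt (e.g. cephnode1 -> its new name),
    # then recompile and inject it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new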
I have to say I'm shocked to see that the suggestion is rbd import/export if
'you care the data'. This kind of operation is a common use case and should
be an essential part of any distributed storage system. What if I have a
hundred-node cluster that has been running for years and I need to do a
hardware refresh? There are no c
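For readers landing here, the suggestion being objected to is along these lines; the cluster conf paths and image name are placeholders:

    # Stream an image out of the old cluster and into the new one
    rbd -c /etc/ceph/old-cluster.conf export rbd/vm-disk-1 - | \
      rbd -c /etc/ceph/new-cluster.conf import - rbd/vm-disk-1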
On May 6, 2014 7:12 AM, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
>
> 2014-05-06 13:08 GMT+02:00 Dan Van Der Ster :
> > I've followed this recipe successfully in the past:
> >
> >
http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_
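For reference, the gist of the journal-move recipe behind that (truncated) link, as far as it can be reconstructed; the OSD id and journal device are placeholders:

    # Keep the cluster from rebalancing while the OSD is down
    ceph osd set noout

    # Stop the OSD and flush its journal
    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal

    # Point the journal symlink at the new partition and recreate it
    ln -sf /dev/sdg1 /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal

    # Bring the OSD back and allow rebalancing again
    service ceph start osd.12
    ceph osd unset noout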