Good Morning,
I have a production Ceph cluster at the University I work at, which runs
brilliantly.
However, I'd like your advice on the best way of sharing CIFS / SMB from Ceph.
So far I have three ideas:
1. Use a server as a head node, with an RBD mapped, then just export with
Samba
Keep it simple is my approach: #1.
If needed, add rudimentary HA with Pacemaker.
http://linux-ha.org/wiki/Samba
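For what it's worth, a minimal sketch of what option #1 can look like (pool,
image, device and share names below are only examples, not anything from
Alex's actual setup):

    # map the RBD image on the head node and put a filesystem on it
    rbd create --size 102400 rbd/smbshare
    rbd map rbd/smbshare                  # shows up as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mkdir -p /export/smbshare
    mount /dev/rbd0 /export/smbshare

    # then export it via smb.conf
    [smbshare]
        path = /export/smbshare
        read only = no
        browseable = yes

Pacemaker can then manage the map/mount/smbd steps as one resource group for
the rudimentary HA mentioned above.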
Cheers
Wade
Hi everybody,
My OpenStack system uses Ceph as the backend for Glance, Cinder and Nova. In
the future, we intend to build a new Ceph cluster.
I can re-connect the current OpenStack with the new Ceph system.
After that, I tried exporting the rbd images and importing them into the new
Ceph, but the VMs and Volumes were clones of the Glance images.
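I am considering flattening the clones first, so they no longer depend on the
Glance parent image, and then exporting/importing them, roughly like this
(pool and image names are made up, and 'new' is assumed to be the cluster name
of the new Ceph's config file):

    rbd flatten volumes/volume-1234
    rbd export volumes/volume-1234 - | rbd --cluster new import - volumes/volume-1234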
Hi,
I opened an issue; the link is as follows:
http://tracker.ceph.com/issues/14081
Thanks
Sunfch
2015-12-15 2:33 GMT+08:00 Yehuda Sadeh-Weinraub:
> On Sun, Dec 13, 2015 at 7:27 AM, 孙方臣 wrote:
> > Hi, All,
> >
> > I'm setting up federated gateway. One is master zone, the other is slave
Currently, we use approach #1 with kerberized NFSv4 and Samba (with AD as
KDC) - desperately waiting for CephFS :-)
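In case it is useful, a minimal sketch of that kind of setup (realm, paths and
options are illustrative, not our actual config):

    # /etc/exports -- kerberized NFSv4 export of the RBD-backed filesystem
    /export/ceph  *(rw,sync,sec=krb5p,fsid=0)

    # smb.conf -- Samba joined to AD, with AD acting as the KDC
    [global]
        security = ads
        realm = AD.EXAMPLE.EDU
        workgroup = AD
        kerberos method = secrets and keytab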
Best,
Martin
Hi folks,
This morning, one of my MDSes dropped into "replaying":
mds cluster is degraded
mds.0 at 192.168.1.31:6800/12550 rank 0 is replaying journal
and the ceph filesystem seems to be unavailable to the clients. Is there
any way to see the progress of this replay? I don't see any indication
Dear Cephfs gurus.
I have two questions regarding ACL support on cephfs.
1) The last time we tried ACLs, we saw that they were only working properly
with the kernel module, and I wonder what the present status of ACL support
in ceph-fuse is. Can you clarify that?
2) If ceph-fuse is still not proper
John Spray writes:
> Anyway -- you'll need to do some local poking of the MDS to work out
> what the hold up is. Turn up MDS debug logging[1] and see what
> it's saying during the replay. Also, you can use the performance counters
> "ceph daemon mds.<id> perf dump" and see which are incrementing
Hi,
I have a setup with two MDS in active/standby configuration. During
times of high network load / network congestion, the active MDS is
bounced between both instances:
1. mons(?) decide that MDS A is crashed/not available due to missing
heartbeats
2015-12-15 16:38:08.471608 7f880df10700
Hey cephers,
In the interests of transparency, I wanted to share the resulting
minutes from last week’s very first Ceph Advisory Board meeting:
http://tracker.ceph.com/projects/ceph/wiki/CAB_2015-12-09
We are looking to meet monthly to discuss the following:
* Pending development tasks for the
John Spray writes:
> If you haven't already, also
> check the overall health of the MDS host, e.g. is it low on
> memory/swapping?
For what it's worth, I've taken down some OSDs, and that seems to have
allowed the MDS to finish replaying. My guess is that one of the OSDs was
having a problem.
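If anyone else hits this, a quick way to look for a misbehaving OSD (a general
suggestion, not something specific to this cluster) is to compare per-OSD
latencies and check for slow or blocked requests:

    ceph osd perf        # per-OSD commit/apply latencies
    ceph health detail   # lists slow/blocked request warnings, if any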
Hi all,
After recently upgrading to CentOS 7.2 and installing a new Ceph cluster
using Infernalis v9.2.0, I have noticed that disks are failing to prepare.
I have observed the same behaviour over multiple Ceph servers when
preparing disks. All the servers are identical.
Disks are zapping
I had more or less the same problem. This is most likely a synchronization
issue. I have been deploying 16 OSDs, each running exactly the same
hardware/software. The issue appeared randomly, with no obvious correlations
with other stuff. The dirty workaround was to put time.sleep(5) before
invoking partprobe.
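If patching ceph-disk itself is not appealing, a rough shell-level workaround
along the same lines (the device name is only an example) is to let udev
settle and re-read the partition table before retrying the prepare:

    udevadm settle --timeout=10
    partprobe /dev/sdb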
Hi,
A fresh server install on one of my nodes (and yum update) left me with CentOS
6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2.
"ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but "ceph-disk
activate / dev/sda1" fails. I have traced the problem to
"/dev/di