On Tue, Jun 14, 2016 at 4:29 AM, Rakesh Parkiti wrote:
Hello,
Unable to mount the CephFS file system from the client node: "mount error 5 = Input/output error".
The MDS was installed on a separate node. Ceph cluster health is OK and the MDS services are running. The firewall was disabled across all the nodes in the cluster.
-- Ceph Cluster Nodes (RHEL 7.2 version
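A hedged first-pass checklist for this kind of "mount error 5" (commands assume admin access on a cluster node; the monitor hostname, mount point, and secretfile path below are placeholders, not from the post):

```shell
# Confirm the cluster sees an active MDS; a CephFS mount fails with
# I/O errors if no MDS is up:active.
ceph -s
ceph mds stat

# Retry the mount against a monitor, with an explicit client name and
# keyring secret (placeholder host and paths):
sudo mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# On failure, the kernel client usually logs the real reason:
dmesg | tail -n 20
```

These require a live cluster, so they are a sketch of the diagnostic order rather than something to copy verbatim.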
On Mon, May 30, 2016 at 8:33 PM, Ilya Dryomov wrote:
On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote:
Hello,
In my OpenStack Mitaka deployment, I have installed the additional service "Manila" with a CephFS backend. Everything is working; all shares are created successfully:
manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
+-
Hi,
I can answer this myself. It was the kernel. After an upgrade to the latest Debian Jessie kernel (3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u2 (2015-07-17) x86_64 GNU/Linux), everything started to work as normal.
Thanks :)
On 6/08/2015 22:38, Jiri Kanicky wrote:
Hi,
I am trying to mount my CephFS and getting the following message. It was
all working previously, but after power failure I am not able to mount
it anymore (Debian Jessie).
cephadmin@maverick:/etc/ceph$ sudo mount -t ceph
ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata/ -o
nam
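Since the follow-up above found that a kernel upgrade fixed this, a reasonable first check is the running kernel version. The commented mount line is a hypothetical completion of the truncated command, with placeholder option values:

```shell
# The CephFS kernel client depends heavily on kernel version; note what is running:
uname -r

# A complete invocation of the kind being typed above (option values are
# illustrative placeholders, not the poster's actual settings):
# sudo mount -t ceph ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata/ \
#     -o name=admin,secretfile=/etc/ceph/admin.secret
```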
On Wed, Dec 4, 2013 at 7:15 AM, Mr.Salvatore Rapisarda wrote:
Hi,
I have a Ceph cluster with 3 nodes on Ubuntu 12.04.3 LTS, Ceph version 0.72.1.
My configuration is the following:
* 3 MON
- XRVCLNOSTK001=10.170.0.110
- XRVCLNOSTK002=10.170.0.111
- XRVOSTKMNG001=10.170.0.112
* 3 OSD
- XRVCLNOSTK001=10.170.0.110
- XRVCLNOSTK002=10.170.0.111
- X
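For reference, a monitor layout like the one listed above would typically appear in ceph.conf roughly as follows. This is a hedged sketch assembled from the hostname=IP list in the post; nothing else is from the original:

```ini
[global]
mon_initial_members = XRVCLNOSTK001, XRVCLNOSTK002, XRVOSTKMNG001
mon_host = 10.170.0.110,10.170.0.111,10.170.0.112
```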
Dear all,
I am trying to mount cephfs to 2 different mount points (each should have their
respective pools and keys). While the first mount works (after using set_layout
to get it to the right pool), the second attempt failed with "mount error 12 =
Cannot allocate memory". Did I miss some steps
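A hedged sketch of the two-mount setup described above, with one client identity and one secretfile per mount. The monitor address, subtree paths, client names, and file paths are all placeholders, not the poster's values:

```shell
# Mount each subtree with its own client name and key, so each mount
# is authorized only against its respective pool:
sudo mount -t ceph 10.0.0.1:6789:/share-a /mnt/cephfs-a \
    -o name=client-a,secretfile=/etc/ceph/client-a.secret
sudo mount -t ceph 10.0.0.1:6789:/share-b /mnt/cephfs-b \
    -o name=client-b,secretfile=/etc/ceph/client-b.secret
```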
No, I am not running the MDS in a VM. I have the MDS and a MON on a single node.
On Fri, May 17, 2013 at 4:03 PM, John Wilkins wrote:
> Are you running the MDS in a VM?
Are you running the MDS in a VM?
On Fri, May 17, 2013 at 12:40 AM, Sridhar Mahadevan wrote:
> Hi,
> I did try to restart the MDS server. The logs show the following error
Hi,
I did try to restart the MDS server. The logs show the following error:
[187846.234448] init: ceph-mds (ceph/blade2-qq) main process (15077) killed by ABRT signal
[187846.234493] init: ceph-mds (ceph/blade2-qq) main process ended, respawning
[187846.687929] init: ceph-mds (ceph/blade2-qq) main
Have you tried restarting your MDS server?
http://ceph.com/docs/master/rados/operations/operating/#operating-a-cluster
On Fri, May 17, 2013 at 12:16 AM, Sridhar Mahadevan wrote:
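For the restart step, the daemon-control syntax of that era looked roughly like this. This is a sketch based on the linked operating-a-cluster page, and "a" is a placeholder daemon id:

```shell
# sysvinit-style restart of a single MDS daemon:
sudo /etc/init.d/ceph restart mds.a

# Upstart-based Ubuntu installs of the same era:
sudo restart ceph-mds id=a
```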
Hi,
I have deployed the Ceph object store using ceph-deploy.
I tried to mount CephFS and got stuck with this error:
sudo mount.ceph 192.168.35.82:/ /mnt/mycephfs -o name=admin,secret=AQDa5JJRqLxuOxAA77VljIjaAGWR6mGdL12NUQ==
mount error 5 = Input/output error
The output of the command
#
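One side note on the command above: passing secret= directly exposes the key in shell history and `ps` output. A secretfile avoids that. In this sketch the key string and the paths are placeholders:

```shell
# Write the key to a restricted file instead of the command line
# (the key string here is a placeholder, not a real secret):
umask 077
printf '%s\n' 'AQD-placeholder-key' > /tmp/admin.secret
chmod 600 /tmp/admin.secret

# Then mount with secretfile= (placeholder monitor address):
# sudo mount.ceph 192.168.35.82:/ /mnt/mycephfs -o name=admin,secretfile=/tmp/admin.secret
```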
Hi there,
I intend to install Ceph on two VMs (client and server), where both have Ubuntu 12.04; the client has 4 GB RAM and the server 8 GB.
Walking through the 5-minute quick installation, I relied on the default configuration with a few changes: disable cephx authentication, set the rep value = 1, a
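The changes described (cephx off, replication of 1) would look roughly like this in ceph.conf. This is a hedged sketch for a throwaway test cluster only, since both settings are unsafe in production:

```ini
[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_pool_default_size = 1
```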