Thanks Matthew,
I've read the Launchpad bug report: "This bug was fixed in the package
mountall - 2.20".
On Ubuntu 12.04 mountall is at version 2.36, but the mount problem is still present.
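For reference, a cephfs fstab entry of the general form below is what mountall
processes at boot (monitor address, secret file and mount point are placeholders):

192.168.0.1:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0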
2013/4/2 Matthew Roy :
> Marco & Igor,
>
> mountall in Ubuntu 12.04 has a bug when mounting cephfs. It's fixed in 12.1
I have a problem on my ceph client.
I have mounted the ceph path on my ceph client with the command "mount
-t ceph * *".
The allocation size is normal when viewed through the command "df -h";
however, the allocation size becomes very large and the capacity size becomes
too small when viewed
Hi,
I am new to Ceph. I found that there are three different roles for the Ceph
service:
MDS
MON
OSD
Do I have to use separate nodes for each of the roles, or can I colocate
all the services on each node? A standard setup example will help.
Best Regards,
Dewan Shamsul Alam
Hi Dewan,
On 04/03/2013 04:33 AM, Dewan Shamsul Alam wrote:
Hi,
I am new to Ceph. I found that there are three different roles for the Ceph
service:
MDS
MON
OSD
Do I have to use separate nodes for each of the roles, or can I colocate
all the services on each node? A standard setup example will he
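A minimal colocated layout in the classic ceph.conf style, with each of three
nodes running a mon and one osd, and one node also running an mds, might look
roughly like this (hostnames and addresses are placeholders):

[global]
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx

[mon.a]
        host = node1
        mon addr = 192.168.0.1:6789
[mon.b]
        host = node2
        mon addr = 192.168.0.2:6789
[mon.c]
        host = node3
        mon addr = 192.168.0.3:6789

[mds.a]
        host = node1

[osd.0]
        host = node1
[osd.1]
        host = node2
[osd.2]
        host = node3

Three monitors keep quorum with one node down, and the mds is only needed if
you use CephFS.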
Hi,
I still have this problem in v0.60.
If I stop one OSD, the OSD gets set down after 20 seconds. But after 300
seconds the OSD does not get set out, therefore the cluster stays degraded forever.
I can reproduce it with a freshly created cluster.
root@store1:~# ceph -s
health HEALTH_WARN 405 pgs degrad
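For reference, the automatic mark-out is governed by mon_osd_down_out_interval;
a quick way to check it and to mark the stopped OSD out by hand (admin socket
path and osd id are only examples):

ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep mon_osd_down_out_interval
ceph osd tree       # shows which OSDs are down and whether they are still "in"
ceph osd out 0      # mark osd.0 out manually so recovery can start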
Hi,
I've seen this in 0.56. In my case I shut down one server and then brought it
back. I have to run /etc/init.d/ceph -a restart to make the cluster healthy
again. It doesn't impact the running VM I have in that cluster, though.
On Wed, Apr 3, 2013 at 8:32 PM, Martin Mailand wrote:
> Hi,
>
> I still have this prob
Hi,
I have an existing bucket with about 2.5 TB of data on a single system with 24
OSDs. This data can be replaced from S3, but I'd rather not, as that takes a
long time.
The pg_num for the .rgw.buckets pool turns out to be 8 by default, which we had
not realised before.
The question is what to d
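If your version supports increasing pg_num on an existing pool, the commands
would be along these lines (1024 is only an example target; pgp_num has to be
raised as well before data actually rebalances):

ceph osd pool set .rgw.buckets pg_num 1024
ceph osd pool set .rgw.buckets pgp_num 1024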
Hi,
On Wed, 3 Apr 2013, dawn li wrote:
> I have a problem on my ceph client. I have mounted the ceph path on
> my ceph client with the command "mount -t ceph * *".
> The allocation size is normal when viewed through the command "df -h";
> however,
> the allocation size becomes very large a
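A quick way to compare the client-side numbers with what the cluster itself
reports (the mount point is a placeholder):

df -h /mnt/ceph     # what the kernel client reports via statfs
rados df            # per-pool usage as seen by the cluster
ceph -s             # overall cluster summary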
I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
(12.04.2). The problem I'm having is that I'm not able to get either
of them into a state where I can both mount the filesystem and have
all the PGs in the active+clean state.
It seems that on both clusters I can get them into a
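The usual first step to see which PGs are stuck and where is something like:

ceph health detail
ceph pg dump_stuck unclean
ceph osd tree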
That sounds like a bug, then. This should be reported on ceph-devel.
On Tue, Apr 2, 2013 at 2:18 AM, Marco Aroldi wrote:
> Igor,
>
> Thanks, I confirm too:
> the problem arises only when trying to mount the root directory on Ubuntu 12.04
>
> --
> Marco Aroldi
>
> On 02/04/2013 11:02, Igor Laskovy wrote:
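To make the failing case concrete, the difference is between mounting the root
of the filesystem and mounting a subdirectory (monitor address, secret file and
mount point are placeholders):

# mounting the filesystem root -- the case that fails on Ubuntu 12.04
mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
# mounting a subdirectory instead works
mount -t ceph 192.168.0.1:6789:/mydir /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret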
On Apr 1, 2013, at 3:33 PM, Gregory Farnum wrote:
>> On Mon, Apr 1, 2013 at 2:16 PM, Sam Lang wrote:
>>> On Mon, Apr 1, 2013 at 5:59 AM, Papaspyrou, Alexander
>>> wrote:
>>> 1. So far, I understand that OSD ids have to be numeric, nothing else in
>>> there. What I couldn't find is whether they
Can you elaborate on "manually deleted"? If you used an interface like RBD
or REST to upload the file, and then just deleted the upload from the file
system directly, your cluster map doesn't update. So you'd have a lost
object.
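For what it's worth, the way to check whether the cluster itself thinks objects
are missing (the pg id is a placeholder):

ceph health detail            # reports unfound objects, if any
ceph pg <pgid> list_missing   # list the missing objects for a specific pg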
On Tue, Apr 2, 2013 at 2:35 AM, Adam Iwanowski wrote:
> Hello.
>
>
On Wed, Apr 3, 2013 at 9:45 AM, John Nielsen wrote:
> On Apr 1, 2013, at 3:33 PM, Gregory Farnum wrote:
>
>>> On Mon, Apr 1, 2013 at 2:16 PM, Sam Lang wrote:
On Mon, Apr 1, 2013 at 5:59 AM, Papaspyrou, Alexander
wrote:
1. So far, I understand that OSD ids have to be numeric, noth
And if you put a big file in CephFS and then delete it, the data will
be deleted from the RADOS cluster asynchronously in the background (by
the MDS), so it can take a while to actually get removed. :) If this
wasn't the behavior then a file delete would require you to wait for
each of those (10GB
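If you want to watch the space actually come back, the pool stats are enough, e.g.:

watch -n 10 rados df    # the data pool usage drops as the MDS purges the deleted objects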
I'll put in some notes about stepwise and starting from zero.
On Wed, Apr 3, 2013 at 9:58 AM, Gregory Farnum wrote:
> On Wed, Apr 3, 2013 at 9:45 AM, John Nielsen wrote:
> > On Apr 1, 2013, at 3:33 PM, Gregory Farnum wrote:
> >
> >>> On Mon, Apr 1, 2013 at 2:16 PM, Sam Lang wrote:
> On
On Tue, Apr 2, 2013 at 4:18 AM, Varun Chandramouli wrote:
>
> Another question I had was regarding hadoop-MR on ceph. I believe that on
> HDFS, the jobtracker tries to schedule jobs locally, with necessary
> information from the namenode. When on ceph, how is this ensured, given
> that a file may
On Thu, Mar 28, 2013 at 6:32 AM, Kai Blin wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 2013-03-28 09:16, Volker Lendecke wrote:
>> On Wed, Mar 27, 2013 at 10:43:36PM -0700, Matthieu Patou wrote:
>>> On 03/27/2013 10:41 AM, Marco Aroldi wrote:
Hi list, I'm trying to create a
On Wed, Apr 03, 2013 at 03:53:58PM -0500, Sam Lang wrote:
> On Thu, Mar 28, 2013 at 6:32 AM, Kai Blin wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> >
> > On 2013-03-28 09:16, Volker Lendecke wrote:
> >> On Wed, Mar 27, 2013 at 10:43:36PM -0700, Matthieu Patou wrote:
> >>> On 03/27
I have CentOS 6.3 with kernel 3.8.4-1.el6.elrepo.x86_64 from elrepo.org.
CephFS is mounted with the kernel module.
[root@localhost t1]# wget
http://joomlacode.org/gf/download/frsrelease/17965/78413/Joomla_3.0.3-Stable-Full_Package.tar.gz
[root@localhost t1]# time tar -zxf Joomla_3.0.3-Stable-Full_Package.t
On Wed, Apr 03, 2013 at 03:53:58PM -0500, Sam Lang wrote:
> Just to let folks know, we have a ceph vfs driver for samba that we
> are testing out now. We're planning to resolve a few of the bugs that
> we're seeing presently with smbtorture, and send a pull request to the
> samba repo. If anyone
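Once it lands, a share would presumably be set up like any other samba vfs
module; a hypothetical smb.conf stanza (the module name and its options here
are guesses, not the final interface):

[cephfs]
        path = /
        vfs objects = ceph
        # hypothetical options -- check the module documentation once it is merged
        ceph:config_file = /etc/ceph/ceph.conf
        read only = no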