On Fri, Jan 22, 2016 at 6:24 AM, Gregory Farnum wrote:
> On Fri, Jan 15, 2016 at 9:00 AM, HMLTH wrote:
>> Hello,
>>
>> I'm evaluating CephFS on a cluster of virtual machines. I'm using Infernalis
>> (9.2.0) on Debian Jessie for both client and server.
>>
>> I'm trying to get some performance numbers on op
Thanks for sharing.
Dan
On January 20, 2016 11:04:44 PM Somnath Roy wrote:
> Hi,
> Here is a copy of the slides I presented in today's performance meeting.
>
> https://docs.google.com/presentation/d/1j4Lcb9fx0OY7eQlQ_iUI6TPVJ6t_orZWKJyhz0S_3ic/edit?usp=sharing
>
> Thanks & Regards
> Somnath
>
I haven't been able to reproduce the issue on my end, but I do not fully
understand how the bug exists or why it is happening. I was finally
given the code they are using to upload the files:
http://pastebin.com/N0j86NQJ
I don't know if this helps at all :-(. The other thing is that I have
on
On Thu, Jan 21, 2016 at 4:02 PM, seapasu...@uchicago.edu
wrote:
> I haven't been able to reproduce the issue on my end, but I do not fully
> understand how the bug exists or why it is happening. I was finally given
> the code they are using to upload the files:
>
> http://pastebin.com/N0j86NQJ
>
>
Hi Greg,
while running the dd:
server:
[root@ceph2 ~]# ceph daemon /var/run/ceph/ceph-mds.ceph2.asok status
{
    "cluster_fsid": "",
    "whoami": 0,
    "state": "up:active",
    "mdsmap_epoch": 83,
    "osdmap_epoch": 12592,
    "osdmap_epoch_barrier": 12592
}
[root@ceph2 ~]# ceph daemon /v
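For reference, a couple of other admin-socket queries that can be useful while the dd runs; just a sketch, the MDS id here (ceph2) is only what this particular host appears to use:
[root@ceph2 ~]# ceph daemon mds.ceph2 perf dump
[root@ceph2 ~]# ceph daemon mds.ceph2 session ls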
On Fri, Jan 15, 2016 at 9:00 AM, HMLTH wrote:
> Hello,
>
> I'm evaluating CephFS on a cluster of virtual machines. I'm using Infernalis
> (9.2.0) on Debian Jessie for both client and server.
>
> I'm trying to get some performance numbers on operations like tar/untar on
> things like the linux kernel. I hav
Hello,
"ceph-rest-api" works fine with client.admin.
But with client.test-admin, which I created just after building the Ceph
cluster, it does not work.
~$ ceph auth get-or-create client.test-admin mon 'allow *' mds 'allow *' osd
'allow *'
~$ sudo ceph auth list
installed auth entries:
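For reference, a rough sketch of running ceph-rest-api under a non-default name; the keyring path is only illustrative, and it assumes the key is written somewhere the client configuration can find it:
~$ ceph auth get-or-create client.test-admin mon 'allow *' mds 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.test-admin.keyring
~$ ceph-rest-api -n client.test-admin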
On Thu, Jan 21, 2016 at 4:24 AM, Oliver Dzombic wrote:
> Hi Greg,
>
> alright.
>
> After shutting down the whole cluster and starting it with "none" as the
> authentication method, I reset the auth rights and restarted the whole
> cluster again after switching back to cephx.
>
> Now it looks like:
>
> client.a
On Thu, Jan 21, 2016 at 1:20 AM, HMLTH wrote:
>
>
>
Gregory Farnum – Thu, 21 January 2016 4:02
>>
>> On Wed, Jan 20, 2016 at 6:40 PM, Francois Lafont
>> wrote:
>> > Hi,
>> >
>> > On 19/01/2016 07:24, Adam Tygart wrote:
>> >> It appears that with --apparent-size, du adds the "size" of the
>> >
I performed this operation with no impact, in the following way (commands
sketched below):
1- I duplicated the rule used by my pools and gave it the number of the new
corresponding rule in the new crushmap
2- I set the noout flag
3- I applied the new crushmap
4- I unset the noout flag
Everything was fine (I had already set max_backfill and ot
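For reference, a minimal sketch of steps 2-4; the crushmap filenames are illustrative:
crushtool -c new-crushmap.txt -o new-crushmap.bin
ceph osd set noout
ceph osd setcrushmap -i new-crushmap.bin
ceph osd unset noout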
Hi Mike,
the same happened to us.
I'm afraid there was no answer on the mailing list for that problem.
I ended up changing everything to the new fsid.
There are still some errors left in the logs, but it seems to work.
Maybe you could manually run the mon create command again with your old fsid.
But
Hey ceph-users,
One of our Ceph environments changed its fsid for the cluster, and I would
like advice on how to get it corrected.
We added a new OSD node in the hope of retiring one of the older OSD + MON
nodes.
Using ceph-deploy, we unfortunately ran "ceph-deploy mon create ..."
instead of mon add
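For reference, a quick way to compare what the cluster and each node's config think the fsid is (just a sketch):
ceph fsid
grep fsid /etc/ceph/ceph.conf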
On 21.01.2016 at 15:32, Jason Dillaman wrote:
> Are you performing a lot of 'rbd export-diff' or 'rbd diff' operations? I
> can't speak to whether or not list-snaps is related to your blocked requests,
> but I can say that operation is only issued when performing RBD diffs.
Yes, we are also do
It looks like there might be an issue with the repo metadata. I'm not
seeing ceph, ceph-common, librbd1, etc. in the debian-giant wheezy
branch. I ended up just downloading the debs and installing them
manually in the interim. FYI.
-Steve
cat /etc/apt/sources.list.d/ceph.list
deb http://download.
Are you performing a lot of 'rbd export-diff' or 'rbd diff' operations? I
can't speak to whether or not list-snaps is related to your blocked requests,
but I can say that operation is only issued when performing RBD diffs.
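For context, these are the kinds of operations in question; the pool, image and snapshot names below are only illustrative:
rbd diff rbd/myimage --from-snap snap1
rbd export-diff rbd/myimage@snap2 --from-snap snap1 myimage.diff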
--
Jason Dillaman
----- Original Message -----
> From: "Christian K
Hi Greg,
alright.
After shutting down the whole cluster and starting it with "none" as the
authentication method, I reset the auth rights and restarted the whole
cluster again after switching back to cephx.
Now it looks like:
client.admin
    key: mysuperkey
    caps: [mds] allow *
    caps: [mo
On Thu, Jan 21, 2016 at 11:21 AM, yuyang wrote:
> Hello, everyone
> In our cluster, we use CephFS with two MDSes, and there are several ceph-fuse
> clients.
> Every client mounts its own dir so that they cannot see each other.
> We use the following cmd to mount:
> ceph-fuse -m 10.0.9.75:6789 -r
Hello, everyone
In our cluster, we use CephFS with two MDSes, and there are several ceph-fuse
clients.
Every client mounts its own dir so that they cannot see each other.
We use the following cmd to mount:
ceph-fuse -m 10.0.9.75:6789 -r /clientA /mnt/cephFS
And we want to monitor our clients and get
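For reference, if the goal is to see which clients are currently connected, one option (assuming access to the MDS admin socket; the MDS id is illustrative) is something like:
ceph daemon mds.a session ls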
Hi,
some of our applications (e.g., backy) use 'rbd snap ls' quite often. I see
regular occurrences of blocked requests on a heavily loaded cluster which
correspond to snap_list operations. Log file example:
2016-01-20 11:38:14.389325 osd.13 172.22.4.44:6803/13012 40529 : cluster [WRN]
1 slow requ
Hi Greg,
ceph auth list showed
client.admin
    key: mysuperkey
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
Then I tried to add the capability for the MDS:
[root@ceph1 ~]# ceph auth caps client.admin mds 'allow'
updated caps for client.admin
which was
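For reference: 'ceph auth caps' replaces the entire cap set for that client, so the mon and osd caps need to be restated as well; a rough sketch of the full command:
ceph auth caps client.admin mds 'allow' mon 'allow *' osd 'allow *'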
Gregory Farnum – Thu, 21 January 2016 4:02
On Wed, Jan 20, 2016 at 6:40 PM, Francois Lafont wrote:
> Hi,
>
> On 19/01/2016 07:24, Adam Tygart wrote:
>> It appears that with --apparent-size, du adds the "size" of the
>> directories to the total as well. On most filesystems this is the
>> blo
On Wed, Jan 20, 2016 at 8:01 PM, Zoltan Arnold Nagy
wrote:
>
> Wouldn’t actually blowing away the other monitors then recreating them from
> scratch solve the issue?
>
> Never done this, just thinking out loud. It would grab the osdmap and
> everything from the other monitor and form a quorum, w