On 15/02/2015 16:48, hp cre wrote:
Hello all, I'm currently studying the possibility of creating a small
ceph cluster on ARM nodes.
The reasonably priced boards I found (like the Banana Pi/Pro, Orange
Pi/Pro/H3, etc.) mostly have either dual-core or quad-core Allwinner
chips and 1 GB of RAM.
It turns out to be an authentication problem. I recreated the keyring file
again and re-added the RGW to the cluster, as follows:
ceph-authtool --create-keyring ceph.client.radosgw.keyring
ceph-authtool ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
ceph-authtool -n client.rado
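For reference, the full sequence usually looks something like the sketch below.
This only follows the standard RGW setup docs; the cap values and the final
'ceph auth' steps are my assumptions, not necessarily what was run here:

  ceph-authtool --create-keyring ceph.client.radosgw.keyring
  ceph-authtool ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
  # caps below are the ones the docs suggest for a gateway key (illustrative)
  ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' ceph.client.radosgw.keyring
  # register the key with the cluster so the gateway can authenticate
  # (if the entity already exists with an old secret, 'ceph auth del' it first)
  ceph auth add client.radosgw.gateway -i ceph.client.radosgw.keyring
  # sanity check: compare the secret the cluster knows with the keyring file
  ceph auth get client.radosgw.gateway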
I bought a copy a few days ago. Great job, but it is Red Hat specific.
Thanks,
Sunday Olutayo
- Original Message -
From: "Andrei Mikhailovsky"
To: "Wido den Hollander"
Cc: ceph-users@lists.ceph.com
Sent: Saturday, February 14, 2015 1:05:45 AM
Subject: Re: [ceph-users] Introducing "Lea
Hi all
I have been installing Ceph Giant quite happily for the past 3 months on
various systems, using an Ansible recipe to do so. The OS is RHEL 7.
This morning, on one of my test systems, the installation failed with:
[root@octopus ~]# yum install ceph ceph-deploy
Loaded plugins: langpacks, prioriti
Hi ceph-experts,
We are getting a "store is getting too big" warning on our test cluster. The
cluster is running the Giant release and is configured with an EC pool to test CephFS.
cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
health HEALTH_WARN too few pgs per osd (0 < min 20); mon.master01
store is getting too
On 02/16/2015 12:57 PM, Mohamed Pakkeer wrote:
Hi ceph-experts,
We are getting a "store is getting too big" warning on our test cluster.
The cluster is running the Giant release and is configured with an EC pool to
test CephFS.
cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
health HEALTH_WARN too few pgs
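For what it's worth, the two warnings are usually tackled separately: the PG
warning by raising pg_num/pgp_num on the pool(s), and the mon store warning by
compacting the monitor's leveldb store. A rough sketch, assuming the default
data paths; the pool name and PG count are placeholders, not from this cluster:

  # how big is the mon store really? (default path layout assumed)
  du -sh /var/lib/ceph/mon/ceph-master01/store.db
  # ask the monitor to compact its store
  ceph tell mon.master01 compact
  # raise the PG count on the EC pool ('ecpool' and 1024 are illustrative;
  # pg_num can only be increased, never decreased)
  ceph osd pool set ecpool pg_num 1024
  ceph osd pool set ecpool pgp_num 1024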
Nice, will probably order it.
--
Hannes Landeholm
On Fri, Feb 13, 2015 at 9:02 AM, Alexandre DERUMIER wrote:
> Just buy it.
>
> Nice book. I haven't read all of it yet, but it seems to cover all ceph
> features.
>
> Good job !
>
>
> - Original Message -
> From: "Karan Singh"
> To: "Cep
Hi,
Is it possible to specify multiple pool names for authorization?
In my test, only the following are allowed:
ceph auth caps client.CLIENT_ID osd 'allow *'
ceph auth caps client.CLIENT_ID osd 'allow * pool=*'
Let's say I want to grant access to "a-1" and "a-2" but not any other
pools, it does
On 16-02-15 13:14, Mingfai wrote:
> hi,
>
> Is it possible to specify multiple pool names for authorization?
>
> in my test, only the following are allowed,
> ceph auth caps client.CLIENT_ID osd 'allow *'
> ceph auth caps client.CLIENT_ID osd 'allow * pool=*'
>
> Let's say I want to grant acces
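As far as I know the cap syntax already allows this: a single OSD cap string
can hold several comma-separated grants, one per pool. A sketch using the
names from the question (note that 'ceph auth caps' replaces all existing
caps, so restate the mon cap as well):

  ceph auth caps client.CLIENT_ID \
      mon 'allow r' \
      osd 'allow rwx pool=a-1, allow rwx pool=a-2'
  # verify what the client ended up with
  ceph auth get client.CLIENT_ID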
Hi Paul,
Would you mind sharing/posting the contents of your .repo files for
ceph, ceph-el7, and ceph-noarch repos?
I see that python-rbd is getting pulled in from EPEL, which I don't
think is what you want.
My guess is that you need the fix documented in
http://tracker.ceph.com/issues/10476, th
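A quick way to confirm where python-rbd is coming from on the affected box
(plain yum queries, nothing Ceph-specific):

  # which repo(s) provide python-rbd, and which version would win
  yum provides python-rbd
  yum --showduplicates list python-rbd
  # list the enabled repos so the EPEL vs ceph.com ordering is visible
  yum repolist enabled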
Hi Travis
Thanks for the reply.
My only doubt is that this was all working until this morning. Has anything
changed in the Ceph repository?
I tried commenting out various repos but this did not work.
If I delete the EPEL repos, then the Ceph installation fails because tcmalloc and
leveldb are not f
Hi Paul,
Looking a bit closer, I do believe it is the same issue. It looks
like python-rbd (and others like python-rados) was updated in
EPEL on January 21st, 2015. This update included some changes to how
dependencies were handled between EPEL and RHEL for Ceph. See
http://pkgs.fedora
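The usual way out of this class of problem is to pin the ceph.com repos above
EPEL with the yum priorities plugin (the "Loaded plugins: ... priorities"
output earlier suggests the plugin is already there). A sketch of what the
/etc/yum.repos.d/ceph.repo entries might look like; the baseurl and gpgkey are
the giant-era layout as I remember it, so double-check them against the
install docs before relying on this:

  [ceph]
  name=Ceph packages
  baseurl=http://ceph.com/rpm-giant/el7/x86_64/
  enabled=1
  gpgcheck=1
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  priority=1

  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=http://ceph.com/rpm-giant/el7/noarch/
  enabled=1
  gpgcheck=1
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  priority=1

With priority=1 on the Ceph repos, yum should prefer ceph.com's python-rbd
over the EPEL one; a 'yum clean all' afterwards helps avoid stale metadata.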
Thanks for that Travis. Much appreciated.
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: Travis Rhoden [trho...@gmail.com]
Sent: 16 February 2015 15:35
To: HEWLETT, Paul (Paul)** CTR **
Hi,
I'm trying to plan the hardware for a little ceph cluster.
We don't have a lot of financial means. In addition, we will
have to pay attention to power consumption. At first,
it will probably be a cluster of 3 physical servers, and
each server will run an OSD node and a monitor node (and m
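For what it's worth, the Ceph side of such a layout is small: three monitors
(one per box) plus the OSDs on each. A minimal ceph.conf sketch, with
hostnames and addresses invented purely for illustration:

  [global]
  fsid = <your cluster uuid>
  mon initial members = node1, node2, node3
  mon host = 192.168.0.11, 192.168.0.12, 192.168.0.13
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx
  osd pool default size = 3       # one replica per physical server
  osd pool default min size = 2   # keep serving I/O with one server down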
Woah, major thread necromancy! :)
On Feb 13, 2015, at 3:03 PM, Josef Johansson wrote:
>
> Hi,
>
> I skimmed the logs again, as we’ve had more of these kinds of errors.
>
> I saw a lot of lossy connection errors,
> -2567> 2014-11-24 11:49:40.028755 7f6d49367700 0 -- 10.168.7.23:6819/10217
> >> 1
On Sun, Feb 15, 2015 at 5:39 PM, Sage Weil wrote:
> On Sun, 15 Feb 2015, Mykola Golub wrote:
>> The "ceph osd create" could be extended to have OSD ID as a second
>> optional argument (the first is already used for uuid).
>>
>> ceph osd create <uuid> [<id>]
>>
>> The command would succeed only if the ID wer
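To make the proposal concrete: today 'ceph osd create' takes at most an
optional uuid and always hands back the lowest free ID, so there is no way to
ask for a specific one. Roughly (the uuid below is a made-up example):

  # current behaviour: the cluster picks the ID and prints it
  ceph osd create
  ceph osd create 5c8e1ed2-5a79-4f2a-8ca5-0b8c1f1b9a01
  # proposed extension discussed here (not in giant):
  #   ceph osd create <uuid> <id>   -- succeeds only if <id> is free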
On 02/16/2015 08:44 AM, HEWLETT, Paul (Paul)** CTR ** wrote:
> Thanks for that Travis. Much appreciated.
>
> Paul Hewlett
> Senior Systems Engineer
> Velocix, Cambridge
> Alcatel-Lucent
> t: +44 1223 435893 m: +44 7985327353
>
>
>
>
> From: Travis Rhode
Dan Mick writes:
>
> 0cbcfbaa791baa3ee25c4f1a135f005c1d568512 on the 1.2.3 branch has the
> change to yo 1.1.0. I've just cherry-picked that to v1.3 and master.
Do you mean that you merged 1.2.3 into master and branch 1.3?
BTW I managed to clone and build branch 1.2.3 in my vagrant env.
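To answer the merge question: a cherry-pick is not a merge of the 1.2.3
branch; it copies just that single commit onto each target branch. In git
terms it would have been roughly (the remote name 'origin' is assumed):

  git fetch origin
  git checkout v1.3
  git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512
  git checkout master
  git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512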
Well, I knew it had all the correct information from earlier, so I gave it a shot
:)
Anyway, I think it may just be a bad controller as well. New enterprise drives
shouldn’t be giving read errors this early in a deployment, tbh.
Cheers,
Josef
> On 16 Feb 2015, at 17:37, Greg Farnum wrote:
>
> Woah
Steffen Winther writes:
> Trying to figure out how to initially configure
> calamari clients to know about my
> Ceph Cluster(s) when they aren't installed through ceph-deploy
> but through Proxmox pveceph.
>
> I assume I possibly need to copy some client admin keys and
> configure my MON hosts somehow
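If the goal is just to make the client machine able to talk to the cluster,
the minimum is indeed what you guess at: the cluster's ceph.conf (which
carries the mon addresses) plus a keyring the client may use, copied from one
of the Proxmox nodes. A sketch with a placeholder hostname; the Calamari/salt
wiring itself is a separate step not covered here:

  # on one of the Proxmox/ceph nodes ('client-host' is a placeholder)
  scp /etc/ceph/ceph.conf root@client-host:/etc/ceph/ceph.conf
  scp /etc/ceph/ceph.client.admin.keyring root@client-host:/etc/ceph/
  # then, on client-host, check that the cluster is reachable
  ceph -s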
And yeah, it’s the same EIO 5 error.
So OK, the errors don’t show anything useful about the OSD crash.
> On 16 Feb 2015, at 21:58, Josef Johansson wrote:
>
> Well, I knew it had all the correct information since earlier so gave it a
> shot :)
>
> Anyway, I think it may be just a bad controlle
Hello,
re-adding the mailing list.
On Mon, 16 Feb 2015 17:54:01 +0300 Mike wrote:
> Hello
>
> On 05.02.2015 08:35, Christian Balzer wrote:
> >
> > Hello,
> >
> >>>
> LSI 2308 IT
> 2 x SSD Intel DC S3700 400GB
> 2 x SSD Intel DC S3700 200GB
> >>> Why the separation of SSDs?
> >>>
Hello,
For the case of multiple clients (separate processes) accessing an object
in the cluster, is exclusive protection of the shared object
needed by the caller?
Process A::rados::ioctx.write(obj, ...)
Process B::rados::ioctx.write(obj, ...)
or:
Process A::
mylock.lock();
ioctx.writ
On Tue, 17 Feb 2015, Dennis Chen wrote:
> Hello,
>
> For the case of multiple clients(separate process) accessing an object
> in the cluster, is the exclusive protection of the shared object
> needed for the caller?
>
> Process A::rados::ioctx.write(obj, ...)
> Process B::rados::ioctx.write(obj,
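For what it's worth, each individual RADOS write op is applied atomically by
the OSD, so two plain writes like the ones above will not interleave
mid-write; explicit locking only really comes up for read-modify-write
sequences. If cluster-visible locking (rather than a local mutex) is wanted,
RADOS also has advisory locks. A sketch via the rados CLI, with made-up
pool/object/lock names; the exact subcommand syntax is from memory, so check
'rados --help' on your version:

  # process A: take an exclusive advisory lock that expires after 30s
  rados -p testpool lock get myobj mylock --lock-type exclusive --lock-duration 30
  # process B: inspect who holds it before touching the object
  rados -p testpool lock info myobj mylock
  rados -p testpool lock list myobj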
Hello,
On Mon, 16 Feb 2015 17:13:40 +0100 Francois Lafont wrote:
> Hi,
>
> I'm trying to plan the hardware for a little ceph cluster.
> We don't have a lot of financial means. In addition, we will
> have to pay attention to power consumption. At first,
> it will probably be a cluster with