Is it possible to use ceph-deploy to set up the OSD processes with both a
public and a cluster network?
From the "normal" documentation it seems that each OSD entry in the
config file needs an individual entry. Can this be done with
ceph-deploy?
Regards, Harald
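For reference, a minimal sketch of the relevant ceph.conf fragment (the subnets below are placeholders, and this assumes the standard global network options rather than per-OSD entries):

```
[global]
# client and monitor traffic
public network = 192.168.1.0/24
# OSD replication and heartbeat traffic
cluster network = 10.0.0.0/24
```

My understanding is that when these are set in [global], OSDs created by ceph-deploy pick them up automatically, with no per-OSD section required.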
Hi,
I have observed that the latest Ceph packages from ceph.com are being blocked
by the Ceph packages from EPEL on CentOS 6. Is it just me, or are others
observing this too?
Cheers,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/list
Hi Alfredo,
Now all works fine. Thank you!
Hi Roman,
This was a recent change in ceph-deploy to enable Ceph services on
CentOS/RHEL/Fedora distros after deploying a daemon (an OSD in your
case).
There was an issue where the remote connection was closed before being
able to enable a service wh
Hi Loic,
I'll be there and interested to chat with other Cephers. But your pad
isn't returning any page data...
Cheers,
On 11 October 2014 08:48, Loic Dachary wrote:
> Hi Ceph,
>
> TL;DR: please register at http://pad.ceph.com/p/kilo if you're attending the
> OpenStack summit
>
> November 3 -
Ah, yes. So your gateway is called something other than:
[client.radosgw.gateway]
So take a look at what
$ ceph auth list
says (run it from your rgw host); it should show the correct name. Then
correct your ceph.conf, restart, and watch the rgw log as
you edge ever closer to havin
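Concretely, the section name in ceph.conf has to match the entity name printed by `ceph auth list`. A hypothetical example, assuming the auth list shows a key named client.radosgw.gw1 (all names and paths below are placeholders):

```
# must match the entity shown by `ceph auth list`, e.g.:
#   client.radosgw.gw1
#           key: <key>
[client.radosgw.gw1]
host = gw1
keyring = /etc/ceph/ceph.client.radosgw.gw1.keyring
log file = /var/log/ceph/radosgw.log
```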
Hi All,
I would like to know if there are useful performance counters in
Ceph which can help to debug the cluster. I have seen hundreds of stat
counters in the various daemon dumps. Some of them are:
1. commit_latency_ms
2. apply_latency_ms
3. snap_trim_queue_len
4. num_snap_trimming
What d
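As a starting point (counter names vary by Ceph version, so treat this as a sketch), the per-daemon counters can be inspected through the admin socket on the daemon's host, and the commit/apply latencies are also summarized cluster-wide:

```shell
# dump all perf counters of one OSD via its admin socket
# (run on the host where osd.0 lives)
ceph daemon osd.0 perf dump

# descriptions of what each counter means
ceph daemon osd.0 perf schema

# cluster-wide per-OSD commit_latency_ms / apply_latency_ms summary
ceph osd perf
```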
Hi,
Apparently pad.ceph.com is temporarily unavailable. Hopefully it will be back
soon :-)
Cheers
On 12/10/2014 21:17, Karan Singh wrote:
> Hey Loic
>
> Its a nice idea for a Micro Summit during OpenStack summit
>
> I am not able to open the link http://pad.ceph.com/p/kilo , can you please
On 12/10/2014 18:52, Gregory Farnum wrote:
> On Sun, Oct 12, 2014 at 9:29 AM, Loic Dachary wrote:
>>
>>
>> On 12/10/2014 18:22, Gregory Farnum wrote:
>>> On Sun, Oct 12, 2014 at 9:10 AM, Loic Dachary wrote:
On 12/10/2014 17:48, Gregory Farnum wrote:
> On Sun, Oct 12, 2014 at
On Sun, Oct 12, 2014 at 9:29 AM, Loic Dachary wrote:
>
>
> On 12/10/2014 18:22, Gregory Farnum wrote:
>> On Sun, Oct 12, 2014 at 9:10 AM, Loic Dachary wrote:
>>>
>>>
>>> On 12/10/2014 17:48, Gregory Farnum wrote:
On Sun, Oct 12, 2014 at 7:46 AM, Loic Dachary wrote:
> Hi,
>
> On
On Mon, Oct 13, 2014 at 12:15 AM, Wido den Hollander wrote:
>
> That is default now. No CephFS pools are created until you activate CephFS.
>
> Wido
>
Thank you for the explanation!
Anthony
On 12/10/2014 18:22, Gregory Farnum wrote:
> On Sun, Oct 12, 2014 at 9:10 AM, Loic Dachary wrote:
>>
>>
>> On 12/10/2014 17:48, Gregory Farnum wrote:
>>> On Sun, Oct 12, 2014 at 7:46 AM, Loic Dachary wrote:
Hi,
On a 0.80.6 cluster the command
ceph tell osd.6 version
On Sun, Oct 12, 2014 at 9:10 AM, Loic Dachary wrote:
>
>
> On 12/10/2014 17:48, Gregory Farnum wrote:
>> On Sun, Oct 12, 2014 at 7:46 AM, Loic Dachary wrote:
>>> Hi,
>>>
>>> On a 0.80.6 cluster the command
>>>
>>> ceph tell osd.6 version
>>>
>>> hangs forever. I checked that it establishes a TCP
> On 12 Oct 2014 at 18:13, Anthony Alba wrote
> the following:
>
> Hi,
>
> I am following the manual creation method with 0.8.6 on CentOS7.
>
> When I start mon.node1, I only have one pool created. No data, metadata pools.
>
> Any suggestions?
That is default now. No CephFS poo
Hi,
I am following the manual creation method with 0.8.6 on CentOS7.
When I start mon.node1, I only have one pool created. No data, metadata pools.
Any suggestions?
Steps:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon.
--cap mon 'allow *'
ceph-authtool --create-keyring /
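For context, the standard manual bootstrap from the docs continues roughly as below. This is a hedged sketch, not the poster's exact steps; the monitor name, IP, and fsid are placeholders:

```shell
# admin keyring, then fold it into the mon keyring
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring \
    --import-keyring /etc/ceph/ceph.client.admin.keyring

# initial monitor map and monitor data directory
monmaptool --create --add node1 192.168.0.10 --fsid <fsid> /tmp/monmap
mkdir -p /var/lib/ceph/mon/ceph-node1
ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```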
On 12/10/2014 17:48, Gregory Farnum wrote:
> On Sun, Oct 12, 2014 at 7:46 AM, Loic Dachary wrote:
>> Hi,
>>
>> On a 0.80.6 cluster the command
>>
>> ceph tell osd.6 version
>>
>> hangs forever. I checked that it establishes a TCP connection to the OSD,
>> raised the OSD debug level to 20 and I
On Sun, Oct 12, 2014 at 7:46 AM, Loic Dachary wrote:
> Hi,
>
> On a 0.80.6 cluster the command
>
> ceph tell osd.6 version
>
> hangs forever. I checked that it establishes a TCP connection to the OSD,
> raised the OSD debug level to 20 and I do not see
>
> https://github.com/ceph/ceph/blob/firefl
Thanks :)
If someone can help regarding the question below, that would be great!
"
>
> For VMs, I am trying to visualize how the RBD device would be exposed.
> Where does the driver live exactly? If its exposed via libvirt and
> QEMU, does the kernel driver run in the host OS, and communicate with
> a b
Hi,
On a 0.80.6 cluster the command
ceph tell osd.6 version
hangs forever. I checked that it establishes a TCP connection to the OSD,
raised the OSD debug level to 20 and I do not see
https://github.com/ceph/ceph/blob/firefly/src/osd/OSD.cc#L4991
in the logs. All other OSDs answer to the sam
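One way to narrow this down (a debugging sketch, not from the original thread): the admin socket bypasses the messenger entirely, so comparing it against `ceph tell` separates a network/messenger problem from a stuck daemon:

```shell
# ask the OSD directly over its local admin socket
# (run on the host where osd.6 lives; bypasses the network path)
ceph daemon osd.6 version

# if the socket answers but `ceph tell osd.6 version` hangs,
# look at what the OSD thinks it is doing
ceph daemon osd.6 dump_ops_in_flight
ceph daemon osd.6 status
```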