Hi Guys,
I will be installing Ceph behind a very restrictive firewall and one of the
requirements is for me to submit the IP block of the repository used in the
installation. I searched the internet but couldn't find it (or perhaps I haven't
searched hard enough). Hoping to get answers here.
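A rough sketch of one way to gather candidate addresses yourself: the hostnames
below are an assumption about what a default install pulls from, so check your
actual yum/apt repo files for the real ones.

    # Resolve the repo hosts you intend to allowlist.
    for h in download.ceph.com ceph.com; do
        echo "$h: $(dig +short "$h" | tr '\n' ' ')"
    done
    # whois on a returned address shows the owning netblock; the field
    # name varies by registry (NetRange/CIDR for ARIN, inetnum for RIPE).
    whois "$(dig +short download.ceph.com | head -n 1)" | grep -iE 'netrange|cidr|inetnum'

Keep in mind that if the repo is fronted by a CDN or mirror network, the
addresses can change over time, so a hostname-based proxy rule tends to age
better than a raw IP allowlist.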
Thanks.
/V
Hi Ivan,
On Sat, Aug 20, 2016 at 12:07 AM, Ivan Koortzen wrote:
> Hi Guys, hope someone can help
>
> I'm running CentOS 7 with the Ceph Jewel release
>
> I recently installed Ceph on 3x new servers, but I'm having trouble preparing
> and activating OSDs:
>
> notes:
> in my setup:
> - /dev/sdh is a s
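For reference, the usual Jewel-era prepare/activate sequence from the admin
node looks roughly like this; "node1", /dev/sdh and the partition number below
are illustrative placeholders, not taken from the setup above:

    # Wipe the disk, then prepare and activate it as an OSD.
    ceph-deploy disk zap node1:/dev/sdh
    ceph-deploy osd prepare node1:/dev/sdh
    # activate takes the data partition that prepare created:
    ceph-deploy osd activate node1:/dev/sdh1

Posting the exact error output from the prepare/activate step would help
pinpoint where your run diverges from this.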
Hello,
I have around six RBD images, all mapped and mounted on a VM.
Everything works fine; there are no issues accessing any of the six mounted partitions.
Now I have shared these six mounted partitions with others using NFS,
but the NFS clients face disconnects during access
of these partitions on N
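For concreteness, a minimal sketch of the layering described above; the pool,
image, mount point and export network are made-up names:

    # Map and mount one RBD image inside the VM...
    rbd map rbd/image1              # creates e.g. /dev/rbd0
    mkdir -p /srv/export1
    mount /dev/rbd0 /srv/export1
    # ...then export that mount over NFS to the clients.
    echo '/srv/export1 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

With this stacking, any stall in RBD I/O on the server (cluster recovery, slow
requests) surfaces to the NFS clients as hangs or timeouts, so it is worth
checking cluster health when the disconnects happen.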
For a home server project I've set up a single-node ceph system.
Everything works just fine; I can mount block devices and store stuff on
them, however the system will not shut down without hanging.
I've traced it back to systemd; it shuts down part(s?) of ceph before
unmounting or unmapping the
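One way to confirm that ordering, assuming journald keeps logs across reboots
(Storage=persistent in /etc/systemd/journald.conf), is to check what stopped
first on the previous boot:

    # Show ceph and rbdmap unit activity from the previous boot.
    journalctl -b -1 -u 'ceph*' -u rbdmap.service --no-pager | tail -n 50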
Simple solution that always works: purge systemd
Tested and approved on all my ceph nodes, and all my servers :)
On 20/08/2016 19:35, Marcus wrote:
> Blablabla systemd blablabla
Good point, I don’t know why I hadn’t actually considered that!
Something less drastic would be great though, I don’t want to give myself more
“fun” esoteric problems to diagnose ;)
> On 20 Aug 2016, at 18:59, c...@jack.fr.eu.org wrote:
>
> Simple solution that always works: purge systemd
>
>
On Sat, Aug 20, 2016 at 10:35 AM, Marcus wrote:
> For a home server project I've set up a single-node ceph system.
>
> Everything works just fine; I can mount block devices and store stuff on
> them, however the system will not shut down without hanging.
>
> I've traced it back to systemd; it shut
No, I'm doing "shutdown -h now".
I think the ceph services shut down fine; it's just happening too early.
I couldn't find a way to express the dependency between the mount, the
rbdmap and the rest of the ceph services.
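For what it's worth, one way to express that dependency is a drop-in on the
mount unit, so systemd's reverse ordering at shutdown unmounts the filesystem
before rbdmap and the ceph services stop. The unit name mnt-rbd.mount below is
a placeholder; systemd derives the real name from your mount point
(systemd-escape -p --suffix=mount /your/mount/point):

    # Sketch: order the rbd mount after rbdmap and the ceph services.
    mkdir -p /etc/systemd/system/mnt-rbd.mount.d
    printf '[Unit]\nRequires=rbdmap.service\nAfter=rbdmap.service ceph.target\n' \
        > /etc/systemd/system/mnt-rbd.mount.d/order.conf
    systemctl daemon-reload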
On 20 Aug 2016 20:06, "Vasu Kulkarni" wrote:
> On Sat, Aug 20, 2016 at 10:35 A
It sounds like the Ceph services are being stopped before the system gets to
the unmounts. It probably can't unmount the RBD cleanly, so shutdown hangs.
Btw, mounting with the kernel client on an OSD node isn't recommended.
On 20 Aug 2016 6:35 p.m., "Marcus" wrote:
> For a home server project I've set up a
On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev
wrote:
> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote:
>> Guys,
>>
>> This bug is hitting me constantly, maybe once every few days. Does
>> anyone know whether there is a solution already?
>
>
> I see there is a fix available, and am waiting
Hi Nick,
On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote:
>> -----Original Message-----
>> From: w...@globe.de [mailto:w...@globe.de]
>> Sent: 21 July 2016 13:23
>> To: n...@fisk.me.uk; 'Horace Ng'
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Perfo