After getting all the OSDs and MONs updated and running OK, I updated the
MDS as usual and rebooted the machine after updating the kernel (we're on
14.04, but it was running an older 4.x kernel, so I took it to 16.04's
version). Now the MDS fails to come up. No replay, no nothing.
It boots normally, and th
Supposedly cephfs-hadoop worked and/or works on hadoop 2. I am in the
process of getting it working with cdh5.7.0 (based on hadoop 2.6.0).
I'm under the impression that it is/was working with 2.4.0 at some
point in time.
At this very moment, I can use all of the DFS tools built into hadoop
to crea
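(As an aside, not from the original mail: once the cephfs-hadoop jar is on the classpath and core-site.xml points at the cluster, a quick sanity check with the stock DFS tools might look like the lines below; the ceph:// URI and monitor address are placeholders.)
hadoop fs -ls ceph://mon-host:6789/          # list the CephFS root
hadoop fs -mkdir ceph://mon-host:6789/tmp    # create a test directory
hadoop fs -put somefile ceph://mon-host:6789/tmp/   # copy a local file in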
I think you are thinking of the driver that was built to actually
replace HDFS with RBD. As far as I know, that thing had a very short
lifespan on one version of Hadoop. Very sad.
As to what you proposed:
1) Don't use CephFS in production pre-Jewel.
2) Running HDFS on top of Ceph is a mas
Hi,
please check with
ceph health
which PGs cause trouble.
Please try:
ceph pg repair 4.97
and see if it can be resolved.
If not, please paste the corresponding log.
That repair can take some time...
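(For illustration only, not part of Oliver's mail: the usual sequence with the stock ceph CLI, using the PG id 4.97 reported in this thread, looks roughly like this.)
ceph health detail      # shows which PGs are inconsistent / have scrub errors
ceph pg 4.97 query      # inspect the state of the affected PG
ceph pg repair 4.97     # ask the primary OSD to repair it
ceph -w                 # watch the cluster log until the repair finishes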
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...
Hi Ceph-Users,
Help with how to resolve these would be appreciated.
2016-04-30 09:25:58.399634 9b809350 0 log_channel(cluster) log [INF] :
4.97 deep-scrub starts
2016-04-30 09:26:00.041962 93009350 0 -- 192.168.2.52:6800/6640 >>
192.168.2.32:0/3983425916 pipe(0x27406000 sd=111 :6800 s=0 pgs=0 c
Hi Guys,
I am new to Ceph and have an OpenStack deployment that uses Ceph as the
backend storage. Recently we had a failed OSD disk, which seems to have
caused issues for any new volume creation. I am really surprised that
having a single OSD drive down, or a node down, could cause such a large
impact.
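(A generic first-look sketch, not from this mail; all three are standard ceph commands for gauging how far a single OSD failure has spread.)
ceph -s               # overall health, degraded/undersized PG counts, blocked requests
ceph osd tree         # which OSD and host are marked down/out
ceph health detail    # which PGs are stuck, undersized or blocking I/O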
On Apr 29, 2016 11:46 PM, Gregory Farnum wrote:
>
> On Friday, April 29, 2016, Edward Huyer <erh...@rit.edu> wrote:
This is more of a "why" than a "can I/should I" question.
The Ceph block device quickstart says (if I interpret it correctly) not to use
a physical machine as both a Ceph
On Sat, Apr 30, 2016 at 7:00 PM, Oliver Dzombic wrote:
> Hi,
>
> sure.
>
> http://tracker.ceph.com/issues/13643
Thanks!!
I've totally missed that -;
Hi,
sure.
http://tracker.ceph.com/issues/13643
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402 at Amtsgericht Hanau
Management:
On Sat, Apr 30, 2016 at 5:32 PM, Oliver Dzombic wrote:
> Hi,
>
> there is a memory allocation bug, at least in hammer.
>
Could you give us a pointer?
> Mounting an rbd volume as a block device on a ceph node might run you
> into that. Then your mount won't work, and you will have to restart the
Hi,
there is a memory allocation bug, at least in hammer.
Mounting an rbd volume as a block device on a ceph node might run you
into that. Then your mount won't work, and you will have to restart the
OSD daemon(s).
It's generally not a good idea.
Better to use a dedicated client for the mount.
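(A hedged sketch of the dedicated-client approach; the pool "rbd" and image name "test01" are made up for the example.)
# on a client host that runs no OSD/MON daemons:
rbd create rbd/test01 --size 10240    # 10 GiB image (size is given in MB)
rbd map rbd/test01                    # the kernel maps it, e.g. as /dev/rbd0
mkfs.xfs /dev/rbd0                    # put a filesystem on it
mount /dev/rbd0 /mnt                  # and mount it on the client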