Hello,
I am installing a Ceph cluster as a file server with Jewel under CentOS 7.
At the moment it is not in production. I installed Jewel because I
need the CephFS option.
I tried to install Jewel on a Debian Wheezy server. That did not work,
so I
I deleted old messages... Here is what I am seeing. Home contains a copy from
OSD2.
root@OSD1:/var/lib/ceph/osd# find ./ceph-*/current/meta | grep osdmap | grep 16024
./ceph-0/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
Can you find the full osdmap.16024 on another OSD?
It seems OSD::init reads the full osdmap rather than the incremental one;
if you find it, copy it to osd.3.
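A sketch of how that search and copy might look (the epoch and osd ids are from
this thread; the hashed filename is the one listed above, but the DIR_
subdirectory may be split differently on osd.3, so use whatever layout find
reports there, and stop osd.3 before touching its store):

# on a node that still has it, locate the full map for epoch 16024
find /var/lib/ceph/osd/ceph-*/current/meta -name 'osdmap.16024*'
# on the node hosting osd.3: stop the osd, copy the file into the matching
# meta subdirectory, fix ownership, and start the osd again
systemctl stop ceph-osd@3     # or the sysvinit equivalent on older installs
scp OSD1:/var/lib/ceph/osd/ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none \
    /var/lib/ceph/osd/ceph-3/current/meta/DIR_9/DIR_D/
chown ceph:ceph /var/lib/ceph/osd/ceph-3/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
systemctl start ceph-osd@3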
2016-04-16 13:27 GMT+08:00 hjcho616 :
> I found below file missing on osd.3 so I copied over. Still fails with the
> similar messag
Yes, it's an incremental osdmap. Is the file size correct?
You can compare it with the same file on another OSD.
If it's not the same, you can overwrite it with the correct one.
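To compare them, something like this on each OSD that has the file (a sketch;
the wildcard avoids having to quote the literal backslash in the on-disk name):

# print size and checksum of the incremental map, then compare across osds
find /var/lib/ceph/osd/ceph-3/current/meta -name 'inc*osdmap.16024*' -exec ls -l {} \; -exec md5sum {} \;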
2016-04-16 13:11 GMT+08:00 hjcho616 :
> Is this it?
>
> root@OSD2:/var/lib/ceph/osd/ceph-3/current/meta# find ./ | grep
First, check whether the file osdmap.16024 exists in your
osd.3/current/meta dir;
if not, you can copy it from another OSD that has it.
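A quick existence check across all OSDs on a node (a sketch):

# lists every full and incremental copy of the epoch-16024 map on this host
find /var/lib/ceph/osd/ceph-*/current/meta -name '*osdmap.16024*' -ls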
2016-04-16 12:36 GMT+08:00 hjcho616 :
> Here is what I get with debug_osd = 20.
>
> 2016-04-15 23:28:24.429063 7f9ca0a5b800 0 set uid:gid to 1001:1001
> (ce
Regarding your cluster warning message: some objects in that PG are
inconsistent between the primary and the replicas,
so you can try 'ceph pg repair $PGID'.
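A sketch of that repair flow, using the PG id that appears later in this digest
(0.e6); substitute your own id:

ceph health detail | grep inconsistent   # list the inconsistent pg ids
ceph pg repair 0.e6                      # ask the primary osd to repair that pg
ceph -w                                  # watch until the repair completes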
2016-04-16 9:04 GMT+08:00 Oliver Dzombic :
> Hi,
>
> i meant of course
>
> 0.e6_head
> 0.e6_TEMP
>
> in
>
> /var/lib/ceph/osd/ceph-12/current
>
> sry...
>
>
For striped objects, the main benefit is that capacity usage across your
cluster's OSDs becomes more balanced,
and read/write requests are spread across the whole cluster, which
improves read/write performance.
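As a concrete illustration of explicit striping (a sketch; the pool and image
names and the sizes are made up, and the image sits in an ordinary replicated
pool), RBD exposes the stripe settings directly:

# 10 GB image, data striped 64 KB at a time across sets of 8 objects
rbd create mypool/striped-img --size 10240 --image-format 2 \
    --stripe-unit 65536 --stripe-count 8
rbd info mypool/striped-img    # shows stripe unit, stripe count and object size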
2016-04-15 22:17 GMT+08:00 Chandan Kumar Singh :
> Hi
>
> Is it a good practice to store striped ob
Can you set 'debug_osd = 20' in ceph.conf, restart the OSD again,
and post the log from the crash?
I suspect the problem is related to the "0 byte osdmap" decode problem.
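A sketch of what that looks like (the osd id is assumed to be the failing osd.3
from this thread, and the restart command depends on your init system):

# /etc/ceph/ceph.conf
#   [osd]
#   debug osd = 20
systemctl restart ceph-osd@3     # or: /etc/init.d/ceph restart osd.3
# the crash then shows up with far more detail in /var/log/ceph/ceph-osd.3.log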
2016-04-16 12:14 GMT+08:00 hjcho616 :
> I've been successfully running cephfs on my Debian Jessies for a while and
> one day after power ou
I've been successfully running CephFS on my Debian Jessie machines for a while, and one
day after a power outage the MDS wasn't happy. The MDS crashes after it finishes
loading, after increasing its memory utilization quite a bit. I was running
Infernalis 9.2.0 and had done a successful upgrade from Hammer before... so I t
Hi,
I meant of course
0.e6_head
0.e6_TEMP
in
/var/lib/ceph/osd/ceph-12/current
Sorry...
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 934
Hi,
pg 0.e6 is active+clean+inconsistent, acting [12,7]
/var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
-1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
hit_set_archive, 0/0 whiteouts,
The ceph-objectstore-tool set-osdmap operation updates existing
osdmaps. If a map doesn't already exist, the --force option can be used
to create it. It appears safe in your case to use that option.
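A sketch of that flow (the osd id and epoch are illustrative, borrowed from the
other osdmap thread in this digest; the OSD must be stopped while the tool runs):

ceph osd getmap 16024 -o /tmp/osdmap.16024   # fetch the full map from the monitors
systemctl stop ceph-osd@3
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
    --journal-path /var/lib/ceph/osd/ceph-3/journal \
    --op set-osdmap --file /tmp/osdmap.16024 --force
systemctl start ceph-osd@3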
David
On 4/15/16 9:47 AM, Markus Blank-Burian wrote:
Hi,
we had a problem on our prod
Hi,
we had a problem on our production cluster (running 9.2.1) which caused /proc,
/dev and /sys to be unmounted. During this time, we received the following error
on a large number of OSDs (for various osdmap epochs):
Apr 15 15:25:19 kaa-99 ceph-osd[4167]: 2016-04-15 15:25:19.457774 7f1c8
Hi
Is it a good practice to store striped objects in an EC pool? If yes, what
are the pros and cons of such a pattern?
Regards
Chandan
Hi
I am evaluating EC for a Ceph cluster where the objects are mostly small
(< 1 MB) and occasionally large (~ 100 - 500 MB). Besides the general
performance penalty of EC, is there any additional disadvantage to storing
small objects along with large objects in the same EC pool?
More gene
>>> "leon...@gstarcloud.com" schrieb am
Freitag, 15. April
2016 um 11:33:
> Hello Daniel,
>
> I'm a newbie to Ceph, and when i config the storage cluster on CentOS
7 VMs,
> i encontered the same problem as you posted on
>
[http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/041
Hi all,
I was wondering what is the best way to add a new OSD server to a small Ceph
cluster. I am interested in minimising performance degradation, as the cluster
is live and actively used.
At the moment I've got the following setup:
2 OSD servers (9 OSDs each)
Journals on Intel 520/530 ss
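One commonly used approach, sketched here as an assumption rather than anything
from this thread (osd/host names, weights and throttle values are illustrative):

# throttle backfill/recovery so client I/O keeps priority during the rebalance
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# add the new osds with zero crush weight, then raise the weight in small steps
ceph osd crush add osd.18 0 host=osd3   # hypothetical new osd and host names
ceph osd crush reweight osd.18 0.5      # repeat, stepping up to the target weight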
Thanks Ilya. The following is the solution.
3.8 is missing a *ton* of fixes, I'd strongly recommend an upgrade to
4.0+.
If the osdc output is still the same, try marking osd28 down with "ceph
osd down 28" (it'll come back automatically) and triggering some I/O (e.g.
a small read from a file you can ope
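A sketch of those steps (mount point and file name are assumptions):

ceph osd down 28                           # it will be marked up again on its own
dd if=/mnt/rbd/somefile of=/dev/null bs=4k count=1   # trigger a small read
cat /sys/kernel/debug/ceph/*/osdc          # re-check the kernel client's in-flight requests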
>>> Christian Balzer wrote on Thursday, 14 April 2016 at 17:00:
> Hello,
>
> [reduced to ceph-users]
>
> On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote:
>
>>
>>
>> >>> Christian Balzer wrote on Tuesday, 12 April 2016 at 01:39:
>>
>> > Hello,
>> >
>>
>> Hi,
Hello Daniel,
I'm a newbie to Ceph, and when I configured the storage cluster on CentOS 7 VMs, I
encountered the same problem you posted about at
[http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/041992.html].
I've done lots of searching and trying, but still cannot make it work.
Coul
On Fri, Apr 15, 2016 at 10:59 AM, lin zhou wrote:
> Yes,the output is the same.
(Dropped ceph-users.)
Can you attach compressed osd logs for OSDs 28 and 40?
Thanks,
Ilya
Yes, the output is the same.
2016-04-15 16:55 GMT+08:00 Ilya Dryomov :
> On Fri, Apr 15, 2016 at 10:32 AM, lin zhou wrote:
>> thanks for so fast reply.
>> output in one of the faulty host:
>>
>> root@musicgci5:~# ceph -s
>> cluster 409059ba-797e-46da-bc2f-83e3c7779094
>>health HEALTH_OK
>>
On Fri, Apr 15, 2016 at 10:32 AM, lin zhou wrote:
> thanks for so fast reply.
> output in one of the faulty host:
>
> root@musicgci5:~# ceph -s
> cluster 409059ba-797e-46da-bc2f-83e3c7779094
>health HEALTH_OK
>monmap e1: 3 mons at
> {musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.1
Thanks for the fast reply.
Here is the output on one of the faulty hosts:
root@musicgci5:~# ceph -s
cluster 409059ba-797e-46da-bc2f-83e3c7779094
health HEALTH_OK
monmap e1: 3 mons at
{musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.13:6789/0,musicgci4=192.168.43.14:6789/0},
election epoch 70, quoru
On Fri, Apr 15, 2016 at 10:18 AM, lin zhou wrote:
> Hi,cephers:
> In one of my ceph cluster,we map rbd then mount it. in node1 and then
> using samba to share it to do backup for several vm,and some web root
> directory.
>
> Yesterday,one of the disk in my cluster is full at 95%,then the
> cluste
Hi cephers:
In one of my Ceph clusters we map an RBD and then mount it on node1, and then
use Samba to share it as a backup target for several VMs and some web root
directories.
Yesterday one of the disks in my cluster reached 95% full, and then the
cluster stopped accepting write requests.
I have solved the full problem. But
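For reference, the usual short-term handling of a full OSD on a pre-Luminous
cluster looks roughly like this (a sketch; the osd id and ratios are
illustrative, not taken from this thread):

ceph health detail            # shows which osd crossed the full ratio
ceph pg set_full_ratio 0.98   # temporarily raise the hard limit so writes resume
ceph osd reweight 28 0.8      # push data off the overfull osd
# then add capacity or rebalance, and set the full ratio back to its default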
>>> Wido den Hollander wrote on Thursday, 14 April 2016 at 16:02:
>> On 14 April 2016 at 14:46, Steffen Weißgerber wrote:
>>
>>
>> Hello,
>>
>> I tried to configure ceph logging to a remote syslog host based on
>> Sébastien Han's blog
>> (http://www.sebastien-han.fr/blog/2013/01/07/l
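The kind of setup being described is roughly the following (a sketch; the option
names are real Ceph settings, while the remote host, port and file paths are
assumptions):

# /etc/ceph/ceph.conf on each node
[global]
log to syslog = true
err to syslog = true
clog to syslog = true

# /etc/rsyslog.d/90-ceph-forward.conf on each node: forward to the central host
*.* @syslog.example.com:514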