max open files = 65536
objecter inflight ops = 2048
osd pool default pg num = 512
log to syslog = true
#err to syslog = true
Thanks & Regards
Gaurav Bafna
9540631400
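
A quick way to confirm that a running daemon actually picked up the options
quoted above is the admin socket; osd.0 below is only an example id, assuming
the default admin socket location:

    # dump the live configuration of one OSD and check the tuned values
    ceph daemon osd.0 config show | grep -E 'objecter_inflight_ops|osd_pool_default_pg_num'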
at least three.
>
> On Tue, May 3, 2016 at 8:42 AM, Gaurav Bafna wrote:
>>
>> Hi Cephers,
>>
>> I am running a very small cluster of 3 storage and 2 monitor nodes.
>>
>> After I kill one OSD daemon, the cluster never recovers fully. 9 PGs
>> remain undersized.
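
To see which PGs are stuck and why, the usual starting points are:

    # overall health plus one line per problematic PG
    ceph health detail
    # PGs that have been sitting in a not-clean state
    ceph pg dump_stuck unclean

The JSON fragment just below looks like the "history" section of a per-PG
query, which can be reproduced for any of the affected PGs (5.72, taken from
the health output further down, is only an example):

    ceph pg 5.72 query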
"last_user_version": 0,
"last_backfill": "MAX",
"purged_snaps": "[]",
"history": {
    "epoch_created": 13,
    "last_epoch_started": 818,
    "last_epoch_clean": 818,
    "last_ep
> On Tuesday 03 May 2016 06:56 PM, Gaurav Bafna wrote:
>> Also, the old PGs are not mapped to the down OSD, as seen in the
>> ceph health detail output:
>>
>> pg 5.72 is active+undersized+degraded, acting [16,49]
>> pg 5.4e is active+undersized+degraded, acting [16,38]
>
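
One way to cross-check that is to list the PGs still mapped to a given OSD
(osd.16 is taken from the acting sets above; the ls-by-osd subcommand is
available on reasonably recent releases):

    # PGs currently mapped to osd.16
    ceph pg ls-by-osd osd.16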
>
> osdmap e105161 pg 24.54a (24.54a) -> up [8,19] acting [8,19]
>
>
>
> The osd tree and crushmap can be found here: http://pastebin.com/i4BQq5Mi
>
>
>
> I’m hoping for some insight into why this is happening. I couldn’t find much
> out there on the net about undersized PGs.
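
For reference, the two outputs linked there can be regenerated locally like
this (the file names are arbitrary):

    # tree view of hosts/OSDs and their weights
    ceph osd tree
    # extract and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt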
>>>> ... same with total undersized PGs, does it mean that all PGs have at
>>>> least one good replica, so I can just mark the down OSDs lost or remove
>>>> them, reformat the disks and then restart them if there is no hardware
>>>> issue with the HDDs? Which PG status should I pay more attention to,
>>>> degraded or undersized, given the possibility of lost objects?
>>>
>>> Yes. Your system is not reporting any inactive, unfound or stale PGs,
>>> so that is good news.
>>>
>>> However, I recommend that you wait for the system to become fully
>>> active+clean before you start removing any OSDs or formatting hard
>>> drives. Better safe than sorry.
>>>
>>> Wido
>>>
>>>> Best regards,
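
Once the cluster really is back to active+clean, the usual removal sequence
for a dead OSD looks roughly like this (osd.12 is only a placeholder id, not
one taken from the outputs above):

    ceph -s                        # wait for HEALTH_OK / active+clean first
    ceph osd out 12                # stop mapping data to it
    ceph osd crush remove osd.12   # drop it from the CRUSH map
    ceph auth del osd.12           # remove its cephx key
    ceph osd rm 12                 # finally remove the OSD id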
> I replaced them with good ones, one OSD at a time. I have done that
> successfully.
>
> Best regards,
>
> On Tue, May 17, 2016 at 12:30 PM, Gaurav Bafna wrote:
>>
>> I too faced the same issue with our production cluster.
>>
>> cluster fac04d85-db48-4564-b821-deebda04626
Best regards,
>
> On Tue, May 17, 2016 at 12:49 PM, Gaurav Bafna wrote:
>>
>> Hi Lazuardi
>>
>> No, there are no unfound or incomplete PGs.
>>
>> Replacing the OSDs surely makes the cluster healthy again. But the problem
>> should not have occurred in the first place.
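
For completeness, the "no unfound or incomplete PGs" statement can be
re-checked with something like:

    # any incomplete or unfound PGs show up here
    ceph health detail | grep -E 'unfound|incomplete'
    # per-PG view of missing objects (5.72 is only an example pg id)
    ceph pg 5.72 list_missing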
I will try to find whether this is an issue or not.
Thanks
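
One way to find out is to watch whether removing the dead OSD from the CRUSH
map actually triggers remapping and recovery; a minimal sketch, with osd.12
as a placeholder id rather than one from this cluster:

    # snapshot the PG->OSD mappings before and after the change
    ceph pg dump pgs_brief > before.txt
    ceph osd crush remove osd.12
    ceph pg dump pgs_brief > after.txt
    diff before.txt after.txt      # remapped PGs should show up here
    ceph -w                        # recovery/backfill traffic should follow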
On Wed, May 18, 2016 at 9:37 PM, Lazuardi Nasution wrote:
> Hi Gaurav,
>
> It could be an issue, but I have never seen a CRUSH map removal happen
> without triggering recovery.
>
> Best regards,
>
> On Wed, May 18, 2016 at 1:41 PM, Gaurav Bafna wrote:
...9700  1 == req done req=0x7ff404ff37d0 op status=0 http_status=405 ==
Looking at the code, I see that put_op is not defined for the
RGWHandler_REST_S3Website class, but it is defined for the
RGWHandler_REST_Bucket_S3 class.
Can somebody please help me out?
Does the resolve actually happen? Does it need to listen on port 80 too?
Thanks a lot for your time,
Gaurav
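
The 405 from the S3Website handler is consistent with the put_op observation
above: creating a bucket is a PUT against the bucket, so it has to be sent to
the regular S3 API endpoint rather than the website endpoint. A hedged
ceph.conf sketch of the options that separate the two (the section name and
host names are made up, and the exact option names should be verified against
the RGW release in use):

    [client.rgw.gateway]
    # regular S3 API endpoint; create buckets against this hostname
    rgw dns name = s3.example.com
    # static-website endpoint served by the S3Website handler
    rgw enable static website = true
    rgw dns s3website name = web.example.com
    # if clients are expected on port 80, the frontend has to listen there
    rgw frontends = "civetweb port=80"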
On Mon, May 30, 2016 at 5:54 AM, Robin H. Johnson wrote:
> On Sun, May 29, 2016 at 05:17:14PM +0530, Gaurav Bafna wrote:
>> Hi Cephers,
>>
>> I am unable to create a bucket.