Hi all,
I want to deploy Ceph manually. After finishing the configuration, I need to start the mon and osd daemons by hand.
I used the following commands, which I found in systemd/ceph-mon@.service and systemd/ceph-osd@.service:
ceph-mon --id xt2 --setuser ceph --setgroup ceph
ceph-osd --cluster ceph --id 0 --setuser ceph
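If systemd is available on the node, the same daemons can also be started through the unit templates those commands were taken from. A minimal sketch, reusing the mon id xt2 and osd id 0 from above:

# Start the monitor and OSD via the systemd unit templates
# (equivalent to running the daemons by hand as shown above).
systemctl start ceph-mon@xt2
systemctl start ceph-osd@0

# Enable the units so the daemons come back after a reboot.
systemctl enable ceph-mon@xt2
systemctl enable ceph-osd@0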
On Sat, Dec 10, 2016 at 11:50 PM, Sean Redmond wrote:
> Hi Goncarlo,
>
> With the output from "ceph tell mds.0 damage ls" we tracked the inodes of
> two damaged directories using 'find /mnt/ceph/ -inum $inode', after
> reviewing the paths involved we confirmed a backup was available for this
> data so we ran "ceph tell mds.0 damage rm $inode" on the two inodes.
On Sun, Dec 11, 2016 at 5:22 AM, fridifree wrote:
> The min_size was set to 3; changing it to 1 solved the problem.
> Thanks
Please be aware of the previous posts about the dangers of setting min_size=1.
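For anyone following along, a pool's replication settings can be inspected and restored with the standard pool commands. A minimal sketch, assuming a replicated pool named rbd:

# Inspect the current replication settings for the pool.
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# Once the cluster is healthy again, raise min_size back to 2 so a
# single surviving replica can no longer accept writes on its own.
ceph osd pool set rbd min_size 2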
>
> On Dec 10, 2016 02:06, "Christian Wuerdig" wrote:
>>
>> Hi,
>>
>> it's useful to generally provide some detail around the setup.
The min_size was set to 3; changing it to 1 solved the problem.
Thanks
On Dec 10, 2016 02:06, "Christian Wuerdig" wrote:
> Hi,
>
> it's useful to generally provide some detail around the setup, like:
> What are your pool settings - size and min_size?
> What is your failure domain - osd or host?
> What ver
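The preview cuts off, but the answers to those setup questions can be pulled straight from the cluster. A minimal sketch of the relevant queries:

# Pool size, min_size, and other per-pool settings.
ceph osd pool ls detail

# The failure domain (osd vs. host) is visible in the crush rules.
ceph osd crush rule dump

# Ceph version on the local node.
ceph --version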
I can confirm the latest packages upgrade fixes this issue.
On Dec 9, 2016 7:48 PM, "Reed Dier" wrote:
> I don’t think there is a graceful path to downgrade.
>
> There is a hot fix upstream I believe. My understanding is the build is
> being tested for release.
>
> Francois Lafont posted
I should clarify that if the OSD has silently failed (e.g. the TCP
connection wasn't reset and packets are just silently being dropped /
not acked), IO will pause for up to "osd_heartbeat_grace" before it
can proceed again.
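The grace period can be checked or tuned at runtime if needed. A minimal sketch; the value 10 below is only an illustration, not a recommendation:

# Read the currently active value from a running OSD's admin socket.
ceph daemon osd.0 config get osd_heartbeat_grace

# Inject a lower grace period into all OSDs at runtime.
ceph tell osd.* injectargs '--osd_heartbeat_grace 10'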
On Sat, Dec 10, 2016 at 8:46 AM, Jason Dillaman wrote:
> On Sat, De
Hi Goncarlo,
With the output from "ceph tell mds.0 damage ls" we tracked the inodes of
two damaged directories using 'find /mnt/ceph/ -inum $inode', after
reviewing the paths involved we confirmed a backup was available for this
data so we ran "ceph tell mds.0 damage rm $inode" on the two inodes. W
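For reference, the recovery steps described above collected in one place; a sketch assuming CephFS is mounted at /mnt/ceph and $inode holds an inode number reported in the damage listing:

# List the damage entries recorded by the MDS.
ceph tell mds.0 damage ls

# Map a damaged inode back to a path under the CephFS mount.
find /mnt/ceph/ -inum $inode

# After confirming a backup exists for that path, clear the entry.
ceph tell mds.0 damage rm $inode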
On Sat, Dec 10, 2016 at 6:11 AM, zhong-yan.gu wrote:
> Hi Jason,
> Sorry to bother you. A question about I/O consistency in the OSD-down case:
> 1. A write op arrives at primary OSD A.
> 2. OSD A does its local write and sends out replica writes to OSDs B and C.
> 3. B finishes the write and returns an ACK to A. Howev
Hi Ceph-users,
I just want to double-check a new crush ruleset I am creating. The intent
is that, over 2 DCs, it will select a DC and place two copies on
separate hosts in that DC. The pools created on this will use size 4 and
min_size 2.
I want to check I have crafted this c
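The rule itself is cut off in the preview, but a rule matching that intent might look like the following. A minimal sketch, assuming the crush map defines a datacenter bucket type and the tree is rooted at default; with size 4 this places two hosts in each of the two DCs:

# Hypothetical crush rule: pick 2 datacenters, then 2 hosts in each.
rule replicated_two_dcs {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}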
This point release fixes an important regression introduced in v10.2.4.
We recommend that all v10.2.x users upgrade.
Notable Changes
---------------
* msg/simple/Pipe: avoid returning 0 on poll timeout (issue#18185, pr#12376,
Sage Weil)
For more detailed information, refer to the complete changelog.
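For anyone applying the point release, a minimal upgrade sketch for a Debian/Ubuntu node; the package-manager invocation is an assumption, so adjust for your distribution and repositories:

# Confirm the running version before and after the upgrade.
ceph --version

# Pull in the updated packages (Debian/Ubuntu example; assumption).
sudo apt-get update
sudo apt-get install --only-upgrade ceph ceph-mon ceph-osd

# Restart daemons in the usual rolling-upgrade order (mons first).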