Comments in-line.
On 16. 09. 14 23:53, Craig Lewis wrote:
> On Mon, Sep 8, 2014 at 2:53 PM, Francois Deppierraz
> <franc...@ctrlaltdel.ch> wrote:
>
> XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
>
All logs from before the disaster
# Loop over PGs with unfound objects and print the IDs of their missing
# objects (the PG selection in the for-line is an assumption).
for pg in $(ceph health detail | awk '/unfound/ {print $2}'); do
  ceph pg $pg list_missing | jq .objects
done | jq -s add | jq '.[] | .oid.oid'
On 11. 09. 14 11:05, Francois Deppierraz wrote:
> Hi Greg,
>
> An attempt to recover pg 3.3ef by copying it from broken osd.6 to
> working osd.32 resulted in one more broken osd :(
(__libc_start_main()+0xed) [0x7f13fb97576d]
17: /usr/bin/ceph-osd() [0x5d69d9]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is
needed to interpret this.
Fortunately it was possible to bring back osd.32 into a working state
simply by removing this pg.
root@storage2:~# ceph_objectstore_tool
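The invocation looks roughly like this (a sketch only; the data and journal
paths below are assumptions for osd.32, not taken from the actual session):

# run with osd.32 stopped
ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-32 \
    --journal-path /var/lib/ceph/osd/ceph-32/journal \
    --op remove --pgid 3.3ef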
Hi Greg,
Thanks for your support!
On 08. 09. 14 20:20, Gregory Farnum wrote:
> The first one is not caused by the same thing as the ticket you
> reference (it was fixed well before emperor), so it appears to be some
> kind of disk corruption.
> The second one is definitely corruption of some kind
Hi,
This issue is on a small 2-server (44 osds) ceph cluster running 0.72.2
under Ubuntu 12.04. The cluster was filling up (a few osds near full)
and I tried to increase the number of PGs per pool to 1024 for each of
the 14 pools to improve storage space balancing. This increase triggered
high memory usage
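For the record, the increase was done per pool roughly as follows (a sketch;
'volumes' is only a placeholder pool name):

# raise pg_num first, then pgp_num so data actually gets rebalanced;
# repeated for each of the 14 pools
ceph osd pool set volumes pg_num 1024
ceph osd pool set volumes pgp_num 1024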
Hi Vickey,
This really looks like a DNS issue. Are you sure that the host from
which s3cmd is running is able to resolve the host 'bmi-pocfe2.scc.fi'?
Does a regular ping work?
$ ping bmi-pocfe2.scc.fi
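If ping fails, checking name resolution directly also helps narrow it down,
for instance:
$ host bmi-pocfe2.scc.fi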
François
On 23. 06. 14 16:24, Vickey Singh wrote:
> # s3cmd ls
>
> WARNING: Retrying failed