2GB RAM is gonna be really tight, probably. However, I do something similar
at home with a bunch of rock64 4GB boards, and it works well. There are
sometimes issues with the released ARM packages (frequently crc32 doesn't
work, which isn't great), so you may have to build your own on the board you
I actually made a dumb python script to do this. It's ugly and has a
lot of hardcoded things in it (like the mount location where I'm
copying things to move pools, names of pools, the savings I was
expecting, etc.) but it should be easy to adapt to what you're trying
to do:
https://gist.github.com/p
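The script itself is too site-specific to be worth pasting here, but the
general idea of moving a pool with stock CLI tools (not exactly what the
script does; pool names here are just placeholders) is something like:

  ceph osd pool create newpool 64
  rados cppool oldpool newpool
  ceph osd pool rename oldpool oldpool-old
  ceph osd pool rename newpool oldpool
  # only after verifying the data; deletion may also need mon_allow_pool_delete=true
  ceph osd pool delete oldpool-old oldpool-old --yes-i-really-really-mean-it

Note that rados cppool has caveats (it doesn't copy snapshots, for one),
so treat this as a sketch rather than a recipe.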
Last time I had to do this, I used the command outlined here:
https://tracker.ceph.com/issues/10098
On Mon, Mar 4, 2019 at 11:05 AM Daniel K wrote:
>
> Thanks for the suggestions.
>
> I've tried both -- setting osd_find_best_info_ignore_history_les = true and
> restarting all OSDs, as well as '
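(Side note: that option normally goes into ceph.conf on the OSD hosts,
roughly as below, followed by an OSD restart. As far as I understand it's
a last-resort recovery knob, so take it back out once the PGs recover.)

  [osd]
      osd_find_best_info_ignore_history_les = true

  systemctl restart ceph-osd.target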
At the risk of hijacking this thread: like I said, I've run into this
problem again, and have captured a log with debug_osd=20, viewable at
https://www.dropbox.com/s/8zoos5hhvakcpc4/ceph-osd.3.log?dl=0 - any
pointers?
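For anyone wanting to capture a similar log, bumping the log level on a
single OSD is roughly:

  ceph tell osd.3 injectargs '--debug_osd 20/20'
  # reproduce the problem, grab /var/log/ceph/ceph-osd.3.log, then turn it back down
  ceph tell osd.3 injectargs '--debug_osd 1/5'

20/20 is extremely chatty, so don't leave it on.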
On Tue, Jan 8, 2019 at 11:31 AM Peter Woodman wrote:
>
> For the rec
For the record, in the linked issue, it was thought that this might be
due to write caching. This seems not to be the case, as it happened
again to me with write caching disabled.
On Tue, Jan 8, 2019 at 11:15 AM Sage Weil wrote:
>
> I've seen this on luminous, but not on mimic. Can you generate
Not to mention that the current released version of mimic (.2) has a
bug that is potentially catastrophic to cephfs, known about for
months, yet it's not in the release notes. I would have upgraded and
destroyed data had I not caught a thread on this list.
Hopefully crowing like this isn't coming off
From what I've heard, xfs has problems on ARM. Use btrfs, or (I
believe?) ext4+bluestore will work.
On Sun, Mar 11, 2018 at 9:49 PM, Christian Wuerdig
wrote:
> Hm, so you're running OSD nodes with 2GB of RAM and 2x10TB = 20TB of
> storage? Literally everything posted on this list in relation to H
Er, yeah, I didn't read before I replied. That's fair, though it is
only some of the integration test binaries that tax that limit in a
single compile step.
On Mon, Dec 18, 2017 at 4:52 PM, Peter Woodman wrote:
> not the larger "intensive" instance types! they go up to 128
> their 32 bit ARM systems have the same 2 GB limit. I haven't tried the
> cross-compile on the 64 bit ARMv8 they offer and that might be easier than
> trying to do it on x86_64.
>
>> On Dec 18, 2017, at 4:41 PM, Peter Woodman wrote:
>>
>> https://www.scaleway.com/
>>
On Mon, Dec 18, 2017 at 4:38 PM, Andrew Knapp wrote:
> I have no idea what this response means.
>
> I have tried building the armhf and arm64 package on my raspberry pi 3 to
> no avail. Would love to see someone post Debian packages for stretch on
> arm64 or armhf.
>
> On Dec 18, 2017 4:12
YMMV, but I've been using Scaleway instances to build packages for
arm64. AFAIK you should be able to run any armhf distro on those
machines as well.
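The build itself is just the stock package build from the ceph source
tree; from memory it's roughly this (check the current docs in case it
has changed):

  git clone --recursive https://github.com/ceph/ceph.git
  cd ceph
  ./install-deps.sh
  ./make-debs.sh    # or dpkg-buildpackage -us -uc

Plan on it taking a while and eating a lot of RAM and disk.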
On Mon, Dec 18, 2017 at 4:02 PM, Andrew Knapp wrote:
> I would also love to see these packages!!!
>
> On Dec 18, 2017 3:46 PM, "Ean Price" wrote:
IIRC there was a bug related to bluestore compression fixed between
12.2.1 and 12.2.2
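If you're not sure which point release the daemons are actually running,
these will tell you ('ceph versions' exists from luminous on; osd.0 is
just an example):

  ceph versions
  ceph daemon osd.0 version    # run on the host that has that OSD's admin socket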
On Sun, Dec 10, 2017 at 5:04 PM, Martin Preuss wrote:
> Hi,
>
>
> On 10.12.2017 at 22:06, Peter Woodman wrote:
>> Are you using bluestore compression?
> [...]
>
> As a matter of fact
Are you using bluestore compression?
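You can check with something like the following; pool-level settings and
the OSD-wide defaults are separate, so look at both (<pool> and osd.0 are
placeholders, and 'ceph daemon' has to run on the host with that OSD):

  ceph osd pool get <pool> compression_mode
  ceph osd pool get <pool> compression_algorithm
  ceph daemon osd.0 config show | grep bluestore_compression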
On Sun, Dec 10, 2017 at 1:45 PM, Martin Preuss wrote:
> Hi (again),
>
> meanwhile I tried
>
> "ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0"
>
> but that resulted in a segfault (please see attached console log).
>
>
> Regards
> Martin
>
>
> Am 10.1
I've had some success in this configuration by cutting the bluestore
cache size down to 512MB and running only one OSD on an 8TB drive. I
still get occasional OOMs, but it's not terrible. Don't expect
wonderful performance, though.
Two OSDs would really be pushing it.
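For anyone wanting to try the same, the knob I mean is
bluestore_cache_size in ceph.conf (value in bytes, and the OSDs need a
restart to pick it up):

  [osd]
      bluestore_cache_size = 536870912    # 512MB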
On Sun, Dec 10, 2017 at 10:05 AM, David Tur
How quickly are you planning to cut 12.2.3?
On Thu, Nov 30, 2017 at 4:25 PM, Alfredo Deza wrote:
> Thanks all for your feedback on deprecating ceph-disk, we are very
> excited to be able to move forwards on a much more robust tool and
> process for deploying and handling activation of OSDs, remov
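Not part of the announcement, but for anyone wondering what replaces the
old ceph-disk prepare/activate workflow, the ceph-volume equivalent is
roughly (/dev/sdb is a placeholder):

  ceph-volume lvm create --bluestore --data /dev/sdb
  # or as two steps:
  ceph-volume lvm prepare --bluestore --data /dev/sdb
  ceph-volume lvm activate <osd-id> <osd-fsid>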