Hi,
I'm new to Ceph. I started a Ceph cluster from scratch on Debian 9,
consisting of 3 hosts; each host has 3-4 OSDs (using 4 TB HDDs,
currently totalling 10 HDDs).
Right from the start I have been receiving random scrub errors telling me
that some checksums didn't match the expected value, fixable with [...]
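For reference, the usual flow for locating and repairing such scrub
inconsistencies looks roughly like this (the PG id is a placeholder):

  ceph health detail | grep inconsistent
  ceph pg repair <pgid>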
The docs recommend 1 GB of RAM per TB of OSD storage. I saw people asking
if this was still accurate for bluestore, and the answer was that it is
more true for bluestore than filestore. There might be a way to get this
working at the cost of performance. I would look at Linux kernel memory
settings as much as Ceph's, and [...]
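Before tuning either, it helps to see where an OSD's memory actually
goes; the per-component breakdown is available over the admin socket,
e.g. (osd.0 as an example):

  ceph daemon osd.0 dump_mempools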
Given that servers with 64 CPU cores (128 threads @ 2.7GHz) and up to 2TB
RAM - as well as 12TB HDDs - are easily available and somewhat reasonably
priced, I wonder what the maximum number of OSDs per OSD server is (if
using 10TB or 12TB HDDs) and how much RAM it really requires if total
storage [...]
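As a rough worked example of the 1 GB RAM per TB rule of thumb: one 12 TB
OSD wants about 12 GB of RAM, so even a 2 TB (2048 GB) server tops out
around 2048 / 12 ≈ 170 such OSDs on memory alone, before CPU, networking
and failure-domain considerations.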
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Igor
Mendelev
Sent: 10 December 2017 15:39
To: ceph-users@lists.ceph.com
Subject: [ceph-users] what's the maximum number of OSDs per OSD server?
Given that servers with 64 CPU cores (128 threads @ 2.7GHz) and up to 2TB RAM [...]
Expected number of nodes for the initial setup is 10-15, and of OSDs,
1,500-2,000.
Networking is planned to be 2x 100GbE or 2 dual 50GbE in x16 slots (per OSD
node).
JBODs are to be connected with 3-4 x8 SAS3 HBAs (four 4-lane SAS3 ports
each).
Choice of hardware is done considering (non-trivial) per-server [...]
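For scale: 1,500-2,000 OSDs spread over 10-15 nodes works out to roughly
100-200 OSDs per node, i.e. on the order of 1.2-2.4 PB raw per node with
12 TB drives.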
Another option is to utilize the iSCSI gateway, provided in 12.2:
http://docs.ceph.com/docs/master/rbd/iscsi-overview/
Benefits:
You can EOL your old SAN without having to simultaneously migrate to another
hypervisor.
Any infrastructure that ties i[...]
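If it helps, a minimal gwcli sketch for exposing an RBD image through the
gateway looks something like this (the target IQN, pool and image names
are made up):

  gwcli
  /> cd /iscsi-target
  /iscsi-target> create iqn.2003-01.com.example.iscsi-gw:iscsi-igw
  /> cd /disks
  /disks> create pool=rbd image=disk_1 size=90G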
Hi (again),
meanwhile I tried
"ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0"
but that resulted in a segfault (please see attached console log).
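In case it helps with the report, running the tool under gdb (with debug
symbols installed) should give a usable backtrace:

  gdb --args ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
  (gdb) run
  (gdb) bt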
Regards
Martin
On 10.12.2017 at 14:34 Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on Debian [...]
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Igor
Mendelev
Sent: 10 December 2017 17:37
To: n...@fisk.me.uk; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] what's the maximum number of OSDs per OSD server?
Expected number of nodes for initial setup is 10-15 and [...]
I've had some success in this configuration by cutting the bluestore
cache size down to 512 MB and running only one OSD on an 8 TB drive. I
still get occasional OOMs, but it's not terrible. Don't expect wonderful
performance, though.
Two OSDs would really be pushing it.
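For reference, that kind of cut looks something like this in ceph.conf
(the value mirrors what's described above; not a recommendation):

  [osd]
  bluestore cache size hdd = 536870912  # 512 MB, down from the 1 GB default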
On Sun, Dec 10, 2017 at 10:05 AM, David Tur[...]
Are you using bluestore compression?
On Sun, Dec 10, 2017 at 1:45 PM, Martin Preuss wrote:
> Hi (again),
>
> meanwhile I tried
>
> "ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0"
>
> but that resulted in a segfault (please see attached console log).
>
>
> Regards
> Martin
>
>
> On 10.12.2017 at 14:34 Martin Preuss wrote: [...]
Hi,
On 10.12.2017 at 22:06 Peter Woodman wrote:
> Are you using bluestore compression?
[...]
As a matter of fact, I do, at least for one of the 5 pools, used
exclusively with CephFS (I'm using CephFS as a way to achieve high
availability while replacing an NFS server).
However, I see these ch[...]
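A quick way to check (and, while debugging, disable) compression per
pool, with <pool> as a placeholder:

  ceph osd pool get <pool> compression_mode
  ceph osd pool set <pool> compression_mode none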
IIRC there was a bug related to bluestore compression fixed between
12.2.1 and 12.2.2
On Sun, Dec 10, 2017 at 5:04 PM, Martin Preuss wrote:
> Hi,
>
>
> On 10.12.2017 at 22:06 Peter Woodman wrote:
>> Are you using bluestore compression?
> [...]
>
> As a matter of fact, I do. At least for one of [...]
Can you open a ticket with the exact version of your Ceph cluster?
http://tracker.ceph.com
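For the ticket, the cluster-wide daemon versions can be pulled with:

  ceph versions

(or ceph -v on a single node).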
Thanks,
On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on Debian 9,
> consisting of 3 hosts, each host has 3-4 OSDs (using 4TB hdds, currently [...]
My workload is mainly sequential writes (for surveillance usage). I am not
sure how the cache would affect write performance, or why memory usage
keeps increasing as more data is written into Ceph storage.
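One way to see where the growth actually is (bluestore caches onode
metadata as data lands) is the built-in allocator report, e.g.:

  ceph tell osd.0 heap stats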
2017-12-11
lin.yunfan
From: Peter Woodman
Sent: 2017-12-11 05:04
Subject: Re: [ceph-users] The way t[...]
Created a tracker ticket for this issue: http://tracker.ceph.com/issues/22354
Thanks
Jayaram
On Fri, Dec 8, 2017 at 9:49 PM, nokia ceph wrote:
> Hello Team,
>
> We are aware that ceph-disk is deprecated in 12.2.2. As part of my
> testing, I can still use the ceph-disk utility for creating OSDs [...]
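For anyone following along, the ceph-volume equivalent for a plain
bluestore OSD is roughly (the device name is a placeholder):

  ceph-volume lvm create --bluestore --data /dev/sdX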
Hello,
We are currently doing some evaluations on a few storage technologies, and
Ceph has made it onto our short list, but the issue is we haven't been able
to evaluate it, as I can't seem to get it to deploy.
Before I spend the time spreading it across some hardware and purchasing
the product [...]
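In case it's useful, the quickest evaluation path is usually ceph-deploy;
a minimal sketch with placeholder hostnames (OSD creation syntax varies
by ceph-deploy version):

  ceph-deploy new mon1
  ceph-deploy install mon1 osd1 osd2 osd3
  ceph-deploy mon create-initial
  ceph-deploy admin mon1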