[ceph-users] Network redundancy pros and cons, best practice, suggestions?

2015-04-13 Thread Götz Reinicke - IT Koordinator
(single 10Gb). I know: redundancy keeps some headaches small, but it also adds complexity and increases the budget (more network adapters, other servers, more switches, etc.). So what would you suggest, what are your experiences? Thanks for any suggestions and feedback. Regards, Götz
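
One common way to get link redundancy without a huge budget jump is an LACP bond over two 10Gb ports. A minimal sketch with iproute2, assuming placeholder interface names (ens1f0/ens1f1) and an example address; the switch side would need 802.3ad configured as well:

    # create an 802.3ad (LACP) bond from two 10Gb ports
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set ens1f0 down && ip link set ens1f0 master bond0
    ip link set ens1f1 down && ip link set ens1f1 master bond0
    ip link set bond0 up
    ip addr add 10.10.10.11/24 dev bond0   # example address only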

Re: [ceph-users] Network redundancy pros and cons, best practice, suggestions?

2015-04-13 Thread Götz Reinicke - IT Koordinator
> From HP, Intel, Supermicro etc. reference documentation, they > usually use a non-redundant network connection (single 10Gb). > > I know: redundancy keeps some headaches small, but also adds some more > complexity and increases the budget (more network adapters, other >

Re: [ceph-users] Network redundancy pros and cons, best practice, suggestions?

2015-04-20 Thread Götz Reinicke - IT Koordinator
Hi Christian, On 13.04.15 at 12:54, Christian Balzer wrote: > > Hello, > > On Mon, 13 Apr 2015 11:03:24 +0200 Götz Reinicke - IT Koordinator wrote: > >> Dear ceph users, >> >> we are planning a ceph storage cluster from scratch. Might be up to 1 PB >> with

[ceph-users] Some more numbers - CPU/Memory suggestions for OSDs and Monitors

2015-04-22 Thread Götz Reinicke - IT Koordinator
Regards . Götz

[ceph-users] inktank configuration guides are gone?

2015-04-22 Thread Götz Reinicke - IT Koordinator
Hi, here I saw some links that sound interesting to me regarding hardware planning: https://ceph.com/category/resources/ The links redirect to Red Hat, and I can't find the content. Maybe someone has a newer guide? I found one from 2013 as a PDF. Regards and thanks, Götz

[ceph-users] One more thing. Journal or not to journal or DB-what? Status?

2015-04-23 Thread Götz Reinicke - IT Koordinator
there is a roadmap on the progress? We hope to reduce the system's complexity (dedicated journal SSDs) with that. http://tracker.ceph.com/issues/11028 says "LMDB key/value backend for Ceph" was 70% done 15 days ago. Kowtow, kowtow and thanks, Götz

[ceph-users] capacity planning with SSD Cache Pool Tiering

2015-05-05 Thread Götz Reinicke - IT Koordinator
is not calculated into the overall usable space. It is a "cache". E.g. the slow pool is 100 TB, the SSD cache 10 TB; I don't have 110 TB all in all? True? Or am I wrong? As always thanks a lot and regards! Götz
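
For reference: the cache pool is layered on top of the backing pool rather than added to it, so the usable capacity stays that of the backing pool. A hedged sketch of the usual tiering commands, with pool names as placeholders:

    ceph osd tier add cold-pool hot-cache             # attach cache pool to the backing pool
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-pool hot-cache     # clients keep addressing the backing pool
    ceph osd pool set hot-cache target_max_bytes 10995116277760   # cap the cache at ~10 TB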

Re: [ceph-users] capacity planning with SSD Cache Pool Tiering

2015-05-06 Thread Götz Reinicke - IT Koordinator
. This only means > that the files you'd want cached will have to be pulled back in after > that and you may lose the performance advantage for a little while after > each backup. > > Hope that helps, don't hesitate with further inquiries! > > > Marc

[ceph-users] How to backup hundreds or thousands of TB

2015-05-06 Thread Götz Reinicke - IT Koordinator
can handle such volumes nicely? Thanks and regards . Götz

[ceph-users] Dataflow/path Client <---> OSD

2015-05-07 Thread Götz Reinicke - IT Koordinator
through the monitors as well. The point is: if we connect our file servers and OSD nodes with 40Gb, does the monitor need 40Gb too? Or would 10Gb be "enough"? Oversize is ok :) ... Thanks and regards, Götz
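
Clients only fetch the cluster maps from the monitors and then compute object placement themselves via CRUSH, so the bulk data goes directly to the OSDs and the monitor link carries comparatively little traffic. The client-side placement lookup can be seen with (pool and object name are examples):

    ceph osd map rbd my-test-object
    # -> shows the PG and the acting set of OSDs the client would talk to directly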

[ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-12 Thread Götz Reinicke - IT Koordinator
ne servers, but space could be a bit of a problem currently. What do you think? Regards, Götz

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-12 Thread Götz Reinicke - IT Koordinator
s to > go with one dedicated MON that will be the primary (lowest IP) 99.8% of > the time and 4 OSDs with MONs on them. If you want to feel extra good > about this, give those OSDs a bit more CPU/RAM and most of all fast SSDs > for the OS (/var/lib/ceph). > > Christian > >

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-13 Thread Götz Reinicke - IT Koordinator
one blade chassis? > > Jake > > On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator > <goetz.reini...@filmakademie.de> > wrote: > > Hi Christian, > > currently we do get good discounts as a university and the bundles were > worth it.

[ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-08 Thread Götz Reinicke - IT Koordinator
Thanks as always for your feedback . Götz

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread Götz Reinicke - IT Koordinator
Hi Christian, On 09.07.15 at 09:36, Christian Balzer wrote: > > Hello, > > On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote: > >> Hi again, >> >> time is passing, so is my budget :-/ and I have to recheck the options >> for a "

[ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-08-25 Thread Götz Reinicke - IT Koordinator
Hi, most of the time I get the recommendation from resellers to go with the Intel S3700 for the journalling. Now I got an offer for systems with MLC 240 GB SATA Samsung 843T. A quick search on Google shows that that SSD is not as good as the Intel, but still good, server grade, 24/7 etc. and not
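
One widely used way to judge whether an SSD is suitable as a journal device is a single-threaded synchronous 4k write test with fio; a sketch (the device name is a placeholder, and the test overwrites data on that device):

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting

Drives that keep high IOPS under this workload tend to behave well as journals; consumer SSDs often collapse here even if their datasheet numbers look good.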

[ceph-users] network failover with public/cluster network - is that possible

2015-11-25 Thread Götz Reinicke - IT Koordinator
Hi, discussing some design questions we came across the failover possibilities of Ceph's network configuration. If I just have a public network, all traffic crosses that LAN. With a public and a cluster network I can separate the traffic and get some benefits. What if one of the networks fails? e.g
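
For context, the public/cluster split is configured in ceph.conf; a minimal sketch with example subnets (the addresses are placeholders):

    # /etc/ceph/ceph.conf excerpt
    [global]
        public network  = 192.168.10.0/24
        cluster network = 192.168.20.0/24

As far as I know there is no automatic fallback from one network to the other in the configuration itself; redundancy is usually handled at the link level (e.g. bonding) instead.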

[ceph-users] Quick short survey which SSDs

2016-07-05 Thread Götz Reinicke - IT Koordinator
Hi, we have offers for ceph storage nodes with different SSD types; some are already mentioned as a very good choice, but some are totally new to me. Maybe you could give some feedback on the SSDs in question, or just briefly say which you primarily use? Regarding the three disk in

[ceph-users] 40Gb fileserver/NIC suggestions

2016-07-12 Thread Götz Reinicke - IT Koordinator
Hi, can anybody give some real-world feedback on what hardware (CPU/cores/NIC) you use for a 40Gb (file)server (smb and nfs)? The Ceph cluster will be mostly rbd images. S3 in the future, CephFS we will see :) Thanks for some feedback and hints! Regards, Götz

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 11:47, Wido den Hollander wrote: >> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator >> wrote: >> >> >> Hi, >> >> can anybody give some real-world feedback on what hardware >> (CPU/Cores/NIC) you use for a 40Gb (file)server (

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 14:27, Wido den Hollander wrote: >> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator >> wrote: >> >> >> On 13.07.16 at 11:47, Wido den Hollander wrote: >>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator >&

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 14:59, Joe Landman wrote: > > > On 07/13/2016 08:41 AM, c...@jack.fr.eu.org wrote: >> 40Gbps can be used as 4*10Gbps >> >> I guess welcome feedback should not be limited to "usage of a 40Gbps >> port", but extended to "usage of more than a single 10Gbps port, e.g. >> 20Gbps etc. too" >

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-14 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 17:44, David wrote: > Aside from the 10GbE vs 40GbE question, if you're planning to export > an RBD image over smb/nfs I think you are going to struggle to reach > anywhere near 1GB/s in a single-threaded read. This is because even > with readahead cranked right up you're still only
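
If you want to experiment with "cranking up" readahead on a mapped RBD device, a sketch (the device name is a placeholder):

    blockdev --getra /dev/rbd0               # current readahead, in 512-byte sectors
    blockdev --setra 16384 /dev/rbd0         # raise it (here: 8 MiB)
    # or via sysfs, value in KiB:
    echo 8192 > /sys/block/rbd0/queue/read_ahead_kb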

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-14 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 17:08, c...@jack.fr.eu.org wrote: > I am using these for other stuff: > http://www.supermicro.com/products/accessories/addon/AOC-STG-b4S.cfm > > If you want a NIC, also think of the "network side": SFP+ switches are very > common, 40G is less common, 25G is really new (= really few pro

[ceph-users] thoughts about Cache Tier Levels

2016-07-20 Thread Götz Reinicke - IT Koordinator
Hi, currently there are two levels I know of: storage and cache pool. From our workload I expect a third "level" of data, which would currently stay in the storage pool as well. Has anyone, like us, been thinking of data which could be moved even deeper in that tiering, e.g. have an SSD cache, fast lo

[ceph-users] Degraded Cluster, some OSDs dont get mounted, dmesg confusion

2017-07-03 Thread Götz Reinicke - IT Koordinator
Hi, we have a 144 OSD, 6 node ceph cluster with some pools (2x replicated and EC). Today I did a Ceph (10.2.5 -> 10.2.7) and kernel update and rebooted two nodes. On both nodes some OSDs don't get mounted, and on one node I see dmesg messages like: attempt to access beyond end of device. Currently the Clust
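
A few commands that help narrow down which OSDs are affected before touching the disks (jewel-era tooling, i.e. ceph-disk; a hedged sketch, not a fix for the dmesg errors themselves):

    ceph health detail           # which PGs/OSDs are degraded
    ceph osd tree | grep down    # which OSDs did not come back after the reboot
    ceph-disk list               # partitions ceph-disk knows about on this node
    ceph-disk activate-all       # try to (re)mount and start the prepared OSDs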

Re: [ceph-users] Installing ceph on Centos 7.3

2017-07-18 Thread Götz Reinicke - IT Koordinator
Hi, On 18.07.17 at 10:51, Brian Wallis wrote: > I’m failing to get an install of ceph to work on a new CentOS 7.3.1611 > server. I’m following the instructions > at http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to no > avail. > > First question, is it possible to install ceph on Cent

Re: [ceph-users] XFS attempt to access beyond end of device

2017-08-24 Thread Götz Reinicke - IT Koordinator
Hi, On 28.07.17 at 04:06, Brad Hubbard wrote: > An update on this. > > The "attempt to access beyond end of device" messages are created due to a > kernel bug which is rectified by the following patches. > > - 59d43914ed7b9625 (vfs: make guard_bh_eod() more generic) > - 4db96b71e3caea (vfs: gua

Re: [ceph-users] rbd pool:replica size choose: 2 vs 3

2016-09-23 Thread Götz Reinicke - IT Koordinator
Hi, On 23.09.16 at 05:55, Zhongyan Gu wrote: > Hi there, > the default rbd pool replica size is 3. However, I found that in our > all-SSD environment, capacity becomes a cost issue. We want to save > more capacity. So one option is to change the replica size from 3 to 2. > Can anyone share the experi
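
For reference, size and min_size are per-pool settings; a sketch of how they are changed (the pool name is an example):

    ceph osd pool set rbd size 3       # number of replicas kept
    ceph osd pool set rbd min_size 2   # replicas required to still accept I/O

With size 2 a single failed OSD leaves only one copy, which is why size 3 is the usual recommendation despite the capacity cost.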

[ceph-users] where is what in use ...

2016-12-07 Thread Götz Reinicke - IT Koordinator
Hi, I started to play with our Ceph cluster, created some pools and RBDs and did some performance tests. Currently I'm trying to understand and interpret the different outputs of ceph -s, rados df etc. So far so good, so nice. Now I was cleaning up (rbd rm ...) and still see some space used on t
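
Some of the commands typically used to see where the space actually sits after a cleanup (the pool name is an example); note that deletions are processed asynchronously, so usage can lag behind for a while:

    ceph df detail
    rados df
    rados -p rbd ls | head           # any leftover rbd_data.* objects?
    ceph osd pool get rbd size       # replication factor, to relate raw to usable space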

[ceph-users] suggestions on / how to update OS and Ceph in general

2017-01-09 Thread Götz Reinicke - IT Koordinator
Hi, we have a 6 node Ceph 10.2.3 cluster on CentOS 7.2 servers, currently not hosting any rbds or anything else. MONs are on the OSD nodes. My question: as CentOS 7.3 has been out for some time now and there is a Ceph update to 10.2.5 available, what would be a good or the best path to update everythi
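
A rough, hedged outline of the rolling-update pattern that is usually recommended, one node at a time (CentOS/jewel commands; MONs are normally updated before OSDs across the cluster):

    ceph osd set noout                   # don't rebalance while a node is down
    yum update ceph                      # or a full 'yum update' including CentOS 7.3
    systemctl restart ceph-mon.target    # if the node carries a MON
    systemctl restart ceph-osd.target
    ceph -s                              # wait for HEALTH_OK / all PGs active+clean
    ceph osd unset noout                 # after the last node is done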

Re: [ceph-users] Jewel v10.2.6 released

2017-03-10 Thread Götz Reinicke - IT Koordinator
Hi, On 08.03.17 at 13:11, Abhishek L wrote: This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog[1] and the release notes[2]. I hope you can

[ceph-users] At what point are objects removed?

2017-03-28 Thread Götz Reinicke - IT Koordinator
Hi, maybe I got something wrong or haven't fully understood it yet. I have some pools and created some test rbd images which are mounted on a Samba server. After the test I deleted all files on the Samba server. But "ceph df detail" and "ceph -s" still show used space. The OSDs
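
Deleting files inside a filesystem on an RBD image does not by itself release the underlying RADOS objects; the freed blocks have to be discarded first. A hedged sketch (the mount point is an example, and krbd/filesystem discard support is assumed):

    fstrim /mnt/rbd-share        # discard freed blocks once
    # or mount with '-o discard' so the filesystem trims continuously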

[ceph-users] SSD OSDs - more Cores or more GHz

2016-01-20 Thread Götz Reinicke - IT Koordinator
, I can give some more detailed information on the layout. Thanks for feedback, Götz

Re: [ceph-users] SSD OSDs - more Cores or more GHz

2016-01-20 Thread Götz Reinicke - IT Koordinator
On 20.01.16 at 11:30, Christian Balzer wrote: > > Hello, > > On Wed, 20 Jan 2016 10:01:19 +0100 Götz Reinicke - IT Koordinator wrote: > >> Hi folks, >> >> we plan to use more SSD OSDs in our first cluster layout instead of SAS >> OSDs. (more IO is needed

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Götz Reinicke - IT Koordinator
On 08.02.16 at 20:09, Robert LeBlanc wrote: > Too bad K isn't an LTS. It would be fun to release the Kraken many times. +1 :) https://www.youtube.com/watch?v=_lN2auTVavw Cheers, Götz