From the HP, Intel, Supermicro etc. reference documentation, they usually
use a non-redundant network connection (single 10Gb).
I know: redundancy keeps some headaches small, but it also adds
complexity and increases the budget (additional network adapters, another
server, more switches, etc.).
So what would you suggest? What are your experiences?
Thanks for any suggestions and feedback . Regards . Götz
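For what it's worth, the redundant variant does not have to be exotic: an LACP (802.3ad) bond of two 10Gb ports already covers NIC and cable failure, and switch failure too if the switches support MLAG/stacking. A minimal sketch for a Debian-style /etc/network/interfaces; interface names and addresses are made up:

    auto bond0
    iface bond0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1       # the two 10Gb ports
        bond-mode 802.3ad                   # LACP, needs a matching port-channel on the switch
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

The matching switch-side configuration is exactly the extra complexity (and budget) mentioned above.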
> From the HP, Intel, Supermicro etc. reference documentation, they usually
> use a non-redundant network connection (single 10Gb).
>
> I know: redundancy keeps some headaches small, but it also adds
> complexity and increases the budget (additional network adapters, another
>
Hi Christian,
On 13.04.15 at 12:54, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 13 Apr 2015 11:03:24 +0200 Götz Reinicke - IT Koordinator wrote:
>
>> Dear ceph users,
>>
>> we are planning a ceph storage cluster from scratch. Might be up to 1 PB
>> with
Regards . Götz
Hi,
here I saw some links that sounded interesting to me regarding hardware
planning: https://ceph.com/category/resources/
The links redirect to Red Hat, and I can't find the content.
Maybe someone has a newer guide? I found one from 2013 as a PDF.
Regards and thanks . Götz
…is there a roadmap on
the progress?
We hope to reduce the system's complexity (dedicated journal SSDs) with that.
http://tracker.ceph.com/issues/11028 says "LMDB key/value backend for
Ceph" was 70% done as of 15 days ago.
Kowtow, kowtow and thanks . Götz
…so the SSD cache is not calculated into the overall
usable space. It is a "cache".
E.g. if the slow pool is 100 TB and the SSD cache is 10 TB, I don't have
110 TB all in all?
Is that true, or am I wrong?
As always thanks a lot and regards! Götz
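As far as I understand, that matches how a cache tier is normally set up: the cache pool is attached in front of the base pool and capped in size, it does not add usable capacity. A rough sketch with made-up pool names (hot-storage on the SSDs, cold-storage on the slow disks):

    ceph osd tier add cold-storage hot-storage           # attach the cache pool to the base pool
    ceph osd tier cache-mode hot-storage writeback
    ceph osd tier set-overlay cold-storage hot-storage   # clients keep addressing cold-storage
    ceph osd pool set hot-storage target_max_bytes 10000000000000   # cap the cache at ~10 TB

So with a 100 TB base pool and a 10 TB cache you still have 100 TB of usable space; the cache only ever holds hot copies of base-pool data.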
. This only means
> that the files you'd want cached will have to be pulled back in after
> that and you may lose the performance advantage for a little while after
> each backup.
>
> Hope that helps, don't hesitate with further inquiries!
>
>
> Marc
…can handle such volumes nicely?
Thanks and regards . Götz
…through the monitors as well.
The point is: if we connect our file servers and OSD nodes with 40Gb,
does the monitor need 40Gb too? Or would 10Gb be "enough"?
Oversizing is OK :) ...
Thanks and regards . Götz
…ne servers, but space
could be a bit of a problem currently.
What do you think?
Regards . Götz
> …is to
> go with one dedicated MON that will be the primary (lowest IP) 99.8% of
> the time and 4 OSDs with MONs on them. If you want to feel extra good
> about this, give those OSDs a bit more CPU/RAM and most of all fast SSDs
> for the OS (/var/lib/ceph).
>
> Christian
>
>
…one blade chassis?
>
> Jake
>
> On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator
> <goetz.reini...@filmakademie.de>
> wrote:
>
> Hi Christian,
>
> currently we do get good discounts as a University and the bundles were
> worth it.
Thanks as always for your feedback . Götz
Hi Christian,
On 09.07.15 at 09:36, Christian Balzer wrote:
>
> Hello,
>
> On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote:
>
>> Hi again,
>>
>> time is passing, so is my budget :-/ and I have to recheck the options
>> for a "
Hi,
most of the time I get the recommendation from resellers to go with
the Intel S3700 for the journaling.
Now I got an offer for systems with MLC 240 GB SATA Samsung 843T.
A quick search on Google shows that this SSD is not as good as the
Intel, but still a good, server-grade 24/7 drive, and not
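Whichever drive it ends up being, the usual advice is to benchmark the candidate for synchronous 4k writes before buying in bulk, because that is the journal's workload. A sketch, assuming the test SSD is /dev/sdX and holds no data (this write test destroys its contents):

    fio --name=journal-test --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives that sustain high O_DSYNC 4k write IOPS here tend to do fine as journals; drives that only look good in ordinary benchmarks can collapse under this pattern.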
Hi,
discussing some design questions, we came across the failover possibilities
of Ceph's network configuration.
If I just have a public network, all traffic crosses that LAN.
With a public and a cluster network I can separate the traffic and get some
benefits.
What if one of the networks fails? E.g.
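For context, the split is configured in ceph.conf roughly like below (the subnets are made up); OSDs talk to each other over the cluster network for replication and recovery, while everything client- and MON-facing stays on the public network:

    [global]
    public network  = 192.168.10.0/24
    cluster network = 192.168.20.0/24

As far as I understand, losing only the cluster network is not a graceful failover by itself: OSDs may start reporting each other down even though clients can still reach them, so the failure modes of the two networks are worth thinking through separately.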
Hi,
we have offers for ceph storage nodes with different SSD types; some
are already mentioned as a very good choice, but some are totally new to me.
Maybe you could give some feedback on the SSDs in question, or just
briefly mention which ones you primarily use?
Regarding the three disks in
Hi,
can anybody give some real-world feedback on what hardware
(CPU/cores/NIC) you use for a 40Gb (file) server (smb and nfs)? The Ceph
cluster will be mostly rbd images. S3 in the future, CephFS we will see :)
Thanks for some feedback and hints! Regards . Götz
On 13.07.16 at 11:47, Wido den Hollander wrote:
>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator wrote:
>>
>>
>> Hi,
>>
>> can anybody give some real-world feedback on what hardware
>> (CPU/cores/NIC) you use for a 40Gb (file) server (
On 13.07.16 at 14:27, Wido den Hollander wrote:
>> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator wrote:
>>
>>
>> On 13.07.16 at 11:47, Wido den Hollander wrote:
>>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator wrote:
On 13.07.16 at 14:59, Joe Landman wrote:
>
>
> On 07/13/2016 08:41 AM, c...@jack.fr.eu.org wrote:
>> 40Gbps can be used as 4*10Gbps
>>
>> I guess the requested feedback should not be limited to "usage of a 40Gbps
>> port", but extended to "usage of more than a single 10Gbps port, e.g.
>> 20Gbps etc. too"
>
On 13.07.16 at 17:44, David wrote:
> Aside from the 10GbE vs 40GbE question, if you're planning to export
> an RBD image over smb/nfs I think you are going to struggle to reach
> anywhere near 1GB/s in a single threaded read. This is because even
> with readahead cranked right up you're still only
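On a kernel-mapped RBD image the readahead David mentions is a per-device setting, roughly like this (device name is an assumption); even cranked up, a single-threaded reader is still limited by per-object round trips:

    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb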
On 13.07.16 at 17:08, c...@jack.fr.eu.org wrote:
> I am using these for other stuff:
> http://www.supermicro.com/products/accessories/addon/AOC-STG-b4S.cfm
>
> If you want such a NIC, also think of the "network side": SFP+ switches are
> very common, 40G is less common, 25G is really new (= really few pro
Hi,
currently there are two levels I know of: storage pool and cache pool. From
our workload I expect a third "level" of data, which would currently stay
in the storage pool as well.
Has anyone, like us, been thinking of data which could be moved even deeper
in that tiering, e.g. have an SSD cache, fast lo
Hi,
we have a 144 OSD, 6 node ceph cluster with some pools (2x replicated and EC).
Today I did a Ceph (10.2.5 -> 10.2.7) and kernel update and rebooted
two nodes.
On both nodes some OSDs don't get mounted, and on one node I get dmesg
messages like:
attempt to access beyond end of device
Currently the clust
Hi,
On 18.07.17 at 10:51, Brian Wallis wrote:
> I’m failing to get an install of ceph to work on a new Centos 7.3.1611
> server. I’m following the instructions
> at http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to no
> avail.
>
> First question, is it possible to install ceph on Cent
Hi,
On 28.07.17 at 04:06, Brad Hubbard wrote:
> An update on this.
>
> The "attempt to access beyond end of device" messages are created due to a
> kernel bug which is rectified by the following patches.
>
> - 59d43914ed7b9625(vfs: make guard_bh_eod() more generic)
> - 4db96b71e3caea(vfs: gua
Hi,
On 23.09.16 at 05:55, Zhongyan Gu wrote:
> Hi there,
> the default rbd pool replica size is 3. However, I found that in our
> all ssd environment, capacity becomes a cost issue. We want to save
> more capacity. So one option is to change the replica size from 3 to 2.
> Can anyone share the experi
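The change itself is a one-liner per pool (pool name "rbd" assumed here); the harder question is the risk:

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1   # lets writes continue while only one copy is up

With size=2 and min_size=1, any time a PG is down to a single copy, new writes exist on that copy only, which is why 2x replication is usually discouraged on this list for data you care about.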
Hi,
I started to play with our Ceph cluster, created some pools and rbds
and did some performance tests. Currently I'm trying to understand and
interpret the different outputs of ceph -s, rados df, etc.
So far, so good, so nice.
Now I was cleaning up (rbd rm ...) and still see some space used on t
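A quick way to see whether the image's objects are actually gone (pool name "rbd" assumed; data objects of format-2 images carry the rbd_data. prefix):

    rados -p rbd ls | grep '^rbd_data\.' | head
    rados df            # per-pool object counts and raw usage
    ceph df detail      # cluster-wide and per-pool view

The usage counters also lag a little behind deletions, so a small leftover can simply disappear again after a few minutes.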
Hi,
we have a 6 node ceph 10.2.3 cluster on CentOS 7.2 servers, currently not
hosting any rbds or anything else. MONs are on the OSD nodes.
My question: as CentOS 7.3 has been out for some time now and there is a
ceph update to 10.2.5 available, what would be a good or the best path
to update everythi
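A sketch of the usual order — MONs before OSDs, one node at a time; the exact package commands depend on your repositories, so take this as an assumption rather than a recipe:

    ceph osd set noout                    # avoid rebalancing while nodes reboot
    # per node:
    yum update                            # CentOS 7.3 plus ceph 10.2.5
    systemctl restart ceph-mon.target     # if the node also runs a MON
    systemctl restart ceph-osd.target     # or simply reboot for the new kernel
    # wait until all PGs are active+clean again, then do the next node
    ceph osd unset noout                  # once every node is done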
Hi,
On 08.03.17 at 13:11, Abhishek L wrote:
This point release fixes several important bugs in RBD mirroring, RGW
multi-site, CephFS, and RADOS.
We recommend that all v10.2.x users upgrade.
For more detailed information, see the complete changelog[1] and the release
notes[2]
I hope you can
Hi, maybe I got something wrong or haven't fully understood it yet.
I have some pools and created some test rbd images which are mounted on
a samba server.
After the test I deleted all files on the samba server.
But "ceph df detail" and "ceph -s" still show used space.
The OSDs
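One likely explanation, assuming the images are mapped on the samba host and carry a normal filesystem: deleting files only frees space inside that filesystem; the RBD image (and thus the cluster) only gives the space back once the filesystem issues discards. A sketch with assumed device and mount point:

    fstrim /srv/samba/share                       # one-off discard of unused blocks
    mount -o discard /dev/rbd0 /srv/samba/share   # or continuous discard, at some performance cost

Whether the discards actually reach the cluster depends on how the image is attached (krbd vs. librbd) and on the kernel version, so this is something to verify rather than rely on.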
…I can give some more detailed information on the layout.
Thanks for feedback . Götz
On 20.01.16 at 11:30, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 20 Jan 2016 10:01:19 +0100 Götz Reinicke - IT Koordinator wrote:
>
>> Hi folks,
>>
>> we plan to use more SSD OSDs in our first cluster layout instead of SAS
>> OSDs. (more IO is needed
On 08.02.16 at 20:09, Robert LeBlanc wrote:
> Too bad K isn't an LTS. It would be fun to release the Kraken many times.
+1
:) https://www.youtube.com/watch?v=_lN2auTVavw
cheers . Götz