Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread Götz Reinicke
g continues to plummet. > > Warren > > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Götz > Reinicke - IT Koordinator > Sent: Thursday, July 09, 2015 4:47 AM > To: ceph-users@lists.ceph.com > Subject: Re: [cep

[ceph-users] where is a RBD in use

2017-08-31 Thread Götz Reinicke
Hi, Is it possible to see which clients are using an RBD? … I found an RBD in one of my pools but can't remember if I ever used / mounted it on a client. Thanks for feedback! Regards . Götz ___ ceph-users mailing list ceph-users@lists.ceph.com http://lis
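A minimal sketch of how watchers on an image can be listed (pool "rbd" and image "disk01" are placeholders, rbd status needs a jewel-or-newer client, and the image id in the last command is only an example):

rbd status rbd/disk01                              # lists the watchers (client addresses) of the image
rbd info rbd/disk01 | grep block_name_prefix       # e.g. rbd_data.102674b0dc51 -> image id 102674b0dc51
rados -p rbd listwatchers rbd_header.102674b0dc51  # ask RADOS directly who watches the header object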

[ceph-users] Some OSDs are down after Server reboot

2017-09-14 Thread Götz Reinicke
Hi, maybe someone has a hint: I do have a ceph cluster (6 nodes, 144 OSDs), CentOS 7.3, ceph 10.2.7. I did a kernel update to the recent CentOS 7.3 one on a node and rebooted. After that, 10 OSDs did not come up like the others. The disks did not get mounted and the OSD processes did noth
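A hedged sketch for the ceph-disk era this thread is about (jewel on CentOS); the OSD id below is a placeholder:

ceph-disk list                  # show which partitions belong to which OSD
ceph-disk activate-all          # mount and start any prepared but inactive OSDs
systemctl status ceph-osd@12    # then check an individual OSD daemon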

[ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-25 Thread Götz Reinicke
Hi, I updated our ceph OSD/MON nodes from 10.2.7 to 10.2.9 and everything looks good so far. Now I was wondering (as I may have forgotten how this works) what will happen to an NFS server which has its nfs shares on a ceph rbd? Will the update interrupt any access to the NFS share or is it th

Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-26 Thread Götz Reinicke
> lockups when OSDs went down in the cluster so that's something to watch out > for. > > > On Mon, Sep 25, 2017 at 8:38 AM, Götz Reinicke > mailto:goetz.reini...@filmakademie.de>> > wrote: > Hi, > > I updated our ceph OSD/MON Nodes from 10.2.7 to 10

[ceph-users] Different Ceph versions on OSD/MONs and Clients?

2018-01-05 Thread Götz Reinicke
Hi, our OSDs and MONs run on jewel and CentOS 7. Now I was wondering if an older fileserver with RHEL 6, for which I just found hammer RPMs on the official ceph site, can use RBDs created on the cluster? I think there might be problems with some kernel versions/ceph features. If it is not po

[ceph-users] Suggestion for naming RBDs

2018-01-16 Thread Götz Reinicke
Hi, I was wondering what naming scheme you use for RBDs in different pools. There are no strict rules that I know of, so what might be a best practice? Something like the target service, e.g. fileserver_students or webservers_xen, webservers_vmware? A good naming scheme might be helpful :)

[ceph-users] Shutting down half / full cluster

2018-02-14 Thread Götz Reinicke
Hi, We have some work to do on the power lines for all buildings and we have to shut down all systems. So there is also no traffic on any ceph client. Pity, we have to shut down some ceph nodes in an affected building too. To avoid rebalancing - as I see there is no need for it, as there is no
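A commonly used sketch for a planned shutdown (not taken from the replies in this thread): set the flags that keep Ceph from marking OSDs out and from rebalancing, power the nodes off, and clear the flags once everything is back:

ceph osd set noout
ceph osd set norebalance
ceph osd set norecover
# ... power work, nodes come back, OSDs rejoin ...
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout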

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread Götz Reinicke
ail/ceph-users-ceph.com/2017-April/017378.html> > Kai > > On 02/14/2018 11:06 AM, Götz Reinicke wrote: >> Hi, >> >> We have some work to do on our power lines for all building and we have to >> shut down all systems. So there is also no traffic on any ceph

[ceph-users] OT: Bad Sector Count - suggestions and experiences?

2018-07-09 Thread Götz Reinicke
Hi, I apologize for the OT, but I hope some ceph users with bigger installations have a lot more experience than the users reporting their home problems (NAS with 2 disks … ) that I saw a lot while googling that topic. Luckily we did not have as many hard disk failures as some coworkers, but now with more an

Re: [ceph-users] limited disk slots - should I ran OS on SD card ?

2018-08-15 Thread Götz Reinicke
Hi, > On 15.08.2018 at 15:11, Steven Vacaroaia wrote: > > Thank you all > > Since all concerns were about reliability I am assuming performance impact > of having OS running on SD card is minimal / negligible Some time ago we had some Cisco blades booting VMware ESXi from SD cards and

[ceph-users] Upgrading ceph and mapped rbds

2018-03-28 Thread Götz Reinicke
Hi, I bet I did read it somewhere already, but can’t remember where…. Our ceph 10.2 cluster is fine and healthy and I have a couple of rbds exported to some fileservers and an nfs server. The upgrade documentation for v12.2 is clear regarding upgrading/restarting all MONs first, after that, the O

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
Hi Robert, > On 29.03.2018 at 10:27, Robert Sander wrote: > > On 28.03.2018 11:36, Götz Reinicke wrote: > >> My question is: How to proceed with the servers which map the rbds? > > Do you intend to upgrade the kernels on these RBD clients acting as NFS > servers? &g

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
> On 03.04.2018 at 13:31, Konstantin Shalygin wrote: > >> and true, the VMs have to be shut down/server rebooted > > > Is not necessary. Just migrate VM. Hi, The VMs are XenServer VMs with virtual disks saved on the NFS server which has the RBD mounted … So there is no migration from my PO

[ceph-users] Ceph -s require_jewel_osds pops up and disappears

2017-02-07 Thread Götz Reinicke
Hi, Ceph -s shows require_jewel_osds popping up and disappearing like a direction indicator. I recently did an upgrade from CentOS 7.2 to 7.3 and ceph 10.2.3 to 10.2.5. Maybe I forgot to set an option? I thought I did a „ceph osd set require_jewel_osds“ as described in the release notes https://ceph.com/geen-cat
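For reference, a minimal sketch of how the flag can be checked and (re)set on 10.2.x; once set it should stay in the osdmap flags:

ceph osd dump | grep flags         # should list require_jewel_osds after it has been set
ceph osd set require_jewel_osds    # as in the release notes; requires all OSDs to run jewel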

[ceph-users] To backup or not to backup the classic way - How to backup hundreds of TB?

2017-02-14 Thread Götz Reinicke
Hi, I guess that's a question that pops up in different places, but I could not find an answer which fits my thoughts. Currently we are starting to use ceph for file shares of the films produced by our students and some xen/vmware VMs. The VM data is already backed up; the films' original footage is store

Re: [ceph-users] recommendations for file sharing

2016-02-21 Thread Götz Reinicke
Hi, > On 17.12.2015 at 09:43, Alex Leake wrote: > > Lin, > > Thanks for this! I did not see the ownCloud RADOS implementation. > > I maintain a local ownCloud environment anyway, so this is a really good idea. > > Have you used it? <…> just a quick google

[ceph-users] Suggestions/experiences with mixed disk sizes and models from 4TB - 14TB

2019-01-16 Thread Götz Reinicke
Dear Ceph users, I’d like to get some feedback for the following thought: Currently I run some 24*4TB bluestore OSD nodes. The main focus is on storage space over IOPS. We use erasure code and cephfs, and things look good right now. The „but“ is, I do need more disk space and don’t have so muc

[ceph-users] Resizing an online mounted ext4 on a rbd - failed

2019-01-26 Thread Götz Reinicke
Hi, I have a fileserver which mounted a 4TB rbd, which is ext4 formatted. I grew that rbd and ext4, starting from a 2TB rbd, this way: rbd resize testpool/disk01 --size 4194304 resize2fs /dev/rbd0 Today I wanted to extend that ext4 to 8 TB and did: rbd resize testpool/disk01 --size 8388608 resi
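For comparison, a minimal sketch of the usual online-grow sequence, using the pool/image/device names from the post (rbd resize takes the size in MB, so 8388608 MB = 8 TB):

rbd resize testpool/disk01 --size 8388608
blockdev --getsize64 /dev/rbd0     # check that the kernel already sees the new size
resize2fs /dev/rbd0                # grow ext4 online to fill the device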

Re: [ceph-users] Resizing an online mounted ext4 on a rbd - failed

2019-01-26 Thread Götz Reinicke
> On 26.01.2019 at 14:16, Kevin Olbrich wrote: > > On Sat., 26 Jan 2019 at 13:43, Götz Reinicke > wrote: >> >> Hi, >> >> I have a fileserver which mounted a 4TB rbd, which is ext4 formatted. >> >> I grew that rbd and ext4, starting with

[ceph-users] One host with 24 OSDs is offline - best way to get it back online

2019-01-26 Thread Götz Reinicke
Hi, one host out of 10 is down for yet unknown reasons. I guess a power failure; I could not check the server yet. The cluster is recovering and remapping fine, but still has some objects to process. My question: May I just switch the server back on so that, in the best case, the 24 OSDs get back online

Re: [ceph-users] One host with 24 OSDs is offline - best way to get it back online

2019-01-26 Thread Götz Reinicke
make it go faster if you feel you can spare additional > IO and it won't affect clients. > > We do this in our cluster regularly and have yet to see an issue (given that > we take care to do it during periods of lower client io) > > On January 26, 2019 17:16:38 Götz Reini

Re: [ceph-users] One host with 24 OSDs is offline - best way to get it back online

2019-01-27 Thread Götz Reinicke
tored long before any recovery would be > finished and you also avoid the data movement back and forth. > And if you see that recovering the node will take a long time, just > manually set things out for the time being. > > Christian > > On Sun, 27 Jan 2019 00:02:54 +0100 G

[ceph-users] Update / upgrade cluster with MDS from 12.2.7 to 12.2.11

2019-02-11 Thread Götz Reinicke
Hi, as 12.2.11 has been out for some days and no panic mails showed up on the list, I was planning to update too. I know there are recommended orders in which to update/upgrade the cluster, but I don't know how the rpm packages handle restarting services after a yum update. E.g. when MDS and MONs are
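Assuming the packages leave the running daemons alone (which is exactly the open question here), a sketch of applying the usual MON -> OSD -> MDS order per node with the systemd targets:

yum update                            # on one node at a time
systemctl restart ceph-mon.target     # monitor nodes first
systemctl restart ceph-osd.target     # then the OSD nodes, one after another
systemctl restart ceph-mds.target     # MDS last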

Re: [ceph-users] Update / upgrade cluster with MDS from 12.2.7 to 12.2.11

2019-02-12 Thread Götz Reinicke
> On 12.02.2019 at 00:03, Patrick Donnelly wrote: > > On Mon, Feb 11, 2019 at 12:10 PM Götz Reinicke > wrote: >> as 12.2.11 has been out for some days and no panic mails showed up on the list I >> was planning to update too. >> >> I know there are recommended o

[ceph-users] How to change/enable/activate a different osd_memory_target value

2019-02-19 Thread Götz Reinicke
Hi, we ran into some OSD node freezes, running out of memory and eating all swap too. Till we get more physical RAM I’d like to reduce the osd_memory_target, but can’t find where and how to set it. We have 24 bluestore disks in 64 GB CentOS nodes with Luminous v12.2.11. Thanks for hints
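A hedged sketch of the usual way on Luminous: osd_memory_target is a per-OSD value in bytes, set in ceph.conf and picked up when the OSDs restart. The 2 GiB below is only an illustrative number for 24 OSDs in a 64 GB node, not a recommendation from the thread:

[osd]
osd_memory_target = 2147483648    # ~2 GiB per OSD daemon, in bytes

# then restart the OSDs node by node, e.g.:
systemctl restart ceph-osd.target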

Re: [ceph-users] How to change/enable/activate a different osd_memory_target value

2019-02-21 Thread Götz Reinicke
> On 20.02.2019 at 09:26, Konstantin Shalygin wrote: > > >> we ran into some OSD node freezes, running out of memory and eating all swap >> too. Till we get more physical RAM I’d like to reduce the osd_memory_target, >> but can’t find where and how to set it. >> >> We have 24 bluestore disk

[ceph-users] Resizing a cache tier rbd

2019-03-26 Thread Götz Reinicke
Hi, I have an rbd in a cache tier setup which I need to extend. The question is, do I resize it through the cache pool or directly on the slow/storage pool? Or doesn't that matter at all? Thanks for feedback and regards . Götz smime.p7s Description: S/MIME cryptographic signature ___

Re: [ceph-users] Resizing a cache tier rbd

2019-03-27 Thread Götz Reinicke
ity > BTW), you should always reference the base tier pool. The fact that a > cache tier sits in front of a slower, base tier is transparently > handled. > > On Tue, Mar 26, 2019 at 5:41 PM Götz Reinicke > wrote: >> >> Hi, >> >> I have a rbd in a cache

[ceph-users] is there a Cephfs path length limit

2019-07-30 Thread Götz Reinicke
Hi, I was asked if Cephfs has a path length limit, and if so, how long a path might get. We are evaluating some software which might generate very long pathnames (I haven't been told yet how long they could be). Thanks for feedback . /Götz smime.p7s Description: S/MIME cryptographic signature _
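One way to at least see what the mounted client reports (a sketch; /mnt/cephfs is a placeholder mount point, and the numbers come from the client VFS, not from a CephFS-specific limit):

getconf NAME_MAX /mnt/cephfs    # longest single path component, typically 255
getconf PATH_MAX /mnt/cephfs    # longest path string the VFS accepts, typically 4096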

[ceph-users] Network redundancy pro and cons, best practice, suggestions?

2015-04-13 Thread Götz Reinicke - IT Koordinator
ngle 10Gb) I know: redundancy keeps some headaches small, but also adds more complexity and increases the budget (additional network adapters, other servers, more switches, etc.). So what would you suggest, what are your experiences? Thanks for any suggestion and feedback . Regards . Götz --

Re: [ceph-users] Network redundancy pro and cons, best practice, suggestions?

2015-04-13 Thread Götz Reinicke - IT Koordinator
> Hi, you can have a look at mellanox sx1012 for example > http://www.mellanox.com/page/products_dyn?product_family=163 > > 12 ports 40GB for around 4000€ > > you can use breakout cables to have 4x12 10GB ports. > > > They can be stacked with mlag and lacp > > >

Re: [ceph-users] Network redundancy pro and cons, best practice, suggestions?

2015-04-20 Thread Götz Reinicke - IT Koordinator
Hi Christian, On 13.04.15 at 12:54, Christian Balzer wrote: > > Hello, > > On Mon, 13 Apr 2015 11:03:24 +0200 Götz Reinicke - IT Koordinator wrote: > >> Dear ceph users, >> >> we are planning a ceph storage cluster from scratch. Might be up to 1 PB >> with

[ceph-users] Some more numbers - CPU/Memory suggestions for OSDs and Monitors

2015-04-22 Thread Götz Reinicke - IT Koordinator
How to calculate a good balance? Is there a rule of thumb estimate :)? BTW: The numbers I got are from the recommendations and sample configurations from DELL, HP, Intel, Supermicro, Emulex, CERN and some more... Like this list. Thanks a lot for any suggestion and feedback .

[ceph-users] inktank configuration guides are gone?

2015-04-22 Thread Götz Reinicke - IT Koordinator
Hi, here I saw some links that sound interesting to me regarding hardware planning: https://ceph.com/category/resources/ The links redirect to Red Hat, and I can't find the content. Maybe someone has a newer guide? I found one from 2013 as a PDF. Regards and Thanks . Götz -- Götz Reinicke IT

[ceph-users] One more thing. Journal or not to journal or DB-what? Status?

2015-04-23 Thread Götz Reinicke - IT Koordinator
there is a roadmap on the progress? We hope to reduce the system's complexity (dedicated journal SSDs) with that. http://tracker.ceph.com/issues/11028 said "LMDB key/value backend for Ceph" was 70% done 15 days ago. Kowtow, kowtow and thanks . Götz -- Götz Reinicke IT-Koordinat

[ceph-users] capacity planing with SSD Cache Pool Tiering

2015-05-05 Thread Götz Reinicke - IT Koordinator
s not calculated into the overall usable space. It is a "cache". E.g. the slow pool is 100 TB, the SSD cache 10 TB; I don't have 110 TB all in all? True? Am I wrong? As always thanks a lot and regards! Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82 420 E-Mail goet

Re: [ceph-users] capacity planing with SSD Cache Pool Tiering

2015-05-06 Thread Götz Reinicke - IT Koordinator
. This only means > that the files you'd want cached will have to be pulled back in after > that and you may lose the performance advantage for a little while after > each backup. > > Hope that helps, dont hesitate with further inquiries! > > > Marc -- Götz Rei

[ceph-users] How to backup hundreds or thousands of TB

2015-05-06 Thread Götz Reinicke - IT Koordinator
can handle such volumes nicely? Thanks and regards . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82 420 E-Mail goetz.reini...@filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintragung Amtsgericht Stuttgart

[ceph-users] Dataflow/path Client <---> OSD

2015-05-07 Thread Götz Reinicke - IT Koordinator
through the monitors as well. The point is: if we connect our file servers and OSD nodes with 40Gb, does the monitor need 40Gb too? Or would 10Gb be "enough"? Oversizing is ok :) ... Thanks and regards . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82 420 E-Mail goetz.rein

[ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-12 Thread Götz Reinicke - IT Koordinator
ne servers, but space could be a bit of a problem currently. What do you think? Regards . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82 420 E-Mail goetz.reini...@filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintrag

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-12 Thread Götz Reinicke - IT Koordinator
s to > go with one dedicated MON that will be the primary (lowest IP) 99.8% of > the time and 4 OSDs with MONs on them. If you want to feel extra good > about this, give those OSDs a bit more CPU/RAM and most of all fast SSDs > for the OS (/var/lib/ceph). > > Christian > >

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-13 Thread Götz Reinicke - IT Koordinator
one blade chassis? > > Jake > > On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator > mailto:goetz.reini...@filmakademie.de>> > wrote: > > Hi Christian, > > currently we do get good discounts as an University and the bundles were > worth it.

[ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-08 Thread Götz Reinicke - IT Koordinator
Thanks as always for your feedback . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82420 E-Mail goetz.reini...@filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintragung Amtsgericht Stuttgart HRB 205016 Vorsitzender des Aufsich

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread Götz Reinicke - IT Koordinator
Hi Christian, On 09.07.15 at 09:36, Christian Balzer wrote: > > Hello, > > On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote: > >> Hi again, >> >> time is passing, so is my budget :-/ and I have to recheck the options >> for a "

[ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-08-25 Thread Götz Reinicke - IT Koordinator
Hi, most of the time I get the recommendation from resellers to go with the Intel S3700 for the journaling. Now I got an offer for systems with MLC 240 GB SATA Samsung 843T. A quick google search shows that that SSD is not as good as the Intel, but good, server grade, 24/7 etc. and not

[ceph-users] network failover with public/cluster network - is that possible

2015-11-25 Thread Götz Reinicke - IT Koordinator
Hi, discussing some design questions we came across the question of failover in Ceph's network configuration. If I just have a public network, all traffic crosses that LAN. With a public and a cluster network I can separate the traffic and get some benefits. What if one of the networks fails? e.g
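For context, the two networks are just two ceph.conf settings (placeholder subnets below); Ceph does not fail over from one to the other automatically, which is what the question is about:

[global]
public network  = 192.168.10.0/24    # client and MON traffic
cluster network = 192.168.20.0/24    # OSD replication and recovery traffic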

[ceph-users] Quick short survey which SSDs

2016-07-05 Thread Götz Reinicke - IT Koordinator
Hi, we have offers for ceph storage nodes with different SSD types; some are already mentioned as a very good choice but some are totally new to me. Maybe you could give some feedback on the SSDs in question or just briefly say which ones you primarily use? Regarding the three disk in

[ceph-users] 40Gb fileserver/NIC suggestions

2016-07-12 Thread Götz Reinicke - IT Koordinator
Hi, can anybody give some real-world feedback on what hardware (CPU/cores/NIC) you use for a 40Gb (file)server (smb and nfs)? The Ceph cluster will be mostly rbd images. S3 in the future, CephFS we will see :) Thanks for some feedback and hints! Regards . Götz smime.p7s Description: S/MIME Cry

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 11:47, Wido den Hollander wrote: >> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator >> wrote: >> >> >> Hi, >> >> can anybody give some real-world feedback on what hardware >> (CPU/cores/NIC) you use for a 40Gb (file)server (

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 14:27, Wido den Hollander wrote: >> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator >> wrote: >> >> >> On 13.07.16 at 11:47, Wido den Hollander wrote: >>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator >&

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 14:59, Joe Landman wrote: > > > On 07/13/2016 08:41 AM, c...@jack.fr.eu.org wrote: >> 40Gbps can be used as 4*10Gbps >> >> I guess welcome feedback should not be stuck on "usage of a 40Gbps >> port", but extended to "usage of more than a single 10Gbps port, eg >> 20Gbps etc too" >

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-14 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 17:44, David wrote: > Aside from the 10GbE vs 40GbE question, if you're planning to export > an RBD image over smb/nfs I think you are going to struggle to reach > anywhere near 1GB/s in a single threaded read. This is because even > with readahead cranked right up you're still only

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-14 Thread Götz Reinicke - IT Koordinator
On 13.07.16 at 17:08, c...@jack.fr.eu.org wrote: > I am using these for other stuff: > http://www.supermicro.com/products/accessories/addon/AOC-STG-b4S.cfm > > If you want NICs, also think of the "network side": SFP+ switches are very > common, 40G is less common, 25G is really new (= really few pro

[ceph-users] thoughts about Cache Tier Levels

2016-07-20 Thread Götz Reinicke - IT Koordinator
Hi, currently there are two levels I know of: storage pool and cache pool. From our workload I expect a third "level" of data, which currently would stay in the storage pool as well. Has anyone, like us, been thinking about data which could be moved even deeper in that tiering, e.g. have SSD cache, fast lo

[ceph-users] Degraded Cluster, some OSDs dont get mounted, dmesg confusion

2017-07-03 Thread Götz Reinicke - IT Koordinator
Hi, we have a 144 OSD, 6 node ceph cluster with some pools (2x replicated and EC). Today I did a Ceph (10.2.5 -> 10.2.7) and kernel update and rebooted two nodes. On both nodes some OSDs don't get mounted, and on one node I get dmesg messages like: attempt to access beyond end of device. Currently the Clust

Re: [ceph-users] Installing ceph on Centos 7.3

2017-07-18 Thread Götz Reinicke - IT Koordinator
Hi, On 18.07.17 at 10:51, Brian Wallis wrote: > I’m failing to get an install of ceph to work on a new Centos 7.3.1611 > server. I’m following the instructions > at http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to no > avail. > > First question, is it possible to install ceph on Cent

Re: [ceph-users] XFS attempt to access beyond end of device

2017-08-24 Thread Götz Reinicke - IT Koordinator
Hi, On 28.07.17 at 04:06, Brad Hubbard wrote: > An update on this. > > The "attempt to access beyond end of device" messages are created due to a > kernel bug which is rectified by the following patches. > > - 59d43914ed7b9625 (vfs: make guard_bh_eod() more generic) > - 4db96b71e3caea (vfs: gua

Re: [ceph-users] rbd pool:replica size choose: 2 vs 3

2016-09-23 Thread Götz Reinicke - IT Koordinator
Hi, On 23.09.16 at 05:55, Zhongyan Gu wrote: > Hi there, > the default rbd pool replica size is 3. However, I found that in our > all-ssd environment, capacity becomes a cost issue. We want to save > more capacity. So one option is to change the replica size from 3 to 2. > can anyone share the experi

[ceph-users] where is what in use ...

2016-12-07 Thread Götz Reinicke - IT Koordinator
Hi, I started to play with our Ceph cluster, created some pools and rbds and did some performance tests. Currently I'm trying to understand and interpret the different outputs of ceph -s, rados df etc. So far, so good. Now I was cleaning up (rbd rm ... ) and still see some space used on t
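Two commands that help to map those numbers (a sketch; the pool name is a placeholder, and space freed by rbd rm shows up in ceph df only with some delay):

rbd du -p testpool    # per-image provisioned vs. actually used space
ceph df detail        # pool-level used/raw numbers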

[ceph-users] suggestions on / how to update OS and Ceph in general

2017-01-09 Thread Götz Reinicke - IT Koordinator
Hi, we have a 6 node ceph 10.2.3 cluster on CentOS 7.2 servers, currently not hosting any rbds or anything else. MONs are on the OSD nodes. My question is: as CentOS 7.3 has been out for some time now and there is a ceph update to 10.2.5 available, what would be a good or the best path to update everythi

Re: [ceph-users] Jewel v10.2.6 released

2017-03-10 Thread Götz Reinicke - IT Koordinator
Hi, On 08.03.17 at 13:11, Abhishek L wrote: This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog[1] and the release notes[2] I hope you can

[ceph-users] At what point are objects removed?

2017-03-28 Thread Götz Reinicke - IT Koordinator
Hi, maybe I got something wrong or don't understand it completely yet. I have some pools and created some test rbd images which are mounted on a samba server. After the test I deleted all files on the samba server. But "ceph df detail" and "ceph -s" still show used space. The OSDs
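One detail that often explains this: deleting files inside a filesystem on top of an rbd does not by itself return space to Ceph; the filesystem has to issue discards. A sketch, assuming a krbd-mapped mount at /srv/share (placeholder) and a kernel new enough to support discard on rbd:

fstrim -v /srv/share    # filesystem discards unused blocks, rbd then frees the backing objects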

[ceph-users] SSD OSDs - more Cores or more GHz

2016-01-20 Thread Götz Reinicke - IT Koordinator
, I can give some more detailed information on the layout. Thanks for feedback . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82420 E-Mail goetz.reini...@filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintragung

Re: [ceph-users] SSD OSDs - more Cores or more GHz

2016-01-20 Thread Götz Reinicke - IT Koordinator
On 20.01.16 at 11:30, Christian Balzer wrote: > > Hello, > > On Wed, 20 Jan 2016 10:01:19 +0100 Götz Reinicke - IT Koordinator wrote: > >> Hi folks, >> >> we plan to use more ssd OSDs in our first cluster layout instead of SAS >> osds. (more IO is needed

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Götz Reinicke - IT Koordinator
On 08.02.16 at 20:09, Robert LeBlanc wrote: > Too bad K isn't an LTS. It would be fun to release the Kraken many times. +1 :) https://www.youtube.com/watch?v=_lN2auTVavw cheers . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82420 E-Mail goetz.reini...@filmaka