[ceph-users] Does anyone have contact data for Samsung Datacenter SSD support?

2021-03-11 Thread Christoph Adomeit
Hi, I hope someone here can help me out with some contact data, an email address or phone number for Samsung Datacenter SSD support? If I contact standard Samsung datacenter support, they tell me they are not there to support PM1735 drives. We are planning a new Ceph cluster and we are thinking of

[ceph-users] Out of Memory after Upgrading to Nautilus

2021-05-05 Thread Christoph Adomeit
I manage a historical cluster of several Ceph nodes, each with 128 GB RAM and 36 OSDs of 8 TB each. The cluster is just for archive purposes and performance is not so important. The cluster was running fine for a long time using Ceph Luminous. Last week I updated it to Debian 10 and Ceph Nautilus
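In Nautilus, each BlueStore OSD tries to hold roughly osd_memory_target of RAM (4 GiB by default), so 36 OSDs on a 128 GB node can overshoot physical memory on their own. A minimal sketch for checking the effective value with the standard ceph CLI (osd.0 is only an example):

    # show the configured/default per-OSD memory target (bytes)
    ceph config get osd osd_memory_target
    # show what a running OSD actually reports
    ceph config show osd.0 | grep osd_memory_target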

[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-06 Thread Christoph Adomeit
"ceph config set osd/class:hdd osd_memory_target 2147483648" for now. Thanks Christoph On Wed, May 05, 2021 at 04:30:17PM +0200, Christoph Adomeit wrote: > I manage a historical cluster of several Ceph nodes, each with 128 GB RAM and > 36 OSDs of 8 TB each. > > The cluster is just f
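For reference, a short sketch of the command quoted above plus a check that the device-class override was stored centrally:

    # cap the memory target for HDD-class OSDs at 2 GiB, as done in the thread
    ceph config set osd/class:hdd osd_memory_target 2147483648
    # list the centrally stored overrides, including the class:hdd mask
    ceph config dump | grep osd_memory_target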

[ceph-users] Anyone else having problems with lots of dying Seagate Exos X18 18TB drives?

2022-12-07 Thread Christoph Adomeit
Hi, I am using Seagate Exos X18 18TB drives in a Ceph archive cluster which is mainly write once/read sometimes. The drives are about 6 months old. I use them in a Ceph cluster and also in a ZFS server. Different servers (all Supermicro) and different controllers, but all of type LSI SAS3008. I
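For comparing failing and healthy drives, the SMART data can be pulled either directly or through Ceph's device tracking; a sketch assuming the drives sit behind the SAS3008 in IT mode and appear as plain /dev/sdX devices (path and device id are examples):

    # full SMART report for one drive
    smartctl -a /dev/sdc
    # devices known to Ceph and the health metrics collected for one of them
    ceph device ls
    ceph device get-health-metrics <devid>   # use an id from "ceph device ls"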

[ceph-users] Protecting Files in CephFS from accidental deletion or encryption

2022-12-19 Thread Christoph Adomeit
Hi, we are planning an archive with CephFS containing 2 petabytes of data on 200 slow SATA disks, on a single CephFS with 150 subdirectories. The disks will be around 80% full (570 TB of data, 3-way replication). Since this is an archive, most of the data will be written once and read only someti
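Two building blocks that are often combined for this, sketched here with made-up filesystem, client and path names: read-only CephFS capabilities for everything that only consumes the archive, and periodic snapshots so that accidental deletions or an encrypting client can be rolled back:

    # read-only mount credentials for consumers (fs name and client id are examples)
    ceph fs authorize cephfs client.archive-ro / r
    # allow snapshots on the filesystem if not already enabled
    ceph fs set cephfs allow_new_snaps true
    # snapshot a directory from a client mount
    mkdir /mnt/cephfs/.snap/archive-2022-12-19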

[ceph-users] snaptrim blocks I/O on Ceph Pacific even on fast NVMes

2021-11-10 Thread Christoph Adomeit
I have upgraded my Ceph cluster to Pacific in August and updated to Pacific 16.2.6 in September without problems. I had no performance issues at all; the cluster has 3 nodes with 64 cores each, 15 blazing fast Samsung PM1733 NVMe OSDs, a 25 GBit/s network and around 100 VMs. The cluster was really fas
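The options usually discussed for taming snaptrim are its sleep and concurrency throttles; a hedged sketch with illustrative values, not settings taken from this cluster:

    # pause snapshot trimming entirely while investigating
    ceph osd set nosnaptrim
    # throttle trimming so client I/O keeps priority (values are examples)
    ceph config set osd osd_snap_trim_sleep 2
    ceph config set osd osd_pg_max_concurrent_snap_trims 1
    # re-enable trimming afterwards
    ceph osd unset nosnaptrim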

[ceph-users] Re: snaptrim blocks I/O on Ceph Pacific even on fast NVMes

2021-11-10 Thread Christoph Adomeit
Stefan Kooman wrote: > On 11/10/21 16:14, Christoph Adomeit wrote: > > I have upgraded my Ceph cluster to Pacific in August and updated to Pacific > > 16.2.6 in September without problems. > > Have you set "ceph osd require-osd-release pacific" when you finished
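For reference, the flag Stefan asks about can be inspected and, once every OSD runs Pacific, set like this (a generic sketch, not output from this cluster):

    # check which release the OSDs are currently required to run
    ceph osd dump | grep require_osd_release
    # set it after all OSDs have been upgraded
    ceph osd require-osd-release pacific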

[ceph-users] Re: snaptrim blocks I/O on Ceph Pacific even on fast NVMes

2021-11-11 Thread Christoph Adomeit

[ceph-users] How to trim/discard Ceph OSDs?

2021-11-26 Thread Christoph Adomeit
Hi, I am just wondering if it is recommended to regularly fstrim or discard Ceph BlueStore OSDs on flash memory (SSDs and NVMes), and how it is done and configured? Any ideas? Thanks, Christoph
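For BlueStore there is nothing to fstrim (the OSD owns the raw block device and there is no filesystem on it); discards are instead issued by the OSD itself when the corresponding options are enabled. A sketch using the option names from the Pacific-era releases:

    # let BlueStore send discards to the underlying SSD/NVMe (off by default)
    ceph config set osd bdev_enable_discard true
    # issue the discards asynchronously from a background thread
    ceph config set osd bdev_async_discard true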

[ceph-users] Shall I set bluestore_fsck_quick_fix_on_mount now after upgrading to 16.2.7?

2021-12-14 Thread Christoph Adomeit
Hi, I remember there was a bug in 16.2.6 for clusters upgraded from older versions where one had to set bluestore_fsck_quick_fix_on_mount to false. Now I have upgraded from 16.2.6 to 16.2.7. Should I now set bluestore_fsck_quick_fix_on_mount to true? And if yes, what would be the command to a
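For reference, the option is usually set centrally and then applied by restarting the OSDs one by one, since the OMAP repair runs at mount time; whether to enable it at all is exactly the question above (the systemd unit name is only an example):

    ceph config set osd bluestore_fsck_quick_fix_on_mount true
    # the repair runs when each OSD is restarted
    systemctl restart ceph-osd@0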

[ceph-users] Ideas for power saving on an archive cluster?

2022-01-12 Thread Christoph Adomeit
Hi, a customer has a Ceph cluster which is used for archiving large amounts of video data. The cluster is sometimes not used for several days, but if data is needed the cluster should be available within a few minutes. The cluster consists of 5 servers and 180 physical Seagate hard disks and was
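Ceph itself has no idle or spin-down mode, so suggestions in this direction usually end up at drive-level standby timers, with the caveat that OSD heartbeats and scrubbing tend to wake the disks again. A sketch using hdparm (device path and timeout are examples):

    # allow the drive to enter standby after ~20 minutes idle (240 * 5 s)
    hdparm -S 240 /dev/sdb
    # query the current power state
    hdparm -C /dev/sdb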