Hi,
I hope someone here can help me out with some contact data, an email address or
phone number for Samsung Datacenter SSD Support? If I contact standard Samsung
Datacenter Support they tell me they are not there to support PM1735 drives.
We are planning a new Ceph cluster and we are thinking of
I manage a historical cluster of several Ceph nodes, each with 128 GB RAM and 36
OSDs of 8 TB each.
The cluster is just for archive purposes and performance is not so important.
The cluster was running fine for a long time on Ceph Luminous.
Last week I updated it to Debian 10 and Ceph Nautilus.
I have set
ceph config set osd/class:hdd osd_memory_target 2147483648
for now.
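For context, a rough memory budget with that target (my own back-of-the-envelope
numbers, not measured on the cluster), plus how I would check the effective value:

# 36 OSDs x 2 GiB target ~= 72 GiB for the OSD daemons, which fits into 128 GB RAM
# (the Nautilus default of 4 GiB per OSD would ask for roughly 144 GiB)
ceph config get osd.0 osd_memory_target    # check the effective value on one OSD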
Thanks
Christoph
On Wed, May 05, 2021 at 04:30:17PM +0200, Christoph Adomeit wrote:
> I manage a historical cluster of several Ceph nodes, each with 128 GB RAM and
> 36 OSDs of 8 TB each.
>
> The cluster is just f
Hi,
I am using Seagate Exos X18 18 TB drives in a Ceph archive cluster which is
mainly
write once / read sometimes.
The drives are about 6 months old.
I use them in a Ceph cluster and also in a ZFS server. Different servers
(all Supermicro) and different controllers, but all of type LSI SAS3008.
I
Hi,
we are planning an archive with CephFS containing 2 petabytes of data
on 200 slow SATA disks, on a single CephFS with 150 subdirectories. The disks
will be around 80% full (570 TB of data, 3-way replication).
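Just as a back-of-the-envelope check of those numbers (my own arithmetic, not
from any planning document):

# 570 TB data x 3 replicas           ~= 1710 TB raw used
# 1710 TB / 0.80 target fill level   ~= 2140 TB raw capacity
# 2140 TB / 200 disks                ~= 10-11 TB per disk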
Since this is an archive, most of the data will be written once and read only
someti
I have upgraded my Ceph cluster to Pacific in August and updated to Pacific
16.2.6 in September without problems.
I had no performance issues at all; the cluster has 3 nodes with 64 cores each, 15
blazing fast Samsung PM1733 NVMe OSDs, a 25 Gbit/s network and around 100 VMs.
The cluster was really fas
, Stefan Kooman wrote:
> On 11/10/21 16:14, Christoph Adomeit wrote:
> > I have upgraded my Ceph cluster to Pacific in August and updated to Pacific
> > 16.2.6 in September without problems.
>
> Have you set "ceph osd require-osd-release pacific" when you finished
> the upgrade?
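If it helps, my understanding is that the currently required release shows up in
the osdmap, so it can be checked with

ceph osd dump | grep require_osd_release    # should report "pacific" once the flag has been set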
--
Hard times create strong men. Strong men create good times. Good times create
weak men. And weak men create hard times.
Christoph Adomeit
GATWORKS GmbH
Metzenweg 78
41068 Moenchengladbach
Sitz: Moenchengladbach
Amtsge
Hi,
I am just wondering if it is recommended to regularly fstrim or discard Ceph
BlueStore OSDs on flash memory
(SSDs and NVMes), and how it is done and configured?
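The only related knob I have found so far is BlueStore's bdev_enable_discard
option, but I am not sure whether enabling it is actually the recommended approach:

ceph config set osd bdev_enable_discard true    # let BlueStore send discards to the device itself (default is false)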
Any ideas?
Thanks
Christoph
Hi,
I remember there was a bug in 16.2.6 for clusters upgraded from older versions
where one had to set bluestore_fsck_quick_fix_on_mount to false.
Now I have upgraded from 16.2.6 to 16.2.7.
Should I now set bluestore_fsck_quick_fix_on_mount to true?
And if yes, what would be the command to apply it?
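My own guess would be something along these lines, but I have not verified that
this is the right way to apply it:

ceph config set osd bluestore_fsck_quick_fix_on_mount true    # guess: set it cluster-wide before restarting the OSDs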
Hi,
a customer has a Ceph cluster which is used for archiving large amounts of
video data.
Sometimes the cluster is not used for several days, but if data is needed the
cluster should be available within a few minutes.
The cluster consists of 5 servers and 180 physical Seagate hard disks and was