On Sun, Jun 19, 2022 at 9:43 PM Yuri Weinstein wrote:
>
> rados, rgw, rbd and fs suites ran on the latest sha1
> (https://shaman.ceph.com/builds/ceph/quincy-release/eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2/)
>
> pls see the summary:
> https://tracker.ceph.com/issues/55974#note-1
>
> seeking final
Hi all,
Running into some trouble. I just set up Ceph multi-site replication.
The good news is that it is syncing the data, but the metadata is NOT syncing.
I was trying to follow the instructions from here:
https://docs.ceph.com/en/quincy/radosgw/multisite/#create-a-secondary-zone
I see there
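A minimal sketch of how the metadata sync state is usually inspected from the secondary zone (assuming radosgw-admin runs there with that zone's credentials; the output depends on your realm/period setup):

# Overall multi-site sync state as seen from this zone
radosgw-admin sync status

# Metadata sync specifically (secondary zones pull metadata from the master zone)
radosgw-admin metadata sync status

# Confirm this zone actually has the realm's current period
radosgw-admin period get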
I’ve come close more than once to removing that misleading 4% guidance.
The OP plans to use a single M.2 NVMe device - I’m a bit suspicious that the M.2
connector may only be SATA, and 12 OSDs sharing one SATA device for WAL+DB,
plus potential CephFS metadata and RGW index pools seems like a soun
Thanks Christophe,
On Mon, Jun 20, 2022 at 11:45 AM Christophe BAILLON wrote:
> Hi
>
> We have 20 Ceph nodes, each with 12 x 18TB HDDs and 2 x 1TB NVMe drives
>
> I'm trying this method to create the OSDs
>
> ceph orch apply -i osd_spec.yaml
>
> with this conf
>
> osd_spec.yaml
> service_type: osd
> service_id: osd_spec_
Thanks Jake,
On Mon, Jun 20, 2022 at 10:47 AM Jake Grimmett wrote:
> Hi Stefan
>
> We use CephFS for our 7200-CPU/224-GPU HPC cluster; for our use case
> (large-ish image files) it works well.
>
> We have 36 Ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a
> 240GB system disk. Four dedic
On Sun, Jun 19, 2022 at 6:13 PM Yuri Weinstein wrote:
>
> rados, rgw, rbd and fs suites ran on the latest sha1
> (https://shaman.ceph.com/builds/ceph/quincy-release/eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2/)
>
> pls see the summary:
> https://tracker.ceph.com/issues/55974#note-1
>
> seeking final
Hi
We have 20 Ceph nodes, each with 12 x 18TB HDDs and 2 x 1TB NVMe drives.
I'm trying this method to create the OSDs:
ceph orch apply -i osd_spec.yaml
with this config:
osd_spec.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
    - /dev/n
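For comparison, a hedged variant of the same spec that selects DB devices by the rotational flag instead of hard-coded paths, plus a dry run to preview what cephadm would do (the service_id and exact filters are illustrative, not taken from the original post):

cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
EOF

# Preview the OSDs cephadm would create before applying for real
ceph orch apply -i osd_spec.yaml --dry-run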
Hi Stefan
We use CephFS for our 7200-CPU/224-GPU HPC cluster; for our use case
(large-ish image files) it works well.
We have 36 Ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a
240GB system disk. Four dedicated nodes have NVMe for the metadata pool and
provide mon, mgr and MDS services
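One common way to keep the CephFS metadata pool on those NVMe OSDs is a CRUSH rule keyed on device class; a rough sketch, with pool names, PG counts and the 'nvme' class being assumptions rather than details from Jake's setup:

# Replicated CRUSH rule restricted to OSDs with device class 'nvme'
ceph osd crush rule create-replicated nvme-only default host nvme

# Create the pools and pin the metadata pool to the NVMe-only rule
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 2048
ceph osd pool set cephfs_metadata crush_rule nvme-only

# Create the filesystem (metadata pool first, then data pool)
ceph fs new cephfs cephfs_metadata cephfs_data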
This was related to an older post back from 2020:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GBUBEGTZTAMNEQNKUGS47M6W6B4AEVVS/
From: Kilian Ries
Sent: Friday, June 17, 2022 17:23:03
To: ceph-users@ceph.io; ste...@bit.nl
Subject: [ceph-us
On 6/20/22 3:45 PM, Arnaud M wrote:
Hello to everyone
I have looked on the internet but couldn't find an answer.
Do you know the maximum size of a Ceph filesystem? Not the max size of a
single file but the limit of the whole filesystem?
For example, a quick search for ZFS on Google gives:
A Z
> Currently the biggest HDD is 20TB.
According to news articles, HDDs up to 26TB are sampling. Mind you, they’re SMR.
And for many applications having that much capacity behind a tired SATA
interface is a serious bottleneck; I’ve seen deployments cap HDD size at 8TB
because of this. But I di
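Back-of-the-envelope numbers behind that concern (the ~250 MB/s sustained throughput is an assumption; recovery and backfill workloads are usually slower):

# Time to read or rewrite one full 20 TB HDD at ~250 MB/s sequential
echo "scale=1; 20 * 10^12 / (250 * 10^6) / 3600" | bc   # roughly 22 hours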
On 20.06.22 at 10:34, Arnaud M wrote:
Yes, I know that for ZFS such an amount of data is impossible.
I am not trying to store that much data.
My question is really out of pure curiosity:
What is the theoretical max size of a Ceph filesystem?
There is no theoretical limit on the size of a Ceph cluster as fa
Yes, I know that for ZFS such an amount of data is impossible.
I am not trying to store that much data.
My question is really out of pure curiosity:
What is the theoretical max size of a Ceph filesystem?
For example, is it theoretically possible to store 1 exabyte? 1 zettabyte?
More? In CephFS?
Let's suppo
On 20.06.22 at 09:45, Arnaud M wrote:
A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
What would a storage system that could hold such an amount of data look
like in reality?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www
Hello, thanks for the answer.
But is there any hard-coded limit? Like in ZFS?
Maybe a limit on the maximum number of files a CephFS can have?
All the best
Arnaud
On Mon, Jun 20, 2022 at 10:18, Serkan Çoban wrote:
> Currently the biggest HDD is 20TB. 1 exabyte means a 50,000-OSD
> cluster (without repli
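On the hard-coded-limit question: I am not aware of a fixed file-count limit in CephFS, but there is a per-filesystem cap on individual file size (max_file_size, 1 TiB by default) that can be raised; a quick sketch, assuming the filesystem is named 'cephfs':

# Dump the filesystem map, which includes max_file_size
ceph fs get cephfs

# Raise the maximum single-file size to 16 TiB (the value is in bytes)
ceph fs set cephfs max_file_size 17592186044416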
Currently the biggest HDD is 20TB. 1 exabyte means a 50,000-OSD
cluster (without replication or EC).
AFAIK CERN did some tests using 5,000 OSDs. I don't know of any larger
clusters than CERN's.
So I am not saying it is impossible, but it is very unlikely that a
single Ceph cluster would grow to that size.
Maybe you
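The arithmetic behind that figure covers raw capacity only; replication or EC multiplies the OSD count further:

# 1 EB = 1,000,000 TB; at 20 TB per HDD-backed OSD:
echo $(( 1000000 / 20 ))   # 50000 OSDs just for 1 EB of raw capacity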
Hello to everyone
I have looked on the internet but couldn't find an answer.
Do you know the maximum size of a Ceph filesystem? Not the max size of a
single file but the limit of the whole filesystem?
For example, a quick search for ZFS on Google gives:
A ZFS file system can store up to *256 qu