> So, it's not necessarily a "which one should I support". One of Ceph's great
> features is that you can support all 3 with the same storage and use them all
> as needed.
..with the caveat that you can't serve the same files over them, but
it is quite true that you can have all three served from the s
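For illustration, a minimal sketch of serving all three from one cluster (pool, filesystem and service names here are made up):

# block (RBD)
ceph osd pool create rbd-images
rbd pool init rbd-images
rbd create rbd-images/test-img --size 10G

# file (CephFS); this creates its own data and metadata pools
ceph fs volume create testfs

# object (RGW), deployed through the orchestrator
ceph orch apply rgw testrgw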
The more files I delete, the more space is used.
How can this be?
On Tue, 25 May 2021 at 14:41, Boris Behrens wrote:
>
> On Tue, 25 May 2021 at 09:23, Boris Behrens wrote:
> >
> > Hi,
> > I am still searching for a reason why these two values differ so much.
> >
> > I am currently deleting a giant amount of orphan objects (43 million, most
> > of them under 64 KB), but the difference gets larger instead of smaller.
Hi,
did you wipe the LV on the SSD that was assigned to the failed HDD? I
just did that on a fresh Pacific install successfully, a couple of
weeks ago it also worked on an Octopus cluster. Note that I have a few
filters in my specs file but that shouldn't make a difference, I
believe.
p
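For reference, this is roughly what I ran to wipe it (the VG/LV name is a placeholder for the DB LV that belonged to the failed HDD):

cephadm ceph-volume lvm zap --destroy ceph-<vg>/osd-db-<lv>
# or, through the orchestrator, zap the replaced data device itself:
ceph orch device zap <host> /dev/sdX --force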
Hi,
I'm currently exploring Pacific and the "_admin" label doesn't seem to
work as expected.
pacific1:~ # ceph -v
ceph version 16.2.3-26-g422932e923
(422932e923eb429b9e16c352a663968f4b6f0a52) pacific (stable)
According to the docs [1] the "_admin" label should instruct cephadm
to distribute the ceph.conf and the client.admin keyring to hosts
carrying that label.
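For context, what I tried (pacific1 is just my test host):

ceph orch host label add pacific1 _admin
# per the docs, cephadm should then place ceph.conf and the client.admin
# keyring under /etc/ceph/ on that host
ls /etc/ceph/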
The quick answer is that they are optimized for different use cases.
Things like relational databases (MySQL, PostgreSQL) benefit from the
performance that a dedicated filesystem can provide (RBD). Shared filesystems
are usually contraindicated with such software.
Shared filesystems like cephfs a
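As a rough sketch of the RBD path for such a database host (pool and image names are made up, and the pool is assumed to already exist and be initialized for RBD):

rbd create db-pool/mysql-data --size 200G
rbd map db-pool/mysql-data           # exposes a /dev/rbdX block device via krbd
mkfs.xfs /dev/rbd0                   # assuming it came up as rbd0
mount /dev/rbd0 /var/lib/mysql       # dedicated filesystem for the database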
Yeah, agreed. My first question would be: how are your users going to consume
the storage?
You'll struggle to run VMs on RadosGW, and if they are doing archival
backups then RBD is likely not the best solution.
Each has very different requirements at the hardware level; for example, if
you are talking
Hi Jorge,
I think it depends on your workload.
On Tue, May 25, 2021 at 7:43 PM Jorge Garcia wrote:
>
> This may be too broad of a topic, or opening a can of worms, but we are
> running a CEPH environment and I was wondering if there's any guidance
> about this question:
>
> Given that some group
This may be too broad of a topic, or opening a can of worms, but we are
running a CEPH environment and I was wondering if there's any guidance
about this question:
Given that some group would like to store 50-100 TBs of data on CEPH and
use it from a Linux environment, are there any advantages
Hi,
On my setup I didn't enable a stretch cluster. It's just a 3 x VM setup
running on the same Proxmox node, and all the nodes share a single
network. I installed Ceph using the documented cephadm flow.
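For completeness, the cephadm flow I followed was roughly (IPs and hostnames are placeholders):

cephadm bootstrap --mon-ip <vm1-ip>
ceph orch host add ceph-vm2 <vm2-ip>
ceph orch host add ceph-vm3 <vm3-ip>
ceph orch apply osd --all-available-devices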
Hi everyone,
The Ceph Month June schedule is now available:
https://pad.ceph.com/p/ceph-month-june-2021
We have great sessions ranging from component updates, performance best
practices, and Ceph on different architectures to BoF sessions for getting
more involved with working groups in the community, and more! Yo
Thanks for the confirmation, Greg! I'll try with a newer release then.
That's why we're testing, isn't it? ;-)
Then the OP's issue is probably not resolved yet since he didn't
mention a stretch cluster. Sorry for hijacking the thread.
Quoting Gregory Farnum:
On Tue, May 25, 2021 at 7:1
On Tue, May 25, 2021 at 7:17 AM Eugen Block wrote:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/osd/OSDMap.cc:
> In function 'void OSDMap::Incremental::enco
Thank you Janne,
I will give upmap a shot. Need to try it first in some non-prod
cluster. Non-prod clusters are doing much better for me even though
they have a lot fewer OSDs.
Thanks everyone!
On Tue, May 25, 2021 at 12:48 AM Janne Johansson wrote:
>
> I would suggest enabling the upmap balance
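For anyone else trying this, enabling the upmap balancer is roughly:

# upmap requires luminous or newer clients
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status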
Hi,
I wanted to explore the stretch mode in pacific (16.2.4) and see how
it behaves with a DC failure. It seems as if I'm hitting the same or
at least a similar issue here. To verify if it's the stretch mode I
removed the cluster and rebuilt it without stretch mode, three hosts
in three D
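For reference, the sequence I used to enable stretch mode roughly follows the docs; the mon names, CRUSH rule and datacenter buckets below are from my test setup, and the rule is assumed to exist already:

ceph mon set_location mon-a datacenter=dc1
ceph mon set_location mon-b datacenter=dc2
ceph mon set_location mon-c datacenter=dc3     # tiebreaker monitor
ceph mon set election_strategy connectivity
ceph mon enable_stretch_mode mon-c stretch_rule datacenter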
On Tue, 25 May 2021 at 09:23, Boris Behrens wrote:
>
> Hi,
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43 million, most
> of them under 64 KB), but the difference gets larger instead of smaller.
>
> This was the state two days ago:
Eugen,
Eugen Block wrote:
: Mykola explained it in this thread [1] a couple of months ago:
:
: `rbd cp` will copy only one image snapshot (or the image head) to the
: destination.
:
: `rbd deep cp` will copy all image snapshots and the image head.
Thanks for the explanation. I have created a pu
Hi
The server runs 15.2.9 and has 15 HDDs and 3 SSDs.
The OSDs were created with this YAML file:
hdd.yml
service_type: osd
service_id: hdd
placement:
  host_pattern: 'pech-hd-*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
The result was that the 3 SSDs were added to 1 VG with 15
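In case it helps anyone reproducing this, cephadm can preview what a spec would do before applying it; a minimal sketch (assuming the spec file above):

ceph orch apply -i hdd.yml --dry-run
ceph orch device ls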
On Tue, 25 May 2021 at 09:39, Konstantin Shalygin wrote:
>
> Hi,
>
> On 25 May 2021, at 10:23, Boris Behrens wrote:
>
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43 million, most
> of them under 64 KB), but
Not sure what I'm doing wrong; I suspect it's the way I'm running
ceph-volume.
root@drywood12:~# cephadm ceph-volume lvm create --data /dev/sda --dmcrypt
Inferring fsid 1518c8e0-bbe4-11eb-9772-001e67dc85ea
Using recent ceph image ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aa
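For comparison, in a cephadm-managed cluster the usual path is to let the orchestrator create the OSD instead of calling ceph-volume directly; a minimal sketch (dmcrypt would normally be requested via `encrypted: true` in an OSD service spec):

ceph orch device ls drywood12
ceph orch daemon add osd drywood12:/dev/sda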
Hi,
> On 25 May 2021, at 10:23, Boris Behrens wrote:
>
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43 million, most
> of them under 64 KB), but the difference gets larger instead of smaller.
When a user through the AP
Hi,
I am still searching for a reason why these two values differ so much.
I am currently deleting a giant amount of orphan objects (43 million, most
of them under 64 KB), but the difference gets larger instead of smaller.
This was the state two days ago:
>
> [root@s3db1 ~]# radosgw-admin bucket stats |
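For reference, a rough sketch of how the two values can be compared (assuming they are the RGW-side bucket usage and the RADOS-side pool usage):

# usage as accounted by RGW, per bucket
radosgw-admin bucket stats

# raw/stored usage as seen by the cluster
ceph df detail

# objects still pending RGW garbage collection can account for part of the gap
radosgw-admin gc list --include-all | head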
Hi,
Mykola explained it in this thread [1] a couple of months ago:
`rbd cp` will copy only one image snapshot (or the image head) to the
destination.
`rbd deep cp` will copy all image snapshots and the image head.
It depends on the number of snapshots that need to be copied; if there
are non
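A minimal sketch of the difference (pool and image names are arbitrary):

rbd snap create rbd/src@snap1
rbd cp rbd/src rbd/copy-flat        # copies only the image head
rbd deep cp rbd/src rbd/copy-deep   # copies the head and all snapshots
rbd snap ls rbd/copy-deep           # snap1 shows up here, not on copy-flat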