On 6/18/21 8:42 PM, Sage Weil wrote:
We've been beat up for years about how complicated and hard Ceph is.
Rook and cephadm represent two of the most successful efforts to
address usability (and not just because they enable deployment
management via the dashboard!), and taking advantage of conta
Hello,
I am setting up user quotas and I would like to enable the "check on raw"
setting for my users' quotas. I can't find any documentation on how to
change this setting anywhere in the Ceph docs. Does anyone know how to
change it, possibly using radosgw-admin?
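For reference, the quota commands I do know about are these (the uid and the
limits below are just placeholders; whether --max-size accepts a T suffix may
depend on the version):
# radosgw-admin quota set --quota-scope=user --uid=testuser --max-size=1T --max-objects=1000000
# radosgw-admin quota enable --quota-scope=user --uid=testuser
# radosgw-admin user info --uid=testuser
None of them seem to expose a flag for check_on_raw, but the user info output
at least shows the current value in the user_quota section.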
Thanks in advance!
Ja
On 6/21/21 6:19 PM, Nico Schottelius wrote:
And while we are at it, claiming "on a lot more platforms": you are at the
same time EXCLUDING a lot of platforms by saying "Linux-based
container" (remember Ceph on FreeBSD? [0]).
Indeed, and that is a more fundamental question: how easy it is to make
On 6/22/21 6:56 PM, Martin Verges wrote:
> There is no "should be", there is no one answer to that, other than
42. Containers existed before Docker, but Docker made them
popular, for exactly the same reason that Ceph wants to use them: ship
a known-good version (CI-tested) of the soft
140 LVs actually, in the hybrid OSD case
Cheers,
k
Sent from my iPhone
> On 22 Jun 2021, at 12:56, Thomas Roth wrote:
>
> I was going to try cephfs on ~10 servers with 70 HDDs each. That would mean
> each system has to deal with 70 OSDs, on 70 LVs?
On 6/21/21 7:37 PM, Marc wrote:
I have seen no arguments for using containers other than trying to make it "easier" for new Ceph people.
I advise reading the whole thread again, especially Sage's comments,
as there are other benefits. It would free up resources that can be
dedicated t
>
> >
> > I have seen no arguments for using containers other than trying to
> make it "easier" for new Ceph people.
>
> I advise reading the whole thread again, especially Sage's comments,
> as there are other benefits. It would free up resources that can be
> dedicated to (arguably) more pr
Hello Cephers,
On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs on 8 TB HDDs), I'm migrating a 40
TB image from a 3+2 EC pool to an 8+2 one.
The use case is Veeam backup on XFS filesystems, mounted via KRBD.
Backups are running, and I can see 200 MB/s throughput.
But my migration (rbd migrate prep
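For context, the live-migration sequence looks roughly like this (pool and
image names below are placeholders; the EC pool is attached as a data pool to
a replicated metadata pool):
# rbd migration prepare --data-pool ec82pool rbd/veeam-vol rbd/veeam-vol-new
# rbd migration execute rbd/veeam-vol-new
# rbd migration commit rbd/veeam-vol-new
As far as I know krbd does not support live migration, so the image has to be
unmapped before the prepare step and the clients pointed at the new image
afterwards; rbd migration abort rolls everything back if needed.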
ceph -s is healthy. I started to do a xfs_repair on that block device
now which seems to do something...:
- agno = 1038
- agno = 1039
- agno = 1040
- agno = 1041
- agno = 1042
- agno = 1043
- agno = 1044
- agno = 1045
- agno =
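For anyone following along, the usual xfs_repair sequence on a mapped rbd
device is roughly this (the device and mountpoint are just the ones from the
example above, and the filesystem must be unmounted first):
# umount /mnt/backup-cluster5
# xfs_repair -n /dev/rbd0
# xfs_repair /dev/rbd0
The -n run only reports problems without touching anything; the second
invocation actually repairs. If it complains about a dirty log, mounting and
unmounting once replays it; -L zeroes the log as a last resort and can lose
the most recent metadata updates.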
Hi Daniel,
You are correct, currently, only v2 auth is supported for topic management.
(tracked here: https://tracker.ceph.com/issues/50039)
It should be fixed soon but may take some time before it is backported to
Pacific (will keep the list posted).
Best Regards,
Yuval
On Tue, Jun 22, 2021 at
> There is no "should be", there is no one answer to that, other than 42.
Containers existed before Docker, but Docker made them popular,
for exactly the same reason that Ceph wants to use them: ship a known-good
version (CI-tested) of the software with all dependencies, that can be
run "a
Hello List,
all of a sudden I cannot mount a specific rbd device anymore:
root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
/etc/ceph/ceph.client.admin.keyring
/dev/rbd0
root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/
(it just hangs and never times out)
Any idea how to debug that mount? Tc
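A few things worth checking when a mount on a mapped rbd device hangs (the
image spec is just the one from the example above):
# dmesg | tail -n 50
# rbd device list
# rbd status backup-proxmox/cluster5
# cat /sys/kernel/debug/ceph/*/osdc
dmesg usually shows libceph/rbd errors, rbd status lists the current watchers
on the image, and the osdc file (if debugfs is mounted) shows the requests the
kernel client has in flight against the OSDs.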
fbff700 20 HTTP_ACCEPT_ENCODING=gzip,
deflate, br
debug 2021-06-22T15:36:15.572+ 7ff04fbff700 20
HTTP_AUTHORIZATION=AWS4-HMAC-SHA256
Credential=utuAMlfhgTAOzMkTNPb/20210622/us-east-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token,
I don't think so. It is exactly the same location in all tests and it is
reproducible.
Why would a move be a copy on some MDSs/OSDs but not others?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Marc
Sent: 22 J
Hi Thomas, just a quick note: if you have only a few large OSDs, Ceph will
have problems distributing the data, based on the number of placement
groups and the number of objects per placement group, ...
I recommend reading up on the concept of placement groups.
___
Clyso Gmb
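A quick way to see how the data is spread over PGs and OSDs (the pool name
below is just an example, and the autoscale status needs the pg_autoscaler
module enabled):
# ceph osd df tree
# ceph osd pool autoscale-status
# ceph osd pool get mypool pg_num
ceph osd df tree shows per-OSD utilisation and PG counts, and the autoscale
status shows whether the pg_num of each pool looks appropriate for its size.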
https://tracker.ceph.com/issues/50526
https://github.com/alfredodeza/remoto/issues/62
If you're brave (YMMV, test first in non-prod), we pushed an image with
the issue we encountered fixed as per the above, here:
https://hub.docker.com/repository/docker/ormandj/ceph/tags?page=1 that
you can use to install
I get really strange timings depending on the kernel version; see below. Did the
kernel client patch get lost? The only difference between gnosis and
smb01 is that gnosis is physical and smb01 is a KVM. Both have direct access to
the client network and use the respective kernel clients.
Timi
Hi again,
turns out the long bootstrap time was my own fault. I had some down+out
OSDs for quite a long time, which prevented the monitor from pruning
the OSD maps. Makes sense when I think about it, but I didn't before.
Rich's hint to get the cluster to health OK first pointed me in the
right d
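In case someone hits the same thing: OSDs that are gone for good can be
removed completely so the mons can prune old OSD maps again (the id below is a
placeholder):
# ceph osd tree down
# ceph osd purge 42 --yes-i-really-mean-it
ceph osd purge removes the OSD from the CRUSH map and deletes its auth key and
OSD id in one go; after that, and with the cluster back to HEALTH_OK, the old
maps get trimmed.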
On Tue, Jun 22, 2021 at 10:12 AM Ml Ml wrote:
> ceph -s is healthy. I started to do a xfs_repair on that block device
> now which seems to do something...:
>
> - agno = 1038
> - agno = 1039
> - agno = 1040
> - agno = 1041
> - agno = 1042
> - agno =
Den tis 22 juni 2021 kl 15:44 skrev Shafiq Momin :
> I see Octopus has limited support on CentOS 7. I have a prod cluster with
> 1.2 PB of data on Nautilus 14.2.
> Can we upgrade from Nautilus to Octopus on CentOS 7, or should we foresee issues?
Upgrading to Octopus should be fine; we run a C7 cluster with t
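The rough sequence that has worked for us, assuming the packages come from the
Octopus el7 repo (exact repo setup left out here):
# ceph osd set noout
(then upgrade the packages and restart ceph-mon on each monitor node one at a
time, then the ceph-mgr daemons, then the OSD nodes host by host, then MDS/RGW)
# ceph versions
# ceph osd require-osd-release octopus
# ceph osd unset noout
ceph versions is handy to confirm every daemon is actually running the new
release before you set require-osd-release.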
Hi all,
I see Octopus has limited support on CentOS 7. I have a prod cluster with
1.2 PB of data on Nautilus 14.2.
Can we upgrade from Nautilus to Octopus on CentOS 7, or should we foresee issues?
We have an erasure-coded pool.
Please guide on the recommended approach and documentation, if any.
Will a yum upgrade
On Tue, Jun 22, 2021 at 8:36 AM Ml Ml wrote:
> Hello List,
>
> all of a sudden I cannot mount a specific rbd device anymore:
>
> root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> /etc/ceph/ceph.client.admin.keyring
> /dev/rbd0
>
> root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/
The move seems to work as expected on recent kernels. I get O(1) with this
version:
# uname -r
5.9.9-1.el7.elrepo.x86_64
I cannot upgrade on the machine I need to do the move on. Is it worth trying a
newer fuse client, say from the nautilus or octopus repo?
Best regards,
=
Fran
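If you want to test with the fuse client without touching the kernel mount, a
minimal sketch (monitor address, credentials and paths below are placeholders):
# ceph-fuse -n client.admin -m mon1.example.com:6789 /mnt/cephfs-fuse
# time mv /mnt/cephfs-fuse/quota-a/bigfile /mnt/cephfs-fuse/quota-b/
# fusermount -u /mnt/cephfs-fuse
If the fuse client really does a rename instead of a copy, the mv should
return more or less instantly regardless of file size.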
Dear all,
some time ago I reported that the kernel client resorts to a copy instead of
a move when moving a file across quota domains. I was told that the fuse client
does not have this problem. If enough space is available, a move should be a
move, not a copy.
Today, I tried to move a large fil
Maybe it would be nice to send this as a calendar invite, so it shows up at the
correct local time for everyone?
> -Original Message-
> From: Mike Perez
> Sent: Tuesday, 22 June 2021 14:50
> To: ceph-users
> Subject: [ceph-users] Re: Ceph Month June Schedule Now Available
>
> Hi everyone
Hi everyone,
Join us in ten minutes for week 4 of Ceph Month!
9:00 ET / 15:00 CEST cephadm [sebastian wagner]
9:30 ET / 15:30 CEST CephFS + fscrypt: filename and content encryption
10:00 ET / 16:00 CEST Crimson Update [Samuel Just]
Meeting link: https://bluejeans.com/908675367
Full schedule: http
Sorry for the very naive question:
I know how to set/check the rgw quota for a user (using radosgw-admin)
But how can a radosgw user check the quota assigned to his/her
account, using the S3 and/or Swift interface?
I don't get this information using "swift stat", and I can't fin
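I don't have an answer for the user-facing side, but for reference the
admin-side check looks like this (the uid is a placeholder) and prints both
the user_quota and bucket_quota sections:
# radosgw-admin user info --uid=testuser
# radosgw-admin user stats --uid=testuser --sync-stats
user stats shows the current usage that the quota is enforced against.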
Could this not be related to the MDS and different OSDs being used?
> -Original Message-
> From: Frank Schilder
> Sent: Tuesday, 22 June 2021 13:25
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: ceph fs mv does copy, not move
>
> I get really strange timings depending on kernel
Hi Jay,
this alert was introduced in Pacific indeed. That's probably why you
haven't seen it before.
And it definitely implies read retries; the following output mentions
that explicitly:
HEALTH_WARN 1 OSD(s) have spurious read errors [WRN]
BLUESTORE_SPURIOUS_READ_ERRORS: 1 OSD(s) have sp
Den tis 22 juni 2021 kl 11:55 skrev Thomas Roth :
> Hi all,
> newbie question:
> The documentation seems to suggest that with ceph-volume, one OSD is created
> for each HDD (cf. 4-HDD-example in
> https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)
>
> This seems odd: what i
Thank you all for the clarification!
I just did not grasp the concept before, probably because I am used to systems that form a layer on top of the local file system. If Ceph does
it all, down to the magnetic platter, all the better.
Cheers
Thomas
On 6/22/21 12:15 PM, Marc wrote:
That
Hi,
just an addition:
current Ceph releases also include disk monitoring (e.g. SMART and
other health-related features). These do not work with RAID devices. You
will need external monitoring for your OSD disks.
Regards,
Burkhard
That is the idea; what is wrong with this concept? If you aggregate disks, you
are still aggregating 70 disks, and you will still have 70 disks.
Everything you do that Ceph can't be aware of creates a potential
misinterpretation of reality and makes Ceph act in a way it should not.
> -Origi
Hi,
On 22.06.21 11:55, Thomas Roth wrote:
Hi all,
newbie question:
The documentation seems to suggest that with ceph-volume, one OSD is
created for each HDD (cf. 4-HDD-example in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)
This seems odd: what if a server has
On 22.06.21 11:55, Thomas Roth wrote:
> That would mean each system
> has to deal with 70 OSDs, on 70 LVs?
Yes. And 70 is a rather unusual number of HDDs in a Ceph node.
Normally you have something like 20 to 24 block devices in a single
node. Each OSD needs CPU and RAM.
You could theoretical
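For many HDDs in one box, the usual pattern is one OSD per device, e.g. (the
device names are placeholders):
# ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd
# ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd
The --report run only prints what would be created; the second run actually
creates one LV and one OSD per device.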
Hi all,
newbie question:
The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. 4-HDD-example in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)
This seems odd: what if a server has a finite number of disks? I was going to try