Hello,
we see the same problems. Deleting all the pools and redeploying RGW solved it on
that test cluster, however that is no solution for production ;)
systemd[1]: Started Ceph rados gateway.
radosgw[7171]: 2021-04-04T14:37:51.508+ 7fc6641efc00 0 deferred set
uid:gid to 167:167 (ceph:ceph)
radosg
Hi,
I forgot to mention that CephFS is enabled and working.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein
sigh.
Thank you very much.
That actually makes sense, and isn't so bad after all.
It makes me wonder why I got no answers to my earlier related question a couple of
weeks ago about the proper way to replace an HDD in a failed hybrid OSD.
At least I know now.
You guys might consider a fe
On 05.04.21 at 21:27, Peter Woodman wrote:
yeah, but you don't want to have those reference objects in an EC pool,
that's iiuc been explicitly disallowed in newer versions, as it's a performance
suck. so leaving them in the replicated pool is good :)
I know, but that's quite workload-dependent
On 4/5/2021 3:49 PM, Philip Brown wrote:
I would file this as a potential bug, but it takes too long to get approved,
and tracker.ceph.com doesn't have straightforward Google sign-in enabled :-/
I believe that with the new LVM mandate, ceph-volume should not be complaining about
"missing PARTUUID".
I would file this as a potential bug, but it takes too long to get approved,
and tracker.ceph.com doesn't have straightforward Google sign-in enabled :-/
I believe that with the new LVM mandate, ceph-volume should not be complaining
about "missing PARTUUID".
This is stopping me from using my syst
yeah, but you don't want to have those reference objects in an EC pool,
that's iiuc been explicitly disallowed in newer versions, as it's a
performance suck. so leaving them in the replicated pool is good :)
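For reference, a minimal sketch of that setup (pool names and mount point are made up, adjust to your cluster): the replicated pool stays the default data pool and keeps the backtrace/reference objects, while the EC pool is only added as an extra data pool that file layouts point at:

    # the EC pool must allow overwrites before CephFS can use it
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    # add it as an additional data pool; the replicated pool stays the default
    ceph fs add_data_pool cephfs cephfs_data_ec
    # direct new files under a directory into the EC pool via a layout
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/bulk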
On Mon, Apr 5, 2021 at 2:55 PM Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote:
In order to enable NFS via Ganesha, you will need either an RGW or a
CephFS. Within the context of a Ceph deployment, Ganesha cannot export
anything on its own; it just exports either RGW or CephFS.
Daniel
On 4/5/21 1:43 PM, Robert Sander wrote:
Hi,
I have a test cluster now running on Pacific
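For reference, a rough sketch of the CLI side of that (names are placeholders, and the exact argument order of the export command has changed between releases, so check "ceph nfs export create cephfs --help" on your version):

    ceph fs ls              # Ganesha can only front an existing CephFS (or RGW)
    ceph nfs cluster ls     # the Ganesha cluster the orchestrator deployed
    # export a CephFS path through that cluster, e.g.:
    ceph nfs export create cephfs myfs mynfs /cephfs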
Hi,
that really looks like a useful tool, thanks for mentioning this on the list
:-).
However, I'd also love to learn about a different way, as the documentation
states:
"You may notice that object counts in your primary data pool (the one passed to fs new) continue to increase, even if files a
Thanks Sage,
I opted to move to an explicit placement map of candidate hostnames and
a replica count rather than using labels. This is a testing cluster of
VMs to experiment before updating the production system.
The only reason I was starting with 2 on the test cluster is that my
productio
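In case it helps anyone else, the explicit placement ends up being a small spec file like this (hostnames are placeholders for my test VMs):

    # mon-spec.yaml
    service_type: mon
    placement:
      count: 3
      hosts:
        - ceph-test-1
        - ceph-test-2
        - ceph-test-3

applied with "ceph orch apply -i mon-spec.yaml".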
Hi,
On 04.04.21 at 15:22, 胡 玮文 wrote:
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 -1 static int
> rgw::cls::fifo::FIFO::create(librados::v14_2_0::IoCtx, std::__cxx11::string,
> std::unique_ptr*, optional_yield,
> std::optional,
> std::optional >, bool, uint64_t, uint64_t):9
Hi,
I have a test cluster now running on Pacific with the cephadm
orchestrator and upstream container images.
In the Dashboard on the services tab I created a new service for NFS.
The containers got deployed.
But when I go to the NFS tab and try to create a new NFS share the
Dashboard only retur
I am in a situation where I see conflicting information.
On the one hand,
ls -l /var/lib/ceph/osd/ceph-7
shows a symlink for the block device, but no block.db
On the other hand,
ceph-volume lvm list
claims that there is a separate db device registered for osd 7
How can I know which one is correct?
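For reference, this is roughly how the two views can be cross-checked (a sketch; the LV tag name is what ceph-volume normally sets, verify on your system):

    # what ceph-volume recorded for osd.7
    ceph-volume lvm list | grep -A 20 'osd.7'
    # the tags ceph-volume stamped on the logical volumes themselves
    lvs -o lv_name,vg_name,lv_tags | grep 'ceph.osd_id=7'
    # what the running OSD actually has open
    ls -l /var/lib/ceph/osd/ceph-7/block*
    ceph osd metadata 7 | grep -i db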
Hi, I made a tool to do this. It's rough around the edges and has some
known bugs with symlinks as parent paths, but it checks all file layouts to
see if they match the directory layout they're in and, if not, makes them
match by copying and replacing. So to 'migrate', set your directory layouts and
the
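For context, the manual equivalent of what the tool automates looks roughly like this (directory, file and pool names are made up):

    # point the directory layout at the new data pool (only affects newly created files)
    setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/projects
    # existing files keep the layout they were created with
    getfattr -n ceph.file.layout.pool /mnt/cephfs/projects/old_file
    # rewriting a file (copy + replace) makes it pick up the directory's layout
    cp -a /mnt/cephfs/projects/old_file /mnt/cephfs/projects/.old_file.tmp
    mv /mnt/cephfs/projects/.old_file.tmp /mnt/cephfs/projects/old_file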
Hello. I had a one-way multisite S3 cluster, and we saw issues
with RGW sync due to sharding problems, so I stopped the multisite
sync. That is not the topic here, just some background to my story.
I have some leftover 0-byte objects in the destination and I'm trying to
overwrite them with Rclone "pa
Surprisingly enough - I figured this out moments after sending this. Setting public_network = 0.0.0.0/0 seems to work.
- Original message -
From: "Stephen Smith6"
To: ceph-users@ceph.io
Cc:
Subject: [EXTERNAL] [ceph-users] "unable to find any IP address in networks"
Date: Mon, Apr 5, 2021 9:2
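For anyone hitting the same thing: 0.0.0.0/0 works as a catch-all, but once the real subnet is known the monitors can be pointed at it explicitly (subnet and hostname below are just examples):

    # tell the mons which network to look for addresses in
    ceph config set mon public_network 192.168.10.0/24
    # then add the monitor with an explicit IP on that network
    ceph orch daemon add mon newmon-host:192.168.10.15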
Hey folks - I have a unique networking scenario I'm trying to understand. I'm using cephadm to deploy an Octopus-based cluster, and I'm trying to add monitors. However, when running "ceph orch apply mon " I'm seeing the following error in my cephadm.log on the node I'm trying to make a monitor: "unabl
‐‐‐ Original Message ‐‐‐
On Saturday, April 3, 2021 11:22 PM, David Orman wrote:
> We use cephadm + podman for our production clusters, and have had a
> great experience. You just need to know how to operate with
> containers, so make sure to do some reading about how containers work.
> W
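To make that concrete, the container-level commands you end up reaching for most often look something like this (the fsid and daemon name are placeholders):

    cephadm ls                        # daemons cephadm manages on this host
    cephadm shell                     # container with the ceph CLI and keyrings
    podman ps                         # the raw containers behind those daemons
    journalctl -u ceph-<fsid>@osd.3   # logs go through per-daemon systemd units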
Hello,
when I run my borgbackup over a CephFS volume (10 subvolumes for 1.5 TB) I
can see a big increase in OSD space usage, and 2 or 3 OSDs go near-full
or full, then out, and finally the cluster goes into an error state.
Any tips to prevent this?
My cluster is Ceph v15 with:
9 nodes:
each node runs: 2x6t
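For reference, the commands being used to watch this happen (the ratio values shown are just the defaults, not a recommendation):

    ceph df                            # per-pool usage
    ceph osd df tree                   # per-OSD utilization and variance
    ceph balancer status               # is the balancer evening things out?
    # full/near-full thresholds, in case they need a temporary nudge during the backup
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95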
> On April 5, 2021, at 20:48, Adrian Sevcenco wrote:
>
> On 4/5/21 3:27 PM, 胡 玮文 wrote:
On April 5, 2021, at 19:29, Adrian Sevcenco wrote:
>>>
>>> Hi! How/where can i change the image configured for a service?
>>> I tried to modify /var/lib/ceph///unit.{image,run}
>>> but after restarting
>>> ceph orch ps shows
On 4/5/21 3:27 PM, 胡 玮文 wrote:
On April 5, 2021, at 19:29, Adrian Sevcenco wrote:
Hi! How/where can i change the image configured for a service?
I tried to modify /var/lib/ceph///unit.{image,run}
but after restarting
ceph orch ps shows that the service use the same old image.
Hi Adrian,
Hi!
Try “ceph
On April 5, 2021, at 19:29, Adrian Sevcenco wrote:
Hi! How/where can i change the image configured for a service?
I tried to modify /var/lib/ceph///unit.{image,run}
but after restarting
ceph orch ps shows that the service use the same old image.
Hi Adrian,
Try “ceph config set container_image ” where ca
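For illustration, with placeholder values (service name and image tag are examples only):

    # override the image for one service...
    ceph config set mgr container_image quay.io/ceph/ceph:v15.2.10
    # ...or for everything
    ceph config set global container_image quay.io/ceph/ceph:v15.2.10
    # then have the orchestrator recreate the daemons with the new image
    ceph orch redeploy mgr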
Hi! How/where can i change the image configured for a service?
I tried to modify /var/lib/ceph///unit.{image,run}
but after restarting
ceph orch ps shows that the service use the same old image.
What other configuration locations are there for the Ceph components
besides /etc/ceph (which is quite
Good morning,
I was wondering if there are any timing indications as to how long a PG
should "usually" stay in a certain state?
For instance, how long should a PG stay in
- peering (seconds - minutes?)
- activating (seconds?)
- scrubbing (+deep)
The scrub process obviously depends on the number
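For reference, a sketch of the commands that show how long a PG has actually been sitting in a state (the pg id is just an example):

    ceph pg dump_stuck inactive     # PGs stuck longer than the threshold (seconds)
    ceph pg dump_stuck unclean
    ceph pg 2.1f query              # full state history and timestamps for one PG
    ceph pg dump pgs                # includes last (deep-)scrub stamps per PG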
I am attempting to upgrade a Ceph cluster that was deployed with
Octopus 15.2.8 and upgraded to 15.2.10 successfully. I'm now attempting to
upgrade to 16.2.0 Pacific, and it is not going very well.
I am using cephadm. It looks to have upgraded the managers and stopped,
and not moved on to
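For anyone in the same spot, the usual diagnosis commands are (nothing cluster-specific in them):

    ceph orch upgrade status   # target image, progress, any error message
    ceph -W cephadm            # follow the orchestrator log live
    ceph health detail         # upgrade problems usually surface here too
    ceph orch upgrade pause    # pause/resume if you need to intervene
    ceph orch upgrade resume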
On 04.04.21 at 22:52, Kai Börnert wrote:
> a) Make SSD only pools for the cephfs metadata
>
> b) Give every OSD a SSD for the bluestore cache
I would go with both. Depending on how much budget you have for SSDs, they
could be used in a mixed scenario where you have three to four block.db
volumes
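For option a), the device-class based variant is only a couple of commands (pool and rule names are placeholders):

    # replicated rule restricted to OSDs with device class "ssd"
    ceph osd crush rule create-replicated ssd-only default host ssd
    # move the CephFS metadata pool onto it
    ceph osd pool set cephfs_metadata crush_rule ssd-only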