On 11/3/23 16:43, Frank Schilder wrote:
Hi Gregory and Xiubo,
we have a smoking gun. The error shows up when using python's shutil.copy
function. It affects newer versions of python3. Here are some test results (quoted
e-mail from our user):
I now have a minimal example that reproduces the error
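
A minimal sketch of the kind of reproducer described (the paths below are
placeholders, not the user's actual test case): copy a file onto CephFS with
shutil.copy on one client, then stat the same path from a second client that has
the file system mounted.

#!/usr/bin/env python3
# Sketch of a two-client check; run with argument "copy" on client A,
# with no argument on client B.
import os
import shutil
import sys

SRC = "/cephfs/testdir/source_file"   # placeholder source on CephFS
DST = "/cephfs/testdir/copied_file"   # placeholder destination on CephFS

def copy_side():
    # Client A: shutil.copy on newer Python 3 uses fast in-kernel copy paths
    # (e.g. sendfile on Linux), which is what the report above points at.
    shutil.copy(SRC, DST)
    print("client A sees", os.stat(DST).st_size, "bytes")

def check_side():
    # Client B: with the inconsistency described above this may print 0 bytes
    # even though client A sees the full size.
    print("client B sees", os.stat(DST).st_size, "bytes")

if __name__ == "__main__":
    if sys.argv[1:] == ["copy"]:
        copy_side()
    else:
        check_side()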
On 11/1/23 23:57, Gregory Farnum wrote:
We have seen issues like this a few times and they have all been
kernel client bugs with CephFS’ internal “capability” file locking
protocol. I’m not aware of any extant bugs like this in our code base,
but kernel patches can take a long and winding path
On 11/1/23 22:14, Frank Schilder wrote:
Dear fellow cephers,
today we observed a somewhat worrisome inconsistency on our ceph fs. A file
created on one host showed up as 0 length on all other hosts:
[user1@host1 h2lib]$ ls -lh
total 37M
-rw-rw 1 user1 user1 12K Nov 1 11:59 dll_wrapper.p
Only client I/O, cluster recovery I/O and/or data scrubbing I/O make the
cluster "busy". If you have removed client workloads and the cluster is
healthy, it should be mostly idle. Simply having data sitting in the
cluster but not being accessed or modified doesn't make the cluster do any
work, exce
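
One rough way to confirm that, assuming the ceph CLI is available (a sketch, not
part of the original mail): look at the client and recovery rates reported by
"ceph status"; on a healthy cluster with no client workload they should be zero
or absent.

#!/usr/bin/env python3
# Print health plus client/recovery throughput from `ceph status --format json`.
# The per-second keys only appear while there is activity, so missing keys are
# treated as zero here.
import json
import subprocess

status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
pgmap = status.get("pgmap", {})

client_bytes = pgmap.get("read_bytes_sec", 0) + pgmap.get("write_bytes_sec", 0)
recovery_bytes = pgmap.get("recovering_bytes_per_sec", 0)

print("health:       ", status.get("health", {}).get("status"))
print("client B/s:   ", client_bytes)
print("recovery B/s: ", recovery_bytes)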
Hi Yuri,
On Tue, Nov 7, 2023 at 3:01 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLED failures)
> rados - Neha, Radek, Travis
I'm having difficulty adding and using a non-default placement target & storage
class and would appreciate insights. Am I going about this incorrectly? Rook
does not yet have the ability to do this, so I'm adding it by hand.
Following instructions on the net I added a second bucket pool, place
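
For the client side, once the zonegroup/zone placement entries exist, selecting
them looks roughly like this (a sketch; the endpoint, credentials, placement id
and storage class name are all made up): the bucket's LocationConstraint picks
the placement target, and StorageClass picks the storage class per object.

#!/usr/bin/env python3
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # placeholder RGW endpoint
    aws_access_key_id="ACCESS",                   # placeholder credentials
    aws_secret_access_key="SECRET",
)

# "<api-name>:<placement-id>" selects a non-default placement target at
# bucket creation time.
s3.create_bucket(
    Bucket="placement-test",
    CreateBucketConfiguration={"LocationConstraint": "default:special-placement"},
)

# Per-object storage class within that placement target.
s3.put_object(Bucket="placement-test", Key="hello.txt", Body=b"hello",
              StorageClass="COLD")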
On Mon, Nov 6, 2023 at 10:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLED failures)
> rados - Neha, Radek, Travis, Ernesto
Details of this release are summarized here:
https://tracker.ceph.com/issues/63443#note-1
Seeking approvals/reviews for:
smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLED failures)
rados - Neha, Radek, Travis, Ernesto, Adam King
rgw - Casey
fs - Venky
orch - Adam King
rbd - Ilya
krbd -
Please clarify my query.
I had 700+ volumes (220 applications) running on 36 OSDs when the cluster
reported slow operations. Due to an emergency, we migrated 200+ VMs to another
virtualization environment, so we have shut down all the related VMs in our
OpenStack production setup running on Ceph.
We have
On Sun, 5 Nov 2023 at 10:05, Eugen Block wrote:
>
> Hi,
>
> this is another example of why min_size 1/size 2 is a bad choice (if you
> value your data). There have been plenty of discussions about that on this
> list, so I won't go into detail here. I'm not familiar
> with Rook, but activating ex
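
For illustration only (the pool name is a placeholder), moving a replicated pool
to the commonly recommended size=3/min_size=2 is a couple of ceph CLI calls,
sketched here from Python:

#!/usr/bin/env python3
import subprocess

POOL = "mypool"  # placeholder pool name

def ceph(*args):
    # Thin wrapper around the ceph CLI; raises if a command fails.
    return subprocess.run(["ceph", *args], check=True)

# Show the current values, then move away from the risky size=2/min_size=1 setup.
ceph("osd", "pool", "get", POOL, "size")
ceph("osd", "pool", "get", POOL, "min_size")
ceph("osd", "pool", "set", POOL, "size", "3")
ceph("osd", "pool", "set", POOL, "min_size", "2")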
Hi,
I used this, but all of them return "directory inode not in cache":
ceph tell mds.* dirfrag ls path
I would like to pin some subdirs to a rank after dynamic subtree
partitioning. Before that, I need to know exactly where they are.
Thank you,
Ben
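
A sketch of the pinning step itself (the path and rank are placeholders): the
ceph.dir.pin virtual xattr pins a subtree to an MDS rank, and listing the
directory from a client first should pull its inode into the MDS cache, which is
also what "dirfrag ls" needs before it can report anything other than
"directory inode not in cache".

#!/usr/bin/env python3
import os

SUBDIR = "/cephfs/projects/teamA"  # placeholder path on a mounted CephFS client
RANK = b"1"                        # target MDS rank, as a string value

# Listing the directory from a client loads its inode into the MDS cache.
os.listdir(SUBDIR)

# Pin this subtree (and everything below it) to the given rank.
os.setxattr(SUBDIR, "ceph.dir.pin", RANK)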