Hi everyone,
Our schedule has filled up for Ceph Day London, but we're still looking for
content for Ceph Day Poland on October 28, as well as Ceph Day San Diego
November 18. If you're interested in giving a community talk, please see
any of the Ceph Days links from my earlier email for the CFP fo
Dear ceph users,
we're experiencing a segfault during MDS startup (replay process) which is
making our FS inaccessible.
MDS log messages:
Oct 15 03:41:39.894584 mds1 ceph-mds: -472> 2019-10-15 00:40:30.201
7f3c08f49700 1 -- 192.168.8.195:6800/3181891717 <== osd.26
192.168.8.209:6821/2419345
Once upon a time ceph-fuse did its own internal hash-map of live
inodes to handle that (by just remembering which 64-bit inode any
32-bit one actually referred to).
Unfortunately I believe this has been ripped out because it caused
problems when the kernel tried to do lookups on 32-bit inodes that
On Tue, 15 Oct 2019 at 19:40, Nathan Fish wrote:
> I'm not sure exactly what would happen on an inode collision, but I'm
> guessing Bad Things. If my math is correct, a 2^32 inode space will
> have roughly 1 collision per 2^16 entries. As that's only 65536,
> that's not safe at all.
>
Yeah, the
I ran the s3 test (run-s3tests.sh) in vstart mode against Nautilus. Is there a
better guide out there than this one?
https://docs.ceph.com/ceph-prs/17381/dev/#testing-how-to-run-s3-tests-locally
Thus far I ran into a ceph.conf parse issue, a keyring permission issue and a
radosgw crash.
+ ra
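For anyone trying the same thing, a minimal sketch of the vstart workflow that
page describes, run from a local Ceph build tree (the wrapper path below is an
assumption; adjust it to wherever run-s3tests.sh lives in your checkout):

  # start a throwaway cluster with one RGW from the build directory
  cd build
  MON=1 OSD=3 MGR=1 MDS=0 RGW=1 ../src/vstart.sh -n -d
  # then run the s3 test wrapper mentioned above (path assumed)
  ../qa/workunits/rgw/run-s3tests.sh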
I was wondering if this is provided somehow? All I see is rbd and radosgw
mentioned. If you have applications built with librados, surely OpenStack
must have a way to provide it?
I'm not sure exactly what would happen on an inode collision, but I'm
guessing Bad Things. If my math is correct, a 2^32 inode space will
have roughly 1 collision per 2^16 entries. As that's only 65536,
that's not safe at all.
On Mon, Oct 14, 2019 at 8:14 AM Dan van der Ster wrote:
>
> OK I found
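That estimate is the usual birthday bound: with n inodes hashed into a 2^32
space, the expected number of colliding pairs is about n(n-1)/2 / 2^32, which
reaches ~0.5 at n = 2^16. A quick sanity check (plain Python from a shell,
nothing Ceph-specific):

  # expected colliding pairs for 65536 entries in a 2^32 inode space
  $ python3 -c 'n = 2**16; print(n * (n - 1) / 2 / 2**32)'    # ~0.5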
On 14/10/2019 22:57, Reed Dier wrote:
> I had something slightly similar to you.
>
> However, my issue was specific/limited to the device_health_metrics pool
> that is auto-created with 1 PG when you turn that mgr feature on.
>
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg56315.htm
No, it's not possible to recover from a *completely dead* block.db; it
contains all the metadata (like... object names).
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
Hi all,
One of our users has some 32-bit commercial software that they want to
use with CephFS, but it's not working because our inode numbers are
too large. E.g. his application gets a "file too big" error trying to
stat inode 0x40008445FB3.
I'm aware that CephFS offsets the inode numbers by
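For scale, the inode from that example is roughly a thousand times larger than
anything a 32-bit st_ino can hold, so a non-LFS 32-bit stat() has no choice but
to fail (typically with EOVERFLOW, which applications often surface as a
file-size error):

  $ printf '%d\n' 0x40008445FB3
  4398185209779
  $ printf '%d\n' 0xffffffff       # largest value a 32-bit st_ino can represent
  4294967295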
Hello cephers!
We've lost the block.db file for one of our OSDs. This results in a down OSD
and an incomplete PG. The block file from the OSD, which symlinks to a
particular /dev folder, is live and correct.
My main question: is there any theoretical possibility to extract a
particular PG from an OSD whose block.db device
On Tue, Oct 15, 2019 at 2:42 AM Jeremi Avenant wrote:
> Good day
>
> I'm currently administering a Ceph cluster that consists of HDDs &
> SSDs. The rule for cephfs_data (ec) is to write to both these drive
> classifications (HDD+SSD). I would like to change it so that
> cephfs_metadata (non-
On Mon, Oct 14, 2019 at 2:58 PM Paul Emmerich wrote:
>
> Could the 4 GB GET limit saturate the connection from rgw to Ceph?
> Simple to test: just rate-limit the health check GET
I don't think so; we have dual 25 Gbps links in a LAG, so Ceph to RGW has
multiple paths, but we aren't balancing on port yet,
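If it helps, the "balancing on port" part is usually the bond's transmit hash
policy; with layer3+4 hashing, parallel flows between the same two hosts can
spread across both 25G members. A quick way to check on Linux (bond name is an
example):

  $ grep -i "hash policy" /proc/net/bonding/bond0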
That's apply/commit latency (the two have been identical since BlueStore btw,
so there's no point in tracking both). It should not contain any network
component. Since the path you are optimizing is inter-OSD communication, check
out subop latency; that's the one where this should show up.
Paul
--
Paul Emmerich
Look
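A minimal way to pull those counters from a live OSD's admin socket (the OSD id
and the jq filter are just examples; each counter is an avgcount/sum/avgtime
triple):

  # client op latency vs. replication (sub-op) latency on one OSD
  $ ceph daemon osd.0 perf dump | jq '.osd | {op_latency, subop_latency, subop_w_latency}'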
I don't see any changes here...
There is a graph here. It was pure Nautilus before 10-05 and
Nautilus+RDMA after.
https://nc.avalon.org.ua/s/LptPTEaTeTTyKtD
Link expires on Nov 1.
Good day!
Tue, Oct 15, 2019 at 02:29:58PM +0300, vitalif wrote:
> Wow, does it really work?
>
> And why is it not supported by RBD?
I haven't dived into the sources, but it is stated in the docs.
>
> Can you show us the latency graphs before and after and tell the I/O pattern
> to which the latency
Hi,
I have a Mimic 13.2.6 cluster which is reporting that a PG is inconsistent.
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 21.e6d is active+clean+inconsistent, acting [988,508,825]
I checked 'list-inconsistent-obj' (See below) and it shows:
selected_object_info: "dat
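For anyone hitting the same thing, the usual way to inspect and, once the bad
replica is understood, repair such a PG (using the PG id from above):

  # dump the scrub findings for the inconsistent PG
  $ rados list-inconsistent-obj 21.e6d --format=json-pretty
  # only after working out which copy is wrong:
  $ ceph pg repair 21.e6d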
On 10/15/19 1:29 PM, Vitaliy Filippov wrote:
> Wow, does it really work?
>
> And why is it not supported by RBD?
>
> Can you show us the latency graphs before and after and tell the I/O
> pattern to which the latency applies? Previous common knowledge was that
> RDMA almost doesn't affect laten
Wow, does it really work?
And why is it not supported by RBD?
Can you show us the latency graphs before and after and tell the I/O
pattern to which the latency applies? Previous common knowledge was that
RDMA almost doesn't affect latency with Ceph, because most of the latency
is in Ceph i
Hello!
Mon, Oct 14, 2019 at 07:28:07AM -, gabryel.mason-williams wrote:
> Hello,
>
> I was wondering what user experience was with using Ceph over RDMA?
> - How you set it up?
We used RoCE LAG with Mellanox ConnectX-4 Lx.
> - Documentation used to set it up?
Generally, Mellano
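For context, the Ceph side of such a setup is typically just a couple of
messenger options in ceph.conf; the values below are an illustration, not the
poster's actual config (the daemons also need a generous memlock limit in
their systemd units):

  [global]
  # switch the async messenger to RDMA (RoCE here)
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0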
Good day
I'm currently administering a Ceph cluster that consists of HDDs &
SSDs. The rule for cephfs_data (ec) is to write to both these drive
classifications (HDD+SSD). I would like to change it so that
cephfs_metadata (non-ec) writes to SSD & cephfs_data (erasure encoded "ec")
writes to HD
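As a sketch of the usual approach with device classes (rule and pool names are
assumed to match the ones above; an EC data pool needs its own rule built from
an hdd-only erasure profile):

  # replicated rule restricted to the ssd device class
  $ ceph osd crush rule create-replicated replicated_ssd default host ssd
  # point the metadata pool at it; data stays on its (hdd) EC rule
  $ ceph osd pool set cephfs_metadata crush_rule replicated_ssd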
hi,
if I need to get the sizes of all the root-level directories in a CephFS file
system, is there any simple way to do that via the Ceph system tools?
thanks
renjianxinlover
renjianxinlo...@163.com
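In case it helps: CephFS keeps recursive statistics as virtual xattrs, so the
size of each top-level directory can be read straight from any mounted client
(mount point is an example):

  # recursive byte count of one directory
  $ getfattr --only-values -n ceph.dir.rbytes /mnt/cephfs/somedir
  # loop over every directory directly under the root
  $ for d in /mnt/cephfs/*/; do echo "$(getfattr --only-values -n ceph.dir.rbytes "$d") $d"; done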
Hi,
I want to use balancer mode "upmap" for all pools.
This mode is currently enabled for pool "hdb_backup" with ~600 TB used space.
root@ld3955:~# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY
UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
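For reference, the usual sequence to switch the balancer to upmap (a sketch;
the module evaluates all pools by default, and upmap needs luminous-or-newer
clients):

  $ ceph osd set-require-min-compat-client luminous
  $ ceph balancer mode upmap
  $ ceph balancer on
  $ ceph balancer status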
Hi,
checking my Ceph health status I get this warning:
1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have
overcommitted pool target_size_ratio
The details are as follows:
Pools ['hdb_backup'] overcommit available storage by 1.288x due to
target_size_bytes 0 on pools []
How
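That warning comes from the pg_autoscaler comparing the pools' target sizes and
ratios against the capacity of the CRUSH subtree they map to. A sketch of how
to inspect and clear an overcommitted target (pool name and values are only
examples):

  # shows per-pool SIZE, TARGET SIZE, RATE and the ratio the warning is based on
  $ ceph osd pool autoscale-status
  # clear or correct the overcommitted targets
  $ ceph osd pool set hdb_backup target_size_ratio 0
  $ ceph osd pool set hdb_backup target_size_bytes 0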