Hey guys,
I have three servers with 12x 12 TB SATA HDDs and 1x 3.4 TB NVMe. I am thinking
of putting DB/WAL on the NVMe as well as a 5 GB dm-cache for each spinning
disk. Is anyone running something like this in a production environment?
Best regards
Felix
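In case it helps the discussion, here is a minimal sketch of how one OSD in such a layout could be built; this is only an assumption about the intended setup, and every device, VG and LV name below is a placeholder:

    # Put the HDD and a slice of the NVMe into one VG so lvmcache can pair them
    vgcreate vg_sda /dev/sda /dev/nvme0n1p1
    lvcreate -n osd_sda -l 100%PVS vg_sda /dev/sda
    lvcreate --type cache-pool -L 5G -n cache_sda vg_sda /dev/nvme0n1p1
    lvconvert --type cache --cachepool vg_sda/cache_sda vg_sda/osd_sda
    # DB/WAL on a separate NVMe partition (or LV) for the same OSD
    ceph-volume lvm create --bluestore --data vg_sda/osd_sda --block.db /dev/nvme0n1p2

Whether ceph-volume is happy with a cached LV as the data device is something I would verify on a test box first.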
-
Hi,
I would like to ask if anybody knows how to handle the gwcli status below.
- The disk state in gwcli shows as "Unknown".
- Clients still mount the "Unknown" disks and seem to be working normally.
Two of the RBD disks show "Unknown" instead of "Online" in gwcli.
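Not an answer, but a hedged sketch of what I would check first on each iSCSI gateway node, assuming the usual ceph-iscsi packaging with its rbd-target-api/rbd-target-gw services:

    systemctl status rbd-target-api rbd-target-gw
    journalctl -u rbd-target-api --since "1 hour ago"
    gwcli ls    # re-check the disk state once the API service is confirmed healthy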
=
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on May 19, 14:00-15:00
UTC. Please add topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. We are hoping to
receive feedback on the Quincy release and hear more about your
general ops experience regarding upgrades,
I'm not quite clear where the confusion is coming from here, but there
are some misunderstandings. Let me go over it a bit:
On Tue, May 10, 2022 at 1:29 AM Frank Schilder wrote:
>
> > What you are missing from stretch mode is that your CRUSH rule wouldn't
> > guarantee at least one copy in surviv
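For context, this is roughly the explicit two-site rule the stretch-mode documentation describes (a sketch; the bucket names site1/site2 are placeholders for your datacenter buckets):

    rule stretch_rule {
            id 1
            min_size 1
            max_size 10
            type replicated
            step take site1
            step chooseleaf firstn 2 type host
            step emit
            step take site2
            step chooseleaf firstn 2 type host
            step emit
    }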
Hello,
Yes, we got several slow ops stuck for many seconds.
What we noted: CPU/mem usage is lower than on Nautilus (
https://drive.google.com/file/d/1NGa5sA8dlQ65ld196Ku2hm_Y0xxvfvNs/view?usp=sharingt
)
Same behaviour as you.
For the moment, rebuilding one of our nodes seems to fix the latency issue.
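If it helps, a minimal sketch of how one might inspect the slow ops while they are happening, assuming admin-socket access on the OSD host (the OSD id is a placeholder):

    ceph health detail | grep -i slow
    ceph daemon osd.12 dump_ops_in_flight        # current ops and how long they have been queued
    ceph daemon osd.12 dump_historic_slow_ops    # recent ops that exceeded the slow-op threshold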
We're happy to announce the 8th backport release in the Pacific series.
We recommend all users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/en/news/blog/2022/v16-2-8-pacific-released
Notable Changes
--
> On 16 May 2022 at 13:41, Sanjeev Jha wrote the following:
>
> Hi,
>
> Could someone please let me know how to take S3 and RBD backups from the Ceph side,
> and whether it is possible to take backups from the client/user side?
>
> Which tool should I use for the backup?
It depends.
>
> Best regards
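To make "it depends" a bit more concrete, here is a hedged sketch of two common approaches; the pool, image, bucket and remote names are placeholders, and rclone is just one example of an S3-capable client:

    # RBD: snapshot-based export from the Ceph side
    rbd snap create rbd_pool/vm_disk@backup-2022-05-16
    rbd export rbd_pool/vm_disk@backup-2022-05-16 /backup/vm_disk-2022-05-16.img
    # Incremental export against the previous snapshot
    rbd export-diff --from-snap backup-2022-05-15 rbd_pool/vm_disk@backup-2022-05-16 /backup/vm_disk.diff

    # S3: copy a bucket from the client side with any S3 tool
    rclone sync ceph-s3:mybucket /backup/mybucket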
I have an error in my Ceph cluster:
HEALTH_WARN 1 daemons have recently crashed
[WRN] RECENT_CRASH: 1 daemons have recently crashed
client.admin crashed on host node1 at 2022-05-16T08:30:41.205667Z
What does this mean?
How can I fix it?
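In case it helps, the warning comes from the crash module; a minimal sketch of the usual way to inspect and clear it (the crash ID is a placeholder):

    ceph crash ls                 # list recent crashes and their IDs
    ceph crash info <crash-id>    # show metadata and the backtrace for one crash
    ceph crash archive-all        # acknowledge the crashes and clear the health warning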
In our case it appears that file deletes have a very high impact on OSD
operations. It is not even a particularly large delete: ~20 TB on a 1 PB utilized
filesystem (large files as well).
We are trying to tune down CephFS delayed deletes via:
"mds_max_purge_ops": "512",
"mds_max_purge_ops_per_pg": "0.100"
We have a newly-built Pacific (16.2.7) cluster running 8+3 EC (jerasure), ~250
OSDs across 21 hosts, which has significantly lower than expected IOPS. It is only
doing about 30 IOPS per spinning disk (with appropriately sized SSD
BlueStore DBs), around ~100 PGs per OSD. We have around 100 CephFS (ceph-fuse
16.2.7
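Not a fix, but a hedged way to get a baseline independent of the CephFS clients (the pool name is a placeholder):

    rados bench -p cephfs_data 30 write --no-cleanup
    rados bench -p cephfs_data 30 rand
    rados -p cephfs_data cleanup    # remove the benchmark objects afterwards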
On Tue, May 10, 2022 at 2:47 PM Horvath, Dustin Marshall
wrote:
>
> Hi there, newcomer here.
>
> I've been trying to figure out if it's possible to repair or recover cephfs
> after some unfortunate issues a couple of months ago; these couple of nodes
> have been offline most of the time since th
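In case it is useful, a minimal sketch of the read-only checks I would start with (the filesystem name is a placeholder):

    ceph fs status
    ceph health detail
    cephfs-journal-tool --rank=<fs_name>:0 journal inspect    # read-only sanity check of the MDS journal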
Hi,
Could someone please let me know how to take S3 and RBD backups from the Ceph side,
and whether it is possible to take backups from the client/user side?
Which tool should I use for the backup?
Best regards,
Sanjeev Kumar Jha
I have set debug_mds to 20 for a while; the log is available as
ceph-mds.ceph16.log-20220516.gz.
Thanks & best regards,
Felix Lee ~
On 5/16/22 14:45, Jos Collin wrote:
It's hard to suggest anything without the logs. Please enable verbose logging with debug_mds=20.
What's the Ceph version? Do you have logs showing why the MDS crashed?
On 16/05/22 11:
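For reference, one way to raise the MDS log level temporarily and revert it afterwards (a sketch, assuming the centralized config store):

    ceph config set mds debug_mds 20
    # ... reproduce the problem, collect /var/log/ceph/ceph-mds.*.log ...
    ceph config rm mds debug_mds    # return to the default level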
On 2022-05-12 15:29, Arthur Outhenin-Chalandre wrote:
We are going towards mirror snapshots, but we haven't advertised it
internally so far and we won't enable it on every image; it would only
be for new volumes if people explicitly want that feature. So we are
probably not going to hit these p
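For anyone curious, a minimal sketch of enabling snapshot-based mirroring on a single image rather than a whole pool (pool and image names are placeholders):

    rbd mirror pool enable mypool image              # per-image mirroring mode on the pool
    rbd mirror image enable mypool/myimage snapshot
    rbd mirror snapshot schedule add --pool mypool --image myimage 1h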