Hey all,
I'm trying to figure out the appropriate process for adding a separate SSD
block.db to an existing OSD. From what I gather, the two steps are:
1. Use ceph-bluestore-tool bluefs-bdev-new-db to add the new DB device
2. Migrate the data with ceph-bluestore-tool bluefs-bdev-migrate
I followed
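For what it's worth, here is a rough sketch of those two steps as I understand
them. OSD id 0 and the new DB LV "db-vg/osd-0-db" are made-up names, and the
OSD has to be stopped first:

  systemctl stop ceph-osd@0
  # attach the new (empty) DB device to the OSD
  ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 \
      --dev-target /dev/db-vg/osd-0-db
  # move the existing RocksDB data off the main device onto the new DB device
  ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
      --devs-source /var/lib/ceph/osd/ceph-0/block \
      --dev-target /var/lib/ceph/osd/ceph-0/block.db
  systemctl start ceph-osd@0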
Don’t forget to change the LV tags and make sure ceph-bluestore-tool
show-label reports the right labels. This has been discussed multiple
times [1].
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GSFUUIMYDPSFM2HHO25TCTPLTXBS3O2K/
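In case it helps, a rough sketch of that check, with made-up names (data LV
"ceph-block-0/block-0", new DB LV "db-vg/osd-0-db"):

  # verify the labels on both devices
  ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block
  ceph-bluestore-tool show-label --dev /dev/db-vg/osd-0-db
  # inspect the ceph-volume tags on the data LV and add the DB-related ones
  lvs -o lv_name,lv_tags
  lvchange --addtag ceph.db_device=/dev/db-vg/osd-0-db ceph-block-0/block-0
  # (likewise ceph.db_uuid, and matching tags on the new DB LV itself)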
Quoting t...@postix.net:
Hey all,
I
Hi all,
I'm having a problem creating an OSD using ceph-volume (by way of cephadm).
This is on an Octopus installation with cephadm, so I use "cephadm shell" and
then "ceph-volume", but I get the following error:
root@furry:/var/lib# ceph-volume lvm prepare --data /dev/sda --block.db
/dev/vg/sd
Dear Michael,
maybe there is a way to restore access for users and solve the issues later.
Someone else with a lost/unfound object was able to move the affected file (or
directory containing the file) to a separate location and restore the now
missing data from backup. This will "park" the prob
I still have ceph-disk-created OSDs in Nautilus. I thought about using
ceph-volume, but it looks like this manual for replacing ceph-disk [1]
is not complete. I'm already getting this error:
RuntimeError: Unable check if OSD id exists:
[1]
https://docs.ceph.com/en/latest/rados/operations/add-or-r
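For reference, the takeover of ceph-disk OSDs is handled by ceph-volume's
"simple" mode; a rough sketch with a placeholder data partition:

  # scan an existing ceph-disk data partition; metadata is written to /etc/ceph/osd/
  ceph-volume simple scan /dev/sdb1
  # then activate the OSDs from the generated JSON files
  ceph-volume simple activate --all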
On 2020-09-17 19:21, vita...@yourcmc.ru wrote:
> It does, RGW really needs SSDs for bucket indexes. CephFS also needs SSDs for
> metadata in any setup that's used by more than 1 user :).
Nah. I crashed my first CephFS with my music library, a 2 TB git-annex
repo, just me alone (slow ops on the MDS).
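For anyone who does want the RGW index / CephFS metadata pools on SSDs, a
rough sketch of one way to do it (rule and pool names are examples, assuming
the SSD OSDs already report device class "ssd"):

  ceph osd crush rule create-replicated rep-ssd default host ssd
  ceph osd pool set cephfs_metadata crush_rule rep-ssd
  ceph osd pool set default.rgw.buckets.index crush_rule rep-ssd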
Use the block.db option without the device path, only {VG}/{LV}:
ceph-volume lvm prepare --data /dev/sda
--block.db vg/sda.db --dmcrypt
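In other words, create (or reuse) an LV on the SSD first and pass it as
VG/LV; a rough sketch, assuming the SSD is /dev/sdb and the sizes/names are
made up:

  vgcreate vg /dev/sdb        # skip if the VG already exists
  lvcreate -L 60G -n sda.db vg
  ceph-volume lvm prepare --data /dev/sda --block.db vg/sda.db --dmcrypt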
Quoting t...@postix.net:
Hi all,
I'm having a problem creating an OSD using ceph-volume (by way of
cephadm). This is on an Octopus installation with
Dear Michael,
> I disagree with the statement that trying to recover health by deleting
> data is a contradiction. In some cases (such as mine), the data in ceph
> is backed up in another location (eg tape library). Restoring a few
> files from tape is a simple and cheap operation that takes a m
Hi,
I'm seeing something weird about GID selection for Ceph with RDMA. When I
configure ms_async_rdma_device_name and ms_async_rdma_gid_idx,
Ceph with RDMA runs successfully. But when I configure
ms_async_rdma_device_name, ms_async_rdma_local_gid and
ms_async_rdma_roce_ver,
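For reference, the two variants look roughly like this in ceph.conf (device
name and GID value are placeholders):

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0
  # variant A: select the GID by index
  ms_async_rdma_gid_idx = 3
  # variant B: select the GID explicitly plus the RoCE version
  #ms_async_rdma_local_gid = 0000:0000:0000:0000:0000:ffff:c0a8:0101
  #ms_async_rdma_roce_ver = 2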
dm-writecache works using high and low watermarks, set at 45% and 50%.
All writes land in the cache; once the cache fills to the high watermark,
backfilling to the slow device starts, and it stops when reaching the low
watermark. Backfilling uses a b-tree with LRU blocks and tries to merge
blocks to reduce h
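For anyone wanting to try it, one way to set this up is through LVM rather
than raw dmsetup; a rough sketch with made-up names, including those
watermarks:

  # carve a cache LV out of the fast device, then attach it to the slow LV
  lvcreate -L 100G -n osd0_cache vg /dev/nvme0n1
  lvconvert --type writecache --cachevol osd0_cache \
      --cachesettings 'high_watermark=50 low_watermark=45' vg/osd0_slow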
On 17/09/2020 17:37, Mark Nelson wrote:
Does fio handle S3 objects spread across many buckets well? I think
bucket listing performance was maybe missing too, but it's been a
while since I looked at fio's S3 support. Maybe they have those use
cases covered now. I wrote a Go-based benchmark cal
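For context, fio's S3 support lives in its http ioengine and, as far as I
remember, each job targets a single object path, so "many buckets" isn't
really covered; a rough job-file sketch (host, keys and bucket are
placeholders):

  [s3-write]
  ioengine=http
  http_mode=s3
  https=off
  http_host=rgw.example.com:8080
  http_s3_keyid=ACCESSKEY
  http_s3_key=SECRETKEY
  filename=/fio-bucket/obj1
  rw=write
  bs=4m
  size=256m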
Hello Jason,
I confirm this release fixes the crashes. There hasn't been a single crash
for the past 4 days.
On Mon, Sep 14, 2020 at 2:55 PM Jason Dillaman wrote:
> On Mon, Sep 14, 2020 at 5:13 AM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> > Last week i got time to try debug crashes of these vms
> >
Hey all,
We will be having a Ceph science/research/big cluster call on Wednesday
September 23rd. If anyone wants to discuss something specific they can
add it to the pad linked below. If you have questions or comments you
can contact me.
This is an informal open call of community members mos
I'm starting to wonder (again) which scheduler is better for Ceph on SSDs.
My reasoning.
None:
1. Reduces latency for requests. The lower the latency, the higher the
perceived performance for an unbounded workload with a fixed queue depth
(hello, benchmarks).
2. Causes possible spikes in latency for reque
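For reference, checking and switching the scheduler per device is just sysfs
(sdX is a placeholder), plus a udev rule if it should persist:

  cat /sys/block/sdX/queue/scheduler
  echo none > /sys/block/sdX/queue/scheduler
  # /etc/udev/rules.d/60-scheduler.rules: "none" for all non-rotational disks
  ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"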
Hi,
We have a little problem with deep-scrubbing of PGs on an EC pool.
[root@mon-1 ~]# ceph health detail
HEALTH_WARN 1 pgs not deep-scrubbed in time
PG_NOT_DEEP_SCRUBBED 1 pgs not deep-scrubbed in time
pg 14.d4 not deep-scrubbed since 2020-09-05 20:26:02.696191
[root@mon-1 ~]# ceph pg deep-sc
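For reference, kicking the PG by hand and/or widening the deep-scrub interval
looks roughly like this (the interval value is just an example):

  ceph pg deep-scrub 14.d4
  # allow two weeks between deep scrubs before the warning triggers
  ceph config set osd osd_deep_scrub_interval 1209600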
Hi *,
I have 2 virtual one-node clusters configured for multisite RGW. In
the beginning the replication actually worked for a few hundred MB or
so, and then it stopped. In the meantime I have wiped both RGWs twice to
make sure the configuration is right (including wiping all pools
clean). I do
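For anyone wanting to compare notes, the sync state on each side can be
inspected with radosgw-admin; roughly (the zone name is a placeholder):

  radosgw-admin sync status
  radosgw-admin data sync status --source-zone=zone2
  radosgw-admin sync error list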
Hi Frank,
On 9/18/20 2:50 AM, Frank Schilder wrote:
Dear Michael,
firstly, I'm a bit confused why you started deleting data. The objects were
unfound, but still there. That's a small issue. Now the data might be gone and
that's a real issue.
Interval:
Anyone reading this: I have seen many threads where ceph admins s
> we did test dm-cache, bcache and dm-writecache, we found the latter to be
> much better.
Did you set the bcache block size to 4096 during your tests? Without this
setting it's slow, because 99.9% of SSDs don't handle 512-byte overwrites well.
Otherwise I don't think bcache should be worse than dm-write
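For reference, the block size is fixed when the cache set is created, so it
means re-creating the bcache device; something like this, with placeholder
device paths:

  # 4k block size; this wipes and re-creates the bcache superblocks
  make-bcache --block 4k -B /dev/sdb -C /dev/nvme0n1p1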
Hi,
On 18.09.20 at 03:53, Liam MacKenzie wrote:
> As I understand it, using RAID isn't recommended; how would I best deploy my
> cluster so it's smart enough to group drives according to the trays that
> they're in?
You could treat both disks as one and do a RAID0 over them with one OSD
on i
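A rough sketch of that variant, with placeholder device names (whether a
RAID0 per tray is a good idea is a separate discussion):

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  ceph-volume lvm create --data /dev/md0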
Dear Maged,
Do you mean dm-writecache is better than bcache in terms of small I/O
performance? By how much? Could you please share a bit more detail with us?
Thanks in advance,
Samuel
huxia...@horebdata.cn
From: Maged Mokhtar
Date: 2020-09-18 02:12
To: ceph-users
Subject: [ceph-users] Re: Be