Hi Igor,
On 23.09.2020 at 18:38, Igor Fedotov wrote:
bin/ceph-bluestore-tool --path dev/osd0 --devs-source dev/osd0/block.wal
--dev-target dev/osd0/block.db --command bluefs-bdev-migrate
Would this also work if the OSD only has its primary block device and
the separate WAL device? Like runni
In the end I solved it by restarting the cluster target with systemd; I guess
something was stuck.
>> With today’s networking, _maybe_ a super-dense NVMe box needs 100Gb/s where
>> a less-dense probably is fine with 25Gb/s. And of course PCI lanes.
>>
>> https://cephalocon2019.sched.com/event/M7uJ/affordable-nvme-performance-on-ceph-ceph-on-nvme-true-unbiased-story-to-fast-ceph-wido-den-holl
On 23/09/2020 17:58, vita...@yourcmc.ru wrote:
I have no idea how you get 66k write iops with one OSD )
I've just repeated a test by creating a test pool on one NVMe OSD with 8 PGs
(all pinned to the same OSD with pg-upmap). Then I ran 4x fio randwrite q128
over 4 RBD images. I got 17k iops.
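For reference, a hedged sketch of that kind of test; the pool and image names are placeholders and the pg-upmap pinning step is omitted, so this is not the exact invocation used above:
# create a small test pool and an RBD image in it (names are hypothetical)
ceph osd pool create testpool 8 8
rbd create testpool/testimg1 --size 10G
# 4 KiB random writes at queue depth 128 against the image via fio's rbd engine
fio --name=rbd-randwrite --ioengine=rbd --pool=testpool --rbdname=testimg1 \
    --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 --runtime=60 --time_based
# in the quoted test, four such fio instances were run in parallel, one per RBD image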
On 9/23/20 2:21 PM, Alexander E. Patrakov wrote:
On Wed, Sep 23, 2020 at 8:12 PM Anthony D'Atri wrote:
With today’s networking, _maybe_ a super-dense NVMe box needs 100Gb/s where a
less-dense probably is fine with 25Gb/s. And of course PCI lanes.
https://cephalocon2019.sched.com/event/M7uJ/
On Wed, Sep 23, 2020 at 8:12 PM Anthony D'Atri wrote:
> With today’s networking, _maybe_ a super-dense NVMe box needs 100Gb/s where a
> less-dense probably is fine with 25Gb/s. And of course PCI lanes.
>
> https://cephalocon2019.sched.com/event/M7uJ/affordable-nvme-performance-on-ceph-ceph-on-nv
Thanks for the feedback everyone! It seems we have more to look into regarding
NVMe enterprise storage solutions. The workload doesn’t demand NVMe
performance, so SSD seems to be the most cost effective way to handle this.
The performance discussion is very interesting!
Regards,
Brent
-
Hi
Thanks for the reply.
cephadm runs the ceph containers automatically. How do I set privileged mode
in the ceph container?
--
On 23/9/20 at 13:24, Daniel Gryniewicz wrote:
NFSv3 needs privileges to connect to the portmapper. Try running
your docker container in privileged mode, and see if t
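For a manually started container (cephadm generates its own container run commands, so there you would likely have to adjust the generated unit instead), privileged mode might look roughly like this; the image name and volume paths are placeholders:
# hypothetical manual run of an NFS-Ganesha container with NFSv3/portmapper access
docker run -d --name ganesha \
    --privileged --net=host \
    -v /etc/ganesha:/etc/ganesha:ro \
    -v /etc/ceph:/etc/ceph:ro \
    my-ganesha-image    # placeholder image name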
On 9/23/20 12:18 PM, Mark Nelson wrote:
On 9/23/20 10:58 AM, vita...@yourcmc.ru wrote:
I have no idea how you get 66k write iops with one OSD )
I've just repeated a test by creating a test pool on one NVMe OSD
with 8 PGs (all pinned to the same OSD with pg-upmap). Then I ran 4x
fio randwrite
On 9/23/20 10:58 AM, vita...@yourcmc.ru wrote:
I have no idea how you get 66k write iops with one OSD )
I've just repeated a test by creating a test pool on one NVMe OSD with 8 PGs
(all pinned to the same OSD with pg-upmap). Then I ran 4x fio randwrite q128
over 4 RBD images. I got 17k iops.
Hi Michael,
yes, you can use ceph-bluestore-tool to do that. E.g.
bin/ceph-bluestore-tool --path dev/osd0 --devs-source dev/osd0/block.wal
--dev-target dev/osd0/block.db --command bluefs-bdev-migrate
inferring bluefs devices from bluestore path
device removed:0 dev/osd0/block.wal
Additional
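After a migration like the one above it may be worth confirming that the WAL device is really gone and the OSD is consistent before restarting it; a hedged sketch using the same dev/osd0 paths:
# show the bluestore device labels (the WAL entry should no longer be listed)
bin/ceph-bluestore-tool show-label --dev dev/osd0/block --dev dev/osd0/block.db
# run a consistency check before restarting the OSD
bin/ceph-bluestore-tool --path dev/osd0 --command fsck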
NFSv3 needs privileges to connect to the portmapper. Try running your
docker container in privileged mode, and see if that helps.
Daniel
On 9/23/20 11:42 AM, Gabriel Medve wrote:
Hi,
I have Ceph 15.2.5 running in Docker; I configured NFS Ganesha with
NFS version 3 but I cannot mount it
Hi Eugen,
On 23.09.2020 at 14:51, Eugen Block wrote:
I don't think there's a way to remove WAL/DB without rebuilding the OSD.
ceph-bluestore-tool bluefs-bdev-migrate expects a target device to
migrate the data since it's a migration. I can't read the full thread (I
get a server error), what i
I have no idea how you get 66k write iops with one OSD )
I've just repeated a test by creating a test pool on one NVMe OSD with 8 PGs
(all pinned to the same OSD with pg-upmap). Then I ran 4x fio randwrite q128
over 4 RBD images. I got 17k iops.
OK, in fact that's not the worst result for Ceph,
I don't think you need a bucket under host for the two LVs. It's unnecessary.
September 23, 2020 6:45 AM, "George Shuklin" wrote:
> On 23/09/2020 10:54, Marc Roos wrote:
>
>>> Depends on your expected load, not? I already read here numerous times
>> that OSDs cannot keep up with NVMes, tha
Apologies for not consolidating these replies. My MUA is not my friend today.
> With 10 NVMe drives per node, I'm guessing that a single EPYC 7451 is
> going to be CPU bound for small IO workloads (2.4c/4.8t per OSD), but
> will be network bound for large IO workloads unless you are sticking
> 2x1
Hi,
I have Ceph 15.2.5 running in Docker; I configured NFS Ganesha with
NFS version 3 but I cannot mount it.
If I configure Ganesha with NFS version 4 I can mount it without problems,
but I need version 3.
The error is: mount.nfs: Protocol not supported
Can you help me?
Thanks.
--
> How they did it?
You can create partitions / LVs by hand and build OSDs on them, or you can use
ceph-volume lvm batch --osds-per-device (see the sketch below).
> I have an idea to create a new bucket type under host, and put two LV from
> each ceph osd VG into that new bucket. Rules are the same (different host),
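As a concrete, hedged example of the ceph-volume route mentioned above (device names are placeholders):
# preview what would be created, then deploy two OSDs per NVMe device
ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1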
> That's pretty much the advice I've been giving people since the Inktank days.
> It costs more and is lower density, but the design is simpler, you are less
> likely to under provision CPU, less likely to run into memory bandwidth
> bottlenecks, and you have less recovery to do when a node f
Hi Stefan,
thanks for your answer. I think the deprecated option is still supported and I
found something else - I will update to the new option though. On the ceph
side, I see in the log now:
client session with non-allowable root '/' denied (client.31382084
192.168.48.135:0/2576875769)
It
Hello,
it seems that ceph-volume-systemd causes this confusion due to missing
python3.6 packages here:
[root@ceph1n011 system]# /usr/sbin/ceph-volume-systemd
Traceback (most recent call last):
File "/usr/sbin/ceph-volume-systemd", line 6, in
from pkg_resources import load_entry_point
File "/usr
On 9/23/20 8:23 AM, George Shuklin wrote:
I've just finished doing our own benchmarking, and I can say, you
want to do something very unbalanced and CPU bound.
1. Ceph consumes a LOT of CPU. My peak value was around 500% CPU per
ceph-osd at top performance (see the recent thread on 'ceph
On 2020-09-23 11:00, Frank Schilder wrote:
> Dear all,
>
> maybe someone has experienced this before. We are setting up a SAMBA gateway
> and would like to use the vfs_ceph module. In case of several file systems
> one needs to choose an mds namespace. There is an option in ceph.conf:
>
> cli
I would put that data on the ceph.com website. E.g. a performance/test
page for every release, compared to the previous release. Some
default fio tests like you now have in the spreadsheet. And maybe some
IO patterns that relate to real-world use cases like databases. Like e.g.
how these gu
Hello,
has anyone tried to update 15.2.4 on CentOS 7 to 15.2.5?
I did a full yum -y update on my first OSD node and after this no OSD
on this node wants to start anymore. No log is written, so I think the osd
process stops immediately.
Starting the osd daemon in the foreground shows that no tmpfs will b
Hi Lenz,
thanks for that, this should do. Please retain the copy until all is migrated :)
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Lenz Grimmer
Sent: 23 September 2020 10:55:13
To: ceph-users@ceph.io
Subje
I've just finished doing our own benchmarking, and I can say, you
want to do something very unbalanced and CPU bound.
1. Ceph consumes a LOT of CPU. My peak value was around 500% CPU per
ceph-osd at top performance (see the recent thread on 'ceph on brd')
with more realistic numbers around
On 9/23/20 8:05 AM, Marc Roos wrote:
I'm curious if you've tried octopus+ yet?
Why don't you publish results of your test cluster? You cannot expect
all new users to buy 4 servers with 40 disks, and try if the performance
is ok.
Get a basic cluster and start publishing results, and document ch
Update: setting "ceph fs set-default CEPH-FS-NAME" allows a kernel fs
mount without providing the mds_namespace mount option, but the vfs_ceph module
still fails with either
cephwrap_connect: [CEPH] Error return: Operation not permitted
or
cephwrap_connect: [CEPH] Error return: Opera
> https://docs.google.com/spreadsheets/d/1e5eTeHdZnSizoY6AUjH0knb4jTCW7KMU4RoryLX9EHQ/edit?usp=sharing
I see that in your tests Octopus delivers more than twice the iops with 1 OSD.
Can I ask you what's my problem then? :-)
I have a 4-node Ceph cluster with 14 NVMe drives and fast CPUs (Threadripper
> I'm curious if you've tried octopus+ yet?
Why don't you publish results of your test cluster? You cannot expect
all new users to buy 4 servers with 40 disks, and try if the performance
is ok.
Get a basic cluster and start publishing results, and document changes
to the test cluster.
Dear all,
maybe someone has experienced this before. We are setting up a SAMBA gateway
and would like to use the vfs_ceph module. In case of several file systems one
needs to choose an mds namespace. There is an option in ceph.conf:
client mds namespace = CEPH-FS-NAME
Unfortunately, it seems
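For context, a minimal sketch of the client-side pieces being discussed; the share name, the newer client_fs spelling, and the vfs_ceph options are assumptions based on this thread rather than a verified configuration:
# /etc/ceph/ceph.conf on the SAMBA gateway (hypothetical)
[client]
    client mds namespace = CEPH-FS-NAME    # older, deprecated option name
    # client fs = CEPH-FS-NAME             # newer name, if your release supports it

# smb.conf share using the vfs_ceph module (hypothetical share name)
[cephshare]
    vfs objects = ceph
    path = /
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba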
Hello, I got this error after I tried upgrade from ceph:v15.2.4 to ceph:v15.2.5
with dead osd.6
cluster:
id: 4e01640b-951b-4f75-8dca-0bad4faf1b11
health: HEALTH_ERR
Module 'cephadm' has failed: auth get failed: failed to find osd.6
in keyring retval: -2
In short I adde
Hi,
I don't think there's a way to remove WAL/DB without rebuilding the
OSD. ceph-bluestore-tool bluefs-bdev-migrate expects a target device
to migrate the data since it's a migration. I can't read the full
thread (I get a server error), what is the goal here?
Regards,
Eugen
Quoting M
On 9/23/20 5:41 AM, George Shuklin wrote:
I've just finished doing our own benchmarking, and I can say, you
want to do something very unbalanced and CPU bound.
1. Ceph consumes a LOT of CPU. My peak value was around 500% CPU per
ceph-osd at top performance (see the recent thread on 'ceph
Hi Andreas,
On 22.09.2020 at 22:35, Andreas John wrote:
and then removing the journal
enough?
any hints on how to remove the journal?
Regards,
Michael
Sounds like you just want to create 2 OSDs per drive? It's OK, everyone does
that :) I tested Ceph with 2 OSDs per SATA SSD when comparing it to my
Vitastor, Micron also tested Ceph with 2 OSDs per SSD in their PDF and so on.
> On 23/09/2020 10:54, Marc Roos wrote:
>
>>> Depends on your expecte
On 2020-09-23 07:39, Brent Kennedy wrote:
> We currently run a SSD cluster and HDD clusters and are looking at possibly
> creating a cluster for NVMe storage. For spinners and SSDs, it seemed the
> max recommended per osd host server was 16 OSDs ( I know it depends on the
> CPUs and RAM, like 1 cp
On 23/09/2020 04:09, Alexander E. Patrakov wrote:
Sometimes this doesn't help. For data recovery purposes, the most
helpful step if you get the "bluefs enospc" error is to add a separate
db device, like this:
systemctl disable --now ceph-osd@${OSDID}
truncate -s 32G /junk/osd.${OSDID}-recover/b
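The quoted commands are cut off above; purely as a hedged sketch of the general shape of such a step (paths and size are placeholders, and whether the DB file can be attached directly or needs a loop device first is not confirmed here):
# hedged sketch only: attach a temporary file-backed DB volume to a full OSD
systemctl disable --now ceph-osd@${OSDID}
mkdir -p /junk/osd.${OSDID}-recover                      # hypothetical scratch path
truncate -s 32G /junk/osd.${OSDID}-recover/block.db      # hypothetical file name
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSDID} \
    --dev-target /junk/osd.${OSDID}-recover/block.db \
    --command bluefs-bdev-new-db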
On 23/09/2020 10:54, Marc Roos wrote:
Depends on your expected load, not? I already read here numerous times
that OSDs cannot keep up with NVMes; that is why people put 2 OSDs
on a single NVMe. So on a busy node, you probably run out of cores? (But
better verify this with someone that ha
I've just finished doing our own benchmarking, and I can say, you want
to do something very unbalanced and CPU bound.
1. Ceph consumes a LOT of CPU. My peak value was around 500% CPU per
ceph-osd at top performance (see the recent thread on 'ceph on brd')
with more realistic numbers aroun
Good morning,
you might have seen my previous mails, and I wanted to discuss some
findings from the last day+night about what happened and why it happened
here.
As the system behaved inexplicably for us, we are now looking for
someone to analyse the root cause on a consultancy basis - if you are
in
Slow. https://yourcmc.ru/wiki/Ceph_performance :-)
> Hi,
>
> we're considering running KVM virtual machine images on Ceph RBD block
> devices. How does Ceph RBD perform with the synchronous writes of
> databases (MariaDB)?
>
> Best regards,
>
> Renne
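Database commits are mostly bound by single-request sync-write latency rather than parallel IOPS, so a queue-depth-1 fsync test against an RBD image gives a better hint of MariaDB behaviour than a q128 benchmark; a hedged sketch with placeholder pool/image names:
# single-threaded 4 KiB write test with an fsync after every write
fio --name=db-sync-write --ioengine=rbd --pool=rbdpool --rbdname=dbtest \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 --fsync=1 \
    --runtime=60 --time_based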
Thanks Marc :)
It's easier to write code than to cooperate :) I can do whatever I want in my
own project.
Ceph is rather complex. For example, I failed to find bottlenecks in OSD when I
tried to profile it - I'm not an expert of course, but still... The only
bottleneck I found was cephx_sign_m
Hi
> We currently run a SSD cluster and HDD clusters and are looking at possibly
> creating a cluster for NVMe storage. For spinners and SSDs, it seemed the
> max recommended per osd host server was 16 OSDs ( I know it depends on the
> CPUs and RAM, like 1 cpu core and 2GB memory ).
What do you
I love how it’s not possible to delete inodes yet. Data loss would be a thing
of the past!
Jokes aside, interesting project.
Sent from mobile
> On 23 Sep 2020 at 00:45, vita...@yourcmc.ru wrote the following:
>
> Hi!
>
> After almost a year of development in my spare time I present
Hi,
this has been discussed a couple of times [1]. Changing an EC profile
won't affect existing pools; only new pools created with this updated
profile will apply the device-class.
Make sure to provide all parameters for the profile update, not just
the device-class.
Regards,
Eugen
[1]
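A hedged example of such a profile update, reusing the parameters of the profile quoted later in this digest; the --force flag is required to overwrite an existing profile:
# re-create the profile with all of its parameters, adjusting only the device class
ceph osd erasure-code-profile set m_erasure \
    k=6 m=2 plugin=jerasure technique=reed_sol_van \
    crush-root=default crush-failure-domain=host \
    crush-device-class=hdd --force
# per the note above, existing pools are not affected; only newly created
# pools will pick up the updated profile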
I was wondering if switching to ceph-volume requires me to change the
default CentOS lvm.conf? E.g. the default has issue_discards = 0.
Also I wonder if trimming is the default on LVs on SSDs? I read
somewhere that the dmcrypt passthrough of trimming was still secure in
combination with a btr
Hi Brent,
> 1. If we do a jbod setup, the servers can hold 48 NVMes, if the servers
> were bought with 48 cores and 100+ GB of RAM, would this make sense?
Do you seriously mean 48 NVMes per server? How would you even come remotely
close to supporting them with connection (to board) and network
Hi,
we're considering running KVM virtual machine images on Ceph RBD block
devices. How does Ceph RBD perform with the synchronous writes of
databases (MariaDB)?
Best regards,
Renne
It loads successfully with LD_PRELOAD because, as I understand it, block_register() gets
called. QAPI and new QAPI-based block device syntax don't work though because
they're based on IDLs built into QEMU... QAPI will require patching, yeah. It
would be nicer if QAPI supported plugins too... :-)
--
With
Depends on your expected load, not? I already read here numerous times
that OSDs cannot keep up with NVMes; that is why people put 2 OSDs
on a single NVMe. So on a busy node, you probably run out of cores? (But
better verify this with someone that has an NVMe cluster ;))
-Original
Vitaliy you are crazy ;) But really cool work. Why not combine efforts
with ceph? Especially with something as important as SDS and PB's of
clients data stored on it, everyone with a little bit of brain chooses a
solution from a 'reliable' source. For me it was decisive to learn that
CERN an
Hi,
We are running a Nautilus cluster and have some old and new erasure
code profiles. For example:
# ceph osd erasure-code-profile get m_erasure
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=6
m=2
plugin=jerasure
technique=reed_sol_va