ust/028719.html
[2]
https://docs.ceph.com/en/latest/cephfs/eviction/#advanced-un-blocklisting-a-client
Quoting Marc Roos:
> Is there not some genius out there that can shed a light on this? ;)
> Currently I am not able to reproduce this. Thus it would be nice to
> have some procedure at hand
Is there not some genius out there that can shed a light on this? ;)
Currently I am not able to reproduce this. Thus it would be nice to have
some procedure at hand that resolves stale cephfs mounts nicely.
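For reference, a minimal sketch of the procedure I have in mind, based on the
eviction doc in [2] (client id, address and mount point are just examples; on
nautilus the command is still 'blacklist' instead of 'blocklist'):

ceph tell mds.0 client ls                      # find the stale session and its client id
ceph tell mds.0 client evict id=12345          # evict it on the mds side
ceph osd blocklist ls                          # the evicted client address shows up here
ceph osd blocklist rm 192.168.1.10:0/123456    # un-blocklist so the client may reconnect
umount -f /mnt/cephfs && mount /mnt/cephfs     # on the client, force unmount and remount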
-Original Message-
To: ceph-users
Subject: [ceph-users] kvm vm cephfs mount ha
I am not really familiar with spark, but I see it often used in
combination with mesos. They recently implemented a csi solution that
should enable access to ceph. I have been trying to get this to work[1]
I assume being able to scale tasks with distributed block devices or the
cephfs would
I can live-migrate the vm in this locked up state to a different host
without any problems.
-Original Message-
To: ceph-users
Subject: [ceph-users] kvm vm cephfs mount hangs on osd node (something
like umount -l available?) (help wanted going to production)
I have a vm on an osd node
Just got this during a bonnie++ test, trying to do an ls -l on the cephfs. I
also have this kworker process constantly at 40% when doing this
bonnie++ test.
[35281.101763] INFO: task bash:1169 blocked for more than 120 seconds.
[35281.102064] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
di
I have a vm on an osd node (which can reach the host and other nodes via the
macvtap interface (used by the host and guest)). I just did a simple
bonnie++ test and everything seems to be fine. Yesterday however the
dovecot process apparently caused problems (only using cephfs for an
archive names
How to recover from this? Is it possible to have a vm with a cephfs
mount on an osd server?
host1:/etc/ceph # touch /mnt/file2
host1:/etc/ceph # ls -l /mnt/
total 0
-rw-r--r-- 1 root root 0 21. Dez 10:14 file2
Quoting Marc Roos:
> Is there a command to update a client with a new generated key?
> Something like:
Is there a command to update a client with a new generated key?
Something like:
ceph auth new-key client.rbd
Could be useful if you accidentally did a ceph auth ls, because that
still displays keys ;)
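As far as I know there is no such new-key subcommand (at least not on
nautilus); the closest thing I am aware of is deleting and recreating the
client with the same caps, e.g. (the caps below are only an example):

ceph auth get client.rbd                       # note the current caps
ceph auth del client.rbd
ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=rbd' \
    -o /etc/ceph/ceph.client.rbd.keyring       # a new key is generated here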
Is this cephcsi plugin under control of redhat?
Just run the tool from a client that is not part of the ceph nodes. Then
it can do nothing that you did not configure ceph to allow it to do ;)
Besides you should never run software from 'unknown' sources in an
environment where it can use 'admin' rights.
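For example, something like this (client name and pool are hypothetical) gives
a tool read/write on a single pool and nothing more:

ceph auth get-or-create client.csi-test mon 'profile rbd' osd 'profile rbd pool=testpool' \
    -o /etc/ceph/ceph.client.csi-test.keyring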
-Original Message-
To: ce
t; dhils...@performair.com
> www.PerformAir.com
>
> -Original Message-
> From: Marc Roos [mailto:m.r...@f1-outsourcing.eu]
> Sent: Tuesday, December 8, 2020 2:02 PM
> To: ceph-users; Dominic Hilsbos
> Cc: aKrishna
> Subject: [ceph-users] Re: CentOS
>
>
> I did not
I did not. Thanks for the info. But if I understand this[1] explanation
correctly, CentOS Stream is some sort of trial environment for RHEL. So
who is ever going to put SDS on such an OS?
Last post on this blog "But if you read the FAQ, you also learn that
once they start work on RHEL 9, Cent
Yes, I use virtio-scsi everywhere (via kvm with discard='unmap'). 'lsblk
--discard' also shows discard is supported. vm's with an xfs filesystem
seem to behave better.
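For reference, the relevant part of my domain xml looks roughly like this (a
sketch, device and image names are examples; auth and monitor host elements
omitted). The discard='unmap' on the driver plus the virtio-scsi bus is what
lets fstrim in the guest reach the rbd image:

<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source protocol='rbd' name='rbd/vm-disk'/>
  <target dev='sda' bus='scsi'/>
</disk>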
-Original Message-
Cc: lordcirth; ceph-users
Subject: Re: [ceph-users] Re: guest fstrim not showing free space
What driver
nt: Monday, December 07, 2020 3:58 PM
Cc: ceph-users
Subject: Re: [ceph-users] guest fstrim not showing free space
Is the VM's / ext4?
On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos,
wrote:
I have a 74GB vm with 34466MB free space. But when I do fstrim /
'rbd
du
Yes! Indeed old one, with ext4 still.
-Original Message-
Sent: Monday, December 07, 2020 3:58 PM
Cc: ceph-users
Subject: Re: [ceph-users] guest fstrim not showing free space
Is the VM's / ext4?
On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos,
wrote:
I have a 74GB vm
Marc Roos :
> I have a 74GB vm with 34466MB free space. But when I do fstrim / 'rbd
> du' shows still 60GB used.
> When I fill the 34GB of space with an image, delete it and do again
> the fstrim 'rbd du' still shows 59GB used.
>
> Is this normal? Or sh
I have a 74GB vm with 34466MB free space. But when I do fstrim / 'rbd
du' shows still 60GB used.
When I fill the 34GB of space with an image, delete it, and run fstrim
again, 'rbd du' still shows 59GB used.
Is this normal? Or should I be able to get it to ~30GB used?
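For completeness, what I am checking (pool/image names are examples):

lsblk --discard        # in the guest: non-zero DISC-GRAN/DISC-MAX means discard reaches the disk
fstrim -v /            # should report the number of trimmed bytes
rbd du rbd/vm-disk     # on a ceph client: compare used size before/after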
nsson [mailto:icepic...@gmail.com]
Sent: Monday, November 30, 2020 12:43 PM
To: Marc Roos
Cc: ceph-users
Subject: *SPAM* Re: [ceph-users] rbd image backup best practice
On Fri, Nov 27, 2020 at 23:21, Marc Roos wrote:
Is there a best practice or guide for backuping rbd images?
Is there a best practice or guide for backuping rbd images?
How does ARM compare to Xeon in latency and cluster utilization?
-Original Message-; ceph-users
Subject: [ceph-users] Re: Ceph on ARM ?
Indeed it does run very happily on ARM. We have three of the Mars 400
appliances from Ambedded and they work exceedingly well. 8 micro servers
pe
2nd that. Why even remove old documentation before it is migrated to the
new environment? It should be left online until the migration has
successfully completed.
-Original Message-
Sent: Tuesday, November 24, 2020 4:23 PM
To: Frank Schilder
Cc: ceph-users
Subject: [ceph-users] Re: Docu
I have been advocating for a long time for publishing test data of some
basic test cluster against different ceph releases. Just a basic ceph
cluster that covers most configs and runs the same tests, so you can
compare just ceph performance. That would mean a lot for smaller
companies that do
Really? This is the first time I read this here; afaik you can get a split brain
like this.
-Original Message-
Sent: Thursday, October 29, 2020 12:16 AM
To: Eugen Block
Cc: ceph-users
Subject: [ceph-users] Re: frequent Monitor down
Eugen, I've got four physical servers and I've installed mon on a
un octopus via RPMs on el7 without the
cephadm and containers orchestration, then the answer is yes.
-- dan
On Fri, Oct 23, 2020 at 9:47 AM Marc Roos
wrote:
>
>
> No clarity on this?
>
> -Original Message-
> To: ceph-users
> Subject: [ceph-users] ceph octopus centos7,
No clarity on this?
-Original Message-
To: ceph-users
Subject: [ceph-users] ceph octopus centos7, containers, cephadm
I am running Nautilus on centos7. Does octopus run similarly to nautilus,
i.e.:
- runs on el7/centos7
- runs without containers by default
- runs without cephadm by defa
I am running Nautilus on centos7. Does octopus run similarly to nautilus,
i.e.:
- runs on el7/centos7
- runs without containers by default
- runs without cephadm by default
I wanted to create a few stateful containers with mysql/postgres that
do not depend on local persistent storage, so I can dynamically move
them around. What about using:
- a 1x replicated pool and use rbd mirror,
- or having postgres use 2 1x replicated pools
- or upon task launch create an
> In the past I see some good results (benchmark & latencies) for MySQL
> and PostgreSQL. However, I've always used 4MB object size. Maybe I can
> get much better performance on smaller object size. Haven't tried actually.
Did you tune mysql / postgres for this setup? Did you have a default
ce
tps://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
On 10/15/20 2:18 PM, Marc Roos wrote:
>
> I enabled a certificate on my radosgw, but I think I am running into
the
> problem that the s3 clients are accessing the buckets like
> bucket.rgw
I enabled a certificate on my radosgw, but I think I am running into the
problem that the s3 clients are accessing the buckets like
bucket.rgw.domain.com, which fails my cert for rgw.domain.com.
Is there any way to configure that only rgw.domain.com is being used?
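As far as I know the options are either a wildcard certificate for
*.rgw.domain.com, or forcing the clients to path-style access. A sketch of the
latter for s3cmd (.s3cfg, not verified for every client):

host_base = rgw.domain.com
host_bucket = rgw.domain.com     # no %(bucket)s -> s3cmd uses path-style requests

and on the gateway side rgw_dns_name = rgw.domain.com so the path-style
requests resolve to the right endpoint.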
Is it possible to disable the check on 'x pool(s) have no replicas
configured', so I don't have this HEALTH_WARN constantly?
Or is there some other disadvantage of keeping some empty 1x replication
test pools?
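If I am not mistaken this specific warning can be switched off with a mon
setting (I have not checked in which release it appeared):

ceph config set global mon_warn_on_pool_no_redundancy false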
>1. The pg log contains 3000 entries by default (on nautilus). These
>3000 entries can legitimately consume gigabytes of ram for some
>use-cases. (I haven't determined exactly which ops triggered this
>today).
How can I check how much ram my pg_logs are using?
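A sketch of how I would check it (run on the osd host; the osd id and jq path
are examples from memory):

ceph daemon osd.7 dump_mempools | jq '.mempool.by_pool.osd_pglog'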
-Original Message
Ok thanks Dan for letting me know.
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] el6 / centos6 rpm's for luminous?
We had built some rpms locally for ceph-fuse, but AFAIR luminous needs
systemd so the server rpms would be difficult.
-- dan
>
>
> Nobody ever used lumino
Nobody ever used luminous on el6?
I honestly do not get what the problem is. Just yum remove the rpms, dd
your osd drives, and if there is something left in /var/lib/ceph or /etc/ceph,
rm -rf those. Do a find / -iname "*ceph*" to see if there is still
something there.
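Roughly something like this (device name is an example, double check you are
wiping the right disks):

systemctl stop ceph.target
yum remove -y 'ceph*'
wipefs -a /dev/sdX                    # or dd the first part of the disk
rm -rf /var/lib/ceph /etc/ceph
find / -iname "*ceph*"                # check for leftovers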
-Original Message-
To: Samuel Taylor Liston
Cc: ceph-us
Normally I would install ceph-common.rpm and access some rbd image via
rbdmap. What would be the best way to do this on an old el6? There is
not even a luminous el6 on download.ceph.com.
pg_num and pgp_num need to be the same, no?
3.5.1. Set the Number of PGs
To set the number of placement groups in a pool, you must specify the
number of placement groups at the time you create the pool. See Create a
Pool for details. Once you set placement groups for a pool, you can
increase
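I.e. after changing pg_num you set pgp_num to the same value, something like
(pool name and count are examples):

ceph osd pool set mypool pg_num 128
ceph osd pool set mypool pgp_num 128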
ve unclean PGs/Pools.
Cheers, dan
On Fri, Oct 2, 2020 at 4:14 PM Marc Roos
wrote:
>
>
> Does this also count if your cluster is not healthy because of errors
> like '2 pool(s) have no replicas configured'
> I sometime
I think I do not understand you completely. How long does a live
migration take? If I do virsh migrate with vm's on librbd it is a few
seconds. I guess this is mainly caused by copying the ram to the other
host.
Any extra time this takes in case of a host failure is related to
timeout sett
Does this also count if your cluster is not healthy because of errors
like '2 pool(s) have no replicas configured'
I sometimes use these pools for testing, they are empty.
-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: Massive Mon DB Size with noout on 14.2.11
As long a
If such a 'simple' tool as ceph-volume is not working properly, how can I
trust cephadm to be good? Maybe ceph development should rethink trying
to pump out new releases quickly, and take a bit more time for testing.
I am already sticking to the oldest supported version just because of
this.
video" at the bottom
On Thu, Oct 1, 2020 at 1:10 PM Marc Roos
wrote:
Mike,
Can you allow access without mic and cam?
Thanks,
Marc
-Original Message-
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] Ceph Tech Talk: Karan Singh - Scale
Testing Ceph with 10Billion+ Objects
Hey all,
We're live now with the latest Ceph tech talk! Join us:
> Did you have any success with `ceph-volume` for activating your OSD?
No, I have tried with ceph-volume prepare and ceph-volume activate, but
got errors as well. The only way for me to currently create an osd without
hassle is:
ceph-volume lvm zap --destroy /dev/sdf &&
ceph-volume lvm create
I have been creating lvm osd's with:
ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create --data
/dev/sdf --dmcrypt
Because this procedure failed:
ceph-volume lvm zap --destroy /dev/sdf
(waiting on slow human typing)
ceph-volume lvm create --data /dev/sdf --dmcrypt
However when I
I am not sure, but it looks like this remapping at hdd's is not being
done when adding back the same ssd osd.
Thanks!
-Original Message-
To: Janne Johansson; Marc Roos
Cc: ceph-devel; ceph-users
Subject: Re: [ceph-users] Re: Understanding what ceph-volume does, with
bootstrap-osd/ceph.keyring, tmpfs
The key is stored in the ceph cluster config db. It can be retrieved by
KEY=`/usr/bin/ceph
"op": "emit"
}
]
}
[@ceph]# ceph osd crush rule dump replicated_ruleset_ssd
{
"rule_id": 5,
"rule_name": "replicated_ruleset_ssd",
"ruleset": 5,
"type": 1,
"min_size": 1,
"max_
ess I have missed this in
the release notes or so.
[1]
https://pastebin.com/PFx0V3S7
-Original Message-
To: Eugen Block
Cc: Marc Roos; ceph-users
Subject: Re: [ceph-users] Re: hdd pg's migrating when converting ssd
class osd's
This is how my crush tree including shadow h
Yes, correct, this is coming from Luminous or maybe even Kraken. What does
a default crush tree look like in mimic or octopus? Or is there some
manual on how to bring this to the new 'default'?
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] Re: hdd pg's migrating when converti
Yes, correct, hosts indeed have both ssd's and hdd's combined. Is this
not more of a bug then? I would assume the goal of using device classes
is that you separate these and one does not affect the other; even the
host weights of the ssd and hdd class are already available. The
algorithm shoul
It looks like I have fewer problems doing the zap and create quickly
after each other.
ceph-volume lvm zap --destroy /dev/sdj && ceph-volume lvm create --data
/dev/sdj --dmcrypt
-Original Message-
To: ceph-users
Subject: [ceph-users] Keep having ceph-volume create fail
I have no id
I have practically a default setup. If I do a 'ceph osd crush tree
--show-shadow' I have a listing like this[1]. I would assume from the
hosts being listed within the default~ssd and default~hdd, they are
separate (enough)?
[1]
root default~ssd
host c01~ssd
..
..
host c02~ssd
..
ro
I have no idea why ceph-volume keeps failing so much. I keep zapping and
creating and all of a sudden it works. I do not have pvs or links left
in /dev/mapper; I am checking that with lsblk, dmsetup ls --tree and
ceph-volume inventory.
This is the stdout/err I am getting every time ceph-
I also did some testing, but was more surprised by how much cputime the
kworker and dmcrypt-write(?) instances are taking. Is there some way to get
fio output in realtime to influx or prometheus so you can view it together
with the load?
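What I had in mind was something like this, and then have telegraf/influx pick
up the periodic json snapshots (a sketch; paths and job options are examples,
not tested):

fio --name=4ktest --directory=/mnt/test --size=1G --rw=randwrite --bs=4k --iodepth=1 \
    --runtime=180 --time_based --status-interval=1 --output-format=json >> fio-live.json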
-Original Message-
From: t...@postix.net [mailto:t...@p
I have been converting ssd osd's to dmcrypt, and I have noticed that
pg's of pools that should be (and are?) on the hdd class are migrated.
On a healthy cluster, when I set the crush reweight of an ssd osd to
0.0, I am getting this:
17.35 10415 00 9907
When I add an osd, rebalancing takes place; let's say ceph relocates
40 pg's.
Now suppose I add another osd during rebalancing, when ceph has only relocated
10 pgs and still has 30 pgs to do.
What happens then:
1. Does ceph just finish the relocation of these 30 pgs and then
calculate how the
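(Side note: in practice I could of course avoid the situation by setting the
norebalance flag while adding several osds and unsetting it afterwards, e.g.:

ceph osd set norebalance
# add / activate the new osds
ceph osd unset norebalance

but I am still curious what the crush calculation does in the scenario above.)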
https://www.storagereview.com/review/hgst-4tb-deskstar-nas-hdd-review
-Original Message-
Subject: Re: [ceph-users] Re: NVMe's
On 9/23/20 8:05 AM, Marc Roos wrote:
>> I'm curious if you've tried octopus+ yet?
> Why don't you publish results of your test cluster? You can
> I'm curious if you've tried octopus+ yet?
Why don't you publish results of your test cluster? You cannot expect
all new users to buy 4 servers with 40 disks and try whether the performance
is ok.
Get a basic cluster and start publishing results, and document changes
to the test cluster.
I was wondering if switching to ceph-volume requires me to change the
default centos lvm.conf? E.g. the default has issue_discards = 0.
Also I wonder if trimming is the default on lvm's on ssds? I read
somewhere that the dmcrypt passthrough of trimming was still secure in
combination with a btr
Depends on your expected load, no? I have already read here numerous times
that osd's cannot keep up with nvme's; that is why people put 2 osd's
on a single nvme. So on a busy node you probably run out of cores? (But
better verify this with someone that has an nvme cluster ;))
-Original
Vitaliy you are crazy ;) But really cool work. Why not combine efforts
with ceph? Especially with something as important as SDS and PB's of
clients' data stored on it, everyone with a little bit of a brain chooses a
solution from a 'reliable' source. For me it was decisive to learn that
CERN an
[mailto:respo...@ifastnet.com]
Cc: Janne Johansson; Marc Roos; ceph-devel; ceph-users
Subject: Re: [ceph-users] Re: Understanding what ceph-volume does, with
bootstrap-osd/ceph.keyring, tmpfs
Tbh ceph caused us more problems than it tried to fix; ymmv, good luck.
> On 22 Sep 2020, at 13:04
ceph device ls
-Original Message-
To: ceph-users
Subject: [ceph-users] one-liner getting block device from mounted osd
I have an optimize script that I run after the reboot of a ceph node. It
sets among other things /sys/block/sdg/queue/read_ahead_kb and
/sys/block/sdg/queue/nr_reques
I have an optimize script that I run after the reboot of a ceph node. It
sets among other things /sys/block/sdg/queue/read_ahead_kb and
/sys/block/sdg/queue/nr_requests of block devices being used for osd's.
Normally I use the mount command to discover these, but with the
tmpfs and ceph-vo
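(In the end something along these lines seems usable instead of the mount
command; the json structure is from memory, so treat it as a sketch:)

ceph-volume lvm list --format json | jq -r '.[][].devices[]'    # local osd -> block device(s)
ceph device ls                                                  # cluster-wide device/daemon listing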
also
https://docs.ceph.com/docs/mimic/rados/configuration/ceph-conf/
-Original Message-
From: Marc Roos
Sent: Sunday, September 20, 2020 15:36
To: ceph-users
Subject: [ceph-users] ceph docs redirect not good
https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd
When I create a new encrypted osd with ceph-volume[1],
I assume something like this is being done; please correct what is
wrong.
- it creates the pv on the block device
- it creates the ceph vg on the block device
- it creates the osd lv in the vg
- it uses cryptsetup to encrypt this lv
(or
I tested something in the past[1] where I noticed that an osd
saturated a bond link and did not use the available 2nd one. I think I
may have made a mistake in writing down that it was a 1x replicated pool.
However it has been written here multiple times that these osd processes
are single thre
Thanks Oliver, useful checks!
-Original Message-
To: ceph-users
Subject: Re: [ceph-users] ceph-volume lvm cannot zap???
Hi,
we have also seen such cases, it seems that sometimes (when the
controller / device is broken in special ways), device mapper keeps the
volume locked.
You ca
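A sketch of what I understand the suggestion to be (the mapping name is an
example, take the real one from the dmsetup output):

dmsetup ls --tree                        # look for a leftover ceph--... crypt/lv mapping
dmsetup remove ceph--xxxx-osd--block--xxxx
ceph-volume lvm zap --destroy /dev/sdi   # then zap again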
https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd/
- pat yourself on the back for choosing ceph, there are a lot of
experts (not including me :)) here willing to help (during office hours)
- decide what you would like to use ceph for, and how much storage you need.
- Running just an osd on a server does not have that many implications, so
you could rethink y
[@]# ceph-volume lvm activate 36 82b94115-4dfb-4ed0-8801-def59a432b0a
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-36
Running command: /usr/bin/ceph-authtool
/var/lib/ceph/osd/ceph-36/lockbox.keyring --create-keyring --name
client.osd-lockbox.82b94115-4dfb-4ed0-8801-d
[@~]# ceph-volume lvm zap /dev/sdi
--> Zapping: /dev/sdi
--> --destroy was not specified, but zapping a whole device will remove
the partition table
stderr: wipefs: error: /dev/sdi: probing initialization failed: Device
or resource busy
--> failed to wipefs device, will try again to workaroun
I still have ceph-disk created osd's in nautilus. I thought about using
ceph-volume, but it looks like this manual for replacing ceph-disk[1]
is not complete. I am already getting this error:
RuntimeError: Unable check if OSD id exists:
[1]
https://docs.ceph.com/en/latest/rados/operations/add-or-r
[mailto:lgrim...@suse.com]
Sent: Thursday, September 17, 2020 11:04
To: ceph-users; dev
Subject: [ceph-users] Re: Migration to ceph.readthedocs.io underway
Hi Marc,
On 9/16/20 7:30 PM, Marc Roos wrote:
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
C
- In the future you will not be able to read the docs if you have an
adblocker(?)
-Original Message-
To: dev; ceph-users
Cc: Kefu Chai
Subject: [ceph-users] Migration to ceph.readthedocs.io underway
Hi everyone,
We are in the process of migrating from docs.ceph.com to
ceph.readth
the hint is not correctly set to do the passive compression.
- or the passive compression is not working when this hint is set.
Thanks,
Marc
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] ceph rbox test on passive compressed pool
On 09/11 09:36, Marc Roos wrote:
>
> Hi Da
I did the same, 1 or 2 years ago, creating a replicated_ruleset_hdd and
a replicated_ruleset_ssd. Even though I did not have any ssd's on any of
the nodes at that time, adding this hdd type criterion made pg's migrate.
I thought it was strange that this happens on an hdd-only cluster, so I
mention
> mail/b875f40571f1545ff43052412a8e mtime 2020-09-06
> 16:25:53.00,
> size 63580
> mail/e87c120b19f1545ff43052412a8e mtime 2020-09-06
> 16:24:25.00,
> size 525
Hi David, How is this going? To me this looks more like deduplication
than compression. This
I also have these mounts with bluestore:
/dev/sde1 on /var/lib/ceph/osd/ceph-32 type xfs
(rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs
(rw,relatime,attr2,inode64,noquota)
/dev/sdc1 on /var/lib/ceph/osd/ceph-6 type xfs
(rw,relatime,attr2,inode64,noquota)
/d
xxx",
"size": 3000487051264,
"btime": "2017-07-14 14:45:59.212792",
"description": "main",
"require_osd_release": "14"
}
}
-Original Message-
Cc: ceph-users
Subject: Re: [ceph
0 15:59:01 BST, Marc Roos
wrote:
I have been inserting 10790 exactly the same 64kb text message to a
passive compressing enabled pool. I am still counting, but it looks
like
only half the objects are compressed.
Hi George,
Very interesting and also a somewhat expected result. Some messages posted
here are already indicating that getting expensive top of the line
hardware does not really result in any performance increase above some
level. Vitaliy has documented something similar[1]
[1]
https://yourcm
Do know that this is the only mailing list I am subscribed to that
sends me so much spam. Maybe the list admin should finally have a word
with the other list admins on how they are managing their lists
Hi David,
I suppose it is this part
https://github.com/ceph-dovecot/dovecot-ceph-plugin/tree/master/src/storage-rbox
-Original Message-
To: ceph-users@ceph.io;
Subject: Re: [ceph-users] ceph rbox test on passive compressed pool
The hints have to be given from the client side as far as
I have been inserting 10790 copies of exactly the same 64kb text message
into a passive-compression enabled pool. I am still counting, but it looks
like only half the objects are compressed.
mail/b08c3218dbf1545ff43052412a8e mtime 2020-09-06 16:27:39.00,
size 63580
mail/00f6043775f1545ff43
s where those
pool-specific compression settings weren't applied correctly anyway, so
I'm not sure they even work yet in 14.2.9.
-- dan
On Sat, Sep 5, 2020 at 6:12 PM Marc Roos
wrote:
>
>
> I am still running 14.2.9 with lz4-1.7.5-3. Will I run into this bug
> enabling c
I am still running 14.2.9 with lz4-1.7.5-3. Will I run into this bug
enabling compression on a pool with:
ceph osd pool set POOL_NAME compression_algorithm COMPRESSION_ALGORITHM
ceph osd pool set POOL_NAME compression_mode COMPRESSION_MODE
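And afterwards I would check whether it actually compresses anything via the
pool setting and the bluestore perf counters, something like:

ceph osd pool get POOL_NAME compression_mode
ceph daemon osd.0 perf dump | grep -i compress     # bluestore_compressed_* counters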
:) this is just native disk performance with a regular sata adapter,
nothing fancy; on the ceph hosts I have the SAS2308.
-Original Message-
Cc: 'ceph-users'
Subject: AW: [ceph-users] Re: Can 16 server grade ssd's be slower then
60 hdds? (no extra journals)
Wow, 34K IOPS at 4k iodepth 1 😊
H
write-4k-seq: (groupid=0, jobs=1): err= 0: pid=11017: Tue Sep 1 20:58:43 2020
write: IOPS=34.4k, BW=134MiB/s (141MB/s)(23.6GiB/180001msec)
slat (nsec): min=3964, max=124499, avg=4432.71, stdev=911.13
clat (nsec): min=470, max=435529, avg=23528.70, stdev=2553.67
lat (usec): min=
Sorry, I am not fully aware of what has already been discussed in this
thread. But can't you flash these LSI Logic cards to JBOD? I have done
this with my 9207 using sas2flash.
I have attached my fio test of the Micron 5100 Pro/5200 SSDs
MTFDDAK1T9TCC. They perform similarly to my Samsung sm863a 1
>octopus 15.2.4
>
>just as a test, I put my OSDs each inside of a LXD container. Set up
>cephFS and mounted it inside a LXD container and it works.
Thanks for making such an effort! I am a little bit new to stateful
containers, but I am getting the impression it is mostly by design
storag
This is what I mean; this guy is just posting all his keys.
https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg26140.html
-Original Message-
To: ceph-users
Subject: [ceph-users] ceph auth ls
Am I the only one that thinks it is not necessary to dump these keys
with every command
I am getting this; on an osd node I am able to mount the path.
adding ceph secret key to kernel failed: Operation not permitted
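For comparison, the mount syntax I am using elsewhere (names and paths are
examples; the secretfile must contain only the base64 key, not the whole
keyring):

ceph fs authorize cephfs client.archive / rw > /etc/ceph/ceph.client.archive.keyring
ceph auth get-key client.archive > /root/archive.secret
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=archive,secretfile=/root/archive.secret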
Am I the only one that thinks it is not necessary to dump these keys
with every command (ls and get)? Either remove these keys from auth ls
and auth get, or remove the commands "auth print_key", "auth print-key"
and "auth get-key".
Can someone shed some light on this? Because it is the difference
between running multiple instances of one task and running multiple
different tasks.
-Original Message-
To: ceph-users
Subject: [ceph-users] radowsgw still needs dedicated clientid?
I think I can remember reading somewhere t
>>
>>
>> I was wondering if anyone is using ceph csi plugins[1]? I would like
to
>> know how to configure credentials, that is not really described for
>> testing on the console.
>>
>> I am running
>> ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock
--type
>> rbd --drivern
I was wondering if anyone is using the ceph csi plugins[1]? I would like to
know how to configure credentials; that is not really described for
testing on the console.
I am running
./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock --type
rbd --drivername rbd.csi.ceph.com --nodeid