Hi,
the Samsung PM1725b is definitely a good choice when it comes to "lower"
price enterprise SSDs. They cost pretty much the same as the Samsung Pro
SSDs but offer way higher DWPD and power loss protection.
My benchmarks of the 3.2TB version in a PCIe 2.0 slot (the card itself is PCIe 3.0!):
fio --filen
Hi, sorry for interjecting, but please also try the first test with --fsync=1;
NVMe drives sometimes ignore --sync=1 (BlueStore uses fsync).
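For illustration, a 4K sync-write run along those lines might look like this (the device path and runtime are placeholders, not taken from the original post):

fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --fsync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fsync-4k-write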
For vMotion speed, check the "emulate_3pc" attribute on the LIO target. If it is 0
(the default), VMware will issue I/O in 64KB blocks, which gives low speed. If it is
set to 1, VMware will use VAAI extended copy, which activates LIO's xcopy
functionality and uses 512KB block sizes by default. We a
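For reference, a hedged sketch of how that attribute can be toggled on a kernel LIO backstore (the backstore type and disk name below are placeholders):

# via configfs (block backstores show up as iblock_N)
echo 1 > /sys/kernel/config/target/core/iblock_0/mydisk/attrib/emulate_3pc
# or from inside the backstore object in targetcli
/backstores/block/mydisk> set attribute emulate_3pc=1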
Actually this may not work if moving from a local datastore to Ceph. For
iSCSI xcopy, both the source and the destination need to be accessible by
the target, such as when moving VMs across Ceph datastores. So in your
case, vMotion will be handled by the VMware data mover, which uses 64K block
sizes.
Can anybody take a look at this? Why are header prefixes other than
'x-amz-' checked and used to sign S3 requests? This may be a use case for
Swift requests, but the same code flow is also used for S3 requests, and
that results in a signature mismatch between client and server.
Rega
Disabling write cache helps with the 970 Pro, but it still sucks. I've
worked on a setup with heavy metadata requirements (gigantic S3
buckets being listed) that unfortunately had all of that stored on 970
Pros, and that never really worked out.
Just get a proper SSD like the 883, 983, or 1725. The
Hi,
On Thu, Oct 24, 2019 at 20:16, Mike Christie
wrote:
> On 10/24/2019 12:22 PM, Ryan wrote:
> > I'm in the process of testing the iscsi target feature of ceph. The
> > cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
>
> What kernel are you using?
>
> > hosts with 12
Not related to the original topic, but the Micron case in that article is
fascinating and a little surprising.
With pretty much best-in-class hardware in a lab environment:
Potential 25,899,072 4KiB random write IOPS drops to 477K
Potential 23,826,216 4KiB random read IOPS drops to 2,000,000
477K
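Put differently (rough arithmetic from those figures): 477K / 25,899,072 is roughly 1.8% of the raw random-write potential, and 2,000,000 / 23,826,216 is roughly 8.4% of the raw random-read potential.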
On 25/10/2019 02:38, Oliver Freyermuth wrote:
> Also, if there's an expert on this: Exposing a bucket under a tenant as
> static website is not possible since the colon (:) can't be encoded in DNS,
> right?
There are certainly much better-qualified radosgw experts than I am, but
as I understand
Hi Oliver,
In order to run `s3cmd ws-create`, you need to run it against an RGW that
has the following settings:
rgw_enable_static_website = true
rgw_enable_apis = s3, s3website
You can choose to do so temporarily if you only need to apply that config once,
or leave it running indefinitely
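For what it's worth, a minimal sketch of how that could look (the RGW instance name and bucket are made-up examples):

[client.rgw.gateway1]
rgw_enable_static_website = true
rgw_enable_apis = s3, s3website

and then, pointed at that RGW endpoint, something like:

s3cmd ws-create --ws-index=index.html --ws-error=error.html s3://mybucket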
10G should be fine on BlueStore. The smallest size you can have is about 2GB,
since LVM takes up about 1GB of space, so at that point most of
the disk is taken up by LVM. I have seen/recorded performance benefits in
some cases when using small OSD sizes on BlueStore instead of lar
I had to remove the disk from the target host in gwcli to
change max_data_area_mb. So the disk would need to be detached. For
cmdsn_depth I was able to change it live.
On Fri, Oct 25, 2019 at 7:50 AM Gesiel Galvão Bernardes <
gesiel.bernar...@gmail.com> wrote:
> Hi,
>
> On Thu, Oct 24, 2019
I'm not seeing the emulate_3pc setting under disks/rbd/diskname when
calling info. A Google search shows that SUSE Enterprise Storage has it
available. I thought I had the latest packages, but maybe not. I'm using
tcmu-runner 1.5.2 and ceph-iscsi 3.3. Almost all of my VMs are currently on
Nimble iS
On 10/25/2019 09:31 AM, Ryan wrote:
> I'm not seeing the emulate_3pc setting under disks/rbd/diskname when
emulate_3pc only applies to kernel-based backends. tcmu-runner always has
xcopy on.
> calling info. A google search shows that SUSE Enterprise Storage has it
> available. I thought I had the lat
On 10/24/2019 11:47 PM, Ryan wrote:
> I'm using CentOS 7.7.1908 with kernel 3.10.0-1062.1.2.el7.x86_64. The
> workload was a VMware Storage Motion from a local SSD backed datastore
Ignore my comments. I thought you were just doing fio like tests in the vm.
> to the ceph backed datastore. Performa
What's your Ceph version? Have you verified whether the problem can be
reproduced on the master branch?
On 08:33 Fri 25 Oct, Mason-Williams, Gabryel (DLSLtd,RAL,LSCI) wrote:
>I am currently trying to run Ceph on RDMA, either RoCE 1 or 2. However,
>I am experiencing issues with this.
>
>
Greetings,
I am running a mimic cluster. I noticed that I suddenly have over 200
rbd images that have seemingly random 64 character names. They were
all created within a short time period. rbd info on one of the rbds
looks like this:
rbd image 'ff8f0ad0b5323fc3a8e930d96d3021d4c4dfc18c62ef9a1305b4
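In case it helps, a quick way to enumerate images with that naming pattern (the pool name 'rbd' below is an assumption):

rbd ls rbd | grep -E '^[0-9a-f]{64}$'
rbd info rbd/<image-name>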
On 10/25/19 7:14 PM, Randall Smith wrote:
> Greetings,
>
> I am running a mimic cluster. I noticed that I suddenly have over 200
> rbd images that have seemingly random 64 character names. They were
> all created within a short time period. rbd info on one of the rbds
> looks like this:
>
> rb
Can you point me to the directions for the kernel-mode iSCSI backend? I was
following these directions:
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
Thanks,
Ryan
On Fri, Oct 25, 2019 at 11:29 AM Mike Christie wrote:
> On 10/25/2019 09:31 AM, Ryan wrote:
> > I'm not seeing the emulate
Just to clarify, it is better to separate the different performance cases:
1. Regular I/O performance (IOPS / throughput): this should be good.
2. vMotion within datastores managed by Ceph: this will be good, as
xcopy will be used.
3. vMotion between a Ceph datastore and an external datastore.
esxtop is showing a queue length of 0
Storage motion to Ceph:
DEVICE PATH/WORLD/PARTITION DQLEN WQLEN ACTV QUED %USD LOAD CMDS/s READS/s WRITES/s MBREAD/s MBWRTN/s DAVG/cmd KAVG/cmd GAVG/cmd QAVG/cmd
naa.6001405ec60d8b82342404d929fbbd03 - 128
All;
We're setting up our second cluster, using version 14.2.4, and we've run into a
weird issue: all of our OSDs are created with a size of 0 B. Weights are
appropriate for the size of the underlying drives, but ceph -s shows this:
cluster:
id:
health: HEALTH_WARN
R
On Fri, 25 Oct 2019, dhils...@performair.com wrote:
> All;
>
> We're setting up our second cluster, using version 14.2.4, and we've run into
> a weird issue: all of our OSDs are created with a size of 0 B. Weights are
> appropriate for the size of the underlying drives, but ceph -s shows this: