Hi Felix,
I have experience running Ceph on SATADOMs in R630s, and it was kind of
bad because we got bad SATADOMs from Dell.
If you are going to use SATADOMs, make sure to buy them directly from an
Innodisk reseller and not from Dell.
We bought our SATADOMs from Dell and they degraded in 5-6 months. And t
> On 28 October 2016 at 15:37, Kees Meijs wrote:
>
>
> Hi,
>
> Interesting... We're now running using deadline. In other posts I read
> about noop for SSDs instead of CFQ.
>
> Since we're using spinners with SSD journals, does it make sense to mix
> schedulers? E.g. CFQ for spinners _and_
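As a rough sketch of mixing schedulers per device (the device names sdb for
the SSD journal and sdc for a spinner are assumptions):

  echo noop > /sys/block/sdb/queue/scheduler      # SSD journal: noop
  echo deadline > /sys/block/sdc/queue/scheduler  # spinner: deadline (or cfq)
  cat /sys/block/sdc/queue/scheduler              # prints e.g. noop [deadline] cfq

A udev rule keyed on the rotational attribute can make such a split
persistent across reboots.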
Erik McCormick writes:
> We use Edge-Core 5712-54x running Cumulus Linux. Anything off their
> compatibility list would be good though. The switch is 48 10G sfp+
> ports. We just use copper cables with attached sfp. It also had 6 40G
> ports. The switch cost around $4800 and the cumulus license is
Hi all,
I tested the straw / straw2 bucket types.
The Ceph documentation says:
- the straw2 bucket type fixed several limitations in the original straw
bucket
- *the old straw buckets would change some mappings that should not have
changed when a weight was adjusted*
- straw2 achieves the ori
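For anyone who wants to compare the two, a minimal sketch of switching a
bucket from straw to straw2 by editing the CRUSH map (bucket names are
examples, and all clients must support straw2/CRUSH_V4):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: change "alg straw" to "alg straw2" for the buckets to test
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin

Mappings before and after a weight change can then be compared with
"crushtool -i crushmap-new.bin --test --show-mappings --num-rep 3".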
I've recently undergone an upgrade from Hammer to Jewel, migrating from
federated to multi-site, and I note that naming conventions have changed
for rgw pools (names changed, leading periods dropped, etc.). I was hoping
to update my pools to mirror these new conventions; conventions become
assumption
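In case it is useful, a hedged sketch of pointing a zone at renamed pools
via radosgw-admin (the zone name "default" and the exact pool names are
assumptions):

  radosgw-admin zone get --rgw-zone=default > zone.json
  # edit zone.json so the placement targets reference the new pool names
  radosgw-admin zone set --rgw-zone=default --infile zone.json
  radosgw-admin period update --commit   # multi-site: commit the change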
Hi Cephers:
I built Ceph (v11.0.2) with SPDK, and then it produced the following error:
[ 87%] Building CXX object src/os/CMakeFiles/os.dir/FuseStore.cc.o
[ 87%] Building CXX object
src/os/CMakeFiles/os.dir/bluestore/NVMEDevice.cc.o
/srv/autobuild-ceph/gitbuilder.git/build/rpmbuild/BUILD/ceph-11.0.2/s
There may be something broken in the switch to cmake. I will check this. Thanks!
On Mon, Oct 31, 2016 at 6:46 PM, wrote:
> Hi Cephers:
>
> I built Ceph (v11.0.2) with SPDK, and then it produced the following error:
>
> [ 87%] Building CXX object src/os/CMakeFiles/os.dir/FuseStore.cc.o
> [ 87%] Building
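For reference, a minimal sketch of a cmake build with SPDK enabled (the
WITH_SPDK option name is an assumption for this release; check
CMakeLists.txt for the exact switch):

  cd ceph
  ./do_cmake.sh -DWITH_SPDK=ON
  cd build && make -j$(nproc)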
> On 31 October 2016 at 11:33, 한승진 wrote:
>
>
> Hi all,
>
> I tested the straw / straw2 bucket types.
>
> The Ceph documentation says:
>
>
>- the straw2 bucket type fixed several limitations in the original straw
>bucket
>- *the old straw buckets would change some mappings that should h
On 10/31/16 05:56, xxhdx1985126 wrote:
> Hi, everyone.
>
> Recently, I deployed a Ceph cluster manually. And I found that, after
> I start the Ceph OSD through "/etc/init.d/ceph -a start osd", the size
> of the log file "ceph-osd.log" is 0, and its owner is not "ceph", which
> I configured in /etc/ce
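A quick sketch of what usually fixes this on a manually deployed Jewel
cluster (the paths are the defaults; adjust if yours differ):

  chown -R ceph:ceph /var/log/ceph /var/lib/ceph
  # and prefer the systemd units over the sysvinit script, e.g.:
  systemctl start ceph-osd@0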
On Sun, Oct 30, 2016 at 5:40 AM, Bill WONG wrote:
> any ideas or comments?
Can you set "rbd non blocking aio = false" in your ceph.conf and retry
librbd? This will eliminate at least one context switch on the read IO
path -- which results in increased latency under extremely low queue
depths.
--
Hi Jason,
It looks like the situation is the same, no difference. My ceph.conf is
below; any comments or improvements?
---
[global]
fsid = 106a12b0-5ed0-4a71-b6aa-68a09088ec33
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 192.168.8.11,192.168.8.12,192.168.8.13
auth_cluster_re
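For reference, the override Jason suggested would go in a client section,
e.g. (a sketch; the [client] section itself is an assumption since the
posted conf is cut off):

  [client]
  rbd non blocking aio = false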
I would suggest deadline for any SSD, NVMe SSD, or PCIe flash card. But you
will need to supply the deadline settings too, or deadline won't be any
different from running with noop.
Rick
Sent from my iPhone, please excuse any typing errors.
> On Oct 31, 2016, at 5:01 AM, Wido den Hollander wr
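By way of illustration, the deadline tunables Rick mentions live under
sysfs (sdb and the values below are placeholders, not recommendations):

  cat /sys/block/sdb/queue/iosched/read_expire    # default 500 (ms)
  echo 100 > /sys/block/sdb/queue/iosched/read_expire
  echo 4 > /sys/block/sdb/queue/iosched/writes_starved
  echo 16 > /sys/block/sdb/queue/iosched/fifo_batch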
Hello,
After patching my OSD servers with the latest Centos kernel and
rebooting the nodes, all OSD drives moved to different positions.
Before the reboot:
Systemdisk: /dev/sda
Journaldisk: /dev/sdb
OSD disk 1: /dev/sdc
OSD disk 2: /dev/sdd
OSD disk 3: /dev/sde
After the reboot:
Systemdisk: /d
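One way to sanity-check the setup independent of /dev/sdX ordering is to
look at the persistent names (a sketch; the journal symlink layout depends
on how the OSDs were created):

  ls -l /dev/disk/by-partuuid/
  ls -l /var/lib/ceph/osd/ceph-*/journal   # ceph-disk points these at by-partuuid links, not /dev/sdX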
This is normal. You should expect that your disks may get reordered after
a reboot. I am not sure about your setup details, but in 10.2.3 udev
should be able to activate your OSDs no matter the naming (there were
some bugs in previous 10.2.x releases).
On 16-10-31 18:32, jan hugo prins wrote:
He
After the kernel upgrade, I also upgraded the cluster to 10.2.3 from
10.2.2.
Let's hope I only hit a bug and that this bug is now fixed. On the other
hand, I think I also saw the issue on a 10.2.3 node, but I'm not sure.
Jan Hugo
On 10/31/2016 11:41 PM, Henrik Korkuc wrote:
> this is normal. Y
How are your OSDs set up? It is possible that the udev rules didn't
activate your OSDs because the partitions didn't match the rules. Refer to
/lib/udev/rules.d/95-ceph-osd.rules. Basically, your partitions must have
the correct partition type GUIDs for the rules to work.
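To check whether the partitions carry the type GUIDs those rules match on
(device names are examples):

  sgdisk -i 1 /dev/sdc    # "Partition GUID code" should be the Ceph OSD data type
  sgdisk -i 1 /dev/sdb    # likewise for the journal partition
  ceph-disk activate-all  # re-activate anything udev missed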
On 16-10-31 19:10, jan hugo prins wrote:
After the kernel upgrade, I
For better or worse, I can repeat your "ioping" findings against a
qcow2 image hosted on a krbd-backed volume. The "bad" news is that it
actually isn't even sending any data to the OSDs -- which is why your
latency is shockingly low. When performing a "dd ... oflag=dsync"
against the krbd-backed qc
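For comparison, a test that does force data out to the OSDs on every write
(the mount point is an example):

  dd if=/dev/zero of=/mnt/rbd/testfile bs=4k count=1000 oflag=direct,dsync
  ioping -D -c 100 /mnt/rbd    # -D uses direct I/O instead of cached reads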
Hi, everyone.
I'm trying to write a program based on the librbd API that transfers snapshot
diffs between Ceph clusters without the need for temporary storage, which is
required if I use the "rbd export-diff" and "rbd import-diff" pair. I found
that the configuration object "g_conf" and ceph
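For comparison, the same transfer can be done without an intermediate file
by streaming over stdout/stdin (the cluster, pool and snapshot names are
placeholders):

  rbd export-diff --from-snap snap1 pool/image@snap2 - \
    | rbd --cluster remote import-diff - pool/image

Doing it through the librbd API as you describe avoids shelling out
entirely, of course.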