The VMs are XenServer VMs with their virtual disks saved on the NFS server
which has the RBD mounted … So there is no migration from my POV, as there is
no second storage to migrate to ...
All your pain is self-inflicted.
Just FYI, clients are not interrupted when you upgrade Ceph. Clients will be
Btw, I am using ceph-volume.
I just tested ceph-disk; in that case, the ceph-0 folder is mounted from
/dev/sdb1.
So tmpfs is only used with ceph-volume? How does that work?
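A quick way to compare the two setups on a live node is sketched below; the
OSD id (0) and device names are only placeholders for whatever you have:

  # What backs the OSD directory? tmpfs for a ceph-volume bluestore OSD,
  # a real partition such as /dev/sdb1 for a ceph-disk one.
  findmnt /var/lib/ceph/osd/ceph-0

  # OSDs prepared by ceph-volume, with their devices and the metadata
  # that ceph-volume keeps as LVM tags.
  ceph-volume lvm list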
On Wed, Apr 4, 2018 at 9:29 AM, Jeffrey Zhang <
zhang.lei.fly+ceph-us...@gmail.com> wrote:
> I am testing ceph Luminous, the
I am testing Ceph Luminous; the environment is
- CentOS 7.4
- Ceph Luminous (ceph official repo)
- ceph-deploy 2.0
- bluestore + separate WAL and DB
I found that the Ceph OSD folder `/var/lib/ceph/osd/ceph-0` is mounted
from tmpfs. But where do the files in that folder come from, like `keyring`
and `whoami`?
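A rough sketch of how one could check, assuming the metadata lives in the
bluestore label on the device (the OSD id, fsid and LV path below are
placeholders):

  # Metadata stored in the bluestore label on the block device:
  ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-0

  # Activation re-creates the tmpfs directory and populates it:
  ceph-volume lvm activate 0 <osd-fsid>
  # which, as far as I can tell, ends up running something like:
  ceph-bluestore-tool prime-osd-dir --dev /dev/ceph-vg/osd-block-0 \
      --path /var/lib/ceph/osd/ceph-0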
You might want to take a look at the Zipkin tracing hooks that are
(semi)integrated into Ceph [1]. The hooks are disabled by default in
release builds so you would need to rebuild Ceph yourself and then
enable tracing via the 'rbd_blkin_trace_all = true' configuration
option [2].
[1] http://victor
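In case it helps, the rough workflow I would expect for collecting the
traces is sketched below (untested here; the session name and fio job file
are just examples):

  # ceph.conf on the client side:
  #   [client]
  #   rbd_blkin_trace_all = true

  lttng create rbd-blkin                # new LTTng tracing session
  lttng enable-event --userspace 'zipkin:*'
  lttng start
  fio rbd-job.fio                       # or whatever drives the RBD I/O
  lttng stop
  lttng view > blkin-trace.txt          # raw events, can be fed on to Zipkin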
Hey Cephers,
This is just a friendly reminder that the next Ceph Developer Monthly
meeting is coming up:
http://wiki.ceph.com/Planning
If you have work in progress that is feature work, significant
backports, or anything you would like to discuss with the core team,
please add it to the
Thanks for the input, Greg. We've submitted the patch to the Ceph GitHub
repo: https://github.com/ceph/ceph/pull/21222
Kevin
On 04/02/2018 01:10 PM, Gregory Farnum wrote:
On Mon, Apr 2, 2018 at 8:21 AM Kevin Hrpcek
<kevin.hrp...@ssec.wisc.edu> wrote:
Hello,
We use python librad
I was wondering if there is a mechanism to instrument an RBD workload to
elucidate what takes place on OSDs to troubleshoot performance issues
better.
Currently, we can issue the RBD IO, such as via fio, and observe just the
overall performance. One needs to guess which OSDs it hits and try to fi
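So far the most low-tech approach I can think of (a sketch; the pool and
image names below are made up) is to map the image's objects to PGs/OSDs by
hand:

  # Object name prefix of the image:
  rbd info rbd/testimg | grep block_name_prefix

  # Which PG and which OSDs does a given object of the image land on?
  rados -p rbd ls | grep rbd_data.<prefix> | head
  ceph osd map rbd rbd_data.<prefix>.0000000000000000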
> On 03.04.2018 at 13:31, Konstantin Shalygin wrote:
>
>> and true, the VMs have to be shut down/the server rebooted
>
>
> That is not necessary. Just migrate the VM.
Hi,
The VMs are XenServer VMs with their virtual disks saved on the NFS server
which has the RBD mounted … So there is no migration from my PO
On 03/28/2018 11:11 AM, Mark Nelson wrote:
> Personally I usually use a modified version of Mark Seger's getput
> tool here:
>
> https://github.com/markhpc/getput/tree/wip-fix-timing
>
> The difference between this version and upstream is primarily to make
> getput more accurate/useful when using
and true, the VMs have to be shut down/the server rebooted
That is not necessary. Just migrate the VM.
k
Hi Robert,
> On 29.03.2018 at 10:27, Robert Sander wrote:
>
> On 28.03.2018 11:36, Götz Reinicke wrote:
>
>> My question is: How to proceed with the servers which map the RBDs?
>
> Do you intend to upgrade the kernels on these RBD clients acting as NFS
> servers?
>
> If so, you have to plan a
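Before scheduling the reboots I would probably first check what is mapped
and what the connected clients report, e.g. (just a sketch):

  rbd showmapped    # on each NFS server: currently mapped RBD images
  ceph features     # on a mon node: features/releases of connected clients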