Jason Dillaman wrote:
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 [DEBUG]
dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-
Hello,
We're running into some problems with dynamic bucket index resharding. After an upgrade
from Ceph 12.2.2 to 12.2.5, which fixed an issue with resharding when using
tenants (which we do), the cluster was busy resharding for 2 days straight,
resharding the same buckets over and over again.
Af
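In case it helps to reproduce this, a minimal way to see what the cluster is
currently resharding is the reshard queue (bucket name below is just a placeholder):

    radosgw-admin reshard list
    radosgw-admin reshard status --bucket=<bucket-name>
    radosgw-admin reshard cancel --bucket=<bucket-name>   # drop a queued/stuck entry for that bucket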
If I would like to copy/move an rbd image, is this the only option I
have? (I want to move an image from an HDD pool to an SSD pool.)
rbd clone mypool/parent@snap otherpool/child
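For context, the full sequence I was thinking of is roughly this (a sketch using
the same example names as above; I assume a flatten is needed so the child no
longer depends on the parent):

    rbd snap create mypool/parent@snap
    rbd snap protect mypool/parent@snap
    rbd clone mypool/parent@snap otherpool/child
    rbd flatten otherpool/child            # copy all data into the child
    rbd snap unprotect mypool/parent@snap
    rbd snap rm mypool/parent@snap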
The "rbd clone" command will just create a copy-on-write cloned child
of the source image. It will not copy any snapshots from the original
image to the clone.
With the Luminous release, you can use "rbd export --export-format 2
<src-image-spec> - | rbd import --export-format 2 - <dst-image-spec>" to
export / import an image (an
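Concretely, moving an image from an HDD pool to an SSD pool with that pipeline
would look something like the following (pool and image names are only examples):

    rbd export --export-format 2 hddpool/myimage - \
      | rbd import --export-format 2 - ssdpool/myimage

The "-" tells rbd to write the stream to stdout / read it from stdin, and
export-format 2 keeps the snapshots in the stream.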
On Fri, Jun 15, 2018 at 6:19 AM, Wladimir Mutel wrote:
> Jason Dillaman wrote:
>
>>> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121
>>> [DEBUG] dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
>>> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521
Have seen some posts and issue trackers related to this topic in the
past but haven't been able to put it together to resolve the issue I'm
having. All on Luminous 12.2.5 (upgraded over time from past
releases). We are going to upgrade to Mimic in the near future if that would
somehow resolve the issue.
On Fri, Jun 15, 2018 at 2:55 PM, Benjeman Meekhof wrote:
> Have seen some posts and issue trackers related to this topic in the
> past but haven't been able to put it together to resolve the issue I'm
> having. All on Luminous 12.2.5 (upgraded over time from past
> releases). We are going to upg
"Too long" is 120 seconds.
The DB is on SSD devices, and the devices are fast. The OSD process reads
about 800 MB, but I cannot be sure from where.
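One thing that might narrow it down (OSD id below is hypothetical): check which
devices the OSD's block and block.db symlinks point at, then watch per-device
traffic while the OSD starts:

    ls -l /var/lib/ceph/osd/ceph-3/block /var/lib/ceph/osd/ceph-3/block.db
    iostat -x 1            # watch r/s and rkB/s per device during startup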
On 13/06/18 11:36, Gregory Farnum wrote:
How long is “too long”? 800MB on an SSD should only be a second or three.
I’m not sure if that’s a reasonable am
I have done this with Luminous by deep-flattening a clone in a different pool.
It seemed to do what I wanted, but the RBD appeared to lose its sparseness in
the process. Can anyone verify that and/or comment on whether Mimic's "rbd deep
copy" does the same?
Steve Taylor | Senior Software Eng
On Fri, Jun 15, 2018 at 12:15 PM, Steve Taylor
wrote:
> I have done this with Luminous by deep-flattening a clone in a different
> pool. It seemed to do what I wanted, but the RBD appeared to lose its
> sparseness in the process.
Hmm, Luminous librbd clients should have kept object-sized sparse
Jason Dillaman wrote:
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
I don't use either MPIO or MCS on the Windows 2008 R2 or Windows 10
initiator (not Win2016, but I hope there is not much difference). I am trying to make
it work with a single session first. Also, right now I only
Hello List - is anyone using these drives, and do you have any good / bad things
to say about them?
Thanks!
Hi,
we've evaluated them, but they were worse than the SM863a in the usual quick
sync write IOPS benchmark.
That's not to say it's a bad disk (10k IOPS with one thread, ~20k with more
threads), but we haven't run any long-term tests.
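For reference, a sketch of the kind of single-threaded sync write test meant
here, done with fio (the device path is only an example and gets overwritten;
the exact parameters may differ from what we ran):

    fio --name=sync-write --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based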
Paul
2018-06-15 21:02 GMT+02:00 Brian:
> Hello List - anyone usi
I'm at a loss as to what happened here.
I'm testing a single-node Ceph "cluster" as a replacement for RAID and
traditional filesystems: nine 4TB HDDs in a single (underpowered) server,
running Luminous 12.2.5 with BlueStore OSDs.
I set up CephFS on a k=6,m=2 EC pool, mounted it via FUSE, and ran an
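For reference, the pool setup was along these lines (a sketch; names, PG counts
and the test directory are illustrative rather than the exact commands I ran):

    ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=osd
    ceph osd pool create cephfs_data_ec 128 128 erasure ec62
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true   # requires BlueStore
    ceph fs add_data_pool cephfs cephfs_data_ec
    ceph-fuse /mnt/cephfs
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/data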
On 2018-06-16 13:04, Hector Martin wrote:
> I'm at a loss as to what happened here.
Okay, I just realized CephFS has a default 1TB maximum file size... that
explains what triggered the problem. I just bumped it to 10TB. What that
doesn't explain is why rsync didn't complain about anything. Maybe when
ceph
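For anyone else who hits this: the limit in question is the filesystem's
max_file_size, which can be checked and raised like so (assuming the filesystem
is named "cephfs"; the value is in bytes):

    ceph fs get cephfs | grep max_file_size
    ceph fs set cephfs max_file_size 10995116277760   # 10 TiB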