This backport [1] looks suspicious as it was introduced in v14.2.12
and directly changes the initial MonMap code. If you revert it in a
dev build does it solve your problem?
[1] https://github.com/ceph/ceph/pull/36704
On Thu, Oct 22, 2020 at 12:39 PM Wido den Hollander wrote:
>
> Hi,
>
> I alrea
If the remove command is interrupted after it deletes the data and
image header but before it deletes the image listing in the directory,
this can occur. If you run "rbd rm <image>" again (assuming it
was your intent), it should take care of removing the directory
listing entry.
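For illustration only (the pool and image names below are hypothetical placeholders), re-running the removal and then listing the pool should confirm the leftover entry is gone:

    # re-run the interrupted removal; it cleans up the remaining directory entry
    rbd rm mypool/myimage
    # the image should no longer appear in the pool listing
    rbd ls mypool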
On Fri, Oct 30, 2020 at 6
On Tue, Nov 10, 2020 at 1:52 PM athreyavc wrote:
>
> Hi All,
>
> We have recently deployed a new CEPH cluster Octopus 15.2.4 which consists
> of
>
> 12 OSD Nodes(16 Core + 200GB RAM, 30x14TB disks, CentOS 8)
> 3 Mon Nodes (8 Cores + 15GB, CentOS 8)
>
> We use Erasure Coded Pool and RBD block devi
On Sun, Dec 13, 2020 at 6:03 AM mk wrote:
>
> rados ls -p ssdshop
> outputs 20MB of lines without any bench prefix
> ...
> rbd_data.d4993cc3c89825.74ec
> rbd_data.d4993cc3c89825.1634
> journal_data.83.d4993cc3c89825.333485
> journal_data.83.d4993cc3c89825.380648
> journal_d
On Mon, Dec 14, 2020 at 9:39 AM Marc Boisis wrote:
>
>
> Hi,
>
> I would like to know if you support iser in gwcli like the traditional
> targetcli or if this is planned in a future version of ceph ?
We don't have the (HW) resources to test with iSER so it's not
something that anyone is looking
On Mon, Dec 14, 2020 at 11:28 AM Philip Brown wrote:
>
>
> I have a new 3 node octopus cluster, set up on SSDs.
>
> I'm running fio to benchmark the setup, with
>
> fio --filename=/dev/rbd0 --direct=1 --rw=randrw --bs=4k --ioengine=libaio
> --iodepth=256 --numjobs=1 --time_based --group_reporting
Aha! Insightful question!
> running rados bench write to the same pool, does not exhibit any problems. It
> consistently shows around 480M/sec throughput, every second.
>
> So this would seem to be something to do with using rbd devices. Which we
> need to do.
>
> For
r=0KiB/s,w=53.9MiB/s][r=0,w=13.8k IOPS][eta
> 01m:14s]
Have you tried different kernel versions? Might also be worthwhile
testing using fio's "rados" engine [1] (vs your rados bench test)
since it might not have been comparing apples-to-apples given the
>400MiB/s throughput.
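For example, a rough fio invocation using the rados engine (pool and client names are hypothetical; adjust block size and queue depth to match your rbd test):

    # write directly to RADOS objects, bypassing the rbd layer, for an
    # apples-to-apples comparison against the krbd numbers
    fio --name=rados-test --ioengine=rados --clientname=admin --pool=rbd \
        --rw=randwrite --bs=4k --iodepth=256 --numjobs=1 --size=1G \
        --time_based --runtime=60 --group_reporting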
On Tue, Dec 15, 2020 at 12:24 PM Philip Brown wrote:
>
> It won't be on the same node...
> but since as you saw, the problem still shows up with iodepth=32 seems
> we're still in the same problem ball park
> also... there may be 100 client machines.. but each client can have anywhere
> betwee
On Thu, Dec 17, 2020 at 7:22 AM Eugen Block wrote:
>
> Hi,
>
> > [client]
> > rbd cache = false
> > rbd cache writethrough until flush = false
>
> this is the rbd client's config, not the global MON config you're
> reading here:
>
> > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'` co
lse
> rbd: not rbd option: cache
... the configuration option is "rbd_cache" as documented here [2].
>
>
> Very frustrating.
>
>
>
> - Original Message -
> From: "Jason Dillaman"
> To: "Eugen Block"
> Cc: "ceph-users"
> So, while I am happy to file a documentation pull request... I still need to
> find the specific command line that actually *works*, for the "rbd config"
> variant, etc.
>
>
>
> - Original Message -
> From: "Jason Dillaman"
> To: "
rbd cache = false
>
> in /etc/ceph/ceph.conf should work also.
>
> Except it doesnt.
> Even after fully shutting down every node in the ceph cluster and doing a
> cold startup.
>
> is that a bug?
Nope [1]. How would changing a random configuration file on a random
node aff
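For reference, a hedged sketch of how the client-side rbd_cache option can actually reach librbd (pool/image names are hypothetical placeholders):

    # centralized config store, picked up by librbd clients at startup
    ceph config set client rbd_cache false
    # or scoped per pool / per image via the "rbd config" commands
    rbd config pool set mypool rbd_cache false
    rbd config image set mypool/myimage rbd_cache false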
You can try using the "--timeout X" option for "rbd-nbd" to increase
the timeout. Some kernels treat the default as infinity, but there
were some >=4.9 kernels that switched behavior and started defaulting
to 30 seconds. There are also known issues with attempting to place XFS
file systems on top
e volume, could result in
hundreds of thousands of ops to the cluster. That's a great way to
hang IO.
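As a rough illustration of the earlier "--timeout" suggestion (the image name is a hypothetical placeholder; newer rbd-nbd releases spell the option --io-timeout):

    # raise the NBD request timeout so the kernel does not fail I/O
    # after 30 seconds under memory pressure or slow cluster responses
    rbd-nbd map --timeout 120 mypool/myimage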
> Do you have more information about the NBD/XFS memory pressure issues?
See [1].
> Thanks
>
> -----Original Message-----
> From: Jason Dillaman
> Sent: Tuesday
On Fri, Jan 15, 2021 at 4:36 AM Rafael Diaz Maurin
wrote:
>
> Hello cephers,
>
> I run Nautilus (14.2.15)
>
> Here is my context : each night a script take a snapshot from each RBD volume
> in a pool (all the disks of the VMs hosted) on my ceph production cluster.
> Then each snapshot is exporte
On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin
wrote:
>
> On 15/01/2021 at 15:39, Jason Dillaman wrote:
>
> 4. But the error is still here :
> 2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map:
> failed to load object
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan wrote:
>
> That's what I though as well, specially based on this.
>
>
>
> Note
>
> You may clone a snapshot from one pool to an image in another pool. For
> example, you may maintain read-only images and snapshots as templates in one
> pool, and writeable clones in another pool.
r1
> CephTestPool2/vm-100-disk-0-CLONE
> root@Bunkcephmon2:~# rbd ls CephTestPool2
> vm-100-disk-0-CLONE
>
> I am sure I will be back with more questions. Hoping to replace our Nimble
> storage with Ceph and NVMe.
>
> Appreciate it!
>
>
>
We actually have a bunch of bug fixes for snapshot-based mirroring
pending for the next Octopus release. I think this stuck snapshot case
has been fixed, but I'll try to verify on the pacific branch to
ensure.
On Thu, Jan 21, 2021 at 9:11 AM Adam Boyhan wrote:
>
> Decided to request a resync to s
d as
a first step, but perhaps there are some extra guardrails we can put
on the system to prevent premature usage if the sync status doesn't
indicate that it's complete.
> ____
> From: "Jason Dillaman"
> To: "adamb"
> Cc: "
On Thu, Jan 21, 2021 at 2:00 PM Adam Boyhan wrote:
>
> Looks like a script and cron will be a solid work around.
>
> Still interested to know if there are any options to make it so rbd-mirror
> can take more than 1 mirror snap per second.
>
>
>
> From: "adamb"
> To: "ceph-users"
> Sent: Thursda
on, bad superblock on /dev/nbd0, missing
> codepage or helper program, or other error.
>
> On the primary still no issues
>
> root@Ccscephtest1:/etc/pve/priv# rbd clone
> CephTestPool1/vm-100-disk-1@TestSnapper CephTestPool1/vm-100-disk-1-CLONE
> root@Ccscephtest1:/etc/pve/
On Thu, Jan 21, 2021 at 6:18 PM Chris Dunlop wrote:
>
> On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
> > Hi,
> >
> > Am 21.01.21 um 05:42 schrieb Chris Dunlop:
> >
> >> Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or is
> >> it just "this is crazy large, if y
on /dev/nbd0, missing
> codepage or helper program, or other error.
>
>
> Primary still looks good.
>
> root@Ccscephtest1:~# rbd clone CephTestPool1/vm-100-disk-1@TestSnapper1
> CephTestPool1/vm-100-disk-1-CLONE
> root@Ccscephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-1-CL
>
> This is pretty straight forward, I don't know what I could be missing here.
>
>
>
> From: "Jason Dillaman"
> To: "adamb"
> Cc: "ceph-users" , "Matt Wilder"
> Sent: Friday, January 22, 2021 2:11:36 PM
> Subject: Re: [ceph
On Fri, Jan 22, 2021 at 3:29 PM Adam Boyhan wrote:
>
> I will have to do some looking into how that is done on Proxmox, but most
> definitely.
Thanks, appreciate it.
> ____
> From: "Jason Dillaman"
> To: "adamb"
> Cc: "
7f78a331b7a4bf793890f9d324c64183e5)
> pacific (rc)
>
> Unfortunately, I am hitting the same exact issues using a pacific client.
>
> Would this confirm that its something specific in 15.2.8 on the osd/mon nodes?
>
>
>
>
>
>
> From: "Jason Dillaman"
On Thu, Jan 28, 2021 at 10:31 AM Jason Dillaman wrote:
>
> On Wed, Jan 27, 2021 at 7:27 AM Adam Boyhan wrote:
> >
> > Doing some more testing.
> >
> > I can demote the rbd image on the primary, promote on the secondary and the
> > image looks great. I can
On Fri, Jan 29, 2021 at 9:34 AM Adam Boyhan wrote:
>
> This is an odd one. I don't hit it all the time so I don't think it's expected
> behavior.
>
> Sometimes I have no issues enabling rbd-mirror snapshot mode on an rbd when
> it's in use by a KVM VM. Other times I hit the following error, the only
Verify you have correct values for "trusted_ip_list" [1].
[1] https://github.com/ceph/ceph-iscsi/blob/master/iscsi-gateway.cfg_sample#L29
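For illustration, a minimal fragment of /etc/ceph/iscsi-gateway.cfg (the addresses are hypothetical placeholders; every gateway node's IP must be listed and the file must match on all gateways):

    [config]
    # IPs of all iSCSI gateway nodes, comma separated, no spaces
    trusted_ip_list = 192.168.122.101,192.168.122.102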
On Mon, Mar 1, 2021 at 9:45 AM Várkonyi János
wrote:
>
> Hi All,
>
> I'd like to install a Ceph Nautilus on Ubuntu 18.04 LTS and give the storage
> to 2 win
On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
>
> hello!
>
> I'm trying to understand how Bluestore cooperates with RBD image clones, so
> my test is simple
>
> 1. create an image (2G) and fill with data
> 2. create a snapshot
> 3. protect it
> 4. create a clone of the image
> 5. write a small por
On Mon, Mar 1, 2021 at 3:07 PM Pawel S wrote:
>
> Hello Jason!
>
> On Mon, Mar 1, 2021, 19:48 Jason Dillaman wrote:
>
> > On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
> > >
> > > hello!
> > >
> > > I'm trying to understand how Blues
Can you provide the output from "rados -p volumes listomapvals rbd_trash"?
On Wed, Mar 10, 2021 at 8:03 AM Enrico Bocchi wrote:
>
> Hello everyone,
>
> We have an unpurgeable image living in the trash of one of our clusters:
> # rbd --pool volumes trash ls
> 5afa5e5a07b8bc volume-02d959fe-a693-4a
...volum|
> 0010  65 2d 30 32 64 39 35 39 66 65 2d 61 36 39 33 2d  |e-02d959fe-a693-|
> 0020  34 61 63 62 2d 39 35 65 32 2d 63 61 30 34 62 39  |4acb-95e2-ca04b9|
> 0030  36 35 33 38 39 62 12 05 2a 60 09 c5 d4 16 12 05  |65389b..*`..|
> 0040  2a 60 09
It sounds like this is a non-primary mirrored image, which means it's
read-only and cannot be modified. A quick "rbd info" will tell you the
mirror state. Instead, you would need to force-promote it to primary
via "rbd mirror image promote --force" before attempting to modify the
image.
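A rough sketch of the commands involved (pool/image names are hypothetical placeholders):

    # check whether the image is a non-primary mirror copy
    rbd info mypool/myimage          # look at the "mirroring primary" field
    # force-promote only if the peer site is unavailable or already demoted
    rbd mirror image promote --force mypool/myimage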
On Wed, Ma
promoted and not mirrored, a primary image
> I have run rbd --debug-rbd=30 and collected a log file
> which shows the lock owner is still alive and it is unable to get the lock, returning -EAGAIN
> I'll send you the log later
>
> Thank you so much
>
>
> Jason Dillaman wrote on Wed, Mar 24, 2021 at 20:55:
>&
ck owner alive. The question is how to find out the alive
> owner and what the root cause is. Why can't the lock be acquired from the
> owner?
> Thank you so much
>
> Jason Dillaman wrote on Wed, Mar 24, 2021 at 20:55:
>>
>> It sounds like this is a non-primary mirrored image, which
On Mon, Apr 20, 2020 at 1:20 PM Void Star Nill
wrote:
> Thanks Ilya.
>
> The challenge is that, in our environment, we could have multiple
> containers using the same volume on the same host, so we map them multiple
> times and unmap them by device when one of the containers
> complete/terminate
On Mon, Apr 27, 2020 at 7:38 AM Marc Roos wrote:
>
> I guess this is not good for ssd (samsung sm863)? Or do I need to divide
> 14.8 by 40?
>
The 14.8 ms number is the average latency coming from the OSDs, so no need
to divide the number by anything. What is the size of your writes? At 40
writes
On Wed, Apr 29, 2020 at 9:27 AM Ron Gage wrote:
> Hi everyone!
>
> I have been working for the past week or so trying to get ceph-iscsi to
> work - Octopus release. Even just getting a single node working would be a
> major victory in this battle but so far, victory has proven elusive.
>
> My set
I would also like to add that the OSDs can (and will) use redirect on write
techniques (not to mention the physical device hardware as well).
Therefore, your zeroing of the device might just cause the OSDs to allocate
new extents of zeros while the old extents remain intact (albeit
unreferenced and
On Thu, May 14, 2020 at 3:12 AM Brad Hubbard wrote:
> On Wed, May 13, 2020 at 6:00 PM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> >
> > Below is full debug log of 2 minutes before crash of virtual machine.
> Download from below url
> >
> > https://storage.habari.co.tz/index.php/s/31eCwZbOoRTMpc
On Thu, May 14, 2020 at 12:47 PM Kees Meijs | Nefos wrote:
> Hi Anthony,
>
> A one-way mirror suits fine in my case (the old cluster will be
> dismantled in mean time) so I guess a single rbd-mirror daemon should
> suffice.
>
> The pool consists of OpenStack Cinder volumes containing a UUID (i.e.
On Thu, May 28, 2020 at 8:44 AM Hans van den Bogert
wrote:
> Hi list,
>
> When reading the documentation for the new way of mirroring [1], some
> questions arose, especially with the following sentence:
>
> > Since this mode is not point-in-time consistent, the full snapshot
> delta will need to
On Fri, May 29, 2020 at 11:38 AM Palanisamy wrote:
> Hello Team,
>
> Can I get any update on this request.
>
The Ceph team is not really involved in the out-of-tree rbd-provisioner.
Both the in-tree and this out-of-tree RBD provisioner are deprecated in favor of the
ceph-csi [1][2] RBD provisioner. The c
On Fri, May 29, 2020 at 12:09 PM Miguel Castillo
wrote:
> Happy New Year Ceph Community!
>
> I'm in the process of figuring out RBD mirroring with Ceph and having a
> really tough time with it. I'm trying to set up just one way mirroring
> right now on some test systems (baremetal servers, all De
On Thu, Jun 4, 2020 at 3:43 AM Zhenshi Zhou wrote:
>
> My situation is that the primary image is being used while rbd-mirror syncs.
> I want to know the interval between two successive rbd-mirror transfers of the
> incremental data.
> I will search those options you provided, thanks a lot :)
When using the
On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert wrote:
>
> Hi list,
>
> I've awaited octopus for a long time to be able to use mirroring with
> snapshotting, since my setup does not allow for journal based
> mirroring. (K8s/Rook 1.3.x with ceph 15.2.2)
>
> However, I seem to be stuck; I've come t
the image is replayed and its time is
>> just before I demote
>> the primary image. I lost about 24 hours' data and I'm not sure whether
>> there is an interval
>> between the synchronization.
>>
>> I use version 14.2.9 and I deployed a one direction mirro
ing.
> Thanks for the follow-up though!
>
> Regards,
>
> Hans
>
> On Mon, Jun 8, 2020, 13:38 Jason Dillaman wrote:
>>
>> On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert
>> wrote:
>> >
>> > Hi list,
>> >
>> > I'
at, so that you could non-force promote. How are you
writing to the original primary image? Are you flushing your data?
> Jason Dillaman wrote on Tue, Jun 9, 2020 at 7:19 PM:
>>
>> On Mon, Jun 8, 2020 at 11:42 PM Zhenshi Zhou wrote:
>> >
>> > I have just done a test on rbd-mirr
On Thu, Jun 25, 2020 at 7:51 PM Void Star Nill wrote:
>
> Hello,
>
> Is there a way to list all locks held by a client with the given IP address?
Negative -- you would need to check every image since the locks are
tied to the image.
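A simple way to do that scan from the shell (the pool name is a hypothetical placeholder):

    # list the lockers of every image in the pool, then grep for the client's address
    for img in $(rbd ls mypool); do
        echo "== mypool/$img =="
        rbd lock ls mypool/$img
    done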
> Also, I read somewhere that removing the lock with "rbd lock
On Wed, Jul 1, 2020 at 3:23 AM Daniel Stan - nav.ro wrote:
>
> Hi,
>
> We are experiencing a weird issue after upgrading our clusters from ceph
> luminous to nautilus 14.2.9 - I am not even sure if this is ceph related
> but this started to happen exactly after we upgraded, so, I am trying my
> lu
On Tue, Jul 7, 2020 at 11:07 AM Andrei Mikhailovsky wrote:
>
> I've left the virsh pool-list command 'hang' for a while and it did
> eventually get the results back. In about 4 hours!
Perhaps enable the debug logging of libvirt [1] to determine what it's
spending its time on?
> root@ais-cloudho
storage backend. Do you
have a "1:storage" entry in your libvirtd.conf?
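For example, a hedged libvirtd.conf sketch (the filters and log path here are just an illustration; restart libvirtd afterwards):

    # log the storage and qemu drivers at debug level, everything else less verbosely
    log_filters="1:storage 1:qemu 3:object 3:event"
    log_outputs="1:file:/var/log/libvirt/libvirtd.log"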
> Cheers
> - Original Message -
> > From: "Jason Dillaman"
> > To: "Andrei Mikhailovsky"
> > Cc: "ceph-users"
> > Sent: Tuesday, 7 July, 2020 16:33:
On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill wrote:
>
> Hello,
>
> My understanding is that the time to format an RBD volume is not dependent
> on its size as the RBD volumes are thin provisioned. Is this correct?
>
> For example, formatting a 1G volume should take almost the same time as
> forma
On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill wrote:
>
>
>
> On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman wrote:
>>
>> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill
>> wrote:
>> >
>> > Hello,
>> >
>> > My understanding is
On Fri, Jul 24, 2020 at 7:49 AM wrote:
>
> Hi,
>
> I have a working journal-based mirror setup initially created with nautilus.
> I recently upgraded to octopus (15.2.4) to use snapshot based mirroring.
> After that I disabled mirroring for the first image and reenabled it snapshot
> based.
>
> T
On Fri, Jul 24, 2020 at 8:02 AM wrote:
>
> Hi,
>
> this is the main site:
>
> rbd mirror pool info testpool
> Mode: image
> Site Name: ceph
>
> Peer Sites:
>
> UUID: 1f1877cb-5753-4a0e-8b8c-5e5547c0619e
> Name: backup
> Mirror UUID: e9e2c4a0-1900-4db6-b828-e655be5ed9d8
> Direction: tx-only
>
>
> a
On Fri, Jul 24, 2020 at 9:11 AM Herbert Alexander Faleiros
wrote:
>
> Hi,
>
> is there any way to do that without disabling journaling?
Negative at this point. There are no versions of the Linux kernel that
support the journaling feature.
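A minimal sketch of the usual workaround (pool/image/snapshot names are hypothetical); note that disabling journaling also stops journal-based mirroring for that image:

    # krbd cannot map images with the journaling feature enabled
    rbd feature disable mypool/myimage journaling
    rbd map mypool/myimage@mysnap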
> # rbd map image@snap
> rbd: sysfs write failed
> RBD im
On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
wrote:
>
> On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote:
> > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros
> > wrote:
> > >
> > > Hi,
> > >
> > > is there any way to fix it instead a reboot?
> > >
>
> > On main site it looks this way:
> > rbd mirror pool info testpool
> > Mode: image
> > Site Name: ceph
> > Peer Sites:
> > UUID: e68b09de-1d2c-4ec6-9350-a6ccad26e1b7
> > Name: ceph
> > Mirror UUID: 4d7f87f4-47be-46dd-85f1-79caa3fa23da
ll only receive images
from "master") or "rx-tx" for bi-directional mirroring?
> 2020-07-24T21:46:25.978+0200 7f931dca9700 10 rbd::mirror::RemotePollPoller:
> 0x5628339d92b0 schedule_task:
>
>
> -----Original Message-----
> From: Jason Dillaman
> Sent: Friday
On Mon, Jul 27, 2020 at 3:08 PM Herbert Alexander Faleiros
wrote:
>
> Hi,
>
> On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote:
> > On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
> > wrote:
> > >
> > > On Fri, Jul 24, 2020
On Tue, Jul 28, 2020 at 7:19 AM Johannes Naab
wrote:
>
> Hi,
>
> we observe crashes in librbd1 on specific workloads in virtual machines
> on Ubuntu 20.04 hosts with librbd1=15.2.4-1focal.
>
> The changes in
> https://github.com/ceph/ceph/commit/50694f790245ca90a3b8a644da7b128a7a148cc6
> could be
On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
wrote:
>
> On 2020-07-28 14:49, Jason Dillaman wrote:
> >> VM in libvirt with:
> >>
> >>
> >>
> >>
> >>
> >>
> >>
>
On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
wrote:
>
> On 2020-07-28 15:52, Jason Dillaman wrote:
> > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > wrote:
> >>
> >> On 2020-07-28 14:49, Jason Dilla
On Tue, Jul 28, 2020 at 11:39 AM Jason Dillaman wrote:
>
> On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
> wrote:
> >
> > On 2020-07-28 15:52, Jason Dillaman wrote:
> > > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > > wrote:
> > >&g
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
>
> Hi,
>
> I'm trying to have clients read the 'rbd_default_data_pool' config
> option from the config store when creating a RBD image.
>
> This doesn't seem to work and I'm wondering if somebody knows why.
It looks like all string-based
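For context, this is roughly what is being attempted (pool names are hypothetical placeholders):

    # set the option in the centralized config store...
    ceph config set client rbd_default_data_pool ec-data-pool
    # ...and expect image creation to place data objects in that pool
    rbd create --size 10G rbd/myimage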
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
>
>
>
> On 29/07/2020 14:54, Jason Dillaman wrote:
> > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> >>
> >> Hi,
> >>
> >> I'm trying to have clients read the '
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote:
>
> On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
> >
> >
> >
> > On 29/07/2020 14:54, Jason Dillaman wrote:
> > > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> > &
On Fri, Jul 31, 2020 at 3:37 AM Torsten Ennenbach
wrote:
>
> Hello all,
>
> I've a problem with an undeletable image.
> Let me try to explain.
>
> There is a storage with a snapshot, and the snapshot thinks it has a child:
>
> rbd snap unprotect delete-me-please@995cc2e3-c636-4c43-87c3-dbc72917
What does "rados -p rbd listomapvals rbd_header.f907bc6b8b4567" return?
> I tried to move this to trash as a solution, but this isn't working either.
>
>
> Best regards
> Torsten
>
>
> > Am 31.07.2020 um 13:58 schrieb Jason Dillaman :
> >
> > On Fri, Jul 31, 2020 at 3:37 AM Torsten Ennenb
On Fri, Jul 31, 2020 at 8:10 AM Torsten Ennenbach
wrote:
>
> Hi Jason
>
> > Am 31.07.2020 um 14:08 schrieb Jason Dillaman :
> >
> > rados
> > -p rbd listomapvals rbd_header.f907bc6b8b4567
>
> rados -p rbd listomapvals rbd_header.f907b
On Mon, Aug 3, 2020 at 4:11 AM Georg Schönberger
wrote:
>
> Hey Ceph users,
>
> we are currently facing some serious problems on our Ceph Cluster with
> libvirt (KVM), RBD devices and FSTRIM running inside VMs.
>
> The problem is right after running the fstrim command inside the VM the
> ext4 file
On Tue, Aug 4, 2020 at 2:12 AM Georg Schönberger
wrote:
>
> On 03.08.20 14:56, Jason Dillaman wrote:
> > On Mon, Aug 3, 2020 at 4:11 AM Georg Schönberger
> > wrote:
> >> Hey Ceph users,
> >>
> >> we are currently facing some serious problems on
On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote:
>
> Hi,
> I would appreciate any help/hints to solve this issue
> iscsi (gwcli) cannot see the images anymore
>
> This configuration worked fine for many months
> What changed was that ceph is "nearly full"
>
> I am in the process of cleani
; deep-flatten
> op_features:
> flags:
> create_timestamp: Thu Nov 29 13:56:28 2018
>
> On Mon, 10 Aug 2020 at 09:21, Jason Dillaman wrote:
>>
>> On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote:
>> >
>> > Hi,
>> >
It's an effort to expose RBD to Windows via a native driver [1]. That
driver is basically a thin NBD shim to connect with the rbd-nbd daemon
running as a Windows service.
On Thu, Aug 20, 2020 at 6:07 AM Stolte, Felix wrote:
>
> Hey guys,
>
> it seems like there was a presentation called “ceph on
On Tue, Aug 25, 2020 at 6:54 AM huxia...@horebdata.cn
wrote:
>
> Dear Ceph folks,
>
> I am running Openstack Queens to host a variety of Apps, with ceph backend
> storage Luminous 12.2.13.
>
> Is there a solution to support IOPS constraints on a specific rbd volume from
> Ceph side? I know Nautilus
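As a hedged illustration of the per-image QoS settings that exist in Nautilus-era librbd (names and values are hypothetical; they only apply to librbd clients, not krbd):

    # cap the image at 500 IOPS and 50 MiB/s from the librbd side
    rbd config image set mypool/myimage rbd_qos_iops_limit 500
    rbd config image set mypool/myimage rbd_qos_bps_limit 52428800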
On Wed, Aug 26, 2020 at 9:15 AM Willi Schiegel
wrote:
>
> Hello All,
>
> I have a Nautilus (14.2.11) cluster which is running fine on CentOS 7
> servers. 4 OSD nodes, 3 MON/MGR hosts. Now I wanted to enable iSCSI
> gateway functionality to be used by some Solaris and FreeBSD clients. I
> followed
On Wed, Aug 26, 2020 at 10:11 AM Marc Roos wrote:
>
>
>
> I was wondering if anyone is using ceph csi plugins[1]? I would like to
> know how to configure credentials, that is not really described for
> testing on the console.
>
> I am running
> ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/end
On Wed, Aug 26, 2020 at 10:33 AM Marc Roos wrote:
>
> >>
> >>
> >> I was wondering if anyone is using ceph csi plugins[1]? I would like
> to
> >> know how to configure credentials, that is not really described for
> >> testing on the console.
> >>
> >> I am running
> >> ./csiceph --endpoin
On Fri, Sep 4, 2020 at 11:54 AM wrote:
>
> All;
>
> We've used iSCSI to support virtualization for a while, and have used
> multi-pathing almost the entire time. Now, I'm looking to move from our
> single box iSCSI hosts to iSCSI on Ceph.
>
> We have 2 independent, non-routed, subnets assigned
On Thu, Sep 10, 2020 at 7:44 AM Eugen Block wrote:
>
> Hi *,
>
> I'm currently testing rbd-mirror on ceph version
> 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08)
> octopus (stable) and saw this during an rbd import of a fresh image on
> the primary site:
>
> ---snip---
> ceph1:
On Thu, Sep 10, 2020 at 7:36 AM Eugen Block wrote:
>
> Hi *,
>
> I was just testing rbd-mirror on ceph version 15.2.4-864-g0f510cb110
> (0f510cb1101879a5941dfa1fa824bf97db6c3d08) octopus (stable) and
> noticed mgr errors on the primary site (also in version 15.2.2):
>
> ---snip---
> 2020-09-10T11:
404. ;-)
> This is better: https://tracker.ceph.com/projects/rbd/issues
Indeed -- thanks!
> Regards,
> Eugen
>
>
> Zitat von Jason Dillaman :
>
> > On Thu, Sep 10, 2020 at 7:36 AM Eugen Block wrote:
> >>
> >> Hi *,
> >>
&
On Mon, Sep 14, 2020 at 5:13 AM Lomayani S. Laizer wrote:
>
> Hello,
> Last week I got time to try to debug crashes of these VMs
>
> Below log includes rados debug which i left last time
>
> https://storage.habari.co.tz/index.php/s/AQEJ7tQS7epC4Zn
>
> I have observed the following with these settin
On Tue, Sep 22, 2020 at 7:23 AM Eugen Block wrote:
>
> It just hit me when I pushed the "send" button: the (automatically
> created) first snapshot initiates the first full sync to catch up on
> the remote site, but from then it's either a manual process or the
> snapshot schedule. Is that it?
In
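For illustration (pool/image names and the interval are hypothetical placeholders), the snapshot-based flow after the initial full sync looks roughly like:

    # one-off mirror snapshot, replays the delta since the previous one
    rbd mirror image snapshot mypool/myimage
    # or let the scheduler create mirror snapshots periodically
    rbd mirror snapshot schedule add --pool mypool 1h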
On Thu, Sep 24, 2020 at 9:53 AM Stefan Kooman wrote:
>
> On 2020-09-24 14:34, Eugen Block wrote:
> > Hi *,
> >
> > I'm curious if this idea [1] of quotas on namespace level for rbd will
> > be implemented. I couldn't find any existing commands in my lab Octopus
> > cluster so I guess it's still ju
On Wed, Sep 30, 2020 at 8:28 AM wrote:
>
> Hi all,
>
> I'm trying to troubleshoot an interesting problem with RBD performance for
> VMs. Tests done using fio both outside and inside the VMs show that
> random read/write is 20-30% slower than bulk read/write at QD=1. However, at
> QD=16/32
On Mon, Aug 12, 2019 at 10:03 PM yang...@cmss.chinamobile.com
wrote:
>
> Hi Jason,
>
> I was recently testing the RBD mirror feature(ceph12.2.8), my test
> environment is a single-node cluster, which includes 10 3T hdd OSDs + 800G
> pcie ssd + bluestore, and the wal and db partition of the OSD
On Tue, Aug 20, 2019 at 10:04 PM Zaharo Bai (白战豪)-云数据中心集团
wrote:
>
> Hi Jason:
>
> I have a question I would like to ask you: has the current image
> migration been adapted for OpenStack? According to my understanding, OpenStack's
> previous live-migration logic is implemented in Cinder, just
On Wed, Aug 21, 2019 at 9:34 AM Florian Haas wrote:
>
> Hi everyone,
>
> apologies in advance; this will be long. It's also been through a bunch
> of edits and rewrites, so I don't know how well I'm expressing myself at
> this stage — please holler if anything is unclear and I'll be happy to
> try
> On Aug 21, 2019, at 11:41 AM, Florian Haas wrote:
>
> Hi Jason! Thanks for the quick reply.
>
> On 21/08/2019 16:51, Jason Dillaman wrote:>
>> It just looks like this was an oversight from the OpenStack developers
>> when Nova RBD "direct" ephem
On Wed, Aug 21, 2019 at 11:53 AM Jason Dillaman wrote:
>
>
> > On Aug 21, 2019, at 11:41 AM, Florian Haas wrote:
> >
> > Hi Jason! Thanks for the quick reply.
> >
> > On 21/08/2019 16:51, Jason Dillaman wrote:>
> >> It just looks like this was an o
an image "in-place" (i.e. you
keep it in the same pool w/ the same name).
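A rough sketch of the live-migration commands (the image name is a hypothetical placeholder; omitting the destination spec keeps the same pool and name):

    rbd migration prepare mypool/myimage      # image stays usable by librbd clients
    rbd migration execute mypool/myimage      # copy the data in the background
    rbd migration commit mypool/myimage       # finalize and drop the source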
> -----Original Message-----
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: August 21, 2019 20:44
> To: Zaharo Bai (白战豪)-云数据中心集团
> Cc: ceph-users
> Subject: Re: About image migration
>
> On Tue,
this problem?
That's a good point that we didn't consider. I've opened a tracker
ticket against the issue [1].
>
> -----Original Message-----
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: August 22, 2019 8:38
> To: Zaharo Bai (白战豪)-云数据中心集团
> Cc: ceph-users
> Subject: Re: A
nstack.org/show/754766/
>>
>>
>> 2)Log for 16gb volume created from cinder, status in cinder volume is
>> available
>>
>> http://paste.openstack.org/show/754767/
>>
>>
>> 3)Log for 100gb volume created from cinder, status in cinder volume is error
>>