On Mon, Jun 17, 2024 at 12:18 AM Satoru Takeuchi
wrote:
>
> On Fri, Jun 14, 2024 at 11:24 PM Anthony D'Atri wrote:
>
> > Usually. There is a high bar for changing command structure or output.
> > Newer versions are more likely to *add* commands and options than to change
> > or remove them.
> >
> > That said, prob
On Tue, May 28, 2024 at 4:53 AM Tony Liu wrote:
>
> Hi,
>
> Say, the source image is being updated and data is mirrored to destination
> continuously.
> Suddenly, networking of source is down and destination will be promoted and
> used to
> restore the VM. Is that going to cause any FS issue and
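For context, failing over in a scenario like this typically means force-promoting the mirrored image on the destination cluster. A minimal sketch with hypothetical pool/image names (not taken from this thread):

$ rbd mirror image promote --force mypool/myimage   # on the destination cluster
$ rbd mirror image demote mypool/myimage            # later, on the old source, before resyncing
$ rbd mirror image resync mypool/myimage            # resync the old source from the new primary

Since RBD mirroring is crash-consistent, the promoted copy looks to the guest like a disk after a sudden power loss, so the filesystem may need a journal replay or fsck on first mount.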
On Mon, Jul 1, 2024 at 4:24 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
> r
On Mon, Jul 1, 2024 at 8:41 PM Ilya Dryomov wrote:
>
> On Mon, Jul 1, 2024 at 4:24 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/66756#note-1
> >
> > Release Notes - TBD
> >
On Tue, Jul 2, 2024 at 9:13 PM Laura Flores wrote:
> The rados suite, upgrade suite, and powercycle are approved by RADOS.
>
> Failures are summarized here:
> https://tracker.ceph.com/projects/rados/wiki/SQUID#Squid-1910
>
> @Ilya Dryomov , please see the upgrade/reef-x suite
On Tue, Jul 2, 2024 at 8:16 PM Ilya Dryomov wrote:
>
> On Mon, Jul 1, 2024 at 8:41 PM Ilya Dryomov wrote:
> >
> > On Mon, Jul 1, 2024 at 4:24 PM Yuri Weinstein wrote:
> > >
> > > Details of this release are summarized here:
> > >
>
On Wed, Jul 3, 2024 at 5:45 PM Reid Guyett wrote:
>
> Hi,
>
> I have a small script in a Docker container we use for a type of CRUD test
> to monitor availability. The script uses Python librbd/librados and is
> launched by Telegraf input.exec. It does the following:
>
>1. Creates an rbd image
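For reference, a minimal sketch of that first step using the Python bindings; the pool and image names below are placeholders, not taken from the actual script:

import rados
import rbd

# Connect using a local ceph.conf; all names here are hypothetical.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('monitoring-pool')
    try:
        # Create a small test image for the availability probe.
        rbd.RBD().create(ioctx, 'crud-probe', 1 * 1024 ** 3)  # 1 GiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()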
Hi Dan,
What is the output of
$ rbd info images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b
Can you confirm that the problem lies with that image by running
$ rbd diff --whole-object images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b
Thanks,
Ilya
On Thu, Jul 25, 2024 at 10:10 PM Dan O'Brien wrote:
>
> Ilya -
>
> I don't think images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b is the
> problem; it was just the last RBD image listed in the log before the crash.
> The commands you suggested work fine when using that image:
>
> [root@os-stora
On Fri, Jul 26, 2024 at 12:17 PM Dan O'Brien wrote:
>
> I'll try that today.
>
> Looking at the tracker issue you flagged, it seems like it should be fixed in
> v18.2.4, which is what I'm running.
Hi Dan,
The reef backport [1] has "Target version: Ceph - v18.2.5". It was
originally targeted fo
On Mon, Aug 5, 2024 at 10:32 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67340#note-1
>
> Release Notes - N/A
> LRC upgrade - N/A
> Gibba upgrade - TBD
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura (https://github.com/c
On Tue, Aug 6, 2024 at 11:55 AM Torkil Svensgaard wrote:
>
> Hi
>
> [ceph: root@ceph-flash1 /]# rbd info rbd_ec/projects
> rbd image 'projects':
> size 750 TiB in 196608000 objects
> order 22 (4 MiB objects)
> snapshot_count: 0
> id: 15a979db61dda7
> da
On Mon, Aug 12, 2024 at 10:20 AM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> we've successfully operated a "good old" Mimic cluster with primary RBD
> images, replicated via journaling to a "backup cluster" with Octopus, for the
> past years (i.e. one-way replication).
> We've now fina
On Mon, Aug 12, 2024 at 11:28 AM Oliver Freyermuth
wrote:
>
> On Mon, Aug 12, 2024 at 11:09 AM Ilya Dryomov wrote:
> > On Mon, Aug 12, 2024 at 10:20 AM Oliver Freyermuth
> > wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> we've successfully operat
On Mon, Aug 12, 2024 at 1:17 PM Oliver Freyermuth
wrote:
>
> On Mon, Aug 12, 2024 at 12:16 PM Ilya Dryomov wrote:
> > On Mon, Aug 12, 2024 at 11:28 AM Oliver Freyermuth
> > wrote:
> >>
> >> On Mon, Aug 12, 2024 at 11:09 AM Ilya Dryomov wrote:
> >>> On Mon, Aug 12,
On Fri, Sep 6, 2024 at 3:54 AM wrote:
>
> Hello Ceph Users,
>
> * Problem: we get the following errors when using krbd; we are using rbd
> for VMs.
> * Workaround: by switching to librbd the errors disappear.
>
> * Software:
> ** Kernel: 6.8.8-2 (parameters: intel_iommu=on iommu=pt
> pcie_aspm.pol
On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
>
> Problem:
> CephFS fallocate implementation does not actually reserve data blocks
> when mode is 0.
> It only truncates the file to the given size by setting the file size
> in the inode.
> So, there is no guarantee that writes to the file
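To make the behaviour being described concrete, here is a minimal illustration of the call in question; the mount point is a hypothetical CephFS path, and the comments restate the description above rather than guaranteed kernel behaviour:

import os

fd = os.open('/mnt/cephfs/testfile', os.O_RDWR | os.O_CREAT, 0o644)
try:
    # mode-0 fallocate: ask for 1 GiB of space to be reserved
    os.posix_fallocate(fd, 0, 1 * 1024 ** 3)
    # On CephFS, per the above, this only sets the file size in the inode;
    # no data blocks are reserved, so later writes can still fail with
    # ENOSPC if the pool fills up.
finally:
    os.close(fd)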
On Thu, Nov 19, 2020 at 3:39 AM David Galloway wrote:
>
> This is the 6th backport release in the Octopus series. This release
> fixes a security flaw affecting Messenger V2 for Octopus & Nautilus. We
> recommend users to update to this release.
>
> Notable Changes
> ---
> * CVE 2020-
On Fri, Jan 8, 2021 at 2:19 PM Gaël THEROND wrote:
>
> Hi everyone!
>
> I'm facing a weird issue with one of my CEPH clusters:
>
> OS: CentOS - 8.2.2004 (Core)
> CEPH: Nautilus 14.2.11 - stable
> RBD using erasure code profile (K=3; m=2)
>
> When I want to format one of my RBD images (client side)
On Mon, Jan 11, 2021 at 10:09 AM Gaël THEROND wrote:
>
> Hi Ilya,
>
> Here is additional information:
> My cluster is a three OSD Nodes cluster with each node having 24 4TB SSD
> disks.
>
> The mkfs.xfs command fails with the following error:
> https://pastebin.com/yTmMUtQs
>
> I'm using the foll
On Thu, Feb 11, 2021 at 1:34 AM Seena Fallah wrote:
>
> Hi,
> I have a few questions about krbd on kernel 4.15
>
> 1. Does it support msgr v2? (If not which kernel supports msgr v2?)
No. Support for msgr2 has been merged into kernel 5.11, due to be
released this weekend.
Note that the kernel cl
"crc/signature" errors in dmesg.
When the session is reset all its state is discarded, so it will retry
indefinitely.
>
> On Thu, Feb 11, 2021 at 3:05 PM Ilya Dryomov wrote:
>>
>> On Thu, Feb 11, 2021 at 1:34 AM Seena Fallah wrote:
>> >
>> > Hi,
>
On Sun, Feb 21, 2021 at 1:04 PM Gaël THEROND wrote:
>
> Hi Ilya,
>
> Sorry for the late reply, I've been sick all week long :-/ and then really
> busy at work once I got back.
>
> I've tried to wipe out the image by zeroing it (Even tried to fully wipe it),
> I can see the same error message.
On Wed, Feb 24, 2021 at 4:09 PM Frank Schilder wrote:
>
> Hi all,
>
> I get these log messages all the time, sometimes also directly to the
> terminal:
>
> kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)
>
> The cluster is healthy and the MDS complaining is actually both, c
On Tue, Mar 2, 2021 at 9:26 AM Stefan Kooman wrote:
>
> Hi,
>
> On a CentOS 7 VM with mainline kernel (5.11.2-1.el7.elrepo.x86_64 #1 SMP
> Fri Feb 26 11:54:18 EST 2021 x86_64 x86_64 x86_64 GNU/Linux) and with
> Ceph Octopus 15.2.9 packages installed. The MDS server is running
> Nautilus 14.2.16. M
On Tue, Mar 2, 2021 at 6:02 PM Stefan Kooman wrote:
>
> On 3/2/21 5:42 PM, Ilya Dryomov wrote:
> > On Tue, Mar 2, 2021 at 9:26 AM Stefan Kooman wrote:
> >>
> >> Hi,
> >>
> >> On a CentOS 7 VM with mainline kernel (5.11.2-1.el7.elrepo.x86_64 #1 SMP
On Wed, Mar 3, 2021 at 11:15 AM Stefan Kooman wrote:
>
> On 3/2/21 6:00 PM, Jeff Layton wrote:
>
> >>
> >>>
> >>> v2 support in the kernel is keyed on the ms_mode= mount option, so that
> >>> has to be passed in if you're connecting to a v2 port. Until the mount
> >>> helpers get support for that
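For reference, a kernel CephFS mount pointed at a v2 (port 3300) monitor endpoint would typically look something like the line below; the monitor address, credentials and mount point are placeholders:

$ mount -t ceph 192.168.1.10:3300:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,ms_mode=prefer-crc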
On Tue, Mar 23, 2021 at 6:13 AM duluxoz wrote:
>
> Hi All,
>
> I've got a new issue (hopefully this one will be the last).
>
> I have a working Ceph (Octopus) cluster with a replicated pool
> (my-pool), an erasure-coded pool (my-pool-data), and an image (my-image)
> created - all *seems* to be wor
On Tue, Apr 20, 2021 at 2:01 AM David Galloway wrote:
>
> This is the 20th bugfix release in the Nautilus stable series. It
> addresses a security vulnerability in the Ceph authentication framework.
> We recommend users to update to this release. For detailed release
> notes with links & change
On Tue, Apr 20, 2021 at 1:56 AM David Galloway wrote:
>
> This is the 11th bugfix release in the Octopus stable series. It
> addresses a security vulnerability in the Ceph authentication framework.
> We recommend users to update to this release. For detailed release
> notes with links & changel
On Tue, Apr 20, 2021 at 2:02 AM David Galloway wrote:
>
> This is the first bugfix release in the Pacific stable series. It
> addresses a security vulnerability in the Ceph authentication framework.
> We recommend users to update to this release. For detailed release
> notes with links & change
On Tue, Apr 20, 2021 at 11:30 AM Dan van der Ster wrote:
>
> On Tue, Apr 20, 2021 at 11:26 AM Ilya Dryomov wrote:
> >
> > On Tue, Apr 20, 2021 at 2:01 AM David Galloway wrote:
> > >
> > > This is the 20th bugfix release in the Nautilus stable series. It
>
On Thu, Apr 22, 2021 at 3:24 PM Cem Zafer wrote:
>
> Hi,
> I have recently added a new host to Ceph and copied the /etc/ceph directory to
> the new host. When I execute a simple ceph command such as "ceph -s", I get the
> following error.
>
> 2021-04-22T14:50:46.226+0300 7ff541141700 -1 monclient(hunting):
>
On Thu, Apr 22, 2021 at 4:20 PM Boris Behrens wrote:
>
> Hi,
>
> I have a customer VM that is running fine, but I can not make snapshots
> anymore.
> rbd snap create rbd/IMAGE@test-bb-1
> just hangs forever.
Hi Boris,
Run
$ rbd snap create rbd/IMAGE@test-bb-1 --debug-ms=1 --debug-rbd=20
let it
al_id in a secure fashion. See
https://docs.ceph.com/en/latest/security/CVE-2021-20288/
for details.
Thanks,
Ilya
>
> On Thu, Apr 22, 2021 at 4:49 PM Ilya Dryomov wrote:
>>
>> On Thu, Apr 22, 2021 at 3:24 PM Cem Zafer wrote:
>> >
>> > Hi,
>
On Thu, Apr 22, 2021 at 5:08 PM Boris Behrens wrote:
>
>
>
> On Thu, Apr 22, 2021 at 4:43 PM Ilya Dryomov wrote:
>>
>> On Thu, Apr 22, 2021 at 4:20 PM Boris Behrens wrote:
>> >
>> > Hi,
>> >
>> > I have a customer VM that is runni
On Thu, Apr 22, 2021 at 6:01 PM Cem Zafer wrote:
>
> Thanks Ilya, for pointing me in the right direction.
> So if I change auth_allow_insecure_global_id_reclaim to true, does that mean
> older userspace clients are allowed to connect to the cluster?
Yes, but upgrading all clients and setting it to false is recommended
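For reference, the relevant knob is usually toggled like this; check the health output to see which clients still need upgrading before turning it back off:

$ ceph config set mon auth_allow_insecure_global_id_reclaim true    # temporarily re-admit old clients
$ ceph health detail                                                # AUTH_INSECURE_GLOBAL_ID_RECLAIM_* lists affected clients
$ ceph config set mon auth_allow_insecure_global_id_reclaim false   # once every client is upgraded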
On Thu, Apr 22, 2021 at 6:00 PM Boris Behrens wrote:
>
>
>
> On Thu, Apr 22, 2021 at 5:27 PM Ilya Dryomov wrote:
>>
>> On Thu, Apr 22, 2021 at 5:08 PM Boris Behrens wrote:
>> >
>> >
>> >
> >> > On Thu, Apr 22, 2021 at 4:43 PM
On Thu, Apr 22, 2021 at 9:24 PM Cem Zafer wrote:
>
> Sorry to disturb you again, but changing the value to yes doesn't affect
> anything. Executing a simple ceph command from the client returns the following
> error again. I'm not so sure it is related to that parameter.
> Have you any idea what
On Thu, Apr 22, 2021 at 10:16 PM Cem Zafer wrote:
>
> This client ceph-common version is 16.2.0, here are the outputs.
>
> indiana@mars:~$ ceph -v
> ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific
> (stable)
>
> indiana@mars:~$ dpkg -l | grep -i ceph-common
> ii ceph-commo
On Fri, Apr 23, 2021 at 6:57 AM Cem Zafer wrote:
>
> Hi Ilya,
> Sorry, totally my mistake. I just saw that the configuration on mars is like
> that.
>
> auth_cluster_required = none
> auth_service_required = none
> auth_client_required = none
>
> So I changed none to cephx, which solved the problem.
> Tha
On Fri, Apr 23, 2021 at 9:16 AM Boris Behrens wrote:
>
>
>
> On Thu, Apr 22, 2021 at 8:59 PM Ilya Dryomov wrote:
>>
>> On Thu, Apr 22, 2021 at 7:33 PM Boris Behrens wrote:
>> >
>> >
>> >
> >> > On Thu, Apr 22, 2021 at 6:30 PM
On Fri, Apr 23, 2021 at 12:03 PM Boris Behrens wrote:
>
>
>
> On Fri, Apr 23, 2021 at 11:52 AM Ilya Dryomov wrote:
>>
>>
>> This snippet confirms my suspicion. Unfortunately without a verbose
>> log from that VM from three days ago (i.e. when it got i
On Fri, Apr 23, 2021 at 12:46 PM Boris Behrens wrote:
>
>
>
> On Fri, Apr 23, 2021 at 12:16 PM Ilya Dryomov wrote:
>>
>> On Fri, Apr 23, 2021 at 12:03 PM Boris Behrens wrote:
>> >
>> >
>> >
> >> > On Fri, Apr 23, 2021 at 11:52 AM
On Fri, Apr 23, 2021 at 1:12 PM Boris Behrens wrote:
>
>
>
> On Fri, Apr 23, 2021 at 1:00 PM Ilya Dryomov wrote:
>>
>> On Fri, Apr 23, 2021 at 12:46 PM Boris Behrens wrote:
>> >
>> >
>> >
> >> > On Fri, Apr 23, 2021 at 12:16 PM
On Sun, Apr 25, 2021 at 12:37 AM Markus Kienast wrote:
>
> I am seeing these messages when booting from RBD and booting hangs there.
>
> libceph: get_reply osd2 tid 1459933 data 3248128 > preallocated
> 131072, skipping
>
> However, Ceph Health is OK, so I have no idea what is going on. I
> reboot
On Sun, Apr 25, 2021 at 11:42 AM Ilya Dryomov wrote:
>
> On Sun, Apr 25, 2021 at 12:37 AM Markus Kienast wrote:
> >
> > I am seeing these messages when booting from RBD and booting hangs there.
> >
> > libceph: get_reply osd2 tid 1459933 data 3248128 >
On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>
> Hi,
>
> I know there is a thread about problems with mounting cephfs with 5.11
> kernels.
> I tried everything that's mentioned there, but I still cannot mount a cephfs
> from an octopus node.
>
> I verified:
>
> - I can not mount with
On Mon, May 3, 2021 at 9:20 AM Magnus Harlander wrote:
>
> On Mon, May 3, 2021 at 12:44 AM Ilya Dryomov wrote:
>
> On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>
> Hi,
>
> I know there is a thread about problems with mounting cephfs with 5.11
> kernels.
>
> .
On Mon, May 3, 2021 at 12:00 PM Magnus Harlander wrote:
>
> On Mon, May 3, 2021 at 11:22 AM Ilya Dryomov wrote:
>
> max_osd 12
>
> I never had more than 10 OSDs on the two OSD nodes of this cluster.
>
> I was running a 3 osd-node cluster earlier with more than 10
> osds, but t
On Mon, May 3, 2021 at 12:27 PM Magnus Harlander wrote:
>
> On Mon, May 3, 2021 at 12:25 PM Ilya Dryomov wrote:
>
> ceph osd setmaxosd 10
>
> Bingo! Mount works again.
>
> Very strange things are going on here (-:
>
> Thanx a lot for now!! If I can help to track it down
On Mon, May 3, 2021 at 12:24 PM Magnus Harlander wrote:
>
> On Mon, May 3, 2021 at 11:22 AM Ilya Dryomov wrote:
>
> There is a 6th osd directory on both machines, but it's empty
>
> [root@s0 osd]# ll
> total 0
> drwxrwxrwt. 2 ceph ceph 200 2. Mai 16:31 ceph-1
> drwxrwxrw
On Tue, May 11, 2021 at 10:50 AM Konstantin Shalygin wrote:
>
> Hi Ilya,
>
> On 3 May 2021, at 14:15, Ilya Dryomov wrote:
>
> I don't think empty directories matter at this point. You may not have
> had 12 OSDs at any point in time, but the max_osd value appears to hav
On Fri, May 14, 2021 at 8:20 AM Rainer Krienke wrote:
>
> Hello,
>
> has the "negative progress bug" also been fixed in 14.2.21? I cannot
> find any info about this in the changelog.
Unfortunately not -- this was a hotfix release driven by rgw and
dashboard CVEs.
Thanks,
Ilya
On Sun, May 16, 2021 at 12:54 PM Markus Kienast wrote:
>
> Hi Ilya,
>
> unfortunately I cannot find any "missing primary copy of ..." error in the
> logs of my 3 OSDs.
> The NVME disks are also brand new and there is not much traffic on them.
>
> The only error keyword I find are those two messa
On Sun, May 16, 2021 at 4:18 PM Markus Kienast wrote:
>
> On Sun, May 16, 2021 at 3:36 PM Ilya Dryomov wrote:
>>
>> On Sun, May 16, 2021 at 12:54 PM Markus Kienast wrote:
>> >
>> > Hi Ilya,
>> >
> >> > unfortunately I cannot find any
On Sun, May 16, 2021 at 8:06 PM Markus Kienast wrote:
>
> On Sun, May 16, 2021 at 7:38 PM Ilya Dryomov wrote:
>>
>> On Sun, May 16, 2021 at 4:18 PM Markus Kienast wrote:
>> >
> >> > On Sun, May 16, 2021 at 3:36 PM Ilya Dryomov wrote:
On Tue, Jun 8, 2021 at 9:20 PM Phil Merricks wrote:
>
> Hey folks,
>
> I have deployed a 3 node dev cluster using cephadm. Deployment went
> smoothly and all seems well.
>
> If I try to mount a CephFS from a client node, 2/3 mons crash however.
> I've begun picking through the logs to see what I
On Wed, Jun 9, 2021 at 11:24 AM Peter Lieven wrote:
>
> Hi,
>
>
> we currently run into an issue where a rbd ls for a namespace returns ENOENT
> for some of the images in that namespace.
>
>
> /usr/bin/rbd --conf=XXX --id XXX ls
> 'mypool/28ef9470-76eb-4f77-bc1b-99077764ff7c' -l --format=json
>
On Wed, Jun 9, 2021 at 1:36 PM Peter Lieven wrote:
>
> On Wed, Jun 9, 2021 at 1:28 PM Ilya Dryomov wrote:
> > On Wed, Jun 9, 2021 at 11:24 AM Peter Lieven wrote:
> >> Hi,
> >>
> >>
> >> we currently run into an issue where a rbd ls for a namespace ret
On Wed, Jun 9, 2021 at 1:38 PM Wido den Hollander wrote:
>
> Hi,
>
> While doing some benchmarks I have two identical Ceph clusters:
>
> 3x SuperMicro 1U
> AMD Epyc 7302P 16C
> 256GB DDR
> 4x Samsung PM983 1,92TB
> 100Gbit networking
>
> I tested on such a setup with v16.2.4 with fio:
>
> bs=4k
>
On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand wrote:
>
> On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> > Hello List,
> >
> > all of a sudden I cannot mount a specific rbd device anymore:
> >
> > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> > /etc/ceph/ceph.client.admin.k
On Wed, Jun 23, 2021 at 3:36 PM Marc wrote:
>
> From what kernel / ceph version is krbd usage on an OSD node problematic?
>
> Currently I am running Nautilus 14.2.11 and el7 3.10 kernel without any
> issues.
>
> I can remember using a cephfs mount without any issues as well, until some
> specific
ed for ..."
splats in dmesg.
Thanks,
Ilya
>
>
> On Wed, Jun 23, 2021 at 11:25 AM Ilya Dryomov wrote:
> >
> > On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand
> > wrote:
> > >
> > > On Tue, Jun 22, 2021 at 02:36:00PM +02
On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
>
> Hello, Ceph users,
>
> How can I figure out why it is not possible to unprotect a snapshot
> in an RBD image? I use this RBD pool for OpenNebula, and somehow there
> is a snapshot in one image, which OpenNebula does not see. So I wanted
On Thu, Jul 1, 2021 at 9:48 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > Hello, Ceph users,
> : >
> : > How can I figure out why it is not possible to unprotect a snapshot
> : >
On Thu, Jul 1, 2021 at 10:50 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > # rbd snap unprotect one/one-1312@snap
> : > 2021-07-01 08:28:40.747 7f3cb6ffd700 -1 librbd::SnapshotUnprotectRequest:
>
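The usual reason an unprotect fails is that the snapshot still has clone children, possibly in another pool. A quick way to check, sketched with the image name from the thread:

$ rbd snap ls one/one-1312
$ rbd children one/one-1312@snap    # clones must be flattened or removed before unprotecting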
On Thu, Jul 1, 2021 at 10:36 AM Oliver Dzombic wrote:
>
>
>
> Hi,
>
> mapping of rbd volumes fails clusterwide.
Hi Oliver,
Clusterwide -- meaning on more than one client node?
>
> The volumes that are mapped are OK, but new volumes won't map.
>
> Receiving errors like:
>
> (108) Cannot send aft
On Thu, Jul 15, 2021 at 11:55 PM Robert W. Eckert wrote:
>
> I would like to directly mount cephfs from the windows client, and keep
> getting the error below.
>
>
> PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe -l x
> 2021-07-15T17:41:30.365Eastern Daylight Time 4 -1 monclient(hunting):
> hand
On Tue, Jun 29, 2021 at 4:03 PM Lucian Petrut
wrote:
>
> Hi,
>
> It’s a compatibility issue, we’ll have to update the Windows Pacific build.
Hi Lucian,
Did you get a chance to update the build?
I assume that means the MSI installer at [1]? I see [2] but the MSI
bundle still seems to contain th
ceph-win-latest is where "Ceph 16.0.0 for
Windows x64 - Latest Build" button points to.
Thanks,
Ilya
>
> Thanks,
> Rob
>
>
> -----Original Message-----
> From: Ilya Dryomov
> Sent: Monday, July 19, 2021 8:04 AM
> To: Lucian Petrut
>
On Wed, Jul 21, 2021 at 4:30 PM Marc wrote:
>
> Crappy code continues to live on?
>
> This issue has been automatically marked as stale because it has not had
> recent activity. It will be closed in a week if no further activity occurs.
> Thank you for your contributions.
Hi Marc,
Which issue
On Fri, Jul 23, 2021 at 11:58 PM wrote:
>
> Hi.
>
> I've followed the installation guide and got nautilus 14.2.22 running on el7
> via https://download.ceph.com/rpm-nautilus/el7/x86_64/ yum repo.
> I'm now trying to map a device on an el7 and getting extremely weird errors:
>
> # rbd info test1/b
On Mon, Jul 26, 2021 at 12:39 PM wrote:
>
> Although I appreciate the responses, they have provided zero help solving
> this issue thus far.
> It seems like the kernel module doesn't even get to the stage where it reads
> the attributes/features of the device. It doesn't know where to connect an
On Mon, Jul 26, 2021 at 5:25 PM wrote:
>
> Have found the problem. All this was caused by a missing mon_host directive in
> ceph.conf. I had expected userspace to catch this, but it looks like it
> didn't care.
We should probably add an explicit check for that so that the error
message is explic
On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert wrote:
>
> I have had the same issue with the windows client.
> I had to issue
> ceph config set mon auth_expose_insecure_global_id_reclaim false
> Which allows the other clients to connect.
> I think you need to restart the monitors as well,
On Thu, Aug 12, 2021 at 5:03 PM Boris Behrens wrote:
>
> Hi everybody,
>
> we just stumbled over a problem where the rbd image does not shrink when
> files are removed.
> This only happens when the rbd image is partitioned.
>
> * We tested it with centos8/ubuntu20.04 with ext4 and a gpt partitio
On Fri, Aug 13, 2021 at 9:45 AM Boris Behrens wrote:
>
> Hi Janne,
> thanks for the hint. I was aware of that, but it is good to add that
> knowledge to the question for future Google searchers.
>
> Hi Ilya,
> that fixed it. Do we know why the discard does not work when the partition
> table is not
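For anyone following along: space reclaim on an RBD-backed filesystem is normally triggered by discards. Assuming the fix here was issuing them explicitly, it would look roughly like this, with placeholder mount point and image name:

$ fstrim -v /mnt/volume    # send discards for the free blocks of the mounted filesystem
$ rbd du rbd/IMAGE         # USED should drop once the OSDs process the discards

Alternatively the filesystem can be mounted with -o discard so that frees are trimmed continuously.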
On Wed, Aug 18, 2021 at 12:40 PM Torkil Svensgaard wrote:
>
> Hi
>
> I am looking at one-way mirroring from cluster A to cluster B.
>
> As pr [1] I have configured two pools for RBD on cluster B:
>
> 1) Pool rbd_data using default EC 2+2
> 2) Pool rbd using replica 2
>
> I have a peer relationsh
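With that layout, images on cluster B would keep their metadata in the replicated pool and their data in the EC pool via the --data-pool option; a sketch with a placeholder image name:

$ rbd create rbd/test-image --size 100G --data-pool rbd_data
$ rbd info rbd/test-image    # data_pool should show rbd_data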
[1] https://cloudbase.it/ceph-for-windows
Thanks,
Ilya
>
> Best regards
> Daniel
>
>
> On Mon, Aug 9, 2021 at 5:43 PM Ilya Dryomov wrote:
>
> > On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert
> > wrote:
> > >
> > > I have
On Wed, Aug 25, 2021 at 7:02 AM Paul Giralt (pgiralt) wrote:
>
> I upgraded to Pacific 16.2.5 about a month ago and everything was working
> fine. Suddenly for the past few days I’ve started having the tcmu-runner
> container on my iSCSI gateways just disappear. I’m assuming this is because
> t
On Tue, Aug 24, 2021 at 11:43 AM Yanhu Cao wrote:
>
> Any progress on this? We have encountered the same problem and use the
> rbd-nbd option timeout=120.
> ceph version: 14.2.13
> kernel version: 4.19.118-2+deb10u1
Hi Yanhu,
No, we still don't know what is causing this.
If rbd-nbd is being too sl
appear to be unrelated.
Thanks,
Ilya
>
> On Mon, Aug 30, 2021 at 6:34 PM Ilya Dryomov wrote:
> >
> > On Tue, Aug 24, 2021 at 11:43 AM Yanhu Cao wrote:
> > >
> > > Any progress on this? We have encountered the same problem, use the
> > >
On Mon, Oct 23, 2023 at 5:15 PM Yuri Weinstein wrote:
>
> If no one has anything else left, we have all issues resolved and
> ready for the 17.2.7 release
A last-minute issue with the exporter daemon [1][2] necessitated a revert
[3]. 17.2.7 builds would need to be respinned: since the tag created
by
On Mon, Nov 6, 2023 at 10:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis, Ernesto
On Wed, Nov 15, 2023 at 5:57 PM Wesley Dillingham
wrote:
>
> looking into how to limit RBD snapshots at the Ceph level.
> Ideally Ceph would enforce an arbitrary limit on the number of snapshots
> allowed per RBD image.
>
> Reading the man page for rbd command I see this option:
> https://docs.ceph.
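If the goal is a hard per-image cap, rbd has a per-image snapshot limit that can be set and cleared; a sketch with placeholder names:

$ rbd snap limit set rbd/myimage --limit 10   # snapshot creation fails once 10 snapshots exist
$ rbd snap limit clear rbd/myimage            # remove the cap again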
On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
>
> Hi Matt,
>
> On 11/15/23 02:40, Matt Larson wrote:
> > On CentOS 7 systems with the CephFS kernel client, if the data pool has a
> > `nearfull` status there is a slight reduction in write speeds (possibly
> > 20-50% fewer IOPS).
> >
> > On a simi
On Thu, Nov 16, 2023 at 5:26 PM Matt Larson wrote:
>
> Ilya,
>
> Thank you for providing these discussion threads on the kernel fixes where
> there was a change, and details on how this affects the clients.
>
> What is the expected behavior in CephFS client when there are multiple data
> pools
On Sat, Nov 25, 2023 at 4:19 AM Tony Liu wrote:
>
> Hi,
>
> The context is RBD on bluestore. I did check extent on Wiki.
> I see "extent" when talking about snapshot and export/import.
> For example, when creating a snapshot, we mark extents. When
> there is a write to marked extents, we will make a c
On Thu, Nov 30, 2023 at 8:25 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> Is there any config in Ceph that blocks or prevents space reclaim?
> I tested on one pool which has only one image with 1.8 TiB in use.
>
>
> rbd $p du im/root
> warning: fast-diff map is not enabled for root. operation may be slow.
--
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
>
> From: Ilya Dryomov
> Sent: Thursday, November 30, 2023 6:27 PM
> To: Szabo, Istvan (Agoda)
> Cc: Ceph Users
> Subject: Re: [ceph-users]
On Tue, Nov 28, 2023 at 8:18 AM Tony Liu wrote:
>
> Hi,
>
> I have an image with a snapshot and some changes after snapshot.
> ```
> $ rbd du backup/f0408e1e-06b6-437b-a2b5-70e3751d0a26
> NAME                                          PROVISIONED  USED
> f04
On Tue, Dec 12, 2023 at 1:03 AM Satoru Takeuchi
wrote:
>
> Hi,
>
> I'm developing an RBD image backup system. In my case, backup data
> must be stored for at least two weeks. To meet this requirement, I'd like
> to take backups as follows:
>
> 1. Take a full backup by rbd export first.
> 2. Take a di
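A sketch of the full-plus-incremental scheme described above, with placeholder pool, image and snapshot names:

$ rbd snap create mypool/myimage@base
$ rbd export mypool/myimage@base /backup/myimage-base.img                          # full backup
$ rbd snap create mypool/myimage@day1
$ rbd export-diff --from-snap base mypool/myimage@day1 /backup/base-to-day1.diff   # incremental

# Restore path: import the full image, recreate the reference snapshot,
# then apply the incremental on top of it.
$ rbd import /backup/myimage-base.img mypool/restored
$ rbd snap create mypool/restored@base
$ rbd import-diff /backup/base-to-day1.diff mypool/restored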
On Wed, Dec 13, 2023 at 12:48 AM Satoru Takeuchi
wrote:
>
> Hi Ilya,
>
> On Tue, Dec 12, 2023 at 9:23 PM Ilya Dryomov wrote:
> > Not at the moment. Mykola has an old work-in-progress PR which extends
> > "rbd import-diff" command to make this possible [1].
>
> I didn't
On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote:
>
> Hi,
>
> I've been searching and trying things but to no avail yet.
> This is not critical because it's a test cluster only, but I'd still
> like to have a solution in case this somehow will make it into our
> production clusters.
> It's an Open
On Thu, Jan 4, 2024 at 4:41 PM Peter wrote:
>
> I follow below document to setup image level rbd persistent cache,
> however I get error output while i using the command provide by the document.
> I have put my commands and descriptions below.
> Can anyone give some instructions? thanks in advance
On Sat, Jan 6, 2024 at 12:02 AM Peter wrote:
>
> Thanks for the response! Yes, it is in use
>
> "watcher=10.1.254.51:0/1544956346 client.39553300 cookie=140244238214096"
> This indicates the client is connected to the image.
> I am using fio to perform a write task on it.
>
> I guess it is the feature not
On Mon, Jan 8, 2024 at 10:43 PM Peter wrote:
>
> rbd --version
> ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus
> (stable)
Hi Peter,
The PWL cache was introduced in Pacific (16.2.z).
Thanks,
Ilya
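For anyone picking this up on Pacific or later: enabling the persistent write-back cache generally involves options along these lines. The option names here are from memory of the Pacific documentation and should be treated as assumptions, and the path/size/image names are placeholders:

$ ceph config set client rbd_plugins pwl_cache
$ ceph config set client rbd_persistent_cache_mode ssd
$ ceph config set client rbd_persistent_cache_path /mnt/pwl-cache
$ ceph config set client rbd_persistent_cache_size 1G
$ rbd status mypool/myimage    # shows the image's cache state once a client has it open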
On Fri, Jan 19, 2024 at 2:38 PM Marc wrote:
>
> Am I doing something weird when I do the following on a ceph node (nautilus, el7):
>
> rbd snap ls vps-test -p rbd
> rbd map vps-test@vps-test.snap1 -p rbd
>
> mount -o ro /dev/mapper/VGnew-LVnew /mnt/disk <--- reset/reboot ceph node
Hi Marc,
It's not clear wher
On Wed, Jan 24, 2024 at 7:31 PM Eugen Block wrote:
>
> We do like the separation of nova pools as well, and we also heavily
> use ephemeral disks instead of boot-from-volume instances. One of the
> reasons being that you can't detach a root volume from an instance.
> It helps in specific maintena