On 01/02/2014 01:40 PM, James Harper wrote:
I just had to restore an MS Exchange database after a Ceph hiccup (no actual
data lost - Exchange is very good like that with its no-loss restore!). The
order of events went something like:
. Loss of connection on osd to the cluster network (public
On 01/02/2014 10:51 PM, James Harper wrote:
I've not used ceph snapshots before. The documentation says that the rbd device
should not be in use before creating a snapshot. Does this mean that creating a
snapshot is not an atomic operation? I'm happy with a crash consistent
filesystem if that'
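A common approach here, sketched under the assumption that the image is mapped and the filesystem mounted at /mnt/vol, is to freeze the filesystem around the snapshot; without the freeze the snapshot is only crash-consistent:
  fsfreeze -f /mnt/vol                 # quiesce the filesystem
  rbd snap create rbd/myimage@snap1    # point-in-time snapshot of the image
  fsfreeze -u /mnt/vol                 # resume I/O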
On 01/08/2014 11:07 AM, Gautam Saxena wrote:
When booting an image from OpenStack in which Ceph is the back-end for
both volumes and images, I'm noticing that it takes about 10 minutes
during the "spawning" phase -- I believe OpenStack is making a full
copy of the 30 GB Windows image. Shouldn't
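For reference, a slow spawn like this usually means a full image copy; getting copy-on-write clones instead generally requires the Glance image to live in RBD in raw format and Glance to expose the image location. A minimal sketch of the relevant pieces (option and CLI names from that era; the image name is hypothetical):
  # glance-api.conf
  show_image_direct_url = True
  # the image must be raw for cloning:
  qemu-img convert -O raw windows.qcow2 windows.raw
  glance image-create --name windows --disk-format raw --container-format bare --file windows.raw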
On 02/04/2014 07:44 PM, Craig Lewis wrote:
On 2/4/14 17:06 , Craig Lewis wrote:
On 2/4/14 14:43 , Yehuda Sadeh wrote:
Does it ever catch up? You mentioned before that most of the writes
went to the same two buckets, so that's probably one of them. Note
that writes to the same bucket are b
On 02/05/2014 01:23 PM, Craig Lewis wrote:
On 2/4/14 20:02 , Josh Durgin wrote:
From the log it looks like you're hitting the default maximum number of
entries to be processed at once per shard. This was intended to prevent
one really busy shard from blocking progress on syncing other s
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our best to help it land, but it was ultimately rejected. Furthermore,
an additional requirement was imposed to
On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin wrote:
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our
On 03/25/2014 12:18 AM, Ray Lv wrote:
Hi there,
We have a case for using the C APIs for compound operations in librados. These
interfaces are only exported as C APIs from the Firefly release, such as
https://github.com/ceph/ceph/blob/firefly/src/include/rados/librados.h#L1834.
But our RADOS deployment wil
On 03/26/2014 02:26 PM, gustavo panizzo wrote:
hello
one of our OSDs crashed; unfortunately it had an unreplicated pool
(size = 1).
I want to recover as much as possible using raw files. I've tried using
this tool
https://raw.githubusercontent.com/smmoore/ceph/4eb806fdcc02632bf4ac60f302c4e1
On 03/26/2014 05:50 PM, Craig Lewis wrote:
I made a typo in my timeline too.
It should read:
At 14:14:00, I started OSD 4, and waited for ceph -w to stabilize. CPU
usage was normal.
At 14:15:10, I ran radosgw-admin --name=client.radosgw.ceph1c regions
list && radosgw-admin --name=client.radosgw.c
On 03/31/2014 03:03 PM, Brendan Moloney wrote:
Hi,
I was wondering if RBD snapshots use the CRUSH map to distribute
snapshot data and live data on different failure domains? If not, would
it be feasible in the future?
Currently rbd snapshots and live objects are stored in the same place,
since
On 04/03/2014 03:36 PM, Jonathan Gowar wrote:
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
Dear all,
I have an rbd image that I can't delete. It contains a snapshot that is
"busy"
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
2014-03-04 14:02:04.0620
On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
Thanks Kevin for the full explanation!
cache.writeback=on,cache.direct=off,cache.no-flush=off
I didn't know about the cache option split, thanks.
rbd does, to my knowledge, not use the kernel page cache, so we're safe
from that part. It
On 05/08/2014 09:42 AM, Federico Iezzi wrote:
Hi guys,
First of all, congratulations on the Firefly release!
IMHO this release is a huge step for the Ceph project!
Just for fun, this morning I upgraded one of my staging Ceph clusters used
by an OpenStack Havana installation (Canonical cloud arch
On 05/19/2014 01:48 AM, JinHwan Hwang wrote:
I have been trying to do live migration of a VM which is running on rbd.
But so far, it only gives me 'internal error: rbd username 'libvirt'
specified but secret not found' when I do a live migration.
ceph-admin : source host
host : destination host
r
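That error usually means the destination host has no libvirt secret holding the cephx key. A sketch of defining it there, assuming the UUID referenced by the domain XML is known (placeholders throughout). First, a secret.xml on the destination host:
  <secret ephemeral='no' private='no'>
    <uuid>REPLACE-WITH-UUID-FROM-DOMAIN-XML</uuid>
    <usage type='ceph'>
      <name>client.libvirt secret</name>
    </usage>
  </secret>
Then register it and attach the key:
  virsh secret-define --file secret.xml
  virsh secret-set-value --secret REPLACE-WITH-UUID-FROM-DOMAIN-XML \
      --base64 "$(ceph auth get-key client.libvirt)"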
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote:
On Wednesday, 21 May 2014 at 08:20 -0700, Sage Weil wrote:
You're certain that that is the correct prefix for the rbd image you
removed? Do you see the objects listed when you do 'rados -p rbd ls - |
grep '?
I'm pretty sure yes: since I didn't s
On 05/21/2014 03:29 PM, Vilobh Meshram wrote:
Hi All,
I want to understand how Ceph users go about quota management when
Ceph is used with OpenStack.
1. Is it recommended to use a common pool, say “volumes”, for creating
volumes which is shared by all tenants? In this case a common
On 09/10/2015 11:53 AM, Robert LeBlanc wrote:
My notes show that it should have landed in 4.1, but I also have
written down that it wasn't merged yet. Just trying to get a
confirmation on the version that it did land in.
Yes, it landed in 4.1.
J
On 10/19/2015 02:45 PM, Jan Schermer wrote:
On 19 Oct 2015, at 23:15, Gregory Farnum wrote:
On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
other people using Ceph think.
If I were to use RADOS directly in m
On 11/13/2015 03:47 PM, Artie Ziff wrote:
Hello...
Want to share some more info about the rbd problem I am having.
I took Jason's suggestion to heart, as there was a past errant configure
that was run with prefix not getting set, so it installed into /. I never
did clean that up, but today I configu
On 11/13/2015 09:20 PM, Master user for YYcloud Groups wrote:
Hi,
I found there is only librbd's python wrapper.
Do you know how to port this library to other languages such as
Java/Ruby/Perl, etc.?
Here are a few other ports of librbd:
Java: https://github.com/ceph/rados-java
Ruby: https://gi
ec026d0594
Merge: 773713b 615f8f4
Author: Josh Durgin
Date: Thu Nov 12 19:32:42 2015 -0800
Merge branch 'pybind3' of https://github.com/dcoles/ceph into
wip-pybind3
pybind: Add Python 3 support for rados and rbd modules
Reviewed-by: Josh Durgin
Conflicts:
On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
When trying to merge two results of rbd export-diff, the following error
occurs:
iss@lab2-b1:~$ rbd export-diff --from-snap autosnap120720151500
spin1/scrun1@autosnap120720151502 /data/volume1/scrun1-120720151502.bck
iss@lab2-b1:~$ rbd export-diff -
On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
Hi Josh,
On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin wrote:
On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
When trying to merge two results of rbd export-diff, the
following error
This is the problem:
http://tracker.ceph.com/issues/14030
As a workaround, you can pass the first diff in via stdin, e.g.:
cat snap1.diff | rbd merge-diff - snap2.diff combined.diff
Josh
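For context, a sketch of the export/merge/import workflow this workaround fits into, with hypothetical snapshot and file names:
  rbd export-diff --from-snap snap1 pool/image@snap2 s1_to_s2.diff
  rbd export-diff --from-snap snap2 pool/image@snap3 s2_to_s3.diff
  cat s1_to_s2.diff | rbd merge-diff - s2_to_s3.diff s1_to_s3.diff   # stdin workaround from above
  rbd import-diff s1_to_s3.diff backuppool/image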
On 12/08/2015 11:11 PM, Josh Durgin wrote:
On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
Hi Josh,
On
On 02/25/2013 11:12 AM, Mandell Degerness wrote:
I keep running into this error when attempting to create a volume from an image:
ProcessExecutionError: Unexpected error while running command.
Command: rbd import --pool rbd /mnt/novadisk/tmp/tmpbjwv9l
volume-d777527e-9754-4779-bd4c-d869968aba0c
On 03/11/2013 08:02 AM, Wolfgang Hennerbichler wrote:
also during writes.
I've tested it now on linux with a virtio disk-drive:
virsh console to the running VM.
Next - Write Test:
dd if=/dev/zero of=/bigfile bs=2M
On 03/12/2013 01:28 PM, Travis Rhoden wrote:
Thanks for the response, Trevor.
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate.
I would argue that it is on shared storage. It is an RBD stored in Ceph,
and that's available at each host via librbd.
A
Tue, Mar 12, 2013 at 4:38 PM, Josh Durgin wrote:
On 03/12/2013 01:28 PM, Travis Rhoden wrote:
Thanks for the response, Trevor.
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate.
I would argue that it is on shared storage. It is an RBD stored in
C
On 03/12/2013 12:46 AM, Wolfgang Hennerbichler wrote:
On 03/11/2013 11:56 PM, Josh Durgin wrote:
dd if=/dev/zero of=/bigfile bs=2M &
Serial console gets jerky, VM gets unresponsive. It doesn't crash, but
it's not 'healthy' either. CPU load isn't very high, it
On 03/18/2013 07:53 AM, Wolfgang Hennerbichler wrote:
On 03/13/2013 06:38 PM, Josh Durgin wrote:
Anyone seeing this problem, could you try the wip-rbd-cache-aio branch?
Hi,
just compiled and tested it out, unfortunately there's no big change:
ceph --version
ceph version 0.5
On 03/19/2013 11:03 PM, Chen, Xiaoxi wrote:
I think Josh may be the right man for this question ☺
To be more precise, I would like to add more words about the status:
1. We have configured “show_image_direct_url=True” in Glance, and from the
Cinder-volume’s log, we can make sure we have got
On 03/19/2013 07:06 PM, Josh Durgin wrote:
On 03/18/2013 07:53 AM, Wolfgang Hennerbichler wrote:
On 03/13/2013 06:38 PM, Josh Durgin wrote:
Anyone seeing this problem, could you try the wip-rbd-cache-aio branch?
Hi,
just compiled and tested it out, unfortunately there's n
On 04/16/2013 02:08 AM, Wolfgang Hennerbichler wrote:
On 03/29/2013 09:46 PM, Josh Durgin wrote:
The issue was that the qemu rbd driver was blocking the main qemu
thread when flush was called, since it was using a synchronous flush.
Fixing this involves patches to librbd to add an asynchronous
On 05/03/2013 01:48 PM, Jens Kristian Søgaard wrote:
Hi,
* librbd: new async flush method to resolve qemu hangs (requires Qemu
update as well)
I'm very interested in this update, as it has held our system back.
Which version of qemu is needed?
It's not in a release yet.
The release notes
On 05/06/2013 11:56 AM, Barry O'Rourke wrote:
The version currently shipping with RHEL is qemu-kvm-0.12.1.2-2.355
which doesn't work with ceph authentication, so you'll need to go for
greater than 1.0.1.
The RHEL version number is pretty meaningless, since there are >2800
patches on top of it,
On 05/06/2013 03:41 PM, w sun wrote:
Hi Josh,
I assume by "put up-to-date rbd on top of the RHEL package", you mean that the latest
"asynchronous flush" fix (QEMU portion) can be back-ported and included in the RPMs? Or
not?
Yeah, the async flush will be included. We'll need one version of th
On 05/13/2013 09:17 AM, w sun wrote:
While planning the usage of fast clone from openstack glance image store
to cinder volume, I am a little concerned about possible IO performance
impact to the cinder volume service node if I have to perform flattening
of multiple images down the road.
Am I
On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote:
Hi,
under bobtail, rbd snap rollback shows the progress going on. Since
cuttlefish I see no progress anymore.
Listing the rbd help, it only shows me a no-progress option, but it seems
no progress is the default, so I need a progress option.
On 05/30/2013 07:37 AM, Martin Mailand wrote:
Hi Josh,
I am trying to use ceph with openstack (grizzly); I have a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without
e it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
Josh
-martin
On 30.05.2013 22:22, Martin Mailand wrote:
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's
lume-34838911-6613-4140-93e0-e1565054a2d3 10240M 2
root@controller:~/vm_images#
-martin
On 30.05.2013 22:56, Josh Durgin wrote:
On 05/30/2013 01:50 PM, Martin Mailand wrote:
Hi Josh,
I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone en
On 05/30/2013 02:50 PM, Martin Mailand wrote:
Hi Josh,
now everything is working, many thanks for your help, great work.
Great! I added those settings to
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure
out in the future.
-martin
On 30.05.2013 23:24, Josh D
On 06/03/2013 03:37 AM, Makkelie, R - SPLXL wrote:
I have the following installed
- openstack grizzly
- ceph cuttlefish
and followed
http://ceph.com/docs/master/rbd/rbd-openstack/
I can create volumes and I see them in Ceph with the appropriate IDs,
but when I want to attach them to an instance I g
On 06/07/2013 02:41 PM, John Nielsen wrote:
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the
back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my
hosts and discovered that it includes this change:
[libvirt] [PATCH] Forbid use of ':'
On 06/07/2013 04:18 PM, John Nielsen wrote:
On Jun 7, 2013, at 5:01 PM, Josh Durgin wrote:
On 06/07/2013 02:41 PM, John Nielsen wrote:
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the
back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my
On 06/11/2013 11:59 AM, Guido Winkelmann wrote:
Hi,
I'm having issues with data corruption on RBD volumes again.
I'm using RBD volumes for virtual harddisks for qemu-kvm virtual machines.
Inside these virtual machines I have been running a C++ program (attached)
that fills a mounted filesystem
On 06/11/2013 08:10 AM, Alvaro Izquierdo Jimeno wrote:
Hi all,
I want to connect an openstack Folsom glance service to ceph.
The first option is setting up the glance-api.conf with 'default_store=rbd' and
the user and pool.
The second option is defined in
https://blueprints.launchpad.net/gla
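A minimal sketch of that first option, using Folsom-era option names (user and pool are placeholders; later releases moved these under a [glance_store] section):
  # glance-api.conf
  default_store = rbd
  rbd_store_user = glance
  rbd_store_pool = images
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_chunk_size = 8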
On 06/21/2013 09:48 AM, w sun wrote:
Josh & Sebastien,
Does either of you have any comments on this cephx issue with multi-rbd
backend pools?
Thx. --weiguo
On 06/27/2013 05:54 PM, w sun wrote:
Thanks Josh. That explains it. So I guess right now with Grizzly, you can
only use one rbd backend pool (assuming a different cephx key for each
pool) on a single Cinder node unless you are willing to modify
cinder-volume.conf and restart cinder service all
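For reference, later Cinder releases support several RBD backends in one cinder-volume service via enabled_backends; a sketch with hypothetical section names, pools, and users:
  # cinder.conf
  enabled_backends = rbd-gold, rbd-silver
  [rbd-gold]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-gold
  rbd_user = cinder-gold
  rbd_secret_uuid = <uuid-for-cinder-gold-key>
  [rbd-silver]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-silver
  rbd_user = cinder-silver
  rbd_secret_uuid = <uuid-for-cinder-silver-key>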
On 06/29/2017 08:16 PM, donglifec...@gmail.com wrote:
zhiqiang, Josh,
what about the async recovery feature? I didn't see any update on
github recently; will it be further developed?
Yes, post-luminous at this point.
Both of you are seeing leveldb perform compaction when the osd starts
up. This can take a while for large amounts of omap data (created by
things like cephfs directory metadata or rgw bucket indexes).
The 'leveldb_compact_on_mount' option wasn't changed in 10.2.9, but
leveldb will compact automa
eren't any changes in the way ceph was using leveldb from 10.2.7
to 10.2.9 that I could find.
Josh
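For reference, the option mentioned above would be set in ceph.conf if you want leveldb compacted at every OSD start; a minimal sketch:
  [osd]
  leveldb compact on mount = true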
On 17.07.2017 22:03, Josh Durgin wrote:
Both of you are seeing leveldb perform compaction when the osd starts
up. This can take a while for large amounts of omap data (created by
things like c
On 09/06/2017 04:36 PM, Bryan Stillwell wrote:
I was reading this post by Josh Durgin today and was pretty happy to see we can
get a summary of features that clients are using with the 'ceph features'
command:
http://ceph.com/community/new-luminous-upgrade-complete/
However, I hav
On 09/07/2017 11:31 AM, Bryan Stillwell wrote:
On 09/07/2017 10:47 AM, Josh Durgin wrote:
On 09/06/2017 04:36 PM, Bryan Stillwell wrote:
I was reading this post by Josh Durgin today and was pretty happy to
see we can get a summary of features that clients are using with the
'ceph fea
Could you post your crushmap? PGs mapping to no OSDs is a symptom of something
wrong there.
You can stop the osds from changing position at startup with 'osd crush update
on start = false':
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location
Josh
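A minimal sketch of where that setting goes, in ceph.conf on the OSD hosts:
  [osd]
  osd crush update on start = false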
On 09/13/2017 03:40 AM, Florian Haas wrote:
So we have a client that is talking to OSD 30. OSD 30 was never down;
OSD 17 was. OSD 30 is also the preferred primary for this PG (via
primary affinity). The OSD now says that
- it does itself have a copy of the object,
- so does OSD 94,
- but that th
On 09/14/2017 12:44 AM, Florian Haas wrote:
On Thu, Sep 14, 2017 at 2:47 AM, Josh Durgin wrote:
On 09/13/2017 03:40 AM, Florian Haas wrote:
So we have a client that is talking to OSD 30. OSD 30 was never down;
OSD 17 was. OSD 30 is also the preferred primary for this PG (via
primary affinity
On 09/15/2017 01:57 AM, Florian Haas wrote:
On Fri, Sep 15, 2017 at 8:58 AM, Josh Durgin wrote:
This is more of an issue with write-intensive RGW buckets, since the
bucket index object is a single bottleneck if it needs recovery, and
all further writes to a shard of a bucket index will be
nd, but I haven't thought of a better one
for existing versions.
Josh
From: Florian Haas
Sent: Sep 15, 2017 3:43 PM
To: Josh Durgin
Cc: ceph-users@lists.ceph.com; Christian Theune
Subject: Re: [ceph-users] Clarification on sequence of recov
On 09/06/2016 10:16 PM, Dan Jakubiec wrote:
Hello, I need to issue the following commands on millions of objects:
rados_write_full(oid1, ...)
rados_setxattr(oid1, "attr1", ...)
rados_setxattr(oid1, "attr2", ...)
Would it make it any faster if I combined all 3 of these into a single
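For illustration, the librados C write-op API can batch these into one atomic operation per object, saving round trips; a minimal sketch, assuming an already-open rados_ioctx_t and hypothetical buffer names:

#include <rados/librados.h>

/* Apply a full-object write plus two xattrs in a single compound op. */
int write_with_attrs(rados_ioctx_t io, const char *oid,
                     const char *data, size_t len,
                     const char *a1, size_t a1len,
                     const char *a2, size_t a2len)
{
    rados_write_op_t op = rados_create_write_op();
    rados_write_op_write_full(op, data, len);
    rados_write_op_setxattr(op, "attr1", a1, a1len);
    rados_write_op_setxattr(op, "attr2", a2, a2len);
    /* All three actions are applied atomically on the OSD. */
    int r = rados_write_op_operate(op, io, oid, NULL, 0);
    rados_release_write_op(op);
    return r;
}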
On 09/13/2016 01:13 PM, Stuart Byma wrote:
Hi,
Can anyone tell me why librados creates multiple threads per object, and never
kills them, even when the ioctx is deleted? I am using the C++ API with a
single connection and a single IO context. More threads and memory are used for
each new obje
On 09/16/2016 09:46 AM, Erick Perez - Quadrian Enterprises wrote:
Can someone point me to a thread or site that uses ceph+erasure coding
to serve block storage for Virtual Machines running with Openstack+KVM?
All references that I found are using erasure coding for cold data or
*not* VM block acc
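For reference, releases from Luminous on can place RBD data directly on an erasure-coded pool (BlueStore OSDs) by enabling overwrites, with image metadata kept in a replicated pool; a sketch with hypothetical pool names:
  ceph osd pool create ecpool 64 64 erasure
  ceph osd pool set ecpool allow_ec_overwrites true
  rbd create --size 100G --data-pool ecpool rbd/vm-disk-1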
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10 of my 518 osds crash. I cannot start 2 of them without triggering
14 or so of them to crash repeatedly for more than an hour.
I've run another one of them with more logging, debug
On 04/05/2018 08:11 PM, Josh Durgin wrote:
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10 of my 518 osds crash. I cannot start 2 of them without triggering
14 or so of them to crash repeatedly for more than an hour.
I
oking forward to future updates.
Let me know if you need anything else.
--
Adam
On Thu, Apr 5, 2018 at 10:13 PM, Josh Durgin wrote:
On 04/05/2018 08:11 PM, Josh Durgin wrote:
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10
On 02/19/2017 12:15 PM, Patrick Donnelly wrote:
On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote:
The least intrusive solution is to simply change the sandbox to allow
the standard file system module loading function as expected. Then any
user would need to make sure that every OSD had consi
On 03/08/2017 02:15 PM, Kent Borg wrote:
On 03/08/2017 05:08 PM, John Spray wrote:
Specifically?
I'm not saying you're wrong, but I am curious which bits in particular
you missed.
Object maps. Those transaction-y things. Object classes. Maybe more I
don't know about because I have been learni
On 04/12/2017 09:26 AM, Gerald Spencer wrote:
Ah I'm running Jewel. Is there any information online about python3-rados
with Kraken? I'm having difficulties finding more than I initially posted.
What info are you looking for?
The interface for the python bindings is the same for python 2 and 3
rying several times, the merge-diff sometimes works and
sometimes does not, using the same source files.
On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev wrote:
Hi Josh, looks like I celebrated too soon:
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin
On 01/12/2016 06:10 AM, Alex Gorbachev wrote:
Good day! I am working on a robust backup script for RBD and ran into a
need to reliably determine start and end snapshots for differential
exports (done with rbd export-diff).
I can clearly see these if dumping the ASCII header of the export file,
On 01/09/2016 02:34 AM, Wukongming wrote:
Hi, all
I notice this sentence, "Running GFS or OCFS on top of RBD will not work with
caching enabled.", on http://docs.ceph.com/docs/master/rbd/rbd-config-ref/. Why? Is
there any way to enable the rbd cache with OCFS2 on top? Because I have a fio test w
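For reference, the librbd client cache can be turned off for such shared-access setups (the kernel rbd module has no such cache); a minimal ceph.conf sketch:
  [client]
  rbd cache = false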
On 01/12/2016 11:11 AM, Stefan Priebe wrote:
Hi,
I want to add support for fast-diff and object map to our "old" firefly
v2 rbd images.
The current hammer release can't do this.
Is there any reason not to cherry-pick this one? (on my own)
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515
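For reference, on releases that do support it (Jewel and later), the features can be enabled on an existing image and the object map rebuilt; a sketch with a hypothetical image name:
  rbd feature enable rbd/myimage exclusive-lock
  rbd feature enable rbd/myimage object-map fast-diff
  rbd object-map rebuild rbd/myimage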
On 01/12/2016 10:34 PM, Wido den Hollander wrote:
On 01/13/2016 07:27 AM, wd_hw...@wistron.com wrote:
Thanks Wido.
So it seems there is no way to do this under Hammer.
Not very easily, no. You'd have to count and stat all objects for an RBD
image to figure this out.
For hammer you'd need anot
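A rough sketch of that counting approach, using the block name prefix from rbd info (names and prefix hypothetical; object count times object size only gives an upper bound, since objects are sparse):
  rbd info rbd/myimage | grep -E 'block_name_prefix|order'
  rados -p rbd ls | grep -c '^rbd_data\.<image id>\.'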
On 02/24/2016 07:10 PM, Christian Balzer wrote:
10 second rados bench with 4KB blocks, 219MB written in total.
nand-writes per SSD: 41*32MB = 1312MB.
10496MB total written to all SSDs.
Amplification: 48!!!
Le ouch.
In my use case with rbd cache on all VMs I expect writes to be rather
large for the m
how much the benchmark fills the image, this could be a
large or small overhead compared to the amount of data written.
Josh
Jan
On 26 Feb 2016, at 22:05, Josh Durgin wrote:
On 02/24/2016 07:10 PM, Christian Balzer wrote:
10 second rados bench with 4KB blocks, 219MB written in total.
nand-wr
c.
Josh
Rgds,
Shinobu
- Original Message -
From: "Josh Durgin"
To: "Christian Balzer" , ceph-users@lists.ceph.com
Sent: Saturday, February 27, 2016 6:05:07 AM
Subject: Re: [ceph-users] Observations with a SSD based pool under Hammer
On 02/24/2016 07:10 PM, Christian
On 02/26/2016 03:17 PM, Shinobu Kinjo wrote:
In jewel, as you mentioned, there will be "--max-objects" and "--object-size"
options.
That hint will go away or be mitigated with those options. Correct?
The io hint isn't sent by rados bench, just rbd. So even with those
options, rados bench still doesn
On 03/01/2016 10:03 PM, min fang wrote:
thanks, with your help I set the read-ahead parameter. What are the
cache parameters for the kernel rbd module?
Such as:
1) what is the cache size?
2) Does it support write back?
3) Will read ahead be disabled if max bytes has been read into cache?
(similar the
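For reference, the kernel rbd module does not implement the librbd cache; reads go through the normal block layer, so read-ahead is tuned per device via sysfs (device name hypothetical):
  cat /sys/block/rbd0/queue/read_ahead_kb
  echo 4096 > /sys/block/rbd0/queue/read_ahead_kb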
On 03/17/2016 03:51 AM, Schlacta, Christ wrote:
I posted about this a while ago, and someone else has since inquired,
but I seriously want to know if anybody has figured out how to
boot from an RBD device yet using iPXE or similar. Last I read,
loading the kernel and initrd from object stor
On 11/27/18 8:26 AM, Simon Ironside wrote:
On 27/11/2018 14:50, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an
On 11/27/18 9:40 AM, Graham Allan wrote:
On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause
On 11/27/18 12:00 PM, Robert Sander wrote:
On 27.11.18 at 15:50, Abhishek Lekshmanan wrote:
As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
upgrade to v12.2.10 until the linked tracker issue has been fixed.
What about clusters currently running 12.2.9 (because this
On 11/27/18 12:11 PM, Josh Durgin wrote:
13.2.3 will have a similar revert, so if you are running anything other
than 12.2.9 or 13.2.2 you can go directly to 13.2.3.
Correction: I misremembered here, we're not reverting these patches for
13.2.3, so 12.2.9 users can upgrade to 13.2.2 or
;ceph-users on behalf of Josh Durgin"
wrote:
On 11/27/18 9:40 AM, Graham Allan wrote:
> On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
>> We're happy to announce the tenth bug fix release of the Luminous
>> v1
On 8/2/19 3:04 AM, Harald Staub wrote:
Right now our main focus is on the Veeam use case (VMWare backup), used
with an S3 storage tier. Currently we host a bucket with 125M objects
and one with 100M objects.
As Paul stated, searching common prefixes can be painful. We had some
cases that did