On 8/2/19 3:04 AM, Harald Staub wrote:
Right now our main focus is on the Veeam use case (VMWare backup), used
with an S3 storage tier. Currently we host a bucket with 125M objects
and one with 100M objects.
As Paul stated, searching common prefixes can be painful. We had some
cases that did
"ceph-users on behalf of Josh Durgin" wrote:
On 11/27/18 9:40 AM, Graham Allan wrote:
>
>
> On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
>>
>> We're happy to announce the tenth bug fix release of the Luminous
>> v1
On 11/27/18 12:11 PM, Josh Durgin wrote:
13.2.3 will have a similar revert, so if you are running anything other
than 12.2.9 or 13.2.2 you can go directly to 13.2.3.
Correction: I misremembered here, we're not reverting these patches for
13.2.3, so 12.2.9 users can upgrade to 13.2.2 or
On 11/27/18 12:00 PM, Robert Sander wrote:
On 27.11.18 at 15:50, Abhishek Lekshmanan wrote:
As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
upgrade to v12.2.10 until the linked tracker issue has been fixed.
What about clusters currently running 12.2.9 (because this
On 11/27/18 9:40 AM, Graham Allan wrote:
On 11/27/2018 08:50 AM, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause
On 11/27/18 8:26 AM, Simon Ironside wrote:
On 27/11/2018 14:50, Abhishek Lekshmanan wrote:
We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an
Looking forward to future updates.
Let me know if you need anything else.
--
Adam
On Thu, Apr 5, 2018 at 10:13 PM, Josh Durgin wrote:
On 04/05/2018 08:11 PM, Josh Durgin wrote:
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10
On 04/05/2018 08:11 PM, Josh Durgin wrote:
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10 of my 518 osds crash. I cannot start 2 of them without triggering
14 or so of them to crash repeatedly for more than an hour.
I
On 04/05/2018 06:15 PM, Adam Tygart wrote:
Well, the cascading crashes are getting worse. I'm routinely seeing
8-10 of my 518 osds crash. I cannot start 2 of them without triggering
14 or so of them to crash repeatedly for more than an hour.
I've run another one of them with more logging, debug
nd, but I haven't thought of a better one
for existing versions.
Josh
Sent from Nine
From: Florian Haas
Sent: Sep 15, 2017 3:43 PM
To: Josh Durgin
Cc: ceph-users@lists.ceph.com; Christian Theune
Subject: Re: [ceph-users] Clarification on sequence of recov
On 09/15/2017 01:57 AM, Florian Haas wrote:
On Fri, Sep 15, 2017 at 8:58 AM, Josh Durgin wrote:
This is more of an issue with write-intensive RGW buckets, since the
bucket index object is a single bottleneck if it needs recovery, and
all further writes to a shard of a bucket index will be
On 09/14/2017 12:44 AM, Florian Haas wrote:
On Thu, Sep 14, 2017 at 2:47 AM, Josh Durgin wrote:
On 09/13/2017 03:40 AM, Florian Haas wrote:
So we have a client that is talking to OSD 30. OSD 30 was never down;
OSD 17 was. OSD 30 is also the preferred primary for this PG (via
primary affinity
On 09/13/2017 03:40 AM, Florian Haas wrote:
So we have a client that is talking to OSD 30. OSD 30 was never down;
OSD 17 was. OSD 30 is also the preferred primary for this PG (via
primary affinity). The OSD now says that
- it does itself have a copy of the object,
- so does OSD 94,
- but that th
Could you post your crushmap? PGs mapping to no OSDs is a symptom of something
wrong there.
You can stop the osds from changing position at startup with 'osd crush update
on start = false':
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location
Josh
Sent from Nine
On 09/07/2017 11:31 AM, Bryan Stillwell wrote:
On 09/07/2017 10:47 AM, Josh Durgin wrote:
On 09/06/2017 04:36 PM, Bryan Stillwell wrote:
I was reading this post by Josh Durgin today and was pretty happy to
see we can get a summary of features that clients are using with the
'ceph fea
On 09/06/2017 04:36 PM, Bryan Stillwell wrote:
I was reading this post by Josh Durgin today and was pretty happy to see we can
get a summary of features that clients are using with the 'ceph features'
command:
http://ceph.com/community/new-luminous-upgrade-complete/
However, I hav
There weren't any changes in the way ceph was using leveldb from 10.2.7
to 10.2.9 that I could find.
Josh
On 17.07.2017 22:03, Josh Durgin wrote:
Both of you are seeing leveldb perform compaction when the osd starts
up. This can take a while for large amounts of omap data (created by
things like c
Both of you are seeing leveldb perform compaction when the osd starts
up. This can take a while for large amounts of omap data (created by
things like cephfs directory metadata or rgw bucket indexes).
The 'leveldb_compact_on_mount' option wasn't changed in 10.2.9, but
leveldb will compact automa
On 06/29/2017 08:16 PM, donglifec...@gmail.com wrote:
zhiqiang, Josh
what about the async recovery feature? I didn't see any update on
github recently, will it be further developed?
Yes, post-luminous at this point.
On 04/12/2017 09:26 AM, Gerald Spencer wrote:
Ah I'm running Jewel. Is there any information online about python3-rados
with Kraken? I'm having difficulties finding more than I initially posted.
What info are you looking for?
The interface for the python bindings is the same for python 2 and 3
On 03/08/2017 02:15 PM, Kent Borg wrote:
On 03/08/2017 05:08 PM, John Spray wrote:
Specifically?
I'm not saying you're wrong, but I am curious which bits in particular
you missed.
Object maps. Those transaction-y things. Object classes. Maybe more I
don't know about because I have been learni
On 02/19/2017 12:15 PM, Patrick Donnelly wrote:
On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote:
The least intrusive solution is to simply change the sandbox to allow
the standard file system module loading function as expected. Then any
user would need to make sure that every OSD had consi
On 09/16/2016 09:46 AM, Erick Perez - Quadrian Enterprises wrote:
Can someone point me to a thread or site that uses ceph+erasure coding
to serve block storage for Virtual Machines running with Openstack+KVM?
All references that I found are using erasure coding for cold data or
*not* VM block acc
On 09/13/2016 01:13 PM, Stuart Byma wrote:
Hi,
Can anyone tell me why librados creates multiple threads per object, and never
kills them, even when the ioctx is deleted? I am using the C++ API with a
single connection and a single IO context. More threads and memory are used for
each new obje
On 09/06/2016 10:16 PM, Dan Jakubiec wrote:
Hello, I need to issue the following commands on millions of objects:
rados_write_full(oid1, ...)
rados_setxattr(oid1, "attr1", ...)
rados_setxattr(oid1, "attr2", ...)
Would it make it any faster if I combined all 3 of these into a single
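For illustration, a minimal sketch of combining those three calls into one compound operation with the librados C write-op API (assuming an already open rados_ioctx_t `io`; object and attribute names are placeholders). Batching saves a network round trip per object; how much it helps overall depends on where the bottleneck is.

/* Sketch: batch write_full + two setxattrs as one compound librados op.
 * Assumes `io` is an open rados_ioctx_t; names are illustrative only. */
#include <rados/librados.h>
#include <string.h>

int write_with_attrs(rados_ioctx_t io, const char *oid,
                     const char *data, size_t len)
{
    rados_write_op_t op = rados_create_write_op();
    rados_write_op_write_full(op, data, len);                /* rados_write_full */
    rados_write_op_setxattr(op, "attr1", "value1", strlen("value1"));
    rados_write_op_setxattr(op, "attr2", "value2", strlen("value2"));
    /* All three updates are sent and applied together. */
    int r = rados_write_op_operate(op, io, oid, NULL, 0);
    rados_release_write_op(op);
    return r;
}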
On 03/17/2016 03:51 AM, Schlacta, Christ wrote:
I posted about this a while ago, and someone else has since inquired,
but I am seriously wanting to know if anybody has figured out how to
boot from an RBD device yet using ipxe or similar. Last I read,
loading the kernel and initrd from object stor
On 03/01/2016 10:03 PM, min fang wrote:
thanks, with your help, I set the read ahead parameter. What is the
cache parameter for kernel module rbd?
Such as:
1) what is the cache size?
2) Does it support write back?
3) Will read ahead be disabled if max bytes has been read into cache?
(similar the
On 02/26/2016 03:17 PM, Shinobu Kinjo wrote:
In jewel, as you mentioned, there will be "--max-objects" and "--object-size"
options.
That hint will go away or be mitigated with those options. Correct?
The io hint isn't sent by rados bench, just rbd. So even with those
options, rados bench still doesn
c.
Josh
Rgds,
Shinobu
- Original Message -
From: "Josh Durgin"
To: "Christian Balzer" , ceph-users@lists.ceph.com
Sent: Saturday, February 27, 2016 6:05:07 AM
Subject: Re: [ceph-users] Observations with a SSD based pool under Hammer
On 02/24/2016 07:10 PM, Christian
how much the benchmark fills the image, this could be a
large or small overhead compared to the amount of data written.
Josh
Jan
On 26 Feb 2016, at 22:05, Josh Durgin wrote:
On 02/24/2016 07:10 PM, Christian Balzer wrote:
10 second rados bench with 4KB blocks, 219MB written in total.
nand-wr
On 02/24/2016 07:10 PM, Christian Balzer wrote:
10 second rados bench with 4KB blocks, 219MB written in total.
NAND writes per SSD: 41 * 32 MB = 1312 MB.
10496 MB total written to all SSDs.
Amplification: 48!!!
Le ouch.
In my use case with rbd cache on all VMs I expect writes to be rather
large for the m
On 01/12/2016 10:34 PM, Wido den Hollander wrote:
On 01/13/2016 07:27 AM, wd_hw...@wistron.com wrote:
Thanks Wido.
So it seems there is no way to do this under Hammer.
Not very easily, no. You'd have to count and stat all objects for an RBD
image to figure this out.
For hammer you'd need anot
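For illustration, a rough sketch of that count-and-stat approach with the librados C API: list the pool, keep the objects whose names start with the image's block-name prefix, and sum their sizes (the prefix, e.g. "rbd_data.<id>.", and the open ioctx are assumed to be known).

/* Sketch: approximate an RBD image's used space by stat'ing its data objects.
 * Assumes `io` is an open rados_ioctx_t for the image's pool and `prefix` is
 * the image's block-name prefix (placeholder, e.g. "rbd_data.<id>."). */
#include <rados/librados.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

int64_t image_used_bytes(rados_ioctx_t io, const char *prefix)
{
    rados_list_ctx_t ctx;
    uint64_t total = 0;
    if (rados_nobjects_list_open(io, &ctx) < 0)
        return -1;
    const char *entry;
    while (rados_nobjects_list_next(ctx, &entry, NULL, NULL) == 0) {
        if (strncmp(entry, prefix, strlen(prefix)) != 0)
            continue;                      /* not one of this image's objects */
        uint64_t size;
        time_t mtime;
        if (rados_stat(io, entry, &size, &mtime) == 0)
            total += size;                 /* bytes written to this object */
    }
    rados_nobjects_list_close(ctx);
    return (int64_t)total;
}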
On 01/12/2016 11:11 AM, Stefan Priebe wrote:
Hi,
i want to add support for fast-diff and object map to our "old" firefly
v2 rbd images.
The current hammer release can't do this.
Is there any reason not to cherry-pick this one? (on my own)
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515
On 01/09/2016 02:34 AM, Wukongming wrote:
Hi, all
I notice this sentence "Running GFS or OCFS on top of RBD will not work with
caching enabled." on http://docs.ceph.com/docs/master/rbd/rbd-config-ref/. why? Is
there any way to open rbd cache with ocfs2 based on? Because I have a fio test w
On 01/12/2016 06:10 AM, Alex Gorbachev wrote:
Good day! I am working on a robust backup script for RBD and ran into a
need to reliably determine start and end snapshots for differential
exports (done with rbd export-diff).
I can clearly see these if dumping the ASCII header of the export file,
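For illustration, a small sketch of enumerating an image's snapshots through the librbd C API instead of parsing the export header (assumes an open rados_ioctx_t `io`; the image name is a placeholder).

/* Sketch: list an image's snapshots, e.g. to pick start/end points for an
 * incremental export. Assumes `io` is an open rados_ioctx_t. */
#include <rbd/librbd.h>
#include <stdio.h>

int print_snapshots(rados_ioctx_t io, const char *image_name)
{
    rbd_image_t image;
    int r = rbd_open(io, image_name, &image, NULL);  /* NULL = open the head */
    if (r < 0)
        return r;

    rbd_snap_info_t snaps[64];
    int max = 64;                   /* grow and retry if this returns -ERANGE */
    int count = rbd_snap_list(image, snaps, &max);
    for (int i = 0; i < count; i++)
        printf("snap id %llu  name %s  size %llu\n",
               (unsigned long long)snaps[i].id, snaps[i].name,
               (unsigned long long)snaps[i].size);
    if (count >= 0)
        rbd_snap_list_end(snaps);   /* frees the snapshot name strings */
    rbd_close(image);
    return count < 0 ? count : 0;
}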
Trying several times, the merge-diff sometimes works and
sometimes does not, using the same source files.
On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev <a...@iss-integration.com> wrote:
Hi Josh, looks like I celebrated too soon:
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin
This is the problem:
http://tracker.ceph.com/issues/14030
As a workaround, you can pass the first diff in via stdin, e.g.:
cat snap1.diff | rbd merge-diff - snap2.diff combined.diff
Josh
On 12/08/2015 11:11 PM, Josh Durgin wrote:
On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
Hi Josh,
On
On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
Hi Josh,
On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin <jdur...@redhat.com> wrote:
On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
When trying to merge two results of rbd export-diff, the
following error
On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
When trying to merge two results of rbd export-diff, the following error
occurs:
iss@lab2-b1:~$ rbd export-diff --from-snap autosnap120720151500
spin1/scrun1@autosnap120720151502 /data/volume1/scrun1-120720151502.bck
iss@lab2-b1:~$ rbd export-diff -
ec026d0594
Merge: 773713b 615f8f4
Author: Josh Durgin
Date: Thu Nov 12 19:32:42 2015 -0800
Merge branch 'pybind3' of https://github.com/dcoles/ceph into
wip-pybind3
pybind: Add Python 3 support for rados and rbd modules
Reviewed-by: Josh Durgin
Conflicts:
On 11/13/2015 09:20 PM, Master user for YYcloud Groups wrote:
Hi,
I found there is only librbd's python wrapper.
Do you know how to port this library to other languages such as
Java/Ruby/Perl etc?
Here are a few other ports of librbd:
Java: https://github.com/ceph/rados-java
Ruby: https://gi
On 11/13/2015 03:47 PM, Artie Ziff wrote:
Hello...
Want to share some more info about the rbd problem I am having.
I took Jason's suggestion to heart as there was a past errant configure
that was run with prefix not getting set so it installed into /. I never
did clean that up but today I configu
On 10/19/2015 02:45 PM, Jan Schermer wrote:
On 19 Oct 2015, at 23:15, Gregory Farnum wrote:
On Mon, Oct 19, 2015 at 11:18 AM, Jan Schermer wrote:
I'm sorry for appearing a bit dull (on purpose), I was hoping I'd hear what
other people using Ceph think.
If I were to use RADOS directly in m
On 09/10/2015 11:53 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
My notes show that it should have landed in 4.1, but I also have
written down that it wasn't merged yet. Just trying to get a
confirmation on the version that it did land in.
Yes, it landed in 4.1.
J
On 08/06/2015 03:10 AM, Daleep Bais wrote:
Hi,
Whenever I restart or check the logs for OSD, MON, I get the below warning
message.
I am running a test cluster of 09 OSDs and 03 MON nodes.
[ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment
variable not set. Disabling LTTng-UST per-
On 08/01/2015 07:52 PM, pixelfairy wrote:
I'd like to look at a read-only copy of running virtual machines for
compliance and potentially malware checks that the VMs are unaware of.
the first note on http://ceph.com/docs/master/rbd/rbd-snapshot/ warns
that the filesystem has to be in a consistent
On 07/23/2015 06:31 AM, Jan Schermer wrote:
Hi all,
I have been looking for a way to alleviate the overhead of RBD snapshots/clones
for some time.
In our scenario there are a few “master” volumes that contain production data,
and are frequently snapshotted and cloned for dev/qa use. Those
snapshots/
On 07/15/2015 11:48 AM, Shane Gibson wrote:
Somnath - thanks for the reply ...
:-) Haven't tried anything yet - just starting to gather
info/input/direction for this solution.
Looking at the S3 API info [2] - there is no mention of support for the
"S3a" API extensions - namely "rename" suppor
qemu.block/2500
Josh
- Original Message -
From: "Jason Dillaman"
To: "Andrey Korolyov"
Cc: "Josh Durgin", "aderumier",
"ceph-users"
Sent: Monday, June 8, 2015 22:29:10
Subject: Re: [ceph-users] rbd cache + libvirt
On Mon, Jun 8, 2015 at
On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote:
Hi,
looking at the latest version of QEMU,
It seems that it was already this behaviour since the addition of rbd_cache
parsing in rbd.c by Josh in 2012
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/rbd.c;h=eebc3344620058322bb53ba8376af4a82
On 06/03/2015 04:15 AM, Jan Schermer wrote:
Thanks for a very helpful answer.
So if I understand it correctly then what I want (crash consistency with RPO>0)
isn’t possible now in any way.
If there is no ordering in RBD cache then ignoring barriers sounds like a very
bad idea also.
Yes, that'
r?
It'll be in 0.94.3.
0.94.2 is close to release already: http://tracker.ceph.com/issues/11492
Josh
Thanks,
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jun 3, 2015 at 4:00 PM, Josh Durgin wrote:
On 06/03/2015 02:31 PM,
On 06/03/2015 02:31 PM, Robert LeBlanc wrote:
We are experiencing a problem where nova is opening up all kinds of
sockets like:
nova-comp 20740 nova 1996u unix 0x8811b3116b40 0t0 41081179
/var/run/ceph/ceph-client.volumes.20740.81999792.asok
hitting the open file limits rather quickly
On 06/01/2015 03:41 AM, Jan Schermer wrote:
Thanks, that’s it exactly.
But I think that’s really too much work for now, that’s why I really would like
to see a quick-win by using the local RBD cache for now - that would suffice
for most workloads (not too many people run big databases on CEPH n
e clues about what's going slower.
Josh
On Tue, Apr 14, 2015 at 12:36 PM, Josh Durgin mailto:jdur...@redhat.com>> wrote:
I don't see any commits that would be likely to affect that between
0.80.7 and 0.80.9.
Is this after upgrading an existing cluster?
Cou
ps in ceph.conf on the cinder
node. This affects delete speed, since rbd tries to delete each object in a
volume.
Josh
From: shiva rkreddy
Sent: Apr 14, 2015 5:53 AM
To: Josh Durgin
Cc: Ken Dreyer; Sage Weil; Ceph Development; ceph-us...@ceph.com
Subject: Re: v0.80.8 and librbd performance
On 04/08/2015 09:37 PM, Yuming Ma (yumima) wrote:
Josh,
I think we are using plain live migration and not mirroring block drives
as the other test did.
Do you have the migration flags or more from the libvirt log? Also
which versions of qemu is this?
The libvirt log message about qemuMigratio
On 04/08/2015 11:40 AM, Jeff Epstein wrote:
Hi, thanks for answering. Here are the answers to your questions.
Hopefully they will be helpful.
On 04/08/2015 12:36 PM, Lionel Bouton wrote:
I probably won't be able to help much, but people knowing more will
need at least: - your Ceph version, - th
Yes, you can use multiple ioctxs with the same underlying rados connection.
There's no hard limit on how many, it depends on your usage if/when a single
rados connection becomes a bottleneck.
It's safe to use different ioctxs from multiple threads. IoCtxs have some local
state like namespace,
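For illustration, a minimal sketch of that pattern with the librados C API: one cluster connection, several I/O contexts, each of which can be handed to its own thread (pool and namespace names are placeholders).

/* Sketch: share one rados connection between several ioctxs. Each ioctx has
 * its own local state (pool, namespace, ...) and may be used by its own
 * thread. Pool and namespace names are placeholders. */
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    if (rados_create(&cluster, NULL) < 0)      /* default client.admin id */
        return 1;
    rados_conf_read_file(cluster, NULL);       /* default ceph.conf locations */
    if (rados_connect(cluster) < 0)
        return 1;

    rados_ioctx_t io_a, io_b;
    rados_ioctx_create(cluster, "pool-a", &io_a);
    rados_ioctx_create(cluster, "pool-b", &io_b);
    rados_ioctx_set_namespace(io_b, "tenant1"); /* per-ioctx local state */

    /* ... use io_a and io_b here, possibly from different threads ... */

    rados_ioctx_destroy(io_a);
    rados_ioctx_destroy(io_b);
    rados_shutdown(cluster);
    return 0;
}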
Like the last comment on the bug says, the message about block migration (drive
mirroring) indicates that nova is telling libvirt to copy the virtual disks,
which is not what should happen for ceph or other shared storage.
For ceph just plain live migration should be used, not block migration.
On 04/01/2015 02:42 AM, Jimmy Goffaux wrote:
English Version :
Hello,
I found a strange behavior in Ceph. This behavior is visible on Buckets
(RGW) and pools (RBD).
pools:
``
root@:~# qemu-img info rbd:pool/kibana2
image: rbd:pool/kibana2
file format: raw
virtual size: 30G (32212254720 bytes)
On 03/26/2015 10:46 AM, Gregory Farnum wrote:
I don't know why you're mucking about manually with the rbd directory;
the rbd tool and rados handle cache pools correctly as far as I know.
That's true, but the rados tool should be able to manipulate binary data
more easily. It should probably be
On 03/05/2015 12:46 AM, koukou73gr wrote:
On 03/05/2015 03:40 AM, Josh Durgin wrote:
It looks like your libvirt rados user doesn't have access to whatever
pool the parent image is in:
librbd::AioRequest: write 0x7f1ec6ad6960
rbd_data.24413d1b58ba.0186 1523712~4096 should_com
On 03/04/2015 01:36 PM, koukou73gr wrote:
On 03/03/2015 05:53 PM, Jason Dillaman wrote:
Your procedure appears correct to me. Would you mind re-running your
cloned image VM with the following ceph.conf properties:
[client]
rbd cache off
debug rbd = 20
log file = /path/writeable/by/qemu.$pid.lo
On 03/02/2015 04:16 AM, koukou73gr wrote:
Hello,
Today I thought I'd experiment with snapshots and cloning. So I did:
rbd import --image-format=2 vm-proto.raw rbd/vm-proto
rbd snap create rbd/vm-proto@s1
rbd snap protect rbd/vm-proto@s1
rbd clone rbd/vm-proto@s1 rbd/server
And then proceeded
On 03/03/2015 03:28 PM, Ken Dreyer wrote:
On 03/03/2015 04:19 PM, Sage Weil wrote:
Hi,
This is just a heads up that we've identified a performance regression in
v0.80.8 from previous firefly releases. A v0.80.9 is working its way
through QA and should be out in a few days. If you haven't upg
'allow r class-read pool=foo namespace=""
object_prefix rbd_id, allow rwx pool=foo namespace=bar'
Cinder or other management layers would still want broader access, but
these more restricted keys could be the only ones exposed to QEMU.
Josh
On 13 February 2015 at 05:57, Josh Durgin
> From: "Logan Barfield"
> We've been running some tests to try to determine why our FreeBSD VMs
> are performing much worse than our Linux VMs backed by RBD, especially
> on writes.
>
> Our current deployment is:
> - 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6)
> - 2x OSD nodes (8x SSDs each,
On 02/10/2015 07:54 PM, Blair Bethwaite wrote:
Just came across this in the docs:
"Currently (i.e., firefly), namespaces are only useful for
applications written on top of librados. Ceph clients such as block
device, object storage and file system do not currently support this
feature."
Then fou
On 02/05/2015 07:44 AM, Udo Lembke wrote:
Hi all,
is there any command to flush the rbd cache like the
"echo 3 > /proc/sys/vm/drop_caches" for the os cache?
librbd exposes it as rbd_invalidate_cache(), and qemu uses it
internally, but I don't think you can trigger that via any user-facing
qemu
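For illustration, a minimal sketch of calling that through the librbd C API yourself (the flush beforehand is an extra precaution, not something the post requires; `io` is an open ioctx and the image name is a placeholder).

/* Sketch: flush, then drop, the librbd cache for one open image using the
 * C API call mentioned above. `io` is an open rados_ioctx_t; the image name
 * is a placeholder. */
#include <rbd/librbd.h>

int drop_rbd_cache(rados_ioctx_t io, const char *image_name)
{
    rbd_image_t image;
    int r = rbd_open(io, image_name, &image, NULL);
    if (r < 0)
        return r;
    rbd_flush(image);                    /* write out any dirty cached data */
    r = rbd_invalidate_cache(image);     /* then discard the cache contents */
    rbd_close(image);
    return r;
}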
On 01/06/2015 10:24 AM, Robert LeBlanc wrote:
Can't this be done in parallel? If the OSD doesn't have an object then
it is a noop and should be pretty quick. The number of outstanding
operations can be limited to 100 or a 1000 which would provide a
balance between speed and performance impact if
On Tue, Jan 6, 2015 at 4:19 PM, Josh Durgin wrote:
On 01/06/2015 10:24 AM, Robert LeBlanc wrote:
Can't this be done in parallel? If the OSD doesn't have an object then
it is a noop and should be pretty quick. The number of outstanding
operations can be limited to 100 or a 1000 whic
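For illustration, a sketch of that bounded-parallelism idea with librados async completions: deletes are issued asynchronously, with the number in flight capped at a window of the size the thread suggests (window size and object list are illustrative).

/* Sketch: remove many objects in parallel while capping the number of
 * outstanding ops, as suggested above. `io` is an open rados_ioctx_t;
 * `oids`/`count` and WINDOW are illustrative. Removing an object that does
 * not exist simply completes with -ENOENT. */
#include <rados/librados.h>
#include <string.h>

#define WINDOW 100

int remove_many(rados_ioctx_t io, const char **oids, size_t count)
{
    rados_completion_t inflight[WINDOW];
    size_t i, used = 0;

    for (i = 0; i < count; i++) {
        if (used == WINDOW) {
            /* Wait for the oldest op before issuing another one. */
            rados_aio_wait_for_complete(inflight[0]);
            rados_aio_release(inflight[0]);
            memmove(inflight, inflight + 1, (WINDOW - 1) * sizeof(inflight[0]));
            used--;
        }
        rados_aio_create_completion(NULL, NULL, NULL, &inflight[used]);
        rados_aio_remove(io, oids[i], inflight[used]);
        used++;
    }
    /* Drain whatever is still outstanding. */
    for (i = 0; i < used; i++) {
        rados_aio_wait_for_complete(inflight[i]);
        rados_aio_release(inflight[i]);
    }
    return 0;
}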
image
for write access (all handled automatically by librbd).
Using watch/notify to coordinate multi-client access would get complex
and inefficient pretty fast, and in general is best left to cephfs
rather than rbd.
Josh
On Jan 6, 2015 5:35 PM, "Josh Durgin" <josh.dur...@inktank.c
On 12/18/2014 10:49 AM, Travis Rhoden wrote:
One question re: discard support for kRBD -- does it matter which format
the RBD is? Format 1 and Format 2 are okay, or just for Format 2?
It shouldn't matter which format you use.
Josh
On 12/17/2014 03:49 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using XFS. I would like it accessible from two different hosts, one
mapped/mounted read-only, and one mapped/mounted
On 10/24/2014 08:21 AM, Xu (Simon) Chen wrote:
Hey folks,
I am trying to enable OpenStack to use RBD as image backend:
https://bugs.launchpad.net/nova/+bug/1226351
For some reason, nova-compute segfaults due to librados crash:
./log/SubsystemMap.h: In function 'bool
ceph::log::SubsystemMap::sh
On 09/24/2014 04:57 PM, Brian Rak wrote:
I've been doing some testing of importing virtual machine images, and
I've found that 'rbd import' is at least 2x as slow as 'qemu-img
convert'. Is there anything I can do to speed this process up? I'd
like to use rbd import because it gives me a little
On 09/09/2014 07:06 AM, yuelongguang wrote:
hi, josh.durgin:
i want to know how librbd launch io request.
use case:
inside vm, i use fio to test rbd-disk's io performance.
fio's parameters are bs=4k, direct io, qemu cache=none.
in this case, if librbd just send what it gets from vm, i mean no
ga
On 07/07/2014 05:41 AM, Patrycja Szabłowska wrote:
OK, the mystery is solved.
From https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10368.html
"During a multi part upload you can't upload parts smaller than 5M"
I've tried to upload smaller chunks, like 10KB. I've changed chunk size
to
On 07/04/2014 08:36 AM, Peter wrote:
i am having issues running radosgw-agent to sync data between two
radosgw zones. As far as i can tell both zones are running correctly.
My issue is when i run the radosgw-agent command:
radosgw-agent -v --src-access-key --src-secret-key
--dest-access-key
On 06/10/2014 01:56 AM, Vilobh Meshram wrote:
How does CEPH guarantee data isolation for volumes which are not meant
to be shared in an OpenStack tenant?
When used with OpenStack, the data isolation is provided at the
OpenStack level, so that all users who are part of the same tenant will be
able to ac
On Fri, 6 Jun 2014 17:34:56 -0700
Tyler Wilson wrote:
> Hey All,
>
> Simple question, does 'rbd export-diff' work with children snapshot
> aka;
>
> root:~# rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap
> compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk
> compute/54f3b23c-facf-4
On 05/21/2014 03:29 PM, Vilobh Meshram wrote:
Hi All,
I want to understand how Ceph users go about quota management when
Ceph is used with OpenStack.
1. Is it recommended to use a common pool, say “volumes”, for creating
volumes which is shared by all tenants? In this case a common
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote:
On Wednesday, May 21, 2014 at 08:20 -0700, Sage Weil wrote:
You're certain that that is the correct prefix for the rbd image you
removed? Do you see the objects lists when you do 'rados -p rbd ls - |
grep '?
I'm pretty sure, yes: since I didn't s
On 05/19/2014 01:48 AM, JinHwan Hwang wrote:
I have been trying to do live migration on vm which is running on rbd.
But so far, they only give me ' internal error: rbd username 'libvirt'
specified but secret not found' when i do live migration.
ceph-admin : source host
host : destination host
r
On 05/08/2014 09:42 AM, Federico Iezzi wrote:
Hi guys,
First of all congratulations on Firefly Release!
IMHO I think that this release is a huge step for Ceph Project!
Just for fun, this morning I upgraded one my staging Ceph cluster used
by an OpenStack Havana installation (Canonical cloud arch
On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
Thanks Kevin for the full explanation!
cache.writeback=on,cache.direct=off,cache.no-flush=off
I didn't known about the cache options split,thanks.
rbd does, to my knowledge, not use the kernel page cache, so we're safe
from that part. It
On 04/03/2014 03:36 PM, Jonathan Gowar wrote:
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
Dear all,
I have a rbd image that I can't delete. It contains a snapshot that is
"busy"
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
2014-03-04 14:02:04.0620
On 03/31/2014 03:03 PM, Brendan Moloney wrote:
Hi,
I was wondering if RBD snapshots use the CRUSH map to distribute
snapshot data and live data on different failure domains? If not, would
it be feasible in the future?
Currently rbd snapshots and live objects are stored in the same place,
since
On 03/26/2014 05:50 PM, Craig Lewis wrote:
I made a typo in my timeline too.
It should read:
At 14:14:00, I started OSD 4, and waited for ceph-w to stabilize. CPU
usage was normal.
At 14:15:10, I ran radosgw-admin --name=client.radosgw.ceph1c regions
list && radosgw-admin --name=client.radosgw.c
On 03/26/2014 02:26 PM, gustavo panizzo wrote:
hello
one of our OSDs crashed; unfortunately it had an unreplicated pool
(size = 1).
i want to recover as much as possible using raw files, i've tried using
this tool
https://raw.githubusercontent.com/smmoore/ceph/4eb806fdcc02632bf4ac60f302c4e1
On 03/25/2014 12:18 AM, Ray Lv wrote:
Hi there,
We got a case to use the C APIs of compound operation in librados. These
interfaces are only exported as C APIs from Firefly release, such as
https://github.com/ceph/ceph/blob/firefly/src/include/rados/librados.h#L1834.
But our RADOS deployment wil
On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin wrote:
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our best to help it land, but it was ultimately rejected. Furthermore,
an additional requirement was imposed to
On 02/05/2014 01:23 PM, Craig Lewis wrote:
On 2/4/14 20:02 , Josh Durgin wrote:
From the log it looks like you're hitting the default maximum number of
entries to be processed at once per shard. This was intended to prevent
one really busy shard from blocking progress on syncing other s
On 02/04/2014 07:44 PM, Craig Lewis wrote:
On 2/4/14 17:06 , Craig Lewis wrote:
On 2/4/14 14:43 , Yehuda Sadeh wrote:
Does it ever catching up? You mentioned before that most of the writes
went to the same two buckets, so that's probably one of them. Note
that writes to the same bucket are b
On 01/08/2014 11:07 AM, Gautam Saxena wrote:
When booting an image from Openstack in which CEPH is the back-end for
both volumes and images, I'm noticing that it takes about ~10 minutes
during the "spawning" phase -- I believe Openstack is making a fully
copy of the 30 GB Windows image. Shouldn't
On 01/02/2014 10:51 PM, James Harper wrote:
I've not used ceph snapshots before. The documentation says that the rbd device
should not be in use before creating a snapshot. Does this mean that creating a
snapshot is not an atomic operation? I'm happy with a crash consistent
filesystem if that'
On 01/02/2014 01:40 PM, James Harper wrote:
I just had to restore an ms exchange database after a ceph hiccup (no actual
data lost - Exchange is very good like that with its no loss restore!). The
order
of events went something like:
. Loss of connection on osd to the cluster network (public
On 12/05/2013 02:37 PM, Dmitry Borodaenko wrote:
Josh,
On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin wrote:
I hope I can release or push commits to this branch, which contains live-migration,
an incorrect filesystem size fix and ceph-snapshot support, in a few days.
Can't wait to see this patch