d changing this, along with any other related settings, to no
avail -- for whatever I do, the delay remains at 20 seconds.
Anything else to try?
Jason
Howdy —
I’ve had a failure on a small, Dumpling (0.67.4) cluster running on Ubuntu
13.10 machines. I had three OSD nodes (running 6 OSDs each), and lost two of
them in a beautiful failure. One of these nodes even went so far as to
scramble the XFS filesystems of my OSD disks (I’m curious if i
1",
> "objects": []},
> "peer_backfill_info": { "begin": "0\/\/0\/\/-1",
> "end": "0\/\/0\/\/-1",
> "objects": []},
> "ba
On Jun 3, 2014, at 5:58 PM, Smart Weblications GmbH - Florian Wiessner
wrote:
> I think it would be less painful if you had removed and then immediately
> recreated the corrupted osd to avoid 'holes' in the osd ids. It should work
> with your configuration anyhow, though.
I agree with
, Jason Harley wrote:
> On Jun 3, 2014, at 5:58 PM, Smart Weblications GmbH - Florian Wiessner
> wrote:
>
>> I think it would be less painful if you had removed and then immediately
>> recreated the corrupted osd to avoid 'holes' in the osd ids. It should
>
Howdy —
I’d like to run the ceph REST API behind nginx, and uWSGI with UNIX sockets
seems like a smart way to do this. Has anyone attempted to get this setup
working? I’ve tried writing a uWSGI wrapper as well as just telling ‘uwsgi’ to
call the ‘ceph_rest_api’ module without luck.
./JRH
On Jun 16, 2014, at 8:52 PM, Wido den Hollander wrote:
>> Op 16 jun. 2014 om 19:23 heeft "Jason Harley" het
>> volgende geschreven:
>>
>> Howdy —
>>
>> I’d like to run the ceph REST API behind nginx, and uWSGI and UNIX sockets
>> seems l
Hi list —
I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single
monitor (this, it seems, was my mistake). The monitor node went down hard and
it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’
manually with ‘debug_mon 20’ and ‘debug_ms 20’ gave the
Hi Joao,
On Jul 3, 2014, at 7:57 PM, Joao Eduardo Luis wrote:
> We don't have a way to repair leveldb. Having multiple monitors usually helps
> with such tricky situations.
I know this, but for this small dev cluster I wasn’t thinking about corruption
of my mon’s backing store. Silly me :)
and the
> ceph-disks don’t persist over a boot cycle.
>
>
>
> Is there a document anywhere that anyone knows of that explains a step-by-step
> process for bringing up multiple OSDs per host - 1 HDD with an SSD
> journal partition per OSD?
>
> Thanks,
>
> Br
Hi Pierre —
You can manipulate your CRUSH map to make use of ‘chassis’ in addition to the
default ‘host’ type. I’ve done this with FatTwin and FatTwin^2 boxes with
great success.
For more reading take a look at:
http://ceph.com/docs/master/rados/operations/crush-map/
In particular the ‘Move
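For example (bucket and host names below are placeholders; adapt them to your
own topology):

  ceph osd crush add-bucket chassis1 chassis
  ceph osd crush move chassis1 root=default
  ceph osd crush move node1 chassis=chassis1
  ceph osd crush move node2 chassis=chassis1

You would then point your rule's chooseleaf step at the 'chassis' type so
replicas land on different chassis.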
As the names suggest, the former removes the object from the store while
the latter deletes only the bucket index entry.
Check the code for more details.
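A sketch, assuming the two subcommands being compared are 'object rm' and
'object unlink' (bucket/object names are placeholders):

  # removes the object itself from the store
  radosgw-admin object rm --bucket=my_bucket --object=my_test
  # removes only the object's entry from the bucket index
  radosgw-admin object unlink --bucket=my_bucket --object=my_test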
Jason
2014-08-29 19:09 GMT+08:00 zhu qiang :
> Hi all,
> From the radosgw-admin command:
> # radosgw-admin object rm --object=my_test
Guess you should multiply 27 by bs=4k?
Jason
2014-08-29 15:52 GMT+08:00 lixue...@chinacloud.com.cn <
lixue...@chinacloud.com.cn>:
>
> guys:
> There's a Ceph cluster running, with nodes connected by 10Gb
> cable. We defined fio's bs=4k and the object
Hi,
What's the status of your cluster after the node failure?
Jason
2014-09-04 21:33 GMT+08:00 Christian Balzer :
>
> Hello,
>
> On Thu, 4 Sep 2014 20:56:31 +0800 Ding Dinghua wrote:
>
> Aside from what Loic wrote, why not replace the network controller or if
>
Check the docs.
2014-09-09 11:02 GMT+08:00 廖建锋 :
> Looks like it doesn't work. I noticed that 0.85 added a superblock to the
> leveldb OSD, but the OSDs I already have do not have a superblock.
> Is there anybody who can tell me how to upgrade the OSDs?
>
>
>
> *From:* ceph-users
> *Sent:* 2014-09-09 10:32
>
Hi,
You should try the */etc/init.d/ceph* command on the host where the OSD resides.
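For example (osd.3 is a placeholder for your OSD id):

  /etc/init.d/ceph start osd.3
  /etc/init.d/ceph restart osd.3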
Jason
2014-09-19 16:33 GMT+08:00 Loic Dachary :
> Hi,
>
> The documentation indeed contains an example that does not work. This
> should fix it :
> https://github.com/dach
. The whole object would not be written to the OSDs unless you
wrote data to the whole object.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Xu (Simon) Chen"
To: ceph-users@lists.ceph.com
Sent: Wednesday, February 25,
/projects/rbd/issues?
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "koukou73gr"
To: ceph-users@lists.ceph.com
Sent: Monday, March 2, 2015 7:16:08 AM
Subject: [ceph-users] qemu-kvm and cloned rbd image
Hello
** rbd/small and backup/small are now consistent through snap2. import-diff
automatically created backup/small@snap2 after importing all changes.
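A minimal sketch of that incremental pattern, assuming snap1 was the previous
snapshot already sent to the backup pool:

  rbd snap create rbd/small@snap2
  rbd export-diff --from-snap snap1 rbd/small@snap2 - | rbd import-diff - backup/small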
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Steve Anthony"
To:
An RBD image is split up into (by default 4MB) objects within the OSDs. When
you delete an RBD image, all the objects associated with the image are removed
from the OSDs. The objects are not securely erased from the OSDs if that is
what you are asking.
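If you want to see the backing objects for an image yourself (pool/image names
are placeholders):

  rbd info rbd/myimage | grep block_name_prefix
  rados -p rbd ls | grep <block_name_prefix from above>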
--
Jason Dillaman
Red Hat
dilla
s rbd_directory/rbd_children" to see the data within the files.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Matthew Monaco"
To: ceph-users@lists.ceph.com
Sent: Sunday, April 12, 2015 10:57:46 PM
Subject: [ceph-use
Yes, when you flatten an image, the snapshots will remain associated to the
original parent. This is a side-effect of how librbd handles CoW with
clones. There is an open RBD feature request to add support for flattening
snapshots as well.
--
Jason Dillaman
Red Hat
dilla
ldren object so that librbd no longer thinks
any image is a child of another.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Matthew Monaco"
To: "Jason Dillaman"
Cc: ceph-users@lists.ceph.com
Sent: Monday, Apri
Can you add "debug rbd = 20" your ceph.conf, re-run the command, and provide a
link to the generated librbd log messages?
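Something like the following in the [client] section should do it (the log
path is just an example):

  [client]
      debug rbd = 20
      log file = /var/log/ceph/$name.$pid.log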
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Nikola Ciprich"
To: ceph-users
'--image-features' when creating the image?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Nikola Ciprich"
To: "Jason Dillaman"
Cc: ceph-users@lists.ceph.com
Sent: Monday, April 20, 2015 12:41:26 PM
into Hammer at
some point in the future. Therefore, I would recommend waiting for the full
toolset to become available.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Christoph Adomeit"
To: ceph-users@lists.ceph.com
Sent: Tuesda
The issue appears to be tracked with the following BZ for RHEL 7:
https://bugzilla.redhat.com/show_bug.cgi?id=1187533
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Wido den Hollander"
To: "Somnath Ro
You are correct -- it is little endian like the other values. I'll open a
ticket to correct the document.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Ultral"
To: ceph-us...@ceph.com
Sent: Thursday, May 7,
two
snapshots and no trim operations released your changes back? If you diff from
move2db24-20150428 to HEAD, do you see all your changes?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Ultral"
To: "ceph-users&qu
a few kilobytes of
deltas)? Also, would it be possible for you to create a new, test image in the
same pool, snapshot it, use 'rbd bench-write' to generate some data, and then
verify if export-diff is properly working against the new image?
--
Jason Dillaman
Red Hat
dilla..
/master/install/get-packages/#add-ceph-development
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Pavel V. Kaygorodov"
To: "Tuomas Juntunen"
Cc: "ceph-users"
Sent: Tuesday, May 12, 2015 3:55:21 PM
Subjec
e your issues on Giant and was unable to recreate
it. I would normally ask for a log dump with 'debug rbd = 20', but given the
size of your image, that log will be astronomically large.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message ---
th/to/my/new/ceph.conf" QEMU parameter where the RBD cache is
explicitly disabled [2].
[1]
http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478
[2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage
--
Jason Dillaman
Red
the short-term, you can
remove the "rbd cache" setting from your ceph.conf so that QEMU controls it
(i.e. it cannot get overridden when reading the configuration file) or use a
different ceph.conf for a drive which requires different cache settings from
the default configuration's settings.
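For example, a hypothetical drive definition pointing QEMU at an alternate
config file (syntax per the QEMU/RBD docs linked above):

  -drive format=raw,file=rbd:mypool/myimage:conf=/etc/ceph/ceph-nocache.conf,cache=none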
Jason
actor the current cache mutex into finer-grained locks.
Jason
cs (or can you gather any statistics) that indicate the
percentage of block-size, zeroed extents within the clone images' RADOS
objects? If there is a large amount of waste, it might be possible /
worthwhile to optimize how RBD handles copy-on-write operations against the
clone.
--
Jas
will locate all associated RADOS objects, download the
objects one at a time, and perform a scan for fully zeroed blocks. It's not
the most CPU efficient script, but it should get the job done.
[1] http://fpaste.org/248755/43803526/
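In case that paste expires, a rough shell equivalent of the same idea (GNU
cmp; pool/image names are placeholders):

  prefix=$(rbd info rbd/myimage | awk '/block_name_prefix/ {print $2}')
  for obj in $(rados -p rbd ls | grep "$prefix"); do
      rados -p rbd get "$obj" /tmp/obj.bin
      # an object counts as fully zeroed if it matches /dev/zero for its whole length
      if cmp -s -n "$(stat -c%s /tmp/obj.bin)" /tmp/obj.bin /dev/zero; then
          echo "$obj: fully zeroed"
      fi
  done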
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla
There currently is no mechanism to rename snapshots without hex editing the RBD
image header data structure. I created a new Ceph feature request [1] to add
this ability in the future.
[1] http://tracker.ceph.com/issues/12678
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla
It sounds like you have the rados CLI tool from an earlier Ceph release (< Hammer)
installed and it is attempting to use the librados shared library from a newer
(>= Hammer) version of Ceph.
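You can double check which librados the CLI is actually loading with:

  rados --version
  ldd $(which rados) | grep librados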
Jason
- Original Message -
> From: "Aakanksha Pudipeddi-SSI"
> To: ceph
That rbd CLI command is a new feature that will be included with the upcoming
infernalis release. In the meantime, you can use this approach [1] to estimate
your RBD image usage.
[1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/
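The gist of that approach is summing the extents reported by 'rbd diff', e.g.
(pool/image names are placeholders):

  rbd diff mypool/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'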
--
Jason Dillaman
Red Hat Ceph Storage Engineering
"thread apply all
bt". With the gcore or backtrace method, we would need a listing of all the
package versions installed on the machine to recreate a similar debug
environment.
Thanks,
Jason
- Original Message -
> From: "Christoph Adomeit"
> To: ceph-users
performance closer to
native performance with 8K blocks?
Thanks in advance.
--
--
*Jason Villalta*
Co-founder
800.799.4407x1230 | www.RubixTechnology.com<http://www.rubixtechnology.com/>
17, 2013 at 10:56 AM, Campbell, Bill <
bcampb...@axcess-financial.com> wrote:
> Windows default (NTFS) is a 4k block. Are you changing the allocation
> unit to 8k as a default for your configuration?
>
> --
> *From: *"Gregory Farnum"
al location. Are your journals on separate disks or on the
> same disk as the OSD? What is the replica size of your pool?
>
> --
> *From: *"Jason Villalta"
> *To: *"Bill Campbell"
> *Cc: *"Gregory Farnum" , "ceph-user
You can deploy an OSD to a folder using ceph-deploy. Use: ceph-deploy osd
prepare host:/path
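For example (hypothetical hostname and path; the directory must already exist
on the target host):

  ceph-deploy osd prepare node1:/var/local/osd0
  ceph-deploy osd activate node1:/var/local/osd0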
On Sep 17, 2013 1:40 PM, "Jordi Arcas" wrote:
> Hi!
> I have a remote server with a single disk where Ubuntu is installed. I can't
> create another partition on the disk to install an OSD because it is mounted.
> There
of
>> clients, and if you don't force those 8k sync IOs (which RBD won't,
>> unless the application asks for them by itself using directIO or
>> frequent fsync or whatever) your performance will go way up.
>> -Greg
>> Software Engineer #42 @ http://inktank.com | h
directIO or
> frequent fsync or whatever) your performance will go way up.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Sep 17, 2013 at 1:47 PM, Jason Villalta
> wrote:
> >
> > Here are the stats with direct io.
> >
;
> RADOS performance from what I've seen is largely going to hinge on replica
> size and journal location. Are your journals on separate disks or on the
> same disk as the OSD? What is the replica size of your pool?
>
> --
> *From: *"Jason Vi
say it would make
sense to just use SSD for the journal and a spindle disk for data and reads.
On Tue, Sep 17, 2013 at 5:12 PM, Jason Villalta wrote:
> Here are the results:
>
> dd of=ddbenchfile if=/dev/zero bs=8K count=100 oflag=dsync
> 819200 bytes (8.2 GB) copied, 266.87
the speed be the same or would the read speed be a factor of 10
less than the speed of the underlying disk?
On Wed, Sep 18, 2013 at 4:27 AM, Alex Bligh wrote:
>
> On 17 Sep 2013, at 21:47, Jason Villalta wrote:
>
> > dd if=ddbenchfile of=/dev/null bs=8K
> > 819200
Any other thoughts on this thread, guys? Am I just crazy to want near-native
SSD performance on a small SSD cluster?
On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote:
> That dd give me this.
>
> dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
> 819200 bytes (8.
1.1 GB) copied, 6.26289 s, 171 MB/s
> dd if=/dev/zero of=1g bs=1M count=1024 oflag=dsync
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 37.4144 s, 28.7 MB/s
>
> As you can see, latency is a killer.
>
> On Sep 18, 2013, at 3:23 PM, Jason Villalta
those
to pull from three SSD disks on a local machine at least as fast as one native
SSD test. But I don't see that; it's actually slower.
On Wed, Sep 18, 2013 at 4:02 PM, Jason Villalta wrote:
> Thank Mike,
> High hopes right ;)
>
> I guess we are not doing too bad compared to
e, but assuming you want a solid synchronous / non-cached read, you
> should probably specify 'iflag=direct'.
>
> On Friday, September 20, 2013, Jason Villalta wrote:
>
>> Mike,
>> So I do have to ask, where would the extra latency be coming from if all
>> my OSDs
her testing
> "dd performance" as opposed to "using dd to test performance") if the
> concern is what to expect for your multi-tenant vm block store.
>
> Personally, I get more bugged out over many-thread random read throughput
> or synchronous write latency.
>
&
ach could have the most
> >> advantage.
> >>
> >> Your point of view would definitely help me.
> >>
> >> Sincerely,
> >> Martin
> >>
> >> --
> >> Martin Catudal
> >> Responsable TIC
> >> Ressources Me
I found this without much effort.
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote:
> I also would be interested in how bcache or flashcache would integrate.
>
>
> On Mon, Oct 7, 2013 at 11:3
caching for writes.
On Mon, Oct 7, 2013 at 11:43 AM, Jason Villalta wrote:
> I found this without much effort.
>
> http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
>
>
> On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote:
>
>> I also
I too have noticed this when using ceph-deploy to configure Ceph.
From what I can tell it just creates symlinks from the default OSD location
at /var/lib/ceph. Same for the journal. If it is on a different device, a
symlink is created from the dir.
Then it appears the osds are just defined i
that issue?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Chu Duc Minh"
To: ceph-de...@vger.kernel.org, "ceph-users@lists.ceph.com >>
ceph-users@lists.ceph.com"
Sent: Friday, November 7, 2014 7:05:5
In the longer term, there is an in-progress RBD feature request to add a new
RBD command to see image disk usage: http://tracker.ceph.com/issues/7746
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Sébastien Han"
T
Hi Luis,
Could you show us the output of *ceph osd tree*?
Jason
2015-01-12 20:45 GMT+08:00 Luis Periquito :
> Hi all,
>
> I've been trying to add a few new OSDs, and as I manage everything with
> puppet, it was manually adding via the CLI.
>
> At one point it adds t
Hi Don,
Take a look at CRUSH settings.
http://ceph.com/docs/master/rados/operations/crush-map/
Jason
2015-01-22 2:41 GMT+08:00 Don Doerner :
> OK, I've set up 'giant' in a single-node cluster, played with a replicated
> pool and an EC pool. All goes well so far.
g_num
pg_num: 8
Has anyone else run into this issue? Am I missing something? I know I could
just spawn a subprocess call to the ceph command line utility, but I would like
to avoid that in the name of a cleaner python integration.
Your assistance is greatly appreciated.
Thank you,
- Jason
,
-Jason
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Monday, January 26, 2015 10:09 AM
To: Jason Anderson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] pg_num not being set to ceph.conf default when
creating pool via python librados
Just from memory, I think these values are only used
] sections.
Greg: Thank you for your help on this, I really appreciate it!
-Jason
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Monday, January 26, 2015 1:17 PM
To: Jason Anderson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] pg_num not being set to
Has anyone tried scaling a VM's I/O by adding additional disks and striping
them in the guest OS? I am curious what effect this would have on I/O
performance?
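For example, a hypothetical RAID-0 stripe across four virtio disks inside the
guest:

  mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
  mkfs.xfs /dev/md0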
Thanks for the info everyone.
On Dec 16, 2013 1:23 AM, "Kyle Bader" wrote:
> >> Has anyone tried scaling a VMs io by adding additional disks and
> >> striping them in the guest os? I am curious what effect this would have
> >> on io performance?
>
> > Why would it? You can also change the stripe
> feel I should be getting significantly more from ceph than what I am able
> to.
>
> Of course, as soon as bcache stops providing benefits (ie data is pushed
> out of the SSD cache) then the raw performance drops to a standard SATA
> drive of around 120 IOPS.
>
> Regards
> --
Just looking for some suggestions. Thanks!
>
--
--
*Jason Villalta*
Co-founder
800.799.44
OSDs/Nodes. I am not sure there is a specific metric in ceph for this
but it would be awesome if there was.
On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier wrote:
> Curious as to how you define cluster latency.
>
>
> On Sat, Apr 12, 2014 at 7:21 AM, Jason Villalta wrote:
>
&g
> I have a coredump with a size of 1200M compressed.
>
> Where shall i put the dump ?
>
I believe you can use the ceph-post-file utility [1] to upload the core and
your current package list to ceph.com.
Jason
[1] http://ceph.com/docs/master/man/8/ce
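For example (description and path are placeholders):

  ceph-post-file -d 'librbd core dump plus package list' /path/to/coredump.gz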
ench-write with a Ceph
Hammer-release client?
--
Jason
> Hiya. Playing with a small Ceph setup from the Quick Start documentation.
>
> Seeing an issue running rbd bench-write. Initial trace is provided
> below, let me know if you need other information. fwiw the rados bench
> works
ed, still trying to grok how
> things should go together.
You would execute bench-write just as you did. I am just saying there is no
reason to map the rbd image via the kernel RBD driver (i.e. no need to run 'rbd
map' prior to executing the bench-write command).
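For example, this runs entirely through librbd (pool/image names are
placeholders):

  rbd bench-write rbd/testimage --io-size 4096 --io-threads 16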
Jason
This is usually indicative of the same tracepoint event being included by both
a static and dynamic library. See the following thread regarding this issue
within Ceph when LTTng-ust was first integrated [1]. Since I don't have any
insight into your application, are you somehow linking against
ifying the image while at the same time not
crippling other use cases. librbd also supports cooperative exclusive lock
transfer, which is used in the case of qemu VM migrations where the image needs
to be opened R/W by two clients at the same time.
--
Jason Dillaman
- Original Mes
You can run the program under 'gdb' with a breakpoint on the 'abort' function
to catch the program's abnormal exit. Assuming you have debug symbols
installed, you should hopefully be able to see which probe is being
re-registered.
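Roughly (program name and arguments are placeholders):

  gdb --args ./your_program --its-args
  (gdb) break abort
  (gdb) run
  ... reproduce the failure ...
  (gdb) backtrace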
--
Jason Dillaman
- Orig
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project
only due to the fact that EPEL 7 doesn't provide the required packages [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461
--
Jason Dillaman
- Original Message -
> From: "Paul Man
> On 22/09/15 17:46, Jason Dillaman wrote:
> > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph
> > project only due to the fact that EPEL 7 doesn't provide the required
> > packages [1].
>
> interesting. so basically our program migh
ourself.
The new exclusive-lock feature is managed via 'rbd feature enable/disable'
commands and does ensure that only the current lock owner can manipulate the
RBD image. It was introduced to support the RBD object map feature (which can
track which backing RADOS objects are in-use in order
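For example (image name is a placeholder; object-map requires exclusive-lock,
so enable/disable them in this order):

  rbd feature enable rbd/myimage exclusive-lock
  rbd feature enable rbd/myimage object-map
  rbd feature disable rbd/myimage object-map exclusive-lock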
It looks like the issue you are experiencing was fixed in the Infernalis/master
branches [1]. I've opened a new tracker ticket to backport the fix to Hammer
[2].
--
Jason Dillaman
[1]
https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e
[2]
approach via "rbd lock
add/remove" to verify that no other client has the image mounted before
attempting to mount it locally.
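A minimal sketch of that advisory lock pattern (image and lock id are
placeholders; the locker value comes from 'rbd lock list'):

  rbd lock add rbd/myimage my-host-id        # before mapping/mounting
  rbd lock list rbd/myimage                  # check for other holders
  rbd lock remove rbd/myimage my-host-id <locker>    # after unmounting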
--
Jason Dillaman
- Original Message -
> From: "Allen Liao"
> To: ceph-users@lists.ceph.com
> Sent: Wednesday, September 23, 201
est and your cleanup operation.
--
Jason
- Original Message -
> From: "Stefan Priebe - Profihost AG"
> To: ceph-users@lists.ceph.com
> Sent: Friday, October 2, 2015 8:16:52 AM
> Subject: [ceph-users] possibility to delete all zeros
> Hi,
> we accidentally
isn't enabled.
[1] https://github.com/ceph/ceph/pull/6135
--
Jason Dillaman
- Original Message -
> From: "Ken Dreyer"
> To: "Goncalo Borges"
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, October 8, 2015 11:58:27 AM
> Subject: Re: [ceph-users] A
mental, you could
install the infernalis-based rbd tools from the Ceph gitbuilder [1] into a
sandbox environment and use the tool against your pre-infernalis cluster.
[1] http://ceph.com/gitbuilder.cgi
--
Jason Dillaman
- Original Message -
> From: "Corin Langosch"
>
o the object, so they
will be read via LevelDB or RocksDB (depending on your configuration) within
the object's PG's OSD.
--
Jason Dillaman
- Original Message -
> From: "Allen Liao"
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 12, 2015 2:52
ite
operations by decoupling objects from the underlying filesystem's actual
storage path.
[1]
https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst
--
Jason Dillaman
uncate, overwrite, etc).
--
Jason Dillaman
There is no such interface currently on the librados / OSD side to abort IO
operations. Can you provide some background on your use-case for aborting
in-flight IOs?
--
Jason Dillaman
- Original Message -
> From: "min fang"
> To: ceph-users@lists.ceph.com
> Se
Can you provide more details on your setup and how you are running the rbd
export? If clearing the pagecache, dentries, and inodes solves the issue, it
sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd
mount point).
--
Jason Dillaman
- Original Me
> On Tue, 20 Oct 2015, Jason Dillaman wrote:
> > There is no such interface currently on the librados / OSD side to abort
> > IO operations. Can you provide some background on your use-case for
> > aborting in-flight IOs?
>
> The internal Objecter has a cancel interf
] http://tracker.ceph.com/issues/13559
--
Jason Dillaman
- Original Message -
> From: "Andrei Mikhailovsky"
> To: ceph-us...@ceph.com
> Sent: Wednesday, October 21, 2015 8:17:39 AM
> Subject: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4
> Hello
command-line properties [1]. If you have "rbd cache =
true" in your ceph.conf, it would override "cache=none" in your qemu
command-line.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html
--
Jason Dillaman
afe to detach a clone from a parent image even if snapshots exist due to the
changes to copyup.
--
Jason Dillaman
- Original Message -
> From: "Zhongyan Gu"
> To: dilla...@redhat.com
> Sent: Thursday, October 22, 2015 5:11:56 AM
> Subject: how to understand deep
ter flatten, child
> snapshot still has parent snap info?
> overlap: 1024 MB
Because deep-flatten wasn't enabled on the clone.
> Another question is since deep-flatten operations are applied to cloned
> image, why we need to create p
would immediately race to
re-establish the lost watch/notify connection before you could disassociate the
cache tier.
--
Jason Dillaman
- Original Message -
> From: "Robert LeBlanc"
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 26, 2015 12:22:06 PM
> Subject
> Hi Jason dillaman
> Recently I worked on the feature http://tracker.ceph.com/issues/13500 , when
> I read the code about librbd, I was confused by RBD_FLAG_OBJECT_MAP_INVALID
> flag.
> When I create an rbd with "--image-features = 13", we enable the object-map
> featu
r it's been enabled.
--
Jason Dillaman