Hi all,
Does anyone have any recommendations for good tools to perform
file-system/tree backups and restores to/from an RGW object store (Swift or
S3 APIs)? Happy to hear about both FOSS and commercial options, please.
I'm interested in:
1) tools known to work or not work at all for a basic file-ba
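Purely as an illustration of the kind of workflow I mean (rclone is just one example candidate; the endpoint, keys and bucket name below are placeholders), something like:

    # rclone remote definition for an S3-compatible RGW endpoint
    # (~/.config/rclone/rclone.conf)
    [rgw]
    type = s3
    provider = Ceph
    access_key_id = PLACEHOLDER_ACCESS_KEY
    secret_access_key = PLACEHOLDER_SECRET_KEY
    endpoint = http://rgw.example.com:7480

    # push a directory tree into a bucket, and pull it back for a restore
    rclone sync /srv/data rgw:backup-bucket/host1
    rclone sync rgw:backup-bucket/host1 /srv/restore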
Hey cephers,
This is just a reminder that we have 10x 40-minute talk slots
available at OpenStack Boston (and 10 free passes to go with them). If
you are interested in giving a Ceph-related talk, please contact me as
soon as possible with the following:
* Presenter Name
* Presenter Org
* Talk ti
On Thu, Mar 2, 2017 at 5:01 PM, Xiaoxi Chen wrote:
> 2017-03-02 23:25 GMT+08:00 Ilya Dryomov :
>> On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
>>> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>Still applies. Just create a Round Robin DNS record. The clients will
obtain a new monmap while
Success! There was an issue related to my operating system install procedure
that was causing the journals to become corrupt, but it was not caused by ceph!
With that bug fixed, the shutdown procedure described in this thread has been verified
to work as expected. Thanks for all the help.
-Chris
> On
2017-03-02 23:25 GMT+08:00 Ilya Dryomov :
> On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
>> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>>> >Still applies. Just create a Round Robin DNS record. The clients will
>>> obtain a new monmap while they are connected to the cluster.
>>> It works to some ex
Erratum: sorry for the bad links to the screenshots:
1st : https://supervision.pci-conseil.net/screenshot_LOAD.png
2nd : https://supervision.pci-conseil.net/screenshot_OSD_IO.png
:)
On 02/03/2017 at 15:34, pascal.pu...@pci-conseil.net wrote:
Hello,
So, I may need some advice: 1 week ago (la
On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>> >Still applies. Just create a Round Robin DNS record. The clients will
>> obtain a new monmap while they are connected to the cluster.
>> It works to some extent, but it causes issues for "mount -a". We have
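To make the round-robin suggestion concrete, the setup being discussed is roughly the following (all names and addresses are placeholders):

    ; one name, several A records -- clients pick one, connect, then learn
    ; the full monmap from whichever mon answers
    ceph-mon.example.com.  IN A  192.0.2.11
    ceph-mon.example.com.  IN A  192.0.2.12
    ceph-mon.example.com.  IN A  192.0.2.13

    # client-side ceph.conf then only needs the single name
    [global]
    mon_host = ceph-mon.example.com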
Over the weekend, two inconsistent PGs popped up in my cluster. This comes
after having scrubs disabled for close to 6 weeks during a very long rebalance
after adding 33% more OSDs, an OSD failing, increasing PGs, etc.
It appears we came out the other end with 2 inconsistent PGs and I’m tryin
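In case it helps anyone following along, the commands I have been using to inspect them look roughly like this (the PG id is just an example):

    # find the affected PGs
    ceph health detail | grep inconsistent

    # list the inconsistent objects in one of them
    rados list-inconsistent-obj 2.1a --format=json-pretty

    # once the cause is understood, ask Ceph to repair that PG
    ceph pg repair 2.1a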
I run CentOS 6.8, so there are no 0.94.10 packages for el6.
On Mar 2, 2017 8:47 AM, "Abhishek L" wrote:
Sasha Litvak writes:
> Hello everyone,
>
> The Hammer 0.94.10 update was announced on the blog a week ago. However,
there are no packages available for either version of Red Hat. Can someone
tell me what is
Ah ...
On 02/03/2017 15:56, Jason Dillaman wrote:
I'll refer you to the man page for blkdiscard [1]. Since it operates
on the block device, it doesn't know about filesystem holes and
instead will discard all data specified (i.e. it will delete all your
data).
[1] http://man7.org/linux/man-pages/man8/blkdiscard.8.html
I'll refer you to the man page for blkdiscard [1]. Since it operates
on the block device, it doesn't know about filesystem holes and
instead will discard all data specified (i.e. it will delete all your
data).
[1] http://man7.org/linux/man-pages/man8/blkdiscard.8.html
On Thu, Mar 2, 2017 at 9:54
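To make the distinction concrete, the difference looks roughly like this (device and mount point are only examples):

    # blkdiscard works on the raw block device: it discards *everything*,
    # regardless of any filesystem on it
    blkdiscard /dev/sdb

    # fstrim, run inside the guest against a mounted filesystem, only
    # discards blocks the filesystem knows are unused
    fstrim -v /mnt/data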
On 02/03/2017 14:11, Jason Dillaman wrote:
On Thu, Mar 2, 2017 at 8:09 AM, Massimiliano Cuttini wrote:
Ok,
then, if the command comes from the hypervisor that holds the image, is it
safe?
No, it needs to be issued from the guest VM -- not the hypervisor that
is running the guest VM. The
Sasha Litvak writes:
> Hello everyone,
>
> The Hammer 0.94.10 update was announced on the blog a week ago. However, there
> are no packages available for either version of Red Hat. Can someone tell me
> what is going on?
I see the packages at http://download.ceph.com/rpm-hammer/el7/x86_64/.
Are you
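If it is only the repo definition that is missing on your side, a minimal /etc/yum.repos.d/ceph.repo along these lines should pick those packages up (assuming el7/x86_64):

    [ceph-hammer]
    name=Ceph Hammer packages
    baseurl=http://download.ceph.com/rpm-hammer/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc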
Hello,
So, I may need some advice: 1 week ago (last 19 Feb), I upgraded my
stable Ceph Jewel from 10.2.3 to 10.2.5 (yes, it was maybe a bad idea).
I never had a problem with Ceph 10.2.3 since the previous upgrade, last 23 September.
So since my upgrade (to 10.2.5), every 2 days, the first OSD server t
On Thu, Mar 2, 2017 at 8:09 AM, Massimiliano Cuttini wrote:
> Ok,
>
> then, if the command comes from the hypervisor that holds the image, is it
> safe?
No, it needs to be issued from the guest VM -- not the hypervisor that
is running the guest VM. The reason is that it's a black box to the
hypervi
Ok,
then, if the command comes from the hypervisor that holds the image, is it
safe?
But if the guest VM on the same hypervisor tries to use the image, what
happens?
Are these safe tools? (i.e., do they safely exit with an error instead of trying the
command and ruining the image?)
Should I consider a snapshot b
In that case, the trim/discard requests would need to come directly
from the guest virtual machines to avoid damaging the filesystems. We
do have a backlog feature ticket [1] to allow an administrator to
transparently sparsify an in-use image via the rbd CLI, but no work has
been started on it yet.
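Until then, the usual workaround is to make sure the guest's own discards reach librbd and then trim from inside the guest. For a QEMU-based setup that is roughly as follows (the pool/image and device names are placeholders, and "..." stands for the rest of the VM's options):

    # hypervisor side: expose the RBD image via virtio-scsi with discard support
    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=rbd:rbd/vm-disk-1,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
        -device scsi-hd,bus=scsi0.0,drive=drive0

    # guest side: trim unused filesystem blocks; the resulting discards
    # are passed down to the RBD image
    fstrim -av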
Hi Ashley,
The rule you indicated, with “step choose indep 0 type osd”, should select 13
different OSDs, but not necessarily on 13 different servers. So you should be able
to test that on, say, 4 servers if you have ~4 OSDs per server.
To split the selected OSDs across 4 hosts, I think you would do s
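Purely as an untested illustration (the rule name, numbers and min/max sizes are placeholders for a 4-host layout with at least 4 OSDs per host), a rule of this shape first picks hosts and then OSDs inside each host:

    rule sas_by_host {
        ruleset 3
        type erasure
        min_size 3
        max_size 16
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take fourtb
        step choose indep 4 type host
        step chooseleaf indep 4 type osd
        step emit
    }

That yields 4 x 4 = 16 candidate OSDs, of which the first 13 would be used for a k+m = 13 pool, so as I understand it no more than 4 chunks end up on any single host.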
Hello,
I am currently doing some erasure code tests in a dev environment.
I have set the following as a "default":
rule sas {
    ruleset 2
    type erasure
    min_size 3
    max_size 13
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take fourtb
Thanks Jason,
I need some further info, because I'm really worried about ruining my data.
On this pool I have only XEN virtual disks.
Do I have to run the command directly on the "pool" or on the "virtual
disks"?
I guess that I have to run it on the pool.
As Admin I don't have access to local f
Hi all,
yesterday we encountered a problem within our ceph cluster. After a long day we
were able to fix it, but we are very unsatisfied with the fix and assume that
it is only temporary. Any hint or help is very much appreciated.
We have a production ceph cluster (ceph version 10.2.5 on ubuntu
Hello,
Env: v11.2.0 - BlueStore - EC 3 + 1
We are getting the below entries in both /var/log/messages and the OSD logs. May I
know what the impact of the below message is, as these messages are being
flooded into the OSD and sys logs?
~~~
2017-03-01 13:00:59.938839 7f6c96915700 -1
bdev(/var/lib/ceph/osd/ceph-0