Yes, but I already have some sort of test cluster with data in it. I
don’t think there are commands to modify existing rules that are being
used by pools. And the default replicated_ruleset doesn’t have a class
specified. I also have an erasure code rule without any class definition
for the fi
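For reference, the workaround usually suggested is not to edit the rule in
place but to create a new class-specific rule and point the pool at it; a
rough sketch, with rule and pool names as placeholders (switching rules can
move data):

# create a replicated rule restricted to the hdd device class
ceph osd crush rule create-replicated replicated_hdd default host hdd

# switch an existing pool to the new rule and verify
ceph osd pool set rbd crush_rule replicated_hdd
ceph osd pool get rbd crush_rule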
Hi Max,
just a sidenote: we are using a fork of RBDSR
(https://github.com/vico-research-and-consulting/RBDSR) to connect
XENServer 7.2 Community to RBDs directly using rbd-nbd.
After a bit of hacking this works pretty well: direct RBD creation from
the storage repo, live migration between xen-node
On 06/13/2018 09:01 AM, Marc Roos wrote:
Yes, but I already have some sort of test cluster with data in it. I
don’t think there are commands to modify existing rules that are being
used by pools. And the default replicated_ruleset doesn’t have a class
specified. I also have an erasure code rule wi
On 06/12/2018 07:14 PM, Max Cuttins wrote:
> it's an honor for me to contribute to the main repo of ceph.
We appreciate your support! Please take a look at
http://docs.ceph.com/docs/master/start/documenting-ceph/ for guidance on
how to contribute to the documentation.
> Just a thought, is it wise havi
I'd say it's safe in terms of data integrity. In terms of availability,
that's something you'll want to test thoroughly, e.g. what happens when the
cluster is in recovery, does the filesystem remain accessible?
I think you'll be disappointed in terms of performance; I found OCFS2 to be
slightly bett
Hello,
I am running a Ceph 13.2.0 cluster exclusively for radosgw / s3.
I only have one big bucket, and the cluster is currently in a warning state:
cluster:
id: d605c463-9f1c-4d91-a390-a28eedb21650
health: HEALTH_WARN
13 large omap objects
I tried to google it, but I wa
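For reference, a rough sketch of the usual first steps for this warning on an
RGW-only cluster, assuming the large omap objects are bucket index shards
(bucket name and shard count are placeholders):

# see which pool/PGs the warning refers to
ceph health detail

# check whether the big bucket exceeds the per-shard object limit
radosgw-admin bucket limit check

# if the index is under-sharded, schedule and run a reshard
radosgw-admin reshard add --bucket=mybigbucket --num-shards=128
radosgw-admin reshard process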
Hi Marc,
thanks for the reply.
I knew RBDSR project very well.
I was one of the first contributors to the project:
https://github.com/rposudnevskiy/RBDSR/graphs/contributors
I rewrote all the installation scripts to do it easily and allow multiple
installations across all XenClusters in few comm
Hi everybody,
maybe I'm missing something, but multipath is not adding new iSCSI gateways.
I have installed 2 gateways and tested them on a client.
Everything worked fine.
After that I decided to complete the install and create a 3rd gateway.
But no iSCSI initiator client updates the number of gateways.
O
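For reference, existing initiator sessions do not learn about new portals on
their own; a rough sketch of what typically has to be run on each initiator
(IP address and IQN below are placeholders):

# rediscover the targets so the new portal is learned
iscsiadm -m discovery -t sendtargets -p 192.168.0.13

# log in to the newly discovered portal
iscsiadm -m node -T iqn.2018-06.com.example:target -p 192.168.0.13 --login

# reload multipath maps and check the paths
multipath -r
multipath -ll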
Thanks Janne for your reply.
Here are the reasons which made me think of "physically" splitting the pools:
1) A different usage of the pools: the first one will be used for user
home directories, with intensive read/write access. And the second
one will be used for data storage/backup, with e
I just realized there is an error:
multipath -r
Jun 13 11:02:27 | rbd0: HDIO_GETGEO failed with 25
reload: mpatha (360014051b4fb8c6384545b7ae7d5142e) undef LIO-ORG
,TCMU device
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=undef
|-+- policy='queue-length 0'
Hi,
Is the osd journal flushed completely on a clean shutdown?
In this case, with Jewel, and FileStore osds, and a "clean shutdown"
being:
systemctl stop ceph-osd@${osd}
I understand it's documented practice to issue a --flush-journal after
shutting down an osd if you're intending to d
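For reference, a rough sketch of the belt-and-braces sequence with FileStore
on Jewel (the OSD id is a placeholder; as the reply below notes, the explicit
flush should no longer be required after a clean shutdown):

# stop the OSD cleanly, then optionally flush its FileStore journal
systemctl stop ceph-osd@3
ceph-osd -i 3 --flush-journal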
On 06/13/2018 11:39 AM, Chris Dunlop wrote:
> Hi,
>
> Is the osd journal flushed completely on a clean shutdown?
>
> In this case, with Jewel, and FileStore osds, and a "clean shutdown" being:
>
It is, a Jewel OSD will flush its journal on a clean shutdown. The
flush-journal is no longer ne
Shit, I added this class and now everything starts backfilling (10%). How
is this possible? I only have hdds.
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday, 13 June 2018 9:26
To: Marc Roos; ceph-users
Subject: *SPAM* Re: [ceph-users] Add ssd'
On 06/13/2018 12:06 PM, Marc Roos wrote:
Shit, I added this class and now everything starts backfilling (10%). How
is this possible? I only have hdds.
This is normal when you change your crush and placement rules.
Post your output and I will take a look:
ceph osd crush tree
ceph osd crush dump
ceph
I just added 'class hdd' here:
rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 0 type osd
        step emit
}
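A likely explanation, hedged: "step take default class hdd" makes CRUSH walk
the per-class shadow hierarchy (default~hdd), whose bucket IDs differ from the
plain default root, so mappings can change even though every OSD is an hdd. A
rough way to inspect this (the crushmap filename is a placeholder):

# show the per-class shadow hierarchy that "class hdd" selects
ceph osd crush tree --show-shadow

# dry-run the mappings for rule id 4 against a dumped crush map
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 4 --num-rep 3 --show-mappings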
-Origin
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure with new pool parameters. I've found some hints but no
specific documentation to migrate pools.
I'm currently trying with rados export + import, but I get errors like
these:
Write #-9223372036854775808:
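For reference, a rough sketch of an alternative often suggested instead of
rados export/import: attach the new pool as an additional data pool and let
file layouts plus a copy rewrite the data (filesystem, pool and path names
are placeholders):

# add the new pool to the filesystem
ceph osd pool create cephfs_data_new 128
ceph fs add_data_pool cephfs cephfs_data_new

# direct new files under a directory to the new pool, then copy the data
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/somedir
cp -a /mnt/cephfs/somedir /mnt/cephfs/somedir.new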
I wonder if this is a bug. Adding the class hdd to an all-hdd cluster
should not result in 60% of the objects being moved around.
pool fs_data.ec21 id 53
3866523/6247464 objects misplaced (61.889%)
recovery io 93089 kB/s, 22 objects/s
-Original Message-
From:
On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote:
> [adding ceph-maintainers]
[and ceph-devel]
>
> On Mon, 4 Jun 2018, Charles Alva wrote:
> > Hi Guys,
> >
> > When will the Ceph Mimic packages for Debian Stretch released? I could not
> > find the packages even after changing the sourc
See this thread:
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html
(Wido -- should we kill the ceph-large list??)
-- dan
On Wed, Jun 13, 2018 at 12:27 PM Marc Roos wrote:
>
>
> Shit, I added th
On Wed, 13 Jun 2018, Fabian Grünbichler said:
> I hope we find some way to support Mimic+ for Stretch without requiring
> a backport of gcc-7+, although it unfortunately seems unlikely at this
> point.
Me too. I picked ceph luminous on debian stretch because I thought it would be
maintained goin
See this thread:
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html
(Wido -- should we kill the ceph-large list??)
On Wed, Jun 13, 2018 at 1:14 PM Marc Roos wrote:
>
>
> I wonder if this is not a
Hi Fabian,
On Wed, 13 Jun 2018, Fabian Grünbichler wrote:
> On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote:
> > [adding ceph-maintainers]
>
> [and ceph-devel]
>
> >
> > On Mon, 4 Jun 2018, Charles Alva wrote:
> > > Hi Guys,
> > >
> > > When will the Ceph Mimic packages for Debian St
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
wrote:
>
> Hi,
>
> I'm trying to migrate a cephfs data pool to a different one in order to
> reconfigure with new pool parameters. I've found some hints but no
> specific documentation to migrate pools.
>
> I'm currently trying with rados export
On 06/13/2018 02:01 PM, Sean Purdy wrote:
> Me too. I picked ceph luminous on debian stretch because I thought
> it would be maintained going forwards, and we're a debian shop. I
> appreciate Mimic is a non-LTS release, I hope issues of debian
> support are resolved by the time of the next LTS.
On Wed, Jun 13, 2018 at 3:34 AM Webert de Souza Lima
wrote:
>
> hello,
>
> is there any performance impact on cephfs for using file layouts to bind a
> specific directory in cephfs to a given pool? Of course, such a pool is not the
> default data pool for this cephfs.
>
For each file, no matter w
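For reference, a rough sketch of how a directory gets bound to a non-default
pool and how to check where a file's objects actually go (paths and pool name
are placeholders; only newly created files pick up the layout):

setfattr -n ceph.dir.layout.pool -v cephfs_ssd /mnt/cephfs/indexes
getfattr -n ceph.dir.layout /mnt/cephfs/indexes
getfattr -n ceph.file.layout /mnt/cephfs/indexes/somefile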
On 13/06/18 01:03, Konstantin Shalygin wrote:
Each node now has 1 SSD with the OS and the BlockDBs and 3 HDDs with
bluestore data.
Very, very bad idea. When your ssd/nvme dies you lose your linux box.
I have 3 boxes. And I'm installing a new one. Any box can be lost
without a data problem.
Thank you Zheng.
Does that mean that, when using such a feature, our data integrity now relies
on both data pools' integrity/availability?
We currently use this feature in production for dovecot's index files, so
we could store this directory on a pool of SSDs only. The main data pool is
made of H
Hi,
On 13/06/18 14:40, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
wrote:
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure with new pool parameters. I've found some hints but no
specific documentation to migrate pools.
I'
I've never used XenServer, but I'd imagine you would need to do
something similar to what is documented here [1].
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/
On Wed, Jun 13, 2018 at 5:11 AM, Max Cuttins wrote:
> I just realized there is an error:
>
> multipath -r
> Jun 13 1
The backtrace object Zheng referred to is used only for resolving hard
links or in disaster recovery scenarios. If the default data pool isn’t
available you would stack up pending RADOS writes inside of your mds but
the rest of the system would continue unless you manage to run the mds out
of memor
2018-06-13 7:13 GMT+02:00 Marc Roos :
> I just added here 'class hdd'
>
> rule fs_data.ec21 {
> id 4
> type erasure
> min_size 3
> max_size 3
> step set_chooseleaf_tries 5
> step set_choose_tries 100
> step take default class hdd
> st
How long is “too long”? 800MB on an SSD should only be a second or three.
I’m not sure if that’s a reasonable amount of data; you could try
compacting the rocksdb instance etc. But if reading 800MB is noticeable I
would start wondering about the quality of your disks as a journal or
rocksdb device.
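For reference, a rough sketch of triggering a compaction to test this,
assuming access to the OSD's admin socket (the OSD id is a placeholder, and
exact command availability depends on the Ceph version; on newer releases the
same is available as "ceph tell osd.12 compact"):

# trigger an online leveldb/rocksdb compaction on one OSD
ceph daemon osd.12 compact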
Excellent news - tks!
On Wed, Jun 13, 2018 at 11:50:15AM +0200, Wido den Hollander wrote:
On 06/13/2018 11:39 AM, Chris Dunlop wrote:
Hi,
Is the osd journal flushed completely on a clean shutdown?
In this case, with Jewel, and FileStore osds, and a "clean shutdown" being:
It is, a Jewel OSD
On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote:
> > So, my usual question is - where to look and what logs to enable
> > to find out what is going wrong ?
> If not overridden, tcmu-runner will default to 'client.admin' [1] so
> you shouldn't need to add any additio
On Wed, Jun 13, 2018 at 8:33 AM, Wladimir Mutel wrote:
> On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote:
>
>> > So, my usual question is - where to look and what logs to enable
>> > to find out what is going wrong ?
>
>> If not overridden, tcmu-runner will default t
Thanks for clarifying that, Gregory.
As said before, we use the file layout to handle the different
workloads of those 2 directories in cephfs.
Would you recommend using 2 filesystems instead? By doing so, each fs would
have its own default data pool accordingly.
Regards,
Webert Lima
Yes, thanks, I know; I will change it when I get an extra node.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Wednesday, 13 June 2018 16:33
To: Marc Roos
Cc: ceph-users; k0ste
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
up
Nah, I would use one Filesystem unless you can’t. The backtrace does create
another object, but IIRC it's at most one IO per create/rename (on the
file).
On Wed, Jun 13, 2018 at 1:12 PM Webert de Souza Lima
wrote:
> Thanks for clarifying that, Gregory.
>
> As said before, we use the file layout
Got it Gregory, sounds good enough for us.
Thank you all for the help provided.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, Jun 13, 2018 at 2:20 PM Gregory Farnum wrote:
> Nah, I would use one Filesystem unless you can’t. Th
Hi, in fact it highly depends on your underlying Ceph installation (both HW
and Ceph version).
Would you be willing to share more information on your needs and design?
How many IOPS are you looking at? Which processors and disks are you using?
What ceph version? Which OS?
Are you targeting mostly r
This is actually not too nice, because this remapping is now causing a
nearfull
-Original Message-
From: Dan van der Ster [mailto:d...@vanderster.com]
Sent: Wednesday, 13 June 2018 14:02
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class h
Hi Yao,
IIRC there is a *sleep* option in ceph which is useful when delete operations
are being done, sleep_trim or something like that.
- Mehmet
On 7 June 2018 04:11:11 CEST, Yao Guotao wrote:
>Hi Jason,
>
>
>Thank you very much for your reply.
>I think the RBD trash is a good way.
2018-06-13 23:53 GMT+02:00 :
> Hi yao,
>
> IIRC there is a *sleep* option in ceph which is useful when delete operations
> are being done, sleep_trim or something like that.
>
you are thinking of "osd_snap_trim_sleep" which is indeed a very helpful
option - but not for deletions.
It rate limi
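For reference, a rough sketch of the rbd trash workflow mentioned earlier in
the thread (pool and image names are placeholders):

# defer the actual deletion by moving the image to the trash
rbd trash mv rbd/big-image

# later: list, restore or really remove trashed images by id
rbd trash ls rbd
rbd trash restore rbd/<image-id>
rbd trash rm rbd/<image-id>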
On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo
wrote:
>
> Hi,
>
>
> > On 13/06/18 14:40, Yan, Zheng wrote:
> > On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
> > wrote:
> >> Hi,
> >>
> >> I'm trying to migrate a cephfs data pool to a different one in order to
> >> reconfigure with ne
On 06/13/2018 08:22 PM, Alfredo Daniel Rezinovsky wrote:
I have 3 boxes. And I'm installing a new one. Any box can be lost
without a data problem.
If any SSD is lost I will just reinstall the whole box, still have
data duplicates and in about 40 hours the triplicates will be ready.
I understa