Hi Laszlo,
The script defaults are what we used for a large intervention (the
default delta weight is 0.01). For our clusters, going any faster
becomes disruptive, but this really depends on your cluster size and
activity.
BTW, in case it wasn't clear, to use this script for adding capacity
you
Hi Laszlo,
I've used Dan's script to deploy 9 storage nodes (36 x 6TB data disks/node)
into our dev cluster as practice for deployment into our production cluster.
The script performs very well. In general, disruption to a cluster (e.g. impact
on client I/O) is minimised by osd_max_backfills wh
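For context, the general pattern both replies describe — cap backfill and raise the new OSDs' CRUSH weight in small steps — looks roughly like this (an illustrative sketch, not Dan's actual script; the OSD id and values are placeholders):
$ ceph tell osd.* injectargs '--osd_max_backfills 1'   # throttle backfill to limit client I/O impact
$ ceph osd crush reweight osd.36 0.01                  # placeholder OSD id; start the new OSD at a small weight
# wait for HEALTH_OK, then repeat, adding roughly the delta weight (0.01)
# each iteration until the OSD reaches its target CRUSH weight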
Thank you Greg,
I will look into it, and I hope self-managed and pool snapshots will work
for erasure-coded pools as well, since we predominantly use erasure coding.
Thanks,
Muthu
On Wednesday, 2 August 2017, Gregory Farnum wrote:
> On Tue, Aug 1, 2017 at 8:29 AM Muthusamy Muthiah <
> muthiah.muthus...@gmai
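For reference, a plain pool snapshot can be exercised on an erasure-coded pool with the standard commands below (a minimal sketch; the pool name "ecpool" and the snapshot name are assumptions):
$ ceph osd pool mksnap ecpool snap-test    # "ecpool" is a placeholder EC pool name
$ rados -p ecpool lssnap
$ ceph osd pool rmsnap ecpool snap-test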
Morning,
We ran into an issue with the default max file size of a cephfs file. Is it
possible to increase this value to 20 TB from 1 TB without recreating the file
system?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777
Did you really mean to say "increase this value to 20 TB from 1 TB"?
On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote:
> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without recreating
> the fil
Woops, nvm my last. My eyes deceived me.
On Fri, Aug 4, 2017 at 8:21 AM Roger Brown wrote:
> Did you really mean to say "increase this value to 20 TB from 1 TB"?
>
>
> On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote:
>
>> Morning,
>>
>>
>> We ran into an issue with the default max file size
Is this something new in Luminous 12.1.2, or did I break something? Stuff
still seems to function despite the warnings.
$ ceph health detail
POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
application not enabled on pool 'default.rgw.buckets.non-ec'
application not enabled on p
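The warning is cleared per pool with `ceph osd pool application enable` (see further down the thread); for example, for the pool named above:
$ ceph osd pool application enable default.rgw.buckets.non-ec rgw   # rgw assumed to be the owning application here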
I have got a placement group inconsistency, and I saw a manual describing how
you can export the PG and import it on another OSD. But I am getting an
export error on every OSD.
What does this export_files error -5 actually mean? I thought 3 copies
should be enough to secure your data.
> PG_DAMAGED Possi
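Error -5 is errno 5 (EIO), i.e. an I/O error reading the object data, rather than a problem with the tool itself. The export/import procedure being referred to is presumably ceph-objectstore-tool, which in rough outline looks like this (paths, OSD id, and pgid are placeholders; the OSD has to be stopped first, and FileStore OSDs may also need --journal-path):
$ systemctl stop ceph-osd@8                              # placeholder OSD id
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8 \
      --pgid <pgid> --op export --file /tmp/<pgid>.export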
In the 12.1.2 release notes it stated...
Pools are now expected to be associated with the application using them.
Upon completing the upgrade to Luminous, the cluster will attempt to
associate existing pools to known applications (i.e. CephFS, RBD, and RGW).
In-use pools that are not assoc
It _should_ be enough. What happened in your cluster recently? A power
outage, OSD failures, an upgrade, new hardware added, any changes at all? What
is your Ceph version?
On Fri, Aug 4, 2017 at 11:22 AM Marc Roos wrote:
>
> I have got a placement group inconsistency, and saw some manual where
> you
Got it, thanks!
On Fri, Aug 4, 2017 at 9:48 AM David Turner wrote:
> In the 12.1.2 release notes it stated...
>
> Pools are now expected to be associated with the application using them.
> Upon completing the upgrade to Luminous, the cluster will attempt to
> associate existing pools to
All those pools should have been auto-marked as owned by rgw though. We do
have a ticket around that (http://tracker.ceph.com/issues/20891) but so far
it's just confusing.
-Greg
On Fri, Aug 4, 2017 at 9:07 AM Roger Brown wrote:
> Got it, thanks!
>
> On Fri, Aug 4, 2017 at 9:48 AM David Turner wr
Should they be auto-marked if you upgraded an existing cluster to Luminous?
On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum wrote:
> All those pools should have been auto-marked as owned by rgw though. We do
> have a ticket around that (http://tracker.ceph.com/issues/20891) but so
> far it's just
Yes. https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L1069
On Fri, Aug 4, 2017 at 9:14 AM David Turner wrote:
> Should they be auto-marked if you upgraded an existing cluster to Luminous?
>
> On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum wrote:
>
>> All those pools should have b
https://www.spinics.net/lists/ceph-users/msg36285.html
On Aug 4, 2017 8:28 AM, "Rhian Resnick" wrote:
> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without recreating
> the file system?
>
>
> Rhia
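If the limit being hit is the max_file_size setting (which defaults to 1 TB), it can be raised at runtime without recreating the filesystem; for example, assuming the filesystem is named "cephfs":
$ ceph fs set cephfs max_file_size 21990232555520   # 20 TiB in bytes; "cephfs" is the assumed fs name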
I am still on 12.1.1; it is still a test 3-node cluster, nothing much
happening. The 2nd node had some issues a while ago; I had an osd.8 that
didn't want to start, so I replaced it.
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: vrijdag 4 augustus 2017 17:52
Dear Cephers,
As most of you know, the deadline for submitting talks to LCA (Linux Conf
Australia) is this Saturday (Aug 4), and we would like to know if
anyone here is planning to participate in the conference and present talks
on Ceph.
I was just talking with Sage and besides the talks submitted
I have a child rbd that doesn't acknowledge its parent. This is with
Kraken (11.2.0).
The misbehaving child was 'flatten'ed from its parent, but now I can't
remove the snapshot because it still thinks it has a child.
root@tyr-ceph-mon0:~# rbd snap ls
tyr-p0/51774a43-8d67-4d6d-9711-d0b1e4e6b5e9_de
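For what it's worth, the usual sequence for confirming the flatten and removing the snapshot looks like this (image and snapshot names are placeholders, since the listing above is truncated):
$ rbd children tyr-p0/<parent-image>@<snap>     # should list no clones once flattening is complete
$ rbd snap unprotect tyr-p0/<parent-image>@<snap>
$ rbd snap rm tyr-p0/<parent-image>@<snap>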
Roger, was this a test cluster that was already running Luminous? The
auto-assignment logic won't work in that case (it's already got the
CEPH_RELEASE_LUMINOUS feature set which we're using to run it).
I'm not sure if there's a good way to do that upgrade that's worth the
effort.
-Greg
On Fri, Au
Ah, yes. This cluster has had all the versions of Luminous on it.
Started with Kraken and went to every Luminous release candidate to date.
So I guess I'll just do the `ceph osd pool application enable` commands and
be done with it.
I appreciate your assistance.
Roger
On Fri, Aug 4, 2017 at