Oh, I missed this information.
So this means that, after having run the balancer once in compat mode, if
you add new OSDs you MUST manually define the weight-set for these newly
added OSDs if you want to use the balancer, right?
This is an important piece of information that IMHO should be in the documentation.
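For reference, a rough sketch of how the compat weight-set can be inspected and set by hand (the OSD id and weight below are only placeholders, not values from this cluster):

    # see which weight-sets exist and which OSDs have an entry
    ceph osd crush weight-set ls
    ceph osd crush weight-set dump
    # give a newly added OSD a compat weight-set entry (usually its CRUSH weight)
    ceph osd crush weight-set reweight-compat osd.20 3.63869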
On 3/14/19 2:02 PM, Massimo Sgaravatto wrote:
Oh, I missed this information.
So this means that, after having run the balancer once in compat mode,
if you add new OSDs you MUST manually define the weight-set for these
newly added OSDs if you want to use the balancer, right?
This is an important piece of information that IMHO should be in the documentation.
You can try those commands, but maybe you need to find the root cause
of why the current monmap contains no features at all. Did you upgrade
the cluster from Luminous to Mimic, or is it a new cluster installed
with Mimic?
Zhenshi Zhou wrote on Thu, Mar 14, 2019 at 2:37 PM:
>
> Hi huang,
>
> It's a pre-production environment.
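A minimal sketch of how the monmap features can be inspected, and how a missing persistent feature can be recorded once all mons run the expected release (the feature name here is only an example):

    # show persistent/optional features recorded in the monmap and quorum
    ceph mon feature ls
    # dump the current monmap for comparison across mons
    ceph mon dump
    # record a persistent feature once every mon supports it, e.g.:
    ceph mon feature set kraken --yes-i-really-mean-it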
On 3/14/19 2:09 PM, Massimo Sgaravatto wrote:
I plan to use upmap after having migrated all my clients to CentOS 7.6
What is your current release?
k
I plan to use upmap after having migrated all my clients to CentOS 7.6
On Thu, Mar 14, 2019 at 8:03 AM Konstantin Shalygin wrote:
> On 3/14/19 2:02 PM, Massimo Sgaravatto wrote:
> > Oh, I missed this information.
> >
> > So this means that, after having run the balancer once in compat mode,
> >
I am using Luminous everywhere
On Thu, Mar 14, 2019 at 8:09 AM Konstantin Shalygin wrote:
> On 3/14/19 2:09 PM, Massimo Sgaravatto wrote:
> > I plan to use upmap after having migrated all my clients to CentOS 7.6
>
> What is your current release?
>
>
>
> k
>
>
On 3/14/19 2:10 PM, Massimo Sgaravatto wrote:
I am using Luminous everywhere
I mean, what is the version of your kernel clients?
k
I have some clients running CentOS 7.4 with kernel 3.10.
I was told that the minimum requirements are kernel >= 4.13 or CentOS >= 7.5.
On Thu, Mar 14, 2019 at 8:11 AM Konstantin Shalygin wrote:
> On 3/14/19 2:10 PM, Massimo Sgaravatto wrote:
> > I am using Luminous everywhere
>
> I mean, what is the version of your kernel clients?
On 3/14/19 2:15 PM, Massimo Sgaravatto wrote:
I have some clients running centos7.4 with kernel 3.10
I was told that the minimum requirements are kernel >=4.13 or CentOS
>= 7.5.
Yes, this is correct.
k
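Assuming the clients are recent enough, a rough sketch of how this is usually verified and how the upmap balancer gets enabled (mode and settings below are only examples):

    # check which feature release the connected clients report
    ceph features
    # once everything reports luminous or newer:
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on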
Hi,
I'll try that command soon.
It's a new cluster installed with Mimic. I'm not sure of the exact reason, but as
far as I can think of, two things may have caused this issue. One is that I moved
these servers from a datacenter to this one, following the steps in [1]. Another
is that I created a bridge using the inte
Hi.
I CC'ed Casey Bodley as the new RGW tech lead.
The Luminous docs [1] say that the s3:GetBucketTagging & s3:PutBucketTagging
methods are supported. But actually PutBucketTagging fails on Luminous
12.2.11 RGW with "provided input did not specify location constraint
correctly". I think this is issue [2], but
Hi huang,
I think I've found the root cause which makes the monmap contain no
features. As I moved the servers from one place to another, I modified
the monmap once.
However, the monmap is not the same on all mons. I modified the monmap
on one of the mons, and created it from scratch on the other two mons.
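If that is the case, one rough way to reconcile them is to take the map you trust and inject it into the mons that differ (mon ids below are placeholders; each mon must be stopped while its map is extracted or injected):

    # extract the monmap from the mon you trust
    systemctl stop ceph-mon@mon1
    ceph-mon -i mon1 --extract-monmap /tmp/monmap
    systemctl start ceph-mon@mon1
    monmaptool --print /tmp/monmap
    # inject the good map into each mon whose copy differs, then restart it
    systemctl stop ceph-mon@mon2
    ceph-mon -i mon2 --inject-monmap /tmp/monmap
    systemctl start ceph-mon@mon2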
Hi Frank,
Did you ever get the 0.5 compression ratio thing figured out?
Thanks
-TJ Ragan
On 23 Oct 2018, at 16:56, Igor Fedotov <ifedo...@suse.de> wrote:
Hi Frank,
On 10/23/2018 2:56 PM, Frank Schilder wrote:
Dear David and Igor,
thank you very much for your help. I have one more qu
You should never run a production cluster with this configuration.
Have you tried to access the disk with ceph-objectstore-tool? The goal
would be to export the shard of the PG on that disk and import it into
any other OSD.
Paul
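A minimal sketch of what that export/import looks like (OSD data paths and the PG id are placeholders, and both OSDs must be stopped while the tool runs):

    # export the PG shard from the failing OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-NN \
        --pgid 2.1fs0 --op export --file /tmp/2.1fs0.export
    # import it into another OSD that does not already hold that PG
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-MM \
        --op import --file /tmp/2.1fs0.export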
Hi,
when running "apt update" I get the following error:
Err:6 http://download.ceph.com/debian-mimic bionic/main amd64 Packages
File has unexpected size (13881 != 13883). Mirror sync in progress? [IP:
158.69.68.124 80]
Hashes of expected file:
- Filesize:13883 [weak]
- SHA256:91a7e695d
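Usually this means the mirror is mid-sync and it resolves itself after a while; a rough way to rule out a stale local index in the meantime:

    # drop cached index files for the ceph repo and retry
    sudo apt-get clean
    sudo rm -f /var/lib/apt/lists/download.ceph.com_*
    sudo apt-get update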
I tried setting the weight set for the 'new' OSDs as you suggested.
What looks strange to me is that it was enough to set it for a single OSD
to have the weight-set defined for all the OSDs.
I defined the weight set for osd.12, and it also got defined for osd.13..19
... [*]
At any rate after this
For the record, the problem was that the new OSDs didn't have the
weight-set defined.
After having manually defined the weight-set for the new OSDs, I am able to
create a plan.
More info in the 'weight-set defined for some OSDs and not defined for the
new installed ones' thread
Cheers, Massimo
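For the archives, the balancer-side commands that go with this are roughly the following (the plan name is just an example):

    ceph balancer status
    ceph balancer optimize myplan
    ceph balancer show myplan
    ceph balancer eval myplan
    ceph balancer execute myplan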
The bucket policy documentation just lists which actions the policy
engine understands. Bucket tagging isn't supported, so those requests
were misinterpreted as normal PUT requests to create a bucket. I opened
https://github.com/ceph/ceph/pull/26952 to return 405 Method Not Allowed
there instead.
On 3/14/19 8:36 PM, Casey Bodley wrote:
The bucket policy documentation just lists which actions the policy
engine understands. Bucket tagging isn't supported, so those requests
were misinterpreted as normal PUT requests to create a bucket. I
opened https://github.com/ceph/ceph/pull/26952 to return 405 Method Not
Allowed there instead.
Hi Konstantin,
Luminous does not support bucket tagging--although I've done Luminous
backports for downstream use, and would be willing to help with
upstream backports if there is community support.
Matt
On Thu, Mar 14, 2019 at 9:53 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:36 PM, Casey Bo
Sorry, object tagging. There's a bucket tagging question in another thread :)
Matt
On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin wrote:
>
> Hi Konstantin,
>
> Luminous does not support bucket tagging--although I've done Luminous
> backports for downstream use, and would be willing to help with
On 3/14/19 8:58 PM, Matt Benjamin wrote:
Sorry, object tagging. There's a bucket tagging question in another thread :)
Luminous works fine with object tagging, at least on 12.2.11:
getObjectTagging and putObjectTagging.
[k0ste@WorkStation]$ curl -s
https://rwg_civetweb/my_bucket/empty-file
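The same check can be reproduced without hand-signing requests, e.g. with the AWS CLI (endpoint, bucket and key are taken from the example above; the tag itself is arbitrary):

    aws --endpoint-url https://rwg_civetweb s3api put-object-tagging \
        --bucket my_bucket --key empty-file \
        --tagging 'TagSet=[{Key=test,Value=1}]'
    aws --endpoint-url https://rwg_civetweb s3api get-object-tagging \
        --bucket my_bucket --key empty-file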
Yes, sorry to misstate that. I was conflating with lifecycle
configuration support.
Matt
On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:58 PM, Matt Benjamin wrote:
> > Sorry, object tagging. There's a bucket tagging question in another thread
> > :)
>
> Luminous
Hi,
In the beginning, I created separate crush rules for the SSD and HDD pools
(six Ceph nodes), following this HOWTO:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Now I want to migrate to the standard crush rules which come with
Luminous. What is the procedure?
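Assuming the goal is the Luminous device-class based rules, the usual shape of the migration looks like this (rule and pool names are placeholders; changing a pool's crush_rule triggers data movement):

    # check that the hdd/ssd device classes were detected
    ceph osd crush tree --show-shadow
    # create one replicated rule per device class
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    # point each pool at the matching rule
    ceph osd pool set my_hdd_pool crush_rule replicated_hdd
    ceph osd pool set my_ssd_pool crush_rule replicated_ssd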
Would you be willing to elaborate on what configuration specifically is bad?
That would be helpful for future reference.
Yes, we have tried to access it with ceph-objectstore-tool to export the shard.
The command spits out the tcmalloc lines shown in my previous output and then
crashes with an 'Ab
Hi,
I'm running DC P4610 6TB (NVMe), no performance problems.
Not sure what the difference is with the D3-S4610.
----- Original Message -----
From: "Kai Wembacher"
To: "ceph-users"
Sent: Tuesday, March 12, 2019 09:13:44
Subject: [ceph-users] Intel D3-S4610 performance
Hi everyone,
I have an Intel D3-
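A common way to compare such drives for Ceph journal/WAL use is a single-job O_DSYNC 4k write test along these lines (destructive on the target device; /dev/sdX is a placeholder):

    fio --name=writetest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting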
On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote:
> Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> increase the amount of OSDs (partitions) like Patrick suggested. By
> default it will take 4 GiB per OSD ... Make sure you set the
> "osd_memory_target" parameter accordin
Quoting Zack Brenton (z...@imposium.com):
> On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote:
>
> > Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> > increase the amount of OSDs (partitions) like Patrick suggested. By
> > default it will take 4 GiB per OSD ... Make sure you set the "osd_memory_target" parameter accordingly.
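As a rough illustration of that sizing (values are only examples; osd_memory_target is in bytes, and the sum across OSD daemons plus headroom should stay below the node's RAM):

    # ~2 GiB per OSD daemon, set at runtime on releases with the config database
    ceph config set osd osd_memory_target 2147483648
    # on older releases the same key goes under [osd] in ceph.conf:
    #   osd_memory_target = 2147483648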
Hello fellow Cephers,
My 12.2.2 cluster is pretty full so I've been adding new nodes/OSDs.
Last week I added two new nodes with 12 OSDs each and they are still
backfilling. I have max_backfills tuned quite low across the board to
minimize client impact. Yesterday I brought two more nodes online ea
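For reference, the kind of runtime throttling being described can be applied (and later reverted) roughly like this; the values are just conservative examples:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-sleep 0.1'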