Hi All,
Is anybody facing a similar issue? Please let us know how to hide or
avoid using the CephFS monitor IP while mounting the partition.
Regards
Prabu GJ
On Wed, 20 Jul 2016 13:03:31 +0530, gjprabu wrote:
Hi Team,
We are using c
Hello cephers, I deployed a ceph-10.2.2 cluster from source. Since it is a
source deployment, I did it without ceph-deploy.
How do I deploy a BlueStore ceph cluster without ceph-deploy? There is no
official online documentation.
Where are the relevant documents?
Hi list,
I'm learning Ceph and am following
http://docs.ceph.com/docs/master/rados/operations/user-management/
to try out Ceph user management.
I created a user `client.chengwei`, which looks like below.
```
exported keyring for client.chengwei
[client.chengwei]
key = AQBC1ZlXnVRgOBAA/nO03Hr1
```
In addition, I tried `ceph auth rm`, which also failed:
```
# ceph auth rm client.chengwei
Error EINVAL:
```
--
Thanks,
Chengwei
On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote:
> Hi list,
>
> I'm learning ceph and follow
> http://docs.ceph.com/docs/master/rados/operations/user-man
Hi John,
Thanks for your reply. It is a normal Docker container that can see the mount
information like /dev/sda..., but this means the monitor IP is exposed, and for
security reasons we should avoid showing the IP address. For now we will try to
use a hostname instead of the monitor IP address, but is there an
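For what it's worth, a minimal sketch of mounting by hostname rather than by
monitor IP with the kernel client; mon1.example.com and the secret file path
are placeholders, not values from this thread:
```
# secretfile holds the plain key for the mounting user (placeholder path)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```
Note that mount.ceph may resolve the name to an IP before handing it to the
kernel, so the address can still show up in /proc/mounts inside the container.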
Hi,
I just did a test deployment using ceph-deploy rgw create after
which I've added
[client.rgw.c11n1]
rgw_frontends = “civetweb port=80”
to the config.
Using show-config I can see that it’s there:
root@c11n1:~# ceph --id rgw.c11n1 --show-config | grep civet
debug_civetweb = 1/10
rgw_fronte
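In case it helps, a small sketch of restarting the gateway after the config
change and checking the listener; the service name assumes a systemd-based
Jewel install, and the curly quotes shown above are usually just an email
artifact, ceph.conf wants plain ASCII quotes or none:
```
# restart the radosgw instance so it re-reads ceph.conf, then check the port
systemctl restart ceph-radosgw@rgw.c11n1
ss -ltn | grep ':80 '
```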
Hello,
I run a cluster with 100+ OSDs. Here, I am getting a few OSDs wrongly
marked down for a few seconds, and recovery starts; after a few seconds
these OSDs come back up again.
Any hint will help here.
Thanks
Swami
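A rough sketch of the first things to check when OSDs flap like this; osd.12
is a placeholder id and the values are only examples, but the commands and
option names exist in Jewel:
```
# where does the flapping OSD live, and what does its own log say?
ceph osd find 12
grep "wrongly marked me down" /var/log/ceph/ceph-osd.12.log

# if heartbeats are merely slow (not a firewall problem), the grace can be raised
ceph tell 'osd.*' injectargs '--osd_heartbeat_grace 30'
```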
Hello Ceph alikes :)
I have a strange issue with one PG (0.223) combined with "deep-scrub".
Whenever Ceph, or I manually, run a "ceph pg deep-scrub 0.223",
it leads to many "slow/blocked requests", so that nearly all of my VMs
stop working for a while.
This happens only to this one PG
Firewall or communication issues?
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of M Ranga Swami
Reddy [swamire...@gmail.com]
Sent: 28 July 2016 22:00
To: ceph-users
Subject: [ceph-users] osd wrongly maked as down
Hello,
hello - I use 10
I suspect the data for one or more shards on this osd's underlying
filesystem has a marginally bad sector or sectors. A read from the deep
scrub may be causing the drive to perform repeated seeks and reads of
the sector until it gets a good read from the filesystem. You might
want to look at
Hi,
We solved it by running Micha's scripts, plus we needed to run the period update
and commit commands (for some reason we had to run them as separate commands):
radosgw-admin period update
radosgw-admin period commit
Btw, we added endpoints to the json file, but I am not sure these are needed.
And I agr
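If it helps anyone else, a quick sanity check that the commit took effect (no
special flags assumed):
```
# the committed period should show up as the current period with a new epoch
radosgw-admin period get
```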
Hi,
Does anyone know a fast way for S3 users to query their total bucket
usage? 's3cmd du' takes a long time on large buckets (is it iterating
over all the objects?). 'radosgw-admin bucket stats' seems to know the
bucket usage immediately, but I didn't find a way to expose that to
end users.
Hopi
Dan van der Ster writes:
> Hi,
>
> Does anyone know a fast way for S3 users to query their total bucket
> usage? 's3cmd du' takes a long time on large buckets (is it iterating
> over all the objects?). 'radosgw-admin bucket stats' seems to know the
> bucket usage immediately, but I didn't find a
On 2016-07-28 15:26, Bill Sharer wrote:
I suspect the data for one or more shards on this osd's underlying
filesystem has a marginally bad sector or sectors. A read from the
deep scrub may be causing the drive to perform repeated seeks and
reads of the sector until it gets a good read from the
On Thu, Jul 28, 2016 at 5:33 PM, Abhishek Lekshmanan wrote:
>
> Dan van der Ster writes:
>
>> Hi,
>>
>> Does anyone know a fast way for S3 users to query their total bucket
>> usage? 's3cmd du' takes a long time on large buckets (is it iterating
>> over all the objects?). 'radosgw-admin bucket sta
We tracked the problem down to the following rsyslog configuration in our
test cluster:
*.* @@:
$ActionExecOnlyWhenPreviousIsSuspended on
& /var/log/failover.log
$ActionExecOnlyWhenPreviousIsSuspended off
It seems that the $ActionExecOnlyWhenPreviousIsSuspended directive doesn't
work well with th
Hi,
This seems pretty quick on a Jewel cluster here, but I guess the key
question is how large is large? Is it perhaps a large number of smaller
files that is slowing this down? Is the bucket index sharded / on SSD?
[root@korn ~]# time s3cmd du s3://seanbackup
1656225129419 29 objects
I'm not sure what mechanism is used, but perhaps the Admin Ops API could
provide what you're looking for.
http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage
I believe also that the usage log should be enabled for the gateway.
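To expand on that, a minimal sketch; the section name and uid are placeholders.
The usage log is enabled in the gateway's ceph.conf section, and the same
numbers the Admin Ops get-usage call returns can also be pulled with
radosgw-admin:
```
[client.rgw.gateway]
rgw enable usage log = true
```
```
# per-user usage, as recorded by the usage log
radosgw-admin usage show --uid=johndoe
```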
On Thu, Jul 28, 2016 at 12:19 PM, Sean Redmond
wrote:
> Hi
In order to use indexless (blind) buckets, you need to create a new
placement target, and then set the placement target's index_type param
to 1.
Yehuda
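For reference, a rough sketch of that procedure on Jewel; the target name
indexless-placement is made up for illustration and the JSON edits are done by
hand in an editor:
```
# 1) add a new entry, e.g. {"name": "indexless-placement", "tags": []}, under
#    placement_targets in the zonegroup
radosgw-admin zonegroup get > zonegroup.json
radosgw-admin zonegroup set < zonegroup.json

# 2) add the matching entry under placement_pools in the zone, with "index_type": 1
radosgw-admin zone get > zone.json
radosgw-admin zone set < zone.json

# 3) make the change take effect
radosgw-admin period update --commit
```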
On Tue, Jul 26, 2016 at 10:30 AM, Tyler Bischel
wrote:
> Hi there,
> We are looking at using Ceph (Jewel) for a use case that is very write
>
Can I not update an existing placement target's index_type? I had tried to
update the default pool's index type:
radosgw-admin zone get --rgw-zone=default > default-zone.json
#replace index_type:0 with index_type:1 in the default zone file, under the
default-placement entry of the placement_pools
On Thu, Jul 28, 2016 at 12:11 PM, Tyler Bischel
wrote:
> Can I not update an existing placement target's index_type? I had tried to
> update the default pool's index type:
>
> radosgw-admin zone get --rgw-zone=default > default-zone.json
>
> #replace index_type:0 to index_type:1 in the default zo
Hi,
Has anyone configured compression in RocksDB for BlueStore? Does it work?
Thanks
Pankaj
On Wed, Jul 27, 2016 at 6:37 PM, Goncalo Borges
wrote:
> Hi Greg
>
> Thanks for replying. Answer inline.
>
>
>
>>> Dear cephfsers :-)
>>>
>>> We saw some weirdness in cephfs that we do not understand.
>>>
>>> We were helping a user who complained that her batch system job
>>> outputs
>>> were
Should work fine AFAIK, let us know if it doesn't. :)
FWIW, the goal at the moment is to make the onode so dense that rocksdb
compression isn't going to help after we are done optimizing it.
Mark
On 07/28/2016 03:37 PM, Garg, Pankaj wrote:
Hi,
Has anyone configured compression in RocksDB for
I am using snappy and it is working fine with BlueStore.
Thanks & Regards
Somnath
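In case it is useful to others, a sketch of where such a setting would go; I am
assuming it is passed via the bluestore_rocksdb_options string (that option
exists, but whether this exact options string is what works for Somnath is a
guess, and setting it replaces the default option string rather than appending
to it):
```
[osd]
# hand compression=kSnappyCompression through to the RocksDB instance BlueStore embeds
bluestore rocksdb options = compression=kSnappyCompression
```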
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Thursday, July 28, 2016 2:03 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RocksDB
On Jul 25, 2016, Gregory Farnum wrote:
> * Right now, we allow users to rename snapshots. (This is newish, so
> you may not be aware of it if you've been using snapshots for a
> while.) Is that an important ability to preserve?
I recall wishing for it back in the early days (0.2?.*), when I trie
Hi Greg
For now we have to wait and see if it appears again. If it does, then at least
we can provide an strace and perform any further debugging.
We will update this thread when/if it appears again.
Cheers
G.
From: Gregory Farnum [gfar...@redhat.com]
Sent: 29 July
Hi all,
I want to get the usage of a user, so I use the command radosgw-admin usage
show, but I cannot get the usage when I use --start-date unless I subtract
16 hours.
I have rgw on both ceph01 and ceph03 (civetweb, port 7480), and the ceph
version is jewel 10.2.2.
The time zone of ceph01 and ceph03
[roo
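To make the command concrete, a sketch with placeholder uid and dates; the
16-hour shift smells like a timezone mismatch, since as far as I know the usage
log stores its timestamps in UTC, so passing UTC dates directly avoids the
manual offset:
```
# dates are matched against the usage log's (UTC) timestamps, not local time
radosgw-admin usage show --uid=testuser \
    --start-date="2016-07-28 00:00:00" --end-date="2016-07-29 00:00:00"
```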
Hello,
On Thu, 28 Jul 2016 14:46:58 +0200 c wrote:
> Hello Ceph alikes :)
>
> i have a strange issue with one PG (0.223) combined with "deep-scrub".
>
> Always when ceph - or I manually - run a " ceph pg deep-scrub 0.223 ",
> this leads to many "slow/block requests" so that nearly all of my V
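For the record, a minimal sketch of the scrub-throttling knobs that usually get
tuned for this; the option names exist in Jewel, the values are only a starting
point:
```
[osd]
# slow scrubbing down so client I/O is not starved during a deep scrub
osd scrub sleep = 0.1
osd scrub chunk max = 5
osd deep scrub stride = 1048576
```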
Hello cephers, I deployed a ceph-10.2.2 cluster from source. Since it is a
source deployment, I did it without ceph-deploy.
How do I deploy a BlueStore ceph cluster without ceph-deploy? There is no
official online documentation.
Where are the relevant documents?
Hi All,
At the moment I am setting up CI pipelines for Ceph and have run into a
small issue; I have some memory-constrained runners (2G). So, when
performing a build using do-cmake all is fine... the build might take a
while, but after an hour or two I am greeted with a 'Build succeeded'
message, I gathe
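In case it is relevant, a guess at the usual workaround for memory-constrained
runners: cap the build parallelism so compiler and linker jobs fit in 2G (plain
make options, nothing Ceph-specific; do_cmake.sh is the script in the source
tree):
```
./do_cmake.sh
cd build
# one job at a time keeps peak memory low at the cost of wall-clock time
make -j1
```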
Hi list,
I just followed the placement group guide to set pg_num for the rbd pool.
"
Less than 5 OSDs set pg_num to 128
Between 5 and 10 OSDs set pg_num to 512
Between 10 and 50 OSDs set pg_num to 4096
If you have more than 50 OSDs, you need to understand the tradeoffs and how to
calc
On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> Hi list,
>
> I just followed the placement group guide to set pg_num for the rbd pool.
>
How many other pools do you have, or is that the only pool?
The numbers mentioned are for all pools, not per pool, something that
isn't abundantly c
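As a purely illustrative worked example of how the totals add up: the guideline
aims at very roughly 100 PG copies per OSD, where PG copies means pg_num times
the replica size, summed over every pool, and the cluster warns above 300 per
OSD. So with 5 OSDs and size 3, about 160 PGs in total across all pools lands
near the target; spread over 4 pools that is something like 32-64 PGs each, not
128 each.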
Removing osd.4 and still getting the scrub problems removes its drive
from consideration as the culprit. Try the same thing again for osd.16
and then osd.28.
smartctl may not show anything out of sorts until the marginally bad
sector or sectors finally goes bad and gets remapped. The only hi
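To narrow that down without pulling OSDs one by one, a small sketch (the device
name is a placeholder): find the acting set of the PG, then check the sector
counters on each drive behind those OSDs:
```
# which OSDs currently hold PG 0.223
ceph pg map 0.223

# on the host of each of those OSDs, check the disk behind it
smartctl -a /dev/sdX | egrep -i 'Reallocated_Sector|Current_Pending_Sector|UDMA_CRC'
```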
On Thu, Jul 28, 2016 at 5:53 PM, Leo Yu wrote:
> hi all,
> i want get the usage of user,so i use the command radosgw-admin usage show
> ,but i can not get the usage when i use the --start-date unless minus 16
> hours
>
> i have rgw both on ceph01 and ceph03,civeweb:7480 port ,and the ceph versi
The same problem has been confusing me recently too; I am trying to figure out
the relationship (an equation would be best) among the number of pools, OSDs
and PGs.
For example, having 10 OSDs and 7 pools in one cluster, with
osd_pool_default_pg_num = 128, how many PGs would the health status show?
I have se
Hello,
On Fri, 29 Jul 2016 03:18:10 + zhu tong wrote:
> The same problem is confusing me recently too, trying to figure out the
> relationship (an equation would be the best) among number of pools, OSD and
> PG.
>
The pgcalc tool and the equation on that page are your best bet/friend.
htt
Right, that was the one I used to calculate osd_pool_default_pg_num in our
test cluster.
7 OSDs, 11 pools, osd_pool_default_pg_num calculated to be 256, but ceph
status shows
health HEALTH_WARN
too many PGs per OSD (5818 > max 300)
monmap e1: 1 mons at {open-kvm-app
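To see what the warning is actually counting, a small sketch that recomputes it
from a live cluster; my understanding is that the check averages pg_num times
size over all pools per OSD, so treat the exact semantics with a grain of salt:
```
# sum pg_num * size over all pools and divide by the number of OSDs
osds=$(ceph osd ls | wc -l)
total=0
for pool in $(rados lspools); do
    pg=$(ceph osd pool get "$pool" pg_num | awk '{print $2}')
    size=$(ceph osd pool get "$pool" size | awk '{print $2}')
    total=$((total + pg * size))
done
echo "average PG copies per OSD: $((total / osds))"
```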
You can do it with ceph-disk prepare --bluestore /dev/sdX
Just keep in mind that it is very unstable and will result in corruption
or other issues.
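For completeness, a minimal sketch of that route on Jewel; device names are
placeholders, and BlueStore also has to be switched on as an experimental
feature in ceph.conf before the OSD will start:
```
[global]
enable experimental unrecoverable data corrupting features = bluestore rocksdb
```
```
# prepare and activate one BlueStore OSD on an empty disk
ceph-disk prepare --bluestore /dev/sdb
ceph-disk activate /dev/sdb1
```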
On 16-07-29 04:36, m13913886...@yahoo.com wrote:
hello cepher , I use ceph-10.2.2 source deploy a cluster.
Since I am the source deployment ,
Gerald,
To me this looks like a terribly crippled environment for building Ceph, and I
would not bother building in such an environment. IMO, it's not worth it for
anybody to optimize the build process to make it work on such a crippled
environment.
And to answer your question, if you cannot add more memo
On Fri, Jul 29, 2016 at 2:20 PM, Kamble, Nitin A
wrote:
>To me this looks like a terribly crippled environment to build ceph, and I
> won’t bother building in such environment. IMO, It’s not worth for anybody to
> optimize the build process to make it work on such a crippled environment.
Po