Hi Pritha:
I added administrator caps to the user, but they didn't seem to work.
radosgw-admin user create --uid=ADMIN --display-name=ADMIN --admin --system
radosgw-admin caps add --uid="ADMIN"
--caps="user-policy=*;roles=*;users=*;buckets=*;metadata=*;usage=*;zone=*"
{
"user_id": "ADMIN",
Hi Myxingkong,
Did you add admin caps to the user (with access key id
'HTRJ1HIKR4FB9A24ZG9C') that is trying to attach the user policy, using the
command below:
radosgw-admin caps add --uid= --caps="user-policy=*"
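For example, with the uid of the requesting user filled in (ADMIN here is just
a guess based on your earlier 'user create'; substitute the actual uid):
radosgw-admin caps add --uid=ADMIN --caps="user-policy=*"
The caps can then be verified with:
radosgw-admin user info --uid=ADMIN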
Thanks,
Pritha
On Tue, Mar 12, 2019 at 7:19 AM myxingkong wrote:
> Hi Pritha:
>
Hi Pritha:
I was unable to attach the permission policy through S3curl, which returned an
HTTP 403 error.
./s3curl.pl --id admin -- -s -v -X POST
"http://192.168.199.81:7480/?Action=PutUserPolicy&PolicyName=Policy1&UserName=TESTER&PolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\
On 9/03/19 10:07 PM, Victor Hooi wrote:
> Hi,
>
> I'm setting up a 3-node Proxmox cluster with Ceph as the shared storage,
> based around Intel Optane 900P drives (which are meant to be the bee's
> knees), and I'm seeing pretty low IOPS/bandwidth.
We found that CPU performance, specifically power
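A typical example, if power saving turns out to be the culprit, is pinning the
CPU frequency governor (exact tooling depends on the distribution):
cpupower frequency-set -g performance
cpupower frequency-info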
I'm wondering if the 'radosgw-admin bucket check --fix' command is broken in
Luminous (12.2.8)?
I'm asking because I'm trying to reproduce a situation we have on one of our
production clusters and it doesn't seem to do anything. Here are the steps of my
test:
1. Create a bucket with 1 million o
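(For reference, a typical invocation would be along the lines of
radosgw-admin bucket check --fix --bucket=<bucket-name>
optionally with --check-objects as well; the bucket name is a placeholder.)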
Hello Cephers,
I am trying to find the cause of multiple slow ops that happened on my
small cluster. I have a 3-node cluster with 9 OSDs:
Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
128 GB RAM
Each OSD is SSD Intel DC-S3710 800GB
It runs mimic 13.2.2 in containers.
Cluster was operating normally for 4 mo
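For reference, the admin-socket commands available for digging into these
(osd id is just an example) would be something like:
ceph daemon osd.0 ops
ceph daemon osd.0 dump_historic_ops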
These options aren't needed: numjobs is 1 by default, and RBD has no "sync"
concept at all. Operations are always "sync" by default.
In fact even --direct=1 may be redundant because there's no page cache
involved. However I keep it just in case - there is the RBD cache, what if
one day fio g
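For completeness, the command with those extra flags spelled out (they should
be no-ops per the above) would be:
fio -ioengine=rbd -direct=1 -sync=1 -numjobs=1 -name=test -bs=4k -iodepth=1
-rw=randwrite -pool=bench -rbdname=testimg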
how about adding: --sync=1 --numjobs=1 to the command as well?
On Sat, Mar 9, 2019 at 12:09 PM Vitaliy Filippov wrote:
> There are 2:
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite
> -pool=bench -rbdname=testimg
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -i
Hi Casey,
We're still trying to figure this sync problem out; if you could possibly
tell us anything further, we would be deeply grateful!
Our errors are coming from 'data sync'. In `sync status` we pretty
constantly show one shard behind, but a different one each time we run it.
Here's a paste
I am looking at one problematic PG in my disaster scenario; see
below:
root@monitor~# ceph pg ls-by-pool cinder_sata | grep 5.5b7
5.5b7 26911 29 53851 107644 29 11224818892853258
53258 active+recovering+undersized+degraded+remapped 2019-03-11
14:05:29.857657 950
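Querying the PG directly should show the full peering/recovery detail, e.g.:
ceph pg 5.5b7 query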
Hello all
I have a 'would be nice' use case that I'm wondering if Ceph can
handle. The goal is to allow an otherwise ordinary Ceph server, with a
little 'one-off' special-purpose extra hardware, to provide at
least some value when off-host networking is down, while still
taking
> On 11.03.2019 at 12:21, Konstantin Shalygin wrote:
>
>
>> Hello list,
>>
>> I upgraded to mimic some time ago and want to make use of the upmap feature
>> now.
>> But I can't do "ceph osd set-require-min-compat-client luminous" as there
>> are still pre-luminous clients connected.
>>
>>
Hi all,
we had some assistance with our SSD crash issue outside of this
mailing list - which is not resolved yet
(http://tracker.ceph.com/issues/38395) - but there's one thing I'd
like to ask the list.
I noticed that a lot of the OSD crashes show a correlation to MON
elections. For the
Hello Daniel,
I think you will not be able to avoid a tedious manual cleanup job...
Or the other way is to delete the whole pool (ID 18).
The manual cleanup means taking all the OSDs from "probing_osds", stopping
them one by one and removing the shards of PGs 18.1e and 18.c (using
ceph-objectstore-tool).
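A sketch of the per-shard removal, with the OSD data path as a placeholder
(run only against a stopped OSD; some versions also require --force):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N --pgid 18.1e --op remove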
Afte
Hi David,
I know the difference between the cluster network and the public network. I usually
split them into VLANs for statistics, isolation and priority. What I need to
know is what kind of RDMA messaging Ceph does. Is it only between OSDs, or does
it involve other daemons and clients too?
Best regards,
On Mon,
I can't speak to the rdma portion. But to clear up what each of these
does... the cluster network is only traffic between the osds for
replicating writes, reading EC data, as well as backfilling and recovery
io. Mons, mds, rgw, and osds talking with clients all happen on the public
network. The gen
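For reference, a minimal ceph.conf sketch separating the two (subnets are
placeholders):
[global]
public network = 192.168.0.0/24
cluster network = 192.168.1.0/24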
Ceph has been getting better and better about prioritizing this sort of
recovery, but few of those optimizations are in Jewel, which has been out
of the support cycle for about a year. You should look into upgrading to
mimic, where you should see a pretty good improvement on this sort of
prioriti
The problem with clients on osd nodes is for kernel clients only. That's
true of krbd and the kernel client for cephfs. The only other reason not to
run any other Ceph daemon in the same node as osds is resource contention
if you're running at higher CPU and memory utilizations.
On Sat, Mar 9, 201
Hello list,
I upgraded to mimic some time ago and want to make use of the upmap feature now.
But I can't do "ceph osd set-require-min-compat-client luminous" as there are
still pre-luminous clients connected.
The cluster was originally created from jewel release.
When I run "ceph features", I
Hi Myxingkong,
http://docs.ceph.com/docs/nautilus/mgr/restful/ is for the Manager module
of ceph. This is not related to rgw.
Please try attaching a policy by configuring s3curl tool.
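For example, a minimal ~/.s3curl entry would look something like this (secret
key elided; the id is assumed to be the access key you sign requests with):
%awsSecretAccessKeys = (
    admin => {
        id  => 'HTRJ1HIKR4FB9A24ZG9C',
        key => '<secret key>',
    },
);
The RGW endpoint (192.168.199.81 here) also needs to be added to the @endpoints
list in s3curl.pl so it is treated as an endpoint rather than a bucket name.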
Thanks,
Pritha
On Mon, Mar 11, 2019 at 3:43 PM myxingkong wrote:
> Hi Pritha:
>
> This is the documentation f
Hi Pritha:
This is the documentation for configuring restful modules: http://docs.ceph.com/docs/nautilus/mgr/restful/
The command given according to the official documentation is to attach the permission policy through the REST API.
This is the documentation for STS lite: htt
Hi Myxingkong,
Can you explain what you mean by 'enabling restful modules', particularly
which document you are referring to?
Right now there is no other way to attach a permission policy to a user.
There is work in progress for adding functionality to RGW using which such
calls can be scripted
Hello:
I want to use the GetSessionToken method to get the temporary credentials, but according to the answer given in the official documentation, I need to attach a permission policy to the user before I can use the GetSessionToken method.
This is the command for the additional permiss
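Once the policy is attached, the plan is to call GetSessionToken against RGW,
e.g. with the AWS CLI:
aws sts get-session-token --endpoint-url http://192.168.199.81:7480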
Well, the drive supports trim:
# hdparm -I /dev/sdd|grep TRIM
* Data Set Management TRIM supported (limit 8 blocks)
* Deterministic read ZEROs after TRIM
But fstrim or discard is not enabled (I have checked both mount options and
services/cron). I'm using default
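(For reference, the checks were roughly: grep discard /proc/mounts for the
mount options, and systemctl list-timers | grep -i fstrim for a periodic
fstrim job.)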
On 3/8/19 4:17 AM, Pardhiv Karri wrote:
> Hi,
>
> We have a Ceph cluster with rack as the failure domain, but the racks are so
> imbalanced that we are not able to utilize the maximum of the
> storage allocated, as some OSDs in the small racks are filling up too fast
> and causing Ceph to go into war