Thanks Casey,
It works using both --bucket XXX_name and --bucket-id XXX_id, for both the
radosgw Hammer and Jewel versions.
But the documentation for the REST admin operations is completely wrong:
http://docs.ceph.com/docs/master/radosgw/adminops/#link-bucket
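For anyone who finds this thread later, the radosgw-admin CLI equivalent that
takes the same pair of arguments looks roughly like this (the uid, bucket name
and bucket-id values are placeholders, not the real ones; check the help output
on your release):

# Placeholders only; verify the exact flags with radosgw-admin help
radosgw-admin bucket link --uid=newowner --bucket=mybucket --bucket-id=default.12345.1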
Cheers,
Valery
On 21/02/17 20:26, Cas
Alright, so I've done a bit more research. It looks like the DIMM-on-NVMe
product never reached the public market, apart from some NetApp storage appliances.
However, the product has evolved to use the memory slots on the
motherboard directly and has become an NVDIMM project. HPE is said to launch those
Oh, and NVDIMM technology is almost ready for the mass market; the Linux kernel
will fully support it starting with 4.6 (not so far away). I think NVDIMM
hardware is much cheaper than a RAID card, right? :P
http://www.admin-magazine.com/HPC/Articles/NVDIMM-Persistent-Memory
This Hammer point release fixes several bugs and adds a couple of new
features.
We recommend that all hammer v0.94.x users upgrade.
Please note that Hammer will be retired when Luminous is released later this
spring. Until then, the focus will be primarily
on bugs that would hi
Found a workaround:
s3cmd -c s3cfg setlifecycle lifecycle_configuration.xml
s3://my-new-bucket --signature-v2
Probably the aws-sdk gem is not compatible with this feature.
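In case it helps, the lifecycle_configuration.xml is just a standard S3
LifecycleConfiguration document; a minimal sketch (rule id, prefix and
expiration days are only examples):

# Example lifecycle document; rule id, prefix and expiration are placeholders
cat > lifecycle_configuration.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>expire-old-logs</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
EOF
# Apply it with SigV2, which is what worked here
s3cmd -c s3cfg setlifecycle lifecycle_configuration.xml s3://my-new-bucket --signature-v2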
On Tue, Feb 21, 2017 at 4:35 PM, Anton Iakimov wrote:
> Doing this (aws-sdk ruby gem):
>
> s3_client.put_bucket_lifecycle_co
> On 21 February 2017 at 15:35, george.vasilaka...@stfc.ac.uk wrote:
>
>
> I have noticed something odd with the ceph-objectstore-tool command:
>
> It always reports 'PG X not found' even on healthy OSDs/PGs. The 'list' op
> works on both healthy and unhealthy PGs.
>
Are you sure you are supplying t
Brad Hubbard pointed out on the bug tracker
(http://tracker.ceph.com/issues/18960) that, for EC, we need to add the shard
suffix to the PGID parameter in the command, e.g. --pgid 1.323s0
The command now works and produces the same output as PG query.
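For the archives, the working invocation looks roughly like this (the data path
is only an example, and the OSD must be stopped while the tool runs):

# Example only; point --data-path at the OSD you are inspecting
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 \
    --pgid 1.323s0 --op info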
To avoid spamming the list I've put the outpu
> On 22 February 2017 at 14:24, george.vasilaka...@stfc.ac.uk wrote:
>
>
> Brad Hubbard pointed out on the bug tracker
> (http://tracker.ceph.com/issues/18960) that, for EC, we need to add the shard
> suffix to the PGID parameter in the command, e.g. --pgid 1.323s0
> The command now works and
Hi Cephers,
We are running the latest Jewel (10.2.5). Bucket index sharding is set to 8.
All rgw pools except the data pool are placed on SSD.
Today I've done some testing and ran a bucket index check on a bucket with
~120k objects:
# radosgw-admin bucket check -b mybucket --fix --check-objects
--rgw-realm=myrealm
On Wed, Feb 22, 2017 at 1:52 PM, Florent B wrote:
> On 02/21/2017 06:43 PM, John Spray wrote:
>> On Tue, Feb 21, 2017 at 5:20 PM, Florent B wrote:
>>> Hi everyone,
>>>
>>> I use a Ceph Jewel cluster.
>>>
>>> I have a CephFS with some directories at root, on which I defined some
>>> layouts:
>>>
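(To make sure we are talking about the same thing: by "layouts" I assume you
mean the directory layout vxattrs, i.e. something along the lines of the
following, where the pool name and mount point are just placeholders.)

# Illustration only; pool name and mount point are placeholders
setfattr -n ceph.dir.layout.pool -v cephfs-ssd-data /mnt/cephfs/somedir
getfattr -n ceph.dir.layout /mnt/cephfs/somedir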
Hi Cephers,
We are testing an rgw multisite solution between two DCs. We have one zonegroup
and two zones. At the moment all writes/deletes are done only to the primary
zone.
Sometimes not all of the objects get replicated. We've written a Prometheus
exporter to check replication status. It gives us each bucke
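(Besides the exporter, we also sanity-check against the built-in sync status
commands; the bucket name below is a placeholder, and the per-bucket command
may not be available on every build.)

# Overall replication state of the local zone
radosgw-admin sync status
# Per-bucket state, if your build has it; the bucket name is a placeholder
radosgw-admin bucket sync status --bucket=mybucket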
So what I see there is this for osd.307:
"empty": 1,
"dne": 0,
"incomplete": 0,
"last_epoch_started": 0,
"hit_set_history": {
"current_last_update": "0'0",
"history": []
}
}
last_epoch_started is 0 and empty is 1. The other OSDs are reporting
last_epoch_st
> On 22 February 2017 at 15:12, "Ammerlaan, A.J.G."
> wrote:
>
>
> Hello,
>
> Many thanks for your info!
Np! Please use reply-all so the message goes back to the list and not only to
me.
>
> Do we need to configure the other MDSes in /etc/ceph/ceph.conf?
>
> When we do: cdeploy#ceph-deploy m
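For what it's worth, the way I add further MDS daemons with ceph-deploy is
simply the following (the hostnames are placeholders for your MDS nodes):

# Hostnames are placeholders for the additional MDS nodes
ceph-deploy mds create mdshost2 mdshost3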
On Wed, Feb 22, 2017 at 4:06 PM, Marius Vaitiekunas <
mariusvaitieku...@gmail.com> wrote:
> Hi Cephers,
>
> We are running the latest Jewel (10.2.5). Bucket index sharding is set to 8.
> All rgw pools except the data pool are placed on SSD.
> Today I've done some testing and ran a bucket index check on a bucket with
:0.000200:s3:GET
/:list_bucket:authorizing
2017-02-22 15:50:09.325813 7f0ec67fc700 10 v4 signature format =
c535db1ceb4ed3c7eb68f2f9a35ad61849631a1bb6391dcf314f5aa7f717b3fd
2017-02-22 15:50:09.325827 7f0ec67fc700 10 v4 credential format =
0213b30621e74120b73d11a5e99240f9/20170222/US/s3/aws4_requ
Hi,
Following a change made to set_priority() yesterday, Ceph RDMA is broken at
latest master.
We are working on a fix.
Please use a version prior to eb0f62421dd9722055b3a87bfbe129d1325a723f.
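If you build from source, one way to get a tree from just before that change
(plain git, using the hash above):

# Check out the parent of the breaking commit
git checkout eb0f62421dd9722055b3a87bfbe129d1325a723f^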
Thanks
Adir Lev
Mellanox
On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
wrote:
> Hi Cephers,
>
> We are testing an rgw multisite solution between two DCs. We have one zonegroup
> and two zones. At the moment all writes/deletes are done only to the primary
> zone.
>
> Sometimes not all of the objects get replicated. We've written
On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub
wrote:
> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
> wrote:
> > Hi Cephers,
> >
> > We are testing an rgw multisite solution between two DCs. We have one
> zonegroup
> > and two zones. At the moment all writes/deletes are done only to pr
On Wed, Feb 22, 2017 at 11:41 AM, Marius Vaitiekunas
wrote:
>
>
> On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
>> wrote:
>> > Hi Cephers,
>> >
>> > We are testing an rgw multisite solution between two DCs. We have one
>> > z
On Mon, Feb 20, 2017 at 02:12:52PM PST, Gregory Farnum spake thusly:
> Hmm, I went digging in and sadly this isn't quite right.
Thanks for looking into this! This is the answer I was afraid of. Aren't
all of those blog entries which talk about using repair and the ceph
docs themselves putting peo
So I updated SUSE Leap, and now I'm getting the following error from
Ceph. I know I need to disable some features, but I'm not sure which
ones. It looks like 14, 57, and 59, but I can't figure out what
they correspond to, nor, therefore, how to turn them off.
libceph: mon0 10.0.0.67:6789 feature
Hi,
I have a new strange error in my Ceph cluster:
# ceph -s
health HEALTH_WARN
'default.rgw.buckets.data.cache' at/near target max
# ceph df
default.rgw.buckets.data        10  20699G  27.86  53594G  50055267
default.rgw.buckets.data.cache 1115E
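(For context: that warning is normally raised when a cache-tier pool approaches
its target_max settings; assuming the usual pool keys, they can be inspected
like this.)

# Inspect the cache pool's configured limits (pool name taken from the warning)
ceph osd pool get default.rgw.buckets.data.cache target_max_bytes
ceph osd pool get default.rgw.buckets.data.cache target_max_objects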