Hello,
On Mon, 25 Apr 2016 13:23:04 +0800 lin zhou wrote:
> Hi, Cephers:
>
> Recently, I ran into a problem with a full cluster, and I have been using reweight
> to adjust it. But now I want to increase pg_num before I can add new nodes into the
> cluster.
>
How many more nodes, OSDs?
> current pg_num is 2048, and tot
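For the pg_num side of this, a minimal sketch of how the increase is usually done, assuming the data lives in the default "rbd" pool (pool name and target count are examples only); pg_num can only be increased, and pgp_num has to follow:

# check utilization and the current PG count first
ceph df
ceph osd df
ceph osd pool get rbd pg_num

# raise pg_num, then bring pgp_num up to match, and let the cluster
# finish backfilling before taking further steps
ceph osd pool set rbd pg_num 4096
ceph osd pool set rbd pgp_num 4096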
Hi Mike,
On 21.04.2016 at 15:20, Mike Miller wrote:
Hi Udo,
thanks. Just to make sure, I further increased the readahead:
$ sudo blockdev --getra /dev/rbd0
1048576
$ cat /sys/block/rbd0/queue/read_ahead_kb
524288
No difference here. The first one is in sectors (512 bytes), the second one in KB.
oops, sorr
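For reference, the two values above describe the same setting: 1048576 sectors x 512 bytes = 524288 KB = 512 MB of readahead. A minimal sketch of setting it either way, using the rbd0 device from the example:

# blockdev works in 512-byte sectors, the sysfs knob in KB
blockdev --setra 1048576 /dev/rbd0                  # 1048576 * 512 B = 512 MB
echo 524288 > /sys/block/rbd0/queue/read_ahead_kb   # 524288 KB = 512 MB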
Hi,
I thought that xfs fragmentation or leveldb (gc list growing, locking,
...) could be a problem.
Do you have any experience with this?
---
Regards
Dominik
2016-04-24 13:40 GMT+02:00 :
> I do not see any issue with that
>
> On 24/04/2016 12:39, Dominik Mostowiec wrote:
>> Hi,
>> I'm curious if
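On the xfs fragmentation part of the question, a quick way to measure it is the frag report in xfs_db; the device and mount point below are placeholders for an OSD data partition:

# read-only fragmentation report for the OSD's XFS filesystem
xfs_db -r -c frag /dev/sdb1

# online defragmentation, if the fragmentation factor turns out to be high
xfs_fsr /var/lib/ceph/osd/ceph-0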
Hello!
Running a completely new test cluster with status HEALTH_OK, I get the same error.
I'm running Ubuntu 14.04 with kernel 3.16.0-70-generic and ceph 10.2.0 on all
hosts.
The rbd-nbd mapping was done on the same host, which runs one OSD and a mon. (This is
a small cluster with 4 virtual hosts and on
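For anyone trying to reproduce this, the rbd-nbd workflow in question looks roughly like the following (the image name is only an example):

# create a test image and map it through the NBD client
rbd create rbd/nbdtest --size 10240      # 10 GB
rbd-nbd map rbd/nbdtest                  # prints the /dev/nbdX device on success
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0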
Hi,
we are testing Jewel in our QA environment (from Infernalis to Hammer); the
upgrade went fine, but radosgw did not start.
The error also appears with radosgw-admin:
# radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa
2016-04-25 12:13:33.425481 7fc757fad900 0 error in read_i
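One way to get more out of that truncated error is to re-run the same command with the generic Ceph debug switches (nothing below is specific to this setup):

# verbose rgw and messenger logging to see where the zone/region lookup fails
radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa \
    --debug-rgw=20 --debug-ms=1 2>&1 | tee radosgw-admin.log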
On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote:
> Hi,
>
> we are testing Jewel in our QA environment (from Infernalis to Hammer); the
> upgrade went fine, but radosgw did not start.
>
> The error also appears with radosgw-admin:
>
> # radosgw-admin user info --uid="images" --rgw-re
On Thursday, April 21, 2016, Serkan Çoban wrote:
> I cannot install a kernel that is not supported by Red Hat on the
> clients.
> Is there any other way to increase fuse performance with the default 6.7 kernel?
> Maybe I can compile Jewel ceph-fuse packages for RHEL 6; would this make a
> difference?
It mi
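If a newer kernel really is off the table, the ceph-fuse side can still be tuned through the [client] section of ceph.conf; a sketch with illustrative values only (monitor address and mount point are placeholders):

# append client-side readahead/cache settings on the client host
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
    client readahead max bytes = 4194304
    client readahead max periods = 64
    client oc size = 209715200
EOF

# remount to pick up the settings
ceph-fuse -m mon1:6789 /mnt/cephfs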
On Mon, Apr 25, 2016 at 1:53 PM, Stefan Lissmats wrote:
> Hello!
>
> Running a completely new test cluster with status HEALTH_OK, I get the same
> error.
> I'm running Ubuntu 14.04 with kernel 3.16.0-70-generic and ceph 10.2.0 on
> all hosts.
> The rbd-nbd mapping was done on the same host having o
On Thursday, April 21, 2016, Benoît LORIOT wrote:
> Hello,
>
> we want to disable the readproxy cache tier, but before doing so we would like
> to make sure we won't lose data.
>
> Is there a way to confirm that flush actually writes objects to disk?
>
> We're using ceph version 0.94.6.
>
>
> I tried
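A minimal sketch of how that is usually verified, assuming a cache pool called "cachepool" in front of a base pool called "basepool" (both names are placeholders):

# flush and evict everything the cache tier still holds
rados -p cachepool cache-flush-evict-all

# the objects should then be visible in the base pool, and the cache
# pool's object count in "ceph df detail" should drop accordingly
rados -p basepool ls | head
ceph df detail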
Hi Yehuda
I created a test 3xVM setup with Hammer and one radosgw on the (separate)
admin node, creating one user and some buckets.
I upgraded the VMs to jewel and created a new radosgw on one of the nodes.
The object store didn't seem to survive the upgrade:
# radosgw-admin user info --uid=testuser
(sorry for resubmission, adding ceph-users)
On Mon, Apr 25, 2016 at 9:47 AM, Richard Chan
wrote:
> Hi Yehuda
>
> I created a test 3xVM setup with Hammer and one radosgw on the (separate)
> admin node, creating one user and some buckets.
>
> I upgraded the VMs to jewel and created a new radosgw on one
Hello again!
I understand that it's not recommended to run an OSD and rbd-nbd on the same host,
and I actually moved my rbd-nbd to a completely clean host (same kernel and OS,
though), but with the same result.
I hope someone can resolve this; you seem to indicate it is some kind of
known error but
On Mon, Apr 25, 2016 at 7:47 PM, Stefan Lissmats wrote:
> Hello again!
>
> I understand that it's not recommended to run an OSD and rbd-nbd on the same
> host, and I actually moved my rbd-nbd to a completely clean host (same kernel
> and OS, though), but with the same result.
>
> I hope someone can reso
This is how we use Ceph/radosgw. I'd say our cluster is not that
reliable, but it's probably mostly our fault (no SSD journals, etc.).
However, note that deletes are very slow in Ceph. We put millions of
objects in very quickly, and they are very slow to delete again, especially
from RGW because
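Part of that slowness is RGW's deferred garbage collection; a sketch of inspecting and driving it by hand:

# list objects waiting for garbage collection (including not-yet-expired ones)
radosgw-admin gc list --include-all

# run a collection pass now instead of waiting for the periodic processor
radosgw-admin gc process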
> > How do you actually do that?
>
> What does 'radosgw-admin zone get' return?
>
> Yehuda
>
[root@node1 ceph]# radosgw-admin zone get
unable to initialize zone: (2) No such file or directory
(I don't have any rgw configuration in /etc/ceph/ceph.conf; this is from a
clean
ceph-deploy rgw create
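To see what the upgraded gateway actually has to work with, it can help to list the zone/zonegroup objects and the rgw pools directly; a minimal sketch (zonegroup is the Jewel name for what Hammer called a region):

# what zones and zonegroups does the cluster know about?
radosgw-admin zone list
radosgw-admin zonegroup list

# which rgw pools exist (old .rgw.* ones and/or the new default.* ones)?
rados lspools | grep rgw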
I managed to reproduce the issue, and there seem to be multiple
problems. Specifically we have an issue when upgrading a default
cluster that hasn't had a zone (and region) explicitly configured
before. There is another bug that I found
(http://tracker.ceph.com/issues/15597) that makes things even
Hi Cephers:
I was reading the rgw source code of the Jewel release and found a new field
"std::string tenant" in struct rgw_bucket. Where is the new tenant field used?
Is it used in the S3 API?
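For context, the tenant field belongs to the RGW multi-tenancy support introduced in Jewel; a minimal sketch of how it shows up in radosgw-admin (tenant and uid below are made-up examples), with S3 clients then addressing such buckets as "tenant:bucket":

# create a user that lives under a tenant namespace
radosgw-admin user create --tenant=testtenant --uid=testuser \
    --display-name="Test User"

# the same uid can exist again under another tenant; buckets are
# namespaced per tenant
radosgw-admin user info --tenant=testtenant --uid=testuser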
Quick questions:
1. Should this script be run on a pre-Jewel setup (e.g. revert the test VMs) or
*after* Jewel attempted to read the no-zone/no-region Hammer setup and created
the default.* pools?
2. Should the radosgw daemon be running when executing the script?
Thanks!
On Tue, Apr 26, 2016 at 8:06
Dear Cephers:
I got the same issue under Ubuntu 14.04, even when I try to use image format
'1'.
# modinfo rbd
filename:       /lib/modules/3.13.0-85-generic/kernel/drivers/block/rbd.ko
license:        GPL
author:         Jeff Garzik
description:    rados block device
author:         Yehuda Sadeh
Hello!
It seems you are referring to an earlier message, but I can't find it.
It doesn't look like you have created image format 1 images.
I have created images in Jewel (10.2.0 and also some earlier releases) with the
switch --image-format 1 and it seems to work perfectly, even if it's a deprecated
switch.
Hello:
Sorry, I forgot to paste the results for image format 1. And I still
cannot mount the format 1 or format 2 block device on the Ubuntu 14.04 client,
whose kernel is 3.13.0-85-generic #129-Ubuntu.
##
# rbd create block_data/data03 -s 10G --image-format 1
rbd: image format 1 is deprecated
#
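With Jewel defaults, a common reason format 2 images refuse to map on old kernels is the set of image features rather than the format itself; a sketch of the usual workarounds (data04 is a made-up name, data02 stands for any existing format 2 image):

# create a format 2 image with only the "layering" feature,
# which old krbd clients understand
rbd create block_data/data04 --size 10240 --image-feature layering

# or strip the newer features from an existing format 2 image,
# disabling dependent features before the ones they depend on
rbd feature disable block_data/data02 deep-flatten fast-diff object-map exclusive-lock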