On 23/06/2016 08:25, Sarni Sofiane wrote:
> Hi Florian,
>
> On 23.06.16 06:25, "ceph-users on behalf of Florian Haas"
> wrote:
>
>> On Wed, Jun 22, 2016 at 10:56 AM, Yoann Moulin wrote:
>>> Hello Florian,
>>>
On Tue, Jun 21, 2016 at 3:11 PM, Yoann Moulin wrote:
> Hello,
>
>>
In 'rule replicated_ruleset':
..
...
step chooseleaf firstn 0 type *osd*
...
...
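To apply that kind of change, the usual workflow is to decompile the CRUSH
map, edit the rule, and inject it back. A minimal sketch (file names are
examples; pick the failure-domain type, osd or host, that actually fits your
cluster):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: in rule replicated_ruleset, change
  #   step chooseleaf firstn 0 type osd
  # to the desired bucket type, e.g. "type host"
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

Note that changing the rule will remap PGs and trigger data movement.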
2016-06-22 19:38 GMT+08:00 min fang :
> Thanks. Actually, I created a pool with more PGs and also hit this problem.
> Following is my crush map; please help point out how to change the crush
> ruleset? Thanks.
>
> #begin cr
Hi Christian,
So, it seems that at first I must set target_max_bytes to the Max. Available
size divided by the number of cache pools, accounting for the worst-case
number of OSDs down, isn't it? And then after some while, I adjust
target_max_bytes per cache pool by monitoring "ceph df detail" output
Hello,
On Thu, 23 Jun 2016 14:28:30 +0700 Lazuardi Nasution wrote:
> Hi Christian,
>
> So, it seems that at first I must set target_max_bytes to the Max. Available
> size divided by the number of cache pools, accounting for the worst-case
> number of OSDs down, isn't it?
Correct.
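For reference, a minimal sketch of setting and then re-checking the value
(the pool name "cachepool" and the byte count are just examples):

  # e.g. available bytes / number of cache pools, minus headroom for failed OSDs
  ceph osd pool set cachepool target_max_bytes 500000000000
  ceph df detail        # watch USED / MAX AVAIL and adjust over time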
>And then af
Hi, Brad:
This is the output of "ceph osd crush show-tunables -f json-pretty"
{
"choose_local_tries": 0,
"choose_local_fallback_tries": 0,
"choose_total_tries": 50,
"chooseleaf_descend_once": 1,
"chooseleaf_vary_r": 1,
"chooseleaf_stable": 0,
"straw_calc_version": 1,
Hi
is there any possibility to make RadosGW account stats for something
similar to "radosgw.objects.(incoming|outgoing).bytes", like
Swift's meters storage.objects.(incoming|outgoing).bytes?
Thanks
J.
On Thu, Jun 23, 2016 at 6:38 PM, 王海涛 wrote:
> Hi, Brad:
> This is the output of "ceph osd crush show-tunables -f json-pretty"
> {
> "choose_local_tries": 0,
> "choose_local_fallback_tries": 0,
> "choose_total_tries": 50,
> "chooseleaf_descend_once": 1,
> "chooseleaf_vary_r": 1,
It's already supported since the OpenStack Kilo version of Ceilometer. We
have added only 6 meters from radosgw.
url -
https://blueprints.launchpad.net/ceilometer/+spec/ceph-ceilometer-integration
Thanks
Swami
On Thu, Jun 23, 2016 at 2:37 PM, magicb...@hotmail.com
wrote:
> Hi
>
> is there any possibi
Hi,
Did you check Admin API of Rados gateway?
http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage
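The same usage data can also be pulled with the radosgw-admin CLI; a rough
sketch (user id and dates are examples, and usage logging must be enabled on
the gateway for data to appear):

  radosgw-admin usage show --uid=johndoe --start-date=2016-06-01 --end-date=2016-06-23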
On Thu, Jun 23, 2016 at 5:07 PM, magicb...@hotmail.com <
magicb...@hotmail.com> wrote:
> Hi
>
> is there any possibility to make RadosGW account stats for something
> similar to "rados
Hi
yes, radosgw keeps stats, but those stats aren't pushed into the telemetry
service, I think.
On 23/06/16 11:55, c.y. lee wrote:
Hi,
Did you check Admin API of Rados gateway?
http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage
On Thu, Jun 23, 2016 at 5:07 PM, magicb...@hotmail.c
Hi
I'm running Liberty and Ceph Hammer, and these are my available meters:
* ceph.storage.objects
* ceph.storage.objects.size
* ceph.storage.objects.containers
* ceph.storage.containers.objects
* ceph.storage.containers.objects.size
* ceph.storage.api.request
I'd like to have some
"radosgw.obje
yes...use rgw admin APIs for getting the meters.
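One caveat worth noting: the usage log has to be enabled on the gateway for
those admin APIs to return anything. A sketch of the relevant ceph.conf
options (the section name depends on how your gateway instance is named):

  [client.radosgw.gateway]
  rgw enable usage log = true
  rgw usage log tick interval = 30
  rgw usage log flush threshold = 1024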
On Thu, Jun 23, 2016 at 3:25 PM, c.y. lee wrote:
> Hi,
>
> Did you check Admin API of Rados gateway?
>
> http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage
>
>
> On Thu, Jun 23, 2016 at 5:07 PM, magicb...@hotmail.com
> wrote:
>>
>> Hi
>>
Those meters are pushed into Ceilometer; the patches have been in since the Kilo version.
On Thu, Jun 23, 2016 at 4:05 PM, magicb...@hotmail.com
wrote:
> Hi
>
> yes, radosgw keeps stats but those stats aren't pushed into telemetry
> service I think..
>
>
> On 23/06/16 11:55, c.y. lee wrote:
>
> Hi,
>
Hi,
I am having issues adding RedHat Ceph (10.2.1) nodes to Calamari 1.3-7.
Below are more details.
1. On RHEL 7.2 VMs, configured a Ceph (10.2.1) cluster with 3 mons and 19
osds.
2. Configured Calamari 1.3-7 on one node. Installation was done through
ICE_SETUP with ISO Image. Diamond packages we
Gentle reminder on my question. It would be great if you could suggest a
workaround for achieving this.
Thanks & Regards,
Manoj
On Tue, Jun 21, 2016 at 5:25 PM, Venkata Manojawa Paritala <
manojaw...@vedams.com> wrote:
> Hi,
>
> In Ceph cluster, currently we are seeing that COSBench i
A single image can be as large as you want, or at least as large as your
pool size.
But you want to take into consideration the maximum size allowed by the
filesystem on top of your volume and the maximum size supported by your OS
vendor, if any.
And even if supported and even considered the resilie
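As a hedged illustration only, creating a deliberately large image is just a
matter of the --size argument (pool and image names are examples; on
hammer/jewel-era releases the size below is given in MB, i.e. roughly 100 TB):

  rbd create data/bigvol --size 104857600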
Hi All,
I have created an image but cannot map it; does anybody know what the
problem could be?
sudo rbd map data/data_01
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the
kernel with "rbd feature disable".
In some cases useful info is found
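For context, the usual resolution is to disable the image features the
kernel client doesn't support; a hedged sketch, assuming the image carries
the jewel-era default feature set on top of layering:

  rbd feature disable data/data_01 exclusive-lock object-map fast-diff deep-flatten
  sudo rbd map data/data_01

Checking "dmesg | tail" after a failed map shows which feature the kernel
actually complained about.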
On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela wrote:
> cluster_master@nodeC:~$ rbd --image data_01 -p data info
> rbd image 'data_01':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.105f2ae8944a
> format: 2
> features: layering, exclusive-lock, obj
it worked thanks:
cluster_master@nodeC:~$ sudo rbd map data/data_01
/dev/rbd0
On Thu, Jun 23, 2016 at 4:37 PM, Jason Dillaman wrote:
> On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela
> wrote:
> > cluster_master@nodeC:~$ rbd --image data_01 -p data info
> > rbd image 'data_01':
> > size 102
Hey cephers,
If you missed Tuesday’s Ceph Tech Talk by Sage on the new Bluestore
backend for Ceph, it is now available on Youtube:
https://youtu.be/kuacS4jw5pM
We love it when our community shares what they are doing, so if you
would like to give a Ceph Tech Talk in the future, please drop me a
l
Hello,
Trying out Ceph for the first time, following the installation guide using
ceph-deploy. All goes well, "ceph -s" reports health as ok at the
beginning, but shortly after it shows all placement groups as inactive, and
the 2 osds are down and out.
I understand this could be for a variety of
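A few hedged first steps for narrowing that down (standard commands; the log
path assumes the default layout):

  ceph health detail        # which PGs are stuck, and why
  ceph osd tree             # confirm the OSDs really are down/out
  # then, on an OSD host, look at the daemon's own log:
  tail -n 50 /var/log/ceph/ceph-osd.0.log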
Hello,
I am using a script that calls the librados ioctx.trunc() method, and I
am getting an
errno ENOTSUP
I can read/write to the pool, and I can call the trunc method on a
replicated pool.
I am using version 0.80.7
I am just wondering if this is intended? Or is there maybe something wrong
w
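One way to check whether the error follows the pool type rather than the
script is to reproduce it with the rados CLI; a hedged sketch, assuming the
failing pool is erasure-coded (object truncate is not supported there) and
using example pool/object names:

  rados -p replicated_pool put testobj /etc/hosts
  rados -p replicated_pool truncate testobj 4      # expected to succeed
  rados -p ec_pool put testobj /etc/hosts
  rados -p ec_pool truncate testobj 4              # expected to fail with ENOTSUP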
When doing FIO RBD benchmarking using 0.94.7 on Ubuntu 14.04 with 10 SSD/OSDs,
with and without journals on separate SSDs, I get an even distribution
of IO to the OSDs and to the journals (if used).
If I drop the number of OSDs down to 8, the IO to the journals is skewed by
40%, meaning 1 journal is d
vm.vfs_cache_pressure = 100
Go the other direction on that. You'll want to keep it low to help keep
inode/dentry info in memory. We use 10, and haven't had a problem.
Warren Wang
On 6/22/16, 9:41 PM, "Wade Holler" wrote:
>Blairo,
>
>We'll speak in pre-replication numbers, replicati
Or even vm.vfs_cache_pressure = 0 if you have sufficient memory to *pin*
inode/dentries in memory.
We have been using that for a long time now (with 128 TB node memory) and it
seems to help, especially for the random write workload, by saving xattr
reads in between.
Thanks & Regards
Somnath
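For completeness, a minimal sketch of applying and persisting the setting
(the value, 10 vs 0, is the workload-dependent judgement call discussed
above):

  sysctl -w vm.vfs_cache_pressure=10
  echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf    # persist across reboots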
-Original Mess
Hello,
On Thu, 23 Jun 2016 22:24:59 + Somnath Roy wrote:
> Or even vm.vfs_cache_pressure = 0 if you have sufficient memory to *pin*
> inode/dentries in memory. We have been using that for a long time now
> (with 128 TB node memory) and it seems to help, especially for the random
> write workload, by saving
Oops, typo, 128 GB :-)...
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Thursday, June 23, 2016 5:08 PM
To: ceph-users@lists.ceph.com
Cc: Somnath Roy; Warren Wang - ISD; Wade Holler; Blair Bethwaite; Ceph
Development
Subject: Re: [ceph-users] Dramatic performanc
Hi, as I understand it, at the PG level IOs are executed sequentially, as in
the following cases:
Case 1:
Write A, Write B, Write C to the same data area in a PG --> A committed,
then B committed, then C. The final data will be from write C. It is
impossible that mixed (A, B, C) data is in the data area
Correct. This is guaranteed.
Regards,
Anand
On Fri, Jun 24, 2016 at 10:37 AM, min fang wrote:
> Hi, as I understand it, at the PG level IOs are executed sequentially, as
> in the following cases:
>
> Case 1:
> Write A, Write B, Write C to the same data area in a PG --> A Committed,
> then