Hi,
A CRUSH rule has two parameters, "min_size" and "max_size".
The explanation of min_size is "*If a pool makes fewer replicas than this
number, CRUSH will NOT select this rule*".
For max_size it is "*If a pool makes more replicas than this number, CRUSH
will NOT select this rule*".
Default set
;
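For reference, a replicated rule in a decompiled CRUSH map looks roughly like
this (just the stock default shape; the values on your map may differ):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }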
>
> The command below will set the ruleset for the pool:
>
>
>
> ceph osd pool set <pool-name> crush_ruleset 1
>
>
>
> For more info : http://ceph.com/docs/master/rados/operations/crush-map/
>
>
>
> Thanks
>
> Sahana
>
>
>
> *From:* ceph-users [mailto:
Dear cephers:
My cluster (0.87) ran into an odd incident.
The incident happened when I marked the default crush rule "replicated_ruleset"
and set a new rule called "new_rule1".
The content of "new_rule1" is just like "replicated_ruleset"; the only
difference is the ruleset number.
After applying the new map into crush th
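(For reference, the new map was applied with the usual decompile/edit/recompile
cycle, roughly like this; the file names are just placeholders:)

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt here to add new_rule1
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new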
4a0-9920-df0774ad2ef3
>
>
> On Feb 10, 2015, at 12:55 PM, B L wrote:
>
>
> On Feb 10, 2015, at 12:37 PM, B L wrote:
>
> Hi Vickie,
>
> Thanks for your reply!
>
> You can find the dump in this link:
>
> https://gist.github.com/anonymous/706d4a1ec81c93fd1eca
data min_size 1"
Open another terminal and use the command "ceph -w" to watch the pg status.
Best wishes,
Vickie
2015-02-10 19:16 GMT+08:00 Vickie ch :
> Hi Beanos:
> So you have 3 OSD servers and each of them has 2 disks.
> I have a question. What is the result of "ceph os
Hi
The weight reflects the capacity of the disk.
For example, the weight of a 100G OSD disk is 0.100 (100G/1T).
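So, as a made-up example (the osd id and sizes here are hypothetical):

    # a 500G disk -> weight 0.500, a 2T disk -> weight 2.000
    ceph osd crush reweight osd.3 0.500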
Best wishes,
Vickie
2015-02-10 22:25 GMT+08:00 B L :
> Thanks, everyone!!
>
> After applying the re-weighting command (*ceph osd crush reweight osd.0
> 0.0095*), my cluster is ge
Hello Cephers,
I have a question about pool quota. Does pool quota support RBD?
My cluster is Hammer 0.94.1 with 1 mon and 3 OSD servers. Each OSD server
has 3 disks.
My question is: when I set a pool quota of 1G on pool "rbd", I can still
create a 3G image "abc".
After I mount and
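For reference, the commands I used look roughly like this (the quota and image
size are the ones from my test; the image name is just an example):

    # limit the pool to 1G
    ceph osd pool set-quota rbd max_bytes 1073741824
    # creating a 3G image still succeeds (size is in MB here)
    rbd create abc --size 3072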
Hi all,
I want to use the swift client to connect to the ceph cluster. I have done an
S3 test on this cluster before.
So I followed the guide to create a subuser and used the swift client to test
it, but I always get a "404 Not Found" error.
How can I create the "auth" page? Any help will be appreciated.
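For reference, the subuser setup and the test I ran look roughly like this
(the uid, host and key below are placeholders):

    radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
    swift -A http://{radosgw-host}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list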
-
Dear all,
I tried another way and used the ceph-deploy command to create the radosgw.
After that I can finally list and create containers.
But the new problem is that if I try to upload files or delete a container,
radosgw returns the message "Access denied".
I totally have no idea. Any help will be ap
GMT+08:00 Vickie ch :
> Dear all,
> I tried another way and used the ceph-deploy command to create the radosgw.
> After that I can finally list and create containers.
> But the new problem is that if I try to upload files or delete a container,
> radosgw will return the message
Dear Cephers,
When a bucket is created, the default quota setting is unlimited. Is
there any setting that can change this, so the admin does not need to change
bucket quotas one by one?
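For reference, this is the per-user command I currently have to run one by one
(the uid and limit below are placeholders), which is exactly what I would like
to avoid:

    radosgw-admin quota set --quota-scope=bucket --uid={uid} --max-size-kb=1048576
    radosgw-admin quota enable --quota-scope=bucket --uid={uid}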
Best wishes,
Mika
Dear Cephers,
One day radosgw totally died. I tried to restart radosgw but found that
/tmp/radosgw.sock is missing.
Even though the radosgw service exists (checked with "ps aux"), the process
dies later.
I got an "Internal server error" from the web page. How can I re-create
/tmp/radosgw.sock?
And
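(For reference, the socket path comes from the rgw section of my ceph.conf,
which follows the standard docs example; section name and paths below are the
usual placeholders:)

    [client.radosgw.gateway]
    host = {gateway-host}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /tmp/radosgw.sock
    log file = /var/log/ceph/client.radosgw.gateway.log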
Dear all,
I tried to create OSDs and got the error message "(old/different cluster
instance?)".
The OSD can be created but is not active. This server has built OSDs before.
Please give me some advice.
OS:rhel7
ceph:0.80 firefly
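(What I plan to try, based on similar reports, is zapping the old data on the
disk first; host and device names below are placeholders:)

    ceph-deploy disk zap {node}:sdb
    ceph-deploy osd prepare {node}:sdb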
Best wishes,
Mika
Hi ,
I've done that before, and when I tried to write a file into rbd it
froze.
Besides resources, is there any other reason not to recommend combining mon
and osd?
Best wishes,
Mika
2015-08-18 15:52 GMT+08:00 Межов Игорь Александрович :
> Hi!
>
> You can run mons on the same hosts, tho
Hello all,
I have a question about how to calculate the file size when mounting a block
device from an rbd image.
[Cluster information:]
1. The cluster has 1 mon and 6 OSDs. Every OSD is 1T. Total space is 5556G.
2. rbd pool: replicated size 2, min_size 1, num = 128. Except for the rbd pool,
the other pools are empty.
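A rough expectation for usable space with these numbers (my own back-of-the-
envelope math, ignoring filesystem and journal overhead):

    raw space            : 6 x 1T  ~= 5556G  (as reported)
    usable with size = 2 : 5556G / 2 ~= 2778G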
[St
Hi all,
I tried to use two OSDs to create a cluster. After the deploy finished, I
found the health status is "88 active+degraded" and "104 active+remapped".
Previously, using 2 OSDs to create a cluster, the result was ok. I'm confused
why this situation happened. Do I need to set the crush map to fix this problem?
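(For reference, this is what I was checking; the chooseleaf note is only my
guess at this point:)

    ceph osd tree    # check whether both OSDs sit under the same host
    # the default rule separates replicas by host ("chooseleaf ... type host"),
    # so with 2 OSDs on one host the second replica cannot be placed;
    # for a single-host test one workaround is "osd crush chooseleaf type = 0"
    # in ceph.conf before creating the cluster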
crush_ruleset 0 object_hash
> rjenkins pg_num 1024 pgp_num 1024 last_change 186 flags hashpspool
> stripe_width 0
>
>
>
>
>
>
> On 29/10/14 21:46, Irek Fasikhov wrote:
>
>> Hi.
>> This parameter does not apply to pools by default.
>> ceph osd dump |
Best wishes,
Mika
2014-10-29 17:05 GMT+08:00 Irek Fasikhov :
> ceph osd tree please :)
>
> 2014-10-29 12:03 GMT+03:00 Vickie CH :
>
>> Dear all,
>> Thanks for the reply.
>> Pool replicated size is 2, because the replicated size parameter is already
>> written into
Hi Sakhi:
I ran into this problem before. The host OS is Ubuntu 14.04, 3.13.0-24-generic.
In the end I used fdisk /dev/sdX to delete all partitions and rebooted. Maybe you
can try that.
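(Roughly what I did; replace sdX with the real device:)

    fdisk /dev/sdX    # 'd' to delete each partition, then 'w' to write
    reboot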
Best wishes,
Mika
2014-10-29 17:13 GMT+08:00 Sakhi Hadebe :
> Hi Support,
>
> Can someone please help me with the below error
Hi all,
Thanks to you all.
As Mark's information pointed out, this problem is related to the CRUSH Map.
After creating 2 OSDs on 2 different hosts, the health check is OK.
I appreciate the information again~
Best wishes,
Mika
2014-10-29 17:19 GMT+08:00 Vickie CH :
> Hi:
> -ceph
Are any errors displayed when you execute "ceph-deploy osd prepare"?
Best wishes,
Mika
2014-10-31 17:36 GMT+08:00 Subhadip Bagui :
> Hi,
>
> Can anyone please help on this
>
> Regards,
> Subhadip
>
>
> -
Hello cephers,
After entering the command "ceph osd map rbd abcde-no-file", I get
a result like this:
*"osdmap e42 pool 'rbd' (0) object '*
*abcde-no-file' -> pg 0.2844d191 (0.11) -> up ([3], p3) acting ([3], p3)"*
But the object "abcde-no-file" does not exist. Why can "ceph osd map" still
map t
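For reference, I double-checked that no such object exists, with something like:

    rados -p rbd ls | grep abcde-no-file     # no output
    rados -p rbd stat abcde-no-file          # errors out for a missing object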
Dear cephers,
I just upgraded radosgw from apache to civetweb.
It's really simple to install and use. But I can't find any parameters
or logs to adjust (or observe) for civetweb, like the apache log. I'm really
confused. Any ideas?
Best wishes,
Mika
o*
> *Cloud Systems Engineer* | (*408) 409-KOBI*
>
> On Tue, Sep 8, 2015 at 8:20 AM, Yehuda Sadeh-Weinraub
> wrote:
>
>> You can increase the civetweb logs by adding 'debug civetweb = 10' in
>> your ceph.conf. The output will go into the rgw logs.
>>
>
Hi cephers,
Has anyone ever created an OSD with btrfs in Hammer 0.94.3? I can create the
btrfs partition successfully, but once I use "ceph-deploy" I always get an
error like the one below. Another question: there is no parameter "-f" with mkfs.
Any suggestion is appreciated.
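(A guess on my side: maybe the mkfs flags have to come from ceph.conf; I have
not confirmed this is the right knob:)

    [osd]
    osd mkfs type = btrfs
    osd mkfs options btrfs = -f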
--
Hi Artie,
Did you check your mon? How many monitors are in this cluster?
Best wishes,
Mika
2015-10-16 9:23 GMT+08:00 Artie Ziff :
> Hello Ceph-users!
>
> This is my first attempt at getting ceph running.
>
> Does the following, in isolation, indicate any potential troubleshooting
> directions
>
One more thing: did you check the firewall settings?
Best wishes,
Mika
2015-10-16 14:54 GMT+08:00 Vickie ch :
> Hi Artie,
> Did you check your mon? How many monitors are in this cluster?
>
>
>
> Best wishes,
> Mika
>
>
> 2015-10-16 9:23 GMT+08:00 Artie Ziff :
>
Hi wah peng,
Just a thought.
If you have a large number of OSDs but a low pg number, you will find your
data is written unevenly.
Some OSDs get no chance to receive data.
On the other side, if the pg number is too large but the OSD number is too
small, there is a chance of data loss.
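The rule of thumb from the docs, roughly: total PGs ~= (number of OSDs x 100) /
replica count, rounded up to the nearest power of two. For example:

    # 6 OSDs, replicated size 2:
    #   6 * 100 / 2 = 300  -> round up to 512 (256 is also commonly used)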
Best wishes,
Mika
2015-11
Hi ,
Looks like your cluster has the warning message "2 near full osd(s)".
Maybe try to extend the OSDs first?
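(To see exactly which OSDs are near full, something like this should help:)

    ceph health detail
    ceph osd df        # per-OSD usage; available in Hammer, if I remember right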
Best wishes,
Mika
2015-11-12 23:05 GMT+08:00 min fang :
> Hi cepher, I tried to use the following command to create a img, but
> unfortunately, the command hung for a long time un
rouble for
re-balance and recovery.
On the other side, if you have a lot of OSDs but only set pg = 8,
that means some disks get no chance of being used.
Best wishes,
Mika
2015-11-13 16:26 GMT+08:00 wah peng :
> why does data loss happen? thanks.
>
> On 2015/11/13 星期五 16:13, Vickie ch wrote:
>
By the way, here is a useful tool to calculate pg.
http://ceph.com/pgcalc/
Best wishes,
Mika
2015-11-18 11:46 GMT+08:00 Vickie ch :
> Hi wah peng,
> Hope you don't mind. Just for reference.
> An extreme case: if your ceph cluster has 3 osd disks on different osd
> servers.