Thanks Gregory.
If I give a file name to the 'ceph osd map' command I still get 2 OSD
numbers, even though this file has more objects. Why is that? Can you
please explain?
And one more doubt:
When a client writes an object to the primary OSD,
1. will that write complete first, and only then the primary OSD starts
replicating it to the secondary OSD?
There seems to be a more fundamental confusion here. "ceph osd map" asks
the cluster where a single *object* is located. On a pool of size 2, that
will return 2 OSDs, but it DOES NOT check to see if the object actually
exists — it just outputs the CRUSH mapping!
Files in CephFS are composed of many objects, and each object gets its own
placement, so a single file's data ends up spread across many PGs and OSDs.
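You can see this from the command line by mapping two objects of the same file
yourself. This is only a sketch: the pool name cephfs_data and the inode prefix
10000000abc are made-up placeholders, and CephFS names a file's data objects
<inode-hex>.<object-index>:

    # Map two objects of the same (hypothetical) file; each object gets its own
    # CRUSH placement, so the returned OSD pairs will usually differ.
    ceph osd map cephfs_data 10000000abc.00000000
    ceph osd map cephfs_data 10000000abc.00000001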
What is the size of your file? What about a big file?
If the file is big enough, it cannot be stored on only two OSDs.
If the file is very small, then since the default object size is 4MB it can be
stored as a single object on one primary OSD and one replica OSD.
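As a rough back-of-the-envelope check (a sketch only; the path is a placeholder
and 4 MiB is the default object size, which your layout may override):

    # Approximate number of RADOS objects backing a file at the default 4 MiB object size.
    size=$(stat -c %s /mnt/cephfs/testfile)
    echo $(( (size + 4194303) / 4194304 ))   # e.g. a 10 MiB file -> 3 objects, a 1 MiB file -> 1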
> On 2 Aug 2018, at 18:56, Surya Bala wrote:
>
> I understood your explanation.
> The result of the 'ceph osd map' command always gives
> only 2 OSDs (1 primary, 1 secondary). But it is not mandatory that the objects
> are stored on only 2 OSDs; they should be spread across many OSDs.
> So my doubt is why the command gives this result.
> Regards
> Surya Balan
Hello,
file -> many objects -> many PGs (each PG has two copies, because your replication
count is two) -> many OSDs
PGs can be distributed across any OSDs; they are not limited to only 2. A replication
count of 2 only determines that each PG has 2 copies.
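If you want to see this chain for a real file, a minimal sketch (the pool name
and mount path are assumptions; adjust them to your setup) is to derive the
inode, list the objects that carry it, and map each one:

    # List every RADOS object backing one CephFS file, then map each object.
    # CephFS names data objects "<inode-hex>.<object-index>".
    ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/testfile)")
    rados -p cephfs_data ls | grep "^${ino_hex}\." | while read -r obj; do
        ceph osd map cephfs_data "$obj"
    done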
Hope this will help.
> On 2 Aug 2018, at 15:43, Surya Bala wrote:
>
> Hi folks,
Hi,
> Is it the case that all objects of a file will be stored on only 2 OSDs (when the
> replication count is 2)?
How big is this file? Small files will not be split.
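To check what decides whether a file is split, you can read its layout on a
CephFS mount. A sketch, assuming a hypothetical path; the values shown are just
the usual defaults:

    # The layout's object_size (4 MiB by default) is the splitting boundary.
    getfattr -n ceph.file.layout /mnt/cephfs/testfile
    # e.g. ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"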
Micha Krause