Forgive me if this question is stupid.
Can you explain more why you would say "either the Ceph object gateway or
Ceph RBD makes more sense than CephFS currently"?
Is it because CephFS is not production-ready?
Thanks.
-chen
-Original Message-
From: Neil Levine [mailto:neil.lev...@inktank.com]
What about this example: imagine you have two rooms and want three replicas,
but of course spread over the two rooms. How would you do that with CRUSH?
Would that require splitting one room into two "virtual" rooms (as one would
otherwise end up with only 2 replicas)?
Thanks,
Arne
--
Arne Wiebalck
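For reference, a minimal sketch of the rule usually suggested for this case:
pick both rooms, take up to two hosts in each, and let the pool size of 3 trim
the fourth candidate, so one room ends up with two replicas and the other with
one. The root name, bucket types and ruleset id below are assumptions, not
taken from the actual map.

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Append a rule along these lines to the decompiled map:
cat >> crushmap.txt <<'EOF'
rule three_replicas_two_rooms {
    ruleset 5
    type replicated
    min_size 3
    max_size 3
    step take default
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}
EOF
# 2 rooms x 2 hosts each = 4 candidate OSDs; a pool with size 3 keeps the
# first 3 of them, i.e. 2 replicas in one room and 1 in the other.

crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new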
Hi,

Storing the image as an object with RADOS or RGW will result in a single
big object stored somewhere in Ceph. However, with RBD the image is spread
across thousands of objects across the entire cluster. In the end, you get
way more performance by using RBD since you intensively use the entire
cluster.
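To make that concrete, a rough sketch below; the pool name, image name and
size are made up. A plain rados put stores the whole file as one object on a
single primary OSD, while an RBD image of the same capacity is striped into
4 MB objects by default (order 22), so the I/O is spread over many OSDs.

# Whole file as one RADOS object:
rados -p images put vm-disk.img vm-disk.img

# Same capacity as an RBD image: a 10 GB image becomes ~2560 x 4 MB objects.
rbd create --pool images --size 10240 vm-disk
rbd -p images info vm-disk     # "order 22" = 4 MB objects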
2013/3/21 Sebastien Han
> Hi,
>
> Storing the image as an object with RADOS or RGW will result in a single
> big object stored somewhere in Ceph. However, with RBD the image is spread
> across thousands of objects across the entire cluster. In the end, you get
> way more performance by using RBD since you intensively use the entire
> cluster.
Hi List,
I cannot start my monitor after updating my cluster to v0.59. Please note
that I am not trying to upgrade, but reinstalling the Ceph software stack and
rerunning mkcephfs. I have seen that the monitor changed a lot after 0.58;
does mkcephfs still have bugs?
Below is the log:
On 03/21/2013 11:23 AM, Chen, Xiaoxi wrote:
Hi List,
I cannot start my monitor after updating my cluster to v0.59. Please note
that I am not trying to upgrade, but reinstalling the Ceph software stack and
rerunning mkcephfs. I have seen that the monitor changed a lot after 0.58;
does mkcephfs still have bugs?
Hi,
I just upgraded one node of my Ceph "cluster". I wanted to upgrade node
after node.
The OSD on this node has no problem, but the mon (mon.4) has authorization
problems.
I didn't change any config, just did an apt-get upgrade.
ceph -s
health HEALTH_WARN 1 mons down, quorum 0,1,2,3 0,1,2,3
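A rough sketch of the checks that usually come next here; the admin socket
and log paths assume a default install and are not taken from the post.

ceph health detail                                             # which mon is down and why
ceph --admin-daemon /var/run/ceph/ceph-mon.4.asok mon_status   # ask mon.4 directly
grep -i auth /var/log/ceph/*mon.4*.log | tail -n 20            # cephx/authorization errors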
On Thu, Mar 21, 2013 at 2:12 AM, Sebastien Han wrote:
>
> Hi,
>
> Storing the image as an object with RADOS or RGW will result in a single big
> object stored somewhere in Ceph. However, with RBD the image is spread across
> thousands of objects across the entire cluster. In the end, you get way more
> performance by using RBD since you intensively use the entire cluster.
Another sprint and another release! This one is delayed a day or two due
to power issues in our data center. The most exciting bit here is a big
refactor in the monitor that has finally landed (thanks go to Joao Luis),
but there is lots of other good stuff to go around:
* mon: rearchitected
Hello,
Thx for all this sharing :)
At the start of this topic you said: "The main Ceph/Hadoop development is
being done in a new location now. http://github.com/ceph/hadoop-common,
cephfs/branch-1.0. The Java bindings for Ceph are contained in the main Ceph
tree. They are also included in the Debian packages."
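For anyone following along, a short sketch of checking out that branch; the
repository name is hadoop-common, and the https clone URL form is an
assumption.

git clone -b cephfs/branch-1.0 https://github.com/ceph/hadoop-common.git
cd hadoop-common
git branch     # should show cephfs/branch-1.0 checked out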
I think I was impatient and should have waited for the v0.59 announcement.
It seems I should upgrade all monitors.
After upgrading all nodes I get errors like this on 2 monitors:
=== mon.0 ===
Starting Ceph mon.0 on u124-161-ceph...
mon fs missing 'monmap/latest' and 'mkfs/monmap'
failed: 'ulimit -n 8192
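For context, 'mkfs/monmap' is written by the monitor's mkfs step. Below is a
rough sketch of that step when bootstrapping a brand-new mon.0 by hand; the
address, fsid and paths are made up, and it must not be run against a monitor
whose store still holds data you need.

# Build a one-monitor monmap and a mon. keyring, then create the mon store:
monmaptool --create --add 0 192.168.0.10:6789 --fsid $(uuidgen) /tmp/monmap
ceph-authtool --create-keyring /tmp/mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-mon --mkfs -i 0 --monmap /tmp/monmap --keyring /tmp/mon.keyring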
On Mar 21, 2013, at 8:03 AM, François P-L wrote:
> I'm not seeing the new location on github (but the ceph documentation page
> has been updated, thx ;)).
> What is the status of all the Hadoop dependencies on the master branch?
The current Hadoop dependency is on the master branch. We believe all
Hi,
I want to change my crushmap to reflect my setup: I have two racks with
3 hosts each. I want to use a replication size of 2 for the rbd pool.
The failure domain should be the rack, so one replica should be in each
rack. That works so far.
But if I shut down a host, the cluster stays degraded,
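A hedged sketch of the kind of rule that usually gets this behaviour (the
bucket, rule and ruleset names are assumptions): the key detail is chooseleaf
at the rack level, which lets CRUSH fall back to another host in the same rack
once a down host's OSDs are marked out, instead of leaving the PGs degraded.

ceph osd getcrushmap -o map.bin
crushtool -d map.bin -o map.txt

cat >> map.txt <<'EOF'
rule rbd_racks {
    ruleset 3
    type replicated
    min_size 2
    max_size 2
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
EOF
# chooseleaf picks one OSD per distinct rack; if the chosen host is out,
# CRUSH retries another host in the same rack rather than giving up.

crushtool -c map.txt -o map.new
ceph osd setcrushmap -i map.new
ceph osd pool set rbd crush_ruleset 3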
On 03/21/2013 11:39 AM, Joao Eduardo Luis wrote:
On 03/21/2013 11:23 AM, Chen, Xiaoxi wrote:
Hi List,
I cannot start my monitor after updating my cluster to v0.59. Please note
that I am not trying to upgrade, but reinstalling the Ceph software stack and
rerunning mkcephfs. I have seen that the monitor changed a lot after 0.58;
does mkcephfs still have bugs?
Thanks for the explanation.
So the conclusion is: RGW, RBD and CephFS all chunk data into objects, and
we get good performance because we intensively use the entire cluster.
But my question is still not answered.
Why would you say "object storage has been the preferred mechanism of late in
OpenStack, bu
We recommend Ceph RGW and Ceph RBD in production, but CephFS is in tech
preview mode, meaning we only recommend it for specific use cases. You
could use it with OpenStack, but we wouldn't recommend that unless it is
just a test environment.
As to the specific reasons for your error, I can't provide any assistance