On Tue, 3 Feb 2015 05:24:19 PM Daniel Schneller wrote:
> Now I think on it, that might just be it - I seem to recall a similar
> problem with cifs mounts, despite having the _netdev option. I had to
> issue a mount in /etc/network/if-up.d/
>
> I'll test that and get back to you
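(A hook along these lines in /etc/network/if-up.d/ is what is being
described; the interface, share, mount point and credentials path below are
only placeholders.)

  #!/bin/sh
  # /etc/network/if-up.d/mount-share -- illustrative name; must be executable
  # ifupdown exports IFACE; only act when the relevant interface comes up
  [ "$IFACE" = "eth0" ] || exit 0
  # mount the CIFS share now that the network is actually up
  mount -t cifs //fileserver/share /mnt/share -o credentials=/etc/cifs-credentials || true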
Hi Raju,
That fixed the problem.
Thank you!
Eric
On Sat, Feb 7, 2015 at 10:57 PM, Raju Kurunkad wrote:
> Eric,
>
> When creating RBD images of image format 2 in v0.92, can you try with
>
> rbd create SMB01/smb01_d1 --size 1000 --image-format 2 *--image-shared*
>
> Without the "--i
Eric,
When creating RBD images of image format 2 in v0.92, can you try with
rbd create SMB01/smb01_d1 --size 1000 --image-format 2 --image-shared
Without the "--image-shared" option, rbd CLI creates the image with
RBD_FEATURE_EXCLUSIVE_LOCK, which is not supported by the linux kernel RDB.
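For what it's worth, "rbd info" shows the features an image was created
with, so you can confirm exclusive-lock is absent before mapping it with the
kernel client (same image name as above):

  rbd info SMB01/smb01_d1    # check the "features" line in the output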
Thanks,
Has anything changed in v0.92 that would keep a 3.18 kernel from mapping an
RBD image?
I have been using a test script to create RBD images and map them since
Firefly, and the script has worked fine through Ceph v0.91. It is not
working with v0.92, so I minimized it to the following 3 commands whic
On 2015-02-03 18:48:45, Alexandre DERUMIER said:
Debian deb package updates do not restart the services
(so I think it should be the same for Ubuntu).
You need to restart the daemons in this order:
- monitor
- osd
- mds
- rados gateway
http://ceph.com/docs/master/install/upgrading-ceph/
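On a Debian-style sysvinit setup the restarts would look roughly like this
(the daemon IDs below are examples, adjust to your hosts):

  sudo /etc/init.d/ceph restart mon.node1
  sudo /etc/init.d/ceph restart osd.0        # repeat for each OSD on the host
  sudo /etc/init.d/ceph restart mds.node1
  sudo service radosgw restart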
Ju
Thank you both for your replies and for clearing up the matter.
I totally understand that you can't actually know the size of a pool; I was
just using that terminology to highlight the point in the first article that
seems to suggest the relative option already knows this. But as you have
confirmed t
On Sat, 7 Feb 2015, Nick Fisk wrote:
> Hi All,
>
> Time for a little Saturday evening Ceph related quiz.
>
> From this documentation page
>
> http://ceph.com/docs/master/rados/operations/cache-tiering/
>
> It seems to indicate that you can either flush/evict using relative sizing
> (cache_targ
Hi Nick,
It is correct that the ratios are relative to the size directives
target_max_bytes and target_max_objects, whichever is crossed first in case
both are set. Those parameters are cache-pool specific, so you can create
multiple cache pools, all using the same OSDs (same CRUSH rule ass
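For reference, setting these per-pool thresholds looks roughly like this
("hot-pool" is just an illustrative cache pool name):

  ceph osd pool set hot-pool target_max_bytes 100000000000
  ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
  ceph osd pool set hot-pool cache_target_full_ratio 0.8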
Hi All,
Time for a little Saturday evening Ceph related quiz.
From this documentation page
http://ceph.com/docs/master/rados/operations/cache-tiering/
It seems to indicate that you can either flush/evict using relative sizing
(cache_target_dirty_ratio) or absolute sizing (target_max_bytes). B
Hi John,
I have already put these rules in the firewall but no luck.
Using "iptraf" I saw that every time is going at a TCP port 33000 plus
something...different every time!
Best,
George
On Sat, 7 Feb 2015 18:40:38 +0100, John Spray wrote:
The relevant docs are here:
http://ceph.com/docs/
The relevant docs are here:
http://ceph.com/docs/master/start/quick-start-preflight/#open-required-ports
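For reference, the rules on that page boil down to something like this (the
interface and source network are placeholders for your setup):

  sudo iptables -A INPUT -i eth0 -p tcp -s 192.168.1.0/24 --dport 6789 -j ACCEPT                    # monitors
  sudo iptables -A INPUT -i eth0 -p tcp -s 192.168.1.0/24 -m multiport --dports 6800:7300 -j ACCEPT # OSDs/MDS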
John
On Sat, Feb 7, 2015 at 4:33 PM, Georgios Dimitrakakis wrote:
> Hi all!
>
> I am integrating my OpenStack Cluster with CEPH in order to be able to
> provide volumes for the instances!
>
Hello Logan and All -
I am interested in remote replication between two ceph clusters without
using a federated radosgw setup. Something like ceph osd from one cluster to
ceph osd of another cluster. Any thoughts on how to accomplish this?
Thanks, Lakshmi.
On Wednesday, January 7, 2015 5:21 PM, Logan Ba
Hi all!
I am integrating my OpenStack Cluster with CEPH in order to be able to
provide volumes for the instances!
I have managed to perform all operations successfully with only one
catch.
If firewall services (iptables) are running on the CEPH node then I am
stuck at the attaching state.
Th
I think you'll find the "ceph df" command more useful -- in recent
versions that is pretty smart about reporting the effective space
available for each pool.
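For example:

  ceph df          # cluster-wide usage plus a per-pool summary
  ceph df detail   # a more verbose per-pool breakdown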
John
On Fri, Feb 6, 2015 at 3:38 PM, pixelfairy wrote:
> here's output of 'ceph -s' from a kvm instance running as a ceph node.
> all 3 nod