Hi folks,
I'm trying to gather additional information surrounding
http://tracker.ceph.com/issues/9355 so we can hopefully find the root cause
of what's preventing us from successfully mapping RBD volumes inside a Linux
container.
With the RBD kernel module debugging enabled (and cephx authentication
di
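For anyone wanting to reproduce this, a rough sketch of how the kernel-side
rbd debugging can be switched on (this assumes a kernel with dynamic debug
support and debugfs mounted; the pool/image names are placeholders):

echo 'module rbd +p' > /sys/kernel/debug/dynamic_debug/control
echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control
rbd map rbd/test-image --id admin    # attempt the map inside or outside the container
dmesg | tail -n 50                   # the rbd/libceph debug output lands in the kernel log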
On Mon, 6 Oct 2014 08:14:20 -0400 Nathan Stratton wrote:
> On Sun, Oct 5, 2014 at 11:19 PM, Ariel Silooy
> wrote:
>
> > Hello fellow ceph user, right now we are researching ceph for our
> > storage.
> >
> > We have a cluster of 3 OSD nodes (and 5 MON) for our RBD disk which for
> > now we are us
Thank you for your reply,
I really would think about something faster than gigabit Ethernet. Merchant
silicon is changing the world; take a look at guys like Quanta. I just
bought two T3048-LY2 switches with Cumulus software for under 6k each.
That gives you 48 10-gig ports and 4 40-gig ports to pl
Thank you for your reply,
We use a stacked pair of Dell Powerconnect 6248's with the 2*12 Gb/s
interconnect and single 10 GbE links, with 1 GbE failovers using Linux bonding
active/backup mode, to the four OSD nodes.
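For comparison, the host side of such an active/backup bond can be brought up
roughly like this (a sketch only; eth0 as the 10 GbE primary and eth1 as the
1 GbE failover are placeholders, and distributions differ in how they persist
this):

modprobe bonding mode=active-backup miimon=100 primary=eth0
ip link set eth0 down; ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0   # cluster/public address goes on the bond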
I'm sorry, but I just have to ask, what kind of 10GbE NIC do you use? If
First, thank you for your reply
TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) ) based switches
(we have some Brocade VDX ones) have the advantage that they can do LACP
over 2 switches.
Meaning you can get full speed if both switches are running and still get
redundancy (at half speed) i
This sounds doable, with a few caveats.
Currently, replication only goes in one direction. You can only write to the
primary zone, and you can read from the primary or secondary zones. A
cluster can have many zones on it.
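To make that concrete, a Firefly-era region map for such a setup might look
roughly like the sketch below - one master zone at the central site and one
secondary zone per remote site (all names and endpoints are made up, and the
exact radosgw-admin invocation to load it depends on the release):

{ "name": "observatory",
  "api_name": "observatory",
  "is_master": "true",
  "endpoints": ["http://gw-central.example.com:80/"],
  "master_zone": "central",
  "zones": [
    { "name": "central",     "endpoints": ["http://gw-central.example.com:80/"],
      "log_meta": "true", "log_data": "true" },
    { "name": "telescope-1", "endpoints": ["http://gw-tel1.example.com:80/"],
      "log_meta": "true", "log_data": "true" }
  ],
  "placement_targets": [ { "name": "default-placement", "tags": [] } ],
  "default_placement": "default-placement" }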
I'm thinking your setup would be a star topology. Each telescope will be a
p
Hi all,
I will be in NYC on Wednesday for the Ceph day. If you are in the area
and would like to join us, please do! It is always great to get out and
talk to users and developers.
http://ceph.com/cephdays/nyc2/
sage
It'd be interesting to see which rados operation is slowing down the
requests. Can you provide a log dump of a request (with 'debug rgw = 20'
and 'debug ms = 1')? This might give us a better idea as to
what's going on.
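In case it helps, those settings can either go into the gateway's ceph.conf
section or be changed at runtime through the admin socket (the socket path
below depends on how the rgw instance is named):

[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1

ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_rgw 20
ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_ms 1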
Thanks,
Yehuda
On Mon, Oct 6, 2014 at 10:05 AM, Daniel Schneller
wrote:
> Hi
Hi again!
We have done some tests regarding the limits of storing lots and
lots of buckets through Rados Gateway into Ceph.
Our test used a single user for which we removed the default max-buckets
limit. It then continuously created containers - both empty ones and ones
with 10 objects of around 100k
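For anyone who wants to repeat this kind of test, a rough sketch of the setup
(assuming Firefly-era semantics where --max-buckets=0 lifts the limit, a
hypothetical test user 'loadtest', and an s3cmd already pointed at the
gateway):

radosgw-admin user modify --uid=loadtest --max-buckets=0
dd if=/dev/urandom of=/tmp/100k.bin bs=1k count=100
for i in $(seq -f '%06g' 1 100000); do
    s3cmd mb "s3://loadtest-$i"
    for j in $(seq 1 10); do s3cmd put /tmp/100k.bin "s3://loadtest-$i/obj-$j"; done
done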
English version and French version // En anglais et en français
English:
Hello,
I have set up Ceph and it is working properly via S3, librbd, etc.
I have also set up and configured radosgw with Keystone so that they can
talk to each other; so far it seems to work.
However, I have a concern about temporary URLs.
I
You need to make sure rbd is in your whitelist when you run ./configure as
well as having rbd enabled.
--block-drv-rw-whitelist=qcow2,raw,file,host_device,nbd,iscsi,gluster,rbd
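For reference, a full configure line along those lines might look like this
(a sketch; the available options vary between qemu versions):

./configure --enable-rbd \
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,nbd,iscsi,gluster,rbd
make -j$(nproc) && make install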
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
On Sun, Oct 5, 2014 at 4:36
On Mon, Oct 6, 2014 at 8:22 AM, Ignazio Cassano
wrote:
> Hi,
> but what kernel version are you using?
> I think the rbd kernel module is not in the CentOS 7 kernel.
> Have you built it from source?
>
Built the kernel from source; I see the modules loaded:
[root@virt01a secrets]# lsmod
Module
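A quicker check than reading the whole lsmod table is something along these
lines (module names as shipped in mainline kernels):

lsmod | egrep '^(rbd|libceph)'   # both need to be loaded for "rbd map" to work
modinfo rbd | head -n 3          # shows which kernel the module was built against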
On Fri, 03 Oct 2014 11:56:42 +0200 Massimiliano Cuttini wrote:
>
> Il 02/10/2014 17:24, Christian Balzer ha scritto:
> > On Thu, 02 Oct 2014 12:20:06 +0200 Massimiliano Cuttini wrote:
> >> Il 02/10/2014 03:18, Christian Balzer ha scritto:
> >>> On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cutt
On Mon, 6 Oct 2014 09:17:03 + Carl-Johan Schenström wrote:
> Christian Balzer wrote:
>
> > Any decent switch with LACP will do really.
> > And with that I mean Cisco, Brocade etc.
> >
> > But that won't give you redundancy if a switch fails, see below.
> >
> > TRILL ( http://en.wikipedia.o
Ah! That's it! I built qemu with rbd but do not have the rbd kernel module.
I tried to build it on the stock el7 kernel and got:
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//inode.c: In function
'splice_dentry':
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//inode.c:904:193: error:
'struct dentry
Hi!
Our institute is now planning to deploy a set of robotic telescopes across
the country.
Most of the telescopes will have low bandwidth and high latency, or even no
permanent internet connectivity.
I think we can set up synchronization of observational data with Ceph, using
federated gateways:
Hi,
but what kernel version are you using?
I think the rbd kernel module is not in the CentOS 7 kernel.
Have you built it from source?
2014-10-06 14:08 GMT+02:00 Nathan Stratton :
> SELinux is already disabled
>
> [root@virt01a /]# setsebool -P virt_use_execmem 1
> setsebool: SELinux is disabled.
>
>
On Sun, Oct 5, 2014 at 11:19 PM, Ariel Silooy wrote:
> Hello fellow ceph user, right now we are researching ceph for our storage.
>
> We have a cluster of 3 OSD nodes (and 5 MON) for our RBD disk, which for
> now we are using with an NFS proxy setup. On each OSD node we have 4x 1G Intel
> copper NIC (
SELinux is already disabled
[root@virt01a /]# setsebool -P virt_use_execmem 1
setsebool: SELinux is disabled.
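For completeness, checking and persisting the SELinux state on EL7 looks
roughly like this (standard tools, nothing Ceph-specific):

getenforce    # Enforcing, Permissive or Disabled
sestatus      # also shows the mode configured in /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # takes effect after a reboot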
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
On Mon, Oct 6, 2014 at 1:16 AM, Vladislav Gorbunov
wrote:
> Try to disable SELinux or run
Christian Balzer wrote:
> Any decent switch with LACP will do really.
> And with that I mean Cisco, Brocade etc.
>
> But that won't give you redundancy if a switch fails, see below.
>
> TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) ) based switches
> (we have some Brocade VDX ones) hav
Hi all, thank you for your answers and your effort.
I'd also like to know whether there is a difference between mapping rbd
devices and using them as normal block devices (/dev/rbd) in KVM, versus
using the qemu and libvirt support for rbd.
Are there performance issues in the first case?
Regards
Ignazio
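For what it is worth, the two attachment styles being compared look roughly
like this on the qemu command line (a sketch; pool, image and auth details
are placeholders, and libvirt generates the equivalent disk XML for you):

# kernel rbd: map on the host, then hand qemu an ordinary block device
rbd map mypool/myimage                 # creates /dev/rbd0
qemu-system-x86_64 ... -drive file=/dev/rbd0,format=raw,if=virtio,cache=none

# librbd: qemu opens the image itself through librbd, no host-side mapping
qemu-system-x86_64 ... -drive file=rbd:mypool/myimage:id=admin,format=raw,if=virtio,cache=writeback

The librbd path gives you the rbd client cache and avoids running the kernel
client on the hypervisor, while the /dev/rbd path behaves like any other
block device.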
2014