Hi,
The auth caps were as follows:
caps: [mon] allow r
caps: [osd] allow rwx pool=hosting_windows_sharedweb, allow rwx pool=infra_systems, allow rwx pool=hosting_linux_sharedweb
I changed them (just adding a pool to the list) to:
caps: [mon] allow r
caps: [osd] allow rwx pool=hosting_windows_sharedweb, [...]
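For anyone following along, a change like this is applied with `ceph auth caps`, which replaces the whole cap list rather than appending to it, so every existing pool has to be restated. A minimal sketch; the client name and the added pool name are placeholders, not taken from this thread:

```shell
# 'ceph auth caps' overwrites the caps wholesale -- restate every pool.
# client.export and new_pool are hypothetical names.
ceph auth caps client.export \
    mon 'allow r' \
    osd 'allow rwx pool=hosting_windows_sharedweb, allow rwx pool=infra_systems, allow rwx pool=hosting_linux_sharedweb, allow rwx pool=new_pool'

# Check what the cluster actually stored before remounting anything:
ceph auth get client.export
```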
On 07/31/2014 06:37 PM, James Eckersall wrote:
Hi,
The stacktraces are very similar. Here is another one with complete
dmesg: http://pastebin.com/g3X0pZ9E
$ decodecode < tmp.oops
[ 28.636837] Code: dc 00 00 49 8b 50 08 4d 8b 20 49 8b 40 10 4d 85 e4 0f 84 17 01 00 00 48 85 c0 0f 84 0e 01 00 0 [...]
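For reference, decodecode here is the helper shipped in the Linux kernel source tree (scripts/decodecode): it pulls the "Code:" bytes out of an oops and disassembles them with objdump. A minimal sketch of the invocation, assuming a kernel source checkout and binutils installed:

```shell
# Save the oops text (including the "Code:" line) to a file, then:
./scripts/decodecode < tmp.oops
# The output disassembles the bytes around the fault and marks the
# faulting instruction with '<-- trapping instruction'.
```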
On Thu, 31 Jul 2014, James Eckersall wrote:
> Ah, thanks for the clarification on that. We are very close to the 250 limit,
> so that is something we'll have to look at addressing, but I don't think
> it's actually relevant to the panics, since reverting the auth key changes
> I made appears to have resolved the issue.
Do not go to a 3.15 or later Ubuntu kernel at this time if you are
using krbd. See bug 8818. The Ubuntu 3.14.x kernels seem to work fine
with krbd on Trusty.
The mainline packages from Ubuntu should be helpful in testing.
Info: https://wiki.ubuntu.com/Kernel/MainlineBuilds
Packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
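A minimal sketch of testing one of those mainline builds on Trusty; the exact .deb file names vary per build and must be taken from the directory listing for the version you pick:

```shell
# Download the linux-headers and linux-image .debs for the chosen
# build from the mainline listing into an empty directory, then:
sudo dpkg -i linux-headers-*.deb linux-image-*.deb
sudo reboot

# After the reboot, confirm which kernel is running:
uname -r
```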
On 31/07/2014 10:31, James Eckersall wrote:
Ah, thanks for the clarification on that.
We are very close to the 250 limit [...]
On Thu, Jul 31, 2014 at 12:37 PM, James Eckersall
wrote:
> Hi,
>
> The stacktraces are very similar. Here is another one with complete dmesg:
> http://pastebin.com/g3X0pZ9E
>
> The rbd's are mapped by the rbdmap service on boot.
> All our ceph servers are running Ubuntu 14.04 (kernel 3.13.0-30-generic). [...]
Ah, thanks for the clarification on that.
We are very close to the 250 limit, so that is something we'll have to look
at addressing, but I don't think it's actually relevant to the panics, since
reverting the auth key changes I made appears to have resolved the
issue (no panics yet - 20 hours ish).
On Thu, 31 Jul 2014 10:13:11 +0100 James Eckersall wrote:
> Hi,
>
> I thought the limit was in relation to ceph and that 0.80+ fixed that
> limit
> - or at least raised it to 4096?
>
Yes and yes. But 0.80 only made it into kernels 3.14 and beyond. ^o^
> If there is a 250 limit, can you confirm where this is documented?
Hi,
I thought the limit was in relation to ceph and that 0.80+ fixed that limit
- or at least raised it to 4096?
If there is a 250 limit, can you confirm where this is documented?
Thanks
J
On 31 July 2014 09:50, Christian Balzer wrote:
>
> Hello,
>
> are you perchance approaching the maximum number of kernel mappings, [...]
Hello,
are you perchance approaching the maximum number of kernel mappings,
which is somewhat shy of 250 in any kernel below 3.14?
If you can easily upgrade to 3.14 see if that fixes it.
Christian
On Thu, 31 Jul 2014 09:37:05 +0100 James Eckersall wrote:
> Hi,
>
> The stacktraces are very similar. [...]
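If you want to see how close a client is to that ~250 limit, the current krbd mappings can be counted either from sysfs or via the rbd tool (the sysfs path assumes the rbd kernel module is loaded):

```shell
# Each mapped image gets a numbered entry in sysfs:
ls /sys/bus/rbd/devices | wc -l

# Same count from the rbd tool, skipping the header line:
rbd showmapped | tail -n +2 | wc -l
```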
Hi,
The stacktraces are very similar. Here is another one with complete dmesg:
http://pastebin.com/g3X0pZ9E
The rbd's are mapped by the rbdmap service on boot.
All our ceph servers are running Ubuntu 14.04 (kernel 3.13.0-30-generic).
Ceph packages are from the Ubuntu repos, version 0.80.1-0ubunt[...]
On Thu, Jul 31, 2014 at 11:44 AM, James Eckersall
wrote:
> Hi,
>
> I've had a fun time with ceph this week.
> We have a cluster with 4 OSD (20 OSD's per) servers, 3 mons and a server
> mapping ~200 rbd's and presenting cifs shares.
>
> We're using cephx and the export node has its own cephx auth key.
Hi,
I've had a fun time with ceph this week.
We have a cluster with 4 OSD (20 OSD's per) servers, 3 mons and a server
mapping ~200 rbd's and presenting cifs shares.
We're using cephx and the export node has its own cephx auth key.
I made a change to the key last week, adding rwx access to another pool. [...]