nfsv4 fails with kerberos

2013-09-07 Thread Martin Laabs
Hi,

I set up an NFSv4 server with Kerberos, but when starting the NFS server on
the arm (RBI-B) board I get the following error message and the first
(managing) part of nfsd exits:

"nfsd: can't register svc name"

This error message is produced by the following code in
/usr/src/sys/fs/nfsserver/nfs_nfsdkrpc.c:


==:<===
/* An empty string implies AUTH_SYS only. */
if (principal[0] != '\0') {
	ret2 = rpc_gss_set_svc_name_call(principal,
	    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER2);
	ret3 = rpc_gss_set_svc_name_call(principal,
	    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER3);
	ret4 = rpc_gss_set_svc_name_call(principal,
	    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER4);

	if (!ret2 || !ret3 || !ret4)
		printf("nfsd: can't register svc name\n");
}
==:<===

So something went wrong with the principals. Is there a way to get more
information or more verbose debugging output from the nfs-server part of
the kernel?

Thank you,
 Martin Laabs

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: nfsv4 fails with kerberos

2013-09-07 Thread Rick Macklem
Martin Laabs wrote:
> Hi,
> 
> I set up a nfsv4 server with kerberos but when starting the nfs
> server on
> the arm (RBI-B) board I get the following error message and the first
> (managing part) of the nfs exits:
> 
> "nfsd: can't register svc name"
> 
> This error message is produced by the following code in
> /usr/src/sys/fs/nfsserver/nfs_nfsdkrpc.c:
> 
> 
> ==:<===
> /* An empty string implies AUTH_SYS only. */
> if (principal[0] != '\0') {
>  ret2 = rpc_gss_set_svc_name_call(principal,
>"kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER2);
>  ret3 = rpc_gss_set_svc_name_call(principal,
> "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER3);
>  ret4 = rpc_gss_set_svc_name_call(principal,
> "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER4);
> 
> if (!ret2 || !ret3 || !ret4)
>   printf("nfsd: can't register svc name\n");
> ==:<===
> 
> So something went wrong with the principals. Is there a way to get
> more
> information or more verbose debugging output from the nfs-server part
> of
> the kernel?
> 
The above message normally indicates that the gssd daemon isn't running.

Here are a few places you can get info:
man nfsv4, gssd
http://code.google.com/p/macnfsv4/wiki/FreeBSD8KerberizedNFSSetup
- This was done quite a while ago and I should go in and update it,
  but I think it is still mostly correct for the server side. (The client
  in head/10 now does have "host based initiator cred" support.)
  Feel free to update it. All you should need to do so is a Google
  login.
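For reference, a minimal server-side /etc/rc.conf sketch along the lines the
wiki page above describes (knob names per rc.conf(5); adjust for your own
configuration):

```shell
# /etc/rc.conf fragment -- Kerberized NFSv4 server (sketch)
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"    # maps NFSv4 user@domain strings to uids/gids
gssd_enable="YES"        # RPCSEC_GSS helper; if gssd isn't running, nfsd
                         # fails with "can't register svc name"
```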

You need a service principal for "nfs", which means an entry for a
principal that looks like:
nfs/<server-fqdn>@<REALM>
(Stuff in "<>" needs to be filled in with the answer for your machine.)
in /etc/krb5.keytab on the server.
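By way of illustration, creating and checking such an entry might look like
this (hostname and realm are hypothetical, and the exact syntax depends on
whether your KDC is MIT or Heimdal; MIT kadmin shown here):

```shell
# On the KDC: create the service principal and export its key
kadmin -q "addprinc -randkey nfs/server.example.org@EXAMPLE.ORG"
kadmin -q "ktadd -k /tmp/nfs.keytab nfs/server.example.org@EXAMPLE.ORG"

# Copy /tmp/nfs.keytab to the NFS server as /etc/krb5.keytab, then verify
# the nfs/ entry is actually present before starting gssd and nfsd:
klist -k /etc/krb5.keytab
```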

rick

> Thank you,
>  Martin Laabs
> 


Re: mbuf autotuning effect

2013-09-07 Thread hiren panchasara
On Sep 6, 2013 8:26 PM, "Warner Losh"  wrote:
>
>
> On Sep 6, 2013, at 7:11 PM, Adrian Chadd wrote:
>
> > Yeah, why is VM_KMEM_SIZE only 12mbyte for MIPS? That's a little low for a
> > platform that has a direct map that's slightly larger than 12mb :)
> >
> > Warner? Juli?
>
> All architectures have it at 12MB, except sparc64 where it is 16MB. This
> can be changed with the options VM_KMEM_SIZE=x in the config file.

Right. Does that mean that for any platform, if nmbclusters is not pre-set
in kmeminit(), then we will always have a pretty low value of
vm_kmem_size? And because of that, if maxmbufmem is not pre-set (via
loader.conf) inside tunable_mbinit(), we will have a very low value for
maxmbufmem too.

I hope (and partially believe) that my understanding is not entirely correct,
because if it is correct, we are depending on loader.conf instead of actually
auto-tuning.

Thanks,
Hiren
>
> So my guess as to why this is the case: cut and paste worked, so nobody
changed it after that.
>
> # Still need to reads hiren's email to comprehend it...
>
> Warner
>
>
> >
> >
> > -adrian
> >
> >
> >
> > On 6 September 2013 16:36, hiren panchasara wrote:
> >
> >> We are seeing an interesting thing on a mips board with 32MB ram.
> >>
> >> We run out of mbuf very easily and looking at numbers it seems we are only
> >> getting 6mb of maxmbufmem.
> >>
> >> # sysctl -a | grep hw | grep mem
> >> hw.physmem: 33554432
> >> hw.usermem: 21774336
> >> hw.realmem: 33554432
> >> #
> >> # sysctl -a | grep maxmbuf
> >> kern.ipc.maxmbufmem: 6291456
> >>
> >> I believe that number is very low for a board with 32mb of ram.
> >>
> >> Looking at the code:
> >>
> >> sys/kern/kern_mbuf.c : tunable_mbinit()
> >>
> >> 124 realmem = qmin((quad_t)physmem * PAGE_SIZE, vm_kmem_size);
> >> 125 maxmbufmem = realmem / 2;
> >> 126 TUNABLE_QUAD_FETCH("kern.ipc.maxmbufmem", &maxmbufmem);
> >> 127 if (maxmbufmem > realmem / 4 * 3)
> >> 128 maxmbufmem = realmem / 4 * 3;
> >>
> >> So, realmem plays important role in determining maxmbufmem.
> >>
> >> physmem = 32mb
> >> PAGE_SIZE = 4096
> >>
> >> vm_kmem_size is calculated inside sys/kern/kern_malloc.c : kmeminit()
> >>
> >> 705 vm_kmem_size = VM_KMEM_SIZE + nmbclusters * PAGE_SIZE;
> >> 706 mem_size = cnt.v_page_count;
> >> 707
> >> 708 #if defined(VM_KMEM_SIZE_SCALE)
> >> 709 vm_kmem_size_scale = VM_KMEM_SIZE_SCALE;
> >> 710 #endif
> >> 711 TUNABLE_INT_FETCH("vm.kmem_size_scale", &vm_kmem_size_scale);
> >> 712 if (vm_kmem_size_scale > 0 &&
> >> 713     (mem_size / vm_kmem_size_scale) > (vm_kmem_size / PAGE_SIZE))
> >> 714         vm_kmem_size = (mem_size / vm_kmem_size_scale) * PAGE_SIZE;
> >>
> >> here,
> >> VM_KMEM_SIZE = 12*1024*1024
> >> nmbclusters = 0 (initially)
> >> PAGE_SIZE = 4096
> >> # sysctl -a | grep v_page_count
> >> vm.stats.vm.v_page_count: 7035
> >>
> >> and VM_KMEM_SIZE_SCALE = 3 for mips.
> >>
> >> So, vm_kmem_size = 12mb.
> >>
> >> Going back to tunable_mbinit(),
> >> we get realmem = 12mb
> >> and maxmbufmem = 6mb.
> >>
> >>
> >> Wanted to see if I am following the code correctly and how autotuning
> >> should work here.
> >>
> >> cheers,
> >> Hiren


Re: mbuf autotuning effect

2013-09-07 Thread Adrian Chadd
On 7 September 2013 12:21, hiren panchasara wrote:

>
> On Sep 6, 2013 8:26 PM, "Warner Losh"  wrote:
> >
> >
> > On Sep 6, 2013, at 7:11 PM, Adrian Chadd wrote:
> >
> > > Yeah, why is VM_KMEM_SIZE only 12mbyte for MIPS? That's a little low for a
> > > platform that has a direct map that's slightly larger than 12mb :)
> > >
> > > Warner? Juli?
> >
> > All architectures have it at 12MB, except sparc64 where it is 16MB. This
> > can be changed with the options VM_KMEM_SIZE=x in the config file.
>
> Right. Does that mean that for any platform, if nmbclusters is not pre-set
> in kmeminit(), then we will always have a pretty low value of
> vm_kmem_size? And because of that, if maxmbufmem is not pre-set (via
> loader.conf) inside tunable_mbinit(), we will have a very low value for
> maxmbufmem too.
>
> I hope (and partially believe) that my understanding is not entirely
> correct, because if it is correct, we are depending on loader.conf instead
> of actually auto-tuning.
>
> Thanks,
> Hiren
>
>
.. so how's this work on i386? ARM?




-adrian


Re: mbuf autotuning effect

2013-09-07 Thread Ian Lepore
On Sat, 2013-09-07 at 12:21 -0700, hiren panchasara wrote:
> On Sep 6, 2013 8:26 PM, "Warner Losh"  wrote:
> >
> >
> > On Sep 6, 2013, at 7:11 PM, Adrian Chadd wrote:
> >
> > > Yeah, why is VM_KMEM_SIZE only 12mbyte for MIPS? That's a little low for a
> > > platform that has a direct map that's slightly larger than 12mb :)
> > >
> > > Warner? Juli?
> >
> > All architectures have it at 12MB, except sparc64 where it is 16MB. This
> > can be changed with the options VM_KMEM_SIZE=x in the config file.
> 
> Right. Does that mean that for any platform, if nmbclusters is not pre-set
> in kmeminit(), then we will always have a pretty low value of
> vm_kmem_size? And because of that, if maxmbufmem is not pre-set (via
> loader.conf) inside tunable_mbinit(), we will have a very low value for
> maxmbufmem too.
> 
> I hope (and partially believe) that my understanding is not entirely
> correct, because if it is correct, we are depending on loader.conf instead
> of actually auto-tuning.
> 
> 
I think the part of this that strikes me as strange is calling 20% of
physical memory used for network buffers a "very low value".  It seems
outrageously high to me.   I'd be pissed if that much memory got wasted
on network buffers on one of our $work platforms with so little memory.

So the fact that you think it's crazy-low and I think it's crazy-high
may be a sign that it's auto-tuned to a reasonable compromise, and in
both our cases the right fix would be to use the available knobs to tune
things for our particular uses.

-- Ian




Re: mbuf autotuning effect

2013-09-07 Thread Adrian Chadd
On 7 September 2013 12:56, Ian Lepore  wrote:


> I think the part of this that strikes me as strange is calling 20% of
> physical memory used for network buffers a "very low value".  It seems
> outrageously high to me.   I'd be pissed if that much memory got wasted
> on network buffers on one of our $work platforms with so little memory.
>
> So the fact that you think it's crazy-low and I think it's crazy-high
> may be a sign that it's auto-tuned to a reasonable compromise, and in
> both our cases the right fix would be to use the available knobs to tune
> things for our particular uses.
>

Well, which limit is actually being hit here? 20% of 32mb is still a lot of
memory for buffers..

Now, for sizing up the needed buffers for wifi:

assuming 512 tx + 512 rx buffers for each of the two ath NICs,

and another 512+512 buffers for each of the two arge NICs.

So, 4096 mbufs here, 2k each, so ~8mb of RAM.

Amusing..



-adrian


Re: mbuf autotuning effect

2013-09-07 Thread hiren panchasara
On Sat, Sep 7, 2013 at 12:56 PM, Ian Lepore  wrote:

> On Sat, 2013-09-07 at 12:21 -0700, hiren panchasara wrote:
> > On Sep 6, 2013 8:26 PM, "Warner Losh"  wrote:
> > >
> > >
> > > On Sep 6, 2013, at 7:11 PM, Adrian Chadd wrote:
> > >
> > > > Yeah, why is VM_KMEM_SIZE only 12mbyte for MIPS? That's a little low for a
> > > > platform that has a direct map that's slightly larger than 12mb :)
> > > >
> > > > Warner? Juli?
> > >
> > > All architectures have it at 12MB, except sparc64 where it is 16MB. This
> > > can be changed with the options VM_KMEM_SIZE=x in the config file.
> >
> > Right. Does that mean that for any platform, if nmbclusters is not pre-set
> > in kmeminit(), then we will always have a pretty low value of
> > vm_kmem_size? And because of that, if maxmbufmem is not pre-set (via
> > loader.conf) inside tunable_mbinit(), we will have a very low value for
> > maxmbufmem too.
> >
> > I hope (and partially believe) that my understanding is not entirely
> > correct, because if it is correct, we are depending on loader.conf
> > instead of actually auto-tuning.
> >
> I think the part of this that strikes me as strange is calling 20% of
> physical memory used for network buffers a "very low value".  It seems
> outrageously high to me.   I'd be pissed if that much memory got wasted
> on network buffers on one of our $work platforms with so little memory.
>

Interesting. So here is how it looks on my laptop running amd64 GENERIC
(without any special loader.conf settings):

flymockour-l7% uname -a
FreeBSD flymockour-l7.corp.yahoo.com 10.0-CURRENT FreeBSD 10.0-CURRENT #1
r253512M: Sat Jul 20 23:00:51 PDT 2013
hir...@flymockour-l7.corp.yahoo.com:/usr/obj/usr/home/hirenp/head/sys/GENERIC
amd64

flymockour-l7% sysctl -a | grep hw| grep mem
hw.physmem: 8496877568
hw.usermem: 3538432000
hw.realmem: 9093251072

flymockour-l7% sysctl kern.ipc.maxmbufmem
kern.ipc.maxmbufmem: 4132540416
flymockour-l7% sysctl -a | grep vm.kmem_
vm.kmem_size: 8265080832
vm.kmem_size_min: 0
vm.kmem_size_max: 329853485875
vm.kmem_size_scale: 1
vm.kmem_map_size: 1380515840
vm.kmem_map_free: 5796265984

VM_KMEM_SIZE_SCALE is 1 for amd64 but 3 for mips, which might be one
reason.


> So the fact that you think it's crazy-low and I think it's crazy-high
> may be a sign that it's auto-tuned to a reasonable compromise, and in
> both our cases the right fix would be to use the available knobs to tune
> things for our particular uses.
>

I am pretty ignorant on what the value _should_ be. I will try to find out
more.

cheers,
Hiren


Re: mbuf autotuning effect

2013-09-07 Thread hiren panchasara
On Sat, Sep 7, 2013 at 1:39 PM, Adrian Chadd  wrote:

> On 7 September 2013 12:56, Ian Lepore  wrote:
>
>
>> I think the part of this that strikes me as strange is calling 20% of
>>  physical memory used for network buffers a "very low value".  It seems
>> outrageously high to me.   I'd be pissed if that much memory got wasted
>> on network buffers on one of our $work platforms with so little memory.
>>
>> So the fact that you think it's crazy-low and I think it's crazy-high
>> may be a sign that it's auto-tuned to a reasonable compromise, and in
>> both our cases the right fix would be to use the available knobs to tune
>> things for our particular uses.
>>
>
> Well, which limit is actually being hit here? 20% of 32mb is still a lot
> of memory buffers..
>
> Now, for sizing up the needed buffers for wifi:
>
> assuming 512 tx, 512 rx buffers for two ath NICs.
>
> another 512+512 buffers for each arge NICs.
>
> So, 4096 mbufs here, 2k each, so ~ 8mb of RAM.
>

And we are only getting 6mb of maxmbufmem with the current setup.

Index: mips/include/vmparam.h
===
--- mips/include/vmparam.h  (revision 255320)
+++ mips/include/vmparam.h  (working copy)
@@ -119,7 +119,7 @@
  * is the total KVA space allocated for kmem_map.
  */
 #ifndef VM_KMEM_SIZE_SCALE
-#define	VM_KMEM_SIZE_SCALE	(3)
+#define	VM_KMEM_SIZE_SCALE	(1)
 #endif

 /*

As I mentioned in another reply in this thread, VM_KMEM_SIZE_SCALE is 1
for amd64. If I do the same for mips as above, we get:
# sysctl -a | grep maxmbuf
kern.ipc.maxmbufmem: 14407680

Whether we want this much RAM assigned to mbufs is another question.

cheers,
Hiren