Just a wild guess: you may have the kernel CIFS server occupying the
SMB ports already.
Make sure the kernel CIFS service is shut off.
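For example (service name and port as I recall them; adjust to your setup):

  svcs smb/server              # is the kernel CIFS/SMB server online?
  svcadm disable smb/server    # turn it off so Samba can bind the ports
  netstat -an | grep 445       # confirm nothing is still listening on 445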
-:::-sG-:::-
On Jan 19, 2011, at 11:43, Paul Johnston wrote:
> Hi
>
> I have SunOS openindiana 5.11 oi_148 i86pc i386 i86pc Solaris and wanted to
> try it for samba
>
> However
> paulj@
Check out Chime.
-:::-sG-:::-
On Jan 27, 2011, at 23:28, WK wrote:
> I am experimenting with using Zenoss to monitor OI 148, and I was wondering
> if anyone had any advice on configuring SNMP for this purpose? Zenoss is
> showing the uptime, but not much more. On my Linux machines, it shows ne
Hi Dan,
If you monitor the Illumos checkins, you will see that there is
a fair amount of activity going on.
I am unsure how this correlates with any binary release schedule
but at least at the OS source code level, things are surely
being fixed.
/sG/
- "Dan Swartzendruber" wrote:
Hi Gabriel,
The immediate cause of this panic is an attempt to free an
address==null.
The interesting part (how this comes about) is hard to figure out
without more info.
At the very minimum, a stack (and some amount of luck),
ideally a crash dump would be necessary.
This brings into foc
man dumpadm..
You have to enable crash dumps, and select a location where you have some
room to save them. When you just run dumpadm it will tell you
what your current settings are.
If you do not have crashdumps enabled, you may be able to save the last
crashdump by running "savecore /some/l
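A rough example of the whole sequence (the paths below are just placeholders,
pick a location with enough free space):

  dumpadm                               # show current dump device and savecore dir
  dumpadm -d /dev/zvol/dsk/rpool/dump   # example: use a zvol as the dump device
  dumpadm -s /var/crash                 # example: where savecore should write dumps
  savecore /var/crash                   # manually save the last dump off the dump device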
For Intel CPUs, 32-bit code is certainly more compact, and in some cases
arguably faster than 64-bit code (say, comparing the same code on the same
machine compiled 32-bit and 64-bit).
But newer CPU silicon tends to make performance improvements
in many ways (e.g., locating more supporting circui
uld use more than 4GB total; just not
> individually.
>
> Mike
>
>
> On Fri, 2011-06-24 at 15:58 +, Steve Gonczi wrote:
>
>> For Intel CPUs, 32 bit code is certainly more compact , and in some cases
>> arguably faster than 64 bit code. (say, comparing the sa
Hello,
This should be analyzed and root-caused.
A ::stack would be useful to see the call parameters.
Without disassembling zio_buf_alloc(), I can only guess that
the mutex_enter you see crashing is really in kmem_cache_alloc().
If that proves to be the case, I would verify the offset of cc_l
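If a dump does get saved, something along these lines would give us the stack
and arguments (file names are the usual savecore defaults, yours may differ):

  mdb unix.0 vmcore.0
  > ::status     # panic string and dump summary
  > ::stack      # stack of the panicking thread
  > $C           # same, with frame pointers and call arguments
  > ::msgbuf     # console messages leading up to the panic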
You need remote reset or remote power cycle capability.
An ILOM console, if your hardware has support for it
would provide this.
ILOM is common on most server class hardware
( Sun servers certainly have it, so do Supermicro
boards).
Failing that, there are inexpensive remote power cycle
po
Hello,
Looking at the hald source (usr/src/cmd/hal/hald/hald.c):
Error 95 is coming from a script; it is just informing you that a fatal error
occurred.
The informative error code is the "2".
This tells you that hald forked a child process, and it timed out
waiting for the child process
Perhaps the focus should be amping up hald logging, so that if and when
the problem happens you have some info to look at.
The hald man page has examples on how to do this via svccfg.
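From memory it is roughly along these lines (the property name below is a
guess on my part; please verify it against hald(1M) before relying on it):

  svcs -L hal                                           # path to hald's SMF log file
  svccfg -s hal setprop hal/verbose = boolean: true     # property name is my guess
  svcadm refresh hal && svcadm restart hal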
Steve
- Original Message -
Hi Steve, thanks a lot for your help!
The problem is that the issues tha
Your lockstat output fingers ACPI debug-tracing functions.
I wonder why these are running in the first place.
Steve
- Original Message -
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network.
i86_mwait is the idle function the CPU is executing when it has
nothing else to do. Basically it sleeps inside that function.
Lockstat-based profiling just samples what is on CPU, so idle time
shows up as some form of mwait, depending on how the BIOS
is configured.
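For what it is worth, a basic on-cpu profile looks something like this
(duration and entry count are arbitrary):

  lockstat -kIW -D 20 sleep 30    # sample what is on cpu for 30s, top 20 entries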
Steve
- Original
Here is something to check:
Pop into the debugger (mdb -k) and see what AcpiDbgLevel's current setting is.
E.g.:
AcpiDbgLevel/x
The default setting is 3. If it's something higher, that would explain the high
incidence of ACPI trace/debug calls.
To exit the debugger type $q or ::quit
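If it does turn out to be higher, you can set it back from the same mdb
session (this assumes it is a 32-bit variable, which is my recollection):

  AcpiDbgLevel/W 3    # write the default value of 3 back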
S
Are you running with dedup enabled?
If the box is still responsive, try to generate a thread stack listing, e.g.:
echo "::threadlist -v" | mdb -k > /tmp/threads.txt
Steve
On Oct 21, 2011, at 4:16, Tommy Eriksen wrote:
> Hi guys,
>
> I've got a bit of a ZFS problem:
> All of a sudden, and it doe
Your last line of vmstat shows a high number of context switches and interrupts.
Most of the time is accounted for in the idle time column.
mpstat output would probably be more useful; perhaps you have lock contention.
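Something along these lines (intervals are arbitrary):

  mpstat 5 5                    # watch the smtx/srw columns for lock spins
  lockstat -C -D 15 sleep 10    # top contention events over a 10 second window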
/sG/
- Original Message -
Hi!
I encounter the following pr
Hi Matt,
The ZIL is not a performance enhancer. (This is a common misunderstanding;
people sometimes view the ZIL as a write cache.)
It is a way to simulate "sync" semantics on files where you really
need that, instead of the coarser granularity guarantee that ZFS gives
you without it. (txg
ool (or is this wrong?)
>
> Thanks for the heads up about the bug and pending fix.. I'll take a look.
>
> -Matt.
>
> On 13/01/2012, at 9:34 AM, Steve Gonczi wrote:
>
>> Hi Matt,
>>
>> The ZIL is not a performance enhancer. (This is a common mis
Take a look at README.altprivsep in usr/src/cmd/ssh.
Seems like the Solaris team significantly changed how privilege
separation works.
Looking at the Illumos hg log (which contains
the tail end of the OpenSolaris hg log), the Sun ssh code was periodically
resynced with OpenSSH. The last resync visibl
It is a long shot, but check how much space you have where your core dumps are
supposed to go. Your root pool may have limited space.
Also, the visibility of core dumps has security implications; they could be
inaccessible unless you are looking as root.
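A quick way to check both (paths are the usual defaults, yours may differ):

  coreadm             # where per-process core files are configured to go
  zfs list rpool      # how much space the root pool has left
  ls -l /var/cores    # look as root; core files are often mode 600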
Steve
- Original Message -
Thank
I am reading your mail as "I connect the disk array from a dismantled OI 148
system to another computer and..."
/etc is typically on your root pool (i.e., rpool).
Note that the rpool is often on a different device, perhaps a flash drive. You
may or may not have connected this to the "new" moth
At one time I posted a dtrace script to track txg open times. Look for it in
the forum archives, or I can repost it. Some other folks posted similar
scripts.
I would not be surprised to find a txg being open for an unusually long time
when the problem happens.
That would indicate a problem i
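I do not have the original handy, but a minimal sketch along the same lines
would be something like the following. It times spa_sync() per pool rather
than the txg open phase proper, and assumes the fbt probes resolve on your
build:

#!/usr/sbin/dtrace -s

#pragma D option quiet

fbt::spa_sync:entry
{
        self->ts = timestamp;
        self->txg = args[1];
        self->spa = stringof(args[0]->spa_name);
}

fbt::spa_sync:return
/self->ts/
{
        printf("%Y %s txg %d: sync took %d ms\n", walltimestamp,
            self->spa, self->txg, (timestamp - self->ts) / 1000000);
        self->ts = 0;
        self->txg = 0;
        self->spa = 0;
}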
Explaining your problem in specific terms would be helpful
(e.g., mention the commands you use to demonstrate the problem).
"An old system drive seems to have a memory..." is surely a colorful way of
describing your issue,
but unfortunately it does not make clear what exactly the problem is.
OK... I am still a bit unclear about what you are trying to accomplish.
I am puzzled by your "jail drive" reference.
I think you are trying to recycle an SSD that was part of another pool
earlier. You do not care about preserving any of the information that is still
on this SSD, except that
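In that case, one blunt but effective approach is to simply overwrite the old
ZFS labels on the device (the device name below is made up; triple-check
yours before pointing dd at anything):

  # ZFS keeps two 256K labels at the front and two at the end of the device
  dd if=/dev/zero of=/dev/rdsk/c4t1d0s0 bs=1024k count=10   # clobber the front labels
  # the two trailing labels live in the last 512K; either seek there with dd
  # as well, or just relabel/repartition the disk with format(1M)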
Hi Jim,
This looks to me more like a rounding-up problem, esp. looking at the
bug report quoted. The waste factor increases as the block size goes
down. Kind of looks like it fits the ratio of the block's nominal size vs.
its minimal on-disk footprint.
For example, compressed blocks are vari
This looks like an actual PCI device error to me.
I would dig deeper and look at the errors with fmdump -v -e
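E.g. (the uuid is whatever fmdump printed for your event):

  fmdump -eV            # full detail on the error reports
  fmadm faulty          # any diagnosed faults and suspect components
  fmdump -v -u <uuid>   # details for one specific fault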
Steve
- Original Message -
on occasion i have systems spontaneously rebooting. i can often find entries
like this in fault management but it is not particularly helpful. i s
For a fast (high ingest rate) system, 4GB may not be enough.
If your RAM device space is not sufficient to hold all the in-flight ZIL blocks,
it will fail over, and the ZIL will just redirect to your main data pool.
This is hard to notice, unless you have an idea of how much
data should be f
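One way to see whether the log device is actually absorbing the writes (pool
name is a placeholder):

  zpool iostat -v tank 5   # per-vdev I/O; compare the logs section to the data vdevs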
I could be wrong but I think he wants to know how to do this on a
system that hangs while trying to mount the pool.
Your article should help. Nice job, btw.
Steve
- Original Message -
What do you want to achieve this way?
http://wiki.openindiana.org/oi/Advanced+-+ZFS+Pools+as+
Yes, I suggest you follow the guidance given by Jim.
Once you have the system up and running, you may want to try
to import the pool using an explicit zpool import command.
Especially older versions of ZFS had a problem when large files were deleted.
The fs mount path tries to perform any interrupte
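Roughly along these lines once the system is up (pool name and altroot are
placeholders; read-only import needs a reasonably recent zpool version):

  zpool import                             # list pools visible but not imported
  zpool import -N -R /a tank               # import without mounting any datasets
  zpool import -o readonly=on -R /a tank   # or import read-only to poke around safely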