it finish what it is doing in the background.
Steve
- Original Message -
Yes, Steve, exactly. I'd like to save the rest of my installation, but
I have a pool that, when mounted on any system, prevents that system
from rebooting.
I could be wrong but I think he wants to know how to do this on a
system that hangs while trying to mount the pool.
Your article should help. Nice job, btw.
Steve
- Original Message -
What do you want to achieve this way?
http://wiki.openindiana.org/oi/Advanced+-+ZFS+Pools+as
For a fast (high ingest rate) system, 4G may not be enough.
If your RAM device space is not sufficient to hold all the in-flight ZIL blocks,
it will fail over, and the ZIL will just redirect to your main data pool.
This is hard to notice, unless you have an idea of how much
data should be f
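One way to spot that silent fail-over is to watch the log device while the load is running; a minimal sketch, assuming a pool named "tank" with a separate log device (the name is a placeholder):

# per-vdev I/O once a second; the slog shows up under the "logs" section
zpool iostat -v tank 1
# if the log device stays idle while synchronous writes hit the main vdevs,
# the ZIL has fallen back to the data pool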
This looks like an actual PCI device error to me.
I would dig deeper and look at the errors with fmdump -v -e.
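For example (a sketch, not tied to the poster's exact hardware), you can narrow the error log to recent events and print the full detail, then see whether anything has been diagnosed:

# verbose dump of error-log events from the last day
fmdump -eV -t 1day
# list any faults the diagnosis engine has already called out
fmadm faulty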
Steve
- Original Message -
On occasion I have systems spontaneously rebooting. I can often find entries
like this in fault management, but it is not particularly helpful. I
metry
and insist on cylinder alignment)...
-Steve
On 19 August 2013 18:28, Richard Elling wrote:
> On Aug 19, 2013, at 4:02 AM, Edward Ned Harvey (openindiana) <
> openindi...@nedharvey.com> wrote:
>
> >> From: Steve Goldthorpe [mailto:openindi...@waistcoat.org.uk]
> >
55) = 3341
so change vmdk to
ddb.geometry.sectors = "32"
ddb.geometry.heads = "255"
ddb.geometry.cylinders = "3341"
3341*255*32 > 1697*255*63 :-)
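For reference, a quick shell check of that inequality (same numbers as quoted above, nothing measured on the actual disk):

echo $((3341 * 255 * 32))   # 27262560 sectors with the new geometry
echo $((1697 * 255 * 63))   # 27262305 sectors with the old geometry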
I'll have a play tonight...
-Steve
On 18 August 2013 17:23, Steve Goldthorpe wrote:
> No matter what I
creates, it always uses partition 0
for the rpool and starts that on cylinder 1. As it thinks the geometry is
255 heads & 63 sectors/track, this will surely not be aligned. Do I have to
fudge the geometry too somehow?
I think non-root-pool disks will be fine as I ca
mentioned,
I am curious what the performance and space usage implications are when
the file size and compression are taken into consideration.
Steve
- Original Message -
Yes, I've had similar results on my rig and complained some time ago...
yet the ZFS world moves forward with desi
ive having been part of another pool).
Steve
>I am concluding that something on the old SSD drive still has a record
>of the old ZFS set which must have been stored somewhere other than the
>standard partitions ... I don't know where else this infor
m?
The ZFS term "dataset" specifically means a file system, snapshot, clone or zvol,
but perhaps
you are talking about a pool here.
Exporting and reimporting a pool sometimes helps resolve pool
inconsistencies.
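A minimal sketch of that cycle, assuming a non-root pool named "tank" (placeholder name):

# unmount the datasets and mark the pool exported
zpool export tank
# list importable pools, then import it again
zpool import
zpool import tank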
Steve
- Original Message -
Hi Folks,
I'm having
I want an open source, community-based ZFS storage backend.
Is OI the correct choice? Seems to work OK. I don't want a GUI.
Is there an alternative? No, ZFS on FreeBSD/Linux doesn't cut it.
-steve
> i would be happy, if OI can be a success story
>
> bu
end summary *
Ideas?
Details below. thanx - steve
Details *
root@live-dfs-1:/var/adm# diff /lib/svc/method/rsyncd
/home/steve/rsyncd-smf
53c53
< sudo -u nobody $DAEMON --daemon --config="$RSYNC_
c5000bc44a5f
c4::w5000c500525c2f91,0    disk-path    connected    configured    unknown
8. c2t5000C500525C2F91d0
/scsi_vhci/disk@g5000c500525c2f91
-steve
>
>> c4::w5000c5000bc44cfd,0    disk-path    connected    configured    unknown
>> c4::w5
Howdy!
I have a Dell 610 with an LSI 9200-8e HBA connected to a Supermicro 847
(45-disk 4U JBOD).
Each port on the LSI is connected by a separate cable to one of the 2
backplanes on the SM847.
How come format and cfgadm show different controllers and devices?
- cfgadm -al
all devices o
in the
ZFS disk I/O path. E.g., large file deletions with dedup turned on may
cause an I/O storm, and that in turn may cause your problem.
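A quick way to check whether dedup is actually in play (pool and dataset names below are placeholders):

# per-dataset dedup setting
zfs get dedup tank/data
# dedup table (DDT) statistics for the whole pool
zpool status -D tank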
Steve
On May 18, 2012, at 4:25 AM, Adrian Carpenter wrote:
> We are using one port of each of a pair of Qlogic 2562 cards to act as a FC
> target f
o the "new" motherboard.
Steve
- Original Message -
I'm trying to access the /etc files from another system on which I installed OI
148. I can import the pool as fpool and can access /mnt/fpool & /mnt/export.
But for the life of me I can't figure out how to get to
It is a long shot, but check how much space you have where your core dumps are
supposed to go. Your root pool may have limited space.
Also, the visibility of core dumps has security implications; they could be
inaccessible unless you are looking as root.
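A minimal sketch of that check (the /var/cores path is only an example; coreadm prints the real setting):

# where are per-process core dumps configured to go?
coreadm
# how much space is left at that location?
df -h /var/cores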
Steve
- Original Message
no -
objfs    -    /system/object     objfs    -    no    -
sharefs  -    /etc/dfs/sharetab  sharefs  -    no    -
fd       -    /dev/fd            fd       -    no    -
-steve
~
~
>
t
I guess I missed something? What else do I need to do?
Do I need to edit /etc/system or /etc/vfstab or /lib/svc/method/??
thanx - steve
visible 2009/408
(presumably 2009, April 8).
That would peg the Sun SSH version as "last synced with OpenSSH 5.2".
The current OpenSSH is 5.9.
Steve G
- Original Message -
They're needed so that sshd correctly uses Solaris's version of PAM and audit
and other
ool (or is this wrong?)
>
> Thanks for the heads up about the bug and pending fix.. I'll take a look.
>
> -Matt.
>
> On 13/01/2012, at 9:34 AM, Steve Gonczi wrote:
>
>> Hi Matt,
>>
>> The ZIL is not a performance enhancer. (This is a common mis
need to be aware of a recent performance regression discovered pertaining to
ZIL (George Wilson has just posted the fix for review on the illumos dev list).
This has been in illumos for a while, so it is possible that it is biting you.
Steve
- Original Message -
Hi, I've install
Your last line of vmstat shows a high number of context switches and interrupts.
Most of the time is accounted for in the idle time column.
mpstat output would probably be more useful; perhaps you have lock contention.
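A sketch of the follow-up collection (the interval and counts are arbitrary examples):

# per-CPU statistics: cross-calls, mutex spins, interrupts, system time
mpstat 5 5
# kernel lock-contention summary gathered over ten seconds
lockstat sleep 10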
/sG/
- Original Message -
Hi!
I encounter the following pr
Are you running with dedup enabled?
If the box is still responsive, try to generate a thread stack listing, e.g.:
echo "::threadlist -v" | mdb -k > /tmp/threads.txt
Steve
On Oct 21, 2011, at 4:16, Tommy Eriksen wrote:
> Hi guys,
>
> I've got a bit of a ZFS prob
:quit
Steve
- Original Message -
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited Unix skills
unfortunately fail me.
I've zipped it and attached it to this mail, maybe someone can get
something out of it...
i86_mwait is the idle function the CPU is executing when it has
nothing else to do. Basically it sleeps inside of that function.
Lockstat-based profiling just samples what is on CPU, so idle time
shows up as some form of mwait, depending on how the BIOS
is configured.
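For completeness, the usual way such a profile is captured (a sketch; the 30-second window is arbitrary):

# sample what is on CPU, coalesce kernel PCs by function, show the top 20
lockstat -kIW -D 20 sleep 30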
Steve
Your lockstat output fingers ACPI debug tracing functions.
I wonder why these are running in the first place.
Steve
- Original Message -
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network
Perhaps the focus should be amping up hald logging, so that if and when
the problem happens you have some info to look at.
The hald man page has examples on how to do this via svccfg.
Steve
- Original Message -
Hi Steve, thanks a lot for your help!
The problem is that the issues
--help       Show this information and exit
--version    Output version information and exit

The HAL daemon detects devices present in the system and provides the
org.freedesktop.Hal service through the system-
power outlets you can buy. These allow remote power control of the
individual power plugs.
Steve
- Original Message -
Hi, I just upgraded an aged OpenSolaris 2009.06 to OpenIndiana 148 and I
have a silly but annoying issue.
When doing
Uh, new to OpenIndiana (O.I.) and kind of dismayed to see the lack of any
community forum anywhere - with the exception of this mailing list... which is
hardly the ideal way to resolve an issue or seek guidance, IMHO. I was able to
find something at OpenSolaris.org but the "Indiana" category app
"out of memory" condition?
Best Wishes,
Steve
*/
void *
kmem_cache_alloc(kmem_cache_t *cp, int kmflag)
{
	kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
	kmem_magazine_t *fmp;
	void *buf;

	mutex_enter(&ccp->cc_lock);
/sG/
uld use more than 4GB total; just not
> individually.
>
> Mike
>
>
> On Fri, 2011-06-24 at 15:58 +, Steve Gonczi wrote:
>
>> For Intel CPUs, 32-bit code is certainly more compact, and in some cases
>> arguably faster than 64-bit code. (say, comparing the sa
y limited to 4G, which is split between
kernel and userland,
depending on the OS and configuration. (E.g.: 1G kernel and 3G userland)
Steve
- "Michael Stapleton" wrote:
While we are talking about 32- vs 64-bit processes:
Which one is better?
Faster
man dumpadm.
You have to enable crash dumps, and select a location where you have some
room to save them. When you just run dumpadm it will tell you
what your current settings are.
If you do not have crash dumps enabled, you may be able to save the last
crash dump by running "savecore /some/l
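A minimal sketch of that setup (the device and directory below are examples, not the poster's actual layout):

# show the current crash dump settings
dumpadm
# point crash dumps at the rpool dump zvol and pick a savecore directory
dumpadm -d /dev/zvol/dsk/rpool/dump
dumpadm -s /var/crash/myhost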
focus another issue. The community would benefit
from a server where people could upload crash dumps in cases like this.
I am sure there are several people reading this list who may be able and
inclined to take a quick look and provide a first-cut diagnosis on a
volunteer basis.
Steve
/sG
Hi Dan,
If you monitor the illumos checkins, you will see that there is
a fair amount of activity going on.
I am unsure how this correlates with any binary release schedule,
but at least at the OS source code level, things are surely
being fixed.
/sG/
- "Dan Swartzendruber" wrote:
Alasdair - this looks very relevant - Thanks! Unfortunately, I don't have a
build environment set up right now, so I'm not sure how soon I can test this.
Within a couple of days, though, I hope to have some good data on this.
-steve j
--
Steve Jacobson | Director of Operations | D
and we're not on the old. We think that might be what's
contributing to the reset call. Regardless, though, the panic on OI is our
biggest concern.
Thx!
-steve j
--
Steve Jacobson | Director of Operations | Doyenz, Inc.
11245 SE 6th Street, Suite 120 | Bellevue, Wash. | 98004
Main: 2
iSCSI targets on
the Ubuntu system. There are two of these SuperMicro shelves. So, there are
ten LUNs available by iSCSI to OpenIndiana. The tank is created with one VDEV
for the ten LUNs.
--
Steve Jacobson | Director of Operations | Doyenz, Inc.
11245 SE 6th Street, Suite 120 | Bellevue
When the head system comes back, it appears to be just fine, but at some
point in the future, when there is i/o, it will reboot again.
Has anyone seen behavior like this before, or does anyone have any advice on
how to debug / resolve this?
Thanks!
-Steve J
--
Steve Jacobson | Director of Operation
Check out chime
-:::-sG-:::-
On Jan 27, 2011, at 23:28, WK wrote:
> I am experimenting with using Zenoss to monitor OI 148, and I was wondering
> if anyone had any advice on configuring SNMP for this purpose? Zenoss is
> showing the uptime, but not much more. On my Linux machines, it shows ne
Just a wild guess: you may have
the kernel CIFS server occupying the SMB ports already.
Make sure CIFS is shut off.
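A sketch of how to check and stop it (the FMRI is the one the illumos kernel CIFS server normally uses):

# is the kernel CIFS/SMB server holding ports 139/445?
svcs network/smb/server
# stop it so Samba can bind
svcadm disable network/smb/server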
-:::-sG-:::-
On Jan 19, 2011, at 11:43, Paul Johnston wrote:
> Hi
>
> I have SunOS openindiana 5.11 oi_148 i86pc i386 i86pc Solaris and wanted to
> try it for samba
>
> However
> paulj@