I think I did figure it out.
It seems to be an issue with the cpio on my system... I am not sure, but I
copied cpio over from my Solaris 9 SPARC server, and lucreate completed
without a bus error, and the system booted up using the root zpool.
The original cpio that I have on all of my Solaris 10 U6 bo
Hi
Weird, almost like some kind of memory corruption.
Could I see the upgrade logs that got you to U6,
ie
/var/sadm/system/logs/upgrade_log
for the U6 env.
What kind of upgrade did you do: Live Upgrade, text-based, etc.?
Enda
On 11/06/08 15:41, Krzys wrote:
> Seems like core.vold.* are not being cr
Seems like core.vold.* are not being created until I try to boot from zfsBE;
just creating zfsBE gets only core.cpio created.
[10:29:48] @adas: /var/crash > mdb core.cpio.5545
Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
> ::status
debugging core file of cpio (32-bit) from adas
file: /usr
Hi
Try to get the stack trace from the core,
ie mdb core.vold.24978
::status
$C
$r
Also run the same 3 mdb commands on the cpio core dump.
Also, if you could, extract some data from the truss log, ie a few hundred
lines before the first SIGBUS.
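One way to pull out that window is to find the line number of the first SIGBUS and print the lines leading up to it; a minimal sketch (the log file name `truss.out` is an assumption, substitute the real truss output file):

```shell
# Print the ~300 truss lines leading up to the first SIGBUS.
log=truss.out   # hypothetical name for the truss output file
if [ -f "$log" ]; then
  # line number of the first SIGBUS occurrence
  first=$(grep -n 'SIGBUS' "$log" | head -n 1 | cut -d: -f1)
  if [ -n "$first" ]; then
    # start 300 lines earlier, clamped to the top of the file
    start=$(( first > 300 ? first - 300 : 1 ))
    sed -n "${start},${first}p" "$log"
  fi
fi
```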
Enda
On 11/06/08 01:25, Krzys wrote:
> THis is so b
What makes me wonder is why I am not even able to see anything under boot -L.
Why is it just not seeing this disk as a boot device? So strange.
On Wed, 5 Nov 2008, Krzys wrote:
> This is so bizarre, I am unable to get past this problem. I thought I did not have enough
> space on my hard drive (the new one), so
This is so bizarre, I am unable to get past this problem. I thought I did not
have enough space on my hard drive (the new one), so I replaced it with a 72 GB
drive, but I am still getting that bus error. Originally, when I restarted my
server it did not want to boot, so I had to power it off and then back on and
it the
Hi
Looks ok, some mounts left over from the previous failure.
In regards to swap and dump on the zpool, you can set them:
zfs set volsize=1G rootpool/dump
zfs set volsize=1G rootpool/swap
for instance; of course, the above is only an example of how to do it.
Or create the zvols for rootpool/dump etc. before lucreate.
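Creating them up front would look roughly like this (a sketch only; the 1G/2G sizes are placeholders, size dump and swap for the actual system):

```shell
# Pre-create swap and dump zvols in the root pool before lucreate
# (sizes below are examples, not recommendations).
zfs create -V 2G rootpool/swap
zfs create -V 1G rootpool/dump

# Or resize them afterwards, as above:
zfs set volsize=1G rootpool/dump
zfs set volsize=1G rootpool/swap
```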
I did upgrade my U5 to U6 from DVD and went through the upgrade process.
My file system is set up as follows:
[10:11:54] [EMAIL PROTECTED]: /root > df -h | egrep -v
"platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0
Hi
No, that should be fine; as long as the disk is SMI labelled then that's
fine, and LU would have failed much earlier if it had found an EFI labelled
disk.
The core dump is not due to this; something else is causing that.
Enda
On 11/05/08 15:14, Krzys wrote:
> Great, I will follow this, but I was wonderin
Great, I will follow this, but I was wondering, maybe I did not set up my disk
correctly? From what I understand, a zpool cannot be set up on the whole disk
as other pools are, so I partitioned my disk so that all the space is in the
s0 slice. Maybe that's not correct?
[10:03:45] [EMAIL PROTECTED]: /root > f
Hi Krzys
Also some info on the actual system,
ie what it was upgraded to U6 from, and how,
and an idea of how the filesystems are laid out, ie is /usr separate from
/ and so on (maybe a df -k). You don't appear to have any zones installed,
just to confirm.
Enda
On 11/05/08 14:07, Enda O'Connor wrote:
Hi
did you get a core dump?
It would be nice to see the core file to get an idea of what dumped core;
you might configure coreadm if not already done.
Run coreadm first; if the output looks like
# coreadm
global core file pattern: /var/crash/core.%f.%p
global core file content: default
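If it doesn't look like that yet, something along these lines (run as root) should set the global pattern shown above; a sketch of the standard coreadm invocation:

```shell
# Set the global core file pattern and enable global core dumps.
coreadm -g /var/crash/core.%f.%p -e global
coreadm -e global-setid   # optionally also catch set-id processes
# Verify the settings took effect:
coreadm
```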
On Wed, 5 Nov 2008, Enda O'Connor wrote:
> On 11/05/08 13:02, Krzys wrote:
>> I am not sure what I did wrong but I did follow all the steps to get my
>> system moved from UFS to ZFS and now I am unable to boot it... can anyone
>> suggest what I could do to fix it?
>>
>> here are all my ste
Sorry, it's Solaris 10 U6, not Nevada. I just upgraded to U6 and was hoping I
could take advantage of ZFS boot mirroring.
On Wed, 5 Nov 2008, Enda O'Connor wrote:
> On 11/05/08 13:02, Krzys wrote:
>> I am not sure what I did wrong but I did follow all the steps to get my
>> system moved fr
Yes, I did notice that error too, but when I ran lustatus it showed as OK, so
I assumed it was safe to start from it; but even booting up from the original
disk caused problems and I was unable to boot my system...
Anyway, I did power off my system for a few minutes, and then started
On 11/05/08 13:02, Krzys wrote:
> I am not sure what I did wrong but I did follow all the steps to get my
> system moved from UFS to ZFS and now I am unable to boot it... can anyone
> suggest what I could do to fix it?
>
> here are all my steps:
>
> [00:26:38] @adas: /root > zpool create roo
On 05 November, 2008 - Krzys sent me these 18K bytes:
>
> I am not sure what I did wrong but I did follow all the steps to get my
> system moved from UFS to ZFS and now I am unable to boot it... can anyone
> suggest what I could do to fix it?
>
> here are all my steps:
>
> [00:26:38] @adas
I am not sure what I did wrong but I did follow all the steps to get my
system moved from UFS to ZFS and now I am unable to boot it... can anyone
suggest what I could do to fix it?
here are all my steps:
[00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
[00:26:57] @adas: /root > lucr
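For reference, the usual UFS-to-ZFS Live Upgrade sequence looks roughly like this (a sketch; the pool name rootpool, BE name zfsBE, and slice c1t1d0s0 follow the examples in this thread):

```shell
# Create the root pool on an SMI-labelled slice (whole-disk/EFI labels
# are not supported for a ZFS root pool), then migrate the active UFS BE.
zpool create rootpool c1t1d0s0
lucreate -n zfsBE -p rootpool   # copy the current BE into the pool
luactivate zfsBE                # mark the new ZFS BE active on next boot
init 6                          # reboot via init, as luactivate requires
```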