Hi Krzys
Could you also send some info on the actual system, i.e. what it was 
upgraded to u6 from, and how. An idea of how the filesystems are laid 
out would help too, i.e. is /usr separate from / and so on (maybe a 
df -k). You don't appear to have any zones installed, but just to 
confirm.
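
For reference, something like the following would capture all of that in one go (a sketch; /etc/release and zoneadm assume a standard Solaris 10 install):

```
# Release and update level the box is currently running:
cat /etc/release

# Filesystem layout in 1K blocks -- shows whether /usr, /var etc.
# live on their own slices or share /:
df -k

# Confirm there really are no non-global zones configured:
zoneadm list -cv
```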
Enda

On 11/05/08 14:07, Enda O'Connor wrote:
> Hi
> did you get a core dump?
> It would be nice to see the core file to get an idea of what dumped core.
> You might need to configure coreadm if that's not already done.
> Run coreadm first; if the output looks like
> 
> # coreadm
>      global core file pattern: /var/crash/core.%f.%p
>      global core file content: default
>        init core file pattern: core
>        init core file content: default
>             global core dumps: enabled
>        per-process core dumps: enabled
>       global setid core dumps: enabled
>  per-process setid core dumps: disabled
>      global core dump logging: enabled
> 
> then all should be good, and cores should appear in /var/crash
> 
> otherwise the following should configure coreadm:
> coreadm -g /var/crash/core.%f.%p
> coreadm -G all
> coreadm -e global
> coreadm -e process
> 
> 
> Run coreadm -u to load the new settings without rebooting.
> 
> You might also need to set the size limit for core dumps via
> ulimit -c unlimited
> (check ulimit -a first).
> 
> Then rerun the test and check /var/crash for the core dump.
> 
> If that fails, a truss via, say,
> truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
> might give an indication; look for SIGBUS in the truss log.
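
As an illustration (the log excerpt below is fabricated, not from your run), a bus error typically shows up in truss -f output as an "Incurred fault" line followed by the siginfo, with the forked child's PID in the first column, so a simple grep narrows it down quickly:

```shell
# Fabricated sample of what a faulting child looks like in truss -f output;
# the real log would be /tmp/truss.out:
printf '%s\n' \
  '27350:  read(5, " ... ", 8192)                       = 8192' \
  '27350:      Incurred fault #5, FLTACCESS  %pc = 0xFF1234' \
  '27350:        siginfo: SIGBUS BUS_ADRALN addr=0xFF1233' \
  > /tmp/truss.sample

# -n prints the line number, and the leading PID tells you which
# process in the lucreate pipeline actually took the SIGBUS:
grep -n SIGBUS /tmp/truss.sample
```

From there the neighbouring lines in the real log show the last system call the process made before the fault.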
> 
> NOTE: you might want to reset the coreadm and ulimit settings after 
> this, so you don't risk filling the system with core dumps should 
> some utility dump core in a loop, say.
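
For example, to back the changes out afterwards (a sketch only; it assumes the box started from the usual Solaris 10 defaults of per-process dumps enabled and global dumps disabled, so compare against the coreadm output you saved earlier):

```
# Turn the global core dump machinery back off:
coreadm -d global
coreadm -d global-setid

# Shrink the per-shell core size limit back down:
ulimit -c 0
```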
> 
> 
> Enda
> On 11/05/08 13:46, Krzys wrote:
>>
>> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>>
>>> On 11/05/08 13:02, Krzys wrote:
>>>> I am not sure what I did wrong, but I followed all the steps to 
>>>> get my system moved from UFS to ZFS and now I am unable to boot 
>>>> it... can anyone suggest what I could do to fix it?
>>>>
>>>> here are all my steps:
>>>>
>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>>>> Analyzing system configuration.
>>>> Comparing source boot environment <ufsBE> file systems with the file
>>>> system(s) you specified for the new boot environment. Determining which
>>>> file systems should be in the new boot environment.
>>>> Updating boot environment description database on all BEs.
>>>> Updating system configuration files.
>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>>>> environment; cannot get BE ID.
>>>> Creating configuration for boot environment <zfsBE>.
>>>> Source boot environment is <ufsBE>.
>>>> Creating boot environment <zfsBE>.
>>>> Creating file systems on boot environment <zfsBE>.
>>>> Creating <zfs> file system for </> in zone <global> on 
>>>> <rootpool/ROOT/zfsBE>.
>>>> Populating file systems on boot environment <zfsBE>.
>>>> Checking selection integrity.
>>>> Integrity check OK.
>>>> Populating contents of mount point </>.
>>>> Copying.
>>>> Bus Error - core dumped
>>> hmm, the above might be relevant I'd guess.
>>>
>>> What release are you on, i.e. is this Solaris 10, or is this a 
>>> Nevada build?
>>>
>>> Enda
>>>> Creating shared file system mount points.
>>>> Creating compare databases for boot environment <zfsBE>.
>>>> Creating compare database for file system </var>.
>>>> Creating compare database for file system </usr>.
>>>> Creating compare database for file system </rootpool/ROOT>.
>>>> Creating compare database for file system </>.
>>>> Updating compare databases on boot environment <zfsBE>.
>>>> Making boot environment <zfsBE> bootable.
>>
>> Anyway, I restarted the whole process and got that Bus Error again
>>
>> [07:59:01] [EMAIL PROTECTED]: /root > zpool create rootpool c1t1d0s0
>> [07:59:22] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool/ROOT
>> cannot open 'rootpool/ROOT': dataset does not exist
>> [07:59:27] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool
>> [07:59:31] [EMAIL PROTECTED]: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>> Analyzing system configuration.
>> Comparing source boot environment <ufsBE> file systems with the file
>> system(s) you specified for the new boot environment. Determining which
>> file systems should be in the new boot environment.
>> Updating boot environment description database on all BEs.
>> Updating system configuration files.
>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>> environment; cannot get BE ID.
>> Creating configuration for boot environment <zfsBE>.
>> Source boot environment is <ufsBE>.
>> Creating boot environment <zfsBE>.
>> Creating file systems on boot environment <zfsBE>.
>> Creating <zfs> file system for </> in zone <global> on 
>> <rootpool/ROOT/zfsBE>.
>> Populating file systems on boot environment <zfsBE>.
>> Checking selection integrity.
>> Integrity check OK.
>> Populating contents of mount point </>.
>> Copying.
>> Bus Error - core dumped
>> Creating shared file system mount points.
>> Creating compare databases for boot environment <zfsBE>.
>> Creating compare database for file system </var>.
>> Creating compare database for file system </usr>.
>>
>>
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 


-- 
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
