Hi Richard,

Thank you very much for your comments.
Please see my responses inline.

Jan


Richard Elling wrote:
> Hi Jan, comments below...
>
> jan damborsky wrote:
>> Hi folks,
>>
>> I am a member of the Solaris Install team, and I am currently working
>> on making the Slim installer compliant with the ZFS boot design specification:
>>
>> http://opensolaris.org/os/community/arc/caselog/2006/370/commitment-materials/spec-txt/
>>  
>>
>>
>> After the ZFS boot project was integrated into Nevada and support
>> for installation on a ZFS root was delivered into the legacy installer,
>> some differences emerged between how the Slim installer implements
>> ZFS root and how it is done in the legacy installer.
>>
>> One thing we need to change in the Slim installer is to create
>> swap & dump on a ZFS volume instead of utilizing a UFS slice,
>> as defined in the design spec and implemented in the SXCE installer.
>>
>> When reading through the specification and looking at the SXCE
>> installer source code, I realized that some points are not quite
>> clear to me.
>>
>> Could I please ask you to help me clarify them, so that I can
>> implement those features the right way?
>>
>> Thank you very much,
>> Jan
>>
>>
>> [i] Formula for calculating dump & swap size
>> --------------------------------------------
>>
>> I have gone through the specification and found that the
>> following formulas should be used for calculating the default
>> sizes of swap & dump during installation:
>>
>> o size of dump: 1/4 of physical memory
>>   
>
> This is a non-starter for systems with 1-4 TBytes of physical
> memory.  There must be a reasonable maximum cap, most
> likely based on the size of the pool, given that we regularly
> boot large systems from modest-sized disks.

I agree - there will be both upper (32 GiB) and lower
(512 MiB, or 0 for dump?) bounds defined.
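
As a concrete reading of that combination, here is a minimal shell sketch
(mine, not installer code; the MiB units, the function name, and the exact
bounds are assumptions pending agreement):

```shell
# Sketch: default dump size = 1/4 of physical memory, clamped to the
# bounds discussed above (floor 512 MiB, cap 32 GiB). Illustrative only.
default_dump_mb() {
    physmem_mb=$1
    size=$(( physmem_mb / 4 ))
    cap_mb=$(( 32 * 1024 ))    # proposed upper bound: 32 GiB
    floor_mb=512               # proposed lower bound: 512 MiB (or 0, if dump is skipped)
    if [ "$size" -gt "$cap_mb" ]; then size=$cap_mb; fi
    if [ "$size" -lt "$floor_mb" ]; then size=$floor_mb; fi
    echo "$size"
}
```

So a 1 GiB machine would still get the 512 MiB floor, while a 4 TiB machine
would be capped at 32 GiB rather than getting a 1 TiB dump volume.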

>
>> o size of swap: max of (512MiB, 1% of rpool size)
>>
>> However, looking at the source code, the SXCE installer
>> calculates the default sizes using a slightly different
>> algorithm:
>>
>> size_of_swap = size_of_dump = MAX(512 MiB, MIN(physical_memory/2, 32 GiB))
>>
>> Is there a preference as to which one should be used, or is
>> there any other possibility we should take into account?
>>   
>
> zero would make me happy :-)  But there are some cases where swap
> space is preferred.  Again, there needs to be a reasonable cap.  In
> general, the larger the system, the less use for swap during normal
> operations, so for most cases there is no need for really large swap
> volumes.  These can also be adjusted later, so the default can be
> modest.  One day perhaps it will be fully self-adjusting like it is
> with other UNIX[-like] implementations.
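
For reference, the spec's swap formula quoted above (max of 512 MiB and 1% of
the rpool size) can be sketched as follows; the MiB units and the function
name are illustrative assumptions, not spec text:

```shell
# Sketch of the spec's default swap size: MAX(512 MiB, 1% of rpool size).
default_swap_mb() {
    rpool_mb=$1
    size=$(( rpool_mb / 100 ))     # 1% of the pool
    if [ "$size" -lt 512 ]; then   # but never below 512 MiB
        size=512
    fi
    echo "$size"
}
```

Note that this keys off pool size rather than physical memory, which is what
makes it naturally modest on small disks.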
>
>>
>> [ii] Procedure of creating dump & swap
>> --------------------------------------
>>
>> Looking at the SXCE source code, I discovered that the following
>> commands should be used for creating swap & dump:
>>
>> o swap
>> # /usr/sbin/zfs create -b PAGESIZE -V <size_in_mb>m rpool/swap
>> # /usr/sbin/swap -a /dev/zvol/dsk/rpool/swap
>>
>> o dump
>> # /usr/sbin/zfs create -b 131072 -V <size_in_mb>m rpool/dump
>> # /usr/sbin/dumpadm -d /dev/zvol/dsk/rpool/dump
>>
>> Could you please let me know if my observations are correct,
>> or if I should use a different approach?
>>
>> As far as setting the volume block size is concerned (the -b option),
>> how are those numbers determined? Will they be the same in
>> different scenarios, or are there plans to tune them in some way
>> in the future?
>>   
>
> Setting the swap blocksize to pagesize is interesting, but should be
> ok for most cases.  The reason I say it is interesting is because it
> is optimized for small systems, but not for larger systems which
> typically see more use of large page sizes.  OTOH larger systems
> should not swap, so it is probably a non-issue for them.  Small
> systems should see this as the best solution.
>
> Dump just sets the blocksize to the default, so it is a no-op.

I see - thank you for clarifying this.

> -- richard
>
>>
>> [iii] Is there anything else I should be aware of ?
>> ---------------------------------------------------
>>   
>
> Installation should *not* fail due to running out of space because
> of large dump or swap allocations.  I think the algorithm should
> first take into account the space available in the pool after accounting
> for the OS.
> -- richard
>


This is a good point. I can imagine that a user installing on a
USB stick would probably be fine with having the system installed
without a dump device if the available space is limited.
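A space-aware default along those lines might look like the following sketch
(the 10%-of-free-space cap and the 512 MiB minimum useful dump size are my
assumptions, not from the spec):

```shell
# Cap the requested dump size by what the pool can spare after the OS,
# and drop dump entirely when even a minimal device would not fit.
adjusted_dump_mb() {
    want_mb=$1
    pool_free_mb=$2                 # space left after accounting for the OS
    min_dump_mb=512                 # assumed smallest useful dump device
    cap_mb=$(( pool_free_mb / 10 )) # assumed cap: 10% of free pool space
    size=$want_mb
    if [ "$size" -gt "$cap_mb" ]; then size=$cap_mb; fi
    if [ "$size" -lt "$min_dump_mb" ]; then size=0; fi  # e.g. USB-stick install
    echo "$size"
}
```

With this shape, the installer shrinks dump before it would ever fail for
lack of space, and skips it entirely on very small media.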

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
