with Solaris 10 U7. Also, when will this feature be integrated into Solaris 10?
Is there a workaround? I checked with the format tool, but it had no effect.
Thanks for any info.
Jan
ormat create a
new one for the larger LUN. Finally, create slice 0 as the size of the
entire (now larger) disk."?
Could you please give me some more detailed information on your description?
Many thanks,
jan
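For reference, the procedure quoted above usually amounts to something like the
following - a rough sketch only; the device name c1t0d0 is hypothetical, and the
expansion step depends on the build:
# format
    (delete slice 0, then recreate it so it spans the whole, now larger, LUN)
# zpool online -e mypool c1t0d0s0
    (newer builds; on older builds an export/import after relabelling has the
    same effect)
# zpool list mypool
    (SIZE should now reflect the larger LUN)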
I am using a mirrored system pool on two 80G drives - however, I was only using
40G since I thought I might use the rest for something else. ZFS Time Slider
was complaining that the pool was 90% full, so I decided to increase the pool
size.
What I did was a zpool detach of one of the mirrored hdds and in
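For context, the detach/re-attach cycle being described typically looks like
this - a sketch only, with hypothetical device names c5d0s0/c5d1s0:
# zpool detach rpool c5d1s0
# format
    (relabel c5d1 so slice 0 covers the capacity you want)
# zpool attach rpool c5d0s0 c5d1s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5d1s0
    (x86 boot blocks; repeat the cycle for the other disk, after which the pool
    can use the larger slices)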
On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk wrote:
> - "Jan Riechers" skrev:
>
> I am using a mirrored system pool on 2 80G drives - however I was only
> using 40G since I thought I might use the rest for something else. ZFS Time
> Slider was complaining t
On Sun, May 2, 2010 at 3:51 PM, Jan Riechers wrote:
>
>
> On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk
> wrote:
>
>> - "Jan Riechers" skrev:
>>
>> I am using a mirrored system pool on 2 80G drives - however I was only
>> using 40G s
n rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  version   22     default
... and this is where I am now.
The zpool contains my digital images and videos and I would be really unhappy
to lose them. What can I do to get back the pool? Is there hope?
Sorry for the long post - tried to assemble
j...@opensolaris:~$ pfexec zpool import -D
no pools available to import
Any other ideas?
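For anyone following along, other import variants that are sometimes worth
trying look like this (a sketch; device directory and pool name depend on the
system):
$ pfexec zpool import                  (scan the default /dev/dsk)
$ pfexec zpool import -d /dev/dsk      (scan an explicit device directory)
$ pfexec zpool import -f <poolname>    (force, if the pool claims to be in use
                                        by another host)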
j...@opensolaris:~$ zpool clear vault
cannot open 'vault': no such pool
Yes, I turned the system off before I connected the disks to the other
controller. And I turned the system off before moving them back to the original
controller.
Now it seems like the system does not see the pool at all.
The disks are there, and they have not been used so I do not understand w
...@3,0
Specify disk (enter its number): ^C
j...@opensolaris:~$
On Thu, May 13, 2010 at 7:15 PM, Richard Elling wrote:
> now try "zpool import" to see what it thinks the drives are
> -- richard
>
> On May 13, 2010, at 2:46 AM, Jan Hellevik wrote:
>
> > Short versi
Thanks for the help, but I cannot get it to work.
j...@opensolaris:~# zpool import
pool: vault
id: 8738898173956136656
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http:
I cannot import - that is the problem. :-(
I have read the discussions you referred to (and quite a few more), and also
about the logfix program. I also found a discussion where 'zpool import -FX'
solved a similar problem so I tried that but no luck.
Now I have read so many discussions and blog
I don't think that is the problem (but I am not sure). It seems like the problem
is that the ZIL is missing. It is there, but not recognized.
I used fdisk to create a 4GB partition of a SSD, and then added it to the pool
with the command 'zpool add vault log /dev/dsk/c10d0p1'.
When I try to impo
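For reference, adding a separate log device - and removing it again on pool
versions that support slog removal (19 and later) - looks roughly like this:
# zpool add vault log /dev/dsk/c10d0p1
# zpool status vault        (the device should show up under a 'logs' section)
# zpool remove vault c10d0p1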
svn_133 and zfs 22. At least my rpool is 22.
Thanks! Not home right now, but I will try that as soon as I get home.
It did not work. I did not find labels on p1, but on p0.
j...@opensolaris:~# zdb -l /dev/dsk/c10d0p1
LABEL 0
failed to unpack label 0
LABEL 1
-
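A quick way to compare labels across the partition device nodes of that SSD
(a sketch, using the device names from this thread):
# zdb -l /dev/dsk/c10d0p0
# zdb -l /dev/dsk/c10d0p1
# for p in /dev/dsk/c10d0p[0-4]; do echo "== $p"; zdb -l $p | grep -c "failed to unpack"; done
    (a count of 0 means all four labels on that node were readable)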
Yes, I can try to do that. I do not have any more of this brand of disk, but I
guess that does not matter. It will have to wait until tomorrow (I have an
appointment in a few minutes, and it is getting late here in Norway), but I
will try first thing tomorrow. I guess a pool on a single drive wi
I am making a second backup of my other pool - then I'll use those disks and
recreate the problem pool. The only difference will be the SSD - only have one
of those. I'll use a disk in the same slot, so it will be close.
Backup will be finished in 2 hours time
Ok - this is really strange. I did a test. Wiped my second pool (4 disks like
the other pool), and used them to create a pool similar to the one I have
problems with.
Then I powered off, moved the disks and powered on. Same error message as
before. Moved the disks back to the original controlle
cannot reproduce any issue with the given testcase on b137."
So you should test this with b137 or a newer build. There have
been some extensive changes going into the treeclimb_* functions,
so the bug is probably fixed, or will be in the near future.
Let us know if you can still reproduce the panic on
Hi! Sorry for the late reply - I have been busy at work and this had to wait.
The system has been powered off since my last post.
The computer is new - built it to use as file server at home. I have not seen
any strange behaviour (other than this). All parts are brand new (except for
the disks
Thanks for the reply. The thread on FreeBSD mentions creating symlinks for the
fdisk partitions. So did you earlier in this thread. I tried that but it did
not help - you can see the result in my earlier reply to your previous message
in this thread.
Is this the way to go? Should I try again wi
I found a thread that mentions an undocumented parameter -V
(http://opensolaris.org/jive/thread.jspa?messageID=444810) and that did the
trick!
The pool is now online and seems to be working well.
Thanks everyone who helped!
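For the record, the invocation that worked was along these lines (-V is
undocumented, so its behaviour may differ between builds):
# zpool import -V vault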
Well, for me it was a cure. Nothing else I tried got the pool back. As far as I
can tell, the way to get it back should be to use symlinks to the fdisk
partitions on my SSD, but that did not work for me. Using -V got the pool back.
What is wrong with that?
If you have a better suggestion as to
I've been referred to here from the zfs-fuse newsgroup. I have a
(non-redundant) pool which is reporting errors that I don't quite understand:
# zpool status -v
pool: green
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications ma
Hello,
has anybody tried Zetaback? It looks like a cool feature, but I don't know
anybody who uses it.
https://labs.omniti.com/trac/zetaback/wiki
I need some help with configuration.
Regards,
Jan Hlodan
Hello,
when I ran 'zfs send' into a file, the system (Ultra SPARC 45) was under this load:
# zfs send -R backup/zo...@moving_09112009 >
/tank/archive_snapshots/exa_all_zones_09112009.snap
Total: 107 processes, 951 lwps, load averages: 54.95, 59.46, 50.25
Is it normal?
Regards
mebody from the ZFS team to help install folks understand
what changed and how the installer has to be modified, so that it can
destroy a ZFS root pool containing dump on a ZVOL?
Thank you very much,
Jan
Hi Jeffrey,
Jeffrey Huang wrote:
Hi, Jan,
On 2009/12/9 20:41, Jan Damborsky wrote:
# dumpadm -d swap
dumpadm: no swap devices could be configured as the dump device
# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash
sed to receive incremental snapshot to sync ips repository, but now I
can't receive a new one.
(option -F doesn't help)
Thank you,
Regards,
Jan Hlodan
'backup'
pool.
admin@master:~# zpool status
pool: backup
state: ONLINE
scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:
NAME STATE READ WRITE CKSUM
backup ONLINE 0 0 0
mirror-0 ONLINE 0
Hi!
On Feb 1, 2012, at 7:43 PM, Bob Friesenhahn wrote:
> On Wed, 1 Feb 2012, Jan Hellevik wrote:
>> The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
>> than the other disks in the pool. The output is from a 'zfs receive' after
>> about
On Feb 1, 2012, at 8:07 PM, Bob Friesenhahn wrote:
> On Wed, 1 Feb 2012, Jan Hellevik wrote:
>>>
>>> Are all of the disks the same make and model?
>>
>> They are different makes - I try to make pairs of different brands to
>> minimise risk.
>
> D
I expected:
4. c6t68d0
/pci@0,0/pci1022,9603@2/pci1000,3140@0/sd@44,0
8. c6t72d0
/pci@0,0/pci1022,9603@2/pci1000,3140@0/sd@48,0
Thank you for the explanation!
On Feb 3, 2012, at 12:02 PM, Christian Meier wrote:
> Hello Jan,
>
> I'm not
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
scan: scrub repaired 0 in 19h9m with 0 errors on Mon Jan 30 05:57:51 2012
config:
NAME
.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jan Hellevik
> Sent: Friday, March 16, 2012 2:20 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Cannot remove slog device
>
> I have a problem with my box. The slog started showing errors, so I decided
>
0
mirror-3 ONLINE 0 0 0
c9t3d0 ONLINE 0 0 0
c9t4d0 ONLINE 0 0 0
errors: No known data errors
On Mar 16, 2012, at 9:21 PM, Jan Hellevik wrote:
> Hours... :-(
>
> Should have used both devices as
t encountering the upgrade notice ?
I'm using OpenIndiana 151a6 on x86.
Jan
o recover the data from parity information and ditto
blocks. Sometimes the error is only in the current version of a
file/directory, so you can recover the data from a snapshot.
> nas4free:/tankki/media# cd Dokumentit
> Dokumentit: Input/output error.
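When only the live copy of a file is damaged, the snapshot recovery path is
roughly the following (a sketch; snapshot and file names are placeholders):
# zpool status -v tankki                     (lists the files with errors)
# ls /tankki/media/.zfs/snapshot/            (available snapshots)
# cp /tankki/media/.zfs/snapshot/<snap>/Dokumentit/<file> /tankki/media/Dokumentit/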
red root fs.
If anyone has figured out how to mirror drives after getting the
message about sector alignment, please let the list know :-).
Jan
On Sat, Nov 10, 2012 at 8:48 AM, Jan Owoc wrote:
> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen wrote:
>> When I try to replace the old drive, I get this error:
>>
>> # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0
>> cannot replace
On Sat, Nov 10, 2012 at 9:04 AM, Tim Cook wrote:
> On Sat, Nov 10, 2012 at 9:59 AM, Jan Owoc wrote:
>> Sorry... my question was partly answered by Jim Klimov on this list:
>> http://openindiana.org/pipermail/openindiana-discuss/2012-June/008546.html
>>
>> Apparently
1a7 on an AMD E-350 system (installed as
151a1, I think). I think it's the ASUS E35M-I [1]. I use it as a NAS,
so I only know that the SATA ports, USB port and network ports work -
sound, video acceleration, etc., are untested.
[1] http://www.asus.com/Motherboards/AMD_CP
e
the drive.
2) if you have an additional hard drive bay/cable/controller, you can do a
"zpool replace" on the offending drive without doing a "detach" first -
this may save you from the other drive failing during resilvering.
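In command form, that replace-without-detach path is simply (device names are
hypothetical):
# zpool replace tank <old-disk> <new-disk>
# zpool status tank
    (the old disk keeps serving the pool until the resilver onto the new one
    completes)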
Jan
On Fri, Nov 30, 2012 at 9:05 AM, Tomas Forsman wrote:
>
> I don't have it readily at
> hand how to check the ashift value on a vdev, anyone
> else/archives/google?
>
This? ;-)
http://lmgtfy.com/?q=how+to+check+the+ashift+value+on+a+vdev&l=1
The first hit has:
# zdb m
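Spelled out, that is typically something along the lines of (pool name
hypothetical):
# zdb mypool | grep ashift
    (each top-level vdev reports its ashift; 9 means 512-byte sectors, 12 means 4K)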
ge? It would take a month to do
> that.
Those are the current limitations of zfs. Yes, with 12x2TB of data to
copy it could take about a month.
If you are feeling particularly risky and have backups elsewhere, you
could swap two drives at once, but then you lose all your data if one
of the r
y at the same
version, but you can't access it if you can't access the pool :-).
If you want to access the data now, your only option is to use Solaris
to read it, and copy it over (eg. with zfs send | recv) onto a pool
created with version 28.
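A sketch of that migration, assuming a spare set of disks for the new pool
(pool and device names are placeholders):
# zpool create -o version=28 newpool <disks...>
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -Fdu newpool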
Jan
On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton wrote:
> On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote:
>> Yes, that is correct. The last version of Solaris with source code
>> used zpool version 28. This is the last version that is readable by
>> non-Solaris operating syste
tside of their
"refreservation" and now crashed for lack of free space on their zfs.
Some of the other VMs aren't using their refreservation (yet), so they
could, between them, still write 360GB of stuff to the drive.
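Checking and adjusting those guarantees is a matter of (dataset names are
hypothetical):
# zfs get -r refreservation,usedbyrefreservation,available tank
# zfs set refreservation=20G tank/vm01
# zfs set refreservation=none tank/vm02     (releases the guarantee entirely)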
Jan
unts as a child filesystem, so you would have to do "zfs destroy -r
tank/filesystem" to recursively destroy all the children.
I would imagine you could write some sort of wrapper for the "zfs"
command that checks if the command includes "destroy" and then check
for
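A very rough sketch of such a wrapper (entirely hypothetical; it simply refuses
recursive destroys):
#!/bin/sh
# wrap /usr/sbin/zfs and refuse 'destroy -r/-R'
case " $* " in
  *" destroy "*-r*|*" destroy "*-R*)
    echo "refusing recursive destroy; run /usr/sbin/zfs directly if you are sure" >&2
    exit 1 ;;
esac
exec /usr/sbin/zfs "$@"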
> # zfs destroy -r a/1
> cannot destroy 'a/1/hold@hold': snapshot is busy
Does this do what you want? (zpool destroy is already undo-able)
Jan
t's been done with ZFS :-).
Jan
Ok, so I did it again... I moved my disks around without doing export first.
I promise - after this I will always export before messing with the disks. :-)
Anyway - the problem. I decided to rearrange the disks due to cable lengths and
case layout. I disconnected the disks and moved them around.
2.
Thanks,
Jan
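For next time, the safe shuffle is simply (pool name hypothetical):
# zpool export tank
    (then power down and re-cable the disks)
# zpool import         (shows what ZFS can find at the new locations)
# zpool import tank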
Thanks! I will try later today and report back the result.
Export did not go very well.
j...@opensolaris:~# zpool export master
internal error: Invalid argument
Abort (core dumped)
So I deleted (renamed) the zpool.cache and rebooted.
After reboot I imported the pool and it seems to have gone well. It is now
scrubbing.
Thanks a lot for the help!
j...@
Hello Richard,
I've downloaded a new iso and created the second copy on a different computer
at my workplace (with the "verify data" option enabled within NERO and slow 4x
writing speed) - I also used another blank disc brand.
Cheers
Jan
I could resolve this issue:
I was testing FreeNAS with a raidz1 setup before I decided to check out
Nexentastore and it seems Nexentastore has problems if the hard disk
array already contains some kind of raidz data. After wiping the discs
with a tool from the "Ultimate Boot CD" I co
they are already fixed), or if some workarounds might be used.
Also, please let us know if there is possibility that other approach
(like other/new API, command, subcommand) might be used in order to
solve the problem.
Any help/suggestions/comments are much appreciated.
Thank you very much,
Jan
tting
custom parameters neither in man pages nor in
"Solaris ZFS Administration Guide" available on opensolaris.org,
I have probably missed it.
Thank you,
Jan
John Langley wrote:
> What about setting a custom parameter on rpool when you create it and
> then changing the value after
Hi Darren,
thank you very much for your help.
Please see my comments below.
Jan
Darren J Moffat wrote:
> jan damborsky wrote:
>> Hi John,
>>
>> I like this idea - it would be clear solution for the problem.
>> Is it possible to manage custom parameters with standard
Hi Andrew,
this is what I am thinking about based on John's
and Darren's responses.
I will file RFE for having possibility to set user
properties for pools (if it doesn't already exist).
Thank you,
Jan
andrew wrote:
> Perhaps user properties on pools would be useful here? A
Darren J Moffat wrote:
> jan damborsky wrote:
>>> zfs set caiman:install=preparing rpool/ROOT
>> That sounds reasonable. It is not atomic operation from installer
>> point of view, but the time window is really short (installer can
>> set ZFS user property al
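For reference, setting, reading and clearing such a user property on a dataset
looks like this:
# zfs set caiman:install=preparing rpool/ROOT
# zfs get caiman:install rpool/ROOT
# zfs inherit caiman:install rpool/ROOT     (removes the property again)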
>> And log an RFE for having user defined properties at the pool (if one
>> doesn't already exist).
>>
6739057 was filed to track this.
Thank you,
Jan
".
Is there any way to release dump ZFS volume after it was
activated by dumpadm(1M) command ?
Thank you,
Jan
Hi Mark,
Mark J Musante wrote:
> On Mon, 8 Sep 2008, jan damborsky wrote:
>
>> Is there any way to release dump ZFS volume after it was activated by
>> dumpadm(1M) command ?
>
> Try 'dumpadm -d swap' to point the dump to the swap device.
That helped - since swa
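For completeness, the sequence that releases the dump zvol so it can be
destroyed is roughly:
# dumpadm -d swap           (dump now points at the swap zvol)
# zfs destroy rpool/dump    (the dedicated dump zvol is no longer held)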
rt' command - please see below for detailed procedure.
Based on this, could you please take a look at those observations
and if possible help me understand if there is anything obvious
what might be wrong and if you think this is somehow related to
ZFS technology ?
Thank you very much for your
d we might be missing
other issues which are not related to 6769487
(e.g. when /rpool/boot/grub/menu.lst file was not created).
Thank you,
Jan
How to triage:
--
* In all cases, ask reporter to attach /tmp/install_log file
With LiveCD, this can be obtained using following proced
I have filed following bug in 'solaris/kernel/zfs' category for tracking
this issue:
6769487 Ended up in 'grub>' prompt after installation of OpenSolaris
2008.11 (build 101a)
Thank you,
Jan
jan damborsky wrote:
> Hi ZFS team,
>
> when testing installation with
Hi Robert,
you are hitting following ZFS bug:
4858 OpenSolaris fails to boot if previous zfs turds are present on disk
now tracked in Bugster:
6770808 OpenSolaris fails to boot if previous zfs turds are present on disk
Thanks,
Jan
Robert Milkowski wrote:
> Hello indiana-disc
Hi Dick,
I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.
Best regards,
Jan
dick hoogendijk wrote:
> I have s10u6 installed on my server.
> zfs list (partly):
Hey Rafal,
this sounds like missing GANG block support in GRUB. Check out the putback
log for snv_106 (afaik); there's a bug where grub fails like this.
Cheers,
Spity
On 3.1.2009, at 21:11, Rafal Pratnicki wrote:
> I recovered the system and created the opensolaris-12 BE. The system
> was workin
Hi Jeffrey,
jeffrey huang wrote:
> Hi, Jan,
>
> After successfully install AI on SPARC(zpool/zfs created), without
> reboot, I want try a installation again, so I want to destroy the rpool.
>
> # dumpadm -d swap --> ok
> # zfs destroy rpool/dump --> ok
> # swap -l
SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
rpool  59.5G  3.82G  55.7G   6%   ONLINE  -
sh-3.2# zpool import
sh-3.2#
How can I find and import the leftover partition?
Thanks for help.
Regards,
Jan Hlodan
t I still don't know how to import this partition (num. 3)
If I run:
zpool create c9d0
I'll lose all my data, right?
Regards,
Jan Hlodan
Will Murnane wrote:
On Thu, Feb 12, 2009 at 21:59, Jan Hlodan wrote:
I would like to import the 3rd partition as another pool but I can't see
re is this partition, then I can run: zpool create trunk
c9d0XYZ
right?
Thanks for the answer.
Regards,
Jan Hlodan
Jan Hlodan wrote:
Hello,
thanks for the answer.
The partition table shows that Windows and OS run on:
1. c9d0
/p...@0,0/pci-...@1f,2/i...@0/c...@0,0
Partition Stat
l status
> 2.- Would you please recommend a good introduction to Solaris/OpenSolaris?
> I'm used to Linux and I'd like to get up to speed with OpenSolaris.
>
sure, OpenSolaris Bible :)
http://blogs.sun.com/observatory/entry/two_more_chapters_from_the
Hope this helps,
Regar
Hi Antonio,
did you try to recreate this partition e.g. with Gparted?
Maybe something is wrong with this partition.
Can you also post what 'prtpart <disk ID> -ldevs' says?
Regards,
Jan Hlodan
Antonio wrote:
Hi Jan,
I tried out what you say long ago, but zfs fails on poo
d0p10   Solaris x86
Hi Antonio,
and what does the 'zpool create' command say?
$ pfexec zpool create test /dev/dsk/c3d0p5
or
$ pfexec zpool create -f test /dev/dsk/c3d0p5
Regards,
jh
Jan Hlodan wrote:
Hi Antonio,
did you try to recreate this partition e.g. with Gparted?
Maybe is
Can you help me please? I don't want to lose all my configurations.
Thank you!
Regards,
Jan Hlodan
ith status 256)"
Then I can see wallpaper and cursor. That's it, nothing more.
Regards,
Jan Hlodan
Tomas Ögren wrote:
> On 09 March, 2009 - Jan Hlodan sent me these 1,7K bytes:
>
>
>> Hello,
>>
>> I am desperate. Today I realized that my OS 108 doesn'
Thank you,
Jan
casper@sun.com wrote:
hi Jan (and all)
My failure was when running
# swap -d /dev/zvol/dsk/rpool/swap
I saw this in my truss output.
uadmin(16, 3, -2748781172232)Err#12 ENOMEM
That sounds like "too much memory in use: can't remove swap".
It seems it
Hi,
On 14.6.2007, at 9:15, G.W. wrote:
If someone knows how to modify Extensions.kextcache and
Extensions.mkext, please let me know. After the bugs are worked
out, Leopard should be a pretty good platform.
You can recreate the kext cache like this:
kextcache -k /System/Library/Extensions
s are handled by the process of updating)?
Thanks
Jan Dreyer
the iscsi-vol (or import Pool-2) on HostA?
I know, this is (also) iSCSI-related, but mostly a ZFS-question.
Thanks for your answers,
Jan Dreyer
would
like to avoid "system"-commands in my scripts ...
Thanks for your answers,
Jan Dreyer
r as implementation of those features
is concerned?
Thank you very much,
Jan
[i] Formula for calculating dump & swap size
I have gone through the specification and found that
following formula should be used for calculating default
size of swap &
Hi Lori,
Lori Alt wrote:
> Richard Elling wrote:
>> Hi Jan, comments below...
>>
>> jan damborsky wrote:
>>
>>> Hi folks,
>>>
>>> I am member of Solaris Install team and I am currently working
>>> on making Slim insta
Hi Richard,
thank you very much for your comments.
Please see my response in line.
Jan
Richard Elling wrote:
> Hi Jan, comments below...
>
> jan damborsky wrote:
>> Hi folks,
>>
>> I am member of Solaris Install team and I am currently working
>> on making
created if the user dedicates
at least the recommended disk space for installation.
Please feel free to correct me, if I misunderstood some point.
Thank you very much again,
Jan
Dave Miner wrote:
> Peter Tribble wrote:
>> On Tue, Jun 24, 2008 at 8:27 PM, Dave Miner <[EMAIL PROTECTED]> wrote
Hi Darren,
Darren J Moffat wrote:
> Jan Damborsky wrote:
>> Thank you very much all for this valuable input.
>>
>> Based on the collected information, I would take
>> following approach as far as calculating size of
>> swap and dump devices on ZFS volumes in
Darren J Moffat wrote:
> jan damborsky wrote:
>> I think it is necessary to have some absolute minimum
>> and not allow installer to proceed if user doesn't
>> provide at least minimum required, as we have to make
>> sure that installation doesn't fail b
Hi Mike,
Mike Gerdts wrote:
> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]> wrote:
>> Thank you very much all for this valuable input.
>>
>> Based on the collected information, I would take
>> following approach as far as calculating size o
provided by virtual tools and/or implemented in kernel, I think (I might
be wrong) that in the installer we will still need to use standard
mechanisms for now.
Thank you,
Jan
Mike Gerdts wrote:
> On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote:
>> Hi Mike,
>>
>>
>> Mike Gerdts wrote:
>>> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]>
>>> wrote:
>>>> Th
limits on
> memory, and it's just virtual memory, after all. Besides which, we can
> infer that the system works well enough for the user's purposes without
> swap since the boot from the CD won't have used any swap.
That is a good poi
ch all for this valuable input.
Jan
Dave Miner wrote:
> jan damborsky wrote:
> ...
>> [2] dump and swap devices will be considered optional
>>
>> dump and swap devices will be considered optional during
>> fresh installation and will be created only if there is
>> appropriate space available
l, kernel plus
> current process, or all memory. If the dump content is 'all', the dump space
> needs to be as large as physical memory. If it's just 'kernel', it can be
> some fraction of that.
I see - thanks a lot for clarification.
Jan