[zfs-discuss] resizing zpools by growing LUN

2009-07-29 Thread Jan
with Solaris 10 U7. Besides, when will this feature be integrated in Solaris 10? Is there a workaround? I have checked it out with format tool - without effects. Thanks for any info. Jan
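For reference, on later releases that support the autoexpand pool property (it postdates the Solaris 10 U7 mentioned above), picking up a grown LUN looks roughly like this; the pool and device names are hypothetical:
# zpool set autoexpand=on tank    # let the pool use capacity added to its LUNs
# zpool online -e tank c2t1d0     # expand this vdev onto the enlarged LUN
# zpool list tank                 # SIZE should now reflect the larger LUN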

Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-03 Thread Jan
ormat create a new one for the larger LUN. Finally, create slice 0 as the size of the entire (now larger) disk."? Could you please give me some more detailed information on your description? Many thanks, jan

[zfs-discuss] rpool gone after zpool detach

2010-05-02 Thread Jan Riechers
I am using a mirrored system pool on 2 80G drives - however I was only using 40G since I thought I might use the rest for something else. ZFS Time Slider was complaining the pool was filled for 90% and I decided to increase pool size. What I did was a zpool detach of one of the mirrored hdds and in
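For context, the usual way to grow a mirrored pool without giving up redundancy is to attach the larger disk first and detach the smaller one only after resilvering has completed; a sketch with hypothetical device names (a root pool would also need boot blocks installed on the new disk, e.g. with installgrub on x86):
# zpool attach rpool c0t0d0s0 c0t2d0s0   # add the larger disk as a new mirror half
# zpool status rpool                     # wait for the resilver to finish
# zpool detach rpool c0t0d0s0            # only then remove the old, smaller disk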

Re: [zfs-discuss] rpool gone after zpool detach

2010-05-02 Thread Jan Riechers
On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk wrote: > - "Jan Riechers" skrev: > > I am using a mirrored system pool on 2 80G drives - however I was only > using 40G since I thought I might use the rest for something else. ZFS Time > Slider was complaining t

Re: [zfs-discuss] rpool gone after zpool detach

2010-05-02 Thread Jan Riechers
On Sun, May 2, 2010 at 3:51 PM, Jan Riechers wrote: > > > On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk > wrote: > >> - "Jan Riechers" skrev: >> >> I am using a mirrored system pool on 2 80G drives - however I was only >> using 40G s

[zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-13 Thread Jan Hellevik
n rpool NAME PROPERTY VALUESOURCE rpool version 22 default ... and this is where I am now. The zpool contains my digital images and videos and I would be really unhappy to lose them. What can I do to get back the pool? Is there hope? Sorry for the long post - tried to assemble

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-05-14 Thread Jan Hellevik
j...@opensolaris:~$ pfexec zpool import -D no pools available to import Any other ideas?

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
j...@opensolaris:~$ zpool clear vault cannot open 'vault': no such pool

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
Yes, I turned the system off before I connected the disks to the other controller. And I turned the system off before moving them back to the original controller. Now it seems like the system does not see the pool at all. The disks are there, and they have not been used so I do not understand w

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jan Hellevik
...@3,0 Specify disk (enter its number): ^C j...@opensolaris:~$ On Thu, May 13, 2010 at 7:15 PM, Richard Elling wrote: > now try "zpool import" to see what it thinks the drives are > -- richard > > On May 13, 2010, at 2:46 AM, Jan Hellevik wrote: > > > Short versi

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Thanks for the help, but I cannot get it to work. j...@opensolaris:~# zpool import pool: vault id: 8738898173956136656 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http:

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
I cannot import - that is the problem. :-( I have read the discussions you referred to (and quite a few more), and also about the logfix program. I also found a discussion where 'zpool import -FX' solved a similar problem so I tried that but no luck. Now I have read so many discussions and blog

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
I don't think that is the problem (but I am not sure). It seems like the problem is that the ZIL is missing. It is there, but not recognized. I used fdisk to create a 4GB partition of an SSD, and then added it to the pool with the command 'zpool add vault log /dev/dsk/c10d0p1'. When I try to impo

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
svn_133 and zfs 22. At least my rpool is 22.

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Thanks! Not home right now, but I will try that as soon as I get home.

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
It did not work. I did not find labels on p1, but on p0. j...@opensolaris:~# zdb -l /dev/dsk/c10d0p1 LABEL 0 failed to unpack label 0 LABEL 1 -

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Yes, I can try to do that. I do not have any more of this brand of disk, but I guess that does not matter. It will have to wait until tomorrow (I have an appointment in a few minutes, and it is getting late here in Norway), but I will try first thing tomorrow. I guess a pool on a single drive wi

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Jan Hellevik
I am making a second backup of my other pool - then I'll use those disks and recreate the problem pool. The only difference will be the SSD - only have one of those. I'll use a disk in the same slot, so it will be close. Backup will be finished in 2 hours time

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Jan Hellevik
Ok - this is really strange. I did a test. Wiped my second pool (4 disks like the other pool), and used them to create a pool similar to the one I have problems with. Then i powered off, moved the disks and powered on. Same error message as before. Moved the disks back to the original controlle

Re: [zfs-discuss] zfs/lofi/share panic

2010-05-27 Thread Jan Kryl
cannot reproduce any issue with the given testcase on b137." So you should test this with b137 or newer build. There have been some extensive changes going to treeclimb_* functions, so the bug is probably fixed or will be in near future. Let us know if you can still reproduce the panic on

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-06-12 Thread Jan Hellevik
Hi! Sorry for the late reply - I have been busy at work and this had to wait. The system has been powered off since my last post. The computer is new - built it to use as file server at home. I have not seen any strange behaviour (other than this). All parts are brand new (except for the disks

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-06-12 Thread Jan Hellevik
Thanks for the reply. The thread on FreeBSD mentions creating symlinks for the fdisk partitions. So did you earlier in this thread. I tried that but it did not help - you can see the result in my earlier reply to your previous message in this thread. Is this the way to go? Should I try again wi

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-13 Thread Jan Hellevik
I found a thread that mentions an undocumented parameter -V (http://opensolaris.org/jive/thread.jspa?messageID=444810) and that did the trick! The pool is now online and seems to be working well. Thanks everyone who helped!
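For anyone hitting the same situation, the command described here was roughly the following; -V is undocumented, so its exact behaviour may vary between builds:
# zpool import -V vault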

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-13 Thread Jan Hellevik
Well, for me it was a cure. Nothing else I tried got the pool back. As far as I can tell, the way to get it back should be to use symlinks to the fdisk partitions on my SSD, but that did not work for me. Using -V got the pool back. What is wrong with that? If you have a better suggestion as to

[zfs-discuss] Permament errors in "files" <0x0>

2010-06-14 Thread Jan Ploski
I've been referred to here from the zfs-fuse newsgroup. I have a (non-redundant) pool which is reporting errors that I don't quite understand: # zpool status -v pool: green state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications ma

[zfs-discuss] Zetaback ZFS backup and recovery management system

2009-10-16 Thread Jan Hlodan
Hello, has anybody tried Zetaback? It looks like cool feature but I don't know anybody who uses it. https://labs.omniti.com/trac/zetaback/wiki I need some help with configuration. Regards, Jan Hlodan

[zfs-discuss] High load when 'zfs send' to the file

2009-11-11 Thread Jan Hlodan
Hello, when I run 'zfs send' into the file, system (Ultra Sparc 45) had this load: # zfs send -R backup/zo...@moving_09112009 > /tank/archive_snapshots/exa_all_zones_09112009.snap Total: 107 processes, 951 lwps, load averages: 54.95, 59.46, 50.25 Is it normal? Regards

[zfs-discuss] How to destroy ZFS pool with dump on ZVOL

2009-12-09 Thread Jan Damborsky
mebody from ZFS team to help install folks understand what changed and how the installer has to be modified, so that it can destroy ZFS root pool containing dump on ZVOL ? Thank you very much, Jan

Re: [zfs-discuss] [caiman-discuss] How to destroy ZFS pool with dump on ZVOL

2009-12-13 Thread Jan Damborsky
Hi Jeffrey, Jeffrey Huang wrote: Hi, Jan, On 2009/12/9 20:41, Jan Damborsky wrote: # dumpadm -d swap dumpadm: no swap devices could be configured as the dump device # dumpadm Dump content: kernel pages Dump device: /dev/zvol/dsk/rpool/dump (dedicated) Savecore directory: /var/crash

[zfs-discuss] cannot receive incremental stream

2010-02-03 Thread Jan Hlodan
sed to receive incremental snapshot to sync ips repository, but now I can't receive a new one. (option -F doesn't help) Thank you, Regards, Jan Hlodan

[zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik
'backup' pool. admin@master:~# zpool status pool: backup state: ONLINE scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012 config: NAME STATE READ WRITE CKSUM backup ONLINE 0 0 0 mirror-0 ONLINE 0

Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik
Hi! On Feb 1, 2012, at 7:43 PM, Bob Friesenhahn wrote: > On Wed, 1 Feb 2012, Jan Hellevik wrote: >> The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t >> than the other disks in the pool. The output is from a 'zfs receive' after >> about

Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik
On Feb 1, 2012, at 8:07 PM, Bob Friesenhahn wrote: > On Wed, 1 Feb 2012, Jan Hellevik wrote: >>> >>> Are all of the disks the same make and model? >> >> They are different makes - I try to make pairs of different brands to >> minimise risk. > > D

Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-05 Thread Jan Hellevik
I expected: 4. c6t68d0 /pci@0,0/pci1022,9603@2/pci1000,3140@0/sd@44,0 8. c6t72d0 /pci@0,0/pci1022,9603@2/pci1000,3140@0/sd@48,0 Thank you for the explanation! On Feb 3, 2012, at 12:02 PM, Christian Meier wrote: > Hello Jan, > > I'm not

[zfs-discuss] Cannot remove slog device

2012-03-16 Thread Jan Hellevik
Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Replace the faulted device, or use 'zpool clear' to mark the device repaired. scan: scrub repaired 0 in 19h9m with 0 errors on Mon Jan 30 05:57:51 2012 config: NAME

Re: [zfs-discuss] Cannot remove slog device

2012-03-16 Thread Jan Hellevik
.org > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jan Hellevik > Sent: Friday, March 16, 2012 2:20 PM > To: zfs-discuss@opensolaris.org > Subject: [zfs-discuss] Cannot remove slog device > > I have a problem with my box. The slog started showing errors, so I decided &g

Re: [zfs-discuss] Cannot remove slog device

2012-03-16 Thread Jan Hellevik
0 mirror-3 ONLINE 0 0 0 c9t3d0 ONLINE 0 0 0 c9t4d0 ONLINE 0 0 0 errors: No known data errors On Mar 16, 2012, at 9:21 PM, Jan Hellevik wrote: > Hours... :-( > > Should have used both devices as

[zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Jan Owoc
t encountering the upgrade notice ? I'm using OpenIndiana 151a6 on x86. Jan

Re: [zfs-discuss] Directory is not accessible

2012-10-08 Thread Jan Owoc
o recover the data from parity information and ditto blocks. Sometimes the error is only in the current version of a file/directory, so you can recover the data from a snapshot. > nas4free:/tankki/media# cd Dokumentit > Dokumentit: Input/output error.

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-11-10 Thread Jan Owoc
red root fs. If anyone has figured out how to mirror drives after getting the message about sector alignment, please let the list know :-). Jan

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-11-10 Thread Jan Owoc
On Sat, Nov 10, 2012 at 8:48 AM, Jan Owoc wrote: > On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen wrote: >> When I try to replace the old drive, I get this error: >> >> # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0 >> cannot replace

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-11-10 Thread Jan Owoc
On Sat, Nov 10, 2012 at 9:04 AM, Tim Cook wrote: > On Sat, Nov 10, 2012 at 9:59 AM, Jan Owoc wrote: >> Sorry... my question was partly answered by Jim Klimov on this list: >> http://openindiana.org/pipermail/openindiana-discuss/2012-June/008546.html >> >> Apparently

Re: [zfs-discuss] mixing WD20EFRX and WD2002FYPS in one pool

2012-11-21 Thread Jan Owoc
1a7 on an AMD E-350 system (installed as 151a1, I think). I think it's the ASUS E35M-I [1]. I use it as a NAS, so I only know that the SATA ports, USB port and network ports work - sound, video acceleration, etc., are untested. [1] http://www.asus.com/Motherboards/AMD_CP

Re: [zfs-discuss] Question about degraded drive

2012-11-27 Thread Jan Owoc
e the drive. 2) if you have an additional hard drive bay/cable/controller, you can do a "zpool replace" on the offending drive without doing a "detach" first - this may save you from the other drive failing during resilvering. Jan
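A sketch of that second option with hypothetical pool and device names; the suspect disk stays attached and is dropped automatically once the resilver onto the new disk completes:
# zpool replace tank c1t3d0 c1t6d0   # resilver onto c1t6d0 while c1t3d0 is still present
# zpool status tank                  # the old disk is removed when the resilver finishes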

Re: [zfs-discuss] Remove disk

2012-11-30 Thread Jan Owoc
On Fri, Nov 30, 2012 at 9:05 AM, Tomas Forsman wrote: > > I don't have it readily at > hand how to check the ashift value on a vdev, anyone > else/archives/google? > This? ;-) http://lmgtfy.com/?q=how+to+check+the+ashift+value+on+a+vdev&l=1 The first hit has: # zdb m
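The truncated command above is presumably along these lines (pool name hypothetical); ashift: 9 corresponds to 512-byte sectors, ashift: 12 to 4 KiB sectors:
# zdb mypool | grep ashift
            ashift: 12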

Re: [zfs-discuss] Remove disk

2012-12-01 Thread Jan Owoc
ge ? It's take month to do > that. Those are the current limitations of zfs. Yes, with 12x2TB of data to copy it could take about a month. If you are feeling particularly risky and have backups elsewhere, you could swap two drives at once, but then you lose all your data if one of the r

Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-12-13 Thread Jan Owoc
y at the same version, but you can't access it if you can't access the pool :-). If you want to access the data now, your only option is to use Solaris to read it, and copy it over (eg. with zfs send | recv) onto a pool created with version 28. Jan

Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-12-13 Thread Jan Owoc
On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton wrote: > On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote: >> Yes, that is correct. The last version of Solaris with source code >> used zpool version 28. This is the last version that is readable by >> non-Solaris operating syste

Re: [zfs-discuss] how to know available disk space

2013-02-06 Thread Jan Owoc
tside of their "refreservation" and now crashed for lack of free space on their zfs. Some of the other VMs aren't using their refreservation (yet), so they could, between them, still write 360GB of stuff to the drive. Jan
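To see how each dataset's reservation relates to the space it can still consume, something like the following works (pool name hypothetical):
# zfs get -r used,available,refreservation tank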

Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Jan Owoc
unts as a child filesystem, so you would have to do "zfs destroy -r tank/filesystem" to recursively destroy all the children. I would imagine you could write some sort of wrapper for the "zfs" command that checks if the command includes "destroy" and then check for
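A minimal sketch of such a wrapper, assuming a hypothetical protected dataset; it refuses destroys that touch it and passes everything else through to the real zfs binary:
#!/bin/sh
# hypothetical guard around /usr/sbin/zfs
case " $* " in
  *" destroy "*tank/protected*)
    echo "refusing to destroy tank/protected" >&2
    exit 1 ;;
esac
exec /usr/sbin/zfs "$@"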

Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-21 Thread Jan Owoc
> # zfs destroy -r a/1 > cannot destroy 'a/1/hold@hold': snapshot is busy Does this do what you want? (zpool destroy is already undo-able) Jan
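That behaviour comes from user holds on snapshots; a sketch of protecting a filesystem this way (names hypothetical):
# zfs snapshot tank/important@guard
# zfs hold keep tank/important@guard     # place a user hold tagged 'keep'
# zfs destroy -r tank/important          # now fails: snapshot is busy
# zfs release keep tank/important@guard  # drop the hold when protection is no longer wanted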

Re: [zfs-discuss] Petabyte pool?

2013-03-15 Thread Jan Owoc
t's been done with ZFS :-). Jan

[zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-27 Thread Jan Hellevik
Ok, so I did it again... I moved my disks around without doing export first. I promise - after this I will always export before messing with the disks. :-) Anyway - the problem. I decided to rearrange the disks due to cable lengths and case layout. I disconnected the disks and moved them around.

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
2. Thanks, Jan

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
Thanks! I will try later today and report back the result.

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-29 Thread Jan Hellevik
Export did not go very well. j...@opensolaris:~# zpool export master internal error: Invalid argument Abort (core dumped) So I deleted (renamed) the zpool.cache and rebooted. After reboot I imported the pool and it seems to have gone well. It is now scrubbing. Thanks a lot for the help! j...@
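For reference, the recovery sequence described here amounts to roughly the following (the cache file path is the standard one; the pool name is from the thread):
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # forget the stale device paths
# reboot
# zpool import master                                # re-import, then scrub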

Re: [zfs-discuss] HP ProLiant N36L

2011-01-07 Thread Jan Sommer
Hello Richard, I've downloaded a new iso and created the second copy on a different computer at my workplace (with the "verify data" option enabled within NERO and slow 4x writing speed) - I also used another blank disc brand. Cheers Jan

Re: [zfs-discuss] HP ProLiant N36L

2011-01-15 Thread Jan Sommer
I could resolve this issue: I was testing FreeNAS with a raidz1 setup before I decided to check out Nexentastore and it seems Nexentastore has some kind of problems if the harddisk array already contain some kind of raidz data. After wiping the discs with a tool from the "Ultimate Boot CD" I co

[zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-18 Thread Jan Damborsky
they are already fixed), or if some workarounds might be used. Also, please let us know if there is possibility that other approach (like other/new API, command, subcommand) might be used in order to solve the problem. Any help/suggestions/comments are much appreciated. Thank you very much, Jan

Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-19 Thread jan damborsky
tting custom parameters neither in man pages nor in "Solaris ZFS Administration Guide" available on opensolaris.org, I have probably missed it. Thank you, Jan John Langley wrote: > What about setting a custom parameter on rpool when you create it and > then changing the value after

Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-19 Thread jan damborsky
Hi Darren, thank you very much for your help. Please see my comments below. Jan Darren J Moffat wrote: > jan damborsky wrote: >> Hi John, >> >> I like this idea - it would be clear solution for the problem. >> Is it possible to manage custom parameters with standard

Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-19 Thread jan damborsky
Hi Andrew, this is what I am thinking about based on John's and Darren's responses. I will file RFE for having possibility to set user properties for pools (if it doesn't already exist). Thank you, Jan andrew wrote: > Perhaps user properties on pools would be useful here? A

Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-19 Thread jan damborsky
Darren J Moffat wrote: > jan damborsky wrote: >>> zfs set caiman:install=preparing rpool/ROOT >> That sounds reasonable. It is not atomic operation from installer >> point of view, but the time window is really short (installer can >> set ZFS user property al

Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-20 Thread jan damborsky
>> And log an RFE for having user defined properties at the pool (if one >> doesn't already exist). >> 6739057 was filed to track this. Thank you, Jan

[zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread jan damborsky
". Is there any way to release dump ZFS volume after it was activated by dumpadm(1M) command ? Thank you, Jan ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread jan damborsky
Hi Mark, Mark J Musante wrote: > On Mon, 8 Sep 2008, jan damborsky wrote: > >> Is there any way to release dump ZFS volume after it was activated by >> dumpadm(1M) command ? > > Try 'dumpadm -d swap' to point the dump to the swap device. That helped - since swa
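The resulting sequence for freeing a dedicated dump zvol, as quoted elsewhere in this archive, is roughly:
# dumpadm -d swap         # point the dump device at swap
# zfs destroy rpool/dump  # the dedicated dump volume can now be destroyed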

[zfs-discuss] Ended up in GRUB prompt after the installation on ZFS

2008-11-07 Thread jan damborsky
rt' command - please see below for detailed procedure. Based on this, could you please take a look at those observations and if possible help me understand if there is anything obvious what might be wrong and if you think this is somehow related to ZFS technology ? Thank you very much for your

[zfs-discuss] How to deal with "ended in grub> prompt" issue ?

2008-11-10 Thread jan damborsky
d we might be missing other issues which are not related to 6769487 (e.g. when /rpool/boot/grub/menu.lst file was not created). Thank you, Jan How to triage: -- * In all cases, ask reporter to attach /tmp/install_log file With LiveCD, this can be obtained using following proced

Re: [zfs-discuss] [caiman-discuss] Ended up in GRUB prompt after the installation on ZFS

2008-11-10 Thread jan damborsky
I have filed following bug in 'solaris/kernel/zfs' category for tracking this issue: 6769487 Ended up in 'grub>' prompt after installation of OpenSolaris 2008.11 (build 101a) Thank you, Jan jan damborsky wrote: > Hi ZFS team, > > when testing installation with

Re: [zfs-discuss] [indiana-discuss] rpools mismatch

2008-11-13 Thread jan damborsky
Hi Robert, you are hitting following ZFS bug: 4858 OpenSolaris fails to boot if previous zfs turds are present on disk now tracked in Bugster: 6770808 OpenSolaris fails to boot if previous zfs turds are present on disk Thanks, Jan Robert Milkowski wrote: > Hello indiana-disc

Re: [zfs-discuss] [install-discuss] differences.. why?

2008-12-02 Thread jan damborsky
Hi Dick, I am redirecting your question to zfs-discuss mailing list, where people are more knowledgeable about this problem and your question could be better answered. Best regards, Jan dick hoogendijk wrote: > I have s10u6 installed on my server. > zfs list (partly):

Re: [zfs-discuss] Error 16: Inconsistent filesystem structure after a change in the system

2009-01-03 Thread Jan Spitalnik
Hey Rafal, this sounds like missing GANG block support in GRUB. Checkout putback log for snv_106 (afaik), there's a bug where grub fails like this. Cheers, Spity On 3.1.2009, at 21:11, Rafal Pratnicki wrote: > I recovered the system and created the opensolaris-12 BE. The system > was workin

Re: [zfs-discuss] [caiman-discuss] Can not delete swap on AI sparc

2009-01-20 Thread jan damborsky
Hi Jeffrey, jeffrey huang wrote: > Hi, Jan, > > After successfully install AI on SPARC(zpool/zfs created), without > reboot, I want try a installation again, so I want to destroy the rpool. > > # dumpadm -d swap --> ok > # zfs destroy rpool/dump --> ok > # swap -l &

[zfs-discuss] SPAM *** importing unformatted partition

2009-02-12 Thread Jan Hlodan
SIZE USED AVAIL CAP HEALTH ALTROOT rpool 59.5G 3.82G 55.7G 6% ONLINE - sh-3.2# zpool import sh-3.2# How can I find and import left partition? Thanks for help. Regards, Jan Hlodan

[zfs-discuss] SPAM *** Re: unformatted partition

2009-02-13 Thread Jan Hlodan
t I still don't know how to import this partition (num. 3) If I run: zpool create c9d0 I'll lose all my data, right? Regards, Jan Hlodan Will Murnane wrote: On Thu, Feb 12, 2009 at 21:59, Jan Hlodan wrote: I would like to import 3. partition as another pool but I can't see

[zfs-discuss] SPAM *** zpool create from spare partition

2009-02-13 Thread Jan Hlodan
re is this partition, then I can run: zpool create trunk c9d0XYZ right? Thanks for the answer. Regards, Jan Hlodan Jan Hlodan wrote: Hello, thanks for the answer. The partition table shows that Wind and OS run on: 1. c9d0 /p...@0,0/pci-...@1f,2/i...@0/c...@0,0 Partition Stat

[zfs-discuss] SPAM *** Re: [osol-help] Adding a new partition to the system

2009-02-14 Thread Jan Hlodan
l status > 2.- Would you please recommend a good introduction to Solaris/OpenSolaris? > I'm used to Linux and I'd like to get up to speed with OpenSolaris. > sure, OpenSolaris Bible :) http://blogs.sun.com/observatory/entry/two_more_chapters_from_the Hope this helps, Regar

[zfs-discuss] SPAM *** Re: [osol-help] Adding a new partition to the system

2009-02-14 Thread Jan Hlodan
Hi Antonio, did you try to recreate this partition e.g. with Gparted? Maybe is something wrong with this partition. Can you also post what "prtpart "disk ID" -ldevs" says? Regards, Jan Hlodan Antonio wrote: Hi Jan, I tried out what you say long ago, but zfs fails on poo

[zfs-discuss] SPAM *** Re: [osol-help] Adding a new partition to the system

2009-02-14 Thread Jan Hlodan
d0p10 Solaris x86 Hi Antonio, and what does 'zpool create' command say? $ pfexec zpool create test /dev/dsk/c3d0p5 or $ pfexec zpool create -f test /dev/dsk/c3d0p5 Regards, jh Jan Hlodan wrote: Hi Antonio, did you try to recreate this partition e.g. with Gparted? Maybe is

[zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jan Hlodan
; Can you help me please? I don't want to lose all my configurations. Thank you! Regards, Jan Hlodan

Re: [zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jan Hlodan
ith status 256)" Then I can see wallpaper and cursor. That's it, nothing more. Regards, Jan Hlodan Tomas Ögren wrote: > On 09 March, 2009 - Jan Hlodan sent me these 1,7K bytes: > > >> Hello, >> >> I am desperate. Today I realized that my OS 108 doesn'

Re: [zfs-discuss] [caiman-discuss] Can not delete swap on AI sparc

2009-06-08 Thread Jan Damborsky
Thank you, Jan

Re: [zfs-discuss] [caiman-discuss] Can not delete swap on AI sparc

2009-06-09 Thread Jan Damborsky
casper@sun.com wrote: hi Jan (and all) My failure was when running # swap -d /dev/zvol/dsk/rpool/swap I saw this in my truss output. uadmin(16, 3, -2748781172232)Err#12 ENOMEM That sounds like "too much memory in use: can't remove swap". It seems it

Re: [zfs-discuss] Mac OS X.5 "Leopard": zfs works; sudo kextload /System/Library/zfs.kext

2007-06-14 Thread Jan Spitalnik
Hi, On 14.6.2007, at 9:15, G.W. wrote: If someone knows how to modify Extensions.kextcache and Extensions.mkext, please let me know. After the bugs are worked out, Leopard should be a pretty good platform. You can recreate the kext cache like this: kextcache -k /System/Library/Extensions

[zfs-discuss] zfs as zone root

2007-10-11 Thread Jan Dreyer
s are handled by the process of updating)? Thanks Jan Dreyer

[zfs-discuss] iscsi on zvol

2008-01-24 Thread Jan Dreyer
the iscsi-vol (or import Pool-2) on HostA? I know, this is (also) iSCSI-related, but mostly a ZFS-question. Thanks for your answers, Jan Dreyer

[zfs-discuss] perl modules to access zfs commands?

2008-01-31 Thread Jan Dreyer
would like to avoid "system"-commands in my scripts ... Thanks for your answers, Jan Dreyer

[zfs-discuss] swap & dump on ZFS volume

2008-06-23 Thread jan damborsky
r as implementation of that features is concerned ? Thank you very much, Jan [i] Formula for calculating dump & swap size I have gone through the specification and found that following formula should be used for calculating default size of swap &

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-24 Thread jan damborsky
Hi Lori, Lori Alt wrote: > Richard Elling wrote: >> Hi Jan, comments below... >> >> jan damborsky wrote: >> >>> Hi folks, >>> >>> I am member of Solaris Install team and I am currently working >>> on making Slim insta

Re: [zfs-discuss] swap & dump on ZFS volume

2008-06-24 Thread jan damborsky
Hi Richard, thank you very much for your comments. Please see my response in line. Jan Richard Elling wrote: > Hi Jan, comments below... > > jan damborsky wrote: >> Hi folks, >> >> I am member of Solaris Install team and I am currently working >> on making

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-25 Thread Jan Damborsky
created if user dedicates at least recommended disk space for installation. Please feel free to correct me, if I misunderstood some point. Thank you very much again, Jan Dave Miner wrote: > Peter Tribble wrote: >> On Tue, Jun 24, 2008 at 8:27 PM, Dave Miner <[EMAIL PROTECTED]> wrote

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-30 Thread jan damborsky
Hi Darren, Darren J Moffat wrote: > Jan Damborsky wrote: >> Thank you very much all for this valuable input. >> >> Based on the collected information, I would take >> following approach as far as calculating size of >> swap and dump devices on ZFS volumes in

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-30 Thread jan damborsky
Darren J Moffat wrote: > jan damborsky wrote: >> I think it is necessary to have some absolute minimum >> and not allow installer to proceed if user doesn't >> provide at least minimum required, as we have to make >> sure that installation doesn't fail b

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-30 Thread jan damborsky
Hi Mike, Mike Gerdts wrote: > On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]> wrote: >> Thank you very much all for this valuable input. >> >> Based on the collected information, I would take >> following approach as far as calculating size o

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-01 Thread jan damborsky
provided by virtual tools and/or implemented in kernel, I think (I might be wrong) that in the installer we will still need to use standard mechanisms for now. Thank you, Jan

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-01 Thread jan damborsky
Mike Gerdts wrote: > On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote: >> Hi Mike, >> >> >> Mike Gerdts wrote: >>> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]> >>> wrote: >>>> Th

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-01 Thread jan damborsky
limits on > memory, and it's just virtual memory, after all. Besides which, we can > infer that the system works well enough for the user's purposes without > swap since the boot from the CD won't have used any swap. That is a good poi

[zfs-discuss] swap & dump on ZFS volume - updated proposal

2008-07-01 Thread jan damborsky
ch all for this valuable input. Jan

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume - updated proposal

2008-07-02 Thread jan damborsky
Dave Miner wrote: > jan damborsky wrote: > ... >> [2] dump and swap devices will be considered optional >> >> dump and swap devices will be considered optional during >> fresh installation and will be created only if there is >> appropriate space available

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-02 Thread jan damborsky
l, kernel plus > current process, or all memory. If the dump content is 'all', the dump space > needs to be as large as physical memory. If it's just 'kernel', it can be > some fraction of that. I see - thanks a lot for clarification. Jan
