Solaris FDISK
partition (although this is not enforced).
With EFI labeling, s7 is enforced as the whole EFI FDISK partition,
and so the trailing s7 is dropped from the device name for
clarity.
This simplicity is brought about because the GPT spec requires
that backwards compatible FDISK part
Andrew Werchowiecki wrote:
Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks

                                          Cylinders
      Partition   Status    Type        Start   End   Length
March 2013 3:04 PM
To: Andrew Werchowiecki
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] partitioned cache devices
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
<andrew.werchowie...@xpanse.com.au>
wrote:
I understand that p0 refers to the whole disk... in the l
what that does to the pool's OS interoperability.
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Friday, 15 March 2013 8:44 PM
To: Andrew Werchowiecki; zfs-discuss@opensolaris.org
Subject
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone got any
ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system, I've created an 8gb partition on eac
ilisation and performance for a ZFS COMSTAR target.
--
Andrew Gabriel
for large
transfers on 10GbE are:
280 MB/s   mbuffer
220 MB/s   rsh
180 MB/s   HPN-ssh unencrypted
 60 MB/s   standard ssh
The tradeoff: mbuffer is a little more complicated to script; rsh is, well, you know; and hpn-ssh requires rebuilding ssh and (probably) maintaining a second copy of it.
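For reference, a minimal sketch of the mbuffer route (host, port and buffer
sizes here are only illustrative, not from the measurements above):
  # receiving host
  mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/backup
  # sending host
  zfs send tank/fs@snap | mbuffer -O recvhost:9090 -s 128k -m 1G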
are portable to a different controller), are you able/willing to
swap it for one that Solaris is known to support well?
----
--
Andrew Gabriel
or if you don't care about existing snapshots, use Shadow Migration to
move the data across.
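If you take the Shadow Migration route, a minimal sketch (Solaris 11; the NFS
path and dataset names are made up for illustration):
  zfs create -o shadow=nfs://oldserver/export/data newpool/data
The new dataset is usable immediately and the data migrates across in the
background.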
--
Andrew Gabriel
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
upted, but not for those which rely on the Posix
semantics of synchronous writes/syncs meaning data is secured on non-volatile
storage when the function returns.
--
Andrew
-r export/home | wc -l
1951
$ echo 1951 / 365 | bc -l
5.34520547945205479452
$
So you're slightly ahead of my 5.3 years of daily snapshots:-)
--
Andrew Gabriel
map(2) and doesn't document this in detail, Dtrace is your friend.
4. Keep plenty of free space in the zpool if you want good database
performance. If you're more than 60% full (S10U9) or 80% full (S10U10),
that could be a factor.
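A quick way to check how close a pool is to those thresholds (a suggestion,
not from the original message) is the CAP column of:
  zpool list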
Anyway, there are a few things to think about.
--
Andrew
rinking of the ARC may be more
proactive now than it was back then, but I don't notice any ZFS
performance issues with the ARC restricted to 1GB on a desktop system.
It may have increased scrub times, but that happens when I'm in bed, so I
don't care.
--
Andrew
I just played and knocked this up (note the stunning lack of comments,
missing optarg processing, etc)...
Give it a list of files to check...
#define _FILE_OFFSET_BITS 64
/* header names were stripped by the archive; these are assumed standard ones */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
int
main(int argc, char **argv)
{
int i;
for (i = 1; i
relatively new, and the
controllers may not have been designed with SSDs in mind. That's likely
to be somewhat different nowadays, but I don't have any data to show
that either way.
--
Andrew Gabriel
aster, send only the difference between the current and recent
snapshots on the backup and then deploy it on backup.
Any ideas how this can be done?
It's called an incremental - it's part of the zfs send command line options.
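A minimal sketch of an incremental send/receive (snapshot, pool and host names
are illustrative):
  zfs snapshot pool/fs@today
  zfs send -i pool/fs@yesterday pool/fs@today | ssh backuphost zfs receive backup/fs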
--
Andrew Gabriel
's nothing in between.
Actually, there are a number of disk firmware and cache faults
in between, which zfs has picked up over the years.
--
Andrew Gabriel
10,000 synchronous write IOPs, but the underlying
devices are only performing about 1/10th of that, due to ZFS coalescing
multiple outstanding writes.
Sorry, I'm not familiar with what type of load bonnie generates.
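One way to watch that effect yourself (a suggestion, not from the original
post) is to compare pool-level and device-level activity while the load runs:
  zpool iostat -v tank 5
  iostat -xn 5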
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle
Does "current" include sol10u10 as well as sol11? If so, when did that go in?
Was it in sol10u9?
Thanks,
Andrew
From: Cindy Swearingen
<cindy.swearin...@oracle.com>
Subject: Re: [zfs-discuss] Can I create a mirror for a root rpool?
Date: December 16, 2011 10:38:2
, which is just silly to
fight with anyway.
Gregg
--
Andrew Gabriel
On 11/15/11 23:40, Tim Cook wrote:
On Tue, Nov 15, 2011 at 5:17 PM, Andrew Gabriel
<andrew.gabr...@oracle.com> wrote:
On 11/15/11 23:05, Anatoly wrote:
Good day,
The speed of send/recv is around 30-60 MBytes/s for initial
send and 17-25 MBytes
sec, so it's pretty much limited by
the ethernet.
Since you have provided none of the diagnostic data you collected, it's
difficult to guess what the limiting factor is for you.
--
Andrew Gabriel
s of
disk (SSD), block numbers are moved around to achieve wear leveling, so
blacklisting a block number won't stop you reusing that real block.
--
Andrew Gabriel (from mobile)
--- Original message ---
From: Edward Ned Harvey
To: didier.reb...@u-bourgogne.fr, zfs-discuss@opensolari
On 28/10/2011, at 3:06 PM, Daniel Carosone wrote:
> On Thu, Oct 27, 2011 at 10:49:22AM +1100, afree...@mac.com wrote:
>> Hi all,
>>
>> I'm seeing some puzzling behaviour with my RAID-Z.
>>
>
> Indeed. Start with zdb -l on each of the disks to look at the labels in more
> detail.
>
> --
> Dan
a ufs root disk, but any attempt to put a serious load
on it, and it corrupted data all over the place. So if you're going to
try one, make sure you hammer it very hard in a test environment before
you commit anything important to it.
--
Andrew Gabriel
Block: 1380679072    Error Block: 1380679072
Aug 16 13:14:16 nas-hz-02 scsi: Vendor: DELL    Serial Number:
Aug 16 13:14:16 nas-hz-02 scsi: Sense Key: Unit Attention
Aug 16 13:14:16 nas-hz-02 scsi: ASC: 0x29 (device internal
re
size (although that alone doesn't necessarily tell you much - a dtrace
quantize aggregation would be better). Also check service times on the
disks (iostat) to see if there's one which is significantly worse and
might be going bad.
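For example (illustrative commands, not from the original message), an I/O
size quantize aggregation and per-disk service times:
  dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'
  iostat -xn 5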
--
Andrew Gabriel
none of my 'data' disks have
been 'configured' yet. I wanted to ID them before adding them to pools.
Use p0 on x86 (whole disk, without regard to any partitioning).
Any other s or p device node may or may not be there, depending on what
partitions/slices are on
e end of the URL). It conks out at version 31 though.
I have systems back to build 125, so I tend to always force zpool
version 19 for that (and that automatically limits zfs version to 4).
There's also some info about some builds on the zfs wikipedia page
http://en.wikipedia.org/wiki/Zfs
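A sketch of forcing the older pool version at creation time (pool and device
names are made up):
  zpool create -o version=19 tank c0t1d0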
e to be able to find a non corrupt version of the data.
When you have a new hardware setup, I would perform scrubs more
frequently as a further check that the hardware doesn't have any
systemic problems, until you have gained confidence in it.
What's the RAID layout of your pool ("zpool status")?
--
Andrew Gabriel
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
--
Andrew Gabriel
ssion
and/or if you wish to reserve a minimum space...
zfs set reservation=50g logs/oracle
zfs set reservation=100g logs/session
Do I have to use the legacy mount options?
You don't have to.
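For example (paths are illustrative), ZFS can manage the mounts itself instead
of legacy vfstab entries:
  zfs set mountpoint=/u01/logs/oracle logs/oracle
  zfs set mountpoint=/u01/logs/session logs/session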
--
Andrew Gabriel
you give mkfs_pcfs all the geom data it needs, then it won't try
asking the device...
andrew@opensolaris:~# zfs create -V 10m rpool/vol1
andrew@opensolaris:~# mkfs -F pcfs -o
fat=16,nofdisk,nsect=255,ntrack=63,size=2 /dev/zvol/rdsk/rpool/vol1
Construct a new FAT file system
gate
Barracuda XT 2Tb disks (which are a bit more Enterprise than the list
above), just plugged them in, and so far they're OK. Not had them long
enough to report on longevity.
--
Andrew Gabriel
go about partitioning the disk?
What does the fdisk partitioning look like (if its x86)?
What does the VToC slice layout look like?
What are you using each partition and slice for?
What tells you that you can only see 300GB?
--
Andrew
t is much closer coupled to the CPU than a PCI
crypto card can be, and performance with small packets was key for the
crypto networking support T-series was designed for. Of course, it
handles crypto of large blocks just fine too.
--
Andrew
Richard Elling wrote:
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
Richard Elling wrote:
Actually, all of the data I've gathered recently shows that the number of IOPS
does not significantly increase for HDDs running random workloads. However the
response time does :-( My
taking into account priority, such as if the I/O
is a synchronous or asynchronous, and age of existing queue entries). I
had much fun playing with this at the time.
--
Andrew Gabriel
diff <snapshot> [<snapshot> | <filesystem>]
--
Andrew Gabriel
's in an area where there are lots of small blocks.
--
Andrew
is synchronous.
--
Andrew Gabriel
Certainly, fastfs (a similar although more dangerous option for ufs)
makes ufs to ufs copying significantly faster.
*ufsrestore works fine on ZFS filesystems (although I haven't tried it
with any POSIX ACLs on the original ufs filesystem, which would probably
simply get l
f oSol 134?
What does "zfs get sync" report?
--
Andrew Gabriel
Do you have any details on that CR? Either my Google-fu is failing or Oracle
has moved the CR database private. I haven't encountered this problem but would
like to know if there are certain behaviors to avoid to not risk this.
Has it been fixed in Sol10 or OpenSolaris?
Thanks,
A
ss, and I think you'll need a Windows system to
actually flash the BIOS.
You might want to do a google search on "3114 data corruption" too,
although it never hit me back when I used the cards.
--
Andrew Gabriel
drivers had been developed. I would suggest
looking for something more modern.
--
Andrew Gabriel
e scanned all the
surfaces on startup to build up an internal table of the relative
misalignment of tracks across the surfaces, but this rapidly became
unviable as drive capacity increased and this scan would take an
unreasonable length of time. It may be that modern drives learn this as
they
ely
provisioned, in order to deallocate blocks in the LUN which have
previously been allocated, but whose contents have since been invalidated.
In this case, both ZFS and whatever is providing the storage LUN would
need to support TRIM.
Out of interest, what other filesystems out there toda
dicator of impending failure, such
as the various error and retry counts.
--
Andrew Gabriel
replace it with a new raidz2 vdev?
If not what can I do to do damage control and add some redundancy to the single
drive vdev?
I think you should be able to attach another disk to it to make them
into a mirror. (Make sure you attach, and not add.)
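A sketch of what that looks like (device names are made up); note it is
zpool attach, not zpool add:
  zpool attach tank c0t0d0 c0t1d0   # mirrors the existing disk with the new one
  zpool status tank                 # watch the resilver complete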
--
Andrew
ed.
I won't claim ZFS couldn't better support use of back-end Enterprise
storage, but in this case, you haven't given any use cases where that's
relevant.
--
Andrew
earlier opensolaris versions, but it
no longer works).
If you have a support contract, raise a call and ask to be added to
RFE 6744320.
--
Andrew Gabriel
em) immediately
(so you can repeat the hardware snapshot again if it fails), maybe you
will be lucky.
The right way to do this with zfs is to send/recv the datasets to a
fresh zpool, or (S10 Update 9) to create an extra zpool mirror and then
split it off with zpool split.
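A sketch of the zpool split route (S10U9 or later; pool and device names are
made up):
  zpool attach tank c0t0d0 c0t5d0    # add another side to the mirror, let it resilver
  zpool split tank tankcopy c0t5d0   # split it off as a new, importable pool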
--
3017015200 50
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t0d0
/p...@0,0/pci1028,2...@1f,2/d...@0,0
Thanks for any idea.
--
Andrew Gabriel
itch to disable pseudo 512b access so
you can use the 4k native. The industry as a whole will transition to 4k
sectorsize over next few years, but these first 4k sectorsize HDs are
rather less useful with 4k sectorsize-aware OS's. Let's hope other
manufacturers get this right in their first
del is no longer available now. I'm going to have to swap out for
bigger disks in the not too distant future.
--
Andrew Gabriel
S is 3 rather than 2?
If you look at zfs_create_fs(), you will see the first 3 items created
are:
Create zap object used for SA attribute registration
Create a delete queue.
Create root znode.
Hence, inode 3.
--
Andrew Gabriel
ve you poor performance if
you are accessing both at the same time, as you are forcing head seeking
between them.
--
Andrew Gabriel
if the target is a new hard drive can I use
this zfs send al...@3 > /dev/c10t0d0 ?
That command doesn't make much sense for the purpose of doing anything
useful.
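If the aim is to get the data onto a new drive, a more useful shape (names are
illustrative, and the snapshot name is elided above) would be to create a pool
on the drive and receive into it:
  zpool create newpool c10t0d0
  zfs send pool/fs@snap | zfs receive newpool/fs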
--
Andrew Gabriel
e error you included isn't a timeout.
The SSD's themselves are all Intel X-25E's (32GB) with firmware 8860
and the LSI 1068 is a SAS1068E B3 with firmware 011c0200 (1.28.02.00).
I'm not intimately familiar with the firmware versions, but if you're
having problems, making s
What you say is true only on the system itself. On an NFS client system, 30
seconds of lost data in the middle of a file (as per my earlier example) is a
corrupt file.
-original message-
Subject: Re: [zfs-discuss] Solaris startup script location
From: Edward Ned Harvey
Date: 18/08/2010 17:17
>
ing a file sequentially,
you will likely find an area of the file is corrupt because the data was
lost.
--
Andrew Gabriel
Andrew Gabriel wrote:
Alxen4 wrote:
Is there any way run start-up script before non-root pool is mounted ?
For example I'm trying to use ramdisk as ZIL device (ramdiskadm )
So I need to create ramdisk before actual pool is mounted otherwise
it complains that log device is missing :)
way to do this is to "zfs set sync=disabled ..." on relevant
filesystems.
I can't recall which build introduced this, but prior to that, you can
set zfs:zil_disable=1 in /etc/system, but that applies to all
pools/filesystems.
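For reference, the two forms mentioned above look like this (dataset name is
made up):
  zfs set sync=disabled tank/scratch
and, on the older builds, in /etc/system:
  set zfs:zil_disable=1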
--
Andrew Gabriel
undancy on flaky storage is not a good place to be.
--
Andrew Gabriel
o see output of: zfs list -t all -r zpool/filesystem
There is a problem - the snapshot is too old, and, consequently, there is a
question: can I browse the pre-rollback corrupted branch of the FS? And if I
can, how?
--
Andrew Gabriel
Tony MacDoodle wrote:
I have 2 ZFS pools all using the same drive type and size. The
question is can I have 1 global hot spare for both of those pools?
Yes. A hot spare disk can be added to more than one pool at the same time.
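For example (pool and device names are made up), the same disk is simply added
as a spare to each pool:
  zpool add pool1 spare c4t0d0
  zpool add pool2 spare c4t0d0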
--
Andrew Gabriel
ped out drives, this works well,
and avoids ending up with sprawling lower capacity drives as your pool
grows in size. This is what I do at home. The freed-up drives get used
in other systems and for off-site backups. Over the last 4 years, I've
upgraded from 1/4TB, to 1/2TB, and now on 1TB dri
e you do a planned reduction of the pool
redundancy (e.g. if you're going to detach a mirror side in order to
attach a larger disk), most particularly if you are reducing the
redundancy to nothing.
--
Andrew Gabriel
use for one which is stopped.
However, you haven't given anything like enough detail here of your
situation and what's happening for me to make any worthwhile guesses.
--
Andrew Gabriel
Just wondering if anyone has experimented with working out the best zvol
recordsize for a zvol which is backing a zpool over iSCSI?
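For what it's worth, a zvol's block size is fixed at creation with
volblocksize (the value below is only an example; the best choice is exactly
the open question here):
  zfs create -V 100g -o volblocksize=8k tank/iscsivol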
--
Andrew Gabriel
ential
implications before embarking on this route.
(As I said before, the zpool itself is not at any additional risk of
corruption, it's just that you might find the zfs filesystems with
sync=disabled appear to have been rewound by up to 30 seconds.)
If you're unsure, then adding SSD no
dea
for some other applications though (although Linux ran this way for many
years, seemingly without many complaints). Note that there's no
increased risk of the zpool going bad - it's just that after the reboot,
filesystems with sync=disabled will look like they were rewo
tcat.
I haven't figured out where to get netcat nor the syntax for using it yet.
I used a buffering program of my own, but I presume mbuffer would work too.
--
Andrew Gabriel
between the machines due to the CPU limiting on the scp and gunzip
processes.
Also, if you have multiple datasets to send, might be worth seeing if
sending them in parallel helps.
--
Andrew Gabriel
takes nearly 19 hours now, and
hammers the heads quite hard. I keep meaning to reduce the scrub
frequency now it's getting to take so long, but haven't got around to
it. What I really want is pause/resume scrub, and the ability to trigger
the pause/resume from the screensaver (or
I find my home data growth is slightly less than the rate
of disk capacity increase, so every 18 months or so, I simply swap out
the disks for higher capacity ones.
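A sketch of one such swap (device names are made up; autoexpand is available
on later releases):
  zpool set autoexpand=on tank
  zpool replace tank c0t1d0 c0t5d0   # repeat per disk, letting each resilver finish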
--
Andrew Gabriel
FS, amongst other Solaris features.
--
Andrew Gabriel
't happen again (I have learned my lesson
this time) I have ordered two small SSD drives to put in a mirrored config
for the log device. Thanks again to everyone and now I will get some
worry-free sleep :)
Andrew
from
a backup source.
On Jul 6, 2010, at 11:48 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Andrew Kener
>>
>> the OS hard drive crashed [and log device]
>
> Here'
esn't get recognized. Probably because it was
never part of the original zpool. I also symlinked the new ZIL file into
/dev/dsk but that didn't make any difference either.
Any suggestions?
Andrew Kener
>
> Good. Run 'zpool scrub' to make sure there are no
> other errors.
>
> regards
> victor
>
Yes, scrubbed successfully with no errors. Thanks again for all of your
generous assistance.
/AJ
>
> - Original Message -
> > Victor,
> >
> > The zpool import succeeded on the next attempt
> following the crash
> > that I reported to you by private e-mail!
> >
> > For completeness, this is the final status of the
> pool:
> >
> >
> > pool: tank
> > state: ONLINE
> > scan: resilvere
e, because the features and functionality in zfs are
otherwise absolutely second to none.
/Andrew
> Andrew,
>
> Looks like the zpool is telling you the devices are
> still doing work of
> some kind, or that there are locks still held.
>
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in
Victor,
A little more info on the crash, from the messages file is attached here. I
have also decompressed the dump with savecore to generate unix.0, vmcore.0, and
vmdump.0.
Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice]
Jun 30 19:39:10 HL-SAN ^Mpanic[cpu3]/thread=ff0017909c60:
Jun
Victor,
I've reproduced the crash and have vmdump.0 and dump device files. How do I
query the stack on crash for your analysis? What other analysis should I
provide?
Thanks
>
> On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
>
> > Victor,
> >
> > The 'zpool import -f -F tank' failed at some point
> last night. The box was completely hung this morning;
> no core dump, no ability to SSH into the box to
> diagnose the
anything in the logs?
Earlier I ran 'zdb -e -bcsvL tank' in write mode for 36 hours and gave up to
try something different. Now the zpool import has hung the box.
Should I try zdb again? Any suggestions?
Thanks,
Andrew
Thanks Victor. I will give it another 24 hrs or so and will let you know how it
goes...
You are right, a large 2TB volume (CSV1) was not in the process of being
deleted, as described above. It is showing error 16 on 'zdb -e'
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error
16:
Could not open tank/CSV1, error 16
Considering my attempt to delete the CSV1 volume lead to the failure in the
first place, I have to think that if I can either 1) complete the deletion of
this volume or 2) ro
Malachi,
Thanks for the reply. There were no snapshots for the CSV1 volume that I
recall... very few snapshots on any volume in the tank.
Dedup had been turned on in the past for some of the volumes, but I had turned
it off altogether before entering production due to performance issues. GZIP
compression was turned on for the volume I was trying to delete.
Update: have given up on the zdb write mode repair effort, as least for now.
Hoping for any guidance / direction anyone's willing to offer...
Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested
in similar threads elsewhere. Note that this appears hung at near idle.
f
Now at 36 hours since zdb process start and:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck.
Thoughts on how to determine where and
evious installs). It's an amd64 box.
Both OS versions show the same problem.
Do I need to run a scrub? (will take days...)
Other ideas?
It might be interesting to run it under truss, to see which syscall is
returning that error.
--
Andrew Gabriel
is is incorrect. The viral effects of the GPL only take effect at the point
of distribution. If ZFS is distributed separately from the Linux kernel as a
module then the person doing the combining is the user. Different if a Linux
distro wanted to include it on a live CD, for example. GPL
for disks.
(Actually, vanity naming for disks should probably be brought out into
a separate RFE.)
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
Guillemont Park | Minley Road | Camberley | GU17 9QG | United Kingdom
ORACLE Corporat