> "js" == Joerg Schilling writes:
js> This is interesting. Where is this group hosted?
+1
I glance at the list after years of neglect (selfishly...after almost
losing my pool), and see stuff like this: shady backroom irc-kiddie
bullshit. please: names, mailing lists, urls, hg servers.
>>>>> "c" == Miles Nordin writes:
c> terabithia:/# zpool import andaman
c> cannot import 'andaman': I/O error
c> Destroy and re-create the pool from
c> a backup source.
snv_151, the proprietary release, was a
I have a Solaris Express snv_130 box that imports a zpool from two
iSCSI targets, and after some power problems I cannot import the pool.
When I found the machine, the pool was FAULTED with half of most
mirrors showing CORRUPTED DATA and half showing UNAVAIL. One of
the two iSCSI enclosures was o
> "js" == Joerg Schilling writes:
>> GPLv3 might help with NetApp <-> Oracle pact while CDDL does
>> not.
js> GPLv3 does not help at all with NetApp as the CDDL already
js> includes a patent grant with the maximum possible
js> coverage.
AIUI CDDL makes a user safe from
> "ld" == Linder, Doug writes:
ld> This list is for ZFS discussion. There are plenty of other
ld> places for License Wars and IP discussion.
Did you miss the part where ZFS was forked by a license change? Did
you miss Solaris Express 11 coming out with no source? Do you not
unders
> "ld" == Linder, Doug writes:
ld> Very nice. So why isn't it in Fedora (for example)?
I think it's slow and unstable? To me it's not clear yet whether it
will be the first thing in the Linux world that's stable and has
zfs-like capability. If ZFS were GPL it probably would have been,
> "js" == Joerg Schilling
delivered the following alternate reality of ideological
partisan hackery:
js> GPLv3 does not give you anything you don't have from CDDL
js> also.
I think this is wrong. The patent indemnification is totally
different: AIUI the CDDL makes the im
> "bf" == Bob Friesenhahn writes:
bf> Perhaps it is better for Linux if it is GPLv2, but probably
bf> not if it is GPLv3.
That's my understanding: GPLv3 is the one you would need to preserve
software freedom under deals like NetApp<->Oracle patent pact,
http://www.gnu.org/licenses/
> "rs" == Robert Soubie writes:
rs> Don't you forget that these companies also do much of their
rs> business in foreign countries (Europe, Asia) where software
rs> patenting is not allowed,
dated myth. software patents do exist in europe, and the EPO has
issued them. Fewer are
> "et" == Erik Trimble writes:
et> In that case, can I be the first to say "PANIC! RUN FOR THE
et> HILLS!"
Erik I thought most people already understood pushing to the public hg
gate had stopped at b147, hence Illumos and OpenIndiana. it's not
that you're wrong, just that you shoul
> "dm" == David Magda writes:
dm> The other thing is that with the growth of SSDs, if more OS
dm> vendors support "dynamic" sectors, SSD makers can have
dm> different values for the sector size
okay, but if the size of whatever you're talking about is a multiple
of 512, we don't
> "t" == taemun writes:
t> I would note that the Seagate 2TB LP has a 0.32% Annualised
t> Failure Rate.
bullshit.
pgpsMvTxl5Ghd.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensol
> "kd" == Krunal Desai writes:
kd> http://support.microsoft.com/kb/
dude. seriously?
This is worse than a waste of time. Don't read a URL that starts this
way.
kd> Windows 7 (even with SP1) has no support for 4K-sector
kd> drives.
NTFS has 4KByte allocation units, so all y
> "zu" == zfs user writes:
> "djm" == Darren J Moffat writes:
zu> Ugh, we all know that the first rule of crytpo is that any
zu> proprietary, closed source, "black-box" crypto is crap, blah,
zu> blah, blah (I am not sure what the point of repeating that
zu> tired line is)
> "djm" == Darren J Moffat writes:
djm> http://blogs.sun.com/darren/entry/introducing_zfs_crypto_in_oracle
djm> http://blogs.sun.com/darren/entry/assued_delete_with_zfs_dataset
djm> http://blogs.sun.com/darren/entry/compress_encrypt_checksum_deduplicate_with
Is there a URL describi
> "sl" == Sigbjorn Lie writes:
sl> Do you need registered ECC, or will non-reg ECC do
registered means the same thing as buffered. It has nothing to do
with registering to some kind of authority---it's a register like the
accumulators inside CPU's. The register allows more sticks per
c
> "tc" == Tim Cook writes:
tc> Channeling Ethernet will not make it any faster. Each
tc> individual connection will be limited to 1gbit. iSCSI with
tc> mpxio may work, nfs will not.
well...probably you will run into this problem, but it's not
necessarily totally unsolved.
I am
> "re" == Richard Elling writes:
re> it seems the hypervisors try to do crazy things like make the
re> disks readonly,
haha!
re> which is perhaps the worst thing you can do to a guest OS
re> because now it needs to be rebooted
I might've set it up to ``pause'' the VM for mo
> "re" == Richard Elling writes:
re> The risk here is not really different that that faced by
re> normal disk drives which have nonvolatile buffers (eg
re> virtually all HDDs and some SSDs). This is why applications
re> can send cache flush commands when they need to ensure t
> "en" == Eff Norwood writes:
en> We also tried SSDs as the ZIL which worked ok until they got
en> full, then performance tanked. As I have posted before, SSDs
en> as your ZIL - don't do it!
yeah, iirc the thread went back and forth between you and I for a few
days, something lik
> "tb" == Thomas Burgess writes:
tb> I'm running b134 and have been for months now, without issue.
tb> Recently i enabled 2 services to get Bonjour notifications
tb> working in osx
tb> /network/dns/multicast:default
tb> /system/avahi-bridge-dsd:default
tb> and i adde
> "nw" == Nicolas Williams writes:
nw> *You* stated that your proposal wouldn't allow Windows users
nw> full control over file permissions.
me: I have a proposal
you: op! OP op, wait! DOES YOUR PROPOSAL blah blah WINDOWS blah blah
COMPLETELY AND EXACTLY LIKE THE CURRENT ONE.
> "dd" == David Dyer-Bennet writes:
dd> Richard Elling said ZFS handles the 4k real 512byte fake
dd> drives okay now in default setups
There are two steps to handling it well. one is to align the start of
partitions to 4kB, and apparently on Solaris (thanks to all the
cumbersome par
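The alignment check itself is just modular arithmetic; a minimal sketch, assuming 512-byte logical sectors (so one 4 KiB physical block spans 8 LBAs) and a hypothetical partition start:

```shell
# Is a partition's starting LBA 4 KiB-aligned? Assumes 512-byte
# logical sectors, so 4096/512 = 8 LBAs per physical 4 KiB block.
start_lba=2048        # hypothetical partition start
if [ $(( start_lba % 8 )) -eq 0 ]; then
  echo aligned
else
  echo misaligned
fi
```

2048 is the start sector many modern partitioners pick, which is 4 KiB-aligned (it is in fact 1 MiB-aligned).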
> "ag" == Andrew Gabriel writes:
ag> Having now read a number of forums about these, there's a
ag> strong feeling WD screwed up by not providing a switch to
ag> disable pseudo 512b access so you can use the 4k native.
this reporting lie is no different from SSD's which have 2 - 8
> "nw" == Nicolas Williams writes:
nw> The current system fails closed
wrong.
$ touch t0
$ chmod 444 t0
$ chmod A0+user:$(id -nu):write_data:allow t0
$ ls -l t0
-r--r--r--+ 1 carton carton 0 Oct 6 20:22 t0
now go to an NFSv3 client:
$ ls -l t0
-r--r--r-- 1 carton 405 0 201
> "dm" == David Magda writes:
dm> Thank you Mr. Moffat et al. Hopefully the rest of us will be
dm> able to bang on this at some point. :)
Thanks for the heads-up on the gossip.
This etiquette seems weird, though: I don't thank Microsoft for
releasing a new version of Word. I'll p
> "nw" == Nicolas Williams writes:
nw> I would think that 777 would invite chmods. I think you are
nw> handwaving.
it is how AFS worked. Since no file on a normal unix box besides /tmp
ever had 777 it would send a SIGWTF to any AFS-unaware graybeards that
stumbled onto the director
>> Can the user in (3) fix the permissions from Windows?
no, not under my proposal.
but it sounds like currently people cannot ``fix'' permissions through
the quirky autotranslation anyway, certainly not to the point where
neither unix nor windows users are confused: windows users are always
> "nw" == Nicolas Williams writes:
nw> Keep in mind that Windows lacks a mode_t. We need to interop
nw> with Windows. If a Windows user cannot completely change file
nw> perms because there's a mode_t completely out of their
nw> reach... they'll be frustrated.
well...AIUI t
> "rb" == Ralph Böhme writes:
rb> The Darwin kernel evaluates permissions in a first
rb> match paradigm, evaluating the ACL before the mode
well...I think it would be better to AND them together like AFS did.
In that case it doesn't make any difference in which order you do it
becau
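The AND-composition being endorsed here is order-independent because AND is commutative; a toy sketch treating both the mode and the ACL grant as permission bitmasks (the values are illustrative, not any real on-disk encoding):

```shell
# Effective access = mode bits AND ACL grant. Unlike first-match
# evaluation, the order of the two operands cannot matter.
mode=4      # file mode grants read only  (4 = read, 2 = write)
acl=6       # ACL grants read+write
echo $(( mode & acl ))    # write is stripped because the mode lacks it
```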
> "sb" == Simon Breden writes:
sb> WD itself does not recommend them for 'business critical' RAID
sb> use
The described problems with WD aren't okay for non-critical
development/backup/home use either. The statement from WD is nothing
but an attempt to upsell you, to differentiate t
> "dd" == David Dyer-Bennet writes:
dd> Sure, if only a single thread is ever writing to the disk
dd> store at a time.
video warehousing is a reasonable use case that will have small
numbers of sequential readers and writers to large files. virtual
tape library is another obviously
> "dm" == David Magda writes:
dm> http://www.theregister.co.uk/2010/09/09/oracle_netapp_zfs_dismiss/
http://www.groklaw.net/articlebasic.php?story=20050121014650517
says when the MPL was modified to become the CDDL, clauses were
removed which would have required Oracle to disclose any p
> "ml" == Mark Little writes:
ml> Just to clarify - do you mean TLER should be off or on?
It should be set to ``do not have asvc_t 11 seconds and <1 io/s''.
...which is not one of the settings of the TLER knob.
This isn't a problem with the TLER *setting*. TLER does not even
apply unl
> "aa" == Anurag Agarwal writes:
aa> Every one being part of beta program will have access to
aa> source code
...and the right to redistribute it if they like, which I think is
also guaranteed by the license.
Yes, I agree a somewhat formal beta program could be smart for this
type o
> "en" == Eff Norwood writes:
en> http://www.anandtech.com/show/2738/8
but a few pages later:
http://www.anandtech.com/show/2738/25
so, as you say, ``with all major SSDs in the role of a ZIL you will
eventually not be happy.'' is true, but you seem to have accidentally
left out the `
> "aa" == Anurag Agarwal writes:
aa> * Currently we are planning to do a closed beta
aa> * Source code will be made available with release.
CDDL violation.
aa> * We will be providing paid support for our binary
aa> releases.
great, so long as your ``binary releases'' alwa
> "pb(" == Phillip Bruce (Mindsource) writes:
pb(> Problem solved.. Try using FQDN on the server end and that
pb(> work. The client did not have to use FQDN.
1. your syntax is wrong. You must use netgroup syntax to specify an
IP, otherwise it will think you mean the hostname made
> "ee" == Ethan Erchinger writes:
ee> We've had a failed disk in a fully support Sun system for over
ee> 3 weeks, Explorer data turned in, and been given the runaround
ee> forever.
that sucks.
but while NetApp may replace your disk immediately, they are an
abusive partner with
> "gd" == Garrett D'Amore writes:
>> Joerg is correct that CDDL code can legally live right
>> alongside the GPLv2 kernel code and run in the same program.
gd> My understanding is that no, this is not possible.
GPLv2 and CDDL are incompatible:
http://www.fsf.org/licensing/e
dd> 2 * Copyright (C) 2007 Oracle. All rights reserved.
dd> 3 *
dd> 4 * This program is free software; you can redistribute it and/or
dd> 5 * modify it under the terms of the GNU General Public
dd> 6 * License v2 as published by the Free Software Foundation.
dd>
> "pj" == Peter Jeremy writes:
> "gd" == Garrett D'Amore writes:
> "cb" == C Bergström writes:
> "fc" == Frank Cusack writes:
> "tc" == Tim Cook writes:
pj> Given that both provide similar features, it's difficult to
pj> see why Oracle would continue to invest in b
> "sw" == Saxon, Will writes:
sw> It was and may still be common to use RDM for VMs that need
sw> very high IO performance. It also used to be the only
sw> supported way to get thin provisioning for an individual VM
sw> disk. However, VMware regularly makes a lot of noise abou
> "mg" == Mike Gerdts writes:
> "sw" == Saxon, Will writes:
sw> I think there may be very good reason to use iSCSI, if you're
sw> limited to gigabit but need to be able to handle higher
sw> throughput for a single client.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?
> "bh" == Brandon High writes:
bh> For those 5 minutes, you'll see horrible performance. If the
bh> drive returns an error within 7-10 seconds, it would only take
bh> 35-50 seconds to fail.
For those 1 - 5 minutes, AIUI you see NO performance, not bad
performance. And pools othe
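The 35-50 second figure is just the per-sector error timeout multiplied out; a quick sketch with the numbers from the post (5 bad sectors is an assumption to make the arithmetic match):

```shell
# Time to surface a failure if each bad sector takes t seconds to
# error out and the read crosses n bad sectors. n=5 reproduces the
# 35-50 s range quoted for 7-10 s per-sector timeouts.
n=5
for t in 7 10; do
  echo "${n} sectors @ ${t}s each: $(( n * t ))s total"
done
```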
> "sw" == Saxon, Will writes:
sw> 'clone' vs. a 'copy' would be very easy since we have
sw> deduplication now
dedup doesn't replace the snapshot/clone feature for the
NFS-share-full-of-vmdk use case because there's no equivalent of
'zfs rollback'
I'm tempted to say, ``vmware needs
> "bh" == Brandon High writes:
bh> Recent versions no longer support enabling TLER or ERC. To
bh> the best of my knowledge, Samsung and Hitachi drives all
bh> support CCTL, which is yet another name for the same thing.
once again, I have to ask, has anyone actually found these f
> "re" == Richard Elling writes:
re> we would very much like to see Oracle continue to produce
re> developer distributions which more closely track the source
re> changes.
I'd rather someone else than Oracle did it. Until someone else is
doing the ``building'', whatever that ent
> "ds" == Dmitry Sorokin writes:
ds> The SSD drive has failed and zpool is unavailable anymore.
AIUI,
6733267 Allow a pool to be imported with a missing slog
is only fixed for the case where the pool is still imported. If you
export it without removing the slog first, the pool is los
> "ab" == Alex Blewitt writes:
>>> 3. The quality of software inside the firewire cases varies
>>> wildly and is a big source of stability problems. (even on
>>> mac)
ab> It would be good if you could refrain from spreading FUD if
ab> you don't have experience with it.
> "ab" == Alex Blewitt writes:
ab> All Mac Minis have FireWire - the new ones have FW800.
I tried attaching just two disks to a ZFS host using firewire, and it
worked very badly for me. I found:
1. The solaris firewire stack isn't as good as the Mac OS one.
2. Solaris is very obnoxi
> "np" == Neil Perrin writes:
np> The L2ARC just holds blocks that have been evicted from the
np> ARC due to memory pressure. The DDT is no different than any
np> other object (e.g. file).
The other cacheable objects require pointers to stay in the ARC
pointing to blocks in the L
> "bh" == Brandon High writes:
>> Atom
bh> 32-bit kernels can't support drives over 1GB.
iirc, atom desktop chips are 64-bit and recognized as 64-bit by
kernel, but not recognized by grub. but I thought this got fixed. If
you use 'e' in grub to alter the boot line to replace $IS
> "pk" == Pasi Kärkkäinen writes:
>>> You're really confused, though I'm sure you're going to deny
>>> it.
>> I don't think so. I think that it is time to reset and reboot
>> yourself on the technology curve. FC semantics have been
>> ported onto ethernet. This is not
> "gd" == Garrett D'Amore writes:
gd> There are numerous people in the community that have indicated
gd> that they believe that such linking creates a *derivative*
gd> work. Donald Becker has made this claim rather forcefully.
yes, I think he has a point. The reality is, as lon
> "re" == Richard Elling writes:
re> Please don't confuse Ethernet with IP.
okay, but I'm not. seriously, if you'll look into it.
Did you misread where I said FC can exert back-pressure? I was
contrasting with Ethernet.
Ethernet output queues are either FIFO or RED, and are large com
> "et" == Erik Trimble writes:
et> With NFS-hosted VM disks, do the same thing: create a single
et> filesystem on the X4540 for each VM.
previous posters pointed out there are unreasonable hard limits in
vmware to the number of NFS mounts or iSCSI connections or something,
so you wil
> "sl" == Sigbjørn Lie writes:
sl> Excellent! I wish I would have known about these features when
sl> I was attempting to recover my pool using 2009.06/snv111.
the OP tried the -F feature. It doesn't work after you've lost zpool.cache:
op> I was setting up a new systen (osol 20
> "cs" == Cindy Swearingen writes:
okay wtf. Why is this thread still alive?
cs> The mirror mount feature
It's unclear to me from this what state the feature's in:
http://hub.opensolaris.org/bin/view/Project+nfs-namespace/
It sounds like mirror mounts are done but referrals are not,
> "d" == Don writes:
> "hk" == Haudy Kazemi writes:
d> You could literally split a sata cable and add in some
d> capacitors for just the cost of the caps themselves.
no, this is no good. The energy only flows in and out of the
capacitor when the voltage across it changes. I
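The point about capacitors can be made with the stored-energy formula: the energy a capacitor delivers between two voltages is C(V1^2 - V2^2)/2, which is zero if the rail is never allowed to sag. A one-liner to illustrate (values are arbitrary):

```shell
# Energy a capacitor delivers dropping from V1 to V2 is C*(V1^2-V2^2)/2.
# If the rail must hold a constant 5 V, V1 = V2 and it delivers nothing,
# which is why bare caps spliced across a SATA power line don't ride
# out an outage without some regulation in front of the load.
awk 'BEGIN { C=1.0; V1=5.0; V2=5.0; printf "%.1f J\n", C*(V1*V1-V2*V2)/2 }'
```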
> "ai" == Asif Iqbal writes:
>> If you disable the ZIL for locally run Oracle and you have an
>> unscheduled outage, then it is highly probable that you will
>> lose data.
ai> yep. that is why I am not doing it until we replace the
ai> battery
no, wait please, you st
> "dd" == David Dyer-Bennet writes:
dd> Just how DOES one know something for a certainty, anyway?
science.
Do a test like Lutz did on X25M G2. see list archives 2010-01-10.
> "rsk" == Roy Sigurd Karlsbakk writes:
> "dm" == David Magda writes:
> "tt" == Travis Tabbal writes:
rsk> Disabling ZIL is, according to ZFS best practice, NOT
rsk> recommended.
dm> As mentioned, you do NOT want to run with this in production,
dm> but it is a quick w
> "d" == Don writes:
d> "Since it ignores Cache Flush command and it doesn't have any
d> persistant buffer storage, disabling the write cache is the
d> best you can do." This actually brings up another question I
d> had: What is the risk, beyond a few seconds of lost wri
> "et" == Erik Trimble writes:
et> No, you're reading that blog right - dedup is on a per-pool
et> basis.
The way I'm reading that blog is that deduped data is expanded in the
ARC.
> "et" == Erik Trimble writes:
et> frequently-accessed files from multiple VMs are in fact
et> identical, and thus with dedup, you'd only need to store one
et> copy in the cache.
although counterintuitive I thought this wasn't part of the initial
release. Maybe I'm wrong altoget
> "bh" == Brandon High writes:
bh> The devid for a USB device must change as it moves from port
bh> to port.
I guess it was tl;dr the first time I said this, but:
the old theory was that a USB device does not get a devid because it
is marked ``removeable'' in some arcane SCSI pa
> "bh" == Brandon High writes:
bh> From what I've read, the Hitachi and Samsung drives both
bh> support CCTL, which is in the ATA-8 spec. There's no way to
bh> toggle it on from OpenSolaris (yet) and it doesn't persist
bh> through reboot so it's not really ideal.
bh> Here
> "eg" == Emily Grettel writes:
eg> What do people already use on their enterprise level NAS's?
For a SOHO NAS similar to the one you are running, I mix manufacturer
types within a redundancy set so that a model-wide manufacturing or
firmware glitch like the ones of which we've had sever
> "bh" == Brandon High writes:
bh> If you boot from usb and move your rpool from one port to
bh> another, you can't boot. If you plug your boot sata drive into
bh> a different port on the motherboard, you can't
bh> boot. Apparently if you are missing a device from your rpool
> "jcm" == James C McPherson writes:
>> storage controllers are more difficult for driver support.
jcm> Be specific - put up, or shut up.
marvell controller hangs machine when a drive is unplugged
marvell controller does not support NCQ
marvell driver is closed-source blob
sil3124
> "mg" == Mike Gerdts writes:
mg> If Solaris is under memory pressure, [...]
mg> The best thing to do with processes that can be swapped out
mg> forever is to not run them.
Many programs allocate memory they never use. Linux allows
overcommitting by default (but disableable), b
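For reference, the Linux knob alluded to here is vm.overcommit_memory (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict accounting against swap plus a fraction of RAM):

```shell
# Inspect the current Linux overcommit policy (prints 0, 1, or 2).
cat /proc/sys/vm/overcommit_memory
# Disabling overcommit (strict accounting) would be, as root:
#   sysctl vm.overcommit_memory=2
```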
> "bh" == Brandon High writes:
bh> The drive should be on the same USB port because the device
bh> path is saved in the zpool.cache. If you removed the
bh> zpool.cache, it wouldn't matter where the drive was plugged
bh> in.
I thought it was supposed to go by devid.
There was
> "mef" == Mary Ellen Fitzpatrick writes:
mef> Is there a way to set permissions so that the /etc/auto.home
mef> file on the clients does not list every exported dir/mount
mef> point?
If I understand the question right, then, no. These maps are very
traditional from the earliest da
> "dm" == David Magda writes:
dm> Given that ZFS is always consistent on-disk, why would you
dm> lose a pool if you lose the ZIL and/or cache file?
because of lazy assertions inside 'zpool import'. you are right there
is no fundamental reason for it---it's just code that doesn't exi
> "re" == Richard Elling writes:
re> a well managed system will not lose zpool.cache or any other
re> file.
I would complain this was circular reasoning if it weren't such
obvious chest-puffing bullshit.
It's normal even to the extent of being a best practice to have no
redundancy f
> "re" == Richard Elling writes:
>> A failed unmirrored log device would be the
>> permanent death of the pool.
re> It has also been shown that such pools are recoverable, albeit
re> with tedious, manual procedures required.
for the 100th time, No, they're not, not if you lo
> "edm" == Eric D Mudama writes:
edm> What you're suggesting is exactly what SSD vendors already do.
no, it's not. You have to do it for them.
edm> They present a 512B standard host interface sector size, and
edm> perform their own translations and management inside the
edm> de
> "edm" == Eric D Mudama writes:
edm> How would you stripe or manage a dataset across a mix of
edm> devices with different geometries?
the ``geometry'' discussed is 1-dimensional: sector size.
The way that you do it is to align all writes, and never write
anything smaller than the sec
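Aligning every write is again simple arithmetic: round the start down and the end up to sector multiples. A sketch with hypothetical numbers:

```shell
# Expand an (offset, length) request so it starts and ends on sector
# boundaries; nothing smaller than one sector is ever written.
sector=4096                 # hypothetical native sector size
off=10000; len=3000         # hypothetical unaligned request
aligned_off=$(( off / sector * sector ))                       # round down
aligned_end=$(( (off + len + sector - 1) / sector * sector ))  # round up
echo "$aligned_off $(( aligned_end - aligned_off ))"
```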
> "jcm" == James C McPherson writes:
> "ga" == Günther Alka writes:
jcm> I am amazed that you believe OpenSolaris binary distro has too
jcm> much desktop stuff. Most people I have come across are firmly
jcm> of the belief that it does not have enough.
minification is stupid, an
> "dd" == David Dyer-Bennet writes:
dd> Is it possible to switch to b132 now, for example?
yeah, this is not so bad. I know of two approaches:
* genunix.org assembles livecd's of each b tag. You can burn
one, unplug from the internet, install it. It is nice to have a
livecd ca
> "re" == Richard Elling writes:
> "dc" == Daniel Carosone writes:
re> In general, I agree. How would you propose handling nested
re> mounts?
force-unmount them. (so that they can be manually mounted elsewhere,
if desired, or even in the same place with the middle filesystem
mi
> "rs" == Ragnar Sundblad writes:
rs> use IPSEC to make IP address spoofing harder.
IPsec with channel binding is win, but not until SA's are offloaded to
the NIC and all NIC's can do IPsec AES at line rate. Until this
happens you need to accept there will be some protocols used on SAN
> "dm" == David Magda writes:
> "bf" == Bob Friesenhahn writes:
dm> OP may also want to look into the multi-platform pkgsrc for
dm> third-party open source software:
+1. jucr.opensolaris.org seems to be based on RPM which is totally
fail. RPM is the oldest, crappiest, most fru
> "jr" == Jeroen Roodhart writes:
jr> Running OSOL nv130. Power off the machine, removed the F20 and
jr> power back on. Machines boots OK and comes up "normally" with
jr> the following message in 'zpool status':
yeah, but try it again and this time put rpool on the F20 as well an
> "re" == Richard Elling writes:
re> # ptime zdb -S zwimming
re> Simulated DDT histogram:
re> refcnt  blocks  LSIZE  PSIZE  DSIZE  blocks  LSIZE  PSIZE  DSIZE
re> Total   2.63M   277G   218G   225G   3.22M   337G   263G   270G
re> in-core size = 2.63M
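Turning a zdb -S count into a RAM estimate is one multiplication; the ~320 bytes per in-core DDT entry below is a commonly quoted rule of thumb, not a number from this thread:

```shell
# Rough DDT memory estimate: unique blocks x in-core bytes per entry.
# 320 bytes/entry is a widely repeated rule of thumb, not a measurement.
entries=2630000          # ~2.63M unique blocks, as in the zdb -S run
bytes_per_entry=320
echo "$(( entries * bytes_per_entry / 1024 / 1024 )) MiB"
```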
> "enh" == Edward Ned Harvey writes:
enh> If you have zpool less than version 19 (when ability to remove
enh> log device was introduced) and you have a non-mirrored log
enh> device that failed, you had better treat the situation as an
enh> emergency.
Ed the log device removal sup
> "la" == Lori Alt writes:
la> I'm only pointing out that eliminating the zpool.cache file
la> would not enable root pools to be split. More work is
la> required for that.
makes sense. All the same, please do not retaliate against the
bug-opener by adding a lazy-assertion to pr
> "enh" == Edward Ned Harvey writes:
enh> Dude, don't be so arrogant. Acting like you know what I'm
enh> talking about better than I do. Face it that you have
enh> something to learn here.
funny! AIUI you are wrong and Casper is right.
ZFS recovers to a crash-consistent state, e
> "rm" == Robert Milkowski writes:
rm> the reason you get better performance out of the box on Linux
rm> as NFS server is that it actually behaves like with disabled
rm> ZIL
careful.
Solaris people have been slinging mud at linux for things unfsd did in
spite of the fact knfsd h
> "rm" == Robert Milkowski writes:
rm> This is not true. If ZIL device would die *while pool is
rm> imported* then ZFS would start using a ZIL within the pool and
rm> continue to operate.
what you do not say, is that a pool with dead zil cannot be
'import -f'd. So, for example,
> "et" == Erik Trimble writes:
et> Add this zvol as the cache device (L2arc) for your other pool
doesn't bug 6915521 mean this arrangement puts you at risk of deadlock?
> "cm" == Courtney Malone writes:
> "j" == Jim writes:
j> Thanks for the suggestion, but have tried detaching but it
j> refuses reporting no valid replicas.
yeah this happened to someone else also, see list archives around
2008-12-03:
cm> I have a 10 drive raidz, recentl
> "srbi" == Steve Radich, BitShop, Inc writes:
srbi> http://www.bitshop.com/Blogs/tabid/95/EntryId/78/Bug-in-OpenSolaris-SMB-Server-causes-slow-disk-i-o-always.aspx
I'm having trouble understanding many things in here like ``our file
move'' (moving what from where to where with what proto
> "sn" == Sriram Narayanan writes:
sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
yeah, but he has no slog, and he says 'zpool clear' makes the system
panic and reboot, so even from way over here that link looks useless.
Patrick, maybe try a newer livecd from genunix.org lik
> "k" == Khyron writes:
k> FireWire is an Apple technology, so they have a vested
k> interest in making sure it works well [...] They could even
k> have a specific chipset that they exclusively use in their
k> systems,
yes, you keep repeating yourselves, but there are o
> "bh" == Brandon High writes:
bh> I think I'm seeing an error in the output from zfs list with
bh> regards to snapshot space utilization.
no bug. You just need to think harder about it: the space used cannot
be neatly put into buckets next to each snapshot that add to the
total, ju
>>>>> "c" == Miles Nordin writes:
>>>>> "mg" == Mike Gerdts writes:
c> are compatible with the goals of an archival tool:
sorry, obviously I meant ``not compatible''.
mg> Richard Elling made an interesting observation
> "djm" == Darren J Moffat writes:
djm> I've logged CR# "6936195 ZFS send stream while checksumed
djm> isn't fault tollerant" to keep track of that.
Other tar/cpio-like tools are also able to:
* verify the checksums without extracting (like scrub)
* verify or even extract the strea
> "la" == Lori Alt writes:
la> This is no longer the case. The send stream format is now
la> versioned in such a way that future versions of Solaris will
la> be able to read send streams generated by earlier versions of
la> Solaris.
Your memory of the thread is selective. T