Any news on the ZFS deduplication work being done? I hear Jeff Bonwick might
speak about it this month.
> on a UFS or reiserfs such errors could be corrected.
I think some of these people are assuming your hard drive is broken. I'm not
sure what you're assuming, but if the hard drive is broken, I don't think ANY
file system can do anything about that.
At best, if the disk was in a RAID 5 array,
> Posted for my friend Marko:
>
> I've been reading up on ZFS with the idea to build a
> home NAS.
>
> My ideal home NAS would have:
>
> - high performance via striping
> - fault tolerance with selective use of multiple
> copies attribute
> - cheap by getting the most efficient space
> utilization
I recently tried to import a b97 pool into a b98 upgrade of the same OS,
and it failed because of some bug. So maybe try eliminating that kind of
problem by making sure to use the version that you know worked in the past.
Maybe you already did this.
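If it helps, the basic flow (pool name hypothetical) is just:

  # on the build that last wrote the pool, if it is still around:
  zpool export tank
  # then on the build you are testing:
  zpool import          # with no arguments, lists the pools this build can see
  zpool import tank

If the pool shows up in the listing but the import still dies, that points at a
real bug rather than a simple version mismatch.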
>
> Folks, I have a zpool with a
> rai
> Yes, we've been pleasantly surprised by the demand.
> But, that doesn't mean we're not anxious to expand
> our ability to address such an important market as
> OpenSolaris and ZFS.
>
> We're actively working on OpenSolaris drivers. We
> don't expect it to take long - I'll keep you posted.
>
>
I'm wondering if this bug is fixed and if not, what is the bug number:
> If your entire pool consisted of a single mirror of
> two disks, A and B,
> and you detached B at some point in the past, you
> *should* be able to
> recover the pool as it existed when you detached B.
> However, I just
> ri
> It would be trivial to make the threshold a tunable,
> but we're
> trying to avoid this sort of thing. I don't want
> there to be a
> ZFS tuning guide, ever. That would mean we failed.
>
> Jeff
harumph... http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
:-)
Well now that
Just to confuse you more, I mean, give you another point of view:
> - CPU: 1 Xeon Quad Core E5410 2.33GHz 12MB Cache 1333MHz
The Xeon line is good because it lets you squeeze maximum performance out of a
given Intel processor technology, possibly getting the highest perf
The good news is that even though the answer to your question is "no", it
doesn't matter because it sounds like what you are doing is a piece of cake :)
Given how cheap hardware is, and how modest your requirements sound, I expect
you could build multiple custom systems for the cost of an EMC sy
> [most people don't seem to know Solaris has ramdisk devices]
That is because only a select few are able to unravel the enigma wrapped in a
clue that is solaris :)
There is no good ZFS GUI. Nothing that is actively maintained, anyway.
Okay, so your AHCI hardware is not using an AHCI driver in solaris. A crash
when pulling a cable is still not great, but it is understandable because that
driver is old and bad and doesn't support hot swapping at all.
So there are two things to do here. File a bug about how pulling a sata cable
> Pulling cables only simulates pulling cables. If you
> are having difficulty with cables falling out, then this problem cannot
> be solved with software. It *must* be solved with hardware.
I don't think anyone is asking for software to fix cables that fall out...
they're asking for the OS to no
> James isn't being a jerk because he hates you or
> anything...
>
> Look, yanking the drives like that can seriously
> damage the drives or your motherboard. Solaris
> doesn't let you do it and assumes that something's
> gone seriously wrong if you try it. That Linux
> ignores the behavior and l
> Will I get markedly better performance with 5 drives (2^2+1) or 6 drives
> 2*(2^1+1) because the parity calculations are more efficient across 2^N
> drives?
If only parity calculations stand to benefit, then it wouldn't make a
difference because your CPU is more than powerful enough to take c
> I got a 750 and sliced it and mirrored the other pieces.
Maybe you ran into a bug, because that situation would not be tested much in
the wild... or maybe you just had bad luck and your computer toasted some
data.
> Thanks Jeff. I hope my frustration in all this doesn't sound directed
> at
> Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems
> to have solved the problem.
Is this really the case? If so that is an important clue to finding out why
virtualized opensolaris performance is so poor. I tried every network adapter
in virtualbox and vmware and
> It looks pretty lively from my browser :-)
Now that you showed up ;)
In my case it is OpenSolaris in VirtualBox so I was expecting more cooperation,
or at least people striving to make them cooperate.
But like you said, this is likely just a case of OpenSolaris being optimized
for big iron a
I mentioned this too, but on the performance forum:
http://www.opensolaris.org/jive/thread.jspa?threadID=64907&tstart=0
Unfortunately the performance forum has tumbleweeds blowing through it, so that
was probably the wrong place to complain. Not that people don't care about
performance, but th
> It turns out that when you are in IDE compatibility mode, having two
> disks on the same 'controller' (c# in solaris) behaves just like real
> IDE... Crap!
That is the second time I've seen solaris guess wrong and force what it thinks
is right. Solaris will also limit the size of an ATA drive
> What do you mean about "mirrored vdevs" ? RAID1
> hardware? Because I have only ICH9R and opensolaris
> doesn't know about it.
No, he means a mirror created by zfs.
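In other words, no special hardware is needed; something like this (device
names made up) is all it takes:

  zpool create tank mirror c2t0d0 c2t1d0

ZFS does the mirroring itself, so the ICH9R can stay in plain SATA mode.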
> You may all have 'shared human errors', but i dont
> have that issue
> whatsoever :) I find it quite interesting the issues
> that you guys
> bring up with these drives. All manufactured goods
> suffer the same
> pitfalls of production. Would you say that WD and
> Seagate are the
> fro
> Putting into the zpool command would feel odd to me, but I agree that
> there may be a useful utility here.
There MAY be a useful utility here? I know this isn't your fight, Dave, but
this tipped me over the edge and I have to say something :)
Can we agree that the format command lists the disks it can use
Use Froogle for price checking.
I don't know what chipsets are supported by opensolaris, but if I were you I'd
be looking hard at motherboards with as much integrated as possible. For
instance, for less than $100 you can get a mini-ATX motherboard with 6 SATA
ports and onboard video. I found
> One other thing I noticed is that OpenSolaris (.com) will
> automatically install ZFS root for you. Will Nexenta do that?
Yeah, Nexenta was the first OpenSolaris distro to have ZFS root install and
snapshots and a modern package system, which all ties together into easy
upgrades.
This sounds like an important problem
> > Hi...
> >
> > Here's my system:
> >
> > 2 Intel 3 Ghz 5160 dual-core cpu's
> > 0 SATA 750 GB disks running as a ZFS RAIDZ2 pool
> > 8 GB Memory
> > SunOS 5.11 snv_79a on a separate UFS mirror
> > ZFS pool version 10
> > No separate ZIL or
> So, I guess there is no point in providing a database on top of ZFS, just as
> MS tried to do? A WinFS-like thing on ZFS wouldn't be beneficial at all?
> Better off with plain ZFS?
This is a no-brainer. Microsoft tried to hit a fly with a hammer and noticed
their arm getting tired.
Why wou
> I have 4x500G disks in a RAIDZ. I'd like to repurpose one of them
> SYS1 124G 1.21T 29.9K /SYS1
This seems to be a simple task because RAID5/Z runs just fine when it is
missing one disk. Just format one disk any way that works (take the array
offline and do it with format or zpool, or boot in
> 1. In zfs can you currently add more disks to an existing raidz? This is
> important to me as i slowly add disks to my system one at a time.
No, but Solaris and Linux RAID5 can do this (on Linux, grow with mdadm).
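For reference, the mdadm flow is roughly this (device names made up):

  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=5

The reshape runs in the background and can take many hours on big disks.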
> 2. in a raidz do all the disks have to be the same size?
I think this one has
> I'm not convinced that single bit flips are the common failure mode for disks.
I think the original suggestion might be aimed at bad RAM more than bad disks.
Almost no home computers have ECC RAM, so as ZFS transitions from enterprise to
home, this (optional) feature sounds very worthwhile.
> So I scrubbed the whole pool and it found a lot more corrupted files.
My condolences :)
General questions and comments about ZFS and data corruption:
I thought RAIDZ would correct data errors automatically with the parity data.
How wrong am I on that? Perhaps a parity correction was alrea
> I didn't expect miracles, but since WinRAR gave 13% compression
ZFS doesn't compress a block if it can't get a certain amount of return on it.
Since the default compression is less effective than RAR, you can bet ZFS is
seeing much less than 13% return.
I expect everything is working properly.
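If you want to see what you're actually getting, this should show it (dataset
name made up):

  zfs get compression,compressratio tank/data

IIRC the cutoff is 12.5%: if compressing a block doesn't save at least that
much, ZFS stores the block uncompressed.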
> Is this service something that we'd like to put into OpenSolaris
Heck yes, at least Indiana needs something like that. I guess nobody is
spearheading the "Indiana data backup solution" right now, but that work of
yours could be part of it.
To the user there is no difference between "regularl
> http://zfs.macosforge.org/
Good work to those involved :)
> such a minor "feature"
I don't think copying files is a minor feature.
Doubly so since the words I've read from Sun suggest that ZFS "file systems"
(or "data sets" or whatever they are called now) can be used in the way
directories on a normal file system are used.
> 2) Unstable APT integrated with ON build 79, give it a try!
Excellent progress!! But your website is out of date and I cannot find a
NexentaCP link on the download page. Only the old NexentaOS link. Also you
should update the news page so it looks like the project is active :)
> So there is no current way to specify the creation of
> a 3 disk raid-z
> array with a known missing disk?
Can someone answer that? Or does the zpool command NOT accommodate the
creation of a degraded raidz array?
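The workaround people mention is to stand in a sparse file for the missing disk
and then offline it. A sketch, with sizes and names made up:

  mkfile -n 500g /var/tmp/fakedisk
  zpool create tank raidz c0t0d0 c0t1d0 /var/tmp/fakedisk
  zpool offline tank /var/tmp/fakedisk

Then zpool replace the file with the real disk when it arrives. Unsupported,
obviously, and you are running with no redundancy until then.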
ZFS has an SMB server on the way, but no real public information about it has
been released. Here is a sample of its existence:
http://www.opensolaris.org/os/community/arc/caselog/2007/560/;jsessionid=F4061C9308088852992B7DE83CD9C1A3
> I consider myself an early adopter of ZFS and pushed
> it hard on this
> list and in real life with regards to iSCSI
> integration, zfs
> performance issues with latency there of, and how
> best to use it with
> NFS. Well, I finally get to talk more about the
> ZFS-based product I've
> been beta
> Here's what I've done so far:
The obvious thing to test is the drive controller, so maybe you should do that
:)
ZFS "copies" attribute could be used to make this easy, but with all the talk
of kernel panics on drive loss and non-guaranteed block placement across
different disks, I don't like ZFS copies. (see threads like
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-October/043279.html )
The bo
> Any idea when the installer integration for ZFS
> root/boot will happen?
Project Indiana will have it next week-ish, but I don't know about SXCE. SXCE
itself might disappear before it gets the zfs root installer...?
Sun's storage strategy:
1) Finish Indiana and distro constructor
2) (ship stuff using ZFS-Indiana)
3) Success
I asked this recently, but haven't done anything else about it:
http://www.opensolaris.org/jive/thread.jspa?messageID=155583
"One or more devices could not be opened"? I wonder if this has anything to do
with our problems here...:
http://www.opensolaris.org/jive/thread.jspa?messageID=160589
> 3) Forget PCI-Express -- if you have a free PCI-X (or
> PCI)-slot. Supermicro AOC-SAT2-MV8 (PCI-X cards are
> (usually) plain-PCI-compatible; and this one is). It
> has 8 ports, is natively plug-and-play-supported and
> does not cost more than twice a si3132, and costs
> only a fraction of other >
This one might be better in the help forum/list :)
You will probably want to use the latest SXDE for that instead of Solaris 10.
It is a recent well-tested SXCE which is much newer than Solaris 10. Depending
on how good the Super Project Indiana OpenSolaris Milestone 1 Turbo turns out
at the
I think I might have run into the same problem. At the time I assumed I was
doing something wrong, but...
I made a b72 raidz out of three new 1gb virtual disks in vmware. I shut the vm
off, replaced one of the disks with a new 1.5gb virtual disk. No matter what
command I tried, I couldn't ge
Re: http://bugs.opensolaris.org/view_bug.do?bug_id=6602947
Specifically this part:
Create zpool /testpool/. Create zfs file system /testpool/testfs.
Right click on /testpool/testfs (filesystem) in nautilus and rename to testfs2.
Do zfs list. Note that only /testpool/testfs (filesystem) is pr
With the arrival of ZFS, the "format" command is well on its way to deprecation
station. But how else do you list the devices that zpool can create pools out
of?
Would it be reasonable to enhance zpool to list the vdevs that are available to
it? Perhaps as part of the help output to "zpool cr
Just to answer one of my questions, "df" seems to work pretty well. That said
I still think the zpool creation tool would do well to list what it can create
zpools out of.
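For the record, the non-interactive trick is:

  echo | format

which dumps the numbered disk list and exits. Clunky, but it's the closest
thing today to a listing of what zpool create can draw from.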
That doesn't exist yet because everything about OpenSolaris is pretty young.
The demand is there though because there is a constant stream of people
interested in ZFS as a home file archive system.
By the time Indiana is off its feet, popularity will grow, the distro
constructor will exist, an
> My question is: Is there any interest in finishing RAID5/RAID6 for ZFS?
> If there is no chance it will be integrated into ZFS at some point, I
> won't bother finishing it.
Your work is as pure an example as any of what OpenSolaris should be about. I
think there should be no problem having a n
To expand on this:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk.
Does ZFS really never use disk cache when working with a disk slice? Is there
any way to force it to use the disk cache?
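For what it's worth, you can flip the write cache by hand with format in expert
mode; from memory the menu path looks like this (pick the disk first):

  format -e
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable

No idea whether ZFS will then manage and flush it correctly on a slice, which
is really the question.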
> Unfortunately it only comes with 4 adapters, bare
> metal adapters without any dampening/silencing and
> so on...
> ...anyway I wanted to make it the most silent I
> could, so I suspended all the 10 disks (8 sata 320gb
> and a little 2,5" pata root disk) with a flexible
> wire, like I posted in t
For everyone else:
http://blogs.sun.com/timthomas/entry/samba_and_swat_in_solaris#comments
"It looks like nevada 70b will be the next Solaris Express Developer Edition
(SXDE) which should also drop shortly and should also have the ZFS ACL fix, but
to find the full source integration you have to
> Richard, thanks for the pointer to the tests in
> '/usr/sunvts', as this
> is the first I have heard of them. They look quite
> comprehensive.
> I will give them a trial when I have some free time.
> Thanks
> Nigel Smith
>
> pmemtest - Physical Memory Test
> ramtest - Memory DIMMs
> This is a problem for replacement, not creation.
You're talking about solving the problem in the future? I'm talking about
working around the problem today. :) This isn't a fluffy dream problem. I
ran into this last month when an RMA'd drive wouldn't fit back into a RAID5
array. RAIDZ is
Thanks for the comprehensive replies!
I'll need some baby speak on this one though:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk. There may be an
> RFE lurking here, but it might be tricky to correctly
The ZFS version pages (
http://www.google.ca/search?hl=en&safe=off&rlz=1B3GGGL_enCA220CA220&q=+site:www.opensolaris.org+zfs+version
) are undocumented on the main page, as far as I can see.
The root /versions/ directory should be listed on the main ZFS page somewhere,
and contain a list of al
The situation: a raidz array of three 500gb disks. One disk breaks and you replace
it with a new one. But the new 500gb disk is slightly smaller than the
smallest disk in the array.
I presume the disk would not be accepted into the array, because the zpool
replace entry on the zpool man page says
> So that leaves us with a Samba vs NFS issue (not
> related to
> ZFS). We know that NFS is able to create file _at
> most_ at
> one file per server I/O latency. Samba appears better
> and this is
> what we need to investigate. It might be better in a
> way
> that NFS can borrow (maybe through some
On the heels of the LZO compression thread, I bring you a 7zip compression
thread!
Shown here as the open source system with the best compression ratio:
http://en.wikipedia.org/wiki/Data_compression#Comparative
Shown here on a SPARC system with the best compression ratios and good CPU
usage: h
> zfs tries to compress a datablock and if that isn't compressible enough, it
> doesn't store it compressed.
That feature was pointed out to me off-list, and it makes great sense. I had
not heard about that before this thread.
Awesome initiative.
One thing ZFS is missing is the ability to select which files to compress.
Even a simple heuristic like "don't compress mp3,avi,zip,tar files" would yield
a tremendous change in which data is compressed on consumer computers. I don't
know if such a heuristic is planned o
> Intending to experiment with ZFS, I have been
> struggling with what
> should be a simple download routine.
>
> Sun Download Manager leaves a great deal to be
> desired.
>
> In the Online Help for Sun Download Manager there's a
> section on
> troubleshooting, but if it causes *anyone* this
> Is there zfs available in boot with b64 ?
If you are asking if the installer supports installing to a zfs drive, I
believe the answer is still "no" :)
> Personally I would go with ZFS entirely in most cases.
That's the rule of thumb :) If you have a fast enough CPU and enough RAM, do
everything with ZFS. This sounds koolaid-induced, but you'll need nothing else
because ZFS does it all.
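The koolaid in four lines (names made up):

  zpool create tank mirror c0t0d0 c0t1d0
  zfs create tank/home
  zfs set compression=on tank/home
  zfs set quota=50g tank/home

Pooling, redundancy, compression and space management in one tool, with no
volume manager or newfs in sight.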
My second personal rule of thumb concerns RAIDZ perform
Onboard RAID solutions actually do all their work on your CPU, so you won't be
using their RAID features for anything if you use ZFS. You just want them
acting like regular SATA controllers.
Just run the Solaris hardware compatibility thinger (google it), or compare
your hardware to the supported hardware
You could estimate how long it will take for ZFS to get the feature you need,
and then buy enough space so that you don't run out before then.
Alternatively, Linux mdadm DOES support growing a RAID5 array by adding
devices, so you could use that instead.
That's a lot of talking without an answer :)
> internal EIDE 320GB (boot drive), internal
> 250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.
> So, what's the best zfs configuration in this situation?
RAIDZ uses disk space like RAID5. So the best you could do here for redundant
> With zfs, file systems are in many ways more like directories than what
> we used to call file systems. They draw from pooled storage. They
> have low overhead and are easy to create and destroy. File systems
> are sort of like super-functional directories, with quality-of-service
> control and cloning a
I think Benjamin was referring to the image Brian promised to upload, which, I
see now, is up on his web space.
My experience with the vmware image is as follows:
Doing a zpool scrub after booting up causes Solaris to restart about half way
through. After the crash, a zpool status says there i
You've delivered us to awesometown, Brian.
> zfsboot.tar.bz2 is a vmware image made on a VMWare Server 1.0.1
> machine.
But oops, what is the root login password?! :)
> remember that solaris express can only be distributed by authorized parties.
Mmmyeah, I think we'll be fine. Sun is a capable organization and doesn't need
you or me to put a damper on the growth of OpenSolaris. If they have a problem
with something, they'll let us know.
Just waiting on you,
Good deal. We'll have a race to build a VM image, then :)
> Now the original question by MC I believe was about providing a
> VMware and/or Xen image with guest OS being snv_62 with / as zfs.
This is true.
I'm not sure what Jim meant about the host system needing to support zfs.
Maybe you're on a different page, Jim :)
> I will setup a V
If the goal is to test ZFS as a root file system, could I suggest making a
virtual machine of b62-on-zfs available for download? This would reduce
duplicated effort and encourage new people to try it out.
Two conflicting answers to the same question? I guess we need someone to break
the tie :)
> Hello,
>
> I have been reading a lot of good things about Raid-z,
> but before I jump into it I have one unanswered
> question I can't find a clear answer for.
>
> Is it possible to enlarge the initial R
Running RAID5 like that is strongly inadvisable (to the point of "don't
bother"), so doing it with RAIDZ would be a similarly bad idea. You could try
another cheapo/junk controller card to verify whether or not it is a shared
resource problem ;)
> o I've got a modified Solaris miniroot with ZFS
> functionality which
> takes up about 60 MB (The compressed image, which
> GRUB uses, is less
> than 30MB). Solaris boots entirely into RAM. From
> poweron to full
> functionality, it takes about 45 seconds to boot on a
> very modest 1GHz
> C
> > My question is not related directly to ZFS but
> maybe
> > you know the answer.
> > Currently I can run the ZFS Web administration
> > interface only locally - by pointing my browser to
> > https://localhost:6789/zfs/
> > What should be done to enable an access to
> > https://zfshost: