As for source, here you go :)
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650
I'm not an expert, but for what it's worth:
1. Try the original system. It might be a fluke, a bad cable, or anything else intermittent. I've seen it happen here. If so, your pool may be alright.
2. For the (defunct) originals, I'd say we'd need to look into the sources to find out if something n
Umm, why do you need to do it the complicated way? Here it is from the zpool man page:
zpool replace [-f] pool old_device [new_device]
Replaces old_device with new_device. This is equivalent
to attaching new_device, waiting for it to resilver, and
then detaching old_device.
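So in practice it's a single command; a hypothetical example (pool and device names made up):

   zpool replace tank c1t2d0 c1t5d0   # attach c1t5d0, resilver, then detach c1t2d0
   zpool status -v tank               # watch the resilver progress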
Seriously, if I had that many in the field I'd ring my support rep directly.
Getting one step wrong from instructions provided on a forum might mean that
you'd have to spend quite a long time fixing every one (or worse, re-installing)
one by one from scratch!
Get a support guy to walk you through this.
Can't help with recovering your data, but I can shed some light on how this may
have happened; it's in another old thread.
This problem may happen if ZFS thought that the data had been written but it
wasn't! It can happen in a virtual machine environment, as the VM has to go through host
OS buffers which ma
Hi Gray,
You've got a nice setup going there, a few comments:
1. Do not tune ZFS without a proven test case to show otherwise, except...
2. For databases: tune the recordsize of that particular FS to match the DB record size.
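As a rough sketch (the pool/FS names and the 8K figure are just examples; check what your DB actually uses):

   zfs create tank/oradata
   zfs set recordsize=8k tank/oradata   # match the DB block/page size
   zfs get recordsize tank/oradata      # verify; applies to newly written files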
A few questions...
* How are you divvying up the space?
* How are you taking car
Just a random spectator here, but I think the artifacts you're seeing are not
due to file size, but rather due to record size.
What is the ZFS record size?
On a personal note, I wouldn't do non-concurrent (?) benchmarks. They are at
best useless and at worst misleading for ZFS.
- Akhilesh.
Hi,
My setup is arguably smaller than yours, so YMMV:
Key point: I have found that using the infrastructure provided natively by
Solaris/ZFS is the best choice.
I have been using CIFS... it's unpredictable when some random Windows machines
will stop seeing the shares. XP/Server 2003/Vista - Too many
you need to run /usr/bin/amd64/ls
Some utils, e.g. VirtualBox shared folders in an old build, munge file dates.
> I don't doubt the superiority of LaTex/Framemaker in
> conjunction with Distiller in producing (the pdf
> versions of) nicely typeset books and brochures. But
> how good is a tool if it produces a product that its
> intended users can NOT read? This is what prompted
>
You seem to have missed
Waynel,
It takes a significant amount of work to typeset any large document, especially
if it is a technical document in which you have to adhere to a set of strict
typesetting guidelines. In these cases separation of content and style is
essential and can't be stressed enough.
Word Processors h
I doubt it. Star/OpenOffice are word processors... and like Word they are not
suitable for typesetting documents.
SGML, FrameMaker & TeX/LaTeX are the only ones capable of doing that.
> Btrfs does not suffer from this problem as far as I
> can see because it
> uses reference counting rather than a ZFS-style dead
> list. I was just
> wondering if ZFS devs recognize the problem and are
> working on a
> solution.
Daniel,
Correct me if I'm wrong, but how does reference counting s
> Welcome to font hell :-(. For many years, Sun
> documentation was written
> in the Palatino font, which is (or was?) not freely
> available. I believe
Umm, no. PDF supports font embedding. This is how so many PDFs are out there
(company brochures, fliers etc.) with commercial fonts, and they loo
Evince likes to fuzz a number of PDFs. I too can't seem to nail down the problem,
but it seems that a number of PDFs from Sun have this problem (very wrong
character spacing), and they have all been generated using FrameMaker. PDFs
generated using TeX/LaTeX are *usually* OK.
> On Monday 14 July 2008 08:29, Akhilesh Mritunjai
> wrote:
> > Writable snapshots are called "clones" in zfs. So
> infact, you have
> > trees of snapshots and clones. Snapshots are
> read-only, and you can
> > create any number of "writable" clone
Still reading, but would like to correct one point.
> * It would seem that ZFS is deeply wedded to the
> concept of a single,
> linear chain of snapshots. No snapshots of
> snapshots, apparently.
>http://blogs.sun.com/ahrens/entry/is_it_magic
Writable snapshots are called "clones" in zfs. So
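In practice that's just (names made up):

   zfs snapshot tank/ws@monday            # read-only point-in-time copy
   zfs clone tank/ws@monday tank/ws-exp   # writable filesystem backed by the snapshot
   zfs promote tank/ws-exp                # optional: clone no longer depends on its origin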
Hi
I had a quick look. Looks great!
A suggestion - from the given example, I think the API could be made more "pythonic".
Python is dynamically typed and properties can be looked up dynamically too.
Thus, instead of prop_get_* we could have:
1. prop(): a generic function, returning "typed" arguments. Th
> Well, I'm not holding out much hope of Sun working
> with these suppliers any time soon. I asked Vmetro
> why they don't work with Sun considering how well ZFS
> seems to fit with their products, and this was the
> reply I got:
>
> "Micro Memory has a long history of working with Sun,
> and I w
This shouldn't have happened. Do you have zdb on the Mac? If so, you can try it.
It is (intentionally?) undocumented, so you'll need to search for various
scripts on blogs.sun.com and here. Something might just work. But do check what
Apple is actually shipping. You may want to use dtrace to find o
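A couple of invocations that are usually worth trying first (treat the flags as "works on the builds I've seen"; the device path is made up):

   zdb -l /dev/disk1s2   # dump the vdev labels from a device
   zdb -C                # print the cached pool configuration(s)
   zdb -e mypool         # poke at a pool that isn't currently imported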
Hi
I too strongly suspect that some HW component is failing. It is rare to see all
drives (in your case both drives in the mirror and the boot drive) reporting errors
at the same time.
"zpool clear" just resets the error counters. You've still got errors in there.
Start with the following components (in t
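Roughly this sequence (pool name is a placeholder):

   zpool status -v tank   # persistent error log: per-device counters, affected files
   zpool clear tank       # reset the READ/WRITE/CKSUM counters (after fixing the HW!)
   zpool scrub tank       # re-read everything and verify against checksums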
> Thanks for your comments. FWIW, I am building an
> actual hardware array, so even though I _may_ put ZFS
> on top of the hardware arrays 22TB "drive" that the
> OS sees (I may not) I am focusing purely on the
> controller rebuild.
Not letting ZFS handle (at least one level of) redundancy is a ba
Can't say about /var/log, but I have a system here with /var on zfs.
My assumption was that, not just /var/log, but essentially all of /var is
supposed to be "runtime cruft", and so can be treated equally.
I feel I'm being misunderstood.
RAID - "Redundant" Array of Inexpensive Disks.
I meant to state: let ZFS deal with redundancy.
If you want to have an "AID", by all means have your "RAID" controller do all
kinds of striping/mirroring it can to help with throughput or ease of managing
drive
> I'll probably be having 16 Seagate 15K5 SAS disks,
> 150 GB each. Two in HW raid1 for the OS, two in HW
> raid 1 or 10 for the transaction log. The OS does not
> need to be on ZFS, but could be.
Whatever you do, DO NOT mix ZFS and HW RAID.
ZFS likes to handle redundancy all by itself. It's mu
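That is, export the disks as plain LUNs/JBOD and build the redundancy in the pool itself, something like (device names made up):

   zpool create tank \
       mirror c2t0d0 c2t1d0 \
       mirror c2t2d0 c2t3d0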
If there was no redundancy configured in zfs then you're mostly toast. RAID is
no protection against data errors, as the zfs guys have said and as you just
discovered.
I think your only option is to somehow set up a recent build of OpenSolaris
(05/08 or SXCE), configure it to not panic on checksu
> On May 18, 2008, at 14:01, Mario Goebbels wrote:
> ZFS on Linux on
> humper would actually be very interesting to many of
> them. I think
> that's good for Sun. Of course, ZFS on Linux on
Umm, how many Linux shops buy support and/or HW from Sun?
If it's a Linux shop, money is (in order)
Hi
Is it possible to see what changed between two snapshots (efficiently)?
I tried to take a look at what "zfs send -i" does, and I found that it operates at
a very low (DMU) level and basically dumps the blocks.
Any pointers on extracting inode info from this stream, or otherwise?
- mritun
From the bug description, it's actually not pool corruption; rather, the error
handling is not comprehensive. Your data is fine, you just need to upgrade to
snv_77+ or S10u5 for the fix.
- mritun
New, yes. Aware - probably not.
That given cheap filesystems users would create "many" filesystems was an easy
guess, but I somehow don't think anybody envisioned that users would be
creating tens of thousands of filesystems.
ZFS - too good for its own good :-p
I remember reading a discussion where these kinds of problems were discussed.
Basically it boils down to "everything" not being aware of the radical changes
in the "filesystems" concept.
All these things are being worked on, but it might take some time before
everything is made aware that yes, it's no
Most probable culprit (close, but not identical stacktrace):
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218
Fixed since snv60.
Hi Ben
Not that I know much, but while monitoring the posts I read some time ago
that there was a bug/race condition in the slab allocator which results in a panic on
double free (ss != NULL).
I think the zpool is fine but your system is tripping on this bug. Since it is
snv_43, I'd suggest upgrading
USB2 giving you ~30MB/s is normal... a little better than mine (on Windows -
~25MB/s), actually.
For better performance, switch to eSATA or FireWire. Even FW400 will give
you better results than USB, as there is less overhead.
However, I'm sure I saw some FW+ZFS related bug in the bug DB som
Yes it will work, and quite nicely indeed. But you need to be careful.
Currently ZFS mounting is not "instantaneous"; if you have, say, 3
users, you might be in for a rude surprise as the system takes its own merry time (~
a few hrs) mounting them at the next reboot. Even with the automounter, things won
> SUMMARY:
> 1) Why the difference between pool size and fs
> capacity?
With zfs take df output with a grain of salt -- add more if compression is
turned on.
ZFS being quite complicated, it seems only an "approximate" free space is
reported, which won't be too wrong and would suffice for the pu
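The different tools also report different things, so don't expect the numbers to line up (pool name is a placeholder):

   zpool list tank    # raw pool size, before parity/metadata overhead
   zfs list -r tank   # usable space as ZFS itself accounts for it
   df -h /tank        # the legacy view; off even further with compression on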
OpenSolaris builds are like "development snapshots"... they're not a release and
thus there are no patches.
SXCE is just a binary build from these snapshots... it's there as a convenience
only, and "patches" are applied like in every other development project... by
updating from the source repository,
Hi Folks
I believe word will have gone around already: Google engineers have
published a paper on disk reliability. It might supplement the ZFS FMA
integration and, well, all the numerous debates on spares etc. over here.
To quote /.:
"The Google engineers just published a paper on
Oh yep, I know that "churning" feeling in the stomach that there's got to be a
GOTCHA somewhere... it can't be *that* simple!
ZFS Rule #0: You gotta have redundancy
ZFS Rule #1: Redundancy shall be managed by zfs, and zfs alone.
Whatever you have, junk it. Let ZFS manage mirroring and redundancy. ZFS
doesn't forgive even single bit errors!
> So, does anyone know if I can run ZFS on my iPhone?
> ;-)
> -- richard
Hi Richard
Thanks for your interest in running ZFS, the final word in filesystems, on your
iPhone.
I'd be happy to help you. Please send the iPhone to me at the address provided
below and I shall get you going as fast a
OK, now this takes the "Most egregiously creative misuse of ZFS" award :-)
I doubt ZFS can help if badblocks "didn't work". It would help to know what the
problem with it was, but generally a destructive test reveals a lot.
OTOH, you can also do better by writing a small program which writes rand
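Something along these lines - a throwaway sketch of such a write-and-verify pass, assuming the disk's contents are expendable and with a made-up device name (this WIPES the region it touches):

   DISK=/dev/rdsk/c2t0d0p0
   dd if=/dev/urandom of=/tmp/pattern bs=1024k count=256   # reference data
   dd if=/tmp/pattern of=$DISK bs=1024k count=256          # write it to the disk
   dd if=$DISK of=/tmp/readback bs=1024k count=256         # read it back
   digest -a md5 /tmp/pattern /tmp/readback                # the two digests must match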
Hi
I'd recommend going over the ZFS presentation. One of the points they listed
was that even in the case of silent errors (like you noticed), other systems just
go on. Your data gets silently corrupted and you'd never notice it. If there
are a few bit flips in JPEGs and movie files, it will almost
Excuse me if I'm mistaken, but I think the question is along the lines of how to
access and, more importantly, back up ZFS pools/filesystems present on a system
by just booting from a CD/DVD.
I think the answer would be along the lines of a (forced?) import of the ZFS pools
present on the system, and then
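i.e. roughly (pool/FS names and the target host are made up):

   zpool import                 # from the live CD: list pools found on the attached disks
   zpool import -f tank         # force it, since the pool wasn't cleanly exported
   zfs snapshot -r tank@backup
   zfs send tank/home@backup | ssh user@otherhost "cat > /backup/home.zfs"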
> > Yuen L. Lee wrote:
> opensolaris could be a nice NAS filer. I posted
> my question on "How to build a NAS box" asking for
> instructions on how to build a Solaris NAS box.
> It looks like everyone is busy. I haven't got any
> response yet. By any chance, do you have any
Hi Yuen
May I suggest
> zpool status
> # uncomment the following lines if you want to see
> the system think
> # it can still read and write to the filesystem after
> the backing store has gone.
Hi
The UNIX unlink() syscall doesn't remove the inode if it's in use. It's marked to be
unlinked when its use count falls to zero.
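A quick way to see it from the shell - keep a descriptor open, unlink the file, and the data stays reachable until that last reference goes away:

   echo "still here" > /tmp/demo
   exec 3< /tmp/demo   # fd 3 now holds a reference to the inode
   rm /tmp/demo        # directory entry gone, inode not freed yet
   cat <&3             # prints "still here" - blocks are still allocated
   exec 3<&-           # last reference dropped, now the inode is freed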
Hi,
Like Matt said, unless there is a bug in the code, ZFS should automatically
figure out the drive mappings. The real problem as I see it is using 16 drives in a
single raidz... which means if two drives malfunction, you're out of luck.
(raidz2 would survive 2 drive failures... but still I believe 16 dri
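For 16 drives I'd rather split them into two raidz2 vdevs so each group can lose two disks, e.g. (device names made up):

   zpool create tank \
       raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
       raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0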