On 12/14/12 10:07 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
Is that right? You can't use zfs send | zfs receive to send from a newer
version and receive on an older version?
No. You can, with recv, override any property in the sending stream that can be
set from t
On Dec 13, 2012, at 12:54 PM, Jan Owoc wrote:
> On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton wrote:
>> On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote:
>>> Yes, that is correct. The last version of Solaris with source code
>>> used zpool version 28. This is the last version
That's what I did yesterday :)
Bob
Sent from my iPhone
On Dec 13, 2012, at 12:54 PM, Jan Owoc wrote:
> On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton wrote:
>> On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote:
>>> Yes, that is correct. The last version of Solaris with source code used zpool version 28.
That is a touch misleading. This has always been the case since S10u2. You
have to create the pool AND the file systems at the oldest versions you want to
support.
I maintain a table of pool and version numbers on my blog (blogs.oracle.com/bobn)
for this very purpose. I got lazy the other
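To make the "create the pool AND the file systems at the oldest versions" advice concrete, here is a minimal sketch. The pool name, device, and version numbers are only examples, and it assumes the build's zpool create accepts -O and that zfs create accepts -o version= (which appears to be what the "-o is your best friend" remark further down refers to):

# zpool create -o version=28 -O version=5 tank c1t1d0
# zfs create -o version=5 tank/data
# zpool get version tank
# zfs get -r version tank

Anything created without the explicit version comes up at the newest version the running build supports, which is exactly what makes it unimportable or unmountable on older releases.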
I'll agree with Bob on this. A specific use case is a VirtualBox server
hosting lots of guests. I even made a point of mentioning this tunable in the
Solaris 10 Virtualization Essentials section on vbox :)
There are several other use cases as well.
Bob
Sent from my iPad
On May 17, 20
zhihui Chen wrote:
I have created a pool on external storage with B114. Then I export this pool
and import it on another system with B110. But this import will fail and show
error: cannot import 'tpool': pool is formatted using a newer ZFS version.
Any big change in ZFS with B114 leads to this co
since I am trying to keep my pools at a version that different updates
can handle, I personally am glad it did not get rev'ed. I did get into
trouble recently that SX-CE 112 created a file system on an old pool
with a version newer than Solaris 10 likes :(
-o is your best friend ;-)
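For reference, the per-build ceilings are easy to check from the command line; a small sketch with an example pool name:

# zpool upgrade -v     (pool versions this build supports)
# zfs upgrade -v       (file system versions this build supports)
# zpool get version tpool
# zfs get -r version tpool

Comparing those numbers across the builds you multi-boot is usually enough to spot the dataset that will refuse to come up on the older release.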
Bob Doolittle wrote:
Blake wrote:
You need to use 'installgrub' to get the right boot bits in place on
your new disk.
I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
Is it OK to run this before resilvering has completed?
You need
> Multiple pools on one server only makes sense if you are going to have
> different RAS for each pool for business reasons. It's a lot easier to
> have a single pool though. I recommend it.
A couple of other things to consider to go with that recommendation.
- never build a pool larger than y
Bob is right. Less chance of failure perhaps but also less
protection. I don't like it when my storage lies to me :)
Bob
Sent from my iPhone
On Feb 27, 2009, at 12:48 PM, Bob Friesenhahn wrote:
On Fri, 27 Feb 2009, Blake wrote:
Since ZFS is trying to checksum blocks, the fewer abstract
> I am a bit slow today. It seems like a dying drive should be replaced
> ASAP.
Completely agree with Bob on this. I drive an 8,000 lb truck and the
tires have industrial strength runflats. If I get a puncture or tear
in a tire I replace it as soon as I can, not when it is convenient.
The r
Jeff Bonwick wrote:
> On Sat, Dec 13, 2008 at 04:44:10PM -0800, Mark Dornfeld wrote:
>> I have installed Solaris 10 on a ZFS filesystem that is not mirrored. Since
>> I have an identical disk in the machine, I'd like to add that disk to the
>> existing pool as a mirror. Can this be done, and if
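Jeff's answer is cut off here, so take this as the commonly used recipe rather than his exact reply: attach the second disk to the existing root disk (which turns the single vdev into a mirror and starts a resilver), then make the new disk bootable. Device names below are examples, slice and label details vary, and on SPARC the GRUB step is installboot instead:

# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# zpool status rpool

Wait for zpool status to report the resilver complete before counting on the second disk.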
> This argument can be proven by basic statistics without need to resort
> to actual testing.
Mathematical proof <> reality of how things end up getting used.
> Luckily, most data access is not completely random in nature.
Which was my point exactly. I've never seen a purely mathematical
model
> In other words, for random access across a working set larger (by say X%)
> than the SSD-backed L2 ARC, the cache is useless. This should asymptotically
> approach truth as X grows and experience shows that X=200% is where it's
> about 99% true.
>
Ummm, before we throw around phrases like
On Thu, 2008-11-06 at 19:54 -0500, Krzys wrote:
> When the copies property is set to a value greater than 1, how does it work?
> Will it store the second copy of the data on a different disk, or does it
> store it on the same disk? Also when this setting is changed at some point
> on a file system, will i
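Krzys' message is truncated above; for what it is worth, copies is set per dataset and only affects blocks written after the change, and while ZFS will spread the extra copies across different disks when the pool has more than one, that is best effort rather than a mirroring guarantee. A tiny sketch with made-up names:

# zfs set copies=2 tank/important
# zfs get copies tank/important

Existing data keeps its single copy until it is rewritten.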
> Bob, is there any specific reason why you suggest the creation of a
> bunch of zfs datasets up front?
Absolutely. ZFS filesystems created on too new a Nevada/OpenSolaris build
will not be mountable on Solaris 10. One of the SMF services,
filesystem/local perhaps, will go into maintenance and you
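Bob's message is cut off, but the failure mode is easy to recognize: the mounts fail, the SMF service goes into maintenance, and svcs points you at the log. A quick sketch (the FMRI is the stock Solaris 10 one, the pool name is an example):

# svcs -x svc:/system/filesystem/local
# zfs get -r version tank

Any dataset whose version is newer than what the running release supports is the one refusing to mount.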
On Thu, 2008-08-07 at 09:16 -0700, Daniel Templeton wrote:
> Is there a way that I can add the disk to a ZFS pool and have
> the ZFS pool accessible to all of the OS instances? I poked through the
> docs and searched around a bit, but I couldn't find anything on the topic.
Yes. I do that all o
soren wrote:
> ZFS has detected that my root filesystem has a small number of errors. Is
> there a way to tell which specific files have been corrupted?
>
After a scrub a zpool status -v should give you a list of files with
unrecoverable errors.
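In command form, with an example pool name:

# zpool scrub rpool
# zpool status -v rpool

The -v listing names the damaged files where the path can still be resolved, and falls back to dataset/object numbers where it cannot.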
Bob
On Sun, 2008-08-03 at 20:46 -0700, Rahul wrote:
> hi
> can you give some disadvantages of the ZFS file system??
In what context? Relative to what?
Bob
> We haven't had any "real life" drive failures at work, but at home I
> took some old flaky IDE drives and put them in a pentium 3 box running
> Nevada.
Similar story here. Some IDE and SATA drive burps under Linux (and
please don't tell me how wonderful Reiser4 is - 'cause it's banned in
this
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
> Hey folks,
>
> I guess this is an odd question to be asking here, but I could do with some
> feedback from anybody who's actually using ZFS in anger.
ZFS in anger? That's an interesting way of putting it :-)
> but I have some real concerns abo
> I want to
> start testing out ZFS boot and zfs allow to minimize the delay between the
> release of U6 and my production deployment.
Good observation. I mention this in every Solaris briefing that I do.
Get some stick time with this capability using SXCE or OpenSolaris so
that you can reduce
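On the zfs allow side, a small taste of delegated administration to practice with; user and dataset names are made up:

# zfs allow webadm snapshot,clone,create,mount tank/web
# zfs allow tank/web
# zfs unallow webadm create tank/web

The bare "zfs allow <dataset>" form prints the delegations currently in effect.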
On Mon, 2008-03-10 at 15:19 +0800, Wee Yeh Tan wrote:
> Bob,
>
> Are you sure that /pandora is mounted?
Now that you ask, not sure. It shows as mounted but there is no
data in there other than the mountpoints for the other child
filesystems.
> zpool:pandora when /pandora is not empty. I n
Multi-boot system (s10u3, s10u4, and nevada84) having problems
mounting ZFS filesystems at boot time. The pool is s10u3,
as are most of the filesystems. A few of the filesystems
are nevada83.
# zfs mount -a
cannot mount '/pandora': directory is not empty
# zfs list -o name,mountpoint
NAME
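The "directory is not empty" part usually means something was written into /pandora while the dataset was not mounted. Two things worth trying; the -O overlay-mount flag is assumed to be available on the builds involved:

# ls -lA /pandora
# zfs mount -O pandora

ls shows what is sitting in the underlying directory, and the overlay mount puts the dataset on top of it so the rest of the hierarchy can come up.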
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
> I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
> around 5 seconds. I assume it's just something to do with caching?
Yep - the ZFS equivalent of fsflush. Runs more often so the pipes don't
get as clogged. We've ha
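Those ~5 second bursts are easy to watch with one-second samples; the pool name is an example:

# zpool iostat tank 1

The write column sits near zero and then spikes each time a transaction group is pushed out.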