On Wed, Nov 26, 2008 at 07:02:11PM -0500, Miles Nordin wrote:
> (2) The FMA model of collecting telemetry, taking it into
> user-space, chin-strokingly contemplating it for a while, then
> decreeing a diagnosis, is actually a rather limited one. I can
> think of two kinds of limit
>If there is a zfs implementation bug it could perhaps be more risky
>to have five pools rather than one.
Kind of goes both ways. You're perhaps 5 times as likely to wind up with a
damaged pool, but if that ever happens, there's only 1/5 as much data to
restore.
On Wed, 26 Nov 2008, Miles Nordin wrote:
>
> (2) The FMA model of collecting telemetry, taking it into
> user-space, chin-strokingly contemplating it for a while, then
> decreeing a diagnosis, is actually a rather limited one. I can
> think of two kinds of limit:
>
> (a) you're di
> "k" == Krzys <[EMAIL PROTECTED]> writes:
k> It sucks that there is no easy way of getting it other than
k> going around it this way. format -e and changing the label on that
k> disk did not help; I even recreated the partition table and I did
k> make a huge file, I was trying to dd
> "rs" == Ross Smith <[EMAIL PROTECTED]> writes:
> "nw" == Nicolas Williams <[EMAIL PROTECTED]> writes:
rs> I disagree Bob, I think this is a very different function to
rs> that which FMA provides.
I see two problems.
(1) FMA doesn't seem to work very well, and was used as an ex
On Wed, Nov 26, 2008 at 4:28 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
> Richard Catlin wrote:
> > On my laptop, I created a zpool on a 150GB Western Digital USB drive
> called wd149. Upon creation it mounted itself at /wd149.
> >
> > I now moved it to my desktop system and attached it. When I
Richard Catlin wrote:
> On my laptop, I created a zpool on a 150GB Western Digital USB drive called
> wd149. Upon creation it mounted itself at /wd149.
>
> I now moved it to my desktop system and attached it. When I "zpool list",
> the wd149 pool is not recognized (I'm not sure if it should be
On my laptop, I created a zpool on a 150GB Western Digital USB drive called
wd149. Upon creation it mounted itself at /wd149.
I now moved it to my desktop system and attached it. When I "zpool list", the
wd149 pool is not recognized (I'm not sure if it should be automatically
recognized or no
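A pool created on one machine is not picked up automatically on another; it has to be imported. Assuming the desktop sees the USB drive at all, something along these lines should do it (a sketch; the pool name is the one from the post, and -f is only needed because the pool was never exported on the laptop):

    # list pools ZFS can find on attached devices but has not imported yet
    zpool import

    # import the pool by name; it should come back mounted at /wd149
    zpool import wd149

    # if ZFS complains the pool may be in use by another system, force it
    zpool import -f wd149

Running 'zpool export wd149' before unplugging the drive next time avoids the need for -f.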
I suspect ZFS is unaware that anything has changed in the
z_phys so it never gets written out. You probably need
to create a dmu transaction and call dmu_buf_will_dirty(zp->z_dbuf, tx);
Neil.
On 11/26/08 03:36, shelly wrote:
> In place of the padding in the zfs znode I added a new field and stored an intege
Ross wrote:
> Will this be for Sun's xVM Server as well as for ESX?
>
That would be the goal. It will just depend on what features/APIs are
available and when.
-ryan
--
Ryan Arneson
Sun Microsystems, Inc.
303-223-6264
[EMAIL PROTECTED]
http://blogs.sun.com/rarneson
On Wed, Nov 26, 2008 at 04:30:59PM +0100, "C. Bergström" wrote:
> Ok. here's a trick question.. So to the best of my understanding zfs
> turns off write caching if it doesn't own the whole disk.. So what if s0
> *is* the whole disk? Is write cache supposed to be turned on or off?
Actually, ZFS
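As far as I understand it, ZFS only enables the drive's write cache itself when it is given the whole disk (cXtYdZ with no slice); handed s0 it leaves the cache alone even if s0 spans the entire disk, since it has no way to know that. You can still flip the cache on by hand, roughly like this (menu layout from memory and it can differ per drive type, so double-check on your hardware):

    format -e               # then pick the disk from the menu
    format> cache
    cache> write_cache
    write_cache> display    # show whether the write cache is enabled
    write_cache> enable     # turn it on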
On Wed, 26 Nov 2008, Bob Friesenhahn wrote:
> On Wed, 26 Nov 2008, Paul Sobey wrote:
> An important thing to keep in mind is that each vdev offers a "write IOP".
> If you put ten disks in a raidz2 vdev, then those ten disks are providing one
> write IOP and one read IOP. If you use those 10 d
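To make that concrete, here is a rough sketch of the two layouts for the same ten disks (device names are made up): one raidz2 vdev gives roughly eight disks of capacity but about one vdev's worth of random IOPS, while five 2-way mirrors give five disks of capacity and about five vdevs' worth:

    # one 10-disk raidz2 vdev: ~8 disks of space, ~1 vdev of random IOPS
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                             c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

    # five mirror vdevs: ~5 disks of space, ~5 vdevs of random IOPS
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
                      mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
                      mirror c1t8d0 c1t9d0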
On Wed, 26 Nov 2008, Paul Sobey wrote:
>
> We're more worried about the idea of a single 'zfs filesystem' becoming
> corrupt somehow. From what you say below, the pool is the boundary where that
> might happen, not the individual filesystem. Therefore it seems no less
> dangerous creating a singl
On 26 Nov 2008, at 14:37, Darren J Moffat wrote:
> dick hoogendijk wrote:
>> On Wed, 26 Nov 2008 12:51:04 +
>> Chris Ridd <[EMAIL PROTECTED]> wrote:
>>> But what do I do with that swap slice? Should I ditch it and create
>>> an rpool/swap area? Do I still need a boot slice?
>
>
> Depending on
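If the swap slice does get ditched, the usual replacement is a zvol under the root pool, along these lines (a sketch; the 4G size and names are just placeholders):

    # create a volume under the root pool and use it for swap
    zfs create -V 4G rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    swap -l                               # verify it is in use

    # to make it permanent, /etc/vfstab gets a line like:
    # /dev/zvol/dsk/rpool/swap  -  -  swap  -  no  -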
On Wed, 26 Nov 2008, Bob Friesenhahn wrote:
>> 1. Do these kinds of self-imposed limitations make any sense in a zfs
>> world?
>
> Depending on your backup situation, they may make just as much sense as
> before. For zfs this is simply implemented by applying a quota to each
> filesystem in th
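As a concrete sketch (pool and dataset names are hypothetical), the old 500GB cap simply becomes a quota property on each filesystem, and it can be raised later without touching the layout:

    # one filesystem per share, each capped at 500 GB
    zfs create tank/proj1
    zfs set quota=500g tank/proj1

    # check usage against the cap, and raise it when needed
    zfs list -o name,used,available,quota tank/proj1
    zfs set quota=750g tank/proj1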
On Wed, 26 Nov 2008, Paul Sobey wrote:
>
> Pointers to additional info are most welcome!
>
> 1. Do these kinds of self-imposed limitations make any sense in a zfs
> world?
Depending on your backup situation, they may make just as much sense
as before. For zfs this is simply implemented by applyi
On 26-Nov-08, at 10:30 AM, C. Bergström wrote:
> ... Also is it more efficient/better
> performing to give swap a 2nd slice on the inner part of the disk
> or not
> care and just toss it on top of zfs?
I think the thing about swap is that if you're swapping, you probably
have more to worry a
dick hoogendijk wrote:
> On Wed, 26 Nov 2008 15:29:50 +0100
> "C. Bergström" <[EMAIL PROTECTED]> wrote:
>
>
>> To clear up some confusion..
>> This is from a default indiana install
>> format -e
>> verify..
>> Part      Tag    Flag     Cylinders        Size            Blocks
>> 0       root
Darren J Moffat wrote:
> dick hoogendijk wrote:
>
>> On Wed, 26 Nov 2008 12:51:04 +
>> Chris Ridd <[EMAIL PROTECTED]> wrote:
>>
>>
>>> I'm replacing the disk with my rpool with a mirrored pool, and
>>> wondering how best to do that.
>>>
>>> The disk I'm replacing is partitioned with r
On Wed, 26 Nov 2008 14:37:21 +
Darren J Moffat <[EMAIL PROTECTED]> wrote:
> dick hoogendijk wrote:
> > I've never seen a ZFS system on separate slices. Slices are things
> > from the past ;-)
>
> Unfortunately not the case for ZFS pools that are to be booted from.
> This is because we can't
On Wed, 26 Nov 2008 15:29:50 +0100
"C. Bergström" <[EMAIL PROTECTED]> wrote:
> To clear up some confusion..
> This is from a default indiana install
> format -e
> verify..
> Part      Tag    Flag     Cylinders        Size            Blocks
> 0       root    wm       262 - 19453      147.02GB    (
> CS> Suppose that you have a SAN environment with a lot of LUNs. In the
> CS> normal course of events this means that 'zpool import' is very slow,
> CS> because it has to probe all of the LUNs all of the time.
>
> CS> In S10U6, the theoretical 'obvious' way to get around this for your
> CS> SAN
Hi,
maybe this [1] will help you. For more information, also read the linked
blog [2].
HTH
[1] http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
[2] http://malsserver.blogspot.com/2008/08/mirroring-resolved-correct-way.html
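For the archives, the procedure in those links presumably boils down to something like this (a sketch with made-up device names, for an x86 box with the usual SMI-labelled boot disk; SPARC uses installboot rather than installgrub):

    # copy the partition table from the current rpool disk to the new one
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

    # attach the new slice so rpool becomes a mirror, then wait for resilver
    zpool attach rpool c0t0d0s0 c0t1d0s0
    zpool status rpool

    # make the second disk bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0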
On Wed, 26 Nov 2008, Paul Sobey wrote:
> Hello,
>
> We have a new Thor here with 24TB of disk in (first of many, hopefully).
> We are trying to determine the best practices with respect to file system
> management and sizing. Previously, we have tried to keep each file system
> to a max size of 50
dick hoogendijk wrote:
> On Wed, 26 Nov 2008 12:51:04 +
> Chris Ridd <[EMAIL PROTECTED]> wrote:
>
>> I'm replacing the disk with my rpool with a mirrored pool, and
>> wondering how best to do that.
>>
>> The disk I'm replacing is partitioned with root on s0, swap on s1
>> and boot on s8, whi
Hello,
We have a new Thor here with 24TB of disk in (first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single
Chris Ridd wrote:
> On 26 Nov 2008, at 13:12, dick hoogendijk wrote:
>
>
>> On Wed, 26 Nov 2008 12:51:04 +
>> Chris Ridd <[EMAIL PROTECTED]> wrote:
>>
>>
>>> I'm replacing the disk with my rpool with a mirrored pool, and
>>> wondering how best to do that.
>>>
>>> The disk I'm replacing
On 26 Nov 2008, at 13:12, dick hoogendijk wrote:
> On Wed, 26 Nov 2008 12:51:04 +
> Chris Ridd <[EMAIL PROTECTED]> wrote:
>
>> I'm replacing the disk with my rpool with a mirrored pool, and
>> wondering how best to do that.
>>
>> The disk I'm replacing is partitioned with root on s0, swap on
On Wed, 26 Nov 2008 12:51:04 +
Chris Ridd <[EMAIL PROTECTED]> wrote:
> I'm replacing the disk with my rpool with a mirrored pool, and
> wondering how best to do that.
>
> The disk I'm replacing is partitioned with root on s0, swap on s1
> and boot on s8, which is what the original 2008.05 i
I'm replacing the disk with my rpool with a mirrored pool, and
wondering how best to do that.
The disk I'm replacing is partitioned with root on s0, swap on s1 and
boot on s8, which is what the original 2008.05 installer created for
me. I've partitioned the new disk in the same way and am no
Hello Chris,
Tuesday, November 25, 2008, 11:19:36 PM, you wrote:
CS> Suppose that you have a SAN environment with a lot of LUNs. In the
CS> normal course of events this means that 'zpool import' is very slow,
CS> because it has to probe all of the LUNs all of the time.
CS> In S10U6, the theore
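One way people work around the probe-everything behaviour is to point the import at a directory that contains links to only the devices backing the pool, via -d (a sketch; the directory and device names are made up):

    # a directory holding links to just the LUNs that belong to this pool
    mkdir /mypool-devs
    ln -s /dev/dsk/c6t600A0B8000123456d0s0 /mypool-devs/

    # the import then searches only that directory instead of every LUN
    zpool import -d /mypool-devs mypool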
In place of the padding in the zfs znode I added a new field and stored an
integer value, and I am able to see the saved information.
But after a reboot it is not there. If I was able to access it before the
reboot, it must have been in memory. I think I need to save it to disk.
How does one force a zfs znode to disk?
Right n
Jens Elkner wrote:
> On Tue, Nov 25, 2008 at 06:34:47PM -0500, Richard Morris - Sun Microsystems -
> Burlington United States wrote:
>
>>option to list all datasets. So 6734907 added -t all which produces the
>>same output as -t filesystem,volume,snapshot.
>>1. http://bugs.opensolaris
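In other words, once 6734907 is in, the two invocations below should produce identical output:

    # long form: every dataset type spelled out
    zfs list -t filesystem,volume,snapshot

    # shorthand added by 6734907
    zfs list -t all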
On Tue, 2008-11-25 at 18:34 -0500, Richard Morris - Sun
Microsystems - Burlington United States wrote:
> On 11/25/08 16:41, Paweł Tęcza wrote:
> >
> > Hi Rich,
> >
> > Thanks a lot for your feedback! I was thinking that `zfs list` thread
> > is already dead ;)
> >
> > The syntax above
On Tue, 2008-11-25 at 23:10 +, Tim Foster wrote:
> Paweł Tęcza wrote:
> > On Tue, 2008-11-25 at 23:16 +0100, Paweł Tęcza wrote:
> >
> >> Also I'm very curious whether I can configure Time Slider to take a
> >> backup every 2 or 4 or 8 hours, for example.
> >
> > Or set the
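If I remember Tim's zfs-auto-snapshot service correctly, each Time Slider schedule is just an SMF instance with an interval/period property, so an every-4-hours schedule would look roughly like this (the zfs/period property name is from memory, so check the svcprop output first):

    # see what the hourly instance currently uses
    svcprop svc:/system/filesystem/zfs/auto-snapshot:hourly | grep zfs/

    # assumed property name: stretch the hourly schedule to every 4 hours
    svccfg -s svc:/system/filesystem/zfs/auto-snapshot:hourly setprop zfs/period = 4
    svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:hourly
    svcadm restart svc:/system/filesystem/zfs/auto-snapshot:hourly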