I want to create a separate, shared, read/write ZFS home partition on a
tri-boot OpenSolaris, Ubuntu, and CentOS system. I have successfully created
and exported the zpools that I would like to use in Ubuntu, using zfs-fuse.
However, when I boot into OpenSolaris and type zpool import with no opt
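For what it's worth, a rough sketch of the export/import dance between zfs-fuse and OpenSolaris, assuming a pool named homepool (a placeholder) on devices both systems can see:

# in Ubuntu (zfs-fuse), before rebooting
zpool export homepool

# in OpenSolaris; -d points the scan at a specific device directory (the default is /dev/dsk)
zpool import
zpool import -d /dev/dsk homepool

A pool created with a newer zpool version than the importing OS supports cannot be imported at all, so comparing 'zpool upgrade -v' output on both systems is also worth a look.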
Can I ask why we need to use -c or -d at all? We already have -r to
recursively list children, can't we add an optional depth parameter to that?
You then have:
zfs list : shows current level (essentially -r 0)
zfs list -r : shows all levels (infinite recursion)
zfs list -r 2 : shows 2 levels of
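A quick sketch of the proposed semantics next to what already works, with a made-up pool name:

zfs list tank          # today: just tank (children only when -r is given)
zfs list -r tank       # today: tank and every descendant
zfs list -r 2 tank     # proposed: tank plus two levels below it

(For what it's worth, the depth idea discussed in this thread also shows up as a separate -d flag in later bits, e.g. 'zfs list -d 1 tank' for direct children only, but that is an aside rather than what's being asked here.)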
I have a 3-disk raidz configuration; one disk reports lots of errors, so I
decided to replace it.
At the same time I replaced the system disk and reinstalled the system (the same
version).
When I try to import mypool, I get an I/O error.
What can I do to import it and replace the new disk (c2d0) with
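In case it helps anyone searching the archives later, the usual sequence for this kind of situation looks roughly like the following, assuming the import itself can be made to work (the old-disk name is a placeholder, and -f is only safe if no other host still has the pool imported):

zpool import -f mypool                  # force the import despite the missing export
zpool status -v mypool                  # identify the faulted device
zpool replace mypool <old-disk> c2d0    # resilver onto the replacement disk
zpool status mypool                     # watch resilver progress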
OMG, Rich, that did help and solved all my confusion and now I can go to
sleep...
So now I have to consider Sun and EMC vs Intel in my home $ spending?!
Forget it, Lenovo it is!
at least my folks get a cut.
Goodnight!
best,
z
- Original Message -
From: "Richard Elling"
To: "Scott Lai
Scott Laird wrote:
> Today? Low-power SSDs are probably less reliable than low-power hard
> drives, although they're too new to really know for certain. Given
> the number of problems that vendors have had getting acceptable write
> speeds, I'd be really amazed if they've done any real work on
>
On Thu, Jan 8, 2009 at 5:54 PM, gnomad wrote:
> Ok, I'm going to reply to my own question here. After a few hours of
> thinking, I believe I know what is going on.
>
> I am seeing the initial high network throughput as the 4GB of RAM in the
> server fills up with data. In fact, in this case, I
It's a 2GB filesystem, just for testing.
I waited about half an hour yesterday, but it imported successfully in only 20s
when I re-tried today.
Meanwhile, ZFS didn't find any disk issue (per the demo, it should).
On Thu, Jan 8, 2009 at 6:03 PM, Carsten Aulbert
wrote:
> Hi
>
> Qin Ming Hua wrote:
> > ba
You can edit the /etc/user_attr file.
Sent from my iPhone
On Jan 9, 2009, at 11:13 AM, noz wrote:
>> To do step no. 4, you need to log in as root, or create a
>> new user whose
>> home dir is not under export.
>>
>> Sent from my iPhone
>>
>
> I tried to login as root at the login screen but it wouldn't let m
> To do step no. 4, you need to log in as root, or create a
> new user whose
> home dir is not under export.
>
> Sent from my iPhone
>
I tried to log in as root at the login screen but it wouldn't let me; some error
about roles. Is there another way to log in as root?
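To spell out the /etc/user_attr suggestion: on OpenSolaris 2008.11 root is configured as a role, which is why the login screen refuses it. A hedged sketch of the edit (back the file up first; the exact attribute list on your system may differ, so only the type=role part should go):

# before: root is a role and cannot log in directly
root::::type=role;auths=...;profiles=...
# after: drop type=role and leave everything else on the line as it was
root::::auths=...;profiles=...

Creating a user whose home directory is not under /export, as suggested above, avoids touching root at all.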
On Thu 08/01/09 20:36 , kavita kavita_kulka...@qualexsystems.com sent:
> What exactly does the zfs_space function do?
> The comments suggest it allocates and frees space in a file. What does this
> mean? And through what operation can I invoke this function? For example,
> whenever I edit/write to a file, z
To do step no. 4, you need to log in as root, or create a new user whose
home dir is not under export.
Sent from my iPhone
On Jan 9, 2009, at 10:10 AM, noz wrote:
> Kyle wrote:
>> So if preserving the home filesystem through
>> re-installs are really
>> important, putting the home filesystem in a separ
On Thu, 08 Jan 2009 17:29:10 -0800
Dave Brown wrote:
> S,
>Are you sure you have MPXIO turned on? I haven't dealt with
> Solaris for a while (will again soon as I get some virtual servers
> setup) but in the past you had to manually turn it on. I believe the
> path was /kernel/drv/scsi_vhci
Kyle wrote:
> So if preserving the home filesystem through
> re-installs is really
> important, putting the home filesystem in a separate
> pool may be in
> order.
My problem is similar to the original thread author's, and this scenario is exactly
the one I had in mind. I figured out a workable solu
S,
Are you sure you have MPXIO turned on? I haven't dealt with Solaris
for a while (will again soon, once I get some virtual servers set up), but in
the past you had to turn it on manually. I believe the path was
/kernel/drv/scsi_vhci.h (I may be missing some of the path) and you
changed the li
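Since the exact file keeps coming up: on Solaris 10 the supported way to flip MPxIO on is stmsboot rather than hand-editing, although the knob it toggles is the mpxio-disable line in the relevant driver .conf (the fp.conf example below is for FC HBAs; check stmsboot(1M) on your release, and expect a reboot):

# enable multipathing on supported HBAs, then reboot when prompted
stmsboot -e

# the setting it changes looks like this, e.g. in /kernel/drv/fp.conf
mpxio-disable="no";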
Dear S,
that's a regional question beyond our global Chinatown initiatives.
In NYC,
we have the old, original Chinatown in the city;
we have the newer one [but I don't go much since that one is more Taiwan than
PRC] in Flushing, Queens;
we have the Cantonese Chinatowns in Brooklyn on 8th Ave and Ave U,
No prob, z. Whenever I see your name, it reminds me of the famous rapper.
Which Chinatown are they at? SF?
S
- Original Message
From: JZ
To: Stephen Yum ; zfs-discuss@opensolaris.org;
storage-disc...@opensolaris.org
Sent: Thursday, January 8, 2009 4:31:05 PM
Subject: Re: [zfs-dis
Hi S,
sorry, as much as I am Super z,
this is beyond me.
maybe you can go to Chinatown for a seafood dinner (they are on sale
worldwide now), and see if the Sun folks reply?
best,
z
- Original Message -
From: "Stephen Yum"
To: ;
Sent: Thursday, January 08, 2009 7:27 PM
Subject: [
I'm trying to set up an iSCSI connection (with MPXIO) between my Vista64
workstation and a ZFS storage machine running OpenSolaris 10 (I forget the exact
version).
On the ZFS machine, I have two NICs. NIC #1 is 192.168.1.102, and NIC #2 is
192.168.2.102. The NICs are connected to two separate swi
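For the target side, the quick way on OpenSolaris builds of that era was the shareiscsi property on a zvol; a rough sketch with made-up names and size (COMSTAR is the newer alternative if it is installed):

zfs create -V 100g tank/vistatgt        # backing store as a zvol
zfs set shareiscsi=on tank/vistatgt     # export it through the iscsitgt daemon
iscsitadm list target -v                # note the IQN to give the Vista initiator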
test
Ok, I'm going to reply to my own question here. After a few hours of thinking,
I believe I know what is going on.
I am seeing the initial high network throughput as the 4GB of RAM in the server
fills up with data. In fact, in this case, I am bound by the speed of the
source drive, which tops
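One easy way to watch this buffering in action, for anyone who wants to reproduce it, is to leave an iostat running on the pool while a large copy is going (pool name is a placeholder):

zpool iostat tank 5    # writes hit the disks in bursts as each transaction group flushes

The network stays fast while RAM absorbs the data, then throughput falls back to whatever the disks can sustain once the buffer is full.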
OMG!
what a critical factor I just didn't think about!!!
stupid me!
Moog, please, which laptops support ZFS today?
I will only buy one of those.
z, at home, feeling better, but still a bit confused
- Original Message -
From: "The Moog"
To: "JZ" ;
; "Scott Laird"
Cc: "Orvar Ko
Are you planning to run Solaris on your laptop?
Sent from my BlackBerry Bold®
http://www.blackberrybold.com
-Original Message-
From: "JZ"
Date: Thu, 8 Jan 2009 18:27:52
To: Scott Laird
Cc: Orvar Korvar;
; Peter Korn
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
Thanks
Thanks much Scott,
I still don't know what you are talking about -- my $3000 to $800 laptops
have never needed a drive swap.
But yeah, I got hit on all of them when I was in China, by a Chinese web
virus that no U.S. software could do anything about [then a Chinese open source
tool did the job]
S
You can't trust any hard drive. That's what backups are for :-).
Laptop hard drives aren't much worse than desktop drives, and 2.5"
SATA drives are cheap. As long as they're easy to swap, then a drive
failure isn't the end of the world. Order a new drive ($100 or so),
swap them, and restore fro
Scott??
I am really at a major crossroads in my decision-making process --
until today, all my home stuff was Sony,
from TV, projector, stereo bricks, all the way to USB SSD sticks.
[except speakers; I use Bose]
but this laptop thing is really bothering my religious love for Sony.
should I or sh
Thanks Scott,
I was really itching to order one; now I just want to save that open $ for
Remy+++.
Then, next question: can I trust any HD for my home laptop? Should I go get
a Sony VAIO, or would a cheap China-made thing do?
Big price delta...
z at home
- Original Message -
From: "Scott
Today? Low-power SSDs are probably less reliable than low-power hard
drives, although they're too new to really know for certain. Given
the number of problems that vendors have had getting acceptable write
speeds, I'd be really amazed if they've done any real work on
long-term reliability yet. G
Okay, so is there an implementation of Hyper-V or VSS or whatever in the
Solaris+ZFS environment?
Also, is there something like this if I were to access ZFS-based storage from a
Linux client, for example?
Since most of my clients will be running some version of Windows while
accessing a ZFS bac
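On the Linux-client side of that question: ZFS snapshots are reachable over NFS through the .zfs/snapshot directory of the shared filesystem, which is roughly the closest analogue to VSS previous versions there. A small sketch with placeholder names:

zfs snapshot tank/shares@before-edit
zfs set snapdir=visible tank/shares    # optional: make .zfs show up in directory listings
# from the Linux client, inside the NFS mount:
ls /mnt/shares/.zfs/snapshot/before-edit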
I was thinking about Apple's new SSD drive option on laptops...
is that safer than Apple's HD, or less safe? [maybe Orvar can help me on
this]
the price is a bit hefty for me to order one just for an experiment...
Thanks!
z at home
- Original Message -
From: "Toby Thain"
To: "JZ"
Cc: "Scott La
On 7-Jan-09, at 9:43 PM, JZ wrote:
> ok, Scott, that sounded sincere. I am not going to do the pic thing
> on you.
>
> But do I have to spell this out to you -- some things are invented
> not for
> home use?
>
> Cindy, would you want to do ZFS at home,
Why would you disrespect your personal d
I've got mine sitting on the floor at the moment. Need to find the time to
try out the install.
Do you know why it would not work with the DOM? I'm planning to use a spare
4GB DOM and keep the EMC one for backup if nothing works.
Did you use a video card to install?
On Fri, Jan 9, 2009 at 10:46 A
I've done it, but could not make it run from the DOM; had to use a USB
stick :-)
But, Tim, you are a super IT guy, and your data is not baby...
I just have so many copies of my home baby data, since storage is so so cheap
today compared to the wine...
[and a baby JAVA thing to keep them in sync...]
(BTW, I am not a wine guy, I only do Remy+++)
;-)
best,
z
- Original
On Wed, Jan 7, 2009 at 8:43 PM, JZ wrote:
> ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
>
> But do I have to spell this out to you -- some things are invented not for
> home use?
>
> Cindy, would you want to do ZFS at home, or just having some wine and
> music?
>
>
David W. Smith wrote:
> On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote:
>
>> David Smith wrote:
>>
>>> I was wondering if anyone has any experience with how long a "zfs destroy"
>>> of about 40 TB should take? So far, it has been about an hour... Is there
>>> any good way to t
On Thu, Jan 8, 2009 at 14:38, Richard Morris - Sun Microsystems -
Burlington United States wrote:
> As you point out, the -c option is user friendly while the -depth (or
> maybe -d) option is more general. There have been several requests for
> the -c option. Would anyone prefer the -depth optio
[just for the beloved Orvar]
Ok, rule of thumb to save you some open time -- anything with "z", or "j",
would probably be safe enough for your baby data.
And yeah, I manage my own lunch hours BTW.
:-)
best,
z
- Original Message -
From: "Orvar Korvar"
To:
Sent: Thursday, January 08
On 01/08/09 06:39, Tim Foster wrote:
hi Rich,
On Wed, 2009-01-07 at 10:51 -0500, Richard Morris - Sun Microsystems -
Burlington United States wrote:
As you point out, the -c option is user friendly while the -depth (or
maybe -d) option is more general. There have been several requests for
the
On 01/08/09 06:28, Mike Futerko wrote:
> I'd have a few more proposals how to improve zfs list if they don't
> contravene the concept of zfs list command.
>
> Currently zfs list returns error "operation not applicable to datasets
> of this type" if you try to list for ex.: "zfs list -t snapshot
> f
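A workaround sketch for that particular error, assuming a dataset named tank/home: asking for the type recursively avoids naming a filesystem while requesting only snapshots:

zfs list -r -t snapshot tank/home    # snapshots of tank/home and all its descendants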
Karthik, did you ever file a bug for this? I'm experiencing the same hang and
wondering how to recover.
/Brian
On Thu, Jan 8, 2009 at 10:01, Orvar Korvar
wrote:
> Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
Raid-2 is much less used, for one; it uses many more disks for parity, for
two; and it is much slower in any application I can think of.
Suppose you have 11 100G disks. Raid-2 would use 7
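Rough arithmetic behind that, in case it helps (treating each disk as one bit position of a Hamming code, which is how RAID 2 works on paper): with p check disks a Hamming code protects at most 2^p - p - 1 data disks, so 11 disks split as 7 data + 4 check, about 700G usable out of 1100G. The same 11 disks as a single raidz2 vdev give 9 data + 2 parity, about 900G usable.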
On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote:
> David Smith wrote:
> > I was wondering if anyone has any experience with how long a "zfs destroy"
> > of about 40 TB should take? So far, it has been about an hour... Is there
> > any good way to tell if it is working or if it is hung
David Smith wrote:
> I was wondering if anyone has any experience with how long a "zfs destroy" of
> about 40 TB should take? So far, it has been about an hour... Is there any
> good way to tell if it is working or if it is hung?
>
> Doing a "zfs list" just hangs. If you do a more specific zfs
On Thu, 8 Jan 2009, Carsten Aulbert wrote:
>>
>> My experience with iozone is that it refuses to run on an NFS client of
>> a Solaris server using ZFS: it performs a test and then refuses to
>> work, saying that the filesystem is not implemented correctly.
>> Commenting a line of code in
A few more details:
The system is a Sun x4600 running Solaris 10 Update 4.
I have just built an OpenSolaris box (2008.11) as a small fileserver (6x 1TB
drives as RAIDZ2, kernel CIFS) for home media use, and I am noticing odd
behavior when copying files to the box.
My knowledge of monitoring/analysis tools under Solaris is very limited, and so
far I have just been using t
Here's what I did:
* had a t1000 with a zpool under /dev/dsk/c0t0d0s7 on solaris 10u4
* re-installed with solaris 10u6 (disk layout unchanged)
* imported zpool with zpool import -f (I'm forever forgetting to export them
first) - this was ok
* re-installed with solaris 10u6 and more up-to-date patc
I was wondering if anyone has any experience with how long a "zfs destroy" of
about 40 TB should take? So far, it has been about an hour... Is there any
good way to tell if it is working or if it is hung?
Doing a "zfs list" just hangs. If you do a more specific zfs list, then it is
okay... z
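Not an answer on the expected time, but a hedged way to tell hung from working: watch whether the pool is doing any I/O at all while the destroy runs (pool name is a placeholder):

zpool iostat -v tank 10    # steady activity suggests progress; all zeros for a long stretch suggests a hang

zpool status from another terminal often still responds even when zfs list is blocked.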
RAID 2 is something weird that no one uses, and really only exists on
paper as part of Berkeley's original RAID paper, IIRC. raidz2 is more
or less RAID 6, just like raidz is more or less RAID 5. With raidz2,
you have to lose 3 drives per vdev before data loss occurs.
Scott
On Thu, Jan 8, 2009
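For anyone comparing the two at the command line, a minimal raidz2 creation sketch with placeholder device names; any two of the six disks can fail before a third failure loses data:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status tank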
Hi Bob.
Bob Friesenhahn wrote:
>> Here is the current example - can anyone with deeper knowledge tell me
>> if these are reasonable values to start with?
>
> Everything depends on what you are planning to do with your NFS access. For
> example, the default blocksize for zfs is 128K. My example test
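As one concrete knob from that discussion: the 128K default is the recordsize property, and it can be tuned per dataset for small-block workloads (dataset name is a placeholder; the change only affects files written afterwards):

zfs get recordsize tank/nfsdata
zfs set recordsize=32k tank/nfsdata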
On Thu, 8 Jan 2009, Carsten Aulbert wrote:
> for the people higher up the ladder), but someone gave a hint to use
> multiple threads for testing the ops/s and here I'm a bit at a loss how
> to understand the results and if the values are reasonable or not.
I will admit that some research is requi
This is bug 6727463.
On 01/07/09 13:49, Robert Bauer wrote:
> Why is it impossible to have a ZFS pool with a log device for the rpool
> (device used for the root partition)?
> Is this a bug?
> I can't boot a ZFS partition / on a zpool which also uses a log device. Maybe
> it's not supported beca
On Wed, Jan 7, 2009 at 17:12, Volker A. Brandt wrote:
>> The Samsung HD103UJ drives are nice, if you're not using
>> NVidia controllers - there's a bug in either the drives or the
>> controllers that makes them drop drives fairly frequently.
>
> Do you happen to have more details about this proble
A question: why do you want to use HW raid together with ZFS? I thought ZFS
performs better when it is in total control? Would the results have been
better with no HW raid controller, and only ZFS?
I don't know if VSS has this capability, but essentially, if it can temporarily
quiesce a device the way a database does for "warm standby", then a snapshot
should work. This would be a very simple Windows-side script/batch:
1) Q-Disk
2) Remote trigger snapshot
3) Un Q-Disk
I have no idea where to
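A minimal sketch of step 2, assuming the Windows box can reach the ZFS host over ssh (script path, pool and dataset names are made up):

#!/bin/sh
# /usr/local/bin/vss-snap.sh on the ZFS host: take a timestamped snapshot
zfs snapshot tank/winshare@vss-`date +%Y%m%d-%H%M%S`

The Windows batch would quiesce, call something like plink user@zfshost /usr/local/bin/vss-snap.sh, then un-quiesce.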
Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
hi Rich,
On Wed, 2009-01-07 at 10:51 -0500, Richard Morris - Sun Microsystems -
Burlington United States wrote:
> As you point out, the -c option is user friendly while the -depth (or
> maybe -d) option is more general. There have been several requests for
> the -c option. Would anyone prefer
On 1/8/09, Bill Sommerfeld wrote:
>
>
> On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> > I vaguely remember a time when UFS had limits to prevent
> > ordinary users from consuming past a certain limit, allowing
> > only the super-user to use it. Not that I'm advocating that
> > approach f
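The closest ZFS knob to that UFS behaviour is probably a reservation on a dataset that only root writes to, which keeps users in other datasets from consuming the last of the pool; a sketch with made-up names:

zfs create rpool/rootspare
zfs set reservation=1g rpool/rootspare    # this 1G is held back from every other dataset in the pool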
Hello
> This seems like a reasonable proposal to enhance zfs list. But it would
> also be good to add as few new options to zfs list as possible. So it
> probably makes sense to add at most one of these new options. Or
> perhaps add an optional depth argument to the -r option instead?
>
> A
Hello
> Yah, the incrementals are from a 30TB volume, with about 1TB used.
> Watching iostat on each side during the incremental sends, the sender
> side is hardly doing anything, maybe 50iops read, and that could be
> from other machines accessing it, really light load.
> The receiving side howev
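For reference, the shape of the pipeline being discussed, with a commonly suggested buffer in the middle (snapshot, dataset, and host names are placeholders; mbuffer is a third-party tool and may not be installed on either end):

zfs send -i tank/vol@snap1 tank/vol@snap2 | \
  mbuffer -s 128k -m 512M | \
  ssh recvhost "mbuffer -s 128k -m 512M | zfs receive tank/vol"

Watching zpool iostat on both ends while this runs shows whether the sender, the network, or the receiver is the slow side.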
Hi all,
among many other things, I recently restarted benchmarking ZFS-over-NFS3
performance between an X4500 (host) and Linux clients. I last used iozone
quite a while ago and am still a bit at a loss understanding the
results. The automatic mode is pretty ok (and generates nice 3D plots
for the people
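In case it is useful to anyone reproducing this, the multi-threaded ops/s style of run being described looks roughly like the following (sizes and thread count are arbitrary; run it from a Linux NFS client against the mounted X4500 export):

iozone -i 0 -i 1 -i 2 -t 8 -s 2g -r 128k -c -e

-t 8 runs eight threads in throughput mode, and -c/-e include close() and fsync() in the timing, which matters a great deal over NFS.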
Hi
Qin Ming Hua wrote:
> bash-3.00# zpool import mypool
> ^C^C
>
> it hung when I tried to re-import the zpool; has anyone seen this before?
>
How long did you wait?
Once, a zfs import took 1-2 hours to complete (it was seemingly stuck on
a ~30 GB filesystem which it needed to do some work on).
Hi All,
I would like to try the ZFS Self Healing feature as shown at --
http://www.opensolaris.org/os/community/zfs/demos/selfheal/
but ran into some issues; please see my process below.
bash-3.00# zpool create mypool mirror c3t5006016130603AE5d7
c3t5006016130603AE5d8
bash-3.00# cd /mypool/
bash-3.00# cp /export/iozone3
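For context, the remaining steps of that demo are roughly as follows (a sketch from memory, not a quote of the page; the dd target is one half of the mirror created above and overwriting it destroys that copy, so only do this on a scratch pool):

# corrupt part of one half of the mirror, skipping the front of the disk
dd if=/dev/zero of=/dev/dsk/c3t5006016130603AE5d7 bs=1024k seek=10 count=100
# force a full pass over the data
zpool scrub mypool
# the repaired checksum errors should show up here
zpool status -v mypool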