Ray Clark wrote:
> The April 2009 "ZFS Administration Guide" states "...tar and cpio commands,
> to save ZFS files. All of these utilities save and restore ZFS file
> attributes and ACLs."
Be careful, Sun tar and Sun cpio do not support sparse files.
Jörg
--
EMail:jo...@schily.isdn.cs.tu-b
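A quick sanity check for sparseness, in case it helps (paths are placeholders): compare the file's logical size with what it actually occupies on disk.

ls -l /tank/fs/sparsefile    # logical length as applications see it
du -h /tank/fs/sparsefile    # blocks actually allocated on disk

If du reports about the same as ls -l after an archive/restore round trip, the holes were filled in and the copy is no longer sparse.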
Hi Bob,
> Striping across two large raidz2s is not ideal for multi-user use.
> You are getting the equivalent of two disks worth of IOPS, which does
> not go very far. More smaller raidz vdevs or mirror vdevs would be
> better. Also, make sure that you have plenty of RAM installed.
Th
Richard Connamacher indieimage.com> writes:
>
> Also, one of those drives will need to be the boot drive.
> (Even if it's possible I don't want to boot from the
> data drive, need to keep it focused on video storage.)
But why?
By allocating 11 drives instead of 12 to your data pool, you will re
On Sep 29, 2009, at 17:46, Cyril Plisko wrote:
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
It should probably be mentioned here, I might have missed it.
Trevor Pretty wrote:
I think James said there were audio problems and that's why it took so
long to get published.
Well, one of the reasons. The other, more major, reason is that there's
been a heckuvalot of video generated lately that we want to get up on
slx.sun.com etc, and we don't have a
I think James said there were audio problems and that's why it took so
long to get published.
Cyril Plisko wrote:
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
> > How do I identify which drive it is? I hear each drive spinning (I listened
> > to them individually) so I can't simply select the one that is not spinning.
>
> You can try reading from each raw device, and looking for a blinky-light
> to identify which one is active. If you don't have individual lights,
> you may be able to hear which one is active.
Also, one of those drives will need to be the boot drive. (Even if it's
possible I don't want to boot from the data drive, need to keep it focused on
video storage.) So it'll end up being 11 drives in the raid-z.
It appears that I have waded into a quagmire. Every option I can find (cpio,
tar (Many versions!), cp, star, pax) has issues. File size and filename or
path length, and ACLs are common shortfalls. "Surely there is an easy answer"
he says naively!
I simply want to copy one zfs filesystem tree
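For copying a whole ZFS filesystem tree to another ZFS pool, zfs send/receive sidesteps the archiver limitations, since it replicates the datasets themselves, ACLs, attributes and sparse files included. A minimal sketch with made-up pool and dataset names (the recursive -R flag needs a reasonably recent ZFS release):

zfs snapshot -r tank/home@copy                        # snapshot the source tree
zfs send -R tank/home@copy | zfs receive -d tank2     # replicate it, properties and all

On releases without -R, each filesystem can be sent individually.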
I'm on OSOL 118b
I needed to move a raw volume from one pool to another. It had 1T volsize and
quite a lot of snapshots - probably around 100G.
I deleted them, then zfs send | zfs receive - and it transferred the referenced
size (600G). How does it happen?
# zfs get all zsan01z2/mbx01-test |
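The likely explanation: zfs send only carries the blocks the snapshot actually references, not the nominal volsize, so a 1T volume with 600G written transfers roughly 600G. The distinction is visible in the properties (the exact property list may vary slightly between releases):

zfs get volsize,used,referenced zsan01z2/mbx01-test

volsize is the logical size presented to the consumer; referenced is what has actually been allocated, and that is what the stream contains.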
> You can try reading from each raw device, and looking for a blinky-light
> to identify which one is active. If you don't have individual lights,
> you may be able to hear which one is active. The "dd" command should do.
I received an email from another member (Ross) recommending the sa
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson wrote:
> Hello everybody,
> The KCA ZFS keynote by Jeff and Bill seems to be available online now:
> http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
> It should probably be mentioned here, I might have missed it.
Funny voic
On Tue, Sep 29, 2009 at 5:30 PM, David Stewart wrote:
> Before I try these options you outlined I do have a question. I went in to
> VMWare Fusion and removed one of the drives from the virtual machine that was
> used to create a RAIDZ pool (there were five drives, one for the OS, and four
> f
Before I try these options you outlined I do have a question. I went in to
VMWare Fusion and removed one of the drives from the virtual machine that was
used to create a RAIDZ pool (there were five drives, one for the OS, and four
for the RAIDZ.) Instead of receiving the "removed" status that
David Stewart wrote:
> How do I identify which drive it is? I hear each drive spinning (I listened
> to them individually) so I can't simply select the one that is not spinning.
You can try reading from each raw device, and looking for a blinky-light
to identify which one is active. If you don't
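A hedged sketch of the dd approach, one raw device at a time (the device path is a placeholder; adjust it to your controller/target numbering):

dd if=/dev/rdsk/c1t2d0s0 of=/dev/null bs=1024k count=1000

Run it against each disk in turn and watch or listen for activity; the drive that never shows any is the one to pull.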
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
It should probably be mentioned here, I might have missed it.
Regards
Henrik
http://sparcv9.blogspot.com
David
That depends on the hardware layout. If you don't know and you say the
data is still somewhere else
You could.
Pull a disk out and see what happens to the pool; the one you pulled
will be highlighted as the pool loses one of its replicas (a 'zpool clear'
"should" fix it when you plug it back in).
Marc,
Thanks for the tips! I was looking at building a smaller scale version of it
first with maybe 8 1.5 TB drives, but I like your idea better. I'd probably use
1.5 TB drives since the cost per gigabyte is about the same now.
How do I identify which drive it is? I hear each drive spinning (I listened to
them individually) so I can't simply select the one that is not spinning.
David
Bob, thanks for the tips. Before building a custom solution I want to do my due
diligence and make sure that, for every part that can go bad, I've got a backup
ready to be swapped in at a moment's notice. But I am seriously considering the
alternative as well, paying more to get something with a
Ray
Use this link; it's worth its weight in gold. The Google search engine
is so much better than what's available at docs.sun.com
http://www.google.com/custom?hl=en&client=google-coop&cof=S%3Ahttp%3A%2F%2Fwww.sun.com%3BCX%3ASun%2520Documentation%3BL%3Ahttp%3A%2F%2Flogos.sun.com%2Ftry%2Fimg%2F
David
The disk is broken! Unlike other file systems, which would silently lose
your data, ZFS has decided that this particular disk has "persistent errors":
action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
^^
It's clear you are unsuccessful at repairing it.
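A hedged sketch of what that action line means in practice (pool and device names are invented):

zpool replace tank c7t2d0              # resilver onto a new disk in the same slot, or
zpool replace tank c7t2d0 c7t6d0       # replace it with a disk somewhere else
zpool status -x                        # watch the resilver and any remaining faults
zpool clear tank                       # only if the errors really were transient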
Or just "try and buy" the machines from Sun for ZERO DOLLARS!!!
Like Erik said..
"Both the Thor and 7110 are available for Try-and-Buy. Get them and test them against your workload - it's the only way to be sure (to paraphrase Ripley)."
Marc Bevand wrote:
Richard Connamacher in
The April 2009 "ZFS Administration Guide" states "...tar and cpio commands, to
save ZFS files. All of these utilities save and restore ZFS file attributes
and ACLs."
I am running 8/07 (U4). Was this true for the U4 version of ZFS and the tar
and cpio shipped with U4?
Also, I cannot seem to fi
Having casually used IRIX in the past and used BeOS, Windows, and MacOS as
primary OSes, last week I set up a RAIDZ NAS with four Western Digital 1.5TB
drives and copied over data from my WinXP box. All of the hardware is fresh
out of the box so I did not expect any hardware problems, but when
When using zfs send/receive to do the conversion, the receive creates a new
file system:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive afx01/home.sha256
Where do I get the chance to "zfs set checksum=sha256" on the new file system
before all of the files are written?
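One way that should work, assuming the stream itself does not carry properties (plain zfs send, not -R or -p): set checksum=sha256 on the parent before the receive, so the newly created filesystem inherits it and its blocks are written with sha256 from the start. Dataset names below follow the example above, with the obfuscated source spelled out only as a placeholder:

zfs set checksum=sha256 afx01                        # children created by receive inherit this
zfs send zfs01/home@before | zfs receive afx01/home.sha256
zfs get checksum afx01/home.sha256                   # expect sha256, source "inherited"

Afterwards you can set checksum=sha256 locally on the received filesystem and revert the parent, if sha256 is only wanted there.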
On Sep 29, 2009, at 7:59 AM, Bernd Nies wrote:
Hi,
The system already has a SSD (ATASTECZeusIOPS018GBytesSTMD905C)
as ZFS log device.
I apologize, I should know better than to answer before the first cup of
coffee :-P
NFS writes from only one host are not the problem. Even with many
small files it is almost as fast as a Netapp.
I could use some assistance on this case. I searched this error on
SunSolve and although it did spit out a million things I have not found
anything that pinpoints this issue.
T5240 w/ Solaris 10 5/09 U7 & kernel patch #141414-10
# zpool upgrade -v is at version 10 so should have cache availability
Hello Claire.
That feature is in OpenSolaris but not regular Solaris 10
(http://www.opensolaris.org/os/community/zfs/version/10/):
ZFS Pool
Version 10
This page describes the feature that is available with the ZFS
on-disk format, version 10. This version includes support for the
following fea
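Version 10 is the one that introduces cache (L2ARC) devices, so a pool at that version should accept one. A hedged sketch with invented names:

zpool upgrade -v | grep -i cache    # version 10 should be listed as "Cache devices"
zpool add tank cache c9t0d0         # attach an SSD as a read cache
zpool iostat -v tank 5              # watch the cache device fill and serve reads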
On Tue, 29 Sep 2009, Eugen Leitl wrote:
No, basically all rackmount gear (especially 1-2 height units) which
dissipates nontrivial power is loud, since it has to maintain air flow,
which at small geometries means high-rpm and high-pitched noise. I've
The good news is that high-pitched noise is
On Tue, September 29, 2009 01:41, Eugen Leitl wrote:
> Unless
> it's for home use, where a downtime of days or weeks is not critical.
I hate to think what would happen if I were to tell my housemates that
critical services would be down for a WEEK!
--
David Dyer-Bennet, d...@dd-b.net; http://dd
On Tue, Sep 29, 2009 at 07:28:13AM -0400, rwali...@washdcmail.com wrote:
> I agree completely with the ECC. It's for home use, so the power
> supply issue isn't huge (though if it's possible that's a plus). My
> concern with this particular option is noise. It will be in a closet,
> but o
Bob Friesenhahn wrote:
Striping across two large raidz2s is not ideal for multi-user use. You
are getting the equivalent of two disks worth of IOPS, which does not
go very far. More smaller raidz vdevs or mirror vdevs would be
better. Also, make sure that you have plenty of RAM installed.
F
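To illustrate Bob's point with invented device names: twelve disks as two wide raidz2 vdevs give the best capacity but only about two vdevs worth of random IOPS,

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                  raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

while the same disks as six two-way mirrors trade half the capacity for six vdevs worth of IOPS:

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 \
                  mirror c1t6d0 c1t7d0 mirror c1t8d0 c1t9d0 mirror c1t10d0 c1t11d0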
On Tue, 29 Sep 2009, Bernd Nies wrote:
NFS writes from only one host are not the problem. Even with many
small files it is almost as fast as a Netapp. Problem arises when
doing the same parallel from n hosts. E.g. the same write from 10
hosts lasts 10 times longer. On the Netapp the same from
Hi,
The system already has a SSD (ATASTECZeusIOPS018GBytesSTMD905C) as ZFS log
device.
NFS writes from only one host are not the problem. Even with many small files it
is almost as fast as a Netapp. Problem arises when doing the same parallel from
n hosts. E.g. the same write from 10 hosts
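A hedged way to check whether the single log SSD is what the parallel clients are queueing behind (pool name invented):

zpool iostat -v tank 5    # per-vdev ops and bandwidth; watch the log device line
iostat -xnz 5             # per-disk view; %b near 100 on the SSD means it is saturated

If it is, another log device (or a mirrored pair) can be added with 'zpool add tank log ...'.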
On Tue, Sep 29, 2009 at 10:35 AM, Richard Elling
wrote:
>
> On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote:
>
>> Hi,
>>
>> We have a Sun Storage 7410 with the latest release (which is based upon
>> opensolaris). The system uses a hybrid storage pool (23 1TB SATA disks in
>> RAIDZ2 and 1 18GB SSD as
On Tue, 29 Sep 2009, rwali...@washdcmail.com wrote:
I agree completely with the ECC. It's for home use, so the power supply
issue isn't huge (though if it's possible that's a plus). My concern with
this particular option is noise. It will be in a closet, but one with
louvered doors right o
On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote:
Hi,
We have a Sun Storage 7410 with the latest release (which is based
upon opensolaris). The system uses a hybrid storage pool (23 1TB
SATA disks in RAIDZ2 and 1 18GB SSD as log device). The ZFS volumes
are exported with NFSv3 over TCP. NFS
You don't like http://www.supermicro.com/products/nfo/chassis_storage.cfm ?
I must admit I don't have a price list of these.
I am using an SC846xxx for a project here at work.
The hardware consists of an ASUS server-level motherboard with 2 quad-core
Xeons, 8GB of RAM, an LSI PCI-e SAS/SATA car
9:51am, Ware Adams wrote:
On Sep 29, 2009, at 9:32 AM, p...@paularcher.org wrote:
I am using an SC846xxx for a project here at work.
The hardware consists of an ASUS server-level motherboard with 2 quad-core
Xeons, 8GB of RAM, an LSI PCI-e SAS/SATA card, and 24 1.5TB HD, all in one
of these ca
> On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
>> personally i like this case:
>>
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
>>
>> it's got 20 hot swap bays, and it's surprisingly well built. For the
>> money,
>> it's an amazing deal.
>
> You don't lik
On Sep 29, 2009, at 2:41 AM, Eugen Leitl wrote:
On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
personally i like this case:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
it's got 20 hot swap bays, and it's surprisingly well built. For
the money,
it's an
Hi,
We have a Sun Storage 7410 with the latest release (which is based upon
opensolaris). The system uses a hybrid storage pool (23 1TB SATA disks in
RAIDZ2 and 1 18GB SSD as log device). The ZFS volumes are exported with NFSv3
over TCP. NFS mount options are:
rw,bg,vers=3,proto=tcp,hard,intr,
On Sep 28, 2009, at 17:58, Glenn Fawcett wrote:
Been there, done that, got the tee shirt. A larger SGA will
*always* be more efficient at servicing Oracle requests for blocks.
You avoid going through all the IO code of Oracle and it simply
reduces to a hash.
Sounds like good advice
I think it *IS* for home use. I like the supermicro stuff, i just
personally find it to be a little pricy for a home NAS. I personally find
the norco 4020's to be the best deal for a home nas. I LOVE mine. I'm
about to build a second one.
On Tue, Sep 29, 2009 at 2:41 AM, Eugen Leitl wrote:
Richard Connamacher indieimage.com> writes:
>
> I was thinking of custom building a server, which I think I can do for
> around $10,000 of hardware (using 45 SATA drives and a custom enclosure),
> and putting OpenSolaris on it. It's a bit of a risk compared to buying a
> $30,000 server, but would