Erwin Panen wrote:
Hi,
I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need
to recover. Due to my ignorance and blind testing, I have managed to
make this system unbootable... I know, my own fault.
So
Erwin Panen wrote:
Richard, thanks for replying;
I seem to have complicated matters:
I shut down the system (past midnight here :-) ) and, seeing your reply
come in, fired it up again to test further.
The system wouldn't come up anymore (dumped in maintenance shell) as
it would try to import both
Erwin Panen wrote:
Ian, thanks for replying.
I'll give cfgadm | grep sata a go in a minute.
At the mo I've rebooted from 2009.06 livecd. Of course I can't import
rpool because it's a newer zfs version :-(
Any way to update zfs version on a running livecd?
No, if you can get a failsafe session
valrh...@gmail.com wrote:
Does this work with dedup?
Does what work? Context, Please! (I'm reading this on webmail with
limited history..)
If you have a deduped pool and send it to a file, will it reflect the smaller size, or
will this "rehydrate" things first?
That depends on the proper
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones, both recursively. For about four
minutes ther
David Dyer-Bennet wrote:
For a system where you care about capacity and safety, but not that
much about IO throughput (that's my interpretation of what you said
you would use it for), with 16 bays, I believe the expert opinion will
tell you that two RAIDZ2 groups of 8 disks each is one of th
Slack-Moehrle wrote:
Do you have any thoughts on implementation? I think I would just like to put my
Home directory on the ZFS pool and just SCP files up as needed. I don't think I
need to mount drives on my Mac, etc. SCP seems to suit me.
One important point to note is you can only boot off a
Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in? Mine
doesn't appear to be doing so. What happened is I exported a pool to
reinstall solaris on this system. When I went to re-import it, one of
the drives refused to come back online. So, the pool imported
degraded,
On 03/11/10 05:42 AM, Andrew Daugherity wrote:
On Tue, 2010-03-09 at 20:47 -0800, mingli wrote:
And I updated the sharenfs option with "rw,ro...@100.198.100.0/24"; it works
fine, and the NFS client can write without error.
Thanks.
I've found that when using hostnames in the sh
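For reference, a minimal sketch of the network-scoped form (the dataset name is a placeholder; hostname entries may need to be fully qualified, as noted above):
zfs set sharenfs='rw=@100.198.100.0/24' tank/export      # grant read-write by subnet
zfs set sharenfs='rw=client.example.com' tank/export     # by host, using the FQDN
zfs get sharenfs tank/export                             # confirm what is actually set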
On 03/11/10 09:27 AM, Robert Thurlow wrote:
Ian Collins wrote:
On 03/11/10 05:42 AM, Andrew Daugherity wrote:
I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domai
On 03/11/10 03:21 PM, Harry Putnam wrote:
Running b133
When you see this line in a `zpool status' report:
status: The pool is formatted using an older on-disk format. The
pool can still be used, but some features are unavailable.
Is it safe and effective to heed the advice given i
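For anyone wondering what acting on that advice looks like, a rough sketch (pool name is a placeholder); note that once upgraded, the pool can no longer be imported by software that only understands the older format:
zpool upgrade -v       # list the on-disk versions this build supports
zpool upgrade tank     # upgrade the pool named tank to the current version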
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
done, but not complete:
scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
Any ideas?
--
Ian.
On 03/18/10 11:09 AM, Bill Sommerfeld wrote:
On 03/17/10 14:03, Ian Collins wrote:
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
done, but not complete:
scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
Don't panic. If "zpool iostat"
On 03/18/10 03:53 AM, David Dyer-Bennet wrote:
Anybody using the in-kernel CIFS is also concerned with the ACLs, and I
think that's the big issue.
Especially in a paranoid organisation with 100s of ACEs!
Also, snapshots. For my purposes, I find snapshots at some level a very
important pa
On 03/18/10 01:03 PM, Matt wrote:
Skipping the iSCSI and SAS questions...
Later on, I would like to add a second lower spec box to continuously (or
near-continuously) mirror the data (using a gig crossover cable, maybe). I have
seen lots of ways of mirroring data to other boxes which has left
On 03/18/10 11:09 AM, Bill Sommerfeld wrote:
On 03/17/10 14:03, Ian Collins wrote:
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
done, but not complete:
scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
If blocks that have already been visited are freed
On 03/18/10 12:07 PM, Khyron wrote:
Ian,
When you say you spool to tape for off-site archival, what software do you use?
NetVault.
--
Ian.
On 03/20/10 09:28 AM, Richard Jahnel wrote:
The way we do this here is:
zfs snapshot voln...@snapnow
# code to break on error and email not shown
zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 > file
# code to break on error and email not shown
scp /dir/file u...@re
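The tail of that recipe is truncated; as a sketch only (host, paths and dataset names are placeholders, and the receive step is my guess at how the transferred file is applied on the far end):
zfs snapshot volname@snapnow
zfs send -i volname@snapbefore volname@snapnow | pigz -p4 -1 > /dir/file
scp /dir/file user@remotehost:/dir/
ssh user@remotehost 'gunzip -c /dir/file | zfs receive -F volname'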
On 03/23/10 09:34 AM, Harry Putnam wrote:
This may be a bit dimwitted since I don't really understand how
snapshots work. I mean the part concerning COW (copy on write) and
how it takes so little room.
But here I'm not asking about that.
It appears to me that the default snapshot setup shares
On 02/28/10 08:09 PM, Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly and I notice a considerable
imbalance of both free space and write operations. The pool is
currently feeding a tape backup while receiving a
On 03/25/10 09:32 PM, Bruno Sousa wrote:
On 24-3-2010 22:29, Ian Collins wrote:
On 02/28/10 08:09 PM, Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly and I notice a considerable
imbalance of both free space
On 03/25/10 11:23 PM, Bruno Sousa wrote:
On 25-3-2010 9:46, Ian Collins wrote:
On 03/25/10 09:32 PM, Bruno Sousa wrote:
On 24-3-2010 22:29, Ian Collins wrote:
On 02/28/10 08:09 PM, Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of
On 03/26/10 08:47 AM, Bruno Sousa wrote:
Hi all,
The more reading and experimenting I do with ZFS, the more I like this
stack of technologies.
Since we all like to see real figures from real environments, I might
as well share some of my numbers...
The replication has been achieved with the zfs
On 03/26/10 10:00 AM, Bruno Sousa wrote:
[Boy top-posting sure mucks up threads!]
Hi,
Indeed the 3 disks per vdev (raidz2) seems a bad idea... but it's the
system I have now.
Regarding the performance... let's assume that a bonnie++ benchmark
could go to 200 MB/s in. The possibility of getting
On 03/27/10 11:22 AM, Muhammed Syyid wrote:
Hi
I have a couple of questions
I currently have a 4-disk RaidZ1 setup and want to move to a RaidZ2
4x2TB = RaidZ1 (tank)
My current plan is to setup
8x1.5TB in a RAIDZ2 and migrate the data from the tank vdev over.
What's the best way to accomplish thi
On 03/27/10 11:32 AM, Svein Skogen wrote:
On 26.03.2010 23:25, Marc Nicholas wrote:
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all (Open)Solaris-based s
On 03/27/10 11:33 AM, Richard Jahnel wrote:
zfs send s...@oldpool | zfs receive newpool
In the OP's case, a recursive send is in order.
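A minimal sketch of the recursive variant (pool and snapshot names are placeholders):
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fdu newpool   # -R carries snapshots and properties; -u avoids mounting on arrival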
--
Ian.
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo-frames in my case give me a boost of around 2 MB/s, so it's not that
much.
That is about right. IIRC, the theoretical max is about 4% improvement, for
MTU of 8KB.
Now i
On 03/27/10 08:14 PM, Svein Skogen wrote:
On 26.03.2010 23:55, Ian Collins wrote:
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo-frames in my case give me a boost of around 2 MB/s, so it's
not that
On 03/26/10 12:16 AM, Bruno Sousa wrote:
Well... I'm pretty much certain that at my job I faced something similar.
We had a server with 2 raidz2 groups each with 3 drives, and one drive
failed and was replaced by a hot spare. However, the balance of data
between the 2 groups of raidz2 started to be
On 03/28/10 10:02 AM, Harry Putnam wrote:
Bob Friesenhahn writes:
On Sat, 27 Mar 2010, Harry Putnam wrote:
What to do with a status report like the one included below?
What does it mean to have an unrecoverable error but no data errors?
I think that this summary means tha
On 03/28/10 04:18 PM, Tim Cook wrote:
Sounds exactly like the behavior people have had previously while a
system is trying to recover a pool with a faulted drive. I'll have to
check and see if I can dig up one of those old threads. I vaguely
recall someone here had a single drive fail on a
On 03/29/10 10:31 AM, Jim wrote:
I had a drive fail and replaced it with a new drive. During the resilvering
process the new drive had write faults and was taken offline. These faults were
caused by a broken SATA cable (drive checked with the manufacturer's software and
all ok). New cable fixed the
On 03/30/10 12:44 PM, Mike Gerdts wrote:
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Or to generate the list of files for incremental backups via NetBackup
or similar. This is es
I've lost a few drives on a thumper I look after in the past week and
I've noticed a couple of issues with the resilver process that could be
improved (or maybe they have been; the system is running Solaris 10 update 8).
1) While the pool has been resilvering, I have been copying a large
(2TB) filesyste
On 03/31/10 10:39 AM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare back.
But: why ha
On 03/31/10 10:54 PM, Peter Tribble wrote:
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock wrote:
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second
On 04/ 1/10 01:51 AM, Charles Hedrick wrote:
We're getting the notorious "cannot destroy ... dataset already exists". I've
seen a number of reports of this, but none of the reports seem to get any response.
Fortunately this is a backup system, so I can recreate the pool, but it's going to take
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:
So we tried recreating the pool and sending the data again.
1) compression wasn't set on the copy, even though I did send -R, which is
supposed to send all properties
2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:
So we tried recreating the pool and sending the data again.
1) compression wasn't set on the copy, even though I did send -R, which is
supposed to send all properties
Was compression explicitly set on the root filesystem of your set?
I don't t
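One way to see what actually landed on the receiving side (dataset name is a placeholder):
zfs get -r -o name,property,value,source compression newpool
zfs set compression=on newpool    # fall-back if the property really wasn't carried across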
Is this callstack familiar to anyone? It just happened on a Solaris 10
update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 kern.notice] fe8000d1b920 unix:_cmntrap+14
On 04/ 2/10 02:52 PM, Andrej Gortchivkin wrote:
Hi All,
I just came across a strange (well... at least for me) situation with ZFS and I
hope you might be able to help me out. Recently I built a new machine from
scratch for my storage needs which include various CIFS / NFS and most
importantly
On 04/ 2/10 03:30 PM, Andrej Gortchivkin wrote:
I created the pool by using:
zpool create ZPOOL_SAS_1234 raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0
However, now that you mentioned the lack of redundancy, I see where the problem is. I guess
it will then remain a mystery how this happened, since I'm very
On 04/ 3/10 10:23 AM, Edward Ned Harvey wrote:
Momentarily, I will begin scouring the omniscient interweb for
information, but I’d like to know a little bit of what people would
say here. The question is to slice, or not to slice, disks before
using them in a zpool.
Not.
One reason to sl
On 04/ 9/10 10:48 AM, Erik Trimble wrote:
Well
The problem is (and this isn't just a ZFS issue) that resilver and scrub
times /are/ very bad for >1TB disks. This goes directly to the problem
of redundancy - if you don't really care about resilver/scrub issues,
then you really shouldn't bothe
On 04/ 9/10 08:58 PM, Andreas Höschler wrote:
zpool attach tank c1t7d0 c1t6d0
This hopefully gives me a three-way mirror:
  mirror     ONLINE  0  0  0
    c1t15d0  ONLINE  0  0  0
    c1t7d0   ONLINE  0  0  0
    c1t6d0   ONL
On 04/10/10 06:20 AM, Daniel Bakken wrote:
My zfs filesystem hangs when transferring large filesystems (>500GB)
with a couple dozen snapshots between servers using zfs send/receive
with netcat. The transfer hangs about halfway through and is
unkillable, freezing all IO to the filesystem, requirin
On 04/11/10 11:55 AM, Harry Putnam wrote:
Would you mind expanding the abbrevs: ssd, zil, l2arc?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
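Roughly, those abbreviations refer to separate log and cache devices; a sketch with hypothetical device names:
zpool add tank log c5t0d0      # SSD used as a separate ZIL (slog) device
zpool add tank cache c5t1d0    # SSD used as an L2ARC read cache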
--
Ian.
On 04/12/10 05:39 PM, Willard Korfhage wrote:
It is a Corsair 650W modular power supply, with 2 or 3 disks per cable.
However, the Areca card is not reporting any errors, so I think power to the
disks is unlikely to be a problem.
Here's what is in /var/adm/messages
Apr 11 22:37:41 fs9 fmd: [I
On 04/13/10 05:47 PM, Daniel wrote:
Hi all.
I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of research
but can't find anything on what I need.
I am thinking of making myself a home file server running OpenSolaris with ZFS
and utilizing Raid/Z
I was wondering if there is a
On 04/ 2/10 10:25 AM, Ian Collins wrote:
Is this callstack familiar to anyone? It just happened on a Solaris
10 update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830
unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072
On 04/15/10 06:16 AM, David Dyer-Bennet wrote:
Because 132 was the most current last time I paid much attention :-). As
I say, I'm currently holding out for 2010.$Spring, but knowing how to get
to a particular build via package would be potentially interesting for the
future still.
I hope it's
On 04/17/10 09:34 AM, MstAsg wrote:
I have a question. I have a disk that Solaris 10 & ZFS is installed on. I wanted
to add the other disks and replace this one with the others (three others in total). If
I do this and add some other disks, would the data be written immediately? Or only
the new data is
On 04/17/10 10:09 AM, Richard Elling wrote:
On Apr 16, 2010, at 2:49 PM, Ian Collins wrote:
On 04/17/10 09:34 AM, MstAsg wrote:
I have a question. I have a disk that Solaris 10 & ZFS is installed on. I wanted
to add the other disks and replace this one with the others (totally t
On 04/17/10 11:41 AM, Brandon High wrote:
When I set up my opensolaris system at home, I just grabbed a 160 GB
drive that I had sitting around to use for the rpool.
Now I'm thinking of moving the rpool to another disk, probably ssd,
and I don't really want to shell out the money for two 160 GB d
On 04/17/10 12:56 PM, Edward Ned Harvey wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Friday, April 16, 2010 7:35 PM
Doesn't that defeat the purpose of a snapshot?
Eric hits the nail right on the head: you *don't* want to support such a
"feature", as it breaks
On 04/18/10 01:25 AM, Edward Ned Harvey wrote:
From: Ian Collins [mailto:i...@ianshome.com]
But it is a fundamental of zfs:
snapshot
A read-only version of a file system or volume at a
given point in time. It is specified as filesys...@name
or vol
On 04/19/10 08:42 PM, Ian Garbutt wrote:
Having looked through the forum I gather that you cannot just add an additional
device to a raidz pool. This being the case, what are the alternatives that
I could use to expand a raidz pool?
Either replace *all* the drives with bigger ones, or add
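A sketch of those two alternatives (pool and device names are placeholders):
zpool replace tank c1t0d0 c2t0d0                    # repeat for each disk in turn; the pool can grow once every disk in the vdev is larger
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0    # or stripe another raidz vdev alongside the existing one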
On 04/20/10 04:13 PM, Sunil wrote:
Hi,
I have a strange requirement. My pool consists of 2 500GB disks in stripe which
I am trying to convert into a RAIDZ setup without data loss but I have only two
additional disks: 750GB and 1TB. So, here is what I thought:
1. Carve a 500GB slice (A) in 750
On 04/20/10 05:00 PM, Sunil wrote:
On 04/20/10 04:13 PM, Sunil wrote:
Hi,
I have a strange requirement. My pool consists of 2
500GB disks in stripe which I am trying to convert
into a RAIDZ setup without data loss but I have only
two additional disks: 750GB and 1TB. So, here is w
On 04/20/10 05:32 PM, Sunil wrote:
ouch! My apologies! I did not understand what you were trying to say.
I was gearing towards:
1. Using the newer 1TB in the eventual RAIDZ. Newer hardware typically means
(slightly) faster access times and sequential throughput.
Using a slice on a newer 1T
On 04/22/10 06:59 AM, Justin Lee Ewing wrote:
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see
deported volumes using "vxdisk -o alldgs list".
"zpool import", kind of counter intuitive!
--
Ian.
On 04/26/10 12:08 AM, Edward Ned Harvey wrote:
[why do you snip attributions?]
> On 04/26/10 01:45 AM, Robert Milkowski wrote:
The system should boot-up properly even if some pools are not
accessible
(except rpool of course).
If it is not the case then there is a bug - last time I checked it
wo
On 04/27/10 09:41 AM, Lutz Schumann wrote:
Hello list,
a pool shows some strange status:
volume: zfs01vol
state: ONLINE
scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38
  mirror     ONLINE  0  0  0
    c2t12d0  ONLINE  0
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no
Zil or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to
something hardly usable while scrubbing the pool.
Is that small random or blo
On 04/28/10 10:01 AM, Bob Friesenhahn wrote:
On Wed, 28 Apr 2010, Ian Collins wrote:
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and
currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
the testpool
On 04/29/10 10:21 AM, devsk wrote:
I had a pool which I created using zfs-fuse, which is using March code base
(exact version, I don't know; if someone can tell me the command to find the
zpool format version, I would be grateful).
Try [zfs|zpool] upgrade.
These commands will tell you th
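A sketch of those commands (pool and dataset names are placeholders):
zpool upgrade              # lists pools not at the current on-disk version
zpool get version tank     # the pool's format version
zfs upgrade                # lists filesystems not at the current zfs version
zfs get version tank/fs    # an individual filesystem's version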
On 04/29/10 11:02 AM, autumn Wang wrote:
One quick question: When will the next formal release be released?
Of what?
Does oracle have plan to support OpenSolaris community as Sun did before?
What is the direction of ZFS in future?
Do you really expect answers to those question here?
On 04/30/10 10:35 AM, Bob Friesenhahn wrote:
On Thu, 29 Apr 2010, Roy Sigurd Karlsbakk wrote:
While there may be some possible optimizations, I'm sure everyone
would love the random performance of mirror vdevs, combined with the
redundancy of raidz3 and the space of a raidz1. However, as in al
On 05/ 1/10 03:09 PM, devsk wrote:
Looks like the X's vesa driver can only use 1600x1200 resolution and not the
native 1920x1200.
Asking these question on the ZFS list isn't going to get you very far.
Try the opensolaris-help list.
--
Ian.
On 05/ 1/10 04:46 PM, Edward Ned Harvey wrote:
One more really important gotcha. Let's suppose the version of zfs on the
CD supports up to zpool 14. Let's suppose your "live" system had been fully
updated before crash, and let's suppose the zpool had been upgraded to zpool
15. Wouldn't that me
On 05/ 4/10 11:33 AM, Michael Shadle wrote:
Quick sanity check here. I created a zvol and exported it via iSCSI to
a Windows machine so Windows could use it as a block device. Windows
formats it as NTFS, thinks it's a local disk, yadda yadda.
Is ZFS doing its magic checksumming and whatnot on t
On 05/ 4/10 03:39 PM, Richard Elling wrote:
On May 3, 2010, at 7:55 PM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Once you register your original Solaris 10 OS for updates, are you
unable to get updates on the removable OS?
This is
On 05/ 5/10 11:09 AM, Brad wrote:
I yanked a disk to simulate failure in the test pool to test hot spare failover
- everything seemed fine until the copy-back completed. The hot spare is still
showing as in use... do we need to remove the spare from the pool to get it to
detach?
Once the
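If it does need a nudge, detaching the spare is the usual way (pool and device names are placeholders):
zpool detach tank c4t0d0    # returns the hot spare to the AVAIL state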
On 05/ 6/10 05:32 AM, Richard Elling wrote:
On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote:
On Mon, 3 May 2010, Richard Elling wrote:
This is not a problem on Solaris 10. It can affect OpenSolaris, though.
That's precisely the opposite of what I thought. Care to expla
On 05/ 6/10 11:48 AM, Brandon High wrote:
I know for certain that my rpool and tank pool are not both using
c6t0d0 and c6t1d0, but that's what zpool status is showing.
It appears to be an output bug, or a problem with the zpool.cache,
since format shows my rpool devices at c8t0d0 and c8t1d0.
On 05/ 6/10 03:35 PM, Richard Jahnel wrote:
Hmm...
To clarify.
Every discussion or benchmark that I have seen always shows both off,
compression only, or both on.
Why never compression off and dedup on?
After some further thought... perhaps it's because compression works at the
byte level
On 05/ 8/10 04:38 PM, Giovanni wrote:
Hi guys,
I have a quick question, I am playing around with ZFS and here's what I did.
I created a storage pool with several drives. I unplugged 3 out of 5 drives
from the array, currently:
NAME    STATE  READ WRITE CKSUM
gpool
On 05/ 9/10 06:54 AM, Giovanni Mazzeo wrote:
giova...@server:~# cfgadm
Ap_Id                 Type  Receptacle  Occupant      Condition
sata1/0               disk  connected   unconfigured  unknown
sata1/1::dsk/c8t1d0   disk  connected   configur
On 05/ 9/10 10:07 AM, Tony wrote:
Lets say I have two servers, both running opensolaris with ZFS. I basically
want to be able to create a filesystem where the two servers have a common
volume, that is mirrored between the two. Meaning, each server keeps an
identical, real time backup of the ot
On 05/12/10 02:10 PM, Terence Tan wrote:
I was having quite a few problems getting the rpool mirroring to work as
expected.
This appears to be a known issue, see the thread "b134 - Mirrored rpool
won't boot unless both mirrors are present" and
http://bugs.opensolaris.org/bugdatabase/v
I just tried moving a dump volume from rpool into another pool so I used
zfs send/receive to copy the volume (to keep some older dumps) then ran
dumpadm -d to use the new location. This caused a panic. Nothing ended
up in messages and needless to say, there isn't a dump!
Creating a new volum
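For context, the sequence being described was roughly this (names and size are placeholders, not the exact commands used):
zfs snapshot rpool/dump@move
zfs send rpool/dump@move | zfs receive tank/dump    # copy the dump zvol into the other pool
dumpadm -d /dev/zvol/dsk/tank/dump                  # point the dump device at the new volume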
On 05/13/10 03:27 AM, Lori Alt wrote:
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool so I
used zfs send/receive to copy the volume (to keep some older dumps)
then ran dumpadm -d to use the new location. This caused a panic.
Nothing
On 05/13/10 08:55 AM, Jens Elkner wrote:
On Wed, May 12, 2010 at 09:34:28AM -0700, Doug wrote:
We have a 2006 Sun X4500 with Hitachi 500G disk drives. It's been running for over
four years and just now fmadm & zpool report a disk has failed. No data was
lost (RAIDZ2 + hot spares worked a
On 05/13/10 12:46 PM, Erik Trimble wrote:
I've gotten a couple of the newest prototype AMD systems, with the C34
and G34 sockets. All have run various flavors of OpenSolaris quite
well, with the exception of a couple of flaky network problems, which
we've tracked down to pre-production NIC hardw
On 05/15/10 09:43 PM, Jason Barr wrote:
Hello,
I want to slice these 3 disks into 2 partitions each and configure 1 Raid0 and
1 Raidz1 on these 3.
Let's get the obvious question out of the way first: why?
If you intend one two-way mirror and one raidz, you will either have to
waste one s
On 05/16/10 06:52 AM, John Balestrini wrote:
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot
that simply refuses to be destroyed. The snapshots just prior to it and just
after it were destroyed without problem. While running the zfs destroy command
on th
On 05/16/10 12:40 PM, John Balestrini wrote:
Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that
the large ratio was tied to that particular snapshot.
basie@/root# zpool list pool1
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
pool1 2.72T 1.55T 1.17T
On 05/17/10 12:08 PM, Thomas Burgess wrote:
Well, I haven't had a lot of time to work with this... but I'm having
trouble getting the onboard SATA to work in anything but NATIVE IDE mode.
I'm not sure exactly what the problem is... I'm wondering if I bought
the wrong cable (I have a Norco 4220
On 05/19/10 09:34 PM, Philippe wrote:
Hi !
It is strange because I've checked the SMART data of the 4 disks, and
everything seems really OK! (on another hardware/controller, because I needed
Windows to check it). Maybe it's a problem with the SAS/SATA controller?!
One question: if I halt t
On 05/20/10 08:39 PM, roi shidlovsky wrote:
hi.
I am trying to attach a mirror disk to my root pool. If the two disks are the same size,
it all works fine, but if the two disks are of different sizes (8GB and 7.5GB) I get an
"I/O error" on the attach command.
Can anybody tell me what am I doin
On 05/22/10 12:31 PM, Don wrote:
I just spoke with a co-worker about doing something about it.
He says he can design a small in-line "UPS" that will deliver 20-30
seconds of 3.3V, 5V, and 12V to the SATA power connector for about $50
in parts. It would be even less if only one voltage was needed
On 05/22/10 12:54 PM, Thomas Burgess wrote:
Something I've been meaning to ask...
I'm transferring some data from my older server to my newer one. The
older server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives
in raidz2 (3 vdevs, 2 with 7 drives, one with 6) connected to 3
AOC-SAT2
On 05/22/10 04:44 PM, Thomas Burgess wrote:
I can't tell you for sure
For some reason the server lost power and it's taking forever to come
back up.
(I'm really not sure what happened)
Anyways, this leads me to my next couple of questions:
Is there any way to "resume" a zfs send/recv
On 05/22/10 05:22 PM, Thomas Burgess wrote:
Yah, it seems that rsync is faster for what I need anyways... at least
right now...
ZFS send/receive should run at wire speed for a Gig-E link.
Ian.
On 05/23/10 08:52 AM, Thomas Burgess wrote:
If you install OpenSolaris with the AHCI settings off, then switch
them on, it will fail to boot.
I had to reinstall with the settings correct.
Well, you probably didn't have to. Booting from the live CD and
importing the pool would have put things
On 05/23/10 08:43 AM, Brian wrote:
Is there a way within opensolaris to detect if AHCI is being used by various
controllers?
I suspect you may be right and AHCI is not turned on. The BIOS for this particular
motherboard is fairly confusing on the AHCI settings. The only setting I have is
On 05/23/10 11:31 AM, Brian wrote:
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
From my /
On 05/23/10 01:18 PM, Thomas Burgess wrote:
This worked fine. Next, today, I wanted to send what has changed.
I did
zfs snapshot tank/nas/d...@second
Now, here's where I'm confused... from reading the man page I thought
this command would work:
pfexec zfs send -i tank/nas/d...@first tank/n
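With a hypothetical dataset name standing in for the elided one, the incremental form from the man page looks like:
pfexec zfs snapshot tank/nas/data@second
pfexec zfs send -i tank/nas/data@first tank/nas/data@second | pfexec zfs receive backup/data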
On 05/23/10 03:56 PM, Thomas Burgess wrote:
Let me ask a question though.
Let's say I have a filesystem
tank/something
I make the snapshot
tank/someth...@one
I send/recv it,
then I do something (add a file... remove something, whatever) on the
send side, then I do a send/recv and force it of
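As a sketch of the forced form being discussed (names are placeholders); -F rolls the receiving filesystem back to its most recent snapshot before applying the increment, discarding changes made on that side:
zfs send -i tank/something@one tank/something@two | zfs receive -F backup/something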