I moved my home directories to a new disk and then mounted the disk using a
legacy mount point over /export/home. Here is the output of the zfs list:
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool       55.8G  11.1G    83K  /rpool
rpool/ROOT  21.1G  11.1G    19K  legacy
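For reference, the legacy-mount setup itself was only a couple of commands. This is just a sketch, assuming the new dataset is called users/home; the dataset and path names here are illustrative, not the exact ones on my box:

  # stop ZFS from auto-mounting the dataset; it becomes a "legacy" filesystem
  zfs set mountpoint=legacy users/home

  # mount it over /export/home by hand...
  mount -F zfs users/home /export/home

  # ...or persistently via an /etc/vfstab entry:
  # users/home  -  /export/home  zfs  -  yes  -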
Norm,
Thank you. I just wanted to double-check to make sure I didn't mess up things.
There were a few steps that left me scratching my head after reading the man page. I'll
spend a bit more time re-reading it alongside the steps you outlined so I understand
them fully.
Gary
I would like to migrate my home directories to a new mirror. Currently, I have
them in rpool:
rpool/export
rpool/export/home
I've created a mirror pool, users.
I figure the steps are (roughly sketched below):
1) snapshot rpool/export/home
2) send the snapshot to users.
3) unmount rpool/export/home
4) mount pool users
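Concretely, I expect that to look something like the following. The snapshot and dataset names are placeholders, and the mountpoint juggling at the end is only one way to do it:

  # 1) snapshot the source
  zfs snapshot rpool/export/home@migrate

  # 2) send it into the new pool (creates users/home)
  zfs send rpool/export/home@migrate | zfs receive users/home

  # 3) stop using the old copy
  zfs unmount rpool/export/home
  zfs set mountpoint=none rpool/export/home

  # 4) put the new copy where the old one was
  zfs set mountpoint=/export/home users/home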
Right now I have a machine with a mirrored boot setup. The SAS drives are 43 GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool with the detached
drive.
I understand how
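If I've got the plan right, it boils down to something like this (the device name is a guess). One caveat I'm not certain about: zpool may refuse to add a second top-level vdev to a root pool, so the add step might only work for a non-root pool:

  zpool status rpool              # confirm which disk is the second half of the mirror
  zpool detach rpool c1t1d0s0     # break the mirror
  zpool add rpool c1t1d0s0        # stripe the freed disk into the pool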
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run the scrubs in sequence: start one pool's scrub, poll until it's
finished, start the next and wait, and so on, so I don't create too much load
and bring all I/O to a crawl.
The job is launched on
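The job itself is just a loop; a minimal sketch (the pool names and the polling interval are placeholders):

  #!/bin/sh
  for pool in tank backup media; do
      zpool scrub $pool
      # wait until zpool status no longer reports the scrub as running
      while zpool status $pool | grep -q "in progress"; do
          sleep 300
      done
  done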
My guess is that the grub bootloader wasn't upgraded on the actual boot disk.
Search for directions on how to mirror ZFS boot drives and you'll see how to
copy the correct grub loader onto the boot disk.
If you want a simpler way to do this, swap the disks. I did this when I was moving
from SXCE to
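For the grub fix mentioned above, the usual incantation on x86 is installgrub; the slice name here is only a placeholder for whatever the root slice of the actual boot disk is:

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0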
Thanks for all the suggestions. Now for a strange tale...
I tried upgrading to dev 130 and, as expected, things did not go well. All
sorts of permission errors flew by during the upgrade stage and it would not
start X-windows. I've heard that things installed from the contrib and extras
repositories
I've just made a couple of consecutive scrubs, each time it found a couple of
checksum errors but on different drives. No indication of any other errors.
That a disk scrubs cleanly on a quiescent pool in one run but fails in the next
is puzzling. It reminds me of the snv_120 odd number of disks
Mattias Pantzare wrote:
On Sun, Jan 10, 2010 at 16:40, Gary Gendel wrote:
I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I
I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on
random disks. If I repeat the scrub, it will fix error
+1
I support a replacement for an SCM system that used "open" as an alias for
"edit" and a separate command, "opened" to see what was opened for edit,
delete, etc. Our customers accidentally used "open" when they meant "opened"
so many times that we blocked it as a command. It saved us a lot o
The only reason I thought this news would be of interest is that the
discussions had some interesting comments. Basically, there is a significant
outcry because zfs was going away. I saw NexentaOS and EON mentioned several
times as the path to go.
Seems that there is some opportunity for Open
Apple is known to strong-arm in licensing negotiations. I'd really like to
hear the straight-talk about what transpired.
That's OK, it just means that I won't be using a Mac as a server.
You shouldn't hit the Raid-Z issue because it only happens with an odd number
of disks.
Alan,
Thanks for the detailed explanation. The rollback successfully fixed my 5-disk
RAID-Z errors. I'll hold off another upgrade attempt until 124 rolls out.
Fortunately, I didn't do a zfs upgrade right away after installing 121. For
those that did, this could be very painful.
Gary
Alan,
Super find. Thanks, I thought I was just going crazy until I rolled back to
110 and the errors disappeared. When you do work out a fix, please ping me to
let me know when I can try an upgrade again.
Gary
It looks like it's definitely related to the snv_121 upgrade. I decided to
roll back to snv_110 and the checksum errors have disappeared. I'd like to
issue a bug report, but I don't have any information that might help track this
down, just lots of checksum errors.
Looks like I'm stuck at snv
I have a 5-500GB disk Raid-Z pool that has been producing checksum errors right
after upgrading SXCE to build 121. They seem to be randomly occurring on all 5
disks, so it doesn't look like a disk failure situation.
Repeatedly running a scrub on the pool randomly repairs between 20 and a few
> Most video formats are designed to handle
> errors--they'll drop a frame
> or two, but they'll resync quickly. So, depending on
> the size of the
> error, there may be a visible glitch, but it'll keep
> working.
Actually, let's take MPEG as an example. There are two basic frame types,
anchor
I'm not sure. But when I would re-run a scrub, I got the errors at the same
block numbers, which indicated that the disk was really bad. It wouldn't hurt
to make the entry in the /etc/system file, reboot, and then try the scrub
again. If the problem disappears then it is a driver bug.
Gary
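Assuming the entry in question is the NCQ workaround quoted elsewhere in these posts (set sata:sata_func_enable = 0x5), applying it is just the following; the pool name is a placeholder:

  # append the workaround, reboot, then re-run the scrub
  echo "set sata:sata_func_enable = 0x5" >> /etc/system
  reboot
  zpool scrub tank
  # remove the line from /etc/system later to re-enable NCQ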
Norco usually uses Silicon Image-based SATA controllers. The OpenSolaris driver
for these has caused me enough headaches that I replaced it with a Marvell-based
board. I would also imagine that they use a 5-to-1 SATA multiplexer,
which is not supported by any OpenSolaris driver that I've tested
Are there any clues in the logs? I have had a similar problem when a disk bad
block was uncovered by zfs. I've also seen this when using the Silicon Image
driver without the recommended patch.
The former became evident when I ran a scrub. I saw the SCSI timeout errors pop
up in the "kern" sys
Just keep in mind that I tried the patched driver and occasionally had kernel
panics because of recursive mutex calls. I believe that it isn't
multi-processor safe. I switched to the Marvell chipset and have been much
happier.
> I'm about to build a fileserver and I think I'm gonna
> use OpenSolaris and ZFS.
>
> I've got a 40GB PATA disk which will be the OS disk,
> and then I've got 4x250GB SATA + 2x500GB SATA disks.
> From what you are writing I would think my best
> option would be to slice the 500GB disks in two 250
Thanks, Jim, for the entertainment. I was party to a similar mess. My father
owned and operated a small electrical supply business that I had worked at since the
age of 8. I was recently pulled into a large class-action asbestos suit against
the business since I was the only one still alive through t
> I can confirm that the marvell88sx driver (or kernel
> 64a) regularly hangs the SATA card (SuperMicro
> 8-port) with the message about a port being reset.
> The hang is temporary but troublesome.
> It can be relieved by turning off NCQ in /etc/system
> with "set sata:sata_func_enable = 0x5"
Than
Al,
That makes so much sense that I can't believe I missed it. One bay was the one
giving me the problems. Switching drives didn't affect that. Switching cabling
didn't affect that. Changing Sata controllers didn't affect that. However,
reorienting the case on its side did!
I'll be putting in
Thanks for the information. I am using the marvell88sx driver on a vanilla
Sunfire v20z server. This project has gone through many frustrating phases...
Originally I tried a Si3124 board with the box running a 5-1 Sil Sata
multiplexer. The controller didn't understand the multiplexer so I put in
I've got a 5-500Gb Sata Raid-Z stack running under build 64a. I have two
problems that may or may not be interrelated.
1) zpool scrub stops. If I do a "zpool status" it merrily continues for a while.
I can't see any pattern in this action with repeated scrubs.
2) Bad blocks on one disk. This is
Al,
Has there been any resolution to this problem? I get it repeatedly on my
5-500GB Raidz configuration. I sometimes get port drop/reconnect errors when
this occurs.
Gary
Hi,
I've got some issues with my 5-disk SATA stack using two controllers. Some of
the ports are acting strangely, so I'd like to play around and change which
ports the disks are connected to. This means that I need to bring down the
pool, swap some connections and then bring the pool back up. I
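A minimal sketch of the export/import dance (the pool name is a placeholder); ZFS finds the disks again by their on-disk labels, so it shouldn't matter which controller port each one ends up on:

  zpool export tank
  # power down and reshuffle the cabling here
  zpool import          # list pools visible on the newly arranged disks
  zpool import tank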
This is great news. A question crossed my mind. I'm sure it's a dumb one but I
thought I'd ask anyway...
How will LiveUpdate work when the boot partition is in the pool?
Gary
Rayson,
Filter drivers in NTFS are very clever. I was once toying with using it to put
unix-style symbolic links in Windows.
In this case, I think that such a clever idea wasn't thought through. Anyone
and everyone can add such a layer to the file operation stack. The worst part
is that you c
Perforce is based upon Berkeley DB (some early version), so standard "database
XXX on ZFS" techniques are relevant. For example, putting the journal file on a
different disk than the table files. There are several threads about optimizing
databases under ZFS.
If you need a screaming perforce ser
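For example, a sketch of splitting the journal away from the db.* tables; the pool, dataset, and path names are invented, but -r and -J are standard p4d options:

  # metadata (db.* files) on one pool, journal on another
  zfs create tank/p4root
  zfs create fastpool/p4journal

  # start the server with the journal pointed at the separate dataset
  p4d -r /tank/p4root -J /fastpool/p4journal/journal -p 1666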
Here is the problem I'm trying to solve...
I've been using a SPARC machine as my primary home server for years. A few years
back the motherboard died. I did a nightly backup to an external USB drive
formatted as UFS. I use an rsync-based backup tool called dirvish, so I
thought I had all the ba