Would this work (to get rid of an EFI label)?
dd if=/dev/zero of=/dev/dsk/ bs=1024k count=1
Then use
format
format might complain that the disk is not labeled. You
can then label the disk.
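A minimal sketch of that sequence, with a hypothetical device name (for an EFI-labeled disk the whole-disk node has no slice suffix):
dd if=/dev/zero of=/dev/rdsk/c1t0d0 bs=1024k count=1   # wipe the primary EFI label at the start of the disk
format -e    # select c1t0d0; when format complains the disk is not labeled, write a new (SMI) label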
Dale
Antonius wrote:
> can you recommend a walk-through for this process, or a bit
Hi.
I am trying to move the root volume from an existing svm mirror to a zfs
root. The machine is a Sun V880 (SPARC) running nv_96, with OBP version
4.22.34 which is AFAICT the latest.
The svm mirror was constructed as follows
/
d4 m
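For what it's worth, the usual Live Upgrade route for this kind of migration is roughly the following; this is only a sketch, and the pool name, BE name and slice (rpool, zfsBE, c1t1d0s0) are hypothetical:
zpool create rpool c1t1d0s0    # the root pool must live on an SMI-labeled slice
lucreate -n zfsBE -p rpool     # copy the current SVM-rooted BE into the ZFS pool
luactivate zfsBE               # activate the new boot environment
init 6                         # reboot into the ZFS root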
seconded
From memory, ZFS should repair any files it can and mark any it can't as bad.
Restore the files zfs has marked as corrupted from backup?
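A quick sketch of getting that list; the pool name (tank) is hypothetical:
zpool status -v tank    # -v lists the files with permanent (unrepairable) errors
# restore the listed files from backup, then clear the error counters
zpool clear tank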
fredrick phol wrote:
> seconded
I guess I missed something about "JZ", because I created a sieve rule for him and
never saw anything from him again. Is he going to be blocked in some way?
--
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u6 10/08 ZFS+
The pools are upgraded to version 10. Also, this is on Solaris 10u6.
# zpool upgrade
This system is currently running ZFS pool version 10.
All pools are formatted using this version.
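For the filesystem half of the question, the corresponding commands are:
zfs upgrade       # report the ZFS filesystem version(s) in use
zfs upgrade -a    # optionally upgrade all filesystems to the current version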
Ben
> What's the output of 'zfs upgrade' and 'zpool
> upgrade'? (I'm just
> curious - I had a similar situation
On Thu, 22 Jan 2009, Al Slater wrote:
> Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
> supported
This line is coming from svm, which leads me to believe that the zfs
boot blocks were not properly installed by live upgrade.
You can try doing this by hand, with the command
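On SPARC the ZFS boot block is normally installed with installboot; a sketch, assuming the root pool slice is c1t0d0s0 (hypothetical):
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0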
I'm running OpenSolaris 10/08 snv_101b with the auto snapshot packages.
I'm getting this error:
/usr/lib/time-slider-cleanup -y
Traceback (most recent call last):
File "/usr/lib/time-slider-cleanup", line 10, in
main(abspath(__file__))
File "/usr/lib/../share/time-slider/lib/time_slider/
Brandon High wrote:
> On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
> wrote:
>> Several people reported this same problem. They changed their
>> ethernet adaptor to an Intel ethernet interface and the performance
>> problem went away. It was not ZFS's fault.
>
> It may not be a ZFS problem,
I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum
errors reported by zpool status. The errors are showing up over time on every
disk in the pool. In normal operation there might be errors on two or three
disks each day, and sometimes there are enough errors so it repor
Hi Jay,
Jay Anderson wrote:
> I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum
> errors reported by zpool status. The errors are showing up over time on every
> disk in the pool. In normal operation there might be errors on two or three
> disks each day, and someti
Here's what fixed this:
Added
tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
to /kernel/drv/e1000g.conf
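One note on applying it: the driver only picks these properties up when it re-reads its .conf, so after editing either reload or reboot; for example:
update_drv e1000g    # ask the system to re-read /kernel/drv/e1000g.conf
init 6               # if the offload settings still don't take effect, reboot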
We're evaluating the possibility of speeding up NFS operations on our
X4540s with dedicated log devices. What we are specifically evaluating
is replacing one or two of our spare SATA disks with SATA SSDs.
Has anybody tried using SSD device(s) as dedicated ZIL devices in a
X4540? Are there any kno
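For anyone who does try it, attaching the SSDs as a dedicated (mirrored) log is a one-liner; the pool and device names here are made up:
zpool add tank log mirror c4t6d0 c4t7d0   # mirrored slog on the two SSDs
zpool status tank                         # they appear under a separate 'logs' section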
Thanks all, I think it's a combination of HDD and NIC performance degradation. I'll try
replacing them, and hope I can watch my movies again ;))
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience
any issues.
Nathan.
Brandon High wrote:
> On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
> wrote:
>> Several people reported this same problem. They changed their
>> ethernet adapt
On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
wrote:
> Are you able to qualify that a little?
>
> I'm using a realtek interface with OpenSolaris and am yet to experience any
> issues.
There's a lot of anecdotal evidence that replacing the rge driver with
the gani driver can fix poor NFS and CI
Interesting. I'll have a poke...
Thanks!
Nathan.
Brandon High wrote:
> On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
> wrote:
>> Are you able to qualify that a little?
>>
>> I'm using a realtek interface with OpenSolaris and am yet to experience any
>> issues.
>
> There's a lot of anecdotal
I'm using OpenSolaris with ZFS as a backup server. I copy all my data from
various sources onto the OpenSolaris server daily, and run a snapshot at the
end of each backup. Using gzip-1 compression, mount -F smbfs, and the
--in-place and --no-whole-file switches for rsync, I get efficient space
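A rough sketch of one backup cycle as described; the share, mount point and dataset names are invented:
zfs set compression=gzip-1 backup/data     # set once on the target dataset
mount -F smbfs //user@source-host/share /mnt/src
rsync -a --in-place --no-whole-file /mnt/src/ /backup/data/
zfs snapshot backup/data@`date +%Y-%m-%d`
umount /mnt/src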
Yes, that's exactly what I did. The issue is that I can't get the "corrected"
label to be written once I've zeroed the drive. I get an error from fdisk that
apparently views the backup label
On Thu, 22 Jan 2009, BJ Quinn wrote:
> Is there any way to speed up a compressed zfs send -R? Or is there
> some other way to approach this? Maybe some way to do a bit-level
> clone of the internal drive to the external drive (the internal
> backup drive is not the same as the OS drive, so it
This is in snv_86. I have a four-drive raidz pool. One of the drives died. I
replaced it, but wasn't careful to put the new drive on the same controller
port; one of the existing drives wound up on the port that had previously been
used by the failed drive, and the new drive wound up on the p
> I would get a new 1.5 TB and make sure it has the new
> firmware and replace
> c6t3d0 right away - even if someone here comes up
> with a magic solution, you
> don't want to wait for another drive to fail.
The replacement disk showed up today but I'm unable to replace the one marked
UNAVAIL:
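For reference, a same-slot replacement is normally attempted with something like the following (pool name hypothetical, device as above):
zpool replace tank c6t3d0    # resilver a new disk into the slot of the UNAVAIL one
zpool status tank            # watch resilver progress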
On Thu, 22 Jan 2009, Scott L. Burson wrote:
> This is in snv_86. I have a four-drive raidz pool. One of the drives
> died. I replaced it, but wasn't careful to put the new drive on the
> same controller port; one of the existing drives wound up on the port
> that had previously been used by
Not quite .. it's 16KB at the front and 8MB at the back of the disk (16384
sectors) for the Solaris EFI label - so you need to zero out both of these.
Of course, since these drives are <1TB, I find it's easier to format
to SMI (VTOC) .. with format -e (choose SMI, label, save, validate -
then choose EFI
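A sketch of clearing both label copies by hand, with a hypothetical disk c1t0d0:
# NSECT below is the disk's total sector count, as reported by 'format' (verify) or prtvtoc
dd if=/dev/zero of=/dev/rdsk/c1t0d0 bs=512 count=32                                   # 16KB primary label at the front
dd if=/dev/zero of=/dev/rdsk/c1t0d0 bs=512 oseek=`expr $NSECT - 16384` count=16384    # 8MB backup label at the back
format -e c1t0d0   # simpler alternative: just relabel to SMI (choose SMI, label, save, validate)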
Well, the second resilver finished, and everything looks okay now. Doing one
more scrub to be sure...
-- Scott
Mark J Musante wrote:
> On Thu, 22 Jan 2009, Al Slater wrote:
>> Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
>> supported
>
> This line is coming from svm, which leads me to believe that the zfs
> boot blocks were not prope
Al Slater wrote:
> Mark J Musante wrote:
>> On Thu, 22 Jan 2009, Al Slater wrote:
>>> Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
>>> supported
>> This line is coming from svm, which leads me to believe that the zfs
>> boot
I don't have an x4540, and this may not be relevant to your usage, but the
concern I would have would be how this is going to affect throughput. An x4540
can stream data to and from the disk far faster than any SATA SSD, or even a
pair of SATA SSD's can. I'd be nervous about improving my laten
However, now that I've written that: Sun uses SATA (SAS?) SSDs in their high-end
Fishworks storage, so I guess it definitely works for some use cases.
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones wrote:
> On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins wrote:
>> Ian Collins wrote:
>>> Send/receive speeds appear to be very data dependent. I have several
>>> different filesystems containing differing data types. The slowest to
>>> replicate is ma
Brent Jones wrote:
> On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones wrote:
>
>> On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins wrote:
>>
>>> Ian Collins wrote:
>>>
>>>> Send/receive speeds appear to be very data dependent. I have several
>>>> different filesystems containing differing