over from it.
Thanks again for all your help,
Austin
TIME                           CLASS
May 28 2010 07:57:25.712193068 ereport.fs.zfs.checksum
nvlist version: 0
        class = ereport.fs.zfs.checksum
        ena = 0xd953c9a23d51
        detector
w hardware
(after updating the machine), I added two mirrored disks to the pool to
alleviate the space issue until I could back everything up, destroy the pool,
and recreate it with six disks instead of three.
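For reference, adding a mirrored pair as a new top-level vdev looks roughly like this (pool and device names here are made up):

    # Add a two-disk mirror as a new top-level vdev alongside the existing disks
    # (zpool may warn about a mismatched replication level and ask for -f)
    zpool add tank mirror c2t0d0 c2t1d0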
Is this a known bug with a fix, or am I out of luck with these files?
Thanks,
Austin
I don't know how much progress has been made on this, but back when I moved
from FreeBSD (an older version, maybe the first with stable ZFS) to Solaris,
this couldn't be done, since the two ZFS versions were not quite compatible
yet. I got some new drives, since the ones I had were dated, copied the data to th
I've been trying to figure out how the copies property works and have been
experimenting, but I haven't really seen any results (both with 5 physical
drives I will soon add to my data pool as a second RAIDZ and on a virtual
machine with two RAIDZs in a pool). First: Is data copied across physical dev
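In case it helps anyone else poking at this, copies is a per-dataset property set with zfs set (pool and dataset names below are examples), and it only affects data written after it is set:

    # Store two copies of each block for this dataset (applies to new writes only)
    zfs set copies=2 tank/data
    # Confirm the setting
    zfs get copies tank/data

ZFS tries to place the extra ditto copies on different vdevs where it can, but copies is not a substitute for the redundancy of a mirror or RAIDZ.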
I didn't find any clear answer in the documentation, so here it goes:
I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to
the pool. If one of the arrays fails, would all the data in the pool be lost,
or would it be like disk spanning, with only the data on the failed a
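For what it's worth, here is roughly how that layout would be built (device names are invented); ZFS stripes across top-level vdevs rather than spanning them, and the comments reflect that:

    # Pool with one 4-disk RAIDZ top-level vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    # Add a second RAIDZ vdev; writes are striped across both vdevs,
    # so losing an entire vdev makes the whole pool unavailable
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0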
A bit off the subject, but what would be the advantage, for virtualization, of
using a pool of files versus just creating another zfs on an existing pool? My
purpose for using the file pools was to experiment and learn about any quirks
before I go to production. It let me do things like set up a large ra
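For anyone wanting to try the same kind of experiment, a throwaway pool can be built from plain files (paths and sizes here are arbitrary):

    # Create backing files on an existing filesystem
    mkfile 128m /var/tmp/v1 /var/tmp/v2 /var/tmp/v3
    # Build a test RAIDZ pool from the files; handy for learning, not for production
    zpool create testpool raidz /var/tmp/v1 /var/tmp/v2 /var/tmp/v3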
Which part is the bug? The crash or allowing pools of files that are on a zfs?
I should clarify. Say I have a zfs with the mount point /u00 that I import on
the system. When it creates the /u00 directory on the UFS root, the directory
is created with mode 700; then the zfs is mounted, and the mount point appears
to have the permissions of the root of the zfs, 755 in this case.
But, if a non-ro
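A minimal way to see the two sets of permissions I mean (pool and dataset names are hypothetical):

    # While mounted, the mount point shows the permissions of the zfs root (755 here)
    ls -ld /u00
    # Unmount to expose the underlying directory ZFS created on the UFS root (700)
    zfs unmount tank/u00
    ls -ld /u00
    zfs mount tank/u00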
After importing some pools after a re-install of the OS, I hit that "..:
Permission denied" problem. I figured out I could unmount, chmod, and mount to
fix it, but that wouldn't be a good situation on a production box. Is there
any way to fix this problem without unmounting?
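For the record, the workaround I mentioned looks like this (dataset name is just an example):

    # Unmount, fix the underlying mountpoint directory, and remount
    zfs unmount tank/u00
    chmod 755 /u00
    zfs mount tank/u00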
When messing around with zfs trying to break it, I created a new pool using
files on an existing zfs filesystem. It seemed to work fine until I created a
snapshot of the original filesystem and then tried to destroy the pool using
the files. The system appeared to deadlock and had to be rebooted.
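Roughly the sequence that got me there, in case anyone wants to reproduce it (names and sizes are made up):

    # Back a new pool with files that live on an existing zfs filesystem
    mkfile 100m /tank/fs/f1 /tank/fs/f2
    zpool create filepool /tank/fs/f1 /tank/fs/f2
    # Snapshot the filesystem holding the backing files
    zfs snapshot tank/fs@snap1
    # Destroying the file-backed pool is where the apparent deadlock happened
    zpool destroy filepool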