Hello,
I currently have a RAID1 array set up on Windows 7 with a pair of 1.5TB drives.
I don't have enough space on any other drive to make a backup of all this data,
and I really don't want to copy my ~1.1 TB of files over the network anyway.
What I want to do is get a third 1.5TB drive and move everything over to ZFS.
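A minimal sketch of one way the move could go, assuming the new drive shows up
as c1t2d0 and the pool is named tank (both names are assumptions), and assuming
the old NTFS mirror can be mounted read-only (see the NTFS posts further down;
/ntfs_mount is a hypothetical mount point):

zpool create tank c1t2d0        # temporary single-disk pool on the new drive
cp -rp /ntfs_mount/* /tank/     # copy the data off the old NTFS mirror

Once the data is off the old mirror, its two drives are free to be rebuilt into
the final pool layout.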
Wow, thank you very much for the clear instructions.
And yes, I have another 120GB drive for the OS, separate from A, B and C. I
will repartition that drive and install Solaris. Then maybe at some point I'll
wipe the entire drive and just install a single OS.
I have a question about step 6, "S..."
2) Format the raw partition into a Solaris FS.
3) Install OpenSolaris 2009.06; the setup should automatically configure
dual boot between Windows and OpenSolaris.
Does that make sense?
Thanks again.
I'm also considering adding a cheap SSD as a cache drive. The only problem is
that SSDs lose performance over time, because when something is deleted, the
blocks are not actually erased. So the next time something is written to the
same blocks, the drive must first erase them, then write.
To fix this, SSDs allow the OS to issue TRIM commands so that blocks can be
erased ahead of time.
I was talking about a write cache (slog/ZIL, I suppose). This is just a media
server for home. The idea is that when I copy an HD video from my camera to the
network drive, it is always several GBs. So if it could copy the file to the SSD
first and then slowly move it to the normal HDs, that would be ideal.
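For reference, attaching an SSD to an existing pool as a dedicated log device
is a one-liner; a minimal sketch, assuming a pool named tank and the SSD at
c2t0d0 (both names are assumptions). Note that a slog only absorbs synchronous
writes, so large asynchronous copies may bypass it:

zpool add tank log c2t0d0     # SSD as dedicated ZIL (slog); names assumed
zpool add tank cache c2t0d0   # alternatively, SSD as a read cache (L2ARC)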
I'm very excited. Throw some code up on GitHub as soon as you are able. I'm
sure there are plenty of people (myself included) who would like to help test
it. I've already been playing around with ZFS using zvols on Fedora 12. I would
love to have a ZPL, no matter how experimental.
Hey,
I'm running some tests right now before setting up my server. I'm running
Nexenta Core 3.02 (RC2, based on OpenSolaris build 134, I believe) in VirtualBox.
To do the test, I'm creating three empty files and then making a raidz pool:
mkfile -n 1g /foo
mkfile -n 1g /foo1
mkfile -n 1g /foo2
Then I create the pool from the three files.
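A minimal sketch of that creation step (the pool name tank is an assumption):

zpool create tank raidz /foo /foo1 /foo2   # raidz vdev backed by the files
zpool status tank                          # verify the layout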
Thanks, that works. But only when I do a proper export first.
If I export the pool, then I can import it with:
zpool import -d /
(the test files are located in /)
but if I destroy the pool, then I can no longer import it back, even though the
files are still there. Is this normal?
Thanks for your help.
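This is expected behavior: destroying a pool marks it as destroyed on disk, and
a normal import scan skips such pools. A minimal sketch of the recovery path
using the -D flag (which a later post confirms works); the pool name tank is an
assumption:

zpool import -d /           # a destroyed pool will not show up here
zpool import -D -d /        # -D lists destroyed pools as well
zpool import -D -d / tank   # re-import the destroyed pool by name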
Thanks. As I discovered from that post, VirtualBox does not have cache
flushing enabled by default; IgnoreFlush must be explicitly turned off:
VBoxManage setextradata VMNAME
"VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0
where VMNAME is the name of your virtual machine and [x] is the LUN of the
virtual disk.
Although I tried ...
Thank you. The -D option works.
And yes, now I feel a lot more confident about playing around with the FS. I'm
planning on moving an existing RAID1 NTFS setup to ZFS, but since I'm on a
budget I only have three drives in total to work with. I want to make sure I
know what I'm doing before I mess something up.
Hello,
I finally got the new drive and I am in the process of moving the data. The
problem I have now is that I can't mount the NTFS partition. I followed the
directions here:
http://sun.drydog.com/faq/9.html
and tried both methods, but the problem is that when I run fdisk on the NTFS
drive, it does not detect the partitions.
Hello,
I'm using OpenSolaris b134 and I'm trying to mount an NTFS partition. I
followed the instructions located here:
http://sun.drydog.com/faq/9.html
and tried both methods, but the problem is that when I run fdisk on the NTFS
drive, it does not detect the partitions. In all the tutorials, fdisk shows
the partitions.
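One possible cause worth checking (an assumption on my part, not something
stated in the posts): on Solaris, fdisk reads the partition table only from
the whole-disk p0 device node, not from a slice. A sketch with an assumed
device name:

fdisk /dev/rdsk/c1t1d0p0   # point fdisk at the p0 (whole disk) node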
Hi,
I have a question about snapshots. If I restore a file system from a snapshot
I took in the past, is it possible to revert back to before I restored? i.e.:
zfs snapshot test@yesterday
mkdir /test/newfolder
zfs rollback test@yesterday
so now newfolder is gone. But is there a way to get it back?
I'm not trying to fix anything in particular; I'm just curious, in case I
roll back a filesystem and then realize I wanted a file from the original file
system (before the rollback).
I read the section on clones here:
http://docs.sun.com/app/docs/doc/819-5461/gavvx?a=view
but I'm still not sure how they would help here.
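A rollback itself cannot be undone: rolling back discards everything written
after the target snapshot. Clones are the usual way to look at old state
without rolling back at all; a minimal sketch using the dataset names from the
earlier example (the clone name and file path are hypothetical):

zfs clone test@yesterday test/yesterday_view   # expose the old state
cp /test/yesterday_view/somefile /test/        # grab what you need
zfs destroy test/yesterday_view                # drop the clone afterwards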
Hey guys,
I had a ZFS system in raidz1 that was working until there was a power outage,
and now I'm getting this:
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: ...
OK, so now I have no idea what to do. The scrub is not working either. The pool
is only 3x 1.5TB drives, so it should not take this long. Does anyone know what
I should do next?
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
Thanks for your help.
I did a zpool clear and now this happens:
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
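To find out which files are affected, zpool status has a verbose flag; a
minimal sketch with the pool name from the posts:

zpool status -v tank   # -v prints the paths of files with unrecoverable errors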
Reading through this page
(http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html), it seems like all I
need to do is 'rm' the file. The problem is finding it in the first place. Near
the bottom of the page it says:
"If the damage is within a file data block, then the file can safely be
removed, thereby clearing the error from the system."
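Putting the pieces together, a minimal sketch of the whole recovery flow; the
damaged file's path is hypothetical, standing in for whatever 'zpool status -v'
reports:

rm /tank/videos/clip001.mts   # hypothetical path from zpool status -v
zpool scrub tank              # re-verify the pool
zpool clear tank              # reset the error counters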