Please help. :(
I use snv_134
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On May 28, 2010, at 4:28 PM, Juergen Nickelsen wrote:
> Bob Friesenhahn writes:
>> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>>>
>>> I’m primarily concerned with the possibility of a bit flop. If
>>> this occurs, will the stream be lost? Or will the file that the bit
>>> flop occurred
On Fri, May 28, 2010 at 9:04 AM, Thanassis Tsiodras wrote:
> Is there no better solution?
If you don't care about snapshots, you can also create a new dataset
and move or copy the files to it.
If you do care about snapshots, you can send/recv the dataset, which
will apply the copies property and
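For the send/recv route, a sketch with hypothetical names (tank/data as the existing dataset, tank/new as a parent created with copies=2). A plain `zfs send` without -R or -p does not transmit properties, so the received dataset inherits copies=2 from its new parent and every block is written doubled on receive:

```shell
# Hypothetical pool/dataset names; adjust to your layout.
zfs create -o copies=2 tank/new      # parent whose children inherit copies=2
zfs snapshot tank/data@move          # snapshot what you have now
zfs send tank/data@move | zfs receive tank/new/data
zfs get copies tank/new/data         # should show copies=2, inherited
```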
On 05/27/10 09:49 PM, Haudy Kazemi wrote:
Brandon High wrote:
On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh wrote:
I was wondering if there is a special option to share out a set of
nested directories? Currently if I share out a directory with
/pool/mydir1/mydir2 on a system, mydir1 shows up
In a stripe zpool configuration (no redundancy), is a certain disk regarded
as an individual vdev, or do all the disks in the stripe represent a single
vdev? In a raidz configuration I'm aware that every single group of raidz
disks is regarded as a top-level vdev, but I was wondering how it is in the
Hi--
I can't speak to running a ZFS root pool in a VM, but the problem is
that you can't add another disk to a root pool. All the boot info needs
to be contiguous. This is a boot limitation.
I've not attempted either of these operations in a VM but you might
consider:
1. Replacing the root pool
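The replacement can be sketched as follows, assuming the root pool is rpool on c0t0d0s0 and the new, larger disk is c0t1d0s0 (hypothetical device names; a root pool needs an SMI label with a slice covering the disk):

```shell
zpool attach rpool c0t0d0s0 c0t1d0s0   # mirror onto the larger disk
zpool status rpool                     # wait until the resilver completes
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
                                       # make the new disk bootable (x86)
zpool detach rpool c0t0d0s0            # drop the small disk
```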
Hi,
whenever I create a new zfs my PC hangs at boot. Basically where the login
screen should appear. After booting from livecd and removing the zfs the boot
works again.
This also happened when I created a new zpool for the other half of my HDD.
Any idea why? How to solve it?
hello, all
I have constrained disk space (only 8GB) while running the OS inside a VM.
Now I want to add more. It is easy to grow the disk for the VM, but how can I
grow the filesystem in the OS?
I cannot use autoexpand because it isn't implemented in my build:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it was 17
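As a workaround on builds without autoexpand, re-reading the device label on import is often enough to pick up the larger size. A sketch for a non-root pool named "mypool" (hypothetical name; a root pool cannot be exported while booted from it, so the same steps would have to be done from a live CD):

```shell
# after growing the virtual disk in the VM:
format                  # relabel so the slice covers the new space
zpool export mypool
zpool import mypool     # device size is re-read from the label on import
zpool list mypool       # SIZE should now show the added space
```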
Hello ZFS guru's on the list :)
I started ZFS approx 1.something years ago and I'm following the discussions
here for some time now. What confused me all the time is the different
parameters and ZFS tunables and how they affect data integrity and
availability.
Now I took some time and tried
On Fri, 28 May 2010, Gregory J. Benscoter wrote:
I’m primarily concerned with the possibility of a bit flop. If
this occurs, will the stream be lost? Or will the file that the bit
flop occurred in be the only degraded file? Lastly, how does the
reliability of this plan compare to more tradi
After looking through the archives I haven't been able to assess the
reliability of a backup procedure which employs zfs send and recv. Currently
I'm attempting to create a script that will allow me to write a zfs stream to a
tape via tar like below.
# zfs send -R p...@something
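One thing worth noting about the reliability question: zfs recv verifies the checksums embedded in the stream and aborts at the first corrupted block, so a single flipped bit on tape can make the whole stream unrestorable rather than degrading one file. Keeping an independent digest of what was written lets you verify a tape before you need it. A sketch with hypothetical snapshot and tape device names:

```shell
zfs send -R tank@backup | tee /dev/rmt/0n | digest -a sha1 > tank-backup.sha1
# later, verify the tape without receiving it:
dd if=/dev/rmt/0n bs=1048576 | digest -a sha1   # compare with tank-backup.sha1
```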
Following is the output of "zpool iostat -v". My question is regarding the
datapool row and the raidz2 row statistics. The datapool row statistic "write
bandwidth" is 381 which I assume takes into account all the disks - although it
doesn't look like it's an average. The raidz2 row statistic "write
I've read on the web that copies=2 affects only the files copied *after* I have
changed the setting - does this mean I have to ...
bash$ cat /tmp/stupid.sh
cp "$1" /tmp/ || exit 1
rm "$1" || exit 1
cp "/tmp/$(basename "$1")" "$1" || exit 1
bash$ gfind /path/to/... -type f -exec /tmp/stupid.sh '{}' \;
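A slightly safer variant of the script above, written as a shell function for illustration (rewrite_file is a hypothetical name): it copies to a temporary file on the same dataset and renames it into place, so there is no window where the data exists only in /tmp, and files in different directories with the same basename cannot collide.

```shell
#!/bin/sh
# Re-copy a file in place so its blocks are rewritten (and pick up the
# current copies= setting). "rewrite_file" is a hypothetical helper name.
rewrite_file() {
    f=$1
    tmp=$(mktemp "${f}.XXXXXX") || return 1           # temp file on the SAME dataset
    cp -p "$f" "$tmp" || { rm -f "$tmp"; return 1; }  # -p keeps mode/times
    mv "$tmp" "$f"    # rename into place; data is never only in /tmp
}
```

Saved as a standalone script, it can be driven from gfind -exec exactly like the original.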
I have a Sun A5000, 22x 73GB 15K disks in split-bus configuration, two dual 2Gb
HBAs and four fibre cables from server to array, all for just under $200.
The array gives 4Gb of aggregate throughput in each direction across two 11 disk
buses.
Right now it is the main array, but when we outgrow it
Just to close this. It turns out you can't get the crtime over NFS so without
access to the NFS server there is only limited checking that can be done.
I filed
CR 6956379 Unable to open extended attributes or get the crtime of files in
snapshots over NFS.
--chris