On Thu, Jun 30, 2011 at 11:40:53PM +0100, Andrew Gabriel wrote:
> On 06/30/11 08:50 PM, Orvar Korvar wrote:
>> I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
>> can only see 300GB. Where is the rest? Is there a command I can run to reach
>> the rest of the data? Will
On 06/30/11 08:50 PM, Orvar Korvar wrote:
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
can only see 300GB. Where is the rest? Is there a command I can run to reach the
rest of the data? Will scrub help?
Not much to go on - no one can answer this.
How did you g
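A few commands usually narrow down where the space went before anyone can comment; the pool, dataset and mount point names below are only placeholders:

  format                        # inspect the disk's partition/slice table
  zpool status -v mypool        # pool layout and any reported errors
  zfs list -o space -r mypool   # used vs. available per dataset, including snapshots
  df -h /mypool/mydata          # what the mounted filesystem itself reports

A scrub verifies checksums against the pool's redundancy; it does not change how much space a dataset sees.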
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
> On 06/30/2011 01:33 PM, Jim Klimov wrote:
>> 2011-06-30 15:22, Sašo Kiselkov wrote:
>>> I tried increasing this
> value to 2000 or 3000, but without effect - perhaps I need to set it
> at pool mount time or in /etc/system. Could somebody wit
On 06/30/2011 01:33 PM, Jim Klimov wrote:
> 2011-06-30 15:22, Sašo Kiselkov wrote:
>> I tried increasing this
value to 2000 or 3000, but without effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more knowledge
of these internals pleas
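For reference, ZFS tunables of this kind are normally changed in one of two ways; the tunable name below is only a placeholder, since the preview cuts the real one off:

  in /etc/system (applies after the next reboot):
    set zfs:zfs_some_tunable = 3000
  on the running kernel, as root (lost at reboot; 0t marks a decimal value):
    echo "zfs_some_tunable/W 0t3000" | mdb -kw

/W writes a 32-bit value; a 64-bit tunable would need /Z instead.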
Hi,
I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris
b134) box running zpool v26.
Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works
fine, but I have a problem accessing my replicated volume. When I'm typing and
autocomplete with the tab key th
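For anyone comparing notes, it helps to check first what the received datasets look like on the FreeBSD side; pool and dataset names here are placeholders:

  zfs list -t all -r tank/received                 # which datasets and snapshots actually arrived
  zfs get mountpoint,mounted tank/received/myvol   # where a received filesystem should show up
  ls /dev/zvol/tank/received/                      # for zvols on FreeBSD, device nodes appear here

Streams from an older pool version generally receive fine on a newer one; the reverse direction can be refused.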
Hi there.
I am trying to get my filesystems off a pool that suffered irreparable damage
due to 2 disks partially failing in a 5-disk raidz.
One of the filesystems has an I/O error when trying to read one of the files off
it.
This filesystem cannot be sent - zfs send stops with this error:
"war
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
can only see 300GB. Where is the rest? Is there a command I can run to reach the
rest of the data? Will scrub help?
> Actually, you do want /usr and much of /var on the root pool, they
> are integral parts of the "svc:/filesystem/local" needed to bring up
> your system to a useable state (regardless of whether the other
> pools are working or not).
Ok. I have my feelings on that topic but they may not be as rel
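For anyone wanting to check that dependency claim on their own box, the full FMRI is svc:/system/filesystem/local:default and svcs can show both sides of it:

  svcs -d svc:/system/filesystem/local:default   # services it depends on
  svcs -D svc:/system/filesystem/local:default   # services that depend on it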
Thanks for the input. This was not a case of a degraded vdev, but only a
missing log device (which I cannot get rid of...). I'll try offlining some
vdevs and see what happens - although this should be automatic at all times
IMO.
On Jun 30, 2011 1:25 PM, "Markus Kovero" wrote:
>
>
>> To me it seems th
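As a side note, on recent pool versions a log device does not have to be gotten rid of implicitly at all - it can be removed explicitly; pool and device names below are placeholders:

  zpool upgrade -v              # log device removal needs pool version 19 or later
  zpool status mypool           # note the exact name of the log vdev
  zpool remove mypool c1t2d0    # removes a slog (works for log and cache devices, not data vdevs)
  zpool offline mypool c1t3d0   # temporarily offline a regular vdev for testing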
2011-06-30 15:22, Sašo Kiselkov wrote:
I tried increasing this
value to 2000 or 3000, but without effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more knowledge
of these internals please chime in?
And about this part - it was my understanding an
On 06/30/2011 01:10 PM, Jim Klimov wrote:
> 2011-06-30 11:47, Sašo Kiselkov wrote:
>> On 06/30/2011 02:49 AM, Jim Klimov wrote:
>>> 2011-06-30 2:21, Sašo Kiselkov wrote:
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
>> Also there is a buffer-size limit, like this (384Mb):
>> set zfs:zfs
2011-06-30 11:47, Sašo Kiselkov wrote:
On 06/30/2011 02:49 AM, Jim Klimov wrote:
2011-06-30 2:21, Sašo Kiselkov wrote:
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
Also there is a buffer-size limit, like this (384Mb):
set zfs:zfs_write_limit_override = 0x18000000
or on command-line like this:
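For reference, the runtime counterpart of that /etc/system line is usually an mdb write of the same value (384 MB = 0x18000000); this shows the general mechanism rather than the exact command cut off above:

  echo "zfs_write_limit_override/Z 0x18000000" | mdb -kw   # /Z writes a 64-bit value; takes effect immediately, lost at reboot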
> To me it seems that writes are not directed properly to the devices that have
> most free space - almost exactly the opposite. The writes seem to go to the
> devices that have _least_ free space, instead of the devices that have most
> free space. The same effect that can be seen in these 6
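The imbalance being described is easy to confirm per vdev; the pool name is a placeholder:

  zpool iostat -v mypool 5   # per-vdev capacity (alloc/free) plus read/write rates, sampled every 5 seconds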
On 30.06.11 04:44, Erik Trimble wrote:
On 6/29/2011 12:51 AM, Stephan Budach wrote:
Hi,
what are the steps necessary to move the OS rpool from an external
USB drive to an internal drive?
I thought about adding the internal hd as a mirror to the rpool and
then detaching the USB drive, but I
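In outline, the attach/resilver/detach route usually looks like this on an OpenSolaris-era x86 box; the device names are placeholders, and the new disk needs boot blocks installed before the old one is detached:

  zpool attach rpool c1t0d0s0 c2t0d0s0   # c1t0d0s0 = current USB disk, c2t0d0s0 = internal disk (slice 0, SMI label)
  zpool status rpool                     # wait for the resilver to finish
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
  zpool detach rpool c1t0d0s0            # then point the BIOS at the internal disk

On SPARC the installgrub step would be installboot instead.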