Hello everybody,
is there any known way to configure the point-in-time *when* the time-slider
will snapshot/rotate?
With hundreds of zfs filesystems, the daily snapshot rotation slows down a big
file server significantly, so it would be better to have the snapshots rotated
outside the usual working hours.
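Time-slider's snapshots are driven by the zfs-auto-snapshot SMF service, so one
option (a sketch only; the schedule-related property names vary between releases)
is to inspect the daily instance, or to disable it and run the rotation from cron
at a quieter time:

  # svcs -a | grep auto-snapshot                                    # list the auto-snapshot instances
  # svccfg -s svc:/system/filesystem/zfs/auto-snapshot:daily listprop   # inspect its properties
  # svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily   # if switching to cron instead
  # example crontab entry ('tank' is a placeholder pool name), running at 03:30:
  30 3 * * * /usr/sbin/zfs snapshot -r tank@daily-`date +\%Y\%m\%d`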
On 2011-Jul-26 17:24:05 +0800, "Fajar A. Nugraha" wrote:
>Shouldn't modern SSD controllers be smart enough already that they know:
>- if there's a request to overwrite a sector, then the old data on
>that sector is no longer needed
ZFS never does update-in-place, and UFS only does update-in-place.
On Tue, Jul 26, 2011 at 1:14 PM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:
> Yes, you can reinstall the OS on another disk and as long as the
> OS install doesn't touch the other pool's disks, your
> previous non-root pool should be intact. After the install
> is complete, just import the pool.
Are the "disk active" lights typically ON when this happens?
On Tue, Jul 26, 2011 at 3:27 PM, Garrett D'Amore wrote:
> This is actually a recently known problem, and a fix for it is in the
> 3.1 version, which should be available any minute now, if it isn't
> already available.
>
> The problem has to do with some allocations which are sleeping, and jobs
> in the ZFS subsystem get backed behind some other work.
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
wrote:
> G'Day,
>
> - zfs pool with 4 disks (from Clariion A)
> - must migrate to Clariion B (so I created 4 disks with the same size,
> available for the zfs)
>
> The zfs pool has no mirrors, my idea was to add the new 4 disks from
> the Clariion B
Hi Roberto,
Yes, you can reinstall the OS on another disk and as long as the
OS install doesn't touch the other pool's disks, your
previous non-root pool should be intact. After the install
is complete, just import the pool.
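For example, after the reinstall (a minimal sketch; 'tank' is a placeholder for
your actual pool name):

  # zpool import        # list importable pools found on the attached disks
  # zpool import tank   # import the previous non-root pool by name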
Thanks,
Cindy
On 07/26/11 10:49, Roberto Scudeller wrote:
Hi all,
Hi Garrett-
Is it something that could happen at any time on a system that has been working
fine for a while? That system has 256G of RAM, so I think "adequate" is not a
concern here :)
We'll try 3.1 as soon as we can download it.
Ian
This is actually a recently known problem, and a fix for it is in the
3.1 version, which should be available any minute now, if it isn't
already available.
The problem has to do with some allocations which are sleeping, and jobs
in the ZFS subsystem get backed behind some other work.
If you have
I'm on S11E 150.0.1.9 and I replaced one of the drives and the pool seems to be
stuck in a resilvering loop. I performed a 'zpool clear' and 'zpool scrub', and it
just complains that the drives I didn't replace are degraded because of too
many errors. Oddly the replaced drive is reported as being
On Tue, Jul 26, 2011 at 7:51 AM, David Dyer-Bennet wrote:
> "Processing" the request just means flagging the blocks, though, right?
> And the actual benefits only accrue if the garbage collection / block
> reshuffling background tasks get a chance to run?
>
I think that's right. TRIM just gives h
On Tue, Jul 26, 2011 at 5:59 AM, Edward Ned Harvey <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> like 4%, and for some reason (I don't know why) there's a benefit to
> optimizing on 8k pages. Which means no. If you overwrite a sector of a
>
From what I've heard it's due in lar
Hi all,
I lost my storage because rpool doesn't boot. I tried to recover, but
OpenSolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall OpenSolaris on a new flash drive
without disturbing the other pool's disks?
No dedup.
The hiccups started around 2am on Sunday while (obviously) nobody was
interacting with either the clients or the server. It's been running for
months (as is) without any problem.
My guess is that it's a defective hard drive that, instead of failing outright,
just stutters. Or mayb
Hi
It is better to just create a new pool on array B,
then use cpio to copy the data.
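A rough sketch of that approach (pool and device names are placeholders; zfs
send/receive is an alternative that also preserves snapshots and dataset
properties):

  # zpool create newpool c3t0d0 c3t1d0 c3t2d0 c3t3d0   # new pool on the Clariion B LUNs
  # cd /oldpool; find . | cpio -pdm /newpool           # copy the data with cpio
  # -- or, alternatively --
  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs receive -d newpool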
On 7/26/11, Bernd W. Hennig wrote:
> G'Day,
>
> - zfs pool with 4 disks (from Clariion A)
> - must migrate to Clariion B (so I created 4 disks with the same size,
> available for the zfs)
>
> The zfs pool has no mirrors
Ian,
Did you enable DeDup?
Rocky
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian D
Sent: Tuesday, July 26, 2011 7:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Entire client hangs every few secon
To add to that... iostat on the client boxes shows the connection to always be
around 98% util, topping out at 100% whenever it hangs. The same clients are
connected to another ZFS server with much lower specs and a smaller number of
slower disks; it performs much better and rarely gets past 5% util.
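For reference, that utilization figure comes from extended iostat output on the
Linux clients, e.g.:

  $ iostat -x 1   # watch the %util column for the iSCSI-backed device, once per second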
Subject: Re: [zfs-discuss] Adding mirrors to an existing zfs-pool
Date: Tue, 26 Jul 2011 08:54:38 -0600
From: Cindy Swearingen
To: Bernd W. Hennig
References: <342994905.11311662049567.JavaMail.Twebapp@sf-app1>
Hi Bernd,
If you are talking about attaching 4 new disks to a non-redundant pool w
Hi all-
We've been experiencing a very strange problem for two days now.
We have three clients (Linux boxes) connected to a ZFS box (Nexenta) via iSCSI.
Every few seconds (seemingly at random), iostat shows the clients go from a normal
80K+ IOPS to zero. It lasts up to a few seconds and things are
On Mon, July 25, 2011 10:03, Orvar Korvar wrote:
> "There is at least a common perception (misperception?) that devices
> cannot process TRIM requests while they are 100% busy processing other
> tasks."
>
> Just to confirm; SSD disks can do TRIM while processing other tasks?
"Processing" the request just means flagging the blocks, though, right?
Bernd W. Hennig wrote:
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4 disks from
the Clariion B to the 4 disks which are still in the pool -
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4 disks from
the Clariion B to the 4 disks which are still in the pool - and later
remove the original disks from Clariion A.
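One common way to do that kind of migration on a non-redundant (striped) pool is
to attach each new LUN as a mirror of an existing disk, let it resilver, and then
detach the old disk (a sketch only, with placeholder pool and device names; this
does not work for raidz vdevs):

  # zpool attach tank c1t0d0 c2t0d0   # mirror old disk c1t0d0 with new Clariion B LUN c2t0d0
  # zpool status tank                 # wait for the resilver to complete
  # zpool detach tank c1t0d0          # then drop the old Clariion A disk
  (repeat for the remaining three disks)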
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>
> Shouldn't modern SSD controllers be smart enough already that they know:
> - if there's a request to overwrite a sector, then the old data on
> that sector is no longer needed
On 07/26/11 11:56, Fred Liu wrote:
It depends on how big the delta is. It does matter if the data backup cannot
be finished within the required backup window when people use zfs send/receive
to do mass data backups.
The only way you will know if decrypting and decompressing causes a
problem
On 26-07-11 12:56, Fred Liu wrote:
> Any alternatives, if you don't mind? ;-)
VPNs, openssl piped over netcat, a password-protected zip file,... ;)
ssh would be the most practical, probably.
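For example, a typical over-ssh transfer (dataset, snapshot, host and pool names
are placeholders):

  # zfs snapshot -r tank/data@backup
  # zfs send -R tank/data@backup | ssh backuphost zfs receive -d backuppool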
--
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means,
>
> Yes, which is exactly what I said.
>
> All data as seen by the DMU is decrypted and decompressed; the DMU layer
> is what the ZPL layer is built on top of, so it has to be that way.
>
Understand. Thank you. ;-)
>
> There is always some overhead for doing a decryption and decompression,
> t
On 07/26/11 11:28, Fred Liu wrote:
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
Will even data compressed/encrypted by ZFS be decrypted?
Yes, which is exactly what I said.
All data as seen by the DM
>
> The ZFS send stream is at the DMU layer; at this layer the data is
> uncompressed and decrypted - i.e. exactly how the application wants it.
>
Will even data compressed/encrypted by ZFS be decrypted? If so,
will there be any CPU overhead?
And ZFS send/receive tunneled by ssh becomes th
On 07/26/11 10:14, Andrew Gabriel wrote:
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
The ashift is a vdev-level detail below the DMU, so it does not affect the send stream.
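To confirm the ashift of each pool before and after the transfer, one quick check
(output format varies between releases) is:

  # zdb | grep ashift   # shows the ashift recorded for each imported pool's vdevs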
>Shouldn't modern SSD controllers be smart enough already that they know:
>- if there's a request to overwrite a sector, then the old data on
>that sector is no longer needed
>- allocate a "clean" sector from pool of available sectors (part of
>wear-leveling mechanism)
>- clear the old sector, an
On Tue, Jul 26, 2011 at 3:28 PM, wrote:
>
>
>>Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
>>Second and subsequent fills are at half that speed. I'm quite confident
>>that it's due to the flash erase cycle that's needed, and if stuff can
>>be TRIM:ed (and thus flash erase
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
--
Andrew Gabriel
>Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
>Second and subsequent fills are at half that speed. I'm quite confident
>that it's due to the flash erase cycle that's needed, and if stuff can
>be TRIM:ed (and thus flash erased as well), speed would be regained.
>Overwritin
Phil,
Recently, we have built a large configuration on a 4-way Xeon server with 8 4U
24-bay JBODs. We are using 2x LSI 6160 SAS switches so we can easily expand
the storage in the future.
1) If you are planning to expand your storage, you should consider
using an LSI SAS switch for easy future