Rocky,
Can individuals buy your products in the retail market?
Thanks.
Fred
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rocky Shek
> Sent: Friday, January 28, 2011 7:02
> To: 'Pasi Kärkkäinen'
> Cc: 'Philip Brown'
You should also check out VA Technologies
(http://www.va-technologies.com/servicesStorage.php) in the UK, which supplies a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list Coraid boxes.
---
W. A. Khushil Dep - khushil@gmail.com
Khushil,
Thanks.
Fred
From: Khushil Dep [mailto:khushil@gmail.com]
Sent: Monday, January 31, 2011 17:37
To: Fred Liu
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
You should also check out VA Technologies
(http://www.va-technologies.com/servicesStorage.php)
Brandon High wrote:
> On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> wrote:
> > What is the status of ZFS support for TRIM?
>
> I believe it's been supported for a while now.
> http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
The command is implemented in the sa
Torrey McMahon wrote:
> On 1/30/2011 5:26 PM, Joerg Schilling wrote:
> > Richard Elling wrote:
> >
> >> ufsdump is the problem, not ufsrestore. If you ufsdump an active
> >> file system, there is no guarantee you can ufsrestore it. The only way
> >> to guarantee this is to keep the file system quiescent.
Why do you say fssnap has the same problem?
If it write-locks the file system, it is only for a matter of seconds, as I
recall.
Years ago, I used it on a daily basis to do ufsdumps of large file systems.
Mark
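A minimal sketch of that fssnap + ufsdump workflow (the device names and
mount points below are illustrative examples, not taken from this thread):

  # create a UFS snapshot; the backing store must live on another filesystem
  fssnap -F ufs -o bs=/var/tmp /export/home   # prints e.g. /dev/fssnap/0
  # dump the read-only snapshot instead of the live filesystem
  ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0
  # delete the snapshot once the dump completes
  fssnap -d /export/home

The brief write-lock happens while fssnap establishes the snapshot; after
that, ufsdump reads a frozen image.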
On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:
> On 1/30/2011 5:26 PM, Joerg Schilling
A matter of seconds is a long time for a running Oracle database. The
point is that if you have to keep writing to a UFS filesystem - "when
the file system also needs to accommodate writes" - you're still out of
luck. If you can quiesce the apps, great, but if you can't then you're
still stuck.
IIRC, we would notify the user community that the file systems were going to hang
briefly.
Locking the file systems is the best way to quiesce them when users are worldwide, IMO.
Mark
On Jan 31, 2011, at 9:45 AM, Torrey McMahon wrote:
> A matter of seconds is a long time for a running Oracle database. The point
> is that if you have to keep writing to a UFS filesystem, you're still out of luck.
He says he's using FreeBSD. ZFS recorded names like "ada0", which always means
a whole disk.
In any case, FreeBSD will search all block storage for the ZFS dev components if
the cached name is wrong: if the attached disks are connected to the system at
all, FreeBSD will find them wherever they may be.
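A hedged illustration of that discovery behavior (the pool name is an
example):

  # with no arguments, zpool import scans all GEOM providers for pool labels
  zpool import
  # import by name once the scan finds it, whatever the disks are called now
  zpool import tank1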
On 1/29/2011 6:18 PM, Richard Elling wrote:
>
> On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
>
>> On 1/29/2011 12:57 PM, Richard Elling wrote:
0(offsite)# zpool status
  pool: tank1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub, and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
First, I would try the least destructive method: Try
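A sketch of that sequence, using the pool name from the status output above
(the file path is a placeholder; take the real list from zpool status -v):

  zpool status -v tank1        # lists the files with unrecoverable errors
  rm /tank1/path/to/bad/file   # remove (or restore from backup) each one
  zpool scrub tank1            # re-walk the pool and verify all checksums
  zpool clear tank1            # reset error counters once the scrub is clean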
On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
> Brandon High wrote:
>
> > On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> > wrote:
> > > What is the status of ZFS support for TRIM?
> >
> > I believe it's been supported for a while now.
> > http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
Pasi Kärkkäinen wrote:
> On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
> > Brandon High wrote:
> >
> > > On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> > > wrote:
> > > > What is the status of ZFS support for TRIM?
> > >
> > > I believe it's been supported for a while now.
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
> Hi Mike,
>
> Yes, this is looking much better.
>
> Some combination of removing corrupted files indicated in the zpool
> status -v output, running zpool scrub and then zpool clear should
> resolve the corruption, but it depends on how bad the corruption is.
G'day All.
I'm trying to select the appropriate disk spindle speed for a proposal and
would welcome any experience and opinions (e.g. has anyone actively chosen
10k/15k drives for a new ZFS build and, if so, why?).
This is for ZFS over NFS for VMware storage, i.e. primarily random 4kB read/write.
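For rough sizing only: common rule-of-thumb random-IOPS figures per spindle
are about 80 (7.2k rpm), 140 (10k), and 180 (15k). Those numbers and the
target workload below are general assumptions, not from this thread:

  # spindles needed for a hypothetical 5000 random IOPS working set
  echo $(( 5000 / 80 ))    # 7.2k rpm -> 62 drives
  echo $(( 5000 / 140 ))   # 10k rpm  -> 35 drives
  echo $(( 5000 / 180 ))   # 15k rpm  -> 27 drives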
Fred,
You can easily get them from our resellers. Our resellers are all around the
world.
Rocky
From: Fred Liu [mailto:fred_...@issi.com]
Sent: Monday, January 31, 2011 1:43 AM
To: Khushil Dep
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] reliable, enterprise worthy JBODs?
On Jan 31, 2011, at 1:19 PM, Mike Tancsa wrote:
> On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
>> Hi Mike,
>>
>> Yes, this is looking much better.
>>
>> Some combination of removing corrupted files indicated in the zpool
>> status -v output, running zpool scrub and then zpool clear should
>> resolve the corruption, but it depends on how bad the corruption is.
> I'm not sure about *docs*, but my rough estimations:
>
> Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
> sure how realistic that is -- it depends totally on your data set.)
> Assume 300 bytes per DDT entry.
>
> So we have (1024^4 / 65536) * 300 = 5033164800 or about 5GB RAM.
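The same arithmetic as a one-liner, under the same assumptions (1TB used,
64K blocks, 300 bytes per DDT entry):

  echo $(( 1024 ** 4 / 65536 * 300 ))   # 5033164800 bytes, ~4.7 GiB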
> As I've said here on the list a few times earlier, the last on the
> thread 'ZFS not usable (was ZFS Dedup question)', I've been doing some
> rather thorough testing on zfs dedup, and as you can see from the
> posts, it wasn't very satisfactory. The docs claim 1-2GB memory usage
> per terabyte stored.
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice
a process called zpool-poolname that has 99 threads. This seems to be
a limit, as it never goes above that. It is lower on workstations.
The `zpool' man page says only:
Processes
Each imported pool has an associated process, named zpool-poolname.
> Even *with* an L2ARC, your memory requirements are *substantial*,
> because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
> test.
With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC
would you recommend?
And then, _with_ dedup, what would you recommend?
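One rough way to see why the L2ARC itself eats ARC: each L2ARC record keeps a
header in RAM, often approximated at ~200 bytes for builds of this era. Both
that figure and the 64K record size are assumptions, not from this thread:

  echo $(( 1024 ** 4 / 65536 * 200 ))   # ~3.1 GiB of ARC headers per 1TB of L2ARC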
- Original Message -
> > Even *with* an L2ARC, your memory requirements are *substantial*,
> > because the L2ARC itself needs RAM. 8 GB is simply inadequate for
> > your
> > test.
>
> With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC
> would you recommend?
>
On Jan 31, 2011, at 6:16 PM, Roy Sigurd Karlsbakk wrote:
>> Even *with* an L2ARC, your memory requirements are *substantial*,
>> because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
>> test.
>
> With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC would
> you recommend?
How do you verify that a zfs send binary object is valid?
I tried running a truncated file through zstreamdump and it completed
with no error messages and an exit() status of 0. However, I noticed it
was missing a final print statement with a checksum value,
END checksum = ...
Is there any normal
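One hedged sanity check is to look for that trailer explicitly (the stream
file name is an example):

  # a truncated stream is missing the 'END checksum' line; grep exits non-zero
  zstreamdump < backup.zstream | grep 'END checksum'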
On 01/31/11 06:40 PM, Roy Sigurd Karlsbakk wrote:
- Original Message -
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for
your
test.
With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC would you recommend?
On Jan 30, 2011, at 6:03 PM, Richard Elling wrote:
> On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
>> On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
>>
>>> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>>>
Is it possible to partition the global setting for the maximum ARC size?
The threads associated with the zpool process have special purposes and are
used by the different I/O types of the ZIO pipeline. The number of threads
doesn't change for workstations or servers. They are fixed values per ZIO
type. The new process you're seeing is just exposing the work that has
always been done.
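To observe this on a live system (the pool name is an example):

  pgrep -fl zpool-tank1                    # find the pool's system process
  ps -o nlwp= -p $(pgrep -f zpool-tank1)   # LWP (thread) count, e.g. 99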