On 06/04/2010 06:15 PM, Bob Friesenhahn wrote:
> On Fri, 4 Jun 2010, Sandon Van Ness wrote:
>>
>> Interestingly enough, when I went to copy the data back I got even worse
>> download speeds than I did write speeds! It looks like I need some sort
>> of read-ahead as, unlike the writes, it doesn't appear
On Fri, 4 Jun 2010, Sandon Van Ness wrote:
Interestingly enough, when I went to copy the data back I got even worse
download speeds than I did write speeds! It looks like I need some sort
of read-ahead as, unlike the writes, it doesn't appear to be CPU bound,
since using mbuffer/tar gives me full gigabit
On Fri, Jun 4, 2010 at 2:59 PM, David Magda wrote:
> Are you referring to a read cache or a write cache?
A cache vdev is an L2ARC, used for reads.
A log vdev is a slog/ZIL, used for writes.
Oh, how we overload our terms.
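For example, each kind of device is attached with its own vdev type (pool and
device names below are placeholders):

  zpool add tank cache c1t2d0    # L2ARC, read cache
  zpool add tank log c1t3d0      # slog, dedicated ZIL

zpool status then lists them under separate "cache" and "logs" headings.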
-B
--
Brandon High : bh...@freaks.com
On Fri, Jun 4, 2010 at 11:28 AM, zfsnoob4 wrote:
> Does anyone know if opensolaris supports Trim?
It does not. However, it doesn't really matter for a cache device.
The cache device is written to rather slowly, and only needs to have
low latency access on reads.
Most current gen SSDs such as th
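If you want to see how lightly the cache device actually gets written,
per-vdev statistics show it (pool name is a placeholder):

  zpool iostat -v tank 5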
Frank,
The format utility is not technically correct because it refers to
slices as partitions. Check the output below.
We might say that the "partition" menu is used to divide the disk into
slices, but everywhere in format's output they are called partitions, not slices.
I agree with Brandon's explanation,
Victor Latushkin wrote:
On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:
R. Eulenberg wrote:
Sorry for reviving this old thread.
I even have this problem on my (productive) backup server. I lost my system-hdd
and my separate ZIL-device when the system crashed, and now I'm in trouble. T
On Jun 4, 2010, at 14:28, zfsnoob4 wrote:
Does anyone know if opensolaris supports Trim?
Not at this time.
Are you referring to a read cache or a write cache?
On 6/4/10 11:46 AM -0700 Brandon High wrote:
Be aware that Solaris on x86 has two types of partitions. There are
fdisk partitions (c0t0d0p1, etc.), which are what gparted, Windows and
other tools will see. There are also Solaris partitions or slices
(c0t0d0s0). You can create or edit these with the
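For example, the s-devices carry the Solaris slice table (VTOC), which can be
inspected non-destructively (disk name is a placeholder):

  prtvtoc /dev/rdsk/c0t0d0s2

while the p-devices (c0t0d0p0, p1, ...) refer to the fdisk partition table that
gparted and Windows see.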
On 05.06.10 00:10, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
It might. It might make your
On Jun 4, 2010, at 10:18 PM, Miles Nordin wrote:
>> "sl" == Sigbjørn Lie writes:
>
>sl> Excellent! I wish I would have known about these features when
>sl> I was attempting to recover my pool using 2009.06/snv111.
>
> the OP tried the -F feature. It doesn't work after you've lost
On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:
>
> R. Eulenberg wrote:
>> Sorry for reviving this old thread.
>>
>> I even have this problem on my (productive) backup server. I lost my
>> system-hdd and my separate ZIL-device when the system crashed and now I'm in
>> trouble. The old system
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
> On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson wrote:
> > Makes sense. So, as someone else suggested, decreasing my block size
> > may improve the deduplication ratio.
>
> It might. It might make your performance tank, too.
>
> De
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson wrote:
> Makes sense. So, as someone else suggested, decreasing my block size
> may improve the deduplication ratio.
It might. It might make your performance tank, too.
Decreasing the block size increases the size of the dedup table (DDT).
Every e
On Fri, Jun 04, 2010 at 12:37:01PM -0700, Ray Van Dolson wrote:
> On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
> > On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson wrote:
> > > The ISOs I'm testing with are the 32-bit and 64-bit versions of the
> > > RHEL5 DVD ISOs. While both ha
> Makes sense. So, as someone else suggested, decreasing my block size
> may improve the deduplication ratio.
>
> recordsize I presume is the value to tweak?
It is, but keep in mind that zfs will need about 150 bytes for each block. 1TB
with 128k blocks will need about 1GB memory for the index
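Spelling out that arithmetic:

  1 TB / 128 KB per block          ~ 8.4 million blocks
  8.4 million blocks x ~150 bytes  ~ 1.2 GB of DDT
  at 32 KB blocks, 4x the entries  ~ 5 GB for the same 1 TB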
On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
> On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson wrote:
> > The ISOs I'm testing with are the 32-bit and 64-bit versions of the
> > RHEL5 DVD ISOs. While both have their differences, they do contain a
> > lot of similar data as well.
On 06/01/2010 07:57 AM, Bob Friesenhahn wrote:
> On Mon, 31 May 2010, Sandon Van Ness wrote:
>> With sequential writes I don't see how parity writing would be any
>> different from when I just created a 20-disk zpool, which is doing the
>> same writes every 5 seconds, but the only difference is it is
On Fri, Jun 4, 2010 at 11:41 AM, Andres Noriega
wrote:
> I understand now. So each vol's available space is reporting its
> reservation and whatever is still available in the pool.
>
> I appreciate the explanation. Thank you!
>
>
If you want the available space to be a hard limit, have a look at
On Fri, Jun 4, 2010 at 12:59 AM, zfsnoob4 wrote:
> This is what I'm thinking:
> 1) Use Gparted to resize the windows partition and therefore create a 50GB
> raw partition.
> 2) Use the opensolaris installer to format the raw partition into a Solaris
> FS.
> 3) Install opensolaris 2009.06, the se
I understand now. So each vol's available space is reporting its reservation
and whatever is still available in the pool.
I appreciate the explanation. Thank you!
> On Thu, Jun 3, 2010 at 1:06 PM, Andres Noriega
> wrote:
> > Hi everyone, I have a question about the zfs list
> output. I create
On Fri, Jun 4, 2010 at 6:36 AM, Andreas Iannou
wrote:
> I'm wondering if we can see the amount of usage for a drive in ZFS raidz
> mirror. I'm in the process of replacing some drives but I want to replace
By definition, a mirror has a copy of all the data on each drive.
A raidz vdev is auto-
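Allocation is tracked per top-level vdev rather than per leaf disk, which you
can see with (pool name is a placeholder):

  zpool iostat -v tank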
I'm also considering adding a cheap SSD as a cache drive. The only problem is
that SSDs lose performance over time because when something is deleted, it is
not actually erased. So the next time something is written to the same blocks,
the drive must first erase them, then write.
To fix this, SSDs allo
On Fri, Jun 4, 2010 at 1:29 AM, sensille wrote:
> But what I'm really targeting with my question: How much coverage can be
> reached with a find | xargs wc in contrast to scrub? It misses the snapshots,
> but anything beyond that?
Your script will also update the atime on every file, which may no
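If you do go that route, you can at least avoid rewriting atimes by turning
them off on the datasets you walk (dataset name is a placeholder):

  zfs set atime=off tank/data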
> "sl" == Sigbjørn Lie writes:
sl> Excellent! I wish I would have known about these features when
sl> I was attempting to recover my pool using 2009.06/snv111.
the OP tried the -F feature. It doesn't work after you've lost zpool.cache:
op> I was setting up a new system (osol 20
On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson wrote:
> The ISOs I'm testing with are the 32-bit and 64-bit versions of the
> RHEL5 DVD ISOs. While both have their differences, they do contain a
> lot of similar data as well.
Similar != identical.
Dedup works on blocks in zfs, so unless the i
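If you do experiment with smaller blocks, the relevant knobs are (dataset and
pool names are placeholders; recordsize only affects newly written files):

  zfs set recordsize=32k tank/isos
  zpool get dedupratio tank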
On Thu, Jun 3, 2010 at 1:06 PM, Andres Noriega
wrote:
> Hi everyone, I have a question about the zfs list output. I created a large
> zpool and then carved out 1TB volumes (zfs create -V 1T vtl_pool/lun##).
> Looking at the zfs list output, I'm a little thrown off by the AVAIL amount.
> Can any
Thanks... here's the requested output:
NAME             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
vtl_pool         1020G  15.0T         0   46.3K              0      15.0T
vtl_pool/lun00   1.99T     1T         0   6.05G          1018G          0
vtl_pool/lun01   1.99T     1T         0
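For reference, that column layout is what the space shorthand prints, e.g.
(assuming the pool name from the output above):

  zfs list -r -o space vtl_pool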
I'm running zpool version 23 (via ZFS fuse on Linux) and have a zpool
with deduplication turned on.
I am testing how well deduplication will work for the storage of many,
similar ISO files and so far am seeing unexpected results (or perhaps
my expectations are wrong).
The ISOs I'm testing with a
Well, yes I understand I need to research the issue of running the idmapd
service, but I also need to figure out how to use nfsv4 and automount.
-
Cassandra
(609) 243-2413
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri
On Fri, Jun 4, 2010 at 10:00 AM, Pasi K
On Fri, 2010-06-04 at 16:03 +0100, Robert Milkowski wrote:
> On 04/06/2010 15:46, James Carlson wrote:
> > Petr Benes wrote:
> >
> >> add to /etc/system something like (value depends on your needs)
> >>
> >> * limit greedy ZFS to 4 GiB
> >> set zfs:zfs_arc_max = 4294967296
> >>
> >> And yes, th
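For reference, that value is just 4 GiB in bytes, and the live ARC figures can
be checked with kstat (stock Solaris statistic names assumed):

  4 x 1024 x 1024 x 1024 = 4294967296
  kstat -p zfs:0:arcstats:size
  kstat -p zfs:0:arcstats:c_max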
On 04/06/2010 15:46, James Carlson wrote:
Petr Benes wrote:
add to /etc/system something like (value depends on your needs)
* limit greedy ZFS to 4 GiB
set zfs:zfs_arc_max = 4294967296
And yes, this has nothing to do with zones :-).
That leaves unanswered the underlying question: wh
On Fri, June 4, 2010 03:29, sensille wrote:
> Hi,
>
> I have a small question about the depth of scrub in a raidz/2/3
> configuration.
> I'm quite sure scrub does not check spares or unused areas of the disks
> (it could check if the disks detect any errors there).
> But what about the parity?
On Fri, Jun 4, 2010 at 6:36 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:
> Hello again,
>
> I'm wondering if we can see the amount of usage for a drive in ZFS raidz
> mirror. I'm in the process of replacing some drives but I want to replace
> the less used drives first (maybe o
On Fri, Jun 04, 2010 at 08:43:32AM -0400, Cassandra Pugh wrote:
>Thank you. When I manually mount using the "mount -t nfs4" option, I am
>able to see the entire tree; however, the permissions are set as
>nfsnobody.
>"Warning: rpc.idmapd appears not to be running.
> All u
> I have a small question about the depth of scrub in a
> raidz/2/3 configuration.
> I'm quite sure scrub does not check spares or unused
> areas of the disks (it could check if the disks detect
> any errors there).
> But what about the parity?
From some informal performance testing of RAIDZ2/3
Hello again,
I'm wondering if we can see the amount of usage for a drive in ZFS raidz
mirror. I'm in the process of replacing some drives but I want to replace the
less used drives first (maybe only 40-50% utilisation). Is there such a thing?
I saw somewhere that a guy had 3 drives in a rai
On Fri, Jun 4, 2010 at 2:59 PM, zfsnoob4 wrote:
> "It's not easy to make Solaris slices on the boot drive."
>
> As I am just realizing. The installer does not have any kind of partition
> software.
>
> I have a linux boot disc and I am contemplating using gparted to resize the
> win partition to
R. Eulenberg wrote:
Sorry for reviving this old thread.
I even have this problem on my (productive) backup server. I lost my system-hdd and my separate ZIL-device when the system crashed, and now I'm in trouble. The old system was running under the latest version of osol/dev (snv_134) with zfs v2
David Magda wrote:
On Wed, June 2, 2010 02:20, Sigbjorn Lie wrote:
I have just recovered from a ZFS crash. During the agonizing time
this took, I was surprised to learn how undocumented the tools and
options for ZFS recovery were. I managed to recover thanks to some great
forum posts fro
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of sensille
>
> I'm quite sure scrub does not check spares or unused areas of the disks
> (it could check if the disks detect any errors there).
> But what about the parity? Obviously it has to
On Jun 3, 2010 7:35 PM, David Magda wrote:
> On Jun 3, 2010, at 13:36, Garrett D'Amore wrote:
>
> > Perhaps you have been unlucky. Certainly, there is
> a window with N
> > +1 redundancy where a single failure leaves the
> system exposed in
> > the face of a 2nd fault. This is a statistics
>
Thank you. When I manually mount using the "mount -t nfs4" option, I am able
to see the entire tree; however, the permissions are set as nfsnobody.
"Warning: rpc.idmapd appears not to be running.
All uids will be mapped to the nobody uid."
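That warning usually means the Linux client's rpc.idmapd isn't running or its
NFSv4 domain doesn't match the server's. A rough sketch for a RHEL-style client
(the domain is a placeholder):

  # /etc/idmapd.conf, under [General]:
  Domain = example.com

  service rpcidmapd restart
  chkconfig rpcidmapd on

On the Solaris side, NFSMAPID_DOMAIN in /etc/default/nfs should be set to the
same domain.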
-
Cassandra
(609) 243-2413
Unix Administrator
Sorry for reviving this old thread.
I even have this problem on my (productive) backup server. I lost my system-hdd
and my separate ZIL-device when the system crashed, and now I'm in trouble. The
old system was running under the latest version of osol/dev (snv_134) with zfs
v22.
After the server
Hi,
I have a small question about the depth of scrub in a raidz/2/3 configuration.
I'm quite sure scrub does not check spares or unused areas of the disks (it
could check if the disks detect any errors there).
But what about the parity? Obviously it has to be checked, but I can't find
any indicat
"It's not easy to make Solaris slices on the boot drive."
As I am just realizing. The installer does not have any kind of partition
software.
I have a linux boot disc and I am contemplating using gparted to resize the win
partition to create a raw 50GB empty partition. Can the installer format