The dips are gone. I've run simple copy operations via CIFS for two days and
the problem hasn't reappeared.
I'll still try to find out what caused it, though. Thanks for trying to help me.
On Tue, Apr 6 at 17:56, Markus Kovero wrote:
>> Our Dell T610 has been working just fine for the last year and
>> a half, without a single network problem. Do you know if they're
>> using the same integrated part?
>> --eric
> Hi, as I should have mentioned, integrated nics that cause issues
> are using the Broadcom BCM5709 chipset
On Tue, Apr 06, 2010 at 06:53:04PM -0700, Richard Elling wrote:
> >> Disagree. Swap is a perfectly fine workload for SSDs. Under ZFS,
> >> even more so. I'd really like to squash this rumour and thought we
> >> were making progress on that front :-( Today, there are millions or
> >> thousand
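For anyone who wants to try it, swap under ZFS is just a zvol, so pointing it
at an SSD pool is two commands. A minimal sketch, assuming a hypothetical
SSD-backed pool named ssdpool:
  # create a dedicated zvol and add it as a swap device
  $ zfs create -V 4G ssdpool/swap
  $ swap -a /dev/zvol/dsk/ssdpool/swap
  # confirm the new device is in use
  $ swap -l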
On Apr 6, 2010, at 5:38 PM, Erik Trimble wrote:
> On Tue, 2010-04-06 at 17:17 -0700, Richard Elling wrote:
>> On Apr 6, 2010, at 5:00 PM, Erik Trimble wrote:
>
> [snip]
>
>>> For L2ARC, you are more concerned with total size/capacity, and
>>> modest IOPS (3000-1 IOPS, or the ability to write at least 100Mb/s
>>> at 4-8k write sizes, plus as high as
> > We ran into something similar with these drives in an X4170 that
> > turned out to be an issue of the preconfigured logical volumes on the
> > drives. Once we made sure all of our Sun PCI HBAs were running the
> > exact same version of firmware and recreated the volumes on new drives
> I have reason to believe that both the drive, and the OS are correct.
> I suspect that the HBA simply handled the creation of this
> volume somehow differently than how it handled the original. Don't
> know the answer for sure yet.
Ok, that's confirmed now. Apparently when the drives sh
On 04/06/10 17:17, Richard Elling wrote:
>> You could probably live with an X25-M as something to use for all three,
>> but of course you're making tradeoffs all over the place.
> That would be better than almost any HDD on the planet because
> the HDD tradeoffs result in much worse performance.
Indeed
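If the plan is one SSD pulling double duty as both log and cache device, the
zpool syntax is straightforward. A minimal sketch, with hypothetical pool and
slice names (slice the disk with format(1M) first):
  # slice 0 as the separate intent log, slice 1 as L2ARC
  $ zpool add tank log c4t0d0s0
  $ zpool add tank cache c4t0d0s1
  $ zpool status tank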
On Tue, 2010-04-06 at 17:17 -0700, Richard Elling wrote:
> On Apr 6, 2010, at 5:00 PM, Erik Trimble wrote:
[snip]
> > For L2ARC, you are more concerned with total size/capacity, and
> > modest IOPS (3000-1 IOPS, or the ability to write at least 100Mb/s
> > at 4-8k write sizes, plus as high as
Erik Trimble wrote:
> On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
>> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
>> latest recommendations for a log device?
>>
>> http://bit.ly/aL1dne
> The Vertex LE models should do well as ZIL (though not as well as an
> X25-E or a Zeus)
On Apr 6, 2010, at 5:00 PM, Erik Trimble wrote:
> On Tue, 2010-04-06 at 19:43 -0400, Kyle McDonald wrote:
>> On 4/6/2010 3:41 PM, Erik Trimble wrote:
>>> On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
>>>
>>>> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's
>>>> the latest recommendations for a log device?
>>>>
>>>> http://bit.ly/aL1dne
On Tue, 2010-04-06 at 19:43 -0400, Kyle McDonald wrote:
> On 4/6/2010 3:41 PM, Erik Trimble wrote:
> > On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
> >
> >> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's
> >> the latest recommendations for a log device?
> >>
> >> http://bit.ly/aL1dne
On Wed, Apr 07, 2010 at 06:27:09AM +1000, Daniel Carosone wrote:
> You have reminded me.. I wrote some patches to the zfs manpage to help
> clarify this issue, while travelling, and never got around to posting
> them when I got back. I'll dig them up off my netbook later today.
http://defect.ope
On 4/6/2010 3:41 PM, Erik Trimble wrote:
> On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
>
>> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
>> latest recommendations for a log device?
>>
>> http://bit.ly/aL1dne
>>
> The Vertex LE models should do well as ZIL (though not as well as an
> X25-E or a Zeus)
Hi Roch,
> Can you try 4 concurrent tar to four different ZFS
> filesystems (same pool).
Hmmm, you're on to something here:
http://www.science.uva.nl/~jeroen/zil_compared_e1000_iostat_iops_svc_t_10sec_interval.pdf
In short: when using two exported file systems total time goes down to around
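For anyone who wants to reproduce Roch's test, a minimal sketch (pool,
filesystem, and tarball names are hypothetical):
  # four tar extractions in parallel, one per filesystem in the same pool
  $ for i in 1 2 3 4; do zfs create tank/fs$i; done
  $ for i in 1 2 3 4; do
  >   ( cd /tank/fs$i && tar xf /var/tmp/testdata.tar ) &
  > done; wait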
On Wed, Apr 07, 2010 at 01:52:23AM +1000, taemun wrote:
> I was wondering if someone could explain why the DDT is seemingly
> (from empirical observation) kept in a huge number of individual blocks,
> randomly written across the pool, rather than just a large binary chunk
> somewhere.
It's not rea
On Tue, Apr 06, 2010 at 01:44:20PM -0400, Tony MacDoodle wrote:
> I am trying to understand how "refreservation" works with snapshots.
>
> If I have a 100G zfs pool
>
> I have 4 20G volume groups in that pool.
>
> refreservation = 20G on all volume groups.
>
> Now when I want to do a sn
On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
> latest recommendations for a log device?
>
> http://bit.ly/aL1dne
The Vertex LE models should do well as ZIL (though not as well as an
X25-E or a Zeus) for all non-ent
Willard Korfhage wrote:
> Yes, I was hoping to find the serial numbers. Unfortunately, it
> doesn't show any serial numbers for the disk attached to the Areca
> raid card.
Does Areca provide any Solaris tools that will show you the drive info?
If you are using the Areca in JBOD mode, smartctl will f
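A sketch of what that looks like, with hypothetical device names: in JBOD
mode the disks appear as ordinary targets, and smartmontools also documents
an Areca pass-through mode (mainly on Linux hosts):
  # disk exported directly by the controller in JBOD mode
  $ smartctl -i /dev/rdsk/c2t0d0s0
  # Areca pass-through, slot 1, per the smartmontools docs (Linux)
  $ smartctl -i -d areca,1 /dev/sg2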
I am trying to understand how "refreservation" works with snapshots.
If I have a 100G zfs pool
I have 4 20G volume groups in that pool.
refreservation = 20G on all volume groups.
Now when I want to do a snapshot, will the snapshot need 20G plus the amount
changed (REFER)? If not I get a "o
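To make the accounting concrete, a minimal sketch with hypothetical names:
  # -V sets refreservation equal to the volume size on recent builds
  $ zfs create -V 20G tank/vol1
  $ zfs snapshot tank/vol1@monday
  $ zfs list -o name,used,refer,refreservation -r tank
Because the volume must stay fully writable after the snapshot, ZFS reserves
up to another 20G of worst-case overwrite space at snapshot time; if the pool
cannot cover that, the snapshot fails with an out-of-space error even though
the snapshot itself consumes almost nothing initially.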
On 6/04/10 11:47 PM, Willard Korfhage wrote:
> Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't
> show any serial numbers for the disk attached to the Areca raid card.
You'll need to reboot and go into the card BIOS to get that information.
James C. McPherson
Correct.
Jeff
Sent from my iPhone
On Apr 5, 2010, at 6:32 PM, Learner Study wrote:
> Hi Folks:
> I'm wondering what is the correct flow when both raid5 and de-dup are
> enabled on a storage volume.
> I think we should do de-dup first and then raid5 ... is that
> understanding correct?
> Thanks!
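To expand on that one-word answer: dedup is a per-dataset property applied
in the ZIO pipeline (checksum, then DDT lookup) before the block is ever
allocated, so only unique blocks reach the raidz ("raid5") layer. A minimal
sketch with hypothetical device names:
  # raidz is a property of the pool layout...
  $ zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # ...dedup is a property of the data, applied before allocation
  $ zfs set dedup=on tank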
Hi Folks:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Thanks!
On Tue, Apr 06, 2010 at 11:53:23AM -0400, Tony MacDoodle wrote:
> Can I rollback a snapshot that I did a zfs send on?
>
> ie: zfs send testpool/w...@april6 > /backups/w...@april6_2010
That you did a zfs send does not prevent you from rolling back to a
previous snapshot. Similarly for zfs recv --
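A minimal sketch of the whole cycle, with hypothetical names standing in for
the elided ones below:
  $ zfs snapshot testpool/www@april6
  $ zfs send testpool/www@april6 > /backups/www@april6_2010
  # the saved stream is independent of the pool, so this still works
  $ zfs rollback testpool/www@april6
  # and the stream can be restored later into a new dataset
  $ zfs recv testpool/restored < /backups/www@april6_2010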
Can I rollback a snapshot that I did a zfs send on?
ie: zfs send testpool/w...@april6 > /backups/w...@april6_2010
Thanks
I was wondering if someone could explain why the DDT is seemingly
(from empirical observation) kept in a huge number of individual blocks,
randomly written across the pool, rather than just a large binary chunk
somewhere.
Having been victim of the really long times it takes to destroy a dataset
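For anyone who wants to look at this on their own pool, zdb can dump the
dedup-table statistics; a minimal sketch, pool name hypothetical:
  # summary of DDT entry counts and on-disk/in-core sizes
  $ zdb -D tank
  # more verbose, including the reference-count histogram
  $ zdb -DD tank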
Hmmm.. Tried to post this before, but it doesn't appear. I'll try again.
I've been discussing the concept of a reference design for Opensolaris systems
with a few people. This comes very close to a system you can "just buy".
I spent about six months burning up google and pestering people here ab
Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
latest recommendations for a log device?
http://bit.ly/aL1dne
Hi,
I also ran into the problem of Dell+Broadcom. I fixed it by downgrading
the firmware to version 4.xxx instead of running version 5.xxx.
You might want to try that as well.
Bruno
On 6-4-2010 16:54, Eric D. Mudama wrote:
> On Tue, Apr 6 at 13:03, Markus Kovero wrote:
>>> Install nexenta on a de
> Our Dell T610 is and has been working just fine for the last year and
> a half, without a single network problem. Do you know if they're
> using the same integrated part?
> --eric
Hi, as I should have mentioned, integrated nics that cause issues are using
Broadcom BCM5709 chipset and these co
On Tue, Apr 6 at 13:03, Markus Kovero wrote:
>> Install nexenta on a dell poweredge ?
>> or one of these http://www.pogolinux.com/products/storage_director
> FYI; More recent poweredges (R410, R710, possibly blades too, those with
> integrated Broadcom chips) are not working very well with opensolaris due
> to Broadcom network issues: hang-ups, packet
Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't show
any serial numbers for the disk attached to the Areca raid card.
On Tue, Apr 6, 2010 at 12:47 AM, Daniel Carosone wrote:
> On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote:
> > On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone wrote:
> >
> > > On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
> > > > By the way, I see that now one of
> Install nexenta on a dell poweredge ?
> or one of these http://www.pogolinux.com/products/storage_director
FYI; More recent poweredges (R410, R710, possibly blades too, those with
integrated Broadcom chips) are not working very well with opensolaris due
to Broadcom network issues: hang-ups, packet
On 03/04/2010 00:57, Richard Elling wrote:
> This is annoying. By default, zdb is compiled as a 32-bit executable and
> it can be a hog. Compiling it yourself is too painful for most folks :-(
/usr/sbin/zdb is actually a link to /usr/lib/isaexec
$ ls -il /usr/sbin/zdb /usr/lib/isaexec
300679
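That link is the escape hatch: isaexec dispatches to a binary in an ISA
subdirectory, so on a 64-bit kernel you can run the 64-bit zdb directly
instead of recompiling (assuming the build ships one; use sparcv9 on SPARC):
  $ /usr/sbin/amd64/zdb tank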