On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens wrote:
> Michael Schuster wrote:
>>
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have
I can produce the timeout error on multiple, similar servers.
These are storage servers, so no zones or GUI running.
Hardware:
Supermicro X7DWN with AOC-USASLP-L8i controller
E1 (single port) backplanes (16 & 24 bay)
(LSILOGICSASX28 A.0 and LSILOGICSASX36 A.1)
up to 36 x 1 TB WD SATA disks
This serv
On January 24, 2010 12:20:55 PM -0800 "R.G. Keen" wrote:
I do apologize for the snottier parts of my reply to your first note,
which I am editing. I did not get a chance to read this note from you
before responding.
Oh, not at all. Snotty is as snotty does. Um, what that is supposed
to mean i
The proper solution to out-of-control cabling is ML cables. However, aside from
the Supermicro 8-in-2 type of pseudo-enclosures, nobody seems to make them for
typical consumer use. You can get 12- or 16-drive backplanes that use ML
cables, but they are necessarily wed to a particular chassis.
uep,
This solution seems like the best and most efficient way of handling large
filesystems. My biggest question, however, is: when backing this up to tape, can
it be split across several tapes? I will be using Bacula to back this up. Will
I need to tar or star this filesystem before writing it to
On Mon, Jan 25, 2010 at 05:36:35PM -0500, Miles Nordin wrote:
> > "sb" == Simon Breden writes:
>
> sb> 1. In simple non-RAID single drive 'desktop' PC scenarios
> sb> where you have one drive, if your drive is experiencing
> sb> read/write errors, as this is the only drive you hav
Hi,
I installed OpenSolaris on an X2200 M2 with two internal drives that had
an existing root pool with Solaris 10 Update 6. After installing
OpenSolaris 2009.06 the host refused to boot. The OpenSolaris install
was fine. I had to pull the second hard drive to get the host to boot.
Then inser
I don't claim to understand all the nitty-gritty details, but there seems to be
a difference between the Supermicro motherboard-integrated 1068E controllers
and the HBA-based controllers, in that when I had a machine built with a
motherboard-based controller it wouldn't properly talk to/work with
> I got over the reluctance to do drive replacements in larger batches
> quite some time ago (well before there was zfs), though I can
> certainly sympathise.
Yep, it's not so much of a big deal. One has to think a moment to see what is
needed and check out any possible gotchas in order to carry
It may depend on the firmware you're running. We've got a SAS1068E-based
card in a Dell R710 at the moment, connected to an external SAS JBOD, and
we did have problems with the as-shipped firmware.
However, we've upgraded that, and _so far_ haven't had further issues. I
didn't do the upgrade myse
I just tried to create a new share and got the same error.
On 01/25/10 04:50 PM, David Dyer-Bennet wrote:
What's it cost to run a drive for a year again? Maybe I really should
just replace one existing pool with larger drives and let it go at that,
rather than running two more drives.
It seems to vary nowadays, but generally the fewer the RPMs and
the
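Very rough numbers, if it helps (the power figures are ballpark, not measured):
a 3.5" 7200 RPM drive idles around 8 W, and 8 W x 8760 h/yr is roughly 70 kWh/yr,
or about $7/yr at $0.10/kWh. A 5400 RPM "green" drive at around 4 W comes to
roughly half that.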
As in, they work without any possibility of mpt timeout issues? I'm at my wits'
end with a machine right now that has an integrated 1068E and is dying almost
hourly at this point.
If I could spend three hundred dollars or so and have my problems magically go
away, I'd love to pull the trigger on
Well, I guess I am glad I am not the only one. Thanks for the heads-up!
On Mon, Jan 25, 2010 at 3:39 PM, David Magda wrote:
> On Jan 25, 2010, at 18:28, Gregory Durham wrote:
>
>> One option I have seen is zfs send zfs_s...@1 > /some_dir/some_file_name.
>> Then I can back this up to tape. This se
On Jan 25, 2010, at 18:28, Gregory Durham wrote:
One option I have seen is zfs send zfs_s...@1 > /some_dir/some_file_name.
Then I can back this up to tape. This seems easy, as
I have already created a script that does just this, but I am
worried that this is not the best or most secure way
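For what it's worth, a rough sketch of that approach (dataset names and paths
here are made up, not Gregory's):

  # take a snapshot and dump the stream to a file that the backup
  # software can then spool to tape
  zfs snapshot tank/data@2010-01-25
  zfs send tank/data@2010-01-25 > /backup/spool/tank-data-2010-01-25.zfs

  # dry-run receive to see what the stream would restore (-n makes no changes)
  zfs receive -n -v tank/restoretest < /backup/spool/tank-data-2010-01-25.zfs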
Hello all,
I have quite a bit of data transferring between two machines via snapshot
send and receive, and this has been working flawlessly. I am now wanting to back
up the data from the failover to tape. I was planning on using Bacula, as I have
a bit of experience with it. I am now trying to figure out th
On 21/01/2010 11:55, Julian Regel wrote:
>> Until you try to pick one up and put it in a fire safe!
> Then you back up to tape from the x4540 whatever data you need.
> In the case of enterprise products you save on licensing here, as you need
> one client license per x4540 but in fact can back up data fro
Hi! So after reading through this thread and checking the bug report... do we
still need to tell ZFS to disable cache flush?
set zfs:zfs_nocacheflush=1
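If it does turn out to still be needed (i.e. the array has non-volatile cache
and it is actually safe to ignore flushes), the two usual ways to apply it are
roughly:

  # persistent: add to /etc/system and reboot
  set zfs:zfs_nocacheflush = 1

  # temporary, on the running kernel (does not survive a reboot)
  echo zfs_nocacheflush/W0t1 | mdb -kw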
On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote:
> et> You cannot import a stream into a zpool of earlier revision,
> et> though the reverse is possible.
>
> This is very bad, because it means if your backup server is pool
> version 22, then you cannot use it to back up pool
> this sounds convincing to fetishists of an ordered world where
> egg-laying mammals do not exist, but it's utter rubbish.
Very insightful! :)
> As drives go bad they return errors frequently, and...
Yep, so have good regular backups, and move quickly once probs start.
Cheers,
Simon
http:
On 25-Jan-10, at 2:59 PM, Freddie Cash wrote:
We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green drives.
Unfortunately, these drives have the "fixed" firmware and the 8-second
idle timeout cannot be changed.
That sounds like a laptop spec, not a server spec! How silly. Maybe
you can set
One problem with the write cache is that I do not know whether it is needed for
write wear leveling.
As mentioned, disabling the write cache might be OK in terms of performance (I
want to use MLC SSDs as data disks, not as ZIL, to have an SSD-only appliance -
I'm looking for read speed for dedupe, zfs send
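If you do end up toggling the on-disk write cache per device, format's expert
mode exposes it on devices where the sd driver supports it. A sketch of the
interactive menu path, not a script:

  # interactive: run format -e, select the SSD, then
  #   cache -> write_cache -> display    (show current state)
  #   cache -> write_cache -> disable    (turn it off)
  format -e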
> "et" == Erik Trimble writes:
>> Can I send a zfs send stream (ZFS pool version 22 ; ZFS
>> filesystem version 4) to a zfs receive stream on Solaris 10
>> (ZFS pool version 15 ; ZFS filesystem version 4)?
et> No.
et> You cannot import a stream into a zpool of earlier re
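A quick way to see exactly what you are dealing with, plus one possible
workaround given that constraint (pool, dataset, and device names are made up):

  # versions on the sending side
  zpool get version tank
  zfs get version tank/data

  # if the backup pool has not been created yet, it can be created at an
  # older on-disk version, so older hosts can still exchange streams with it
  zpool create -o version=15 backup c0t1d0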
On Mon, Jan 25, 2010 at 04:08:04PM -0600, David Dyer-Bennet wrote:
> > - Don't be afraid to dike out the optical drive, either for case
> >   space or available ports. [..]
> >   [..] Put the drive in an external USB case if you want,
> >   or leave it in the case connected via a USB bridge in
> "sb" == Simon Breden writes:
sb> 1. In simple non-RAID single drive 'desktop' PC scenarios
sb> where you have one drive, if your drive is experiencing
sb> read/write errors, as this is the only drive you have, and
sb> therefore you have no alternative redundant source of dat
> Well, they'll be in a space designated as a drive bay, so it should have
> some airflow. I'll certainly check.
Yes, it's certainly worth checking.
> It's an OCZ Core II, I believe. I've got an Intel -M waiting to replace
> it when I can find time (probably when I install Windows 7).
AF
> "cs" == Cindy Swearingen writes:
> "re" == Richard Elling writes:
cs> http://defect.opensolaris.org/bz/show_bug.cgi?id=5993
the procedure described here is like doing FLAR by hand, so it'll
probably be useful in quite a lot of situations, especially with the
frequent genunix livec
On Mon, January 25, 2010 15:44, Daniel Carosone wrote:
>
> Some other points and recommendations to consider:
>
> - Since you have the bays, get the controller to drive them,
>   regardless. They will have many uses, some of which are listed below.
>   A 4-port controller would allow you enough ports for
On Mon, January 25, 2010 15:26, Simon Breden wrote:
>> I've got at least one available 5.25" bay. I hadn't considered 2.5" HDs;
>> that's a tempting way to get the physical space I need.
>
> Yes, it is an interesting option. But keep in mind any necessary cooling
> if moving them from a c
Some other points and recommendations to consider:
- Since you have the bays, get the controller to drive them,
  regardless. They will have many uses, some of which are listed below.
  A 4-port controller would allow you enough ports for both the two
  empty hot-swap bays and the dual 2.5" carrier.
> In general, any system which detects and acts upon faults would like
> to detect faults sooner rather than later.
Yes, it makes sense. I think my main concern was about loss - in question 2.
> > 2. Does having shorter error reporting times provide any significant
> > data safety through, for
> I've got at least one available 5.25" bay. I hadn't considered 2.5" HDs;
> that's a tempting way to get the physical space I need.
Yes, it is an interesting option. But keep in mind any necessary cooling if
moving them from a currently cooled area. As I used SSDs, this turned out to be
i
> "ca" == Carsten Aulbert writes:
> "ls" == Lutz Schumann writes:
ca> X25-E drives and a converter from 3.5 to 2.5 inches. So far
ca> two systems have shown pretty bad instabilities with that.
instability after crashing or instability while running? Lutz
Schumann 2010-01-10 see
On Mon, January 25, 2010 14:11, Simon Breden wrote:
>> I've given some thought to booting from a thumb drive instead of disks.
>> That would free up two SATA ports AND two hot-swap disk bays, which would
>> be nice. And by simply keeping an image of the thumb drive contents, I
>> could r
Good news. Are those the HD154UI models?
Cheers,
Simon
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
> We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green drives.
>
> Unfortunately, these drives have the "fixed" firmware and the 8-second
> idle timeout cannot be changed. Since we started replacing these drives
> in our pool about 6 weeks ago (replacing 1 drive per week), the drives
> have reg
Hi CD,
Practical in what kind of environment? What are your goals?
Do you want the ACL deny entries to be inherited?
Do you plan to use CIFS to access these files + ACLs from
systems running Windows?
Thanks,
Cindy
On 01/25/10 07:21, CD wrote:
Hello forum.
I'm in the process of re-organizi
> One of those EIDE ports is running the optical drive, so I don't actually
> have two free ports there even if I replaced the two boot drives with IDE
> drives.
Yep, as I expected.
> I've given some thought to booting from a thumb drive instead of disks.
> That would free up two SATA ports
We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green drives.
Unfortunately, these drives have the "fixed" firmware and the 8-second idle
timeout cannot be changed. Since we started replacing these drives in our
pool about 6 weeks ago (replacing 1 drive per week), the drives have registered
almo
On Mon, January 25, 2010 13:11, Simon Breden wrote:
> I have the same motherboard and have been through this upgrade
> head-scratching before with my system, so hopefully I can give some useful
> tips.
Great! Thanks.
> First of all, unless the situation has changed, forget trying to get the
>
Hi David,
I have the same motherboard and have been through this upgrade head-scratching
before with my system, so hopefully I can give some useful tips.
First of all, unless the situation has changed, forget trying to get the extra
2 SATA devices on the motherboard to work, as last time I look
My current home fileserver (running OpenSolaris build 111b and ZFS) has an ASUS
M2N-SLI DELUXE motherboard. This has 6 SATA connections, which are
currently all in use (a mirrored pair of 80 GB drives for the system ZFS pool,
and two mirrors of 400 GB drives, both in my data pool).
I've got two more hot-swap drive bays. And I'
Hello forum.
I'm in the process of re-organizing my server and ACL settings.
I've seen so many different ways of doing ACLs, which makes me wonder how
I should do it myself.
This is obviously the easiest way, only describing the positive permissions:
/usr/bin/chmod -R A=\
group:sa:full_set:fd:
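For comparison, a complete (made-up) variant of that style, where the 'sa'
group is taken from the example above but the path and the everyone@ entry are
hypothetical, and whether deny entries belong in there depends on the answers
to Cindy's questions:

  # give the 'sa' group full control, inherited by new files and directories,
  # and give everyone else inherited read-only access
  /usr/bin/chmod -R A=\
  owner@:full_set:fd:allow,\
  group:sa:full_set:fd:allow,\
  everyone@:read_set:fd:allow /tank/share

  # verify the resulting ACL
  /usr/bin/ls -dv /tank/share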
Thanks Jack,
I was just a listener in this case. Tim did all the work. :-)
Cindy
On 01/23/10 21:49, Jack Kielsmeier wrote:
I'd like to thank Tim and Cindy at Sun for providing me with a new zfs binary
file that fixed my issue. I was able to get my zpool back! Hurray!
Thank You.
I've been using 10 Samsung EcoGreens in a raidz2 on FreeBSD for about 6
months. (Yeah, I know it's above 9; the performance is fine for my usage,
though.)
Haven't had any problems.
With the absolutely deplorable reliability of drives >1 TB, why would one even
waste their money? The 500 GB RE2/RE3 and NS drives are very reliable and
<$0.12/GB. I get new drives off eBay all the time.
NAS speed is all about spindles. 6 spindles will always outrun a setup with 3.
Almost any mid-siz
Mike Gerdts writes:
> Kjetil Torgrim Homme wrote:
>> Mike Gerdts writes:
>>
>>> John Hoogerdijk wrote:
>>>> Is there a way to zero out unused blocks in a pool? I'm looking for
>>>> ways to shrink the size of an OpenSolaris VirtualBox VM, and using the
>>>> compact subcommand will remove zeroed s
On Mon, Jan 25, 2010 at 2:32 AM, Kjetil Torgrim Homme wrote:
> Mike Gerdts writes:
>
>> John Hoogerdijk wrote:
>>> Is there a way to zero out unused blocks in a pool? I'm looking for
>>> ways to shrink the size of an OpenSolaris VirtualBox VM, and using the
>>> compact subcommand will remove zero
> Any comments on this Dec. 2005 study on disk failure and error rates?
> http://research.microsoft.com/apps/pubs/default.aspx?id=64599
Will take a read...
> The OP originally asked "Best 1.5TB drives for consumer RAID?". Despite
> the entertainment value of the comments, it isn't clear
> Extended timeouts lead to manual intervention, not a change in the
> probability of data loss. In other words, they affect the MTTR, not
> the reliability. For 7x24x365 deployments, MTTR is a concern because
> it impacts availability. For home use, perhaps not so much.
> -- richard
On 24 Jan 2010, at 08:36, Erik Trimble wrote:
> These days, I've switched to 2.5" SATA laptop drives for large-storage
> requirements.
> They're going to cost more $/GB than 3.5" drives, but they're still not
> horrible ($100 for a 500GB/7200rpm Seagate Momentus). They're also easier to
> cr
Mike Gerdts writes:
> John Hoogerdijk wrote:
>> Is there a way to zero out unused blocks in a pool? I'm looking for
>> ways to shrink the size of an OpenSolaris VirtualBox VM, and using the
>> compact subcommand will remove zeroed sectors.
>
> I've long suspected that you should be able to just u
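For the record, the usual trick is roughly the following (a sketch only: pool
and image paths are made up, and it assumes compression is off on the datasets,
since ZFS turns all-zero writes into holes when compression is enabled):

  # inside the guest: fill the pool's free space with zeros, then remove
  # the fill file; most of the freed space now holds zeroed sectors
  dd if=/dev/zero of=/tank/zerofill bs=1024k    # runs until the pool fills up
  sync
  rm /tank/zerofill

  # on the host: compact the image so the zeroed sectors are reclaimed
  VBoxManage modifyhd /vms/osol.vdi --compact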