I had a system that I was testing ZFS on, using EMC LUNs to create a striped
zpool without the multi-pathing software PowerPath. Of course a storage
emergency came up, so I lent this storage out for temp storage and we're still
using it. I'd like to add PowerPath to take advantage of the multipathing
Thanks Cindys for your input... I love your fear example too, but lucky for me
I have 10 years before I have to worry about that and hopefully we'll all be in
hovering bumper cars by then.
It looks like I'm going to have to create another test system and try the
recommendations given here... and hope
Alex, thanks for the info. You made my heart stop a little when reading your
problem with PowerPath, but MPxIO seems like it might be a good option for me.
I will try that as well, although I have not used it before. Thank you!
Just thought I would let you all know that I followed what Alex suggested along
with what many of you pointed out and it worked! Here are the steps I followed:
1. Break root drive mirror
2. zpool export filesystem
3. Run the command to enable MPxIO and reboot the machine
4. zpool import filesystem
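Roughly, as commands (a sketch; the pool name is a placeholder, stmsboot -e is
the usual way to enable MPxIO on Solaris 10/SXCE, and step 1, breaking the root
drive mirror, is not shown):
zpool export mypool
stmsboot -e          # enables MPxIO and offers to reboot
zpool import mypool  # run this after the reboot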
I'm just wondering what some of you might do with your systems.
We have an EMC Clariion unit that I connect several Sun machines to. I allow
the EMC to do its hardware RAID5 for several LUNs and then I stripe them
together. I considered using raidz and just configuring the EMC as a JBOD, but
Thanks for the response Marion. I'm glad that I'm not the only one. :)
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back. They're dirt cheap, readily available,
| [snip]
Anyone have
Did you have success?
What version of Solaris? OpenSolaris? etc?
I'd want to use this card with the latest Solaris 10 (update 5?)
The connector on the adapter itself is "IPASS" and the Supermicro part number
for cables from the adapter to standard SATA drives is CBL-0118L-02 "IPASS to 4
SATA C
i have that chassis too. did solaris install for you? what version/build?
i think i tried a nexenta build and it crapped out on install.
i also only have 2 gigs of ram in it and a CF card to boot off of...
4 drives is too small for what i want, 5 drives would be my minimum. i was
hoping this wo
Don't take my opinion. I am a newbie to everything solaris.
From what it looks like in the HCL, some of the VIA stuff is supported. Like I
said I tried some nexenta CD...
They don't make 64-bit, first off, and I am not sure if any of their mini-itx
boards support more than 2 gig ram. ZFS love
yeah, i have not been pleased with the quality of the HCL.
there's plenty of hardware discussed on the forums, and if you search the bug
db there's hardware that has been confirmed and/or fixed to work on various
builds of osol and solaris 10.
i wound up buying an AMD based machine (i wanted Intel) with 6 onboa
I built mine over the last few days, and it seems to be running fine right now.
Originally I wanted Solaris 10, but switched to using SXCE (nevada build 94,
the latest right now) because I wanted the new CIFS support and some additional
ZFS features.
Here's my setup. These were my goals:
- Quiet
I would love to go back to using Shuttles.
Actually, my ideal setup would be:
Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
then I could chain up to 8 enclosures off a single small, nearly silent host
machine.
8 enclosures x 5 drives = 40
Holy crap! That sounds cool. Firmware-based-VPN connectivity!
At Intel we're getting better too I suppose.
Anyway... I don't know where you're at in the company but you should rattle
some cages about my idea :)
I didn't use any.
That would be my -ideal- setup :)
I waited and waited, and there's still no eSATA/port-multiplier support out
there, or it isn't stable enough. So I scrapped it.
I'd say some good places to look are silentpcreview.com and mini-itx.com.
I found this tasty morsel on an ad at mini-itx...
http://www.american-portwell.com/product.php?productid=16133
6x onboard SATA. 4 gig support. core2duo support. which means 64 bit = yes, 4
gig = yes, 6x sata is nice.
now
that mashie link might be exactly what i wanted...
that mini-itx board w/ 6 SATA. use CF maybe for boot (might need IDE to CF
converter) - 5 drive holder (hotswap as a bonus) - you get 4 gig ram,
core2-based chip (64-bit), onboard graphics, 5 SATA2 drives... that is cool.
however. would need to
exactly.
that's why i'm trying to get an account on that site (looks like open
registration for the forums is disabled) so i can shoot the breeze and talk
about all this stuff too.
zfs would be perfect for this as most these guys are trying to find hardware
raid cards that will fit, etc... wit
Yeah but 2.5" aren't that big yet. What, they max out ~ 320 gig right?
I want 1tb+ disks :)
i must pose the question then:
is ECC required?
i am running non-ECC RAM right now on my machine (it's AMD and it would support
ECC, i'd just have to buy it online and wait for it)
but will it have any negative effects on ZFS integrity/checksumming if ECC RAM
is not used? obviously it's nice t
Question #1:
I've seen that 5-6 disk zpools are the most recommended setup.
In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks
usable) out of the 15 disks (like raidz2 I suppose). What would make the most
sense for setting up 15 disks with ~13 disks of usable space? This is for a h
i could probably do 16 disks and maybe do a raidz on both for 14 disks
usable combined... that's probably as redundant as i'd need, i think.
can you combine two zpools together? or will i have two separate
"partitions" (i.e. i'll have "tank" for example and "tank2" instead of
making one single lar
see, originally when i read about zfs it said it could expand to petabytes or
something. but really, that's not as a single "filesystem" ? that could only be
accomplished through combinations of pools?
i don't really want to have to even think about managing two separate
"partitions" - i'd like
likewise i could also do something like
zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks
still broken up into not-too-horrible pool sizes an
I hear everyone's concerns about multiple parity disks.
Are there any benchmarks or numbers showing the performance difference using a
15 disk raidz2 zpool? I am fine sacrificing some performance but obviously
don't want to make the machine crawl.
It sounds like I could go with 15 disks evenly
Oh sorry - for boot I don't care if it's redundant or anything.
Worst case the drive fails, I replace it and reinstall, and just re-mount the
ZFS stuff.
If I have the space in the case and the ports I could get a pair of 80 gig
drives or something and mirror them using SVM (which was recommende
> No that isn't correct.
> One or more vdevs create a pool. Each vdev in a pool can be a
> different type, e.g. a mix of mirror, raidz, raidz2.
> There is no such thing as zdev.
Sorry :)
Okay, so you can create a zpool from multiple vdevs. But you cannot
add more vdevs to a zpool once the zpool
On 8/22/08, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> I could, if I wanted, add another vdev to this pool, but it doesn't
> have to be raidz; it could be raidz2 or mirror.
> If they did they are wrong; hope the above clarifies.
I get it now. If you add more disks they have to be in their own
m
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Another note, as someone said earlier: if you can go to 16 drives, you
> should consider 2 8-disk RAIDZ2 vDevs over 2 7-disk RAIDZ vDevs with a spare,
> or (I would think) even a 14-disk RAIDZ2 vDev with a spare.
>
> If you can (now or later) ge
It looks like this will be the way I do it:
initially:
zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
when I need more space and buy 8 more disks:
zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
Correct?
> Enable compression, and set up
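The compression part of that suggestion is a one-liner, for what it's worth
(a sketch, using the pool name from above; it only affects data written after
the property is set):
zfs set compression=on mypool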
On 8/22/08, Rich Teer <[EMAIL PROTECTED]> wrote:
> ZFS boot works fine; it was only recently integrated into Nevada, but it
> has been in use for quite some time now.
Yeah I got the install option when I installed snv_94 but wound up not
having enough disks to use it.
> Even better: just use ZFS roo
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> You only need 1 disk to use ZFS root. You won't have any redundancy, but as
> Darren said in another email, you can convert single device vDevs to
> Mirror'd vDevs later without any hassle.
I'd just get some 80 gig disks and mirror them. Migh
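For reference, turning a single-disk vdev into a mirror later is just an
attach (a sketch; pool and device names are placeholders):
zpool attach rpool c1t0d0s0 c1t1d0s0
For a boot pool you'd also want to put boot blocks on the second disk
(e.g. with installgrub on x86); details vary by build.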
On 8/22/08, Ross <[EMAIL PROTECTED]> wrote:
> Yes, that looks pretty good mike. There are a few limitations to that as you
> add the 2nd raidz2 set, but nothing major. When you add the extra disks,
> your original data will still be stored on the first set of disks, if you
yeah i am on gigabit, but the clients are things like an xbox which is
only 10/100, etc. right now the setup works fine. i'm thinking the new
CIFS implementation should make it run even cleaner too.
On 8/22/08, Ross Smith <[EMAIL PROTECTED]> wrote:
> Yup, you got it, and an 8 disk raid-z2 array sh
On 8/26/08, Cyril Plisko <[EMAIL PROTECTED]> wrote:
> that's very interesting! Can you share more info on what these
> bugs/issues are? Since it is LU related I guess we'll never see these
> via opensolaris.org, right? So I would appreciate it if the community
> will be updated when these fixes
Yeah, I'm looking at using 10 disks or 16 disks (depending on which
chassis I get) - and I would like reasonable redundancy (not HA-crazy
redundancy where I can suffer tons of failures, I can power this down
and replace disks, it's a home server) and maximize the amount of
usable space.
Putting up
I have a weekly scrub set up, and I've seen at least once now where it
says "don't snapshot while scrubbing".
Is this a data integrity issue, or will it make one or both of the
processes take longer?
Thanks
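For context, a weekly scrub like this is typically just a root crontab entry
along these lines (a sketch; 'tank' is a placeholder pool name and the 3am
Sunday schedule is arbitrary):
0 3 * * 0 /usr/sbin/zpool scrub tank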
Okay, well I am running snv_94 already. So I guess I'm good :)
On Fri, Sep 5, 2008 at 10:23 AM, Mark Shellenbaum
<[EMAIL PROTECTED]> wrote:
> mike wrote:
>>
>> I have a weekly scrub setup, and I've seen at least once now where it
>> says "don't
On Tue, Sep 16, 2008 at 2:28 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:
> For what it's worth, we put all the disks on our thumpers into a single pool -
> mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and
> would happily go much bigger.
so you have 9 drive raidz1 (8 d
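A layout like Peter describes would be built roughly like this (a sketch;
device names are placeholders and only two of the five raidz1 groups are
written out):
zpool create tank \
  raidz1 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c0t4d0 c1t4d0 c2t4d0 \
  raidz1 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c0t5d0 c1t5d0 c2t5d0 \
  spare c5t7d0
# (three more nine-disk raidz1 groups omitted)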
On Fri, Sep 19, 2008 at 10:16 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> You need to check if the SMF service is running:
> # svcadm -v enable webconsole
> svc:/system/webconsole:console enabled.
> # svcs webconsole
> STATE          STIME    FMRI
> online         19:07:24 svc:/system/
On Sun, Sep 21, 2008 at 1:31 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> Yes, you need to set the corresponding SMF property. Check
> for the value of "options/tcp_listen":
>
> # svcprop -p options/tcp_listen webconsole
> true
>
> If it says "false", you need to set it to "true". Here's
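The standard SMF way to flip that property would be something like this (a
sketch, not necessarily the exact commands from the original message):
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
# svcadm restart svc:/system/webconsole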
On Sun, Sep 21, 2008 at 11:49 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> Hmmm... I run Solaris 10/sparc U4. My /usr/java points to
> jdk/jdk1.5.0_16. I am using Firefox 2.0.0.16. Works For Me(TM) ;-)
> Sorry, can't help you any further. Maybe a question for desktop-discuss?
it's a jav
On Wed, Sep 24, 2008 at 9:37 PM, James Andrewartha <[EMAIL PROTECTED]> wrote:
> Can you post the java error to the list? Do you have gzip compressed or
> aclinherit properties on your filesystems, hitting bug 6715550?
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html
> http
I posted a thread here...
http://forums.opensolaris.com/thread.jspa?threadID=596
I am trying to finish building a system and I kind of need to pick
working NIC and onboard SATA chipsets (video is not a big deal - I can
get a silent PCIe card for that, I already know one which works great)
I need
patible, and have to return it online...
On Tue, Oct 7, 2008 at 1:33 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/10/6 mike <[EMAIL PROTECTED]>:
>> I am trying to finish building a system and I kind of need to pick
>> working NIC and onboard SATA chipsets (video is not a
; supports ECC ram. Coincidentally, it's also the chipset used in the
> Sun Ultra 24 workstation
> (http://www.sun.com/desktop/workstation/ultra24/index.xml).
>
>
> On Mon, Oct 6, 2008 at 1:41 PM, mike <[EMAIL PROTECTED]> wrote:
>> I posted a thread here...
>
l of it thanks to Newegg. I will need to pick up some
4-in-3 enclosures and a better CPU heatsink/fan - this is supposed to
be quiet but it has an annoying hum. Weird. Anyway, so far so good.
Hopefully the power supply can handle all 16 disks too...
On Thu, Oct 9, 2008 at 12:46 PM, mike
>> I'm going to pick up a couple of Supermicro's 5-in-3 enclosures for mine:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405
>>
>>
>> Scott
>>
>> On Wed, Oct 15, 2008 at 12:26 AM, mike <[EMAIL PROTECTED]> wrote:
>
Yeah, for this plan I needed a board with 8 onboard SATA ports or another
8-port SATA controller, so I opted just to get two of the PCI-X ones.
The Supermicro 5-in-3's don't have a fan alarm so you could remove it
or find a quieter fan. I think most of them have quite noisy fans (the
main goal for this besides l
On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
> The exception to the "rule" of multiple 12v output sections is PC
> Power & Cooling - who claim that there is no technical advantage to
> having multiple 12v outputs (and this "feature" is only a marketing
> gimmick). But now
I'm running ZFS on nevada (b94 and b98) on two machines at home, both
with 4 gig ram. one has a quad core intel core2 w/ ECC ram, the other
has normal RAM and an athlon 64 dual-core low power. both seem to be
working great.
On Thu, Oct 23, 2008 at 2:04 PM, Peter Bridge <[EMAIL PROTECTED]> wrote:
>
On Sun, Oct 26, 2008 at 12:47 AM, Peter Bridge <[EMAIL PROTECTED]> wrote:
> Well for a home NAS I'm looking at noise as a big factor. Also for a 24x7
> box, power consumption, that's why the northbridge is putting me off slightly.
That's why I built a full-sized tower using a Lian-Li case with
Hi all,
I have been asked to build a new server and would like to get some opinions on
how to set up a zfs pool for the application running on the server. The server
will be exclusively for running the NetBackup application.
Now, which would be better: setting up a raidz pool with 6x 146GB drives or
By "better" I meant the best practice for a server running the NetBackup
application.
I am not seeing how using raidz would be a performance hit. Usually stripes
perform faster than mirrors.
s chmod 0755 $foo fixes it
- the ACL inheriting doesn't seem to be remembered or I'm not
understanding it properly...
The user 'mike' should have -all- the privileges, period, no matter
what the client machine is etc. I am mounting it -as- mike from both
clients...
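One knob worth checking in a situation like this (a sketch, not necessarily
the fix here; the dataset name is a placeholder) is the filesystem's ACL
inheritance behaviour:
zfs get aclinherit,aclmode tank/share
zfs set aclinherit=passthrough tank/share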
Depends on your hardware. I've been stable for the most part on b98.
A live upgrade to b101 slowed my networking to nearly a standstill.
The problem stuck around even after I nuked the upgrade. I had to reinstall b98.
On Nov 13, 2008, at 10:01 AM, "Vincent Boisard" <[EMAIL PROTECTED]>
wrote:
Thanks for
erent Solaris versions:
http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde
On Fri, Nov 14, 2008 at 2:15 AM, Vincent Boisard <[EMAIL PROTECTED]> wrote:
> Do you have an idea if your problem is due to live upgrade or b101 itself ?
>
> Vincent
>
> On Thu, Nov
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
>> No clue. My friend also upgraded to b101. Said it was working awesome
>> - improved network performance, etc. Then he said after a few days,
>> he's decided to downgrade too - too many other weird side effects.
>
> Any more d
I think you'll need driver support first. Last I checked there
was still no driver support for PMPs (port multipliers), sadly.
On Thu, Nov 20, 2008 at 4:52 PM, Krenz von Leiberman
<[EMAIL PROTECTED]> wrote:
> Does ZFS support pooled, mirrored, and raidz storage with
> SATA-port-multipliers (http://www.serial
i'm not sure how many via chips support 64-bit, which seems to be
highly recommended.
atoms seem to be more suitable.
On Mon, Jan 12, 2009 at 1:14 PM, Joe S wrote:
> In the last few weeks, I've seen a number of new NAS devices released
> from companies like HP, QNAP, VIA, Lacie, Buffalo, Iomega,
I do a daily snapshot of two filesystems, and over the past few months
they've obviously accumulated into quite a bunch.
"zfs list" shows me all of those.
I can change it to use the "-t" flag to not show them, so that's good.
However, I'm worried about boot times and other things.
Will it get to a point with 100
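For reference, the flag in question (these are standard zfs list invocations):
zfs list -t filesystem     # hide snapshots from the listing
zfs list -t snapshot       # list only the snapshots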
unted. Changes have
> been made to speed this up by reducing the number of mnttab lookups.
>
> And zfs list has been changed to no longer show snapshots by default.
> But it still might make sense to limit the number of snapshots saved:
> http://blogs.sun.com/timf/entry/zfs_automatic_s
Brad Stone about
rolling up daily snapshots into monthly snapshots, which would roll up
into yearly snapshots...
On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling wrote:
> mike wrote:
>>
>> Well, I could just use the same script to create my daily snapshot to
>> remove a snapshot
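A rough sketch of the roll-up idea (dataset name, snapshot naming scheme, and
dates are all placeholders):
#!/bin/sh
FS=tank/data
MONTH=2009-02
# keep the last daily of the month as the monthly...
zfs rename $FS@daily-$MONTH-28 $FS@monthly-$MONTH
# ...and drop the rest of that month's dailies
for day in 01 02 03; do      # and so on through the 27th
  zfs destroy $FS@daily-$MONTH-$day
done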
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HCL sometimes.
I am trying to locate chipset info but having a hard time...
2007) would be forward compatible...
On Wed, Mar 11, 2009 at 5:14 PM, mike wrote:
> http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
> http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
>
> It's hard to use the HCL sometimes.
>
doesn't it require Java and X11?
On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote:
>
> On Mar 11, 2009, at 20:14, mike wrote:
>
>>
>> http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
>> http://www.intel.com/support/motherboards
me.
also making the tools simpler - absolutely no UI for instance. does it
really need one to dump out things? :)
On Wed, Mar 11, 2009 at 7:15 PM, David Magda wrote:
>
> On Mar 11, 2009, at 21:59, mike wrote:
>
>> On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote:
>>>
>
ooh. they support it? cool. i'll have to explore that option now.
however i still really want eSATA.
On 1/23/07, Samuel Hexter <[EMAIL PROTECTED]> wrote:
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each)
running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a
My two (everyman's) cents - could something like this be modeled after
MySQL replication or even something like DRBD (drbd.org)? Seems like
possibly the same idea.
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Project Overview:
...
nix based machines.
Thanks in advance! When I saw ZFS and the planned crypto support, I realized
it truly would meet all my needs. I have been telling all my friends about
ZFS; we're all excited, but none of us have had a use for it or equipment we
could run it on yet.
- mike
On 2/5/07, Richa
able. That would be my only design
constraint.
Thanks a ton. Again, any input (good, bad, ugly, personal experiences
or opinions) is appreciated A LOT!
- mike
Crair <[EMAIL PROTECTED]> wrote:
Mike,
Take a look at
http://video.google.com/videoplay?docid=8100808442979626078&q=CSI%3Amunich
Granted, this was for demo purposes, but the team in Munich is clearly
leveraging USB sticks for their purposes.
HTH,
Bev.
mike wrote:
> I still haven't
okay so since this is fixed, Chris, would you consider using USB/FW now?
I am desperate to replace a server that is failing, and I want to
replace it with a proper, quiet ZFS-based solution. I hate being held
captive by NTFS issues (it may have corrupted my data a second time
now).
ZFS's checksummi
Would the system be able to halt if something was unplugged or some
massive failure happened?
That way if something got tripped, I could fix it before any
corruption or other issue occurred.
That would be my safety net, I suppose.
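Builds newer than the ones discussed in this thread added a pool-level
failmode property that covers this kind of thing (a sketch; 'tank' is a
placeholder pool name):
zpool set failmode=wait tank   # I/O to the pool blocks until the device returns
zpool get failmode tank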
On 3/20/07, Sanjeev Bagewadi <[EMAIL PROTECTED]> wrote:
Mike,
W
using port multiplier eSATA with
FreeBSD (perhaps I will hunt down people on a FreeBSD list, to clarify
#3)
I'd like it to be PCI express based. PCI-x is only on normal-sized
motherboards, and I'd love to be using a smaller form factor machine
as the
I'm building a system with two Apple RAIDs attached. I have hardware RAID5
configured so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs
representing the four RAID controllers. For on-going maintenance, will a zpool
scrub be of any benefit? From what I've read with this layer of
efore it's completely failed...
- mike
On 5/4/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Fri, 4 May 2007, Lee Fyock wrote:
> Hi--
>
> I'm looking forward to using zfs on my Mac at some point. My desktop
> server (a dual-1.25GHz G4) has a motley collection of discs that h
ly PCI-e
adapters... Marvell or SI or anything as long as it's PCI-e and has 4
or 5 eSATA ports that can work with a port multiplier (for 4-5 disks
per port) ... I don't think there is a clear fully supported option
yet or I'd be using it right now.
- mike
HO use, sharing files over samba to a couple
Windows machines + a media player.
Side note: Is this right? "ditto" blocks are extra parity blocks
stored on the same disk (won't prevent total disk failures, but could
provide data recovery if enough pari
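For what it's worth, ditto blocks for user data are controlled by the copies
property, and they're extra full copies of each block rather than parity (a
sketch; the dataset name is a placeholder):
zfs set copies=2 tank/media
zfs get copies tank/media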
thanks for the reply.
On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
My personal opinion is that USB is not robust enough under (Open)Solaris
to provide the reliability that someone considering ZFS is looking for.
I base this on experience with two 7 port powered USB hubs, each with 4 *
2Gb K
looks like you used 3 for a total of 15 disks, right?
I have a CM stacker too - I used the CM 4-disks-in-3-5.25"-slots
though. I am currently trying to sell it too, as it is bulky and I
would prefer using eSATA/maybe Firewire/USB enclosures and a small
"controller" machine (like a Shuttle) so it
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
as a more SOHO user i like ZFS mainly for its CO
times (FAT32, NTFS, XFS, JFS) it is encouraging
to see more options that put emphasis on integrity...
On 6/14/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
On June 14, 2007 3:57:55 PM -0700 mike <[EMAIL PROTECTED]> wrote:
> as a more SOHO user i like ZFS mainly for its COW and
On 6/14/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
Yes, but there are many ways to get transactions, e.g. journalling.
ext3 is journaled. it doesn't seem to always be able to recover data.
it also takes forever to fsck. i thought COW might alleviate some of
the fsck needs... it just seems like
On 6/15/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
Hmmm, that's an interesting point. I remember the old days of having to
stagger startup for large drives (physically large, not capacity large).
Can that be done with SATA?
I had to link 2 600w power supplies together to be able to power
. If I really do need room for two
to fail then I suppose I can look for a setup with 14 drives of usable
space and use raidz2.
Thanks,
mike
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
I would not risk raidz on that many disks. A nice compromise may be 14+2
raidz2, which should perform nicely for your workload and be pretty reliable
when the disks start to fail.
Would anyone on the list not recommend this setup? I could li
can power down to replace
any drives or do maintenance. It's mainly for cheap, quiet enclosures
that can export JBOD...
Thanks,
mike
On 8/29/07, Jeffrey W. Baker <[EMAIL PROTECTED]> wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
> ext4's newness, since real
On 9/5/07, Joerg Schilling <[EMAIL PROTECTED]> wrote:
> As I wrote before, my wofs (designed and implemented 1989-1990 for SunOS 4.0,
> published May 23rd 1991) is copy-on-write based, does not need fsck, and always
> offers a stable view of the media because it is COW.
Side question:
If COW is su
On 9/6/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> This is my personal opinion and all, but even knowing that Sun
> encourages open conversations on these mailing lists and blogs it seems to
> falter common sense for people from @sun.com to be commenting on this
> topic. It seems like
On 9/7/07, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> For me, quotas are likely to be a pain point that prevents me from
> making good use of snapshots. Getting changes in application teams'
> understanding and behavior is just too much trouble. Others are:
not to mention the
I actually have a related motherboard, chassis, dual power-supplies
and 12x 400 gig drives already up on eBay too. If I recall correctly, Areca
cards are supported in OpenSolaris...
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=300172982498
On 11/22/07, Jason P. Warr <[EMAIL PROTECTED]> wrote:
> If you
On 1/14/08, eric kustarz <[EMAIL PROTECTED]> wrote:
>
> On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
>
> > www.mozy.com appears to have unlimited backups for 4.95 a month.
> > Hard to beat that. And they're owned by EMC now so you know they
> > aren't going anywhere anytime soon.
mozy's been oka
except in my experience it is piss poor slow... but yes it is another
option that is -basically- built on standards (i say that only because
it's not really a traditional filesystem concept)
On 1/14/08, David Magda <[EMAIL PROTECTED]> wrote:
>
> On Jan 14, 2008, at 17:15, mike
1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely
be considered a replacement for that? I assume that anything below the
filesystem level in regards to redundancy could be an added bonus, but
is it necessary at all?
2) I am looking into building a 10-drive system using 750GB or
Would this be the same as failing a drive on purpose to remove it?
I was under the impression that was supported, but I wasn't sure whether
shrinking a ZFS pool would work.
On 1/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > This is a pretty high priority. We are working on it.
what is the technical difference between forcing a removal and an
actual failure?
isn't it the same process? except one is manually triggered? i would
assume the same resilvering process happens when a usable drive is put
back in...
On 1/18/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
Not quite.
Couldn't this be considered a compatibility list that we can trust for
OpenSolaris and ZFS?
http://www.sun.com/io_technologies/
I've been looking at it for the past few days. I am looking for eSATA
support options - more details below.
Only 2 devices on the list show support for eSATA, both are
for a conversion from RAIDZ
to RAIDZ2, or vice-versa then, correct?
On 1/18/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Mike,
I think you are missing the point. What we are talking about is
removing a drive from a zpool, that is, reducing the zpool's total
capacity by a drive. Say you