It seems like a ZFS issue, but then it seems like a hardware issue, as it's
always the same drive that seems to hang things up (but dd read tests from it
are fine).
I'd really appreciate it if anyone has any ideas or things to try. My read
throughput is around 300 KB/sec, next to nothing.
The closest bug I can find is this: 6772082 (ahci: ZFS hangs when IO happens)
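For reference, this is the kind of raw read test I mean, which bypasses ZFS entirely (a sketch; c6d1p0 is the whole-disk device node for the suspect drive, as mentioned further down):

  # sequential raw read straight off the disk, about 1 GB worth
  dd if=/dev/rdsk/c6d1p0 of=/dev/null bs=1024k count=1024
  # in another terminal, watch per-disk throughput and error counters
  iostat -xne 5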
ne back). The drives had only a few days' use.
Tim
I'll dd the whole disk tonight. I was thinking it was bad spots, given that I
can copy some files (admittedly small ones) better than others. That said,
seeing the throughput sit at 349 KB/sec on different files is rather odd. And
then the files that manage to copy OK, the
I never formatted these drives when I built the box; I just added them to ZFS.
I can try format > analyze > read as well.
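For the record, the non-destructive read pass I have in mind via format(1M) goes roughly like this (prompts may differ slightly by build):

  # format
  Specify disk (enter its number): <pick the suspect disk, e.g. c6d1>
  format> analyze
  analyze> read    # non-destructive read test across the whole disk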
ng though...I dare say
there are no bad sectors, but something is amiss with the scanning. Anyone ever
seen a drive behave like this before? I thought the count being 512x was a
little odd too.
Going to do some more tests.
Tim
at exactly the same spot it slows down? I've just run the test a number of
times, and without fail, at exactly the same spot, the read will just crawl
along erratically. It's at approx 51256xxx.
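One way to confirm it's positional (a sketch, treating 51256xxx as a 512-byte block offset on the raw device):

  # start a little before the suspect region and watch the rate collapse
  dd if=/dev/rdsk/c6d1p0 of=/dev/null bs=512 skip=51200000 count=200000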
I'm just doing a surface scan via the Samsung utility to see if I see the same
slowdown.
Hmm, I'm not seeing the same slowdown when I boot from the Samsung EStool CD
and run a diagnostic that performs a surface scan...
Could this still be a hardware issue, or possibly something with the Solaris
data format on the disk?
fmdump shows errors on a different drive, and none on the one that has this
slow read problem:
Nov 27 2009 20:58:28.670057389 ereport.io.scsi.cmd.disk.recovered
nvlist version: 0
class = ereport.io.scsi.cmd.disk.recovered
ena = 0xbeb7f4dd531
detector = (embedded nvlist
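(For anyone wanting to repeat the check, these are just the stock FMA and iostat tools:)

  fmdump -eV | more    # full detail for each ereport, like the one above
  iostat -En           # per-device soft/hard/transport error counters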
t of the RAIDZ data (not the best backup, but at least
something), but to date I can't read well enough from the RAIDZ pool to do that
backup, and it's a few years of home video. I think I'd just fall to pieces if
I lost it.
I'd greatly appreciate
ly 2 weeks of total use.
Cindy, thanks for the reply, I really appreciate it.
Tim
there's actually no device c6d1 in /dev/dsk, only:
t...@opensolaris:/dev/dsk$ ls -l c6d1*
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p0 ->
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:q
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p1 ->
../../devices/p...@0,0/pci10de,5..
should I use slice 2 instead of p0:
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 60796
I had referred to this blog entry:
http://blogs.sun.com/observatory/entry/which_disk_devices_to_use
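For what it's worth, my reading of that entry (hedged, and the c6d2 below is purely a made-up example): p0 is the whole disk seen through the fdisk partition table and s2 is the traditional backup slice of an SMI label, but the usual advice is to hand zpool the bare cXdY name and let ZFS put its own EFI label on the disk, e.g.

  # bare disk name, no p*/s* suffix
  zpool replace storage c6d1p0 c6d2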
Hmm, OK, doing the replace with the existing drive still in place wasn't the
best option... it's replacing, but very slowly, as it's reading from that
suspect disk:
  pool: storage
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
Slow and steady wins the race?
I ended up doing a zpool remove of c6d1p0. This stopped the replace and
removed c6d1p0, and left the array doing a scrub, which by my rough
calculations was going to take around 12 months and increasing!
So I shut the box down, disconnected the SATA cable fro
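For anyone hitting the same thing, the sequence I'd expect to be cleaner (a sketch, not what I actually ran; c7d1 is a made-up name for the replacement disk):

  zpool offline storage c6d1p0        # stop ZFS issuing I/O to the sick disk
  # power down, swap the drive, boot, then:
  zpool replace storage c6d1p0 c7d1   # resilver onto the new disk from parity
  zpool status -v storage             # watch resilver progress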
Ross,
Have you gotten a card and had a chance to test this yet? Does it work with
the new(er) SNV builds?
--Tim
Except the article was redacted. The reason the battery life
decreased was that the throughput increased so much that it drove
CPU usage up, thus bringing down battery life. It just goes to
show how SEVERELY I/O bound we currently are. The flash itself was
using LESS power.
--tim
On Wed, Jul 23, 2008 at 2:37 PM, Steve <[EMAIL PROTECTED]> wrote:
> I've been a fan of ZFS since I read about it last year.
>
> Now I'm on the way to building a home fileserver, and I'm thinking of going
> with OpenSolaris and eventually ZFS!!
>
> Apart from the other components, the main problem is to choo
On Mon, Aug 4, 2008 at 8:02 AM, Ross <[EMAIL PROTECTED]> wrote:
> Did anybody ever get this card working? SuperMicro only have Windows and
> Linux drivers listed on their site. Do Sun's generic drivers work with this
> card?
>
>
Still waiting to buy a set. I've already got the supermicro marve
Thanks for the link. I'll consider those, but it still means a new CPU, and
it appears it does not support any of the opteron line-up.
On Mon, Aug 4, 2008 at 3:58 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 4, 2008 at 6:49 AM, Tim <[EMAIL PROTECTED]> wrote
the IBM bridge chips.
Food for thought.
--Tim
On Thu, Aug 14, 2008 at 5:24 AM, Ross <[EMAIL PROTECTED]> wrote:
> This is the problem when you try to write up a good summary of what you
> found. I've got pages and pages of notes of all the tests I did here, far
> mo
You could always try FreeBSD :)
--Tim
On Fri, Aug 15, 2008 at 9:44 AM, Ross <[EMAIL PROTECTED]> wrote:
> Haven't a clue, but I've just gotten around to installing windows on this
> box to test and I can confirm that hot plug works just fine in windows.
>
> Drives app
ing so reliably.
--Tim
On Mon, Aug 18, 2008 at 6:06 AM, Bernhard Holzer <[EMAIL PROTECTED]>wrote:
> Hi,
>
> I am searching for a roadmap for shrinking a pool. Is there some
> project? Where can I find information on when it will be implemented in
> Solaris 10
>
> Tha
ally improved with the SSD, IMHO. Of course,
the fact that it started giving me I/O errors after just 3 weeks means it's
going to be RMA'd and won't find a home back in my laptop anytime soon.
This was one of the 64GB OCZ Core drives, for reference.
--Tim
I don't think it's just b94; I recall this behavior for as long as I've
had the card. I'd also be interested to know if the Sun driver team
has ever even tested with this card. I realize it's probably not a top
priority, but it sure would be nice to have it working properly.
On 8/20/08, Ross
14+2 or 7+1
On 8/22/08, Miles Nordin <[EMAIL PROTECTED]> wrote:
>> "m" == mike <[EMAIL PROTECTED]> writes:
>
> m> can you combine two zpools together?
>
> no. You can have many vdevs in one pool. for example you can have a
> mirror vdev and a raidz2 vdev in the same pool. You can al
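A quick sketch of what Miles is describing (pool and device names made up):

  zpool create tank mirror c1t0d0 c1t1d0
  # -f is needed because the new vdev's redundancy level differs from the mirror's
  zpool add -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0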
On Sat, Aug 23, 2008 at 11:06 PM, Todd H. Poole <[EMAIL PROTECTED]>wrote:
> Howdy yall,
>
> Earlier this month I downloaded and installed the latest copy of
> OpenSolaris (2008.05) so that I could test out some of the newer features
> I've heard so much about, primarily ZFS.
>
> My goal was to rep
I'm saying the hardware is most likely
the cause, by way of the driver. There really isn't any *setting* in Solaris I'm
aware of that says "hey, freeze my system when a drive dies". That just
sounds like hot-swap isn't working as it should be.
--Tim
I'm pretty sure pci-ide doesn't support hot-swap. I believe you need ahci.
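A quick way to check which driver the controller is actually bound to (stock Solaris tools):

  prtconf -D | egrep -i 'ahci|pci-ide'   # driver bound to each device node
  modinfo | grep -i ahci                 # is the ahci module loaded at all?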
On 8/24/08, Todd H. Poole <[EMAIL PROTECTED]> wrote:
> Ah, yes - all four hard drives are connected to the motherboard's onboard
> SATA II ports. There is one additional drive I have neglected to mention
> thus far (th
>
>
> By the way: Is there a way to pull up a text-only interface from the login
> screen (or during the boot process?) without having to log in (or just sit
> there reading about "SunOS Release 5.11 Version snv_86 64-bit")? It would be
> nice if I could see a bit more information during boot, or
usy having a pissing match to get that
USEFUL information back to the list.
--Tim
On Wed, Aug 27, 2008 at 1:08 PM, Kenny <[EMAIL PROTECTED]> wrote:
> Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
>
> I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each). The
> host system (SUN Enterprise 5220) recognizes the "disks" as each having
> 9
to access that data via CIFS. Any ideas?
>
>
You'd have to share out an iSCSI LUN to *insert destination*. Then share
out the LUN from the host it's presented to via CIFS/NFS/whatever. You
can't magically make an iSCSI LUN out of the CIFS data currently
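Roughly, using the shareiscsi property that builds of that era had (a sketch; all names invented):

  # carve a zvol out of the pool and export it as an iSCSI target
  zfs create -V 100g tank/luns/vol1
  zfs set shareiscsi=on tank/luns/vol1
  # the host that logs into that target then puts a filesystem on the LUN
  # and shares it out itself over CIFS/NFS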
drive 1.
>
Solaris does not do this. This is one of the many annoyances I have with
Linux. The way they handle /dev is ridiculous. Did you add a new drive?
Let's renumber everything!
--Tim
, zpool status, and format so we can verify
what you're seeing. :)
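i.e. paste the output of something like:

  zpool status -v
  format    # just the disk list it prints at startup, then quit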
--Tim
storage? FC/SATA/SAS? Internal, coming off a SAN? I'm assuming the files
won't get hit very hard if they're just office documents, but you may have a
special use-case :)
--Tim
On Wed, Aug 27, 2008 at 5:33 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "t" == Tim <[EMAIL PROTECTED]> writes:
>
> t> Solaris does not do this.
>
> yeah but the locators for local disks are still based on
> pci/controll
m $20 components' and you begin to dilute the
> quality and predictability of the composite system's behaviour.
>
But this NEVER happens on Linux *grin*.
>
> If hard drive firmware is as cr*ppy as anecdotes indicate, what can
> we really expect from a $20 USB pendrive?
e to find the package? I would really love to have the zfs
> admin gui on my system.
>
> -Klaus
>
>
My personal conspiracy theory is it's part of "project fishworks" that is
still under wraps.
--Tim
exactly :)
On 8/28/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Daniel Rock wrote:
>>
>> Kenny schrieb:
>> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>>
>> > /scsi_vhci/[EMAIL PROTECTED]
>> >3. c6t600A0B800049F93C030D48B3EAB6d0
>>
>> > /scsi_vhci/[EMAIL
; Fantastic engineering from a company which went defunct shortly after
> delivering the system.
And let this be a lesson to all of you not to write code that is too good.
If you can't sell an "update" (patch) every 6 months, you'll be out of
business as well :D
--Tim
NetApp does NOT recommend 100 percent. Perhaps you should talk to
NetApp, or one of their partners who knows their tech, instead of their
competitors next time.
ZFS, the way it's currently implemented, will require roughly the same
as NetApp... which still isn't 100.
On 8/30/08, Ross <[EMAIL PROTEC
With the restriping: wouldn't it be as simple as creating a new
folder/dataset/whatever on the same pool and doing an rsync to the new
location on that pool? This would obviously cause a short downtime to
switch over and delete the old dataset, but it seems like it should
work fine. If you're doubling
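Roughly (a sketch; dataset names invented, and you'd want to quiesce writers before the final pass):

  zfs create tank/data_new
  rsync -a /tank/data/ /tank/data_new/   # bulk copy while everything stays online
  # brief downtime: stop writers, run one final rsync pass, then swap names
  rsync -a /tank/data/ /tank/data_new/
  zfs rename tank/data tank/data_old
  zfs rename tank/data_new tank/data
  zfs destroy -r tank/data_old           # reclaims the old, badly-striped copy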
On Sun, Aug 31, 2008 at 10:39 AM, Ross Smith <[EMAIL PROTECTED]> wrote:
> Hey Tim,
>
> I'll admit I just quoted the blog without checking, I seem to remember the
> sales rep I spoke to recommending putting aside 20-50% of my disk for
> snapshots. Compared to ZFS whe
s which I don't think is quite production-ready yet.
I'm sure there are others on this list much better versed in pNFS than I am
who can speak to that solution.
--Tim
On Thu, Sep 4, 2008 at 12:59 PM, Jean Luc Berrier <[EMAIL PROTECTED]> wrote:
> Hi,
>
> My problem i
The driver is binary-only with no support; it was passed on behind the
scenes as a favor. I don't know what debugging is going to be possible.
--Tim
The new Intel MLCs have proven to be as fast as, if not
faster than, the SLCs, but they also cost just as much. If they brought the
price down, I'd say MLC all the way. All other things being equal, though,
SLC.
--Tim
>
>
>But to be honest I don't wish for a driver for every chip---I'm
>not trying to ``convert'' machines, I buy them specifically for
>the task. I just want an open driver that works well for some
>fairly-priced card I can actually buy. I'm willing to fight the
>OEM problem:
>
On Fri, Sep 26, 2008 at 1:02 PM, Will Murnane <[EMAIL PROTECTED]>wrote:
> On Thu, Sep 25, 2008 at 18:51, Tim <[EMAIL PROTECTED]> wrote:
> > So what's wrong with this card?
> > http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
> If you have
On Fri, Sep 26, 2008 at 12:29 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "t" == Tim <[EMAIL PROTECTED]> writes:
>
> t>
> http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
>
> I'm not sure. A diffe
On Fri, Sep 26, 2008 at 5:07 PM, Will Murnane <[EMAIL PROTECTED]>wrote:
> On Fri, Sep 26, 2008 at 21:51, Tim <[EMAIL PROTECTED]> wrote:
> > This is not a UIO card. It's a standard PCI-E card. What the
> description
> > is telling you is that you can com
Did you try disabling the card cache as others advised?
--Tim
ive, not an application clustering
perspective) to anything Solaris has to offer right now, and also much,
much, MUCH easier to configure/manage. Let me know when I can plug an
infiniband cable between two Solaris boxes and type "cf enable" and we'll
talk :)
--Tim
- Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
>
Wrong link?
--Tim
>
> --Toby
>
> > ...
>
So how is a server running Solaris with a QLogic HBA connected to an FC JBOD
any different from a NetApp filer running ONTAP with a QLogic HBA directly
connected to an FC JBOD? How is it "several unreliable subsystem
On Tue, Sep 30, 2008 at 5:19 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
> To make Will's argument more succinct (), with a NetApp, undetectable
> (by the NetApp) errors can be introduced at the HBA and transport layer (FC
> switch, slightly damaged cable) levels. ZFS will detect such errors, a
oints... if you don't mind spending 8x as much for the cards
:)
--Tim
guess this makes them equal. How about a new "reliable NFS" protocol,
> that computes the hashes on the client side, sends it over the wire to be
> written remotely on the zfs storage node ?!
>
Won't be happening anytime soon.
--Tim
On Tue, Sep 30, 2008 at 7:15 PM, David Magda <[EMAIL PROTECTED]> wrote:
> On Sep 30, 2008, at 19:09, Tim wrote:
>
> SAS has far greater performance, and if your workload is extremely random,
>> will have a longer MTBF. SATA drives suffer badly on random workloads.
this a recommended setup? It looks too good to be true?
>
I *HIGHLY* doubt you'll see better performance out of the SATA, but it is
possible. You don't need two spares with SAS; one is more than enough with
that few disks. I'd suggest
2 raids. Aside from
> performance, basically SATA is obviously cheaper and it will saturate the gig
> link, so performance is fine too; so the question becomes which has better data
> protection (8 SATA in RAID1 or 8 SAS in raidz2)?
>
SAS's main benefits are seek time and max IOPS.
es a GREAT job of creating disparate storage islands,
something EVERY enterprise is trying to get rid of. Not create more of.
--Tim
On Tue, Sep 30, 2008 at 10:44 PM, Toby Thain <[EMAIL PROTECTED]>wrote:
>
>
> ZFS allows the architectural option of separate storage without losing end
> to end protection, so the distinction is still important. Of course this
> means ZFS itself runs on the application server, but so what?
>
> --T
On Wed, Oct 1, 2008 at 12:24 AM, Ian Collins <[EMAIL PROTECTED]> wrote:
> Tim wrote:
> >
> > As it does in ANY fileserver scenario, INCLUDING zfs. He is building
> > a FILESERVER. This is not an APPLICATION server. You seem to be
> > stuck on this idea that ev
On Tue, Sep 30, 2008 at 11:58 PM, Nicolas Williams <[EMAIL PROTECTED]
> wrote:
> On Tue, Sep 30, 2008 at 08:54:50PM -0500, Tim wrote:
> > As it does in ANY fileserver scenario, INCLUDING zfs. He is building a
> > FILESERVER. This is not an APPLICATION server. You seem to
On Wed, Oct 1, 2008 at 9:18 AM, Joerg Schilling <
[EMAIL PROTECTED]> wrote:
> David Magda <[EMAIL PROTECTED]> wrote:
>
> > On Sep 30, 2008, at 19:09, Tim wrote:
> >
> > > SAS has far greater performance, and if your workload is extremely
> > > ran
On Wed, Oct 1, 2008 at 10:28 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:
> On Wed, 1 Oct 2008, Tim wrote:
>
>>
>>> I think you'd be surprised how large an organisation can migrate most,
>>> if not all of their application servers to zones one or two Th
On Wed, Oct 1, 2008 at 11:20 AM, <[EMAIL PROTECTED]> wrote:
>
>
> >Ummm, no. SATA and SAS seek times are not even in the same universe. They
> >most definitely do not use the same mechanics inside. Whoever told you that
> >rubbish is an outright liar.
>
>
> Which particular disks are
On Wed, Oct 1, 2008 at 11:53 AM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:
> Thanks for all the opinions everyone, my current impression is:
> - I do need as much RAM as I can afford (16GB look good enough for me)
>
Depends on both the workload, and the amount of storage behind it. From
your descr
th 7200 rpm SATA drives. There are faster SATA drives but
> these
> drives consume more power.
>
That's because the faster SATA drives cost just as much money as their SAS
counterparts for less performance and none of the advantages SAS brings such
as dual ports. Not to me
there with a spec sheet and tell me how
you think things are going to work. I can tell you from real-life
experience you're not even remotely correct in your assumptions.
--Tim
g cost more than the equipment.
>
> Bob
>
It's called USABLE IOPS/$. You can throw 500 drives at a workload; if
you're attempting to access lots of small files in random ways, it won't
make a lick of difference.
--Tim
n
that companies can take CDDL code, modify it, and keep the content closed.
They are not forced to share their code. That's why there are "closed"
patches that go into mainline Solaris, but are not part of OpenSolaris.
While you may not like it, this isn't the GPL.
--Tim
pers and mirroring them,
there are none. The motherboard is a single point of failure.
--Tim
On Mon, Oct 20, 2008 at 11:32 AM, William Saadi <[EMAIL PROTECTED]>wrote:
> Hi all,
>
> I have a little question.
> With RAID-Z rules, what is the true usable disk space?
> Is there a calculation like with other RAID levels (e.g. RAID5 = number of disks - 1 for parity)?
>
>
# of disks - 1 for parity
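So, back of the envelope (ignoring metadata and allocation overhead):

  5 x 1 TB in raidz1  ->  ~4 TB usable  (n - 1)
  8 x 1 TB in raidz2  ->  ~6 TB usable  (n - 2)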
Well, what's the end goal? What are you testing for that you need from the
thumper?
I/O interfaces? CPU? Chipset? If you need *everything* you don't have any
other choice.
--Tim
On Tue, Nov 4, 2008 at 5:11 PM, Gary Mills <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 04, 2008
Just got an email about this today. Fishworks finally unveiled?
http://www.sun.com/launch/2008-1110/index.jsp
On Mon, Nov 10, 2008 at 3:07 PM, Andy Lubel <[EMAIL PROTECTED]> wrote:
> LOL, I guess Sun forgot that they had xvm! I wonder if you could use a
> converter (vmware converter) to make it work on vbox etc?
>
> I would also like to see this available as an upgrade to our 4500's..
> Webconsole/zfs ju
CPU/Memory get swapped out, and from
> the chasis layout, looks fairly involved. We don't want to "upgrade"
> something that we just bought so we can take advantage of this software
> which appears to finally complete the Sun NAS picture with zfs!
>
works
were to go away, could I attach the JBOD/disks to a system running
snv/mainline Solaris/whatever, and import the pool to get at the data? Or
is the ZFS underneath Fishworks proprietary as well?
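i.e. whether the plain import path would still work (sketch; the pool name is made up):

  zpool import             # scan attached disks for importable pools
  zpool import -f mypool   # -f if the pool was last in use on the appliance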
--Tim
literature seems to
suggest otherwise though. I can totally understand the issue with adding
one, or just a few additional disks to the pool, but if you were to double
the number of disks, in theory, that should be fairly seamless.
Thanks,
--Tim
possible via means like you've described above. If I
had no exposure to zfs though, it might be a bit less clear, and I guess in
this case, I wasn't 100% positive even with the background I have.
If I've missed something, it wouldn
On Mon, Nov 17, 2008 at 2:36 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 17, 2008 at 01:38:29PM -0600, Tim wrote:
> >
> > And this passage:
> > "If there is a broken or missing disk, we don't let you proceed without
> > explicit confirmat
On Mon, Nov 17, 2008 at 3:33 PM, Will Murnane <[EMAIL PROTECTED]>wrote:
> On Mon, Nov 17, 2008 at 20:54, BJ Quinn <[EMAIL PROTECTED]> wrote:
> > 1. Dedup is what I really want, but it's not implemented yet.
> Yes, as I read it. greenBytes [1] claims to have dedup on their
> system; you might inv
install grub, claiming that s0 was an invalid
location. After a reboot though, all was well.
--Tim
ithout requiring
> a reboot (or export/import)
>
> Casper
It's a known bug, I don't have the ID offhand, but I'm sure someone can get
it to you ;)
--Tim
.
Moral of the story continues to be: if you want protection against a failed
disk, use a RAID algorithm that provides it.
--Tim
On Thu, Nov 20, 2008 at 12:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "t" == Tim <[EMAIL PROTECTED]> writes:
>
> >> a fourth 500gb disk and add
> >> it to the pool as the second vdev, what happens when that
>
On Thu, Nov 20, 2008 at 5:02 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "t" == Tim <[EMAIL PROTECTED]> writes:
>
> t> Pretty sure ALL of the above are settings that can be changed.
>
> nope. But feel free to be more specif
On Thu, Nov 20, 2008 at 5:37 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "t" == Tim <[EMAIL PROTECTED]> writes:
>
> t> Uhh, yes. There's more than one post here describing how to
> t> set what the system does when the
ing like "zfs list snapshots", and if you wanted to
limit that to a specific pool "zfs list snapshots poolname".
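(For the record, the syntax that actually shipped uses -t:)

  zfs list -t snapshot                # every snapshot on the system
  zfs list -t snapshot -r poolname    # limited to one pool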
--Tim
if you stuck another 2 GB of RAM in there
you'd see far less of a *hit* from X or a VM.
--Tim
> :_classblock
> :name sd
>
>
>
> hth,
> James
> --
>
I don't know that that necessarily makes it *EASY* to find the drive,
especially if they're in a hot-swap bay. Something like an "led_on" type
c
ver with an external 4 bay enclosure.
>
> -Marko
It's about what you want in a home device, not what Sun's target enterprise
market uses. I suggest looking into Windows Home Server; it meets your
requiremen
On Mon, Nov 24, 2008 at 4:04 PM, marko b <[EMAIL PROTECTED]> wrote:
> Darren,
>
> Perhaps I misspoke when I said that it wasn't about cost. It is _partially_
> about cost.
>
> Monetary cost of drives isn't a major concern. At about $110-150 each.
> Loss of efficiency (mirroring 50%), zraid1 (25%),
s, and/or some level of
> management integration thru either the web UI or ESX's console ? If there's
> nothing official, did anyone hack any scripts for that?
>
> Regards
>
It can be scripted; no, there's no integration. If there were, it would have
to be
700 -R /rpool/export/home/user01/fs1
This will give user01 full privileges for fs1, and no permissions for
anyone else. You should really do some reading though; that's Unix 101:
http://www.perlfect.com/articles/chmod.shtml
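Spelled out (a sketch; the create/chown steps are assumed context, only the chmod comes from the advice above):

  zfs create rpool/export/home/user01/fs1
  chown -R user01 /rpool/export/home/user01/fs1
  chmod -R 700 /rpool/export/home/user01/fs1   # rwx for the owner, nothing for group/other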
--Tim