Yes, but pricing that's so obviously disconnected from cost leads customers to
feel they're being ripped off.
User Name wrote:
> I am building a 14 disk raid 6 array with 1 TB seagate AS (non-enterprise)
> drives.
>
> So there will be 14 disks total, 2 of them will be parity, 12 TB space
> available.
>
> My drives have a BER of 10^14
>
> I am quite scared by my calculations - it appears that if one drive
I am building a 14 disk raid 6 array with 1 TB seagate AS (non-enterprise)
drives.
So there will be 14 disks total, 2 of them will be parity, 12 TB space
available.
My drives have a BER of 10^14
I am quite scared by my calculations - it appears that if one drive fails, and
I do a rebuild, I w
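For anyone wanting to sanity-check that fear, here is a rough back-of-envelope
sketch (assuming the 10^14 figure means one unrecoverable read error per 10^14
bits read, and that a rebuild has to read all 13 surviving 1 TB drives in full):

# expected unrecoverable read errors during one rebuild
awk 'BEGIN {
  bits = 13 * 1e12 * 8     # data read while rebuilding: 13 drives x 1 TB
  rate = 1e-14             # one error per 1e14 bits
  printf("expected UREs per rebuild: %.2f\n", bits * rate)
}'

That comes out to roughly one expected error per full rebuild, which is exactly
why the second parity in raid 6 matters: with only one drive failed, the array
can still repair a single read error hit during the rebuild.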
Hi there,
I'm currently setting up a new system for my lab. 4 SATA drives would be turned
into the main file system (ZFS?) running on a soft raid (raid-z?).
My main target is reliability; my experience with Linux SoftRaid was
catastrophic and the array could not be restored after some testing sim
On Thu, 10 Jul 2008, Robb Snavely wrote:
>
> Now in the VERY unlikely event that we lost the first tray in each rack
> which contain 0 and 4 respectively...
>
> somepool
> mirror---
> 0 |
> 4 | Bye Bye
> ---
>
I have a scenario (a tray failure) for which I am trying to predict how zfs
will behave, and I am looking for some input. Coming from the world of
svm, ZFS is WAY different ;)
If we have 2 racks, containing 4 trays each, 2 6540's that present 8D
Raid5 luns to the OS/zfs, and through zfs we set up a mir
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:14, Richard Elling <[EMAIL PROTECTED]> wrote:
>
>> Drive carriers are a different ballgame. AFAIK, there is no
>> industry standard carrier that meets our needs. We require
>> service LEDs for many of our modern disk carriers, so there
>> is a l
No, the problem data must be moved or copied from where it is, to a different
ZFS.
Raquel
On Thu, Jul 10, 2008 at 1:40 PM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> Moore, Joe wrote:
>> Because the zfs dataset mountpoint may not be the same as the zfs pool
>> name. This makes things a bit complicated for the initial request.
>
> The leading slash will be a problem with the current cod
Richard Elling wrote:
> Torrey McMahon wrote:
>> Spencer Shepler wrote:
>>
>>> On Jul 10, 2008, at 7:05 AM, Ross wrote:
>>>
>>>
Oh god, I hope not. A patent on fitting a card in a PCI-E slot,
or using nvram with RAID (which raid controllers have been doing
for years) wou
Moore, Joe wrote:
> Carson Gaspar wrote:
>> Darren J Moffat wrote:
>>> $ pwd
>>> /cube/builds/darrenm/bugs
>>> $ zfs create -c 6724478
>>>
>>> Why "-c" ? -c for "current directory" "-p" partial is
>> already taken to
>>> mean "create all non existing parents" and "-r" relative is
>> already us
On Thu, Jul 10, 2008 at 4:13 PM, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> In my 15 year experience with Sun Products, I've never known one to care
> about drive brand, model, or firmware. If it was standards compliant for
> both physical interface and protocol, the machine would use it in my
> ex
On Thu, Jul 10, 2008 at 12:14, Richard Elling <[EMAIL PROTECTED]> wrote:
> Drive carriers are a different ballgame. AFAIK, there is no
> industry standard carrier that meets our needs. We require
> service LEDs for many of our modern disk carriers, so there
> is a little bit of extra electronics
On Thu, Jul 10, 2008 at 13:05, Glaser, David <[EMAIL PROTECTED]> wrote:
> Could I trouble you for the x86 package? I don't seem to have much in the way
> of software on this try-n-buy system...
No problem. Packages are posted at
http://will.incorrige.us/solaris-packages/ . You'll need gettext an
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:43, Glaser, David <[EMAIL PROTECTED]> wrote:
>
>> I guess what I was wondering is if there was a direct method rather than the
>> overhead of ssh.
>>
> On receiving machine:
> nc -l 12345 | zfs recv mypool/[EMAIL PROTECTED]
> and on sending mac
Fajar A. Nugraha wrote:
> Brandon High wrote:
>> Another alternative is to use an IDE to Compact Flash adapter, and
>> boot off of flash.
> Just curious, what will that flash contain?
> e.g. will it be similar to linux's /boot, or will it contain the full
> solaris root?
> How do you manage redund
Torrey McMahon wrote:
> Spencer Shepler wrote:
>
>> On Jul 10, 2008, at 7:05 AM, Ross wrote:
>>
>>
>>
>>> Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or
>>> using nvram with RAID (which raid controllers have been doing for
>>> years) would just be ridiculous. Th
Could I trouble you for the x86 package? I don't seem to have much in the way
of software on this try-n-buy system...
Thanks,
Dave
-Original Message-
From: Will Murnane [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 10, 2008 12:58 PM
To: Glaser, David
Cc: zfs-discuss@opensolaris.org
Sub
On Thu, Jul 10, 2008 at 12:43, Glaser, David <[EMAIL PROTECTED]> wrote:
> I guess what I was wondering is if there was a direct method rather than the
> overhead of ssh.
On receiving machine:
nc -l 12345 | zfs recv mypool/[EMAIL PROTECTED]
and on sending machine:
zfs send sourcepool/[EMAIL PROTECTED]
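For anyone following along, the full pattern looks roughly like this (the pool,
dataset, snapshot and host names here are made up, and 12345 is just an
arbitrary unprivileged port):

# on the receiving machine: listen and feed the stream into zfs recv
nc -l 12345 | zfs recv mypool/data@today

# on the sending machine: pipe the snapshot stream across to the receiver
zfs send sourcepool/data@today | nc receiving-host 12345

As Darren points out in his reply, you get no encryption or authentication on
the wire this way, so keep it to a trusted network.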
Thankfully right now it's on a private IP network between the two
machines. I'll play with it a bit and let folks know if I can't get it to work.
Thanks,
Dave
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat
Sent: Thursday, July 10,
Glaser, David wrote:
> I guess what I was wondering is if there was a direct method rather than the
> overhead of ssh.
As others have suggested, use netcat (/usr/bin/nc); however, you get no
over-the-wire data confidentiality or integrity and no strong
authentication with that.
If you need those the
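A rough sketch of the middle ground being hinted at here: keep ssh for its
authentication and integrity, but pick a cheaper cipher so the crypto gets out
of the way of the bulk transfer (dataset and host names are hypothetical, and
the ciphers your ssh build offers may differ):

# arcfour and blowfish-cbc were the usual "fast" choices at the time
zfs send sourcepool/data@today | \
    ssh -c blowfish-cbc receiving-host 'zfs recv mypool/data@today'

Whether that is actually faster than the default depends on the CPUs at both
ends, so it is worth timing before committing to it.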
Mike Gerdts wrote:
> On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Mike Gerdts wrote:
>>> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]>
>>> wrote:
Thoughts ? Is this useful for anyone else ? My above examples are some
of the short
On Jul 10, 2008, at 9:20 AM, Bob Friesenhahn wrote:
>
> I expect that Sun is realizing that it is already undercutting much of
> the rest of its product line.
a) Failure to do so just means that someone else does, and wins the
customer.
b) A lot of "enterprise class" infrastructure wonks are v
I guess what I was wondering is if there was a direct method rather than the
overhead of ssh.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 11:40 AM
To: Glaser, David
Cc: zfs-discuss@opensolaris.org
Subject:
On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
>>
>> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Thoughts ? Is this useful for anyone else ? My above examples are some
>>> of the shorter dataset names I
On Thu, Jul 10, 2008 at 10:20 AM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Thu, 10 Jul 2008, Ross wrote:
> >
> > As a NFS storage platform, you'd be beating EMC and NetApp on price,
> > spindle count, features and performance. I really hope somebody at
> > Sun considers this, and thinks a
Is that faster than blowfish?
Dave
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 12:27 PM
To: Florin Iucha
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions
Florin Iucha
Mike Gerdts wrote:
> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Thoughts ? Is this useful for anyone else ? My above examples are some
>> of the shorter dataset names I use, ones in my home directory can be
>> even deeper.
>
> Quite usable and should be done.
Florin Iucha wrote:
> On Thu, Jul 10, 2008 at 09:02:35AM -0700, Tim Spriggs wrote:
>>> zfs(1) man page, Examples 12 and 13 show how to use send/receive with
>>> ssh. What isn't clear about them ?
>> I found that the overhead of SSH really hampered my ability to transfer
>> data between thumpers
Kyle McDonald wrote:
> Tommaso Boccali wrote:
>
>> .. And the answer was yes I hope. We are seriously thinking of buying
>> 48 1 TB disks to replace those in a 1 year old thumper
>>
>> please confirm it again :)
>>
>>
>>
> In my 15 year experience with Sun Products, I've never known on
On Thu, Jul 10, 2008 at 09:02:35AM -0700, Tim Spriggs wrote:
> > zfs(1) man page, Examples 12 and 13 show how to use send/receive with
> > ssh. What isn't clear about them ?
>
> I found that the overhead of SSH really hampered my ability to transfer
> data between thumpers as well. When I simply
Darren J Moffat wrote:
> Glaser, David wrote:
>
>> Hi all,
>>
>> I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
>>
> > I've seen mention of using zfs send between two different machines,
> > but no good howto in order to make it work.
>
> zfs(1) man page, Examp
On Thu, 10 Jul 2008, Glaser, David wrote:
> x4500 that we've purchased. Right now I'm using rsync over ssh (via
> 1Gb/s network) to copy the data but it is almost painfully slow
> (700GB over 24 hours). Yeah, it's a load of small files for the most
> part. Anyway, would zfs send/receive work bet
On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Thoughts ? Is this useful for anyone else ? My above examples are some
> of the shorter dataset names I use, ones in my home directory can be
> even deeper.
Quite usable and should be done.
The key problem I see is ho
Glaser, David wrote:
> Hi all,
>
> I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
> I've seen mention of using zfs send between two different machines,
> but no good howto in order to make it work.
zfs(1) man page, Examples 12 and 13 show how to use send/receive with
Carson Gaspar wrote:
> Why not "zfs create $PWD/6724478". Works today, traditional UNIX
> behaviour, no coding required. Unless you're in some bizarroland shell
Did you actually try that ?
braveheart# echo $PWD
/tank/p2/2/1
braveheart# zfs create $PWD/44
cannot create '/tank/p2/2/1/44':
On Thu, 10 Jul 2008, Ross wrote:
>
> As a NFS storage platform, you'd be beating EMC and NetApp on price,
> spindle count, features and performance. I really hope somebody at
> Sun considers this, and thinks about expanding the "What can you do
> with an x4540" section on the website to include
Carson Gaspar wrote:
> Darren J Moffat wrote:
> > $ pwd
> > /cube/builds/darrenm/bugs
> > $ zfs create -c 6724478
> >
> > Why "-c" ? -c for "current directory" "-p" partial is
> already taken to
> > mean "create all non existing parents" and "-r" relative is
> already used
> > consistently a
Yup, worked fine. We removed a 500GB disk and replaced it with a 1TB one,
didn't even need any downtime (although you have to be quick with the
screwdriver). I spoke to the x64 line product manager last year and they
confirmed that Sun plan to support 2TB drives, and probably even 4TB drives a
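For anyone curious about the mechanics of that swap, the step being described
is just a zpool replace on the slot whose disk was upgraded (pool and device
names here are hypothetical):

# tell the pool the disk in that slot has been swapped for a bigger one;
# zfs resilvers onto the new drive in the background
zpool replace somepool c5t4d0

# wait for the resilver to finish before touching anything else
zpool status somepool

Note that the extra capacity only becomes usable once every disk in the vdev
has been replaced with a larger one.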
Spencer Shepler wrote:
> On Jul 10, 2008, at 7:05 AM, Ross wrote:
>
>
>> Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or
>> using nvram with RAID (which raid controllers have been doing for
>> years) would just be ridiculous. This is nothing more than cache,
>> and eve
Tommaso Boccali wrote:
> .. And the answer was yes I hope. We are seriously thinking of buying
> 48 1 TB disks to replace those in a 1 year old thumper
>
> please confirm it again :)
>
>
In my 15 year experience with Sun Products, I've never known one to care
about drive brand, model, or firm
On Jul 10, 2008, at 7:05 AM, Ross wrote:
> Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or
> using nvram with RAID (which raid controllers have been doing for
> years) would just be ridiculous. This is nothing more than cache,
> and even with the American patent system
Darren J Moffat wrote:
> Today:
>
> $ zfs create cube/builds/darrenm/bugs/6724478
>
> With this proposal:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create 6724478
>
> Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
...
> Maybe the easiest way out of the ambiguity is
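Until something along those lines exists, a rough approximation of the proposal
can be scripted today, since zfs list accepts a mountpoint path (a minimal
sketch, assuming $PWD really is the mountpoint of an existing dataset; the
function name is made up):

# create a child dataset under whatever dataset is mounted at $PWD
zfs_create_here() {
    parent=$(zfs list -H -o name "$PWD") || return 1
    zfs create "$parent/$1"
}

# e.g. from /cube/builds/darrenm/bugs:
#   zfs_create_here 6724478   ->   zfs create cube/builds/darrenm/bugs/6724478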
Hi. I am working on the ADM project within OpenSolaris. ADM is a Hierarchical
Storage Manager (HSM) for ZFS. An HSM serves several purposes. One is
virtualization of the amount of disk space, by copying (archiving to other
media) a file's data and then freeing that space when needed. Another is as
Hi all,
I'm a little (ok, a lot) confused on the whole zfs send/receive commands. I've
seen mention of using zfs send between two different machines, but no good
howto in order to make it work. I have one try-n-buy x4500 that we are trying
to move data from onto a new x4500 that we've purchased
On Thu, Jul 10, 2008 at 12:47:26AM -0700, Ross wrote:
> My recommendation: buy a small, cheap 2.5" SATA hard drive (or 1.8" SSD) and
> use that as your boot volume, I'd even bolt it to the side of your case if
> you have to. Then use the whole of your three large disks as a raid-z set.
Yup, I'
Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or using nvram
with RAID (which raid controllers have been doing for years) would just be
ridiculous. This is nothing more than cache, and even with the American patent
system I'd have thought it hard to get that past the obviousne
On Thu, Jul 10, 2008 at 3:37 AM, Ross <[EMAIL PROTECTED]> wrote:
> I think it's a cracking upgrade Richard. I was hoping Sun would do
> something like this, so it's great to see it arrive.
>
> As others have said though, I think Sun are missing a trick by not working
> with Vmetro or Fusion-io to
On Thu, 10 Jul 2008, Tim Foster wrote:
> Mark Musante (famous for recently beating the crap out of lu)
Heh. Although at this point it's hard to tell who's the beat-er and who's
the beat-ee...
Regards,
markm
On Thu, 2008-07-10 at 07:12 -0400, Mark J Musante wrote:
> On Thu, 10 Jul 2008, Mark Phalan wrote:
>
> > I find this annoying as well. Another way that would help (but is fairly
> > orthogonal to your suggestion) would be to write a completion module for
> > zsh/bash/whatever that could -complet
On Thu, 10 Jul 2008, Mark Phalan wrote:
> I find this annoying as well. Another way that would help (but is fairly
> orthogonal to your suggestion) would be to write a completion module for
> zsh/bash/whatever that could tab-complete options to the z* commands
> including zfs filesystems.
You mea
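A minimal sketch of the kind of completion module being talked about, for bash
(zsh would use compdef instead); all it does is offer existing dataset names,
which already covers the snapshot/clone/destroy cases:

# offer existing dataset names as completions for the zfs command
_zfs_datasets() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(zfs list -H -o name 2>/dev/null)" -- "$cur") )
}
complete -F _zfs_datasets zfs

A real module would also complete the subcommands and property names, but even
this much saves a lot of typing on deep dataset names.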
On Thu, 2008-07-10 at 13:01 +0200, Mark Phalan wrote:
> > Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
>
> I find this annoying as well. Another way that would help (but is fairly
> orthogonal to your suggestion) would be to write a completion module for
> zsh/bash/whate
On Thu, 2008-07-10 at 11:42 +0100, Darren J Moffat wrote:
> I regularly create new zfs filesystems or snapshots and I find it
> annoying that I have to type the full dataset name in all of those cases.
>
> I propose we allow zfs(1) to infer the part of the dataset name up to the
> current working
Hey everybody,
Well, my pestering paid off. I have a Solaris driver which you're welcome to
download, but please be aware that it comes with NO SUPPORT WHATSOEVER.
I'm very grateful to the chap who provided this driver, please don't abuse his
generosity by calling Micro Memory or Vmetro if you
I regularly create new zfs filesystems or snapshots and I find it
annoying that I have to type the full dataset name in all of those cases.
I propose we allow zfs(1) to infer the part of the dataset name up to the
current working directory. For example:
Today:
$ zfs create cube/builds/darrenm/
Fajar A. Nugraha wrote:
>> If you have enough memory (say 4gb) you probably won't need swap. I
>> believe swap can live in a ZFS pool now too, so you won't necessarily
>> need another slice. You'll just have RAID-Z protected swap.
>>
> Really? I think solaris still needs non-zfs swap for default
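If swap on ZFS is the plan, the usual shape of it is a zvol added as a swap
device (a sketch only; the pool name and size are hypothetical, and whether you
still want a non-ZFS fallback is exactly the open question here):

# carve a fixed-size volume out of the pool and add it as swap
zfs create -V 4G rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

# confirm it is in use
swap -l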
.. And the answer was yes I hope. We are seriously thinking of buying
48 1 TB disks to replace those in a 1 year old thumper
please confirm it again :)
2008/7/10, Ross <[EMAIL PROTECTED]>:
> Heh, I like the way you think Tim. I'm sure Sun hate people like us. The
> first thing I tested when I
Heh, I like the way you think Tim. I'm sure Sun hate people like us. The
first thing I tested when I had an x4500 on trial was to make sure an off the
shelf 1TB disk worked in it :)
The problem with that is that I'd need to mirror them to guard against failure,
I'd lose storage capacity, and the peak throughput would be horrible when
compared to the array.
I'd be sacrificing streaming speed for random write speed, whereas with a PCIe
nvram card I can have my cake and eat i
I think it's a cracking upgrade Richard. I was hoping Sun would do something
like this, so it's great to see it arrive.
As others have said though, I think Sun are missing a trick by not working with
Vmetro or Fusion-io to add nvram cards to the range now. In particular, if Sun
were to work w
Brandon High wrote:
On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha <[EMAIL PROTECTED]> wrote:
The question is, how should I partition the drives, and what tuning
parameters should I use for the pools and file systems? From reading
the best practices guides [1], [2], it seems that I cannot have
My recommendation: buy a small, cheap 2.5" SATA hard drive (or 1.8" SSD) and
use that as your boot volume, I'd even bolt it to the side of your case if you
have to. Then use the whole of your three large disks as a raid-z set.
If I were in your shoes I would also have bought 4 drives for ZFS i
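The layout being recommended boils down to something like this (device names
are hypothetical; the small disk holds the root pool via the installer, and the
three large drives become a single raidz data pool):

# the three big disks as one raidz vdev in a data pool
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

# sanity-check the layout and redundancy
zpool status tank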
Yes, but that talks about Flash systems, and the end of the year. My concern
is whether Sun will also be releasing flash add-on cards that we can make use
of elsewhere, including on already purchased Sun kit.
Much as I'd love to see Sun add a lightning fast flash boosted server to their
x64 r
Florin Iucha wrote:
> On Wed, Jul 09, 2008 at 08:42:37PM -0700, Bohdan Tashchuk wrote:
>>> I cannot use OpenSolaris 2008.05 since it does not
>>> recognize the SATA disks attached to the southbridge.
>>> A fix for this problem went into build 93.
>> Which forum/mailing list discusses SATA issues li