Torrey McMahon wrote:
> A Darren Dunham wrote:
>
>> On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
>>
>>
>>> However, some apps will probably be very unhappy if i/o takes 60 seconds
>>> to complete.
>>>
>>>
>> It's certainly not uncommon for that to occur in
Vincent Fox wrote:
> Ummm, could you back up a bit there?
>
> What do you mean "disk isn't sync'd so boot should fail"? I'm coming from
> UFS of course where I'd expect to be able to fix a damaged boot drive as it
> drops into a single-user root prompt.
>
> I believe I did try boot disk1 but tha
Vincent Fox wrote:
> So I decided to test out failure modes of ZFS root mirrors.
>
> Installed on a V240 with nv90. Worked great.
>
> Pulled out disk1, then replaced it and attached again, resilvered, all good.
>
> Now I pull out disk0 to simulate failure there. OS up and running fine, but
> lot
Ummm, could you back up a bit there?
What do you mean "disk isn't sync'd so boot should fail"? I'm coming from UFS
of course where I'd expect to be able to fix a damaged boot drive as it drops
into a single-user root prompt.
I believe I did try boot disk1 but that failed I think due to prior t
"Glaser, David" <[EMAIL PROTECTED]> writes:
> Hi all, I'm new to the list and I thought I'd start out on the right
> foot. ZFS is great, but I have a couple questions...
>
> I have a Try-n-buy x4500 with one large zfs pool with 40 1TB drives in
> it. The pool is named backup.
>
> Of this pool, I ha
Sounds correct to me. The disk isn't sync'd so boot should fail. If
you pull disk0 or set disk1 as the primary boot device what does it
do? You can't expect it to resilver before booting.
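For completeness: on SPARC at that time, the second half of a ZFS root mirror generally only boots if the boot block was installed on it. A hedged sketch — the device path and disk aliases are examples, not taken from this thread:

```shell
# Install the ZFS boot block on the second mirror half (SPARC).
# c1t1d0s0 is an assumed device path, not from the thread.
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t1d0s0

# Then tell the OBP to try the second disk first:
eeprom boot-device="disk1 disk0"
```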
On 6/11/08, Vincent Fox <[EMAIL PROTECTED]> wrote:
> So I decided to test out failure modes of ZFS root
So I decided to test out failure modes of ZFS root mirrors.
Installed on a V240 with nv90. Worked great.
Pulled out disk1, then replaced it and attached again, resilvered, all good.
Now I pull out disk0 to simulate failure there. OS up and running fine, but
lots of error messages about SYNC CA
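The failure test described in this post boils down to a short command sequence. A rough sketch, with assumed pool and device names (c1t0d0s0/c1t1d0s0 are not from the post):

```shell
# Sketch of the mirror failure/repair cycle described above.
# Pool and device names are assumptions, not from the original post.
zpool status rpool                      # healthy two-way mirror
# (physically pull disk1, insert the replacement, then:)
zpool detach rpool c1t1d0s0             # drop the failed half
zpool attach rpool c1t0d0s0 c1t1d0s0    # reattach; resilver starts
zpool status rpool                      # watch the resilver complete
```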
Thanks, Matt. Are you interested in feedback on various questions regarding
how to display results? On list or off? Thanks.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
A Darren Dunham wrote:
> On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
>
>> However, some apps will probably be very unhappy if i/o takes 60 seconds
>> to complete.
>>
>
> It's certainly not uncommon for that to occur in an NFS environment.
> All of our applications seem
What do you mean by "mirrored vdevs"? Hardware RAID1? Because I have only
an ICH9R and opensolaris doesn't know about it.
Would network boot be a good idea?
On Wed, Jun 11, 2008 at 01:51:17PM -0500, Al Hopper wrote:
> I think that I'll (personally) avoid the initial rush-to-market
> consumer level products by vendors with no track record of high tech
> software development - let alone those who probably can't afford the
> PhD level talent it takes to g
see: http://bugs.opensolaris.org/view_bug.do?bug_id=6700597
I had a similar configuration until my recent re-install to snv_91. Now I
have just 2 ZFS pools - one for root+boot (big enough to hold multiple BEs and
do LiveUpgrades) and another for the rest of my data.
-Wyllys
This is one of those issues where the developers generally seem to think that
old-style quotas are legacy baggage, and that people running large
home-directory sort of servers with 10,000+ users are a minority that can
safely be ignored.
I can understand their thinking. However, it does repr
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
> > I'm not even trying to stripe it across multiple
> > disks, I just want to add another partition (from the
> > same physical disk) to the root pool. Perhaps that
> > is a distinction without a difference, but my goal is
> > to grow
Your key problem is going to be:
Will Sun use SLC or MLC?
From what I have read, the trend now is towards MLC chips, which have a much
lower number of write cycles but are cheaper and offer more storage. So then
they end up layering ECC and wear-levelling on top to address this shortened
life-span. A
Luckily, my system had a pair of identical, 232GB disks. The 2nd wasn't yet
used, so by juggling mirrors (create 3 mirrors, detach the one to change,
etc...), I was able to reconfigure my disks more to my liking - all without a
single reboot or loss of data. I now have 2 pools - a 20GB root po
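The mirror-juggling trick described above (attach, wait for resilver, detach, repartition) can be sketched as follows; the pool and device names are hypothetical:

```shell
# Hypothetical names throughout. Turn the live disk into a mirror,
# let it resilver, then detach the side you want to repartition.
zpool attach tank c0t0d0s0 c0t1d0s0   # second disk joins as a mirror half
zpool status tank                     # wait until the resilver completes
zpool detach tank c0t0d0s0            # free the original disk
# ...repartition the detached disk, then attach it back and repeat
# for the next slice you want to rearrange. No reboot required.
```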
On Wed, Jun 11, 2008 at 4:31 AM, Adam Leventhal <[EMAIL PROTECTED]> wrote:
> On Jun 11, 2008, at 1:16 AM, Al Hopper wrote:
>>
>> But... if you look
>> broadly at the current SSD product offerings, you see: a) lower than
>> expected performance - particularly in regard to write IOPS (I/O Ops
>> per
On Wed, Jun 11, 2008 at 10:35 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Wed, 11 Jun 2008, Al Hopper wrote:
>>
>> disk drives. But - based on personal observation - there is a lot of
>> hype surrounding SSD reliability. Obviously the *promise* of this
>> technology is higher performance
On Wed, Jun 11, 2008 at 8:21 AM, Tim <[EMAIL PROTECTED]> wrote:
> Are those universal though? I was under the impression it had to be
> supported by the motherboard, or you'd fry all components involved.
There are PCI/PCI-X to PCI-e bridge chips available (as well as PCI-e
to AGP) and they're par
Hi All;
Every NAND-based SSD has some RAM. Consumer-grade products will have
smaller, non-battery-protected RAM, a smaller number of parallel working NAND
chips, and a slower CPU to distribute the load. Also, consumer products will
have fewer spare cells.
Enterprise SSDs are g
Hi all.
I'm new to ZFS, and I have just installed my first ZFS pools and file systems.
My Oracle DBA tells me that he's seeing poor performance and would like to go
back to VxFS.
Here's my hardware:
Sun E4500 with Solaris 10, 08/07 release. SAN attached through a Brocade
switch to EMC CX700
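Not an answer from the thread, but a common first step for Oracle on ZFS at the time was aligning the dataset recordsize with the database block size. A hedged sketch — the pool/dataset names and the 8 KB block size are assumptions:

```shell
# Assumed names: pool "tank", Oracle db_block_size of 8 KB.
# recordsize only affects files created after it is set, so do this
# before loading the datafiles.
zfs create -o recordsize=8k -o mountpoint=/oradata tank/oradata
# Redo logs are sequential writes; a separate dataset keeps their
# behavior independent of the datafile tuning:
zfs create -o mountpoint=/oralog tank/oralog
zfs get recordsize tank/oradata
```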
On Wed, 11 Jun 2008, Richard L. Hamilton wrote:
>
> But if you already have the ZAP code, you ought to be able to do
> quick lookups of arbitrary byte sequences, right? Just assume that
> a value not stored is zero (or infinity, or uninitialized, as applicable),
> and you have the same functionali
On Jun 11, 2008, at 11:35 AM, Bob Friesenhahn wrote:
> On Wed, 11 Jun 2008, Al Hopper wrote:
>> disk drives. But - based on personal observation - there is a lot of
>> hype surrounding SSD reliability. Obviously the *promise* of this
>> technology is higher performance and *reliability* with lo
I don't think so, not all of them anyway. They also sell ones that have a
proprietary goldfinger, which obviously would not work.
The spec does not mention any specific restrictions, just lists the interface
types (but it is fairly brief), and you can certainly buy PCI - PCI-E generic
adapters
On Wed, 11 Jun 2008, Al Hopper wrote:
> disk drives. But - based on personal observation - there is a lot of
> hype surrounding SSD reliability. Obviously the *promise* of this
> technology is higher performance and *reliability* with lower power
> requirements due to no (mechanical) moving parts
Richard L. Hamilton wrote:
> Whatever mechanism can check at block allocation/deallocation time
> to keep track of per-filesystem space (vs a filesystem quota, if there is one)
> could surely also do something similar against per-uid/gid/sid quotas. I
> suspect
> a lot of existing functions and d
On Wed, Jun 11, 2008 at 10:18 AM, Lee <[EMAIL PROTECTED]> wrote:
> If you're worried about the bandwidth limitations of putting something like
> the supermicro card in a pci slot how about using an active riser card to
> convert from PCI-E to PCI-X. One of these, or something similar:
>
> http://www
If you're worried about the bandwidth limitations of putting something like the
supermicro card in a pci slot how about using an active riser card to convert
from PCI-E to PCI-X. One of these, or something similar:
http://www.tyan.com/product_accessories_spec.aspx?pid=26
on sale at
http://www.am
Yeah, the command line works fine. I thought it a bit curious that there
was an issue with the HTTP interface. It's low priority, I guess, because it
doesn't really impact the functionality.
Thanks for the responses.
> I'm not even trying to stripe it across multiple
> disks, I just want to add another partition (from the
> same physical disk) to the root pool. Perhaps that
> is a distinction without a difference, but my goal is
> to grow my root pool, not stripe it across disks or
> enable raid features (for
Richard L. Hamilton wrote:
>
> Older SSDs (before cheap and relatively high-cycle-limit flash)
> were RAM cache+battery+hard disk. Surely RAM+battery+flash
> is also possible; the battery only needs to keep the RAM alive long
> enough to stage to the flash. That keeps the write count on the flash
Wyllys Ingersoll wrote:
> I'm not even trying to stripe it across multiple disks, I just want to add
> another partition (from the same physical disk) to the root pool. Perhaps
> that is a distinction without a difference, but my goal is to grow my root
> pool, not stripe it across disks or ena
Hi all, I'm new to the list and I thought I'd start out on the right foot. ZFS
is great, but I have a couple questions
I have a Try-n-buy x4500 with one large zfs pool with 40 1TB drives in it. The
pool is named backup.
Of this pool, I have a number of volumes.
backup/clients
backup/clien
On Wed, Jun 11, 2008 at 12:58 AM, Robin Guo <[EMAIL PROTECTED]> wrote:
> Hi, Mike,
>
> It's like 6452872; it needs enough space for 'zfs promote'
Not really - in 6452872 a file system is at its quota before the
promote is issued. I expect that a promote may cause several KB of
metadata changes th
I'm not even trying to stripe it across multiple disks, I just want to add
another partition (from the same physical disk) to the root pool. Perhaps that
is a distinction without a difference, but my goal is to grow my root pool, not
stripe it across disks or enable raid features (for now).
Cu
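For reference, the restriction Wyllys is hitting can be reproduced like this; the device names are examples and the exact error text varies by build:

```shell
# Adding a second top-level vdev to a root pool is rejected:
zpool add rpool c0t0d0s4
# cannot add to 'rpool': root pool can not have multiple vdevs ...
# Attaching the slice as a mirror half of the existing vdev is allowed:
zpool attach rpool c0t0d0s0 c0t0d0s4
```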
> On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
> > I'm running build 91 with ZFS boot. It seems that ZFS will not allow
> > me to add an additional partition to the current root/boot pool
> > because it is a bootable dataset. Is this a known issue that will be
> > fixe
Hi
after updating to snv_90 (several retries before I patched pkg) I was left with
the following
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  9.87G  24.6G    62K  /rpool
[EMAIL PROTECTED]
> > btw: it seems to me that this thread is a little bit OT.
>
> I don't think it's OT - because SSDs make perfect sense as ZFS log
> and/or cache devices. If I did not make that clear in my OP then I
> failed to communicate clearly. In both these roles (log/cache)
> reliability is of t
> On Sat, 7 Jun 2008, Mattias Pantzare wrote:
> >
> > If I need to count usage I can use du. But if you can implement space
> > usage info on a per-uid basis you are not far from quota per uid...
>
> That sounds like quite a challenge. UIDs are just numbers and new
> ones can appear at an
Tobias Exner wrote:
> The reliability of flash increases a lot if "wear leveling" is
> implemented and there's the capability to build a RAID over a couple of
> flash modules (maybe automatically by the controller).
> And if there are RAM modules as a cache in front of the flash, the most
> prob
The reliability of flash increases a lot if "wear leveling" is
implemented and there's the capability to build a RAID over a couple of
flash modules (maybe automatically by the controller).
And if there are RAM modules as a cache in front of the flash, most
problems will be solved regarding
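The wear-leveling idea mentioned above can be illustrated with a toy allocator that always writes to the least-worn cell. This is a deliberately simplified model for illustration only, not how any real flash controller or FTL works:

```python
# Toy illustration of wear leveling: spread writes across cells so no
# single cell accumulates erase cycles much faster than the others.
import heapq

class WearLeveler:
    def __init__(self, n_cells):
        # min-heap of (erase_count, cell_id): least-worn cell on top
        self.heap = [(0, i) for i in range(n_cells)]
        heapq.heapify(self.heap)

    def write(self):
        """Pick the least-worn cell and charge it one erase cycle."""
        count, cell = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, cell))
        return cell

    def max_wear(self):
        return max(count for count, _ in self.heap)

lev = WearLeveler(4)
for _ in range(100):
    lev.write()
# 100 writes over 4 cells land evenly: 25 erase cycles per cell.
print(lev.max_wear())
```

Without the least-worn policy (e.g. always rewriting cell 0 in place), one cell would absorb all 100 cycles while the others stayed at zero, which is exactly the failure mode wear leveling avoids.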
On Jun 11, 2008, at 1:16 AM, Al Hopper wrote:
> But... if you look
> broadly at the current SSD product offerings, you see: a) lower than
> expected performance - particularly in regard to write IOPS (I/O Ops
> per Second)
True. Flash is quite asymmetric in its performance characteristics.
That sa
On Wed, Jun 11, 2008 at 3:59 AM, Tobias Exner <[EMAIL PROTECTED]> wrote:
> Hi Al,
>
> Sorry, but "leading the market" is not right at this point.
>
> www.superssd.com has the answer to all those questions about SSD and
> reliability/speed for many years..
>
> But I'm with you. I'm looking forward t
Hi Al,
Sorry, but "leading the market" is not right at this point.
www.superssd.com has had the answer to all those questions about SSD
reliability and speed for many years.
But I'm with you. I'm looking forward to the coming products from Sun
concerning SSDs.
btw: it seems to me that this thread
I've been reading, with great (personal/professional) interest about
Sun getting very serious about SSD-equipping servers as a standard
feature in the 2nd half of this year. Yeah! Excellent news - and
it's nice to see Sun lead, rather than trail, the market! Those of us
who are ZFS zealots know
Richard Elling wrote:
> Tobias Exner wrote:
>> Hi John,
>>
>> I've done some tests with a SUN X4500 with zfs and "MAID" using the
>> powerd of Solaris 10 to power down the disks which weren't accessed for
>> a configured time. It's working fine...
>>
>> The only thing I run into was the problem