Hi there
I posted this problem in the Xen discussion before, but with a different
title; I thought it had something to do with memory. So guys, can you read the
thread first?
http://www.opensolaris.org/jive/thread.jspa?threadID=76870&tstart=0
I tried this yesterday, I brought my friend
Just a thought, will we be able to split the ioDrive into slices and use it
simultaneously as a ZIL and slog device? 5GB of write cache and 75GB of read
cache sounds to me like a nice way to use the 80GB model.
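For what it's worth, the split described above might look something like the
following sketch. The pool name (tank) and device path (c2t0d0) are
assumptions; the slices would first be laid out with format(1M):

```shell
# Hypothetical sketch: an 80GB ioDrive carved into a small slog slice
# (s0, ~5GB) and a large read-cache slice (s1, ~75GB), both attached
# to an existing pool named "tank".
zpool add tank log c2t0d0s0     # ~5GB slice as separate intent log (slog)
zpool add tank cache c2t0d0s1   # ~75GB slice as L2ARC read cache
zpool status tank               # verify the log and cache vdevs appear
```

Whether sharing one device between slog and cache workloads hurts either role
in practice would need benchmarking.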
--
This message posted from opensolaris.org
Very interesting idea, thanks for sharing it.
Infiniband would definitely be worth looking at for performance, although I
think you'd need iSER to get the benefits, and that might still be a little new:
http://www.opensolaris.org/os/project/iser/Release-notes/.
It's also worth bearing in mind that you can have multiple mirrors. I don't
know what effect that will have on the performance, but it's an easy way to
boost the reliability even further.
Anas,
Are both (IDE and SATA) disks plugged in?
I had similar problems where the machine would just drop into GRUB and never
boot up despite giving the right GRUB commands.
I finally disconnected the IDE disk and things are fine now.
Thanks and regards,
Sanjeev.
On Mon, Oct 06, 2008 at 12:03:0
Fajar A. Nugraha wrote:
> On Fri, Oct 3, 2008 at 10:37 PM, Vasile Dumitrescu
> <[EMAIL PROTECTED]> wrote:
>
>> VMWare 6.0.4 running on Debian unstable,
>> Linux bigsrv 2.6.26-1-amd64 #1 SMP Wed Sep 24 13:59:41 UTC 2008 x86_64
>> GNU/Linux
>>
>> Solaris is vanilla snv_90 installed with no GUI.
>
Original Message
Subject:Re: [zfs-discuss] ZSF Solaris
Date: Wed, 01 Oct 2008 07:21:56 +0200
From: Jens Elkner <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
References: <[EMAIL PROTECTED]>
<[EMAIL PROTECTED]>
<[EMAIL PROTECTED]>
<[EMAIL PROTECTED]>
Nicolas Williams wrote:
> There have been threads about adding a feature to support slow mirror
> devices that don't stay synced synchronously. At least IIRC. That
> would help. But then, if the pool is busy writing then your slow ZIL
> mirrors would generally be out of sync, thus being of no help.
Anton B. Rang wrote:
> Erik:
>
>>> (2) a SAS drive has better throughput and IOPs than a SATA drive
>>>
>
> Richard:
>
>> Disagree. We proved that the transport layer protocol has no bearing
>> on throughput or iops. Several vendors offer drives which are
> identical in all respects
> I've upgraded to b98, checked if zpool.cache is not
> being added to
> boot archive and tried to boot from VB by presenting
> a partition to it.
> It didn't.
I got it working by installing a new build of OpenSolaris 2008.11 from scratch
rather than upgrading, but deleting zpool.cache, deleting b
> Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL
> PROTECTED],0:a fstype zfs
Is that physical device path correct for your new system?
Or is this the physical device path (stored on-disk in the zpool label)
from some other system? In this case you may be able to
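To check where the stale path came from, one could dump the ZFS labels
directly from the device; the device name below is an assumption:

```shell
# Hypothetical check: dump the ZFS labels from the disk and compare the
# recorded physical path against the current system's device tree.
zdb -l /dev/rdsk/c0t0d0s0

# If the path is stale because the pool came from another machine,
# importing the pool normally rewrites the labels:
zpool import -f rpool
```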
Hi,
I am having a problem running zpool imports when we import multiple storage
pools at one time. Below are the details of the setup:
- We are using a SAN with Sun 6140 storage arrays.
- Dual port HBA on each server is Qlogic running the QLC driver with Sun
mpxio(SFCSM) running.
- We have 400+ LUNs.
On 06 October, 2008 - Luke Schwab sent me these 2,0K bytes:
> Is this a design choice with ZFS coding or a bug? Is there anything I
> can do to increase my import times? We do have the same setup on one
> of our SANs with only 10-20 luns instead of 400+ and the imports take
> only 1-3 seconds. My
Do you have a lot of snapshots? If so, CR 6612830 could be contributing.
Alas, many such fixes are not yet available in S10.
-- richard
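A quick, non-authoritative way to see whether snapshot count could be a
factor (pool layout and dataset names are whatever your system has):

```shell
# Total number of snapshots across all imported pools:
zfs list -t snapshot -H -o name | wc -l

# Rough per-dataset breakdown, sorted by snapshot count:
zfs list -t snapshot -H -o name | cut -d@ -f1 | sort | uniq -c | sort -rn
```

Hundreds of snapshots per pool would make CR 6612830 a plausible suspect for
long import times.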
Luke Schwab wrote:
> Hi,
> I am having a problem running zpool imports when we import multiple storage
> pools at one time. Below are the details of the setup:
D'oh, meant ZIL / slog and L2ARC device. Must have posted that before my early
morning cuppa!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi all
In another thread a short while ago.. A cool little movie with some
gumballs was all we got to learn about green-bytes. The product
launched and maybe some of the people that follow this list have had a
chance to take a look at the code/product more closely? Wstuart asked
how they wer
On Fri, 3 Oct 2008, [EMAIL PROTECTED] wrote:
> Eric Boutilier wrote:
>> Is the following issue related to (will probably get fixed by) bug 6748133?
>> ...
>>
>> During a net-install of b96, I modified the name of the root pool,
>> overriding the default name, rpool. After the install, the pool wa
[EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:
> Hi all
>
> In another thread a short while ago.. A cool little movie with some
> gumballs was all we got to learn about green-bytes. The product
> launched and maybe some of the people that follow this list have had a
> chance to take a look at
Matt Aitkenhead wrote:
> I see that you have wasted no time. I'm still determining if you have a
> sincere interest in working with us or alternatively have an axe to grind.
> The latter is shining through.
>
> Regards,
> Matt
>
Hi Matt,
I'd like to make our correspondence in public if you do
I posted a thread here...
http://forums.opensolaris.com/thread.jspa?threadID=596
I am trying to finish building a system and I kind of need to pick
working NIC and onboard SATA chipsets (video is not a big deal - I can
get a silent PCIe card for that, I already know one which works great)
I need
On Mon, Oct 6, 2008 at 3:00 PM, "C. Bergström" <[EMAIL PROTECTED]>wrote:
> Matt Aitkenhead wrote:
> > I see that you have wasted no time. I'm still determining if you have a
> sincere interest in working with us or alternatively have an axe to grind.
> The latter is shining through.
> >
> > Regards,
Speaking of this, is there a list anywhere that details what we can expect
to see for (zfs) updates in S10U6?
On Mon, Oct 6, 2008 at 2:44 PM, Richard Elling <[EMAIL PROTECTED]>wrote:
> Do you have a lot of snapshots? If so, CR 6612830 could be contributing.
> Alas, many such fixes are not yet available in S10.
On Mon, 6 Oct 2008, Tim wrote:
> ZFS is licensed under the CDDL, and as far as I know does not require
> derivative works to be open source. It's truly free like the BSD license in
It doesn't, but changes made to CDDL-licensed files must be released
(under the CDDL).
> that companies can take CDDL code, modify it, and keep the content closed.
> On Mon, Oct 6, 2008 at 3:00 PM, "C. Bergström" <[EMAIL PROTECTED]
> > wrote:
> Matt Aitkenhead wrote:
> > I see that you have wasted no time. I'm still determining if you
> have a sincere interest in working with us or alternatively have an
> axe to grind. The latter is shining through.
> >
> >
Tim <[EMAIL PROTECTED]> wrote:
> ZFS is licensed under the CDDL, and as far as I know does not require
> derivative works to be open source. It's truly free like the BSD license in
> that companies can take CDDL code, modify it, and keep the content closed.
> They are not forced to share their co
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
>
> There have been threads about adding a feature to support slow mirror
> devices that don't stay synced synchronously. At least IIRC. That
> would help. But then, if the pool is busy writing then your slow ZIL
That would defi
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
>
> I wonder if an AVS-replicated storage device on the backends would be
> appropriate?
>
> write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
>                           \
>                            +-iscsi-> ramdisk -AVS-> physical disk
On Mon, Oct 06, 2008 at 05:38:33PM -0400, Brian Hechinger wrote:
> On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
> > There have been threads about adding a feature to support slow mirror
> > devices that don't stay synced synchronously. At least IIRC. That
> > would help. But
On Mon, 6 Oct 2008, Joerg Schilling wrote:
>> While you may not like it, this isn't the GPL.
>
> The GPL is more free than many people may believe now ;-)
>
> The GPL is unfortunately misunderstood by most people.
The GPL is misunderstood due to the profusion of confusing technobabble
such as you provided in your explanation.
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
>
> I wonder if an AVS-replicated storage device on the backends would be
> appropriate?
>
> write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
>                           \
>                            +-iscsi-> ramdisk -AVS-> physical disk
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> > The GPL is unfortunately misunderstood by most people.
>
> The GPL is misunderstood due to the profusion of confusing technobabble
> such as you provided in your explanation.
If you don't understand it, just don't comment on it ;-)
Jörg
On Mon, Oct 06, 2008 at 01:13:40AM -0700, Ross wrote:
>
> It's also worth bearing in mind that you can have multiple mirrors. I don't
> know what effect that will have on the performance, but it's an easy way to
> boost the reliability even further. I think this idea configured on a set of
>
Scott Williamson wrote:
> Speaking of this, is there a list anywhere that details what we can
> expect to see for (zfs) updates in S10U6?
The official release name is "Solaris 10 10/08"
http://www.sun.com/software/solaris/10
has links to "what's new" videos.
When the release is downloadable,
On Mon, Oct 06, 2008 at 08:01:39PM +0530, Pramod Batni wrote:
>
> On Tue, Sep 30, 2008 at 09:44:21PM -0500, Al Hopper wrote:
> >
> > This behavior is common to tmpfs, UFS and I tested it on early ZFS
> > releases. I have no idea why - I have not made the time to figure it
> > out. What I have ob
Jens Elkner wrote:
> On Mon, Oct 06, 2008 at 08:01:39PM +0530, Pramod Batni wrote:
>> On Tue, Sep 30, 2008 at 09:44:21PM -0500, Al Hopper wrote:
>>> This behavior is common to tmpfs, UFS and I tested it on early ZFS
>>> releases. I have no idea why - I have not made the time to figure it
>>> out. Wh
> Or would they? A box dedicated to being a RAM based
> slog is going to be
> faster than any SSD would be. Especially if you make
> the expensive jump
> to 8Gb FC.
Not necessarily. While this has some advantages in terms of price &
performance, at ~$2400 the 80GB ioDrive would give it a run for its money.