Paul B. Henson wrote:
> What would be the best way to allow the service account to chown the newly
> created ZFS filesystem to the appropriate user? Right now I'm tentatively
> thinking of making a small suid root binary only executable by the service
> account which would take a username and chown
Last time I played with one of those, the problem was that it didn't have any
drivers for Solaris. It's a PCIe device, unlike something like the Gigabyte
i-RAM or an Intel SSD.
--
This message posted from opensolaris.org
So I've been playing with SXCE in anticipation of the release of S10U6
(which last I heard has been delayed until sometime in October :( ), seeing
how I might integrate our identity management system and ZFS provisioning
using a minimum-privilege service account.
I need to be able to create files
http://www.fusionio.com/Products.aspx
Looks like a cool SSD to go with ZFS
Has anybody tried ZFS with Fusion-io storage? For that matter, even with
Solaris?
-Jignesh
--
Jignesh Shah http://blogs.sun.com/jkshah
Sun Microsystems, Inc. http://sun.com/postgresql
> "bi" == Blake Irvin <[EMAIL PROTECTED]> writes:
bi> running 'zpool status' or 'zpool status -xv'
bi> during a resilver as a non-privileged user has no adverse
bi> effect, but if i do the same as root, the resilver restarts.
I have this in my ZFS bug notes:
From: Thomas Bleek <[
On Sep 23, 2008, at 12:48 PM, Richard Elling wrote:
>
> So you admit that you didn't grok it? :-)
> Dude poured in a big bag of gumballs, but they were de-duped,
> so the gumball machine only had a few gumballs.
>
When my data is deduped that's a GoodThing (other than my unanswered
query to th
Richard Elling wrote:
> Bob Friesenhahn wrote:
>
>> On Tue, 23 Sep 2008, Eric Schrock wrote:
>>
>>> See:
>>>
>>> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
>>
>> I must apologize for annoying everyone. When Richard Elling posted the
>> Gre
Tim Haley wrote:
> Vincent Fox wrote:
>
>> Just make SURE the other host is actually truly DEAD!
>>
>> If for some reason it's simply wedged, or you have lost console access but
>> hostA is still "live", then you can end up with 2 systems having access
>> to the same ZFS pool.
>>
>> I have don
Vincent Fox wrote:
> Just make SURE the other host is actually truly DEAD!
>
> If for some reason it's simply wedged, or you have lost console access but
> hostA is still "live", then you can end up with 2 systems having access
> to the same ZFS pool.
>
> I have done this in test, 2 hosts acces
Bob Friesenhahn wrote:
> On Tue, 23 Sep 2008, Eric Schrock wrote:
>
>> See:
>>
>> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
>
> I must apologize for annoying everyone. When Richard Elling posted the
> GreenBytes link without saying what it was I completely ig
On 23.09.08 21:25, Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company
> named Greenbytes which will be offering a system called Cypress which
> supports deduplication and arrangement of data to minimize power
> consumption. It seems that deduplicatio
Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company
> named Greenbytes which will be offering a system called Cypress which
> supports deduplication and arrangement of data to minimize power
> consumption. It seems that deduplication is at the file le
On Tue, 23 Sep 2008, Eric Schrock wrote:
> See:
>
> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
I must apologize for annoying everyone. When Richard Elling posted the
GreenBytes link without saying what it was I completely ignored it.
I assumed that it would be Windows-c
Today while reading EE Times I read an article about a startup company
named Greenbytes which will be offering a system called Cypress which
supports deduplication and arrangement of data to minimize power
consumption. It seems that deduplication is at the file level. The
product is initially
See:
http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
On Tue, Sep 23, 2008 at 12:25:59PM -0500, Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company
> named Greenbytes which will be offering a system called Cypress which
> supports d
On Tue, Sep 23, 2008 at 10:25 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Today while reading EE Times I read an article about a startup company
> named Greenbytes which will be offering a system called Cypress which
> supports deduplication and arrangement of data to minimize power
> consumpt
On Tue, Sep 23, 2008 at 01:04:34PM -0500, Bob Friesenhahn wrote:
> On Tue, 23 Sep 2008, Eric Schrock wrote:
> > http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
>
> I must apologize for annoying everyone. When Richard Elling posted the
> GreenBytes link without saying what it w
Just make SURE the other host is actually truly DEAD!
If for some reason it's simply wedged, or you have lost console access but
hostA is still "live", then you can end up with 2 systems having access to the
same ZFS pool.
I have done this in test, 2 hosts accessing the same pool, and the result is
Is there a bug for the behavior noted in the subject line of this post?
Running 'zpool status' or 'zpool status -xv' during a resilver as a
non-privileged user has no adverse effect, but if I do the same as root, the
resilver restarts.
While I'm not running OpenSolaris here, I feel this is a go
On Tue, Sep 23, 2008 at 08:56:39AM +0200, Nils Goroll wrote:
>> That case appears to be about trying to get a raidz sized properly
>> against disks of different sizes. I don't see a similar issue for
>> someone preferring a concat over a stripe.
>
> I don't quite understand your comment.
>
> The q
Leal,
Yes, it was a stripe, so I have problems. There is really nothing I can do at
this point. Luckily I've backed up my important data elsewhere, but it'll
take a while to get some of my other non-critical information back. Oh well, you
win some, you lose some. It's all a learning experie
Hi Michael,
Sorry, here is the info. The main thing I noticed is that I am not able to
start the NFS server.
ech3-mes01.prod:schadala[561] ~ $ svcs -a |grep nfs
disabled 19:43:31 svc:/network/nfs/server:default
online 19:11:49 svc:/network/nfs/cbd:default
online 19:11:49 svc:/netwo
What was the configuration of that pool? Was it a mirror, raidz, or just a
stripe? If it was just a stripe and you lose one, you have problems...
Leal.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Hello Aaron,
Tuesday, September 23, 2008, 8:24:36 AM, you wrote:
>
> I actually ran into a situation where I needed to concatenate LUNs last week. In my case, the Sun 2540 storage arrays don't yet have the ability to create LUNs over 2TB, so to use all the storage within the array on one
I actually ran into a situation where I needed to concatenate LUNs last
week. In my case, the Sun 2540 storage arrays don't yet have the ability to
create LUNs over 2TB, so to use all the storage within the array on one host
efficiently, I created two LUNs per RAID group, for a total of 4 LUNs. T