I am the maintainer of GDM, and I am noticing that GDM has a problem when
running on a ZFS filesystem, as with Indiana.
When GDM (the GNOME Display Manager) starts the login GUI, it runs the
following commands on Solaris:
/usr/bin/setfacl -m user:gdm:rwx,mask:rwx /dev/audio
/usr/bin/setfac
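(For context, a sketch rather than the actual GDM change: setfacl manipulates
the old POSIX-draft ACLs, which ZFS does not implement; ZFS exposes NFSv4-style
ACLs that are set through chmod's A syntax instead. The equivalent of the
first command would look roughly like this, assuming the target lives on ZFS:

  /usr/bin/chmod A+user:gdm:read_data/write_data/execute:allow /dev/audio
  /usr/bin/ls -V /dev/audio    # inspect the resulting ACL entries
)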
> "da" == David Anderson <[EMAIL PROTECTED]> writes:
da> (I have never
da> compiled ONNV sources - do I need to do this or can I just
da> recompile the iscsi initiator)?
The source offering is disorganized and spread over many
``consolidations'' which are pushed through ``gates'',
Hi Dan, replying in line:
On Fri, Dec 5, 2008 at 9:19 PM, David Anderson <[EMAIL PROTECTED]> wrote:
> Trying to keep this in the spotlight. Apologies for the lengthy post.
Heh, don't apologise, you should see some of my posts... o_0
> I'd really like to see features as described by Ross in his s
Trying to keep this in the spotlight. Apologies for the lengthy post.
I'd really like to see the features described by Ross in his summary of
the "Availability: ZFS needs to handle disk removal / driver failure
better" thread (http://www.opensolaris.org/jive/thread.jspa?messageID=274031).
I'd li
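(Not a full answer to that thread, but one knob that already exists is the
pool-level failmode property, which controls how ZFS behaves when a device
stops responding. The pool name below is just a placeholder:

  zpool set failmode=continue tank
  zpool get failmode tank

"continue" returns EIO for new writes instead of hanging the whole pool,
while the default "wait" blocks until the device comes back.)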
On Fri, Dec 05, 2008 at 11:35:27AM -0800, Orvar Korvar wrote:
> I see this old post about ZFS fragmenting RAM on 32-bit systems, which makes
> the memory run out. Is it still true, or has it been fixed?
Don't waste your time trying to run ZFS on a 32-bit machine. The performance
is horrible. I
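(If someone has to stay on 32-bit anyway, the usual mitigation is to cap the
ARC so it stops fighting the limited kernel address space; the value below is
only an illustration, set in /etc/system and picked up after a reboot:

  * Cap the ZFS ARC at 512 MB (example value only)
  set zfs:zfs_arc_max = 0x20000000
)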
> "mb" == Mike Brancato <[EMAIL PROTECTED]> writes:
mb> if a 4x100GB raidz only used 150GB of space, one could do
mb> 'zpool remove tank c0t3d0' and data residing on c0t3d0 would
mb> be migrated to other disks in the raidz.
that sounds like in-place changing of stripe width, and w
I see this old post about ZFS fragmenting RAM on 32-bit systems, which makes
the memory run out. Is it still true, or has it been fixed?
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-July/003506.html
[EMAIL PROTECTED] said:
> Thanks for the tips. I'm not sure if they will be relevant, though. We
> don't talk directly with the AMS1000. We are using a USP-VM to virtualize
> all of our storage and we didn't have to add anything to the drv
> configuration files to see the new disk (mpxio was alr
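(As a side note, two quick ways to confirm that MPxIO really owns the new LUN
and sees all of its paths, with no driver config edits implied:

  stmsboot -L        # list device name mappings for MPxIO-controlled devices
  mpathadm list lu   # show each logical unit and its operational path count
)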
Mike Brancato wrote:
> With ZFS, we can enable copies=[1,2,3] to configure how many copies of data
> there are. With copies of 2 or more, in theory, an entire disk can have read
> errors, and the zfs volume still works.
No, this is not a completely true statement.
> The unfortunate part here
Well, I knew it wasn't available. I meant to ask what the status of the
feature's development is. Not started, I presume.
Is there no timeline?
In theory, with 2 80GB drives, you would always have a copy somewhere else.
But with a single drive, no.
I guess I'm thinking of the optimal situation. With multiple drives, copies
are spread across the vdevs. I guess it would work better if we could specify
that, with copies=2 or more, at leas
Mike Brancato wrote:
> I've seen discussions as far back as 2006 that say development is underway to
> allow the addition and removal of disks in a raidz vdev to grow/shrink the
> group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do
> 'zpool remove tank c0t3d0' and data residing on c0t3d0 would be migrated to
> other disks in the raidz.
On Fri, 5 Dec 2008, Mike Brancato wrote:
> With ZFS, we can enable copies=[1,2,3] to configure how many copies
> of data there are. With copies of 2 or more, in theory, an entire
> disk can have read errors, and the zfs volume still works.
So you are saying that if we use copies of 2 or more t
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data
there are. With copies of 2 or more, in theory, an entire disk can have read
errors, and the zfs volume still works.
The unfortunate part here is that the redundancy lies in the volume, not the
pool vdev like with ra
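(Concretely, copies is a per-dataset property and only affects blocks written
after it is set; the dataset name here is just a placeholder:

  zfs set copies=2 tank/home
  zfs get copies tank/home

Data that already existed keeps the number of copies it was written with, so
old files need to be rewritten or restored to pick up the extra redundancy.)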
Richard Elling wrote:
> The answer may lie in the /var/adm/messages file which should report
> if a reset was received or sent.
Here is a sample set of messages at that time. It looks like timeouts
on the SSD for various requested blocks. Maybe I need to talk with
Intel about this issue.
Ethan
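(For what it's worth, two quick ways to pull the evidence together; the grep
pattern is only a guess at the usual sd/scsi wording:

  egrep -i 'reset|timeout|retr' /var/adm/messages | tail -50
  iostat -En    # per-device error counters, including transport errors
)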
I've seen discussions as far back as 2006 that say development is underway to
allow the addition and removal of disks in a raidz vdev to grow/shrink the
group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do
'zpool remove tank c0t3d0' and data residing on c0t3d0 would be migrated to
other disks in the raidz.
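(For contrast, a couple of growth operations the tools already support today;
device names are placeholders:

  zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0   # add a second raidz vdev
  zpool replace tank c0t3d0 c0t4d0                   # swap a disk for a larger one

Neither of these removes a disk from an existing raidz group or shrinks the
pool, which is exactly the gap being discussed.)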
Ethan Erchinger wrote:
>
> Richard Elling wrote:
>>>
>>>asc = 0x29
>>>ascq = 0x0
>>
>> ASC/ASCQ 29/00 is POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
>> http://www.t10.org/lists/asc-num.htm#ASC_29
>>
>> [this should be more descriptive as the codes are, more-or-less,
>> standardiz