Hi all,
It seems a user managed to create files dated Oct 16, 2057, from a Linux distro
that had NFS-mounted the volumes of an x2100 server running S10U5 with ZFS.
The problem is, those files are completely unreachable on the S10 server:
# ls -l .gtk-bookmarks
.gtk-bookmarks: Value too large for defined data type
I see, thanks.
And as Jörg said, I only need a 64-bit binary. I didn't know there was one for
ls, but there is, and it works as expected:
$ /usr/bin/amd64/ls -l .gtk-bookmarks
-rw-r--r-- 1 user opc0 oct. 16 2057 .gtk-bookmarks
This is a bit absurd; I thought Solaris was fully 64-bit.
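For what it's worth, the underlying limit is the classic one: a signed 32-bit
time_t cannot represent dates past early 2038, which is exactly why a 2057
timestamp makes the 32-bit ls fail with EOVERFLOW while the 64-bit binary is
fine. A quick back-of-the-envelope check (plain POSIX shell arithmetic, nothing
Solaris-specific):

```shell
# A signed 32-bit time_t wraps after 2^31 - 1 seconds past the 1970 epoch:
cutoff=2147483647                    # (1 << 31) - 1, i.e. 2038-01-19 03:14:07 UTC
# Any timestamp in 2057 exceeds it, so a 32-bit stat() fails with EOVERFLOW
# ("Value too large for defined data type"); a 64-bit time_t is unaffected.
seconds_per_year=31556952            # average Gregorian year in seconds
echo "32-bit time_t covers about $(( cutoff / seconds_per_year )) years past 1970"
# prints: 32-bit time_t covers about 68 years past 1970
```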
(As I'm not subscribed to this list, you can keep me in CC:, but I'll check out
the Jive thread)
Hi all,
I've seen this question asked several times, but no solution was ever provided.
I'm trying to offline a faulted device in a RAID-Z2 vdev on Solaris 10. This is
done according to the documentation.
I don't have a replacement, but I don't want the disk to be used right now by
the volume: how do I do that?
This is exactly the point of the offline command as explained in the
documentation: disabling unreliable hardware, or removing it temporarily.
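For anyone landing here from a search, the workflow being attempted looks like
the following. The pool name `tank` and device `c1t3d0` are placeholders I made
up, not names from this thread, and the sketch guards on `zpool` being present
since these are Solaris/illumos commands:

```shell
# Sketch of the intended offline/online workflow (placeholder names).
if command -v zpool >/dev/null 2>&1; then
    zpool status tank                 # locate the faulted device
    zpool offline tank c1t3d0         # stop all I/O to the unreliable disk
    # ... later, once the disk has been repaired or replaced:
    zpool online tank c1t3d0
else
    echo "zpool not available on this host"
fi
```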
So this is a huge bug of the documentation?
> You could offline the disk if *this* disk (not
> the pool) had a replica. Nothing wrong with the
> documentation. Hmm, maybe it is a little misleading
> here. I walked into the same "trap".
I apologize for being daft here, but I don't find any ambiguity in the
documentation.
This is explicit
> You're right, from the documentation it definitely
> should work. Still, it doesn't. At least not in
> Solaris 10. But I am not a ZFS developer, so this
> should probably be answered by them. I will give it a
> try with a recent OpenSolaris VM and check whether
> this works in newer implementations.
Thanks a lot, Cindy!
Let me know how it goes or if I can provide more info.
Part of the bad luck I've had with that set is that it reports such errors
about once a month, then everything goes back to normal. So I'm pretty sure
I'll get a chance to try offlining the disk someday.
Laurent
Hi all,
I've just put my first ZFS into production, and users are complaining about
some regressions.
One problem for them is that they can no longer see all the users' directories
in the automount point: the homedirs used to be part of a single UFS, and were
browsable with the correct autofs options.
As I pointed out above, it used to work: there is no nobrowse flag there. I
also tried to force -browse, with no change.
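For reference, the knob being discussed lives in the autofs master map; a
sketch of the stock Solaris entry (the map names here are the defaults, a site
map may differ):

```
# /etc/auto_master: -browse lets ls on the mount point list all map
# entries before they are mounted; -nobrowse suppresses that.
/home  auto_home  -browse
```

One caveat that I believe applies here: autofs browsing can only enumerate
explicit map entries; a wildcard entry such as `* server:/export/home/&` cannot
be listed, which would explain why -browse appears to do nothing once each
homedir became its own filesystem reached through a wildcard map.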
Laurent
PS:
I had answered by email yesterday, but my post is still awaiting moderator
approval; I would rather have it rejected outright than wait hopelessly for it
to appear.
Hi all,
Another issue users have pointed out with ZFS: now, when their ZFS homedirs are
automounted, the total size shown is not always correct.
All the homedirs have a 10 GB quota, which is supposed to be the total size
shown. However, when there are snapshots on those filesystems and they consume
space, the total reported is smaller.
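This matches how ZFS reports space to df/statvfs: with a quota set, the "total"
is computed as used plus available, and snapshot space is charged against the
quota, so the total visibly shrinks. A toy calculation with made-up numbers
(10 GB quota, 2 GB of live data, 3 GB held by snapshots; plain POSIX shell):

```shell
# Illustrative numbers only, not from the thread.
quota=10; live=2; snap=3
# With a quota set, df on a ZFS reports size = used + avail, where
# avail = quota - (live data + snapshot space). Snapshot space therefore
# silently shrinks the "total" that users see.
avail=$(( quota - live - snap ))
echo "df reports: size=$(( live + avail ))G used=${live}G avail=${avail}G"
# prints: df reports: size=7G used=2G avail=5G
```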