Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives, and mount them mirrored, and in a zpool so that I don't have to
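As a rough sketch, a layout like that boils down to something along these
lines (c1t0d0 through c1t3d0 are only placeholder names for the 2TB drives):

  # zpool create tank mirror c1t0d0 c1t1d0
  # zpool add tank mirror c1t2d0 c1t3d0

Each mirrored pair is its own vdev and the pool stripes across them, so more
pairs can be added later without rebuilding anything.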
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
> Sent: Saturday, May 01, 2010 7:07 PM
>
> On Sat, 1 May 2010, Peter Tribble wrote:
> >>
> >> With the new Oracle policies, it seems unlikely that you will be able to
> >> reinstall the OS and achieve what you had before.
> >
> > And
On 05/01/2010 06:07 PM, Bill Sommerfeld wrote:
> there are two reasons why you could get this:
> 1) the labels are gone.
Possible, since I got the metadata errors on `zpool status` before.
> 2) the labels are not at the start of what solaris sees as p1, and thus
> are somewhere else on the disk.
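For case 2, one thing that may help is pointing zdb -l at the other device
nodes for the same disk and seeing where valid labels actually turn up
(c5t0d0 below is just a placeholder for the real device):

  # zdb -l /dev/rdsk/c5t0d0p0   (whole disk, as fdisk sees it)
  # zdb -l /dev/rdsk/c5t0d0p1   (first fdisk partition)
  # zdb -l /dev/rdsk/c5t0d0s0   (Solaris slice 0)

Whichever node prints all four labels is most likely the path the pool was
actually created on, and the one worth importing from.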
This actually turned out to be a lot of fun! The end of it is that I have a hard
disk partition now which can boot in both the physical and the virtual world (got
rid of the VDIs finally!). The physical world has outstanding performance but has
ugly graphics (1600x1200 vesa driver with weird DPI and fonts).
When I first started using ZFS I tried to create a pool from my disks
/dev/c8d1 and /dev/c8d1 (I could see the slices, though).
I could not see those disks without being root, and although I understood
that, ZFS didn't: it could not find the disks and did not tell me I needed
to be root.
That is all...
On Sat, 1 May 2010, Peter Tribble wrote:
With the new Oracle policies, it seems unlikely that you will be able to
reinstall the OS and achieve what you had before.
And what policies have Oracle introduced that mean you can't reinstall
your system?
The main concern is that you might not be ab
On 05/ 1/10 04:46 PM, Edward Ned Harvey wrote:
One more really important gotcha. Let's suppose the version of zfs on the
CD supports up to zpool 14. Let's suppose your "live" system had been fully
updated before crash, and let's suppose the zpool had been upgraded to zpool
15. Wouldn't that me
On Fri, Apr 30, 2010 at 6:39 PM, Bob Friesenhahn
wrote:
> On Thu, 29 Apr 2010, Edward Ned Harvey wrote:
>>
>> This is why I suggested the technique of:
>> Reinstall the OS just like you did when you first built your machine, before
>> the catastrophe. It doesn't even matter if you make the sam
On 05/01/10 13:06, Diogo Franco wrote:
After seeing that in some cases labels were corrupted, I tried running
zdb -l on mine:
...
(labels 0, 1 not there, labels 2, 3 are there).
I'm looking for pointers on how to fix this situation, since the disk
still has available metadata.
there are two
On Fri, 30 Apr 2010, Freddie Cash wrote:
Without a periodic scrub that touches every single bit of data in the pool,
how can you be sure that 10-year-old files that haven't been opened in 5 years
are still intact?
You don't. But it seems that having two or three extra copies of the
data on diff
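For reference, the scrub itself is trivial to run periodically; "tank" below
is a placeholder pool name:

  # zpool scrub tank
  # zpool status -v tank

A weekly cron entry along the lines of "0 3 * * 0 /usr/sbin/zpool scrub tank"
covers it, and zpool status then reports any checksum errors the scrub found
or repaired.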
On Apr 29, 2010, at 2:20 AM, Freddie Cash wrote:
> On Wed, Apr 28, 2010 at 2:48 PM, Victor Latushkin
> wrote:
>
> 2. Run 'zdb -ddd storage' and provide the sections titled Dirty Time Logs
>
> See attached.
So you really do have enough redundancy to be able to handle this scenario, so
this is s
I had a single spare 500GB HDD and I decided to install a FreeBSD file
server on it for learning purposes, and I moved almost all of my data
to it. Yesterday, and naturally after no longer having backups of the
data in the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD w
On Sat, May 1, 2010 at 7:08 AM, Gabriele Bulfon wrote:
> My question is:
> - is it correct to mount the iSCSI device as base disks for the VM and then
> create zpools/volumes in it, considering that behind it there is already
> another zfs?
Yes, that will work fine. In fact, zfs checksums will
> If the kernel (or root) can open an arbitrary directory by inode number,
> then the kernel (or root) can find the inode number of its parent by looking
> at the '..' entry, which the kernel (or root) can then open, and identify
> both: the name of the child subdir whose inode number is already k
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mattias Pantzare
>
> The nfs server can find the file but not the file _name_.
>
> inode is all that the NFS server needs, it does not need the file name
> if it has the inode number.
It is n
What problem are you trying to solve?
On 1 May 2010, at 02:18, Tuomas Leikola
wrote:
Hi.
I have a simple question. Is it safe to place a log device on another
zfs disk?
I'm planning on placing the log on my mirrored root partition. Using
latest opensolaris.
On Sat, 1 May 2010, Edward Ned Harvey wrote:
Would that be fuel to recommend people, "Never upgrade your version of zpool
or zfs on your rpool?"
It does seem to be a wise policy to not update the pool and filesystem
versions unless you require a new pool or filesystem feature. Then
you woul
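A quick way to see where a system stands without committing to anything
(pool and dataset names below are only examples):

  # zpool upgrade              (lists pools using an older on-disk version; changes nothing)
  # zpool get version rpool
  # zfs get version rpool

Only an explicit "zpool upgrade <pool>" or "zfs upgrade <fs>" actually bumps
the versions, so checking is free.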
On Sat, May 1, 2010 at 16:49, wrote:
>
>
>>No, a NFS client will not ask the NFS server for a name by sending the
>>inode or NFS-handle. There is no need for a NFS client to do that.
>
> The NFS clients, certainly version 2 and 3, only use the "file handle";
> the file handle can be decoded by the
>No, a NFS client will not ask the NFS server for a name by sending the
>inode or NFS-handle. There is no need for a NFS client to do that.
The NFS clients, certainly version 2 and 3, only use the "file handle";
the file handle can be decoded by the server. The file handle does not
contain the name
On Sat, May 1, 2010 at 16:23, wrote:
>
>
>>I understand you cannot lookup names by inode number in general, because
>>that would present a security violation. Joe User should not be able to
>>find the name of an item that's in a directory where he does not have
>>permission.
>>
>>
>>
>>But, even
>I understand you cannot lookup names by inode number in general, because
>that would present a security violation. Joe User should not be able to
>find the name of an item that's in a directory where he does not have
>permission.
>
>
>
>But, even if it can only be run by root, is there some wa
I'm trying to figure out what the best practice is in this scenario:
- let's say I have a zfs based storage (let's say nexenta) that has its zfs
pools and volumes shared as iSCSI raw devices
- let's say I have another server running xvm or virtualbox connected to the
storage
- let's say one of the virt
Forget about files for the moment, because directories are fundamentally
easier to deal with.
Let's suppose I've got the inode number of some directory in the present
filesystem.
[r...@filer ~]# ls -id /share/projects/foo/goo/rev1.0/working
14363 /share/projects/foo/goo/rev1.0/working/
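If all you have is that inode number, the brute-force fallback for root is to
let find(1) walk the tree (assuming your find supports -inum):

[r...@filer ~]# find /share/projects -inum 14363 -type d

That will eventually print the working directory shown above, but only by
scanning everything under /share/projects, which is exactly why a direct
inode-to-name lookup would be so much nicer.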
I was going through this posting and it seems that there is some "personal
tension" :).
However, going back to the technical problem of scrubbing a 200 TB pool, I think
this issue needs to be addressed.
One warning up front: this write-up is rather long, and if you'd like to jump to
the part dealin
Hi.
I have a simple question. Is it safe to place a log device on another zfs
disk?
I'm planning on placing the log on my mirrored root partition. Using latest
opensolaris.
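Syntax-wise this is only one command; "tank" and the slice names below are
placeholders for the pool and for spare slices on the two root disks:

  # zpool add tank log mirror c0t0d0s5 c0t1d0s5

Whether it actually helps is a separate question, since the log traffic then
competes with the root pool for the same spindles.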
- "Cindy Swearingen" skrev:
> Brandon,
>
> You're probably hitting this CR:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824
Interesting - reported in February and still no fix?
roy