On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote:
>
> in /usr/src/cmd/zpool/zpool_main.c :
>
> from line 680 onwards we can probably check for this scenario:
>
> if (altroot != NULL && altroot[0] != '/') {
>         (void) fprintf(stderr, gettext("invalid alternate root '%s': "
>             "must be an absolute path\n"), altroot);
>         nvlist_free(nvroot);
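For clarity, here is a minimal, self-contained sketch of the same validation - reject any
alternate-root argument that is not an absolute path. It only illustrates the check being
proposed; it is not the actual zpool_main.c code path, and it drops gettext() so it can
compile and run on its own.

    /* Sketch only: validate that an "alternate root" argument is an
     * absolute path, mirroring the check proposed above. */
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
            const char *altroot = (argc > 1) ? argv[1] : NULL;

            if (altroot != NULL && altroot[0] != '/') {
                    (void) fprintf(stderr, "invalid alternate root '%s': "
                        "must be an absolute path\n", altroot);
                    return (EXIT_FAILURE);
            }
            (void) printf("alternate root '%s' accepted\n",
                altroot != NULL ? altroot : "(none)");
            return (EXIT_SUCCESS);
    }

Run with a relative path (e.g. "foo") it prints the error; with "/foo" it accepts.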
Not sure if this has been reported or not.
This is fairly minor but slightly annoying.
After a fresh install of snv_64a I ran zpool import and found this:
# zpool import
pool: zfs0
id: 13628474126490956011
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: T
Gary Mills wrote:
On Wed, Jun 20, 2007 at 12:23:18PM -0400, Torrey McMahon wrote:
James C. McPherson wrote:
Roshan Perera wrote:
But Roshan, if your pool is not replicated from ZFS' point of view,
then all the multipathing and RAID controller backup in the world will
not make a
Victor Engle wrote:
On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Also, how does replication at the ZFS level use more storage - I'm
assuming raw block - than at the array level?
Just to add to the previous comments. In the case where you
The interesting collision is going to be file system level encryption
vs. de-duplication, as the former makes the latter pretty difficult:
identical plaintext blocks encrypt to different ciphertext, so they no
longer hash or compare as duplicates.
dave johnson wrote:
How other storage systems do it is by calculating a hash value for said file
(or block), storing that value in a db, then checking every new file (or
block) commit against the db for a match and, if one is found, replacing the
file (or block) with a reference to the duplicate entry in the db.
The most common non-proprietary hash
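To make that concrete, below is a minimal sketch of hash-indexed block de-duplication as
described above: hash each incoming block, look the hash up in a table, and either store the
block or just bump a reference count on the existing copy. It is an illustration under stated
assumptions, not any vendor's implementation; FNV-1a stands in for the cryptographic hash
(e.g. SHA-256) a real system would use, and a tiny fixed-size table stands in for the db.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define BLKSZ   8               /* toy block size */
    #define TBLSZ   64              /* toy "db" size */

    struct entry {
            uint64_t hash;
            char     block[BLKSZ];
            int      used;
            int      refs;
    };

    static struct entry table[TBLSZ];

    /* FNV-1a, standing in for a cryptographic hash such as SHA-256. */
    static uint64_t
    hash_block(const char *buf, size_t len)
    {
            uint64_t h = 14695981039346656037ULL;
            for (size_t i = 0; i < len; i++) {
                    h ^= (unsigned char)buf[i];
                    h *= 1099511628211ULL;
            }
            return (h);
    }

    /* Returns 1 if the block was a duplicate, 0 if it was newly stored. */
    static int
    dedup_write(const char *block)
    {
            uint64_t h = hash_block(block, BLKSZ);
            size_t i, slot = h % TBLSZ;

            /* Linear probing; a real system would use a scalable index. */
            for (i = 0; i < TBLSZ; i++) {
                    struct entry *e = &table[(slot + i) % TBLSZ];

                    if (!e->used) {                 /* new block: store it */
                            e->used = 1;
                            e->refs = 1;
                            e->hash = h;
                            memcpy(e->block, block, BLKSZ);
                            return (0);
                    }
                    if (e->hash == h &&
                        memcmp(e->block, block, BLKSZ) == 0) {
                            e->refs++;              /* duplicate: reference it */
                            return (1);
                    }
            }
            return (0);                             /* table full; ignored here */
    }

    int
    main(void)
    {
            const char *blocks[] = { "AAAAAAAA", "BBBBBBBB", "AAAAAAAA" };
            int i;

            for (i = 0; i < 3; i++)
                    (void) printf("block %d: %s\n", i,
                        dedup_write(blocks[i]) ? "duplicate" : "stored");
            return (0);
    }

The memcmp() after the hash match is a verify step; a system that trusts a strong enough
hash could skip it, accepting a (very small) collision risk.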
On Sun, Jun 24, 2007 at 03:39:40PM -0700, Erik Trimble wrote:
> Matthew Ahrens wrote:
> >Will Murnane wrote:
> >>On 6/23/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
> >>>Now, wouldn't it be nice to have syscalls which would implement "cp"
> >>>and
> >>>"mv", thus abstracting it away from the userl
Matthew Ahrens wrote:
Will Murnane wrote:
On 6/23/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Now, wouldn't it be nice to have syscalls which would implement "cp"
and
"mv", thus abstracting it away from the userland app?
>
Not really. Different apps want different behavior in their copying,
Update on this:
I think I have been caught by an rsync trap.
It seems that using rsync locally (i.e. rsync --inplace localsource
localdestination) and "remotely" (i.e. rsync --inplace localsource
localhost:/localdestination) are two different things, and rsync seems to handle
the writing very differen
Whoops - I see I have posted the same message several times.
This was due to an error message I got when posting, which made me think it didn't
get through.
Could some moderator please delete those duplicate posts?
Meanwhile, I did some tests and have very weird results.
First off, I tried "--inplace" to updat
On Sat, Jun 23, 2007 at 10:21:14PM -0700, Anton B. Rang wrote:
> > Oliver Schinagl wrote:
> > > so basically, what you are saying is that on FBSD there's no performance
> > > issue, whereas on Solaris there (can be if write caches aren't enabled)
> >
> > Solaris plays it safe by default. You can,
Will Murnane wrote:
On 6/23/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Now, wouldn't it be nice to have syscalls which would implement "cp" and
"mv", thus abstracting it away from the userland app?
>
Not really. Different apps want different behavior in their copying,
so you'd have to expose
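For illustration of what such a syscall would have to abstract away, here is a deliberately
plain userland copy loop; the policy choices (buffer size, sparse-file handling, metadata and
ACL preservation, short-write retries) all sit in the application today. It is a sketch only,
not a proposed interface.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
            char buf[65536];        /* buffer size: one of the app's choices */
            ssize_t n;
            int in, out;

            if (argc != 3) {
                    (void) fprintf(stderr, "usage: %s src dst\n", argv[0]);
                    return (EXIT_FAILURE);
            }
            if ((in = open(argv[1], O_RDONLY)) < 0 ||
                (out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0) {
                    perror("open");
                    return (EXIT_FAILURE);
            }
            while ((n = read(in, buf, sizeof (buf))) > 0) {
                    /* No short-write retry, no hole detection, no time/ACL
                     * preservation - each is a behavior some apps want and
                     * others don't. */
                    if (write(out, buf, n) != n) {
                            perror("write");
                            return (EXIT_FAILURE);
                    }
            }
            if (n < 0)
                    perror("read");
            (void) close(in);
            (void) close(out);
            return (n < 0 ? EXIT_FAILURE : EXIT_SUCCESS);
    }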
We have seen this behavior, but it appears to be entirely related to the hardware:
the "Intel IPMI" stuff swallows the NFS traffic on port 623 directly in the
network hardware, so it never gets through.
http://blogs.sun.com/shepler/entry/port_623_or_the_mount
Unfortunately, this NFS hangs acr
So I ended up recreating the zpool from scratch; there seemed to be no chance to
repair anything. All data lost - luckily nothing really important. I have never had
such an experience with mirrored volumes on SVM/ODS since Solaris 2.4.
Just to clarify things: there was no mucking with the underlying disk devic