Should work just fine.
It's probably not quite what you had in mind, but this will achieve the result.
I've done something similar for zfs testing.
Have each individual machine schlep out the disk as a target via iSCSI. I
haven't happened to notice whether Solaris has an iSCSI target, but NetBSD's
target works with
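Roughly what that looks like on the Solaris side (a sketch only; the target names,
addresses, and device names below are invented, and it assumes the Solaris iSCSI
initiator is installed):
# On the machine that will own the pool, enable static discovery and
# point the initiator at each box exporting a disk:
iscsiadm modify discovery --static enable
iscsiadm add static-config iqn.2006-05.org.example:box1-disk0,192.168.1.11
iscsiadm add static-config iqn.2006-05.org.example:box2-disk0,192.168.1.12
# The remote LUNs then show up as ordinary disks (check with format),
# and you can pool them as usual, e.g.:
zpool create bigpool mirror c2t1d0 c2t2d0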
In case anyone is bored and wants some ZFS reading, here are some links for you.
comparison of ZFS vs. Linux RAID and LVM
http://unixconsult.org/zfs_vs_lvm.html
ZFS ready for home use http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html
moving ZFS filesystems using the zfs backup/restore command
On Wed, 2006-05-03 at 17:20, Matthew Ahrens wrote:
> We appreciate your suggestion that we implement a higher-performance
> method for storing additional metadata associated with files. This will
> most likely not be possible within the extended attribute interface, and
> will require that we desi
> # zpool history jen
> History for 'jen':
> 2006-04-27T10:38:36 zpool create jen mirror ...
I have two suggestions which are just minor nits compared with the rest of this
discussion:
1. Why do you print a "T" between the date and the time? I think a space would
be more readable.
2. When pri
ZFS doesn't demand that the drives have the same capacity. If you
mirror a 30GB drive and a 40GB drive then ZFS will treat them as two
30GB drives, so you're effectively wasting 10GB of space.
Noel
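A quick illustration of that (device names are made up):
# mirror a 30GB disk with a 40GB disk
zpool create tank mirror c0t0d0 c0t1d0
# SIZE reported here will be roughly 30G, not 40G
zpool list tank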
Bill Sommerfeld wrote:
...
So it's really both - the subcommand successfully executes when it's
actually written to disk and the txg is synced.
I found myself backtracking while reading that sentence due to the
ambiguity in the first half -- did you mean the write of the literal
text of
On Wed, May 03, 2006 at 03:52:39PM -0700, Craig Cory wrote:
> I, too, am late to this thread but I caught something that didn't seem right
> to me in this specific example. For the administration of the non-global
> zones, SunEducation (for whom I am an instructor) is stressing that the ng
> zones
Craig Cory wrote:
I, too, am late to this thread but I caught something that didn't seem right
to me in this specific example. For the administration of the non-global
zones, SunEducation (for whom I am an instructor) is stressing that the ng
zones are "Software Virtualizations" (my quotes) and
I, too, am late to this thread but I caught something that didn't seem right
to me in this specific example. For the administration of the non-global
zones, SunEducation (for whom I am an instructor) is stressing that the ng
zones are "Software Virtualizations" (my quotes) and that the hardware and
On Wed, May 03, 2006 at 03:05:25PM -0700, Eric Schrock wrote:
> On Wed, May 03, 2006 at 02:47:57PM -0700, eric kustarz wrote:
> > Jason Schroeder wrote:
> >
> > >eric kustarz wrote:
> > >
> > >>The following case is about to go to PSARC. Comments are welcome.
> > >>
> > >>eric
> > >>
> > >To piggy
Ed Gould wrote:
On May 3, 2006, at 15:21, eric kustarz wrote:
There are basically two writes that need to happen: one for the time and
one for the subcommand string. The kernel just needs to make sure that if
a write completes, the data is parseable (has a delimiter). It's then
up to the userland pars
On Wed, May 03, 2006 at 03:34:56PM -0700, Ed Gould wrote:
> I think this might be a case where a structured record (like the
> compact XML suggestion made earlier) would help. At least having
> distinguished "start" and "end" markers (whether they be one byte each,
> or XML constructs) for a re
On May 3, 2006, at 15:21, eric kustarz wrote:
There are basically two writes that need to happen: one for the time and one
for the subcommand string. The kernel just needs to make sure that if a
write completes, the data is parseable (has a delimiter). It's then up
to the userland parser (zpool history)
On May 3, 2006, at 13:33, Donald Lee wrote:
I have a bunch of SPARC servers that have 2 disks apiece. I was
wondering if there was a way to bunch them all together in one zpool?
I couldn't find any information on this...
One thing we've chatted about to address this issue (but not tested,
pa
On Wed, May 03, 2006 at 02:47:57PM -0700, eric kustarz wrote:
> Jason Schroeder wrote:
>
> >eric kustarz wrote:
> >
> >>The following case is about to go to PSARC. Comments are welcome.
> >>
> >>eric
> >>
> >To piggyback on earlier comments re: adding hostname and user:
> >
> >What is the need fo
Nicolas Williams wrote:
On Wed, May 03, 2006 at 01:58:10PM -0400, Bill Sommerfeld wrote:
On Wed, 2006-05-03 at 13:40, Nicolas Williams wrote:
On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
5) I assume that new zfs and zpool subcommands will need to specify
whe
Jason Schroeder wrote:
eric kustarz wrote:
The following case is about to go to PSARC. Comments are welcome.
eric
To piggyback on earlier comments re: adding hostname and user:
What is the need for zpool history to distinguish zfs commands that
were executed by privileged users in non-g
On Wed, May 03, 2006 at 03:22:53PM -0400, Maury Markowitz wrote:
> >> I think that's the disconnect. WHY are they "full-fledged files"?
> >
> >Because that's what the specification calls for.
>
> Right, but that's my concern. To me this sounds like "historically
> circular" reasoning...
> 20xx)
Hi,
I just got an Ultra 20 with the default 80GB internal disk. Right now,
I'm using around 30GB for zfs. I will be getting a new 250GB drive.
Question: If I create a 30GB slice on the 250GB drive, will that be okay
to use as a mirror (or raidz) of the current 30GB that I now have on the
30G
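For what it's worth, the attach would look roughly like this (pool and device
names below are placeholders for whatever you actually have):
# after labeling the 250GB drive so that slice 0 is ~30GB (via format/partition):
zpool attach mypool c0t0d0s7 c1t0d0s0   # turns the single disk into a two-way mirror
zpool status mypool                     # watch the resilver finish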
Folks -
Given the response to my previous mail, we've created
'[EMAIL PROTECTED]' and the corresponding Jive discussion forum:
http://www.opensolaris.org/jive/forum.jspa?forumID=131
This forum should be used for detailed discussion of ZFS code,
implementation details, codereview requests, portin
eric kustarz wrote:
The following case is about to go to PSARC. Comments are welcome.
eric
To piggyback on earlier comments re: adding hostname and user:
What is the need for zpool history to distinguish zfs commands that were
executed by privileged users in non-global zones for those dat
On Wed, 3 May 2006, Donald Lee wrote:
> I have a bunch of SPARC servers that have 2 disks apiece. I was
> wondering if there was a way to bunch them all together in one zpool? I
> couldn't find any information on this...
I don't think you can do this. My understanding is that all ZFS devices
mu
I have a bunch of SPARC servers that have 2 disks apiece. I was wondering if
there was a way to bunch them all together in one zpool? I couldn't find any
information on this...
On Wed, May 03, 2006 at 01:58:10PM -0400, Bill Sommerfeld wrote:
> On Wed, 2006-05-03 at 13:40, Nicolas Williams wrote:
> > On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
> > > 5) I assume that new zfs and zpool subcommands will need to specify
> > > whether or not they create a
On Wed, May 03, 2006 at 11:58:43AM -0700, eric kustarz wrote:
> >Why not use a terse XML format? It could be extended later as needed
> >without affecting tools and can easily accommodate the argv[] array.
>
> I didn't see a need for it. Do you have a specific example where it
> works but what i
On Wed, 2006-05-03 at 14:10, eric kustarz wrote:
> Hmm, I'm just taking the argv array, joining its entries into one string,
> and passing that down. So that's potentially going to change the
> whitespace. Why does that matter?
potential ambiguity if arguments can contain whitespace.
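A contrived illustration of that problem (names made up):
argv as executed (the mountpoint value contains a space):
    { "zfs", "set", "mountpoint=/export/a b", "tank/fs" }
joined with single spaces, the logged string becomes:
    zfs set mountpoint=/export/a b tank/fs
and a parser can no longer tell whether "b" belonged to the mountpoint or
was a separate argument.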
Richard Elling wrote:
On Wed, 2006-05-03 at 11:10 -0700, eric kustarz wrote:
2) structured or un-structured log information? there are at least two
fields now, and potentially more later (if you add the hostname,
username, etc.). If it's unstructured, why? if it's structured, how do
I get
On Wed, May 03, 2006 at 07:39:30AM -0700, Tom Smith wrote:
> > Casper's point about 64-bit's is very important. ZFS needs to map
> > the cache into the kernel's address space, which is very limited in
> > the 32-bit world, thus the suggested requirement for 64-bit CPUs.
>
> I'm a newbie to ZFS. C
On Wed, 2006-05-03 at 11:10 -0700, eric kustarz wrote:
> > 2) structured or un-structured log information? there are at least two
> >fields now, and potentially more later (if you add the hostname,
> >username, etc.). If it's unstructured, why? if it's structured, how do
> >I get at individual f
Bill Sommerfeld wrote:
On Wed, 2006-05-03 at 13:40, Nicolas Williams wrote:
On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
5) I assume that new zfs and zpool subcommands will need to specify
whether or not they create a zpool history entry; is there a general
rule her
Bill Sommerfeld wrote:
On Wed, 2006-05-03 at 11:52, eric kustarz wrote:
The following case is about to go to PSARC. Comments are welcome.
and just so folks outside Sun can see what a fast-track review looks
like, I'll send my comments here rather than wait for it to show up at
PSARC.
On Wed, 2006-05-03 at 13:40, Nicolas Williams wrote:
> On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
> > 5) I assume that new zfs and zpool subcommands will need to specify
> > whether or not they create a zpool history entry; is there a general
> > rule here going forward? Wor
On Wed, 2006-05-03 at 12:40 -0500, Nicolas Williams wrote:
> On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
> > 5) I assume that new zfs and zpool subcommands will need to specify
> > whether or not they create a zpool history entry; is there a general
> > rule here going forward
On Wed, May 03, 2006 at 01:32:27PM -0400, Bill Sommerfeld wrote:
> 5) I assume that new zfs and zpool subcommands will need to specify
> whether or not they create a zpool history entry; is there a general
> rule here going forward? Working backwards from your list of "logged"
> vs "not logged",
On Wed, 2006-05-03 at 11:52, eric kustarz wrote:
> The following case is about to go to PSARC. Comments are welcome.
and just so folks outside Sun can see what a fast-track review looks
like, I'll send my comments here rather than wait for it to show up at
PSARC.
(I don't think absolutely everyt
On Wed, May 03, 2006 at 10:14:51AM -0700, eric kustarz wrote:
> Ed Gould wrote:
> >On May 3, 2006, at 8:52, eric kustarz wrote:
> >
> >>In the future, we are looking to add additional data to log and
> >>retrieve, such
> >>as hostname (for shared storage) or username (for delegated
> >>administra
Ed Gould wrote:
On May 3, 2006, at 8:52, eric kustarz wrote:
In the future, we are looking to add additional data to log and
retrieve, such
as hostname (for shared storage) or username (for delegated
administration).
Why not include these in the initial implementation? They both strike
m
On Thu, May 04, 2006 at 12:54:10AM +0800, Jeremy Teo wrote:
>
> Quick question: it says on the current page:
> "It has the convenient attribute that the kernel-specific code has
> already been factored out as zfs_context.c, and should be relatively
> simple to port."
> Minor question: It says on
Hello Eric,
On 5/3/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
Folks -
Several people have vocalized interest in porting ZFS to operating
systems other than solaris. While our 'mentoring' bandwidth may be
small, I am hoping to create a common forum where people could share
their experiences
On May 3, 2006, at 8:52, eric kustarz wrote:
In the future, we are looking to add additional data to log and
retrieve, such
as hostname (for shared storage) or username (for delegated
administration).
Why not include these in the initial implementation? They both strike
me as important and v
The following case is about to go to PSARC. Comments are welcome.
eric
(I'll be going out of town tonight, so apologies if I can't respond
immediately to feedback)
A. DESCRIPTION
Add the capability for ZFS to log commands to disk (persistently). Only
successful commands are logged. At first,
Bill -- thanks for the write up!
Note that we're working on a best-practices document about how to
migrate file systems/volumes to ZFS. It didn't make it for the launch
yesterday, though.
Bev.
Bill Sommerfeld wrote:
This doesn't really answer your question directly but could probably
help a
> Casper's point about 64-bit's is very important. ZFS needs to map the
> cache into the kernel's address space, which is very limited in the
> 32-bit world, thus the suggested requirement for 64-bit CPUs.
I'm a newbie to ZFS. Can someone explain this point a bit deeper? If I try to
run
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=========================
zfs-discuss 04/16 - 04/30
=========================
Threads or announcements origin
grant beattie wrote:
On Wed, May 03, 2006 at 04:03:18PM +1000, James C. McPherson wrote:
Does there exist (or will there exist) any method or tool to migrate UFS/SVM
filesystems with soft partitions to ZFS filesystems with pools?
Any ideas for migrating an installed base: Solaris 10 UFS/Solaris Volume
Man
This doesn't really answer your question directly but could probably
help anyone planning a UFS->ZFS migration...
I conducted a UFS/SVM -> ZFS migration over the weekend for a file/build server
used by about 30 developers which had roughly 180GB allocated in a
~400GB UFS/SVM partition. The server is a v40z
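For anyone planning something similar, the general shape of such a move is
roughly as follows (a sketch only; the pool, metadevice, and path names below
are invented, not what was actually used):
# create the new pool and filesystem on the spare disks
zpool create builds c3t0d0 c3t1d0
zfs create builds/ws
# quiesce the old filesystem, then copy it from the SVM metadevice
ufsdump 0f - /dev/md/rdsk/d30 | (cd /builds/ws && ufsrestore rf -)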
Hi,
On Tuesday 02 May 2006 22:41, Eric Schrock wrote:
> Folks -
>
> Several people have vocalized interest in porting ZFS to operating
> systems other than solaris. While our 'mentoring' bandwidth may be
> small, I am hoping to create a common forum where people could share
> their experiences an
Roch Bourbonnais - Performance Engineering wrote:
Reported freemem will be lower when running with ZFS than
say UFS. The UFS page cache is considered as freemem. ZFS
will return its 'cache' only when memory is needed. So you
will operate with lower freemem but won't actually suffer
fr
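If you want to see how much memory that cache is actually holding, the ARC
exposes kstats (statistic names from memory, so treat them as approximate):
kstat -p zfs:0:arcstats:size   # current ARC size, in bytes
kstat -p zfs:0:arcstats:c      # current ARC target size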
On Wed, May 03, 2006 at 04:03:18PM +1000, James C. McPherson wrote:
> >Does there exist (or will there exist) any method or tool to migrate UFS/SVM
> >filesystems with soft partitions to ZFS filesystems with pools?
> >Any ideas for migrating an installed base: Solaris 10 UFS/Solaris Volume
> >Manager to Solar