> The B-trees I'm used to divide in arbitrary
> places across the whole
> key, so doing partial-key queries is painful.
While the B-trees in DEC's Record Management Services (RMS) allowed
multi-segment keys, they treated the entire key as a byte string as far as
prefix searches went (i.e.,
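The trade-off being described can be sketched in a few lines of Python. The keys and segment layout below are hypothetical (not from RMS): when a multi-segment key is stored as one sorted byte string, a leading-segment prefix maps to a contiguous range found by binary search, while a query on a non-leading segment degenerates to a full scan.

```python
import bisect

# Hypothetical composite keys (segment1 "|" segment2), each stored as one
# byte string -- the way a byte-string-keyed B-tree sees them.
keys = sorted([b"eng|alice", b"eng|bob", b"hr|carol", b"hr|dave", b"ops|erin"])

def leading_prefix_range(keys, prefix):
    """Cheap: a leading-segment prefix maps to one contiguous key range."""
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_left(keys, prefix + b"\xff")
    return keys[lo:hi]

def trailing_segment_scan(keys, segment):
    """Painful: a non-leading segment forces a scan of every key."""
    return [k for k in keys if k.split(b"|")[1] == segment]

print(leading_prefix_range(keys, b"eng|"))   # found by binary search
print(trailing_segment_scan(keys, b"bob"))   # found only by full scan
```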
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
> On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
> >
> > [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
> >
> > > On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
> > > > Resilver and scrub are
grew tired of the recycled 32-bit CPUs in
http://www.opensolaris.org/jive/thread.jspa?messageID=127555
and bought this to put the two marvell88sx cards in:
$255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
http://www.supermicro.com/manuals/motherboard/3210/MNL-0970.p
Ian Collins wrote:
> James C. McPherson wrote:
>> Ian Collins wrote:
>> ...
>>> I don't know if anything else breaks when you do this, but if you are
>>> building software in a zone on a lofs filesystem, dmake hangs. Regular
>>> make works fine.
>>>
>>> The output from truss is:
>>>
>>> stat64("/e
James C. McPherson wrote:
> Ian Collins wrote:
> ...
>> I don't know if anything else breaks when you do this, but if you are
>> building software in a zone on a lofs filesystem, dmake hangs. Regular
>> make works fine.
>>
>> The output from truss is:
>>
>> stat64("/export/home", 0x08045B60) = 0
>
Ian Collins wrote:
...
> I don't know if anything else breaks when you do this, but if you are
> building software in a zone on a lofs filesystem, dmake hangs. Regular
> make works fine.
>
> The output from truss is:
>
> stat64("/export/home", 0x08045B60) = 0
> llseek(8, 0, SEEK_CUR) = 0
> llsee
James C. McPherson wrote:
> Anil Jangity wrote:
>
>> I have a pool called "data".
>>
>> I have zones configured in that pool. The zonepath is: /data/zone1/fs.
>> (/data/zone1 itself is not used for anything else, by anyone, and has no
>> other data.) There are no datasets being delegated to this z
Thanks James/John!
That link specifically mentions "new Solaris 10 release", so I am assuming that
means going from, say, u4 to Solaris 10 u5, and that shouldn't cause a problem when
doing plain patchadd's (without live upgrade). If so, then I am fine with those
warnings and can use zfs with zones' path
Anil Jangity wrote:
> I have a pool called "data".
>
> I have zones configured in that pool. The zonepath is: /data/zone1/fs.
> (/data/zone1 itself is not used for anything else, by anyone, and has no
> other data.) There are no datasets being delegated to this zone.
>
> I want to create a snapsh
Hi Bill,
Yes, that covers all of my selfish questions, thanks.
The B-trees I'm used to divide in arbitrary places across the whole
key, so doing partial-key queries is painful.
I can't find anything useful in Google for "Structured File System" "Transarc". Do
you have a link handy? If not, never m
Anil Jangity wrote:
> I have a pool called "data".
>
> I have zones configured in that pool. The zonepath is: /data/zone1/fs.
> (/data/zone1 itself is not used for anything else, by anyone, and has no
> other data.) There are no datasets being delegated to this zone.
>
> I want to create a snapshot
I'm going to combine three posts here because they all involve jcone:
First, as to my message heading:
The 'search forum' mechanism can't find his posts under the 'jcone' name (I was
curious, because they're interesting/strange, depending on how one looks at
them). I've also noticed (once in h
I have a pool called "data".
I have zones configured in that pool. The zonepath is: /data/zone1/fs.
(/data/zone1 itself is not used for anything else, by anyone, and has no other
data.) There are no datasets being delegated to this zone.
I want to create a snapshot that I would want to make avail
None of the below are available or planned in ZFS.
In fact, I'm not aware of those services in any of Sun's filesystems.
What's the interface for them? Is there a standard or proposed
standard? Also what's the purpose? Maybe the same can be
achieved in other ways.
Neil.
James Cone wrote:
> Hello
Hello All,
Is any of the following available in ZFS, or is there any plan to add it?
- persistent atomic-inc/atomic-dec of a group of bytes in a file
- LL/SC or compare-and-swap of a group of bytes in a file, or of a whole
file
- multiple renames, where:
- all or none of them hap
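For the single-file case, POSIX rename(2) is already atomic, and applications commonly build an all-or-nothing whole-file update on top of it. Here is a minimal Python sketch of that pattern (file name and contents are placeholders); note it does not give the multi-rename, all-or-none semantics asked about above, which POSIX does not provide.

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` all-or-nothing via write-temp-then-rename.
    Readers see either the old contents or the new, never a mix."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # make the new contents durable
        os.replace(tmp, path)          # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("counter.txt", b"42")
print(open("counter.txt", "rb").read())
```

A transaction spanning several such renames would need filesystem support of exactly the kind the original poster is asking for.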
John,
> I'm working on a Sun Ultra 80 M2 workstation. It has eight 750 GB
> SATA disks installed. I've tried the following on both ON build 72,
> Solaris 10 update 4, and Indiana with the same results.
>
> If I create a ZFS filesystem using 1-7 hard drives (I've tried 1
> and 7), and then tr
Has this bug fix still not been integrated? To upgrade to a community release, do I
still have to patch and compile the kernel? How can this fix get integrated into
the code?
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@ope
OK, I'll bite; it's not like I'm getting an answer to my other question.
Bill, please explain why deciding what to do about sequential-scan
performance in ZFS is urgent.
I.e., why is it urgent rather than important (I agree that if it's bad
then it's going to be important eventually)?
I.e., why
Here is one issue I am running into when setting up a new NFS server to share
several zfs file systems.
I created the following zfs file systems from a zfs pool called bigpool. bigpool
is the top-level file system and is mounted as /export/bigpool.
file system    mount point
bigpool        /export/bigpool
...
> This needs to be proven with a reproducible,
> real-world workload before it
> makes sense to try to solve it. After all, if we
> cannot measure where
> we are,
> how can we prove that we've improved?
Ah - Tests & Measurements types: you've just gotta love 'em.
Wife: "Darling, is there
Hello All-
I'm working on a Sun Ultra 80 M2 workstation. It has eight 750 GB SATA disks
installed. I've tried the following on both ON build 72, Solaris 10 update 4,
and Indiana with the same results.
If I create a ZFS filesystem using 1-7 hard drives (I've tried 1 and 7), and
then try to make
BillTodd wrote:
> In order to be reasonably representative of a real-world
> situation, I'd suggest the following additions:
>
Your suggestions (make the benchmark big enough so seek times are really
noticed) are good. I'm hoping that over the holidays, I'll get to play
with an extra server...
On Nov 21, 2007 10:37 AM, Lion, Oren-P64304 <[EMAIL PROTECTED]> wrote:
>
> I recently tweaked Oracle (8K blocks, log_buffer > 2M) on a Solaris
Oracle here is set up with 16K blocks and a 2G log buffer.
I am using a testpool with a raid0 of 6 10K RPM FC disks (2 from each of
3 trays).
I played with 16K and 32
I'm guessing that if you could offline the pool you'd still see it listed in
zpool status. Other than that I can't think of a reason.
- Original Message -
From: "Will Murnane" <[EMAIL PROTECTED]>
To: "Ben" <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Sent: Wednesday, November 21,
On Nov 21, 2007 10:09 AM, Ben <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I would like to offline an entire storage pool (not some devices),
> ( I want to stop all io activity to the pool)
How is this different from 'zpool export tank'?
Will
Hi,
I would like to offline an entire storage pool (not some devices),
( I want to stop all io activity to the pool)
Maybe it could be implemented with a command like:
zpool offline -f tank
which should implicitly do a zfs unmount tank
I use zfs with solaris 10 update 4.
Thanks,
Ben
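As Will's reply notes, `zpool export` already quiesces a pool. A sketch of that workflow, assuming a pool named tank (command behavior not verified against any particular Solaris build):

```shell
# Stop all I/O to the pool: unmounts its filesystems and marks the pool exported.
zpool export tank

# The pool no longer appears in `zpool status` until it is imported again.
zpool status tank

# Resume use of the pool later.
zpool import tank
```

The main difference from the proposed `zpool offline -f` is that an exported pool disappears from `zpool status` entirely rather than being listed in an offline state.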
In order to be reasonably representative of a real-world situation, I'd suggest
the following additions:
> 1) create a large file (bigger than main memory) on
> an empty ZFS pool.
1a. The pool should include entire disks, not small partitions (else seeks
will be artificially short).
1b. The
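A scaled-down Python sketch of the sequential write/read benchmark being outlined (path, file size, and block size are placeholders; a representative run would use a file bigger than main memory on a pool built from whole disks, per the additions above):

```python
import os
import time

def sequential_benchmark(path, total_bytes, block=128 * 1024):
    """Write `total_bytes` sequentially, then read them back, timing each pass."""
    buf = os.urandom(block)
    t0 = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block
        f.flush()
        os.fsync(f.fileno())           # include the flush in the write timing
    write_s = time.time() - t0

    t0 = time.time()
    read = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            read += len(chunk)
    read_s = time.time() - t0
    return written, read, write_s, read_s

# Tiny demo size; the discussion above calls for a file bigger than main memory.
w, r, ws, rs = sequential_benchmark("bench.dat", 4 * 1024 * 1024)
print(f"wrote {w} bytes in {ws:.3f}s, read {r} bytes in {rs:.3f}s")
```

For the read pass to measure disks rather than cache, the file must exceed RAM (and ARC) size, which is the point of suggestion 1 above.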
Hi Dan,
Dan Pritts wrote:
> On Mon, Nov 19, 2007 at 11:10:32AM +0100, Paul Boven wrote:
>> Any suggestions on how to further investigate / fix this would be very
>> much welcomed. I'm trying to determine whether this is a zfs bug or one
>> with the Transtec raidbox, and whether to file a bug with
Moore, Joe writes:
> Louwtjie Burger wrote:
> > Richard Elling wrote:
> > >
> > > >- COW probably makes that conflict worse
> > > >
> > > >
> > >
> > > This needs to be proven with a reproducible, real-world
> > workload before it
> > > makes sense to try to solve it. After all, if