Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and reinstalled with
different system builds.
For some builds I have a finish script that creates a zpool using the utility
found in the Solaris 10 update 3 miniroot.
I
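A minimal sketch of the kind of finish script being described; the pool name, disk device and the path to the zpool binary in the miniroot are assumptions, not taken from the original post:

  #!/bin/sh
  # JumpStart finish script fragment: build a pool with the zpool utility
  # shipped in the Solaris 10 U3 miniroot. The freshly installed system is
  # mounted at /a during the finish phase, so use it as the altroot.
  /usr/sbin/zpool create -f -R /a tank c1t1d0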
> > No, you will not be able to change the number of disks in a raid-z set
> > (I think that answers questions 1-4). There is no plan to implement
> > this feature.
>
> Am I interpreting this correctly that there are no plans to allow
> expansion of raid-z vdevs? This is one feature tha
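For what it's worth, the usual workaround is to grow the pool by adding a second raid-z vdev rather than widening the existing one; a sketch with made-up device names:

  # you cannot add a disk to an existing raid-z vdev, but you can add
  # another raid-z vdev to the same pool and ZFS will stripe across both
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0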
[EMAIL PROTECTED] wrote on 05/10/2007 02:19:17 PM:
> I have a scenario where I have several ORACLE databases. I'm trying to
> keep system downtime to a minimum for business reasons. I've created
> zpools on three devices, an internal 148 GB drive (data) and two
> partitions on an HP SAN.
mike wrote:
> thanks for the reply.
>
> On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
>
>> Suggestion - try two 4-way raidz pools.
>
> wouldn't that bring usable space down to 2 pairs of 3x750?
>
> can those be combined into a single filesystem (for a total of 6x750
> usable, but underlying woul
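To the "can those be combined" question: two raid-z vdevs can live in a single pool, so all the usable space (roughly 2 x 3 x 750GB here) shows up in one place. A sketch with invented device names:

  # one pool built from two 4-disk raid-z vdevs; data is striped across them
  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0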
mike wrote:
This is exactly the kind of feedback I was hoping for.
I'm wondering if some people consider FireWire to be better under OpenSolaris?
I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors
thanks for the reply.
On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
My personal opinion is that USB is not robust enough under (Open)Solaris
to provide the reliability that someone considering ZFS is looking for.
I base this on experience with two 7-port powered USB hubs, each with 4 *
2Gb K
On 5/8/07, Mario Goebbels <[EMAIL PROTECTED]> wrote:
While trying some things earlier in figuring out how zpool iostat is
supposed to be interpreted, I noticed that ZFS behaves kind of weird when
writing data. Not to say that it's bad, just interesting. I wrote 160MB of
zeroed data with dd. I ha
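Something along these lines reproduces the test being described (the file name and sampling interval are my own choices, not from the original mail):

  # write 160MB of zeroes into a filesystem on the pool...
  dd if=/dev/zero of=/tank/testfile bs=1024k count=160
  # ...while watching the pool from another terminal, sampling once a second
  zpool iostat tank 1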
Mark J Musante [EMAIL PROTECTED] wrote:
> Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
> the way to mount a snapshot. Creating a clone uses up a negligible amount
> of disk space, provided you never write to it. And you can always set
> readonly=on if that's a concern.
So
On Thu, 10 May 2007, Bruce Shaw wrote:
>
> I don't have enough disk to do clones and I haven't figured out how to
> mount snapshots directly.
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot. Creating a clone uses up a negligible amount
of disk
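A quick sketch of the clone-and-mount approach being described (dataset and snapshot names are examples only):

  # a clone turns the snapshot into a dataset that can be mounted like any other
  zfs snapshot tank/data@before
  zfs clone tank/data@before tank/data-before
  # keep the clone read-only so it never diverges from the snapshot
  zfs set readonly=on tank/data-before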
I have a scenario where I have several ORACLE databases. I'm trying to
keep system downtime to a minimum for business reasons. I've created
zpools on three devices, an internal 148 GB drive (data) and two
partitions on an HP SAN. HP won't do JBOD so I'm stuck with relying
upon HP to give me a cl
On Wed, 9 May 2007, Anantha N. Srirama wrote:
> However, the poor performance of the destroy is still valid. It is quite
> possible that we might create another clone for reasons beyond my
> original reason.
There are a few open bugs against destroy. It sounds like you may be
running into 650962
On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
> Btw: In one experiment I tried to boot the kernel under kmdb
> control (-kd), patched "minclsyspri := 61" and used a
> breakpoint inside spa_active() to patch the spa_zio_* taskq
> to use prio 60 when importing the gzip compressed pool
> (so
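Very roughly, the kind of kmdb session being described looks like the sketch below; the symbol name and value come straight from the quote and are not verified, the boot syntax shown is the SPARC OBP form, and the write size must match the variable's actual type, so treat this purely as an illustration of the mechanics:

  ok boot -kd                     (boot with kmdb active; stop before startup)
  [0]> minclsyspri/W 0t61         (patch minclsyspri to decimal 61)
  [0]> ::bp spa_active            (breakpoint on the function named above)
  [0]> :c                         (continue booting)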
> > Side note: Is this right? "ditto" blocks are extra parity blocks
> > stored on the same disk (won't prevent total disk failures, but could
> > provide data recovery if enough parity is available)
>
> Yes. See Richard Elling's excellent blog titled "ZFS, copies, and data
> protection", where o
Bart wrote:
> Adam Leventhal wrote:
> > On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
> >> Can you give some more info on what these problems are.
> >
> > I was thinking of this bug:
> >
> > 6460622 zio_nowait() doesn't live up to its name
> >
> > Which I was surprised to find
On 10 May, 2007 - Bakul Shah sent me these 3,2K bytes:
> [1] Top down resilvering seems very much like a copying
> garbage collector. That similarity makes me wonder if the
> physical layout can be rearranged in some way for a more
> efficient access to data -- the idea is to resilver and
> compac
> > It seems to me that once you copy meta data, you can indeed
> > copy all live data sequentially.
>
> I don't see this, given the top down strategy. For instance, if I
> understand the transactional update process, you can't commit the
> metadata until the data is in place.
>
> Can you expla
I have the same problem: the users can't remove their files when the quota is
reached.
The workaround is to raise the quota, remove the files, and then set the
original quota again, so you can keep your snapshots.
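In command form, the workaround reads roughly like this (dataset name and quota sizes are placeholders):

  # temporarily raise the quota so the deletes can proceed...
  zfs set quota=12G tank/home/user1
  # ...have the user remove the offending files, then restore the old quota
  zfs set quota=10G tank/home/user1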
Andreas Koppenhoefer wrote:
which one is the most performant: copies=2 or zfs-mirror?
Good question; I hope to have some data soon. From a back-of-the-napkin
analysis, for the 2-disk case it will be very similar. However, copies
offers more possibilities than just 2 disks, so there is more i
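For clarity, the two configurations being compared are roughly these (device names invented; the two commands are alternatives, not meant to be run together):

  # a) a classic 2-disk ZFS mirror
  zpool create tank mirror c1t0d0 c1t1d0
  # b) a plain striped pool where ZFS itself keeps two copies of each block
  zpool create tank c1t0d0 c1t1d0
  zfs set copies=2 tank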
We have around 1000 users all with quotas set on their ZFS filesystems on
Solaris 10 U3. We take snapshots daily and rotate out the week old ones. The
situation is that some users ignore the advice of keeping space used below 80%
and keep creating large temporary files. They then try to remov
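A minimal sketch of that kind of rotation, run once a day from cron; the pool/dataset layout and snapshot naming are assumptions, and the SMF-based method mentioned elsewhere in this thread is more robust:

  #!/bin/sh
  # snapshot every home dataset with today's date and drop the week-old one
  today=`date +%Y%m%d`
  old=`perl -e '@t=localtime(time-7*86400); printf "%04d%02d%02d", $t[5]+1900, $t[4]+1, $t[3]'`
  for fs in `zfs list -H -o name -t filesystem | grep '^tank/home/'`; do
      zfs snapshot $fs@$today
      zfs destroy $fs@$old 2>/dev/null
  done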
On Thu, 10 May 2007, mike wrote:
> The host for this is up in the air. I'd hope I could use a Shuttle XPC.
>
> It's an 8 drive USB enclosure. The total bandwidth to all 8 drives
> would be 480Mbps, which is fine for me. I was hoping to do a RAID-Z or
> RAID-Z2. I would have it export the drives as
That page says that b62 (which I have installed, with the ZFS-root bits)
doesn't support the recursive '-r'???
So it looks like I have to learn something else first. So how do I
upgrade to b63 without corrupting the existing root ZFS mirroring bits?
Thanks,
Malachi
On 5/10/07, Dick Davies <
Oh god, I found it. So freakin' bizarre. I'm now pushing 27MB/s average, instead
of a meager 1.6MB/s. That's more like it.
This is what happened:
Back in the day when I bought my first SATA drive, incidentally a WD Raptor, I
wanted Windows to boot off it, including bootloader placement on it and
> Lots of small files perhaps? What kind of protection
> have you used?
No protection, and as many small files as a full distro install has, plus some
more source code for some libs. It's just 28GB that needs to be resilvered, yet
it takes like 6 hours at this abysmal speed.
At first I thought i
To clarify further: the EMC note "EMC Host Connectivity Guide for Solaris"
indicates that ZFS is supported from 11/06 (aka Update 3) onwards. However,
they sneak in a cautionary disclaimer that snapshot and clone features are
supported by Sun. If one reads it carefully it appears that they do supp
The host for this is up in the air. I'd hope I could use a Shuttle XPC.
It's an 8 drive USB enclosure. The total bandwidth to all 8 drives
would be 480Mbps, which is fine for me. I was hoping to do a RAID-Z or
RAID-Z2. I would have it export the drives as JBOD.
http://fwdepot.com/thestore/produc
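If the enclosure really does hand the drives over as plain JBOD devices, the pool creation itself is the easy part; a sketch with invented device names (whether USB bandwidth and reliability hold up is the real question raised in the replies):

  # eight USB disks in one double-parity raid-z2 vdev: any two can fail,
  # and roughly six disks' worth of space is usable
  zpool create tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 \
                           c5t4d0 c5t5d0 c5t6d0 c5t7d0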
As far as I can see, there are no real errors:
-bash-3.00# zpool status database
  pool: database
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        database    ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0
        c
> What does "zpool status database" say?
Hello,
As far as I can see, there are no real errors:
-bash-3.00# zpool status database
  pool: database
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        database    ONLINE       0     0     0
What does "zpool status database" say?
Simple test - mkfile 8gb now and see where the data goes... :)
Unless you've got compression=on, in which case you won't see anything!
cheers,
--justin
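Spelled out, the suggested test looks like this (pool and file names assumed); note that an all-zero file compresses to almost nothing when compression=on, hence the caveat:

  # create an 8GB file of zeroes on the pool...
  mkfile 8g /tank/testfile
  # ...and watch which disks the writes actually land on
  zpool iostat -v tank 1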
Hello Victor,
Thursday, May 10, 2007, 11:26:35 AM, you wrote:
VL> Robert Milkowski wrote:
>> Hello Leon,
>>
>> Thursday, May 10, 2007, 10:43:27 AM, you wrote:
>>
>> LM> Hello,
>>
>> LM> I've got some weird problem: ZFS does not seem to be utilizing
>> LM> all disks in my pool properly. For som
> I'm not sure, but I suspect this may be somehow related to metadata
> allocation, given that ZFS stores two copies of filesystem metadata.
> But this is nothing more than a wild guess.
>
> Leon, what kind of data is stored in this pool? What Solaris version are
> you using? How is yo
Simple test - mkfile 8gb now and see where the data goes... :)
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some reason, it's only using 2 of the 3
disks in my pool:
LM>               capacity o
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some reason, it's only using 2 of the 3
disks in my pool:
LM>               capacity     operations    bandwidth
LM>
Hello,
I've got some weird problem: ZFS does not seem to be utilizing all disks in my
pool properly. For some reason, it's only using 2 of the 3 disks in my pool:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
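The per-vdev breakdown is usually the quickest way to see which disks are and are not taking writes; something like the following, with the pool name assumed:

  # -v shows capacity and read/write activity for every vdev and disk
  # in the pool, sampled every 5 seconds
  zpool iostat -v mypool 5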
> which one is the most performant: copies=2 or zfs-mirror?
What type of copies are you talking about?
Mirrored data in the underlying storage subsystem, or the (new) feature in ZFS?
- Andreas
Hi Malachi
Tim's SMF bits work well (and also support remote backups via send/recv).
I use something like the process laid out at the bottom of:
http://blogs.sun.com/mmusante/entry/rolling_snapshots_made_easy
because it's dirt-simple and easily understandable.
On 10/05/07, Malachi de Ælfw