> "h" == Hua writes:
h> b. Create a 10G Solaris partition for system and an "other"
h> type partition of 990G for data.
h> So far the zpool on top of a fdisk partition seems to be working
h> fine. But I don't think this is the usual/normal way.
Yeah, fmthard/prtvtoc/format/fdisk ar
Hua wrote:
I am building a system on a small x86 box using Solaris 10 10/09. The
system disk is 1 TB. As Solaris itself only takes 6 GB, I plan to allocate
the rest to a zpool for data. I want to keep system and data as separate as
possible, therefore I tried:
a. Create a 10G Solaris partition for install sys
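For anyone trying the same layout, a minimal sketch of option (b) on x86
(device names are placeholders; the pN suffix addresses an fdisk partition
directly, which is exactly the unusual-but-working setup described above):
    # p1 = 10G Solaris partition for the OS, p2 = 990G "other" partition,
    # both carved out beforehand with fdisk(1M)/format(1M)
    zpool create datapool c0t0d0p2
    zfs create datapool/data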
On Wed, Nov 18, 2009 at 4:09 PM, Brent Jones wrote:
> On Tue, Nov 17, 2009 at 10:32 AM, Ed Plese wrote:
>> You can reclaim this space with the SDelete utility from Microsoft.
>> With the -c option it will zero any free space on the volume. For
>> example:
>>
>> C:\>sdelete -c C:
>>
>> I've teste
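For context, the reclaim works because runs of zeros shrink to almost
nothing once compression is on for the backing zvol; a sketch of the ZFS
side, with made-up pool and volume names:
    # enable compression on the zvol backing the Windows LUN, so the
    # zeros written by sdelete -c take up (almost) no space on disk
    zfs set compression=on tank/winlun
    # after running sdelete on the guest, see what was given back
    zfs get used,referenced tank/winlun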
We had an existing zfs storage pool comprised of 2 raidz2 devices (or arrays)
and we just added a 3rd raidz2 device of equal size to the first two. While
making use of the extra capacity is mindlessly simple, we also want to take
advantage of the performance benefits.
How do we spread the data
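ZFS only stripes new writes across the enlarged pool; existing blocks stay
on the vdevs they were first written to. The usual (if blunt) answer is to
rewrite the data, for example with send/receive inside the pool; a sketch
with placeholder dataset names:
    # rewriting the blocks lets the allocator spread them over all
    # three raidz2 vdevs
    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data.new
    # verify the copy, then swap the dataset names
    zfs rename tank/data tank/data.old
    zfs rename tank/data.new tank/data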
On 2009-Nov-18 08:40:41 -0800, Orvar Korvar
wrote:
>There is a new PSARC in b126(?) that allows rolling back to the latest
>functioning uber block. Maybe it can help you?
It's in b128 and the feedback I've received suggests it will work.
I've been trying to get the relevant ZFS bits for my b127 syst
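For reference, the b128 recovery support shows up as extra flags to zpool
import; a sketch (pool name is a placeholder):
    # -F rolls the pool back to the last importable txg/uberblock;
    # adding -n only reports whether the rollback would succeed
    zpool import -nF tank
    zpool import -F tank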
Thanks Richard,
I've set the refreservation down and this has "freed" up space. I'm now
setting up a process to monitor and update the refreservation attribute on
the zfs volumes used, so we can thin provision yet keep some storage (half
the remaining volume size) available to ensure that these volu
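A sketch of the kind of adjustment being described, with hypothetical
dataset names and sizes (the real monitoring script would compute the
value from usedbydataset and volsize):
    # see what the volume actually references vs. reserves
    zfs get -p usedbydataset,refreservation,volsize tank/vol1
    # thin provision: drop the reservation to used space plus headroom
    # (half the remaining volume size, per the policy above)
    zfs set refreservation=550G tank/vol1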
On Nov 19, 2009, at 10:28 AM, Dushyanth Harinath wrote:
Thanks a lot. This clears up many of the doubts I had.
I was actually trying to improve the performance of our email
storage. We are using dovecot as the LDA on a set of RHEL boxes and
the email volume seems to be saturating the write thr
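Before tuning anything, it is worth confirming where those writes land;
a stock way to look (pool name assumed):
    # per-vdev write bandwidth and IOPS, sampled every 5 seconds
    zpool iostat -v tank 5
zilstat (mentioned elsewhere in this list) will additionally show how much
of that traffic is synchronous ZIL activity that a separate log device
could absorb.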
Have you tried webmin? I think it lets you handle ZFS pools and such in a
simple manner.
On Nov 18, 2009, at 8:54 PM, Duncan Bradey wrote:
Richard,
Thanks for this; it explains why I am seeing this behaviour. I am using
snapshots as I am replicating the data to other servers (via zfs
send/receive). Is there another way to prevent this behaviour and
still use snapshots? Or do I need
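One common pattern (not the only answer) is to keep just the most recent
snapshot that the incremental replication depends on; a sketch with
made-up names:
    # an incremental send only needs the previous snapshot on both sides
    zfs snapshot tank/data@today
    zfs send -i tank/data@yesterday tank/data@today | \
        ssh backuphost zfs receive backup/data
    # once @today exists on both ends, the older snapshot can go,
    # releasing the space it was holding
    zfs destroy tank/data@yesterday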
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple OpenSolaris 06/09 and Solaris 10 servers.
Currently we are 'dumping' (incremental) backups to a backup server. I
wonder if anybody knows what happens when I send/recv a zfs volume from
version 15 to a (backup) system with version 14. I have the feeling it's
not very wi
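In general a receive fails cleanly if the stream carries a filesystem or
pool feature newer than the target code understands, so checking versions
on both ends first tells you whether it is safe:
    # what the sending pool/dataset are at
    zpool get version tank
    zfs get version tank/data
    # the highest versions the installed code on the receiver supports
    zpool upgrade -v
    zfs upgrade -v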
>How did your migration to ESXi go? Are you using it on the same hardware or
>did you just switch that server to an NFS server and run the VMs on another
>box?
The latter; we run these VMs over NFS anyway and had ESXi boxes under test
already. We were already separating "data" exports from "VM"
Hello Paul,
on Thursday, 19 November 2009 at 12:59, you wrote (among other things)
in mid:1183443158.191258632003651.javamail.tweb...@sf-app1:
> I have seen some mention of the SXCE version but apparently support
> for this finished last month?
The community wrote:
Note that CD media is no longer avail
You need Solaris for the zfs webconsole, not OpenSolaris.
Paul wrote:
Hi there, my first post (yay).
I have done much googling and everywhere I look I see people saying "just
browse to https://localhost:6789 and it is there". Well, it's not; I am
running 2009.06 (snv_111b), the current latest stable release, I believe.
This is my first major foray into the world o
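For readers on Solaris 10, where the Java Web Console does ship, it
usually just needs to be started and, if you want remote access, told to
listen beyond localhost (steps from memory, double-check against your
release):
    /usr/sbin/smcwebserver start
    /usr/sbin/smcwebserver enable
    # allow connections from other hosts, then restart
    svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
    svcadm refresh svc:/system/webconsole
    /usr/sbin/smcwebserver restart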
Hi,
IMHO, it would be useful to have something like:
zfs set userquota=5G tank/home
...
I think that would be a great feature.
Thanks. I just created CR 6902902 to track this. I hope it becomes viewable
soon here:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6902902
Cheers
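For contrast, what exists today is per-user and per-group quotas, set one
principal at a time; the proposal above would add a default that applies
when no explicit entry matches (user and pool names illustrative):
    zfs set userquota@alice=5G tank/home
    zfs set groupquota@staff=100G tank/home
    # report consumption against those quotas
    zfs userspace tank/home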
Constantin Gonzalez wrote:
Imagine a system that needs to handle thousands of users. Setting quotas
individually for all of these users would quickly become unwieldy, in a
similar manner to the unwieldiness that having a filesystem for each user
presented.
The main reason for not having a fi
Hi,
first of all, many thanks to those who made user/group quotas possible. This
is a huge improvement for many users of ZFS!
While presenting this new feature at the Munich OpenSolaris User Group
meeting yesterday, a question came up that I couldn't find an answer for:
Can you set a default u
Darren J Moffat wrote:
Meilicke wrote:
I second the use of zilstat - very useful, especially if you don't want to mess
around with adding a log device and then having to destroy the pool if you don't
want the log device any longer.
Log devices can be removed as of zpool version 19.
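That removal support (pool version 19 and later) makes experimenting with
a slog far less risky; a sketch with a placeholder device name:
    # attach a separate log device, measure, detach it again
    zpool add tank log c4t0d0
    zpool remove tank c4t0d0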