On Thu, 25 Feb 2010, Marion Hakanson wrote:
It's not easy to get them right, and usually the hardest task is in
figuring out what the users want, so we don't use them unless the users'
needs cannot be met using traditional Unix/POSIX permissions.
Yeah, I've had nothing but horror from ACLs
On Thu, 25 Feb 2010, Marion Hakanson wrote:
> It's not easy to get them right, and usually the hardest task is in
> figuring out what the users want, so we don't use them unless the users'
> needs cannot be met using traditional Unix/POSIX permissions.
We've got a web GUI that hides the complexity
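(For readers who haven't used them: the ACLs under discussion are the NFSv4-style ZFS ACLs managed with the Solaris chmod A+ syntax. A minimal illustration of the kind of entry involved - the user name and path here are invented, not from the thread:
# chmod A+user:webdev:read_data/write_data/execute:file_inherit/dir_inherit:allow /export/projects
# ls -dv /export/projects
The verbose form above is what makes "getting them right" fiddly compared to plain mode bits.)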
So just to verify, based on what you said and some searching, are the
following the commands I would use?
# zpool create newpool mirror c8d0 c9d0
# zfs create newpool/VM
# zfs snapshot files/VM@beforemigration
# zfs send files/VM@beforemigration | zfs receive newpool/VM
# zfs umount files/VM
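(The archived message is cut off here; the remaining steps, assuming the old mount point was /files/VM and the dataset names above, would presumably be along these lines:
# zfs set mountpoint=none files/VM
# zfs set mountpoint=/files/VM newpool/VM
with a zfs destroy -r files/VM at the very end, once the copy has been verified.)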
On Thu, 25 Feb 2010, Alastair Neil wrote:
I do not know and I don't think anyone would deploy a system in that way with
UFS. This is the model that is imposed in order to take full advantage of zfs
advanced features such as snapshots, encryption and compression, and I know
many universities in particular are eager to adopt it for just that reason.
I do not know and I don't think anyone would deploy a system in that way
with UFS. This is the model that is imposed in order to take full advantage
of zfs advanced features such as snapshots, encryption and compression and I
know many universities in particular are eager to adopt it for just that reason.
On Thu, 25 Feb 2010, Shane Cox wrote:
I'm new to ZFS and looking for some assistance with a performance problem:
At the interval of zfs_txg_timeout (I'm using the default of 30), I observe
100-200ms pauses in my application. Based on my application log files, it
appears that the write() system call is blocked.
On Thu, 25 Feb 2010, Alastair Neil wrote:
I don't think I have seen this addressed in the follow-ups to your message. One
issue we have is with deploying large numbers of file systems per pool - not
necessarily large numbers of disks. There are major scaling issues with the
sharing of large numbers of file systems.
On 02/24/2010 11:42 PM, Robert Milkowski wrote:
> mi...@r600:~# ls -li /bin/bash
> 1713998 -r-xr-xr-x 1 root bin 799040 2009-10-30 00:41 /bin/bash
>
> mi...@r600:~# zdb -v rpool/ROOT/osol-916 1713998
> Dataset rpool/ROOT/osol-916 [ZPL], ID 302, cr_txg
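(As a general recipe - the path and dataset below are placeholders, not from the original message - the idea is to take the object number that ls -li reports in its first column and hand it to zdb together with the dataset that holds the file:
# ls -li /some/file
# zdb -v <pool>/<dataset> <object-number>
which is exactly the pattern shown above for /bin/bash and object 1713998.)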
hen...@acm.org said:
> I've been surveying various forums looking for other places using ZFS ACLs
> in production to compare notes and see how, if at all, they've handled some of
> the issues we've found deploying them.
>
> So far, I haven't found anybody using them in any substantial way, let alone
On Wed, Feb 24, 2010 at 10:57:08AM +, li...@di.cx wrote:
> 2 x SuperMicro AOC-SAT2-MV8 SATA controllers (so 16 ports in total,
> plus 6 on the motherboard)
What about case space for the disks?
> Disks: 3x40GB
rpool mirror and spare on shelf. 3 way mirror if you really want and have the
spa
I've been surveying various forums looking for other places using ZFS ACLs
in production to compare notes and see how, if at all, they've handled some
of the issues we've found deploying them.
So far, I haven't found anybody using them in any substantial way, let
alone trying to leverage them to a
I'm new to ZFS and looking for some assistance with a performance problem:
At the interval of zfs_txg_timeout (I'm using the default of 30), I observe
100-200ms pauses in my application. Based on my application log files, it
appears that the write() system call is blocked. Digging deeper into th
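(Not an answer from the thread, but for anyone wanting to experiment: on OpenSolaris-era kernels the txg sync interval is the zfs_txg_timeout tunable, and shortening it spreads writes out; the value 5 below is purely illustrative, not a recommendation from the poster.
# echo zfs_txg_timeout/D | mdb -k        shows the current value, in seconds
# echo zfs_txg_timeout/W0t5 | mdb -kw    changes it on the live kernel
or persistently in /etc/system:
set zfs:zfs_txg_timeout = 5
)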
Correct, if you upgraded this pool, you would not be able to import
it back on your existing Solaris 10 system.
My advice would be to wait.
Cindy
On 02/25/10 13:05, Ray Van Dolson wrote:
On Thu, Feb 25, 2010 at 11:55:35AM -0800, Cindy Swearingen wrote:
Ray,
Log removal integrated into build 125
d...@dd-b.net said:
> I know from startup log messages that I've got several interrupts being
> shared. I've been wondering how serious this is. I don't have any
> particular performance problems, but then again my cpu and motherboard are
> from 2006 and I'd like to extend their service life, so
+--
| On 2010-02-25 12:05:03, Ray Van Dolson wrote:
|
| Thanks Cindy. I need to stay on Solaris 10 for the time being, so I'm
| guessing I'd have to Live boot into an OpenSolaris build, fix my pool
| then hope it re-imports
On Thu, Feb 25, 2010 at 11:55:35AM -0800, Cindy Swearingen wrote:
> Ray,
>
> Log removal integrated into build 125, so yes, if you upgraded to at
> least OpenSolaris build 125 you could fix this problem. See the syntax
> below on my b133 system.
>
> In this particular case, importing the pool from b125 or later media
One other question - I'm seeing the same sort of behavior when I try to do
something like "zfs set sharenfs=off storage/fs" - is there a reason that
turning off NFS sharing should halt I/O?
Ray,
Log removal integrated into build 125, so yes, if you upgraded to at
least OpenSolaris build 125 you could fix this problem. See the syntax
below on my b133 system.
In this particular case, importing the pool from b125 or later media
and attempting to remove the log device could not fix this
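(The "syntax below" from that message is cut off in the archive; for reference, the general form of log-device removal on build 125 or later, with placeholder pool and device names, is:
# zpool remove tank c1t2d0
where c1t2d0 is the dedicated log device.)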
I don't think I have seen this addressed in the follow-ups to your message.
One issue we have is with deploying large numbers of file systems per pool
- not necessarily large numbers of disks. There are major scaling issues
with the sharing of large numbers of file systems; in my configuration I
Well, it doesn't seem like this is possible -- I was hoping there was
some "hacky" way to do it via zdb or something.
Sun support pointed me to a document[1] that leads me to believe this
might have worked on OpenSolaris. Anyone out there in Sun-land care to
comment?
To recap, I accidentally cre
On Wed, 24 Feb 2010, Gregory Gee wrote:
files
files/home
files/mail
files/VM
I want to move the files/VM to another zpool, but keep the same mount
point. What would be the right steps to create the new zpool, move the
data and mount in the same spot?
Create the new pool, take a snapshot of
On Thu, February 25, 2010 08:25, tomwaters wrote:
> So I rebooted and selected 111b and I no longer have the issue.
> Interestingly, the rpool is still in place..as it should be. So I have
> now set this 111b as my default BE ...and removed /dev from the update
> package list using ...
> $pfexec
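(The command is truncated in the archive. The "default BE" part is normally done with beadm; a generic example, with the BE name a guess rather than what tomwaters actually typed:
# beadm list
# pfexec beadm activate opensolaris-111b
The pkg command for dropping /dev from the update list is not recoverable from the preview.)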
On Thu, Feb 25, 2010 at 12:44 PM, Chad wrote:
> I'm looking to migrate a pool from using multiple smaller LUNs to one
> larger LUN. I don't see a way to do a zpool replace for multiple to one.
> Anybody know how to do this? It needs to be non disruptive.
>
As others have noted, it doesn't seem possible
>I'm looking to migrate a pool from using multiple smaller LUNs to one larger
>LUN.
>I don't see a way to do a zpool replace for multiple to one. Anybody know how
>to do this? It needs to be non disruptive.
Depends on the zpool's layout and the source of the old and the new files;
you can only
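(That reply is cut off above. The approach usually suggested for this - not necessarily what the poster was about to describe - is to build a new pool on the large LUN and replicate with incremental snapshots so that only the final cutover needs a brief quiesce. Roughly, with invented pool and device names:
# zpool create bigpool c5t0d0
# zfs snapshot -r oldpool@copy1
# zfs send -R oldpool@copy1 | zfs receive -Fd bigpool
  ... later, after quiescing writers ...
# zfs snapshot -r oldpool@copy2
# zfs send -R -i @copy1 oldpool@copy2 | zfs receive -Fd bigpool
then export and rename the pools. It is not truly non-disruptive, which is why the thread points at the missing device-removal feature.)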
You might have to force the import with -f.
Scott
On 25/02/2010 15:44, Chad wrote:
I'm looking to migrate a pool from using multiple smaller LUNs to one larger
LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know
how to do this? It needs to be non disruptive.
You can't do that just now, this needs device removal (ie
I'm looking to migrate a pool from using multiple smaller LUNs to one larger
LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know
how to do this? It needs to be non disruptive.
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni wrote:
On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto wrote:
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us. If I even mentioned
On 25 Feb 2010, at 14:28, Sean Sprague wrote:
> Bob,
>
>> On Tue, 23 Feb 2010, Joerg Schilling wrote:
>>>
>>> and what uname -s reports.
>>
>> It will surely report "OrkOS".
>
> For OpenSolaris, "OracOS" - surely there must be Blake's 7 fans in Oracle
> Corp.?
You can see all the working bits
On 25/02/2010 12:48, Robert Milkowski wrote:
On 17/02/2010 09:55, Robert Milkowski wrote:
On 16/02/2010 23:59, Christo Kutrovsky wrote:
On ZVOLs it appears the setting kicks in live. I've tested this by
turning it off/on and testing with iometer on an exported iSCSI
device (iscsitgtd not comstar).
Bob,
On Tue, 23 Feb 2010, Joerg Schilling wrote:
and what uname -s reports.
It will surely report "OrkOS".
For OpenSolaris, "OracOS" - surely there must be Blakes 7 fans in Oracle
Corp.?
I am glad to be able to contribute positively and constructively to
this discussion.
Me too ;-) ..
Just an update on this, I was seeing high CPU utilisation (100% on all 4 cores)
for ~10 seconds every 20 seconds when transferring files to the server using
Samba under 133.
So I rebooted and selected 111b and I no longer have the issue. Interestingly,
the rpool is still in place..as it should be.
On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto wrote:
> It's a kind gesture to say it'll continue to exist and all, but
> without commercial support from the manufacturer, it's relegated to
> hobbyist curiosity status for us. If I even mentioned using an
> unsupported operating system to the higher-ups
On Thu, Feb 25, 2010 at 6:59 AM, tomwaters wrote:
> Yes, I am glad that I learned this lesson now, rather than in 6 months when I
> have re-purposed the existing drives...makes me all the more committed to
> maintaining an up-to-date remote backup.
>
> The reality is that I cannot afford to mirror the 8TB in the zpool
On 17/02/2010 09:55, Robert Milkowski wrote:
On 16/02/2010 23:59, Christo Kutrovsky wrote:
On ZVOLs it appears the setting kicks in live. I've tested this by
turning it off/on and testing with iometer on an exported iSCSI
device (iscsitgtd not comstar).
I haven't looked at zvol's code handling
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us. If I even mentioned using an
unsupported operating system to the higher-ups here, it'd be considered
absurd. I like free stuff to fo
Hi James,
thanks for the reply. Given this, is there a way I can restrict the size of
the zvol?
So if I have a zvol of 10 GB on the CDOM, which is presented to the LDOM
as the disk, and then we create the UFS file system on that, but this grows
with time and we even see the situation where it grows
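(For what it's worth, this is general zvol behaviour rather than anything specific to the poster's setup: a zvol is created with a fixed volsize, and that is all the guest domain ever sees, so UFS inside the LDOM cannot grow past it; what grows on the CDOM side is snapshot and reservation usage. With placeholder names:
# zfs create -V 10g testpool/ldom1-disk0
# zfs get volsize,refreservation testpool/ldom1-disk0
# zfs set refreservation=10g testpool/ldom1-disk0
The refreservation line simply guarantees the space in the pool; the 10 GB limit itself comes from volsize.)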
Hi,
see my responses inline.
On Thu, Feb 25, 2010 at 2:05 AM, Milan Shah wrote:
> Hi,
>
> This is what we are trying to understand.
>
> Luns are presented to the CDOM and then we create the zpool on them.
>
> On top of the zpool zvol is created and then it is presented to the GDOM.
>
> zpool list
Yes, I am glad that I learned this lesson now, rather than in 6 months when I
have re-purposed the existing drives...makes me all the more committed to
maintaining an up-to-date remote backup.
The reality is that I can not afford to mirror the 8TB in the zpool, so I'll
balance the risk and just
tomwaters writes:
> I created a zfs file system, cloud/movies and shared it.
> I then filled it with movies and music.
> I then decided to rename it, so I used rename in GNOME to change
> the folder name to media...i.e. cloud/media. < MISTAKE
> I then noticed the zfs share was pointing to /
On Thu, Feb 25, 2010 at 8:56 AM, Michael Schuster wrote:
> perhaps this helps:
>
> http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/
Not really. It doesn't explain that the page in question was an
explanation of how the
OpenSolaris support model
perhaps this helps:
http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/
Michael
On 02/24/10 20:02, Troy Campbell wrote:
http://www.oracle.com/technology/community/sun-oracle-community-continuity.html
Half way down it says:
Will Oracle supp
Hi,
This is what we are trying to understand.
Luns are presented to the CDOM and then we create the zpool on them.
On top of the zpool zvol is created and then it is presented to the GDOM.
zpool list | grep testpool
testpool   12G   114K  12.0G    0%  ONLINE  -
zfs list | grep testpool
t