On 06/25/2012 05:20 PM, Martin Frost wrote:
> > My guess is that since all the pools are by default set to have failmode
> > set to "wait" on failure, it'll wait forever.
> >
> > Now, changing it to "continue" will instead return an error, but it could
> > lead to worse behavior in some cases.
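For reference, the failmode behaviour can be checked and changed per pool, e.g. (a minimal sketch; "tank" is a placeholder pool name):

zpool get failmode tank            # default is "wait": I/O blocks until the device returns
zpool set failmode=continue tank   # new writes get EIO instead of blocking; reads to healthy devices continue
zpool set failmode=panic tank      # panic the host, sometimes preferable in failover setups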
Hello anybody!
I've got a problem centrally sharing ZFS filesystems.
What I want is to create zfs filesystems for different user groups, and I
want to share them in a central location.
The idea was to create a "/mountbase" directory and share it with
"sharemgr" (this is working and I can access t
Simply use the mountpoint property of the zfs filesystem to set
the mountpoint to /mountbase/zfsfilesys1, then set the sharenfs
or sharesmb property of the filesystem to export it. You may
also look at the casesensitivity property for the smb shares.
There's no need to use sharemgr for this.
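Roughly, that would be (a sketch with placeholder pool/dataset names):

zfs set mountpoint=/mountbase/zfsfilesys1 zfspool/zfsfilesys1
zfs set sharenfs=on zfspool/zfsfilesys1
zfs set sharesmb=name=zfsfilesys1 zfspool/zfsfilesys1
# casesensitivity is a create-time-only property, so for SMB clients it has
# to be chosen when the dataset is created, e.g.:
zfs create -o casesensitivity=mixed zfspool/zfsfilesys2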
Hi all
Seems Garrett D'Amore from Nexenta has a few things to say about using SATA
drives in SAS systems
http://gdamore.blogspot.no/2010/08/why-sas-sata-is-not-such-great-idea.html
The gist is "Just don't do it"…
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r.
Hello Udo, I know that mountpoints can be changed and that sharing is
possible for a zfs filesystem, but that is not what I want. In your
scenario there is a share for each filesystem; I want to have one share on
the server through which I can see and browse all zfs filesystems.
Please correct me if I h
Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
here or there that has pinned the SATA toxicity issues to an mpt driver bug
or somesuch?
-Original Message-
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Tuesday, June 26, 2012 9:02 AM
To: Discussion
On 26/06/2012 15:24, Armin Maier wrote:
zfs umount zfspool/zfsfilesys1
mkdir /mountbase/zfsfilesys1
zfs set mountpoint=/mountbase/zfsfilesys1 zfspool/zfsfilesys1
zfs mount zfspool/zfsfilesys1
That worked, but I still cannot access the "zfsfilesys1" folder over the
network; the folder disappea
On 26/06/2012 15:24, Armin Maier wrote:
zfs umount zfspool/zfsfilesys1
mkdir /mountbase/zfsfilesys1
zfs set mountpoint=/mountbase/zfsfilesys1 zfspool/zfsfilesys1
zfs mount zfspool/zfsfilesys1
And there's no need to mkdir /mountbase/zfsfilesys1; this is also handled
by zfs set mountpoint=
--
Dr.Udo
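As an aside, the whole move can usually be reduced to one command (same placeholder names as above): as long as the filesystem is not busy, ZFS unmounts and remounts it at the new location by itself when the mountpoint property changes.

zfs set mountpoint=/mountbase/zfsfilesys1 zfspool/zfsfilesys1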
Hi, all.
I installed OpenIndiana b151 in VirtualBox on Mac OS X; there is a zone with
a dedicated IP over e1000g1, which connects to a host-only virtual NIC.
Everything was okay before updating to b151a4. I could access it from
everywhere: the host OS, the global zone, or any other VMs whose NICs are in
the same
On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote:
> Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
> here or there that has pinned the SATA toxicity issues to an mpt driver bug
> or somesuch?
Not really. Search for other OSes and their tales of woe. In some cases,
On 6/26/2012 1:15 PM, Richard Elling wrote:
On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote:
Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
here or there that has pinned the SATA toxicity issues to an mpt driver bug
or somesuch?
Not really. Search for
2012-06-26 4:55, Vishwas Durai wrote:
Hello all,
I'm wondering what options are available for the root filesystem in OI? By
default, the install uses ZFS and creates an rpool. But if I'm a ZFS hacker
and have made some changes to some core structures, how does one go about
debugging that? Is dropping to kmdb
On 26/06/2012 14:44, Armin Maier wrote:
Hello anybody!
I've got a problem centrally sharing ZFS filesystems.
What I want is to create zfs filesystems for different user groups, and I
want to share them in a central location.
The idea was to create a "/mountbase" directory and share it with
"sharemg
On Tue, Jun 26, 2012 at 6:23 PM, WarGrey[战斗暴龙] wrote:
> I installed OpenIndiana b151 in VirtualBox on Mac OS X; there is a zone with
> a dedicated IP over e1000g1, which connects to a host-only virtual NIC.
> Everything was okay before updating to b151a4. I could access it from
> everywhere: the host OS, the glob
On Tue, 26 Jun 2012 20:04:51 +0200, you wrote:
>On Tue, Jun 26, 2012 at 6:23 PM, WarGrey[战斗暴龙]
>wrote:
>> I installed OpenIndiana b151 in VirtualBox on Mac OS X; there is a zone with
>> a dedicated IP over e1000g1, which connects to a host-only virtual NIC.
>> Everything was okay before updatin
On 06/26/2012 03:01 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> Seems Garrett D'Amore from Nexenta has a few things to say about using SATA
> drives in SAS systems
>
> http://gdamore.blogspot.no/2010/08/why-sas-sata-is-not-such-great-idea.html
>
> The gist is "Just don't do it"…
Couldn't ag
Hello all, I've got a new small matter for generic discussion:
Some of my systems have hardware watchdogs, either as motherboard
features or as IPMI add-ons.
A small intro for newcomers: watchdogs include a timer that can be
started by the BIOS or by a watchdog driver in the OS, and the driver shou
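The usual pattern, very roughly, is "arm once, then keep kicking". The sketch below is hypothetical and not tied to any particular OI driver; the bmc-watchdog utility mentioned later in this thread has its own options, so check bmc-watchdog(8) before copying anything here (the flag names are assumptions):

# arm the IPMI watchdog with a 60-second countdown (flag names assumed)
bmc-watchdog --set --initial-countdown=60
# keep kicking it; if this loop, or the OS itself, wedges, the countdown
# expires and the BMC resets the machine
while sleep 20; do
    bmc-watchdog --reset
done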
Hello all,
I am revising an older OpenSolaris file server before an upgrade
to OI, and this server uses COMSTAR to publish some zvols via iSCSI.
While revising the procedure used to set it up originally, I remembered
that the initial OpenSolaris iSCSI stack performed worse, but only
it was integra
I don't see what is wrong with option #1. Just leave the watchdog running
until the kernel shuts down the computer. There is no reason the watchdog
should ever stop (that way it offers protection continuously until the
system is otherwise shut off).
On 6/26/12 1:26 PM, "Jim Klimov" wrote:
>He
2012-06-27 0:54, Rennie Allen wrote:
I don't see what is wrong with option #1. Just leave the watchdog running
until the kernel shuts down the computer. There is no reason the watchdog
should ever stop (that way it offers protection continuously until the
system is otherwise shut off).
Well, t
On Jun 26, 2012, at 10:36 AM, Dan Swartzendruber wrote:
> On 6/26/2012 1:15 PM, Richard Elling wrote:
>> On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote:
>>
>>
>>> Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
>>> here or there that has pinned the SATA toxicity
On Mon, Jun 25, 2012 at 5:55 PM, Vishwas Durai wrote:
> Hello all,
> I'm wondering what options are available for the root filesystem in OI? By
> default, the install uses ZFS and creates an rpool. But if I'm a ZFS hacker
> and have made some changes to some core structures, how does one go about
> debugging
I am working on setting up two SANs to replicate via AVS. I think I get the idea
of how to set this up, but I have a few questions.
The two SANs I am building have 15 SAS drives + 2 cache SSDs and 2 log SSDs.
The question I have is: do you replicate the SSDs?
Do you need a bitmap for all the drives o
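For what it's worth, the SNDR enable syntax pairs every replicated volume with its own bitmap volume on each host, roughly as below. This is a rough sketch from memory with placeholder host and device names; please verify against sndradm(1M) before using any of it.

# enable a remote-mirror set: local host, data vol, bitmap vol, then the
# same triple for the remote side, plus the transport and mode
sndradm -e san1 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t9d0s0 \
           san2 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t9d0s0 ip async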
Hi Jim,
>1) Is COMSTAR still not-integrated with shareiscsi ZFS attributes?
>Or can the pool use the attribute, and the correct (new COMSTAR)
>iSCSI target daemon will fire up?
COMSTAR is not integrated with ZFS and it will ignore the ZFS props.
What is the release of the old host OS?
I s
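For the archives, the COMSTAR way of exporting a zvol looks roughly like this (a sketch with placeholder pool/volume names; the old shareiscsi property plays no part in it):

# create a zvol to export
zfs create -V 100g tank/iscsivol
# register it as a SCSI logical unit with the sbd provider
sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
# expose the LU to all initiators (use host/target groups to restrict it)
stmfadm add-view <GUID printed by sbdadm>
# create an iSCSI target and make sure the target service is running
itadm create-target
svcadm enable -r svc:/network/iscsi/target:default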
Okay, thanks for your replies.
But this workaround is not very good, since the topology of my virtual
network would have to change both physically and logically.
I want to know when they will work well together again; the previous versions
were okay.
Until then, I will not use zones any more to deploy my
+1 what Jay says.
Also, I don't think that dropping to mdb is all that painful. It is one
of the best debuggers around.
On 6/26/12 4:09 PM, "Jay Heyl" wrote:
>On Mon, Jun 25, 2012 at 5:55 PM, Vishwas Durai
>wrote:
>
>> Hello all,
>> I'm wondering what options are available for root filesyste
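For anyone following along, the two usual entry points are something like this (a sketch; see mdb(1) and kmdb(1) for the details):

# read-only look at the live kernel from a shell
mdb -k
# inside mdb: ::status shows kernel version and panic info, ::ps the kernel's
# view of the process table, ::stack a stack trace
#
# or load the in-kernel debugger so you can set breakpoints and poke at a
# panic before the box reboots: add -k to the kernel boot arguments (GRUB
# kernel line on x86), or load it on the fly from a running system with:
mdb -K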
On Tue, Jun 26, 2012 at 1:53 PM, Jim Klimov wrote:
>> On 26/06/2012 14:44, Armin Maier wrote:
>>>
>>> Hello anybody!
>>> I've got a problem centrally sharing ZFS filesystems.
>>> What I want is to create zfs filesystems for different user groups, and
>>> I want to share them in a central location.
[.
On Tue, Jun 26, 2012 at 4:26 PM, Jim Klimov wrote:
> Hello all, I've got a new small matter for generic discussion:
> Now, in OpenSolaris (and portable to OI) there is a bmc-watchdog
> package for some proprietary hardware implementations, and a newly
> ported open-source driver is brewin