Michelle,
Set the file system to inherit ACLs and then set the permissions correctly.
Something like this:
zfs set aclinherit=passthrough pool01/fs01
/usr/bin/chmod \
  A=group:writegroup:rwxpdDaARWcCos:fd-:allow,group:readgroup:r-x---a-R-c--s:fd-:allow \
  /pool01/fs01
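To verify the result afterwards (a quick sketch using the same pool/fs names):
zfs get aclinherit pool01/fs01   # should now report "passthrough"
/usr/bin/ls -dV /pool01/fs01     # -V prints the full NFSv4 ACL entries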
If you already have data
ng tasks
>
> echo stmf_nworkers_cur/D | mdb -k # number of running workers
>
> best regards!
> --
> pawel
>
>
>
> On Tue, Jun 24, 2014 at 8:21 PM, w...@vandenberge.us
> wrote:
>
> > Hello,
> >
> > I have three OpenIndiana (151A8) servers used as
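(Sketch, not from the thread: the mdb one-liner quoted above can simply be looped from the shell to watch the worker count while the problem develops.)
while true; do date; echo stmf_nworkers_cur/D | mdb -k; sleep 60; done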
Hello,
I have three OpenIndiana (151A8) servers used as iSCSI targets. All servers have
two 10GbE interfaces to separate Dell 8024F switches running the latest
firmware. These servers provide storage for a bank of 16 Windows 2012R2
virtualization servers, each running 16 virtual machines (Windows
My guess would be that this is due to the lack of USB3 support in OpenIndiana.
Have you tried plugging the drive into a USB2 port, forcing the device into USB2
mode, and seeing if it works (I know that's not what you really want but it
would narrow down the issue)?
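One rough way to see how the drive attached (a sketch; output and device paths vary by system):
cfgadm -al | grep -i usb                      # list USB attachment points; does the disk show up here?
prtconf -D | egrep -i 'ehci|uhci|ohci|xhci'   # ehci = USB 2.0, ohci/uhci = 1.1; no xhci means no USB 3.0 support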
Wim
> On March 28, 2014 at 11
Good morning,
We have two high-capacity OpenIndiana 151a8 iSCSI servers in production (besides
many 151a7s). The two a8 systems are displaying an annoying but repeatable
issue: every 7 or 8 days, Comstar will completely stop responding. From the
network side it looks like the iSCSI targets have d
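(A quick triage sketch for the next time it hangs, using the standard COMSTAR tools; not from the original message:)
svcs -xv stmf svc:/network/iscsi/target:default   # are the services still online?
stmfadm list-target -v                            # target state and sessions as COMSTAR sees them
itadm list-target -v                              # iSCSI-level view of the same targets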
I have a system built using the Xyratex RS-1220-E311-XPN-2 enclosures. These
have twelve SAS/SATA slots, dual power supplies and one or two
RS-SCM-E311-XPN-1220 6Gb/s SAS "expander/controllers". Just add an LSI 9200-8e
(or equivalent) and an SFF8088-SFF8088 cable and you're in business. They've
bee
st nightly, I added a comment at this point
>
> Gea
>
> Am 17.06.2013 22:14, schrieb w...@vandenberge.us:
> > Thanks for the useful responses everyone. As one of the responses I received
> > P2P
> > mentioned, it turned out to be a fairly well-known issue with the snippet
That is an interesting thought. I did check the aggregate CPU utilization (Dual
6-core 2.0GHz Xeon, E5-2630L) and overall it looks fine with a load average
below 0.5 during transfers. However, I have not checked if there is a single
thread that is blocking the process. I guess the cheap way to ver
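(A sketch of one cheap check with stock tools: per-thread microstates will show a single pegged or blocked thread even when the overall load average looks idle.)
prstat -mL 5    # per-LWP microstate accounting, 5-second interval
mpstat 5        # per-CPU view; one CPU near 100% with the rest idle is the usual sign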
Hello,
I was hoping someone can point me in the right direction. I have a server
(151A8) with two identical zpools. Each of the pools has a number of file
systems on it and both are over 80% empty. When I copy a file system from one
pool to the other using something like:
zfs send -R pool1/fs01@
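(For reference, the usual shape of a local pool-to-pool copy, with hypothetical snapshot and destination names:)
zfs snapshot -r pool1/fs01@migrate                           # hypothetical snapshot name
zfs send -R pool1/fs01@migrate | zfs receive -F pool2/fs01   # hypothetical destination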
Hi,
I just rolled my first a8 server into testing. This is a server build for space,
not speed (D2D2T scenario). It has 135 drives grouped into twelve RAIDZ3 groups
of 11 drives each (plus three spares). With each drive having a usable capacity
of 3.64TB (as reported by format) I would expect the
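(Back-of-envelope from the numbers above: 12 vdevs x (11 - 3 parity) x 3.64TB comes to roughly 349TB of usable space, while the raw size across all 132 data+parity drives is 12 x 11 x 3.64TB, roughly 480TB; for RAIDZ pools, zpool list reports the latter and zfs list something close to the former.)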
Hello,
Does anyone have any real world experience using a RAM based device like the
DDRdrive X1 as a ZIL on 151a7? At 4GB they seem to be a little small but with
some txg commit interval tweaking it looks like it may work. The entire 4GB is
saved to NAND in case of a power failure so it seems like
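(For context, the two knobs this involves, as a sketch; the pool and device names are hypothetical and the tunable name is as I recall it for that release:)
zpool add pool01 log c4t1d0                         # dedicate the card as a separate log (slog) vdev
echo 'set zfs:zfs_txg_timeout=5' >> /etc/system     # txg commit interval in seconds; takes effect after a reboot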
Good morning,
Last week we put three identical oi_151a7 systems into pre-production. Each
system has 240 drives in 9-drive RAIDZ1 vdevs (I'm aware of the potential DR
issues with this configuration and I'm ok with them in this case). The drives
are Seagate Enterprise nearline SAS, 7200RPM. The serve
I'd go with an HBA and present ZFS with the raw disks. Save yourself a couple of
bucks and a bunch of potential hassle. I've had good luck with the LSI 9200-8e
(external) and 9210-8i (internal). Both are PCIe 2.0. The 9207-8i and 9207-8e
are the PCIe 3.0 equivalents but I have not tested them. 85 d
13 at 4:27 PM Saso Kiselkov wrote:
>
>
> On 05/07/2013 21:00, w...@vandenberge.us wrote:
> > Latencytop reports the following continuously for the pool process and it
> > doesn’t change significantly under load, which looks ok to me:
> >
> > genunix`cv_wait g
s slowing down for a single bad/dying disk).
>
> -Lucas Van Tol
>
> > Date: Fri, 5 Jul 2013 20:09:45 +0200
> > From: iszczesn...@gmail.com
> > To: openindiana-discuss@openindiana.org
> > Subject: Re: [OpenIndiana-discuss] Sudden ZFS performance issue
> >
> > On F
> -Lucas Van Tol
>
> > Date: Fri, 5 Jul 2013 20:09:45 +0200
> > From: iszczesn...@gmail.com
> > To: openindiana-discuss@openindiana.org
> > Subject: Re: [OpenIndiana-discuss] Sudden ZFS performance issue
> >
> > On Fri, Jul 5, 2013 at 8:00 PM, Saso Kiselko
Good morning,
I have a weird problem with two of the 15+ OpenSolaris storage servers in our
environment. All the Nearline servers are essentially the same. Supermicro
X9DR3-F based server, Dual E5-2609's, 64GB memory, Dual 10Gb SFP+ NICs, LSI
9200-8e HBA, Supermicro CSE-826E26-R1200LPB storage arr
rds,
>
> The out-side
>
> On 17 Jun 2013 at 20:16, James Carlson wrote the
> following:
>
> > On 06/17/13 11:59, w...@vandenberge.us wrote:
> >> At this point the interface is plumbed with the 127.0.0.1 address and the
> >> machine is essential
Hello,
An OpenIndiana server that has been running fine for months suddenly started to
exhibit weird and annoying behavior: each time it is rebooted, an extra line is
added to the /etc/hosts file.
The correct file is:
10.0.9.21 st01a st01a.local
127.0.0.1 localhost loghost
after the first rebo
Hello,
After reading the Storage Controllers section
(http://wiki.openindiana.org/pages/viewpage.action?pageId=4883876) on the
OpenIndiana website, I decided to build my system on the Supermicro X9DRW-3F-O
motherboard, which uses the Intel C606 SAS/SATA controller, which shows as
supported using th