[OpenIndiana-discuss] server hangs

2011-08-31 Thread Roman Naumenko

Hi,

I have SunOS 5.11 oi_148 installed on my storage server with 8 disks in 
raidz2 pool.

It hangs about once a week and I have to restart it.
Can you help me troubleshoot it?

It has some zfs volumes shared over nfs and afpd. (afpd is unfortunately a
development version, needed to satisfy OS X Lion.)


roks@data:~$ afpd -V
afpd 2.2.0 - Apple Filing Protocol (AFP) daemon of Netatalk

afpd has been compiled with support for these features:

AFP3.x support: Yes
TCP/IP Support: Yes
DDP(AppleTalk) Support: No
CNID backends: dbd last tdb
SLP support: No
Zeroconf support: Yes
TCP wrappers support: Yes
Quota support: Yes
Admin group support: Yes
Valid shell checks: Yes
cracklib support: No
Dropbox kludge: No
Force volume uid/gid: No
ACL support: Yes
EA support: ad | sys
LDAP support: Yes

It also has time-slider enabled, which is a pretty buggy piece of, hmm,
software, but it shouldn't cause the server to crash or hang.


So the problems start with nfs and/or afpd timeouts on clients, but I can
still ssh to the server. I can't read any files or logs, though.
Then the network service disappears within a minute or a few, the console
freezes, and I have to do a hard restart at that point.


Where should I look to understand what is causing this?
Since I can't reproduce the problem, I'd like to be prepared when it
happens next time.

I couldn't find anything unusual in the logs after restart.

time-slider complains for some reason about space on rpool:
Aug 31 19:41:36 data time-sliderd: [ID 702911 daemon.notice] No more hourly snapshots left
Aug 31 19:41:36 data time-sliderd: [ID 702911 daemon.warning] rpool exceeded 80% capacity. Hourly and daily automatic snapshots were destroyed


Where does it see 80%?

$ df -h

Filesystem                      Size  Used Avail Use% Mounted on
rpool/ROOT/solaris              5.5G  3.0G  2.6G  54% /
swap                            1.4G  396K  1.4G   1% /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1  5.5G  3.0G  2.6G  54% /lib/libc.so.1
swap                            1.4G  8.0K  1.4G   1% /tmp
swap                            1.4G   52K  1.4G   1% /var/run
rpool/export                    2.6G   32K  2.6G   1% /export
rpool/export/home               2.6G   33K  2.6G   1% /export/home
rpool/export/home/usr1          2.6G   38K  2.6G   1% /export/home/usr1
rpool/export/home/usr2          3.0G  385M  2.6G  13% /export/home/usr2
rpool                           2.6G   48K  2.6G   1% /rpool
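
A likely explanation, borne out later in the thread: df -h only shows mounted
filesystems, while time-sliderd checks pool-level capacity, which also counts
swap zvols (rpool/swap turns out to be 8GB) and space held by snapshots. A
quick sketch of the pool-level view, assuming stock ZFS tooling:

$ zpool list rpool             # the CAP column is what time-sliderd compares to 80%
$ zfs list -r -o space rpool   # per-dataset usage, including zvols and snapshots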


--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] server hangs

2011-08-31 Thread Roman Naumenko
Well, that might be the reason. 8 drives is certainly too much for a
stock PSU. But there should be some traces, no?

How did you figure out the reason for errors on your system?

--Roman

Daniel Kjar said the following, on 31-08-11 9:43 PM:
Careful... are you overtaxing your power supply?  My 148 system was 
behaving like that when I put too many drives in an ultra 20.


On 8/31/2011 7:48 PM, Roman Naumenko wrote:

[original message snipped]


Re: [OpenIndiana-discuss] server hangs

2011-08-31 Thread Roman Naumenko

Hmm, rpool/swap is back to 8GB again; I remember changing it.

fmdump -eV shows the latest error:
Aug 04 2011 19:39:34.864250546 ereport.fs.zfs.vdev.open_failed
The storage pool was resilvered; it was a small hiccup in disk connectivity.

/var/svc/log/ - no errors for the day the server went into its limbo state.
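
For the next hang, fmdump can be narrowed to a time window so the old
resilver noise doesn't drown out anything new (options per fmdump(1M); the
date below is just an example):

$ fmdump -eV -t 31Aug11    # error-log events since Aug 31, 2011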

--Roman

Jason Matthews said the following, on 31-08-11 11:07 PM:

perhaps checking fmd would be a good start, but an overloaded PSU is possible.

fmdump -eV

for your storage utilization some handy commands to add to your arsenal are:

zpool list
zfs get -r used

cheers

Sent from Jasons' hand held

On Aug 31, 2011, at 6:43 PM, Daniel Kjar  wrote:


[quoted text snipped]


Re: [OpenIndiana-discuss] server hangs

2011-09-01 Thread Roman Naumenko
That was costly troubleshooting.
All right then, I will wait for the next failure to look through it once
again, and maybe swap the PSU if nothing is found again.

--Roman N 

- Original Message -

> I burned through about 3 disks before I figured it out. Nothing in the
> logs made me think this, but the eventual failure of the disks alerted me
> that something hardwarish was happening.

> [earlier messages snipped]


Re: [OpenIndiana-discuss] server hangs

2011-09-01 Thread Roman Naumenko
I need to dig into the MB manual, but it's basically all commodity hw based
(although the MB is some server-type Asus).

--Roman N 

- Original Message -

> what about hw event logs? if you have power fluctuations it might show
> up there.

> you can probably pull those out from your service processor or boot
> to bios and read them there.

> Sent from Jasons' hand held

> On Sep 1, 2011, at 8:37 AM, Roman Naumenko  wrote:

> > [earlier messages snipped]

Re: [OpenIndiana-discuss] server hangs

2011-09-01 Thread Roman Naumenko

It's Kingston 16GB ssd drive.

--Roman N

Lucas Van Tol said the following, on 01-09-11 5:34 PM:

What is your rpool like?  I saw some bizarre behavior with a compact-flash
based rpool; as the CF card got overused and got slower and slower, it
eventually would hang without throwing any actual errors (just service times
approaching infinity).
Services that had enough information stored in memory continued to work, but
anytime something read from the rpool it would hang, and services slowly died
off.  The system never seemed to fault/offline the rpool either...
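
A way to catch this live, if it's the same failure mode: watch per-device
service times while the system still responds, and look for the rpool
device's asvc_t climbing toward seconds:

$ iostat -xn 5    # asvc_t = average service time per device, in ms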


[earlier messages snipped]

Re: [OpenIndiana-discuss] server hangs

2011-09-10 Thread Roman Naumenko

A continuation I hoped wouldn't follow: the server hung again.

The error I saw on the console was

Sep 10 20:15:39/256 ERROR: svc:/system/hal:default: Method "/lib/svc/method/svc-hal start" failed with exit status 95.
Sep 10 20:15:39/256: system/hal:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)


I couldn't do anything on the console and had to restart the server.

The mounts were lost at 20:08 on the client:
Sep 10 20:08:07 station KernelEventAgent[72]: tid  received event(s) VQ_NOTRESP (1)


The last fmdump entry was 5 days ago:
Sep 05 2011 14:37:37.325349500 ereport.fs.zfs.vdev.open_failed
nvlist version: 0

So does this confirm either theory, a failing PSU or a bad SSD?

--Roman N

Lucas Van Tol said the following, on 02-09-11 10:12 AM:

You might not want to have any swap enabled on that.   SSD's tend to perform 
worse when they are full (I'm not sure if allocating 8G to swap actually uses 
up space on the physical device or not) and I have seen other Kingston SSD's 
hang for a bit at times, which would probably not be good for swap.

If possible, you might try and redirect some logs off of rpool; it might not be 
able to log anything if the rpool is the problem.
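
A minimal sketch of that redirect, assuming the data pool is called storpool
(pool name and path here are only placeholders), using the standard Solaris
syslogd; syslog.conf needs tabs between the selector and the action:

$ pfexec zfs create storpool/logs
$ pfexec touch /storpool/logs/messages
(append to /etc/syslog.conf, tab-separated:  *.err;kern.debug;daemon.notice  /storpool/logs/messages)
$ pfexec svcadm refresh svc:/system/system-log:default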

[earlier messages snipped]

Re: [OpenIndiana-discuss] server hangs

2011-09-17 Thread Roman Naumenko
It was a fresh install from the OpenIndiana distro.

--Roman 

- Original Message -

> On Wed, 2011-08-31 at 19:48 -0400, Roman Naumenko wrote:
> > Hi,
> >
> > I have SunOS 5.11 oi_148 installed on my storage server with 8
> > disks in
> > raidz2 pool.
> > It hangs about once in a week and I had to restart it.
> > Can you help me troubleshoot it?

> Did you upgrade to this version from, let's say, OpenSolaris?

> --
> Mateusz Pawlowski 



[OpenIndiana-discuss] Can't install Vbox on oi_148

2011-09-17 Thread Roman Naumenko
Hi, 

It's oi_148. 
I tried a few different versions of VirtualBox (the latest and
VirtualBox-4.1.0) and it always ends in a hung system.

The installation process gets to this point:

Loading Virtualbox kernel modules... 
kthread_t::t_preempt at 142 
cput_t::cpu_runrun at 216 
(something) pkrunrun at 217 

and then the system is dead. 

If restarted, it says "bla-bla-bla, can't load vboxnet module ... it's not in 
PMPI" - or something like that, then few more lines listed above and system is 
dead. 
I have to reboot it in a single-user mode and remove the package. 

--Roman N 


Re: [OpenIndiana-discuss] Can't install Vbox on oi_148

2011-09-17 Thread Roman Naumenko

Any other options?
oi_148 is working more or less OK; I'm not much inclined to upgrade it
right now just to experiment with VBox.


--Roman

ken mays said the following, on 17-09-11 1:12 PM:

Use oi_151a and let us know.

--- On Sat, 9/17/11, Roman Naumenko wrote:

[original message snipped]


Re: [OpenIndiana-discuss] Can't install Vbox on oi_148

2011-09-18 Thread Roman Naumenko

OK, that makes sense.
I'll wait for the next major release then.

And the VBox issue has been resolved: VMware Workstation is installed on a
spare box.


--Roman N

Alex Viskovatoff said the following, on 18-09-11 3:19 AM:

Well, oi_151a works better than oi_148 in my experience, and neither is
considered a stable release: they are development releases.

And developers are not much inclined to pay attention to problems
reported for old development releases, because the developers have moved
on. One of the meanings of "development release" is that no hint was
ever given of any intention to support it. The whole point of a
development release is to get the bugs out of it so that a stable
release can come out. And the best way for that to work is for everyone
to work as a team and be on the latest development release.

On Sat, 2011-09-17 at 13:16 -0400, Roman Naumenko wrote:

[quoted text snipped]


Re: [OpenIndiana-discuss] ZFS and AVS guru $500

2012-07-26 Thread Roman Naumenko

Richard Elling said the following, on 25-07-12 1:14 PM:

On Jul 24, 2012, at 9:11 AM, Jason Matthews wrote:

are you missing a zero to the left of the decimal place?

Been there, done that, wrote a whitepaper. Add 2 zeros.
  -- richard

Or add three and buy a pair of FAS3200s
:)

--Roman N

Sent from Jasons' hand held

On Jul 23, 2012, at 8:57 PM, "John T. Bittner"  wrote:


Subject: ZFS and AVS guru

I am working on setting up 2 SANs to replicate via AVS.
The 2 SANs I built each have 15 SAS drives + 2 cache SSDs and 2 log SSDs.
OS drives are also SSDs and are mirrored.
The units are running the current version of OpenIndiana with AVS installed.
In our environment we run COMSTAR fibre channel targets with iSCSI backup.
I have conflicting reports on whether active/active is possible, but if not,
active/passive will do.

I need someone who has done this before and is familiar with this type of
setup.
$500.00 for the work, including 1 hour of training on the system: how to
monitor the replication, failover, and failback.

This is a rush job, so you must be available in the next day or so.

Anyone interested please email me direct at j...@xaccel.net

Thanks

John Bittner
Xaccel Networks.






--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





[OpenIndiana-discuss] Script to clean mess after zfs-auto-snapshot

2013-02-11 Thread Roman Naumenko
After another big cleanup on the home filer, during which I got a terrible
headache because of zfs-auto-snapshot, I decided to ask if anybody has
tried to simplify snapshot management.
If a user could list all filesystems with weekly zfs-auto snapshots enabled,
count them up, or enable other periodic snapshot schedules on a filesystem,
that would make a lot of sense, I think.


Something like this:
asnap -f filesystem -p period {on|off|list|clean} {recursive}

Of course, all of this can be done with combinations of zfs/svc/grep, but
it usually gets tedious; the "list" mode, for instance, collapses to the
one-liner sketched below.
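
A sketch of the "list" mode using only stock commands (asnap itself is the
hypothetical wrapper above; shown for the weekly schedule):

$ zfs get -o name,value,source com.sun:auto-snapshot:weekly | awk '$2 == "true"'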

But I don't want to reinvent the wheel, so I decided to ask around.

--Roman



Re: [OpenIndiana-discuss] Script to clean mess after zfs-auto-snapshot

2013-02-11 Thread Roman Naumenko

Jan Owoc said the following, on 11-02-13 8:35 PM:

On Mon, Feb 11, 2013 at 5:39 PM, Roman Naumenko  wrote:

After another big cleanup on the home filer, during which I got a terrible
headache because of zfs-auto-snapshot, I decided to ask if anybody has tried
to simplify snapshot management.

I think everyone has slightly different needs, so some combination of
zfs commands is necessary. Any tool/wrapper you'd write is likely to
be as overwhelming as zfs itself.

That's exactly the problem: too overwhelming, and it's also hard to remember
the commands if you're not working with them often.

If a user could list all filesystems with weekly zfs-auto snapshots enabled,
count them up, or enable other periodic snapshot schedules on a filesystem,
that would make a lot of sense, I think.

$ zfs get com.sun:auto-snapshot

will list all the filesystems along with their "auto-snapshot status".
If you were tinkering with turning specific times on/off, you'd need
to run:
$ zfs get com.sun:auto-snapshot:weekly


Should the snapshots themselves have the property assigned? I see they get
the property inherited from the dataset itself (maybe I did something wrong).


--Roman



[OpenIndiana-discuss] zfs-auto-snapshot makes snapshots even if "false" set

2013-02-14 Thread Roman Naumenko

Hi,

I have a weird issue with zfs-auto-snapshot on oi_151a5: it continues to
make snapshots even when I've asked it not to.
By the way, is it possible to update this package to a newer version
without upgrading the whole distro?


@data:~$ zfs get all storpool/mailserver_data/zca8vm | grep snap
storpool/mailserver_data/zca8vm  snapdir                         visible  inherited from storpool
storpool/mailserver_data/zca8vm  usedbysnapshots                 464M     -
storpool/mailserver_data/zca8vm  com.sun:auto-snapshot:hourly    false    local
storpool/mailserver_data/zca8vm  com.sun:auto-snapshot:daily     false    local
storpool/mailserver_data/zca8vm  com.sun:auto-snapshot:weekly    false    local
storpool/mailserver_data/zca8vm  com.sun:auto-snapshot:frequent  false    inherited from storpool


@data:~$ zfs list -t snapshot | grep -v "rpool/e" | grep mailserver_data | grep 8vm
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-00h52   112M  -  6.82G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-01h52  71.4M  -  7.22G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-02h52  20.5M  -  7.22G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-03h52  20.6M  -  7.22G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-04h52  20.6M  -  7.23G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-05h52  20.9M  -  7.23G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-06h52  2.24K  -  7.23G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-08h52      0  -  7.23G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-09h52      0  -  7.23G  -
storpool/mailserver_data/zca8vm@zfs-auto-snap_hourly-2013-02-13-10h52      0  -  7.23G  -


Initially it inherited the snapshot options from the dataset above, but even
after I set all the options to false, it goes on and on.


--Roman



Re: [OpenIndiana-discuss] zfs-auto-snapshot makes snapshots even if "false" set

2013-02-14 Thread Roman Naumenko

Jan Owoc said the following, on 14-02-13 10:19 PM:

On Thu, Feb 14, 2013 at 8:07 PM, Roman Naumenko  wrote:

I have a weird issue with zfs-auto-snapshot on oi_151a5: it continues to
make snapshots even when I've asked it not to.

[...]

Initially it inherited the snapshot options from the dataset above, but even
after I set all the options to false, it goes on and on.

Just a guess, but maybe time-slider reads these properties only on
startup, and if you change them you need to restart time-slider (?).
Worth a try.

I probably tried that as well, but don't remember for sure.

Anyway, the command below did the trick:
sudo zfs set com.sun:auto-snapshot=false storpool/mailserver_data/zca8vm

By the way, is it possible to update this package to a newer version without
upgrading the whole distro?

I would imagine it possible with some combination of "pkg refresh" and
"pkg update", but I'm not sure of the exact commands. You would likely
need to update all the dependencies as well.

OK, I'll probably leave it as is. Overall it works reliably enough that
upgrades are hard to justify.


--Roman



[OpenIndiana-discuss] zfs-auto-snapshot yet again

2013-03-18 Thread Roman Naumenko

Hi,

Just wanted to ask if this is the latest version of time-slider?

data:~$ pkginfo -l SUNWgnome-time-slider
   PKGINST:  SUNWgnome-time-slider
      NAME:  Time Slider ZFS snapshot management for GNOME
  CATEGORY:  GNOME2,application,JDSoi
      ARCH:  i386
   VERSION:  0.2.97,REV=110.0.4.2011.04.17.15.25
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  Time Slider ZFS snapshot management for GNOME
  INSTDATE:  Feb 26 2013 07:39
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed

It's not managing snapshots very reliably.
For example, the error below is similar to what was discussed a couple of
years ago at https://www.illumos.org/issues/1013.

tail /var/svc/log/application-time-slider\:default.log
Snapshot monitor thread exited.
[ Mar 17 23:51:50 Stopping because all processes in service exited. ]
[ Mar 17 23:51:50 Executing stop method (:kill). ]
[ Mar 17 23:51:50 Executing start method ("/lib/svc/method/time-slider start"). ]
[ Mar 17 23:51:52 Method "start" exited with status 0. ]
[ Mar 18 20:55:31 Enabled. ]
[ Mar 18 20:56:09 Executing start method ("/lib/svc/method/time-slider start"). ]
[ Mar 18 20:56:16 Method "start" exited with status 0. ]
Failed to create snapshots for schedule: daily
Caught RuntimeError exception in snapshot manager thread
Error details:
BEGIN ERROR MESSAGE
['/usr/bin/pfexec', '/usr/sbin/zfs', 'snapshot', '-r', 'pool1/groupd/vol1@zfs-auto-snap_daily-2013-03-18-23h51'] failed with exit code 1
cannot create snapshot 'pool1/groupd/vol1/snap20130315@zfs-auto-snap_daily-2013-03-18-23h51': dataset already exists
no snapshots were created
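
'zfs snapshot -r' is atomic: it fails for the whole tree if any child dataset
already has a snapshot with the target name. One way to clear this particular
collision before the next run, assuming the stale snapshot on the child is
disposable:

$ pfexec zfs destroy 'pool1/groupd/vol1/snap20130315@zfs-auto-snap_daily-2013-03-18-23h51'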

Has anybody had a positive experience with zfs-auto-snapshot, and if so,
what version are you using?

--Roman




Re: [OpenIndiana-discuss] zfs-auto-snapshot yet again

2013-03-20 Thread Roman Naumenko

dormitionsk...@hotmail.com said the following, on 19-03-13 7:14 PM:

On Mar 18, 2013, at 10:00 PM, Roman Naumenko wrote:

[original message snipped]


Well, nobody else has piped in, so I guess I will, for what it's worth.

My pkginfo looks the same as yours.  My logs don't show any errors, thank
God.  (I did a "cat" on it, too.  No errors.)

I'm not sure how helpful it is to you.  The bug report you mentioned was for
more complex configurations.  Mine is a simple, default configuration.

Apparently, from what others have said on this list, you have to change the 
root password before enabling the time slider -- at least for the gui, anyway, 
as I understand.

I had already done that.  Then I went into the gui app, and enabled Time Slider 
with the defaults.

I hope this helps.

fp


dsad...@trinity.dsicons.net:~$ su
Password:
dsad...@trinity.dsicons.net:~# pkginfo -l SUNWgnome-time-slider
   PKGINST:  SUNWgnome-time-slider
      NAME:  Time Slider ZFS snapshot management for GNOME
  CATEGORY:  GNOME2,application,JDSoi
      ARCH:  i386
   VERSION:  0.2.97,REV=110.0.4.2011.04.17.15.25
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  Time Slider ZFS snapshot management for GNOME
  INSTDATE:  Oct 04 2012 23:26
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed

dsad...@trinity.dsicons.net:~# tail /var/svc/log/application-time-slider\:default.log
[ Mar 18 15:00:25 Executing start method ("/lib/svc/method/time-slider start"). ]
[ Mar 18 15:00:37 Method "start" exited with status 0. ]
[ Mar 18 22:26:55 Stopping because service disabled. ]
[ Mar 18 22:26:55 Executing stop method (:kill). ]
[ Mar 19 15:04:51 Enabled. ]
[ Mar 19 15:05:02 Executing start method ("/lib/svc/method/time-slider start"). ]
[ Mar 19 15:05:18 Method "start" exited with status 0. ]
[ Mar 19 16:13:22 Enabled. ]
[ Mar 19 16:13:46 Executing start method ("/lib/svc/method/time-slider start"). ]
[ Mar 19 16:14:05 Method "start" exited with status 0. ]
dsad...@trinity.dsicons.net:~#

I don't have a GUI, it's just a storage server. And it didn't complain
about permissions, but rather about recursive snapshots.
So I had to completely whack the recursion in
/usr/share/time-slider/lib/time_slider/zfs.py:


#no need for recursive snapshots!
#if recursive == True:
#    cmd.append("-r")

Then I changed the settings in the exported auto-snapshot manifest and
imported it back:

svccfg export auto-snapshot > auto-snapshot.smf
sudo svccfg import auto-snapshot.smf

Manually set false for whatever filesystems I didn't want snapshots on:
sudo zfs set com.sun:auto-snapshot:daily=false rpool/ROOT/oi_151a
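
A quick check that the overrides took (assuming the same property names; the
source column should read "local" for each override):

$ zfs get -r -s local com.sun:auto-snapshot:daily rpool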

It seems to be working now, more or less as I wanted it to.

--Roman



Re: [OpenIndiana-discuss] 3737 days of uptime

2013-03-21 Thread Roman Naumenko

Edward Ned Harvey (openindiana) said the following, on 20-03-13 7:32 AM:

From: dormitionsk...@hotmail.com [mailto:dormitionsk...@hotmail.com]
Sent: Tuesday, March 19, 2013 11:42 PM

A Sun Solaris machine was shut down last week in Hungary, I think, after 3737
days of uptime.  Below are links to the article and video.

Warning:  It might bring a tear to your eye...

It would only bring a tear to my eye, because of how foolishly irresponsible 
that is.  3737 days of uptime means 10 years of never applying security patches 
and bugfixes.  Whenever people are proud of a really long uptime, it's a sign 
of a bad sysadmin.

Not to mention the bill for electricity.

--Roman



Re: [OpenIndiana-discuss] 3737 days of uptime

2013-04-07 Thread Roman Naumenko

Andrew Gabriel said the following, on 07-04-13 10:34 AM:

Edward Ned Harvey (openindiana) wrote:

From: Ben Taylor [mailto:bentaylor.sol...@gmail.com]

Patching is a bit of arcane art.  Some environments don't have
test/acceptance/pre-prod with similar hardware and configurations, so
minimizing impact is understandable, which means patching only what is
necessary.
This thread has long since become pointless and fizzled, but just for 
the fun of it:


I recently started a new job, where updates had not been applied to 
any of the production servers in several years.  (By decree of former 
CIO).  We recently ran into an obstacle where some huge critical 
deliverable was not possible without applying the updates.  So we 
were forced, the whole IT team working overnight on the weekend, to 
apply several years' backlog of patches to all the critical servers 
worldwide.  Guess how many patch-related issues were discovered.  
(Hint:  none.)


Patching is extremely safe.  But let's look at the flip side. Suppose 
you encounter the rare situation where patching *does* cause a 
problem.  It's been known to happen; heck, it's been known to happen 
*by* *me*.  You have to ask yourself, which is the larger risk?  
Applying the patches, or not applying the patches?
First thing to point out:  Suppose you patch something and it goes 
wrong ...  Generally speaking you can back out of the patch.  Suppose 
you don't apply the patch, and you get a virus or hacked, or some 
data corruption.  Generally speaking, that is not reversible.


For the approx twice in my life that I've seen OS patches cause 
problems, and then had to reverse out the patches...  I've seen 
dozens of times that somebody inadvertently sets a virus loose on the 
internal network, or a server's memory or storage became corrupted 
due to misbehaving processes or subsystem, or some server has some 
kind of instability and needs periodic rebooting, or becomes 
incompatible with the current release of some critical software or 
hardware, until you apply the patches.
Patches are "bug fixes" and "security fixes" for known flaws in the 
software.  You can't say "if it ain't broke, don't fix it." It is 
broke, that's why they gave you the fix for it.  At best, you can 
say, "I've been ignoring it, and we haven't noticed any problems yet."
10 years ago, it was the case that something like half the support 
calls would have never arisen if the system was patched up to date. (I 
don't know the current figure for this.)


OTOH, I have worked in environments where everything is going to be 
locked down for 6-10 years. You get as current and stable as you can 
for the final testing, and then that's it - absolutely nothing is 
allowed to change. As someone else already hinted earlier in the 
thread, the security design of such infrastructure assumes from the 
outset that the systems are riddled with security holes, and they need 
to be made secure in some other (external) way
And the side effect would be dramatically reduced OPEX, due to the smaller
staff needed to support the environment.


--Roman



[OpenIndiana-discuss] Expanding storage with JBOD

2014-01-01 Thread Roman Naumenko

Hello,

Looking for a cheap way of expanding the current home storage server running
on OpenIndiana 151_a5.

A JBOD with SFF-8088 (12-bay units are $400) is the most cost-effective
option.

Now the question is which card with SFF-8088 to stick in the server. Will
this one work? http://www.adaptec.com/en-us/support/sas/sas/asc-1045
Is anything similar listed on the HCL here:
http://wiki.openindiana.org/pages/viewpage.action?pageId=4883876?


Regards,
--Roman


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-01 Thread Roman Naumenko

Saso Kiselkov said the following, on 01-01-14 4:30 PM:

On 1/1/14, 9:11 PM, Roman Naumenko wrote:

[original message snipped]

My best experience is with stuff from LSI, mainly the LSI SAS
2008-derived products such as the LSI 9200-8e, HP SC08e, Dell 6Gbps SAS
HBA, etc.
http://accessories.us.dell.com/sna/productdetail.aspx?c=us&l=en&s=dhs&cs=19&sku=342-0910
I'll probably use an LSI SAS3801E; it's listed as compatible on the HCL and
priced in the $80-150 range on eBay.


--Roman


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-01 Thread Roman Naumenko

cjt said the following, on 01-01-14 7:14 PM:

On 01/01/2014 04:23 PM, Roman Naumenko wrote:

[earlier messages snipped]

as long as you don't want to use drives bigger than 2TB ...

I don't know if even 2TB will fill fast enough to justify any
"investment" in storage expansion.

Speaking of storage expansion: even though HBA cards are dirt cheap, pricing
on enclosures with an integrated SAS expander is just nuts. I can't figure
out how to add 8-12 disks externally to the storage server without
paying 2x the cost of the server itself.



Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-02 Thread Roman Naumenko

Saso Kiselkov said the following, on 02-01-14 9:49 AM:

On 1/2/14, 2:28 AM, Roman Naumenko wrote:

[earlier messages snipped]

What's your budget? A Supermicro SC826-based enclosure can be had for
under a thousand bucks and you can set it up as a JBOD easily (get a
power distribution board for it - Supermicro sells those too). Comes
with a dual-path expander, dual PSUs and a nice rack-mountable with rack
rails - kind of the equivalent of something like an HP MSA 60 or Dell
MD1200, only much cheaper.

Mmmm, budget for an 8-disk JBOD...
Let's see:
Case for the 8-16 disks: $100
PSU: $60
Controller to get SFF-8088 in/out: $150
2 or 4 SFF-8087 cables x $10: $20-40
SFF-8088 cable: $15
2TB disks, 8 x $80: $640
Controller with SFF-8088 for the head: $70
-
Total: ~$1100

If you're looking to grow in the future, definitely have a look at
SC837E26-RJBOD1 and SC847E26-RJBOD1 - these are pre-assembled
36/45-drive JBODs and they cost a lot less per drive (either box can be
had for around $2k). Given that quality 3TB NL-SAS drives cost around
$250-300 a piece (and I recommend you buy good drives; don't cheap out
on SATA if you want performance, reliability and peaceful sleep at
night), the cost of the enclosure will, in the end, be a drop in a
bucket in your overall investment.

It's for home storage, cheap SATA is a must :)


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-02 Thread Roman Naumenko

David Scharbach said the following, on 02-01-14 1:53 PM:

Supermicro may be a bit overkill for the home server :)  Although I have
considered it for myself at home…  We used them where I used to work, and
they are pretty nice for the money.

For myself, I opted to go with the Norco 4220, a Tyan MB with an integral 
LSI-2008 based HBA, an LSI-based Intel SAS expander card, 16GB of ECC, a low-end 
Xeon and 13 cheap 3TB Seagate HDDs.  This setup allows for 20 hot-swap bays 
connected to the SAS card, 2 2.5” SSDs connected to the MB via SATA for 
cache/ZIL, and a slim optical drive, with a spare SAS channel that I could use to 
connect to an external JBOD enclosure if needed.  My drives have seen a maximum 
temp of 39C, and that is when doing a full scrub with the server in my utility 
room in winter (furnace is on, heats the room up).
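
(Hooking the SSDs up as cache/ZIL is a one-liner per device; a minimal
sketch, with hypothetical pool and device names:)

$ zpool add tank log c4t0d0      # SSD as dedicated ZIL (slog)
$ zpool add tank cache c4t1d0    # SSD as L2ARC read cache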

I agree that cheap HDDs suck; I have returned 3 of them in a little over a year, 
but ZFS and SMART are awesome at detecting trouble early.  Just had another one 
bark at me for uncorrectable sector errors.  Resilvering my RaidZ2 production 
array does take a long time, but I have a RaidZ1 backup array, so I would have 
to lose 5 of 13 drives at the same time (3 in the RaidZ2 plus 2 in the RaidZ1) 
to affect all my data.  Nice part is I can still add 7 more drives to the case 
if needed.
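
(The illumos-side checks for that kind of early warning are quick:)

$ zpool status -x    # "all pools are healthy", or details on the sick pool
$ iostat -En         # per-device soft/hard/transport error counters
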
There are a number of options if you want to assemble something like a 
storage server, but it's all limited if you want to connect a JBOD to an 
existing server.

By limited I mean the cost is going to be >= the original equipment.

Which card did you use, by the way?

Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-02 Thread Roman Naumenko

Saso Kiselkov said the following, on 02-01-14 6:26 PM:

On 1/2/14, 11:15 PM, Roman Naumenko wrote:

Mmmm, budget for an 8-disk JBOD...
Let's see:
Case for 8-16 disks: $100
PSU: $60
Controller to get SFF-8088 in/out: $150
2 or 4 SFF-8087 cables x $10: $20-40
SFF-8088 cable: $15
2TB disks, 8 x $80 = $640
Controller with SFF-8088 for the head: $70

Grab the IcyBox I mentioned before and stuff it full of 3TB drives. Then
grab the cheapest (but compatible) SATA controller board (or just a
bunch of SATA extension cables, if you've got enough spare ports on the
motherboard). There's no point pairing a quality SAS controller and
shielded wiring with bottom-of-the-barrel drives.

Nah, same exercise in a year.
Need something expandable without much headache.
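
(The "expandable without much headache" part is where ZFS helps: once the
JBOD is cabled up, the pool grows by adding a whole new vdev. A minimal
sketch, pool and device names hypothetical:)

$ zpool add tank raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0
$ zpool list tank    # capacity roughly doubles; writes stripe across vdevs

(Caveat: a top-level vdev can't be removed again later, so add carefully.)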

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-02 Thread Roman Naumenko

Edward Ned Harvey (openindiana) said the following, on 02-01-14 8:33 AM:

From: Roman Naumenko [mailto:ro...@naumenko.ca]

I don't know if even 2TB will fill fast enough to justify any "investment"
into storage expansion.

I don't get that comment.


I don't need a crazy amount of space or throughput.


Speaking of storage expansion: even though HBA cards are dirt cheap, pricing on
enclosures with an integrated SAS expander is just nuts. Can't figure out how to
add 8-12 disks externally to the storage server without paying 2x the cost
of the server itself.

Depends how much you paid for the server itself.  ;-)  But that's kind of 
irrelevant.  Often, the storage *is* more expensive than the rest of the 
server.  Depends on how much storage you're adding.  There's a cost for the 
HBA, for the drive bay, and then a cost for the disks, multiplied by the number 
of disks.  So the disks add up quickly.

I'm a fan of Sans Digital products for the price.  And I'm a fan of using the 
disks that they recommend.

Maybe something like this? Well, a NetApp shelf won't work without ONTAP on the
mothership server.
http://www.ebay.com/itm/NetApp-DS14-Shelf-14-Drive-Capacity-Storage-Array-Unit-/350956372695?pt=US_NAS_Disk_Arrays&hash=item51b6a142d7

HP then? Is it possible to find a used array like the one below, fill it with 
2TB disks and connect it over SFF-8088 to the controller on the server?

http://www.ebay.com/itm/HP-STORAGEWORKS-MSA70-MODULAR-SMART-ARRAY-418800-B21-25-x-HDD-BAYS-/121234564919?pt=US_NAS_Disk_Arrays&hash=item1c3a24a737

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-02 Thread Roman Naumenko

Roman Naumenko said the following, on 01-01-14 4:11 PM:

Hello,

Looking for a cheap way of expanding my current home storage server running 
on OpenIndiana 151_a5.

A JBOD (12-bay for $400) with SFF-8088 is the most cost-effective.

Now the question is what card with SFF-8088 to stick in the server. Will this 
one work? http://www.adaptec.com/en-us/support/sas/sas/asc-1045
Is there anything similar listed on the HCL here: 
http://wiki.openindiana.org/pages/viewpage.action?pageId=4883876?

Seems like this is the config that fits the budget well:
HP StorageWorks MSA50 (50-100 + delivery + trays): ~$150
SFF-8088 cable: $15
LSI SAS3801E: from $30, but let's not be too cheap and find something 
decent for $70

1TB 7200rpm HDDs: 10 x $65 = $650
---
Total: < $900

The only question is whether the MSA50 works with non-"enterprise" drives.
I saw a few reports where people tested the HP array with consumer disks and 
it was working fine.

Anybody here using HP disk arrays?
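
(Whatever shelf it ends up being, confirming OI actually sees the drives
behind it is cheap:)

$ format </dev/null    # lists every disk the OS can see, with capacity
$ cfgadm -al           # attachment points for controllers and disks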

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko

Saso Kiselkov said the following, on 03-01-14 5:47 AM:

On 1/3/14, 4:10 AM, Roman Naumenko wrote:

Seems like this is the config that fits the budget well:
HP StorageWorks MSA50 (50-100 + delivery + trays): ~$150
SFF-8088 cable: $15
LSI SAS3801E: from $30, but let's not be too cheap and find something
decent for $70
1TB 7200rpm HDDs: 10 x $65 = $650
---
Total: < $900

So you'd rather pay $650 instead of $400 for the exact same 10TB? (i.e.
10x1TB ($65) vs. 5x2TB ($80).) Why are you so heavily focused on the
number of spindles vs. capacity?

Well, theoretically, if 1TB works I don't see a reason why 2TB wouldn't.

Also, the IcyBox I recommended to you is a tiny thing you can run on
your desk or stick underneath a staircase out of sight. The MSA50, on
the other hand, is a huge rack-mounted hunk of loud fans. Not least of
all it's gonna cost you a lot more over time if you factor in
electricity costs (http://dft.ba/-7EzE).

Power is 200W, I can live with that.

I just don't get this obsession of yours with datacenter-grade hardware.
Expandable, and it works. Nothing with SFF-8088 comes even close in price 
(plus all those brand-new JBODs in the $1000-2000 range are utter junk).


--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 1/3/14, 12:13 PM, Roman Naumenko wrote:
> > Saso Kiselkov said the following, on 03-01-14 5:47 AM:
> >> So you'd rather pay $650 instead of $400 for the exact same 10TB?
> >> (i.e. 10x1TB ($65) vs. 5x2TB ($80).) Why are you so heavily
> >> focused on the number of spindles vs. capacity?
> > Well, theoretically, if 1TB works I don't see a reason why 2TB
> > wouldn't.
> Because they're much more expensive on a $/GB basis than equivalent
> 3.5'' drives (and much slower). Also, you can get 4TB 3.5'' drives
> today, whereas 2.5'' maxes out at half that.

Yeah, right. The MSA50 won't be as efficient as an enclosure with 3.5" disks. 
There are MSAs with 3.5" bays, but they are even less suitable for home and 
not so cheap.
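
(Spelling out that math for the same 10TB:
10 x 1TB 2.5" drives at $65 = $650, i.e. ~$0.065/GB;
 5 x 2TB 3.5" drives at $80 = $400, i.e. ~$0.040/GB.)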

> >> Also, the IcyBox I recommended to you is a tiny thing you can run
> >> on
> >> your desk or stick underneath a staircase out of sight. The MSA50,
> >> on
> >> the other hand, is a huge rack-mounted hunk of loud fans. Not
> >> least of
> >> all it's gonna cost you a lot more over time if you factor in
> >> electricity costs (http://dft.ba/-7EzE).
> > Power is 200W, I can live with that.
> 
> 200W running for 8760 hours (1 year) comes to around $200 (at
> $0.12/kWh). That IcyBox consumes less than 1/4 of that, so if you're
> examining your finances so carefully, factor in an extra $300 over 2
> years you'll spend on the MSA50.
> 
> >> I just don't get this obsession of yours with datacenter-grade
> >> hardware.
> > Expandable, and it works.
> And a small box with a power supply and pass-through connectors
> somehow
> doesn't work? You know you can get SFF-8088 to individual SATA/SAS
> fanout cables, right? And once you stuff the HP MSA50 full of 1TB
> hard
> drives, it's not expandable anymore. And if you're going to be
> replacing
> hard drives in the MSA50, you can do the same in the IcyBox.
> 
> > Nothing with SFF-8088 comes even close in price (plus all those
> > brand-new JBODs in the $1000-2000 range are utter junk).
> This is used outdated 3G SAS kit you're showing, so it's pretty clear
> it'd be much cheaper.
> 
> Overall I think you're trying to save money on entirely the wrong
> things. Get a few good high-capacity disks and a low-power enclosure
> and don't worry about buying a SAS HBA (or if you do, buy one which
> doesn't limit you to 2TB per drive).

Ok, let's run the numbers for the IcyBox:
http://www.raidsonic.de/data/datasheet/raidon/EN/datasheet_iS2880_e.pdf
I'll probably need this model, the iS2880-8S-U5D - $245 on Amazon.
And probably some kind of power supply is required, or is it supposed to work 
from the PSU of the server? $60.
What about cables and an adapter? Something like this probably, with 3-4 ports 
for further expansion. $100, easy to find cheaper I think.
http://www.ebay.com/itm/LSI-3ware-9650SE-8LPML-SATA-II-PCI-e-RAID-Controller-with-2x-Mini-SAS-Cables-/200989621284?pt=US_Server_Disk_Controllers_RAID_Cards&hash=item2ecbea3c24

Total: ~$405 in parts + 8 x 2TB x $80 = ~$1050 for 16TB of raw space. 
Are the numbers correct?

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 1/3/14, 3:14 PM, Roman Naumenko wrote:
> > Ok, let's run the numbers for the IcyBox:
> > http://www.raidsonic.de/data/datasheet/raidon/EN/datasheet_iS2880_e.pdf
> > I'll probably need this model, the iS2880-8S-U5D - $245 on Amazon.
> 
> What do 8x 2.5'' spindles give you that 5x 3.5'' don't? Do you need
> the extra performance?

That was the only model with 8 drives on that website. 
I don't think there are decent enclosures with >=8 disks and <$400.

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 2014/01/03 16:02 +0100, Roman Naumenko wrote:
> > Power is 200W, I can live with that.
> 
> I'll be pedantic on this point, as I've researched it for my own little
> home NAS and checked with a power meter :-)
> 200W is the *max* power rating. The enclosure itself, with its couple of
> LEDs and fans, will use next to nothing. The actual use depends on the
> disks, and for a modern 3.5" disk is about 10W (specs for my Seagates
> say around 13, IIRC). My 8-disk enclosure uses about 70-80W, depending
> on the level of use.
> So buying a noisy HP or a silent IcyBox will just consume the same
> amount of electricity if they're fitted with the same disks inside.
> 
> My 0,02€,

Ok, thanks for this clarification. 
It's obvious that the vendor quotes max power consumption.
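
(Running those figures at the $0.12/kWh used earlier in the thread: 8 disks
x ~10W plus fan/PSU overhead is roughly 80W of actual draw, and 0.08 kW x
8760 h x $0.12/kWh comes to about $84/year, versus about $210/year for the
200W nameplate rating.)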

Which box did you buy for your storage, an MSAxx? (Or is it just a server with 
the disks?)

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 1/3/14, 12:13 PM, Roman Naumenko wrote:
> > Expandable, and it works. Nothing with SFF-8088 comes even close in
> > price (plus all those brand-new JBODs in the $1000-2000 range are
> > utter junk).
> 
> Btw: just had another look at the MSA50; it comes with the older
> pre-Mini-SAS connectors, so if you do decide to go that route, be sure
> to buy some sort of SFF-8088 to SFF-8470 conversion cable:
> http://www.pc-pitstop.com/sas_cables_adapters/MS-1MIB.asp

Thanks, the connector is indeed different.

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 1/3/14, 5:43 PM, Roman Naumenko wrote:
> >> What do 8x 2.5'' spindles give you that 5x 3.5'' don't? Do you
> >> need
> >> the extra performance?
> > 
> > That was the only model with 8 drives on that website.
> > I don't think there are decent enclosures with >=8 disks and
> > <$400.
> 
> You do realize there's a difference between 2.5'' and 3.5'' drives,
> right?

I'm all for 3.5" drives, with both hands.
But where to put them?! Nothing decent is available anywhere close that would 
fit the $1000 target.

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> What's the problem with this thing:
> http://www.overclockers.co.uk/showproduct.php?prodid=HD-036-BT

It's fine, but the MSA60 is a deal too.

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Expanding storage with JBOD

2014-01-03 Thread Roman Naumenko
- Original Message -
> On 1/3/14, 6:04 PM, Roman Naumenko wrote:
> >> What's the problem with this thing:
> >> http://www.overclockers.co.uk/showproduct.php?prodid=HD-036-BT
> > 
> > It's fine, but the MSA60 is a deal too.
> 
> If you can get it cheaply enough, don't mind the form factor, noise,
> power consumption and maintenance difficulties, go ahead.

Thanks for the input, I appreciate it.
I'm glad that alternatives emerged. I'll research further what the real prices 
are with delivery and everything.

--Roman

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss