Hi,
I have SunOS 5.11 oi_148 installed on my storage server with 8 disks in a
raidz2 pool.
It hangs about once a week and I have to restart it.
Can you help me troubleshoot it?
It has some zfs volumes shared over nfs and afpd. (afpd is unfortunately
a development version, to satisfy OS X Lion.)
On Aug 31, 2011, at 6:43 PM, Daniel Kjar wrote:
Careful... are you overtaxing your power supply? My 148 system was behaving
like that when I put too many drives in an ultra 20.
[...] me think this, but the eventual failure of the disks alerted me
that something hardwarish was happening.
On 08/31/11 11:01 PM, Roman Naumenko wrote:
> Well, it might be the reason. 8 drives is certainly too much for a
> stock psu. But there should be som
What about hw event logs? If you have power fluctuations, it might show
up there.
You can probably pull those out from your service processor, or boot
to the BIOS and read them there.
Sent from Jason's hand held
On Sep 1, 2011, at 8:37 AM, Roman Naumenko wrote:
Costly troubleshooting you had.
All right then, I will wait for the next failure to look through it
once again.
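(For what it's worth, a minimal sketch of pulling those event logs from the
running OS instead of the BIOS, assuming the board has a BMC that ipmitool can
reach; fmdump/fmadm are the stock FMA tools on OpenIndiana:)
$ ipmitool sel elist     # hardware event log kept by the service processor
$ fmdump -e              # error reports the kernel has logged to FMA
$ fmadm faulty           # anything FMA has already diagnosed as faulted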
It was a fresh install from the openindiana distro.
--Roman
- Original Message -
> On Wed, 2011-08-31 at 19:48 -0400, Roman Naumenko wrote:
> > Hi,
> >
> > I have SunOS 5.11 oi_148 installed on my storage server with 8
> > disks in
> > raidz2 pool.
Hi,
It's oi_148.
I tried a few different versions of VirtualBox (the latest and VirtualBox-4.1.0) and
they all end in a hung system.
The installation process gets to this point:
Loading Virtualbox kernel modules...
kthread_t::t_preempt at 142
cput_t::cpu_runrun at 216
(something) pkrunrun at 2
Any other options?
oi_148 is working more or less ok, and I'm not much inclined to upgrade it
right now just to experiment with vbox.
--Roman
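(If it hangs again and the console still responds, one way to see where the
module load is stuck is to force a crash dump and read it back. A rough
sketch, assuming a dump device and savecore are enabled via dumpadm:)
$ reboot -d                    # force a crash dump, then reboot
$ cd /var/crash/`hostname`     # savecore writes the dump here after boot
$ savecore -f vmdump.0         # only needed if the dump is still compressed
$ mdb unix.0 vmcore.0
> ::status
> ::stacks -m vboxdrv          # kernel threads with vboxdrv frames on their stacks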
ken mays said the following, on 17-09-11 1:12 PM:
Use oi_151a and let us know.
[...] there was ever any intention of supporting it. The whole
point of a development release is to get the bugs out of it so that a
stable release can come out. And the best way for that to work is for
everyone to work as a team and to be at the latest development release.
Richard Elling said the following, on 25-07-12 1:14 PM:
On Jul 24, 2012, at 9:11 AM, Jason Matthews wrote:
are you missing a zero to the left of the decimal place?
Been there, done that, wrote a whitepaper. Add 2 zeros.
-- richard
Or add three and buy a pair of FAS3200s
:)
--Roman N
After another big cleanup on the home filer, during which I got a terrible
headache because of zfs-auto-snapshot, I decided to ask if anybody has
tried to simplify snapshot management.
If a user could list all filesystems with zfs-auto weekly snapshots enabled,
or count them up, or be able to enable other periodic [...]
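(Listing and counting them is already doable with the user property that
zfs-auto-snapshot/time-slider key off; a sketch, and the dataset name in the
last line is just an example:)
$ zfs get -H -t filesystem -s local,inherited -o name,value com.sun:auto-snapshot:weekly
$ zfs get -H -t filesystem -s local,inherited -o value com.sun:auto-snapshot:weekly | grep -c true
$ zfs set com.sun:auto-snapshot:weekly=true tank/somefs    # enable weekly snaps on one fs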
Jan Owoc said the following, on 11-02-13 8:35 PM:
On Mon, Feb 11, 2013 at 5:39 PM, Roman Naumenko wrote:
After another big cleanup on the home filer, during which I got a terrible
headache because of zfs-auto-snapshot, I decided to ask if anybody has tried
to simplify snapshot management.
I think
Hi,
I have a weird issue with zfs-auto-snapshot on oi_151a5: it continues to
make snapshots even though I asked it not to.
By the way, is it possible to update this package to a newer version
without upgrading the whole distro?
@data:~$ zfs get all storpool/mailserver_data/zca8vm | grep snap
Jan Owoc said the following, on 14-02-13 10:19 PM:
On Thu, Feb 14, 2013 at 8:07 PM, Roman Naumenko wrote:
I have a weird issue with zfs-auto-snapshot on oi_151a5: it continues to
make snapshots even though I asked it not to.
[...]
Initially it inherited the snapshot options from the dataset above it.
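(A sketch of checking where that property comes from and overriding it
locally; the dataset name is the one from the paste above, and the service
names are the SMF instances zfs-auto-snapshot normally installs:)
$ zfs get -o name,property,value,source com.sun:auto-snapshot,com.sun:auto-snapshot:frequent storpool/mailserver_data/zca8vm
$ zfs set com.sun:auto-snapshot=false storpool/mailserver_data/zca8vm    # a local "false" overrides the inherited "true"
$ svcs -a | grep auto-snapshot    # the periodic SMF instances that actually take the snapshots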
Hi,
Just wanted to ask if this is the latest version of time-slider?
data:~$ pkginfo -l SUNWgnome-time-slider
PKGINST: SUNWgnome-time-slider
NAME: Time Slider ZFS snapshot management for GNOME
CATEGORY: GNOME2,application,JDSoi
ARCH: i386
VERSION: 0.2.97,REV=110.0.4.2011
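(One way to check what the package repository has without touching the rest
of the image; a sketch, and note that the release incorporation can hold
individual packages back:)
$ pkg search -r time-slider    # find the real IPS package name (it may differ from the legacy SUNW name above)
$ pkg list -af <ips-name>      # every version the configured publishers know about
$ pkg update <ips-name>        # try updating just that package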
Edward Ned Harvey (openindiana) said the following, on 20-03-13 7:32 AM:
From: dormitionsk...@hotmail.com [mailto:dormitionsk...@hotmail.com]
Sent: Tuesday, March 19, 2013 11:42 PM
A Sun Solaris machine was shut down last week in Hungary, I think, after 3737
days of uptime. Below are links to t
Andrew Gabriel said the following, on 07-04-13 10:34 AM:
Edward Ned Harvey (openindiana) wrote:
From: Ben Taylor [mailto:bentaylor.sol...@gmail.com]
Patching is a bit of an arcane art. Some environments don't have
test/acceptance/pre-prod with similar hardware and configurations, so
minimizing im
Hello,
Looking for a cheap way of expanding the current home storage server running
on openindiana 151_a5.
A jbod (12-bay units are $400) with SFF-8088 is the most cost-effective option.
Now the question is what card with SFF-8088 to stick in the server. Will this
one work? http://www.adaptec.com/en-us/support/sas/sas/asc-1
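(Whichever card goes in, a quick sanity check that the OS sees it and the
shelf behind it; a sketch assuming an LSI-based SAS HBA driven by mpt_sas:)
$ prtconf -D | grep -i mpt    # is an mpt/mpt_sas instance attached?
$ cfgadm -al                  # SAS attachment points and whether the disks are configured
$ format </dev/null           # list the disks the kernel can see, then exit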
David Scharbach said the following, on 02-01-14 1:53 PM:
Supermicro may be a bit overkill for the home server :) Although I have
considered it for myself at home… We used them where I used to work and they
are pretty nice for the money.
For myself, I opted to go with the Norco 4220, a Tyan
Edward Ned Harvey (openindiana) said the following, on 02-01-14 8:33 AM:
From: Roman Naumenko [mailto:ro...@naumenko.ca]
I don't know if even 2TB will fill fast enough to justify any "investment"
into storage expansion.
I don't get that comment.
I don't need
- Original Message -
> On 1/3/14, 12:13 PM, Roman Naumenko wrote:
> > Saso Kiselkov said the following, on 03-01-14 5:47 AM:
> >> So you'd rather pay $650 instead of $400 for the exact same 10TB?
> >> (i.e. 10x1TB ($65) vs. 5x2TB ($80)) Why are you
- Original Message -
> On 1/3/14, 3:14 PM, Roman Naumenko wrote:
> > - Original Message -
> >> Overall I think you're trying to save money on entirely the wrong
> >> things. Get a few good high-capacity disks and a low-power
> >> enclosur
- Original Message -
> On 2014/01/03 16:02 +0100, Roman Naumenko wrote:
> > Power is 200W, I can live with that.
>
> I'll be pedantic on this point, as I've researched it for my own little
> home NAS and checked with a power meter :-)
> home NAS and checked with a power meter :-)
> 200W is the
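(For rough context, an illustrative back-of-the-envelope number, not from the
thread, assuming a $0.12/kWh rate:
200 W x 24 h x 365 d = 1752 kWh/year
1752 kWh x $0.12/kWh ≈ $210/year)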