On Mar 8, 2010, at 12:05 AM, Dedhi Sujatmiko wrote:
> 2. OpenSolaris (and EON) does not have a proper implementation of SMART
> monitoring. Therefore I cannot monitor the temperature of my hard disks.
> Since they are DIY storage without chassis environment monitoring, I consider
> this an im
On Monday 08,March,2010 10:09 AM, Slack-Moehrle wrote:
OpenSolaris or FreeBSD with ZFS?
I also have some NAS storage at home, which consists of:
a. OpenSolaris booted from hard disk with ZFS, mostly doing NFS and
iSCSI Target for VMWare ESX, Intel Core Duo proc+ICH7 controller
b. EON
Hi Erik,
>>Be sure to read the 3Ware info on their controllers under OpenSolaris:
>>http://www.3ware.com/kb/article.aspx?id=15643
>>That said, 3ware controllers are hardly the best option for a
>>OpenSolaris server. You DON'T want to make use of any of the hardware
>>raid features of them, and
I think ZFS should look for more opportunities to write to disk rather than
leaving it to the last second (5 seconds) as it appears it does. e.g.
if a file has a record size's worth of data outstanding, it should be queued within
ZFS to be written out. If the record is updated again before a txg, the
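The 5-second figure is the txg timer, which is exposed as a kernel tunable on OpenSolaris; anyone wanting to experiment with more frequent flushes can adjust it. A sketch only - the tunable's default and sensible values vary by build:

  # inspect the current value (decimal)
  echo zfs_txg_timeout/D | mdb -k
  # change it at runtime, e.g. to 1 second
  echo zfs_txg_timeout/W 0t1 | mdb -kw
  # or persist it across reboots by adding this line to /etc/system:
  #   set zfs:zfs_txg_timeout = 1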
Be sure to read the 3Ware info on their controllers under OpenSolaris:
http://www.3ware.com/kb/article.aspx?id=15643
That said, 3ware controllers are hardly the best option for a
OpenSolaris server. You DON'T want to make use of any of the hardware
raid features of them, and you may not even
Slack-Moehrle wrote:
Do you have any thoughts on implementation? I think I would just like to put my
home directory on the ZFS pool and just SCP files up as needed. I don't think I
need to mount drives on my Mac, etc. SCP seems to suit me.
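If that is the plan, a minimal sketch would be a dedicated filesystem per user with plain scp on top of it (the pool, user, and host names below are placeholders):

  zfs create tank/home
  zfs create -o compression=on tank/home/mark
  # from the Mac, copy files up over SSH
  scp -r ~/Documents mark@nas:/tank/home/mark/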
One important point to note is you can only boot off a
Hi David,
>Did you pick the chassis and disk size based on planned storage
>requirements, or because it's what you could get to build a big honking
>fileserver box? Just curious.
I have a 4TB Buffalo TeraStation that cannot be expanded further and I am using
2.7TB. Also, I have need to mak
On 3/7/2010 2:08 PM, Richard Elling wrote:
On Mar 5, 2010, at 7:32 AM, David Dyer-Bennet wrote:
sending from @bup-4hr-20100228-04CST to
zp1/l...@bup-4hr-20100228-08cst
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of zp1/l...@bup-4hr-20100224-12cst int
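For context, output like that comes from piping an incremental zfs send into zfs receive -v; a rough sketch of the pattern (the dataset names below are placeholders, only the snapshot names are taken from the log):

  zfs send -v -i @bup-4hr-20100228-04CST sourcepool/somefs@bup-4hr-20100228-08CST | \
      zfs receive -vF zp1/somefs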
David Dyer-Bennet wrote:
For a system where you care about capacity and safety, but not that
much about IO throughput (that's my interpretation of what you said
you would use it for), with 16 bays, I believe the expert opinion will
tell you that two RAIDZ2 groups of 8 disks each is one of th
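As a concrete illustration of that layout, creating a pool from two 8-disk RAIDZ2 vdevs looks roughly like this (device names are placeholders for whatever format(1M) reports on the box):

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0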
On 3/7/2010 8:09 PM, Slack-Moehrle wrote:
I built a new storage server to back up my data, keep archives of client files,
etc. I recently had a near loss of important items.
So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2
x 3Ware 8 port RAID cards, 8GB RAM, dual
On Sun, Mar 7, 2010 at 6:09 PM, Slack-Moehrle
wrote:
> OpenSolaris or FreeBSD with ZFS?
zfs for sure. it's nice having something bitrot-resistant.
it was designed with data integrity in mind.
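That resistance comes from end-to-end checksums plus redundancy; a quick way to exercise it and see any repaired corruption is a periodic scrub (the pool name is a placeholder):

  zpool scrub tank
  zpool status -v tank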
Hello All,
I built a new storage server to back up my data, keep archives of client files,
etc. I recently had a near loss of important items.
So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2
x 3Ware 8 port RAID cards, 8GB RAM, dual AMD Opteron.
I have a 1TB boot
I want to move my pool (consisting of five 1.5TB SATA drives in raidz1) to a
different computer. I am encountering issues with controllers - the
motherboard (Asus P5BV-C/4L) has 8 SATA ports: 4 on a Marvell 88SE6145,
which seems not to be supported at all; and 4 on an Intel 82801G, which uses
the pci-
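For the move itself, the usual pattern is a clean export on the old machine and an import on the new one; a sketch (the pool name is a placeholder):

  # on the old machine
  zpool export tank
  # on the new machine: scan for importable pools, then import
  zpool import
  zpool import tank      # add -f only if the pool was not cleanly exported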
On 8/03/10 01:42 AM, Tim Cook wrote:
On Sun, Mar 7, 2010 at 3:12 AM, James C. McPherson <j...@opensolaris.org> wrote:
On 7/03/10 12:28 PM, norm.tallant wrote:
I'm about to try it! My LSI SAS 9211-8i should arrive Monday or
Tuesday. I bought the cable-less versio
On Sun, Mar 7, 2010 at 3:12 PM, Ethan wrote:
> On Sun, Mar 7, 2010 at 15:30, Tim Cook wrote:
>> On Sun, Mar 7, 2010 at 2:10 PM, Ethan wrote:
>>> On Sun, Mar 7, 2010 at 14:55, Tim Cook wrote:
On Sun, Mar 7, 2010 at 1:05 PM, Dennis Clarke wrote:
> > O
On Mar 7, 2010, at 10:30 AM, Ethan wrote:
> I have a failing drive, and no way to correlate the device with errors in the
> zpool status with an actual physical drive.
> If I could get the device's serial number, I could use that as it's printed
> on the drive.
> I come from linux, so I tried
On Sun, Mar 7, 2010 at 15:30, Tim Cook wrote:
> On Sun, Mar 7, 2010 at 2:10 PM, Ethan wrote:
>> On Sun, Mar 7, 2010 at 14:55, Tim Cook wrote:
>>> On Sun, Mar 7, 2010 at 1:05 PM, Dennis Clarke wrote:
> On Sun, Mar 7, 2010 at 12:30 PM, Ethan wrote:
>> I
On Mar 7, 2010, at 11:43 AM, Lutz Schumann wrote:
> Hello list,
>
> when consolidating storage services, it may be required to prioritize I/O.
>
> e.g. the important SAP database gets all the I/O we can deliver and that it
> needs. The test systems should use what's left.
>
> While this is a
On Mar 6, 2010, at 9:31 PM, Abdullah Al-Dahlawi wrote:
> Hi ALL
>
> I might be a little bit confused!!!
>
> I will try to ask my question in a simple way ...
>
> Why would a 16GB L2ARC device get filled by running a benchmark that uses a
> 2GB working set while having a 2GB ARC max?
ZFS is
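For anyone wanting to watch this directly, the ARC and L2ARC sizes are exposed as kstats on OpenSolaris; a quick sketch (statistic names can differ slightly between builds):

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:l2_size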
On Mar 5, 2010, at 7:32 AM, David Dyer-Bennet wrote:
> My full backup script errored out the last two times I ran it. I've got
> a full Bash trace of it, so I know exactly what was done.
>
> There are a moderate number of snapshots on the zp1 pool, and I'm
> intending to replicate the whole thin
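Replicating a whole pool with its snapshots is normally done with a recursive stream; a rough sketch of the full and incremental cases (snapshot and backup pool names are placeholders):

  # full replication of every dataset and snapshot in zp1
  zfs snapshot -r zp1@bup-full
  zfs send -R zp1@bup-full | zfs receive -Fd backuppool
  # later, send only the changes since the previous snapshot
  zfs send -R -I @bup-full zp1@bup-next | zfs receive -Fd backuppool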
On Sun, Mar 7, 2010 at 1:05 PM, Dennis Clarke wrote:
> > On Sun, Mar 7, 2010 at 12:30 PM, Ethan wrote:
> >> I have a failing drive, and no way to correlate the device with errors in
> >> the zpool status with an actual physical drive.
> >> If I could get the device's serial number, I
Hello list,
when consolidating storage services, it may be required to prioritize I/O.
e.g. the important SAP database gets all the I/O we can deliver and that it
needs. The test systems should use what's left.
While this is a difficult topic in disk-based systems (even little I/O with
long h
Hello, to automate all of this, the best thing to do is to create a Sun Cluster
HA Storage resource.
Have a look:
http://docs.sun.com/app/docs/doc/819-2974/gbspx?a=view
--
http://unixinmind.blogspot.com
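A rough sketch of what that looks like with the Sun Cluster 3.2 CLI, assuming an HAStoragePlus resource managing a ZFS pool (group, resource, and pool names are placeholders; check the linked docs for the exact syntax on your release):

  clresourcetype register SUNW.HAStoragePlus
  clresourcegroup create storage-rg
  clresource create -g storage-rg -t SUNW.HAStoragePlus -p Zpools=tank hasp-rs
  clresourcegroup online -M storage-rg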
> On Sun, Mar 7, 2010 at 12:30 PM, Ethan wrote:
>> I have a failing drive, and no way to correlate the device with errors in
>> the zpool status with an actual physical drive.
>> If I could get the device's serial number, I could use that as it's printed
>> on the drive.
>> I come from li
On Sun, Mar 7, 2010 at 12:30 PM, Ethan wrote:
> I have a failing drive, and no way to correlate the device with errors in
> the zpool status with an actual physical drive.
> If I could get the device's serial number, I could use that as it's printed
> on the drive.
> I come from linux, so I tried
I have a failing drive, and no way to correlate the device with errors in
the zpool status with an actual physical drive.
If I could get the device's serial number, I could use that as it's printed
on the drive.
I come from linux, so I tried dmesg, as that's what's familiar (I see that
the man page
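On (Open)Solaris the usual dmesg substitutes for this are iostat and cfgadm; iostat -E with -n prints per-device vendor, product, and serial number fields (where the driver reports them), which you can match against the device names in zpool status. A sketch:

  iostat -En
  # or narrow it down to one device, e.g.
  iostat -En c7t2d0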
On 3/7/2010 11:23 AM, Tomas Ögren wrote:
On 07 March, 2010 - David Dyer-Bennet sent me these 1,1K bytes:
There isn't some syntax I'm missing to use wildcards in zfs list to list
snapshots, is there? I find nothing in the man page, and nothing I've
tried works (yes, I do understand that nor
On 07 March, 2010 - David Dyer-Bennet sent me these 1,1K bytes:
> There isn't some syntax I'm missing to use wildcards in zfs list to list
> snapshots, is there? I find nothing in the man page, and nothing I've
> tried works (yes, I do understand that normally wildcards are expanded
> by th
There isn't some syntax I'm missing to use wildcards in zfs list to list
snapshots, is there? I find nothing in the man page, and nothing I've
tried works (yes, I do understand that normally wildcards are expanded
by the shell, and I don't expect bash to have zfs-specific stuff like
that in it
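For what it's worth, zfs list itself has no wildcard matching, but a few forms get close to it; a sketch (the dataset name is a placeholder):

  # all snapshots on the system
  zfs list -t snapshot
  # snapshots under one dataset only
  zfs list -r -t snapshot -o name,used,creation zp1/somedataset
  # then filter with the shell
  zfs list -H -o name -t snapshot | grep 'bup-4hr-20100228'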
On Sun, Mar 7, 2010 at 3:12 AM, James C. McPherson wrote:
> On 7/03/10 12:28 PM, norm.tallant wrote:
>
>> I'm about to try it! My LSI SAS 9211-8i should arrive Monday or
>> Tuesday. I bought the cable-less version, opting instead to save a few
>> $ and buy Adaptec 2247000-R SAS to SATA cables.
It turns out that the problem that was being hit by Kristin was bug
10990, which is caused by using zoneadm clone to clone a zone. This
causes a snapshot name collision that we were not catching due to
bug 11062.
To work around this issue there are two possibilities:
1) delete zones that have been
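If you need to find the colliding snapshots by hand, listing the snapshots under the zone datasets usually shows the SUNWzoneN names that zoneadm clone creates; a sketch (dataset paths are placeholders):

  zfs list -t snapshot -r rpool/zones
  # destroy the duplicate snapshot once you are sure no clone depends on it
  zfs destroy rpool/zones/myzone@SUNWzone1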
Hi, have a look at
http://defect.opensolaris.org/bz/show_bug.cgi?id=11062#c4
I think it's related to your problem.
--
http://unixinmind.blogspot.com
On 7/03/10 12:28 PM, norm.tallant wrote:
I'm about to try it! My LSI SAS 9211-8i should arrive Monday or
Tuesday. I bought the cable-less version, opting instead to save a few
$ and buy Adaptec 2247000-R SAS to SATA cables.
My rig will be based off of fairly new kit, so it should be interesti