On Sun, Oct 9, 2011 at 12:28 PM, Jim Klimov wrote:
> So, one version of the solution would be to have a single host
> which imports the pool in read-write mode (i.e. the first one
> which boots), and other hosts would write thru it (like iSCSI
> or whatever; maybe using SAS or FC to connect betwee
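The single-writer arrangement Jim describes can be sketched with stock Solaris tools: one host imports the pool read-write and exports a zvol over iSCSI through COMSTAR. A minimal sketch, assuming a hypothetical pool named tank (the LU GUID shown is a placeholder for whatever create-lu prints):

```shell
# On the host that imports the pool read-write (pool name is hypothetical):
zfs create -V 100g tank/shared0            # zvol backing the shared LUN
svcadm enable stmf                         # start the COMSTAR framework
stmfadm create-lu /dev/zvol/rdsk/tank/shared0
stmfadm add-view 600144f0...               # use the GUID printed by create-lu
itadm create-target                        # other hosts log in as initiators
```

The other hosts then see the zvol as an ordinary iSCSI disk; all writes still funnel through the single importing host.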
On Tue, Oct 11, 2011 at 11:15 PM, Richard Elling
wrote:
> On Oct 9, 2011, at 10:28 AM, Jim Klimov wrote:
>> ZFS developers have for a long time stated that ZFS is not intended,
>> at least not in near term, for clustered environments (that is, having
>> a pool safely imported by several nodes simu
On Oct 9, 2011, at 10:28 AM, Jim Klimov wrote:
> Hello all,
>
> ZFS developers have for a long time stated that ZFS is not intended,
> at least not in near term, for clustered environments (that is, having
> a pool safely imported by several nodes simultaneously). However,
> many people on forums
On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote:
> 2011/10/11 Richard Elling :
>>> ZFS Tunables (/etc/system):
>>> set zfs:zfs_arc_min = 0x20
>>> set zfs:zfs_arc_meta_limit=0x1
>>
>> It is not uncommon to tune arc meta limit. But I've not seen a case
>> where tuning
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is n
2011/10/11 Richard Elling :
>> ZFS Tunables (/etc/system):
>> set zfs:zfs_arc_min = 0x20
>> set zfs:zfs_arc_meta_limit=0x1
>
> It is not uncommon to tune arc meta limit. But I've not seen a case
> where tuning arc min is justified, especially for a storage server. Can
>
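Before touching /etc/system it can help to read the live values first; a sketch using kstat (arcstats statistic names can vary slightly between releases):

```shell
# Current ARC floor and metadata limit/usage, from the arcstats kstat:
kstat -p zfs:0:arcstats:c_min
kstat -p zfs:0:arcstats:arc_meta_limit
kstat -p zfs:0:arcstats:arc_meta_used
```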
On Tue, Oct 11, 2011 at 6:25 AM, wrote:
> I'm not familiar with ZFS stuff, so I'll try to give you as much info as I
> can about our environment
> We are using a ZFS pool as a VLS for a backup server (Sun V445 Solaris 10),
> and we are faced with very low read performance (whilst write perf
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of deg...@free.fr
>
> I'm not familiar with ZFS stuff, so I'll try to give you as much info as I
> can about our environment
> We are using a ZFS pool as a VLS for a backup server (Sun V445 Sol
On Oct 6, 2011, at 5:19 AM, Frank Van Damme wrote:
> Hello,
>
> quick and stupid question: I'm breaking my head over how to tune
> zfs_arc_min on a running system. There must be some magic word to pipe
> into mdb -kw but I forgot it. I tried /etc/system but it's still at the
> old value after rebooting
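For reference, a sketch of the usual mdb recipe, hedged: zfs_arc_min is only consulted when the ARC initializes, so on a running system the live arc struct normally has to be poked instead (symbol and member names vary by release; verify with ::print before writing anything):

```shell
# Write the tunable itself (takes effect at next ARC init / reboot):
echo 'zfs_arc_min/Z 0x20000000' | mdb -kw

# To change the live ARC floor, find the struct member's address first:
echo 'arc::print -a c_min' | mdb -k        # note the address it prints
# then write to that address, e.g.:
# echo '<address>/Z 0x20000000' | mdb -kw
```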
Hi,
I'm not familiar with ZFS stuff, so I'll try to give you as much info as I can
about our environment
We are using a ZFS pool as a VLS for a backup server (Sun V445 Solaris 10), and
we are faced with very low read performance (whilst write performance is much
better, i.e : up to 40GB/h t
Subject: FYI on Storage Event
Just an FYI
on storage. I just learned that an
OpenStorage Summit is happening, in San Jose, during the last week of October.
Some great speakers are presenting and some really interesting topics will be
addressed, including Korea Telecom on Public Cloud Storage, Inte
On Oct 11, 2011, at 2:25 AM, KES wrote:
> Hi
>
> I have the following configuration: 3 disks of 1Gb each in raid0
> all disks in zfs pool
we recommend protecting the data. Friends don't let friends use raid-0.
nit: We tend to refer to disk size in bytes (B), not bits (b)
> free space on the raid is 1.5Gb and
On 09/26/11 20:03, Jesus Cea wrote:
# zpool upgrade -v
[...]
24 System attributes
[...]
This is really an on disk format issue rather than something that the
end user or admin can use directly.
These are special on disk blocks for storing file system metadata
attributes when there isn't en
Have you looked at the time-slider functionality that is already in
Solaris ?
There is a GUI for configuration of the snapshots and time-slider can be
configured to do a 'zfs send' or 'rsync'. The GUI doesn't have the
ability to set the 'zfs recv' command but that is set one-time in the
SMF
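The send/recv side that time-slider drives amounts to something like the following, assuming hypothetical dataset and host names:

```shell
# Incremental replication of a snapshot pair to a backup host:
zfs snapshot tank/home@today
zfs send -i tank/home@yesterday tank/home@today | \
    ssh backuphost zfs recv -F backup/home
```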
On Tue, Oct 11, 2011 at 9:25 AM, KES wrote:
> Hi
>
> I have the following configuration: 3 disks of 1Gb each in raid0
> all disks in zfs pool
>
> free space on the raid is 1.5Gb and 1.5Gb is used.
>
> so I have some questions:
> 1. If I don't plan to use 3 disks in the pool any more, how can I remove one of them?
> 2. Im
Hi
I have the following configuration: 3 disks of 1Gb each in raid0
all disks in zfs pool
free space on the raid is 1.5Gb and 1.5Gb is used.
so I have some questions:
1. If I don't plan to use 3 disks in the pool any more, how can I remove one of them?
2. Imagine one disk has failures. I want to replace it, but now I do
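For what it's worth, the usual answers, sketched with hypothetical pool and device names: a top-level vdev cannot be removed from a striped pool, while a failing disk is swapped in place with zpool replace:

```shell
# Q1: a striped (raid-0) top-level vdev cannot be removed; migrating to a
# smaller pool means zfs send/recv or backup and restore.

# Q2: replace a failing disk in place (device names are hypothetical):
zpool status tank                          # identify the failing device
zpool replace tank c1t2d0 c1t3d0           # resilver onto the new disk
zpool status tank                          # watch resilver progress
```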