Interesting project..I shall try it out
Be careful..NTAPP might sue you ;)
selim
--
Blog: http://fakoli.blogspot.com/
On 11/1/07, Joe Little <[EMAIL PROTECTED]> wrote:
> I consider myself an early adopter of ZFS and pushed it hard on this
Hi all,
Just in case anyone uses these, I've got new versions of the ZFS
Automatic Snapshot SMF Service and the ZFS Automatic Backup SMF Service
on my blog now.
http://blogs.sun.com/timf/entry/zfs_automatic_for_the_people
All comments would be most welcome!
cheers,
On Thu, 2007-11-01 at 18:15 -0700, Denis wrote:
> But after the reboot, where the resilvering restarted by itself
> without a problem I noticed that it started from the beginning!?
That's expected behavior today: it remembers it has work to do, but not
where it left off.
> Why is that the case w
On Thu, 2007-11-01 at 08:08 -0700, Scott Spyrison wrote:
> Given 4 internal drives in a server, what kind of ZFS layout would you use?
Assuming you needed more than one disk's worth of ZFS space after
mirroring:
disks 0+1: partition them with a "space hog" partition at the start of
the disk fol
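One possible reading of that layout, sketched as commands. This is an assumption-laden example, not the poster's exact setup: device names (c0t0d0 through c0t3d0) and the slice number (s7) are placeholders you would adjust for the actual system.

```shell
# Disks 0+1: small OS slices (mirrored with SVM, since SPARC here
# cannot boot off ZFS) plus a large "space hog" slice (s7 here) for
# ZFS. Disks 2+3 are given to ZFS whole.
zpool create tank \
    mirror c0t0d0s7 c0t1d0s7 \
    mirror c0t2d0 c0t3d0
```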
I've had this happen once or twice now, running n74. I'll run 'zpool
scrub' on my root pool and *immediately* get an error reported:
# zpool status -v tank
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be a
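For reference, the basic scrub workflow looks like the following. The pool name `tank` matches the output above; `zpool clear` is only appropriate once the underlying cause of any errors has been dealt with.

```shell
zpool scrub tank        # kick off a scrub of the pool
zpool status -v tank    # watch progress and any files reported damaged
zpool clear tank        # reset error counters after the cause is resolved
```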
Hi
Today I attached two disks to my two-disk ZFS stripe. The resilvering started
immediately and everything worked fine. But while it was running I had to
reboot, and I thought the resilvering would continue where it was, because it
works top-down, as Bonwick said in his blog.
But after the
I observed something like this a while ago, but assumed it was something
I did. (It usually is... ;)
Tell me - If you watch with an iostat -x 1, do you see bursts of I/O
then periods of nothing, or just a slow stream of data?
I was seeing intermittent stoppages in I/O, with bursts of data on
o
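The iostat invocation being suggested is just:

```shell
# Extended per-device statistics, refreshed every second.
# Bursts of writes followed by idle intervals tend to indicate the
# periodic transaction-group flush; a slow steady trickle suggests
# the bottleneck is elsewhere (e.g. the application or the network).
iostat -x 1
```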
>>> Basically, I want to know if somebody here on this list is using
>>> a ZFS file system for a proxy cache and what its performance will
>>> be. Will it improve or degrade Squid's performance? Or better
>>> still, is there any kind of benchmark tool for ZFS performance?
If SunVTS is installed you may also
want to consider running ramtest:
SunVTS 7.0:
cd /usr/sunvts/bin/sparcv9 ( or bin/64 )
./ramtest -xo pass=2
HTH,
Marion
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolar
Hello
I've got Solaris Express Community Edition build 75 (75a) installed on an Asus
P5K-E/WiFI-AP (ip35/ICH9R based) board. CPU=Q6700, RAM=8Gb, disk=Samsung
HD501LJ and (older) Maxtor 6H500F0.
When the O/S is running on bare metal, ie no xVM/Xen hypervisor, then
everything is fine.
When it'
I consider myself an early adopter of ZFS and pushed it hard on this
list and in real life with regard to iSCSI integration, ZFS
performance issues and the latency thereof, and how best to use it with
NFS. Well, I finally get to talk more about the ZFS-based product I've
been beta testing for quite
On 11/1/07, Scott Spyrison <[EMAIL PROTECTED]> wrote:
> Given 4 internal drives in a server, what kind of ZFS layout would you use?
What's wrong with mirroring? What are you doing with the machine in
question? I think that will make a big difference in what best to do
with the disks.
Will
Hello,
I've been turning this over in my mind, thought I'd post and see what creative
ideas came up here.
Given 4 internal drives in a server, what kind of ZFS layout would you use?
This is SPARC, so I can't boot off ZFS, and ideally the OS should be mirrored.
Right now I feel tied to SVM mi
Hi Davies,
Dick Davies wrote:
> On 29/10/2007, Tek Bahadur Limbu <[EMAIL PROTECTED]> wrote:
>
>> I created a ZFS file system like the following with /mypool/cache being
>> the partition for the Squid cache:
>>
>> 18:51:27 [EMAIL PROTECTED]:~$ zfs list
>> NAME USED AVAIL REFER
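A common way to set up such a dataset, sketched here as an assumption (the pool and dataset names follow the poster's /mypool/cache layout; the property choices are illustrative, not a measured recommendation):

```shell
# A dedicated dataset for the Squid cache directory.
zfs create mypool/cache
zfs set atime=off mypool/cache        # skip access-time updates on cache hits
zfs set compression=off mypool/cache  # cached web objects are often
                                      # already compressed
```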
Hidehiko Jono wrote:
> Hi,
>
> IHAC who wants to use ZFS for users' home directories.
> He is worried about the number of mount points.
> Does ZFS have any limitation on the number of mount points on a server?
In theory yes, but in all practical terms no (IIRC there isn't enough
storage on earth n
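The per-user-filesystem pattern being discussed looks roughly like this (pool, dataset, and user names are examples only):

```shell
# One dataset per user; child datasets inherit the parent's
# mountpoint prefix automatically.
zfs create tank/home
zfs set mountpoint=/export/home tank/home
zfs create tank/home/alice   # mounts at /export/home/alice
zfs create tank/home/bob     # mounts at /export/home/bob
```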