Hello Richard,
Wednesday, March 21, 2007, 1:48:23 AM, you wrote:
RE> Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing
:-)
RE> I'm working on some models which will show the effect on various RAID
RE> configurations and intend to post some results soon. Suffice to say,
Torrey McMahon wrote:
Matthew Ahrens wrote:
I'm only doing an initial investigation now so I have no test data at
this point. The reason I asked, and I should have tacked this on at the
end of the last email, was a blog entry that stated zfs send was slow
http://www.lethargy.org/~jesus/archiv
Would the system be able to halt if something was unplugged/some
massive failure happened?
That way if something got tripped, I could fix it before any
corruption or issue occurred.
That would be my safety net, I suppose.
On 3/20/07, Sanjeev Bagewadi <[EMAIL PROTECTED]> wrote:
Mike,
We have us
Mike,
We have used 4 disks (2X80GB disks and 2X250GB disks) on USB and things
worked well.
Hot plugging the disks was not all that smooth for us.
Other than that we had no issues using the disks. We used this setup for
demos at the FOSS 2007 conference
at Bangalore and that went through sever
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR firing up the secondary.
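As a rough sketch of what such a replication loop might look like (pool, filesystem, and host names below are invented for illustration), something along these lines could run from cron every N minutes:

    #!/bin/sh
    # incremental replication of datapool/data to the DR host "drhost"
    PREV=`cat /var/run/last-repl-snap`        # snapshot shipped last time
    NOW=repl-`date '+%Y%m%d%H%M'`
    zfs snapshot datapool/data@$NOW
    zfs send -i datapool/data@$PREV datapool/data@$NOW | \
        ssh drhost zfs receive -F datapool/data
    echo $NOW > /var/run/last-repl-snap

In a DR event the secondary already has the filesystem up to the last received snapshot, so "firing up" is mostly a matter of pointing the applications at it.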
I just integrated into snv_62:
6529406 zpool history needs to bump the on-disk version
The original CR for 'zpool history':
6343741 want to store a command history on disk
was integrated into snv_51.
Both of these are planned to make s10u4.
But wait, 'zpool history' has existed for several mont
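For anyone who hasn't played with it yet, the interface is simply (pool name invented):

    zpool history tank        # every zpool/zfs command ever run on the pool
    zpool upgrade tank        # bump an older pool to the new on-disk version

Older pools presumably need the 'zpool upgrade' before history can be recorded on disk.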
Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing :-)
I'm working on some models which will show the effect on various RAID
configurations and intend to post some results soon. Suffice to say, if
you have a stripe with > 1 disk, you might be able to survive loss of a
disk
On Wed, Mar 21, 2007 at 01:36:10AM +0100, Robert Milkowski wrote:
> btw: I assume that compression level will be hard coded after all,
> right?
Nope. You'll be able to choose from gzip-N with N ranging from 1 to 9 just
like gzip(1).
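So once those bits land, picking a level should be as simple as (dataset names invented):

    zfs set compression=gzip-9 tank/archive   # slowest, best compression
    zfs set compression=gzip   tank/home      # plain "gzip" defaults to level 6
    zfs get compression,compressratio tank/archive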
Adam
--
Adam Leventhal, Solaris Kernel Development http:
Hello Adam,
Wednesday, March 21, 2007, 1:24:35 AM, you wrote:
AL> On Wed, Mar 21, 2007 at 01:23:06AM +0100, Robert Milkowski wrote:
>> Adam, while you are here, what about gzip compression in ZFS?
>> I mean are you going to integrate changes soon?
AL> I submitted the RTI today.
Great!
btw: I a
Hello zfs-discuss,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6459491
I guess some people here will be happy :)
ps. now think about all these questions: what do you think about HW
RAID5 LUNs with raidz2 on top of this with ditto blocks set to 3? Or
maybe 2 would be good
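The ditto-block count mentioned above is just the per-dataset 'copies' property, e.g. (dataset names invented):

    zfs set copies=3 tank/important    # three copies of every data block
    zfs set copies=2 tank/scratch
    zfs get copies tank/important

It stacks on top of whatever redundancy the pool itself already provides.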
On Wed, Mar 21, 2007 at 01:23:06AM +0100, Robert Milkowski wrote:
> Adam, while you are here, what about gzip compression in ZFS?
> I mean are you going to integrate changes soon?
I submitted the RTI today.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Hello Adam,
Wednesday, March 21, 2007, 12:42:49 AM, you wrote:
AL> On Tue, Mar 20, 2007 at 06:01:28PM -0400, Brian H. Nelson wrote:
>> Why does this happen? Is it a bug? I know there is a recommendation of
>> 20% free space for good performance, but that thought never occurred to
>> me when thi
On Tue, Mar 20, 2007 at 06:01:28PM -0400, Brian H. Nelson wrote:
> Why does this happen? Is it a bug? I know there is a recommendation of
> 20% free space for good performance, but that thought never occurred to
> me when this machine was set up (zvols only, no zfs proper).
It sounds like this b
On March 20, 2007 1:41:53 PM -0700 mike <[EMAIL PROTECTED]> wrote:
I am desperate to replace a server that is failing and I want to
replace it with a proper quiet ZFS-based solution
Slightly off your point, but I can't imagine 4 drives being anything
near "quiet".
-frank
IIRC, there is at least some of the necessary code for file change
notification present in order to support NFSv4 delegations on the server
side. Last time I looked it wasn't exposed to userspace.
On 3/21/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On the file event monitor portion of
Jim Mauro wrote:
>
>
> http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html
"$71,800 for computer consultants" wow.
Ian
Dear list,
Solaris 10 U3 on SPARC.
I had a 197GB raidz storage pool. Within that pool, I had allocated a
191GB zvol (filesystem A), and a 6.75GB zvol (filesystem B). These used
all but a couple hundred K of the zpool. Both zvols contained UFS
filesystems with logging enabled. The (A) filesyst
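For anyone trying to reproduce a similar layout, the setup described above amounts to roughly this (pool, volume, and mount point names invented; sizes approximated):

    zpool create rzpool raidz c1t1d0 c1t2d0 c1t3d0
    zfs create -V 191g  rzpool/volA
    zfs create -V 6912m rzpool/volB
    newfs /dev/zvol/rdsk/rzpool/volA
    newfs /dev/zvol/rdsk/rzpool/volB
    mount -o logging /dev/zvol/dsk/rzpool/volA /a
    mount -o logging /dev/zvol/dsk/rzpool/volB /b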
We've got some work going on in the NFS group to alleviate this problem. Doug
McCallum has introduced the sharemgr (see http://blogs.sun.com/dougm) and I'm
about to putback the In-Kernel Sharetab bits (look in http://blogs.sun.com/tdh
- especially http://blogs.sun.com/tdh/entry/in_kernel_shareta
I think this is a systems engineering problem, not just a ZFS problem.
Few have bothered to look at mount performance in the past because
most systems have only a few mounted file systems[1]. Since ZFS does
file system quotas instead of user quotas, now we have the situation
where there could be
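That is, instead of one big filesystem with edquota(1M) limits per user, the ZFS model ends up being one filesystem per user, something like (pool and user names invented):

    for u in alice bob carol; do
        zfs create homepool/$u
        zfs set quota=10g homepool/$u
    done

which is exactly how sites end up with ten thousand filesystems to mount at boot.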
Hi Kory - Your problem came our way through other Sun folks a few days ago,
and I wish I had that magic setting to help, but the reality is that I'm
not aware
of anything that will improve the time required to mount 12k file systems.
I would add (not that this helps) that I'm not convinced this
Hi Peter,
The bugs are filed:
http://bugs.opensolaris.org/view_bug.do?bug_id=6430563
Your coworker might be able to work around this by setting a 10GB quota
on the ZFS file system.
cs
Peter Eriksson wrote:
A coworker of mine ran into a large ZFS-related bug the other day. He was
trying to
okay so since this is fixed, Chris, would you consider using USB/FW now?
I am desperate to replace a server that is failing and I want to
replace it with a proper quiet ZFS-based solution, I hate being held
captive by NTFS issues (it may have corrupted my data now a second
time)
ZFS's checksummi
Begin forwarded message:
From: Chris Csanady <[EMAIL PROTECTED]>
Date: March 20, 2007 11:58:24 AM PDT
To: mike <[EMAIL PROTECTED]>
Cc: ZFS Discussions
Subject: Re: [zfs-discuss] ZFS and Firewire/USB enclosures
It looks like the following bug is still open:
6424510 usb ignores DKIOCFLUSHWRI
On Mar 20, 2007, at 10:27 AM, [EMAIL PROTECTED] wrote:
Folks,
Is there any update on the progress of fixing the resilver/
snap/scrub
reset issues? If the bits have been pushed is there a patch for
Solaris
10U3?
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Matt and
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html
The big problem is that if you don't do your redundancy in the zpool, then the
loss of a single device flatlines the system. This occurs in single device
pools or stripes or concats. Sun support has said in support calls and Sunsolve
docs that this is by design, but I've never seen the loss of a
It looks like the following bug is still open:
6424510 usb ignores DKIOCFLUSHWRITECACHE
Until it is fixed, I wouldn't even consider using ZFS on USB storage.
Even so, not all bridge boards (Firewire included) implement this
command. Unless you can verify that it functions correctly, it is
sa
OKAY, that's an idea, but then this becomes not so easy to manage. I have
made some attempts and I found iscsi{,t}adm not that pleasant to use compared
to what the zfs/zpool interfaces provide.
hey Cedric,
Could you be more specific here? What wasn't easy? Any suggestions
to improve it?
eri
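For comparison, the ZFS-side interface Cedric seems to be after is the shareiscsi property (volume name invented; this assumes a build with the iSCSI target support integrated):

    zfs create -V 100g tank/iscsivol
    zfs set shareiscsi=on tank/iscsivol   # export the zvol as an iSCSI target
    iscsitadm list target                 # confirm the target exists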
cedric briner wrote:
But _how_ can you achieve well-sized storage (40TB) with such
technologies? I mean, how can you physically bind 70 HDs into a zfs pool?
Using SAS JBODs sounds simpler, but I get the impression that they don't
actually work correctly right now.
Wes Felter - [EMAIL PROTECT
Jim Mauro wrote:
(I'm probably not the best person to answer this, but that has never stopped me
before, and I need to give Richard Elling a little more time to get the Goats,
Cows
and Horses fed, sip his morning coffee, and offer a proper response...)
chores are done, wading through the morni
Would you consider a USB stick to be the same usability model as a
handful of 750GB drives (backing up large files for home backup needs
- DVD backups, home pictures, etc.) - that wouldn't be hot plugged
often, if at all (only on failure or accidental power loss, etc.)?
On 3/20/07, Bev Crair <[EMAI
(I'm probably not the best person to answer this, but that has never
stopped me
before, and I need to give Richard Elling a little more time to get the
Goats, Cows
and Horses fed, sip his morning coffee, and offer a proper response...)
Would it benefit us to have the disk be setup as a raidz alo
Folks,
Is there any update on the progress of fixing the resilver/snap/scrub
reset issues? If the bits have been pushed is there a patch for Solaris
10U3?
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Also the scrub/resilver priority setting?
http://bugs.opensolaris.o
On Mar 19, 2007, at 7:26 PM, Jens Elkner wrote:
On Wed, Feb 28, 2007 at 11:45:35AM +0100, Roch - PAE wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
Any estimate of when we'll see a [feature] fix for U3?
Should I open a call, to perhaps raise the priority f
On the file event monitor portion of the OP, has Solaris added dnotify,
inotify or FAM support to the kernel or is the goal still to extend the
ports/poll framework junk with a "file events notification facility"? As
far as I know the file attributes do not handle file change monitoring.
h
Hello Kory,
Tuesday, March 20, 2007, 4:38:03 PM, you wrote:
KW> The reason for this question is we currently have our disk setup
KW> in a hardware raid5 on an EMC device and these disks are configured
KW> as a zfs file system. Would it benefit us to have the disk be
KW> setup as a raidz along wit
Hello devid,
Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
d> Does ZFS have a Data Management API to monitor events on files and
d> to store arbitrary attribute information with a file? Any answer on
d> this would be really appreciated.
IIRC, there's a file event mechanism being developed
We cannot go to an OpenSolaris Nevada build for political as well as support
reasons. It's not an option.
We have been running several other systems using Oracle on ZFS without issues.
The current problem we have is more about getting the DBAs to understand how
things have changed with Sol10/Z
Jürgen Keil wrote:
I still haven't got any "warm and fuzzy" responses
yet solidifying ZFS in combination with Firewire or USB enclosures.
I was unable to use zfs (that is "zpool create" or "mkfs -F ufs") on
firewire devices, because scsa1394 would hang the system as
soon as multiple concurrent
Thanks, I'll give it a whirl.
Rainer
The reason for this question is we currently have our disks set up in a hardware
raid5 on an EMC device and these disks are configured as a zfs file system.
Would it benefit us to have the disks be set up as a raidz along with the
hardware raid 5 that is already set up too? Or with this double raid
A coworker of mine ran into a large ZFS-related bug the other day. He was
trying to install Sun Studio 11 on a ZFS filesystem and it just kept on
failing. Then he tried to install on a UFS filesystem on the same machine and
it worked just fine...
After much headscratching and testing and trussi
Does ZFS have a Data Management API to monitor events on files and to store
arbitrary attribute information with a file? Any answer on this would be really
appreciated.
Thanks
Mike,
Take a look at
http://video.google.com/videoplay?docid=8100808442979626078&q=CSI%3Amunich
Granted, this was for demo purposes, but the team in Munich is clearly
leveraging USB sticks for their purposes.
HTH,
Bev.
mike wrote:
I still haven't got any "warm and fuzzy" responses yet solidif
Hello zfs-discuss,
Ok, not all of them :)
But if anyone thinks he/she is hitting 6458218 or 6495013 or 6460107
then they seem to be solved in b60 and b61.
For those with a support contract, ask for IDR126199-01.
ps. the problem is that after I send|recv|send|recv the entire pool it's
Hello list,
After attending the presentation by Bill Moore & Jeff Bonwick, I started
to think about:
``No special hardware – ZFS loves cheap disks''
Okay, it loves them. But _how_ can you achieve well-sized storage (40TB)
with such technologies? I mean, how can you physically bind 70 HDs into a
zfs pool?
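One way to picture it: the pool is just a collection of redundant vdevs, so 70 disks might be grouped into ten 7-disk raidz2 sets, along these lines (device names invented):

    # first two of ten raidz2 groups; the rest are added the same way
    zpool create bigpool \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
    zpool add bigpool \
        raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0

The harder part is the physical side: finding JBOD chassis and HBAs that behave.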
Darren Reed wrote:
Using Solaris 10, Update 2
I've just rebooted my desktop and I have discovered that a ZFS
filesystem appears to have gone missing.
The filesystem in question was called "biscuit/home" and should
have been modified to have its mountpoint set to /export/home.
Is ther
I have this external Firewire box with 4 IDE drives in it, attached to
a Sunblade 2500. I've built the following pool on them:
banff[1]% zpool status
  pool: pond
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pond        ONLINE       0
> I still haven't got any "warm and fuzzy" responses
> yet solidifying ZFS in combination with Firewire or USB enclosures.
I was unable to use zfs (that is "zpool create" or "mkfs -F ufs") on
firewire devices, because scsa1394 would hang the system as
soon as multiple concurrent write commands are
Is there any performance problem with hard links in ZFS?
I have a large storage setup. There will be nearly 5 hard links for every file.
Is that OK for ZFS? Maybe some problems with snapshots (a snapshot will be
created every 30 minutes)? What about the difference in speed while working
with 50
Hi Mike, this is already integrated in Nevada:
6510807 ARC statistics should be exported via kstat
kstat zfs:0:arcstats
module: zfs                            instance: 0
name:   arcstats                       class:    misc
        c                              534457344
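To watch it over time, kstat(1M) can take an interval, e.g.:

    kstat -p zfs:0:arcstats:size 5                       # ARC size every 5 seconds
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses   # hit/miss counters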
I still haven't got any "warm and fuzzy" responses yet solidifying ZFS
in combination with Firewire or USB enclosures.
I am looking for 4-10 drive enclosures for quiet SOHO desktop-ish use.
I am trying to confirm that OpenSolaris+ZFS would be stable with this,
if exported out as JBOD and allow ZF
> -when using highly available SAN storage, export the
> disks as LUNS and use zfs to do your redundancy -
> using array redundancy (say 5 mirrors that you will
> zpool together as a stripe) will cause the machine
> to crap out and die if any of those mirrored
> devices, say, gets too much io and
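In other words, present the LUNs to the host individually and let ZFS pair them up, e.g. (device names invented):

    # mirror across the two arrays so a failed LUN only degrades the pool
    zpool create tank \
        mirror c4t0d0 c5t0d0 \
        mirror c4t1d0 c5t1d0

That keeps a whole-device failure from taking the machine down with it.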