I was wondering whether any work has been done on ZFS configurations running off 100% SSD disks.
L2ARC and the ZIL were designed to hide the long seek times/latencies of rotational disks.
Now, if we use only SSDs (F5100 or F20) as back-end drives for ZFS, we should not need those add-ons.
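For illustration, the difference is roughly this (device names are made up):

  # hybrid pool: rotating data disks plus separate flash log/cache devices
  zpool create hybrid mirror c1t0d0 c1t1d0
  zpool add hybrid log c2t0d0       # ZIL (slog) on flash
  zpool add hybrid cache c2t1d0     # L2ARC on flash

  # all-SSD pool: the data vdevs are already low-latency,
  # so separate log/cache devices buy much less
  zpool create flash mirror c3t0d0 c3t1d0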
The question is: does the "IO pausing" behaviour you noticed penalize your application?
What are the consequences at the application level?
For instance, we have seen applications doing some kind of data capture from an external device (video, for example) that require a constant throughput to disk (data f
Unfortunately, Symantec is not helping anyone in this area; they are even taking their time to officially include zfs in their compatibility lists
s-
On Jan 16, 2008 1:26 PM, Paul Kraus <[EMAIL PROTECTED]> wrote:
> Previous posts from various people:
>
> > > > But ... NBU (at least version 6.0)
On 1/15/08, Selim Daoud <[EMAIL PROTECTED]> wrote:
>
> > with zfs you can compress data on disk ...that is a great advantage
> > when doing backup to disk
> > also, for DSSU you need to multiply the number of filesystems (1 fs per
> > stu), the advantage of zfs is that yo
With zfs you can compress data on disk ...that is a great advantage when doing backup to disk.
Also, for DSSU you need to multiply the number of filesystems (1 fs per stu); the advantage of zfs is that you don't need to fix the size of each fs upfront (the space is shared among all the fs).
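Roughly, that layout looks like this (pool and stu names are made up):

  zpool create backup c1t0d0 c1t1d0 c1t2d0
  zfs set compression=on backup      # inherited by every child filesystem
  zfs create backup/stu1
  zfs create backup/stu2
  zfs create backup/stu3
  # all the stu filesystems draw from the same pool, no per-fs size to fix upfront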
s-
On Jan 10,
Grand-dad,
why don't you put your immense experience and knowledge to work contributing to what is going to be the next and only filesystem in modern operating systems, instead of spending your time asking for "specifics" and treating everyone as "ignorant"..at least we will remember you in the after
Can you run a database on RMS? I guess it's not suited for that.
We are already trying to get rid of a 15-year-old filesystem called WAFL, and a 10-year-old "file system" called Centera, so do you think we are going to consider a 35-year-old filesystem now... computer science has made a lot of improvements since
from the description here
http://www.djesys.com/vms/freevms/mentor/rms.html
so who cares here?
RMS is not a filesystem, but more a CAS type of data repository
On Dec 8, 2007 7:04 AM, Anton B. Rang <[EMAIL PROTECTED]> wrote:
> > NOTHING anton listed takes the place of ZFS
>
> That's not surpri
Basically you would add a ZFS redundancy level if you want to be protected from silent data corruption (data corruption that can occur anywhere along the IO path):
- the XP12000 has all the features to protect against hardware failure (no SPOF)
- ZFS has all the features to protect against silent data corru
Some businesses do not accept any kind of risk and hence will try hard (i.e. spend a lot of money) to eliminate it (create 2, 3, 4 copies, read-verify, cksum...).
At the moment only ZFS can give this assurance, plus the ability to self-correct detected errors.
It's a good thing that ZFS can help peo
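A minimal sketch of adding that ZFS redundancy on top of the array (LUN names are made up):

  # mirror two LUNs exported by the array, so ZFS has a second copy to repair from
  zpool create safe mirror c4t0d0 c4t1d0
  # read-verify every block against its checksum from time to time
  zpool scrub safe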
It's got to do with VMware, obviously, as we've been able to make 20TB+ filesystems with zfs.
selim
--
Blog: http://fakoli.blogspot.com/
On 11/7/07, Chris Murray <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I am experiencing an issue when trying to set up a l
Bug ID 6538014 (
http://bugs.opensolaris.org/view_bug.do?bug_id=6538014 ) was also
related to mounting many filesystems, but I have no visibility into this bug's progress.. maybe someone from Sun can update us?
selim
--
Blog: http://fakoli.blogspot.com/
On
Interesting project.. I shall try it out.
Be careful.. NetApp might sue you ;)
selim
--
Blog: http://fakoli.blogspot.com/
On 11/1/07, Joe Little <[EMAIL PROTECTED]> wrote:
> I consider myself an early adopter of ZFS and pushed it hard on this
wasn't that NDA info??
s-
On 10/18/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> MC wrote:
> > Sun's storage strategy:
> >
> > 1) Finish Indiana and distro constructor
> > 2) (ship stuff using ZFS-Indiana)
> > 3) Success
>
> 4) Profit :)
Provided the 3310 cache does not induce silent block corruption when writing to disks.
s.
On 10/5/07, Vincent Fox <[EMAIL PROTECTED]> wrote:
> So I went ahead and loaded 10u4 on a pair of V210 units.
>
> I am going to set this nocacheflush option and cross my fingers and see how
> it goes.
>
> I have
http://www.netapp.com/go/ipsuit/spider-complaint.pdf
Long topic; it was discussed in a previous thread.
In relation to this, there is
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
which may be of interest
selim
On 9/4/07, Matty <[EMAIL PROTECTED]> wrote:
> Are there any plans to support record sizes larger than 128k? We use
> ZFS
Hi all,
has an alternative to ARC been considered to improve sequential write IO in zfs?
here's a reference for DULO:
http://www.usenix.org/event/fast05/tech/full_papers/jiang/jiang_html/dulo-html.html#BG03
sd-
zfs encryption has been updated ...just in case
http://opensolaris.org/os/project/zfs-crypto/plan/
s.
Superb job... the Synaptic package manager is really impressive.
Is there a way to transform a Sun package into a Synaptic package?
selim
On 6/22/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Fri, 22 Jun 2007, Erast Benson wrote:
> New unstable ISO of NexentaCP (Core Platform) available.
>
> http://www.g
From what I know this operation goes via a zpool export, re-label (with format), then zpool import; it's not online.
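Something like this (pool and device names are made up):

  zpool export mypool
  format -e c5t0d0        # re-label so the new LUN size becomes visible
  zpool import mypool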
On 6/5/07, Yan <[EMAIL PROTECTED]> wrote:
so does anyone know how to the LUN(s) part of its pool
and detect new size of the LUN ?
hi all,
is there a way to obtain the list of ZFS bug fixes/RFEs that have been integrated into S10U4?
thanks
selim
Which one performs better: copies=2 or a zfs mirror?
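For reference, the two configurations are set up like this (device names are made up); note copies=2 may place both copies on the same disk, while a mirror always survives a whole-disk failure:

  # zfs mirror: every block is written to two devices
  zpool create mtank mirror c1t0d0 c1t1d0

  # copies=2: two copies of every block inside a (possibly non-redundant) pool
  zpool create ctank c1t2d0
  zfs set copies=2 ctank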
s.
On 5/9/07, Richard Elling <[EMAIL PROTECTED]> wrote:
comment below...
Toby Thain wrote:
>
> On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
>
>> Hello,
>>
>> solaris Internals wiki contains many interesting things about zfs.
>>
go ahead with filebench and don't forget to set
set zfs:zfs_nocacheflush=1
in /etc/system (if using nevada)
s.
On 5/9/07, cesare VoltZ <[EMAIL PROTECTED]> wrote:
Hi,
I'm planning to test on pre-production data center a ZFS solution for
our application and I'm searching a good filesystem benchm
Sun's equivalent is SAMFS, especially the latest version (4.6), which can be used entirely for backup/restore/archive.
SAMFS will be open source very soon.
s.
On 4/28/07, Rayson Ho <[EMAIL PROTECTED]> wrote:
On 4/28/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> So what you *really* want i
Roch,
isn't there another flag in /etc/system to force zfs not to send flush
requests to NVRAM?
s.
On 4/20/07, Marion Hakanson <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] said:
> We have been combing the message boards and it looks like there was a lot of
> talk about this interaction of zfs
This kind of port was done in the case of QFS; how come they managed to release QFS for Linux?
On 4/17/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Joerg Schilling wrote:
> "David R. Litwin" <[EMAIL PROTECTED]> wrote:
>
>
>> On 17/04/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
>>
>>> On 4/17/07, David R.
filebench for example
On 4/17/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Tony Galway wrote:
>
> I had previously undertaken a benchmark that pits "out of box"
> performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
> outstanding availability issues in ZFS. These have been taken
Hi all,
when doing several zfs snapshots of a given fs, there are dependencies between the snapshots that complicate snapshot management.
Is there a plan to ease these dependencies, so we can reach the snapshot functionality offered in other products such as Compellent
(http://www.compe
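For example, just rolling back past newer snapshots already forces you to deal with the chain (names are made up):

  zfs snapshot tank/data@mon
  zfs snapshot tank/data@tue
  zfs snapshot tank/data@wed
  zfs rollback tank/data@mon      # fails: more recent snapshots exist
  zfs rollback -r tank/data@mon   # only works by destroying @tue and @wed first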
Roch,
true that a fixed-block-size filesystem introduces the seek operations you talked about.
Now this has to be counter-balanced against the "latencies" introduced by a log-structured filesystem (which, personally, I am unable to list).
The 10% you talked about is roughly the difference in performa
Talking of which, what would be the effort and consequences of increasing the max allowed block size in zfs to higher figures like 1M...
s.
On 3/28/07, Jonathan Edwards <[EMAIL PROTECTED]> wrote:
right on for optimizing throughput on solaris .. a couple of notes
though (also mentioned in the QFS manuals)
Here are some raw data from my tests.
The test consisted of timing mount/umount for ufs and zfs.
We are not doing the mount/umount with "[zfs] mount -a" because that is a serialized mount/umount; instead we do the mount/umount in parallel, using the shell script below.
For UFS, we've created
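The parallel variant is essentially of this form (pool name is made up, not the exact script we ran); the run is timed from outside with ptime or /usr/bin/time:

  #!/bin/ksh
  # mount every zfs filesystem of the pool in parallel
  for fs in $(zfs list -H -o name -t filesystem | grep '^tank/'); do
      zfs mount $fs &
  done
  wait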
I observed more predictable throughput if I use an IO generator that can do throttling (xdd or vdbench).
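e.g. a vdbench parameter file of roughly this shape caps the IO rate (the lun path is made up, and the exact syntax may differ between vdbench versions):

  sd=sd1,lun=/dev/rdsk/c1t0d0s0
  wd=seq,sd=sd1,xfersize=128k,rdpct=0,seekpct=0
  rd=run1,wd=seq,iorate=5000,elapsed=60,interval=1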
s.
On 3/11/07, Jesse DeFer <[EMAIL PROTECTED]> wrote:
OK, I tried it with txg_time set to 1 and am seeing less predictable results.
The first time I ran the test it completed in 27 seconds
it's an absolute necessity
On 3/8/07, Roch Bourbonnais <[EMAIL PROTECTED]> wrote:
Le 8 mars 07 à 20:08, Selim Daoud a écrit :
> robert,
> this applies only if you have full control over the application, for sure
> ..but how do you do it if you don't own the applicatio
Robert,
this applies only if you have full control over the application, for sure
..but how do you do it if you don't own the application ... can you mount zfs with a forcedirectio flag?
selim
On 3/8/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Manoj,
Thursday, March 8, 2007, 7:10:57 AM,
One question:
is there a way to stop the default txg push behaviour (pushing at a regular timestep -- default is 5 sec) and instead push them "on the fly"... I would imagine this would be better in the case of an application doing big sequential writes (video streaming...)
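I assume the closest knob today is shortening the interval rather than removing it, e.g. in /etc/system (taking the txg_time name mentioned elsewhere on this list; untested):

  * push txgs every second instead of every 5 seconds
  set zfs:txg_time = 1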
s.
On 3/5/07, Jeff Bonwick <[EMAIL P
, dropping the MB/s to very low
values, then up again.
On 2/27/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Selim Daoud wrote:
> indeed, a customer is doing 2TB of daily backups on a zfs filesystem
> the throughput doesn't go above 400MB/s, knowing that at raw speed,
> the through
Indeed, a customer is doing 2TB of daily backups on a zfs filesystem; the throughput doesn't go above 400MB/s, knowing that at raw speed the throughput goes up to 800MB/s, so the gap is quite wide.
Also, sequential IO is very common in real life.. unfortunately zfs is still not performing well there.
sd
It seems there isn't an algorithm in ZFS that detects sequential writes.
In a traditional fs such as ufs, one would trigger directio; qfs can be set to automatically switch to directio if sequential IO is detected.
The txg trigger of 5 sec is inappropriate in this case (as stated by bug 6415647); even a 1.
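For comparison, on ufs directio is just a mount option (or a directio(3C) call); device and mount point are made up:

  mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /backup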
all,
here's an interesting status report published by Microsoft Research:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2005-166
sd.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listi
btw, I liked your blog entry about LUN masking
http://elektronkind.org/
selim.
On 1/31/07, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Jan 31, 2007, at 4:26 AM, Selim Daoud wrote:
> you can still do some lun masking at the HBA level (Solaris 10)
> this feature is called "black
You can still do some LUN masking at the HBA level (Solaris 10); this feature is called "blacklist".
On 1/31/07, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Jan 24, 2007, at 1:19 PM, Frank Cusack wrote:
>> On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:
>>
>> Note that the 3511 is being r
It would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers sounds like the anti-terrorism propaganda we are living in: exaggerate the facts to catch everyone's attention.
It's going to take more than that to prove RAID ctrls