On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
> 10,000 x 700 = 7MB per second ..
>
> We have this rate for whole day
>
> 10,000 orders per second is the minimum requirement of modern-day stock exchanges
> ...
>
> The cache still helps us for ~1 hour, but after that who will help?
On Thu, 24 Apr 2008, Daniel Rock wrote:
> Joerg Schilling schrieb:
>> WOM Write-only media
>
> http://www.national.com/rap/files/datasheet.pdf
I love this part of the specification:
Cooling
The 25120 is easily cooled by employment of a six-foot fan,
1/2" from the package
On Mon, 31 Dec 2007, Darren Reed wrote:
> Frank Hofmann wrote:
>>
>>
>> On Fri, 28 Dec 2007, Darren Reed wrote:
>> [ ... ]
>>> Is this behaviour defined by a standard (such as POSIX or the
>>> VFS design) or are we free to innovate here and do
On Fri, 28 Dec 2007, Joerg Schilling wrote:
[ ... ]
> POSIX grants that st_dev and st_ino together uniquely identify a file
> on a system. As long as neither st_dev nor st_ino change during the
> rename(2) call, POSIX does not prevent this rename operation.
Clarification request: Where's the pie
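The (st_dev, st_ino) identity rule quoted above can be checked directly: hard links to the same file share both values, and a same-filesystem rename preserves them. A minimal sketch (temporary paths are illustrative, not from the thread):

```python
import os
import tempfile

# POSIX: (st_dev, st_ino) together uniquely identify a file on a system.
# Two hard links share both values; a same-fs rename preserves them.
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    with open(a, "w") as f:
        f.write("data")
    os.link(a, b)                      # second name for the same file
    sa, sb = os.stat(a), os.stat(b)
    assert (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

    c = os.path.join(d, "c")
    os.rename(a, c)                    # same-fs rename keeps the identity
    sc = os.stat(c)
    assert (sc.st_dev, sc.st_ino) == (sa.st_dev, sa.st_ino)
```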
On Fri, 28 Dec 2007, Joerg Schilling wrote:
> Frank Hofmann <[EMAIL PROTECTED]> wrote:
>
>> I don't think the standards would prevent us from adding "cross-fs rename"
>> capabilities. It's beyond the standards as of now, and I'd expect that
On Fri, 28 Dec 2007, Darren Reed wrote:
[ ... ]
> Is this behaviour defined by a standard (such as POSIX or the
> VFS design) or are we free to innovate here and do something
> that allowed such a shortcut as required?
Wrt. to standards, quote from:
http://www.opengroup.org/onlinepubs/0
On Fri, 28 Dec 2007, Darren Reed wrote:
> [EMAIL PROTECTED] wrote:
>> On Thu, 27 Dec 2007, [EMAIL PROTECTED] wrote:
>>
>>> I would guess that this is caused by different st_dev values in the new
>>> filesystem. In such a case, mv copies the files instead of renaming them.
On Thu, 27 Dec 2007, [EMAIL PROTECTED] wrote:
>
>>
>> I would guess that this is caused by different st_dev values in the new
>> filesystem. In such a case, mv copies the files instead of renaming them.
>
>
> No, it's because they are different filesystems and the data needs to be
> copied; zfs do
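What mv(1) does behind the scenes can be sketched as follows: try rename(2) first, and when the source and destination sit on different filesystems the kernel refuses with EXDEV, forcing a copy (which yields a new st_dev and st_ino). The `move` helper below is an illustrative simplification, not mv's actual source:

```python
import errno
import os
import shutil

def move(src, dst):
    """Sketch of mv(1)'s strategy: rename(2) first; on EXDEV
    (cross-filesystem), copy the data and remove the original."""
    try:
        os.rename(src, dst)            # cheap: same-fs metadata operation
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)         # cross-fs: copy data and metadata
        os.unlink(src)
```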
On Mon, 5 Nov 2007, Mark Phalan wrote:
>
> On Mon, 2007-11-05 at 02:16 -0800, Thomas Lecomte wrote:
>> Hello there -
>>
>> I'm still waiting for an answer from Phillip Lougher [the SquashFS
>> developer].
>> I had already contacted him some month ago, without any answer though.
>>
>> I'll still w
On Tue, 30 Oct 2007, Tomasz Torcz wrote:
> On 10/30/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
>>> I'm experiencing major checksum errors when using a syba silicon image 3114
>>> based pci sata controller w/ nonraid firmware. I've tested by copying data
>>> via sftp and smb. With everything I
On Thu, 18 Oct 2007, Mike Gerdts wrote:
> On 10/18/07, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
>> that sounds like a somewhat mangled description of the cross-calls done
>> to invalidate the TLB on other processors when a page is unmapped.
>> (it certainly doesn't happen on *every* update to a
On Mon, 15 Oct 2007, Tom Davies wrote:
> Say, for example, old custom 32-bit Perl scripts. Can they work with
> 128-bit ZFS?
That question was posted either here or on some other help aliases
recently ...
If you have any non-largefile-aware application that must under all
circumstances be
On Mon, 8 Oct 2007, Dick Davies wrote:
> I had some trouble installing a zone on ZFS with S10u4
> (bug in the postgres packages) that went away when I used a
> ZVOL-backed UFS filesystem
> for the zonepath.
>
> I thought I'd push on with the experiment (in the hope Live Upgrade
> would be able to
On Fri, 14 Sep 2007, Sergey wrote:
> I am running Solaris U4 x86_64.
>
> Seems that something is changed regarding mdb:
>
> # mdb -k
> Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
> pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
> crypto
On Tue, 28 Aug 2007, David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
[ ... ]
>>> I don't see why multiple UFS mounts wouldn't work,
>> if only one
>>> of them has write access. Can you elaborate?
>>
>> Even with a single writer you would need to be
>> concerned with re
On Tue, 28 Aug 2007, Charles DeBardeleben wrote:
> Are you sure that UFS writes atime on read-only filesystems? I do not think
> that it is supposed to. If it does, I think that this is a bug. I have
> mounted
> read-only media before, and not gotten any write errors.
>
> -Charles
I think what m
On Fri, 3 Aug 2007, Damon Atkins wrote:
[ ... ]
> UFS forcedirectio and VxFS closesync ensure that what ever happens your files
> will always exist if the program completes. Therefore with Disk Replication
> (sync) the file exists at the other site at its finished size. When you
> introduce DR
On Thu, 26 Jul 2007, Damon Atkins wrote:
> Guys,
> What is the best way to ask for a feature enhancement to ZFS.
>
> To allow ZFS to be usefull for DR disk replication, we need to be able
> set an option against the pool or file system or both, called close
> sync. ie When a programme closes a f
I'm not quite sure what this test should show ?
Compressing random data is the perfect way to generate heat.
After all, compression working relies on input entropy being low.
But good random generators are characterized by the opposite - output
entropy being high.
Even a good compressor, if ope
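The entropy argument is easy to demonstrate: feed a compressor the output of a decent random source and it finds nothing to exploit, so the "compressed" stream comes out slightly larger than the input. A quick sketch using zlib on 1 MiB of random bytes:

```python
import os
import zlib

payload = os.urandom(1 << 20)          # high-entropy input: 1 MiB of random bytes
packed = zlib.compress(payload, 9)     # maximum effort, to no avail

# A good RNG leaves the compressor no redundancy to remove; the output
# grows slightly from framing overhead instead of shrinking.
ratio = len(packed) / len(payload)
print(f"ratio: {ratio:.5f}")           # slightly above 1.0
```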
On Fri, 13 Apr 2007, Ignatich wrote:
Bart Smaalders writes:
Abide by the terms of the CDDL and all is well. Basically, all you
have to do is make your changes to CDDL'd files available. What you
do w/ the code you built (load it into MVS, ship a storage appliance,
build a ZFS for Linux) is u
On Mon, 26 Mar 2007, Viktor Turskyi wrote:
I have tested link performance, and I got these results:
with hard links there are no problems; reading 5 files one million times takes 38
seconds.
The case with symlinks is a different situation: reading 5 files (through
symlinks) one million tim
On Tue, 27 Feb 2007, Jeff Davis wrote:
Given your question are you about to come back with a
case where you are not
seeing this?
As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O
rate drops off quickly when you add processes while reading the same blocks
from the
On Fri, 23 Feb 2007, Dan Mick wrote:
So, that would be an "error", and, other than reporting it accurately, what
would you want ZFS to do to "support" it?
It's not an error for write(2) to return with fewer bytes written than
requested. In some situations, that's pretty much expected. Like, fo
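Because a short write is a normal outcome rather than an error, correct callers loop until everything is written. A minimal sketch of that idiom (function name is mine, not from the thread):

```python
import os

def write_all(fd, data):
    """write(2) may return having written fewer bytes than requested,
    e.g. on a pipe or a non-blocking socket.  Loop until done."""
    view = memoryview(data)
    while view:
        n = os.write(fd, view)         # may be a short write
        view = view[n:]                # retry with the remainder
```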
On Mon, 12 Feb 2007, Toby Thain wrote:
[ ... ]
I'm no guru, but would not ZFS already require strict ordering for its
transactions ... which property Peter was exploiting to get "fbarrier()" for
free?
It achieves this by flushing the disk write cache when there's need to
barrier. Which compl
On Mon, 12 Feb 2007, Chris Csanady wrote:
[ ... ]
> Am I missing something?
How do you guarantee that the disk driver and/or the disk firmware doesn't
reorder writes ?
The only guarantee for in-order writes, on actual storage level, is to
complete the outstanding ones before issuing new ones.
On Mon, 12 Feb 2007, Peter Schuller wrote:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operations are on stable storage. In these
cases the latency
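The pattern under discussion looks like this in practice: fsync() is issued not because A's durability matters in itself, but to guarantee A reaches stable storage before B is issued at all. The cost of abusing fsync() as a barrier is a full synchronous flush each time. A sketch (paths and names are illustrative):

```python
import os

def ordered_writes(path_a, path_b):
    """Use fsync() purely as an ordering barrier: B is not issued
    until A is known to be on stable storage."""
    fd = os.open(path_a, os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"A: must hit stable storage first\n")
    os.fsync(fd)                       # the "barrier": blocks until A is durable
    os.close(fd)

    fd = os.open(path_b, os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"B: only issued after the barrier\n")
    os.close(fd)
```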
Btw, in case that gets lost between my devil's advocatism:
A happy +1 from me for the proposal !
FrankH.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Mon, 5 Feb 2007, Jim Dunham wrote:
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you ca
On Wed, 3 Jan 2007, Darren Dunham wrote:
We have some HDS storage that isn't supported by mpxio, so we have to
use veritas dmp to get multipathing.
Whats the recommended way to use DMP storage with ZFS. I want to use
DMP but get at the multipathed virtual luns at as low a level as
possible to
On Wed, 20 Dec 2006, Pawel Jakub Dawidek wrote:
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set someth
On Tue, 19 Dec 2006, Anton B. Rang wrote:
"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your data."
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead of
EIO,
On Tue, 19 Dec 2006, Darren J Moffat wrote:
Frank Hofmann wrote:
On the technical side, I don't think a new VOP will be needed. This could
easily be done in VOP_SPACE together with a new per-fs property - bleach
new block when it's allocated (aka VOP_SPACE directly, or in a ba
On Tue, 19 Dec 2006, Jonathan Edwards wrote:
On Dec 18, 2006, at 11:54, Darren J Moffat wrote:
[EMAIL PROTECTED] wrote:
Rather than bleaching which doesn't always remove all stains, why can't
we use a word like "erasing" (which is hitherto unused for filesystem use
in Solaris, AFAIK)
and t
On Thu, 5 Oct 2006, Erblichs wrote:
Casper Dik,
After my posting, I assumed that a code question should be
directed to the ZFS code alias, so I apologize to the people
who don't read code. However, since the discussion is here,
I will post a code proof here. Jus
On Wed, 4 Oct 2006, Erblichs wrote:
Casper Dik,
Yes, I am familiar with Bonwick's slab allocators and tried
it for wirespeed test of 64byte pieces for a 1Gb and then
100Mb Eths and lastly 10Mb Eth. My results were not
encouraging. I assume it has improved over ti
On Fri, 7 Jul 2006, Darren J Moffat wrote:
Eric Schrock wrote:
On Thu, Jul 06, 2006 at 09:53:32PM +0530, Pramod Batni wrote:
offtopic query :
How can ZFS require more VM address space but not more VM ?
The real problem is VA fragmentation, not consumption. Over time, ZFS's
heavy use
On Tue, 9 May 2006, Darren J Moffat wrote:
Paul van der Zwan wrote:
I just booted up Minix 3.1.1 today in Qemu and noticed to my surprise
that it has a disk nameing scheme similar to what Solaris uses.
It has c?d?p?s? note that both p (PC FDISK I assume) and s is used,
HP-UX uses the same sc
ZFS must support POSIX semantics, part of which is hard links. Hard
links allow you to create multiple names (directory entries) for the
same file. Therefore, all UNIX filesystems have chosen to store the
file information separately for the directory entries (otherwise, you'd
have multiple copie
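The name/inode separation described above is observable from user land: link() adds a second directory entry and bumps st_nlink, while unlink() removes a name, not the file's data. A small sketch:

```python
import os
import tempfile

# Two directory entries, one file: the name->inode mapping lives in the
# directory; the file's metadata is stored once, reference-counted via
# st_nlink.
with tempfile.TemporaryDirectory() as d:
    name1 = os.path.join(d, "name1")
    name2 = os.path.join(d, "name2")
    with open(name1, "w") as f:
        f.write("one copy of the data\n")
    os.link(name1, name2)
    st = os.stat(name1)
    assert st.st_nlink == 2            # two names, one inode
    os.unlink(name1)                   # removes a name, not the file
    assert open(name2).read() == "one copy of the data\n"
```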