I suspect this is what it is all about:
# devfsadm -v
devfsadm[16283]: verbose: no devfs node or mismatched dev_t for
/devices/p...@0,0/pci10de,3...@b/pci1000,1...@0/s...@5,0:a
[snip]
and indeed:
brw-r----- 1 root sys 30, 2311 Aug 6 15:34 s...@4,0:wd
crw-r----- 1 root sys
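If those turn out to be stale /devices entries left over from the failed disk, the usual first step is a cleanup pass (a sketch; run it only once the hardware state has settled):
# devfsadm -Cv
The -C flag removes dangling /dev links and device nodes with no backing hardware, and -v reports what it changes.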
The case is made by Chyangfun, and the model made for Mini-ITX
motherboards is called the CGN-S40X. They had 6 units left when I last talked
to them, and need a three-week lead time for more, if I understand it
correctly. I need to finish my LCD panel work before I open shop to sell these.
As for temperature,
x4540 snv_117
We lost an HDD last night, and it seemed to take out most of the bus or
something and forced us to reboot. (We have yet to experience losing a
disk that didn't force a reboot, mind you.)
So today, I'm looking at replacing the broken HDD, but no amount of work
makes it "turn on t
OK, I am ready to try.
Two last questions before I go for it:
- which version of (Open)Solaris for ECC support (which seems to have been
dropped from 2009.06) and a general as-few-headaches-as-possible installation?
- do you think this issue with the AMD Athlon II X2 250
http://www.anandtech.com/cpu
Is there any way to increase the ZFS performance?
Matt,
On Wed, Aug 05, 2009 at 07:06:06PM -0700, Matt Ingenthron wrote:
> Hi,
>
> Other than modifying /etc/system, how can I keep the ARC cache low at boot
> time?
>
> Can I somehow create an SMF service and wire it in at a very low level to put
> a fence around ZFS memory usage before other s
Chris,
On Wed, Aug 05, 2009 at 05:33:24AM -0700, Chris Baker wrote:
> Sanjeev
>
> Thanks for taking an interest. Unfortunately I did have failmode=continue,
> but I have just destroyed/recreated and double confirmed and got exactly the
> same results.
>
> zpool status shows both drives mirror,
And along those lines, why stop at SSDs? Get ZFS shrink working, and Sun
could release a set of upgrade kits for x4500s and x4540s. Kits could range
from a couple of SSD devices to crazy specs like 40 2TB drives and 8 SSDs.
And zpool shrink would be a key facilitator driving sales of thes
I have the same case, which I use as direct-attached storage. I never thought
about using it with a motherboard inside.
Could you provide a complete parts list?
What sort of temperatures at the chip, chipset, and drives did you find?
Thanks!
I can confirm that it is fixed in 121430-37, too.
Bill
A lot of us have run *with* the ability to shrink because we were
using Veritas. Once you have a feature, processes tend to expand to
use it. Moving to ZFS was a good move for many reasons but I still
missed being able to do something that used to be so easy
Bob wrote:
> Perhaps the problem is one of educating the customer so that they can
> amend their accounting practices. Different business groups can
> share the same pool if necessary.
Bob, while I don't mean to pick on you, that statement captures a major
thinking flaw in IT when it com
Hi,
Other than modifying /etc/system, how can I keep the ARC cache low at boot time?
Can I somehow create an SMF service and wire it in at a very low level to put a
fence around ZFS memory usage before other services come up?
I have a deployment scenario where I will have some reasonably large
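For anyone following along, the /etc/system knob being discussed is the one below (the 1 GB cap is purely an example value); the open question is how to get the same effect without editing /etc/system:
set zfs:zfs_arc_max = 0x40000000
A running system can also be adjusted through mdb -kw, but the exact ARC symbols to poke vary between builds, so treat that route as build-specific.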
Cindy,
You are brilliant.
I can successfully boot the OS after following the steps below.
But I have a couple of small problems:
1. When I run "zpool list", I see two pools (altrpool & rpool).
I want to delete altrpool using "zpool destroy altrpool", but after I reboot it
panics.
2. I get this error message:
ERROR MSG:
/
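Before destroying altrpool it is worth confirming which pool the running boot environment actually lives in; something along these lines (a sketch, not a tested recipe for this exact setup):
# df -k /                          # the device column shows which pool backs /
# zpool get bootfs rpool altrpool
# zpool destroy altrpool           # only once / and bootfs both point at rpool
If the system booted from altrpool, destroying it would explain the panic.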
On 08/05/09 07:10, Mark Shellenbaum wrote:
Christian Flaig wrote:
Hello,
I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04.
OpenSolaris as the host is 2009-06, with snv118. Now I try to mount
(via CIFS) a s
What do the permissions look like on one of these files that you
have problems copying?
A network trace would also be helpful. Start the trace before you
do the mount to have a complete context, and stop it after trying
to copy a file. Don't do any extra stuff between mounting and copying
so the t
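In case it helps, the two things being asked for can be gathered roughly like this (a sketch; the file name, interface and client address are placeholders):
# ls -V /tank/share/problem-file.txt       # shows the full ACL, not just the mode bits
# snoop -d e1000g0 -o /tmp/cifs.cap host 192.168.1.50
Stop the snoop with Ctrl-C after the failed copy and keep the capture file.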
Hi Cindy, thanks for the reply...
On 08/05/09 18:55, cindy.swearin...@sun.com wrote:
Hi Steffen,
My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best, and redundant simple is even better.
I will suggest that. Had already considered it. Since they
>Preface: yes, shrink will be cool. But we've been running highly available,
>mission critical datacenters for more than 50 years without shrink being
>widely available.
Agreed, and shrink IS cool. I used it to migrate VxVM volumes from direct-attached
storage to slightly smaller SAN LUNs on a s
Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just
listen to the customer... Without [pool shrink], they look for alternative
hardware/software vendors.
Just to be clear, Sun and the ZFS team are listening to customers on this
issue. Pool shrink has be
On Wed, 5 Aug 2009, Richard Elling wrote:
Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?
CR 6844090
cindy.swearin...@sun.com wrote:
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
Will do. I thought I was on it, but didn't see any updates...
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in
On Aug 5, 2009, at 4:06 PM, cindy.swearin...@sun.com wrote:
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long
Bob Friesenhahn wrote:
On Wed, 5 Aug 2009, Brian Kolaci wrote:
I have a customer that is trying to move from VxVM/VxFS to ZFS,
however they have this same need. They want to save money and move to
ZFS. They are charged by a separate group for their SAN storage
needs. The business group st
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as t
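For the archives, those two operations look like this on a mirrored pool (pool and device names here are placeholders):
# zpool detach tank c1t3d0              # drop one side of a mirror
# zpool replace tank c1t3d0 c2t3d0      # swap a device for an equal-sized or larger one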
Richard Elling wrote:
On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:
I'm chiming in late, but have a mission critical need of this as well
and posted as a non-member before. My customer was wondering when
this would make it into Solaris 10. Their complete adoption depends
on it.
I have a
Hi Steffen,
My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best, and redundant simple is even better.
I'm no write cache expert, but a few simple tests on Solaris 10 5/09,
show me that the write cache is enabled on a disk that is labeled with
an SM
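One way to check whether the write cache actually ended up enabled is the expert-mode cache menu in format (a sketch; not every disk driver exposes this menu):
# format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display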
On Wed, 5 Aug 2009, Brian Kolaci wrote:
I have a customer that is trying to move from VxVM/VxFS to ZFS, however they
have this same need. They want to save money and move to ZFS. They are
charged by a separate group for their SAN storage needs. The business group
storage needs grow and shr
On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:
I'm chiming in late, but have a mission critical need of this as
well and posted as a non-member before. My customer was wondering
when this would make it into Solaris 10. Their complete adoption
depends on it.
I have a customer that is tr
I'm chiming in late, but have a mission critical need of this as well and
posted as a non-member before. My customer was wondering when this would make
it into Solaris 10. Their complete adoption depends on it.
I have a customer that is trying to move from VxVM/VxFS to ZFS, however they
have
Robert Lawhead wrote:
I recently tried to post this as a bug, and received an auto-ack, but can't
tell whether it's been accepted. Does this seem like a bug to anyone else?
Default for zfs list is now to show only filesystems. However, a `zfs list` or
`zfs list -t filesystem` shows filesystem
Roch wrote:
I don't know exactly what 'enters the txg' means, but ZFS disk-block
allocation is done in the ZIO pipeline at the latest
possible time.
Thanks Roch,
I stand corrected in my assumptions.
Cheers,
Henk
I recently tried to post this as a bug, and received an auto-ack, but can't
tell whether it's been accepted. Does this seem like a bug to anyone else?
Default for zfs list is now to show only filesystems. However, a `zfs list` or
`zfs list -t filesystem` shows filesystems AND incomplete snapsho
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a
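The distinction in practice looks like this (a sketch; 'tank' and the device names are placeholders):
# zpool create tank c1t1d0      # whole disk: ZFS writes an EFI label and enables the drive's write cache
# zpool create tank c1t1d0s0    # slice (SMI label stays): ZFS does not enable the write cache itself
which is why the SMI-labeled boot case is the interesting one.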
The problem itself happened on FreeBSD, but as I understand it, it's ZFS related,
not FreeBSD specific.
So:
I got an error when I tried to migrate a ZFS disk between two different servers.
After exporting on the first server, the import on the second one fails with the
following:
Output from import pool:
# zpool import storage750
cannot i
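For what it's worth, the usual sequence when moving a pool between hosts is (a sketch, using the pool name from the post):
# zpool export storage750       # on the old server, before moving the disks
# zpool import                  # on the new server: scan and list importable pools
# zpool import -f storage750    # -f only if it complains the pool was last used by another system
If the disks sit under a non-default device path, 'zpool import -d <dir>' points the scan at it.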
On Aug 5, 2009, at 1:06 PM, Martin wrote:
richard wrote:
Preface: yes, shrink will be cool. But we've been running highly available,
mission critical datacenters for more than 50 years without shrink being
widely available.
I would debate that. I remember batch windows and downtime delaying
Joseph L. Casale wrote:
Quick snipped from zpool iostat :
  mirror    1.12G   695G      0      0      0      0
    c8t12d0     -      -      0      0      0      0
    c8t13d0     -      -      0      0      0      0
  c7t2d0       4K  29.0G      0  1.56K      0   200M
  c7t3d0       4K  29
Interesting, this is the same procedure I invented (with the exception
that the zfs send came from the net) and used to hack OpenSolaris
2009.06 onto my home SunBlade 2000 since it couldn't do AI due to low
OBP rev..
I'll have to rework it this way, then, which will unfortunately cause
downti
>Quick snipped from zpool iostat :
>
>   mirror    1.12G   695G      0      0      0      0
>     c8t12d0     -      -      0      0      0      0
>     c8t13d0     -      -      0      0      0      0
>   c7t2d0       4K  29.0G      0  1.56K      0   200M
>   c7t3d0       4K  29.0G      0  1.
Will Murnane wrote:
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get foun
richard wrote:
> Preface: yes, shrink will be cool. But we've been running highly available,
> mission critical datacenters for more than 50 years without shrink being
> widely available.
I would debate that. I remember batch windows and downtime delaying one's
career movement. Today w
Kyle McDonald wrote:
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool
to an SSD mirror to free up big-disk slots currently used by the OS and
need to shrink rpool from 40GB to 15GB (only using 2.7GB for the install).
Your best bet would be
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool to an
SSD mirror to free up big-disk slots currently used by the OS and need to
shrink rpool from 40GB to 15GB (only using 2.7GB for the install).
Your best bet would be to install the new ssd
Martin wrote:
C,
I appreciate the feedback and like you, do not wish to start a side rant, but
rather understand this, because it is completely counter to my experience.
Allow me to respond based on my anecdotal experience.
What's wrong with make a new pool.. safely copy the data. verify
Preface: yes, shrink will be cool. But we've been running highly
available,
mission critical datacenters for more than 50 years without shrink being
widely available.
On Aug 5, 2009, at 9:17 AM, Martin wrote:
You are the 2nd customer I've ever heard of to use shrink.
This attitude seems to
Hi Will,
I simulated this issue on s10u7 and then imported the pool on a
current Nevada release. The original issue remains, which is you
can't remove a spare device that no longer exists.
My sense is that the bug fix prevents the spare from getting messed
up in the first place when the device I
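Assuming the stale spare still shows up in 'zpool status' under its old c2 name, the removal attempt itself is just (device name hypothetical):
# zpool remove tank c2t5d0
which is the step that fails when the underlying device no longer exists.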
Doesn't Solaris have the great built-in DTrace for issues like these?
If we knew in which syscall or kernel thread the system is stuck, we might get a
clue...
Unfortunately, I don't have any real knowledge of Solaris kernel internals or
DTrace...
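Two quick things that don't need much kernel knowledge (a sketch; both are read-only):
# echo "::threadlist -v" | mdb -k     # kernel stack of every thread, shows where things are blocked
# dtrace -n 'syscall:::entry { @[execname, probefunc] = count(); }'
The second one just counts which processes are making which syscalls; Ctrl-C prints the totals.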
> On 4-Aug-09, at 19:46 , Chris Du wrote:
> > Yes Constellation, they also have sata version. CA$350 is way too
> > high. It's CA$280 for SAS and CA$235 for SATA, 500GB in Vancouver.
>
> Wow, that is a much better price than I've seen:
>
> http://pricecanada.com/p.php/Seagate-Constellati
+1
Thanks for putting this in a real world perspective, Martin. I'm faced with
this exact circumstance right now (see my post to the list from earlier today).
Our ZFS filers are highly utilised, highly trusted components at the core of
our enterprise and serve out OS images, mail storage, cus
C,
I appreciate the feedback and like you, do not wish to start a side rant, but
rather understand this, because it is completely counter to my experience.
Allow me to respond based on my anecdotal experience.
> What's wrong with make a new pool.. safely copy the data. verify data
> and then de
The POSIX specification of rename(2) provides a very nice property
for building atomic transactions:
If the old argument points to the pathname of a file that is not a
directory, the new argument shall not po
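The classic way to exploit that property from the shell (a sketch; the paths and the generate_new_config command are made up) is to build the new version in a temporary file on the same filesystem and rename it over the old one, so readers only ever see the old or the new contents:
$ tmp=$(mktemp /tank/data/.config.XXXXXX)
$ generate_new_config > "$tmp"       # hypothetical command producing the new contents
$ mv "$tmp" /tank/data/config        # mv within one filesystem is rename(2)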
Yeah, sounds just like the issues I've seen before. I don't think you're
likely to see a fix anytime soon, but the good news is that so far I've not
seen anybody reporting problems with LSI 1068 based cards (and I've been
watching for a while).
With the 1068 being used in the x4540 Thumper 2,
On Wed, 5 Aug 2009, Bob Friesenhahn wrote:
Quite a few computers still come with a legacy PCI slot. Are there PCI cards
which act as a carrier for one or two CompactFlash devices and support system
boot?
For example, does this product work well with OpenSolaris? Can it
work as a boot devi
Martin wrote:
You are the 2nd customer I've ever heard of to use shrink.
This attitude seems to be a common theme in ZFS discussions: "No enterprise uses
shrink, only grow."
Maybe. The enterprise I work for requires that every change be reversible and
repeatable. Every change requires
On 5-Aug-09, at 12:21 , Bob Friesenhahn wrote:
i would be VERY surprised if you couldn't fit these in there SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and very
very thin, i was able to mount them side by side on top of the drive tray in
my machine, you can
On Wed, 5 Aug 2009, Thomas Burgess wrote:
i would be VERY surprised if you couldn't fit these in there SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and very
very thin, i was able to mount them side by side on top of the drive tray in
my machine, you can easily m
> You are the 2nd customer I've ever heard of to use shrink.
This attitude seems to be a common theme in ZFS discussions: "No enterprise
uses shrink, only grow."
Maybe. The enterprise I work for requires that every change be reversible and
repeatable. Every change requires a backout plan and
On 5-Aug-09, at 12:07 , Thomas Burgess wrote:
i would be VERY surprised if you couldn't fit these in there
SOMEWHERE, the sata to compactflash adapter i got was about 1.75
inches across and very very thin, i was able to mount them side by
side on top of the drive tray in my machine, you can
i would be VERY surprised if you couldn't fit these in there SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and very
very thin, i was able to mount them side by side on top of the drive tray in
my machine, you can easily make a bracket...i know a guy who used double
I think you need to give more information about your setup.
On Wed, Aug 5, 2009 at 5:40 AM, Mr liu wrote:
> 0811 or 0906 or Sun Solaris?
>
> I read a lot of articles about ZFS performance and tested 0811/0906
> /NexentaStor 2.0.
>
> The write performance is at most 60Mb/s (32k), the other only a
From what I understand, and from everything I've read by following threads
here, there are ways to do it, but there is no standardized tool yet, and
it's complicated and handled on a per-case basis, but people who pay for support
have recovered pools.
I'm sure they are working on it, and I would imagine
On Tue, 4 Aug 2009, Chookiex wrote:
You know, ZFS provides a very big buffer for write I/O.
So, when we write a file, the first stage is to put it in the buffer.
But what if the file is VERY short-lived? Does it cause I/O to disk?
Or does it just put the metadata and data in memory and then remove them?
This
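A crude way to see this for yourself (a sketch; the pool and path are placeholders) is to watch the pool while creating and immediately deleting a small file:
# zpool iostat tank 1 &
# dd if=/dev/urandom of=/tank/fs/shortlived bs=8k count=1 && rm /tank/fs/shortlived
If the file dies before the transaction group commits, little or none of it should ever reach the disk.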
I've left it hanging for about 2 hours. I've also just learned that whatever the
issue is, it is also blocking an "init 5" shutdown. I was thinking about setting
a watchdog with a forced reboot, but that will get me nowhere if I need a
reset-button restart.
Thanks for the advice re the LSI 1068, not
Hi Nawir,
I haven't tested these steps myself, but the error message
means that you need to set this property:
# zpool set bootfs=rpool/ROOT/BE-name rpool
Cindy
On 08/05/09 03:14, nawir wrote:
Hi,
I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD
These steps below is w
Hi Will,
Since no workaround is provided in the CR, I don't know if importing on
a more recent OpenSolaris release and trying to remove it will work.
I will simulate this error, try this approach, and get back to you.
Thanks,
Cindy
On 08/04/09 18:34, Will Murnane wrote:
On Tue, Aug 4, 2009
Christian Flaig wrote:
Hello,
I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. OpenSolaris as the host
is 2009-06, with snv118. Now I try to mount (via CIFS) a share in Ubuntu from
OpenSolaris. Mounting is
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool
to an SSD mirror to free up big-disk slots currently used by the OS and need to
shrink rpool from 40GB to 15GB (only using 2.7GB for the install).
thx
jake
Just a thought, but how long have you left it? I had problems with a failing
drive a while back which did eventually get taken offline, but took about 20
minutes to do so.
I'm still struggling with slow resilvering performance. There doesn't seem to
be any clear bottleneck at this point.. and it's going glacially slow.
scrub: resilver in progress for 11h2m, 27.86% done, 28h35m to go
Load averages are like 0.13-0.15 range, CPU usage is <10%, the machine is doing
n
Ross Walker wrote:
On Aug 5, 2009, at 2:49 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k
sequential write). It's a Dell PERC 6/e with 512MB onboard.
...
there,
On Aug 5, 2009, at 8:50 AM, Ketan wrote:
How can we remove a disk from a ZFS pool? I want to remove disk c0d3.
zpool status datapool
pool: datapool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
datapool    ONLINE       0     0     0
  c0d2
Ross Walker wrote:
On Aug 5, 2009, at 3:09 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn > wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but there may
On Aug 5, 2009, at 3:09 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn > wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but there may be considerably mor
On Aug 5, 2009, at 2:49 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k
sequential write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog devic
On Wed, 5 Aug 2009, Ketan wrote:
How can we remove disk from zfs pool, i want to remove disk c0d3
[snip]
Currently, you can't remove a vdev without destroying the pool.
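The usual workaround, when a device really has to come out, is to migrate the data to a pool built without it (a sketch; 'newpool' is hypothetical and must already exist on other disks):
# zfs snapshot -r datapool@migrate
# zfs send -R datapool@migrate | zfs recv -Fd newpool
# zpool destroy datapool
after which the freed disks can be reused.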
--
Andre van Eyssen.
mail: an...@purplecow.org jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the ma
I created a snapshot and a subsequent clone of a ZFS volume. But now I'm not
able to remove the snapshot; it gives me the following error:
zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has dependent clones
use '-R' to destroy the following datasets:
ne
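The two usual ways out are (a sketch; the clone name below is hypothetical, since the listing is cut off):
# zfs destroy -R newpool/ldom2/zdi...@bootimg    # removes the snapshot and every dependent clone
# zfs promote newpool/ldom2/myclone              # or: keep the clone, make it the snapshot's owner,
                                                 #     then destroy the original volume instead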
How can we remove a disk from a ZFS pool? I want to remove disk c0d3.
zpool status datapool
pool: datapool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
datapool    ONLINE       0     0     0
  c0d2      ONLINE       0     0     0
On 4-Aug-09, at 19:46 , Chris Du wrote:
Yes Constellation, they also have sata version. CA$350 is way too
high. It's CA$280 for SAS and CA$235 for SATA, 500GB in Vancouver.
Wow, that is a much better price than I've seen:
http://pricecanada.com/p.php/Seagate-Constellation-7200-500GB-7200-ST9
On 5-Aug-09, at 0:14 , Thomas Burgess wrote:
i boot from compact flash. it's not a big deal if you mirror it
because you shouldn't be booting up very often. Also, they make
these great compactflash to sata adapters so if yer motherboard has
2 open sata ports then you'll be golden there.
Sanjeev
Thanks for taking an interest. Unfortunately I did have failmode=continue, but
I have just destroyed/recreated and double confirmed and got exactly the same
results.
zpool status shows both drives mirror, ONLINE, no errors
dmesg shows:
SATA device detached at port 0
cfgadm shows:
sa
Little update...
I can read files (within the share) with the following ACL:
-r--r--r--+  1 chris  staff  35 Aug  5 13:18 .txt
user:tmns:r-x---a-R-c---:--I:allow
user:chris:rwxpdDaARWc--s:--I:allow
everyone@:r-a-R-c--s:---:a
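If the files that fail have extra ACEs beyond these, one quick test (a sketch; it rewrites the ACL, so try it on a copy) is to reset one of them to a trivial ACL and retry the copy:
# ls -V problem-file        # compare its ACEs with the working file above
# chmod A- problem-file     # strip the non-trivial ACEs, leaving only the mode bits
If the copy then works, the CIFS client is tripping over a specific ACE.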
Hello,
I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. OpenSolaris as
the host is 2009-06, with snv118. Now I try to mount (via CIFS) a share in
Ubuntu from OpenSolaris. Mounting is successful, I can see al
0811 or 0906 or Sun Solaris?
I read a lot of articles about ZFS performance and tested 0811/0906
/NexentaStor 2.0.
The write performance is at most 60Mb/s (32k); the others are only around 10Mb/s.
I tested it from a COMSTAR iSCSI target and used IOMeter on Windows.
What shall I do? I am very very di
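One way to narrow this down is to separate the local pool write path from the iSCSI path (a sketch; the pool, zvol name and sizes are placeholders):
# zfs create -V 10g tank/testvol
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=128k count=8192
# zpool iostat tank 1       # run in another terminal during both the dd and the IOMeter test
If the raw zvol writes are fast and only the COMSTAR/IOMeter numbers are slow, the bottleneck is on the network/initiator side rather than in ZFS.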
Hi,
I have a sol10u7 OS with a 73GB HD in c1t0d0.
I want to clone it to a 36GB HD.
The steps below are what came to mind:
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs se
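For what it's worth, the remaining steps in this kind of root-pool migration usually run along these lines (a sketch only, not the poster's actual commands; the boot-block step differs between x86 and SPARC):
# zfs send -R <snapshot> | zfs recv -Fd altrpool
# zpool set bootfs=altrpool/ROOT/<BE-name> altrpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0                    # x86
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0     # SPARC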
On 05.08.09 11:40, Tristan Ball wrote:
Can anyone tell me why successive runs of "zdb" would show very
different values for the cksum column? I had thought these counters were
"since last clear" but that doesn't appear to be the case?
zdb is not intended to be run on live pools. For a live pool
I created a clone from the most recent snapshot of a filesystem; the clone's
parent filesystem was the same as the snapshot's. When I did a rollback to
a previous snapshot, it erased my clone. Yes, it was really stupid to keep the
clone on the same filesystem; I was tired and wasn't thinking
Can anyone tell me why successive runs of "zdb" would show very
different values for the cksum column? I had thought these counters were
"since last clear" but that doesn't appear to be the case?
If I run "zdb poolname", right at the end of the output, it lists pool
statistics:
Ross Walker wrote:
On Aug 4, 2009, at 10:17 PM, James Lever wrote:
On 05/08/2009, at 11:41 AM, Ross Walker wrote:
What is your recipe for these?
There wasn't one! ;)
The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.
So the key is the drive needs to have the Dell badging
On 5 Aug 09 at 06:06, Chookiex wrote:
Hi All,
You know, ZFS provides a very big buffer for write I/O.
So, when we write a file, the first stage is to put it in the buffer.
But what if the file is VERY short-lived? Does it cause I/O to disk?
Or does it just put the metadata and data in memory and then
re
Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn > wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but there may be considerably more
latency associated with getting that data into the c