RackTop/EraStor/Illumos/???)
>
I'm not sure, but I think there are people running NexentaStor on that h/w.
If not, then on something pretty close. NS supports clustering, etc.
--
Gordon Ross
Nexenta Systems, Inc. www.nexenta.com
Enterprise cla
r#13 EACCES
>
> Accessing files or directories through /proc/$$/fd/ from a shell
> otherwise works, only the xattr directories cause trouble. Native C
> code has the same problem.
>
> Olga
Does "runat" let you see those xattr files?
--
Gordon Ross
Nexenta System
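On Solaris/illumos, runat(1) is the usual way to reach a file's extended
attribute directory from a shell. A rough sketch (the file name is just a
placeholder):
  runat /tank/data/somefile ls -l              # list the file's extended attributes
  runat /tank/data/somefile cat myattr         # read one attribute
  runat /tank/data/somefile cp myattr /tmp/    # copy an attribute out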
On Thu, Jul 21, 2011 at 9:58 PM, Paul B. Henson wrote:
> On 7/19/2011 7:10 PM, Gordon Ross wrote:
>
>> The idea: A new "aclmode" setting called "discard", meaning that
>> the users don't care at all about the traditional mode bits. A
>> dataset
Are the "disk active" lights typically ON when this happens?
On Tue, Jul 26, 2011 at 3:27 PM, Garrett D'Amore wrote:
> This is actually a recently known problem, and a fix for it is in the
> 3.1 version, which should be available any minute now, if it isn't
> already available.
>
> The problem ha
I'm looking to upgrade the disk in a high-end laptop (so called
"desktop replacement" type). I use it for development work,
running OpenIndiana (native) with lots of ZFS data sets.
These "hybrid" drives look kind of interesting, i.e. for about $100,
one can get:
Seagate Momentus XT ST95005620AS 5
On Mon, Jul 18, 2011 at 9:44 PM, Paul B. Henson wrote:
> Now that illumos has restored the aclmode option to zfs, I would like to
> revisit the topic of potentially expanding the suite of available modes.
[...]
At one point, I was experimenting with some code for smbfs that would
"invent" the mod
ersioning, nothing else (no API, no additional features,
> etc.).
I believe NTFS was built on the same concept of file streams the VMS FS used
for versioning.
It's a very simple versioning system.
Personally I use SharePoint, but there are other content man
of latency hit which would
kill read performance.
Try disabling the on-board write or read cache and see how your sequential IO
performs; you'll see just how valuable those puny caches are.
-Ross
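For what it's worth, on Solaris the per-disk caches can usually be toggled from
the expert menu of format(1M); a rough outline (interactive, and not every
drive honors it):
  format -e        # select the disk from the menu
  # then: cache -> write_cache -> disable (and/or read_cache -> disable)
  # re-run the sequential IO test and compare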
at supports "Previous Versions" using the hosts native snapshot method.
The one glaring deficiency Samba has though, in Sun's eyes not mine, is that it
runs in user space, though I believe that's just the cover song for "It wasn't
invented here"
and sustained throughput in 1MB+ sequential IO workloads. Only SSD
makers list their random IOPS workload numbers and their 4K IO workload numbers.
-Ross
"GPL" ZFS? In what way would that save you annoyance?
I actually think Doug was trying to say he wished Oracle would open the
development and make the source code open-sourced, not necessarily GPL'd.
-Ross
might find that as you get more
machines on the storage the performance will decrease a lot faster than it
otherwise would if it were standalone, as it competes with the very machines it
is supposed to be serving.
-Ross
On Dec 7, 2010, at 9:49 PM, Edward Ned Harvey
wrote:
>> From: Ross Walker [mailto:rswwal...@gmail.com]
>>
>> Well besides databases there are VM datastores, busy email servers, busy
>> ldap servers, busy web servers, and I'm sure the list goes on and on.
>>
gunpoint.
Well besides databases there are VM datastores, busy email servers, busy ldap
servers, busy web servers, and I'm sure the list goes on and on.
I'm sure it is much harder to list servers that are truly sequential in IO than
random. This is especially
utilizing 1Gbps before MC/S then going
to MC/S won't give you more, as you weren't using what you had (in
fact added latency in MC/S may give you less!).
I am going to say that the speed improvement from 134->151a was due to
OS and comstar improvements and not the MC/S.
-Ross
On Nov 16, 2010, at 7:49 PM, Jim Dunham wrote:
> On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
>> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
>>
>> For iSCSI one just needs to have a s
unless you have at least as many TCP streams as cores, which is
> honestly kind of obvious. lego-netadmin bias.
>
>
>
> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
For iSCSI one just needs to have a second (third or fourth...) iSCSI session on
a different IP to the target and run mpio/mpxio/mpath whatever your OS calls
multi-pathing.
-Ross
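On a Solaris/illumos initiator that looks roughly like this (the portal IPs are
placeholders), with MPxIO tying the sessions together:
  iscsiadm add discovery-address 192.168.10.1:3260   # first portal
  iscsiadm add discovery-address 192.168.11.1:3260   # second portal, different NIC/subnet
  iscsiadm modify discovery --sendtargets enable
  mpathadm list lu    # MPxIO (on by default for iSCSI) should show one LU with two paths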
ot creation/deletion during a
resilver causes it to start over.
Try suspending all snapshot activity during the resilver.
-Ross
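If the snapshots come from the time-slider auto-snapshot services, something
like this holds them off for the duration (service names assume the stock
auto-snapshot instances):
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
  zpool status -v tank     # watch the resilver run to completion
  # svcadm enable the same instances afterwards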
sustained throughput to give an accurate figure based on one's setup,
otherwise start with a reasonable value, say 1GB, and decrease until the pauses
stop.
-Ross
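Assuming the knob in question is the OpenSolaris-era txg write limit, a rough,
untested sketch of how to experiment with it (1GB shown):
  echo 'zfs_write_limit_override/Z 0x40000000' | mdb -kw                # change it live
  echo 'set zfs:zfs_write_limit_override = 0x40000000' >> /etc/system   # persist it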
datasets that have this option set.
This doesn't prevent pool loss in the face of a vdev failure, merely reduces
the likelihood of file loss due to block corruption.
A loss of a vdev (mirror, raidz or non-redundant disk) means the loss of the
pool.
-Ross
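If the option being described here is the per-dataset "copies" property, it is
set like any other ZFS property, e.g.:
  zfs set copies=2 tank/important    # store two copies of every block in this dataset
  zfs get copies tank/important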
> like a DDRDrive X1 and an OCZ Z-Drive which are both PCIe cards and don't use
> the local controller.
What mount options are you using on the Linux client for the NFS share?
-Ross
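For reference, a fairly typical Linux-side mount of a ZFS-backed NFS share
looks something like this (server and paths are placeholders):
  mount -t nfs -o rw,hard,intr,vers=3,rsize=32768,wsize=32768 server:/tank/share /mnt/share
  grep share /proc/mounts    # confirm what actually got negotiated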
> the 100 TB range? That would be quite a number of single drives then,
> especially when you want to go with zpool raid-1.
A pool consisting of 4 disk raidz vdevs (25% overhead) or 6 disk raidz2 vdevs
(33% overhead) should deliver the storage and performance for a pool that size,
versu
FS' built-in mirrors, otherwise if I were
to use HW RAID I would use RAID5/6/50/60 since errors encountered can be
reproduced, two parity raids mirrored in ZFS would probably provide the best of
both worlds, for a steep cost though.
-Ross
recover from a read
error themselves. With ZFS one really needs to disable this and have the drives
fail immediately.
Check your drives to see if they have this feature, if so think about replacing
the drives in the source pool that have long se
cache rather than disk.
Breaking your pool into two or three, setting different vdev types of different
type disks and tiering your VMs based on their performance profile would help.
-Ross
On Aug 21, 2010, at 4:40 PM, Richard Elling wrote:
> On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
>> I'm planning on setting up an NFS server for our ESXi hosts and plan on
>> using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
>
> Please follow
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote:
> On 08/21/10 10:14, Ross Walker wrote:
>> I am trying to figure out the best way to provide both performance and
>> resiliency given the Equallogic provides the redundancy.
>
> (I have no specific experience with Equallo
s setup perform? Anybody with experience in this type of setup?
-Ross
the OS' VFS layer to the lower-level block
layer, but this would assure both reliability and performance.
-Ross
ed in such a way
> that it specifically depends on GPL components.
This is how I see it as well.
The big problem is not the insmod'ing of the blob but how it is distributed.
As far as I know this can be circumvented by not including it in the main
distribution but thro
On Aug 17, 2010, at 5:44 AM, joerg.schill...@fokus.fraunhofer.de (Joerg
Schilling) wrote:
> Frank Cusack wrote:
>
>> On 8/16/10 9:57 AM -0400 Ross Walker wrote:
>>> No, the only real issue is the license and I highly doubt Oracle will
>>> re-release ZFS under
On Aug 16, 2010, at 11:17 PM, Frank Cusack wrote:
> On 8/16/10 9:57 AM -0400 Ross Walker wrote:
>> No, the only real issue is the license and I highly doubt Oracle will
>> re-release ZFS under GPL to dilute its competitive advantage.
>
> You're saying Oracle wan
it maintainer.
Linux is an evolving OS, what determines a FS's continued existence is the
public's adoption rate of that FS. If nobody ends up using it then the kernel
will drop it in which case it will eventually die.
-Ross
m competition in order to drive innovation
so it would be beneficial for both FSs to continue together into the future.
-Ross
e same, regardless of NFS vs iSCSI.
>
> You should always copy files via GUI. That's the lesson here.
Technically you should always copy vmdk files via vmkfstools on the command line.
That will give you wire speed transfers.
-Ross
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote:
>
> Le 5 août 2010 à 19:49, Ross Walker a écrit :
>
>> On Aug 5, 2010, at 11:10 AM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>>&
On Aug 5, 2010, at 11:10 AM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>>>
>>>>>
>>>&g
On Aug 4, 2010, at 12:04 PM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>
>>>
>>>
>>> Ross Asks:
>>> So on that note, ZFS should disable the disks' write cache,
>>> not enable t
On Aug 4, 2010, at 9:20 AM, Roch wrote:
>
>
> Ross Asks:
> So on that note, ZFS should disable the disks' write cache,
> not enable them despite ZFS's COW properties because it
> should be resilient.
>
> No, because ZFS builds resiliency on top of unre
On Aug 4, 2010, at 3:52 AM, Roch wrote:
>
> Ross Walker writes:
>
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>>
>>> Le 27 mai 2010 à 07:03, Brent Jones a écrit :
>>>
>>>> On Wed, May 26, 2010 at 5:08 A
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote:
> On 03/08/2010 22:49, Ross Walker wrote:
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>
>>> Le 27 mai 2010 à 07:03, Brent Jones a écrit :
>>>
>>>
>>&g
her synchronous nor asynchronous; it is
simply SCSI over IP.
It is the application using the iSCSI protocol that determines whether it is
synchronous (issue a flush after each write) or asynchronous (wait until the
target flushes).
I think the ZFS developers didn't quite understand that and wanted stric
or and let it resilver and sit for a
> week.
If that's the case why not create a second pool called 'backup' and 'zfs send'
periodically to the backup pool?
-Ross
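Roughly (pool and snapshot names are placeholders), with incrementals after the
first full copy:
  zpool create backup raidz c2t0d0 c2t1d0 c2t2d0                           # one-time setup
  zfs snapshot -r tank@2010-08-01
  zfs send -R tank@2010-08-01 | zfs recv -d backup                         # first full copy
  zfs snapshot -r tank@2010-08-02
  zfs send -R -i tank@2010-08-01 tank@2010-08-02 | zfs recv -F -d backup   # incremental thereafter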
n small operations, or (b) implement raidz such that striping
> of blocks behaves differently for small operations (plus parity). So the
> confirmation I'm looking for would be somebody who knows the actual source
> code, and the actual architecture that was chosen to implement raidz i
orruption than the worry when
> people give fire-and-brimstone speeches about never disabling
> zil-writing while using the NFS server. but it seems to mostly work
> anyway when I do this, so I'm probably confused about something.
To add to Miles' comments, what you are tr
g
written (worse performance). If it's a partial stripe width then the remaining
data needs to be read off disk which doubles the IOs.
-Ross
have an rpool mirror.
-Ross
On Jul 12, 2010, at 6:30 PM, "Beau J. Bechdol" wrote:
> I do apologize, but I am completely lost here. Maybe I am just not
> understanding. Are you saying that a slice has to be created on the second
> drive before it can be added to th
VFS API separate from the Linux VFS API so file
systems can be implemented in user space. Fuse needs a little more work to
handle ZFS as a file system.
-Ross
on a regular LSI SAS (non-RAID) controller.
The only change the PERC made was to coerce the disk size down by 128MB, leaving
128MB unused at the end of the drive, which would mean new disks would be
slightly bigger.
-Ross
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote:
> On 24/06/2010 14:32, Ross Walker wrote:
>> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
>>
>>
>>> On 23/06/2010 18:50, Adam Leventhal wrote:
>>>
>>>>> Does i
To get good
random IO with raidz you need a zpool with X raidz vdevs where X = desired
IOPS/IOPS of single drive.
-Ross
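As a back-of-the-envelope example (the ~100 IOPS per drive figure is an
assumption, typical of 7200 RPM SATA):
  # want ~2000 random IOPS; one raidz vdev delivers roughly one drive's worth
  # 2000 / 100 = 20 raidz vdevs, regardless of how wide each vdev is
  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
  # ...and so on until the pool has 20 raidz vdevs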
z2) specific disks?
What's the record size on those datasets?
8k?
-Ross
lstat64("/tank/ws/fubar", 0x080465D0) Err#89 ENOSYS
Anyone know why my ZFS filesystem might suddenly start
giving me an error when I try to "ls -d" the top of it?
i.e.: ls -d /tank/ws/fubar
/tank/ws/fubar: Operation not applicable
zpool status says all is well. I've tried snv_139 and snv_137
(my latest and previous installs). It's an amd64 box.
B
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote:
>> The term 'stripe' has been so outrageously severely abused in this
>> forum that it is impossible to know what someone is talking about when
>> they use the term. Seemingly intelligent people continue to use wrong
>> terminology because they thin
Set a max size the ARC can grow to, saving room for system services,
get an SSD drive to act as an L2ARC, run a scrub first to prime the
L2ARC (actually probably better to run something targeting just those
datasets in question), then delete the dedup objects, smallest to
largest.
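Concretely, something along these lines (sizes and device names are
placeholders):
  echo 'set zfs:zfs_arc_max = 0x100000000' >> /etc/system   # 4GB cap, takes effect after reboot
  zpool add tank cache c4t0d0                               # SSD as L2ARC
  zpool scrub tank                                          # or a targeted read pass to prime it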
oblem again in the future.
-Ross
On long threads with inlined comments, think about keeping only the
previous 2 comments, or trimming anything quoted 3 levels of indents or
more.
Of course that's just my general rule of thumb and different
discussions require different quotings, but just being mindful is
often
a 1M bs or better instead.
-Ross
bably rethink the setup.
ZIL will not buy you much here, and if your VM software is like VMware
then each write over NFS will be marked FSYNC which will force the
lack of IOPS to the surface.
-Ross
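If it stays on NFS, a dedicated low-latency log device is what absorbs those
FSYNC writes; roughly (device names are placeholders):
  zpool add tank log c5t0d0                  # single slog
  zpool add tank log mirror c5t0d0 c5t1d0    # or mirrored, which is safer
  zpool status tank                          # the log vdev shows up in its own section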
or VMs and data. If you need high
performance data such as databases, use iSCSI zvols directly into the
VM, otherwise NFS/CIFS into the VM should be good enough.
-Ross
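A rough COMSTAR sketch of handing a zvol straight to a guest (names and sizes
are placeholders):
  zfs create -V 100G tank/db-vol
  sbdadm create-lu /dev/zvol/rdsk/tank/db-vol    # note the GUID it prints
  stmfadm add-view <GUID>                        # expose the LU to all hosts/targets
  itadm create-target                            # if no iSCSI target exists yet
  svcadm enable -r svc:/network/iscsi/target:default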
There is a high potential for tears here.
Get an external disk for your own sanity.
-Ross
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote:
On 21 maj 2010, at 00.53, Ross Walker wrote:
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems
kup should do the trick. It might
not have the capacity of an SSD, but in my experience it works well in
the ~1TB, moderately loaded data range.
If you have more data/activity then try more cards and more pools; otherwise
pony up for a capacitor-backed SSD.
-Ross
one in containers within the 2 original VMs
so as to maximize ARC space.
-Ross
On May 12, 2010, at 3:06 PM, Manoj Joseph
wrote:
Ross Walker wrote:
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access t
ng state as the original.
There should be no interruption of services in this setup.
This type of arrangement provides for oodles of flexibility in testing/
upgrading deployments as well.
-Ross
ent, but if an application
doesn't flush its data, then it can definitely have partially written
data.
-Ross
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote:
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM
On Apr 20, 2010, at 4:44 PM, Geoff Nordli
wrote:
If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC
ith it.
It also allows you to abstract the hypervisor from the client.
Need a bigger storage server with lots of memory, CPU and storage
though.
Later, if need be, you can break out the disks to a storage appliance
with an 8GB FC or 10Gbe iSCSI interconnect.
-Ross
__
scratch non-important data or may be even mirrored with a slice from
750GB disk.
Will this work as I am hoping it should?
Any potential gotchas?
Wouldn't it just be easier to zfs send to a file on the 1TB, build
your raidz, then zfs recv into the new raidz from this file?
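Something along these lines (names are placeholders; note the stream file has
no redundancy until it has been received):
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate > /mnt/1tb/tank-migrate.zfs
  zpool destroy tank
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs recv -F -d tank < /mnt/1tb/tank-migrate.zfs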
system. If so, how? If not, why is this
unimportant?
I don't run the cluster suite, but I'd be surprised if the software
doesn't copy the cache to the passive node whenever it's updated.
-Ross
and another.
> ZFS is smart enough to aggregate all these tiny write operations into a
> single larger sequential write before sending it to the spindle disks.
Hmm, when you did the write-back test was the ZIL SSD included in the
write-back?
What I was proposing was write-back only on the dis
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat
wrote:
> On 01/04/2010 14:49, Ross Walker wrote:
>>>
>>> We're talking about the "sync" for NFS exports in Linux; what do they
>>> mean
>>> with "sync" NFS exports?
>>
>>
hey mean
with "sync" NFS exports?
See section A1 in the FAQ:
http://nfs.sourceforge.net/
-Ross
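On the Linux side that is set per export in /etc/exports, e.g. (paths and
networks are placeholders):
  /srv/data     192.168.1.0/24(rw,sync,no_subtree_check)    # commit before replying
  /srv/scratch  192.168.1.0/24(rw,async,no_subtree_check)   # reply before data is stable
  # then: exportfs -ra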
t a drive a
little smaller it still should fit.
-Ross
one little test.
Seriously, all disks configured WriteThrough (spindle and SSD disks
alike)
using the dedicated ZIL SSD device, very noticeably faster than
enabling the
WriteBack.
What do you get with both SSD ZIL and WriteBack disks enabled?
I mean if you have both why not use bot
On Mar 31, 2010, at 10:25 PM, Richard Elling
wrote:
On Mar 31, 2010, at 7:11 PM, Ross Walker wrote:
On Mar 31, 2010, at 5:39 AM, Robert Milkowski
wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don
ted the data would be
lost too. Should we care more for data written remotely than locally?
-Ross
On Mar 20, 2010, at 11:48 AM, vikkr wrote:
THX Ross, i plan exporting each drive individually over iSCSI.
In this case, the writes, as well as reads, will go to all 6 discs
at once, right?
The only question - how to calculate fault tolerance of such a
system if the discs are all
over iSCSI and setting the 6 drives
as a raidz2 or even raidz3 which will give 3-4 drives of capacity,
raidz3 will provide resiliency of a drive failure during a server
failure.
-Ross
csi works as expected?
-Ross
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote:
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker
wrote:
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this
scenario is rather one to be avoided.
There is nothing saying redundancy can't be provided below ZFS; it's just that if
you want auto-recovery, you need redundancy within ZFS itself as well.
You can have 2 separate raid arrays served up via iSCSI to ZFS which
then makes a mirror out of the storage.
as an option to
disable write-back caching; at least then, if it doesn't honor flushing,
your data should still be safe.
-Ross
IET I hope you were NOT using the write-back option on it as
it caches write data in volatile RAM.
IET does support cache flushes, but if you cache in RAM (bad idea) a
system lockup or panic will ALWAYS lose data.
-Ross
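In IET that behavior is chosen per LUN in ietd.conf; roughly (target name and
device path are placeholders):
  Target iqn.2010-03.local.example:tank
      Lun 0 Path=/dev/sdb,Type=blockio                      # blockio (or fileio with IOMode=wt) keeps writes out of volatile RAM
      # Lun 0 Path=/srv/iscsi/vol0,Type=fileio,IOMode=wb    # the risky write-back variant being warned about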
On Mar 11, 2010, at 12:31 PM, Andrew wrote:
Hi Ross,
OK, as a Solaris newbie I'm going to need your help.
Format produces the following:-
c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) /
p...@0,0/pci15ad,1...@10/s...@4,0
what dd command do I need to run to refe
ill need to get rid of the
RDM and use the iSCSI initiator in the solaris vm to mount the volume.
See how the first 34 sectors look, and if they are damaged take the
backup GPT to reconstruct the primary GPT and recreate the MBR.
-Ross
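To just look at those sectors before touching anything (p0 being the whole-disk
device on Solaris):
  dd if=/dev/rdsk/c8t4d0p0 bs=512 count=34 2>/dev/null | od -A d -t x1 | less
  # sector 0 is the protective MBR, sector 1 the primary GPT header; the backup GPT sits at the end of the disk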
for
memory.
It is a wonder it didn't deadlock.
If I were to put a ZFS file system on a ramdisk, I would limit the
size of the ramdisk and ARC so both, plus the kernel fit nicely in
memory with room to spare for user apps.
-Ross
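A rough illustration (sizes are placeholders, chosen so ramdisk + ARC + kernel
still leave room for applications):
  ramdiskadm -a rd1 2g                                      # 2GB ramdisk
  zpool create ramtank /dev/ramdisk/rd1                     # pool on top of it
  echo 'set zfs:zfs_arc_max = 0x80000000' >> /etc/system    # 2GB ARC cap, after reboot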
ory file system.
This would be more for something like temp databases in a RDBMS or a
cache of some sort.
-Ross
due to the newness and the binary stability with
patches. Without it OS is no longer really production quality.
A little scattered in my reasoning but I think I get the main idea
across.
-Ross
ting the storage
policy up to the system admin rather than the storage admin.
It would be better to put effort into supporting FUA and DPO options
in the target than dynamically changing a volume's cache policy from
the initiator side.
-Ross
e new Dell MD11XX series is 24 2.5" drives and you can chain 3 of
them together off a single controller. If your drives are dual ported
you can use both HBA ports for redundant paths.
-Ross
could do the same with LDAP, but winbind has the
advantage of auto-creating UIDs based on the user's RID+mapping range
which saves A LOT of work in creating UIDs in AD.
-Ross
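The winbind side of that is only a few lines of smb.conf (domain name and
ranges are placeholders):
  [global]
      security = ads
      idmap config *       : backend = tdb
      idmap config *       : range   = 3000-7999
      idmap config EXAMPLE : backend = rid
      idmap config EXAMPLE : range   = 100000-999999
  # every domain user gets UID = range base + RID, with nothing to provision in AD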
, but you need a lot more drives than what multiple
mirror vdevs can provide IOPS-wise with the same number of spindles.
-Ross
Interesting, can you explain what zdb is dumping exactly?
I suppose you would be looking for blocks referenced in the snapshot
that have a single reference and print out the associated file/
directory name?
-Ross
On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote:
Hi Ross,
zdb - f
system
functions offered by OS. I scan every byte in every file manually
and it
^^^
On February 3, 2010 10:11:01 AM -0500 Ross Walker
wrote:
Not a ZFS method, but you could use rsync with the dry run option
to list
all changed fi
On Feb 3, 2010, at 8:59 PM, Frank Cusack
wrote:
On February 3, 2010 6:46:57 PM -0500 Ross Walker
wrote:
So was there a final consensus on the best way to find the difference
between two snapshots (files/directories added, files/directories
deleted
and file/directories changed)?
Find
nly real option is
rsync. Of course you can zfs send the snap to another system and do
the rsync there against a local previous version.
-Ross
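Since both snapshots are visible under .zfs/snapshot, the dry run can be
pointed straight at them (dataset and snapshot names are placeholders; newer
snapshot as the source, older as the destination):
  rsync -avn --delete \
      /tank/data/.zfs/snapshot/snap-new/ \
      /tank/data/.zfs/snapshot/snap-old/
  # listed files = added or changed; "deleting ..." lines = removed between the two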