JD Trout wrote:
> I have a quick ZFS question. With most hardware RAID controllers, all
> the data and the RAID configuration info are stored on the disks.
> Therefore, the integrity of the data can survive a controller failure
> or the deletion of the LUN as long as it is recreated with the same
> drives in the same
>
joerg.schill...@fokus.fraunhofer.de wrote:
> The secure deletion of the data would be something that happens before
> the file is actually unlinked (e.g. by rm). This secure deletion would
> need to open the file in a non-COW mode.
That may not be sufficient. Earlier writes to the file might have lef
Ross Walker wrote:
> On May 12, 2010, at 1:17 AM, schickb wrote:
>
>> I'm looking for input on building an HA configuration for ZFS. I've
>> read the FAQ and understand that the standard approach is to have a
>> standby system with access to a shared pool that is imported during
>> a failov
schickb wrote:
> I'm looking for input on building an HA configuration for ZFS. I've
> read the FAQ and understand that the standard approach is to have a
> standby system with access to a shared pool that is imported during a
> failover.
>
> The problem is that we use ZFS for a specialized purpos
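(For reference, the export/import dance behind that kind of failover looks roughly like this; the pool name is a placeholder:)

# On the node that currently owns the pool, if it is still reachable:
zpool export tank
# On the standby node, after the old node has been fenced off;
# -f forces the import if the pool was not cleanly exported:
zpool import -f tank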
Hi,
I believe ZFS, at least in its design ;), provides APIs other than
POSIX (for databases and other applications) to talk directly to the DMU.
Are such interfaces ready/documented? If this is documented somewhere,
could you point me to it?
Regards,
Manoj
Matt B wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for
production".
ZFS turns on the write cache on the disk if you give it the entire disk to
manage. This is good for performance. So, you should use whole disks w
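A minimal sketch of the difference, with made-up device names:

# Whole disk: ZFS puts an EFI label on it and can safely enable the write cache.
zpool create tank c1t0d0
# Slice: the write cache is left alone, since other slices may be in use
# by another consumer.
zpool create tank c1t0d0s0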
Ayaz Anjum wrote:
HI !
I have tested the following scenario
created a zfs filesystem as part of HAStoragePlus in SunCluster 3.2,
Solaris 11/06
Currently I have only one FC HBA per server.
1. There is no IO to the zfs mountpoint. I disconnected the FC cable.
Filesystem on zfs still sh
Ayaz,
Ayaz Anjum wrote:
HI !
I have some concerns here. From my experience in the past, touching a
file (doing some I/O) will cause the UFS filesystem to fail over, unlike
ZFS, where it did not! Why is the behaviour of ZFS different from UFS? Is
this not compromising data integrity?
As ot
David Anderson wrote:
Hi,
I'm attempting to build a ZFS SAN with iSCSI+IPMP transport. I have two
ZFS nodes that access iSCSI disks on the storage network and then the
ZFS nodes share ZVOLs via iSCSI to my front-end Linux boxes. My
throughput from one Linux box is about 170+MB/s with nv59 (ea
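For context, sharing a ZVOL over iSCSI on builds of that vintage looked roughly like this (pool, volume and size are made up; shareiscsi was the pre-COMSTAR mechanism):

zfs create -V 100g tank/vol1
zfs set shareiscsi=on tank/vol1
iscsitadm list target     # confirm the target was created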
Richard L. Hamilton wrote:
and does it vary by filesystem type? I know I ought to know the
answer, but it's been a long time since I thought about it, and
I must not be looking at the right man pages. And also, if it varies,
how does one tell? For a pipe, there's fpathconf() with _PC_PIPE_BUF,
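From the shell, getconf wraps the same pathconf()/fpathconf() query, so a quick check looks like this (the mountpoints are just examples):

getconf PIPE_BUF /tank/fs     # a ZFS mountpoint
getconf PIPE_BUF /var/tmp     # compare with another filesystem type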
Richard Elling wrote:
Atul Vidwansa wrote:
Hi Richard,
I am not talking about source(ASCII) files. How about versioning
production data? I talked about file level snapshots because
snapshotting entire filesystem does not make sense when application is
changing just few files at a time.
CVS
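Worth noting: because snapshots are copy-on-write, a snapshot of the whole filesystem only consumes space for blocks that change afterwards, so "just a few files at a time" is exactly the cheap case. A quick illustration (the dataset name is a placeholder):

zfs snapshot tank/data@before-change
# ... application modifies a few files ...
zfs list -t snapshot -o name,used,referenced    # 'used' stays tiny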
Constantin Gonzalez wrote:
Do I still have the advantages of having the whole disk
'owned' by zfs, even though it's split into two parts?
I'm pretty sure that this is not the case:
- ZFS has no guarantee that someone won't do something else with that other
partition, so it can't assume the r
Erik Trimble wrote:
While expanding a zpool in the way you've shown is useful, it has nowhere
near the flexibility of simply adding single disks to existing RAIDZ
vdevs, which was the original desire expressed. This conversation has
been had several times now (take a look in the archives aroun
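To restate the limitation with a sketch (device names invented): you cannot grow an existing raidz vdev by one disk; the supported route is to add another complete vdev:

zpool status tank                           # one raidz vdev today
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0   # supported: a second raidz vdev
# Not supported: adding a single disk to the existing raidz vdev.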
Pawel Jakub Dawidek wrote:
... The other problem is that there is no spare room in
the ZIL structures, i.e. I can't add anything to lr_setattr_t that won't
break on-disk compatibility. The suggested way of moving pools is to
export the pool, move it to another box and import it. Once the pool is exported
there s
Simon wrote:
So, does this mean it is an Oracle bug? Or is it impossible (or
inappropriate) to use ZFS/SVM volumes to create Oracle data files, and
should one instead use a zfs or ufs filesystem to do this?
Oracle can use SVM volumes to hold its data. Unless I am mistaken, it
should be able to use zvols as well.
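A rough sketch of pointing Oracle at a zvol (names and size are placeholders):

zfs create -V 10g tank/oradata01
ls -l /dev/zvol/rdsk/tank/oradata01   # hand this character device to Oracle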
Bob Bownes wrote:
I like the "take a look at what Veritas did" suggestion. Has anyone done
so?
Does anyone *know* what Veritas did? I tried Google. It seems VxFS for
Linux is not GPL.
I saw posts on the linux-kernel list expressing concerns about potential
GPL violations when accepting pat
Dennis Clarke wrote:
So now here we are ten years later with a new filesystem and I have no
way to back it up in such a fashion that I can restore it perfectly. I
can take snapshots. I can do a strange send and receive, but the
process is not stable. From zfs(1M) we see:
The format of the st
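The mechanics being complained about look roughly like this (names are placeholders); the caveat at the time was that the send-stream format was not guaranteed to be readable by a different zfs version:

zfs snapshot tank/home@backup1
zfs send tank/home@backup1 | ssh backuphost zfs receive backup/home
zfs send tank/home@backup1 > /backup/home.zfs    # file copy of the stream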
Wee Yeh Tan wrote:
On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
bash-3.00# mdb -k
Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md
ip sctp usba fcp fctl qlc ssd crypto lofs zfs random ptm cpc nfs ]
> segmap_percent/D
segmap_percent:
segmap_percent: 12
(it's stat
Gino wrote:
Apr 23 02:02:22 SERVER144 ^Mpanic[cpu1]/thread=ff0017fa1c80:
Apr 23 02:02:22 SERVER144 genunix: [ID 809409 kern.notice] ZFS: I/O failure (write on
off 0: zio 9a5d4cc0 [L0 bplist] 4000L/4000P DVA[0]=<0:770b24
000:4000> DVA[1]=<0:dfa984000:4000> fletcher4 uncompressed LE
Richard Elling wrote:
In other words, the "sync" command schedules a sync. The consistent way
to tell if writing is finished is to observe the actual I/O activity.
ZFS goes beyond this POSIX requirement. When a sync(1M) returns, all
dirty data that has been cached has been committed to disk.
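A simple way to combine the two pieces of advice (the pool name is a placeholder):

sync                  # on ZFS, dirty cached data is on stable storage when this returns
zpool iostat tank 1   # watch the actual write activity quiesce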
Brian Hechinger wrote:
After having set my desktop to install (to a pair of 140G SATA disks
that zfs is mirroring) at work, I was trying to skip the dump slice
since in this case, no, I don't really want it. ;)
Don't underestimate the usefulness of a dump device. You might run into
a panic so
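For what it's worth, checking and setting the dump device is a one-liner each (the slice is a placeholder; at the time of this thread a zvol could not yet be used as a dump device, which is why skipping the slice means no crash dumps):

dumpadm                          # show the current dump configuration
dumpadm -d /dev/dsk/c0t0d0s1     # point crash dumps at a dedicated slice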
Hi,
I was wondering about the ARC and its interaction with the VM
pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
cache get mapped to the process' virtual memory? Or is there another copy?
-Manoj
Lee Fyock wrote:
least this year. I'd like to favor available space over performance, and
be able to swap out a failed drive without losing any data.
Lee Fyock later wrote:
In the mean time, I'd like to hang out with the system and drives I
have. As "mike" said, my understanding is that zfs wo
Mario Goebbels wrote:
do it". So I added the disk using the zero slice notation (c0d0s0),
as suggested for performance reasons. I checked the pool status and
noticed, however, that the pool size didn't increase.
I believe you got this wrong. You should have given ZFS the whole disk -
c0d0 and not a
Robert Thurlow wrote:
I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors - see bugid 6539587. I haven't played with USB drives
enough to trust them more, but this was a hole I fell in with Firewire
Hi,
This is probably better discussed on zfs-discuss. I am CCing the list.
Followup emails could leave out opensolaris-discuss...
Shweta Krishnan wrote:
Does zfs/zpool support the layered driver interface?
I wrote a layered driver with a ramdisk device as the underlying
device, and successfu
Shweta Krishnan wrote:
I ran zpool with truss, and here is the system call trace. (again, zfs_lyr is
the layered driver I am trying to use to talk to the ramdisk driver).
When I compared it to a successful zpool creation, the culprit is the last
failing ioctl
i.e. ioctl(3, ZFS_IOC_CREATE_POOL,
Michael Barrett wrote:
Normally if you have a ufs file system hit 100% and you have a very high
level of system and application load on the box (that resides in the
100% file system) you will run into inode issues that require a fsck and
show themselves by not being able to long-list out all
dudekula mastan wrote:
At least in my experience, I saw corruption when the ZFS file system was
full. So far there is no way to check the file system consistency on ZFS
(to the best of my knowledge). ZFS people claim that the ZFS file system
is always consistent and there is no need for an FSCK comman
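The usual answer to the fsck question is a scrub, which verifies every allocated block against its checksum (the pool name is a placeholder):

zpool scrub tank
zpool status -v tank    # scrub progress and any errors found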
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno =
ENOSPC. However this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_tempreserve_space+0x4e
zfs`dmu
Matthew Ahrens wrote:
Manoj Joseph wrote:
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno
= ENOSPC. However this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
Matthew Ahrens wrote:
In a COW filesystem such as ZFS, it will sometimes be necessary to
return ENOSPC in cases such as chmod(2) which previously did not. This
is because there could be a snapshot, so "overwriting" some information
actually requires a net increase in space used.
That said, w
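A rough sketch of the situation being described, using a file-backed test pool; whether the chmod actually returns ENOSPC depends on the build and on how much space ZFS holds in reserve:

mkfile 128m /var/tmp/tank.img
zpool create testpool /var/tmp/tank.img
mkfile 32m /testpool/big              # something to chmod later
zfs snapshot testpool@before          # pins the current blocks
mkfile 200m /testpool/filler          # fails partway; the pool is now full
chmod 600 /testpool/big               # may fail with ENOSPC: even a chmod
                                      # must allocate new blocks under COW
zpool destroy testpool; rm /var/tmp/tank.img   # clean up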
Hi,
In brief, what I am trying to do is to use libzpool to access a zpool -
like ztest does.
Matthew Ahrens wrote:
Manoj Joseph wrote:
Hi,
Replying to myself again. :)
I see this problem only if I attempt to use a zpool that already
exists. If I create one (using files instead of devices
Manoj Joseph wrote:
Hi,
In brief, what I am trying to do is to use libzpool to access a zpool -
like ztest does.
[snip]
No, AFAIK, the pool is not damaged. But yes, it looks like the device
can't be written to by the userland zfs.
Well, I might have figured out something.
Trussing
Manoj Joseph wrote:
Manoj Joseph wrote:
Hi,
In brief, what I am trying to do is to use libzpool to access a zpool
- like ztest does.
[snip]
No, AFAIK, the pool is not damaged. But yes, it looks like the device
can't be written to by the userland zfs.
Well, I might have figure
Manoj Joseph wrote:
> Manoj Joseph wrote:
>> Manoj Joseph wrote:
>>> Hi,
>>>
>>> In brief, what I am trying to do is to use libzpool to access a zpool
>>> - like ztest does.
>>
>> [snip]
>>
>>> No, AFAIK, the pool is not da
Peter Tribble wrote:
> I've not got that far. During an import, ZFS just pokes around - there
> doesn't seem to be an explicit way to tell it which particular devices
> or SAN paths to use.
You can't tell it which devices to use in a straightforward manner. But
you can tell it which directories
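Concretely, the trick is to populate a directory with links to exactly the device paths you want considered and point the import at it (paths and names are placeholders):

mkdir /sanpaths
ln -s /dev/dsk/c6t0d0s0 /sanpaths/
zpool import -d /sanpaths tank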
into opensolaris?
Regards,
Manoj
Matthew Ahrens wrote:
> Manoj Joseph wrote:
>> Unlike what I had assumed earlier, the zio_t that is passed to
>> vdev_file_io_start() has an aligned offset and size.
>>
>> The libzpool library, when writing data to the devices below a zpool,
&
Tatjana S Heuser wrote:
Is it planned to have the cluster fs or proxy fs layer between the ZFS layer
and the Storage pool layer?
This, AFAIK, is not the current plan of action. Sun Cluster should be
moving towards ZFS as a 'true' cluster filesystem.
Not going the 'proxy fs layer' way (PxFS/G
Alan Romeril wrote:
PxFS performance improvements of the order of 5-6 times are possible,
depending on the workload, using the Fastwrite option.
Fantastic! Has this been targeted at directory operations? We've
had issues with large directories full of small files being very slow
to handle over PxFS.
Robert Milkowski wrote:
Hello Sanjeev,
Wednesday, August 30, 2006, 3:26:52 PM, you wrote:
SB> Hi,
SB> We were trying out the "compression=on" feature of ZFS and were
SB> wondering if it would make
SB> sense to have ZFS do compression only on a certain kind of files (or
SB> rather the otherwa
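Since compression is a per-dataset property rather than a per-file one, the usual approach is to split the data by type into child filesystems (names are placeholders):

zfs create tank/text
zfs set compression=on tank/text
zfs create tank/media
zfs set compression=off tank/media
zfs get compressratio tank/text    # see how well it is doing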