As suggested at
http://opensolaris.org/jive/thread.jspa?messageID=416264, you can try
viewing the disk serial numbers with cfgadm:
cfgadm -al -s "select=type(disk),cols=ap_id:info"
You may need to power down the system to view the serial numbers.
On Mon, Dec 7, 2009 at 4:32 PM, Ed Plese wrote:
> Would it be beneficial to have a command line option to zpool that
> would only "preview" or do a "dry-run" of the changes, displaying
> what the pool would look like after the operation instead of
> actually performing it?
Yes, getting in the habit of always using an option like this
might be a good way to ensure the change is really what is desired.
Some information that might be nice to see would be the before and
after versions of "zpool list", the "zpool status", and what command
could be run to revert the change.
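For some operations this already exists: if I remember correctly, both
"zpool create" and "zpool add" accept a -n flag that displays the
configuration that would be used without actually performing the
operation. For example (pool and device names below are made up):

```
# zpool add -n tank mirror c2t0d0 c2t1d0
```

A similar preview for attach, detach, and remove would still be useful.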
> It should be ~ 3.8-3.9 TB, right?
An autoexpand property was added a few months ago for zpools. This
needs to be turned on to enable the automatic vdev expansion. For
example:
# zpool set autoexpand=on bigpool
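If the pool predates the autoexpand property, or the property was off
when a larger disk was attached, I believe "zpool online -e" can be
used to expand a device by hand (the device name below is made up):

```
# zpool online -e bigpool c0t2d0
```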
Ed Plese
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
You can reclaim this space with the SDelete utility from Microsoft.
With the -c option it will zero any free space on the volume. For
example:
C:\>sdelete -c C:
I've tested this with xVM and with compression enabled for the zvol,
and it worked very well.
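For reference, a sketch of how such a zvol could be set up (pool and
volume names are made up). Compression is what lets the zeroed free
space shrink back down, since long runs of zeros compress to almost
nothing:

```
# zfs create -V 20G -o compression=on tank/xvm/winvol
```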
Ed Plese
On Tue, Nov 17, 20
pares or cache devices.
     -t   Temporary. Upon reboot, the specified physical
          device reverts to its previous state.
Ed Plese
On Wed, Nov 11, 2009 at 12:15 PM, Tim Cook wrote:
> So, I've done a bit of research and RTFM, and haven't found an
executing. With ~3500 filesystems on S10U3 the boot
time for our X4500 was around 40 minutes. Any idea what your boot
time is like with that many filesystems on the newer releases?
Ed Plese
en the ZFS filesystem is a subdirectory of
the Samba share. In addition, make sure that there are actually
changes between the snapshots. If there aren't any then the Previous
Versions tab may not appear.
Ed Plese
While it won't help you in your case since your users access the files
using protocols other than CIFS, if you use only CIFS it's possible to
configure Samba to automatically create a user's home directory the
first time the user connects to the server. This is done usin
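As a sketch, something along these lines in smb.conf should work; the
dataset path is made up, and the "root preexec" command runs as root
each time the user connects, creating the dataset only if it doesn't
already exist:

```
[homes]
    root preexec = /bin/sh -c '/usr/sbin/zfs list tank/home/%u >/dev/null 2>&1 || /usr/sbin/zfs create tank/home/%u'
```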
turns EOVERFLOW which probably isn't a good thing.
> Is there a solution here but to move the zone root to a smaller disk?
Set a quota (10G should work just fine) on the filesystem and then
perform the zone install. Afterwards remove the quota.
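Something like the following, with the dataset name made up to match a
typical zone path:

```
# zfs set quota=10G rpool/zones/myzone
# zoneadm -z myzone install
# zfs set quota=none rpool/zones/myzone
```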
Ed Plese
e of that,
any file permissions or ACLs are respected even if Samba doesn't have
support for the ACLs. The main thing that Samba support for ZFS ACLs
will bring is the ability to view and set the ACLs from a Windows client
and in particular through the normal Windows
> http://www.sun.com/emrkt/startupessentials/
>
> For an idea on the levels of discounts see
> http://kalsey.com/2006/11/sun_startup_essentials_pricing/
In addition, here are Sun's promotions for educational institutions:
http://www.sun.com/products-n-solutions/edu/promot
207.100738.a8abc689
Ed Plese
to
restore files from their snapshots.
See http://www.edplese.com/samba-with-zfs.html (at the bottom of the
page) for more info.
Ed Plese
ing 2 disks from each JBOD enclosure. This would give you the
ability to sustain the simultaneous failure of an entire enclosure and
at least one (though sometimes multiple) disk failure in the working
enclosure. This would also be a pure ZFS solution.
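For example, with the c1* devices in one enclosure and the c2* devices
in the other (device names made up), four-way mirrors built from two
disks in each enclosure would look like:

```
# zpool create tank \
    mirror c1t0d0 c1t1d0 c2t0d0 c2t1d0 \
    mirror c1t2d0 c1t3d0 c2t2d0 c2t3d0
```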
Ed Plese
is to a single command.
Taking individual snapshots of each filesystem can take a decent amount
of time, but I was under the impression that recursive snapshots would
be much faster due to the snapshots being committed in a single transaction.
Is this not correct?
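That is, instead of looping "zfs snapshot" over each filesystem, a
single recursive snapshot (pool name made up) should commit all of the
snapshots atomically in one transaction:

```
# zfs snapshot -r tank@nightly
```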
Ed Plese
On Thu, Sep 28, 2006 at 12:40:17PM -0500, Ed Plese wrote:
> This can be elaborated on to do neat things like create a ZFS clone when
> a client connects and then destroy the clone when the client
> disconnects (via "root postexec"). This could possibly be useful for
> the sh
airly simple
VFS module for Samba that would replace every mkdir call with a call
to "zfs create". This method is a bit more involved than the above
method since the VFS modules are coded in C, but it's definitely a
possibility.
Ed Plese
On Mon, Aug 07, 2006 at 02:36:27PM -0500, Ed Plese wrote:
> A quick Google search turned up the following URL which has some
> screenshots to illustrate what the Shadow Copy Client looks like.
Oops.. forgot the URL:
http://www.petri.co.il/how_to_use_the_shadow_copy_client.htm
Ed
h turned up the following URL which has some
screenshots to illustrate what the Shadow Copy Client looks like.
The default shadow copy VFS module for Samba doesn't work very well
with ZFS but after some modifications it provides very good integration
of ZFS with Windows Explorer.
If anyone is
Thanks, that's exactly what I was looking for.
Ed Plese
On Wed, Jun 14, 2006 at 10:09:35AM -0700, Eric Schrock wrote:
> No, but this is a known issue. See:
>
> 6431277 want filesystem-only quotas
>
> - Eric
>
> On Wed, Jun 14, 2006 at 11:58:25AM -0500, Ed Plese wro
h
seems to defeat the purpose of the quotas), or destroy all of their
snapshots (which seems to defeat the purpose of using the snapshots for
any sort of backup mechanism).
Is there any way to work around this with the quota and snapshot
mechanisms built into ZFS?
Thanks,