want.
you need to clone a filesystem per guest because ZFS can only roll back full
filesystems, not individual files. Your VM solution may have finer-grained
controls for its own snapshots, but those don't use ZFS's abilities.
James Dickens
uadmin.blogspot.com
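A per-guest layout along these lines might look like the following sketch (the pool and dataset names `tank/guests/...` are illustrative assumptions, not from the original post):

```shell
# Freeze a golden image once, then give each guest its own clone so
# each guest can be snapshotted and rolled back independently.
zfs snapshot tank/guests/golden@base
zfs clone tank/guests/golden@base tank/guests/vm01
zfs clone tank/guests/golden@base tank/guests/vm02

# Roll back just one guest without touching the others:
zfs snapshot tank/guests/vm01@pre-upgrade
zfs rollback tank/guests/vm01@pre-upgrade
```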
+1
On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski wrote:
>
> fyi
>
> --
> Robert Milkowski
> http://milek.blogspot.com
>
>
> Original Message Subject: zpool import despite missing
> log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From:
> Tim Haley
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson wrote:
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only about
> 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
> attached to a z
ol/dsk/rpool/puddle_slog ONLINE 0 0 0
zfs list -rt volume puddle
NAME               USED  AVAIL  REFER  MOUNTPOINT
puddle/l2arc      8.25G   538G  7.20G  -
puddle/log_test   1.25G   537G  1.25G  -
puddle/temp_cache 4.13G   537G  4.00G  -
James Dickens
http://uadmin.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ly what to keep and
what to throw away.
James Dickens
http://uadmin.blogspot.com
On Sat, Mar 6, 2010 at 2:15 AM, Abdullah Al-Dahlawi wrote:
> hi James
>
>
> here is the out put you've requested
>
> abdul...@hp_hdx_16:~/Downloads# zpool status -v
> pool: hdd
>
please post the output of zpool status -v.
Thanks
James Dickens
On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi wrote:
> Greeting All
>
> I have created a pool that consists of a hard disk and an SSD as a cache
>
> zpool create hdd c11t0d0p3
> zpool add hdd cache c8
ed) exceeds memory, your
performance degrades sharply, probably even before that point.
James Dickens
http://uadmin.blogspot.com
> I.e., I am not using any snapshots and have also turned off automatic
> snapshots because I was bitten by system hangs while destroying datasets
> with living s
/dev/dsk/c0d1s0  9.8G  10M  9.7G  1%  /test
>
the act of deleting files in UFS simply makes a few accounting changes to the
filesystem and thus has no effect on the blocks in the ZFS volume; in some
cases it could actually make the zvol's space usage grow. The only possible way to have ZFS
hot spares to the system should
one fail. If you are truly paranoid, a 3-way mirror can be used; then you can
lose two disks without loss of data.
Spread disks across multiple controllers, and get disks from different
companies and different lots to lessen the likelihood of getting hit by a bad
batch takin
Yes, send and receive will do the job; see the zfs manpage for details.
James Dickens
http://uadmin.blogspot.com
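A minimal send/receive sketch (the dataset and pool names `tank/data` and `newpool` are assumed for illustration):

```shell
# Full copy of a snapshot into a new dataset:
zfs snapshot tank/data@move
zfs send tank/data@move | zfs recv newpool/data

# Incremental follow-up after the initial copy, sending only the
# changes between the two snapshots:
zfs snapshot tank/data@move2
zfs send -i tank/data@move tank/data@move2 | zfs recv newpool/data
```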
On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole wrote:
> Good morning all.
>
> I am in the process of building my V1 SAN for media storage in house, and i
> am already thin
No, sorry Dennis, this functionality doesn't exist yet. It is being worked on,
but it will take a while; there are lots of corner cases to handle.
James Dickens
uadmin.blogspot.com
On Sun, Jan 10, 2010 at 3:23 AM, Dennis Clarke wrote:
>
> Suppose the requirements for storage shrink ( it can hap
but have disk space on non-utilized disks to try, but haven't researched
the effect of adding and removing (if possible) L2ARC or ZIL log slices on a
pool. It would be great to enable a 5-50GB slice off a SATA drive to use as
a logging device for greater performance.
James Dickens
uadmin.bl
Not sure of your experience level, but did you try running devfsadm and
then checking in format for your new disks?
James Dickens
uadmin.blogspot.com
On Sun, Dec 27, 2009 at 3:59 AM, Muhammed Syyid wrote:
> Hi
> I just picked up one of these cards and had a few questions
> After i
0 4 0 7 2 0 0 2 8.00M zfs
James Dickens
On Thu, Dec 24, 2009 at 11:22 PM, Michael Herf wrote:
> FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
> running visibly faster (somewhere around 3-5x faster).
>
> echo zfs_prefetch_
existing
data to a new device using functions from the device-removal modifications. I
could be wrong, but it may not be as far off as people fear. Device removal was
mentioned in the "Next word for ZFS" video.
James Dickens
http://uadmin.blogspot.com
jamesd...@gmail.com
>
>
> --
> Erik Trimble
one more time
On Dec 28, 2007 9:22 PM, James Dickens <[EMAIL PROTECTED]> wrote:
> s
>
> On Dec 5, 2007 4:48 AM, Jürgen Keil <[EMAIL PROTECTED]> wrote:
>
> > > > I use the following on my snv_77 system with 2
> > > > internal SATA dr
is the root cause, and when this is fixed we can remove the workaround from
our systems.
If any engineers want help debugging this, I'm handy with DTrace, and the
machine is not in use, so I would be more than happy to investigate any ideas
or fixes.
James Dickens
uadmin.blo
Hi
The minimum number of disks for raidz is 3 (you can fool it, but it won't
protect your data), and the minimum for raidz2 is 4.
James Dickens
uadmin.blogspot.com
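Those minimums, spelled out (pool names and device names like `c1t0d0` are illustrative assumptions):

```shell
# raidz needs at least three disks (two data plus one parity's worth):
zpool create tank  raidz  c1t0d0 c1t1d0 c1t2d0

# raidz2 needs at least four (two data plus two parity's worth):
zpool create tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
```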
On 6/19/07, Huitzi <[EMAIL PROTECTED]> wrote:
Hi,
I'm planning to deploy a small file server based on ZFS, but I want
disk, they just adjust the file size accordingly.
James Dickens
uadmin.blogspot.com
Thanks.
-mg
This message posted from opensolaris.org
On 3/12/07, Erast Benson <[EMAIL PROTECTED]> wrote:
On Mon, 2007-03-12 at 20:53 -0600, James Dickens wrote:
>
>
> On 3/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> What issues, if any, are likely to surface with using Solaris
> inside vmware
there likely to be any issues with disk drive IO
performance?
I'm getting 11MB/s on bonnie++; the disks are backed by SATA drives on an
Ultra 20 (2.6GHz) with 512MB allocated.
Not exactly a speed demon; it would get about 130MB/s on the raw hardware.
James Dickens
uadmin.blogspot.com
0 0 0
c2t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t0d0 UNAVAIL 0 0 0 cannot open
errors: No known data errors
James Dickens
uadmin.blogspot.com
I'm sure every 15 minutes is sufficient; if the worker doesn't have a slight
penalty, he won't ever learn to be careful.
James Dickens
uadmin.blogspot.com
I once cobbled up a poor man's version of this sort of thing, aliasing
rm to a scripted mv, and pushing everything into a
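A minimal version of that rm-to-mv trick might look like this (the trash location and timestamp naming scheme are my own assumptions, not the original script):

```shell
# "Soft rm": move files into a trash directory instead of deleting them.
saferm() {
    trash="${TRASH:-$HOME/.trash}"
    mkdir -p "$trash" || return 1
    for f in "$@"; do
        # Timestamp suffix avoids collisions between same-named files.
        mv "$f" "$trash/$(basename "$f").$(date +%s)" || return 1
    done
}
```

Old entries can then be purged periodically, e.g. from cron with `find "$HOME/.trash" -mtime +7 -exec rm -f {} +`.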
On 1/13/07, roland <[EMAIL PROTECTED]> wrote:
thanks for your infos!
> > can zfs protect my data from such single-bit-errors with a single drive ?
> >
>nope.. but it can tell you that it has occurred.
can it also tell (or can i use a tool to determine), which data/file is
affected by this erro
s with a single drive ?
nope.. but it can tell you that it has occurred.
James Dickens
uadmin.blogspot.com
regards
roland
x pieces to make a
120GB pool. But for best performance it's best to allocate full drives
so ZFS can activate write caching on the drives.
James Dickens
uadmin.blogspot.com
>
>
Why not take all but 40GB from the 80 for the OS/Boot, then take the
remaining 40GB, and the 40GB each from D
possible; even if you have to buy new cases and external disk boxes you
should see savings, simply because it's cheaper to power one box than five. You
will also need to budget differently for disks, adding 3-5 disks at a time
instead of a single disk, b
for a while. Of course, someone would still need to write a
file-rescue utility to benefit from this, but it could be a tunable option,
either per pool or per filesystem.
James Dickens
uadmin.blogspot.com
References:
[1]
http://www.sun.com/software/solaris/trustedsolaris/ts_tech_faq/faqs/pur
default
pool  aclmode     groupmask  default
pool  aclinherit  secure     default
Okay, I guess the question is: why is the zpool iostat output
different from the zfs get all info?
James Dickens
uadmin.bl
to be I/O limited in the end, so 4 cores may be
enough to keep Oracle happy when paired with up to 2GB/s of disk I/O.
James Dickens
uadmin.blogspot.com
I'd like to target a sequential read performance of 500++MB/sec while reading
from the db on multiple tablespaces. We're experiencing
On 11/23/06, James Dickens <[EMAIL PROTECTED]> wrote:
On 11/23/06, Dennis Clarke <[EMAIL PROTECTED]> wrote:
>
>
> this is off list on purpose ?
>
> > run zpool import, it will search all attached storage and give you a
> > list of available pools. th
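To spell that suggestion out (the pool name `tank` is just an example):

```shell
# With no arguments, scan all attached devices for importable pools:
zpool import

# Import a specific pool by name (or by numeric id if names clash):
zpool import tank

# -f forces import of a pool that was not cleanly exported elsewhere:
zpool import -f tank
```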
enfs to how you want the filesystem(s) shared by
default
zfs set sharenfs=rw sump
is a good setting if you are on a secure network
James Dickens
uadmin.blogspot.com
sump/install_image  mountpoint  /sump/install_image  default
Thanks
-Sanjay
On 11/6/06, Yuen L. Lee <[EMAIL PROTECTED]> wrote:
> I'm curious whether there is a version of Linux 2.6 ZFS available? Many thanks.
Sorry, there is no ZFS in Linux, and given the current stance of Linus Torvalds and the current kernel team there never will be, because Linux is GPLv2 and it is incompatible
Not sure if this is a bug, or desired behavior, but it doesn't seem
right to me, and a possible admin headache.
bash-3.00# zfs create pool/test2
bash-3.00# zfs create pool/test2/blah
on another box, in this case it was a linux box. mount the first filesystem.
[EMAIL PROTECTED] systemtap]# mo
On 9/14/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
James Dickens wrote:
> On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
>> On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote:
>> >
>> > this would not be the first time that Solari
rom the old endian type.
James Dickens
uadmin.blogspot.com
Normally, I'd run into problems with Fdisk vs EFI vs VTOC
labeling/partitioning, but I was hoping that ZFS would magically make my
life simpler here...
:-)
--
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote:
>
> this would not be the first time that Solaris overrode an administrative
> command, because it's just not safe or sane to do so. For example:
>
> rm -rf /
As
't address any time soon. The latter can be addressed by
presenting more useful information when 'zpool import' is run without
the '-f' flag.
- Eric
On Wed, Sep 13, 2006 at 12:14:06PM -0500, James Dickens wrote:
> I filed this RFE earlier, since there is no way for non su
I filed this RFE earlier; since there is no way for non-Sun personnel
to see this RFE for a while, I am posting it here and asking for
feedback from the community.
[Fwd: CR 6470231 Created P5 opensolaris/triage-queue Add an in-use
check that is enforced even if import -f is used.] Inbox
Assign a
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
James Dickens wrote:
> On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>> B. DESCRIPTION
>>
>> A new property will be added, 'copies', which specifies how many copies
>> of the given f
ld
disk space needs require it?
If I start out with 2 copies and later change it to 1 copy, do the
files created before keep their 2 copies?
what happens if root needs to store a copy of an important file and
there is no space but there is space if extra copies are reclaimed?
Will this be configurabl
ace or come back here and ask
for more help, but at this time there is nothing to worry about.
James Dickens
uadmin.blogspot.com
c10t600A0B800011730E66C544C5EBB8d0 ONLINE 0 0 0
c10t600A0B800011730E66CA44C5EBEAd0 ONLINE 0 0 0
h 3x 18GB 10k rpm drives using
2x 40MB/s SCSI channels; the OS is on an 80GB IDE drive. It has problems
interactively, because as soon as you push ZFS hard it hogs all the RAM,
and it may take 5 or 10 seconds to get a response in xterms while the
machine clears out RAM and loads its applications/data back in.
running in degraded mode, then copy data
over, and then add the 3rd disk to the pool. Perhaps that would be a
good RFE: "create a raidz with N-1 disks so it runs in degraded mode"
to cope with situations like these.
James Dickens
uadmin.blogspot.com
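One workaround people use today is to stand a sparse file in for the missing disk, then offline it. This is only a sketch; the device names, the 500g size, and the file path are assumptions:

```shell
# Create a sparse file the size of a real disk (uses almost no space):
mkfile -n 500g /var/tmp/fakedisk

# Build the raidz with two real disks plus the sparse file:
zpool create tank raidz c1t0d0 c1t1d0 /var/tmp/fakedisk

# Immediately offline the fake member so nothing is ever written to it;
# the pool now runs degraded but with full capacity:
zpool offline tank /var/tmp/fakedisk

# After copying the data over, replace the fake with the freed-up disk:
zpool replace tank /var/tmp/fakedisk c1t2d0
```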
1. I have the pool as it stands, but
's the device as a readable and writable block device
or a slice on the device, be it SCSI, IDE, PATA, a USB flash drive, or a
lofi-mounted file, and it is larger than 128MB, it can
be part of a ZFS pool.
James Dickens
uadmin.blogspot.co
On 8/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Jason,
Tuesday, August 29, 2006, 9:35:13 PM, you wrote:
JAH> On Aug 29, 2006, at 12:17 PM, James Dickens wrote:
>> ZFS + rsync, backup on steroids.
>>
>> I was thinking today about backing up files
On 8/29/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 29, 2006 2:17:06 PM -0500 James Dickens <[EMAIL PROTECTED]> wrote:
> ZFS + rsync, backup on steroids.
Seems to me 'zfs send | zfs recv' would be both faster and more efficient.
only if you assume the sour
ZFS + rsync, backup on steroids.
I was thinking today about backing up filesystems, and came up with an
awesome idea. Use the power of rsync and ZFS together.
Start with one or two large SATA/PATA drives; if you use two and
don't need the space you can mirror them, otherwise just use them as in RAID0,
en
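The core of the idea can be sketched in two commands (hostnames, paths, and the dataset name `backup/data` are assumptions for illustration):

```shell
# Pull the live data onto the ZFS backup box, then freeze it.
# rsync -a --delete keeps an exact mirror of the source tree:
rsync -a --delete remotehost:/export/data/ /backup/data/

# A date-stamped snapshot makes today's state permanent and cheap
# (only blocks changed by later rsyncs consume extra space):
zfs snapshot "backup/data@$(date +%Y-%m-%d)"
```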
community as a whole, please let us know and we'll add it to the page.
http://www.opensolaris.org/os/community/zfs/links/
you are welcome to use any or all of the links included in this blog entry
http://uadmin.blogspot.com/2006/06/interested-in-zfs.html
James Dickens
uadmin.bl
c
00018898 main (4, ffbff69c, 20860, 20800, ffbff7aa, 20400) + 148
00012858 _start (0, 0, 0, 0, 0, 0) + 108
bash-3.00#
Let me know if anyone has a core file fetish ;-) and wants to see it.
James Dickens
uadmin.blogspot.com
___
zfs-discuss mailin
ONLINE 0 0 0
James Dickens
uadmin.blogspot.com
errors: No known data errors
The df -k output of the newly created pool as raidz:
# df -k
Filesystem  kbytes     used  avail      capacity  Mounted on
pool        210567168  49    210567033  1%        /pool
I can
On 8/20/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:
On 8/20/06, James Dickens <[EMAIL PROTECTED]> wrote:
> On 8/20/06, trevor pretty <[EMAIL PROTECTED]> wrote:
> > Team
> >
> > During a ZFS presentation I had a question from Vernon which I could not
> &
en you turn on ZFS compression?
I'm not an expert, but most if not all compression is integer-based, and
I don't think floating point is supported inside the kernel anyway, so
it has to be integer-based.
James Dickens
uadmin.blogspot.com
--
==
er 3.2) will
automate this, but for now you can roll your own along these lines.
Hi
Okay, just wondering, but can you define "won't work"? Will ZFS spot
someone else writing to the disks and refuse to do any work? Will it
spill its guts all over the dump device? Or just fight eac
egation prior to committing the clone itself. If the aggregation is
too crowded, they will fail with the error message "not enough space".
If there is enough for snapshots, but not enough to guarantee a full
clone, you'll get a message saying "space not gu
WAFL.pdf or
http://unixconsult.org/wafl/ZFS%20vs%20WAFL.html
James Dickens
uadmin.blogspot.com
--
Darren Dunham [EMAIL PROTECTED]
Senior Technical Consultant  TAOS  http://www.taos.com/
Got some Dr Pepper? San
one more time with the attachment
On 7/27/06, James Dickens <[EMAIL PROTECTED]> wrote:
On 7/27/06, Praveen Mogili <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I m sure some of you may have heard this already
> ' ZFS is a reverse engineered WAFL'
> from NetAp
limitations than WAFL.
James Dickens
uadmin.blogspot.com
Thanks for any pointers.
/Praveen
l free to contact the system builders and sell them on your
idea of how they should procure and install drives from multiple
vendors, preferably from different lots, into the X4500; they will love
the extra hassle ;-p
James Dickens
uadmin.blogspot.com
permission "allow", for example that would allow
this behavior. When a normal user delegates to another user they would
be allowed to only hand out permissions they currently have.
For example:
# zfs allow joe create,destroy,allow
having "allow" as both an attribute and a command
asically how do I add a dataset to a zone?
this is covered in the zfs guide at
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qsm?a=view
James Dickens
Thanks
Roshan
please cc me [EMAIL PROTECTED]
f you get any performance
change.
I wonder if more CPUs/cores would help this test; theoretically
the single CPU is fast enough, but when you have checksumming, creating
parity, reading and writing, and the benchmark itself, you may get some
strange interactions.
You may want to try again in a few
c
setuid
readonly
zoned
snapdir
aclmode
aclinherit
Hi
just one addition: "all" or "full" attributes, for the case where you want
to give full permissions to the user or group
zfs create p1/john
zfs allow p1/john john full
s
the primary goal monitoring zpool is just a stop gap measure till when
SMF has the needed functionality.
James Dickens
uadmin.blogspot.com
et mountpoint=/export tank
should work
James Dickens
uadmin.blogspot.com
As part of the script I destroy the pool as well, to do that currently I
just get the mountpoints and then zfs umount starting at the parent which
does the trick as you explained.
So at this point I have just sharenfs
orks as expected, or if done on the local system, if the file
was not part of the clone.
# uname -av
SunOS enterprise 5.11 snv_39 sun4u sparc SUNW,Ultra-2
#
James Dickens
uadmin.blogspot.com
assign a quota or
reservation for each user. You can see how much space they are using
with df /export/home/username or zfs list; no more waiting
for du -s to complete. You can make a snapshot of each user's
data/filesystem.
I'm sure there are more, but another time
Ja
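The per-user quota/reservation part of that might look like this (the dataset name `tank/home/joe` is an assumed example):

```shell
# Hard cap on joe's space, and a guaranteed minimum:
zfs set quota=10G tank/home/joe
zfs set reservation=1G tank/home/joe

# Instant space accounting, no du -s needed:
zfs list -o name,used,avail,quota,reservation tank/home/joe
```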
On 7/3/06, James Dickens <[EMAIL PROTECTED]> wrote:
On 7/3/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> >>
> >> Currently, I'm using executable maps to create zfs
> >> home directories.
> >>
> >> Casper
> >
>
0
fi
/usr/sbin/zfs create "export/home/$1" || exit 1
another way to do this, quicker if you are executing it often,
is to create a user directory with all the skel files in place, snapshot
it, then clone that snapshot and chown the files.
zfs snapshot /export/home/[EMAIL PROTECTED]
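Put together, the clone-based variant might look like this sketch (the `skel` dataset name and `@ready` snapshot name are assumptions; `$1` is the new username as in the original script):

```shell
# One-time setup: a skeleton home directory, snapshotted.
zfs create export/home/skel
cp -r /etc/skel/. /export/home/skel/
zfs snapshot export/home/skel@ready

# Per new user: cloning is near-instant regardless of skel size.
zfs clone export/home/skel@ready "export/home/$1" || exit 1
chown -R "$1" "/export/home/$1"
```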
oops forgot to attach the file
On 7/2/06, James Dickens <[EMAIL PROTECTED]> wrote:
Hi
I was boning up on my shell scripting skills and wrote this little
utility to make taking snapshots easier in ZFS.
# ./snaphere -h
usage: snaphere [snapshot name]
create a snapshot of the zfs file
, and email me the changes.
Enjoy
James Dickens
uadmin.blogspot.com
be able to
create and mount a ZFS file system? Is there something I'm missing
here?
ZFS File system Management – Provides the ability to create, destroy,
and modify ZFS file systems
James Dickens
uadmin.blogspot.com
Okay, this was meant to go to the list... I thought it had. It's a
workaround to the problem that people can use until Sun engineers fix it.
James
-- Forwarded message --
From: James Dickens <[EMAIL PROTECTED]>
Date: May 27, 2006 4:57 PM
Subject: Re: [zfs-discuss] Re: Re
ested
with LUNs that are larger than 1TB, so it shouldn't be a problem.
James Dickens
uadmin.blogspot.com
If I should ask some place else please advise
//Lars
This message posted from opensolaris.org
On 5/27/06, Dennis Clarke <[EMAIL PROTECTED]> wrote:
>> >
>> > I had a one disk pool, that I want to use as a boot
>> > disk, but I can't
>> > seem to get rid of the efi label, when i use format
>> > -e it and try to
>> > relabel, format bitches that it can't set disk
>> > geometry or write th
On 5/27/06, Wes Williams <[EMAIL PROTECTED]> wrote:
> Hi
>
> I had a one disk pool, that I want to use as a boot
> disk, but I can't
> seem to get rid of the efi label, when i use format
> -e it and try to
> relabel, format bitches that it can't set disk
> geometry or write the
> new label.
>
>
On 5/27/06, James Dickens <[EMAIL PROTECTED]> wrote:
Hi
I had a one disk pool, that I want to use as a boot disk, but I can't
seem to get rid of the efi label, when i use format -e it and try to
relabel, format bitches that it can't set disk geometry or write the
new label.
Hi
I had a one disk pool, that I want to use as a boot disk, but I can't
seem to get rid of the efi label, when i use format -e it and try to
relabel, format bitches that it can't set disk geometry or write the
new label.
any one have any clues how to fix this?
james
0 0 0
c1t9d0s0 ONLINE 0 0 0
c1t10d0s0 ONLINE 0 0 0
errors: No known data errors
James Dickens
uadmin.blogspot.com
Thanks.
This message posted from opensolaris.org
le, shouldn't be much work. Another possible
enhancement would be appending any field from stat(2) to the file's
name after it's deleted. This would be set per filesystem: mode, uid,
username (the code should do the conversion), gid, size, mtime; just
parse a format string like
On 5/23/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello James,
Tuesday, May 23, 2006, 6:43:11 PM, you wrote:
JD> Hi
JD> I think ZFS should add the concept of ownership to a ZFS filesystem,
JD> so if i create a filesystem for joe, he should be able to use his
JD> space how ever he see's f
Hi
I think ZFS should add the concept of ownership to a ZFS filesystem:
if I create a filesystem for joe, he should be able to use his
space however he sees fit. If he wants to turn on compression or
take 5000 snapshots, it's his filesystem, let him. If he wants to
destroy snapshots, he create
Java(TM) Web Console Version 3.0.1 ...
Cannot determine if console started successfully
#
connecting to https://localhost:6789/ gives me connection refused
this is
# uname -av
SunOS opteron 5.11 snv_38 i86pc i386 i86pc
#
u20 with 2GB of ram...
James Dickens
uadmin.blogspot.
then the user can have almost instant access to old
copies of files, and it is a lot quicker than even the fastest tape
library. Just make daily snapshots and the need to restore a single
file from tape is almost completely eliminated; you can still use
NetBackup for disasters, but to get access t
daily snapshots older than a week.
find .zfs/snapshot/daily -ctime +7d
and to create snapshots and place in the special directories
zfs snapshot data/[EMAIL PROTECTED]/05-05-2006
all snapshot directories would start under .zfs/snapshot
James Dickens
uadmin.blogspo
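The proposed daily/ hierarchy doesn't exist in today's ZFS (snapshot names cannot contain '/'), but a similar rotation can be approximated with a name prefix. A sketch, with the dataset `data/fs` assumed:

```shell
# Date-stamped snapshot; a 'daily-' prefix emulates the proposed
# daily/ snapshot directory.
zfs snapshot "data/fs@daily-$(date +%Y-%m-%d)"

# List daily snapshot directories older than 7 days under .zfs and
# print the destroy commands (drop 'echo' to actually run them):
find /data/fs/.zfs/snapshot -name 'daily-*' -ctime +7 |
while read -r dir; do
    echo zfs destroy "data/fs@$(basename "$dir")"
done
```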
For instance:
zfs snapshot -e 3d tank/[EMAIL PROTECTED]
would creat
commands
http://uadmin.blogspot.com/2006/05/moving-zfs-pools.html
James Dickens
uadmin.blogspot.com