1. Due to the COW nature of ZFS, files on ZFS are more prone to fragmentation
than on a traditional file system. Is this statement correct?
2. If so, the common understanding is that fragmentation causes performance
degradation. Will ZFS be affected, and to what extent is ZFS performance affected by fragmentation?
I'm realizing I never sent the answer to this story, which is that the
server needed more RAM. We knew the ARC cache was implicated but had
missed just how much RAM zfs needs for the ARC cache, and this server
had a LOT of file systems. THOUSANDS. Partially because a lot of
this information wasn
If he adds the spare and then manually forces a replace, it will take
no more time than any other way. I do this quite frequently, and without
needing the scrub, which does take quite a lot of time.
cindy.swearin...@sun.com wrote:
Hi Andreas,
Good job for using a mirrored configuration. :-)
I believe there are a couple of ways that work. The commands I've
always used are to attach the new disk as a spare (if not already) and
then replace the failed disk with the spare. I don't know if there are
advantages or disadvantages, but I also have never had a problem doing it
this way.
A
erik.ableson wrote:
You're running into the same problem I had with 2009.06 as they have
"corrected" a bug where the iSCSI target prior to
2009.06 didn't completely honor SCSI sync commands issued by the initiator.
I think I've hit the same thing. I'm using an iSCSI volume as the target
for T
On Fri, 7 Aug 2009, Henrik Johansson wrote:
"We're already looking forward to the next release due in 2010. Look out for
great new features like an interactive installation for SPARC, the ability to
install packages directly from the repository during the install, offline IPS
support, a new ve
On 6 aug 2009, at 23.52, Bob Friesenhahn
wrote:
I still have not seen any formal announcement from Sun regarding
deduplication. Everything has been based on remarks from code
developers.
To be fair, the official "what's new" document for 2009.06 states that
dedup will be part of the
On Thu, Aug 6, 2009 at 16:59, Ross wrote:
> But why do you have to attach to a pool? Surely you're just attaching to the
> root filesystem anyway? And as Richard says, since filesystems can be shrunk
> easily and it's just as easy to detach a filesystem from one machine and
> attach to it
On Thu, 6 Aug 2009, Nigel Smith wrote:
I guess it depends on the rate of progress of ZFS compared to say btrfs.
Btrfs is still an infant whereas zfs is now into adolescence.
I would say that maybe Sun should have held back on
announcing the work on deduplication, as it just seems to
I stil
Dang. This is a bug we talked about recently that is fixed in Nevada and
an upcoming Solaris 10 release.
Okay, so you can't offline the faulted disk, but you were able to
replace it and detach the spare.
Cool beans...
Cindy
On 08/06/09 15:35, Andreas Höschler wrote:
Hi Cindy,
I think you c
Hi Cindy,
I think you can still offline the faulted disk, c1t6d0.
OK, here it gets tricky. I have
  NAME        STATE     READ WRITE CKSUM
  tank        DEGRADED     0     0     0
    mirror    ONLINE       0     0     0
      c1t2d0  ONLINE       0     0
Hi Kyle,
Except that in the case of spares, you can't replace them.
You'll see a message like the one below.
Cindy
# zpool create pool mirror c1t0d0 c1t1d0 spare c1t5d0
# zpool status
pool: pool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
Andreas,
I think you can still offline the faulted disk, c1t6d0.
The difference between these two replacements:
zpool replace tank c1t6d0 c1t15d0
zpool replace tank c1t6d0
Is that in the second case, you are telling ZFS that c1t6d0
has been physically replaced in the same location. This would
Hi all,
zpool add tank spare c1t15d0
? After doing that c1t6d0 is offline and ready to be physically
replaced?
Yes, that is correct.
Then you could physically replace c1t6d0 and add it back to the pool
as
a spare, like this:
# zpool add tank spare c1t6d0
For a production system, the s
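To pull the steps in this thread together (same pool and disk names as above; just a sketch of the sequence, not something re-run here to verify):
Add the new disk as a hot spare:
  # zpool add tank spare c1t15d0
Replace the faulted disk with that spare:
  # zpool replace tank c1t6d0 c1t15d0
After c1t6d0 has been physically swapped out, return the slot to the pool as a spare:
  # zpool add tank spare c1t6d0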
Hello,
I am having a problem importing a pool in 2009.06 that was created on zfs-fuse
(ubuntu 8.10).
Basically, I was having issues with a controller, and took a disk offline.
After restarting with a new controller, I was unable to import the pool (in
ubuntu). Someone had suggested that I try
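For reference, the first thing usually worth trying on the 2009.06 side is listing what ZFS can see and then importing by directory (the pool name "mypool" below is made up; -f is only needed if the pool still looks attached to the other host):
  # zpool import
  # zpool import -d /dev/dsk -f mypool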
On 08/06/09 12:19, Robert Lawhead wrote:
I'm puzzled by the size reported for incremental zfs send|zfs receive. I'd expect the
stream to be roughly the same size as the "used" blocks reported by zfs list.
Can anyone explain why the stream size reported is so much larger than the used data in
Andreas,
More comments below.
Cindy
On 08/06/09 14:18, Andreas Höschler wrote:
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Thomas Burgess wrote:
that's strange... it works for me. At least the ones I've used have
worked with OpenSolaris, FreeBSD and Linux.
It just shows up as a normal SATA drive. Did you try more than one
type of CompactFlash card?
With the IDE unit, it was ALWAYS due to the card... most of them w
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with
that's strange... it works for me. At least the ones I've used have worked
with OpenSolaris, FreeBSD and Linux.
It just shows up as a normal SATA drive. Did you try more than one type of
CompactFlash card?
With the IDE unit, it was ALWAYS due to the card... most of them would work
SOMEWHAT but no
Hi Andreas,
Good job for using a mirrored configuration. :-)
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with a spare dis
Hi Darren,
Darren J Moffat wrote:
> That is no different to the vast majority of Open Source projects
> either. Open Source and Open Development usually don't give you access
> to individuals work in progress.
Yes, that's true. But there are more 'open' models for running
an open source project.
F
Bob Friesenhahn wrote:
> Sun has placed themselves in the interesting predicament that being
> open about progress on certain high-profile "enterprise" features
> (such as shrink and de-duplication) could cause them to lose sales to
> a competitor. Perhaps this is a reason why Sun is not nearly
Excellent advice, thanks Ian.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-06, at 15:16, Ian Collins wrote:
Adam Sherman wrote:
On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root regularly
If bo
Adam Sherman wrote:
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of todays computers DO. Personally i like
compact flash because it is
Dear managers,
one of our servers (X4240) shows a faulty disk:
-bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONL
Adam Sherman wrote:
On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8 drives
in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be trivial since you can back it
On 08/06/09 14:28, Matt Ingenthron wrote:
If ZFS is not being used significantly, then ARC
should not grow. ARC grows
based on the usage (i.e. amount of ZFS files/data
accessed). Hence, if you are
sure that the ZFS usage is low, things should be
fine.
I understand that it won't grow, but I want
Greg Mason wrote:
What is the downtime for doing a send/receive? What is the downtime
for zpool export, reconfigure LUN, zpool import?
We have a similar situation. Our home directory storage is based on
many X4540s. Currently, we use rsync to migrate volumes between
systems, but our proces
On Aug 6, 2009, at 7:59 AM, Ross wrote:
But why do you have to attach to a pool? Surely you're just
attaching to the root filesystem anyway? And as Richard says, since
filesystems can be shrunk easily and it's just as easy to detach a
filesystem from one machine and attach to it from ano
> If ZFS is not being used significantly, then ARC
> should not grow. ARC grows
> based on the usage (i.e. amount of ZFS files/data
> accessed). Hence, if you are
> sure that the ZFS usage is low, things should be
> fine.
I understand that it won't grow, but I want it to be smaller than the default
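For what it's worth, the usual way to cap the ARC on this vintage of Solaris/OpenSolaris is the zfs_arc_max tunable in /etc/system, followed by a reboot; the 1 GB value below is only an example:
  * limit the ZFS ARC to 1 GB (value is in bytes)
  set zfs:zfs_arc_max = 0x40000000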
I'm puzzled by the size reported for incremental zfs send|zfs receive. I'd
expect the stream to be roughly the same size as the "used" blocks reported by
zfs list. Can anyone explain why the stream size reported is so much larger
than the used data in the source snapshots? Thanks.
% zfs list
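One way to see what an incremental stream actually carries is to count its bytes directly (the snapshot names below are invented):
  % zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c
Metadata and any blocks rewritten by COW between the two snapshots are included, which can make the stream larger than the snapshot's "used" figure suggests.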
Is there a way to change the device name used to create a zpool?
My customer created their pool on EMC PowerPath devices. An SA removed
PowerPath by mistake, then reinstalled it. The names in the zpool are
now the physical device names of one path. They have data on there
already, so they woul
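If a short outage is acceptable, the usual approach is to export the pool and re-import it while pointing ZFS at the directory holding the device nodes you want it to use (the directory below is only a guess; adjust it to wherever the PowerPath pseudo-devices live):
  # zpool export pool
  # zpool import -d /dev/dsk pool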
Thanks for your reply. Sorry I overlooked that bug; my bugster search skills
seem lacking.
On Aug 6, 2009, at 11:09 AM, Scott Meilicke
wrote:
You can use a separate SSD ZIL.
Yes, but to see if a separate ZIL will make a difference, the OP should
try his iSCSI workload first with the ZIL, then temporarily disable the ZIL and
re-try his workload.
Nothing worse than buying expensive har
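For reference, adding a dedicated log device later is a one-liner (the device name below is made up):
  # zpool add tank log c4t0d0
and the quick disable-the-ZIL test on this era of OpenSolaris is typically done with the zil_disable tunable in /etc/system (set zfs:zil_disable = 1), which is strictly for benchmarking, not for production use.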
I've had SOME problems with the IDE ones in the past. It depends on the card
you get with IDE... the SATA ones tend to work regardless... I'm not saying
not to use IDE, I'm just saying you might have to research your CF cards if
you do. Not all IDE->CF adapters will boot.
On Thu, Aug 6, 2009 at 11:59 AM,
Adam Sherman wrote:
On 6-Aug-09, at 11:50 , Kyle McDonald wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to
allow for usb booting. Most of todays computers DO. Personally i
like compact flash because it is
If it's this one,
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186051, it works
perfectly. I've used them on several machines. They just show up as SATA
drives. That unit also has a very tiny red LED that lights up... it's QUITE
bright... but you likely won't see it if it's inside the
I have a ZFS filesystem (e.g. tank/zone1/data) which is delegated to a zone as a dataset.
As root in the global zone, I can "zfs snapshot" and "zfs send" this ZFS:
zfs snapshot tank/zone1/data
and
zfs send tank/zone1/data
without any problem. When I "zfs allow" another user (e.g. amanda) with:
zfs allow
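In case it helps, a delegation along the following lines has generally been enough for snapshot/send (the permission list is a guess at the minimum; the mount permission is often needed as well):
  # zfs allow amanda snapshot,send,mount tank/zone1/data
  # zfs allow tank/zone1/data
The second command just prints the delegations now in effect, so you can confirm what the user actually has.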
I've seen these before; if I remember right, it has a jumper on it to set it as
a sort of onboard RAID0 or RAID1... I'm not sure if it has a JBOD mode
though... personally I prefer the small single CF-to-SATA adapters. You'd be
surprised how thin they are; you can attach them with screws or even hot g
On 6-Aug-09, at 11:50 , Kyle McDonald wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to
allow for usb booting. Most of todays computers DO. Personally i
like compact flash because it is fairly easy to use
Adam Sherman wrote:
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of todays computers DO. Personally i like
compact flash because it is
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of todays computers DO. Personally i like
compact flash because it is fairly easy to us
i've seen some people use usb sticks, and in practice it works on SOME
machines. The biggest difference is that the bios has to allow for usb
booting. Most of todays computers DO. Personally i like compact flash
because it is fairly easy to use as a cheap alternative to a hard drive. I
mirror t
On Thu, 6 Aug 2009, Chookiex wrote:
But, you know, ZIO is pipelined, which means that the I/O request may be
sent, and when you unlink the file, the I/O stage is in progress.
So, would it be canceled in that case?
In POSIX filesystems, if a file is still open when it is unlinked,
then the file directory e
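A quick illustration of that behaviour from a shell (ksh or bash; the path is made up):
  $ exec 3< /tank/fs/scratch.dat     # keep a descriptor open on the file
  $ rm /tank/fs/scratch.dat          # the directory entry disappears
  $ wc -c <&3                        # the data is still readable via fd 3
  $ exec 3<&-                        # only on close can the blocks be freed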
You can use a separate SSD ZIL.
But why do you have to attach to a pool? Surely you're just attaching to the
root filesystem anyway? And as Richard says, since filesystems can be shrunk
easily and it's just as easy to detach a filesystem from one machine and attach
to it from another, why the emphasis on pools?
For once I'm
On Thu, 6 Aug 2009, Cyril Plisko wrote:
May I suggest using this forum (zfs-discuss) to periodically report
the progress? Chances are that most of the people waiting for this
feature are reading this list.
Sun has placed themselves in the interesting predicament that being
open about progress
What is the downtime for doing a send/receive? What is the downtime
for zpool export, reconfigure LUN, zpool import?
We have a similar situation. Our home directory storage is based on many
X4540s. Currently, we use rsync to migrate volumes between systems, but
our process could very easily
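The send/receive variant looks roughly like this (host and dataset names invented; an initial full pass while the data is live plus a final incremental during the cutover keeps the downtime to the last delta):
  # zfs snapshot tank/home/alice@mig1
  # zfs send tank/home/alice@mig1 | ssh newhost zfs receive -d tank
  ...then, during the brief outage...
  # zfs snapshot tank/home/alice@mig2
  # zfs send -i @mig1 tank/home/alice@mig2 | ssh newhost zfs receive -F -d tank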
On Aug 6, 2009, at 5:36 AM, Ian Collins wrote:
Brian Kolaci wrote:
They understand the technology very well. Yes, ZFS is very
flexible with many features, and most are not needed in an
enterprise environment where they have high-end SAN storage that is
shared between Sun, IBM, linux,
Thanks. :)
I have tested it on my system; it's great.
But, you know, ZIO is pipelined, which means that the I/O request may be sent, and
when you unlink the file, the I/O stage is in progress.
So, would it be canceled in that case?
Well, to be fair, there were some special cases.
I know we had 3 separate occasions with broken HDDs, when we were using
UFS. 2 of these appeared to hang, and the 3rd only hung once we replaced
the disk. This is most likely due to us using UFS on a zvol (for quotas).
We got an IDR patch, and e
Ross wrote:
But with export / import, are you really saying that you're going to physically
move 100GB of disks from one system to another?
zpool export/import would not move anything on disk. It just changes
which host the pool is attached to. This is exactly how cluster
failover works in
On a Solaris 10 box with a ZFS filesystem, I took a snapshot of the whole box (root
dir) and then made some changes in the /opt dir (30-40 MB). After this, when I
tried to roll back the snapshot, the box hung. Has anyone faced
similar issues? Does it depend on the size of the changes we make? Please
But with export / import, are you really saying that you're going to physically
move 100GB of disks from one system to another?
On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8 drives
in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be trivial since you can back it up into
your big
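For example, something along these lines should cover it (a sketch only; rpool is the CF root pool and tank the big raidz2 pool from this thread):
  # zfs create tank/rootbackup                                (one time)
  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup | zfs receive -u -d -F tank/rootbackup
The -u keeps the received root filesystems from being mounted over the live ones.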
Nigel Smith wrote:
Hi Matt
Thanks for this update, and the confirmation
to the outside world that this problem is being actively
worked on with significant resources.
But I would like to support Cyril's comment.
AFAIK, any updates you are making to bug 4852783 are not
available to the outside w
On Thu, Aug 6, 2009 at 12:45, Ian Collins wrote:
> Mattias Pantzare wrote:
>>> If they accept virtualisation, why can't they use individual filesystems (or
>>> zvol) rather than pools? What advantage do individual pools have over
>>> filesystems? I'd have thought the main disadvantage of
Mattias Pantzare wrote:
If they accept virtualisation, why can't they use individual filesystems (or
zvol) rather than pools? What advantage do individual pools have over
filesystems? I'd have thought the main disadvantage of pools is storage
flexibility requires pool shrink, something ZFS prov
Hi Matt
Thanks for this update, and the confirmation
to the outside world that this problem is being actively
worked on with significant resources.
But I would like to support Cyril's comment.
AFAIK, any updates you are making to bug 4852783 are not
available to the outside world via the normal b
> If they accept virtualisation, why can't they use individual filesystems (or
> zvol) rather than pools? What advantage do individual pools have over
> filesystems? I'd have thought the main disadvantage of pools is storage
> flexibility requires pool shrink, something ZFS provides at the filesy
Alan, I thought the "read_set" of the ACLs should be enough to list directories
and subdirectories, and read files of course... And this set does not include
the execute right.
Obviously I was wrong... :-)
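In other words, something like this adds an ACE with the read permissions plus execute (directory traversal); the user and path here are invented:
  # chmod -R A+user:fred:read_data/read_attributes/read_acl/execute:allow /tank/share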
Cheers,
Chris
Afshin, thanks for the response. You seem to be everywhere on the forum...
Respect... :-)
The ACLs on the files I tried are the same; I always do a "chmod -R" when
changing ACLs on the dataset/directory.
Do you have a recommendation for a network trace tool? I could do it on OpenSolaris
(file serve
Brian Kolaci wrote:
They understand the technology very well. Yes, ZFS is very flexible
with many features, and most are not needed in an enterprise
environment where they have high-end SAN storage that is shared
between Sun, IBM, linux, VMWare ESX and Windows. Local disk is only
for the O
Or use a UFS filesystem?
Whoah!
"We have yet to experience losing a
disk that didn't force a reboot"
Do you have any notes on how many times this has happened Jorgen, or what steps
you've taken each time?
I appreciate you're probably more concerned with getting an answer to your
question, but if ZFS needs a reboot to
On Wed, Aug 5, 2009 at 11:48 PM, Jorgen Lundman wrote:
>
> I suspect this is what it is all about:
>
> # devfsadm -v
> devfsadm[16283]: verbose: no devfs node or mismatched dev_t for
> /devices/p...@0,0/pci10de,3...@b/pci1000,1...@0/s...@5,0:a
> [snip]
>
> and indeed:
>
> brw-r- 1 root s
>
> It is unfortunately a very difficult problem, and will take some time to
> solve even with the application of all possible resources (including the
> majority of my time). We are updating CR 4852783 at least once a month with
> progress reports.
Matt,
should these progress reports be visible