I'm considering setting up a poor man's cluster. The hardware I'd
like to use for some critical services is especially attractive for
price/space/performance reasons; however, it only has a single power
supply. I'm using S10 U8 and can't migrate to OpenSolaris.
It's fine if a server dies (ie, po
On Tue, Dec 15, 2009 at 5:28 PM, Bill Sprouse wrote:
> Hi Everyone,
>
> I hope this is the right forum for this question. A customer is using a
> Thumper as an NFS file server to provide the mail store for multiple email
> servers (Dovecot). They find that when a zpool is freshly created and
> p
On Dec 15, 2009, at 5:31 PM, Bill Sprouse wrote:
This is most likely a naive question on my part. If recordsize is
set to 4k (or a multiple of 4k), will ZFS ever write a record that
is less than 4k or not a multiple of 4k?
Yes. The recordsize is the upper limit for a file record.
This i
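For illustration, a minimal sketch (not from the original mail; the dataset name
tank/mailstore is a placeholder): recordsize only caps the block size, so a file
smaller than 4k is still written in a single block sized to the file.
# zfs create -o recordsize=4k tank/mailstore
# zfs get recordsize tank/mailstore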
I have also had slow scrubbing on filesystems with lots of files, and I
agree that it does seem to degrade badly. For me, it seemed to go from 24
hours to 72 hours in a matter of a few weeks.
I did these things on a pool in-place, which helped a lot (no rebuilding):
1. reduced number of snapshots
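As a rough sketch of that pruning step (dataset and snapshot names here are
placeholders, not from the original mail): list snapshots sorted by space used,
then destroy the ones that are no longer needed.
# zfs list -t snapshot -o name,used -s used -r tank/mail
# zfs destroy tank/mail@old-snapshot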
On Tue, 15 Dec 2009, Bill Sprouse wrote:
Hi Everyone,
I hope this is the right forum for this question. A customer is using a
Thumper as an NFS file server to provide the mail store for multiple email
servers (Dovecot). They find that when a zpool is freshly created and
It seems that Dove
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
> After
> running for a while (couple of months) the zpool seems to get
> "fragmented", backups take 72 hours and a scrub takes about 180
> hours.
Are there periodic snapshots being created in this pool?
Can they run with atime turne
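For reference, a sketch of turning atime off (the dataset name is a placeholder);
with atime=off, ZFS no longer updates access-time metadata on every read, which
can matter for an NFS-served mail store.
# zfs set atime=off tank/mailstore
# zfs get atime tank/mailstore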
This is most likely a naive question on my part. If recordsize is set
to 4k (or a multiple of 4k), will ZFS ever write a record that is less
than 4k or not a multiple of 4k? This includes metadata. Does
compression have any effect on this?
thanks for the help,
bill
Hi Everyone,
I hope this is the right forum for this question. A customer is using
a Thumper as an NFS file server to provide the mail store for multiple
email servers (Dovecot). They find that when a zpool is freshly
created and populated with mail boxes, even to the extent of 80-90%
c
Thanks for letting me know. I plan on attempting in a couple of weeks.
None of these look like the issue either. With build 128, I did have to edit
the code to avoid the month-rollover error and add the missing dependency
dbus-python26.
I think I have a new install that went to build 129 without having auto snapshots
enabled yet. When I can get to that machine later, I wil
Okay, not much help.
I found a couple more problems reported with workarounds, here:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6904417
time-slider unable to start after upgrade to snv_128
http://defect.opensolaris.org/bz/show_bug.cgi?id=13301
time-slider ignores all filesystems
I do not see this problem in build 129:
# mkfile 10g /export1/file123
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
export1         10.3G  33.0G  10.3G  /export1
export1/cindys    21K  33.0G    21K  /export1/cindys
# rm /export1/file123
.
.
.
# zfs list
NAME
Hi Markus,
CR 6909931 is filed to cover this problem.
We'll let you know when a fix is putback.
Thanks,
Cindy
On 12/15/09 10:24, Cindy Swearingen wrote:
Hi Markus,
We're checking your panic below against a similar problem reported very
recently. The bug is filed and if it's the same problem,
As I noted above after editing the initial post, it's the same locally too.
>>I found that the "ls -l" on the zpool also reports 51,193,782,290 bytes
> "n" == Nathan writes:
n> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
This sounds silly. Does it actually work for you?
It seems like comparing 7 seconds to the normal 30 seconds would be
useless. Instead you want to compare (7 seconds * n levels * of
cargo-cult retry
Hello,
On Dec 15, 2009, at 8:02 AM, Giridhar K R wrote:
> Hi,
> Created a zpool with 64k recordsize and enabled dedupe on it.
> zpool create -O recordsize=64k TestPool device1
> zfs set dedup=on TestPool
>
> I copied files onto this pool over nfs from a windows client.
>
> Here is the output of
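The quoted output is cut off above; for reference, one way to check dedup
accounting on such a pool is the dedupratio property (a sketch only; the DEDUP
column of zpool list may depend on the build):
# zpool get dedupratio TestPool
# zpool list TestPool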
I take your point, Mike. Yes, this seems to be an inconsistency in accounting.
I have simply become accustomed to this (esp. when dealing with virtual disk
images), so I just don't think about it, but it *is* harder to balance accounts.
For instance, if my guest cleans up its vdisk by writing
On Tuesday 15 December 2009 20:47, Allen wrote:
> I would like to load OpenSolaris on my file server. I have previously
> loaded FBSD using zfs as the storage file system. Will OpenSolaris be
> able to import the pool and mount the file system created on FBSD or will
> I have to recreate the t
I would like to load OpenSolaris on my file server. I have previously loaded
FBSD using zfs as the storage file system. Will OpenSolaris be able to import
the pool and mount the file system created on FBSD or will I have to recreate
the file system.
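For what it's worth, a sketch of the usual sequence (the pool name tank is a
placeholder): export cleanly on the FreeBSD side, then on OpenSolaris run zpool
import with no arguments to scan for importable pools, and import by name. This
only works if the importing build supports the pool's on-disk version.
# zpool export tank
# zpool import
# zpool import tank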
Hi Markus,
We're checking your panic below against a similar problem reported very
recently. The bug is filed and if it's the same problem, a fix should be
available soon.
We'll be in touch.
Cindy
On 12/15/09 08:07, Markus Kovero wrote:
Hi, I encountered a panic and spontaneous reboot after canc
On 12/15/09 09:26, Luca Morettoni wrote:
As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ
we can't boot from a pool with raidz, any plan to have this feature?
At this time, there is no scheduled availability for raidz boot. It's
on the list of possible enhan
As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ
we can't boot from a pool with raidz, any plan to have this feature?
--
Luca Morettoni | OpenSolaris SCA #OS0344
Web/BLOG: http://www.morettoni.net/ | http://twitter.com/morettoni
jugUmbria founder: https://jugU
Hi--
I haven't had a chance to reproduce this problem, but Niall's heads-up
message says that default schedules that include "frequent" still
work:
http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2009-November/000199.html
I included a snippet of his instructions below.
If this doesn'
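If it helps, the per-schedule SMF instances can be checked and enabled
individually; a sketch, assuming the standard zfs-auto-snapshot instance names:
# svcs -a | grep auto-snapshot
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent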
> We import the pool with the -R parameter; might that contribute to the
> problem? Perhaps a zfs mount -a bug in connection with the -R parameter?
This bug report seems to confirm this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6612218
Note that the /zz directory mentioned in the bugrep
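For context, a sketch of that kind of import (pool name and path are
placeholders): -R mounts everything relative to the alternate root and keeps the
pool out of the cachefile, so stray directories under the alternate root can get
in the way of zfs mount -a.
# zpool import -R /altroot tank
# zfs get mountpoint,mounted tank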
Great news. Thanks for letting us know. Cindy
On 12/15/09 06:48, Cesare wrote:
Hi all,
after upgrading PowerPath (from 5.2 to 5.2 SP 2) and retrying the commands
to create the zpool, they executed successfully:
--
r...@solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f
Hi, I encountered a panic and spontaneous reboot after canceling a zfs send from
another server. It took around 2-3 hours to remove the 2 TB of data the server
had sent, and then:
Dec 15 16:54:05 foo ^Mpanic[cpu2]/thread=ff0916724560:
Dec 15 16:54:05 foo genunix: [ID 683410 kern.notice] BAD TRAP: type=0 (#de
D
On Tue, Dec 15, 2009 at 2:31 AM, Craig S. Bell wrote:
> Mike, I believe that ZFS treats runs of zeros as holes in a sparse file,
> rather than as regular data. So they aren't really present to be counted for
> compressratio.
>
> http://blogs.sun.com/bonwick/entry/seek_hole_and_seek_data
> http:
On Tue, Dec 15, 2009 at 3:06 PM, Kjetil Torgrim Homme
wrote:
> Robert Milkowski writes:
>> On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
>>> Because if you can de-dup anyway why bother to compress THEN check?
>>> This SEEMS to be the behaviour - i.e. I would suspect many of the
>>> file
Martin,
I think we should continue offline. Anyway, see my comments/answers inline.
Thanks,
Gonzalo.
-->
Martin Uhl wrote:
The dirs blocking the mount are created at import/mount time.
How do you know that?
In the previous example I could reconstruct that using zfs mount. Just lo
Hi all,
after upgrading PowerPath (from 5.2 to 5.2 SP 2) and retrying the commands
to create the zpool, they executed successfully:
--
r...@solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-15.14:37:20 zpool add tank mirror emcpow
Hi All,
NAS storage server in OpenSolaris JeOS Prototype
http://blogs.sun.com/VirtualGuru/entry/nas_storage_server_in_opensolaris
Nice day
Rudolf Kutina
Greetings!
The Gluster Team is happy to announce the release of Gluster Storage
Platform 3.0. The Gluster Storage Platform is based on the popular open
source clustered file system GlusterFS, integrating the file system, an
operating system layer, a web based management interface, and an easy to
Robert Milkowski writes:
> On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
>> Because if you can de-dup anyway why bother to compress THEN check?
>> This SEEMS to be the behaviour - i.e. I would suspect many of the
>> files I'm writing are dups - however I see high cpu use even though
>> o
>>The dirs blocking the mount are created at import/mount time.
>How do you know that?
In the previous example I could reconstruct that using zfs mount. Just look at
the last post.
I doubt ZFS removes mount directories.
>If you're correct you should have been able to reproduce
>the problem by doing a
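One quick way to check for this by hand, sketched with placeholder names: look
for stray entries under the mountpoint before mounting; zfs refuses to mount
over a non-empty directory unless an overlay mount is forced with -O.
# ls -lA /tank/data
# zfs mount -O tank/data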
Cyril Plisko wrote:
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
wrote:
Right, but 'verify' seems to be 'extreme safety' and thus rather rare
use case.
Hmm, dunno. I wouldn't set anything but a scratch file system to
dedup=on. Anything of even slight significance is set to dedup=verify.
Wh
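In command form, a sketch with placeholder dataset names; dedup=verify adds a
byte-for-byte comparison whenever checksums match before sharing a block, at
some extra read cost.
# zfs set dedup=on tank/scratch
# zfs set dedup=verify tank/important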
Hi Cindy,
I downloaded that document and I'll follow the instructions before updating
the host. I just tried the procedure on a different host (which did not
have the problem I wrote about) and it worked.
I'll follow up with news after upgrading the host where the problem occurs.
Cesare
On Mon, Dec 14, 2009 at 9:12 PM, C
Mike, I believe that ZFS treats runs of zeros as holes in a sparse file, rather
than as regular data. So they aren't really present to be counted for
compressratio.
http://blogs.sun.com/bonwick/entry/seek_hole_and_seek_data
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/017565.htm
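A quick way to see this, sketched with placeholder paths: write a file of zeros
to a compressed dataset; ls -l reports the full logical size, du shows almost
nothing allocated, and per the explanation above the zero runs become holes
rather than data counted in compressratio.
# zfs set compression=on tank/test
# dd if=/dev/zero of=/tank/test/zeros bs=1024k count=100
# ls -l /tank/test/zeros
# du -h /tank/test/zeros
# zfs get compressratio tank/test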