Quick question about the interaction of ZFS filesystem compression and the
filesystem cache. We have an OpenSolaris (actually Nexenta alpha-6) box
running RRD collection. These files seem to be quite compressible. A test
filesystem containing about 3,000 of these files shows a compressratio of 12.5x.
> Looking at the source code overview, it looks like
> the compression happens "underneath" the ARC layer,
> so by that I am assuming the uncompressed blocks are
> cached, but I wanted to ask to be sure.
>
> Thanks!
> -Andy
>
> Yup, your assumption is correct. We currently do
> compression below the ARC, so the data cached in the ARC is uncompressed.
> Be careful here. If you are using files that have no
> data in them yet
> you will get much better compression than later in
> life. Judging by
> the fact that you got only 12.5x, I suspect that your
> files are at
> least partially populated. Expect the compression to
> get worse over
> time.
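For reference, a quick way to measure this on a scratch dataset is to enable compression, copy a sample of the files in, and read the ratio back (the pool, dataset, and source paths below are only illustrative):

  # zfs create -o compression=on mypool/rrdtest
  # cp /var/rrd/*.rrd /mypool/rrdtest/
  # zfs get compression,compressratio mypool/rrdtest

Note that compressratio only reflects blocks written while compression was enabled; data that was already on the filesystem before the property was set stays uncompressed on disk.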
D]      814M     -   2.03G  -
stage  6.49G  979M   6.49G  /stage
Can someone point me in the right direction, please?
Thanks,
Andrew
c0d0s7  ONLINE
# zpool import stage
internal error: No such device
Abort - core dumped
It's all very strange...
Andrew
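If the devices have moved or the pool was not exported cleanly, it can help to tell zpool explicitly where to look and to force the import; the device paths below are only examples:

  # zpool import                      (scan /dev/dsk and list importable pools)
  # zpool import -d /dev/dsk stage    (search a specific device directory)
  # zpool import -f stage             (force it if the pool was last used on another host)

If 'zpool import' with no arguments does not list the pool at all, the label on the underlying device may be unreadable, which is a separate problem from the core dump itself.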
Please explain the importance of this, other than the self-healing and the
other features.
Thank you very much,
Andrew
Thanks Ben, and thanks Jason for clearing everything up for me via e-mail!
Hope you two, and everyone here, have a great Christmas and a happy holiday!
I am running a home fileserver with a pair of 4-port cheapo Silicon Image 3114-based
cards. I had to down-rev the firmware on the cards to make them dumb SATA
controllers rather than RAID cards. I bought them at Fry's for about $70 each; they're
labeled "SIIG SATA 4-channel RAID", and the part number appears to
The SATA framework has already been integrated and is available on Solaris 10
Update 3 and Nevada.
Cheers
Andrew.
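As a sanity check that the cards are behaving as plain controllers rather than RAID cards, the disks should show up individually to Solaris (output varies by system and driver; this is just a sketch):

  # format < /dev/null     (every attached disk should be listed on its own)
  # cfgadm -al             (if the controller is driven by the SATA framework, its ports appear as sata0/0-style attachment points)

If each disk is visible on its own rather than as a single RAID volume, ZFS can handle the redundancy itself.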
Sorry, yes - update 2.
Andrew.
@200908271200
347 r...@thumper1:~> zfs rollback -r thumperpool/m...@200908270100
cannot destroy 'thumperpool/m...@200908271200': dataset already exists
This is an X4500 running Solaris 10 U8. I'm running zpool version 15 and zfs
version 2.
Any guidance much appreciated.
Andre
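One thing worth checking when 'zfs rollback -r' complains that a dataset already exists is whether any of the snapshots it would have to destroy have clones hanging off them; a clone keeps its origin snapshot alive, so the rollback cannot remove it until the clone is destroyed or promoted. A rough way to look (the grep pattern is just an example matching the snapshot names above):

  # zfs list -t snapshot -r thumperpool | grep 200908
  # zfs get -r origin thumperpool     (any dataset whose origin is one of those snapshots is a clone)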
months ago.
Has anyone else seen this?
Thanks,
Andrew
--
Systems Developer
e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I recently upgraded the receiving thumper to Solaris 10 u8 and since then,
I've been
On Thu, Dec 10, 2009 at 09:50:43AM +, Andrew Robert Nicols wrote:
> We've been using ZFS for about two years now and make a lot of use of zfs
> send/receive to send our data from one X4500 to another. This has been
> working well for the past 18 months that we've been doin
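For readers following along, a typical incremental send/receive between two such hosts looks something like this; the pool, dataset, snapshot, and host names are illustrative, not the exact ones in use here:

  # zfs snapshot thumperpool/data@200912100100
  # zfs send -i thumperpool/data@200912100000 thumperpool/data@200912100100 | \
        ssh thumper2 zfs receive -F thumperpool/data

The -i stream contains only the blocks that changed between the two snapshots, and -F on the receiving side rolls the target back to the most recent common snapshot before applying the stream.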
Does "current" include sol10u10 as well as sol11? If so, when did that go in?
Was it in sol10u9?
Thanks,
Andrew
From: Cindy Swearingen <cindy.swearin...@oracle.com>
Subject: Re: [zfs-discuss] Can I create a mirror for a root rpool?
Date: December 16, 2011 10:38:2
Do you have any details on that CR? Either my Google-fu is failing or Oracle
has made the CR database private. I haven't encountered this problem, but I'd
like to know whether there are certain behaviors to avoid so as not to risk it.
Has it been fixed in Sol10 or OpenSolaris?
Thanks,
A
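For the root-pool mirroring itself, the usual sequence on a GRUB-based x86 system is roughly the following; the device names are examples, the second disk needs an SMI label, and SPARC systems use installboot instead of installgrub:

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2   (copy the slice table)
  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

zpool status will show the new side resilvering; the installgrub step is what makes the second disk bootable.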
e so if anyone can suggest
useful diagnostics to run on it while it's like this, please get back to me
ASAP. I will need to restart the box this afternoon so that our backups aren't
too far out of sync.
Thanks in advance,
Andrew Nicols
--
Systems Developer
e: andrew.nic...@luns.net.uk
im: a
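While the box is wedged, a few read-only things can be captured from any shell that still responds (these are standard tools; nothing ZFS-specific is assumed about the hang):

  # echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt   (kernel stacks of all threads)
  # echo "::spa" | mdb -k                                    (state of each imported pool)
  # iostat -xn 5 5                                           (whether the disks are doing any I/O at all)

If nothing responds, forcing a crash dump with 'reboot -d' at least preserves the state for post-mortem analysis.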
core dump, but I'm not sure where the best place to start
the analysis is. Any tips would be appreciated, as I've not been into
the nitty-gritty of the Solaris kernel yet.
Thanks in advance,
Andrew
--
Systems Developer
e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +4
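For a first pass over a kernel crash dump, mdb is the usual starting point; the dump number 0 below is only an example, use whichever files savecore wrote under /var/crash/<hostname>:

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status     (panic string and dump summary)
  > ::stack      (stack of the panicking thread)
  > ::msgbuf     (recent kernel messages leading up to the panic)

::status and ::msgbuf alone are often enough to tell whether the panic is in the ZFS code paths or somewhere else entirely.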
ical X4500 which was running
Nevada release 110 though I've only seen it fail once so far. I've just
upgraded this box to Nevada 112. I'm not sure whether it's related to the
sending/receiving of snapshots but that's the only FS activity on these two
boxes.
TIA,
Andrew
On
On Fri, Apr 17, 2009 at 12:29:23PM +0100, Andrew Robert Nicols wrote:
> I'm still seeing this problem frequently and the suggestions Viktor made
> below haven't helped (exclude: drv/ohci in /etc/system).
>
> I've got a selection of core dumps for analysis if anyone can
181800
cannot destroy 'thumperpool/m...@200906181900': dataset already exists
As a result, I'm a bit scuppered. I'm going to try going back to my 112
installation instead to see if that resolves any of my issues.
All of our thumpers have the following disk configuration:
On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote:
> Andrew Robert Nicols wrote:
>
>> The thumper running 112 has continued to experience the issues described by
>> Ian and others. I've just upgraded to 117 and am having even more issues -
>> I'm unable
On Wed, Jul 08, 2009 at 09:41:12AM +0100, Andrew Robert Nicols wrote:
> On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote:
> > Andrew Robert Nicols wrote:
> >
> >> The thumper running 112 has continued to experience the issues described by
> >> Ian and
              14.0K  DMU dnode
1  4  16K    8K  24.4G  38.0K  zvol object  <<<<<<
2  1  16K   512    512     1K  zvol prop
thanks
/andrew
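The object listing above looks like zdb's per-dataset object dump; something along these lines reproduces it (the dataset name is a placeholder):

  # zdb -dd mypool/myvol     (per-object table with lvl, iblk, dblk, dsize, lsize and type)

The 'zvol object' row is the one that holds the volume's data blocks, so its dsize is roughly the space the zvol actually occupies on disk.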
rmware.
Unfortunately, the zpool and zfs versions are too high to downgrade
thumper1 to.
I've tried upgrading thumper1 to 117 and now 121. We were originally
running 112. I'm still seeing exactly the same issues, though.
What can I do to find out what is causing these lockups?
> The case has been identified and I've just received an IDR, which I will
> test next week. I've been told the issue is fixed in update 8, but I'm
> not sure if there is an nv fix target.
>
> I'll post back once I've abused a test system for a while.
All,
Is there anywhere that indicates which versions of zfs and zpool will make
it into Solaris 10 update 7 (05/09) next month? I'm currently running Update
6 on an X4500 but would really like to have the new zpool scrub code
released in pool version 11.
Thanks in advance,
Andrew
--
Systems Deve
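Once a given update is installed, the quickest way to see which on-disk versions it supports is to ask the tools themselves; no pool name is needed:

  # zpool upgrade -v     (every pool version this release supports, with a one-line description)
  # zfs upgrade -v       (the same for filesystem versions)

Newly created pools and filesystems default to the highest version listed there.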