I thought I'd collect some advice to make each crash as useful as possible.
Any pointers are appreciated.
Thanks,
-- Peter
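(A starting point that generally applies on OpenIndiana: make sure crash
dumps are actually captured and saved. A sketch; the savecore directory is
just the default:

  # dumpadm                          # show the current dump configuration
  # dumpadm -y -c all -s /var/crash  # run savecore on reboot, dump all memory
  # savecore -L                      # grab a live dump without a reboot

With that in place, each crash should at least leave a vmdump.N behind to
feed to mdb.)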
On Wed, Mar 20, 2013 at 8:34 AM, Peter Wood wrote:
> I have two identical Supermicro boxes with 32GB ram. Hardware details at
> the end of the message.
On Wed, Mar 20, 2013, Peter Wood wrote:
> > I'm sorry. I should have mentioned that I can't find any errors in the
> > logs. The last entry in /var/adm/messages is that I removed the keyboard
> > after the last reboot, and then it shows the new boot-up messages whe
> ...the oi151a5 image from grub. Apologies in advance if I'm stating the
> obvious.
>
> -- Trey
>
>
> On Mar 20, 2013, at 11:34 AM, "Peter Wood" wrote:
>
> I have two identical Supermicro boxes with 32GB ram. Hardware details
> at the end of the message.
Hi Jim,
Thanks for the pointers. I'll definitely look into this.
--
Peter Blajev
IT Manager, TAAZ Inc.
Office: 858-597-0512 x125
On Wed, Mar 20, 2013 at 11:29 AM, Jim Klimov wrote:
> On 2013-03-20 17:15, Peter Wood wrote:
>
>> I'm going to need some help with the crash
>
> michael
>
>
> On Wed, Mar 20, 2013 at 4:50 PM, Peter Wood wrote:
>
>> I'm sorry. I should have mentioned that I can't find any errors in the
>> logs. The last entry in /var/adm/messages is that I removed the keyboard
>> after the last reboot and t
Does the Supermicro IPMI show anything when it crashes? Does anything
> show up in event logs in the BIOS, or in system logs under OI?
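(If the BMC is reachable over the network, ipmitool can pull the IPMI
system event log; a sketch, with the address and credentials as
placeholders:

  $ ipmitool -I lanplus -H 192.0.2.10 -U ADMIN sel list

Hardware events around the time of the crash would show up there.)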
>
>
> On Wed, Mar 20, 2013 at 11:34 AM, Peter Wood wrote:
>
>> I have two identical Supermicro boxes with 32GB ram. Hardware details at
>> the
...the max
memory they can take.
In summary, all I did was upgrade to OI 151.a.7 and reconfigure the zpool.
Any idea what the problem could be?
Thank you
-- Peter
Supermicro X9DRH-iF
Xeon E5-2620 @ 2.0 GHz 6-Core
LSI SAS9211-8i HBA
32x 3TB Hitachi HUS723030ALS640, SAS,
On Wed, Feb 20, 2013 at 5:46 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
>> On Thu, 21 Feb 2013, Sašo Kiselkov wrote:
>>
>> On 02/21/2013 12:27 AM, Peter Wood wrote:
>>>
>>>> Will adding another vdev hurt the performance?
>>>>
spares
  c8t5000CCA01ABDB020d0    AVAIL
  c8t5000CCA01ABDB060d0    AVAIL
errors: No known data errors
#
Will adding another vdev hurt the performance?
Thank you,
-- Peter
I forgot about compression. Makes sense. As long as the zeroes find their way
to the backend storage this should work. Thanks!
Kind regards
JP
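(For anyone trying the same trick, a minimal sketch; the dataset name is
made up, and compression must be off on it or the zeroes never reach the
backend:

  # zfs create -o compression=off tank/zerofill
  # dd if=/dev/zero of=/tank/zerofill/zeros bs=1M ; sync
  # rm /tank/zerofill/zeros ; zfs destroy tank/zerofill

As noted elsewhere in the thread, taking a snapshot first and rolling back
to it may be kinder to the pool than deleting the junk file.)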
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
support (I believe currently only Nexenta but correct me if I am wrong) the
blocks will not be freed, will they?
Kind regards
JP
Sent from a mobile device.
On 10.02.2013 at 11:01, "Datnus" wrote:
> I
Hi,
OK then, I guess my next question would be what's the best way to "undedupe"
the data I have?
Would it work for me to zfs send/receive on the same pool (with dedup off),
deleting the old datasets once they have been 'copied'?
Yes, worked for me.
I think I remember reading somewhere that
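(A sketch of the send/receive approach; dataset names are made up, and
dedup should already be off so the received copy is written un-deduped:

  # zfs set dedup=off tank
  # zfs snapshot -r tank/data@undedup
  # zfs send -R tank/data@undedup | zfs recv tank/data.new
  # zfs destroy -r tank/data
  # zfs rename tank/data.new tank/data

The pool needs enough free space to hold the fully expanded copy while both
copies exist.)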
Hi Edward,
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Koopmann, Jan-Peter
all I can tell you is that I've had terrible scrub rates when I u
Hi Karl,
Recently, however, it has started taking over 20 hours to complete. Not much has
happened to it in that time: A few extra files added, maybe a couple of
deletions, but not a huge amount. I am finding it difficult to understand why
performance would have dropped so dramatically.
FYI th
Right on, Tim. Thanks. I didn't know that. I'm sure it's documented
somewhere and I should have read it, so double thanks for explaining it.
--
Peter Blajev
IT Manager, TAAZ Inc.
Office: 858-597-0512 x125
On Thu, Jan 17, 2013 at 4:18 PM, Timothy Coalson wrote:
> On Thu, Jan 1
Great points, Jim. I have requested more information on how the gallery share
is being used, and any temporary data will be moved out of there.
About atime: it is set to "on" right now and I've considered turning it off,
but I wasn't sure whether this would affect incremental zfs send/receive.
'zfs send -i s
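(For what it's worth, the atime property itself shouldn't break
incrementals; a typical cycle, with made-up names, looks like:

  # zfs set atime=off tank/gallery
  # zfs snapshot tank/gallery@s2
  # zfs send -i @s1 tank/gallery@s2 | zfs recv -F backup/gallery

Turning atime off just means reads stop dirtying metadata, so the
incremental streams get smaller, not broken.)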
re these write operations
are applied.
The 'zpool iostat -v' output is uncomfortably static. The values of
read/write operations and bandwidth are the same for hours and even days.
I'd expect at least some variations between morning and night. The load on
the servers is different for
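(One thing worth checking: without an interval argument, 'zpool iostat'
prints averages since the pool was imported, which look almost frozen on a
long-running box. Asking for repeated samples shows live numbers, e.g.

  # zpool iostat -v tank 5

where the pool name is a placeholder and 5 is the sampling interval in
seconds. The first report is still the since-import average; the rest are
per-interval.)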
> da0 at twa0 bus 0 scbus0 target 0 lun 0
>da1 at twa0 bus 0 scbus0 target 1 lun 0
>da2 at twa0 bus 0 scbus0 target 2 lun 0
>da3 at twa0 bus 0 scbus0 target 3 lun 0
>da4 at twa0 bus 0 scbus0 target 4 lun 0
Are these all JBOD devices?
--
Peter Jeremy
Hi Eugen,
Whether it's compatible entirely depends on the chipset of the SATA controller.
Basically that card is just a dual-port 6Gbps PCIe SATA controller with the
space to mount one ($149) or two ($299) 2.5-inch disks. Sonnet, a Mac-focused
company, offers it as a way to better utilize exist
Hi Jerry,
A couple of things that might help you troubleshoot your Intel SASUC8I HBA:
1. Are you seeing all eight devices in the BIOS for the card?
2. If yes, do other operating systems (say a Linux LiveCD) see all the disks
too?
3. Is there any difference between the disks (e.g. four 2TB Seagate
Hi Nathan,
You've misunderstood how the ZIL works and why it reduces write latency for
synchronous writes.
Since you've partitioned a single SSD into two slices, one as pool storage and
one as ZIL for that pool, all sync writes will be 2x amplified. There's no way
around it. ZFS will write to
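(The usual way out is a separate device dedicated as a log vdev, so the
double write lands on different devices; a sketch with a made-up device
name:

  # zpool add tank log c4t1d0

Sync writes then hit the log device once and the main pool once, instead of
the same SSD twice.)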
...imagine those come cheap, and I'm sure they are deeper than most things in
your rack. Dell/DataOn do 60 disks in 4U, but with eleven of the SGI JBODs in a
single rack (44U), that's a whopping 660 4TB disks (2.4PB raw) per cabinet.
Truly silly, silly dense.
On Nov 13, 2012, at 3:57
I assumed that ZFS reset the
parent to "unknown" rather than leaving it as a pointer to a random
no-longer-valid object.
This probably needs to be documented as a caveat on "zfs diff" -
especially since it can cause hangs and panics with older kernel code.
--
Peter Jeremy
On 2012-Nov-19 21:10:56 +0100, Jim Klimov wrote:
>On 2012-11-19 20:28, Peter Jeremy wrote:
>> Yep - that's the fallback solution. With 1874 snapshots spread over 54
>> filesystems (including a couple of clones), that's a major undertaking.
>> (And it loses timestamp information.)
On 2012-Nov-19 13:47:01 -0500, Ray Arachelian wrote:
>On 11/19/2012 12:03 PM, Peter Jeremy wrote:
>> The damage exists in the oldest snapshot for that filesystem.
>Are you able to delete that snapshot?
Yes, but it has no effect - the corrupt object exists in the current
pool so del
t if you know the damage is recent.
The damage exists in the oldest snapshot for that filesystem.
--
Peter Jeremy
That is a last resort (since there
are 54 filesystems and ~1900 snapshots in the pool).
--
Peter Jeremy
On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner wrote:
> On 2012-11-13 17:42, Peter Tribble wrote:
>
> > Given storage provisioned off a SAN (I know, but sometimes that's
> > what you have to work with), what's the best way to expand a pool?
> >
> > Specific
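(For a SAN-backed pool there are generally two options: add another LUN as
a new top-level vdev, or grow the existing LUN on the array and let ZFS
expand into it. Sketches, with made-up device names:

  # zpool add tank c5t0d0          # new LUN becomes an additional vdev
  # zpool online -e tank c4t0d0    # expand in place after the LUN has grown

The second can also be automated with the autoexpand pool property.)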
s what folks
were using when they needed more than two dozen 3.5" SAS disks for use with ZFS.
Thanks
-Peter
...starts slowing down?
> So I was planning to rebuild the server with FreeBSD 9.0 and ZFS 28, but I
> didn't want to make any basic design mistakes in doing this.
I'd suggest you test 9.1-RC2 (just released) with a view to using 9.1,
rather than installing 9.0.
Since your qu
> ...re flags, and that is currently
> supported by the oi_151a5 prebuilt distro (I don't know of other
> builds with that - the feature was integrated into the code this summer).
FreeBSD-head does.
--
Peter Jeremy
ly repair the sector thanks to copies=2.
b) Attempt to rebuild your laptop and restore from backups (left securely
at home) via the dodgy hotel wifi.
--
Peter Jeremy
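(copies=2 above is an ordinary dataset property; a minimal sketch, dataset
name made up:

  # zfs create -o copies=2 rpool/export/home

It only applies to blocks written after it is set, which is why it is best
enabled when the filesystem is created.)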
it'll be available in ZFS in the near future.
--
Peter Jeremy
but if you want a guarantee that your data is
securely written to stable storage then you need to wait for that
stable storage. msync(MS_ASYNC) should have no impact on a later
munmap(2) and it should always be safe to call msync(MS_ASYNC) before
munmap(2) (in fact, it's a good idea to maximi
Hi Timothy,
>
> I think that if you are running an illumos kernel, you can use
> /kernel/drv/sd.conf and tell it that the physical sectors for a disk
> model are 4k, despite what the disk says (and whether they really
> are). So, if you want an ashift=12 pool on disks that report 512
> sectors,
>
> What makes you think the Barracuda 7200.14 drives report 4k sectors?
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg48912.html
Nigel stated this here a few days ago. I did not check for myself. Maybe Nigel
can comment on this?
As for the question "why do you want 4k drives":
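(For reference, the sd.conf override mentioned above looks roughly like
this on illumos; the model string is a placeholder, the vendor field is 8
characters, space-padded, and the entry takes effect after 'update_drv -vf
sd' or a reboot:

  sd-config-list = "ATA     EXAMPLE MODEL", "physical-block-size:4096";

A pool created on such disks afterwards should come up with ashift=12.)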
On 6/18/12 12:19 AM, Koopmann, Jan-Peter wrote:
>> Hi Carson,
>>
>>
>>I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
>>They also make a 4-bay TR4X.
>>
>>http://www.sansdigital.com/towerraid/tr4xb.html
>>http://www.sansdig
Hi Bob,
> On Mon, 18 Jun 2012, Koopmann, Jan-Peter wrote:
>>
>> Looks nice! The only thing coming to mind is that, according to the
>> specifications, the enclosure is 3Gbit/s "only". If I choose
>> to put in an SSD with 6Gbit/s this would not be optimal. I
Hi Carson,
>
> I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
> They also make a 4-bay TR4X.
>
> http://www.sansdigital.com/towerraid/tr4xb.html
> http://www.sansdigital.com/towerraid/tr8xb.html
Looks nice! The only thing coming to mind is that according to the
specificati
Hi Tim,
thanks to you and the others for answering.
> worst case). The worst case for 512 emulated sectors on zfs is
> probably small (4KB or so) synchronous writes (which if they mattered
> to you, you would probably have a separate log device, in which case
> the data disk write penalty may no
Hi,
my oi151-based home NAS is approaching a frightening "drive space" level. Right
now the data volume is a 4x1TB RAID-Z1: 3.5" local disks individually
connected to an 8-port LSI 6Gbit controller.
So I can either exchange the disks one by one with autoexpand, use 2-4TB disks,
and be happy.
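(The one-by-one route is roughly this; pool and device names are made up,
and each resilver must finish before the next swap:

  # zpool set autoexpand=on tank
  # zpool replace tank c3t0d0 c3t6d0
  # zpool status tank              # wait for the resilver, then repeat

Once the last small disk is out, the extra capacity shows up
automatically.)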
ld leave your drive seriously fragmented. If you do try this,
I'd recommend creating a snapshot first and then rolling back to it,
rather than just deleting the junk file. Also, this (obviously) won't
work at all on a filesystem with compressi
Hi,
are those Dell-branded WD disks? Dell tends to manipulate the firmware of
the drives so that power handling with Solaris fails. If this is the case
here, the easiest way to make it work is to modify /kernel/drv/sd.conf and
add an entry for your specific drive, similar to this:
sd-config-list= "WD
ch vdev in a raidz configuration. In
practice we're finding that our raidz systems actually perform
pretty well when compared with dynamic stripes, mirrors, and
hardware RAID LUNs.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
6) Verify data written in (2) can be read.
7) Argue with drive vendor that drive doesn't meet specifications :-)
A similar approach can also be used to verify that NCQ & cache flush
commands actually work.
--
Peter Jeremy
...overhead there. FreeBSD leaves the drive cache enabled
in either situation. I'm not sure how OI or Linux behave.
--
Peter Jeremy
Thank you all for the replies. I'll try the suggested solutions.
--
Peter
On Thu, Apr 12, 2012 at 2:06 PM, Roberto Waltman wrote:
> Cindy Swearingen wrote:
>
>>
>> We don't yet have an easy way to clear a disk label, ...
>>
>
> dd if=/dev/zero of=...
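(Expanding on that: ZFS keeps two labels at the front of the device and two
at the end, so it helps to zero both ends. A sketch, with the device name a
placeholder, and obviously double-checked before running:

  # dd if=/dev/zero of=/dev/rdsk/c2t0d0p0 bs=1024k count=2
  # dd if=/dev/zero of=/dev/rdsk/c2t0d0p0 bs=1024k oseek=N

where N is the device size in MB minus 2, so the last couple of megabytes
(and the trailing labels) are wiped as well.)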
ps with
/dev/dsk/c2t5000CCA369C89636d0s2
root:~#
I used -f and it worked, but I was wondering: is there a way to completely
"reset" the new disk? Remove all partitions and start from scratch.
Thank you
Peter
a dying (or dead) disk in the target pool?
--
Peter Jeremy
Hi Brandon,
On Mon, Mar 5, 2012 at 9:52 AM, luis Johnstone <l...@luisjohnstone.com> wrote:
As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses
512B sectors and so I presume does not suffer from such issues (because it
doesn't lie about the physical layout of sectors o
are therefore a good thing.)
> How *do* some things get fixed then - can only dittoed data
> or metadata be salvaged from second good copies on raidZ?
You can recover anything you have enough redundancy for. Which
means everything, up to the redundancy of the vdev. B
It's supposed to be
7111576: arc shrinks in the absence of memory pressure
currently in status "accepted" and an RPE escalation pending.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tomas Forsman
Sent: Donnerstag,
Thanks. The guys from Oracle are currently looking at some new code that was
introduced in arc_reclaim_thread() between b151a and b175.
Peter Radig, Ahornstrasse 34, 85774 Unterföhring, Germany
tel: +49 89 99536751 - fax: +49 89 99536754 - mobile: +49 171 2652977
email: pe...@radig.de
Free (freelist)           3284924             12831   78%
Total                     4192012             16375
Physical                  4192011             16375
I will create an SR with Oracle.
Thanks,
Peter
-Original Message-
From: Tomas Forsman
not seeing this on SolEx 11/10.
Thanks,
Peter
*** ::memstat ***
Page Summary                Pages                MB  %Tot
Kernel                     860254              3360   21%
ZFS File Data              304711
ining a
mirrored root with RAIDZ data aren't that great. At home, I have 6
1TB disks and I've carved out 8GB from the front of each (3GB for swap
and 5GB for root) and the remainder in a RAIDZ2 pool - that's less
than 1% overhead. 5GB is big enough to hold the comple
the system is out of service
and I can reconstruct the data if necessary. Although knowing
how to fix this would be generally useful in the future...
Thanks,
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
>
> On the Dell website I've the choice between :
>
>
>SAS 6Gbps External Controller
>PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe
>PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe
>PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
>
> ...s seldom needed at
> my former site once UFS journalling
> became available. Sweet update.
Whilst Solaris very rarely insists we run fsck, we have had a number
of cases where we have found files corrupted following a crash - even
with UFS journalling enabled. Unfortunately, this isn'
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
>
>
> On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> wrote:
>>
>> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
>> >
>> > Every scrub I've ever done that has found an error required manual
>
as a
result of a scrub, and I've never had to intervene manually.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Tue, Sep 13, 2011 at 8:34 PM, Paul B. Henson wrote:
> On 9/13/2011 5:21 AM, Peter Tribble wrote:
>
>> Update 10 has been out for about 3 weeks.
>
> Where was any announcement posted? I haven't heard anything about it. As far
> as I can tell, the Oracle site still o
(This doesn't affect me all that much, as ACLs on ZFS have never
really worked right, so anything where the ACL is critical gets stored
on ufs [yuck].)
Also, aclmode is no longer listed in the usage message you see
if you do 'zfs get'.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
have the ability to slot that copy of the data
instantly into service if the primary copy fails.
For tar, you can substitute a free or commercial backup solution.
It works the same way.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ent to disk in the
background.
Second, use a proper benchmark suite, and one that isn't itself
a bottleneck. Something like vdbench, although there are others.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
s of snapshots and a fair amount
of activity. A scrub takes around 17 hours.
This is another area where the mythical block rewrite would help a lot.
--
Peter Jeremy
Hi,
my system is running oi148 on a Supermicro X8SIL-F board. I have two pools (a
2-disc mirror and a 4-disc RAIDZ) with RAID-level SATA drives (Hitachi HUA72205
and Samsung HE103UJ). The system runs as expected; however, every few days
(sometimes weeks) the system comes to a halt due to these errors
performance would still
improve. This means you either get better system performance from
the same SSD, or you can get the same system performance from a
lower-performance (cheaper) SSD.
--
Peter Jeremy
ode. ZFS makes it easier to switch modes because it doesn't care
about the actual device name - at worst, you will need an export and
import.
--
Peter Jeremy
...so OS X would want to
know whether the pool contained resource forks even if opened R/O
but this should not stop a different ZFS port from reading (and
maybe even writing to) the pool.
--
Peter Jeremy
be a disadvantage).
--
Peter Jeremy
lesystem.
BTW, if you do elect to build a bootable, removable drive for backups,
you should be aware that gzip compression isn't supported - at least
in v15, trying to make a gzip compressed filesystem bootable or trying
to set compression=gzip on a bootable filesystem gives a very
uninformative err
CPUs and 2 GB of RAM.
Hopefully a silly question, but does the SB1000 support USB2? All of
the Sun hardware I've dealt with only has USB1 ports.
And, BTW, 2GB RAM is very light on for ZFS (though I note you only
have a very small amount of data).
--
Peter Jeremy
s (maybe one showing the sizes, one showing the ARC
efficiency, another one for L2ARC).
> 5. Who wants to help with this little project?
I'm definitely interested in emulating arcstat in jkstat. OK, I have
an old version,
but it's pretty much
o the backup host).
--
Peter Jeremy
ly be better
> switching to nexenta or openindiana or solaris 11 express, because they all
> support ZFS much better than freebsd.
I'm primarily interested in running FreeBSD and will be upgrading to
ZFSv28 once it's been shaken out a bit longer.
--
Peter
2011-03-17.11:17:31 zpool import zroot
2011-03-17.11:30:13 [internal rollback txg:872819992] dataset = 469
2011-03-17.11:30:13 zfs rollback zroot/home@20110309
2011-03-17.12:01:02 zfs recv -vd zroot
2011-03-17.12:03:57 [internal rollback txg:872820399] dataset = 469
2011-03-17.12:03:57 zfs rollback
...and
upgrade your pool to v15 or rebuild your pool (via send/recv or similar).
--
Peter Jeremy
Z1 with a
hot spare (7+1+1) is better than 9-way RAIDZ2 (7+2). In the latter
case, your "hot spare" is already part of the pool, so you don't
lose the time-to-notice plus time-to-resilver before regaining
redundancy. The downside is that actively using the "hot spar
Hey folks,
While scrubbing, zpool status shows nearly 40MB "repaired" but 0 in each of the
read/write/checksum columns for each disk. One disk has "(repairing)" to the
right, but once the scrub completes there's no mention that anything ever needed
fixing. Any idea what would need to be repair
y "gpart list" - which will display FreeBSD's view
of the physical disks. It might also be worthwhile looking at a hexdump
of the first and last few MB of the "faulty" disks - it's possible that
the controller has decided to just shift things by a few sectors so the
labels a
g - which might let me recover it in other ways.
--
Peter Jeremy
...HBA (and one slot
in the server) for each MD1200, which chews up slots pretty quick.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
...and not 128 bits or 512
bits? I guess Sha512 may be overkill. In your formula, how many blocks of
data would be needed to have one collision using Sha128?
Appreciate your help.
Regards,
Peter
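(A back-of-envelope answer using the standard birthday bound, not anything
from the earlier post: with a b-bit hash and n distinct blocks, the
collision probability is approximately

  p \approx \frac{n^2}{2^{b+1}}, \qquad n_{1/2} \approx 2^{b/2}

so a hypothetical 128-bit hash reaches even odds of a collision around
n = 2^64 blocks (about 1.8e19), while a 256-bit hash like SHA-256 needs
around 2^128 blocks, far beyond anything a pool could store.)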
identify the
bottlenecks. Don't want to replace a component just to find that there was no
improvement in iometer reading.
Thank you in advance for your insight.
Regards,
Peter
...y* it is possible to reconstruct the original block from the
256-bit signature by using a simple lookup. Essentially, we would now have
the world's best compression algorithm irrespective of whether the data is
text or binary. This is hard to digest.
Peter
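(It is indeed impossible, by a counting argument: a 128KB block is 2^20
bits, so there are 2^{2^{20}} possible blocks but only 2^{256} possible
digests. On average each digest therefore stands for

  2^{2^{20} - 256} = 2^{1048320}

distinct blocks, so no lookup can invert the hash. Dedup works only because
the table maps a digest to a block that is already stored.)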
) is more
efficient than (Sha256+Verification). And both are 100% accurate in detecting
duplicate blocks.
Thank you in advance for your help.
Peter
oint within the kernel also means changes to
when FPU context is saved - and, unless this can be implemented
lazily, it will adversely impact the cost of all context switches
and potentially system calls.
--
Peter Jeremy
...                  iused        ifree  %iused  Mounted on
/images/fred     140738056  36000718887      0%  /images/fred
average 11k
I've never seen ZFS run out of inodes, though.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Hi,
Thank you for your help.
I actually had the script working. However, I just wanted to make sure that
spaces are not permitted within the field value itself. Otherwise, the regular
expression would break.
Regards,
Peter
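(In case it helps: zfs list has a scripting mode that drops the header and
separates columns with single tabs, so a value containing spaces cannot
throw off the parsing. A sketch, with a made-up dataset:

  # zfs list -H -o name,used,mountpoint tank/data | awk -F'\t' '{ print $3 }'

Splitting on the tab rather than arbitrary whitespace is the robust way.)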
the field values. If this is the
case, I can split the output across spaces.
Thank you in advance for your help.
Regards,
Peter
sted and supported", and it's reasonably clear that the way to
get support is via the existing Premier Support offering. And it's just the
same deal as with S10 - you want to use it in production, you need to
have a support contract. It's not hard to find this out, just a few seconds
...mailing list I'm subscribed to where signatures
get mangled.
--
Peter Jeremy
Thank you for your help.
Regards,
Peter
The only option to grow a root vdev seems to be to use "zpool replace" and
replace an existing disk with a bigger disk.
Is my understanding correct?
Thank you in advance for your help.
Regards,
Peter
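(That is essentially right for a root pool; attach/detach gives the same
result while keeping redundancy during the swap. A sketch for an x86
system, device names made up:

  # zpool attach rpool c0t0d0s0 c0t2d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  # zpool detach rpool c0t0d0s0     # after the resilver completes
  # zpool set autoexpand=on rpool   # pick up the extra space

Without autoexpand, an export/import or 'zpool online -e' also exposes the
new capacity.)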
er is "active" is also the "default?"
Regards,
Peter
to
manage checkpoints. I would appreciate your help with how to create, destroy,
and roll back to a checkpoint, and how to list all the checkpoints.
Thank you in advance for your help.
Regards,
Peter
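(Assuming "checkpoints" here means ZFS snapshots, the basic lifecycle is
below; the dataset name is a placeholder:

  # zfs snapshot tank/data@before-upgrade    # create
  # zfs list -t snapshot -r tank/data        # list
  # zfs rollback tank/data@before-upgrade    # roll back
  # zfs destroy tank/data@before-upgrade     # destroy

Rolling back to anything but the most recent snapshot requires -r, which
destroys the newer snapshots.)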