I should note that my import command was:
zpool import -f vault
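For anyone following along, a quick sanity check after a forced import is just the standard status/scrub pair (nothing pool-specific here beyond the name):
zpool status -xv vault   # reports anything unhealthy, with per-device error detail
zpool scrub vault        # walk and verify every block in the pool
zpool status vault       # shows scrub progress and any errors found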
I got my pool back
Did a rig upgrade (new motherboard, processor, and 8 GB of RAM), re-installed OpenSolaris 2009.06, did an upgrade to snv_130, and did the import!
The import only took about 4 hours!
I have a hunch that I was running into some sort of issue with not having enough RAM previously.
@ross
"If the write doesn't span the whole stripe width then there is a read
of the parity chunk, write of the block and a write of the parity
chunk which is the write hole penalty/vulnerability, and is 3
operations (if the data spans more than 1 chunk then it is written in
parallel so you can thi
Thanks a bunch - that did the trick :)
Each of these problems that you faced can be solved. Please ask for
help on each of these via separate emails to osol-discuss and you'll
get help.
I say so because I'm moving my infrastructure to opensolaris for these
services, among others.
-- Sriram
On 12/29/09, Duane Walker wrote:
> I tried
On Dec 29, 2009, at 5:37 PM, Brad wrote:
Hi! I'm attempting to understand the pros/cons between raid5 and
raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0).
I would appreciate some feedback on what I've understood so far:
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
A mirrored raidz provides redundancy at a steep cost to performance
and might I add a high monetary cost.
I am not sure what a "mirrored raidz" is. I have never heard of
such a thing before.
On Tue, Dec 29 at 17:00, Duane Walker wrote:
I am running a combination of Win7-64 and 32 bit computers and
someone else mentioned that win7 64 causes problems. The server
itself was very stable and SCP (WinSCP) worked fine but SMB wouldn't stay up. I tried restarting the services but only a re
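For what it's worth, the in-kernel SMB service is managed through SMF, so it can be restarted and inspected without a full reboot (standard service name on 2009.06, as far as I know):
svcs -xv smb/server                  # explain why the service is offline or degraded
svcadm restart network/smb/server    # restart the CIFS server
svcadm clear network/smb/server      # only needed if it dropped into maintenance state
tail /var/svc/log/network-smb-server:default.log   # recent service log entries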
Not sure of your experience level, but did you try running devfsadm and then checking in format for your new disks?
James Dickens
uadmin.blogspot.com
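For reference, the sequence James is describing is roughly this (no card-specific steps, just the generic ones):
devfsadm -Cv     # build device links for newly attached disks and prune stale ones
echo | format    # non-interactive way to list every disk the OS currently sees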
On Sun, Dec 27, 2009 at 3:59 AM, Muhammed Syyid wrote:
> Hi
> I just picked up one of these cards and had a few questions
> After installing i
I was trying to get Cacti running and it was all working except the PHP-SNMP.
I installed it but the SNMP support wasn't recognised (in phpinfo()).
I was reading the posts for the Cacti package and they said they were planning
to add the SNMP support.
I am running a combination of Win7-64 and 32 bit computers and someone else mentioned that win7 64 causes problems.
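(Side note: besides phpinfo(), the quickest check for the SNMP extension is from the command line. This only tells you whether the CLI PHP sees it; the web server's PHP may read a different php.ini.)
php -m | grep -i snmp    # prints "snmp" if the extension is loaded
php -i | grep -i snmp    # shows which ini/extension settings were picked up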
On Tue, Dec 29 at 11:14, Erik Trimble wrote:
Eric D. Mudama wrote:
On Tue, Dec 29 at 9:16, Brad wrote:
The disk cost of a raidz pool of mirrors is identical to the disk cost
of raid10.
ZFS can't do a raidz of mirrors or a mirror of raidz. Members of a
mirror or raidz[123] must be a fundamental device (i.e. file or drive)
On Tue, Dec 29 at 12:49, Tim Cook wrote:
"Serious CIFS work" meaning what? I've got a system that's been running
2009.06 for 6 months in a small office setting and it hasn't been "unusable"
for anything I've needed.
Weird. Win7-x64 clients crashed my 2009.06 installation within 30
seconds of
On Sun, Dec 27, 2009 at 06:02:18PM +0100, Colin Raven wrote:
> Are there any negative consequences as a result of a force import? I mean
> STUNT; "Sudden Totally Unexpected and Nasty Things"
> -Me
If the pool is not in use, no. It's a safety check to avoid problems
that can easily crop up when st
On Tue, Dec 29, 2009 at 02:37:20PM -0800, Brad wrote:
> I would appreciate some feedback on what I've understood so far:
>
> WRITES
>
> raid5 - A FS block is written on a single disk (or multiple disks
> depending on size data???)
There is no direct relationship between a filesystem and the RAID
s
Hi! I'm attempting to understand the pros/cons between raid5 and raidz after
running into a performance issue with Oracle on zfs
(http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0).
I would appreciate some feedback on what I've understood so far:
WRITES
raid5 - A FS block is
On Wed, Dec 16, 2009 at 8:19 AM, Brandon High wrote:
> On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn wrote:
>> In his case 'zfs send' to /dev/null was still quite fast and the network
>> was also quite fast (when tested with benchmark software). The implication
>> is that ssh network trans
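A crude way to separate the send speed from the transport, in case it helps (the pool, snapshot and host names below are made up):
time zfs send tank/fs@snap > /dev/null                    # raw send speed, no network involved
zfs send tank/fs@snap | ssh otherhost 'cat > /dev/null'   # adds ssh and the network, but still no zfs receive on the far end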
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80
hours to about 14 by reducing the number of snapshots, adding RAM,
turning off atime, compression, and some other tweaks. This week
(after replaying a large volume with dedup=on) it's back up, way up.
I replayed a 700G filesystem to
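For anyone wanting to try the same tweaks, they are all ordinary property changes (dataset names are placeholders):
zfs set atime=off tank                      # stop access-time updates, inherited by child datasets
zfs set compression=off tank/somefs         # only affects blocks written from now on
zfs list -H -t snapshot -r tank | wc -l     # count how many snapshots are being kept
zfs destroy tank/somefs@old-snap            # prune snapshots that are no longer needed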
On Dec 29, 2009, at 11:26 AM, Brad wrote:
@relling
"For small, random read IOPS the performance of a single, top-level
vdev is
performance = performance of a disk * (N / (N - P))
133 * 12/(12-1)=
133 * 12/11
where,
N = number of disks in the vdev
P = number of parity devices in the vdev"
> I booted the snv_130 live cd and ran zpool import
> -fFX and it took a day, but it imported my pool and
> rolled it back to a previous version. I haven't
> looked to see what was missing, but I didn't need any
> of the changes over the last few weeks.
>
> Scott
I'll give it a shot. Hope this
I have a problem which is perhaps related. I installed OpenSolaris snv_130.
After adding 4 additional disks and creating a raidz on them with compression=gzip and dedup enabled, I got a reproducible system freeze (not sure, but the desktop/mouse cursor froze) directly after login - without active
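(For context, the layout described above amounts to something like the following; the disk names are made up.)
zpool create data raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs set compression=gzip data
zfs set dedup=on data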
@relling
"For small, random read IOPS the performance of a single, top-level
vdev is
performance = performance of a disk * (N / (N - P))
133 * 12/(12-1)=
133 * 12/11
where,
N = number of disks in the vdev
P = number of parity devices in the vdev"
performance of a disk
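Finishing the arithmetic in that quote for a 12-disk raidz1: 133 * 12/(12-1) = 133 * 12/11, which is roughly 145 small random read IOPS for the entire vdev - i.e. barely better than the ~133 IOPS of a single drive.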
I booted the snv_130 live cd and ran zpool import -fFX and it took a day, but
it imported my pool and rolled it back to a previous version. I haven't looked
to see what was missing, but I didn't need any of the changes over the last few
weeks.
Scott
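One note for anyone else trying this: -F has a dry-run mode, so you can see what a recovery rollback would discard before committing to it (the pool name is only an example):
zpool import -nF tank    # report what -F would roll back, without actually importing (add -f if the pool was last used on another system)
zpool import -fFX tank   # forced import with (extreme) rewind recovery, as used above
zpool status -v tank     # afterwards, check pool health and any reported data errors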
Eric D. Mudama wrote:
On Tue, Dec 29 at 9:16, Brad wrote:
The disk cost of a raidz pool of mirrors is identical to the disk cost
of raid10.
ZFS can't do a raidz of mirrors or a mirror of raidz. Members of a
mirror or raidz[123] must be a fundamental device (i.e. file or drive)
"This wind
On Tue, Dec 29, 2009 at 12:48 PM, Eric D. Mudama wrote:
> On Tue, Dec 29 at 12:40, Tim Cook wrote:
>
>> On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker wrote:
>>
>> I tried running an OpenSolaris server so I could use ZFS but SMB Serving
>> wasn't reliable (it would only work for about 15 minutes).
On Tue, Dec 29 at 12:40, Tim Cook wrote:
On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker wrote:
I tried running an OpenSolaris server so I could use ZFS but SMB Serving
wasn't reliable (it would only work for about 15 minutes).
I've been running native cifs on Opensolaris for 3 years with about 15 minutes of downtime total
On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker wrote:
> I tried running an OpenSolaris server so I could use ZFS but SMB Serving
> wasn't reliable (it would only work for about 15 minutes).
I've been running native cifs on Opensolaris for 3 years with about 15
minutes of downtime total which was
On Tue, Dec 29, 2009 at 12:07 PM, Richard Elling wrote:
> On Dec 29, 2009, at 9:16 AM, Brad wrote:
>
> @eric
>>
>> "As a general rule of thumb, each vdev has the random performance
>> roughly the same as a single member of that vdev. Having six RAIDZ
>> vdevs in a pool should give roughly the performance as a stripe of six
>> bare drives, for random IO."
On Dec 29, 2009, at 10:03 AM, Eric D. Mudama wrote:
On Tue, Dec 29 at 9:50, Richard Elling wrote:
I don't believe compression matters. But dedup can really make a big
difference. When you enable dedup, the deduplication table (DDT) is
created to keep track of the references to blocks. When y
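If you want to see how large the DDT has grown on a given pool, zdb will print it (the pool name is a placeholder):
zdb -DD tank      # DDT summary and histogram, including entry counts and sizes
zpool list tank   # the DEDUP column shows the pool's current dedup ratio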
On Dec 29, 2009, at 9:16 AM, Brad wrote:
@eric
"As a general rule of thumb, each vdev has the random performance
roughly the same as a single member of that vdev. Having six RAIDZ
vdevs in a pool should give roughly the performance as a stripe of six
bare drives, for random IO."
This model be
On Tue, Dec 29 at 9:50, Richard Elling wrote:
I don't believe compression matters. But dedup can really make a big
difference. When you enable dedup, the deduplication table (DDT) is
created to keep track of the references to blocks. When you remove a
Are there any published notes on relativ
On Tue, Dec 29 at 9:16, Brad wrote:
@eric
"As a general rule of thumb, each vdev has the random performance
roughly the same as a single member of that vdev. Having six RAIDZ
vdevs in a pool should give roughly the performance as a stripe of six
bare drives, for random IO."
It sounds like we'll need 16 vdevs striped in a pool
On Tue, Dec 29, 2009 at 18:16, Brad wrote:
> @eric
>
> "As a general rule of thumb, each vdev has the random performance
> roughly the same as a single member of that vdev. Having six RAIDZ
> vdevs in a pool should give roughly the performance as a stripe of six
> bare drives, for random IO."
>
>
On Dec 29, 2009, at 12:34 AM, Brent Jones wrote:
On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones wrote:
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach wrote:
Brent,
I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum
On Tue, 29 Dec 2009, Ross Walker wrote:
A mirrored raidz provides redundancy at a steep cost to performance and might
I add a high monetary cost.
I am not sure what a "mirrored raidz" is. I have never heard of such
a thing before.
With raid10 each mirrored pair has the IOPS of a single drive
@eric
"As a general rule of thumb, each vdev has the random performance
roughly the same as a single member of that vdev. Having six RAIDZ
vdevs in a pool should give roughly the performance as a stripe of six
bare drives, for random IO."
It sounds like we'll need 16 vdevs striped in a pool to at
@ross
"Because each write of a raidz is striped across the disks the
effective IOPS of the vdev is equal to that of a single disk. This can
be improved by utilizing multiple (smaller) raidz vdevs which are
striped, but not by mirroring them."
So with random reads, would it perform better on a rai
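To make the "multiple smaller raidz vdevs" idea concrete, the difference is just in how the top-level vdevs are declared at pool creation time (disk names are placeholders; the two commands are alternatives, not steps):
# one wide raidz vdev: best usable capacity, roughly one disk's worth of random IOPS
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# two smaller raidz vdevs striped together: roughly two disks' worth of random IOPS,
# at the cost of a second disk going to parity
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0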
On Tue, Dec 29 at 4:55, Brad wrote:
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle,
but what's the difference between a raidz mirrored vdev vs a raid10
setup?
We have tested a zfs stripe configuration before with 15 disks and
our tester was extremely happy with the performance.
On Dec 29, 2009, at 7:55 AM, Brad wrote:
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle,
but what's the difference between a raidz mirrored vdev vs a raid10
setup?
A mirrored raidz provides redundancy at a steep cost to performance
and might I add a high monetary cost.
I tried running an OpenSolaris server so I could use ZFS but SMB Serving wasn't
reliable (it would only work for about 15 minutes). I also couldn't get Cacti
working (No PHP-SNMP support and I tried building PHP with SNMP but it failed).
So now I am going to run Ubuntu with RAID1 drives. I am t
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle, but what's the difference between a raidz mirrored vdev vs a raid10 setup?
We have tested a zfs stripe configuration before with 15 disks and our tester
was extremely happy with the performance. After
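For comparison, the ZFS analogue of a raid10 layout is just a pool built as a stripe of 2-way mirrors, e.g. (placeholder disk names):
zpool create oradata mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
Each additional "mirror a b" pair is another top-level vdev, and small random read IOPS scale roughly with the number of those vdevs.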
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I tried removing the flow and subjectively packet loss occurs a bit less
often, but still it is happening. Right now I'm trying to figure out if
it's due to the load on the server or not - I've left only about 15
concurrent recording instances, produci
I included networking-discuss@
On 28/12/2009 15:50, Saso Kiselkov wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Thank you for the advice. After trying flowadm the situation improved
somewhat, but I'm still getting occasional packet overflow (10-100
packets about every 10-15 minutes). Th
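A couple of commands that help tell host-side socket drops from flow-level limits (the flow name below is hypothetical):
netstat -s | grep -i overflow          # watch for counters such as udpInOverflows creeping up
flowadm show-flow                      # list the flows that are currently configured
flowadm show-flowprop recording-flow   # check the maxbw/priority set on a particular flow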
Hi Brent,
what you have noticed makes sense and that behaviour has been present since
v127, when dedupe was introduced in OpenSolaris. This also fits into my
observations. I thought I had totally messed up one of my OpenSolaris boxes
which I used to take my first steps with ZFS/dedupe and re-c
On Mon, Dec 28, 2009 at 01:40:03PM -0800, Brad wrote:
> "This doesn't make sense to me. You've got 32 GB, why not use it?
> Artificially limiting the memory use to 20 GB seems like a waste of
> good money."
>
> I'm having a hard time convincing the dbas to increase the size of the SGA to
> 20GB b
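If the underlying worry is ZFS and the SGA fighting over the same RAM, the usual approach on OpenSolaris is to cap the ARC in /etc/system rather than shrink the SGA; e.g. to hold the ARC to 8 GB (the number is only an example, and a reboot is needed for it to take effect):
* /etc/system: cap the ZFS ARC at 8 GB (0x200000000 bytes)
set zfs:zfs_arc_max = 0x200000000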
On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones wrote:
> On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach wrote:
>> Brent,
>>
>> I had known about that bug a couple of weeks ago, but that bug has been
>> filed against v111 and we're at v130. I have also searched the ZFS part of
>> this forum and