Brandon,
Thanks for replying to the message.
I believe that this is more related to the variable stripe size of RAIDZ
than the fdisk MBR. I say this because the disk works without any issues in
a mirror configuration or as a standalone disk, reaching 80 MB/s burst transfer
rates.
In RAIDZ, however, the t
On Thu, May 20, 2010 at 10:53 PM, Richard Elling wrote:
> On May 20, 2010, at 7:09 PM, Asif Iqbal wrote:
>
>> On Thu, May 20, 2010 at 8:34 PM, Richard Elling wrote:
>>> On May 20, 2010, at 11:07 AM, Asif Iqbal wrote:
>>>
On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
> I have a T
> So, IMHO, a cheap consumer ssd used as a zil may still be worth it (for
> some use cases) to narrow the window of data loss from ~30 seconds to a
> sub-second value.
There are lots of reasons to enable the ZIL now - I can throw four very
inexpensive SSDs in there now in a pair of mirrors, and th
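For reference, four cheap SSDs end up as two mirrored log devices with something like the following (pool and device names here are hypothetical):
zpool add tank log mirror c2t0d0 c2t1d0    # first mirrored slog pair
zpool add tank log mirror c2t2d0 c2t3d0    # second pair
zpool status tank                          # the new vdevs appear under a separate "logs" section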
On May 21, 2010, at 02:44, Freddie Cash wrote:
On Thu, May 20, 2010 at 4:40 PM, Brent Jones wrote:
On Thu, May 20, 2010 at 3:42 PM, Brandon High wrote:
> On Thu, May 20, 2010 at 1:23 PM, Thomas Burgess wrote:
>> I know I'm probably doing something REALLY stupid... but for some reason I
On May 20, 2010, at 7:09 PM, Asif Iqbal wrote:
> On Thu, May 20, 2010 at 8:34 PM, Richard Elling wrote:
>> On May 20, 2010, at 11:07 AM, Asif Iqbal wrote:
>>
>>> On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
on
On Thu, May 20, 2010 at 8:34 PM, Richard Elling wrote:
> On May 20, 2010, at 11:07 AM, Asif Iqbal wrote:
>
>> On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
>>> I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
>>> one controller 2gb/s attached to it.
>>> I am running sol 10
On Thu, May 20, 2010 at 4:40 PM, Brent Jones wrote:
> On Thu, May 20, 2010 at 3:42 PM, Brandon High wrote:
> > On Thu, May 20, 2010 at 1:23 PM, Thomas Burgess wrote:
> >> I know I'm probably doing something REALLY stupid... but for some reason I
> >> can't get send/recv to work over ssh.
On May 20, 2010, at 1:12 PM, Bill Sommerfeld wrote:
> On 05/20/10 12:26, Miles Nordin wrote:
>> I don't know, though, what to do about these reports of devices that
>> almost respect cache flushes but seem to lose exactly one transaction.
>> AFAICT this should be a works/doesntwork situation, not
On May 20, 2010, at 11:07 AM, Asif Iqbal wrote:
> On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
>> I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
>> one controller 2gb/s attached to it.
>> I am running sol 10 u3 .
>>
>> every time I change the recordsize of the zfs fs t
On Sat, Apr 24, 2010 at 5:02 PM, Leandro Vanden Bosch wrote:
> Confirmed then that the issue was with the WD10EARS.
> I swapped it out with the old one and things look a lot better:
The problem with the EARS drive is that it was not 4k aligned.
The Solaris partition table was, but that does not
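A quick way to check this, assuming the drive reports 512-byte sectors (device name below is hypothetical): a slice is 4k-aligned only if its starting sector is divisible by 8.
prtvtoc /dev/rdsk/c7t1d0s2
# in the output, look at the "First Sector" column for each slice:
# 8 x 512-byte sectors = 4096 bytes, so a start that is a multiple of 8
# (e.g. 256) is aligned, while the traditional starts of 34 (EFI) and
# 63 (fdisk/SMI) are not.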
On Thu, May 20, 2010 at 3:42 PM, Brandon High wrote:
> On Thu, May 20, 2010 at 1:23 PM, Thomas Burgess wrote:
>> I know I'm probably doing something REALLY stupid... but for some reason I
>> can't get send/recv to work over ssh. I just built a new media server and
>
> Unless you need to have th
On 21 May 2010, at 00.53, Ross Walker wrote:
> On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
>
>>> use a slog at all if it's not durable? You should
>>> disable the ZIL
>>> instead.
>>
>>
>> This is basically where I was going. There only seems to be one SSD that is
>> considered "worki
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems to be one SSD
that is considered "working", the Zeus IOPS. Even if I had the
money, I can't buy it. As my ap
On Thu, May 20, 2010 at 1:23 PM, Thomas Burgess wrote:
> I know I'm probably doing something REALLY stupid... but for some reason I
> can't get send/recv to work over ssh. I just built a new media server and
Unless you need to have the send to be encrypted, ssh is going to slow
you down a lot.
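As an illustration (snapshot, pool, and host names below are made up), the usual ssh pipe versus an unencrypted one could look like this; exact nc flags vary between implementations:
# encrypted, simple, but the ssh cipher often becomes the bottleneck:
zfs send tank/media@move | ssh newserver zfs recv -d tank
# unencrypted, e.g. with netcat:
#   on the receiver:  nc -l 9090 | zfs recv -d tank
#   on the sender:    zfs send tank/media@move | nc newserver 9090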
> use a slog at all if it's not durable? You should
> disable the ZIL
> instead.
This is basically where I was going. There only seems to be one SSD that is
considered "working", the Zeus IOPS. Even if I had the money, I can't buy it.
As my application is a home server, not a datacenter, thin
> Deon Cui writes:
> >
> > So I had a bunch of them lying around. We've bought a 16x SAS hotswap
> > case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
> > the mobo.
> >
> > In the two 16x PCI-E slots I've put in the 1068E controllers I had
> > lying around. Ever
On Thu, May 20, 2010 at 2:07 PM, Asif Iqbal wrote:
> On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
>> I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
>> one controller 2gb/s attached to it.
>> I am running sol 10 u3 .
>>
>> every time I change the recordsize of the zfs fs
On 20 May, 2010 - John Andrunas sent me these 0,3K bytes:
> Can I make a pool not mount on boot? I seem to recall reading
> somewhere how to do it, but can't seem to find it now.
zpool export thatpool
zpool import thatpool when you want it back.
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http:
> "rsk" == Roy Sigurd Karlsbakk writes:
> "dm" == David Magda writes:
> "tt" == Travis Tabbal writes:
rsk> Disabling ZIL is, according to ZFS best practice, NOT
rsk> recommended.
dm> As mentioned, you do NOT want to run with this in production,
dm> but it is a quick w
On Thu, May 20, 2010 at 04:23:49PM -0400, Thomas Burgess wrote:
> I know I'm probably doing something REALLY stupid... but for some reason I
> can't get send/recv to work over ssh. I just built a new media server and
> I'd like to move a few filesystems from my old server to my new server but
> fo
Also, I forgot to say:
one server is b133, the new one is b134
On Thu, May 20, 2010 at 4:23 PM, Thomas Burgess wrote:
> I know I'm probably doing something REALLY stupid... but for some reason I
> can't get send/recv to work over ssh. I just built a new media server and
> I'd like to move
I know I'm probably doing something REALLY stupid... but for some reason I
can't get send/recv to work over ssh. I just built a new media server and
I'd like to move a few filesystems from my old server to my new server but
for some reason I keep getting strange errors...
At first I'd see somethi
On 05/20/10 12:26, Miles Nordin wrote:
I don't know, though, what to do about these reports of devices that
almost respect cache flushes but seem to lose exactly one transaction.
AFAICT this should be a works/doesntwork situation, not a continuum.
But there's so much brokenness out there. I've
Miles Nordin wrote:
"et" == Erik Trimble writes:
et> No, you're reading that blog right - dedup is on a per-pool
et> basis.
The way I'm reading that blog is that deduped data is expanded in the
ARC.
What I think is being done is this: for pools A and B, each has a
sep
> "d" == Don writes:
d> "Since it ignores Cache Flush command and it doesn't have any
d> persistent buffer storage, disabling the write cache is the
d> best you can do." This actually brings up another question I
d> had: What is the risk, beyond a few seconds of lost wri
- "John Andrunas" wrote:
> Can I make a pool not mount on boot? I seem to recall reading
> somewhere how to do it, but can't seem to find it now.
I guess setting zfs mountpoint=legacy will help this, but still, that's for the
dataset, not the pool
Vennlige hilsener / Best regards
roy
--
Ro
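In case it helps, that would look roughly like this for a single dataset (names hypothetical); the pool still imports at boot, only the mounting becomes manual:
zfs set mountpoint=legacy tank/data
mount -F zfs tank/data /mnt/data    # mount by hand, or via an /etc/vfstab entry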
On 20 May 2010, at 20.35, David Magda wrote:
> On Thu, May 20, 2010 14:12, Travis Tabbal wrote:
>>> On May 19, 2010, at 2:29 PM, Don wrote:
>>
>>> The data risk is a few moments of data loss. However,
>>> if the order of the
>>> uberblock updates is not preserved (which is why the
>>> caches are
> "et" == Erik Trimble writes:
et> No, you're reading that blog right - dedup is on a per-pool
et> basis.
The way I'm reading that blog is that deduped data is expanded in the
ARC.
Anyone have any idea on this?
I wanted to separate out my VirtualBox VDIs so that I could activate
compression on the rest of the parent directory structure so I created a ZFS
filesystem under my user directory.
mv .VirtualBox .VirtualBox_orig
zfs create /export/home/user/.VirtualBox
zfs create
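One thing worth noting: zfs create takes a dataset name rather than an absolute path, so assuming the home directories live on a pool named rpool (adjust to the real pool and layout), that step would look something like:
zfs create rpool/export/home/user/.VirtualBox
# or keep the dataset elsewhere in the pool and point its mountpoint at the directory:
zfs create -o mountpoint=/export/home/user/.VirtualBox rpool/vbox
cp -rp .VirtualBox_orig/. .VirtualBox/    # copy the existing VDIs into the new filesystem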
John Andrunas wrote:
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
You can't do this at a pool level, but you can at a zfs/zvol level.
to prevent a filesystem or vol from being mounted at boot:
zfs set canmount=no
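For the record, the property takes on, off, or noauto; a minimal sketch with a hypothetical dataset:
zfs set canmount=noauto tank/data   # keeps its mountpoint but is not mounted at boot
zfs mount tank/data                 # mount it explicitly when wanted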
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
--
John
If I'm not mistaken, L2ARC cached blocks will not get striped across more
than one device in your L2ARC, which means your L2ARC only helps for
latency, and not throughput.
Regardless of whether it does or not, it can still help overall system
throughput by avoiding having to read from slower (may
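For reference, cache (L2ARC) devices are added per pool like this (names hypothetical):
zpool add tank cache c3t0d0 c3t1d0
zpool iostat -v tank 5    # the per-device columns show how reads spread across the cache vdevs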
On Thu, May 20, 2010 14:12, Travis Tabbal wrote:
>> On May 19, 2010, at 2:29 PM, Don wrote:
>
>> The data risk is a few moments of data loss. However,
>> if the order of the
>> uberblock updates is not preserved (which is why the
>> caches are flushed)
>> then recovery from a reboot may require man
On Thu, May 20, 2010 13:58, Roy Sigurd Karlsbakk wrote:
> - "Travis Tabbal" wrote:
>
>> Disable ZIL and test again. NFS does a lot of sync writes and kills
>> performance. Disabling ZIL (or using the synchronicity option if a
>> build with that ever comes out) will prevent that behavior, and s
> On May 19, 2010, at 2:29 PM, Don wrote:
> The data risk is a few moments of data loss. However,
> if the order of the
> uberblock updates is not preserved (which is why the
> caches are flushed)
> then recovery from a reboot may require manual
> intervention. The amount
> of manual interventio
On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
> I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
> one controller 2gb/s attached to it.
> I am running sol 10 u3 .
>
> every time I change the recordsize of the zfs fs the disk IO improves
> (doubles) and stay like that for
>
- "Travis Tabbal" wrote:
> Disable ZIL and test again. NFS does a lot of sync writes and kills
> performance. Disabling ZIL (or using the synchronicity option if a
> build with that ever comes out) will prevent that behavior, and should
> get your NFS performance close to local. It's up to yo
Disable ZIL and test again. NFS does a lot of sync writes and kills
performance. Disabling ZIL (or using the synchronicity option if a build with
that ever comes out) will prevent that behavior, and should get your NFS
performance close to local. It's up to you if you want to leave it that way.
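On the builds current at the time of this thread there is no per-dataset switch yet, so disabling the ZIL means the global tunable; a sketch of the usual approaches:
# in /etc/system (takes effect after a reboot):
set zfs:zil_disable = 1
# or on a live system with mdb (does not survive a reboot):
echo zil_disable/W0t1 | mdb -kw
# either way, the affected filesystems are usually remounted afterwards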
I have a T2000 with a dual port 4gb hba (QLE2462) and a 3510FC with
one controller 2gb/s attached to it.
I am running sol 10 u3.
Every time I change the recordsize of the zfs fs, the disk IO improves
(doubles) and stays like that for
about 5 to 6 hrs. Then it dies down. I increase the recordsize ag
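For reference (dataset name hypothetical), recordsize is a per-dataset property and only newly written blocks pick up the changed value:
zfs get recordsize tank/fs        # default is 128K
zfs set recordsize=32K tank/fs    # existing blocks keep the size they were written with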
Hi Kyle,
Very likely you hit a driver bug in isp. After the reboot, take a
look at the /var/adm/messages file - anything related might shed some light.
I wouldn't suspect the Intel GigE card - it's a fairly good one and the driver is
very stable.
Also, some upgrades were posted; make sure the kernel displays 13
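A quick way to pull the relevant entries out of the log (stock Solaris paths):
grep -i isp /var/adm/messages | tail -20
egrep -i 'scsi|reset|timeout' /var/adm/messages | tail -20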
Edward Ned Harvey wrote:
But one more thing:
If I'm not mistaken, L2ARC cached blocks will not get striped across more
than one device in your L2ARC, which means your L2ARC only helps for
latency, and not throughput. (I'm really not certain about this, but I
think so.) Given the stated usage s
Hi all,
I recently installed Nexenta Community 3.0.2 on one of my servers:
IBM eSeries X346
2.8Ghz Xeon
12GB DDR2 RAM
1 built-in BGE interface for management
4-port Intel GigE card aggregated for data
IBM ServeRAID 7k with 256MB BB cache (isp driver)
6 RAID0 single-drive LUNs (so I can use t
- "Rob Levy" wrote:
> Folks I posted this question on (OpenSolaris - Help) without any
> replies
> http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and
> am re-posting here in the hope someone can help ... I have updated the
> wording a little too (in an attempt to clarify)
>
Hi Roi,
You need equivalent sized disks for a mirrored pool. When you attempt to
attach a disk that is too small, you will see a message similar to the
following:
cannot attach c1t3d0 to c1t2d0: device is too small
In general, an "I/O error" message means that the partition slice is not
avai
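For context, the attach itself is a single command on a root pool with SMI-labeled disks (device names hypothetical); the new disk only needs to be at least as large as the current one:
zpool attach rpool c1t2d0s0 c1t3d0s0
# then put boot blocks on the new half of the mirror:
#   SPARC: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t3d0s0
#   x86:   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t3d0s0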
On Thu, 20 May 2010, Edward Ned Harvey wrote:
Also, since you've got "s0" on there, it means you've got some
partitions on that drive. You could manually wipe all that out via
format, but the above is pretty brainless and reliable.
The "s0" on the old disk is a bug in the way we're formattin
Roy Sigurd Karlsbakk wrote:
Hi all
I've been doing a lot of testing with dedup and concluded it's not really ready
for production. If something fails, it can render the pool unusable for hours
or maybe days, perhaps due to single-threaded stuff in zfs. There is also very
little data available
On Wed, 19 May 2010, John Andrunas wrote:
ff001f45e830 unix:die+dd ()
ff001f45e940 unix:trap+177b ()
ff001f45e950 unix:cmntrap+e6 ()
ff001f45ea50 zfs:ddt_phys_decref+c ()
ff001f45ea80 zfs:zio_ddt_free+55 ()
ff001f45eab0 zfs:zio_execute+8d ()
ff001f45eb50 genunix:taskq
Folks, I posted this question on (OpenSolaris - Help) without any replies
http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am
re-posting here in the hope someone can help ... I have updated the wording a
little too (in an attempt to clarify)
I currently use OpenSolaris on a T
On 20/05/2010 12:46, Edward Ned Harvey wrote:
Also, since you've got "s0" on there, it means you've got some partitions on
that drive.
There are always partitions once the disk is in use by ZFS, but there
may be 1 or more of them and they may be SMI or EFI partitions.
So just because there is
- "Mihai" wrote:
hello all,
I have the following scenario of using zfs.
- I have an HDD image that has an NTFS partition stored in a zfs dataset in a
file called images.img
Wouldn't it be better to use zfs volumes? AFAIK they are way faster than using
files
Vennlige hilsener / Best
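As a sketch of that suggestion (names and sizes invented): keep the golden NTFS image in a zvol, snapshot it, and give every machine its own clone to boot from over iSCSI:
zfs create -V 20g tank/images/ntfs-golden     # a zvol instead of a flat images.img file
zfs snapshot tank/images/ntfs-golden@deploy
zfs clone tank/images/ntfs-golden@deploy tank/images/machine01
# each clone shares unmodified blocks with the golden image, so a machine
# only consumes space for what it writes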
> Any idea?
> action: Wait for the resilver to complete.
> -- richard
Very fine! And thank you a lot for your answers!
Philippe
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrew Gabriel
>
> If you are reading blocks from your initial hdd images (golden images)
> frequently enough, and you have enough memory on your system, these
> blocks will end up on the ARC (
On May 20, 2010, at 4:24 AM, Philippe wrote:
> Current status:
>
> pool: zfs_raid
> state: DEGRADED
> status: One or more devices is currently being resilvered. The pool will
> continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
> scrub: resil
On May 20, 2010, at 4:46 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Philippe
>>
>> c7t2d0s0/o FAULTED 0 0 0 corrupted data
>>
>> When I've done the "zpool replace", I had to a
On May 20, 2010, at 4:12 AM, Philippe wrote:
>> I'm starting with the replacement of the very bad
>> disk, and hope the resilvering won't take too long !!
>
> Replacing c7t2d0, I get the following :
>
> NAME STATE READ WRITE CKSUM
> zfs_raid DEGRADED 0
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Philippe
>
> c7t2d0s0/o FAULTED 0 0 0 corrupted data
>
> When I've done the "zpool replace", I had to add "-f" to force, because
> ZFS told that there was a ZFS la
Mihai wrote:
hello all,
I have the following scenario of using zfs.
- I have an HDD image that has an NTFS partition stored in a zfs
dataset in a file called images.img
- I have X physical machines that boot from my server via iSCSI from
such an image
- Every time a machine asks for a boot reque
hello all,
I have the following scenario of using zfs.
- I have an HDD image that has an NTFS partition stored in a zfs dataset in a
file called images.img
- I have X physical machines that boot from my server via iSCSI from such an
image
- Every time a machine asks for a boot request from my server
Current status:
pool: zfs_raid
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h17m, 3,72% done, 7h22m to go
config
> I'm starting with the replacement of the very bad
> disk, and hope the resilvering won't take too long !!
Replacing c7t2d0, I get the following:
NAME STATE READ WRITE CKSUM
zfs_raid DEGRADED 0 0 0
raidz1 DEGRADED 0
> > One question: if I halt the server, and change the order of the disks
> > on the SATA array, will RAIDZ still detect the array fine?
> >
>
> Yes, it will.
Hi!
I've done the moves this morning, and the high service times followed the disks!
So, I have 3 disks to replace urgently!
- "roi shidlovsky" wrote:
> hi.
> I am trying to attach a mirror disk to my root pool. If the two disks
> are the same size, it all works fine, but if the two disks are of
> different sizes (8GB and 7.5GB) I get an "I/O error" on the attach
> command.
>
> Can anybody tell me what I am doing
As queried by Ian, the new disk being attached must be at least as big
as the original root pool disk. It can be bigger, but the difference
will not be used in the mirroring.
cheers
Matt
On 05/20/10 10:11 AM, Ian Collins wrote:
On 05/20/10 08:39 PM, roi shidlovsky wrote:
hi.
i am trying to
On 05/20/10 08:39 PM, roi shidlovsky wrote:
Hi.
I am trying to attach a mirror disk to my root pool. If the two disks are the same size,
it all works fine, but if the two disks are of different sizes (8GB and 7.5GB) I get an
"I/O error" on the attach command.
Can anybody tell me what I am doin
Hi.
I am trying to attach a mirror disk to my root pool. If the two disks are the
same size, it all works fine, but if the two disks are of different sizes
(8GB and 7.5GB) I get an "I/O error" on the attach command.
Can anybody tell me what I am doing wrong?