Hi David,
Why not just use a couple of SAS expanders?
Regards,
Tonmaus
On Sun, Apr 04, 2010 at 07:13:58AM -0700, Kevin wrote:
> I am trying to recover a RAID set; there are only three drives that
> are part of the set. I attached a disk and discovered it was bad.
> It was never part of the RAID set.
Are you able to tell us more precisely what you did with this disk
At 11:19 AM +1000 2/19/10, James C. McPherson wrote:
On 19/02/10 12:51 AM, Maurice Volaski wrote:
For those who've been suffering this problem and who have non-Sun
jbods, could you please let me know what model of jbod and cables
(including length thereof) you have in your configuration.
For th
On Wed, Apr 7, 2010 at 10:47 AM, Daniel Bakken
wrote:
> When I send a filesystem with compression=gzip to another server with
> compression=on, compression=gzip is not set on the received filesystem. I am
> using:
Is compression set on the dataset, or is it being inherited from a
parent dataset?
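A quick way to check is the SOURCE column of zfs get; note also that properties
only travel with a send stream if you ask for them. A rough sketch, with made-up
dataset names:

  # shows whether compression is set locally, inherited, or left at the default
  zfs get -o name,property,value,source compression tank/data

  # -R sends a replication stream that carries properties (and descendants);
  # newer builds also have -p to include properties without the recursion
  zfs send -R tank/data@snap | ssh otherhost zfs receive -d backup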
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Daniel Bakken
>
> My zfs filesystem hangs when transferring large filesystems (>500GB)
> with a couple dozen snapshots between servers using zfs send/receive
> with netcat.
On Sat, Apr 10 at 7:22, Daniel Carosone wrote:
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
If I could find a reasonable backup method that avoided external
enclosures altogether, I would take that route.
I'm tending to like bare drives.
If you have the chassis space, the
Now that Erik has made me all nervous about my "3xRAIDz2 of 8x2TB 7200RPM
disks" approach, I'm considering moving forward using more and smaller 2.5"
disks instead. The problem is that at eight drives per LSI 3018, I run out
of PCIe slots quickly. The ARC-1680 cards would appear to offer greater
dr
On 09 April, 2010 - Abdullah Al-Dahlawi sent me these 5,3K bytes:
> Hi Tomas
>
>
> I understand from previous post
> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg36914.html
>
> that if the data gets invalidated, the l2arc size that is shown by zpool
> iostat is the one that change
On Fri, 9 Apr 2010, Harry Putnam wrote:
Am I way wrong on this, and further I'm curious if it would make more
versatile use of the space if I were to put the mirrored pairs into
one big pool containing 3 mirrored pairs (6 discs)
Besides more versatile use of the space, you would get 3X the
performance.
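For reference, a pool of three mirrored pairs can be created in one step, or an
existing mirrored pool can be grown a pair at a time (device names here are made
up):

  # one pool, three top-level mirror vdevs; writes are striped across all three
  zpool create tank mirror c1t0d0 c1t1d0 \
                    mirror c1t2d0 c1t3d0 \
                    mirror c1t4d0 c1t5d0

  # or grow an existing mirrored pool by another pair
  zpool add tank mirror c1t6d0 c1t7d0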
On Fri, Apr 9, 2010 at 6:14 AM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Eric Andersen
>>
>> I backup my pool to 2 external 2TB drives that are simply striped using
>> zfs send/receive followed by a scrub.
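A sketch of that kind of cycle, with invented pool and snapshot names:

  zfs snapshot -r tank@backup-20100410
  zfs send -R -i tank@backup-20100409 tank@backup-20100410 | zfs receive -d -F extpool
  zpool scrub extpool        # re-read and verify checksums on the backup drives
  zpool status -v extpool    # check the result once the scrub completes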
On 04/10/10 06:20 AM, Daniel Bakken wrote:
My zfs filesystem hangs when transferring large filesystems (>500GB)
with a couple dozen snapshots between servers using zfs send/receive
with netcat. The transfer hangs about halfway through and is
unkillable, freezing all IO to the filesystem, requiring a hard reboot.
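For context, the pipeline being described is essentially of this shape; the host
name, port and dataset names are made up, and nc option syntax varies between
netcat builds:

  # on the receiving server:
  nc -l -p 9000 | zfs receive backup/fs
  # on the sending server:
  zfs send tank/fs@snap | nc recvhost 9000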
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
> If I could find a reasonable backup method that avoided external
> enclosures altogether, I would take that route.
I'm tending to like bare drives.
If you have the chassis space, there are 5-in-3 bays that don't need
extra driv
Hi Tomas
I understand from previous post
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg36914.html
that if the data gets invalidated, the l2arc size that is shown by zpool
iostat is the one that changed (always growing because of COW) not the
actual size shown by kstat which represe
On 09 April, 2010 - Abdullah Al-Dahlawi sent me these 27K bytes:
> Hi all
>
> I ran an OLTP-Filebench workload
>
> I set Arc max size = 2 gb
> l2arc ssd device size = 32gb
> workingset(dataset) = 10gb , 10 files , 1gb each
>
> after running the workload for 6 hours and monitoring kstat , I hav
Mirrored sets do protect against disk failure, but most of the time you'll find
proper backups are better, as most issues are more on the order of "oops" than
"blowed up sir".
Perhaps mirrored sets with daily snapshots and a knowledge of how to mount
snapshots as clones so that you can pull a copy
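Something along these lines, with hypothetical dataset names:

  zfs snapshot tank/home@daily-20100409               # the daily snapshot
  zfs clone tank/home@daily-20100409 tank/restore     # mount it read-write as a clone
  # copy back whatever was lost, then clean up:
  zfs destroy tank/restore

If you only need to copy a file back out, browsing the read-only
.zfs/snapshot directory of the filesystem also works.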
On Fri, April 9, 2010 14:38, Harry Putnam wrote:
> I happened to notice someones' config posted here recently where a
> single zpool was made up of several mirror sets.
>
>From: Andreas Höschler
>Subject: Replacing disk in zfs pool
>Newsgroups: gmane.os.solaris.opensolaris.zfs
>
I had some issues with direct send/receives myself. In the end I elected to
send to a gz file and then scp that file across to receive from the file on the
other side. This has been working fine 3 times a day for about 6 months now.
Two sets of systems are doing this so far, one set running b111b
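Roughly what that looks like, with invented dataset and host names:

  zfs send -i tank/fs@prev tank/fs@now | gzip > /var/tmp/fs.incr.zfs.gz
  scp /var/tmp/fs.incr.zfs.gz otherhost:/var/tmp/
  # then on the other side:
  gzip -dc /var/tmp/fs.incr.zfs.gz | zfs receive backup/fs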
When I started using zfs a while back, I got the impression that
setting my home server up with mirror sets rather than some kind of
raidz would offer the most reliable setup for my data.
My data is just what you'd expect on a home lan... no real commercial
value involved.
I've since created 2 zp
Hi all
I ran an OLTP-Filebench workload
I set ARC max size = 2GB
L2ARC SSD device size = 32GB
working set (dataset) = 10GB, 10 files, 1GB each
After running the workload for 6 hours and monitoring kstat, I have noticed
that l2_size from kstat has reached 10GB, which is great. However, l2_size
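(For reference, the counters above come straight out of kstat; something like
the following, assuming the standard zfs:0:arcstats module:)

  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses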
On Fri, April 9, 2010 13:20, Daniel Bakken wrote:
> My zfs filesystem hangs when transferring large filesystems (>500GB)
> with a couple dozen snapshots between servers using zfs send/receive
> with netcat. The transfer hangs about halfway through and is
> unkillable, freezing all IO to the filesystem
try again...
On Apr 9, 2010, at 5:33 AM, F. Wessels wrote:
> Hi all,
>
> I want to backup a pool called mpool. I want to do this by doing a zfs send
> of a mpool snapshot and receive into a different pool called bpool. All this
> on the same machine.
> I'm sharing various filesystems via zfs sharenfs and sharesmb.
My zfs filesystem hangs when transferring large filesystems (>500GB)
with a couple dozen snapshots between servers using zfs send/receive
with netcat. The transfer hangs about halfway through and is
unkillable, freezing all IO to the filesystem, requiring a hard
reboot. I have attempted this three
> I am doing something very similar. I backup to external USB's, which I
> leave connected to the server for obviously days at a time ... zfs send
> followed by scrub. You might want to consider eSATA instead of USB. Just a
> suggestion. You should be able to go about 4x-6x faster than 27MB/s.
You may be absolutely right. CPU clock frequency certainly has hit a wall at
around 4GHz. However, this hasn't stopped CPUs from getting progressively
faster. I know this is mixing apples and oranges, but my point is that no
matter what limits or barriers computing technology hits, someone co
Hi Richard,
thanks for the reply. As you can see I already use that option. But that
doesn't prevent the filesystems in the pool from mounting when I import the
pool after it was exported. I'm specifically looking for a zpool import option
to prevent the filesystems from mounting automatically.
On Apr 9, 2010, at 7:07 AM, Orvar Korvar wrote:
> ONStor sells a ZFS based machine
> http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1354658,00.html
> It seems more like FreeNAS or something?
It doesn't look like a ZFS-based product... too many limitations. Also LSI bought the
Use the "-u" option on the receiving pool. From the zfs(1m) man page:
-u
File system that is associated with the received
stream is not mounted.
NB this works for root pools, too.
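In other words, something along these lines (pool names taken from the original
post, snapshot name made up):

  zfs send -R mpool@backup | zfs receive -d -u bpool
  # the received filesystems stay unmounted; mount them later with zfs mount -a,
  # or set canmount=noauto on them to keep them that way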
-- richard
On Apr 9, 2010, at 5:33 AM, F. Wessels wrote:
> Hi all,
>
> I
No idea about the build quality, but is this the sort of thing you're looking
for?
Not cheap, integrated RAID (sigh), but one cable only
http://www.pc-pitstop.com/das/fit-500.asp
Cheap, simple, 4 eSATA connections on one box
http://www.pc-pitstop.com/sata_enclosures/scsat4eb.asp
Still cheap, us
ONStor sells a ZFS based machine
http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1354658,00.html
It seems more like FreeNAS or something?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eric Andersen
>
> I backup my pool to 2 external 2TB drives that are simply striped using
> zfs send/receive followed by a scrub. As of right now, I only have
> 1.58TB of actual data. ZFS sen
On 9 apr 2010, at 14.17, Edward Ned Harvey wrote:
...
> I recently went through an exercise very similar to this on an x4275. I
> also tried to configure the HBA via the ILOM but couldn't find any way to do
> it.
...
Oh no, this is a BIOS system. The card is an autonomous entity
that lives a lif
Hi all,
I want to backup a pool called mpool. I want to do this by doing a zfs send of
a mpool snapshot and receive into a different pool called bpool. All this on
the same machine.
I'm sharing various filesystems via zfs sharenfs and sharesmb.
Sending and receiving of the entire pool works as expected
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> I don't know how to identify what card is installed in your system.
Actually, this is useful:
prtpicl -v | less
Search for RAID. On my system, I get this snippet (out of
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andreas Höschler
>
> > I don't think that the BIOS and rebooting part ever has to be true,
> > at least I don't hope so. You shouldn't have to reboot just because
> > you replace a hot plug disk.
Hey All,
I'm having some issues with a snv_126 file server running on an HP ML370 G6
server with an Adaptec RAID card (31605). The server has the rpool, plus two
raidz2 data pools (1.5TB and 1.0TB respectively). I have been using
eSATA to back up the pools to a pool that contains 3x 1.5TB
On 9 apr 2010, at 12.04, Andreas Höschler wrote:
> Hi Ragnar,
>
>>> I need to replace a disk in a zfs pool on a production server (X4240
>>> running Solaris 10) today and won't have access to my documentation there.
>>> That's why I would like to have a good plan on paper before driving to that location. :-)
On 04/ 9/10 08:58 PM, Andreas Höschler wrote:
zpool attach tank c1t7d0 c1t6d0
This hopefully gives me a three-way mirror:
  mirror      ONLINE       0     0     0
    c1t15d0   ONLINE       0     0     0
    c1t7d0    ONLINE       0     0     0
    c1t6d0    ONLINE       0     0     0
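If the old disk is to be retired afterwards, the usual sequence (sketched here,
not verified against that box) is to let the resilver finish and then detach:

  zpool attach tank c1t7d0 c1t6d0   # adds c1t6d0 as a third side of the mirror
  zpool status tank                 # wait until the resilver has completed
  zpool detach tank c1t7d0          # drop the old disk, back to a two-way mirror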
Hi Ragnar,
I need to replace a disk in a zfs pool on a production server (X4240
running Solaris 10) today and won't have access to my documentation
there. That's why I would like to have a good plan on paper before
driving to that location. :-)
The current tank pool looks as follows:
pool:
On 9 apr 2010, at 10.58, Andreas Höschler wrote:
> Hi all,
>
> I need to replace a disk in a zfs pool on a production server (X4240 running
> Solaris 10) today and won't have access to my documentation there. That's why
> I would like to have a good plan on paper before driving to that location. :-)
Hi all,
I need to replace a disk in a zfs pool on a production server (X4240
running Solaris 10) today and won't have access to my documentation
there. That's why I would like to have a good plan on paper before
driving to that location. :-)
The current tank pool looks as follows:
pool: tank