2008/5/24 Hernan Freschi <[EMAIL PROTECTED]>:
> I let it run while watching TOP, and this is what I got just before it hung.
> Look at free mem. Is this memory allocated to the kernel? can I allow the
> kernel to swap?
No, the kernel will not use swap for this.
But most of the memory used by th
[EMAIL PROTECTED] wrote:
> > measurable benefits for caching local disks as well? NAND-flash SSD
>
> I'm confused, the only reason I can think of making a
>
> To create a pool with cache devices, specify a "cache" vdev
> with any number of devices. For example:
>
> # zpool create pool c0d0 c1d0 cache c2d0 c3d0
oops. replied too fast.
Ran without -n, and space was added successfully... but it didn't work. It died
out of memory again.
Hernan Freschi wrote:
> I tried the mkfile and swap, but I get:
> [EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
> [EMAIL PROTECTED]:/]# swap -a /export/swap
> "/export/swap" may contain holes - can't swap on it.
You should not use -n for creating files for additional swap. This is
mentioned in
Hi, Hernan
You should not use '-n' with mkfile; that will make swap complain.
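For what it's worth, a sketch of the sequence that should work here, reusing the
same /export/swap path from the earlier post, is just the same two commands with
-n dropped so the blocks are actually allocated:

# mkfile 4g /export/swap
# swap -a /export/swap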
Hernan Freschi wrote:
> I forgot to post arcstat.pl's output:
>
> Time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
> 22:32:37  556K  525K  94     515K  94   9K    98   515K  97   1G     1G
> 22:3
I forgot to post arcstat.pl's output:
Time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
22:32:37  556K  525K  94     515K  94   9K    98   515K  97   1G     1G
22:32:38  63    63    100    63    100  0     0    63    100  1G     1G
22:32:39  74    74    100    74    100
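(If I remember right, arcstat.pl takes an interval and an optional count, so
output like the above comes from something along the lines of

# ./arcstat.pl 1

printing one line per second.)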
> measurable benefits for caching local disks as well? NAND-flash SSD
I'm confused, the only reason I can think of making a
To create a pool with cache devices, specify a "cache" vdev
with any number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
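A cache device can also be attached to a pool after the fact with "zpool add";
for instance (hypothetical extra device):

# zpool add pool cache c4d0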
> Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap
Perhaps this might help:
mkfile -n 4g /usr/swap
swap -a /usr/swap
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
Rob
I let it run while watching TOP, and this is what I got just before it hung.
Look at free mem. Is this memory allocated to the kernel? can I allow the
kernel to swap?
last pid:  7126;  load avg:  3.36, 1.78, 1.11;  up 0+01:01:11        21:16:49
88 pr
I let it run for about 4 hours. when I returned, still the same: I can ping the
machine but I can't SSH to it, or use the console. Please, I need urgent help
with this issue!
On Fri, 23 May 2008, Bill McGonigle wrote:
> The remote-disk cache makes perfect sense. I'm curious if there are
> measurable benefits for caching local disks as well? NAND-flash SSD
> drives have good 'seek' and slow transfer, IIRC, but that might
> still be useful for lots of small reads where
On May 22, 2008, at 19:54, Richard Elling wrote:
> The Adaptive Replacement Cache
> (ARC) uses main memory as a read cache. But sometimes
> people want high performance, but don't want to spend money
> on main memory. So, the Level-2 ARC can be placed on a
> block device, such as a fast [solid state disk].
Yup. They were the first to do so (as far as I know).
--Tim
On Fri, May 23, 2008 at 4:47 PM, Christopher Gibbs <[EMAIL PROTECTED]>
wrote:
> One other thing I noticed is that OpenSolaris (.com) will
> automatically install ZFS root for you. Will Nexenta do that?
>
> On Fri, May 23, 2008 at 4:31
One other thing I noticed is that OpenSolaris (.com) will
automatically install ZFS root for you. Will Nexenta do that?
On Fri, May 23, 2008 at 4:31 PM, Tim <[EMAIL PROTECTED]> wrote:
> Depends on what your end goal is really. The opensolaris.com version is
> releasing every 6 months, and I don't
Depends on what your end goal is really. The opensolaris.com version is
releasing every 6 months, and I don't believe there's currently any patching
between releases. If you feel comfortable sitting on it that long, with
potential bugs for 6 months, great. If not... it should be an easy choice.
Orvar Korvar wrote:
> Ok, so I make one vdev out of 8 disks. And I combine all vdevs into
> one large zpool? Is that correct?
I think it is easier to provide a couple of examples:
zpool create pool c1t0d0 mirror c1t1d0 c1t2d0
This command would create a storage pool named 'pool' consisting of two
top-level vdevs: a single disk (c1t0d0) and a two-way mirror of c1t1d0 and c1t2d0.
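A second example along the same lines, this time with every top-level vdev
redundant (device names are only for illustration):

zpool create pool2 mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

This creates a pool striped across two 2-way mirrors; with 8 disks you would
simply list more mirror (or raidz) groups on the same command line.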
On Fri, May 23, 2008 at 3:15 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Fri, May 23, 2008 at 12:43 PM, Tim <[EMAIL PROTECTED]> wrote:
> > I'm looking on their site and don't even see any data on the 3134... is this
> > *something new* that hasn't been released, or? The only thing I see is
> 313
On Fri, 2008-05-23 at 13:45 -0700, Orvar Korvar wrote:
> Ok, so I make one vdev out of 8 disks. And I combine all vdevs into one large
> zpool? Is that correct?
>
> I have an 8-port SATA card. I have 4 drives in one zpool.
zpool create mypool raidz1 disk0 disk1 disk2 disk3
you have a pool consisting of a single raidz1 vdev built from those four disks.
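To fold the second set of four drives into the same pool instead of creating a
second pool, you would add another raidz1 vdev to it, along these lines
(placeholder disk names again):

zpool add mypool raidz1 disk4 disk5 disk6 disk7

After that, mypool is a single pool striped across two raidz1 vdevs.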
Ok, so I make one vdev out of 8 disks. And I combine all vdevs into one large
zpool? Is that correct?
I have an 8-port SATA card. I have 4 drives in one zpool. That is one vdev,
right? Now I can add 4 new drives and make them into one zpool. And now I
combine both zpools into one zpool? That can no
On Fri, May 23, 2008 at 12:43 PM, Tim <[EMAIL PROTECTED]> wrote:
> I'm looking on their site and don't even see any data on the 3134... is this
> *something new* that hasn't been released, or? The only thing I see is 3132.
There isn't a 3134, but there is a 3124, which is a PCI-X based 4-port.
-B
On Fri, May 23, 2008 at 2:36 PM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
> > >
> > >I'm running a 3124 with snv81 and haven't had a single problem with it.
> > >Whatever problems you ran into have likely been resolved.
> > >
> > Th
Pretty much what the subject says. I'm wondering which platform will
have the best stability/performance for a ZFS file server.
I've been using Solaris Express builds of Nevada for quite a while and
I'm currently on build 79b but I'm at a point where I want to upgrade.
So now I have to ask, should
The Solaris SAN Configuration and Multipathing Guide proved very helpful for me:
http://docs.sun.com/app/docs/doc/820-1931/
I, too, was surprised to see MPIO enabled by default on x86 (we're using Dell/EMC
CX3-40 with our X4500 & X6250 systems).
Charles
Quoting Krutibas Biswal <[EMAIL PROTECTED]
On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
> >
> >I'm running a 3124 with snv81 and haven't had a single problem with it.
> >Whatever problems you ran into have likely been resolved.
> >
> The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.
> It's dirt cheap
Brian Hechinger wrote:
> On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
>
>> I sold it and took the cheap route again with a Silicon Image 3124-based
>> adapter and had more problems which now probably would be solved with the
>> latest Solaris updates.
>>
>
> I'm runn
I got more info. I can run zpool history and this is what I get:
2008-05-23.00:29:40 zfs destroy tera/[EMAIL PROTECTED]
2008-05-23.00:29:47 [internal destroy_begin_sync txg:3890809] dataset = 152
2008-05-23.01:28:38 [internal destroy_begin_sync txg:3891101] dataset = 152
2008-05-23.07:01:36 zpool
Hello, I'm having a big problem here, disastrous maybe.
I have a zpool consisting of 4x500GB SATA drives, this pool was born on S10U4
and was recently upgraded to snv85 because of iSCSI issues with some initiator.
Last night I was doing housekeeping, deleting old snapshots. One snapshot
failed
Why does update 6 have to be out before a patch can be produced for this? This
is a show-stopper for putting ZFS into production on anything other than local
disks; a production box that panics when a single disk goes offline is worse
than useless. I cannot see why this is not a high priority
On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
>
> I sold it and took the cheap route again with a Silicon Image 3124-based
> adapter and had more problems which now probably would be solved with the
> latest Solaris updates.
I'm running a 3124 with snv81 and haven't had a s
That 1420SA will not work, period. Type "1420sa solaris" in Google and you'll
find a thread about the problems I had with it.
I sold it and took the cheap route again with a Silicon Image 3124-based
adapter and had more problems which now probably would be solved with the
latest Solaris updates
David Francis wrote:
> Greetings all
>
> I was looking at creating a little ZFS storage box at home using the
> following SATA controllers (Adaptec Serial ATA II RAID 1420SA) on Opensolaris
> X86 build
>
> Just wanted to know if anyone out there is using these and can vouch for
> them. If not if
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far. I'm
using it in an old PCI slot, so it's probably not as fast as it could
be, but it worked great right out of the box.
-Aaron
On Fri, May 23, 2008 at 12:09 AM, David Francis <[EMAIL PROTECTED]> wrote:
> Greetings all
>
> I was lo
Greetings all
I was looking at creating a little ZFS storage box at home using the following
SATA controllers (Adaptec Serial ATA II RAID 1420SA) on Opensolaris X86 build
Just wanted to know if anyone out there is using these and can vouch for them.
If not, if there's something else you can reco