From my notes on mirroring a new install (I install first, then mirror).
You won't need pfexec if you're the super-user.
Inside format, run fdisk twice: the first time, delete anything there; the second
time, it will ask if you want to install a default Solaris2 layout.
Obviously change the disk id to match your s
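For reference, the usual root-mirroring steps look roughly like this (a sketch only; the device names c0t0d0/c0t1d0 and the rpool layout are hypothetical, substitute your own):

```shell
# Copy the partition table from the existing root disk to the new disk
# (slice 2 is the conventional whole-disk slice on Solaris).
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# Attach the new slice as a mirror of the existing root slice.
pfexec zpool attach -f rpool c0t0d0s0 c0t1d0s0

# Once resilvering completes, make the second disk bootable too.
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```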
Roy,
> Hi all
>
> There was some discussion on #opensolaris recently about L2ARC being
> dedicated to a pool, or shared. I figured since it's associated with a pool,
> it must be local, but I really don't know.
An L2ARC is made up of one or more "Cache Devices" associated with a single ZFS
st
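That per-pool association is visible in the commands themselves: a cache device is added to one named pool and appears only under that pool's status (pool and device names below are examples):

```shell
# Add an SSD as an L2ARC cache device to one specific pool.
pfexec zpool add tank cache c2t0d0

# The device shows up under the "cache" section of this pool only.
zpool status tank
```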
Hi
got a brand new server with 14 x 2TB disks and 2 x 160GB SSDs. My plan was
to install OpenSolaris on one of the SSDs and then zfs-mirror the root disk
onto the second SSD, but since the server will handle some sync NFS writes
I also want to add a ZIL log on the same SSDs, al
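One way to share the SSDs between root and log is to slice them: one slice per SSD for the root mirror, another for a mirrored slog. A sketch, assuming slices were sized beforehand in format(1M) and the pool/device names (rpool, tank, c2t0d0, c3t0d0) are hypothetical:

```shell
# Slice 0 of each SSD carries the root mirror.
pfexec zpool attach rpool c2t0d0s0 c3t0d0s0

# Slice 1 of each SSD becomes a mirrored ZIL for the data pool.
pfexec zpool add tank log mirror c2t0d0s1 c3t0d0s1
```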
On Jan 11, 2011, at 8:51 PM, Edward Ned Harvey wrote:
> heheheh, ok, I'll stop after this. ;-) Sorry for going on so long, but it
> was fun.
>
> In 2007, IDC estimated the size of the digital universe in 2010 would be 1
> zettabyte. (10^21 bytes) This would be 2.5*10^17 blocks of 4000 bytes.
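A quick sanity check of that block count (plain arithmetic, nothing ZFS-specific):

```python
# One zettabyte split into 4000-byte blocks.
zettabyte = 10**21
block_size = 4000
blocks = zettabyte // block_size
print(blocks)  # 250000000000000000, i.e. 2.5 * 10**17
```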
Stephan,
The "vmstat" output shows you are not actually short of memory; the "pi" and "po"
columns are zero, so the system is not having to do any paging, and it seems
unlikely that the system is slow directly because of a RAM shortage. With the ARC,
it's not unusual for vmstat to show little free memory, but t
On 12.01.11 18:49, SR wrote:
You may need to adjust zfs_arc_max in /etc/system to avoid memory contention
http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.htm
Suresh
I thought I had that done through this in /etc/system:
set zfs:zfs_arc_max = 17179869184
I do also think
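That /etc/system line caps the ARC at 16 GiB, but it only takes effect after a reboot. Whether it actually took can be checked at runtime (a sketch; exact output varies by release):

```shell
# Read the live value of the tunable from the running kernel.
echo "zfs_arc_max/E" | pfexec mdb -k

# Current ARC size and its ceiling, in bytes, from the kstat counters.
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max
```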
You may need to adjust zfs_arc_max in /etc/system to avoid memory contention
http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.htm
Suresh
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 12.01.11 16:32, Jeff Savit wrote:
Stephan,
There are a bunch of tools you can use, mostly provided with Solaris
11 Express, plus arcstat, arc_summary that are available as
downloads. The latter tools will tell you the size and state of ARC,
which may be specific to your issues since you
Hi, this reminds me of the dedup bug: don't use the "-D" (dedup) switch in zfs
send, as it produces a broken stream that you won't be able to receive.
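If you have saved streams you are unsure about, they can be checked before you depend on them (file and dataset names below are examples):

```shell
# Inspect a saved send stream's record headers without receiving it.
zstreamdump -v < /backup/pool.zfs | head

# A dry-run receive (-n) verifies the stream end to end without writing data.
zfs receive -n -v tank/restore < /backup/pool.zfs
```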
Stephan,
There are a bunch of tools you can use, mostly provided with Solaris 11
Express, plus arcstat, arc_summary that are available as downloads. The
latter tools will tell you the size and state of ARC, which may be
specific to your issues since you cite memory. For the list, could you
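Typical invocations of those tools look like this (arcstat is a downloadable Perl script; column names vary between versions):

```shell
# Sample ARC size and hit/miss statistics once per second.
arcstat.pl 1

# The raw counters behind arcstat/arc_summary are always available via kstat.
kstat -n arcstats | egrep 'size|hits|misses'
```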
Hi all,
I have exchanged my Dell R610 in favor of a Sun Fire X4170 M2, which has
32 GB RAM installed. I am running Sol11Expr on this host and I use it
primarily to serve Netatalk AFP shares. From day one, I have noticed that
the amount of free RAM decreased, and along with that decrease the
ov
Original Message
Subject: [osol-discuss] Its Official...!! GA release on 14th Jan 2011
Date: Wed, 12 Jan 2011 05:34:18 PST
From: darshin
To: opensolaris-disc...@opensolaris.org
Hi All,
Happy New Year !
First of all, a big thanks to you all for the tremendous response to the
Hi all
There was some discussion on #opensolaris recently about L2ARC being dedicated
to a pool, or shared. I figured since it's associated with a pool, it must be
local, but I really don't know.
So - is it local to a pool, or global?
If it's global, will I need to do something for mypool's l2a
On Tue, 11 Jan 2011, Jorgen Lundman wrote:
It would be nice to create a setup similar to
zpool create sub1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool add sub1 raidz c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
zpool create sub2 raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool ad
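An alternative worth noting: the same vdevs can go into a single pool, which stripes writes across all the raidz groups instead of leaving you to balance two pools by hand (same hypothetical device names as above):

```shell
# One pool, three raidz vdevs; ZFS stripes across all of them.
pfexec zpool create tank \
    raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
```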
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ben Rockwood
>
> If you're still having issues go into the BIOS and disable C-States,
if you
> haven't already. It is responsible for most of the problems with 11th Gen
> PowerEdge.
I did
> Edward, this is OT, but may I suggest you use something like Wolfram Alpha
> to perform your calculations a bit more comfortably?
Wow, that's pretty awesome. Thanks.
I have a server, with two external drive cages attached, on separate
controllers:
c0::dsk/c0t0d0  disk  connected  configured  unknown
c0::dsk/c0t1d0  disk  connected  configured  unknown
c0::dsk/c0t2d0  disk  connected  co
If you're still having issues go into the BIOS and disable C-States, if you
haven't already. It is responsible for most of the problems with 11th Gen
PowerEdge.
Ok, I think I have found the biggest issue. The drives are 4k-sector drives,
and I wasn't aware of that. My fault, I should have checked. I'd had
the disks for ages and they're sub-1TB, so I assumed they wouldn't
be 4k drives...
I will obviously have to address this, either by creating a pool u
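Checking what alignment a pool was actually built with, and what the disk reports, looks roughly like this (pool and device names are examples):

```shell
# ashift=9 means 512-byte alignment; ashift=12 means 4 KiB alignment.
pfexec zdb -C tank | grep ashift

# What the disk itself reports; many 4 KiB drives still claim 512 here.
prtvtoc /dev/rdsk/c0t0d0s2 | grep 'bytes/sector'
```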
Quoting Bob Friesenhahn:
What function is the system performing when it is so busy?
The work load of the server is SMTP mail server, with associated spam
and virus scanning, and serving maildir email via POP3 and IMAP.
Wrong conclusion. I am not sure what the percentages are
percent
Edward, this is OT, but may I suggest you use something like Wolfram Alpha to
perform your calculations a bit more comfortably?
--
Enrico M. Crisostomo
On Jan 12, 2011, at 4:24, Edward Ned Harvey
wrote:
> For anyone who still cares:
>
> I'm calculating the odds of a sha256 collision in an
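The birthday-bound approximation behind that kind of calculation is easy to check directly: for n random blocks hashed into 2^256 buckets, the collision probability is roughly n^2 / 2^257. A sketch of the arithmetic (not the thread's exact numbers):

```python
from math import log2

def collision_probability(n_items: int, hash_bits: int = 256) -> float:
    """Birthday-bound approximation: p ~= n^2 / 2^(bits + 1)."""
    return n_items * n_items / 2 ** (hash_bits + 1)

# 2.5 * 10^17 four-kilobyte blocks (one zettabyte of data):
p = collision_probability(25 * 10**16)
print(p)                  # on the order of 1e-43: vanishingly small
print(log2(25 * 10**16))  # ~57.8 bits of blocks vs. 256-bit digests
```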