--
_______
Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services Private Bag 94006 Manukau
City Auckland New Zealand
Phone : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611
mailto:sc...@manukau.
Hi All,
I have an interesting question that may or may not be answerable from some internal ZFS semantics.
I have a Sun Messaging Server which has 5 ZFS based email stores. The Sun Messaging Server uses hard links to link identical messages together. Messages are stored in standard SMTP MIME
On 13/06/11 10:28 AM, Nico Williams wrote:
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson wrote:
I have an interesting question that may or may not be answerable from some internal ZFS semantics.
This is really standard Unix filesystem semantics.
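Nico's point can be demonstrated with plain POSIX tools, independent of ZFS: a hard link is just a second directory entry pointing at the same inode, so the message body exists once on disk no matter how many mailbox names reference it. A minimal sketch (the file names are made up; `stat -c` is the GNU coreutils form):

```shell
# Two hard-linked names share one inode, so the data exists once on disk.
dir=$(mktemp -d)
echo "To: user@example.com" > "$dir/msg.1"   # hypothetical message file
ln "$dir/msg.1" "$dir/msg.2"                 # hard link, not a copy

ls -i "$dir"                 # both names list the same inode number
stat -c '%h' "$dir/msg.1"    # link count is now 2
rm -rf "$dir"
```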
I understand this, just wanting
boxes we have left. M$ also heavily discounts Exchange CALs for Edu, and Oracle is not as friendly as Sun was with their JES licensing. So it is bye-bye Sun Messaging Server for us.
it at home too with an old D1000 attached to a v120 with 8 x 320 GB SCSI disks in a RAIDZ2 for all our home data and home business (which is a printing outfit which creates a lot of very big files on our Macs).
Toby Thain wrote:
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other threads recently. If you want it, you can have it. You just need the business case for it. I don't buy the comments on UPS unreliability.
David Magda wrote:
On Feb 17, 2009, at 21:35, Scott Lawson wrote:
Everything we have has dual power supplies, fed from dual power rails, fed from separate switchboards, through separate very large UPSes, backed by generators, fed by two substations and then cloned to another
Hi Andras,
No problems writing direct. Answers inline below. (If there are any typos it's because it's late and I have had a very long day ;))
andras spitzer wrote:
Scott,
Sorry for writing you directly, but most likely you have missed my
questions regarding your SW design, whenever you have t
Robert Milkowski wrote:
Hello Asif,
Wednesday, February 18, 2009, 1:28:09 AM, you wrote:
AI> On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski wrote:
Hello Asif,
Tuesday, February 17, 2009, 7:43:41 PM, you wrote:
AI> Hi All
AI> Does anyone have any experience on running qmail on solar
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Miles Nordin wrote:
"sl" == Scott Lawson writes:
sl> Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than availability
a commercially supported solution for them.
Thanks,
S.
to them and they might not understand having to change their shell paths to get the userland that they want ;)
On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson wrote:
Stephen Nelson-Smith wrote:
Hi,
I recommended a ZFS-based archive solution to a client needing to have
a network-b
Michael Shadle wrote:
On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote:
Sounds like a reasonable idea, no?
Follow-up question: can I add a single disk to the existing raidz2 later on (if somehow I found more space in my chassis), so instead of a 7 disk raidz2 (5+2) it becomes a 6+2?
Michael Shadle wrote:
On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle wrote:
I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.
According to what I've heard before, ZFS should automagically
recognize this new location and have no
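For what it's worth, ZFS tracks pool members by the labels written on the disks themselves, so the cautious way to move disks between controllers is an explicit export/import cycle. A sketch with a hypothetical non-root pool name (the root pool cannot be exported while booted from it):

```shell
# Export the pool, move/recable the disks, then import by scanning labels.
zpool export tank
# ...physically move the disks to the other controller...
zpool import tank    # ZFS locates the devices by their on-disk labels
```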
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
wrote:
If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris, as you will then gain the full benefits of ZFS: block self-healing, etc.
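A sketch of the suggested pool creation, with hypothetical controller/target device names (a real 16-disk chassis might well be split into two raidz2 vdevs to shorten rebuilds):

```shell
# One raidz2 vdev built from JBOD-exposed disks; survives any two failures.
zpool create tank raidz2 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool status tank   # end-to-end checksums plus redundancy enable self-healing
```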
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson
wrote:
One thing you haven't mentioned is the drive type and size that you are planning to use, as this greatly influences what people here would recommend. RAIDZ2 is built for big, slow SATA disks as reconstruction
Richard Elling wrote:
Some history below...
Scott Lawson wrote:
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote:
If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you
Wilkinson, Alex wrote:
On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:
>On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
>>
>> I currently have a single 17TB MetaLUN that I am about to present to an
>> OpenSolaris initiator and it will obviously be ZFS. However
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
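Attaching fast disks as L2ARC is a one-line pool operation; a sketch with hypothetical device names (cache devices hold only data evicted from ARC, so losing one is harmless to pool integrity):

```shell
# Add two 15K SAS drives as L2ARC cache devices to an existing pool.
zpool add tank cache c3t0d0 c3t1d0
zpool iostat -v tank 5   # the cache devices appear in their own section
```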
Bob Friesenhahn wrote:
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware-backed RAID arrays. You
replacing a disk.
HTH,
Thomas
Haudy Kazemi wrote:
Hello,
I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it):
How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply
Monish Shah wrote:
A related question: If you are on a UPS, is it OK to disable the ZIL?
I think the answer to this is no. UPSes do fail. If you have two redundant units, the answer *might* be maybe. But prudence says *no*. I have seen numerous UPS failures over the years, cascading UPS failures
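For reference, later ZFS releases expose this as a per-dataset property rather than the old system-wide tunable; a sketch with a hypothetical dataset name (this trades the crash consistency of synchronous writes for speed, which is exactly the risk discussed above):

```shell
# Per-dataset ZIL bypass on newer ZFS releases (dataset name hypothetical).
zfs set sync=disabled tank/scratch   # sync writes ack before reaching stable storage
zfs get sync tank/scratch
# Older Solaris releases used a system-wide /etc/system tunable instead:
#   set zfs:zil_disable = 1
```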
David Magda wrote:
On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote:
I have seen UPSs help quite a lot for short glitches lasting seconds,
or a minute. Otherwise the outage is usually longer than the UPSs
can stay up since the problem required human attention.
A standby generator is neede
Bob,
Output of my run for you. The system is an M3000 with 16 GB RAM and one zpool called test1, which is contained on a RAID 1 volume on a 6140 with 7.50.13.10 firmware on the RAID controllers. The RAID 1 is made up of two 146GB 15K FC disks.
This machine is brand new with a clean install of S10 05/09. I
) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    3m25.13s
user    0m2.67s
sys     0m28.40s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    8m53.05s
user    0m2.69s
sys     0m32.83s

Feel free to clean up with 'zfs destroy test1/zfscachet
Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Scott Lawson wrote:
NAME      STATE     READ WRITE CKSUM
test1     ONLINE       0     0     0
  mirror  ONLINE       0     0
second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    1m59.11s
user    0m9.93s
sys     1m49.15s
Feel free to clean up with 'zfs destroy nbupool/zfscachetest'.
Scott Lawson wrote:
Bob,
Output of my run for you. System is a M3000 with 16 GB RAM and 1 zpool
called test
support are invited from the list.
Thanks,
Scott.
% ONLINE -
nbupool 40.8T 34.4T 6.37T 84% ONLINE -
[r...@solnbu1 /]#>
work core routers and needless to say achieves very high throughput. I have seen it pushing the full capacity of the SAS link to the J4500 quite commonly. This is probably the choke point for this system.
/Scott
Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do ever return to ZFS, please promise me that you will never, ever, EVER run it virtualized on top of NTFS (a.k.a. worst file system ever) in a production environment. Microsoft Windows is a horribly unreliable operating system
Tobias Exner wrote:
Hi list,
some months ago I spoke with a ZFS expert at a Sun Storage event.
He said it's possible to grow a zpool by replacing every single disk with a larger one. After replacing and resilvering all disks of the pool, ZFS will provide the new size automatically.
Now
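The procedure being described can be sketched as follows (pool and device names are hypothetical; on releases with the autoexpand property it must be enabled first, while older releases picked up the new size on export/import):

```shell
zpool set autoexpand=on tank        # grow automatically once all disks are bigger
zpool replace tank c1t0d0 c3t0d0    # swap one disk for a larger replacement
zpool status tank                   # wait for the resilver to complete
# ...repeat replace + resilver for every remaining disk in the vdev...
zpool list tank                     # extra capacity appears after the last one
```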
than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets.
Couldn't agree more, Mike.
--
Mike Gerdts
http://mgerdts.blogspot.com/
erved.
Use is subject to license terms.
Assembled 27 October 2008
serge goyette wrote:
actually I did apply the latest recommended patches
Recommended patches and upgrade clusters are different, by the way: 10_Recommended != Upgrade Cluster. An upgrade cluster will upgrade the system to effectively the Solaris release that the upgrade cluster is minu
Also you may wish to look at the output of 'iostat -xnce 1' as well.
You can post those to the list if you have a specific problem.
You want to be looking for error counts increasing, and specifically 'asvc_t' for the service times on the disks. A higher number for asvc_t may help to isolate poo
Bob Friesenhahn wrote:
On Fri, 18 Sep 2009, David Magda wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size, and
set "copies=2". This way you have some redund
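The copies property is set per dataset rather than per LUN; a sketch with hypothetical names (copies=2 duplicates data blocks within the pool but, unlike a real mirror, offers no protection if the single underlying LUN disappears entirely):

```shell
# Keep two copies of every data block within the one LUN-backed pool.
zfs set copies=2 tank/data    # hypothetical pool/dataset names
zfs get copies tank/data
# Note: applies only to data written after the property is set.
```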