Re: [zfs-discuss] one more time: pool size changes

2010-06-16 Thread Mertol Özyöney
In addition to all the comments below, the 7000 series, which competes with
NetApp boxes, can add more storage to a pool in a couple of seconds, online,
and load-balances automatically. Also, we don't have the 16 TB limit NetApp
has. Nearly all customers have done this without any PS (professional
services) involvement.



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard Elling
Sent: Thursday, June 03, 2010 3:51 AM
To: Roman Naumenko
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] one more time: pool size changes

On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storage. We discussed
> size changes for pools in ZFS and aggregates in NetApp.
> 
> And some time before, I had suggested ZFS to a buddy of mine for his new
> home storage server, but he turned it down since there is no expansion
> available for a pool.

Heck, let him buy a NetApp :-)

> And he really wants to be able to add a drive or two to an existing
> pool. Yes, there are ways to expand storage to some extent without
> rebuilding it, like replacing disks with larger ones. Not enough for a
> typical home user, I would say.

Why not? I do this quite often. Growing is easy, shrinking is more
challenging.
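
For example, growing a small mirror by swapping in bigger disks - a minimal
sketch, where the pool name "tank" and the device names are hypothetical:

   zpool set autoexpand=on tank      # grow the pool once all devices grow
   zpool replace tank c0t2d0 c0t5d0  # swap in a larger disk; wait for resilver
   zpool replace tank c0t3d0 c0t6d0  # then the other half of the mirror
   zpool list tank                   # capacity now reflects the larger disks

(autoexpand needs a reasonably recent build; on older ones, export and
re-import the pool after the replacements to pick up the new size.)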

> And this might be important for corporate use too. Frankly speaking, I
> doubt many administrators use it in a DC environment.
> 
> Nevertheless, NetApp appears to have such a feature, as I learned from my
> co-worker. It works with some restrictions (you have to zero the disks
> before adding them, and rebalance the aggregate afterwards, still without
> perfect distribution) - but ONTAP can expand aggregates nevertheless.
> 
> So, my question is: what prevents introducing the same for ZFS at
> present? Is it the design of ZFS, or is there simply no demand for it
> in the community?

It's been there since 2005: the zpool add subcommand.
 -- richard
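
For instance, striping another mirror pair onto an existing pool - a sketch
with hypothetical pool and device names:

   zpool add tank mirror c2t0d0 c2t1d0  # new top-level vdev, added online
   zpool status tank                    # shows the extra mirror immediately

The caveat is that add is one-way: top-level vdevs cannot be removed
afterwards, which is why shrinking is the hard direction.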

> 
> My understanding is that at present there are no plans to introduce
> it.
> 
> --Regards,
> Roman Naumenko
> ro...@naumenko.com

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Mertol Özyöney
Hi Peter;

ZFS is a strategic piece of software for many of Sun's offerings. Sun is
constantly delivering new technologies on top of ZFS (even without further
development, ZFS is already 5 years ahead of any other filesystem), dedup
being just one example. Do not forget that ZFS is also part of the 7000
series.

I will be happy if you can post any details or evidence on why Sun/Oracle
will not invest in ZFS.

Best regards
Mertol  



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com


-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Peter Tribble
Sent: Monday, February 22, 2010 1:40 PM
To: Eugen Leitl
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] future of OpenSolaris

On Mon, Feb 22, 2010 at 9:22 AM, Eugen Leitl  wrote:
>
> Oracle's silence is starting to become a bit ominous. What are
> the future options for zfs, should OpenSolaris be left dead
> in the water by Suracle? I have no insight into who core
> zfs developers are (have any been fired by Sun even prior to
> the merger?), and who's paying them. Assuming a worst case
> scenario, what would be the best candidate for a fork? Nexenta?
> Debian already included FreeBSD as a kernel flavor into its
> fold, it seems Nexenta could be also a good candidate.
>
> Maybe anyone in the know could provide a short blurb on what
> the state is, and what the options are.

Of course they can't. If they're in the know, then they're almost certainly
not in a position to talk about it in public. Asking here does not help,
as I doubt anyone from Sun/Oracle would be wise to give any response.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mertol Özyöney
Hi Mark;

I have installed several 7000 series systems, some running hundreds of VMs.
I will try to help you, but to find exactly where the problem is I may need
more information.

I understand that you have no ZILs (separate log devices), so most probably
you are using the 7110 with 250 GB drives.

All 7000 series systems have a module called Analytics where you can monitor
the performance of many components.
Please start by selecting "enable advanced analytics" in the Preferences tab
of the Configuration menu.

Please make sure that you are running the latest FW release, 2010 Q1.2.1,
from http://wikis.sun.com/display/FishWorks/Software+Updates
Please read all the release notes from the FW level you are running up to
the FW level you will upgrade to.

I understand that you are using iSCSI. On earlier FWs, NFS can increase
performance significantly; on recent FWs, iSCSI and NFS performance is very
close, but I'd still choose NFS over iSCSI for most installations. Do so if
you can.

Please start monitoring the following datasets using Analytics:

Network transfer broken down by interface or device (check whether you are
limited by gigabit Ethernet, etc.)
iSCSI IOPS
iSCSI IOPS broken down by LUN (to understand which LUN demands more
performance; with newer FWs you may find it useful to isolate some LUNs by
defining different pools - beware that this may not offer much help if you
use RAID 10)
iSCSI IOPS broken down by type
iSCSI write latency
iSCSI latency
ARC hit/miss ratio
ARC size
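
(For reference, outside the appliance UI the ARC numbers can also be pulled
from the arcstats kstat on a vanilla OpenSolaris box - a rough sketch:

   kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size

which prints cumulative ARC hits and misses plus the current ARC size in
bytes.)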

Here are my recommendations (if you can share some screenshots from
Analytics, I may be able to help more):

1) Convert to RAID 10 - this will give you 4-5x more IOPS on both reads and
writes.

2) Using Analytics, decide whether increasing the L1 cache (ARC) would help
you. If it would, increase the L1 cache.

3) Check the IO size using Analytics and compare it against your LUN
definitions. I suggest the LUN block size be no larger than the IO size
(see the sketch after this list).

4) Enable write caching for a short time and monitor the Analytics reports;
if you see much improvement, you can invest in SSDs.

5) Enable jumbo frames (throughout the path)

6) Use multiple interfaces to access the data
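
As a sketch of the block-size point in (3): on plain OpenSolaris the zvol
block size is fixed at creation time via volblocksize (the pool name and
sizes below are assumptions; on the appliance you set the equivalent in the
LUN's properties):

   zfs create -V 100G -o volblocksize=8k tank/vm-lun1  # match ~8K guest IOs
   zfs get volblocksize tank/vm-lun1  # verify; it cannot be changed later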

PS: I think you asked whether you can disable the ZIL on the 7000 series.
The answer is yes, and you can decide it at share/LUN granularity.
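
On the appliance this is a per-share/LUN setting in the UI; for reference,
on recent OpenSolaris builds the underlying per-dataset knob is the sync
property - a sketch, assuming a share named tank/vmware:

   zfs set sync=disabled tank/vmware  # acknowledged writes may be lost on
                                      # power failure - diagnostic use only
   zfs get sync tank/vmware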

PS: We usually recommend a Writezilla (write-optimized log SSD) for VMware
users, but I have seen 7210s running 30-40 VMs without much problem with no
Writezilla; for sure this depends on the load pattern.

Very best regards
Mertol 

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark
Sent: Friday, August 27, 2010 10:47 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] VM's on ZFS - 7210

It does; it's on a pair of large APCs.

Right now we're using NFS for our ESX servers.  The only iSCSI LUNs I have
are mounted inside a couple of Windows VMs.   I'd have to migrate all our
VMs to iSCSI, which I'm willing to do if it would help and not cause other
issues.   So far the 7210 appliance has been very stable.

I like the zilstat script.  I emailed a support tech I am working with on
another issue to ask if one of the built-in Analytics DTrace scripts will
get that data.
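
(For reference, zilstat is Richard Elling's DTrace-based script; assuming it
is saved locally as zilstat.ksh, a typical run looks something like

   ./zilstat.ksh 1 10   # ten one-second samples of ZIL write traffic

and reports how many bytes are being pushed through the ZIL per interval.)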

I found one called L2ARC Eligibility:  3235 true, 66 false.  This makes it
sound like we would benefit from a Readzilla, not quite what I had
expected...  I'm sure I don't know what I'm looking at anyway :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



[zfs-discuss] Status of the ADM

2008-11-16 Thread Mertol Özyöney
Hi All;

Is there any update on the status of ADM?

Best regards

Mertol

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email   [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] continuous replication

2008-11-16 Thread Mertol Özyöney
Hi All;

Accessing the same data (RAID group) from different controllers slows the
system down considerably.
All modern controllers require the administrator to choose a primary
controller for each RAID group.
Two controllers accessing the same data would require the drive interface
to switch between ports; the controllers would not be able to optimize head
movement; caching would suffer from duplicate records on both controllers;
and there would be a lot of data transfer between the controllers...

Only very few disk systems support multi-controller access to the same
data, and when you read their best-practice documents you will notice that
it is not recommended.

Best regards
Mertol 


Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mattias Pantzare
Sent: Friday, November 14, 2008 11:48 PM
To: David Pacheco
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] continuous replication

> I think you're confusing our clustering feature with the remote
> replication feature. With active-active clustering, you have two closely
> linked head nodes serving files from different zpools using JBODs
> connected to both head nodes. When one fails, the other imports the
> failed node's pool and can then serve those files. With remote
> replication, one appliance sends filesystems and volumes across the
> network to an otherwise separate appliance. Neither of these is
> performing synchronous data replication, though.

That is _not_ active-active, that is active-passive.



If you have an active-active system, I can access the same data via both
controllers at the same time. I can't if it works like you just
described. You can't call it active-active just because different
volumes are controlled by different controllers; most active-passive
RAID controllers can do that.

The data sheet talks about active-active clusters; how does that work?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
