Oh, I think OmniOS is far from hopeless.  The problem you are having is the
same problem you'd have if you were on Ubuntu and you made an LVM RAID on
vanilla EBS.  EBS is the problem when it comes to predictable write / read
speed.  People still use it, but not without careful thought and
consideration.  You can try using provisioned IOPS for EBS, which the
m1.large supports, or ask on riak-users what other AWS users have set up.  I
know we have a lot of customers and OSS users running on AWS, so they are
far more knowledgeable about real-world performance than I am.
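
If you do try the provisioned IOPS route, the rough shape of it with the AWS
CLI is below. This is only a sketch; the size, IOPS count, availability zone,
and IDs are placeholders, not recommendations:

# Create a 40 GB provisioned-IOPS (io1) volume.
aws ec2 create-volume --size 40 --volume-type io1 --iops 1000 \
    --availability-zone us-east-1a
# Attach it to the instance (volume ID, instance ID, and device are made up).
aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-12345678 \
    --device /dev/sdf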

Good luck,
Jared




On Tue, Jan 21, 2014 at 12:05 PM, Hari John Kuriakose <ejh...@gmail.com> wrote:

> I am using the default raid itself.
>
> Well, if this is the case, I will run the tests again with a different
> setup as you said, and get back as soon as possible. I would just like to
> believe that OmniOS is not too hopeless.
>
> Thank you.
> On Jan 21, 2014 11:17 PM, "Jared Morrow" <ja...@basho.com> wrote:
>
>> What type of RAID did you choose for your zpool of 5 volumes?  If you
>> chose the default of raidz, you will not be getting much of a performance
>> boost over vanilla EBS, just a big integrity boost.  Also, unless you are
>> using provisioned IOPS for EBS, you are starting from an extremely slow
>> base case, so adding ZFS on top might not help matters much.
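>>
>> (Purely to illustrate the difference, with made-up device names: a raidz
>> vdev gives you roughly the random IOPS of a single volume, while a plain
>> stripe across the five volumes adds their throughput together but drops
>> the redundancy.)
>>
>> # parity-protected, roughly single-volume random IOPS
>> zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
>> # striped, better aggregate IOPS, no ZFS-level redundancy
>> zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0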
>>
>> If speed is the concern, I'm willing to bet that a test run against the
>> two instance-storage disks on that m1.large will beat those 5 EBS volumes
>> pretty easily.
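>>
>> (Something like the following would do for that test; the device names are
>> hypothetical, so check format or diskinfo on the box for the two ephemeral
>> disks first:)
>>
>> zpool create scratch c2t0d0 c2t1d0
>> zfs create scratch/riak
>> # point the node's data directory at /scratch/riak and re-run the benchmark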
>>
>> -Jared
>>
>>
>> On Tue, Jan 21, 2014 at 9:22 AM, Hari John Kuriakose <ejh...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I am using standard EBS devices, with a zpool on the instance comprising
>>> five 40GB volumes.
>>> Each Riak instance is of the m1.large type.
>>>
>>> I have made the following changes in zfs properties:
>>>
>>> # My reason: the default SST block size for LevelDB is 4k.
>>> zfs set recordsize=4k tank/riak
>>> # My reason: by default, LevelDB verifies checksums automatically.
>>> zfs set checksum=off tank/riak
>>> zfs set atime=off tank/riak
>>> zfs set snapdir=visible tank/riak
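>>>
>>> (As a quick sanity check that these properties took effect, something like:)
>>>
>>> zfs get recordsize,checksum,atime,snapdir tank/riak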
>>>
>>> And I did the following with help from Basho AWS tuning docs:
>>>
>>> projadd -c "riak" -K "process.max-file-descriptor=(basic,65536,deny)" user.riak
>>> bash -c "echo 'set rlim_fd_max=65536' >> /etc/system"
>>> bash -c "echo 'set rlim_fd_cur=65536' >> /etc/system"
>>> ndd -set /dev/tcp tcp_conn_req_max_q0 40000
>>> ndd -set /dev/tcp tcp_conn_req_max_q 4000
>>> ndd -set /dev/tcp tcp_tstamp_always 0
>>> ndd -set /dev/tcp tcp_sack_permitted 2
>>> ndd -set /dev/tcp tcp_wscale_always 1
>>> ndd -set /dev/tcp tcp_time_wait_interval 60000
>>> ndd -set /dev/tcp tcp_keepalive_interval 120000
>>> ndd -set /dev/tcp tcp_xmit_hiwat 2097152
>>> ndd -set /dev/tcp tcp_recv_hiwat 2097152
>>> ndd -set /dev/tcp tcp_max_buf 8388608
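>>>
>>> (One caveat worth flagging: ndd settings are not persistent across a
>>> reboot, so on OmniOS they typically get reapplied from an SMF service or
>>> startup script. The current values can be spot-checked with ndd -get,
>>> for example:)
>>>
>>> ndd -get /dev/tcp tcp_xmit_hiwat
>>> ndd -get /dev/tcp tcp_recv_hiwat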
>>>
>>> Thanks again.
>>>
>>>
>>> On Tue, Jan 21, 2014 at 9:12 PM, Hector Castro <hec...@basho.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> Can you please clarify what type of disk you are using within AWS?
>>>> EBS, EBS with PIOPS, instance storage? In addition, maybe some details
>>>> on volume sizes and instance types.
>>>>
>>>> These details may help someone attempting to answer your question.
>>>>
>>>> --
>>>> Hector
>>>>
>>>>
>>>> On Tue, Jan 21, 2014 at 8:11 AM, Hari John Kuriakose <ejh...@gmail.com>
>>>> wrote:
>>>> >
>>>> > I am running LevelDB on ZFS on Solaris (OmniOS, specifically) in Amazon
>>>> > AWS. The IOPS is very, very low, and there is no significant improvement
>>>> > from tuning either.
>>>> >
>>>> > Why I chose ZFS: since LevelDB requires the node to be stopped before
>>>> > taking a backup, I needed a filesystem with snapshot ability, and the
>>>> > most favourable Amazon community AMI seemed to be using OmniOS (a fork
>>>> > of Solaris).
>>>> > Everything is fine, except the performance.
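>>>> >
>>>> > (A minimal sketch of such a snapshot-based backup, assuming the dataset
>>>> > is named tank/riak and the riak control script is on the PATH; the
>>>> > snapshot name and backup path are placeholders:)
>>>> >
>>>> > riak stop
>>>> > zfs snapshot tank/riak@backup-20140121
>>>> > riak start
>>>> > # copy the snapshot off the box later, without the node being down:
>>>> > zfs send tank/riak@backup-20140121 | gzip > /backup/riak-20140121.zfs.gz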
>>>> >
>>>> > I did all the AWS tuning proposed by Basho, but Basho Bench still gave
>>>> > twice the IOPS on Ubuntu as compared to OmniOS, under the same
>>>> > conditions. Also, I am using the riak-js client library, and it is a
>>>> > 5-node Riak cluster with 8 GB of RAM on each node.
>>>> >
>>>> > I could not yet figure out what is really causing the congestion in
>>>> > OmniOS. Any pointers will be really helpful.
>>>> >
>>>> > Thanks and regards,
>>>> > Hari John Kuriakose.
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>>
>>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
