I might have the wrong end of the stick here, but I was under the impression 
that kannel/bearerbox used openSMPP libraries to implement its smpp 
functionality. I have done a bit of googling and cannot see anything confirming 
this. So I guess this is not the case :)

With respect to mtr (both gateways are the same device, a Juniper MX-140 with 
dual 10Gbit direct connections to the blade hypervisor data fabric):

 Host                Loss%   Snt   Last    Avg   Best   Wrst  StDev
 1. smsc-gateway      0.0%    95    0.4    0.5    0.3    0.9    0.0
 2. kannel            0.0%    94    0.8    0.6    0.4    1.0    0.0

 Host                Loss%   Snt   Last    Avg   Best   Wrst  StDev
 1. kannel-gateway    0.0%    73    0.4    0.3    0.3    0.5    0.0
 2. smsc              0.0%    72    0.5    0.6    0.4    2.7    0.4

The average on both is sub-1ms. The MTU is 1500 on both.
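
For completeness, the retransmission checks themselves can be reproduced with 
something like the following (host names, interface and SMPP port 2775 are 
placeholders for our real values):

# trace the path over TCP to the SMPP port instead of ICMP
mtr --tcp --port 2775 --report --report-cycles 100 smsc-gateway

# per-connection TCP counters (including retransmits) for the live SMPP session
ss -ti dst <smsc-ip>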

I am loath to think it is the hypervisor or hardware, as it is all 
service-provider grade. I have a spare HP DL380 in the rack at the DC which was 
scheduled for collection. I may just wipe that, put Kannel on it and see how it 
behaves.

Thank you for the advice.
> On 13 Jan 2016, at 15:09, spameden <spame...@gmail.com> wrote:
> 
> 
> 
> 2016-01-13 15:14 GMT+03:00 Grant Saicom <grant.sai...@gmail.com>:
> Thanks for the advice.
> 
> Which SMPP implementation does the Kannel code from the repo ship with? Just 
> want to clarify, as it sounds from your answer that it isn't openSMPP.
> 
> Why do you even ask about opensmppbox? I thought the original question was 
> about sqlbox. OpenSMPPBox is basically a simple proxy into Kannel for outside 
> clients. There is not much to it: no flow control and no accounting. What 
> bearerbox/Kannel implements for the SMSC side is basically what opensmppbox 
> offers in terms of connections to upstream links.
> 
> 
> 
> With regard to the network, all the machines are on the same subnet, 
> including the SMSC; we have an instance within our network. On top of that, 
> they are all on the same hypervisor and data fabric.
> 
> Another thought I just had: could your network problems be due to the 
> virtualization you are using? Try running Kannel on a bare-metal machine and 
> see if there are still any TCP retransmissions.
> 
> Did you check the mtr output to your upstream SMSC as well? Also check 
> whether the reverse network path is the same or not.
>  
> 
>  
> 
>> On 13 Jan 2016, at 14:02, spameden <spame...@gmail.com> wrote:
>> 
>> opensmppbox is generally not recommended for production; it's very basic and 
>> there is no accounting at all. For an SMPP server I'd recommend contacting 
>> Stipe Tolk (you can find his e-mail in the list archives via Google); he has 
>> a carrier-grade commercial solution.
>> 
>> About TCP retransmissions: you might need to tune your Linux network 
>> settings (e.g. MTU if you're behind some sort of NAT), or better, contact 
>> your provider with mtr output and tcpdump captures; there might be a 
>> suboptimal direct and/or reverse network path between you and your SMSC 
>> provider.
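>> 
>> Something like this for the capture (interface name and SMPP port 2775 are 
>> just examples):
>> 
>> # capture the SMPP session to a file you can send to the provider
>> tcpdump -ni eth0 -s0 -w smpp.pcap 'tcp port 2775'
>> 
>> # then filter on tcp.analysis.retransmission when opening smpp.pcap in Wireshark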
>> 
>> 2016-01-13 14:07 GMT+03:00 Grant Saicom <grant.sai...@gmail.com>:
>> I suspect the issue I am still experiencing is due to TCP retransmissions 
>> between Kannel and the SMSC. The ack timeout is 210ms on the SMSC we are 
>> sending to, and the delay can sometimes be as long as 370ms.
>> 
>> I have not found a solution, but am exploring further optimisations and 
>> avenues.
>> 
>> Another Kannel user advised me that openSMPP can be problematic at times. Is 
>> this a generally known and confirmed issue?
>> 
>> Kind regards
>> Grant
>> 
>>> On 15 Dec 2015, at 12:53, spameden <spame...@gmail.com> wrote:
>>> 
>>> store-type = spool
>>> store-location = "/tmp/kannel-spool/"
>>> 
>>> Put these 2 lines below group = core in /etc/kannel/kannel.conf.
>>> 
>>> Do you use MySQL for DLR storage as well?
>>> 
>>> It's worth adding an index on the dlr table too: KEY `smsc_ts` (`smsc`,`ts`)
>>> 
>>> ALTER table dlr add index `smsc_ts` (`smsc`,`ts`);
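>>> 
>>> To verify the index is in place afterwards (assuming the DLR table is named 
>>> dlr, as above):
>>> 
>>> SHOW INDEX FROM dlr;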
>>> 
>>> 
>>> 
>>> 
>>> 2015-12-15 13:43 GMT+03:00 spameden <spame...@gmail.com>:
>>> Yes.
>>> 
>>> The only thing that comes to my mind is to use store-type = spool and move 
>>> your spool store to the /tmp dir. This way the queue lives in multiple files 
>>> instead of a single file, and in RAM; I found this to be very fast.
>>> 
>>> Let me know if it helps.
>>> 
>>> If you want to preserve the queue (in case of a hard reset or something) you 
>>> can rsync its contents every 2 minutes or so to some permanent directory.
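>>> 
>>> For reference, that combination would look roughly like this (the backup 
>>> directory path is just an example):
>>> 
>>> # in /etc/kannel/kannel.conf, under group = core
>>> store-type = spool
>>> store-location = "/tmp/kannel-spool/"
>>> 
>>> # crontab entry: copy the spool to persistent storage every 2 minutes
>>> */2 * * * * rsync -a /tmp/kannel-spool/ /var/spool/kannel-backup/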
>>> 
>>> 2015-12-15 12:56 GMT+03:00 Grant Saicom <grant.sai...@gmail.com>:
>>> Haven’t really found my answer yet, but I have another question along this 
>>> line of thought.
>>> 
>>> I see the queue in sqlbox rises quite high, but the queue in the smpp 
>>> connector to the smsc barely goes above 0.
>>> 
>>> Is there a way to control/modify the flow of messages from sqlbox to the 
>>> smpp queue?
>>> 
>>> Architecturally, is this correct in terms of the flow of messages:
>>> sqlbox queue -> bearerbox sms store -> smpp queue
>>> 
>>> Regards
>>> Grant
>>> 
>>>> On 09 Dec 2015, at 11:56, spameden <spame...@gmail.com> wrote:
>>>> 
>>>> 
>>>> 
>>>> 2015-12-09 12:43 GMT+03:00 Grant Saicom <grant.sai...@gmail.com>:
>>>> We process sent_sms into another table on the fly. The maximum size the 
>>>> sent_sms table gets is maybe 40k tops, but mostly it averages around 10k. 
>>>> We see this issue maybe once a week.
>>>> 
>>>> I have really made every attempt to remove any bottlenecks in terms of 
>>>> unwieldy database sizes, to allow Kannel to work in a favourable 
>>>> environment.
>>>> 
>>>> Is there a reason to add multiple sqlboxes to feed bearerbox?
>>>> 
>>>> Is there maybe a concurrency setting I can apply for bearerbox to receive 
>>>> the messages? I did not come across any documentation on the 
>>>> limit-per-cycle setting aside from mailing-list posts.
>>>> 
>>>> I have another question: would we get faster performance if we went 
>>>> flat-file for the Kannel operations?
>>>> 
>>>> Well, you can rule out bottlenecks by simply testing the same setup against 
>>>> the fakesmsc daemon and seeing whether the speed improves.
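>>>> 
>>>> A minimal fake SMSC group for such a test would look something like this 
>>>> (the smsc-id and port are arbitrary):
>>>> 
>>>> group = smsc
>>>> smsc = fake
>>>> smsc-id = faketest
>>>> port = 10000
>>>> 
>>>> Then point the fakesmsc client from the Kannel test directory at that port 
>>>> and compare throughput with your real uplink.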
>>>> 
>>>> It might be that the delay is caused by the overall speed of your SMSC 
>>>> uplinks and not by the database.
>>>> 
>>>> You can also try the classic smsbox implementation for sending instead of 
>>>> sqlbox, but I think sqlbox is the fastest and most convenient way because 
>>>> of the DB storage.
>>>> 
>>>> 
>>>>  
>>>> 
>>>> Regards
>>>> 
>>>> 
>>>>> On 08 Dec 2015, at 15:12, spameden <spame...@gmail.com> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> 2015-12-08 12:51 GMT+03:00 Grant Vorenberg <gr...@saicomvoice.co.za>:
>>>>> 
>>>>> Hi
>>>>> 
>>>>> We manage how big send_sms gets. The queue builder inserts 500 messages at 
>>>>> a time, up to a total maximum of 3000, from a larger main queue which can 
>>>>> grow as big as 2M.
>>>>> 
>>>>> 2M is a fairly big table; how big is sent_sms? 10-30M?
>>>>> 
>>>>> I think your issue happens when Kannel tries to move an already-submitted 
>>>>> message from the send_sms table to the sent_sms table; this is where it 
>>>>> hangs. You can test it yourself with a simple query and measure the time 
>>>>> per query:
>>>>> 
>>>>> INSERT INTO sent_sms SELECT * FROM send_sms WHERE sql_id=XXXX;
>>>>> 
>>>>> If it's instant, there should be no problem.
>>>>> 
>>>>> Generally it's better to keep the sent_sms table at around 1M records, not 
>>>>> more; older records you can move to another table daily.
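>>>>> 
>>>>> For example, run daily from cron, something along these lines (the archive 
>>>>> table name and cutoff id are up to you; this assumes sent_sms keeps the 
>>>>> auto-increment sql_id column):
>>>>> 
>>>>> -- one-time setup: CREATE TABLE sent_sms_archive LIKE sent_sms;
>>>>> -- wrap the two statements in one transaction if the tables are InnoDB
>>>>> INSERT INTO sent_sms_archive SELECT * FROM sent_sms WHERE sql_id < 1000000;
>>>>> DELETE FROM sent_sms WHERE sql_id < 1000000;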
>>>>>  
>>>>> 
>>>>> The actual hardware is vCenter on blades with plenty of RAM and CPU, and 
>>>>> HP 3PAR fibre-attached storage (144GB of RAID-card RAM for caching in 
>>>>> total) with dedicated SSDs specifically for the DB. The calculated IOPS 
>>>>> are stupidly good.
>>>>> 
>>>>> The VMs are as follows:
>>>>> Queuebuilder: 4 vcpu, 16GB on SAS
>>>>> Kannel: 4 vcpu, 8GB on SAS
>>>>> MysqlDB-Master: 8 vcpu, 32GB on SSD
>>>>> MysqlDB-Slave: 8 vcpu, 32GB on SSD
>>>>> 
>>>>> MySQL on SSDs should work just fine and you should get a large number of 
>>>>> IOPS. Btw, I recommend using MariaDB instead of regular MySQL 
>>>>> (mariadb.org); it's very fast and reliable, and for InnoDB it uses the 
>>>>> modified XtraDB engine, which has some tweaks.
>>>>> 
>>>>> Did you check mysqladmin status to see the number of queries processed per 
>>>>> second?
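>>>>> 
>>>>> Something like this (credentials omitted):
>>>>> 
>>>>> # one-off counters, including the queries-per-second average
>>>>> mysqladmin status
>>>>> 
>>>>> # rolling view every 10 seconds, relative values per interval
>>>>> mysqladmin -r -i 10 extended-status | grep -E 'Questions|Com_insert|Com_select'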
>>>>> 
>>>>> 
>>>>> The load on the MysqlDB-Master averages around 0.4 with a max of 0.6 (a 
>>>>> single spike). Memory usage hangs at around 24GB. I will need to check the 
>>>>> process list to double-check, but we generally don't see much strain here 
>>>>> either; I stand to be corrected.
>>>>> 
>>>>> DB optimisations are as follows:
>>>>> 
>>>>> key_buffer              = 16M
>>>>> max_allowed_packet      = 16M
>>>>> 
>>>>> Maybe increase max_allowed_packet a bit, to 64M.
>>>>> 
>>>>> innodb_buffer_pool_size = 12G
>>>>> 
>>>>> This is fine.
>>>>> 
>>>>> query_cache_limit       = 20M
>>>>> query_cache_size        = 128M
>>>>> 
>>>>> The query cache doesn't make much difference in Kannel's case, so it's 
>>>>> fine to have it like that.
>>>>>  
>>>>> 
>>>>> No extra indexes on send_sms, as we limit its size to 3000 rows.
>>>>> 
>>>>> Make sure both the send_sms and sent_sms tables use the InnoDB engine.
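>>>>> 
>>>>> A quick way to check (the schema name 'kannel' is just a guess, use your 
>>>>> own):
>>>>> 
>>>>> SELECT table_name, engine
>>>>>   FROM information_schema.tables
>>>>>  WHERE table_schema = 'kannel'
>>>>>    AND table_name IN ('send_sms', 'sent_sms');
>>>>> 
>>>>> -- convert if either comes back as MyISAM
>>>>> ALTER TABLE send_sms ENGINE = InnoDB;
>>>>> ALTER TABLE sent_sms ENGINE = InnoDB;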
>>>>>  
>>>>> 
>>>>> All reporting is done on the slaveDB, so no extra strain on monitoring 
>>>>> and reporting.
>>>>> 
>>>>> By 'reporting', do you mean your dlr-url points at an external script 
>>>>> which is connected to the slave DB?
>>>>>  
>>>>> 
>>>>> Historically, we have queued at the SMPP connector and not at the smsbox. 
>>>>> We generally reached a top (avg) speed of 73 msg/s, but when I see the 
>>>>> smsbox queued figure rise and the internal SMPP queue drop to 0, we only 
>>>>> hit half of that speed.
>>>>> 
>>>>> I did not see the limit-per-cycle setting in the sqlbox documentation 
>>>>> (2011). I also checked the code and saw that the select limit is a 
>>>>> variable rather than hard-coded.
>>>>> 
>>>>> Yeah, it was introduced some time ago alongside other optimisations (a 
>>>>> year ago or so, I can't remember now); I think it should be in the new 
>>>>> documentation. However, you need to build the documentation yourself to 
>>>>> get the more recent version of it.
>>>>> 
>>>>> 
>>>>> Regards
>>>>> G
>>>>> 
>>>>>> On 08 Dec 2015, at 10:52, spameden <spame...@gmail.com> wrote:
>>>>>> 
>>>>>> 
>>>>>> 2015-12-08 11:23 GMT+03:00 Grant Vorenberg <gr...@saicomvoice.co.za>:
>>>>>> 
>>>>>> Here is my config:
>>>>>> 
>>>>>> group = sqlbox
>>>>>> id = sqlbox-db
>>>>>> smsbox-id = sqlbox
>>>>>> #global-sender = ""
>>>>>> bearerbox-host = localhost
>>>>>> bearerbox-port = 13001
>>>>>> smsbox-port = 13005
>>>>>> smsbox-port-ssl = false
>>>>>> sql-log-table = sent_sms
>>>>>> sql-insert-table = send_sms
>>>>>> log-file = "/var/log/kannel/kannel-sqlbox.log"
>>>>>> log-level = 0
>>>>>> #sqlbox optimisation GAV 20151207
>>>>>> limit-per-cycle = 100
>>>>>> 
>>>>>> 
>>>>>> Try monitoring your database workload.
>>>>>> 
>>>>>> Is the send_sms table big?
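>>>>>> 
>>>>>> A quick look would be something like:
>>>>>> 
>>>>>> SELECT COUNT(*) FROM send_sms;
>>>>>> SHOW TABLE STATUS LIKE 'send_sms';
>>>>>> SHOW FULL PROCESSLIST;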
>>>>>>  
>>>>>> 
>>>>>>> On 07 Dec 2015, at 15:06, spameden <spame...@gmail.com> wrote:
>>>>>>> 
>>>>>>> 2015-12-07 14:03 GMT+03:00 Grant Vorenberg <gr...@saicomvoice.co.za>:
>>>>>>> 
>>>>>>> Hi Guys
>>>>>>> 
>>>>>>> I am new to the list and am looking for a little clarity on sqlbox.
>>>>>>> 
>>>>>>> I have a Debian Wheezy box running:
>>>>>>> Kannel sqlbox version 1.4.4
>>>>>>> Libxml version 2.7.2
>>>>>>> MySQL 5.5.43
>>>>>>> 
>>>>>>> The front end is custom and drops messages into the send_sms table. 
>>>>>>> These messages are terminated via SMPP to another system of ours. We 
>>>>>>> process and clean out the sent_sms table.
>>>>>>> 
>>>>>>> I gather stats on the performance of the system using the status page: 
>>>>>>> http://localhost:13000/status
>>>>>>> 
>>>>>>> I am trying to understand the following output from the screen:
>>>>>>> 
>>>>>>> Box connections:
>>>>>>>        smsbox:sqlbox, IP 127.0.0.1 (2532 queued), (on-line 0d 2h 48m 19s)
>>>>>>> 
>>>>>>> This means there is a queue on sqlbox.
>>>>>>> 
>>>>>>> The queue works like this:
>>>>>>> 
>>>>>>> First sqlbox fetches messages from the DB backend, then it adds them to 
>>>>>>> its own queue, then it sends them on to the bearerbox queue, and 
>>>>>>> bearerbox splits that queue over your connected SMSC/upstream operators.
>>>>>>> 
>>>>>>> So if there is a huge queue on sqlbox, it means there is a large number 
>>>>>>> of MTs in your send_sms table and sqlbox is still submitting them to 
>>>>>>> bearerbox.
>>>>>>> 
>>>>>>> To optimise sqlbox I'd recommend adding this parameter to the sqlbox 
>>>>>>> section (after group = sqlbox):
>>>>>>> 
>>>>>>> limit-per-cycle = 50
>>>>>>> 
>>>>>>> This means sqlbox will fetch 50 messages from the DB per query instead 
>>>>>>> of 1.
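>>>>>>> 
>>>>>>> For context, the whole group would then look roughly like this (ids and 
>>>>>>> ports are only placeholders):
>>>>>>> 
>>>>>>> group = sqlbox
>>>>>>> id = sqlbox-db
>>>>>>> smsbox-id = sqlbox
>>>>>>> bearerbox-host = localhost
>>>>>>> bearerbox-port = 13001
>>>>>>> smsbox-port = 13005
>>>>>>> sql-log-table = sent_sms
>>>>>>> sql-insert-table = send_sms
>>>>>>> # fetch up to 50 queued messages from send_sms per polling cycle
>>>>>>> limit-per-cycle = 50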
>>>>>>> 
>>>>>>>  
>>>>>>> 
>>>>>>> What I noticed is that when our send speeds start dipping on the SMPP 
>>>>>>> connection (internal/default route), this smsbox:sqlbox queue starts 
>>>>>>> building up.
>>>>>>> 
>>>>>>> When the smsbox:sqlbox queue starts building up like this:
>>>>>>> 1) What causes this?
>>>>>>> 2) What does this signify?
>>>>>>> 
>>>>>>> We generally don't see this behaviour very often, but its effect is 
>>>>>>> detrimental to the performance of the system, so I would like to know 
>>>>>>> what causes the growth and how to combat it so that our send speed is 
>>>>>>> safeguarded.
>>>>>>> 
>>>>>>> 
>>>>>>> Here is an excerpt from my config files:
>>>>>>> 
>>>>>>> <sqlbox conf>:
>>>>>>> group = sqlbox
>>>>>>> id = sqlbox-db
>>>>>>> smsbox-id = sqlbox
>>>>>>> #global-sender = ""
>>>>>>> bearerbox-host = localhost
>>>>>>> bearerbox-port = 13001
>>>>>>> smsbox-port = 13005
>>>>>>> smsbox-port-ssl = false
>>>>>>> sql-log-table = sent_sms
>>>>>>> sql-insert-table = send_sms
>>>>>>> 
>>>>>>> 
>>>>>>> <kannel.conf>:
>>>>>>> group = smsbox
>>>>>>> bearerbox-host = 127.0.0.1
>>>>>>> sendsms-port = 13013
>>>>>>> global-sender = 13013
>>>>>>> smsbox-id=my_smsbox
>>>>>>> 
>>>>>>> group=smsbox-route
>>>>>>> smsbox-id=sqlbox
>>>>>>> smsc-id=internal
>>>>>>> 
>>>>>>> 
>>>>>>> Regards
>>>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 
> 
