The InnoDB buffer pool should probably be around 10-20 GB if you're going to use just one single InnoDB table. Right now you are probably killing MariaDB with those 1.3k threads.

First, try limiting the number of threads running simultaneously to something like 5-30. You're not going to get any performance benefit from going above a certain (rather low) point; you'll only produce mutex contention, so the server will become busy with thread coordination code rather than doing any real work. 50 threads would probably already be too much, and you have 20 times that ;)
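For what it's worth, a minimal my.cnf sketch along those lines; the numbers are only assumptions for a 24 GB / 12-core box, so tune the buffer pool to your data set and leave memory for the OS and your C application:

[mariadb]
innodb_buffer_pool_size = 16G
thread_handling = pool-of-threads
thread_pool_size = 12        # roughly one active thread per core; the pool queues the rest
max_connections = 1000       # clients can stay connected; the pool limits what actually runs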
I'm not sure whether you're doing a single operation per transaction, but without changes to the code you probably won't be able to make this application run considerably faster anyway. Small operations, like a counter increment, will never be fast in SQL, so you'll need some intermediate "aggregation" step that issues as few SQL commands and as few transactions as possible.

In your C application you probably need to preprocess and aggregate the data. Take, say, 100k operations and process them in memory, so that an insert, a modify and a set on the same PK become just a single set. Then you can further divide those 100k operations into transactions of about 1k operations each.
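A rough C sketch of that aggregation step, just to illustrate the idea (the operation types, key type and hash table here are all made up; your real data layout will differ):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical operation types coming off the FIFO. */
typedef enum { OP_INSERT, OP_MODIFY, OP_SET, OP_DELETE } op_type;

typedef struct {
    op_type type;
    long    pk;      /* primary key of the affected row      */
    long    value;   /* counter delta or new counter value   */
} op;

/* Aggregated final state for one primary key. */
typedef struct agg {
    long        pk;
    int         deleted;
    long        value;
    struct agg *next;
} agg;

#define BUCKETS 4096
static agg *table[BUCKETS];

static agg *lookup(long pk)
{
    unsigned h = (unsigned long)pk % BUCKETS;
    for (agg *a = table[h]; a; a = a->next)
        if (a->pk == pk)
            return a;
    agg *fresh = calloc(1, sizeof *fresh);   /* new key: start from an empty state */
    fresh->pk = pk;
    fresh->next = table[h];
    table[h] = fresh;
    return fresh;
}

/* Collapse a batch of raw operations so that insert + modify + set
 * on the same PK ends up as one final value per key. */
static void aggregate(const op *ops, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        agg *a = lookup(ops[i].pk);
        switch (ops[i].type) {
        case OP_INSERT:
        case OP_SET:    a->value = ops[i].value;  a->deleted = 0; break;
        case OP_MODIFY: a->value += ops[i].value; a->deleted = 0; break;
        case OP_DELETE: a->deleted = 1;                           break;
        }
    }
}

int main(void)
{
    /* Three raw operations on the same key collapse into one final write. */
    op batch[] = {
        { OP_INSERT, 42, 1 },
        { OP_MODIFY, 42, 5 },
        { OP_SET,    42, 9 },
    };
    aggregate(batch, sizeof batch / sizeof batch[0]);

    for (int b = 0; b < BUCKETS; b++)
        for (agg *a = table[b]; a; a = a->next)
            if (!a->deleted)
                printf("pk=%ld -> value=%ld  (one statement instead of three)\n",
                       a->pk, a->value);
    return 0;
}

Each surviving entry then becomes one row in a 1k-operation batch that you write out in a single transaction.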
At the end you can parallelize by starting many instances of the application and spreading the keys over them using CRC32 (a small sketch of the key spreading follows the list below). So what you'll probably need to do is:

1. aggregate as many operations as possible
2. divide them into N chunks
3. read all affected keys for chunk X in a single query
4. modify the data in memory
5. output a transaction
6. go to 3 until all chunks are done
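And the CRC32 key spreading could look roughly like this; it uses zlib's crc32(), and the key format and instance count are just placeholders:

#include <stdio.h>
#include <string.h>
#include <zlib.h>   /* for crc32(); link with -lz */

/* Pick which application instance owns a key: hash it with CRC32 and
 * take the result modulo the number of instances, so a given key is
 * always handled by the same instance and two instances never touch
 * the same row. */
static unsigned owner_instance(const char *key, unsigned n_instances)
{
    uLong crc = crc32(0L, Z_NULL, 0);                        /* initial CRC */
    crc = crc32(crc, (const Bytef *)key, (uInt)strlen(key));
    return (unsigned)(crc % n_instances);
}

int main(void)
{
    const char *keys[] = { "user:1001", "user:1002", "session:abc" };
    unsigned n_instances = 4;   /* assumption: four parallel app instances */

    for (size_t i = 0; i < sizeof keys / sizeof keys[0]; i++)
        printf("%-12s -> instance %u\n",
               keys[i], owner_instance(keys[i], n_instances));
    return 0;
}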
On 7 October 2019 at 18:17, JCA <1.41...@gmail.com> wrote:

I am running MariaDB 10.0.34 on a Slackware 14.2 system. I have a C application that interacts with MariaDB in the following way:
1. Read data from a FIFO.
2. Insert the data into a table in a MariaDB database, if absent, or modify it according to certain specific criteria otherwise.

This works as expected.

The problem that I have is that data are being written to the FIFO at a fast rate. In order to be able to keep up, at any given time my application reads the data available at the FIFO and spawns a thread to process the chunk of data just read. It is in this thread that all the database interaction takes place. In order to deal with this, I have the following entries in my /etc/my.cnf file:
# this is read by the standalone daemon and embedded servers
[server]

# this is only for the mysqld standalone daemon
[mysqld]
# thread_handling=pool-of-thread
# log=/var/log/mysqld.log

# this is only for embedded server
[embedded]

# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# log=/var/log/mysqld.log
general_log_file        = /var/lib/mysql/mysql.log
# general_log             = 1

# transaction-isolation = READ-COMMITTED

# key_buffer = 1280M              # 128MB for every 1GB of RAM
# sort_buffer_size = 1M           # 1MB for every 1GB of RAM
# read_buffer_size = 1M           # 1MB for every 1GB of RAM
# read_rnd_buffer_size = 1M       # 1MB for every 1GB of RAM
# thread_concurrency = 24         # Based on the number of CPUs
                                  # so make it CPU*2
# thread-handling=pool-of-thread

# innodb_flush_log_at_trx_commit != 1

# open_files_limit = 50000

thread-handling=pool-of-thread
max_connections = 1000
table_open_cache = 800
query_cache_type = 0
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 10
innodb_adaptive_hash_index_par = 20
innodb_lock_wait_timeout = 5000
With this, my application can keep up with the FIFO writer, but - depending on the circumstances - my database can't. As I am writing this, there are over 1300 threads connected to my database; any command that I issue at the mysql CLI takes over one minute to return. I am keeping track of how long each thread takes to complete, and that is of the order of hundreds of seconds - sometimes thousands. Each thread is itself simple, in that it just issues a couple of simple MariaDB commands.

Currently my table consists of 1.6 million entries, and growing - on this basis, I expect that things will only get worse. Each entry, however, will never require more than a couple of hundred bytes of storage. The operations that can be undertaken on entries are insertion, deletion and modification, the latter being straightforward - e.g. incrementing a counter or replacing a short string.

My system has 24 GB of RAM and 12 cores. Occasionally all the cores are fully busy with MariaDB activity, but most of the time barely one or two are.

I am a newbie when it comes to interacting with MariaDB - please bear with me. I know I must use a single database and a single table. I also know - because of the nature of the data that are being written to the FIFO - that the probability of two different threads operating on the same entry in the table at the same time is negligible - i.e. for all practical purposes, that will not happen.

What I need is advice on how to configure my instance of MariaDB to perform optimally in the scenario above. In particular, I would like for it to make better use of all the cores available - in essence, to parallelize the database operations as much as possible. Feedback from the experts will be much appreciated.
list:  launchpad.net launchpad.net  Post to     :   
maria-discuss@lists.launchpad.  Unsubscribe :  launchpad.net launchpad.net  
More help   :  help.launchpad.net help.launchpad.net
_______________________________________________
Mailing list: https://launchpad.net/~maria-discuss
Post to     : maria-discuss@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-discuss
More help   : https://help.launchpad.net/ListHelp
