Hello again - this page describes some of the things I am talking about:

https://mariadb.com/kb/en/library/using-the-non-blocking-library/

If LC had a mechanism to use a database's non-blocking functions, like the 
MariaDB library described on that page, then we could eliminate all of the 
extra idle time spent waiting for the database to return a result.
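The MariaDB non-blocking library linked above is a C API, but the pattern it enables can be illustrated language-neutrally. Here is a minimal Python sketch (names and timings hypothetical) of the same idea: start the query, keep doing useful work, and poll for completion instead of blocking.

```python
import concurrent.futures
import time

def run_query(sql):
    """Stand-in for a database call; the real work blocks on the server."""
    time.sleep(0.1)  # simulate the round trip to the database
    return f"rows for: {sql}"

# The non-blocking pattern: start the query, then keep doing useful work,
# periodically checking whether the result has arrived.
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(run_query, "SELECT * FROM users")
    ticks = 0
    while True:
        ticks += 1            # other work the engine is free to do meanwhile
        time.sleep(0.01)
        if future.done():
            break
    result = future.result()

print(result)  # rows for: SELECT * FROM users
```

The point is that `ticks` keeps advancing while the query is in flight; with a blocking call, that time would simply be lost.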


Sent from my iPhone

> On Dec 5, 2017, at 10:16 AM, jonathandly...@gmail.com wrote:
> 
> Thank you Andre,
> 
> The fact that one of the bottlenecks is waiting for the database to return 
> the data is what I was targeting. If LC just sends a request to a 
> different program, and that program spawns a new thread for each request, 
> then LC is free to keep working. In this way, LC would be single-threaded 
> while handling database activities asynchronously.
> 
> It appears this can be done by sending asynchronous HTTP requests to a 
> database program with an HTTP plugin, but that approach seems clunky to me.
> 
> Just to clarify, I am not trying to multithread with LC or use multiple 
> processes with LC. I am wondering about using multithreading in an external 
> that then sends search results back to LC as they become available. LC would 
> still be single-threaded in this scenario. The end result is that LC keeps 
> working while the database is handling multiple searches.
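The external-with-worker-threads idea described here could look roughly like the following Python sketch (hypothetical; a real LC external would be compiled code, and the result channel here is simply a queue of ID-tagged results).

```python
import queue
import threading
import time

results = queue.Queue()          # channel back to the single-threaded host

def handle_search(request_id, terms):
    """Worker thread: run the (simulated) database search, then post the
    result back tagged with its request ID."""
    time.sleep(0.05)             # stand-in for the real query
    results.put((request_id, f"hits for {terms}"))

# The host fires off several searches without waiting on any of them.
for req_id, terms in enumerate(["apples", "pears", "plums"]):
    threading.Thread(target=handle_search, args=(req_id, terms)).start()

# ...the host keeps doing other work, then drains results as they arrive;
# the ID on each result tells it which request the answer belongs to.
collected = {}
while len(collected) < 3:
    req_id, result = results.get()
    collected[req_id] = result

print(sorted(collected.items()))
```

The ID tagging matters because results can come back in any order once the searches run concurrently.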
> 
> Sent from my iPhone
> 
>> On Dec 5, 2017, at 9:40 AM, Andre Garzia <an...@andregarzia.com> wrote:
>> 
>> Jonathan,
>> 
>> It is not that simple. There is no silver bullet that can be added to a 
>> language to make it scale by a couple of orders of magnitude like this. 
>> While the LiveCode engine is executing your business logic, it is not 
>> doing anything else (it might be tidying up the house a bit, I haven't 
>> looked at it, but it is definitely not executing other parts of your 
>> business code). Other languages with support for fibers/coroutines can 
>> basically switch what they are doing, like an OS with multithreading but 
>> without the actual threading part. Some other languages support full 
>> threads, and some even allow you to fork(). But as far as I am aware, 
>> LC's datatypes and workflow are not thread safe (someone familiar with 
>> the internals might correct me here), which makes it a time bomb if you 
>> go spawning threads. Also, thread programming is quite complex and data 
>> races are a real problem; Mozilla's creation of Rust was in part to 
>> prevent data races with a safer language. That is how big this problem 
>> is: people go and create new languages to solve it.
>> 
>> LC's bottleneck is not database queries: the queries spend more time 
>> inside the database than they do in transit between the revdb external 
>> and the RDBMS. There is nothing we can bolt on top of the current LC 
>> engine to make it behave like NodeJS (non-blocking, with an async 
>> language and a JIT), and that is not even desirable. NodeJS programming 
>> requires a ton of tooling and knowledge that is not at all related to 
>> whatever business problem you're trying to solve; LC is much more of a 
>> pick-up-and-go language than NodeJS ever will be. A simple setup of a 
>> modern NodeJS sample based on server-side rendering and React will 
>> probably download hundreds of megabytes of developer dependencies just 
>> to get the sample to compile. Imagine if one of your stacks required 
>> THOUSANDS of little stacks that amounted to HUNDREDS of megabytes on 
>> disk, just to load. That is the land of NodeJS; it has its own problems.
>> 
>> Other potential solutions for deploying LC-based server software have 
>> been attempted in the past; I will summarize two of them below as food 
>> for thought.
>> 
>> ## THE FASTCGI APPROACH ##
>> A long, long time ago, in a Runtime Revolution far, far away, I coded a 
>> FastCGI stack. FastCGI is a protocol specification that allows using a 
>> single connection (or a pool) to multiplex requests from a server. So, 
>> using something like a persistent TCP connection to Apache, you could 
>> answer multiple requests. The problem with FastCGI and LC is the same as 
>> outlined above: while the CGI part was busy working on something, it 
>> would not respond to other requests. That is the bottleneck: the blocking 
>> code. Imagine that your CGI needs to fetch a large data set from a file 
>> and process it before answering, and that this process takes 5 seconds. 
>> During those 5 seconds, the server would be unresponsive.
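The blocking problem described above can be shown with a toy sequential request loop (a sketch of the failure mode, not of FastCGI itself; the handlers and timings are invented).

```python
import time

def handle(request):
    """A single-threaded request loop: one slow request stalls the rest."""
    if request == "big-report":
        time.sleep(0.2)          # stand-in for the 5-second data crunch
    return f"done: {request}"

start = time.monotonic()
finished_at = {}
for req in ["big-report", "quick-ping"]:
    handle(req)
    finished_at[req] = time.monotonic() - start

# quick-ping needed almost no work of its own, yet it could not finish
# until big-report released the engine.
print(finished_at["quick-ping"] >= 0.2)  # True
```

This is exactly why a blocking handler caps the server's responsiveness at the speed of its slowest request.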
>> 
>> ## THE ENGINE POOL ##
>> I believe it was Richard who did this; I can't recall, but it was 
>> definitely not me. Keep a pool of engines running, let's say 20, and use 
>> a load balancer to round-robin requests among them. This allows you to 
>> answer at least 20 concurrent requests. The engine pool is only desirable 
>> over our normal CGI method in one very specific case: the current LC 
>> server is CGI based, so it spawns a new engine for each request. If 
>> you're on a memory-constrained machine that can't afford this escalation 
>> of memory and CPU usage, you keep a pool of engines within a safe 
>> threshold and use the pool. I can only see this working well on a 
>> Raspberry Pi; in all other cases CGI should work better.
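The round-robin part of the engine-pool idea is simple to sketch (the real setup would be a load balancer in front of N LC server processes; the local addresses below are hypothetical).

```python
import itertools

# A fixed pool of engine addresses (hypothetical local ports 9000-9002),
# dealt out in strict rotation.
pool = [f"127.0.0.1:{9000 + n}" for n in range(3)]
rr = itertools.cycle(pool)

# Seven incoming requests get assigned round-robin across the pool.
assignments = [next(rr) for _ in range(7)]
print(assignments[:4])  # ports 9000, 9001, 9002, then back to 9000
```

Each engine handles roughly 1/N of the traffic, so memory use stays bounded by the pool size rather than growing with the request rate.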
>> 
>> ## Curiosity nuggets of semi-related trivia ##
>> Oh, and sometimes even NodeJS is slow. Check out this article I wrote a 
>> couple of weeks ago: http://andregarzia.com/en/blog/creating-rust-based-nodejs-modules 
>> in which I show a dramatic speedup in a NodeJS codebase by converting the 
>> most critical part of the code into a Rust-based module. The code used to 
>> execute in 3 seconds and now executes in 150 milliseconds.
>> 
>> 
>>> On Tue, Dec 5, 2017 at 9:29 AM, Jonathan Lynch via use-livecode 
>>> <use-livecode@lists.runrev.com> wrote:
>>> To make this happen, it seems like we would need an external that 
>>> multithreads database queries and sends the query results back to LC as a 
>>> new message. It would have to assign a new ID to each request and return 
>>> that ID with the result.
>>> 
>>> Can the ODBC external do that?
>>> 
>>> Sent from my iPhone
>>> 
>>> > On Dec 4, 2017, at 2:06 PM, Richard Gaskin via use-livecode 
>>> > <use-livecode@lists.runrev.com> wrote:
>>> >
>>> > jonathandlynch wrote:
>>> >
>>> > > Hi Richard and Andre - thanks for your replies. I was the one who
>>> > > mentioned millions of users at the same time, not out of drunkenness
>>> > > but because I wanted to understand the upper limits of these systems.
>>> >
>>> > Scaling is a fascinating problem.  I found the C10k problem a good 
>>> > starting point (in recent years supplanted with C10m):
>>> >
>>> > <https://en.wikipedia.org/wiki/C10k_problem>
>>> >
>>> >
>>> > > I also found a thread discussing this idea from a few years ago that
>>> > > Richard was part of. It was very informative.
>>> >
>>> > I usually just quote Pierre or Andre, but once in a while my OCD habits 
>>> > with benchmarking add something useful. :)
>>> >
>>> >
>>> > > I think an all-LC very fast server would be a great thing, but it
>>> > > sounds like just using node would be more realistic. I might fiddle
>>> > > a bit with this idea, just to satisfy my curiosity.
>>> >
>>> > Node.js is good where Node.js is needed.  In some cases NginX is good. In 
>>> > other cases Lighttpd is fine.  And in many cases Apache is fine, even 
>>> > with simple CGI.
>>> >
>>> > Most of us never need to think about C10m, or even C10k.  If we do, 
>>> > that's a wonderfully fortunate problem to have.  Just having that problem 
>>> > makes it much easier to get funding to solve it with specialists.  Let 
>>> > the t-shirts sort it out while the suits focus on strategy.
>>> >
>>> > --
>>> > Richard Gaskin
>>> > Fourth World Systems
>>> > Software Design and Development for the Desktop, Mobile, and the Web
>>> > ____________________________________________________________________
>>> > ambassa...@fourthworld.com                http://www.FourthWorld.com
>>> >
>>> > _______________________________________________
>>> > use-livecode mailing list
>>> > use-livecode@lists.runrev.com
>>> > Please visit this url to subscribe, unsubscribe and manage your 
>>> > subscription preferences:
>>> > http://lists.runrev.com/mailman/listinfo/use-livecode
>>> 
>> 
>> 
>> 
>> -- 
>> http://www.andregarzia.com -- All We Do Is Code.
>> http://fon.nu -- minimalist url shortening service.
