That would be awesome. I really need to do this for the File Storage library I want to include in my app. Right now I just save the PDFs on my local HDD, and each client has its own subset of the files it creates. But if I could pass the binary data for the PDF over a client-server system of some sort, I could centrally store and recall the documents from ALL the clients, which would be the beginnings of a Document Capture and Storage app written in LC.
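Something like this is roughly what I'm picturing on the client side (an untested sketch only; the server address, the port, and the "STORE" header line are all made up for illustration, and the receiving end would need a matching handler):

    -- rough sketch: push one PDF to a central store over a socket
    -- the host, port and header format below are invented, not a real protocol
    on storePDF pFilePath
       local tHost, tName, tData, tSize
       put "fileserver.local:8010" into tHost -- hypothetical central server
       set the itemDelimiter to slash
       put the last item of pFilePath into tName -- just the file name
       put URL ("binfile:" & pFilePath) into tData -- read the PDF as raw binary
       put the number of bytes in tData into tSize
       open socket to tHost
       write "STORE" && tName && tSize & return to socket tHost -- invented header line
       write tData to socket tHost
       read from socket tHost until return -- block until the server acknowledges
       close socket tHost
       return it -- whatever acknowledgement line the server sent back
    end storePDF

The server end would just need to read the header line, read that many bytes, and write the file wherever it keeps the central store.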
Bob S

> On Jan 8, 2021, at 5:32 PM, Alex Tweedly via use-livecode <use-livecode@lists.runrev.com> wrote:
>
> On 08/01/2021 01:51, Bob Sneidar via use-livecode wrote:
>> I have thought about this a bit. If what you mean by multiprocessing is that a new process can be spawned while your app can go off and do other things and get notified later that something happened, then this is quite doable. If what you mean is that you want to make the app spawning the processes operate more efficiently by launching the same process over and over in multiple instances, then I don't think so. At some point the parent process is going to have to get the result of each thread and do something about it.
>
> I did something like this, but didn't spawn a new process for each asynch task being launched.
>
> What I did instead was have a very general-purpose 'task handler' app, and have multiple instances of this 'server app' running all the time; as long as they're not doing any task, that's a negligible "cost" in resources. Each instance would accept a socket connection on a different port (e.g. 6000, 6001, 6002, ...) and would be passed "requests" to handle a specific task. It would queue up multiple tasks, handle them in turn, and pass the result back over the connection to whichever app requested it.
>
> Then there was a client library, which would handle all the "messiness" for the client, so the app need not be too involved. The app itself would simply 'start using' the library, tell it which tasks it wanted to be able to do (see below), and then pass in multiple requests through library calls. The library would determine how many task-handler apps were available, parcel out the requests between them, and provide the responses to the client (if needed).
>
> For each task (or set of tasks) you would write a library stack, which would have handlers to perform the task(s), and respond.
>
> Example (trivial, and it may contain typos). You have to imagine that generating random numbers is very time-consuming :-)
>
> 1. Write a library stack to perform some tasks.
>
>    stack "randomstuff"
>    global gResultData, gResultObject -- remember: one task at a time, so no race conditions
>
>    on randomint pData
>       local tmp, tMin, tMax
>       put word 1 of pData into tMin
>       put word 2 of pData into tMax
>       put random(tMax - tMin + 1) into tmp
>       put tmp - 1 + tMin into gResultData
>       send "completedTask" to gResultObject in 1 millisec
>    end randomint
>
> 2. In the client app, determine the tasks to be handled (within, say, openStack):
>
>    ....
>    taskClientLoadStack "randomstuff.livecodescript"
>    ....
>
> 3. When you need a number of random integers:
>
>    ....
>    repeat 1000 times
>       put taskClientSendARequest("randomstuff", "randomint 12 17", the long ID of me) into tmp
>    end repeat
>    -- (the return value is a request id - often can be ignored)
>    ...
>    on taskcomplete pID, pResult
>       -- pID is the request ID, pResult is the returned data from that request
>       put pResult & CR after sNumbers -- or whatever
>       ...
>    end taskcomplete
>
> In addition, there were various 'admin' tasks you could request (close connection, cancel pending tasks, get count of remaining tasks pending, ...). The initial version of the client library simply round-robined the task requests between the task-handlers, but the 'status' request would allow for more intelligent load-balancing if needed.
>
> I did all this many years ago (2006??)
> and had this up on the earliest version of revonline (which subsequently got deleted). I developed it to help with indexing (including md5hash) of large numbers of image files. Using 4 task handlers, it was able to do the indexing in around 1/3 of the time the single-threaded version took.
>
> I still have the code - but unfortunately I can't find the write-up / documentation on how to use it. And I admit I absolutely cringe now to look at the code - it *seriously* needs to be re-written, or heavily revised.
>
> I'll clean up the code, write up how to use it and post it somewhere for anyone who wishes to try it out.
>
> Alex.
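For anyone trying to picture the task-handler end Alex describes above, here is a very rough, untested sketch of that accept / queue / dispatch shape. This is not Alex's code: the handler and message names, the tab-delimited queue layout, and the quoted-parameter dispatch are all invented, and it assumes a library stack like the "randomstuff" one above is already in use so its handlers are in the message path.

    -- rough sketch of one task-handler instance (not Alex's real code)
    global gResultData, gResultObject -- shared with library stacks such as "randomstuff"
    local sTaskQueue, sCurrentSocket, sBusy

    on startTaskHandler pPort
       -- one handler instance per port (6000, 6001, ...), as described above
       accept connections on port pPort with message "newConnection"
    end startTaskHandler

    on newConnection pSocket
       read from socket pSocket until return with message "gotRequest"
    end newConnection

    on gotRequest pSocket, pRequest
       -- queue the request, remembering which connection asked for it
       put pSocket & tab & line 1 of pRequest & return after sTaskQueue
       if sBusy is not true then send "runNextTask" to me in 0 milliseconds
       read from socket pSocket until return with message "gotRequest" -- keep listening
    end gotRequest

    on runNextTask
       local tLine, tRequest
       if sTaskQueue is empty then
          put false into sBusy
          exit runNextTask
       end if
       put true into sBusy
       put line 1 of sTaskQueue into tLine
       delete line 1 of sTaskQueue
       set the itemDelimiter to tab
       put item 1 of tLine into sCurrentSocket
       put item 2 to -1 of tLine into tRequest
       put the long id of me into gResultObject -- the library will call us back
       -- call the named handler with the rest of the request as one parameter,
       -- e.g. "randomint 12 17" becomes: randomint "12 17"
       do (word 1 of tRequest) && quote & (word 2 to -1 of tRequest) & quote
    end runNextTask

    on completedTask
       -- the library stack has put its answer into gResultData (see above)
       write gResultData & return to socket sCurrentSocket
       send "runNextTask" to me in 0 milliseconds
    end completedTask

The client library Alex mentions would sit on the other side of the socket, round-robining requests across however many of these handler instances are running.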