On Jan 25, 2020, at 3:44 PM, Rowan Tommins <rowan.coll...@gmail.com> wrote:
> 
> On 25/01/2020 18:51, Robert Hickman wrote:
> 
>> Yes, that is what I was thinking. For example, there is a userspace
>> implementation, 'Swoole', that works in the following way; ReactPHP is
>> similar, although I won't include that example as well.
> 
> 
> So trying to get concrete: the first "official" component we'd need would be 
> an outer event loop, mapping requests and responses to the parameter and 
> return values of userland callbacks. In principle, not too difficult, 
> although I'm sure there are plenty of devils in the details.
> 
> 
>> In my mind right now, everything should be shareable within a single process,
>> as one could do in the Swoole example above, nothing stopping you defining a
>> global in that script that could cache data in-process.
>> 
>> NodeJS, Python (wsgi) and others work fine using this model and allow sharing
>> of data within the same process. Trying to limit it to only some types of 
>> things
>> would be more complex as each type of thing would end up having a different
>> programmatic interface.
> 
> 
> I may be wrong, but I think this is where it gets complicated. It's not that 
> we'd want to deliberately have different things have different behaviour 
> between requests, it's just that we've got a bunch of existing stuff built on 
> the assumptions of the current architecture.
> 
> In a single-threaded event loop, you want as much as possible to be 
> asynchronous, which is why both Swoole and React have a lot of modules for 
> things like network requests, file I/O, databases, and general asynchronous 
> programming.
> 
> Other things just wouldn't exist if PHP hadn't been modelled as shared 
> nothing from the beginning. Would set_time_limit() still be global, and abort 
> the server after a fixed number of seconds? Or would it configure the event 
> loop somehow?
> 
> I think there'd need to be at least a roadmap for sorting out those questions 
> in the official distribution before it felt like a properly supported part of 
> the language.

I'm not following the discussion 100%, more like 85%, but it sounds like what 
we may be describing is the need for a userland implementation of a 
long-running PHP request, one that does not time out?

If that is the case, could we consider allowing a PHP page to opt in to having 
no timeout?  These types of requests could then handle WebSockets, etc.

Then we could look to prior art in Go channels, where the guidance is "Do not 
communicate by sharing memory; instead, share memory by communicating."  IOW, 
add an API that allows a regular PHP page to communicate with a long-running 
page.  This would decouple the two, allow for better testing, and hopefully 
mean fewer hard-to-track-down bugs.
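The channel model above can be sketched directly in Go, the prior art in question: a single long-running "page" owns the shared state exclusively, and regular shared-nothing requests talk to it over channels rather than touching its memory. The `request` type and `longRunningPage` function are names invented for this illustration, not any proposed PHP interface.

```go
package main

import "fmt"

// request is a message from a regular "page" to the long-running one:
// a key to count, plus a channel on which to send back the new count.
type request struct {
	key   string
	reply chan int
}

// longRunningPage is the analogue of the opted-in, no-timeout PHP page:
// it loops forever and is the only goroutine that ever touches the map,
// so no locks are needed -- state is communicated, never shared.
func longRunningPage(requests <-chan request) {
	hits := map[string]int{}
	for req := range requests {
		hits[req.key]++
		req.reply <- hits[req.key]
	}
}

func main() {
	requests := make(chan request)
	go longRunningPage(requests)

	// Three "regular page" requests communicating with the long-running one.
	for _, page := range []string{"/home", "/home", "/about"} {
		reply := make(chan int)
		requests <- request{key: page, reply: reply}
		fmt.Printf("%s seen %d time(s)\n", page, <-reply)
	}
}
```

Because the counter map is reachable only from `longRunningPage`, the regular requests cannot corrupt it or race on it, which is the decoupling and testability benefit suggested above.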

Further, I would suggest that long-running requests not be able to generate 
output, except when the ini setting display_errors is true.  That would ensure 
they are used only for communicating with regular "shared-nothing" pages, and 
not in place of them.

Would this not be a workable approach?

-Mike
--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
