On Mon, Oct 5, 2015 at 6:38 PM, Anatol Belski <anatol....@belski.net> wrote:

>
>
> > -----Original Message-----
> > From: Dmitry Stogov [mailto:dmi...@zend.com]
> > Sent: Monday, October 5, 2015 3:31 PM
> > To: Anatol Belski <anatol....@belski.net>
> > Cc: Matt Ficken <themattfic...@gmail.com>; Pierre Joye
> > <pierre....@gmail.com>; Laruence <larue...@php.net>; PHP Internals
> > <internals@lists.php.net>; dmi...@php.net
> > Subject: Re: [PHP-DEV] Re: Windows OpCache bug fix
> >
> > > > > > Subject: Re: [PHP-DEV] Re: Windows OpCache bug fix
> > > > > >
> > > > > > > Dmitry, I'd have a question to this
> > > > > > > > Also. if we can't map SHM into desired address space, we may
> > > > > > > > map it in
> > > > > > > any
> > > > > > > > other address and copy data into the process memory similar
> > > > > > > > to
> > > > > > > file-cache.
> > > > > > > With a randomized memory layout, even if the base address were
> > > > > > > available and OpenFileMapping worked, some parts of that memory
> > > > > > > might already be taken.
> > > > > > > How exactly it could be done, could you please give a couple
> > > > > > > of pointers to this?
> > > > > >
> > > > > >
> > > > > > If MapViewOfFileEx(..., wanted_mapping_base) fails, we do
> > > > > > MapViewOfFileEx(..., NULL).
> > > > > >
> > > > > >
> > > > > > > Would the file cache be always required then?
> > > > > > >
> > > > > >
> > > > > > This is not necessary, but depends on implementation of course.
> > > > > >
> > > > > Thanks for the advice. I was playing with this over these days.
> > > > > There are two usual cases where reattaching can currently fail:
> > > > >
> > > > > - https://github.com/php/php-src/blob/PHP-7.0/ext/opcache/shared_alloc_win32.c#L151
> > > > >   (the saved address is available but is not suitable)
> > > > > - https://github.com/php/php-src/blob/PHP-7.0/ext/opcache/shared_alloc_win32.c#L159
> > > > >   (the actual MapViewOfFileEx case)
> > > > >
> > > > > An unrelated plain C test shows that MapViewOfFileEx can also fail
> > > > > when called a second time, even with NULL or another address as the
> > > > > base. And even if it could map at a different base, the internal
> > > > > structures would probably contain invalid addresses.
> > > >
> > > >
> > > > right. we might need different code for zend_accel_hash access, or
> > > > convert the corresponding structures to PIC.
> > > > For opcodes, an "invalid address" doesn't matter, because we will copy
> > > > them into process memory and fix them up (as in the file-cache).
> > > >
> > > Ah, I have to study the file cache code then. But generally it doesn't
> > > sound like something that can be done offhand. I've also thought about
> > > other things like interned strings (maybe something else); not sure
> > > whether they're stored with the cache.
> > >
> > > >
> > > > > So it looks like there's indeed no benefit in retrying once
> > > > > reattaching at the required base address has failed.
> > > > >
> > > > > IMHO a usable scenario would be: if it fails to reattach at the
> > > > > exact address, it switches to the heap. Just for that one request,
> > > > > it would get a heap-allocated segment and then invalidate all the
> > > > > cache. That way we fulfill the following:
> > > > >
> > > > > - the request is served for sure
> > > > > - the scripts are not actually cached, so there's no divergence
> > > > >   from the real cache
> > > > >
> > > > > A heap fallback memory handler would probably be quite easy to
> > > > > implement.
> > > > > What do you think?
> > > > >
> > > > > Apropos the heap: it also looks like when PHP is used as a module
> > > > > under mpm_winnt, the whole cache could use the heap instead of SHM.
> > > > > In that case, there is only one Apache process, serving with many
> > > > > threads. Unless one wanted to share that cache outside Apache,
> > > > > using the heap there could be a much simpler solution.
> > > > >
> > > >
> > > > A heap-based cache has the same problems. It increases memory usage
> > > > and doesn't provide cache consistency.
> > > > Just falling back to the file-cache may be better.
> > > Do you think something like this would suffice as a file cache
> > > fallback: https://gist.github.com/weltling/224001a468f04de13693 ?
> > > Though it'd still diverge from the "main" cache.
> > >
> >
> > I think we should enable the file-cache automatically, but we can set
> > ZCG(accel_directives).file_cache_only
> > if the file cache is already enabled.
> >
> I've reworked the approach:
> https://gist.github.com/weltling/69bd1e47dc15273edde5 , also added
> enforcement per request (which was missing in the previous version). Or did
> you mean "we should NOT enable the file cache automatically"? That can be
> easily changed, of course. IMHO one can enforce it automatically; careful
> programmers do check error logs :)
>


I wouldn't enable the file cache automatically, but that is really not an
implementation problem.

+       if (NULL != ZCG(accel_directives).file_cache) {
+               ZCG(accel_directives).file_cache_only = 1;
+       }
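
For context, the hunk above keys off whether `opcache.file_cache` is configured at all. A sketch of the relevant ini setting (the path is illustrative):

```ini
; enable the file-based second-level cache; the directory must exist
; and be writable by the PHP processes
opcache.file_cache=/var/tmp/php-opcache
; with the hunk above, a configured file_cache would imply file_cache_only
```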



>
> >
> > >
> > > Actually in such case all the processes should switch to file cache?
> >
> >
> > No. Only the processes that weren't able to attach to SHM.
> >
> >
> > > Just not sure how they all would negotiate that when no SHM is
> > > available (probably through files, or a separate shared chunk).
> > >
> >
> > yeah. Processes that use file-cache-only won't be able to negotiate the
> > SHM cache. :(
> >
> ACK, so basically it is the same principle Matt suggested with the
> sidestep cache. I could imagine synchronizing all the processes through
> another shared segment. Not the big one where the whole cache is handled,
> but just a couple of bytes that wouldn't need to be attached at the same
> address. That would allow signaling other file-cache-only processes to
> reset the cache, etc.
>

This idea should work. Mapping a small portion into the same address space
shouldn't be required.

Thanks. Dmitry.


>
> >
> > >
> > > >
> > > > Actually, when I implemented the file-cache I had a thought about
> > > > different storage back-ends (e.g. sqlite or memcache).
> > > > This abstraction might make support for "improperly mapped SHM"
> > > > quite easy.
> > > >
> > > Yeah, that could be useful. Maybe also module-based, so one could even
> > > supply them separately; then an arbitrary storage mechanism could be
> > > provided. E.g., as I've mentioned for mpm_winnt: if there's no
> > > intention to share the cache outside Apache, just one heap for all
> > > could be much simpler, avoiding all that reattach mechanics.
> > >
> >
> > Does mpm_winnt use pure ZTS without processes?
> >
> The master process starts only one child (as documented here:
> http://httpd.apache.org/docs/2.4/mod/mpm_winnt.html). IMHO even if one
> decided to support virtual hosts (not supported by the current solution
> anyway), that could be done relatively simply by creating private heaps
> inside the same child process.
>
> Thanks
>
> Anatol
>
>
