Newsgroups: php.internals
Subject: Re: [PHP-DEV] Re: Windows OpCache bug fix
From: dmitry@zend.com (Dmitry Stogov)
Date: Thu, 8 Oct 2015 11:33:13 +0300
To: Anatol Belski
Cc: Matt Ficken, Pierre Joye, Laruence, PHP Internals, "dmitry@php.net"

A few notes:

- This is for Windows only, right?
- Are you sure the SysV IPC API is available everywhere? I mean shmget() in
  file_cache_fallback_init(). (I didn't know it was available on Windows.)
- I think all the changes should be wrapped in "#ifdef ...FALLBACK_SUPPORT"
  (at least we will be able to remove this easily).
- It's better to add an ini directive to enable/disable this fallback
  triggering, to minimize risks for users.

I would try to implement this in a simpler way. I'll try to find the time.

Thanks. Dmitry.
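As an illustration only, a rough sketch of the kind of build-time guard and ini
directive suggested above; the names ACCEL_FALLBACK_SUPPORT,
opcache.file_cache_fallback, the file_cache_fallback field and the
mapping_attached flag are hypothetical, not taken from the actual patch:

    #ifdef ACCEL_FALLBACK_SUPPORT
        /* hypothetical directive, inside the ZEND_INI_BEGIN()/ZEND_INI_END()
           table, to let users turn the fallback off */
        STD_PHP_INI_BOOLEAN("opcache.file_cache_fallback", "1", PHP_INI_SYSTEM,
            OnUpdateBool, accel_directives.file_cache_fallback,
            zend_accel_globals, accel_globals)
    #endif

    #ifdef ACCEL_FALLBACK_SUPPORT
        if (ZCG(accel_directives).file_cache_fallback && !mapping_attached) {
            /* SHM could not be attached: serve this process from the
               file cache only */
            ZCG(accel_directives).file_cache_only = 1;
        }
    #endif

Gating it both at compile time and at runtime would keep the code easy to
remove again, which seems to be the point of the note above.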
On Wed, Oct 7, 2015 at 2:14 PM, Anatol Belski wrote:
> Hi Dmitry,
>
> > -----Original Message-----
> > From: Dmitry Stogov [mailto:dmitry@zend.com]
> > Sent: Tuesday, October 6, 2015 10:01 AM
> > To: Anatol Belski
> > Cc: Matt Ficken; Pierre Joye; Laruence; PHP Internals; dmitry@php.net
> > Subject: Re: [PHP-DEV] Re: Windows OpCache bug fix
> >
> > On Mon, Oct 5, 2015 at 6:38 PM, Anatol Belski wrote:
> > >
> > > > -----Original Message-----
> > > > From: Dmitry Stogov [mailto:dmitry@zend.com]
> > > > Sent: Monday, October 5, 2015 3:31 PM
> > > > To: Anatol Belski
> > > > Cc: Matt Ficken; Pierre Joye; Laruence; PHP Internals; dmitry@php.net
> > > > Subject: Re: [PHP-DEV] Re: Windows OpCache bug fix
> > > >
> > > > > > > > > Dmitry, I'd have a question to this:
> > > > > > > > > > Also, if we can't map SHM into the desired address space,
> > > > > > > > > > we may map it at any other address and copy the data into
> > > > > > > > > > the process memory, similar to the file cache.
> > > > > > > > > With a randomized memory layout, even if the base were
> > > > > > > > > available and OpenFileMapping worked, some parts of that
> > > > > > > > > memory might already be taken. How exactly could this be
> > > > > > > > > done? Could you please give a couple of pointers?
> > > > > > > >
> > > > > > > > If MapViewOfFileEx(..., wanted_mapping_base) fails, we do
> > > > > > > > MapViewOfFileEx(..., NULL).
> > > > > > > >
> > > > > > > > > Would the file cache always be required then?
> > > > > > > >
> > > > > > > > That is not necessary, but it depends on the implementation,
> > > > > > > > of course.
> > > > > > >
> > > > > > > Thanks for the advice. I was playing with this over the last
> > > > > > > days. There are two usual cases where reattaching can fail ATM:
> > > > > > >
> > > > > > > - https://github.com/php/php-src/blob/PHP-7.0/ext/opcache/shared_alloc_win32.c#L151
> > > > > > >   - the saved address is available but is not suitable
> > > > > > > - https://github.com/php/php-src/blob/PHP-7.0/ext/opcache/shared_alloc_win32.c#L159
> > > > > > >   - the actual MapViewOfFileEx case
> > > > > > >
> > > > > > > An unrelated plain C test shows that MapViewOfFileEx can
> > > > > > > possibly fail when called a second time, too, even with NULL or
> > > > > > > with another address as base. And even if it could map at a
> > > > > > > different base, the internal structures would probably contain
> > > > > > > invalid addresses.
> > > > > >
> > > > > > Right. We might need different code for zend_accel_hash access,
> > > > > > or convert the corresponding structures to PIC. For opcodes,
> > > > > > "invalid addresses" don't matter, because we will copy them into
> > > > > > process memory and fix them up (like in the file cache).
> > > > >
> > > > > Ah, I have to study the file cache code then. But generally it
> > > > > doesn't sound like something that can be done offhand. I've also
> > > > > thought about other things like interned strings (maybe something
> > > > > else); I'm not sure they're stored with the cache.
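The MapViewOfFileEx(..., NULL) retry quoted above, shown as a minimal
standalone Win32 test rather than the actual shared_alloc_win32.c code; the
mapping name and the wanted base address are made-up values:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        void *wanted_base = (void *)0x20000000;   /* made-up preferred base */
        HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                      0, 1 << 20, "demo_opcache_shm");
        if (h == NULL) {
            fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
            return 1;
        }
        /* first try the base address the other processes use ... */
        void *p = MapViewOfFileEx(h, FILE_MAP_ALL_ACCESS, 0, 0, 0, wanted_base);
        if (p == NULL) {
            /* ... and if ASLR or anything else already took it, let the
               system choose; pointers stored inside the segment are then
               invalid for this process and would need fixing up, as with
               the file cache */
            p = MapViewOfFileEx(h, FILE_MAP_ALL_ACCESS, 0, 0, 0, NULL);
        }
        printf("mapped at %p (wanted %p)\n", p, wanted_base);
        if (p) {
            UnmapViewOfFile(p);
        }
        CloseHandle(h);
        return 0;
    }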
> > > > > > > So it looks like there's indeed no profit in doing any retry
> > > > > > > once reattaching to the actually needed base address has failed.
> > > > > > >
> > > > > > > IMHO a usable scenario would be: in case it fails to reattach to
> > > > > > > the exact address, it has to switch to heap. Just for that one
> > > > > > > request, it should get a heap-allocated segment and then
> > > > > > > invalidate all the cache. That way we fulfill the following:
> > > > > > >
> > > > > > > - the request is served for sure
> > > > > > > - the scripts are not actually cached, so there is no divergence
> > > > > > >   from the real cache
> > > > > > >
> > > > > > > A heap fallback memory handler would probably be quite easy to
> > > > > > > implement. What do you think?
> > > > > > >
> > > > > > > Apropos the heap - it also looks like when PHP is used as a
> > > > > > > module under mpm_winnt, all the cache could use heap instead of
> > > > > > > SHM. In that case, there is only one Apache process serving with
> > > > > > > many threads. Unless one wants to share that cache outside
> > > > > > > Apache, using heap there could be a much simpler solution.
> > > > > >
> > > > > > A heap-based cache causes the same problems. It increases the
> > > > > > memory usage and doesn't provide cache consistency. Just falling
> > > > > > back to the file cache may be better.
> > > > >
> > > > > Do you think something like this would suffice as a file cache
> > > > > fallback: https://gist.github.com/weltling/224001a468f04de13693 ?
> > > > > Though it'd still diverge from the "main" cache.
> > > >
> > > > I think we should enable the file cache automatically, but we can set
> > > > ZCG(accel_directives).file_cache_only if the file_cache is already
> > > > enabled.
> > >
> > > I've reworked the approach:
> > > https://gist.github.com/weltling/69bd1e47dc15273edde5 , and also added
> > > enforcement per request (it was missing in the previous version). Or did
> > > you mean "we should NOT enable the file cache automatically"? That can
> > > easily be changed, of course. IMHO one can enforce it automatically;
> > > careful programmers do check error logs :)
> >
> > I wouldn't enable the file cache automatically, but this is really not an
> > implementation problem.
> >
> > +	if (NULL != ZCG(accel_directives).file_cache) {
> > +		ZCG(accel_directives).file_cache_only = 1;
> > +	}
> >
> > > > > > > Actually, in such a case all the processes should switch to the
> > > > > > > file cache?
> > > > > >
> > > > > > No. Only the processes that weren't able to attach to SHM.
> > > > >
> > > > > Just not sure how they would all negotiate that when no SHM is
> > > > > available (probably through files, or a separate shared chunk).
> > > >
> > > > Yeah. Processes that use file-cache-only won't be able to negotiate
> > > > the SHM cache. :(
> > >
> > > ACK, so basically it is the same principle Matt suggested with the
> > > sidestep cache. I could imagine synchronizing all the processes through
> > > another shared segment. Not the big one where all the cache is handled,
> > > but just a couple of bytes that wouldn't require being attached at the
> > > same address. That would allow signalling other file-cache-only
> > > processes for a cache reset, etc.
> >
> > This idea should work. Mapping a small portion into the same address
> > space shouldn't be required.
> >
> > Thanks. Dmitry.
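One way the "couple of bytes" control segment described above could look; the
name php_opcache_ctl, the restart counter and the function are hypothetical,
only meant to show that such a segment needs no fixed base because it stores
no pointers:

    #include <windows.h>

    /* hypothetical layout: no pointers inside, so any base address will do */
    typedef struct {
        volatile LONG restart_count;   /* bumped on each cache reset */
    } accel_control;

    static HANDLE ctl_handle;          /* kept open for the process lifetime */

    static accel_control *attach_control_segment(void)
    {
        ctl_handle = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0,
                                        sizeof(accel_control),
                                        "php_opcache_ctl");
        if (ctl_handle == NULL) {
            return NULL;
        }
        /* note: plain MapViewOfFile, no wanted base address involved */
        return (accel_control *)MapViewOfFile(ctl_handle, FILE_MAP_ALL_ACCESS,
                                              0, 0, 0);
    }

A file-cache-only process could then remember the restart_count it last saw
and drop its local state whenever the shared value differs, while a reset
would simply call InterlockedIncrement() on the counter.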
> > > > > >
> > > > > > Actually, when I implemented the file cache, I had a thought about
> > > > > > different storage back-ends (e.g. sqlite or memcache). This
> > > > > > abstraction might make support for "improperly mapped SHM" quite
> > > > > > easy.
> > > > >
> > > > > Yeah, that could be useful. Maybe also module based, so one could
> > > > > even supply them separately; then an arbitrary storage mechanism
> > > > > could be provided. F.e., as I've mentioned for mpm_winnt - if
> > > > > there's no intention to share the cache outside Apache, just one
> > > > > heap for all could be much simpler and would avoid all that reattach
> > > > > mechanics.
> > > >
> > > > Does mpm_winnt use pure ZTS without processes?
> > >
> > > The master process starts only one child (also documented here:
> > > http://httpd.apache.org/docs/2.4/mod/mpm_winnt.html). IMHO even if one
> > > decided to support virtual hosts (not supported by the current solution
> > > anyway), that could be done relatively simply by creating private heaps
> > > inside the same child process.
>
> Updated the patch with one small fix:
> https://gist.github.com/weltling/925c0a774fa1261ede58 . I think it's ready
> to be applied, could you please check? One could still shrink it to Windows
> only, or leave it as is so the fallback works cross-platform.
>
> Thanks
>
> anatol
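Purely as an illustration of the back-end abstraction mentioned in the quoted
thread (none of these names exist in opcache), a minimal function table could
look roughly like this:

    /* illustrative only - not an opcache API */
    #include <stddef.h>

    typedef struct _zend_file_cache_backend {
        int  (*startup)(void);
        int  (*store)(const char *key, const void *buf, size_t len);
        int  (*load)(const char *key, void *buf, size_t *len);
        void (*invalidate)(const char *key);
        void (*shutdown)(void);
    } zend_file_cache_backend;

    /* the default back-end would wrap the existing file cache; alternatives
       (sqlite, memcache, or a per-process heap for mpm_winnt) would supply
       their own function table, selected via an ini directive */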