Hi,
Recently, I've found that the OPcache optimizer misses a lot of opportunities,
because it handles only one op_array at a time. So it definitely can't
perform any inter-function optimizations (e.g. inlining).
Actually, it was not very difficult to switch to a "script at once" approach.
The attached patch demonstrates it and adds per-script constant
substitution, illustrated by the following script:
<?php
define("FOO", 1);
function foo() {
    echo FOO . "\n"; // the optimizer will replace this with: echo "1\n";
}
?>
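For illustration, here is a minimal Python sketch of what such a per-script constant substitution pass could look like. This is not Zend engine code: the Op structure and the opcode names (DEFINE, FETCH_CONSTANT, LITERAL) are invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical opcode representation; names are invented for this sketch
# and do not match the real Zend opcodes.
@dataclass
class Op:
    name: str
    const_name: str = ""
    value: object = None

def collect_script_constants(main_ops):
    """Collect constants from top-level define() calls of the script."""
    return {op.const_name: op.value for op in main_ops if op.name == "DEFINE"}

def substitute_constants(op_arrays, consts):
    """Fold constant fetches into literals across ALL op_arrays of the
    script -- something a one-op_array-at-a-time optimizer cannot do."""
    for ops in op_arrays:
        for op in ops:
            if op.name == "FETCH_CONSTANT" and op.const_name in consts:
                op.name, op.value = "LITERAL", consts[op.const_name]

# Model the script above: define("FOO", 1) at top level, echo FOO in foo().
main_ops = [Op("DEFINE", "FOO", 1)]
foo_ops = [Op("FETCH_CONSTANT", "FOO"), Op("ECHO")]
substitute_constants([main_ops, foo_ops], collect_script_constants(main_ops))
print(foo_ops[0].name, foo_ops[0].value)  # LITERAL 1
```

The key point of the sketch is that the constant table is built from the script's main op_array but applied to every op_array of the script, which is exactly what a whole-script optimizer view makes possible.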
Of course, I ran the PHP test suite and it passes all the same tests.
Personally, I think it's safe to include this patch in 5.5, and it would
give a green light to some other advanced optimizations in 5.5 (e.g.
converting INIT_FCALL_BY_NAME into DO_FCALL).
Any thoughts?
Thanks. Dmitry.
2013/4/10 Dmitry Stogov dmitry@zend.com
--
Hi!
Many obvious optimizations are not used due to the fact that script
translation into opcodes has to be fast. The nature of PHP dictated
that, and this was reiterated countless times on this mailing list by the
core developers.
To do advanced stuff, you have to create some sort of pre-compile step, or
store the compiled code reliably on disk, so that if the memory cache is
dropped or a restart is done, there is no significant performance hit while
all the code compiles into optimized opcodes again.
I would also imagine that a good part of the optimizations would require
multiple files to be processed and optimized together, but due to the
dynamic nature of PHP, opcode compilation is done on a per-file basis, and
so are the optimizations.
It's very commendable that you want to push optimizations, but there is
some fundamental groundwork that needs to be taken care of first to do some
really good stuff.
My $0.02
Hello,
If applying optimizations in multiple passes would be a problem for
speed, especially on the first request, then maybe a way to solve this
would be to have a configurable variable like opcache.passes, which is
between 1 and 10 (let's say), and then have the engine do something like
this:
- load the file, compile it, apply a first round of 'quick'
optimizations, and mark it as passed once;
- on the next request, load the compiled version, apply another round of
optimizations, and mark it as a second pass;
- repeat the above step until the number of optimization passes on the said
file equals the opcache.passes value.
This way only the initial requests will be affected, but the hit on those
requests is smaller than applying all the steps at once.
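As a rough illustration, the per-request incremental scheme could look like the following Python sketch. Everything here is hypothetical: opcache.passes is a proposed setting, not an existing OPcache feature, and the cache layout and pass functions are invented for the sketch.

```python
# Hypothetical sketch of the proposed opcache.passes scheme; the setting,
# the cache layout and the pass functions are all invented for illustration.

OPCACHE_PASSES = 3  # the proposed INI value, say between 1 and 10

# Ordered optimization rounds, cheapest first; each transforms the opcode
# list (here they just tag it so the progression is visible).
PASSES = [
    lambda ops: ops + ["quick-pass"],
    lambda ops: ops + ["second-pass"],
    lambda ops: ops + ["heavy-pass"],
]

cache = {}  # filename -> {"ops": [...], "passes_done": n}

def load(filename, compile_file):
    entry = cache.get(filename)
    if entry is None:
        # First request: compile and run only the cheapest round.
        entry = {"ops": PASSES[0](compile_file(filename)), "passes_done": 1}
        cache[filename] = entry
    elif entry["passes_done"] < OPCACHE_PASSES:
        # Each following request pays for just one more round.
        entry["ops"] = PASSES[entry["passes_done"]](entry["ops"])
        entry["passes_done"] += 1
    return entry["ops"]

compile_file = lambda name: []      # stand-in for the real compiler
print(load("a.php", compile_file))  # ['quick-pass']
print(load("a.php", compile_file))  # ['quick-pass', 'second-pass']
print(load("a.php", compile_file))  # ['quick-pass', 'second-pass', 'heavy-pass']
print(load("a.php", compile_file))  # unchanged: all passes already applied
```

The cost of the heavier rounds is thus spread over the first few requests instead of being paid all at once on the first one.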
I'm really not sure if it's that easy to implement, but 'on paper' this
could be the way to solve it, IMHO.
What do you think, does it make sense?
Best regards
Florin Patan
https://github.com/dlsniper
http://www.linkedin.com/in/florinpatan
My gut/educated guess is that in fact it's not going to be a problem with
the kinds of optimizations that are practical for our execution engine (our
ability to be ultra-creative with optimizations is very limited compared
to, say, gcc). I'd defer solutions to that problem until we actually see
that it's a real problem to begin with. Generally, the bookkeeping involved
in selectively and intelligently applying optimizations is probably going
to be more costly than doing them in the first place, but that obviously
depends on the nature of the optimizations we'll come up with.
Zeev
2013/4/10 Florin Patan florinpatan@gmail.com
It could be a way out for heavy optimizations. The question is: will there
be any? :)
For now, the optimizations we do are quite cheap.
They may double the compilation time on the first request, but on the
following requests we get it back.
Once we come to really expensive optimizations, we will do them "offline"
(in the context of a separate process).
Thanks. Dmitry.
Speaking as a userspace developer and site admin, I'd be fine with
trading a more expensive compilation for a runtime improvement. Even a
100% increase in compilation time would pay for itself over only a dozen
or so requests (assuming the runtime improvements are non-trivial, too).
Naturally some optimizations are harder to do than others given PHP's
architecture, but trading a more expensive compile for a cheaper runtime,
even if not a 1:1 trade, would be a win IMO.
-- Larry Garfield
I don't think this is a safe optimization. In the following case it would
output 'b' instead of 'a', which is the correct result:
a.php:
<?php
define('FOO', 'a');
include('b.php');
?>
b.php:
<?php
define('FOO', 'b');
echo FOO;
?>
It is certainly not likely for a constant to be defined twice, but PHP
currently just issues a notice and continues with the first constant's value.
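The first-definition-wins behavior can be simulated to make the mismatch concrete. This is a Python sketch of the semantics only, not PHP itself:

```python
# Simulate PHP's define(): redefining an existing constant only raises a
# notice, and the first value is kept.
constants = {}

def define(name, value):
    if name in constants:
        return False  # PHP issues a notice and keeps the first value
    constants[name] = value
    return True

define("FOO", "a")   # a.php: define('FOO', 'a'); include('b.php');
define("FOO", "b")   # b.php: define('FOO', 'b'); echo FOO;

runtime_value = constants["FOO"]  # what PHP actually echoes: 'a'
folded_value = "b"                # what an optimizer folding b.php's own
                                  # define() into the echo would bake in
print(runtime_value, folded_value)  # a b -- the fold changes behavior
```

Because b.php can be included after FOO already exists, folding b.php's define() into its echo in isolation produces output that disagrees with the runtime semantics.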
Good point.
Thanks. Dmitry.
Hi!
It's more likely than you think, especially given that on the way
between the define and the include there could be other code doing various
checks and calculations. In general, any optimization involving global
context is very dangerous in PHP, since the same script can be run in
different global contexts.
--
Stanislav Malyshev, Software Architect
SugarCRM: http://www.sugarcrm.com/
(408)454-6900 ext. 227
-----Original Message-----
From: Arvids Godjuks [mailto:arvids.godjuks@gmail.com]
Sent: Wednesday, April 10, 2013 4:08 PM
To: PHP Internals
Subject: Re: [PHP-DEV] OPcache optimizer improvement in PHP-5.5?
I think it very much depends on the nature of the optimizations. For the
vast majority of optimizations we can apply to PHP's execution architecture,
I actually don't think we need to go back to the fundamentals and
consider things like storing pre-compiled scripts on disk. The compiler,
even with optimization passes, still takes a split second to execute, which
means a 'cold boot' (e.g. when doing a restart) won't be a noticeably
painful process. As long as you end up reusing the results of that process
a lot more frequently than you have to recreate them, you're fine.
Note that our experience was that reading binary serialized data from disk
isn't significantly faster than invoking the compiler in the first place:
you still have to read the data from disk, you still have to analyze it and
backpatch addresses, etc. I know that some people here are revisiting that
assertion, which is absolutely fine, but the assumption that saving
precompiled files on disk eliminates compilation overhead is wrong. If
anything it gives a marginal benefit.
From my POV, I think we're fine with any optimization that does not break
the single-file barrier (in other words, no cross-file optimizations). The
one Dmitry suggested falls into that category, so I think it's fine, and
it's mostly a question of whether we want it in 5.5 or only in 5.6.
Zeev
2013/4/10 Zeev Suraski zeev@zend.com
Yep, I have to agree here. It all depends on the optimizations in question
and the time it takes to perform them.
Regarding the storage of compiled opcode on disk: my thought was to read
from disk only if there are heavy enough optimizations present, and to make
it a one-time thing to populate the RAM cache. Also, right now invoking the
compiler is not really slower than reading from disk, but in the future,
when there are numerous optimization passes, it could become significant.
Anyway, these are just my thoughts on the subject.
People are also asking for the ability to deploy already-compiled scripts
(commercial software, faster deployment, etc.); this may be part of a
bigger functionality in the future.
hi Dmitry,
Mixed feelings; I like this "simple" optimization and the possible
gains, but 5.5 is very close to RC.
Cheers,
Pierre
@pierrejoye
Yes, and that's the reason I'm asking for agreement.
I may commit it into master and pecl, but it means that pecl branch is
going to be ahead of PHP-5.5.
Thanks. Dmitry.
Hi!
I may commit it into master and pecl, but it means that pecl branch is
going to be ahead of PHP-5.5.
In general, I think there's no harm in trying out new stuff on PECL -
and marking those as alpha/beta initially - pecl has a mechanism to
choose if you want only stable or also bleeding edge releases, so we can
try out stuff without compromising stability for folks that run it in
production.
--
Stanislav Malyshev, Software Architect
SugarCRM: http://www.sugarcrm.com/
(408)454-6900 ext. 227
The attached patch demonstrates it and adds per script constants
substitution explained in the following script
Will this case work properly:
a.php:
<?php
$flag = true;
include('c.php');
?>
b.php:
<?php
$flag = false;
include('c.php');
?>
c.php:
<?php
if ($flag) {
    define('C', 1);
} else {
    define('C', 2);
}
echo C;
?>
and then request #1 to a.php and request #2 to b.php?
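For what it's worth, the hazard can be restated in plain userland PHP (this is only an illustration of the expected runtime semantics, not of OPcache internals; `c_php()` is a made-up stand-in for c.php):

```php
<?php
// Userland illustration: whether C becomes 1 or 2 in c.php depends on a
// variable set by the *including* script, so an optimizer must not fold
// `echo C;` into a fixed value when c.php's cached opcodes are shared
// between requests to a.php and b.php.
function c_php(bool $flag): int
{
    // mirrors c.php: define('C', $flag ? 1 : 2); echo C;
    return $flag ? 1 : 2;
}

$requestA = c_php(true);   // request #1: a.php sets $flag = true
$requestB = c_php(false);  // request #2: b.php sets $flag = false

// the same cached script must produce two different outcomes
assert($requestA === 1 && $requestB === 2);
echo $requestA, " ", $requestB, "\n"; // prints "1 2"
```

This is exactly why the patch can only substitute constants whose definition is unconditional within the script being compiled.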
johannes
Hey,
I think it's a great idea. If all op_arrays in one script share the same
literals table - say, the main scope's literals table - then we can make
every class entry and function entry share the same constant literal.
Imagine that: the same class (or function) would only need to be looked up
once per script; the rest would all hit the cache. I think we can gain a
significant performance improvement there.
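The sharing described above can be sketched in userland PHP (purely illustrative - the real literal tables live in C inside the engine, and the scope names here are made up): merge each op_array's literal table into one shared, deduplicated table and remap the old indices.

```php
<?php
// Conceptual sketch: per-op_array literal tables for a script...
$literalTables = [
    'main'  => ['FOO', "\n", 'foo'],  // literals used by the main scope
    'foo()' => ['FOO', "\n"],         // literals used by function foo()
];

$shared = [];  // literal value => index in the shared table
$remap  = [];  // per op_array: old index => shared index

foreach ($literalTables as $scope => $literals) {
    foreach ($literals as $i => $lit) {
        if (!array_key_exists($lit, $shared)) {
            $shared[$lit] = count($shared);  // store each literal once
        }
        $remap[$scope][$i] = $shared[$lit];
    }
}

// 'FOO' and "\n" are stored (and would be looked up) only once per script.
assert(count($shared) === 3);
```

The payoff hinted at here is that a lookup cached for one op_array's literal is automatically a hit for every other op_array in the script that uses the same literal.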
thanks
--
Laruence Xinchen Hui
http://www.laruence.com/