hi,

I need karma to commit test fixes:
- session_encode_basic - added serialize_precision=100 ini setting
- fix for http://bugs.php.net/48203: there's a segfault when the
  CURLOPT_STDERR file pointer is closed before calling curl_exec, like this:
$fp = fopen(dirname(__FILE__) . '/bug48203.tmp', 'w');
$ch = curl_init();
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_STDERR, $fp);
curl_setopt($ch, CURLOPT_URL, getenv("PHP_CURL_HTTP_REMOTE_SERVER"));
fclose($fp); // <-- premature close of $fp caused a crash!
curl_exec($ch); // segfault
All tests run OK on PHP 5.3, PHP 5.4 and trunk.
Could somebody review this patch, since I'm new to PHP development and
could have made some silly mistakes?
I already have an account: shein
--
Regards,
Shein Alexey
hi,
> I need karma to commit test fixes:
>
> - session_encode_basic - added serialize_precision=100 ini setting

Is it not fixed already?

> - fix for http://bugs.php.net/48203: there's a segfault when

Pls open a bug report and attach the patch to it, I will check and
apply it later today.
Thanks for your work,
Cheers,
Pierre
@pierrejoye | http://blog.thepimp.net | http://www.libgd.org
hi,
>> I need karma to commit test fixes:
>> - session_encode_basic - added serialize_precision=100 ini setting
>
> Is it not fixed already?

There were other tests with missing serialize_precision, but this one
wasn't fixed, imo:
http://svn.php.net/viewvc/php/php-src/trunk/ext/session/tests/session_encode_basic.phpt
Tyrael
2011/5/17 Pierre Joye pierre.php@gmail.com:
>> hi,
>>
>> I need karma to commit test fixes:
>> - session_encode_basic - added serialize_precision=100 ini setting
>
> Is it not fixed already?
As Tyrael already said, this one was missed.

>> - fix for http://bugs.php.net/48203: there's a segfault when
>
> Pls open a bug report and attach the patch to it, I will check and
> apply it later today.
Should I reopen #48203 or create a new one?
--
Regards,
Shein Alexey
Created http://bugs.php.net/bug.php?id=54798.
--
Regards,
Shein Alexey
On Tue, 17 May 2011 14:40:53 +0100, Alexey Shein confik@gmail.com
wrote:
> - fix for http://bugs.php.net/48203: there's a segfault when the
> CURLOPT_STDERR file pointer is closed before calling curl_exec, like this:
>
> $fp = fopen(dirname(__FILE__) . '/bug48203.tmp', 'w');
> $ch = curl_init();
> curl_setopt($ch, CURLOPT_VERBOSE, 1);
> curl_setopt($ch, CURLOPT_STDERR, $fp);
> curl_setopt($ch, CURLOPT_URL, getenv("PHP_CURL_HTTP_REMOTE_SERVER"));
> fclose($fp); // <-- premature close of $fp caused a crash!
> curl_exec($ch); // segfault
>
> All tests run ok on php5.3 php5.4 and trunk.
> Could somebody review this patch since I'm new to developing php and
> could make some silly mistakes?
A few remarks:
- Isn't this supposed to be fixed? Was there something that triggered a
  regression?
- If this strategy is used (checking whether the resource associated with
  the stored zval is valid), how about curl_multi_exec?
- I think a better strategy would be to just dup the file descriptor
  gotten after the cast in curl_setopt, store it (instead of storing the
  zval) and close it on curl handle destruction. This way we wouldn't have
  to worry about zval refcounts or whether the file descriptor obtained is
  still valid. Could you see if this other strategy works? (Otherwise I can
  do it later this week.)
--
Gustavo Lopes
> A few remarks:
>
> - Isn't this supposed to be fixed? Was there something that triggered a
> regression?
As far as I can see, Jani only checked the case where an already closed
file descriptor was passed to curl_setopt. Here the check is performed in
curl_exec, i.e. the file descriptor was valid when curl_setopt was
called, and it's closed/invalid by the time curl_exec is called.
> - If this strategy is used (checking whether the resource associated
> with the stored zval is valid), how about curl_multi_exec?
The logic is simple - if the resource is OK, then go with it, else it's
changed to the default stderr. I did not test it with curl_multi_exec(),
I just took the failing phpt file and got it working :)
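[Editorial note: the "validate before use" fallback described above can be sketched at the descriptor level. This is an invented illustration, not ext/curl code; pick_stderr_fd is a hypothetical name, and fcntl(F_GETFD) is used here as a cheap validity probe.]

```c
#include <fcntl.h>
#include <unistd.h>

/* If the saved descriptor is no longer valid (e.g. the script closed it
 * between curl_setopt and curl_exec), fall back to the default stderr
 * instead of writing through a dangling handle. */
int pick_stderr_fd(int saved_fd)
{
    if (saved_fd >= 0 && fcntl(saved_fd, F_GETFD) != -1) {
        return saved_fd;          /* descriptor still open: keep using it */
    }
    return STDERR_FILENO;         /* closed/invalid: default stderr */
}
```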
> - I think a better strategy would be to just dup the file descriptor
> gotten after the cast in curl_setopt, store it (instead of storing the
> zval) and close it on curl handle destruction. This way we wouldn't
> have to worry about zval refcounts or whether the file descriptor
> obtained is still valid. Could you see if this other strategy works?
> (otherwise I can do it later this week)
Yes, I like your strategy, I will try to do it. Also I would be happy if
somebody could answer my questions, because "all documentation is
written in C" is not so easy to understand :)
--
Regards,
Shein Alexey
> - I think a better strategy would be to just dup the file descriptor
> gotten after the cast in curl_setopt, store it (instead of storing the
> zval) and close it on curl handle destruction. This way we wouldn't
> have to worry about zval refcounts or whether the file descriptor
> obtained is still valid. Could you see if this other strategy works?
> (otherwise I can do it later this week)
Ok, I made it work (thanks to Pierre and Johannes for the consultation).
You were right, the bug remains with curl_multi_exec too.
BTW it happens not only with CURLOPT_STDERR but also with all other
options that take an fp as a parameter (CURLOPT_FILE,
CURLOPT_WRITEHEADER, CURLOPT_INFILE), so I added them to the test and
made a separate test for curl_multi_exec (see the patch).
After some thought, I think this is a case where the user really wants
to shoot himself in the foot - probably from the user's pov we should
generate a warning that we can't write to the supplied pointer (but
actually we can :)) like my previous patch did, but that raised a
couple of problems:
- curl_multi_exec is called directly without interception from PHP,
  and creating a wrapper just for this check seems like overkill to me
- we would need to add 3 more checks to restore default values for all
  fp options (see above) in two places, curl_exec and curl_multi_exec -
  again overkill.
So I decided to go with this patch.
What do you think?
P.S. This patch is just against trunk; if it's OK I will add 5.3 and
5.4 versions too.
Regards,
Shein Alexey
On Fri, 20 May 2011 07:59:51 +0100, Alexey Shein confik@gmail.com
wrote:
>> - I think a better strategy would be to just dup the file descriptor
>> gotten after the cast in curl_setopt, store it (instead of storing the
>> zval) and close it on curl handle destruction. This way we wouldn't
>> have to worry about zval refcounts or whether the file descriptor
>> obtained is still valid. Could you see if this other strategy works?
>> (otherwise I can do it later this week)
>
> Ok, I made it work (thanks to Pierre and Johannes for the
> consultation). You were right, the bug remains with curl_multi_exec
> too.
> BTW it happens not only with CURLOPT_STDERR but also with all other
> options that take an fp as a parameter (CURLOPT_FILE,
> CURLOPT_WRITEHEADER, CURLOPT_INFILE), so I added them to the test and
> made a separate test for curl_multi_exec (see the patch).
> After some thought, I think this is a case where the user really wants
> to shoot himself in the foot - probably from the user's pov we should
> generate a warning that we can't write to the supplied pointer (but
> actually we can :)) like my previous patch did, but that raised a
> couple of problems:
> - curl_multi_exec is called directly without interception from PHP,
> and creating a wrapper just for this check seems like overkill to me
> - we would need to add 3 more checks to restore default values for all
> fp options (see above) in two places, curl_exec and curl_multi_exec -
> again overkill.
> So I decided to go with this patch.
> What do you think?
First some considerations about your patch:

- You seem to be leaking the FILE* variable you created (you just removed
  the zval_ptr_dtor(&ch->handlers->std_err); you did not add any fclose
  call).
- You say the problem also occurs with CURLOPT_FILE, CURLOPT_WRITEHEADER
  and CURLOPT_INFILE. But again, you don't consider that the curl
  extension doesn't close the stdio pointers.
- There is no error checking in the calls to fileno, dup or fdopen.
- php_stream.mode cannot be used directly; see
  php_stream_mode_sanitize_fdopen_fopencookie.

Now, I'm starting to think something in the line of your original patch
is better. Of course, curl_multi_exec and the other options would have
to be considered too.

The reason is that I hadn't considered that the curl extension casts
into a FILE*, not an fd. Sure, sometimes you can still dup the fd and
create another stdio pointer, but this is not always the case (see
fopencookie). So this represents a removal of some functionality.

Another solution would be to force the stream to preserve its handle
upon closing. This is possible with PHP_STREAM_FLAG_NO_CLOSE, but I
don't think deferring the actual file closing to script shutdown is a
good idea.
--
Gustavo Lopes
> Now, I'm starting to think something in the line of your original
> patch is better. Of course, curl_multi_exec and the other options
> would have to be considered too.

Yes, that's what I told Alexey on IRC as well. Especially as I don't
see much of interest in keeping streams around if they are not usable
anymore from a script.
Cheers,
Pierre
@pierrejoye | http://blog.thepimp.net | http://www.libgd.org
On Fri, 20 May 2011 12:14:07 +0100, Pierre Joye pierre.php@gmail.com
wrote:

> On Fri, May 20, 2011 at 1:12 PM, Gustavo Lopes glopes@nebm.ist.utl.pt
> wrote:
>
>> Now, I'm starting to think something in the line of your original
>> patch is better. Of course, curl_multi_exec and the other options
>> would have to be considered too.
>
> Yes, that's what I told Alexey on IRC as well. Especially as I don't
> see much of interest in keeping streams around if they are not usable
> anymore from a script.
It's not usable from the script, but if the stream represents e.g. a
file or a socket, the side effect of it being written to would still be
there. So the only case where it would be completely useless is if it
were a temporary stream.

An interesting fact is that this bug could be "solved" rather simply in
5.3 by duplicating the zval (and therefore incrementing the refcount of
the resource) instead of just incrementing the zval refcount of the
stream. The reason is that PHP's fclose() used not to actually close
the file if its resource refcount was > 1, but I changed that in trunk.
Ultimately, making fclose() not really do its job seems very flawed to
me, and someone who really wanted to could call fclose() twice to force
the stream closed, but it's important to keep in mind that this change
may have made some bugs more visible.
--
Gustavo Lopes
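[Editorial note: the two close semantics contrasted above can be modelled with a toy structure. This is purely illustrative; toy_stream and its functions are invented and bear no relation to PHP's actual stream code.]

```c
#include <stdlib.h>

/* Toy model of the refcount behaviour discussed above: in 5.3,
 * fclose() only really closed the file once the resource refcount
 * dropped to zero; in trunk, close acts immediately. */
typedef struct {
    int refcount;   /* how many references point at the stream */
    int closed;     /* has the underlying file actually been closed? */
} toy_stream;

toy_stream *toy_open(void)
{
    toy_stream *s = malloc(sizeof *s);
    s->refcount = 1;
    s->closed = 0;
    return s;
}

void toy_addref(toy_stream *s) { s->refcount++; }

/* 5.3-style: closing is a no-op while other references remain */
void toy_close_53(toy_stream *s)
{
    if (--s->refcount == 0) {
        s->closed = 1;
    }
}

/* trunk-style: close really closes, regardless of refcount */
void toy_close_trunk(toy_stream *s)
{
    s->closed = 1;
    --s->refcount;
}
```

Under the 5.3 model, a duplicated zval (second reference) silently keeps the file open past fclose(); under the trunk model, the premature close becomes visible immediately, which is exactly why this class of bug surfaced.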
So what do you think should be done here? Check closed streams and
reset them to default as in my first try?
--
Regards,
Shein Alexey