I asked this last night on the general mailing list and have also
asked around about it. Nobody seems to know. I normally wouldn't ask a
support question on a developers mailing list, but since nobody else
seems to know, I thought that perhaps a developer would.
Why would I be running into this STDIN data size limit when using
proc_open? I've tried setting the limits for apache to unlimited in
/etc/security/limits.conf just to see if it's a system limit.
----- Forwarded message from Mark Krenz mark@suso.org -----
Date: Sun, 22 Jan 2006 00:25:33 +0000
From: Mark Krenz mark@suso.org
To: php-general@lists.php.net
Subject: [PHP] proc_open and buffer limit?
I'm using PHP 5.1.1 on Apache 2.0.54 on Gentoo Linux. I've been trying
to write a program that passes information to another program using
proc_open; however, when I do, it only passes the first 65536 bytes of
the stream and cuts off the rest. To make sure it's not the program I'm
sending to, I tried using /bin/cat instead and got the same problem.
Below I've included the code that I'm using, which for the most part is
from the proc_open documentation page. For testing, I'm reading from a
word dictionary which is over 2MB in size. Is there something I'm
missing about using proc_open?
$program = "/bin/cat";
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/error-output.txt", "a")
);
$cwd = '/var/www';
$env = array('HOME' => '/var/www');
$process = proc_open($program, $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
    stream_set_blocking($pipes[0], FALSE);
    stream_set_blocking($pipes[1], FALSE);
    $input = '';
    $handle = fopen("/usr/share/dict/words", "r");
    while (!feof($handle)) {
        $input .= fread($handle, 8192);
    }
    fwrite($pipes[0], $input);
    fclose($handle);
    fclose($pipes[0]);
    $output = '';
    while (!feof($pipes[1])) {
        $output .= fgets($pipes[1], 8192);
    }
    fclose($pipes[1]);
    print "<PRE>$output</PRE><BR><BR>\n";
    $return_value = proc_close($process);
    echo "command returned $return_value\n";
}
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
--
PHP General Mailing List (http://www.php.net/)
----- End forwarded message -----
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
My guess at first look is that you fill the stdout buffer because you don't
read from it until you stop piping data to stdin.
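
(To make the deadlock concrete: if the child's stdout pipe fills while the
parent is still stuffing its stdin, both processes block and the transfer
stalls. A minimal sketch of one way around it, interleaving writes with
stream_select()-gated reads; the input file, chunk size, and use of /bin/cat
are placeholder assumptions, not code from this thread:)

$process = proc_open('/bin/cat', array(
    0 => array('pipe', 'r'),   // child's stdin
    1 => array('pipe', 'w'),   // child's stdout
), $pipes);

$in = fopen('/usr/share/dict/words', 'r');
$output = '';
stream_set_blocking($pipes[1], false);   // polling reads must not block the write loop

while (!feof($in)) {
    fwrite($pipes[0], fread($in, 8192)); // feed one small chunk to the child
    // drain anything the child has produced so far, so its stdout
    // buffer never fills up and stalls it (and, in turn, our writes)
    $r = array($pipes[1]);
    $w = $e = null;
    if (stream_select($r, $w, $e, 0) > 0) {
        $output .= fread($pipes[1], 8192);
    }
}
fclose($in);
fclose($pipes[0]);                         // EOF tells the child we're done
stream_set_blocking($pipes[1], true);
$output .= stream_get_contents($pipes[1]); // collect whatever is left
fclose($pipes[1]);
proc_close($process);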
--
Nicolas Bérard Nault (nicobn@gmail.com)
D.E.C. student in Sciences, Lettres & Arts
Cégep de Sherbrooke
Website: http://www.lapsus-engineer.org
"Freedom is a penal colony as long as a single man is enslaved on the
earth." - Albert Camus, Les Justes.
Well, the program that I'm really doing this with has a -o option to
write its data out to a file, and when I use that option I have the same
problem: it only takes the first 64KB on stdin.
On Mon, Jan 23, 2006 at 04:48:12AM GMT, Nicolas Bérard Nault [nicobn@gmail.com] said the following:
My guess at first look is that you fill the stdout buffer because you don't
read from it until you stop piping data to stdin.
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
There are quite a few bad streams usages there.
I'd rewrite your code like this:
$words = fopen('/usr/share/dict/words', 'r');
stream_copy_to_stream($words, $pipes[0]); // stream the file into the child's stdin
fclose($pipes[0]);                        // closing stdin signals EOF to the child
fclose($words);
$output = stream_get_contents($pipes[1]); // read the child's stdout to EOF
fclose($pipes[1]);
proc_close($process);
My guess at the cause of the problem was that you're trying to write
2MB into a pipe, and the underlying write() syscall could only write
64k; since you're ignoring the return code from fwrite(), you're
missing this vital fact.
Using the streams functions in PHP 5 helps you to write code that
"does what I mean", and makes for shorter, more readable code.
--Wez.
PS: http://netevil.org/talks/PHP-Streams-Lucky-Dip.pdf
has some tips and insider knowledge on using streams in PHP.
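
(To make the fwrite() point concrete, a minimal sketch of a write loop that
honors the return value instead of discarding it; $input and $pipes[0] are
assumed to come from the surrounding script, and the pipe is assumed to be
in blocking mode with the child draining its end:)

$pos = 0;
$len = strlen($input);
while ($pos < $len) {
    // fwrite() may accept fewer bytes than requested -- for example,
    // only what currently fits in the pipe buffer -- and its return
    // value says how many bytes were actually taken
    $written = fwrite($pipes[0], substr($input, $pos));
    if ($written === false) {
        die("write to child failed after $pos bytes\n");
    }
    $pos += $written;
}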
Thanks for your help. stream_copy_to_stream does seem like a better
way to go. However, I still have the same problem: only 65536 bytes are
written to /tmp/output.txt. Here is the new source code based on your
ideas:
$program = "/usr/bin/dd of=/tmp/output.txt";
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/error-output.txt", "a")
);
$cwd = '/var/www';
$env = array('HOME' => '/var/www');
$process = proc_open($program, $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
    stream_set_blocking($pipes[0], FALSE);
    stream_set_blocking($pipes[1], FALSE);
    $handle = fopen("/usr/share/dict/words", "r");
    stream_copy_to_stream($handle, $pipes[0]);
    fclose($pipes[0]);
    fclose($handle);
    $output = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    print "<PRE>$output</PRE><BR><BR>\n";
    $return_value = proc_close($process);
    echo "command returned $return_value\n";
}
I switched to using dd instead of cat because using cat made the
situation more complex (because it sends back data as it gets it). The
program I'm going to eventually do this with doesn't do that.
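
(An aside on the symptom: 65536 bytes is exactly the pipe buffer capacity on
the Linux kernels of this era, and both pipes above are set non-blocking, so
a write that doesn't fit is cut short rather than waited on. Since dd here
writes to a file rather than to its stdout, a hedged experiment is to leave
the pipes blocking and check stream_copy_to_stream()'s return value:)

// no stream_set_blocking(..., FALSE) calls: a blocking pipe makes the
// copy wait for the child to drain it instead of stopping short
$handle = fopen("/usr/share/dict/words", "r");
$copied = stream_copy_to_stream($handle, $pipes[0]);
fclose($pipes[0]);
fclose($handle);
echo "copied $copied of " . filesize("/usr/share/dict/words") . " bytes\n";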
On Mon, Jan 23, 2006 at 02:08:55PM GMT, Wez Furlong [kingwez@gmail.com] said the following:
There are quite a few bad streams usages there. [...]
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
I suggest running the script using strace:
strace -e trace=read,write php myscript.php
and taking a look to see what the underlying read/write syscalls are up to.
--Wez.
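
(Expanding on that suggestion a little: sending the trace to a file and then
searching it for write() calls makes a short write easy to spot; the paths
below are placeholders:)

strace -e trace=read,write -o /tmp/php.trace php myscript.php
grep 'write(' /tmp/php.trace | tail -n 20

A write() that is handed a large buffer but returns 65536 would point
straight at the pipe capacity.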