Hi all
Is there anyone who needs anything to be able to debug bug reports,
run sanity checks, develop, or whatever?
We really need to improve our architecture/platform/OS testing, even
if it's just "make all" compile checks.
On php-webmaster@ we get a lot of mail from companies who can provide
us with whatever OS/platform/architecture we could possibly need,
asking how they can help.
I get a lot of these kinds of inquiries, and have a good chunk which I
haven't replied to at all, so please let me know if you need anything
at all.
-Hannes
I can provide multiple machines as well, from single-chip dual-core to
quad-chip quad-core...
I think we should look at getting Buildbot set up and deployed to as
many machines as humanly possible. It would at least give us a heads-up
when we've written something that doesn't work.
You can see the ones for Google Chromium at http://build.chromium.org/buildbot/waterfall/waterfall
It just needs someone to spend the time writing the initial slave
deployment script. Any volunteers? :)
Scott
--
Sorry for the second email, Hannes; I forgot to reply-all.
Hi,
On Saturday, 27 June 2009 at 02:24 +0100, Scott MacVicar wrote:
It just needs someone to spend the time writing the initial slave
deployment script. Any volunteers? :)
This looks like something that matches the set of things I can do.
I already had some ideas about this, and have already built an
interface that can display various test information on a single page:
http://php.magicaltux.net/browse/ (lots of memleaks reported on 5.3 &
HEAD; they seem to be glibc-related and not real memleaks. Also, this
is not up to date, as I haven't run the tests for a while and didn't
automate it because it takes a lot of CPU.)
Of course we could also make something that looks like Google's
waterfall, but I don't find it very easy to read.
The good thing about the interface I made is that you can see the
status for all modules directly, e.g.:
http://php.magicaltux.net/browse/ext/
I could provide architecture columns instead of branch columns, and
let the user select what they want to see before accessing the
interface (i.e. compare all architectures on the PHP_5_3 branch).
Anyway, that is just an idea. The good part is that most of the code
has already been written; the missing pieces are the ability to provide
compile results remotely (i.e. run tests from a different arch) and
making the code look better (I wrote this under the influence of beer,
and the code is missing comments).
We would also need a deployment script, as suggested by Scott, that
initializes a slave directly, saving the time required to install this
on xxxxxxx computers.
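For what it's worth, such a deployment script could be quite short.
Here is a minimal sketch in Python, assuming the Buildbot 0.7-era
"buildbot create-slave" command; the master address, slave name and
password are placeholders, not real settings:

# bootstrap_slave.py - hedged sketch of the slave-deployment script
# discussed above. Master host, slave name and password are made up.
import socket
import subprocess

MASTER = "buildmaster.example.org:9989"  # hypothetical master address
NAME = socket.gethostname()              # identify this slave by hostname
PASSWD = "changeme"                      # must match the master's slave list

# Install Buildbot (easy_install was the usual tool in this era).
subprocess.check_call(["easy_install", "buildbot"])

# Create the slave's base directory and start it; both are stock
# Buildbot commands.
subprocess.check_call(["buildbot", "create-slave", "slave", MASTER, NAME, PASSWD])
subprocess.check_call(["buildbot", "start", "slave"])

Run it once per donated machine; everything else is driven from the master.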
Best regards,
Mark Karpeles
I don't think I explained Buildbot properly; it's more than just a test
runner. It's a kind of glue that joins together an application's
various build and test results, captures it all, and reports back to
the build master.
We'd just need to write a few dozen lines of configuration script that
tell the slaves what to do.
They aren't that hard; it just requires someone to sit down and write
the first version.
You write a bootstrapping master.cfg in Python, and it can then spawn
whatever other language it wants, in our case probably the freshly
built PHP ;-)
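To make that concrete, here is a minimal master.cfg sketch for Buildbot
0.7.x. The slave credentials, checkout URL, builder name and hostnames
are all placeholders (assumptions, not the project's real settings):

# master.cfg - minimal sketch for Buildbot 0.7.x. Slave credentials,
# the checkout URL and the builder name are placeholders.
from buildbot.buildslave import BuildSlave
from buildbot.scheduler import Periodic
from buildbot.process import factory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand, Configure, Compile, Test

c = BuildmasterConfig = {}

# One entry per donated machine (name/password are made up).
c['slaves'] = [BuildSlave("linux-x86-64", "secret")]
c['slavePortnum'] = 9989

# The recipe every slave runs: checkout, buildconf, configure, make,
# make test.
f = factory.BuildFactory()
f.addStep(SVN(svnurl="http://svn.example.org/php-src/branches/PHP_5_3"))
f.addStep(ShellCommand(command=["./buildconf"], description="buildconf"))
f.addStep(Configure(command=["./configure"]))
f.addStep(Compile(command=["make"]))
f.addStep(Test(command=["make", "test"],
               env={"NO_INTERACTION": "1"}))  # keep run-tests.php quiet

c['builders'] = [{
    'name': 'php53-linux-x86-64',
    'slavename': 'linux-x86-64',
    'builddir': 'php53',
    'factory': f,
}]

# Rebuild every hour whether or not anything changed.
c['schedulers'] = [Periodic("hourly", builderNames=["php53-linux-x86-64"],
                            periodicBuildTimer=60 * 60)]

c['projectName'] = "PHP"
c['buildbotURL'] = "http://buildbot.example.org/"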
From there it can run tests and do whatever we wish with the results,
like grab all test failures and upload them to your test result viewer.
If a build fails we can have it use Twitter / IRC to inform people
that a slave failed to build PHP for whatever reason.
It's also got various scheduling parts built in, so we can do
continuous regular builds to make sure it's still compiling, and daily
memory builds to check for new memory leaks.
Other cool features are notifying people when it broke via IRC,
Twitter or a mailing list, and optionally packaging the fresh builds
and uploading them. We could hypothetically get our snapshots built
this way, so we never release a broken snapshot?
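The notification side is just more status targets in the same
master.cfg. Continuing the sketch above (the IRC server, channel and
mail addresses here are made up):

# Status targets for the sketch above: the waterfall web view plus
# IRC and mail notifications. Hostnames and addresses are placeholders.
from buildbot.status import html, words, mail

c['status'] = [
    html.WebStatus(http_port=8010),            # waterfall and friends
    words.IRC(host="irc.example.org", nick="php-buildbot",
              channels=["#php.qa"]),           # announce breakage on IRC
    mail.MailNotifier(fromaddr="buildbot@example.org",
                      mode="problem",          # mail only when a build breaks
                      extraRecipients=["php-qa@example.org"]),
]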
The project URL is http://buildbot.net/trac and the vast documentation
is at http://djmitche.github.com/buildbot/docs/0.7.10/
I might be able to sit down and give this a go on the plane next week.
Scott
Hi Scott
From there it can run tests and do whatever we wish with the results,
like grab all test failures and upload them to your test result viewer.
If a build fails we can have it use Twitter / IRC to inform people
that a slave failed to build PHP for whatever reason.
I wonder if we can use the new run-tests code as part of this? Georg
has just integrated a first try at parallel running. We are working on
having something that makes it easy to grab all the test results.
Olivier sent a note earlier this week as an attempt to map out
requirements for what should happen with the ad-hoc test reports;
however, I'm wondering if we should just scrap the ad-hoc test reports
and put the effort into getting results from a defined set of
platforms. Aside from the fact that they go to a mailing list, which is
hard to view, would anyone make use of the ad-hoc reports if we could
get test results from a representative set of platforms in a more
controlled way?
I'd like to be able to see test results across several platforms
(various *ix, Windows, Mac...), 64-bit and 32-bit, and when a test
fails, to be able to get to the relevant files. This is similar to the
gcov pages (but more), with the ability to have different views of the
results. I like Mark's summary page but would want to be able to have a
few different views; the TestFest results page has a few of the
attributes that I'd like to see (not the color scheme, of course). Is
this the sort of thing we would be able to do?
We could hypothetically get our snapshots built this way, so we never
release a broken snapshot?
That would be good :-)
Zoe
Hi Zoe and everyone :)
This thread is proof that many people want to help with QA in PHP.
However, many of the things done so far are redundant.
Specific valgrind reports per platform/architecture seem useless to me,
mainly because the results are unreadable (most of the time the leak is
in a library that is not part of PHP). Valgrind reports should be
prepared with caution (the one on http://gcov.php.net seems
sufficient). Concerning code coverage, the differences between
platforms are tiny, and maybe we should aggregate all the results (does
it make any sense to show all the results separately?)
Olivier sent a note earlier this week as an attempt to map out
requirements for what should happen with the ad-hoc test reports;
however, I'm wondering if we should just scrap the ad-hoc test reports
and put the effort into getting results from a defined set of
platforms. Aside from the fact that they go to a mailing list, which is
hard to view, would anyone make use of the ad-hoc reports if we could
get test results from a representative set of platforms in a more
controlled way?
I'm sure that with good tools you can make good things :) Of course the
ad-hoc reports will be useful! A "controlled way" changes nothing for
the PHP tests (this is different for valgrind/gcov).
Question is "How to get results from a wide variety of
platforms/configurations and aggregate them all to highlight where bugs
(leaks ?) are ?". We already have the solution through the 'make test'
result, even if they are sent to a mailing list and not easily readable. To
get back to my note posted earlier: it would be easy to have an interface
collecting all phpruntests results (ie from all platforms used by the
testers) and display them by all filters/aggregate functions you ever dreamt
of.
I'd like to be able to see test results across several platforms
(various *ix, Windows, Mac...), 64-bit and 32-bit, and when a test
fails, to be able to get to the relevant files. This is similar to the
gcov pages (but more), with the ability to have different views of the
results. I like Mark's summary page but would want to be able to have a
few different views; the TestFest results page has a few of the
attributes that I'd like to see (not the color scheme, of course). Is
this the sort of thing we would be able to do?
The interface I was talking about a few days ago is all about that :)
And what's awesome about it is that providing feedback for PHP becomes
very easy: just a small script (less than 20 bash lines) to get the
latest PHP snapshot, run configure + make + make test, with the test
results automatically sent to this interface (instead of a mailing
list), and you're done. Such a script for Linux/PC/others could be made
available on the php-qa website or the PHP website.
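As an illustration, here is what those steps might look like. Olivier
suggests ~20 lines of bash; this sketch does the same thing in Python,
and the snapshot URL and report endpoint are placeholders for whatever
the real interface would use:

# fetch_and_test.py - sketch of the tester-side script described above:
# grab the latest snapshot, build it, run the tests, post the results.
# Both URLs are placeholders.
import os
import subprocess
import tarfile
import urllib
import urllib2

SNAPSHOT = "http://snaps.example.org/php5.3-latest.tar.gz"  # placeholder
REPORT_URL = "http://qa.example.org/submit"                 # placeholder

urllib.urlretrieve(SNAPSHOT, "php-snap.tar.gz")
tarfile.open("php-snap.tar.gz").extractall()
# Snapshot tarballs unpack into a versioned directory; find it.
srcdir = [d for d in os.listdir(".") if d.startswith("php5.3-")][0]
os.chdir(srcdir)

subprocess.check_call(["./configure"])
subprocess.check_call(["make"])

# NO_INTERACTION stops run-tests.php from prompting; capture the output
# and send it to the collecting interface instead of the mailing list.
env = dict(os.environ, NO_INTERACTION="1")
output = subprocess.Popen(["make", "test"], env=env,
                          stdout=subprocess.PIPE).communicate()[0]
urllib2.urlopen(REPORT_URL, urllib.urlencode({"report": output}))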
What to do for this interface (whether to use an existing tool or write
one from scratch) is exactly the draft I wrote on Thursday. You are all
welcome to give feedback on it (how to do it and what it should do) on
php-qa.
Olivier
Mark Karpeles wrote:
http://php.magicaltux.net/browse/ (lots of memleaks reported on 5.3 &
HEAD; they seem to be glibc-related and not real memleaks. Also, this
is not up to date, as I haven't run the tests for a while and didn't
automate it because it takes a lot of CPU.)
Wondering why you did this. Was there something missing from our gcov
valgrind output?
http://gcov.php.net/viewer.php?version=PHP_5_3&func=valgrind
And we have a per-extension code coverage thing here:
http://gcov.php.net/PHP_5_3/lcov_html/
which seems like an interface we could tie this into.
-Rasmus
Would it be possible to have a working pecl4win box?
Something to run through all the current stable releases of the PECL
extensions which are NOT included in the core or bundled (from the
win32 perspective) and produce the VC6/VC9, NTS/TS builds. Not a small
task, I'm sure, and not a quick one either. But some of those
extensions are useful, and no one is building them.
--
Richard Quadling
Zend Certified Engineer : http://zend.com/zce.php?c=ZEND002498&r=213474731
"Standing on the shoulders of some very clever giants!"