hi,
Moving this discussion here, as it makes little to no sense to discuss
this any longer on security@.
To put it nicely, we are now well past any acceptable delay for
providing a fix for the hash DoS.
I strongly suggest releasing 5.3.9 final this week (RC5 has been tested
by now), using the max_input_vars fix with the modification from
Laruence (but with a larger limit). Laruence's addition also covers
serialize and json, which need this fix as well, since it is
impossible to validate such a string manually (a length check alone is
not enough and cannot work in all cases).
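For the record, the max_input_vars fix boils down to a new php.ini directive that caps how many input variables the engine will parse per request; anything beyond the cap is dropped with a warning, which bounds the number of attacker-controlled keys that can land in one hash table. Illustrative value only (the exact default is part of this discussion):
; php.ini
max_input_vars = 1000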
First of all, though, the additional fix has to be applied and fully tested.
If the addition is not wanted yet, then we must at least release
5.3.9 with Dmitry's fix only and address json & serialize later,
ideally within two weeks at most.
Cheers,
Pierre
@pierrejoye | http://blog.thepimp.net | http://www.libgd.org
hi,
Moving this discussion here, as it makes little to no sense to discuss
this any longer on security@.
To put it nicely, we are now well past any acceptable delay for
providing a fix for the hash DoS.
I strongly suggest releasing 5.3.9 final this week (RC5 has been tested
by now), using the max_input_vars fix with the modification from
Laruence (but with a larger limit). Laruence's addition also covers
serialize and json, which need this fix as well, since it is
impossible to validate such a string manually (a length check alone is
not enough and cannot work in all cases).
By Laruence's addition, do you mean this patch:
https://bugs.php.net/patch-display.php?bug_id=60655&patch=max_input_vars.patch&revision=latest
?
If so, two questions:
- Why should all POST variables be counted towards the limit, not only
the ones in one nesting level?
- How high would the limit for serialize() and json_decode() be,
approximately? I think that few applications will use more than 1000
POST vars, but I could well imagine that they have large serialized
arrays. Putting the limit too high, on the other hand, will pretty much
defeat the purpose of the fix.
First of all, though, the additional fix has to be applied and fully tested.
If the addition is not wanted yet, then we must at least release
5.3.9 with Dmitry's fix only and address json & serialize later,
ideally within two weeks at most.
I'd prefer that. I don't think it's wise to apply a different fix
shortly before the release.
Dear Pierre and others,
I strongly suggest releasing 5.3.9 final this week (RC5 has been tested
by now), using the max_input_vars fix with the modification from
Laruence (but with a larger limit). Laruence's addition also covers
serialize and json, which need this fix as well, since it is
impossible to validate such a string manually (a length check alone is
not enough and cannot work in all cases).
Why do you advocate a patch from Laruence that randomizes the size of the HashTable, which does not fix the HashDOS security problem at all?
It seems that the majority of people working on this HashDOS stuff do not understand the actual mathematical problem and try to exploit it by using numerical indices.
In the case of numerical indices a collision is trivial:
0x00010000, 0x00020000, 0x00030000, …, 0xFFFF0000 will all collide, because for these keys n mod 2^x is always 0 for x <= 16.
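For illustration, a few lines of PHP are enough to see the effect (timings are machine-dependent; 16384 keys is an arbitrary demo size):
<?php
// All keys are multiples of 0x10000, so as long as the table size stays at
// or below 2^16 they all land in the same bucket and every insert has to
// walk the whole collision chain.
$t = microtime(true);
$evil = array();
for ($i = 1; $i <= 16384; $i++) {
    $evil[$i << 16] = 1;            // 0x00010000, 0x00020000, ...
}
printf("colliding keys:  %.2f s\n", microtime(true) - $t);

$t = microtime(true);
$good = array();
for ($i = 1; $i <= 16384; $i++) {
    $good[$i] = 1;                  // sequential keys, spread over all buckets
}
printf("sequential keys: %.2f s\n", microtime(true) - $t);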
This is, however, just the cheap way to cause HashTable collisions in PHP. The actual HashDOS exploit the nruns guys were talking about DOES NOT involve numerical indices at all.
The nruns guys are speaking about collisions in the DJB hash function, which is used for alphanumeric indices. This cannot be fixed by random HashTable size increments.
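To make the string case concrete: the engine hashes string keys with DJBX33A (start at 5381, then hash = hash*33 + char), and e.g. the two-character keys "Ez", "FY" and "G8" all produce the same value, as does any concatenation of these blocks, so huge sets of colliding keys can be precomputed offline. A simplified userland version (ignoring the engine's integer width):
<?php
function djbx33a($s) {
    $h = 5381;
    for ($i = 0, $n = strlen($s); $i < $n; $i++) {
        $h = $h * 33 + ord($s[$i]);
    }
    return $h;
}
var_dump(djbx33a('Ez')   === djbx33a('FY'));    // bool(true)
var_dump(djbx33a('Ez')   === djbx33a('G8'));    // bool(true)
var_dump(djbx33a('EzEz') === djbx33a('FYG8'));  // bool(true)
// The keys differ but the hashes do not, so no table size (random or not) helps.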
Random HashTable size increments will only lead to tricky debugging situations, because due to the randomness the PHP memory layout/usage for the same script will be totally different with each run.
Just one of the possible consequences: the same script, running on the same server, called twice with the same parameters, can e.g. cause memory limit violations in totally different places, or sometimes not violate the memory limit at all.
And this all ignores the fact that the patch by Laruence is broken and performs dangerous operations on the size field that could introduce further problems.
*** The only way to fix the HashTable implementation is by using a randomized HashFunction (not a randomized HashTable) ***
*** And of course resource limits are always a good addition to protect against this and future different vulnerabilities. ***
Regards,
Stefan Esser
Dear Pierre and others,
I strongly suggest releasing 5.3.9 final this week (RC5 has been tested
by now), using the max_input_vars fix with the modification from
Laruence (but with a larger limit). Laruence's addition also covers
serialize and json, which need this fix as well, since it is
impossible to validate such a string manually (a length check alone is
not enough and cannot work in all cases).
Why do you advocate a patch from Laruence that randomizes the size of the HashTable, which does not fix the HashDOS security problem at all?
I do not; I am referring to his other patch, which does exactly what Dmitry's
does and uses the same limit for json and serialize.
I'm actually against the randomized version of the fix, as we do not
yet have enough of a clue about how good (or bad) it is.
Cheers,
Pierre
@pierrejoye | http://blog.thepimp.net | http://www.libgd.org
Hi:
I have a new idea, which is simple and also works for JSON/serialize etc.
That is: restrict the maximum length of a bucket list in a hash table.
If a bucket list's length exceeds 1024, any further insertion into this bucket
will fail and a warning will be generated.
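Roughly, in userland pseudo-PHP (the real patch would of course live in zend_hash.c, and 1024 is only a first guess for the cap):
<?php
// Toy model of the idea: track how many entries hang off each bucket and
// refuse the insert once a chain passes the cap, raising a warning instead.
define('MAX_BUCKET_CHAIN', 1024);

function guarded_insert(array &$chainLengths, $hash, $tableMask) {
    $idx = $hash & $tableMask;
    if (!isset($chainLengths[$idx])) {
        $chainLengths[$idx] = 0;
    }
    if ($chainLengths[$idx] >= MAX_BUCKET_CHAIN) {
        trigger_error('bucket chain longer than ' . MAX_BUCKET_CHAIN, E_USER_WARNING);
        return false;               // insertion rejected
    }
    $chainLengths[$idx]++;
    return true;                    // caller goes ahead with the real insert
}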
What do you think?
Hey,
That is: restrict the maximum length of a bucket list in a hash table.
If a bucket list's length exceeds 1024, any further insertion into this bucket
will fail and a warning will be generated.
What do you think?
Very bad idea. Especially when it comes to numerical indices, a legitimate application might put data into a big array and have legitimately colliding keys.
Regards,
Stefan Esser
Hi:
I am not sure whether you have understood my point.
If an array has more than 1024 entries in the same bucket list (same index), there must already be a performance issue.
Hello,
I am not sure whether you have understood my point.
I understood your point: you want to break HashTables because 1024 colliding entries could have a performance impact. This could break thousands of scripts.
for ($i=0; $i<2000; $i++) $arr[$i<<16] = 1;
would stop working, while it should not.
Regards,
Stefan Esser
On 2012-1-10, at 1:07, Stefan Esser stefan@nopiracy.de wrote:
Hello,
I am not sure whether you have understood my point.
I understood your point: you want to break HashTables because 1024 colliding entries could have a performance impact. This could break thousands of scripts.
for ($i=0; $i<2000; $i++) $arr[$i<<16] = 1;
would stop working, while it should not.
Sure, but why would you want to do this? To kill your own PC?
So I cannot agree with you on "thousands of scripts".
And if 1024 is not enough, then 2048, or 4096 at most.
Thanks
Hi:
I am not sure whether you have understood my point.
If an array has more than 1024 entries in the same bucket list (same index), there must already be a performance issue.
The problem is you really need to consider the source. There are many
places where people deal with huge datasets. If they assign it directly
they shouldn't hit any limits or we would need some sort of "large
array" hint which would be nasty.
-Rasmus
Rasmus:
A large array != a large list.
An array can have a million elements without exceeding the limit on the
max length of a list.
Thanks
And sorry, I cannot do it now, since it's really late here and I
have to get up early tomorrow for work.
So anyone who has time can try to implement it; just remember to also
apply the limit in the hash resize logic.
Thanks very much.
Rasmus:
A large array != a large list.
An array can have a million elements without exceeding the limit on the
max length of a list.
I understand the difference. But large arrays are obviously the ones
that are prone to hitting the collision limits.
-Rasmus
Hi:
I have a new idea, which is simple and also works for JSON/serialize etc.
That is: restrict the maximum length of a bucket list in a hash table.
If a bucket list's length exceeds 1024, any further insertion into this bucket
will fail and a warning will be generated.
What do you think?
That seems like a very good approach (until we have randomization). It
would fix the issue in a generic way so not all functions need to be
patched one by one. It also will not hurt legit uses of many POST
variables (or large serialized arrays).
-----Original Message-----
From: Nikita Popov [mailto:nikita.ppv@googlemail.com]
Sent: Monday, January 09, 2012 11:54 AM
To: Xinchen Hui
Cc: Pierre Joye; PHP internals; Johannes Schlüter; Laruence
Subject: Re: [PHP-DEV] Re: 5.3.9, Hash DoS, release
Hi:
I have a new idea, which is simple and also works for JSON/serialize etc.
That is: restrict the maximum length of a bucket list in a hash table.
If a bucket list's length exceeds 1024, any further insertion into this bucket
will fail and a warning will be generated.
What do you think?
That seems like a very good approach (until we have randomization). It would
fix the issue in a generic way so not all functions need to be patched one by
one. It also will not hurt legit uses of many POST variables (or large
serialized arrays).
Yuck. Bad idea. Collisions happen, and for most hash algorithms there are plenty of perfectly likely key sequences that will collide badly.
There are two problems here:
- Large data sets have the potential to behave poorly if things collide badly
- An attacker may initiate a DoS attack by supplying a large set of data that is known to collide badly
To mitigate the impact of collisions, how about using a dynamic bucket behavior? Use a flat list for small/medium buckets, switch to a second level of hashing if the bucket grows beyond a certain size. Something like md5 could be used as part of the hash key calculation at deeper levels to ensure that the buckets don't infinitely collide. This covers the basic performance implications.
To prevent DoS, it has to be impossible for a malicious user to compute a problematic sequence of data. If the bucket level key computation includes an additional cryptographic transformation of any sort using a secret value unique to the machine (or unique to the process) that is handling the request, it would be impossible for an attacker to compute a problematic sequence of keys, which should close the door on DoS.
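A rough userland sketch of that last point (the real change would be in the engine's C hash code; names here are only for illustration):
<?php
// hash_hmac() stands in for "any cryptographic transformation using a secret
// value"; $secret would be generated once per process at startup and never
// exposed to clients.
function bucket_index($key, $secret, $tableSize) {
    $digest = hash_hmac('md5', $key, $secret);
    // use 28 bits of the digest so the value fits in an int on every platform
    return hexdec(substr($digest, 0, 7)) % $tableSize;
}

$secret = 'per-process secret';                  // placeholder
var_dump(bucket_index('Ez', $secret, 1024));
var_dump(bucket_index('FY', $secret, 1024));     // no longer forced to collide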
John Crenshaw
Priacta, Inc.
hi,
No time for new ideas yet. We cannot afford to implement, test and
validate new proposals and still provide a fix as soon as possible
(read: in the next few days).
What's the status of your patch? The max input vars one, not the random
(or derived) version; can you post it in this thread again for the
record, please?
If not, we will go final with the current fix in 5.3.
--
Pierre
@pierrejoye | http://blog.thepimp.net | http://www.libgd.org
On 2012-1-10, at 0:57, Pierre Joye pierre.php@gmail.com wrote:
hi,
No time for new ideas yet. We cannot afford to implement, test and
validate new proposals and still provide a fix as soon as possible
(read: in the next few days).
That idea will only need about an hour to implement. :)
Can anyone who has time now do that?
What's the status of your patch? The max input vars one, not the random
(or derived) version; can you post it in this thread again for the
record, please?
Sorry, can't now, it's 01:00am here.
hi,
No time for new ideas yet. We cannot afford to implement, test and
validate new proposals and still provide a fix as soon as possible
(read: in the next few days).
What's the status of your patch? The max input vars one, not the random
(or derived) version; can you post it in this thread again for the
record, please?
Hi, FYI
thanks
--
惠新宸 laruence
Senior PHP Engineer
http://www.laruence.com
Why not double hashing (http://en.wikipedia.org/wiki/Double_hashing),
something like what John Crenshaw proposed?
Julien
I was under the impression that somebody was working on the information
disclosure issue in the error message and on the error message spamming.
This seems not to be the case.
If you, Pierre, are ready for Windows builds tomorrow morning I'd like
to release tomorrow as is.
johannes