Newsgroups: php.internals
Message-ID: <4F05517C.5040600@lerdorf.com>
Date: Wed, 04 Jan 2012 23:30:04 -0800
From: rasmus@lerdorf.com (Rasmus Lerdorf)
To: Stas Malyshev
CC: Laruence, Ferenc Kovacs, Reindl Harald, internals@lists.php.net
In-Reply-To: <4F054CB0.6070202@sugarcrm.com>
Subject: Re: [PHP-DEV] Re: another fix for max_input_vars.

On 01/04/2012 11:09 PM, Stas Malyshev wrote:
> Hi!
>
>> I really don't think this manual per-feature limiting is going to
>> cut it here. The real problem is the predictability of the hashing,
>> which we can address by seeding it with a random value. That part
>> is easy enough; the hard part is figuring out where people may be
>> storing these hashes externally and providing some way of
>> associating the seed with those external caches so they will still
>> work.
>
> I think we'd need an API to access the seed value and to calculate
> the hash for a given seed value. That would probably allow
> extensions that store hashes to do it properly, with some additional
> work. Though it needs more detailed investigation.

Yes, but we still need an actual case to look at. Opcode caches
shouldn't be a problem unless they store some representation on disk
that lives across server restarts. In the APC world, nobody does that.
Is there something in common use out there that actually needs this?

Let's do just the GPC fix (the Dmitry version) for 5.3, turn on
ignore_repeated_errors just during startup, and get it out there. That
takes care of the most obvious attack vector for existing users.
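To be concrete about the seeding itself: our hash is plain DJBX33A
(hash = hash * 33 + c, starting from the fixed value 5381), and
randomizing it just means perturbing that starting state with a
per-process seed picked once at startup. A minimal sketch, with
illustrative names rather than the actual Zend API:

#include <stdlib.h>
#include <time.h>

static unsigned long hash_seed;

/* Called once at server startup; srand()/rand() stand in here for a
   proper entropy source. */
void hash_seed_init(void)
{
    srand((unsigned)time(NULL));
    hash_seed = ((unsigned long)rand() << 16) ^ (unsigned long)rand();
}

/* DJBX33A with an explicit seed perturbing the initial state. Taking
   the seed as a parameter doubles as the API Stas is describing:
   external caches can record which seed their stored hashes were
   computed against and rehash on a mismatch. */
unsigned long hash_with_seed(const char *key, size_t len,
                             unsigned long seed)
{
    unsigned long hash = 5381 ^ seed;
    while (len--) {
        hash = ((hash << 5) + hash) + (unsigned char)*key++;
    }
    return hash;
}

unsigned long get_hash_seed(void)
{
    return hash_seed;
}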
Carrying this fix into 5.4 is fine too, regardless of what else we do
there. For 5.4 I think we should take a couple of days to dig into
what would actually break from seeding the hash. That seems like a
much more elegant solution than trying to add limits manually in all
the other places. Manual limit checking also wouldn't cover
third-party extensions, or even userspace code that might be
vulnerable to the same thing; the only way to fix those cases is a
central fix in the hash itself.

Another alternative to seeding would be to use a different hashing
algorithm altogether. That would solve the cross-server issues, at the
likely cost of slower hashing.

-Rasmus
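P.S. One concrete shape the "different algorithm" option could take is
a keyed hash, where the key is mixed into the state at every round
rather than only at the start. Unlike a per-process random seed, the
key could be configured explicitly and shared across a cluster, which
is what would keep hashes stable between servers. A toy illustration
only -- not a vetted algorithm, and a real candidate would need review
and would cost us cycles relative to DJBX33A:

#include <stddef.h>

unsigned long long keyed_hash(const char *data, size_t len,
                              unsigned long long k0,
                              unsigned long long k1)
{
    unsigned long long h = k0;          /* first key half seeds the state */
    size_t i;

    for (i = 0; i < len; i++) {
        h ^= (unsigned char)data[i];
        h *= 0x100000001b3ULL;          /* FNV-1a style multiply */
        h ^= k1;                        /* second key half enters every round */
        h = (h << 13) | (h >> 51);      /* rotate to spread the key bits */
    }
    return h ^ (unsigned long long)len; /* fold in the length */
}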