2012/1/10 Xinchen Hui
> On Tue, Jan 10, 2012 at 12:57 AM, Pierre Joye
> wrote:
> > hi,
> >
> > No time for new ideas yet. We cannot afford to implement, test and
> > validate new propositions and provide a fix as soon as possible (read: in
> > the next days).
> >
> > What's the status of your patch? The max input var one, not the random
On Tue, Jan 10, 2012 at 12:57 AM, Pierre Joye wrote:
> hi,
>
> No time for new ideas yet. We cannot afford to implement, test and
> validate new propositions and provide a fix as soon as possible (read: in
> the next days).
>
> What's the status of your patch? The max input var one, not the random
>
On 01/09/2012 05:28 PM, Xinchen Hui wrote:
>> I understand the difference. But large arrays are obviously the ones
>> that are prone to hitting the collision limits.
> Yes, but don't you think this is at least better than restricting
> number of elements? :)
The difference is the source. If, for e
Sent from my iPhone
On 2012-1-10, at 1:51, Rasmus Lerdorf wrote:
> On 01/09/2012 09:18 AM, Xinchen Hui wrote:
>> Sent from my iPhone
>>
>> On 2012-1-10, at 1:14, Rasmus Lerdorf wrote:
>>
>>> On 01/09/2012 08:50 AM, Xinchen Hui wrote:
Hi:
I am not sure whether you have understood my point.
> -----Original Message-----
> From: Nikita Popov [mailto:nikita@googlemail.com]
> Sent: Monday, January 09, 2012 11:54 AM
> To: Xinchen Hui
> Cc: Pierre Joye; PHP internals; Johannes Schlüter; Laruence
> Subject: Re: [PHP-DEV] Re: 5.3.9, Hash DoS, release
>
> On Mon
On 01/09/2012 09:18 AM, Xinchen Hui wrote:
> Sent from my iPhone
>
> 在 2012-1-10,1:14,Rasmus Lerdorf 写道:
>
>> On 01/09/2012 08:50 AM, Xinchen Hui wrote:
>>> Hi:
>>> I am not sure whether you have understood my point.
>>>
>>> If an array has more than 1024 buckets in the same bucket
>>> list (same index), there must already be a performance issue.
Sent from my iPhone
On 2012-1-10, at 1:18, Xinchen Hui wrote:
> Sent from my iPhone
>
> On 2012-1-10, at 1:14, Rasmus Lerdorf wrote:
>
>> On 01/09/2012 08:50 AM, Xinchen Hui wrote:
>>> Hi:
>>> I am not sure whether you have understood my point.
>>>
>>> If an array has more than 1024 buckets in the same bucket
>>> list (same index), there must already be a performance issue.
Sent from my iPhone
On 2012-1-10, at 1:14, Rasmus Lerdorf wrote:
> On 01/09/2012 08:50 AM, Xinchen Hui wrote:
>> Hi:
>> I am not sure whether you have understood my point.
>>
>> If an array has more than 1024 buckets in the same bucket
>> list (same index), there must already be a performance issue
On 01/09/2012 08:50 AM, Xinchen Hui wrote:
> Hi:
> I am not sure whether you have understood my point.
>
> If an array has more than 1024 buckets in the same bucket
> list (same index), there must already be a performance issue.
The problem is you really need to consider the source. There
Sent from my iPhone
On 2012-1-10, at 1:07, Stefan Esser wrote:
> Hello,
>
>> I am not sure whether you have understood my point.
> I understood your point: you want to break HashTables because 1024 colliding
> entries could have a performance impact. This could break thousands of
> scripts.
>
> for ($i=0; $i<2000; $i++) $arr[$i<<16] = 1;
Hello,
> I am not sure whether you have understood my point.
I understood your point: you want to break HashTables because 1024 colliding
entries could have a performance impact. This could break thousands of scripts.
for ($i=0; $i<2000; $i++) $arr[$i<<16] = 1;
would stop working, while it
Sent from my iPhone
On 2012-1-10, at 0:57, Pierre Joye wrote:
> hi,
>
> No time for new ideas yet. We cannot afford to implement, test and
> validate new propositions and provide a fix as soon as possible (read: in
> the next days)
That idea will only take one hour to implement. :)
Anyone who has tim
hi,
No time for new ideas yet. We cannot afford to implement, test and
validate new propositions and provide a fix as soon as possible (read: in
the next days).
What's the status of your patch? The max input var one, not the random
(or derived version), can you post it in this thread again for the
r
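The patch Pierre is asking about shipped as the max_input_vars ini directive in 5.3.9. A sketch of how it reads in php.ini (1000 is the documented default; treat the value as illustrative):

```ini
; Cap how many GET/POST/COOKIE input variables PHP will parse per request,
; so an attacker cannot submit enough colliding keys to mount the hash DoS.
max_input_vars = 1000
```

Note this only guards request parsing; it does not cover arrays built from JSON or unserialize(), which is the gap Xinchen's proposal below tries to close.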
On Mon, Jan 9, 2012 at 5:36 PM, Xinchen Hui wrote:
> Hi:
> I have a new idea, which is simple and also works for JSON/serialized etc.
>
> That is, restricting a max length of a bucket list in a hash table.
>
> If a bucket's length exceeds 1024, any insertion into this bucket
> will return failure and a warning will be generated.
Hi:
I am not sure whether you have understood my point.
If an array has more than 1024 buckets in the same bucket
list (same index), there must already be a performance issue.
Sent from my iPhone
On 2012-1-10, at 0:41, Stefan Esser wrote:
> Hey,
>
>> That is, restricting a max length of a bucket list in a hash table.
Hey,
> That is, restricting a max length of a bucket list in a hash table.
>
> If a bucket's length exceeds 1024, any insertion into this bucket
> will return failure and a warning will be generated.
>
> What do you think?
Very bad idea. Especially when it comes to numerical indices a legit
I was under the impression that somebody worked on the information
disclosure issue in the error message and the error message spamming.
This seems not to be the case.
If you, Pierre, are ready for Windows builds tomorrow morning I'd like
to release tomorrow as is.
johannes
On Mon, 2012-01-09 a
Hi:
I have a new idea, which is simple and also works for JSON/serialized etc.
That is, restricting a max length of a bucket list in a hash table.
If a bucket's length exceeds 1024, any insertion into this bucket
will return failure and a warning will be generated.
What do you think?