On 01/30/18 00:04, Chintan Pandya wrote:
>> (1)
>>
>> Can you point me to the driver code that is invoking
>> the search?
> There are many locations. Few of them being,
> https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/of/irq.c?h=msm-4.9#n214
> https://source.codeaurora.org/quic/l
(1)
Can you point me to the driver code that is invoking
the search?
There are many locations. A few of them:
https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/of/irq.c?h=msm-4.9#n214
https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/irqchip/irq-gic-v3.c?h=msm
Hi Chintan,
On 01/26/18 00:31, Chintan Pandya wrote:
> of_find_node_by_phandle() takes a lot of time (1ms per
> call) to find the right node when the intended device is
> deep in the fdt. The reason is that we search each
> device serially in the fdt. See this:
>
> struct device_node *__of_find_all_nodes(struct device_node *prev)
Scenarios:
[1] Cache size 1024 + early cache build-up [small change in your cache
    patch, see the patch below]
[2] Hash 64 approach [my original v2 patch]
[3] Cache size 64
[4] Cache size 128
[5] Cache size 256
[6] Base build

Result (boot to shell, in sec):
[1] 14.292498 14.370994 14.313537 -
On Mon, Jan 29, 2018 at 1:34 AM, Chintan Pandya wrote:
>
>> I was curious, so I implemented it. It ends up being similar to Rasmus's
>> 1st suggestion. The difference is we don't try to store all entries, but
>> rather implement a hash table that doesn't handle collisions. Relying on
>> the fact t
I was curious, so I implemented it. It ends up being similar to Rasmus's
1st suggestion. The difference is we don't try to store all entries, but
rather implement a hash table that doesn't handle collisions. Relying on
the fact that phandles are just linearly allocated from 0, we just mask
the h
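A minimal sketch of that idea, using hypothetical names
(PHANDLE_CACHE_SZ, of_find_node_by_phandle_cached) and a fixed
power-of-two size; the actual patch differs in cache sizing, locking
and refcounting, and for_each_of_allnodes() is only visible inside
drivers/of/base.c:

#include <linux/of.h>

/* Low bits of the phandle index a small cache; collisions just overwrite. */
#define PHANDLE_CACHE_SZ        128     /* must be a power of two */
#define PHANDLE_CACHE_MASK      (PHANDLE_CACHE_SZ - 1)

static struct device_node *phandle_cache[PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle_cached(phandle handle)
{
        struct device_node *np = NULL;
        u32 idx;

        if (!handle)
                return NULL;

        idx = handle & PHANDLE_CACHE_MASK;
        if (phandle_cache[idx] && phandle_cache[idx]->phandle == handle)
                np = phandle_cache[idx];

        if (!np) {
                /* Miss: do the existing linear walk, then remember the hit. */
                for_each_of_allnodes(np)
                        if (np->phandle == handle)
                                break;
                if (np)
                        phandle_cache[idx] = np;
        }

        return of_node_get(np);
}

Since dtc hands out phandles as small consecutive integers, the low
bits alone spread entries evenly, so collisions are rare during boot.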
Hi Chintan,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on robh/for-next]
[also build test ERROR on v4.15-rc9 next-20180126]
[cannot apply to glikely/devicetree/next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
On Fri, Jan 26, 2018 at 09:34:59AM -0600, Rob Herring wrote:
> On Fri, Jan 26, 2018 at 9:14 AM, Chintan Pandya
> wrote:
> >
> >> I'm probably missing something obvious, but: Aren't phandles in practice
> >> small consecutive integers assigned by dtc? If so, why not just have a
> >> smallish stati
On Fri, Jan 26, 2018 at 9:14 AM, Chintan Pandya wrote:
>
>> I'm probably missing something obvious, but: Aren't phandles in practice
>> small consecutive integers assigned by dtc? If so, why not just have a
>> smallish static array mapping the small phandle values directly to
>> device node, inste
I'm probably missing something obvious, but: Aren't phandles in practice
small consecutive integers assigned by dtc? If so, why not just have a
smallish static array mapping the small phandle values directly to
device node, instead of adding a pointer to every struct device_node? Or
one could de
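A minimal sketch of that suggestion, with illustrative names and an
assumed size (OF_PHANDLE_ARRAY_SZ is not a measured bound): the phandle
value itself indexes a static array, so nothing is added to struct
device_node and no hashing is needed; out-of-range phandles fall back
to the existing tree walk.

#include <linux/of.h>

#define OF_PHANDLE_ARRAY_SZ     1024

static struct device_node *of_phandle_array[OF_PHANDLE_ARRAY_SZ];

/* Called once per node with a phandle while the tree is unflattened. */
static void of_phandle_array_add(struct device_node *np)
{
        if (np->phandle && np->phandle < OF_PHANDLE_ARRAY_SZ)
                of_phandle_array[np->phandle] = np;
}

/* Fast path; a NULL return means "fall back to the full walk". */
static struct device_node *of_phandle_array_lookup(phandle handle)
{
        if (handle && handle < OF_PHANDLE_ARRAY_SZ)
                return of_phandle_array[handle];
        return NULL;
}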
On 2018-01-26 09:31, Chintan Pandya wrote:
> Implement the device-phandle relation in a hash table so
> that lookups can be faster, irrespective of where the
> device is defined in the DT.
>
> There are ~6.7k calls to of_find_node_by_phandle() and
> the total improvement observed during boot is 400ms.
I'm
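A rough sketch of the hash-table approach from the patch under review,
using the kernel's <linux/hashtable.h> and assuming struct device_node
gains a hypothetical hlist_node member (called hash_node here); 6 bits
(64 buckets) is chosen only as an example size:

#include <linux/hashtable.h>
#include <linux/of.h>

#define OF_PHANDLE_HASH_BITS    6       /* 64 buckets */
static DEFINE_HASHTABLE(of_phandle_ht, OF_PHANDLE_HASH_BITS);

/* Insert a node at unflatten time, keyed by its phandle. */
static void of_phandle_ht_add(struct device_node *np)
{
        if (np->phandle)
                hash_add(of_phandle_ht, &np->hash_node, np->phandle);
}

/* Average O(1) lookup; colliding phandles are chained per bucket. */
static struct device_node *of_phandle_ht_lookup(phandle handle)
{
        struct device_node *np;

        hash_for_each_possible(of_phandle_ht, np, hash_node, handle)
                if (np->phandle == handle)
                        return np;
        return NULL;
}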
of_find_node_by_phandle() takes a lot of time (1ms per
call) to find the right node when the intended device is
deep in the fdt. The reason is that we search each
device serially in the fdt. See this:
struct device_node *__of_find_all_nodes(struct device_node *prev)
{
struct device_node *np
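For context, the lookup being measured is essentially a linear walk
over every node of the unflattened tree; roughly, the pre-cache
implementation in drivers/of/base.c looks like this (shown only to
illustrate the O(number-of-nodes) cost per call):

struct device_node *of_find_node_by_phandle(phandle handle)
{
        struct device_node *np;
        unsigned long flags;

        if (!handle)
                return NULL;

        raw_spin_lock_irqsave(&devtree_lock, flags);
        /* Visits nodes one by one until the phandle matches. */
        for_each_of_allnodes(np)
                if (np->phandle == handle)
                        break;
        of_node_get(np);
        raw_spin_unlock_irqrestore(&devtree_lock, flags);
        return np;
}

With ~6.7k calls during boot, even a fraction of a millisecond saved
per call adds up to the few hundred milliseconds reported above.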