(2013/08/23 8:57), zhangwei(Jovi) wrote:
> On 2013/8/23 0:42, Steven Rostedt wrote:
>> On Fri, 09 Aug 2013 18:56:54 +0900
>> Masami Hiramatsu <masami.hiramatsu...@hitachi.com> wrote:
>>
>>> (2013/08/09 17:45), Namhyung Kim wrote:
>>>> From: Namhyung Kim <namhyung....@lge.com>
>>>>
>>>> Fetching from user space should be done in a non-atomic context.  So
>>>> use a temporary buffer and copy its content to the ring buffer
>>>> atomically.
>>>>
>>>> While at it, use __get_data_size() and store_trace_args() to reduce
>>>> code duplication.
>>>
>>> My only concern is using kmalloc() in the event handler. For fetching user
>>> memory which can be swapped out, that is indeed needed. But in most of
>>> the cases, we can presume that it already resides in physical memory.
>>>
>>
>>
>> What about creating a per cpu buffer when uprobes are registered, and
>> delete them when they are finished? Basically what trace_printk() does
>> if it detects that there are users of trace_printk() in the kernel.
>> Note, it does not deallocate them when finished, as it is never
>> finished until reboot ;-)
>>
>> -- Steve
>>
> I also thought about this approach, but the issue is that we cannot fetch user
> memory into a per-cpu buffer: using a per-cpu buffer requires preemption to be
> disabled, while fetching user memory could sleep.

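Right, the conflict is essentially this pattern, which is not allowed
(fetch_buf here is just a hypothetical per-cpu buffer):

	char *buf = get_cpu_var(fetch_buf);	/* disables preemption ...          */
	copy_from_user(buf, uaddr, size);	/* ... but this may fault and sleep */
	put_cpu_var(fetch_buf);			/* re-enables preemption            */
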
Hm, perhaps we just need a "hot" buffer pool which can be allocated from and
freed back quickly, and when the pool runs short the caller just waits or
allocates a new page from a "cold" area; this is a.k.a. kmem_cache :)

Anyway, kmem_cache/kmalloc looks too heavy just to allocate temporary
buffers for a trace handler (and also, those allocators have tracepoints of
their own), so I think you may just need a memory pool which has enough
slots, plus a semaphore (which will make the caller wait if all slots are
currently in use).
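
For illustration, a rough sketch of the kind of pool I mean (all names here
are hypothetical, and error paths are omitted):

#include <linux/bitops.h>
#include <linux/semaphore.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define FETCH_BUF_NR	4		/* preallocated slots */
#define FETCH_BUF_SIZE	PAGE_SIZE

static void *fetch_bufs[FETCH_BUF_NR];
static unsigned long fetch_buf_used;	/* bitmap of busy slots */
static DEFINE_SPINLOCK(fetch_buf_lock);
static struct semaphore fetch_buf_sem;

static int fetch_buf_pool_init(void)
{
	int i;

	sema_init(&fetch_buf_sem, FETCH_BUF_NR);
	for (i = 0; i < FETCH_BUF_NR; i++) {
		fetch_bufs[i] = kmalloc(FETCH_BUF_SIZE, GFP_KERNEL);
		if (!fetch_bufs[i])
			return -ENOMEM;
	}
	return 0;
}

/* May sleep until a slot becomes free; the handler runs in task context. */
static void *fetch_buf_get(void)
{
	int i;

	down(&fetch_buf_sem);

	spin_lock(&fetch_buf_lock);
	i = find_first_zero_bit(&fetch_buf_used, FETCH_BUF_NR);
	__set_bit(i, &fetch_buf_used);
	spin_unlock(&fetch_buf_lock);

	return fetch_bufs[i];
}

static void fetch_buf_put(void *buf)
{
	int i;

	spin_lock(&fetch_buf_lock);
	for (i = 0; i < FETCH_BUF_NR; i++) {
		if (fetch_bufs[i] == buf) {
			__clear_bit(i, &fetch_buf_used);
			break;
		}
	}
	spin_unlock(&fetch_buf_lock);

	up(&fetch_buf_sem);
}

The handler would call fetch_buf_get() before copying from user memory and
fetch_buf_put() after storing the data into the ring buffer, so it only
waits when all slots are busy.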

Thank you,

-- 
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu...@hitachi.com

