On 2021/6/28 3:49 PM, Frank Chang wrote:
On Mon, Jun 28, 2021 at 3:40 PM LIU Zhiwei <zhiwei_...@c-sky.com> wrote:


    On 2021/6/28 3:23 PM, Frank Chang wrote:
    On Mon, Jun 28, 2021 at 3:17 PM LIU Zhiwei <zhiwei_...@c-sky.com> wrote:


        On 2021/6/26 8:56 PM, Frank Chang wrote:
        On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei
        <zhiwei_...@c-sky.com> wrote:


            On 6/13/21 6:10 PM, Frank Chang wrote:
            On Fri, Apr 9, 2021 at 3:57 PM LIU Zhiwei
            <zhiwei_...@c-sky.com> wrote:

                +static void riscv_clic_realize(DeviceState *dev, Error **errp)
                +{
                +    RISCVCLICState *clic = RISCV_CLIC(dev);
                +    size_t harts_x_sources = clic->num_harts * clic->num_sources;
                +    int irqs, i;
                +
                +    if (clic->prv_s && clic->prv_u) {
                +        irqs = 3 * harts_x_sources;
                +    } else if (clic->prv_s || clic->prv_u) {
                +        irqs = 2 * harts_x_sources;
                +    } else {
                +        irqs = harts_x_sources;
                +    }
                +
                +    clic->clic_size = irqs * 4 + 0x1000;
                +    memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops,
                +                          clic, TYPE_RISCV_CLIC, clic->clic_size);
                +
                +    clic->clicintip = g_new0(uint8_t, irqs);
                +    clic->clicintie = g_new0(uint8_t, irqs);
                +    clic->clicintattr = g_new0(uint8_t, irqs);
                +    clic->clicintctl = g_new0(uint8_t, irqs);
                +    clic->active_list = g_new0(CLICActiveInterrupt, irqs);
                +    clic->active_count = g_new0(size_t, clic->num_harts);
                +    clic->exccode = g_new0(uint32_t, clic->num_harts);
                +    clic->cpu_irqs = g_new0(qemu_irq, clic->num_harts);
                +    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio);
                +
                +    /* Allocate irq through gpio, so that we can use qtest */
                +    qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
                +    qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts);
                +
                +    for (i = 0; i < clic->num_harts; i++) {
                +        RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));


            As the spec says:
              Smaller single-core systems might have only a CLIC,
              while multicore systems might have a CLIC per-core
              and a single shared PLIC.
              The PLIC xeip signals are treated as hart-local
              interrupt sources by the CLIC at each core.

            It looks like it's possible to have one CLIC instance
            per core.

            If you want to deliver an interrupt to one hart, you
            should encode the IRQ from the interrupt number, the
            hart number, and the interrupt target privilege, then
            set the irq.
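
            For illustration, a minimal sketch of such an encoding
            (a hypothetical helper; the mode/hart/source ordering is
            an assumption inferred from the irqs computation in
            riscv_clic_realize() above):

                /* Hypothetical: flatten (privilege mode, hart, source)
                 * into the single irq index used by the GPIO lines,
                 * assuming irqs are grouped first by mode, then by
                 * hart, then by interrupt source. */
                static int riscv_clic_encode_irq(RISCVCLICState *clic,
                                                 int mode_index,
                                                 int hartid, int intno)
                {
                    size_t harts_x_sources =
                        clic->num_harts * clic->num_sources;
                    return mode_index * harts_x_sources +
                           hartid * clic->num_sources + intno;
                }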

            I think calculating the hart number is the task of the
            PLIC, and it can make use of "hartid-base" to do so.

            Thanks,
            Zhiwei


        Hi Zhiwei,

        What I mean is that there may be multiple CLIC instances,
        one per core (the CLIC spec allows that).
        If CLIC always binds to CPUs starting from index 0, it is
        impossible for a CLIC instance to bind to CPUs starting
        from any index other than 0.

        For example, for 4 cores system, it's possible to have 4
        CLIC instances:
          * CLIC 0 binds to CPU 0
          * CLIC 1 binds to CPU 1
          * CLIC 2 binds to CPU 2
          * CLIC 3 binds to CPU 3

        and that's why I said it's possible to pass an extra
        "hartid-base" just like the PLIC does.
        I know most hartids are calculated from the requesting
        address, so most hartid usages should be fine.
        But I saw two places using qemu_get_cpu(), which may cause
        a problem for the scenario I described above:
        i.e. riscv_clic_next_interrupt() and riscv_clic_realize(),
        as in my original reply.
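
        For example, a minimal sketch of how riscv_clic_realize()
        could honor such a property (the hartid_base field, with
        default 0, is an assumption mirroring the SiFive PLIC):

            /* Sketch: bind this CLIC to harts starting at the
             * assumed hartid_base instead of always starting at 0. */
            for (i = 0; i < clic->num_harts; i++) {
                RISCVCPU *cpu =
                    RISCV_CPU(qemu_get_cpu(clic->hartid_base + i));
                qemu_irq irq =
                    qemu_allocate_irq(riscv_clic_cpu_irq_handler,
                                      &cpu->env, 1);
                qdev_connect_gpio_out(dev, i, irq);
                cpu->env.clic = clic;
            }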

        So what's the problem here?

        Currently all cores share the same CLIC instance. Do you want
        to give each core a CLIC instance?

    Yes, that's what I mean, and it should be supported according
    to the spec[1]:
      The CLIC complements the PLIC. Smaller single-core systems
    might have only a CLIC,
      while multicore systems might have *a CLIC per-core* and a
    single shared PLIC.
      The PLIC xeip signals are treated as hart-local interrupt
    sources by the CLIC at each core.

    [1]
    https://github.com/riscv/riscv-fast-interrupt/blob/646310a5e4ae055964b4680f12c1c04a7cc0dd56/clic.adoc#12-clic-versus-plic

    Thanks,
    Frank Chang

    If we give each core a CLIC instance, it is not convenient to
    access the shared memory, such as 0x0-0x1000.
    Which CLIC instance should contain this memory region?

What do you mean by "access the shared memory" here?

It means the cliccfg or clicinfo, which should be shared by all CLIC instances.

Thanks,
Zhiwei

I thought the memory region is defined during CLIC's creation?
So it should depend on the platform that creates CLIC instances.
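
For illustration, a minimal sketch of what such platform code might
look like, with one CLIC per hart so each instance carries its own
cliccfg/clicinfo region (the property names, clic_base/clic_stride
layout, and smp_cpus loop bound are assumptions):

    /* Hypothetical machine code: one CLIC instance per hart,
     * each mapped at its own base address. */
    for (i = 0; i < smp_cpus; i++) {
        DeviceState *clic = qdev_new(TYPE_RISCV_CLIC);
        qdev_prop_set_uint32(clic, "num-harts", 1);     /* assumed */
        qdev_prop_set_uint32(clic, "hartid-base", i);   /* assumed */
        sysbus_realize_and_unref(SYS_BUS_DEVICE(clic), &error_fatal);
        sysbus_mmio_map(SYS_BUS_DEVICE(clic), 0,
                        clic_base + i * clic_stride);
    }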

Thanks,
Frank Chang

    Thanks,
    Zhiwei


        Thanks,
        Zhiwei

        Regards,
        Frank Chang

            However, the code binds CPU references starting from
            index i = 0.
            It's not possible for each per-core CLIC to bind its
            own CPU instance in a multicore system, as they all
            have to bind starting from CPU 0.

            I'm not sure whether adding a new "hartid-base" property,
            just like the one the SiFive PLIC implements, would be a
            good idea or not.
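
            For reference, a sketch of what such a property could
            look like (the hartid_base field on RISCVCLICState is an
            assumption; the SiFive PLIC declares its "hartid-base"
            the same way):

                static Property riscv_clic_properties[] = {
                    /* Assumed new field: first hartid this CLIC
                     * instance serves. */
                    DEFINE_PROP_UINT32("hartid-base", RISCVCLICState,
                                       hartid_base, 0),
                    DEFINE_PROP_END_OF_LIST(),
                };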


            Regards,
            Frank Chang

                +        qemu_irq irq = qemu_allocate_irq(riscv_clic_cpu_irq_handler,
                +                                         &cpu->env, 1);
                +        qdev_connect_gpio_out(dev, i, irq);
                +        cpu->env.clic = clic;
                +    }
                +}
                +

