Now I would like to use te.extern to plug in an external op-level library, like
cuDNN. For complex ops, like conv2d or dense, we can add new dispatch rules
in the op strategy, e.g. @softmax_strategy.register(["cuda", "gpu"]), and then
redefine the compute with te.extern (a rough sketch of that pattern follows the
registration code below). But for some injective or broadcast ops, TVM
defines the same compute function across all targets, as follows:

    RELAY_REGISTER_OP("nn.relu")
        .describe(R"code(Returns the relu input array, computed element-wise.

    .. math::
       max(x, 0)

    )code" TVM_ADD_FILELINE)
        .set_num_inputs(1)
        .add_argument("data", "Tensor", "The input tensor.")
        .set_support_level(1)
        .add_type_rel("Identity", IdentityRel)
        .set_attr<FInferCorrectLayout>("FInferCorrectLayout", ElemwiseArbitraryLayout)
        .set_attr<FTVMCompute>("FTVMCompute", [](const Attrs& attrs, const Array<te::Tensor>& inputs,
                                                 const Type& out_type) {
          return Array<te::Tensor>{topi::relu(inputs[0], 0.0f)};
        });
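
For reference, this is roughly what I mean by the strategy-level override for the
complex-op case. It is only a sketch under my assumptions: softmax_cudnn_compute is
my own name, and I am guessing at the packed-function name and calling convention
(borrowed from what tvm.contrib.cudnn.softmax does internally):

    import tvm
    import topi  # newer TVM releases expose this as tvm.topi
    from tvm import te
    from tvm.relay.op import OpStrategy
    from tvm.relay.op.strategy import softmax_strategy, wrap_topi_schedule

    def softmax_cudnn_compute(attrs, inputs, out_type):
        x = inputs[0]
        # te.extern emits a call to a packed function at build time. The
        # packed-function name and argument order are assumptions based on
        # what tvm.contrib.cudnn.softmax does internally.
        out = te.extern(
            x.shape, [x],
            lambda ins, outs: tvm.tir.call_packed(
                "tvm.contrib.cudnn.softmax.forward", ins[0], outs[0], attrs.axis),
            name="softmax_cudnn")
        return [out]

    @softmax_strategy.register(["cuda", "gpu"])
    def softmax_strategy_cudnn(attrs, inputs, out_type, target):
        strategy = OpStrategy()
        # plevel above the default (10) so this implementation wins dispatch
        strategy.add_implementation(
            softmax_cudnn_compute,
            wrap_topi_schedule(topi.generic.schedule_extern),
            name="softmax.cudnn",
            plevel=15)
        return strategy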


Now it seems that the dispatch logic has been removed from TOPI, so how can I add a
new compute for relu? @haichen
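
To make the question concrete, what I would like to be able to write is something
like the following. This is purely hypothetical: my.lib.relu is a placeholder for
whatever packed function the external library registers, and I am only guessing
that re-registering FTVMCompute at a higher level would shadow the C++ registration
above:

    import tvm
    from tvm import te

    def relu_extern_compute(attrs, inputs, out_type):
        x = inputs[0]
        # "my.lib.relu" is a placeholder; the external library would have to
        # register a packed function under this name.
        out = te.extern(
            x.shape, [x],
            lambda ins, outs: tvm.tir.call_packed("my.lib.relu", ins[0], outs[0]),
            name="relu_extern")
        return [out]

    # Guess: register at a level above the default (10), hoping it shadows
    # the FTVMCompute set by the C++ registration.
    tvm.ir.register_op_attr("nn.relu", "FTVMCompute", relu_extern_compute, level=11)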
