# Summary

This RFC outlines a set of additional APIs for the C Runtime to enable direct 
calling of an AOT micro entrypoint 
(https://discuss.tvm.apache.org/t/rfc-utvm-aot-optimisations-for-embedded-targets/9849)
 from a model descriptor which includes some model metadata. This is an 
alternative to the packed function API when working in embedded environments.
 
```c
typedef struct {
        /* ...metadata... */
        TVMMicroEntryPoint entrypoint;
} TVMModel; // Model descriptor to be used in static linkage

typedef struct {
        /* ... */
        void** workspace;
} TVMContext; // Context configuration for minimal environments

// Execution function to run a model in a given context
static inline int32_t TVMExecute(const TVMModel* model, void** inputs, void** outputs, TVMContext* context);

// Workspace setup function to assign the workspace to the context
static inline void TVMSetWorkspaces(TVMContext* context, void** workspace);

// Workspace size retrieval
static inline size_t TVMGetWorkspaceSize(const TVMModel* model, size_t workspace_index);
```

# Motivation

As illustrated by @stoa in 
https://discuss.tvm.apache.org/t/rfc-standalone-code-generation-and-c-runtime-for-stm32-bare-metal-devices/9562,
 an embedded-specific entrypoint into TVM is desired. In order to access the 
AOT output from an embedded environment, it makes sense to provide a stable 
user-facing API, so that underlying changes in the output model can be 
transparent to system integrators. Providing stable interfaces to the 
facilities of the existing C runtime in an embedded environment gives similar 
guarantees and ease of use to those not using the packed function signature in 
TVM. It also gives TVM developers the ability to change the underlying micro 
runtime as TVM evolves, behind a stable outward-facing interface.

One of the principles of the micro entrypoint is that it adds a minimal amount 
of overhead when running in an embedded system; a similarly minimal way to run 
a simple model is therefore introduced, which can be augmented by the wider C 
Runtime.

# Guide-level explanation

This RFC aims to introduce the concepts needed to call the AOT micro 
entrypoint from an embedded application. As a starting point, this proposal 
includes:

* A model descriptor to give richer information about the model and wrap the 
micro entrypoint
* A model context to store embedded environment information
* Initial functions for managing memory workspaces

A user can include these as additional headers to allow a thin and stable 
interface for the AOT execution entrypoint, instead of having:

*user_app.c*
```c
extern const TVMModel my_model;
my_model.entrypoint(inputs, outputs, my_context);
```

And having to understand the calling pattern of the AOT output, they can 
instead use:

*user_app.c*
```c
#include "tvm_micro_runtime.h"
extern const TVMModel my_model;
TVMExecute(&my_model, inputs, outputs, &my_context);
```

This would be achieved by using minimal inline functions to mask the internal 
structure of TVMModel, such as:

*tvm_micro_runtime.h*
```c
#include "tvm_micro_backend.h"

static inline int32_t TVMExecute(const TVMModel* model, void** inputs, void** outputs, TVMContext* context) {
        return model->entrypoint(inputs, outputs, context);
}
```
*tvm_micro_backend.h*
```c
typedef struct {
        /* ...metadata... */
        TVMMicroEntryPoint entrypoint;
} TVMModel; // Model descriptor to be used in static linkage

typedef struct {
        /* ... */
        void** workspace;
} TVMContext; // Context configuration for minimal environments
```

This can be seen in two motivating user flows: compiling a model with the 
defaults, and then augmenting it with application-level memory management.

## Default Model Compilation

![](https://confluence.arm.com/download/attachments/759974179/structurizr-barebones.png?version=1&modificationDate=1619713091765&api=v2)

In this flow, the user is using tvmc to generate a model and an associated 
block of memory is allocated for it:

`tvmc my_model.tflite --executor=aot --target=c --no-typed-operators 
--micro-entrypoint`

For this flow, no additional context is required and the user can run the code 
on their device:

```c
extern const TVMModel my_model;

void* inputs[] = {my_data};
void* outputs[] = {output_space};

TVMExecute(&my_model, inputs, outputs, NULL);
```

This is enabled by a TVMModel structure generated by TVM to expose the AOT 
resources; it can be constant, provided as part of the compiler output with 
relevant metadata for users to query.

## Custom-Workspace Compilation

![](https://confluence.arm.com/download/attachments/759974179/structurizr-baremetal.png?version=3&modificationDate=1619713102527&api=v2)

In this flow, the user is using tvmc to generate a model but specifies the 
memory available:

`tvmc my_model.tflite --executor=aot --target=c --no-typed-operators 
--micro-entrypoint --with-memory=size=2048;access=rw`

For this flow, the additional context is required to tell the runtime where 
the memory exists:

```c
extern const TVMModel my_model;
TVMContext context;

void* inputs[] = {my_data};
void* outputs[] = {output_space};
void* workspace[] = {malloc(TVMGetWorkspaceSize(&my_model, 0))};

TVMSetWorkspaces(&context, workspace);
TVMExecute(&my_model, inputs, outputs, &context);
```

This works because of the context in which the model runs: similar to the 
DLContext object, but providing only information not hardcoded into the AOT 
output, for a minimal runtime. By re-using the resource_handle pointer, the 
embedded context can also be used by operators run through packed functions 
and normal TVM buffers.
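As a sketch of that reuse, an operator following the packed-function calling convention receives the context through its trailing `resource_handle` argument and can reach the application-provided workspace. All names here (`my_operator`, the float scratch buffer) are illustrative, not part of this proposal:

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal context, mirroring the TVMContext proposed in this RFC */
typedef struct {
  void** workspace;
} TVMContext;

/* A packed-function style operator: the final resource_handle argument
 * carries the same context pointer that was passed to TVMExecute, so the
 * operator can reach the application-provided workspace. */
static int32_t my_operator(void* args, void* type_codes, int num_args,
                           void* ret_value, void* resource_handle) {
  TVMContext* context = (TVMContext*)resource_handle;
  float* scratch = (float*)context->workspace[0]; /* operator scratch memory */
  scratch[0] = 0.0f;                              /* zero the accumulator */
  return 0;
}
```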

# Reference-level explanation

In this RFC, we are primarily concerned with three areas: a model descriptor 
which the compiler generates, a context which the user can manipulate, and an 
API file which binds the two together.

## Model Descriptor

This is a formalisation of the model descriptor found in 
tvm/runtime/crt/internal/aot_executor/aot_executor.h, which can be used to 
describe a model via the APIs proposed:

```c
typedef struct {
  uint32_t num_input_tensors;    /** Number of expected input tensors */
  uint32_t num_output_tensors;   /** Number of expected output tensors */
  size_t* workspace_size;        /** Size of workspace required for the model to run */
  TVMMicroEntryPoint entrypoint; /** Generated model function, called through tvm_runtime_run */
} TVMModel;
```

This is the generated fixed model descriptor, which users can address by name 
in the output code:

`extern const TVMModel my_model;`

Additional fields can be added here alongside suitable getters to retrieve 
information about a model. Notably, if the workspace isn't specified by the 
user, it'll default to being pinned within the generated code rather than being 
user accessible.
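As a sketch of such a getter (the struct is repeated here only to keep the example self-contained, and `TVMGetNumInputTensors` is an illustrative name rather than part of this proposal), it would follow the same `static inline` pattern and keep the descriptor layout opaque to the user:

```c
#include <stdint.h>
#include <stddef.h>

typedef int32_t (*TVMMicroEntryPoint)(void** inputs, void** outputs, void* context);

typedef struct {
  uint32_t num_input_tensors;    /** Number of expected input tensors */
  uint32_t num_output_tensors;   /** Number of expected output tensors */
  size_t* workspace_size;        /** Size of workspace required for the model to run */
  TVMMicroEntryPoint entrypoint; /** Generated model function */
} TVMModel;

/* Hypothetical getter: retrieves metadata without the user touching fields */
static inline uint32_t TVMGetNumInputTensors(const TVMModel* model) {
  return model->num_input_tensors;
}
```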

## Context

Paired with the model descriptor, this provides any contextual information 
required to run the model, such as an application driven workspace 
configuration:

```c
typedef struct {
        void** workspace; /** Pointers to different memory blocks to use as a workspace */
} TVMContext;
```
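Since many bare-metal targets avoid heap allocation entirely, the workspace pointers can equally be backed by statically allocated buffers rather than `malloc`. A minimal sketch, with the buffer name and its 2048-byte size purely illustrative:

```c
#include <stdint.h>

typedef struct {
  void** workspace;
} TVMContext;

/* Statically allocated scratch memory in place of a heap allocation */
static uint8_t workspace_buffer[2048];
static void* workspace_ptrs[1] = {workspace_buffer};

/* Context wired up at compile time, no runtime setup call needed */
static TVMContext context = {workspace_ptrs};
```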

## Micro Entrypoint Runtime API

A header which can be added to the `src/runtime` folder alongside 
`c_backend_api.h` and `c_runtime_api.h` to provide the correct overlay to the 
matching C runtime. Using `static inline` functions, each of the individual 
calls can be kept minimal while providing abstraction on top of the underlying 
model:

```c
static inline int32_t TVMExecute(const TVMModel* model, void** inputs, void** outputs, TVMContext* context) {
        return model->entrypoint(inputs, outputs, context);
}

static inline size_t TVMGetWorkspaceSize(const TVMModel* model, size_t workspace_index) {
        return model->workspace_size[workspace_index];
}

static inline void TVMSetWorkspaces(TVMContext* context, void** workspace) {
        context->workspace = workspace;
}
```

# Drawbacks

This starts to build up a minimal interface for interacting with TVM, which 
deviates from the main dynamic linked approach. It's important to keep this 
layer as minimal as possible to allow other parts of TVM to continue doing the 
heavy lifting.

Combining this with the core C runtime means maintaining support across an 
incredibly broad range of devices from single core embedded devices to cloud 
environments and dynamically loading for autotuning.

# Rationale and alternatives

Integrating with the current C Runtime gives us a way to assess and move 
forwards with embedded-specific changes. Alternatively, an entirely different 
runtime environment could be created, but this would mean reinventing every 
aspect of the runtime and would not leverage as much of the existing work.

# Prior art

* The setting up of an application workspace for a TVM model was first 
demonstrated in 
https://discuss.tvm.apache.org/t/rfc-standalone-code-generation-and-c-runtime-for-stm32-bare-metal-devices/9562
* AOT introduced the concept of a model descriptor in 
tvm/runtime/crt/internal/aot_executor/aot_executor.h [within its introductory 
PR](https://github.com/apache/tvm/pull/7785)

# Unresolved questions

* Is this lightweight enough to allow usage of the C Runtime where useful to 
embedded applications?
* Should we use the common C snake case style to better match embedded systems 
rather than the style used by the C runtime?

# Future possibilities

By integrating with and evolving the C Runtime API, TVM can be targeted across 
a broader range of devices than is possible with the current API. This section 
outlines some of the use cases this could extend into; they are intended to be 
illustrative and will require their own RFCs.

## Re-use of C Runtime APIs

Using the C runtime provides access to standard interfaces, such as 
multithreading, with an RTOS-specific implementation.

![](https://confluence.arm.com/download/attachments/759974179/structurizr-multithreaded.png?version=1&modificationDate=1619712082407&api=v2)

## Further Model Metadata

Any additional metadata can be added to the `TVMModel` structure as a minimal 
overhead, allowing for extension into a variety of use cases.

```c
static inline int32_t TVMGetTVMVersionMajor(const TVMModel* model) {
        return model->compiler_version_major;
}
```

## Shared Workspaces

In this flow, the user disables the generation of a default memory block, 
allowing the application to define that memory instead:

`tvmc my_model1.tflite --executor=aot --target=c --no-typed-operators 
--micro-entrypoint --with-memory=size=2048;access=rw`

`tvmc my_model2.tflite --executor=aot --target=c --no-typed-operators 
--micro-entrypoint --with-memory=size=4096;access=rw`

This can then be loaded into the context, from which the executor takes its memory:

```c
TVMContext my_context;

size_t workspace1_size = TVMGetWorkspaceSize(&my_model1, 0);
size_t workspace2_size = TVMGetWorkspaceSize(&my_model2, 0);
size_t max_workspace_required =
    workspace1_size > workspace2_size ? workspace1_size : workspace2_size;

void* workspace[] = {malloc(max_workspace_required)};
TVMSetWorkspaces(&my_context, workspace);
```

## RTOS Device Integration

![](https://confluence.arm.com/download/attachments/759974179/structurizr-accelerator.png?version=1&modificationDate=1619712145836&api=v2)

The context object can be defined per-platform to allow RTOS specific 
structures to be passed through to the operators:

```c
struct device* my_accel = device_get_binding("ACC_0");
TVMSetDevice(&my_context, my_accel);
```

With an associated header-only platform wrapper, here is an example for the 
Zephyr RTOS:

```c
#include <device.h>

typedef struct {
  void** workspace;
  struct device* device;
} TVMContext;

static inline void TVMSetDevice(TVMContext* context, struct device* device) {
        context->device = device;
}
```

Alongside new device drivers, this can provide an interface for operators to 
interact with RTOS drivers directly in the C runtime:

```c
void TVMAcceleratorAccelerate(TVMContext* context, int32_t operation) {
        struct device* device = context->device;
        device_specific_rtos_call(device, operation);
}
```

## Parameter Updating

By opening up this alternative pathway into a more static execution 
environment, we can start to provide methods for overwriting aspects of the 
model, such as existing in-memory parameters:

```c
static inline void TVMSetParameters(TVMContext* context, void** params) {
        context->params = params;
}
```

This can then provide the potential for Over-the-Air updates of models on IoT 
devices.





---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-utvm-embedded-c-runtime-interface/9951/1) to respond.