@comaniac I've done some refactoring to disentangle `TuningJob` from 
`ConfigLibrary`. The tuning loop now looks like this:

```
def tune_kernels(tasks,
                 target,
                 n_trial,
                 config_library,
                 measure_option,
                 log_filename='tuning.log'):

    # Create a tuning job and point it at a config library
    job = TuningJob(
        log_filename,
        target,
        config_library=config_library,
    )
    # Use the tuning job during the tuning loop
    with job:
        for i, tsk in enumerate(tasks):
            prefix = "[Task %2d/%2d] " % (i+1, len(tasks))

            # Convert conv2d tasks to conv2d_NCHWc tasks
            task = autotvm.task.create("topi_x86_conv2d_NCHWc", args=tsk.args,
                                       target=target, template_key='direct')
            task.workload = tsk.workload

            # Create tuner
            tuner_obj = GridSearchTuner(task)

            # Do tuning - the tuner will skip tasks which have already been
            # tuned in the config library
            tuner_obj.tune(
                n_trial=n_trial,
                early_stopping=n_trial,
                measure_option=measure_option,
                callbacks=[autotvm.callback.progress_bar(n_trial, prefix=prefix)],
            )
```

Using `with job` puts the job into the global tuning scope. The job then 
automatically registers its own callback with each tuner, and a new tuner 
method, `load_library`, is called whenever the TuningJob has a ConfigLibrary 
attached. This is where the resume logic lives. For now I have only implemented 
the basic logic to skip already-completed tasks, but more advanced resume 
strategies should be straightforward to add since `load_library` has access to 
the full ConfigLibrary.
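For reference, here is a minimal sketch of what that skip logic could look 
like. This is not the code in the PR; the `get(workload)` lookup on the 
library is an invented helper for illustration only:

```
# Hypothetical sketch only -- `get(workload)` is an illustrative helper,
# not the actual ConfigLibrary API.
class SkipCompletedMixin:
    """Resume-by-skipping logic a tuner could mix in."""

    def load_library(self, config_library):
        # Called by the TuningJob when it has a ConfigLibrary attached;
        # keep a reference so the tuner can consult it later.
        self._config_library = config_library

    def is_task_complete(self, task):
        # A task counts as done if the library already holds a config for
        # its workload, so tuning can be skipped on resume.
        return self._config_library.get(task.workload) is not None
```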

If you don't attach a ConfigLibrary to a job, it simply logs all the results 
to the specified log file.
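For example (a minimal sketch; `tuner_obj` and `target` are assumed to be set 
up as in the loop above):

```
# No ConfigLibrary attached: the job just appends every measured result
# to 'tuning.log', matching the existing log-to-file behaviour.
job = TuningJob('tuning.log', target)
with job:
    tuner_obj.tune(n_trial=n_trial, measure_option=measure_option)
```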

Config files are indexed within the library by target, so to use configs from 
the library you can simply do `with config_library.load(target):`. This just 
returns an ApplyHistoryBest context; it doesn't implement a new DispatchContext.
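For example, to compile a Relay model against the stored configs (a usage 
sketch; `mod` and `params` are assumed to come from a frontend import such as 
`relay.frontend.from_mxnet`):

```
from tvm import relay

# The context returned by load() dispatches the best stored configs during
# compilation, in the same way autotvm.apply_history_best does for a log file.
with config_library.load(target):
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(mod, target=target, params=params)
```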
 
I've updated my PR (https://github.com/apache/incubator-tvm/pull/4151) 
accordingly. Note that the PR does not include every feature discussed here but 
is intended as initial infrastructure on top of which more advanced features 
can be developed.


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/issues/4150#issuecomment-552982585

Reply via email to