tvm.meta_schedule

Package tvm.meta_schedule. The meta schedule infrastructure.

class tvm.meta_schedule.Builder

The abstract builder interface.

build(build_inputs: List[BuilderInput]) List[BuilderResult]

Build the given inputs.

Parameters:

build_inputs (List[BuilderInput]) – The inputs to be built.

Returns:

build_results – The results of building the given inputs.

Return type:

List[BuilderResult]

static create(kind: Literal['local'] = 'local', *args, **kwargs) Builder

Create a Builder.

Parameters:

kind (Literal["local"]) – The kind of the builder. For now, only “local” is supported.

Returns:

builder – The builder created.

Return type:

Builder

class tvm.meta_schedule.CostModel

Cost model.

static create(kind: Literal['xgb', 'mlp', 'random', 'none'], *args, **kwargs) CostModel

Create a CostModel.

Parameters:

kind (Literal["xgb", "mlp", "random", "none"]) – The kind of the cost model. Can be “xgb”, “mlp”, “random” or “none”.

Returns:

cost_model – The created cost model.

Return type:

CostModel

load(path: str) None

Load the cost model from given file location.

Parameters:

path (str) – The file path.

predict(context: TuneContext, candidates: List[MeasureCandidate]) ndarray

Predict normalized score with the cost model.

Parameters:
  • context (TuneContext) – The tuning context.

  • candidates (List[MeasureCandidate]) – The measure candidates to predict scores for.

Returns:

result – The predicted normalized score.

Return type:

np.ndarray

save(path: str) None

Save the cost model to given file location.

Parameters:

path (str) – The file path.

update(context: TuneContext, candidates: List[MeasureCandidate], results: List[RunnerResult]) None

Update the cost model given running results.

Parameters:
  • context (TuneContext) – The tuning context.

  • candidates (List[MeasureCandidate]) – The measure candidates.

  • results (List[RunnerResult]) – The running results of the measure candidates.

class tvm.meta_schedule.Database

The abstract database interface.

commit_tuning_record(record: TuningRecord) None

Commit a tuning record to the database.

Parameters:

record (TuningRecord) – The tuning record to add.

commit_workload(mod: IRModule) Workload

Commit a workload to the database if missing.

Parameters:

mod (IRModule) – The IRModule to be searched for or added.

Returns:

workload – The workload corresponding to the given IRModule.

Return type:

Workload

static create(kind: Literal['json', 'memory', 'union', 'ordered_union'] | Callable[[Schedule], bool] = 'json', *args, **kwargs) Database

Create a Database.

Parameters:
kind (Union[Literal["json", "memory", "union", "ordered_union"], Callable[[tvm.tir.Schedule], bool]] = "json") – The kind of the database to be created. The following kinds are supported: “json”, “memory”, “union”, “ordered_union”, and a custom schedule function.

Returns:

database – The created database.

Return type:

Database

static current() Database | None

Get the current database under scope.

dump_pruned(destination: Database) None

Dump the pruned database to files of JSONDatabase format.

Parameters:

destination (Database) – The destination database to be dumped to.

get_all_tuning_records() List[TuningRecord]

Get all the tuning records from the database.

Returns:

tuning_records – All tuning records from the database.

Return type:

List[TuningRecord]

get_top_k(workload: Workload, top_k: int) List[TuningRecord]

Get the top K valid tuning records of given workload from the database.

Parameters:
  • workload (Workload) – The workload to be searched for.

  • top_k (int) – The number of top records to get.

Returns:

top_k_records – The top K records.

Return type:

List[TuningRecord]

has_workload(mod: IRModule) bool

Check if the database has the given workload.

Parameters:

mod (IRModule) – The IRModule to be searched for.

Returns:

result – Whether the database has the given workload.

Return type:

bool

query(mod: IRModule, target: Target, *, workload_name: str = 'main', kind: Literal['schedule'] | Literal['record'] | Literal['ir_module'] = 'schedule') Schedule | IRModule | TuningRecord

Query the database to retrieve the best optimization outcome of the given workload.

Parameters:
  • mod (IRModule) – The IRModule to be searched for.

  • target (Target) – The target to be searched for.

  • workload_name (str = "main") – The name of the workload to be searched for.

  • kind (str = "schedule" | "record" | "ir_module") – The kind of the optimization outcome to be returned.

Returns:

result – The best optimization outcome of the given workload.

Return type:

Union[tvm.tir.Schedule, IRModule, TuningRecord]

query_ir_module(mod: IRModule, target: Target, workload_name: str) IRModule | None

Query the best IRModule of the given workload from the database.

Parameters:
  • mod (IRModule) – The IRModule to be searched for.

  • target (Target) – The target to be searched for.

  • workload_name (str) – The name of the workload to be searched for.

Returns:

ir_module – The best IRModule of the given workload; None if not found.

Return type:

Optional[IRModule]

query_schedule(mod: IRModule, target: Target, workload_name: str) Schedule | None

Query the best schedule of the given workload from the database.

Parameters:
  • mod (IRModule) – The IRModule to be searched for.

  • target (Target) – The target to be searched for.

  • workload_name (str) – The name of the workload to be searched for.

Returns:

schedule – The best schedule of the given workload; None if not found.

Return type:

Optional[tvm.tir.Schedule]

query_tuning_record(mod: IRModule, target: Target, workload_name: str) TuningRecord | None

Query the best record of the given workload from the database.

Parameters:
  • mod (IRModule) – The IRModule to be searched for.

  • target (Target) – The target to be searched for.

  • workload_name (str) – The name of the workload to be searched for.

Returns:

tuning_record – The best record of the given workload; None if not found.

Return type:

Optional[TuningRecord]

class tvm.meta_schedule.ExtractedTask(task_name: str, mod: IRModule, target: Target, dispatched: List[IRModule], weight: int)

A tuning task extracted from the high-level IR

Parameters:
  • task_name (str) – The name of the task extracted

  • mod (IRModule) – The high-level IR

  • target (Target) – Target information

  • dispatched (List[IRModule]) – A list of low-level IRs that the high-level IR could potentially dispatch to

  • weight (int) – The weight of the task

class tvm.meta_schedule.FeatureExtractor

Extractor for features from measure candidates for use in cost model.

static create(kind: Literal['per-store-feature'], *args, **kwargs) FeatureExtractor

Create a FeatureExtractor. For now, only “per-store-feature” is supported.

extract_from(context: TuneContext, candidates: List[MeasureCandidate]) List[NDArray]

Extract features from the given measure candidate.

Parameters:
  • context (TuneContext) – The tuning context for feature extraction.

  • candidates (List[MeasureCandidate]) – The measure candidates to extract features from.

Returns:

features – The feature tvm ndarray extracted.

Return type:

List[NDArray]

class tvm.meta_schedule.MeasureCallback

Rules to apply after measure results are available.

apply(task_scheduler: TaskScheduler, task_id: int, measure_candidates: List[MeasureCandidate], builder_results: List[BuilderResult], runner_results: List[RunnerResult]) None

Apply a measure callback to the given schedule.

Parameters:
  • task_scheduler (TaskScheduler) – The task scheduler.

  • task_id (int) – The task id.

  • measure_candidates (List[MeasureCandidate]) – The measure candidates.

  • builder_results (List[BuilderResult]) – The builder results by building the measure candidates.

  • runner_results (List[RunnerResult]) – The runner results by running the built measure candidates.

static create(kind: Literal['default']) List[MeasureCallback]

Create a list of measure callbacks.

class tvm.meta_schedule.MeasureCandidate(sch: Schedule, args_info: List[ArgInfo])

Measure candidate class.

Parameters:
  • sch (tvm.tir.Schedule) – The schedule to be measured.

  • args_info (List[ArgInfo]) – The argument information.

class tvm.meta_schedule.Mutator

Mutator is designed to mutate the trace to explore the design space.

apply(trace: Trace) Trace | None

Apply the mutator function to the given trace.

Parameters:

trace (Trace) – The given trace for mutation.

Returns:

trace – None if mutator failed, otherwise return the mutated trace.

Return type:

Optional[Trace]

clone() Mutator

Clone the mutator.

Returns:

mutator – The cloned mutator.

Return type:

Mutator

static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) Dict[Mutator, float]

Create the default mutators for the given kind, together with their sampling probabilities.

Parameters:

kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of mutators.

Returns:

mutators – The default mutators and their sampling probabilities.

Return type:

Dict[Mutator, float]

class tvm.meta_schedule.Postproc

Rules to apply a postprocessor to a schedule.

apply(sch: Schedule) bool

Apply a postprocessor to the given schedule.

Parameters:

sch (tvm.tir.Schedule) – The schedule to be post processed.

Returns:

result – Whether the postprocessor was successfully applied.

Return type:

bool

clone() Postproc

Clone the postprocessor.

Returns:

cloned_postproc – The cloned postprocessor.

Return type:

Postproc

static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) List[Postproc]

Create a list of default postprocessors.

Parameters:

kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of the postprocessors.

Returns:

postprocs – The list of postprocessors.

Return type:

List[Postproc]

class tvm.meta_schedule.Profiler

Tuning time profiler.

static current() Profiler | None

Get the current profiler.

get() Dict[str, float]

Get the profiling results in seconds

table() str

Get the profiling results in a table format

static timeit(name: str)

Time a block of code, used as a context manager.

class tvm.meta_schedule.Runner

The abstract runner interface

static create(kind: Literal['local', 'rpc'] = 'local', *args, **kwargs) Runner

Create a Runner.

run(runner_inputs: List[RunnerInput]) List[RunnerFuture]

Run the built artifact and get runner futures.

Parameters:

runner_inputs (List[RunnerInput]) – The inputs to the runner.

Returns:

runner_futures – The runner futures.

Return type:

List[RunnerFuture]

class tvm.meta_schedule.ScheduleRule

Rules to modify a block in a schedule.

apply(sch: Schedule, block: BlockRV) List[Schedule]

Apply a schedule rule to the specific block in the given schedule.

Parameters:
  • sch (tvm.tir.Schedule) – The schedule to be modified.

  • block (BlockRV) – The specific block to apply the schedule rule.

Returns:

design_spaces – The list of schedules generated by applying the schedule rule.

Return type:

List[tvm.tir.Schedule]

clone() ScheduleRule

Deep clone the schedule rule.

Returns:

cloned_rule – The cloned schedule rule.

Return type:

ScheduleRule

static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) List[ScheduleRule]

Create a list of schedule rules for the given kind.

Parameters:

kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of the schedule rules.

Returns:

rules – The list of schedule rules.

Return type:

List[ScheduleRule]

class tvm.meta_schedule.SearchStrategy

Search strategy is the class that generates the measure candidates.

clone() SearchStrategy

Clone the search strategy.

Returns:

cloned – The cloned search strategy.

Return type:

SearchStrategy

static create(kind: Literal['evolutionary', 'replay-trace', 'replay-func'] = 'evolutionary', *args, **kwargs) SearchStrategy

Create a search strategy.

generate_measure_candidates() List[MeasureCandidate] | None

Generate measure candidates from design spaces for measurement.

Returns:

measure_candidates – The measure candidates generated, None if finished.

Return type:

Optional[List[MeasureCandidate]]

notify_runner_results(measure_candidates: List[MeasureCandidate], results: List[RunnerResult]) None

Update the search strategy with profiling results.

Parameters:
  • measure_candidates (List[MeasureCandidate]) – The measure candidates for update.

  • results (List[RunnerResult]) – The profiling results from the runner.

post_tuning() None

Post-tuning for the search strategy.

pre_tuning(max_trials: int, num_trials_per_iter: int, design_spaces: List[Schedule], database: Database | None = None, cost_model: CostModel | None = None) None

Pre-tuning for the search strategy.

Parameters:
  • max_trials (int) – The maximum number of trials.

  • num_trials_per_iter (int) – The number of trials per iteration.

  • design_spaces (List[tvm.tir.Schedule]) – The design spaces used during tuning process.

  • database (Optional[Database] = None) – The database used during tuning process.

  • cost_model (Optional[CostModel] = None) – The cost model used during tuning process.

class tvm.meta_schedule.SpaceGenerator

The abstract design space generator interface.

clone() SpaceGenerator

Clone the design space generator.

Returns:

cloned_sg – The cloned design space generator.

Return type:

SpaceGenerator

static create(kind: Literal['post-order-apply', 'union'] | Callable[[Schedule], None] | Callable[[Schedule], Schedule] | Callable[[Schedule], List[Schedule]] = 'post-order-apply', *args, **kwargs) SpaceGenerator

Create a design space generator.

generate_design_space(mod: IRModule) List[Schedule]

Generate design spaces given a module.

Parameters:

mod (IRModule) – The module used for design space generation.

Returns:

design_spaces – The generated design spaces, i.e., schedules.

Return type:

List[tvm.tir.Schedule]

class tvm.meta_schedule.TaskScheduler

The abstract task scheduler interface.

static create(kind: Literal['round-robin', 'gradient'] = 'gradient', *args, **kwargs) TaskScheduler

Create a task scheduler.

join_running_task(task_id: int) List[RunnerResult]

Wait until the task is finished.

Parameters:

task_id (int) – The task id to be joined.

Returns:

results – The list of results.

Return type:

List[RunnerResult]

next_task_id() int

Fetch the next task id.

Returns:

next_task_id – The next task id.

Return type:

int

print_tuning_statistics() None

Print out a human-readable format of the tuning statistics.

terminate_task(task_id: int) None

Terminate the task

Parameters:

task_id (int) – The task id to be terminated.

touch_task(task_id: int) None

Touch the task and update its status

Parameters:

task_id (int) – The task id to be checked.

tune(tasks: List[TuneContext], task_weights: List[float], max_trials_global: int, max_trials_per_task: int, num_trials_per_iter: int, builder: Builder, runner: Runner, measure_callbacks: List[MeasureCallback], database: Database | None, cost_model: CostModel | None) None

Auto-tuning.

Parameters:
  • tasks (List[TuneContext]) – The list of tuning contexts as tasks.

  • task_weights (List[float]) – The list of task weights.

  • max_trials_global (int) – The maximum number of trials globally.

  • max_trials_per_task (int) – The maximum number of trials per task.

  • num_trials_per_iter (int) – The number of trials per iteration.

  • builder (Builder) – The builder.

  • runner (Runner) – The runner.

  • measure_callbacks (List[MeasureCallback]) – The list of measure callbacks.

  • database (Optional[Database]) – The database.

  • cost_model (Optional[CostModel]) – The cost model.

class tvm.meta_schedule.TuneContext(mod: IRModule | None = None, *, target: Target | str | None = None, space_generator: SpaceGenerator.SpaceGeneratorType | None = None, search_strategy: SearchStrategy.SearchStrategyType | None = None, task_name: str = 'main', rand_state: int = -1, num_threads: int | Literal['physical', 'logical'] = 'physical', logger: Logger | None = None)

The tune context class is designed to contain all resources for a tuning task.

Parameters:
  • mod (Optional[IRModule] = None) – The workload to be optimized.

  • target (Union[None, Target, str] = None) – The target to be optimized for.

  • space_generator (Union[None, ScheduleFnType, SpaceGenerator] = None) – The design space generator.

  • search_strategy (Union[None, SearchStrategy] = None) – The search strategy. If None, the strategy is left blank.

  • task_name (str = "main") – The name of the tuning task.

  • rand_state (int = -1) – The random state. Must be an integer in [1, 2^31 - 1]; -1 means using a random seed.

  • num_threads (Union[int, Literal["physical", "logical"]] = "physical") – The number of threads to be used; “physical” means the number of physical CPU cores, “logical” the number of logical cores.

  • logger (logging.Logger) – The logger for the tuning task.

clone() TuneContext

Clone the TuneContext.

Returns:

cloned_context – The cloned TuneContext.

Return type:

TuneContext

generate_design_space() List[Schedule]

Generate design spaces given a module.

Delegated to self.space_generator.generate_design_space with self.mod

Returns:

design_spaces – The generated design spaces, i.e., schedules.

Return type:

List[tvm.tir.Schedule]

generate_measure_candidates() List[MeasureCandidate] | None

Generate a batch of measure candidates from design spaces for measurement.

Delegated to self.search_strategy.generate_measure_candidates.

Returns:

measure_candidates – The measure candidates generated, None if search is finished.

Return type:

Optional[List[MeasureCandidate]]

notify_runner_results(measure_candidates: List[MeasureCandidate], results: List[RunnerResult]) None

Update the state in SearchStrategy with profiling results.

Delegated to self.search_strategy.notify_runner_results.

Parameters:
  • measure_candidates (List[MeasureCandidate]) – The measure candidates for update.

  • results (List[RunnerResult]) – The profiling results from the runner.

post_tuning() None

A method to be called for SearchStrategy to do necessary cleanup after tuning.

Delegated to self.search_strategy.post_tuning.

pre_tuning(max_trials: int, num_trials_per_iter: int = 64, design_spaces: List[Schedule] | None = None, database: Database | None = None, cost_model: CostModel | None = None) None

A method to be called for SearchStrategy to do necessary preparation before tuning.

Delegated to self.search_strategy.pre_tuning.

Parameters:
  • max_trials (int) – The maximum number of trials to be executed.

  • num_trials_per_iter (int = 64) – The number of trials to be executed per iteration.

  • design_spaces (Optional[List[tvm.tir.Schedule]]) – The design spaces used during tuning process. If None, use the outcome of self.generate_design_space().

  • database (Optional[Database] = None) – The database used during tuning process. If None, and the search strategy is EvolutionarySearch, then use tvm.meta_schedule.database.MemoryDatabase.

  • cost_model (Optional[CostModel] = None) – The cost model used during tuning process. If None, and the search strategy is EvolutionarySearch, then use tvm.meta_schedule.cost_model.RandomModel.

tvm.meta_schedule.derived_object(cls: type) type

A decorator to register derived subclasses for TVM objects.

Parameters:

cls (type) – The derived class to be registered.

Returns:

cls – The decorated TVM object.

Return type:

type

Example

from typing import Callable

from tvm._ffi import register_object
from tvm import meta_schedule
from tvm.meta_schedule import _ffi_api, derived_object

@register_object("meta_schedule.PyRunner")
class _PyRunner(meta_schedule.Runner):
    def __init__(self, f_run: Callable = None):
        self.__init_handle_by_constructor__(_ffi_api.RunnerPyRunner, f_run)

class PyRunner:
    _tvm_metadata = {
        "cls": _PyRunner,
        "methods": ["run"],
    }

    def run(self, runner_inputs):
        raise NotImplementedError

@derived_object
class LocalRunner(PyRunner):
    def run(self, runner_inputs):
        ...

tvm.meta_schedule.is_meta_schedule_enabled() bool

Return whether the meta-schedule is enabled.

Returns:

enabled – Whether the meta schedule is enabled

Return type:

bool

tvm.meta_schedule.tune_tasks(*, tasks: List[TuneContext], task_weights: List[float], work_dir: str, max_trials_global: int, max_trials_per_task: int | None = None, num_trials_per_iter: int = 64, builder: Builder | Literal['local'] = 'local', runner: Runner | Literal['local', 'rpc'] = 'local', database: Database | Literal['json', 'memory'] = 'json', cost_model: CostModel | Literal['xgb', 'mlp', 'random'] = 'xgb', measure_callbacks: List[MeasureCallback] | MeasureCallback | Literal['default'] = 'default', task_scheduler: TaskScheduler | Literal['gradient', 'round-robin'] = 'gradient', module_equality: str = 'structural') Database

Tune a list of tasks using a task scheduler.

Parameters:
  • tasks (List[TuneContext]) – The list of tasks to tune.

  • task_weights (List[float]) – The weight of each task.

  • work_dir (str) – The working directory.

  • max_trials_global (int) – The maximum number of trials to run globally.

  • max_trials_per_task (Optional[int]) – The maximum number of trials to run per task.

  • num_trials_per_iter (int) – The number of trials to run per iteration

  • builder (Builder.BuilderType) – The builder.

  • runner (Runner.RunnerType) – The runner.

  • database (Database.DatabaseType) – The database.

  • cost_model (CostModel.CostModelType) – The cost model.

  • measure_callbacks (MeasureCallback.CallbackListType) – The measure callbacks.

  • task_scheduler (TaskScheduler.TaskSchedulerType) – The task scheduler.

  • module_equality (Optional[str]) – A string to specify the module equality testing and hashing method. It must be one of the following:

    • “structural”: Use StructuralEqual/Hash.

    • “ignore-ndarray”: Same as “structural”, but ignore ndarray raw data during equality testing and hashing.

    • “anchor-block”: Apply equality testing and hashing on the anchor block extracted from a given module. The “ignore-ndarray” variant is used for the extracted blocks, or in case no anchor block is found. For the definition of the anchor block, see tir/analysis/analysis.py.

Returns:

database – The database with all tuning records

Return type:

Database

tvm.meta_schedule.tune_tir(mod: IRModule | PrimFunc, target: str | Target, work_dir: str, max_trials_global: int, *, max_trials_per_task: int | None = None, num_trials_per_iter: int = 64, builder: Builder | Literal['local'] = 'local', runner: Runner | Literal['local', 'rpc'] = 'local', database: Database | Literal['json', 'memory'] = 'json', cost_model: CostModel | Literal['xgb', 'mlp', 'random'] = 'xgb', measure_callbacks: List[MeasureCallback] | MeasureCallback | Literal['default'] = 'default', task_scheduler: TaskScheduler | Literal['gradient', 'round-robin'] = 'gradient', space: SpaceGenerator | Callable[[Schedule], None] | Callable[[Schedule], Schedule] | Callable[[Schedule], List[Schedule]] | Literal['post-order-apply', 'union'] = 'post-order-apply', strategy: SearchStrategy | Literal['replay-func', 'replay-trace', 'evolutionary'] = 'evolutionary', num_tuning_cores: Literal['physical', 'logical'] | int = 'physical', seed: int | None = None, module_equality: str = 'structural', special_space: Mapping[str, SpaceGenerator | Callable[[Schedule], None] | Callable[[Schedule], Schedule] | Callable[[Schedule], List[Schedule]] | Literal['post-order-apply', 'union']] | None = None) Database

Tune a TIR function or an IRModule of TIR functions.

Parameters:
  • mod (Union[ir.IRModule, tir.PrimFunc]) – The TIR IRModule to tune.

  • target (Union[str, Target]) – The target to tune for.

  • work_dir (str) – The working directory.

  • max_trials_global (int) – The maximum number of trials to run globally.

  • max_trials_per_task (Optional[int]) – The maximum number of trials to run per task.

  • num_trials_per_iter (int) – The number of trials to run per iteration

  • builder (Builder.BuilderType) – The builder.

  • runner (Runner.RunnerType) – The runner.

  • database (Database.DatabaseType) – The database.

  • cost_model (CostModel.CostModelType) – The cost model.

  • measure_callbacks (MeasureCallback.CallbackListType) – The measure callbacks.

  • task_scheduler (TaskScheduler.TaskSchedulerType) – The task scheduler.

  • space (SpaceGenerator.SpaceGeneratorType) – The space generator.

  • strategy (SearchStrategy.SearchStrategyType) – The search strategy.

  • num_tuning_cores (Union[Literal["physical", "logical"], int]) – The number of CPU cores to use during tuning.

  • seed (Optional[int]) – The seed for the random number generator.

  • module_equality (Optional[str]) – A string to specify the module equality testing and hashing method.

  • special_space (Optional[Mapping[str, SpaceGenerator.SpaceGeneratorType]]) – A mapping from task name to a special space generator for that task.

Returns:

database – The database with all tuning records

Return type:

Database