tvm.relax.op¶
Relax core operators.
- tvm.relax.op.abs(x: Expr) Expr ¶
Compute element-wise absolute value of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.acos(x: Expr) Expr ¶
Compute element-wise arc cos of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.acosh(x: Expr) Expr ¶
Compute element-wise arc cosh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.add(x1: Expr, x2: Expr) Expr ¶
Addition with numpy-style broadcasting.
- Parameters:
x1 (Expr) – The first input tensor.
x2 (Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
Expr
Examples
bb = relax.BlockBuilder()
a = relax.Var("a", relax.TensorStructInfo(shape=(2, 3), dtype="float32"))
b = relax.Var("b", relax.TensorStructInfo(shape=(2, 1), dtype="float32"))
c = bb.normalize(relax.op.add(a, b))  # c has TensorStructInfo(shape=(2, 3), dtype="float32")
- tvm.relax.op.arange(start: int | PrimExpr | PrimValue, end: int | PrimExpr | PrimValue | None = None, step: int | PrimExpr | PrimValue = 1, dtype: str | DataType | None = None) Expr ¶
Construct a tensor with evenly spaced elements.
- Parameters:
start (Union[PrimExprLike,PrimValue]) – The start of the interval.
end (Optional[Union[PrimExprLike,PrimValue]]) – The end of the interval. If not given, it will be set to start, and start will be set to 0.
step (Union[PrimExprLike,PrimValue]) – The step size.
dtype (Optional[Union[str, DataType]]) – The data type of the created tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
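The interval semantics match NumPy's arange; a minimal NumPy sketch of the start/end/step behavior (illustrative only, not the relax API itself):

```python
import numpy as np

# When only one bound is given, it is treated as `end` and start becomes 0,
# matching the description above.
only_end = np.arange(5)                         # [0, 1, 2, 3, 4]
stepped = np.arange(2, 10, 2)                   # [2, 4, 6, 8]
typed = np.arange(0, 1, 0.25, dtype="float32")  # [0.0, 0.25, 0.5, 0.75]
```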
- tvm.relax.op.argmax(x: Expr, axis: int | None = None, keepdims: bool = False) Expr ¶
Computes the argmax of tensor elements over given axis.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[int]) – Axis along which an argmax operation is performed. The default, axis=None, will compute the argmax of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axis being reduced is left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
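The axis/keepdims behavior follows the NumPy convention; a small NumPy illustration (assumes NumPy >= 1.22 for the keepdims flag):

```python
import numpy as np

x = np.array([[1, 9, 3],
              [7, 2, 8]])
flat = np.argmax(x)                         # axis=None: index into the flattened tensor
per_col = np.argmax(x, axis=0)              # argmax down each column
kept = np.argmax(x, axis=1, keepdims=True)  # reduced axis kept with size one
```

With keepdims=True the result has shape (2, 1), so it broadcasts correctly against x.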
- tvm.relax.op.argmin(x: Expr, axis: int | None = None, keepdims: bool = False) Expr ¶
Computes the argmin of tensor elements over given axis.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[int]) – Axis along which an argmin operation is performed. The default, axis=None, will compute the argmin of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axis being reduced is left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.argsort(data: Expr, axis: int = -1, descending: bool = False, dtype: str = 'int32')¶
Performs sorting along the given axis and returns an array of indices having the same shape as the input array, which index the data in sorted order.
- Parameters:
data (relax.Expr) – The input data tensor.
axis (int) – Axis along which to sort the input tensor.
descending (bool) – Whether to sort in descending order; the default is False.
dtype (str) – The data type of the output indices.
- Returns:
out – Tensor with same shape as data.
- Return type:
relax.Expr
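The index/shape contract is the same as NumPy's argsort; descending order can be emulated by negating the data (a sketch, not the relax call itself):

```python
import numpy as np

data = np.array([[3, 1, 2],
                 [6, 5, 4]])
asc = np.argsort(data, axis=-1)    # indices that sort each row ascending
desc = np.argsort(-data, axis=-1)  # analogue of descending=True
```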
- tvm.relax.op.asin(x: Expr) Expr ¶
Compute element-wise arc sin of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.asinh(x: Expr) Expr ¶
Compute element-wise arc sinh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.assert_op(condition: Expr, format_args: Expr | List[Expr] | None = None, format: str | Expr = '') Expr ¶
Create a call to Relax’s assert_op operation (assert is reserved in Python, so the name must be distinct).
- Parameters:
condition (Expr) – The assertion condition.
format_args (Optional[Union[Expr, List[Expr]]]) – Format arguments for the error message if the condition fails.
format (Union[str, Expr]) – The format string or StringImm for the error message.
- Returns:
result – A relax.Call to the Relax assert operation.
- Return type:
Expr
- tvm.relax.op.astype(x: Expr, dtype: str | DataType) Expr ¶
Cast input tensor to the given data type.
- Parameters:
x (relax.Expr) – The input data to the operator.
dtype (Union[str, DataType]) – The target data type
- Returns:
result – The casted result.
- Return type:
relax.Expr
- tvm.relax.op.atan(x: Expr) Expr ¶
Compute element-wise arc tan of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.atanh(x: Expr) Expr ¶
Compute element-wise arc tanh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.bitwise_and(x1: Expr, x2: Expr) Expr ¶
Bitwise AND.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.bitwise_not(x: Expr) Expr ¶
Compute bitwise NOT of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.bitwise_or(x1: Expr, x2: Expr) Expr ¶
Bitwise OR.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.bitwise_xor(x1: Expr, x2: Expr) Expr ¶
Bitwise XOR.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.broadcast_to(x: Expr, shape: Tuple[int | PrimExpr] | Expr) Expr ¶
Broadcasts a tensor to a specified shape.
- Parameters:
x (relax.Expr) – The input data to the operator.
shape (Union[Tuple[PrimExprLike], Expr]) – The target shape.
- Returns:
result – The broadcasted tensor.
- Return type:
relax.Expr
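Broadcasting follows the NumPy rules: size-1 axes are repeated to match the target shape. A NumPy sketch:

```python
import numpy as np

x = np.array([[1.0], [2.0]])    # shape (2, 1)
y = np.broadcast_to(x, (2, 3))  # the size-1 axis is repeated three times
```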
- tvm.relax.op.call_builtin_with_ctx(func: str | Expr, args: Expr, *, sinfo_args: StructInfo | List[StructInfo] | None = None) Call ¶
Call a builtin function func.
- Parameters:
func (Expr) – The builtin function to be called.
args (Expr) – The input arguments.
sinfo_args (Optional[Union[StructInfo, List[StructInfo]]]) – The struct info arguments to the call node.
- Returns:
ret – The created call node.
- Return type:
relax.Call
- tvm.relax.op.call_dps_packed(func: str | Expr, args: Expr, out_sinfo: TensorStructInfo | List[TensorStructInfo]) Call ¶
Call a destination-passing-style packed function and return the output.
Note: The called function is assumed to be _pure_ (other than modifying the designated output arguments). If the function _does_ result in other side effects, then the compiler may end up removing, reordering, or repeating those effects; no guarantees can be made.
- Parameters:
func (Union[str, Expr]) – The destination-passing-style function, can be ExternFunc.
args (Expr) – The input arguments.
out_sinfo (Union[TensorStructInfo, List[TensorStructInfo]]) – The structure info of the call_dps_packed output. It should be a single or a list of TensorStructInfo. Each one denotes the structure info of a returned tensor.
- Returns:
ret – A call node for the call_dps_packed operator.
- Return type:
relax.Call
- tvm.relax.op.call_inplace_packed(func: str | ExternFunc | GlobalVar, *args: Expr, inplace_indices: int | List[int], sinfo_args: StructInfo | List[StructInfo]) Expr ¶
Construct a call to a packed function that consumes some of its arguments “in-place” and returns the mutated arguments (aliased), but should be considered to be otherwise pure. The inplace_indices argument indicates which of the outputs are mutated arguments.
The resulting call will have the same semantics as calling the packed function directly.
Note: This should be used for cases when the user knows that calling the packed function with these arguments will in reality not cause any other side effects. If it is used for a call that does result in other side effects, then the compiler may end up removing, reordering, or repeating that call, with no guarantees made about any side effects from the callee.
Warning: This operator is treated as pure by the type system even though it performs side effects (mutating some arguments). It is therefore incumbent upon the user to ensure that it is being used safely (viz., that mutated arguments are not live after the mutation and that they do not alias values live after the mutation).
- Parameters:
func (Union[str, ExternFunc]) – The name (global symbol) for a PackedFunc or an ExternFunc node.
args (Expr) – The arguments for the PackedFunc.
inplace_indices (Union[int, List[int]]) – Specify which arguments should be used for in-place computations. If inplace_indices is a single integer, it will be made into a singleton list. Suppose inplace_indices[i] = j, where j >= 0. Then the i-th output will be an alias of args[j]. If inplace_indices[i] = -1, then the i-th output will be a freshly allocated tensor. At least one member of inplace_indices must not be -1.
sinfo_args (Union[StructInfo, List[StructInfo]]) – The list of structure info arguments (giving the structural info for the returned value).
- Returns:
result – A Relax call, corresponding to call_inplace_packed(ExternFunc(func), args, DictAttrs(kwargs), sinfo_args)
- Return type:
Expr
- tvm.relax.op.call_pure_packed(func: str | ExternFunc | GlobalVar, *args: Expr, sinfo_args: StructInfo | List[StructInfo]) Expr ¶
Construct a call to a packed function that should be treated as pure, even though packed calls are normally not treated as pure.
The resulting call will have the same semantics as calling the packed function directly.
Note: This should be used for cases when the user knows that calling the packed function with these arguments will in reality not cause any side effects. If it is used for a call that does result in side effects, then the compiler may end up removing, reordering, or repeating that call, with no guarantees made about any side effects from the callee.
- Parameters:
func (Union[str, ExternFunc]) – The name (global symbol) for a PackedFunc or an ExternFunc node.
args (Expr) – The arguments for the PackedFunc.
sinfo_args (Union[StructInfo, List[StructInfo]]) – The list of structure info arguments (giving the structural info for the returned value).
- Returns:
result – A Relax call, corresponding to call_pure_packed(ExternFunc(func), args, DictAttrs(kwargs), sinfo_args)
- Return type:
Expr
- tvm.relax.op.call_tir(gvar: GlobalVar, args: Expr, out_sinfo: TensorStructInfo | List[TensorStructInfo], tir_vars: ShapeExpr | Tuple[PrimExpr] | List[PrimExpr] | None = None) Call ¶
Call a tir.prim_func and return the output.
- Parameters:
gvar (GlobalVar) – The GlobalVar referring to a tir PrimFunc.
args (Expr) – The input arguments.
out_sinfo (Union[TensorStructInfo, List[TensorStructInfo]]) – The structure info of the call_tir output. It should be a single or a list of TensorStructInfo. Each one denotes the structure info of a returned tensor.
tir_vars (Optional[Union[ShapeExpr, Tuple[PrimExpr], List[PrimExpr]]]) – ShapeExpr representing a tuple of integers to unpack when calling func. Is null if not used
- Returns:
ret – A call node for the call_tir operator.
- Return type:
relax.Call
- tvm.relax.op.call_tir_inplace(gvar: GlobalVar, args: Expr, inplace_indices: int | List[int], out_sinfo: TensorStructInfo | List[TensorStructInfo], tir_vars: ShapeExpr | Tuple[PrimExpr] | List[PrimExpr] | None = None) Call ¶
Call a TIR PrimFunc and return the result, performing the specified computations in-place (based on the inplace_indices argument; outputs will alias the inputs selected by the in-place indices).
Warning: This operator is considered pure by the type system but actually mutates the arguments specified by inplace_indices. This operator should not be used directly, but rather should be inserted by passes that have checked whether it is safe to perform operations in-place (i.e., none of the arguments specified as an output is aliased or is live after calling call_tir_inplace).
Direct calls to this operator should be done for testing purposes only.
- Parameters:
gvar (GlobalVar) – The GlobalVar referring to a TIR PrimFunc.
args (Expr) – The input arguments.
inplace_indices (Union[int, List[int]]) – Specify which arguments should be used for in-place computations. If inplace_indices is a single integer, it will be made into a singleton list. Suppose inplace_indices[i] = j, where j >= 0. Then the i-th output will be an alias of args[j]. If inplace_indices[i] = -1, then the i-th output will be a freshly allocated tensor. At least one member of inplace_indices must not be -1.
out_sinfo (Union[TensorStructInfo, List[TensorStructInfo]]) – The structure info of the call_tir_inplace output. It should be a single TensorStructInfo or a list of TensorStructInfo. Each one denotes the structure info of a returned tensor. If a list of TensorStructInfo is given, the result will be a tuple of TensorStructInfo.
tir_vars (Optional[Union[ShapeExpr, Tuple[PrimExpr], List[PrimExpr]]]) – ShapeExpr representing a tuple of integers to unpack when calling func. Is null if not used
- Returns:
ret – A call node for the call_tir operator.
- Return type:
relax.Call
- tvm.relax.op.call_tir_with_grad(gvar: GlobalVar, args: Expr, out_sinfo: TensorStructInfo | List[TensorStructInfo], te_grad_name: str, te_grad_kwargs: Dict[str, Object] = None, tir_vars: ShapeExpr | Tuple[PrimExpr] | List[PrimExpr] | None = None) Call ¶
Call a tir.prim_func and return the output. This intrinsic will bind a te gradient function (referred to by te_grad_name) to the call_tir_with_grad node. The te gradient function will be invoked by the Gradient pass.
- Parameters:
gvar (GlobalVar) – The GlobalVar referring to a tir PrimFunc.
args (Expr) – The input arguments.
out_sinfo (Union[TensorStructInfo, List[TensorStructInfo]]) – The structure info of the call_tir_with_grad output. It should be a single or a list of TensorStructInfo. Each one denotes the structure info of a returned tensor.
te_grad_name (str) – The registered name of the te gradient function associated with the call_tir_with_grad node. Must be provided as a keyword argument.
te_grad_kwargs (Dict[str, Object], optional) – The keyword arguments passed to the te gradient function. Optionally provided as a keyword argument. Default: {}.
tir_vars (Optional[Union[ShapeExpr, Tuple[PrimExpr], List[PrimExpr]]]) – ShapeExpr representing a tuple of integers to unpack when calling func. Is null if not used
- Returns:
ret – A call node for the call_tir_with_grad operator.
- Return type:
relax.Call
- tvm.relax.op.ceil(x: Expr) Expr ¶
Take ceil of input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.clip(x: Expr, min: Expr, max: Expr) Expr ¶
Clips tensor values to a specified min and max.
- Parameters:
x (relax.Expr) – The input data
min (relax.Expr) – The minimum value
max (relax.Expr) – The maximum value
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.collapse_sum_like(data: Expr, collapse_target: Expr) Expr ¶
Return a summation of data to the shape of collapse_target.
For details, please see relax.op.collapse_sum_to.
- Parameters:
data (relax.Expr) – The input tensor.
collapse_target (relax.Expr) – The tensor whose shape is the shape to collapse to.
- Returns:
result – The result tensor after summation.
- Return type:
relax.Expr
- tvm.relax.op.collapse_sum_to(data: Expr, shape: Tuple[int | PrimExpr] | Expr) Expr ¶
Return a summation of data to the given shape.
collapse_sum_to is intended as the backward operator of tvm.relax.op.broadcast_to and other broadcast operators in the automatic differentiation process.
We expect that data is the result of broadcasting some tensor of the given shape in some broadcast operation. Thus the given shape and data.shape must follow broadcast rules.
During computation, all axes of data.shape and shape are checked from right to left. For an axis, data will be summed over this axis if it follows one of these rules:
- the axis exists in data.shape but not in shape, or
- the axis exists in data.shape and equals 1 in shape.
- Parameters:
data (relax.Expr) – The input tensor.
shape (Union[Tuple[PrimExprLike], relax.Expr]) – The shape to collapse to.
- Returns:
result – The result tensor of the given shape after summation.
- Return type:
relax.Expr
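The right-to-left rules above can be mimicked in NumPy by summing exactly the broadcast axes; a sketch for collapsing shape (4, 2, 3) back to (2, 1):

```python
import numpy as np

grad = np.ones((4, 2, 3))  # e.g. a gradient produced after broadcasting from (2, 1)
# The leading axis (4) is absent from the target shape -> summed away entirely.
# The trailing axis (3) equals 1 in the target shape   -> summed with keepdims=True.
out = grad.sum(axis=0).sum(axis=-1, keepdims=True)
```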
- tvm.relax.op.concat(tensors: Expr | List[Expr], axis: int | None = 0) Expr ¶
Concatenate the input tensors along the given axis.
- Parameters:
tensors (Union[relax.Expr, List[relax.Expr]]) – An Expr in Tuple type, containing the tensors to be concatenated, or a list of Tensors.
axis (Optional[int]) – The axis along which the tensors are concatenated. If axis is None, the input tensor is required to be flattened before concatenation.
- Returns:
result – The concatenated tensor.
- Return type:
relax.Expr
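The semantics mirror np.concatenate, including the axis=None case where inputs are flattened first; a NumPy sketch:

```python
import numpy as np

a = np.zeros((2, 3))
b = np.ones((2, 2))
c = np.concatenate([a, b], axis=1)        # non-concat axes must match: result (2, 5)
flat = np.concatenate([a, b], axis=None)  # axis=None: flatten, then concatenate
```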
- tvm.relax.op.cos(x: Expr) Expr ¶
Compute element-wise cos of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.cosh(x: Expr) Expr ¶
Compute element-wise cosh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.cumprod(data: Expr, axis: int | None = None, dtype: str | DataType | None = None, exclusive: bool | None = None)¶
Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
- Parameters:
data (relax.Expr) – The input data to the operator.
axis (Optional[int]) – Axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.
dtype (Optional[Union[str, DataType]]) – Type of the returned array and of the accumulator in which the elements are computed. If dtype is not specified, it defaults to the dtype of data.
exclusive (Optional[bool]) – If true, will return the exclusive product in which the first element is not included.
- Returns:
result – The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.
- Return type:
relax.Expr
Examples
a = [[1, 2, 3], [4, 5, 6]]

cumprod(a)  # if axis is not provided, cumprod is done over the flattened input
-> [1, 2, 6, 24, 120, 720]

cumprod(a, dtype="float32")
-> [1., 2., 6., 24., 120., 720.]

cumprod(a, axis=0)  # multiply over rows for each of the 3 columns
-> [[1, 2, 3], [4, 10, 18]]

cumprod(a, axis=1)
-> [[1, 2, 6], [4, 20, 120]]

a = [1, 1, 1, 0, 1, 1, 0]  # a is a boolean array
cumprod(a, dtype="int32")  # dtype should be provided to get the expected results
-> [1, 1, 1, 0, 0, 0, 0]
- tvm.relax.op.cumsum(data: Expr, axis: int | None = None, dtype: str | DataType | None = None, exclusive: bool | None = None)¶
Numpy style cumsum op. Return the cumulative inclusive sum of the elements along a given axis.
- Parameters:
data (relax.Expr) – The input data to the operator.
axis (Optional[int]) – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.
dtype (Optional[Union[str, DataType]]) – Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of data.
exclusive (Optional[bool]) – If true, will return the exclusive sum in which the first element is not included.
- Returns:
result – The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.
- Return type:
relax.Expr
Examples
a = [[1, 2, 3], [4, 5, 6]]

cumsum(a)  # if axis is not provided, cumsum is done over the flattened input
-> [1, 3, 6, 10, 15, 21]

cumsum(a, dtype="float32")
-> [1., 3., 6., 10., 15., 21.]

cumsum(a, axis=0)  # sum over rows for each of the 3 columns
-> [[1, 2, 3], [5, 7, 9]]

cumsum(a, axis=1)
-> [[1, 3, 6], [4, 9, 15]]

a = [1, 0, 1, 0, 1, 1, 0]  # a is a boolean array
cumsum(a, dtype="int32")  # dtype should be provided to get the expected results
-> [1, 1, 2, 2, 3, 4, 4]
- tvm.relax.op.dequantize(data: Expr, scale: Expr, zero_point: Expr, axis: int = -1, out_dtype: str = 'float32')¶
Dequantize op. This operator takes input and produces a dequantized output. The input tensor can be of any shape. The output shape is the same as the input shape.
output = clamp(scale * (input_tensor - zero_point), out_dtype::min, out_dtype::max)
- Parameters:
data (tvm.relax.Expr) – The input tensor to be dequantized.
scale (tvm.relax.Expr) – The input scale.
zero_point (tvm.relax.Expr) – The input zero_point.
axis (int) – The channel axis for dequantization. Default value is -1 which corresponds to the last axis.
out_dtype (str, optional) – The data type of the output tensor.
- Returns:
result – The computed result.
- Return type:
tvm.relax.Expr
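A rough NumPy sketch of the formula above, assuming a scalar scale and zero_point and omitting the clamp (a no-op for a float32 output range):

```python
import numpy as np

data = np.array([0, 10, 255], dtype="uint8")
scale = 0.1
zero_point = 10
# Widen before subtracting so the uint8 input does not wrap around.
deq = (scale * (data.astype("int32") - zero_point)).astype("float32")
```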
- tvm.relax.op.divide(x1: Expr, x2: Expr) Expr ¶
Division with numpy-style broadcasting.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.dynamic_strided_slice(x: Expr, begin: Expr, end: Expr, strides: Expr) Expr ¶
Dynamic strided slice of a tensor. begin, end, strides can be computed at runtime.
- Parameters:
x (Expr) – The source tensor to be sliced.
begin (Expr) – The indices to begin with in the slicing, inclusive.
end (Expr) – The indices indicating end of the slice, exclusive.
strides (Expr) – Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis. If not specified, it defaults to a list of ones of the same length as axes.
- Returns:
ret – The sliced result.
- Return type:
relax.Expr
Note
dynamic_strided_slice requires the input begin, end and strides to have the same length as the rank of the data tensor.
- tvm.relax.op.einsum(operands, subscripts)¶
Evaluates the Einstein summation convention on data
- Parameters:
operands (Union[List[relax.Expr], Tuple[relax.Expr]]) – A list of expressions.
subscripts (str) – The einsum expression string.
- Returns:
result – The output from the einsum op.
- Return type:
relax.Expr
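The subscript string follows the NumPy einsum convention; two common patterns as a NumPy sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
mm = np.einsum("ij,jk->ik", a, b)  # matrix multiplication
tr = np.einsum("ii->", np.eye(3))  # trace: sum of the diagonal
```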
- tvm.relax.op.equal(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs == rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.erf(x: Expr) Expr ¶
Computes the error function of the input.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – Computed error function for each element.
- Return type:
relax.Expr
- tvm.relax.op.ewise_fma(x1: Expr, x2: Expr, x3: Expr) Expr ¶
Elementwise fused multiply-add operator Returns elementwise result of \(x1 * x2 + x3\)
- Parameters:
x1 (relax.Expr) – The left hand operand of the multiplication
x2 (relax.Expr) – The right hand operand of the multiplication
x3 (relax.Expr) – The operand of the addition
- Returns:
result – The computed result.
- Return type:
relax.Expr
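The computation is simply x1 * x2 + x3 applied element-wise; a NumPy sketch:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 5.0, 6.0])
x3 = np.array([0.5, 0.5, 0.5])
fma = x1 * x2 + x3  # fused multiply-add, element by element
```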
- tvm.relax.op.exp(x: Expr) Expr ¶
Compute element-wise exp of data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.expand_dims(x: Expr, axis: int | List[int]) Expr ¶
Insert new axes at the positions given by axis.
- Parameters:
x (relax.Expr) – The input data to the operator.
axis (Union[int, List[int]]) – The axes at which the input array are expanded. All values are required to lie in range [-data.ndim - 1, data.ndim], with the convention of negative indexing.
- Returns:
result – The transformed result.
- Return type:
relax.Expr
- tvm.relax.op.flatten(x: Expr) Expr ¶
Flatten all the tensor dimensions into one.
- Parameters:
x (relax.Expr) – The input data to the operator.
- Returns:
result – The flattened result.
- Return type:
relax.Expr
- tvm.relax.op.flip(data, axis)¶
Reverses the order of elements along given axis while preserving array shape.
- Parameters:
data (relax.Expr) – The input data to the operator.
axis (int) – The axis to flip on.
- Returns:
ret – The computed result.
- Return type:
relax.Expr
Examples
x = [[1., 2.], [3., 4.]]

relax.flip(x, axis=0) = [[3., 4.], [1., 2.]]

relax.flip(x, axis=1) = [[2., 1.], [4., 3.]]
- tvm.relax.op.floor(x: Expr) Expr ¶
Take floor of input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.floor_divide(x1: Expr, x2: Expr) Expr ¶
Floor division with numpy-style broadcasting.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.full(shape: Tuple[int | PrimExpr] | Expr, fill_value: Expr, dtype: str | DataType | None = None) Expr ¶
Fill array with scalar value.
- Parameters:
shape (Union[Tuple[PrimExprLike], relax.Expr]) – The shape of the created tensor.
fill_value (relax.Expr) – The value to fill. Must be a scalar tensor.
dtype (Optional[Union[str, DataType]]) – The data type of the created tensor. If dtype is not given, it will by default use the dtype of fill_value.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
- tvm.relax.op.full_like(x: Expr, fill_value: Expr, dtype: str | DataType | None = None) Expr ¶
Construct a tensor such that
- its shape is the same as the input data tensor’s shape,
- its value is filled with the input scalar fill value.
- Parameters:
x (relax.Expr) – The input tensor, which provides the shape, and dtype when the dtype field is not specified.
fill_value (relax.Expr) – The value to fill. Must be a scalar tensor.
dtype (Optional[Union[str, DataType]]) – The data type of the created tensor. If dtype is not given, it will by default use the dtype of the input tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
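The shape is always taken from the input, and the dtype only when not overridden; a NumPy sketch:

```python
import numpy as np

x = np.empty((2, 3), dtype="float32")
filled = np.full_like(x, 7)               # shape and dtype inherited from x
cast = np.full_like(x, 7, dtype="int32")  # dtype explicitly overridden
```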
- tvm.relax.op.greater(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs > rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.greater_equal(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs >= rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.hint_on_device(data, dst_vdevice) Expr ¶
It provides a hint specifying the device on which the input data should be executed. This hint is utilized by RealizeVDevice to propagate the virtual device.
- Parameters:
data (Expr) – The tensor to be copied.
dst_vdevice (VDevice) – The destination device where the data is supposed to be executed.
- Returns:
result – The result.
- Return type:
Expr
- tvm.relax.op.invoke_closure(closure: Expr, args: Expr, sinfo_args: List[StructInfo] | StructInfo) Call ¶
Invoke a closure.
- Parameters:
closure (Expr) – The VMClosure object.
args (Expr) – The input arguments.
sinfo_args (Union[List[StructInfo], StructInfo]) – The structure info arguments of the CallNode.
- Returns:
ret – A call to invoke_closure.
- Return type:
relax.Call
- tvm.relax.op.invoke_pure_closure(closure: Expr, args: Expr, sinfo_args: List[StructInfo] | StructInfo) Call ¶
Invoke a closure and indicate to the compiler that it is pure.
Note: This should be used for cases when the user knows that calling the closure with these arguments will in reality not cause any side effects. If it is used for a call that _does_ result in side effects, then the compiler may end up removing, reordering, or repeating that call, with no guarantees made about any side effects from the callee.
- Parameters:
closure (Expr) – The VMClosure object.
args (Expr) – The input arguments.
sinfo_args (Union[List[StructInfo], StructInfo]) – The structure info arguments of the CallNode.
- Returns:
ret – A call to invoke_pure_closure.
- Return type:
relax.Call
- tvm.relax.op.isfinite(x: Expr) Expr ¶
Check if input value is finite.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.isinf(x: Expr) Expr ¶
Check if input value is infinite.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.isnan(x: Expr) Expr ¶
Check if input value is NaN.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.layout_transform(x: Expr, index_map: Callable | IndexMap, pad_value: int | float | PrimValue | None = None, axis_separators: int | axis_separator | None = None)¶
Modifies the layout of a tensor.
- Parameters:
x (relax.Expr) – The input tensor to the operator.
index_map (Union[Callable, IndexMap]) – The transformation to apply.
pad_value (Optional[Union[int, float, PrimValue]]) – The value used for padding if the transformation results in implicit padding. If not specified, any value can be used.
axis_separators (Optional[Union[int, IndexMap.AXIS_SEPARATOR]]) – The axis_separators for index_map to create non-flat buffers.
- Returns:
result – The transformed tensor.
- Return type:
relax.Expr
- tvm.relax.op.less(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs < rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.less_equal(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs <= rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.linear(data: Expr, weight: Expr, bias: Expr | None = None, out_dtype: str | DataType | None = None) Expr ¶
Applies a linear transformation to the incoming data: y = xA^T + b
- Parameters:
data (relax.Expr) – The input data.
weight (relax.Expr) – The weight tensor.
bias (Optional[Expr]) – The bias tensor.
out_dtype (Optional[Union[str, DataType]]) – The data type of the matmul result. When it is not specified, the output dtype will be the same as input dtype.
Notes
Relax does not regard the linear op as a primitive op; instead, it is implemented by combining the transpose, matmul and add ops.
- Returns:
result – The computed result.
- Return type:
relax.Expr
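Since the op is composed from transpose, matmul and add, the computation is y = x @ A.T + b; a NumPy sketch with illustrative shapes:

```python
import numpy as np

x = np.ones((4, 3), dtype="float32")            # (batch, in_features)
weight = np.full((5, 3), 2.0, dtype="float32")  # (out_features, in_features)
bias = np.full((5,), 0.5, dtype="float32")
y = x @ weight.T + bias                         # result shape (4, 5)
```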
- tvm.relax.op.log(x: Expr) Expr ¶
Compute element-wise natural logarithm of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.logical_and(x1: Expr, x2: Expr) Expr ¶
Logical AND.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.logical_not(x: Expr) Expr ¶
Compute logical NOT of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.logical_or(x1: Expr, x2: Expr) Expr ¶
Logical OR
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.logical_xor(x1: Expr, x2: Expr) Expr ¶
Logical XOR
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.make_closure(func: Expr, args: Expr) Object ¶
Create a closure with free variables and return the closure.
- Parameters:
func (Expr) – The closure; it can be an ExternFunc or a PrimFunc.
args (Expr) – The input arguments.
- Returns:
ret – The VMClosure.
- Return type:
Object
- tvm.relax.op.masked_fill(x: Expr, mask: Expr, value: Expr)¶
Fill a tensor with a specified value at positions defined by a mask.
- Parameters:
x (relax.Expr) – The input data to the operator.
mask (relax.Expr) – The mask.
value (relax.Expr) – The value to set in the input tensor.
- Returns:
result – The filled tensor.
- Return type:
relax.Expr
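A NumPy sketch of the semantics, assuming the usual masked_fill convention (positions where the mask is True receive the fill value, all other positions keep the input):

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype="float32")
mask = np.array([[True, False], [False, True]])
value = -1.0

# Where mask is True the output takes `value`; elsewhere it keeps x.
out = np.where(mask, value, x)
```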
- tvm.relax.op.matmul(x1: Expr, x2: Expr, out_dtype: str | DataType | None = None) Expr ¶
General matrix multiplication of two tensors, with broadcasting on batched dimensions.
The semantics and output shape deduction rule is specified as https://data-apis.org/array-api/latest/API_specification/generated/array_api.matmul.html.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
out_dtype (Optional[Union[str, DataType]]) – The data type of the matmul result. When it is not specified, the output dtype will be the same as input dtype.
- Returns:
result – The computed result.
- Return type:
relax.Expr
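The batched-broadcasting behavior follows the array-API rule linked above; NumPy's `matmul` implements the same rule, so the shape deduction can be illustrated as:

```python
import numpy as np

# Batch dims (2, 1) and (5,) broadcast to (2, 5); the trailing two
# dims contract as an ordinary (3, 4) x (4, 6) matrix product.
a = np.random.rand(2, 1, 3, 4).astype("float32")
b = np.random.rand(5, 4, 6).astype("float32")
c = np.matmul(a, b)
```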
- tvm.relax.op.max(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the max of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a max operation is performed. The default, axis=None, will compute the max of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
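The axis and keepdims semantics (shared by the reduction ops max, mean, min, prod, std, sum and variance below) can be illustrated with NumPy:

```python
import numpy as np

x = np.array([[1.0, 5.0, 3.0], [4.0, 2.0, 6.0]], dtype="float32")

m_all  = np.max(x)                         # max over all elements
m_axis = np.max(x, axis=1)                 # shape (2,): per-row max
m_keep = np.max(x, axis=1, keepdims=True)  # shape (2, 1): broadcastable against x
```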
- tvm.relax.op.maximum(x1: Expr, x2: Expr) Expr ¶
Element-wise maximum
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.mean(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the mean of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a mean operation is performed. The default, axis=None, will compute the mean of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.min(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the min of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a min operation is performed. The default, axis=None, will compute the min of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.minimum(x1: Expr, x2: Expr) Expr ¶
Element-wise minimum
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.multiply(x1: Expr, x2: Expr) Expr ¶
Multiplication with numpy-style broadcasting.
- Parameters:
x1 (Expr) – The first input tensor.
x2 (Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
Expr
- tvm.relax.op.negative(x: Expr) Expr ¶
Compute element-wise negative of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result
- Return type:
relax.Expr
- tvm.relax.op.not_equal(x1: Expr, x2: Expr) Expr ¶
Broadcasted element-wise test for (lhs != rhs).
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.null_value() Call ¶
Create a call node that represents a null value object.
- Returns:
ret – The created call node.
- Return type:
relax.Call
- tvm.relax.op.ones(shape: Tuple[int | PrimExpr] | Expr, dtype: str | DataType) Expr ¶
Construct a tensor of all ones, with the input shape and dtype.
- Parameters:
shape (Union[Tuple[PrimExprLike], Expr]) – The shape of the created tensor.
dtype (Union[str, DataType]) – The data type of the created tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
- tvm.relax.op.ones_like(x: Expr, dtype: str | DataType | None = None) Expr ¶
Construct a tensor of all ones, matching the shape of the input tensor.
- Parameters:
x (relax.Expr) – The input tensor, which provides the shape, and dtype when the dtype field is not specified.
dtype (Optional[Union[str, DataType]]) – The data type of the created tensor. If dtype is not given, it will by default use the dtype of the input tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
- tvm.relax.op.permute_dims(x: Expr, axes: List[int] | None = None) Expr ¶
Permutes the dimensions of an array.
- Parameters:
x (relax.Expr) – The input data to the operator.
axes (Optional[List[int]]) – The target axes order. If not specified, permute_dims will reverse the order of all axes.
- Returns:
result – The transposed result.
- Return type:
relax.Expr
- tvm.relax.op.power(x1: Expr, x2: Expr)¶
Power with numpy-style broadcasting.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.print(*values: List[Expr], format: str | Expr = '') Expr ¶
Print operator that prints the given values at runtime.
- Parameters:
values (List[Expr]) – The values to print.
format (Union[str, Expr]) – The format string or StringImm.
- Returns:
result – A relax relax.Call, which will print the value during runtime.
- Return type:
Expr
- tvm.relax.op.prod(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the product of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a product is performed. The default, axis=None, will compute the product of all elements of the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.quantize(data: Expr, scale: Expr, zero_point: Expr, axis: int = -1, out_dtype: str = 'int8')¶
Quantize op. This operator takes the input and produces a quantized output. The input tensor can be of any shape; the output shape is the same as the input shape.
Q_output = clamp((round(input_tensor/scale) + zero_point), out_dtype::min, out_dtype::max)
- Parameters:
data (tvm.relax.Expr) – The input tensor to be quantized.
scale (tvm.relax.Expr) – The output scale.
zero_point (tvm.relax.Expr) – The output zero_point.
axis (int) – The channel axis for quantization. Default value is -1 which corresponds to the last axis.
out_dtype (str, optional) – The data type of the output tensor.
- Returns:
result – The computed result.
- Return type:
tvm.relax.Expr
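The clamp-and-round formula above can be written out as a NumPy reference (a sketch of the per-tensor case, axis ignored for simplicity):

```python
import numpy as np

def quantize_ref(data, scale, zero_point, out_dtype="int8"):
    # Q = clamp(round(data / scale) + zero_point, dtype_min, dtype_max)
    info = np.iinfo(out_dtype)
    q = np.round(data / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(out_dtype)

q = quantize_ref(np.array([0.0, 0.5, 100.0], dtype="float32"),
                 scale=np.float32(0.25), zero_point=np.int32(10))
# 100.0 / 0.25 + 10 = 410 saturates to the int8 maximum of 127.
```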
- tvm.relax.op.register_gradient(op_name: str, fgradient: Callable[[Var, Call, Var, BlockBuilder], List[Expr]] = None, level: int = 10)¶
Register operator gradient function for a relax operator.
- Parameters:
op_name (str) – The name of the op.
fgradient (Callable[[relax.Var, relax.Call, relax.Var, BlockBuilder], List[Expr]]) – The gradient function, taking (orig_var, orig_call, output_grad, ctx) and returning the list of partials.
level (int) – The priority level
- tvm.relax.op.repeat(data: Expr, repeats: int, axis: int | None = None) Expr ¶
Repeats elements of an array.
- Parameters:
data (relax.Expr) – The input tensor.
repeats (int) – The number of repetitions.
axis (Optional[int]) – The axis along which to repeat values. The negative numbers are interpreted counting from the backward. By default, use the flattened input array, and return a flat output array.
- Returns:
ret – The computed result.
- Return type:
relax.Expr
Examples
x = R.const([[1, 2], [3, 4]])
lv1 = R.repeat(x, repeats=2)
# lv1 == [1., 1., 2., 2., 3., 3., 4., 4.]
lv2 = R.repeat(x, repeats=2, axis=1)
# lv2 == [[1., 1., 2., 2.],
#         [3., 3., 4., 4.]]
- tvm.relax.op.reshape(x: Expr, shape: Tuple[int | PrimExpr] | Expr) Expr ¶
Reshape the input array.
A shape dimension of -1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of shape can be -1. For example:
x.shape = (2, 3, 4), shape = (6, 1, -1), result.shape = (6, 1, 4)
x.shape = (2, 3, 4), shape = (3, -1, 8), result.shape = (3, 1, 8)
x.shape = (2, 3, 4), shape = (-1,), result.shape = (24,)
- Parameters:
x (relax.Expr) – The input data to the operator.
shape (Union[Tuple[PrimExprLike], Expr]) – The new shape. Should be compatible with the original shape.
- Returns:
result – The reshaped result.
- Return type:
relax.Expr
Note
The -1 inference is only performed at compile-time. That is to say, if the dimension length of -1 cannot be inferred at compile-time, an error will be thrown.
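The three shape examples listed above behave exactly like NumPy's reshape, so they can be checked directly:

```python
import numpy as np

x = np.zeros((2, 3, 4), dtype="float32")

r1 = x.reshape(6, 1, -1)   # -1 inferred as 4
r2 = x.reshape(3, -1, 8)   # -1 inferred as 1
r3 = x.reshape(-1)         # -1 inferred as 24
```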
- tvm.relax.op.round(x: Expr) Expr ¶
Rounds each element of the input data to nearest integer.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.rsqrt(x: Expr) Expr ¶
Compute element-wise reciprocal square root of the input data.
\[1/\sqrt{x}\]
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.scatter_elements(data: Expr, indices: Expr, updates: Expr, axis: int = 0, reduction: str = 'update')¶
ONNX style scatter elements. This operation updates its value in data to values specified by updates at specific index positions specified by indices. For example, in 2D tensor, the update corresponding to the [i][j] entry is performed as below:
output[indices[i][j]][j] = updates[i][j] if axis = 0
output[i][indices[i][j]] = updates[i][j] if axis = 1
When the reduction is set to some reduction function f, the update corresponding to [i][j] entry is performed as below:
output[indices[i][j]][j] = f(output[indices[i][j]][j], updates[i][j]) if axis = 0
output[i][indices[i][j]] = f(output[i][indices[i][j]], updates[i][j]) if axis = 1
where f is one of update, add, mul, mean, max or min.
- Parameters:
data (relax.Expr) – The input data to the operator.
indices (relax.Expr) – The index positions to update in data.
updates (relax.Expr) – The replacement values.
axis (int) – Axis to scatter on.
reduction (str) – Type of reduction to apply: update, add, mul, mean, max, min. It is “update” by default.
- Returns:
result – The result has the same shape and size as data.
- Return type:
relax.Expr
Examples
# inputs
data = [
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
]
indices = [
    [1, 0, 2],
    [0, 2, 1],
]
updates = [
    [1.0, 1.1, 1.2],
    [2.0, 2.1, 2.2],
]
axis = 0
reduction = "update"

# output
output = [
    [2.0, 1.1, 0.0],
    [1.0, 0.0, 2.2],
    [0.0, 2.1, 1.2],
]
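A naive Python reference of the 2-D "update" semantics described above (a sketch for illustration, not the TVM implementation) reproduces the documented example:

```python
import numpy as np

def scatter_elements_ref(data, indices, updates, axis=0):
    # Reference semantics for reduction="update" on a 2-D tensor.
    out = data.copy()
    for i in range(indices.shape[0]):
        for j in range(indices.shape[1]):
            if axis == 0:
                out[indices[i][j]][j] = updates[i][j]
            else:
                out[i][indices[i][j]] = updates[i][j]
    return out

data = np.zeros((3, 3))
indices = np.array([[1, 0, 2], [0, 2, 1]])
updates = np.array([[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]])
out = scatter_elements_ref(data, indices, updates, axis=0)
```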
- tvm.relax.op.shape_of(expr: Expr) Expr ¶
Get shape of a tensor.
- Parameters:
expr (Expr) – The input Expr.
- Returns:
result – A relax relax.Call, which gets the shape of the input
- Return type:
Expr
- tvm.relax.op.shape_to_tensor(expr: Expr) Expr ¶
Convert a shape to a tensor expression.
- Parameters:
expr (Expr) – The input Expr.
- Returns:
result – A relax relax.Call, which transforms the shape values to the tensor
- Return type:
Expr
- tvm.relax.op.sigmoid(x: Expr) Expr ¶
Compute element-wise sigmoid of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.sign(x: Expr) Expr ¶
Returns an indication of the sign of a number for each element of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.sin(x: Expr) Expr ¶
Compute element-wise sin of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.sinh(x: Expr) Expr ¶
Compute element-wise sinh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.sort(x: Expr, axis: int = -1, descending: bool = False)¶
Performs sorting along the given axis and returns an array in sorted order.
- Parameters:
x (relax.Expr) – The input tensor.
axis (int) – Axis along which to sort the input tensor. By default the last axis of the input is used.
descending (bool) – Whether to sort in descending order, the default is False
- Returns:
out – Sorted tensor.
- Return type:
relax.Expr
- tvm.relax.op.split(x: Expr, indices_or_sections: int | List[int | PrimExpr], axis: int = 0) Expr ¶
Split input tensor along axis by sections or indices.
If indices_or_sections is an integer, the input will be divided equally along the given axis (if possible). The last section will be smaller if the tensor size along the given dimension is not divisible by the integer.
If indices_or_sections is a list of ints or PrimExprs, the entries indicate the indices at which the array is split along the given axis.
- Parameters:
x (relax.Expr) – The tensor to be split.
indices_or_sections (Union[int, List[PrimExprLike]]) – Indices or sections to split into. Accepts an int or a list.
axis (int) – The axis over which to split.
- Returns:
ret – The computed result.
- Return type:
relax.Expr
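The indices form described above matches NumPy's split semantics, illustrated here:

```python
import numpy as np

x = np.arange(10)

# Indices [3, 7] split the array into x[:3], x[3:7] and x[7:].
parts = np.split(x, [3, 7])
```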
- tvm.relax.op.sqrt(x: Expr) Expr ¶
Compute element-wise square root of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.square(x: Expr) Expr ¶
Squares each element of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.squeeze(x: Expr, axis: int | List[int] | None = None) Expr ¶
Squeeze axes in the array.
- Parameters:
x (relax.Expr) – The input data to the operator.
axis (Optional[Union[int, List[int]]]) – The set of axes to remove. If axis = None, remove all axes of dimension 1. If any specified axis has a dimension that does not equal 1, it is an error.
- Returns:
result – The squeezed result.
- Return type:
relax.Expr
- tvm.relax.op.std(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the standard deviation of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a standard deviation is performed. The default, axis=None, will compute the std of all elements of the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.strided_slice(x: Expr, axes: List[int], begin: List[int | PrimExpr], end: List[int | PrimExpr], strides: List[int | PrimExpr] | None = None, assume_inbound: bool = False) Expr ¶
Strided slice of a tensor.
- Parameters:
x (relax.Expr) – The source tensor to be sliced.
axes (List[int]) – Axes along which slicing is applied.
begin (List[PrimExprLike]) – The indices to begin with in the slicing, inclusive.
end (List[PrimExprLike]) – The indices indicating end of the slice, exclusive.
strides (Optional[List[PrimExprLike]]) – Specifies the stride values. A stride can be negative, in which case the input tensor will be reversed along that particular axis. If not specified, it defaults to a list of ones of the same length as axes.
assume_inbound (bool) – Whether to assume the indices are in bound. If set to False, out-of-bound indices will be clipped to the bound.
- Returns:
ret – The sliced result.
- Return type:
relax.Expr
Note
strided_slice requires the input begin, end and strides to have the same length as axes.
- tvm.relax.op.subtract(x1: Expr, x2: Expr) Expr ¶
Subtraction with numpy-style broadcasting.
- Parameters:
x1 (relax.Expr) – The first input tensor.
x2 (relax.Expr) – The second input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.sum(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the sum of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.take(x: Expr, indices: Expr, axis: int | None = None) Expr ¶
Take elements from a tensor along an axis. Its semantic is mostly similar to numpy.take (https://numpy.org/doc/stable/reference/generated/numpy.take.html), which can cover torch.take (https://pytorch.org/docs/stable/generated/torch.take.html) and onnx.gather (https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Gather-13).
- Parameters:
x (relax.Expr) – The source tensor.
indices (relax.Expr) – The indices of the values to extract.
axis (Optional[int]) – The axis over which to select values. If it is none, the input tensor is required to be one-dimensional.
- Returns:
ret – The taken result.
- Return type:
relax.Expr
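Since the semantics mostly follow numpy.take, the axis behavior can be illustrated directly:

```python
import numpy as np

x = np.array([[10, 11, 12], [20, 21, 22]])

# With axis given, indices select slices along that axis.
y = np.take(x, np.array([2, 0]), axis=1)

# With axis=None the relax op requires a one-dimensional input.
v = np.take(np.array([5, 6, 7]), np.array([1, 1, 2]))
```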
- tvm.relax.op.tan(x: Expr) Expr ¶
Compute element-wise tan of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.tanh(x: Expr) Expr ¶
Compute element-wise tanh of the input data.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.tensor_to_shape(expr: Expr) Expr ¶
Convert a tensor to a shape expression.
- Parameters:
expr (Expr) – The input Expr.
- Returns:
result – A relax relax.Call, which transforms the tensor values to the shape
- Return type:
Expr
- tvm.relax.op.tile(data: Expr, repeats: int | Tuple[int] | List[int]) Expr ¶
Construct an array by repeating data the number of times given by repeats.
If repeats has length l, and data has dimension d, the result will have dimension of max(l, d).
If d < l, data is promoted to be l-dimensional by prepending new axes. So a shape (3,) Tensor is promoted to (1, 3) for 2-D replication, or shape (1, 1, 3) for 3-D replication. If this is not the desired behavior, promote data to d-dimensions manually before calling this function.
If d > l, repeats is promoted to length d by prepending 1's to it. Thus for data of shape (2, 3, 4, 5), repeats of (2, 2) is treated as (1, 1, 2, 2).
- Parameters:
data (relax.Expr) – The input data to the operator.
repeats (Union[int, Tuple[int], List[int]]) – The number of repetitions of data along each axis.
- Returns:
ret – The computed result.
- Return type:
relax.Expr
Examples
x = R.const([[1, 2], [3, 4]])
lv1 = R.tile(x, reps=(2, 3))
# lv1 = [[1., 2., 1., 2., 1., 2.],
#        [3., 4., 3., 4., 3., 4.],
#        [1., 2., 1., 2., 1., 2.],
#        [3., 4., 3., 4., 3., 4.]]
lv2 = R.tile(x, reps=2)
# lv2 = [[1., 2., 1., 2.],
#        [3., 4., 3., 4.]]
- tvm.relax.op.to_vdevice(data, dst_vdevice) Expr ¶
Copy data to the destination device. This operator helps data transfer between different devices for heterogeneous execution.
- Parameters:
data (Expr) – The tensor to be copied.
dst_vdevice (VDevice) – The destination device where the data is copied to.
- Returns:
result – The copied result.
- Return type:
Expr
- tvm.relax.op.topk(data: Expr, k: int = 1, axis: int = -1, ret_type: str = 'both', largest: bool = True, dtype: str = 'int32')¶
Get the top k elements in an input tensor along the given axis.
ret_type specifies the return type, can be one of (“both”, “values”, “indices”).
- Parameters:
data (relax.Expr) – The input data tensor.
k (int) – Number of top elements to select. Return all elements if k < 1.
axis (int) – Axis along which to sort the input tensor.
ret_type (str) – The return type [both, values, indices]. “both”: return both top k data and indices. “values”: return top k data only. “indices”: return top k indices only.
largest (bool) – Whether to return largest or smallest elements. The k smallest elements are returned if largest is False.
dtype (str) – The data type of the indices output.
- Returns:
out – The computed result.
- Return type:
relax.Expr or List[relax.Expr]
- tvm.relax.op.tril(x: Expr, k: int | PrimExpr | Expr = 0) Expr ¶
Return the lower triangular part of a matrix or a batch of matrices.
- Parameters:
x (relax.Expr) – The tensor that tril will be applied to. It is required to have at least two dimensions.
k (int) – The index indicating the diagonal above which to zero elements. If k = 0, the diagonal is the main diagonal. If k < 0, the diagonal is below the main diagonal. If k > 0, the diagonal is above the main diagonal.
- Returns:
ret – The result tensor.
- Return type:
relax.Expr
- tvm.relax.op.triu(x: ~tvm.ir.expr.Expr, k: [<class 'int'>, <class 'tvm.ir.expr.PrimExpr'>, <class 'tvm.ir.expr.Expr'>] = 0) Expr ¶
Return the upper triangular part of a matrix or a batch of matrices.
- Parameters:
x (relax.Expr) – The tensor that triu will be applied to. It is required to have at least two dimensions.
k (int) – The index indicating the diagonal below which to zero elements. If k = 0, the diagonal is the main diagonal. If k < 0, the diagonal is below the main diagonal. If k > 0, the diagonal is above the main diagonal.
- Returns:
ret – The result tensor.
- Return type:
relax.Expr
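The effect of the diagonal offset k for both tril and triu can be illustrated with NumPy's equally named functions:

```python
import numpy as np

x = np.ones((3, 3))

lower = np.tril(x, k=0)   # zero elements above the main diagonal
upper = np.triu(x, k=1)   # zero elements on and below the main diagonal
```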
- tvm.relax.op.unique(x: Expr, sorted: bool | Expr = True, return_index: bool | Expr = False, return_inverse: bool | Expr = False, return_counts: bool | Expr = False, axis: int | Expr | None = None) Expr ¶
Find the unique elements in a given tensor. In addition, it optionally returns:
- the indices of the input tensor that give the unique values;
- the indices of the unique tensor that reconstruct the input tensor;
- the number of times each unique value comes up in the input tensor.
- Parameters:
x (relax.Expr) – The input tensor.
sorted (Union[bool, Expr]) – Whether to sort the unique elements in ascending order before returning as output.
return_index (Union[bool, Expr]) – Whether to return an additional tensor with indices for where elements in the unique tensor come from the original input.
return_inverse (Union[bool, Expr]) – Whether to return an additional tensor with indices for where elements in the original input ended up in the returned unique list.
return_counts (Union[bool, Expr]) – Whether to return an additional tensor with the count of each unique element.
axis (Optional) – The dimension to apply unique. If not specified, the unique values of the flattened input are returned.
- Returns:
ret – The created relax call, returning the unique elements together with any requested auxiliary tensors.
- Return type:
relax.Expr
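The three optional outputs correspond to NumPy's unique flags, which makes the semantics easy to illustrate:

```python
import numpy as np

x = np.array([2, 1, 2, 3, 1])

values, index, inverse, counts = np.unique(
    x, return_index=True, return_inverse=True, return_counts=True)
# `index` gives where each unique value first appears in x;
# `values[inverse]` reconstructs x; `counts` counts occurrences.
```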
- tvm.relax.op.variance(x: Expr, axis: int | List[int] | None = None, keepdims: bool = False) Expr ¶
Computes the variance of tensor elements over given axes.
- Parameters:
x (relax.Expr) – The input data tensor
axis (Optional[Union[int, List[int]]]) – Axis or axes along which a variance operation is performed. The default, axis=None, will compute the variance of all elements in the input tensor. Negative indexing is supported.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.where(condition: Expr, x1: Expr, x2: Expr) Expr ¶
Select elements from either of the input tensors depending on the value of the condition.
For a given position, return the corresponding value in x1 if condition is True, and return the corresponding value in x2 otherwise.
- Parameters:
condition (relax.Expr) – When True, yield x1; otherwise, yield x2. Must be broadcasting compatible with x1 and x2. Must have boolean dtype.
x1 (relax.Expr) – The first input tensor. Must be broadcasting compatible with condition and x2.
x2 (relax.Expr) – The second input tensor. Must be broadcasting compatible with condition and x1.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
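The broadcasting-compatible selection described above behaves like NumPy's where:

```python
import numpy as np

condition = np.array([[True], [False]])   # shape (2, 1), broadcasts against (2, 2)
x1 = np.array([[1, 2], [3, 4]])
x2 = np.array([[9, 9], [9, 9]])

out = np.where(condition, x1, x2)         # row 0 from x1, row 1 from x2
```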
- tvm.relax.op.wrap_param(data: Expr, dtype: str | DataType = 'float32') Expr ¶
Cast an input tensor that is a model parameter to the given data type, if the dtype of the input data is not already that dtype.
- Parameters:
data (relax.Expr) – The input data to the operator.
dtype (Union[str, DataType]) – The target data type.
- Returns:
result – The cast result.
- Return type:
relax.Expr
- tvm.relax.op.zeros(shape: Tuple[int | PrimExpr] | Expr, dtype: str | DataType) Expr ¶
Construct a tensor of all zeros, with the input shape and dtype.
- Parameters:
shape (Union[Tuple[PrimExprLike], Expr]) – The shape of the created tensor.
dtype (Union[str, DataType]) – The data type of the created tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
- tvm.relax.op.zeros_like(x: Expr, dtype: str | DataType | None = None) Expr ¶
Construct a tensor of all zeros, matching the shape of the input tensor.
- Parameters:
x (relax.Expr) – The input tensor, which provides the shape, and dtype when the dtype field is not specified.
dtype (Optional[Union[str, DataType]]) – The data type of the created tensor. If dtype is not given, it will by default use the dtype of the input tensor.
- Returns:
result – The result tensor.
- Return type:
relax.Expr
tvm.relax.op.nn¶
Neural network related operators.
- tvm.relax.op.nn.adaptive_avg_pool2d(data: Expr, output_size: int | Tuple[int, int] | None = None, layout: str = 'NCHW', out_layout: str | None = None) Expr ¶
2D adaptive average pooling operator. This operator is experimental.
This operator takes data as input and does 2D average value calculation across each window represented by WxH.
In the default case, where the data_layout is NCHW, a data Tensor with shape (batch_size, in_channels, height, width) produces an output Tensor with shape (batch_size, in_channels, output_height, output_width).
The pooling kernel and stride sizes are automatically chosen for desired output sizes.
- For output_size:
If this argument is not provided, input height and width will be used as output height and width.
If a single integer is provided for output_size, the output size is (N x C x output_size x output_size) for any input (NCHW).
If a tuple of integers (height, width) is provided for output_size, the output size is (N x C x height x width) for any input (NCHW).
- Parameters:
data (relax.Expr) – The input data to the operator.
output_size (Optional[Union[int, Tuple[int, int]]]) – Output height and width. If not specified, it will be the same as the input height and width. If specified, it is required to have length either 1 or 2.
layout (str) – Layout of the input.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.attention(query: Expr, key: Expr, value: Expr, bias: Expr | None = None, scale: FloatImm | None = None, causal_mask: str | None = None, window_size: int | None = None) Expr ¶
Computes fused multi head attention.
All input tensors are of 4-D tensors with BSNH layout.
\[FMA(Q, K, V) = \text{Softmax}(Q @ K^T) @ V\]
Note
The input tensor is required to have float16 dtype
- Parameters:
query (relax.Expr) – The input query to the operator. The layout of the input query should be (batch_size, seq_len, num_head, head_dim).
key (relax.Expr) – The input key to the operator. The layout of the input key should be (batch_size, seq_len_kv, num_head, head_dim).
value (relax.Expr) – The input value to the operator. The layout of the input value should be (batch_size, seq_len_kv, num_head, head_dim_v).
bias (Optional[Expr]) – The optional attention bias to the operator. The layout of the attention bias should be a 4-D tensor ending with seq_len_kv, and broadcastable to (batch_size, num_head, seq_len, seq_len_kv).
scale (Optional[float]) – The scale value to be applied to the attention score, by default 1 / sqrt(head_dim).
causal_mask (Optional[str]) –
The optional causal mask, i.e. ‘TopLeft’ and ‘BottomRight’. For ‘TopLeft’, the mask matrix is as np.tril(*, k=0), while for ‘BottomRight’, the mask matrix is as np.tril(*, k=abs(seq_len - seq_len_kv)) For example, with seq_len = 4, seq_len_kv = 2, mask for ‘TopLeft’:
[[1, 0], [1, 1], [1, 1], [1, 1]]
mask for ‘BottomRight’:
[[1, 1], [1, 1], [1, 1], [1, 1]]
with seq_len = 2, seq_len_kv = 4, mask for ‘TopLeft’:
[[1, 0, 0, 0], [1, 1, 0, 0]]
mask for ‘BottomRight’:
[[1, 1, 1, 0], [1, 1, 1, 1]]
window_size (Optional[int]) – The size of the window for sliding-window attention.
- Returns:
result – The computed result. The layout of the output should be (batch_size, seq_len, num_head, head_dim_v).
- Return type:
relax.Expr
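The two causal-mask variants spelled out above are both expressible via np.tril, exactly as the description states:

```python
import numpy as np

seq_len, seq_len_kv = 4, 2

# 'TopLeft': np.tril(*, k=0); 'BottomRight': np.tril(*, k=abs(seq_len - seq_len_kv))
top_left = np.tril(np.ones((seq_len, seq_len_kv)), k=0)
bottom_right = np.tril(np.ones((seq_len, seq_len_kv)), k=abs(seq_len - seq_len_kv))
```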
- tvm.relax.op.nn.attention_var_len(queries: Expr, keys: Expr, values: Expr, seqstart_q: Expr, max_seqlen_q: Expr, seqstart_k: Expr | None = None, max_seqlen_k: Expr | None = None, scale: FloatImm | None = None, causal_mask: str | None = None, window_size: int | None = None) Expr ¶
Computes fused multi head attention over batched sequences of variable lengths.
Given concatenated inputs and sequence lengths information, this operator computes attention for all sequences more efficiently than calling the normal attention operator for each sequence individually.
- Parameters:
queries (relax.Expr) – The input queries concatenated along the second axis. Its shape must be (1, total_seq_len, num_head, head_dim).
keys (relax.Expr) – The input keys concatenated along the second axis. Its shape must be (1, total_seq_len_kv, num_head, head_dim).
values (relax.Expr) – The input values concatenated along the second axis. Its shape must be (1, total_seq_len_kv, num_head, head_dim_v).
seqstart_q (Expr) – The cumsum of query sequence lengths, prepended with 0. Its dtype must be int32. For example, if the lengths of the sequences that are batched are [2, 5, 3], this tensor has values [0, 2, 7, 10].
seqstart_k (Optional[Expr]) – The cumsum of key sequence lengths, prepended with 0. By default it is the same as seqstart_q.
max_seqlen_q (Expr) – The maximum query sequence length in the batch. It must be int32.
max_seqlen_k (Optional[Expr]) – The maximum key sequence length in the batch. It must be int32. By default it is the same as max_seqlen_q.
scale (Optional[float]) – The scale value to be applied to the attention score, by default 1 / sqrt(head_dim).
causal_mask (Optional[str]) –
The optional causal mask, i.e. ‘TopLeft’ or ‘BottomRight’. For ‘TopLeft’, the mask matrix is as np.tril(*, k=0), while for ‘BottomRight’, the mask matrix is as np.tril(*, k=abs(seq_len - seq_len_kv)). For example, with seq_len = 4, seq_len_kv = 2, mask for ‘TopLeft’:
[[1, 0], [1, 1], [1, 1], [1, 1]]
mask for ‘BottomRight’:
[[1, 1], [1, 1], [1, 1], [1, 1]]
with seq_len = 2, seq_len_kv = 4, mask for ‘TopLeft’:
[[1, 0, 0, 0], [1, 1, 0, 0]]
mask for ‘BottomRight’:
[[1, 1, 1, 0], [1, 1, 1, 1]]
window_size (Optional[int]) – The size of the window for sliding-window attention.
- Returns:
result – The computed result with shape (1, total_seq_len, num_head, head_dim_v).
- Return type:
relax.Expr
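The seqstart tensors are just cumulative sums of the per-sequence lengths. As an illustrative sketch (plain Python; the helper name is hypothetical and not part of the TVM API):

```python
def seqstart_from_lengths(lengths):
    """Cumulative sum of sequence lengths, prepended with 0."""
    starts = [0]
    for n in lengths:
        starts.append(starts[-1] + n)
    return starts

# For batched sequences of lengths [2, 5, 3], the seqstart_q values are
# [0, 2, 7, 10], matching the example in the parameter description.
```

The resulting list would then be materialized as an int32 tensor before being passed as seqstart_q.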
- tvm.relax.op.nn.avg_pool2d(data: Expr, pool_size: int | Tuple[int, int] = (1, 1), strides: int | Tuple[int, int] = (1, 1), padding: int | Tuple[int, ...] = (0, 0), dilation: int | Tuple[int, int] = (1, 1), ceil_mode: bool = False, layout: str = 'NCHW', out_layout: str | None = None) Expr ¶
2D average pooling operator.
This operator takes data as input and performs 2D average pooling within a pool_size-sized window, strided according to strides.
In the default case, where the data_layout is NCHW, avg_pool2d takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:
with data of shape (b, c, h, w) and pool_size (kh, kw)
\[\mbox{out}(b, c, y, x) = \frac{1}{kh * kw} \sum_{m=0, \ldots, kh-1} \sum_{n=0, \ldots, kw-1} \mbox{data}(b, c, \mbox{stride}[0] * y + m, \mbox{stride}[1] * x + n)\]Padding is applied to data before the computation. ceil_mode is used to take ceil or floor while computing out shape. This operator accepts data layout specification.
- Parameters:
data (relax.Expr) – The input data to the operator.
pool_size (Union[int, Tuple[int, int]]) – The size of window for pooling. It is required to have length either 1 or 2.
strides (Union[int, Tuple[int, int]]) – The strides of pooling. It is required to have length either 1 or 2.
padding (Union[int, Tuple[int, ...]]) – The padding for pooling. It is required to have length either 1, 2 or 4.
dilation (Union[int, Tuple[int, int]]) – The dilation of pooling. It is required to have length either 1 or 2.
ceil_mode (bool) – A boolean indicating if use ceil or floor to compute the output shape. By using ceil, every element in the input tensor will be covered by a sliding window.
layout (str) – Layout of the input.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
- Returns:
result – The computed result.
- Return type:
Expr
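The output spatial dimensions follow the standard pooling shape rule, with ceil_mode choosing between floor and ceil division. A sketch of the arithmetic (plain Python; the helper is illustrative, not a TVM function):

```python
import math

def pool_out_dim(in_size, pool, stride, pad, dilation=1, ceil_mode=False):
    # Effective window extent after dilation.
    window = dilation * (pool - 1) + 1
    span = in_size + 2 * pad - window
    if ceil_mode:
        return math.ceil(span / stride) + 1
    return span // stride + 1

# pool_out_dim(8, 3, 2, 0)                 -> 3
# pool_out_dim(8, 3, 2, 0, ceil_mode=True) -> 4  (the last window is partial)
```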
- tvm.relax.op.nn.batch_norm(data: Expr, gamma: Expr, beta: Expr, moving_mean: Expr, moving_var: Expr, axis: int, epsilon: float = 1e-05, center: bool = True, scale: bool = True, momentum: float = 0.1) Expr ¶
Batch normalization layer (Ioffe and Szegedy, 2014).
Normalizes the input at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
\[\begin{split}data\_mean[i] = mean(data[:,i,:,...]) \\ data\_var[i] = var(data[:,i,:,...])\end{split}\]Both mean and var return a scalar by treating the input as a vector.
Then compute the normalized output, which has the same shape as input, as following:
\[out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]\]Assume the input has size k on axis 1; then both gamma and beta have shape (k,).
Besides the inputs and the outputs, this operator accepts two auxiliary states, moving_mean and moving_var, which are k-length vectors. They are global statistics for the whole dataset, updated by:
moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
moving_var = moving_var * momentum + data_var * (1 - momentum)
The parameter axis specifies which axis of the input shape denotes the ‘channel’ (separately normalized groups). The default is 1. Specifying -1 sets the channel axis to be the last item in the input shape.
Note
This operator has two modes:
- Training mode.
Use the mean and var computed from THIS batch to normalize.
Update and then return the running mean and running var.
- Inference mode.
Use the running_mean and running_var parameters to normalize.
Do not update the running mean and running var. Just return the original value.
In the legalization stage, this operator will be legalized to the training mode by default.
You can use tvm.relax.transform.DecomposeOpsForInference to decompose the operator, so it executes the inference mode computation. Similarly, use tvm.relax.transform.DecomposeOpsForTraining to execute the training mode computation.
- Parameters:
data (relax.Expr) – The input data to the operator.
gamma (relax.Expr) – The gamma scale factor.
beta (relax.Expr) – The beta offset factor.
moving_mean (relax.Expr) – Running mean of input.
moving_var (relax.Expr) – Running variance of input.
axis (int) – The axis along which the normalization is applied.
epsilon (float) – Small float added to variance to avoid dividing by zero.
center (bool) – Indicating if the beta offset will be added to the normalized tensor.
scale (bool) – Indicating if the gamma scale will be multiplied.
momentum (float) – The value used for the moving_mean and moving_var update.
- Returns:
result – The computed result.
- Return type:
relax.Expr
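The moving-statistics update can be written directly as a one-line sketch mirroring the documented rule (plain Python, illustrative only):

```python
def update_moving(stat, batch_stat, momentum=0.1):
    # Mirrors the documented rule:
    # moving = moving * momentum + batch_stat * (1 - momentum)
    return stat * momentum + batch_stat * (1 - momentum)

# With the default momentum of 0.1, a moving mean of 0.0 and a batch mean
# of 1.0 update to 0.9.
```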
- tvm.relax.op.nn.conv1d(data: Expr, weight: Expr, strides: int | Tuple[int] = 1, padding: int | Tuple[int, ...] = 0, dilation: int | Tuple[int] = 1, groups: int = 1, data_layout: str = 'NCW', kernel_layout: str = 'OIW', out_layout: str | None = None, out_dtype: str | DataType | None = None) Expr ¶
1D convolution.
This operator takes the weight as the 1D convolution kernel and convolves it with data to produce an output.
In the default case, where the data_layout is NCW and kernel_layout is OIW, conv1d takes in a data Tensor with shape (batch_size, in_channels, width), and a weight Tensor with shape (channels, in_channels, kernel_w), where kernel_w is the length of the W kernel dimension, to produce an output Tensor with the following rule:
\[\mbox{out}[b, c, x] = \sum_{dx, k} \mbox{data}[b, k, \mbox{strides} * x + dx] * \mbox{weight}[c, k, dx]\]Padding and dilation are applied to data and weight respectively before the computation. This operator accepts data layout specification. Semantically, the operator will convert the layout to the canonical layout (NCW for data and OIW for weight), perform the computation, then convert to the out_layout.
- Parameters:
data (relax.Expr) – The input data to the operator.
weight (relax.Expr) – The weight expressions.
strides (Union[int, Tuple[int]]) – The strides of convolution. It is required to have length 1.
padding (Union[int, Tuple[int, ...]]) – The padding of convolution on both sides of inputs before convolution. It is required to have length either 1 or 2.
dilation (Union[int, Tuple[int]]) – Specifies the dilation rate to be used for dilated convolution. It is required to have length 1.
groups (int) – Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.
data_layout (str) – Layout of the input.
kernel_layout (str) – Layout of the weight.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
out_dtype (Optional[Union[str, DataType]]) – Specifies the output data type for mixed precision conv1d.
- Returns:
result – The computed result.
- Return type:
relax.Expr
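For a quick shape check, the output width in the default NCW/OIW case follows the usual convolution formula (plain-Python sketch, not a TVM function):

```python
def conv1d_out_width(in_w, kernel_w, stride=1, pad=0, dilation=1):
    # Effective kernel extent after dilation.
    window = dilation * (kernel_w - 1) + 1
    return (in_w + 2 * pad - window) // stride + 1

# conv1d_out_width(10, 3)                  -> 8
# conv1d_out_width(10, 3, stride=2, pad=1) -> 5
```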
- tvm.relax.op.nn.conv1d_transpose(data: Expr, weight: Expr, strides: int | Tuple[int] = 1, padding: int | Tuple[int, ...] = 0, output_padding: int | Tuple[int] = 0, dilation: int | Tuple[int] = 1, groups: int = 1, data_layout: str = 'NCW', kernel_layout: str = 'IOW', out_layout: str | None = None, out_dtype: str | DataType | None = None) Expr ¶
1D transposed convolution operator.
This operator can be seen as the gradient operator of conv1d.
The output shape can be explained in the simple case when data_layout == “NCW” and kernel_layout == “IOW”. Suppose data has shape (N, in_channel, in_w) and weight has shape (in_channel, out_channel, weight_w); we need to ensure that in_channel % groups == 0. The shape of the output will be (N, out_channel * groups, out_w), where
out_w = ((in_w - 1) * strides[0] + weight_w - 2 * padding[0] + output_padding[0])
- Parameters:
data (relax.Expr) – The input data to the operator.
weight (relax.Expr) – The weight expressions.
strides (Union[int, Tuple[int]]) – The strides of convolution. It is required to have length 1.
padding (Union[int, Tuple[int, ...]]) – The padding of convolution on both sides of inputs before convolution. It is required to have length either 1 or 2.
output_padding (Union[int, Tuple[int, ...]], optional) – Used to disambiguate the output shape.
dilation (Union[int, Tuple[int]]) – Specifies the dilation rate to be used for dilated convolution. It is required to have length 1.
groups (int) – Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.
data_layout (str) – Layout of the input.
kernel_layout (str) – Layout of the weight.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
out_dtype (Optional[Union[str, DataType]]) – Specifies the output data type for mixed precision conv1d_transpose.
- Returns:
result – The computed result.
- Return type:
relax.Expr
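The out_w rule above can be checked numerically; a plain-Python sketch (illustrative helper, not the TVM API):

```python
def conv1d_transpose_out_width(in_w, weight_w, stride=1, pad=0, output_padding=0):
    return (in_w - 1) * stride + weight_w - 2 * pad + output_padding

# A strided conv1d with in_w=10, kernel 3, stride 2, pad 1 yields width 5;
# transposing with output_padding=1 recovers the original width:
# conv1d_transpose_out_width(5, 3, stride=2, pad=1, output_padding=1) -> 10
```

This round trip illustrates why output_padding exists: several input widths map to the same convolved width, and output_padding disambiguates among them.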
- tvm.relax.op.nn.conv2d(data: Expr, weight: Expr, strides: int | Tuple[int, int] = (1, 1), padding: int | Tuple[int, ...] = (0, 0), dilation: int | Tuple[int, int] = (1, 1), groups: int = 1, data_layout: str = 'NCHW', kernel_layout: str = 'OIHW', out_layout: str | None = None, out_dtype: str | DataType | None = None) Expr ¶
2D convolution.
This operator takes the weight as the convolution kernel and convolves it with data to produce an output.
In the default case, where the data_layout is NCHW and kernel_layout is OIHW, conv2d takes in a data Tensor with shape (batch_size, in_channels, height, width), and a weight Tensor with shape (channels, in_channels, kernel_h, kernel_w), where kernel_h and kernel_w are the lengths of the H and W kernel dimensions, to produce an output Tensor with the following rule:
\[\mbox{out}[b, c, y, x] = \sum_{dy, dx, k} \mbox{data}[b, k, \mbox{strides}[0] * y + dy, \mbox{strides}[1] * x + dx] * \mbox{weight}[c, k, dy, dx]\]Padding and dilation are applied to data and weight respectively before the computation. This operator accepts data layout specification. Semantically, the operator will convert the layout to the canonical layout (NCHW for data and OIHW for weight), perform the computation, then convert to the out_layout.
- Parameters:
data (relax.Expr) – The input data to the operator.
weight (relax.Expr) – The weight expressions.
strides (Union[int, Tuple[int, int]]) – The strides of convolution. It is required to have length either 1 or 2.
padding (Union[int, Tuple[int, ...]]) – The padding of convolution on both sides of inputs before convolution. It is required to have length either 1, 2 or 4.
dilation (Union[int, Tuple[int, int]]) – Specifies the dilation rate to be used for dilated convolution. It is required to have length either 1 or 2.
groups (int) – Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.
data_layout (str) – Layout of the input.
kernel_layout (str) – Layout of the weight.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
out_dtype (Optional[Union[str, DataType]]) – Specifies the output data type for mixed precision conv2d.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.conv2d_transpose(data: Expr, weight: Expr, strides: int | Tuple[int, int] = (1, 1), padding: int | Tuple[int, ...] = (0, 0), output_padding: int | Tuple[int, int] = (0, 0), dilation: int | Tuple[int, int] = (1, 1), groups: int = 1, data_layout: str = 'NCHW', kernel_layout: str = 'IOHW', out_layout: str | None = None, out_dtype: str | DataType | None = None) Expr ¶
Two dimensional transposed convolution operator.
This operator is intended to be the gradient operator of conv2d. That means, if
out = conv2d(data, weight, strides, padding, dilation),
The gradient w.r.t. data can be calculated as follows:
data_grad = conv2d_transpose(out_grad, weight, strides, padding, output_padding, dilation),
where output_padding is a parameter used to determine the output shape.
The output shape can be explained in the simple case when data_layout == “NCHW” and kernel_layout == “IOHW”. Suppose data has shape (N, in_channel, in_h, in_w) and weight has shape (in_channel, out_channel, weight_h, weight_w); we need to ensure that in_channel % groups == 0. The shape of the output will be (N, out_channel * groups, out_h, out_w), where
out_h = ((in_h - 1) * strides[0] + weight_h - 2 * padding[0] + output_padding[0])
out_w = ((in_w - 1) * strides[1] + weight_w - 2 * padding[1] + output_padding[1])
- Parameters:
data (relax.Expr) – The input data to the operator.
weight (relax.Expr) – The weight expressions.
strides (Union[int, Tuple[int, int]]) – The strides of convolution. It is required to have length either 1 or 2.
padding (Union[int, Tuple[int, ...]]) – The padding of convolution on both sides of inputs before convolution. It is required to have length either 1, 2 or 4.
output_padding (Union[int, Tuple[int, ...]], optional) – Used to disambiguate the output shape.
dilation (Union[int, Tuple[int, int]]) – Specifies the dilation rate to be used for dilated convolution. It is required to have length either 1 or 2.
groups (int) – Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.
data_layout (str) – Layout of the input.
kernel_layout (str) – Layout of the weight.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
out_dtype (Optional[Union[str, DataType]]) – Specifies the output data type for mixed precision conv2d.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.conv3d(data: Expr, weight: Expr, strides: int | Tuple[int, int, int] = (1, 1, 1), padding: int | Tuple[int, ...] = (0, 0, 0), dilation: int | Tuple[int, int, int] = (1, 1, 1), groups: int = 1, data_layout: str = 'NCDHW', kernel_layout: str = 'OIDHW', out_layout: str | None = None, out_dtype: str | DataType | None = None) Expr ¶
3D convolution.
This operator takes the weight as the convolution kernel and convolves it with data to produce an output.
In the default case, where the data_layout is NCDHW and kernel_layout is OIDHW, conv3d takes in a data Tensor with shape (batch_size, in_channels, depth, height, width), and a weight Tensor with shape (channels, in_channels, kernel_d, kernel_h, kernel_w), where kernel_d, kernel_h, and kernel_w are the lengths of the D, H, and W kernel dimensions, to produce an output Tensor with the following rule:
\[\mbox{out}[b, c, z, y, x] = \sum_{dz, dy, dx, k} \mbox{data}[b, k, \mbox{strides}[0] * z + dz, \mbox{strides}[1] * y + dy, \mbox{strides}[2] * x + dx] * \mbox{weight}[c, k, dz, dy, dx]\]Padding and dilation are applied to data and weight respectively before the computation. This operator accepts data layout specification. Semantically, the operator will convert the layout to the canonical layout (NCDHW for data and OIDHW for weight), perform the computation, then convert to the out_layout.
- Parameters:
data (relax.Expr) – The input data to the operator.
weight (relax.Expr) – The weight expressions.
strides (Union[int, Tuple[int, int, int]]) – The strides of convolution. It is required to have length either 1 or 3.
padding (Union[int, Tuple[int, ...]]) – The padding of convolution on both sides of inputs before convolution. It is required to have length either 1, 3 or 6.
dilation (Union[int, Tuple[int, int, int]]) – Specifies the dilation rate to be used for dilated convolution. It is required to have length either 1 or 3.
groups (int) – Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.
data_layout (str) – Layout of the input.
kernel_layout (str) – Layout of the weight.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
out_dtype (Optional[Union[str, DataType]]) – Specifies the output data type for mixed precision conv3d.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.cross_entropy_with_logits(predictions: Expr, labels: Expr) Expr ¶
CrossEntropy with logits between the predictions and labels.
The shape of predictions and labels must be the same. When ndim >= 2, the first dimension is regarded as the batch size N; in this case the computed result is divided by N to perform a mean reduction.
\[\text{cross\_entropy\_with\_logits}(x_i, y_i) = \frac{\sum_i -x_i \cdot y_i}{N}\]- Parameters:
predictions (relax.Expr) – The predictions.
labels (relax.Expr) – The labels (the ground truth values).
- Returns:
result – The computed result.
- Return type:
relax.Expr
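A reference computation of the formula above for 2-D inputs, where the first dimension is the batch size N (plain Python, illustrative only):

```python
def cross_entropy_with_logits_ref(predictions, labels):
    # Sum of -x * y over all elements, divided by the batch size N.
    n = len(predictions)
    total = 0.0
    for row_x, row_y in zip(predictions, labels):
        total += sum(-x * y for x, y in zip(row_x, row_y))
    return total / n

# cross_entropy_with_logits_ref([[1.0, 2.0], [3.0, 4.0]],
#                               [[1.0, 0.0], [0.0, 1.0]]) -> -2.5
```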
- tvm.relax.op.nn.dropout(data: Expr, rate: float = 0.5) Expr ¶
Applies the dropout operation to the input tensor.
During training, each element of the input is set to zero with probability p. The whole array is scaled by 1/(1-p) to keep the expected sum of the input unchanged.
- Parameters:
data (relax.Expr) – The input data to the operator.
rate (float) – The probability for an element to be reset to 0.
- Returns:
result – The result of dropout, which is a tuple of two tensors. The first one is the original tensor and the second one is a mask tensor (1.0 where element not dropped, 0.0 where dropped)
- Return type:
relax.Expr
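The scaling rule can be seen in a small reference computation (plain Python; the mask here is supplied by hand, whereas the operator samples it randomly):

```python
def dropout_ref(x, mask, rate=0.5):
    # Kept elements are scaled by 1/(1-rate) so the expected sum is unchanged.
    scale = 1.0 / (1.0 - rate)
    return [v * m * scale for v, m in zip(x, mask)]

# dropout_ref([2.0, 4.0], [1.0, 0.0]) -> [4.0, 0.0]
```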
- tvm.relax.op.nn.gelu(data: Expr) Expr ¶
Gaussian Error Linear Units function
\[\text{GeLU}(x) = 0.5 * x * (1 + \text{erf}(x / \sqrt{2}))\]where \(\mathrm{erf}\) is the Gauss error function.
- Parameters:
data (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.nn.gelu_tanh(data: Expr) Expr ¶
Gaussian Error Linear Units function with tanh approximation
\[\text{GELU}(x) = 0.5 * x * (1 + \text{Tanh}(\sqrt{2 / \pi} * (x + 0.044715 * x^3)))\]- Parameters:
data (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.nn.group_norm(data: Expr, gamma: Expr, beta: Expr, num_groups: int, channel_axis: int, axes: int | List[int], epsilon: float = 1e-05, center: bool = True, scale: bool = True) Expr ¶
Group normalization (Yuxin Wu et al., 2016). Applies group normalization to the n-dimensional input array. This operator takes an n-dimensional input array, first separates the input array into groups along the channel axis, and then applies layer normalization to each group.
- Parameters:
data (relax.Expr) – Input to which group_norm will be applied.
gamma (relax.Expr) – The gamma scale factor.
beta (relax.Expr) – The beta offset factor.
num_groups (int) – Number of groups to separate the channels into.
channel_axis (int) – The index of the channel axis in the input data.
axes (Union[int, List[int]]) – The axes that along which the normalization is applied (excluding the group axis)
epsilon (float) – Small float added to variance to avoid dividing by zero.
center (bool) – Indicating if the beta offset will be added to the normalized tensor.
scale (bool) – Indicating if the gamma scale will be multiplied.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.layer_norm(data: Expr, gamma: Expr, beta: Expr, axes: int | List[int], epsilon: float = 1e-05, center: bool = True, scale: bool = True) Expr ¶
Layer normalization (Lei Ba et al., 2016). Applies layer normalization to the n-dimensional input array. This operator takes an n-dimensional input array and normalizes the input using the given axis:
\[out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis)+\epsilon}} * gamma + beta\]Unlike batch normalization, the mean and var are computed along the channel dimension.
Assume the input has size k on axis 1, then both gamma and beta have shape (k,).
Note
This operator can be optimized away for inference.
- Parameters:
data (relax.Expr) – Input to which layer_norm will be applied.
gamma (relax.Expr) – The gamma scale factor.
beta (relax.Expr) – The beta offset factor.
axes (Union[int, List[int]]) – The axes that along which the normalization is applied.
epsilon (float) – Small float added to variance to avoid dividing by zero.
center (bool) – Indicating if the beta offset will be added to the normalized tensor.
scale (bool) – Indicating if the gamma scale will be multiplied.
- Returns:
result – The computed result.
- Return type:
relax.Expr
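A reference implementation of the formula for a single 1-D axis (plain Python, illustrative only):

```python
def layer_norm_ref(x, gamma, beta, eps=1e-5):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv = (var + eps) ** -0.5
    return [(v - mean) * inv * g + b for v, g, b in zip(x, gamma, beta)]
```

For example, layer_norm_ref([1.0, -1.0], [1.0, 1.0], [0.0, 0.0], eps=0.0) returns [1.0, -1.0], since the input already has zero mean and unit variance.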
- tvm.relax.op.nn.leakyrelu(data: Expr, alpha: float = 0.01) Expr ¶
Leaky rectified linear unit.
\[\text{LeakyReLU}(x) = \max(x, 0) + \text{alpha} * \min(x, 0)\]- Parameters:
data (relax.Expr) – The input data
alpha (float) – Controls the angle of the negative slope, used for negative inputs. Default value is 0.01.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.log_softmax(data: Expr, axis: int = -1) Expr ¶
Computes log softmax.
\[\text{log\_softmax}(x_i) = \log\left( \frac{\exp(x_i)}{\sum_j \exp(x_j)}\right)\]Note
This operator can be optimized away for inference.
- Parameters:
data (relax.Expr) – The input data to the operator.
axis (int) – The axis to sum over when computing log softmax. If not specified, it is by default the last axis of the input tensor. Supports negative indexing.
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.max_pool2d(data: Expr, pool_size: int | Tuple[int, int] = (1, 1), strides: int | Tuple[int, int] = (1, 1), padding: int | Tuple[int, ...] = (0, 0), dilation: int | Tuple[int, int] = (1, 1), ceil_mode: bool = False, layout: str = 'NCHW', out_layout: str | None = None) Expr ¶
2D maximum pooling operator.
This operator takes data as input and performs 2D max pooling within a pool_size-sized window, strided according to strides.
In the default case, where the data_layout is NCHW, max_pool2d takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:
with data of shape (b, c, h, w) and pool_size (kh, kw)
\[\mbox{out}(b, c, y, x) = \max_{m=0, \ldots, kh-1} \max_{n=0, \ldots, kw-1} \mbox{data}(b, c, \mbox{stride}[0] * y + m, \mbox{stride}[1] * x + n)\]Padding is applied to data before the computation. ceil_mode is used to take ceil or floor while computing out shape. This operator accepts data layout specification.
- Parameters:
data (relax.Expr) – The input data to the operator.
pool_size (Union[int, Tuple[int, int]]) – The size of window for pooling. It is required to have length either 1 or 2.
strides (Union[int, Tuple[int, int]]) – The strides of pooling. It is required to have length either 1 or 2.
padding (Union[int, Tuple[int, ...]]) – The padding for pooling. It is required to have length either 1, 2 or 4.
dilation (Union[int, Tuple[int, int]]) – The dilation of pooling. It is required to have length either 1 or 2.
ceil_mode (bool) – A boolean indicating if use ceil or floor to compute the output shape. By using ceil, every element in the input tensor will be covered by a sliding window.
layout (str) – Layout of the input.
out_layout (Optional[str]) – Layout of the output. If not specified, it is the same as data_layout
- Returns:
result – The computed result.
- Return type:
Expr
- tvm.relax.op.nn.nll_loss(predictions: Expr, targets: Expr, weights: Expr | None = None, reduction: str = 'mean', ignore_index: int = -100) Expr ¶
Negative log likelihood loss.
output[n, i_1, i_2, …, i_k] = -p * w, where
- p = predictions[n, t, i_1, i_2, …, i_k],
- t = targets[n, i_1, i_2, …, i_k],
- w = weights[t] if t != ignore_index else 0
result = reduction(output)
- Parameters:
predictions (relax.Expr) – The predictions. Should be a (k+2)-D Tensor with shape (N, C, d_1, d_2, …, d_k) where C is the number of target classes.
targets (relax.Expr) – The target value of each prediction. Should be a (k+1)-D Tensor with shape (N, d_1, d_2, …, d_k). Must be of int dtype.
weights (Optional[relax.Expr]) – The weight of each target value. Should be a 1-D Tensor with shape (C,). If not specified, it is treated as if having all ones.
reduction (str) – The reduction method to apply to the output. Possible values are “mean”, “sum” and “none”.
ignore_index (int) – The target value to ignore.
- Returns:
result – The computed result.
- Return type:
relax.Expr
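A reference computation of the per-element rule for the 2-D case (N, C) with reduction="none" (plain Python, illustrative only):

```python
def nll_loss_none_ref(predictions, targets, weights, ignore_index=-100):
    out = []
    for row, t in zip(predictions, targets):
        if t == ignore_index:
            out.append(0.0)  # ignored targets contribute 0
        else:
            out.append(-row[t] * weights[t])
    return out

# nll_loss_none_ref([[0.2, 0.8]], [1], [1.0, 2.0]) -> [-1.6]
```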
- tvm.relax.op.nn.pad(data, pad_width, pad_value=0, pad_mode='constant')¶
Padding
This operator takes in a tensor and pads each axis by the specified widths using the specified value.
- Parameters:
data (relax.Expr) – The input data to the operator
pad_width (tuple of <tuple of <int>>, required) – Number of values padded to the edges of each axis, in the format of ((before_1, after_1), …, (before_N, after_N))
pad_value (float) – The value used for padding
pad_mode ('constant', 'edge', 'reflect') – ‘constant’ pads with the constant pad_value; ‘edge’ pads using the edge values of the input array; ‘reflect’ pads by reflecting values with respect to the edge.
- Returns:
result – The computed result.
- Return type:
relax.Expr
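A 1-D sketch of the three pad modes (plain Python; the real operator pads every axis according to pad_width):

```python
def pad1d(xs, before, after, mode="constant", value=0.0):
    if mode == "constant":
        left, right = [value] * before, [value] * after
    elif mode == "edge":
        left, right = [xs[0]] * before, [xs[-1]] * after
    elif mode == "reflect":
        # Mirror about the edge elements, excluding the edge itself.
        left = [xs[i] for i in range(before, 0, -1)]
        right = [xs[-2 - i] for i in range(after)]
    else:
        raise ValueError(mode)
    return left + xs + right

# pad1d([1, 2, 3], 1, 1)                  -> [0.0, 1, 2, 3, 0.0]
# pad1d([1, 2, 3], 1, 1, mode="edge")     -> [1, 1, 2, 3, 3]
# pad1d([1, 2, 3], 2, 2, mode="reflect")  -> [3, 2, 1, 2, 3, 2, 1]
```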
- tvm.relax.op.nn.relu(data: Expr) Expr ¶
Rectified linear unit.
\[\text{ReLU}(x) = \max(x, 0)\]- Parameters:
data (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
- tvm.relax.op.nn.rms_norm(data: Expr, weight: Expr, axes: int | List[int] = -1, epsilon: float = 1e-05) Expr ¶
Root mean square normalization (Biao Zhang et al., 2019). Applies root mean square normalization to the n-dimensional input array. This operator takes an n-dimensional input array and normalizes the input using the given axis:
\[out = \frac{data}{\sqrt{mean(data^2, axis)+\epsilon}} * weight\]- Parameters:
data (relax.Expr) – Input to which rms_norm will be applied.
weight (relax.Expr) – The scale factor.
axes (Union[int, List[int]]) – The axes that along which the normalization is applied.
epsilon (float) – Small float added to square mean to avoid dividing by zero.
- Returns:
result – The computed result.
- Return type:
relax.Expr
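A reference computation of the formula over one axis (plain Python, illustrative only):

```python
def rms_norm_ref(x, weight, eps=1e-5):
    mean_sq = sum(v * v for v in x) / len(x)
    inv = (mean_sq + eps) ** -0.5
    return [v * inv * w for v, w in zip(x, weight)]

# For x = [3.0, 4.0], the mean of squares is 12.5, so both elements are
# divided by sqrt(12.5 + eps); the ratio of the outputs stays 3:4.
```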
- tvm.relax.op.nn.silu(data: Expr) Expr ¶
Sigmoid Linear Unit function
\[\text{SiLU}(x) = x * \text{sigmoid}(x)\]- Parameters:
data (relax.Expr) – The input data
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
- tvm.relax.op.nn.softmax(data: Expr, axis: int = -1) Expr ¶
Computes softmax.
\[\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]- Parameters:
data (relax.Expr) – The input data to the operator.
axis (int) – The axis to sum over when computing softmax. If not specified, it is by default the last axis of the input tensor. Supports negative indexing.
- Returns:
result – The computed result.
- Return type:
relax.Expr
Note
The input tensor is required to have float dtype
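A numerically stable reference computation (plain Python; subtracting the max does not change the result but avoids overflow in exp):

```python
import math

def softmax_ref(x):
    m = max(x)  # shift for numerical stability
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

# softmax_ref([0.0, 0.0]) -> [0.5, 0.5]; outputs always sum to 1.
```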
tvm.relax.op.builtin¶
Relax builtin operators.
- tvm.relax.op.builtin.alloc_tensor(shape: Expr, dtype: str | Expr, runtime_device_index: int | Expr) Call ¶
Construct a relax.Call to allocate a tensor with specific shape, dtype, runtime_device_index.
- Parameters:
shape (Expr) – The shape of the tensor to be allocated.
dtype (Union[str, Expr]) – The datatype of the tensor to be allocated.
runtime_device_index (Union[int, Expr]) – The device index indicating on which device the tensor is to be allocated at runtime. Index -1 is reserved for the host device.
- Returns:
result – A relax.Call that gets the allocated tensor.
- Return type:
relax.Call
- tvm.relax.op.builtin.stop_lift_params(x: Expr) Expr ¶
An indicator that the consumers of the input tensor should not be lifted to the transform_params function.
- Parameters:
x (relax.Expr) – The input data
- Returns:
result – The result tensor that is the same as input tensor
- Return type:
relax.Expr
tvm.relax.op.ccl¶
CCL related operators.
- tvm.relax.op.ccl.allreduce(x, op_type: str = 'sum')¶
Allreduce operator
- Parameters:
x (relax.Expr) – The input tensor.
op_type (str) – The type of reduction operation to be applied to the input data. Now “sum”, “prod”, “min”, “max” and “avg” are supported.
- Returns:
result – The result of allreduce.
- Return type:
relax.Expr
- tvm.relax.op.ccl.broadcast_from_worker0(x: Expr) Expr ¶
Broadcast data from worker-0 to all other workers.
- Parameters:
x (relax.Expr) – The tensor to be broadcast.
- Returns:
result – The same tensor, which has been broadcast to all other workers.
- Return type:
relax.Expr
- tvm.relax.op.ccl.scatter_from_worker0(x: Expr, num_workers: int, axis: int = 0) Expr ¶
Perform a scatter operation from worker-0, chunking the given buffer into equal parts.
- Parameters:
x (relax.Expr) – The buffer to be divided into equal parts and sent to each worker accordingly.
num_workers (int) – The number of workers, i.e. the number of parts the given buffer should be chunked into.
axis (int) – The dimension of the tensor to be scattered. Default is 0.
- Returns:
result – Chunked Tensor received by different workers.
- Return type:
relax.Expr
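The chunking can be pictured with a plain-Python sketch over a 1-D buffer (illustrative only; the operator works on tensors along the given axis):

```python
def scatter_chunks(xs, num_workers):
    # The buffer length must divide evenly among the workers.
    assert len(xs) % num_workers == 0
    n = len(xs) // num_workers
    return [xs[i * n:(i + 1) * n] for i in range(num_workers)]

# scatter_chunks([1, 2, 3, 4], 2) -> [[1, 2], [3, 4]]
```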
tvm.relax.op.distributed¶
Operators serving for distributed Relax.
- tvm.relax.op.distributed.annotate_sharding(input: Expr, device_mesh: DeviceMesh, placement: Placement) Expr ¶
Annotate sharding plan for tensor
- Parameters:
input (relax.Expr) – The input tensor.
device_mesh (DeviceMesh) – The device mesh of the sharding plan
placement (Placement) – The placement of the sharding plan
- Returns:
result – The tensor unmodified.
- Return type:
relax.Expr
- tvm.relax.op.distributed.call_tir_local_view(gvar: GlobalVar, args: Expr, out_sinfo: DTensorStructInfo | List[DTensorStructInfo], tir_vars: ShapeExpr | Tuple[PrimExpr] | List[PrimExpr] | None = None) Call ¶
Call a tir.prim_func and return the output. The prim_func should be a worker-local function that is actually executed on each worker, instead of the unpartitioned function. The output of this operator is a DTensor or a tuple of DTensors.
- Parameters:
gvar (GlobalVar) – The GlobalVar referring to a tir PrimFunc.
args (Expr) – The input arguments.
out_sinfo (Union[DTensorStructInfo, List[DTensorStructInfo]]) – The structure info of the call_tir output. It should be a single or a list of DTensorStructInfo. Each one denotes the structure info of a returned tensor.
tir_vars (Optional[Union[ShapeExpr, Tuple[PrimExpr], List[PrimExpr]]]) – ShapeExpr representing a tuple of integers to unpack when calling func. Is null if not used
- Returns:
ret – A call node for the call_tir_local_view operator.
- Return type:
Call
- tvm.relax.op.distributed.redistribute(input: Expr, device_mesh: DeviceMesh, placement: Placement) Expr ¶
Redistribute tensor
- Parameters:
input (relax.Expr) – The input tensor.
device_mesh (DeviceMesh) – The device mesh after redistribution
placement (Placement) – The placement after redistribution
- Returns:
result – The tensor after redistribution.
- Return type:
relax.Expr
- tvm.relax.op.distributed.redistribute_replica_to_shard(input: Expr, num_workers: int, axis: int) Expr ¶
- Slice tensor into several parts along one axis, and each worker takes one part.
input.struct_info.shape[axis] % num_workers == 0 is required. Each worker must have an identical copy of the input. This is a specialized version of the redistribute op.
- Parameters:
input (relax.Expr) – The buffer to be sliced into equal parts.
num_workers (int) – The number of workers, i.e. the number of parts the given buffer should be sliced into.
axis (int) – The axis of the tensor to be sliced.
- Returns:
result – Sliced Tensor kept by each device.
- Return type:
relax.Expr
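The slicing rule above can be sketched in plain Python (a conceptual model, not the TVM implementation): each worker starts with an identical replica, and worker i keeps the i-th equal slice along the given axis.

```python
# Conceptual sketch of redistribute_replica_to_shard's semantics
# (plain Python; sharding here is along axis 0 of a 2D list).
def shard_along_axis(rows, num_workers):
    # The axis length must divide evenly among the workers.
    assert len(rows) % num_workers == 0, "axis length must divide evenly"
    part = len(rows) // num_workers
    return [rows[i * part:(i + 1) * part] for i in range(num_workers)]

replica = [[1, 2], [3, 4], [5, 6], [7, 8]]  # identical copy on every worker
shards = shard_along_axis(replica, num_workers=2)
# worker 0 keeps [[1, 2], [3, 4]]; worker 1 keeps [[5, 6], [7, 8]]
```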
tvm.relax.op.grad¶
Operators serving for finding gradient of relax operators.
- tvm.relax.op.grad.avg_pool2d_backward(output_grad: Expr, data: Expr, pool_size: Tuple[int, int] = (1, 1), strides: Tuple[int, int] = (1, 1), padding: Tuple[int, int, int, int] = (0, 0, 0, 0), dilation: Tuple[int, int] = (1, 1), ceil_mode: bool = False, layout: str = 'NCHW', out_layout: str | None = None) Expr ¶
Backward operator of relax.nn.avg_pool2d. All parameters except output_grad are the same as those of relax.nn.avg_pool2d. Returns the gradient w.r.t. data.
- Parameters:
output_grad (relax.Expr) – The gradient w.r.t. the result of avg_pool2d.
- Returns:
result – The gradient w.r.t. data.
- Return type:
relax.Expr
- tvm.relax.op.grad.end_checkpoint(input: Expr) Expr ¶
Mark the end of checkpoint stage. See tvm.relax.op.grad.start_checkpoint.
- Parameters:
input (relax.Expr) – The output of the checkpoint stage.
- Returns:
result – The same tensor as the input.
- Return type:
relax.Expr
- tvm.relax.op.grad.max_pool2d_backward(output_grad: Expr, data: Expr, pool_size: Tuple[int, int] = (1, 1), strides: Tuple[int, int] = (1, 1), padding: Tuple[int, int, int, int] = (0, 0, 0, 0), dilation: Tuple[int, int] = (1, 1), ceil_mode: bool = False, layout: str = 'NCHW', out_layout: str | None = None) Expr ¶
Backward operator of relax.nn.max_pool2d. All parameters except output_grad are the same as those of relax.nn.max_pool2d. Returns the gradient w.r.t. data.
- Parameters:
output_grad (relax.Expr) – The gradient w.r.t. the result of max_pool2d.
- Returns:
result – The gradient w.r.t. data.
- Return type:
relax.Expr
- tvm.relax.op.grad.nll_loss_backward(output_grad: Expr, predictions: Expr, targets: Expr, weights: Expr | None = None, reduction: str = 'mean', ignore_index: int = -100) Expr ¶
Backward operator of relax.nn.nll_loss. All parameters except output_grad are the same as those of relax.nn.nll_loss. Returns the gradient w.r.t. predictions.
- Parameters:
output_grad (relax.Expr) – The gradient w.r.t. the result of nll_loss.
- Returns:
result – The gradient w.r.t. predictions.
- Return type:
relax.Expr
- tvm.relax.op.grad.no_grad(input: Expr) Expr ¶
No gradient dummy operator w.r.t. the input.
- Parameters:
input (relax.Expr) – The corresponding input tensor.
- Returns:
result – The no-gradient representation w.r.t. input.
- Return type:
relax.Expr
- tvm.relax.op.grad.start_checkpoint(input: Expr) Expr ¶
Mark the start of the checkpoint stage. The computation between start_checkpoint and end_checkpoint will be marked as the checkpoint stage.
Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed stage does not save intermediate activations, and instead recomputes them in backward process.
For instance,
a = relax.Var("a", relax.TensorStructInfo((2, 2), "float32"))
b = relax.Var("b", relax.TensorStructInfo((2, 2), "float32"))
c = a * 2
d = b * 2
c_cp = start_checkpoint(c)
d_cp = start_checkpoint(d)
e = c_cp + d_cp
e_out = end_checkpoint(e)
Then e will be recomputed in the backward stage. See tvm.relax.transform.Gradient, tvm.relax.testing.nn.checkpoint, and tvm.relax.op.grad.end_checkpoint for more information.
- Parameters:
input (relax.Expr) – The tensor marking the input of the checkpoint stage.
- Returns:
result – The same tensor as the input.
- Return type:
relax.Expr
- tvm.relax.op.grad.take_backward(output_grad: Expr, x: Expr, indices: Expr, axis: int | None = None) Expr ¶
Backward operator of relax.take. All parameters except output_grad are the same as those of relax.take. Returns the gradient w.r.t. x.
- Parameters:
output_grad (relax.Expr) – The gradient w.r.t. the result of take.
- Returns:
result – The gradient w.r.t. x.
- Return type:
relax.Expr
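The gradient rule that take_backward implements can be illustrated in plain Python (a hedged 1-D sketch, not TVM code): the upstream gradients are scatter-added back into a zero tensor shaped like x, so an index picked multiple times accumulates its gradients.

```python
# 1-D sketch of the take gradient: scatter-add output_grad into
# a zero buffer the size of x at the gathered indices.
def take_backward_1d(output_grad, x_len, indices):
    grad_x = [0.0] * x_len
    for g, idx in zip(output_grad, indices):
        # A repeated index accumulates every gradient routed to it.
        grad_x[idx] += g
    return grad_x

# y = take(x, [0, 2, 0]) with len(x) == 3 and upstream grads [1.0, 2.0, 3.0]
print(take_backward_1d([1.0, 2.0, 3.0], 3, [0, 2, 0]))  # [4.0, 0.0, 2.0]
```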
tvm.relax.op.image¶
Image operators.
- tvm.relax.op.image.resize2d(data: Expr, size: Expr | int | PrimExpr | Tuple[int | PrimExpr], roi: float | Tuple[float] | None = None, layout: str = 'NCHW', method: str = 'linear', coordinate_transformation_mode: str = 'half_pixel', rounding_method: str = 'round', cubic_alpha: float = -0.5, cubic_exclude: int = 0, extrapolation_value: float = 0.0, out_dtype: str | DataType | None = None) Expr ¶
Image resize2d operator.
This operator takes data as input and does 2D scaling to the given scale factor. In the default case, where the data_layout is NCHW with data of shape (n, c, h, w), out will have shape (n, c, size[0], size[1]).
method indicates the algorithm to be used while calculating the out value and method can be one of (“linear”, “nearest_neighbor”, “cubic”)
- Parameters:
data (relax.Expr) – The input data to the operator.
size (Union[Expr, PrimExprLike, Tuple[PrimExprLike]]) – The out size to which the image will be resized. If specified as a list, it is required to have length either 1 or 2. If specified as an Expr, it is required to have ndim 2.
roi (Optional[Union[float, Tuple[float]]]) – The region of interest for cropping the input image. Expected to be of size 4, and format [start_h, start_w, end_h, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.
layout (str) – Layout of the input.
method (str) – Scale method to be used [nearest_neighbor, linear, cubic].
coordinate_transformation_mode (str) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Definitions can be found in topi/image/resize.py. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].
rounding_method (str) – Indicates how to find the “nearest” pixel in the nearest_neighbor method [round, floor, ceil].
cubic_alpha (float) – Spline Coefficient for bicubic interpolation
cubic_exclude (int) – Flag to exclude exterior of the image during bicubic interpolation
extrapolation_value (float) – Fill value to use when roi is outside of the image
out_dtype (Optional[Union[str, DataType]]) – The dtype of the output tensor. If not specified, the output will have the same dtype as the input.
- Returns:
result – The resized result.
- Return type:
relax.Expr
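The default-layout shape rule above can be sketched in plain Python (a hypothetical helper, not part of TVM; the broadcast of a length-1 size to both spatial dimensions is an assumption consistent with the "length either 1 or 2" note):

```python
# Sketch of resize2d's output-shape rule for NCHW input:
# (n, c, h, w) with target size -> (n, c, size[0], size[1]).
def resize2d_out_shape(in_shape, size):
    n, c, _, _ = in_shape
    if len(size) == 1:
        # Assumed: a single size is applied to both height and width.
        size = (size[0], size[0])
    return (n, c, size[0], size[1])

print(resize2d_out_shape((1, 3, 32, 32), (64, 48)))  # (1, 3, 64, 48)
```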
tvm.relax.op.memory¶
Relax memory primitives.
- tvm.relax.op.memory.alloc_storage(size: Expr, virtual_device_index: int | Expr, storage_scope: str | Expr, dtype: str | Expr) Call ¶
Construct a relax.Call to allocate a storage with specific size, virtual_device_index, storage_scope and dtype.
- Parameters:
size (Expr) – The size of the storage to be allocated.
virtual_device_index (Union[int, Expr]) – The virtual device index indicating on which device the storage is to be allocated. Index -1 is reserved for the host device.
storage_scope (Union[str, Expr]) – The storage scope to allocate the storage to.
dtype (Union[str, Expr]) – The datatype of the storage to be allocated.
- Returns:
result – A relax relax.Call, which gets the allocated storage.
- Return type:
Call
- tvm.relax.op.memory.alloc_tensor(storage: Expr, offset: int | Expr, shape: Expr, dtype: str | Expr) Call ¶
Construct a relax.Call to allocate a tensor on a certain storage starting from the given offset.
- Parameters:
storage (Expr) – The storage to allocate the tensor to.
offset (Union[int, Expr]) – The storage offset to allocate the tensor.
shape (Expr) – The shape of the tensor to be allocated.
dtype (Union[str, Expr]) – The datatype of the tensor to be allocated.
- Returns:
result – A relax relax.Call, which gets the allocated tensor.
- Return type:
Call
- tvm.relax.op.memory.kill_storage(storage: Expr) Call ¶
Construct a relax.Call to kill a storage.
- Parameters:
storage (Expr) – The storage to be killed.
- Returns:
result – A relax relax.Call to kill a storage.
- Return type:
Call
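The relationship between the three memory primitives above can be modeled in plain Python (a conceptual model with assumed names, not the TVM runtime): alloc_storage reserves a byte buffer, alloc_tensor carves a tensor view out of it at a byte offset so multiple tensors can share one storage, and kill_storage ends the storage's lifetime.

```python
# Conceptual model of alloc_storage / alloc_tensor / kill_storage.
class Storage:
    def __init__(self, size):
        self.size = size   # bytes reserved, as by alloc_storage
        self.live = True

    def alloc_tensor(self, offset, nbytes):
        # A tensor view [offset, offset + nbytes) must fit in the storage.
        assert self.live and offset + nbytes <= self.size
        return (offset, nbytes)

    def kill(self):
        # Analogue of kill_storage: the storage may no longer be used.
        self.live = False

storage = Storage(size=64)
t0 = storage.alloc_tensor(offset=0, nbytes=32)
t1 = storage.alloc_tensor(offset=32, nbytes=32)  # shares the same storage
storage.kill()
```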
tvm.relax.op.op_attrs¶
The attribute nodes used for Relax operators.
- class tvm.relax.op.op_attrs.AdaptivePool2DAttrs¶
Attributes for 2d adaptive pool operator
- class tvm.relax.op.op_attrs.ArgmaxArgminAttrs¶
Attributes for argmax/argmin operator
- class tvm.relax.op.op_attrs.ArgsortAttrs¶
Attributes for argsort operator
- class tvm.relax.op.op_attrs.AstypeAttrs¶
Attributes used in astype operator
- class tvm.relax.op.op_attrs.BatchNormAttrs¶
Attributes used in batch_norm operator
- class tvm.relax.op.op_attrs.CallTIRWithGradAttrs¶
Attributes used in call_tir_with_grad operator
- class tvm.relax.op.op_attrs.ConcatAttrs¶
Attributes for concat operator
- class tvm.relax.op.op_attrs.Conv2DAttrs¶
Attributes for nn.conv2d
- class tvm.relax.op.op_attrs.Conv2DTransposeAttrs¶
Attributes for nn.conv2d_transpose
- class tvm.relax.op.op_attrs.DropoutAttrs¶
Attributes for dropout operator
- class tvm.relax.op.op_attrs.EinsumAttrs¶
Attributes for einsum operator
- class tvm.relax.op.op_attrs.ExpandDimsAttrs¶
Attributes for expand_dims operator
- class tvm.relax.op.op_attrs.FlipAttrs¶
Attributes for flip operator
- class tvm.relax.op.op_attrs.InitAttrs¶
Attributes used in full/full_like, ones/ones_like, and zeros/zeros_like operator
- class tvm.relax.op.op_attrs.LayerNormAttrs¶
Attributes used in layer_norm operator
- class tvm.relax.op.op_attrs.LayoutTransformAttrs¶
Attributes used in layout_transform operator
- class tvm.relax.op.op_attrs.MatmulAttrs¶
Attributes for matmul operator
- class tvm.relax.op.op_attrs.PermuteDimsAttrs¶
Attributes for permute_dims operator
- class tvm.relax.op.op_attrs.Pool2DAttrs¶
Attributes for nn.max_pool2d
- class tvm.relax.op.op_attrs.RepeatAttrs¶
Attributes for repeat operator
- class tvm.relax.op.op_attrs.Resize2DAttrs¶
Attributes used in image resize2d operator
- class tvm.relax.op.op_attrs.ScanopAttrs¶
Attributes for scan operators
- class tvm.relax.op.op_attrs.SoftmaxAttrs¶
Attributes for nn.softmax
- class tvm.relax.op.op_attrs.SortAttrs¶
Attributes for sort operator
- class tvm.relax.op.op_attrs.SplitAttrs¶
Attributes used in split operator
- class tvm.relax.op.op_attrs.SqueezeAttrs¶
Attributes for squeeze operator
- class tvm.relax.op.op_attrs.StatisticalAttrs¶
Attributes used in statistical operator
- class tvm.relax.op.op_attrs.StridedSliceAttrs¶
Attributes used in strided_slice operator
- class tvm.relax.op.op_attrs.TakeAttrs¶
Attributes used in take operator
- class tvm.relax.op.op_attrs.TileAttrs¶
Attributes for tile operator
- class tvm.relax.op.op_attrs.TopKAttrs¶
Attributes for topk operators
- class tvm.relax.op.op_attrs.TriluAttrs¶
Attributes used in tril and triu operators