sail.MultiEngine
________________

MultiEngine
>>>>>>>>>>>>>>>

**Interface:**
    .. code-block:: python

        def __init__(self,
                     bmodel_path: str,
                     device_ids: list[int],
                     sys_out: bool = True,
                     graph_idx: int = 0)

**Parameters**

* bmodel_path : str

Path to the bmodel.

* device_ids : list[int]

TPU device IDs. Use bm-smi to list the available IDs.

* sys_out : bool, default: True

Whether to copy results to system memory.

* graph_idx : int, default: 0

The index of the specified graph.

set_print_flag
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Set whether to print debug messages.

**Interface:**
    .. code-block:: python

        def set_print_flag(self, print_flag: bool)

**Parameters**

* print_flag : bool

If true, print debug messages; otherwise, do not print them.

set_print_time
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Set whether to print the time used by the main process.

**Interface:**
    .. code-block:: python

        def set_print_time(self, print_flag: bool)

**Parameters**

* print_flag : bool

If true, print the time used by the main process; otherwise, do not print it.

get_device_ids
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get the device IDs of this MultiEngine.

**Interface:**
    .. code-block:: python

        def get_device_ids(self) -> list[int]

**Returns**

* device_ids : list[int]

TPU IDs of this MultiEngine.

get_graph_names
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get all graph names in the loaded bmodels.

**Interface:**
    .. code-block:: python

        def get_graph_names(self) -> list

**Returns**

* graph_names : list

List of graph names in the loaded context.

get_input_names
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get all input tensor names of the specified graph.

**Interface:**
    .. code-block:: python

        def get_input_names(self, graph_name: str) -> list

**Parameters**

* graph_name : str

The specified graph name.

**Returns**

* input_names : list

All input tensor names of the graph.

get_output_names
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get all output tensor names of the specified graph.

**Interface:**
    .. code-block:: python

        def get_output_names(self, graph_name: str) -> list

**Parameters**

* graph_name : str

The specified graph name.

**Returns**

* output_names : list

All output tensor names of the graph.

get_input_shape
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get the maximum-dimension shape of an input tensor in a graph.
An input tensor may be associated with multiple shapes; this API returns only
the shape with the maximum dimensions, which is used for memory allocation
in order to get the best performance.

**Interface:**
    .. code-block:: python

        def get_input_shape(self, graph_name: str, tensor_name: str) -> list

**Parameters**

* graph_name : str

The specified graph name.

* tensor_name : str

The specified input tensor name.

**Returns**

* tensor_shape : list

The maximum-dimension shape of the tensor.

get_output_shape
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Get the shape of an output tensor in a graph.

**Interface:**
    .. code-block:: python

        def get_output_shape(self, graph_name: str, tensor_name: str) -> list

**Parameters**

* graph_name : str

The specified graph name.

* tensor_name : str

The specified output tensor name.

**Returns**

* tensor_shape : list

The shape of the tensor.

process
>>>>>>>>>>>>>>>>>>>

Run inference with input tensor data provided in system memory.

**Interface:**
    .. code-block:: python

        def process(self, input_tensors: dict {str : numpy.array}) -> dict {str : numpy.array}

**Parameters**

* input_tensors : dict {str : numpy.array}

Data of all input tensors in system memory.

**Returns**

* output_tensors : dict {str : numpy.array}

Data of all output tensors in system memory.
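The sketch below strings the interfaces above into a minimal end-to-end flow: load a bmodel on several devices, query the first graph's input, and run inference on dummy data. The bmodel path, device IDs, and float32 dtype are placeholder assumptions; replace them with values that match your model and preprocessing.

    .. code-block:: python

        import numpy as np
        import sophon.sail as sail

        # Hypothetical bmodel path and device IDs; check available devices with bm-smi.
        engine = sail.MultiEngine("resnet50.bmodel", [0, 1])

        graph_name = engine.get_graph_names()[0]            # first graph in the bmodel
        input_name = engine.get_input_names(graph_name)[0]  # first input tensor
        input_shape = engine.get_input_shape(graph_name, input_name)

        # Dummy input with the maximum-dimension shape; real data would be
        # preprocessed inputs in the dtype the model expects (float32 assumed here).
        input_tensors = {input_name: np.random.rand(*input_shape).astype(np.float32)}

        # Inference; results are returned in system memory as numpy arrays.
        output_tensors = engine.process(input_tensors)
        for name, data in output_tensors.items():
            print(name, data.shape)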