DL Ops & Models Support
The following summarizes op and model support for each deep learning framework. INT8 support refers to ops and models that have been quantized with the SOPHGO Calibration tool.
In the tables below, an op/layer not marked OK is not yet supported, and a model not marked OK has not yet been tested.
In general, a model can be supported if every operator it uses appears in the supported-operator list. The models already tested for each framework are also listed for reference.

For convenience, the bmnetc/bmnett/bmnetm/bmnetp/bmnetd tools provide a parameter (--mode=check) to check whether a model contains unsupported Float32 operators; see each tool's documentation for details.
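As an illustration, the sketch below dumps the supported Float32 op list of bmnett from Python; it relies only on the `--op_list True` flag quoted in the TensorFlow section below, and the other tools expose analogous list options (`--op_list` / `--list_ops`). Checking a concrete model is done with `--mode=check` together with the tool's normal compile arguments, as described in each tool's manual.

```python
# Minimal sketch: dump the Float32 ops currently supported by bmnett.
# Only the documented "--op_list True" flag is used; to check a concrete
# model for unsupported ops, pass "--mode=check" with the tool's normal
# compile arguments instead (see the tool's manual).
import subprocess

result = subprocess.run(
    ["python3", "-m", "bmnett", "--op_list", "True"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```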
Caffe Support

There is no version restriction on Caffe; any official open-source version from GitHub works.

The supported Caffe layers are listed in the following table:
Caffe Layer Support

Layer Name | Float32 | INT8
AbsVal | OK |
ArgMax | OK | OK
BN | OK | OK
BatchNorm | OK | OK
Bias | OK | OK
Concat | OK | OK
Convolution | OK | OK
Crop | OK | OK
Deconvolution | OK | OK
DetectionOutput | OK | OK
ELU | OK | OK
Eltwise | OK | OK
Flatten | OK | OK
InnerProduct | OK | OK
Interp | OK | OK
LSTM | OK |
Log | OK |
Normalize | OK | OK
PRelu | OK | OK
PSROIPooling | OK | OK
PadChannel | OK |
Permute | OK | OK
Pooling | OK | OK
Power | OK |
Priorbox | OK | OK
ROIAlign | OK |
ROIPooling | OK | OK
RPN | OK | OK
Relu | OK | OK
Reduction | OK | OK
Reorg | OK | OK
Reshape | OK | OK
Reverse | OK |
Scale | OK | OK
ShuffleChannel | OK | OK
Sigmoid | OK | OK
Slice | OK | OK
Softmax | OK | OK
Split | OK | OK
TanH | OK | OK
Tile | OK | OK
Upsample | OK | OK
UpsampleCopy | OK | OK
Yolo | OK | OK
Yolov3DetectionOutput | OK | OK
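As a rough pre-check before running bmnetc, the layer types used by a prototxt can be collected and compared against the table above. A minimal sketch, assuming pycaffe is installed; `deploy.prototxt` is a placeholder path:

```python
# Minimal sketch: collect the layer types declared in a Caffe prototxt so
# they can be compared against the layer table above.
from caffe.proto import caffe_pb2          # requires pycaffe
from google.protobuf import text_format

net = caffe_pb2.NetParameter()
with open("deploy.prototxt") as f:          # placeholder path
    text_format.Merge(f.read(), net)

layer_types = sorted({layer.type for layer in net.layer})
print(layer_types)  # e.g. ['Convolution', 'Pooling', ...]
```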
The tested Caffe models are listed in the following table:
Caffe Model Support

Model Name | Float32 | INT8
Alexnet | OK | OK
Mtcnn | OK | OK
Googlenet v1/v2/v3 | OK | OK
Mobilenet v1/v2/v3 | OK | OK
Pose | OK | OK
r50 | OK | OK
Resnet 18/34/50/101/152 | OK | OK
Vggnet 16/19 | OK | OK
Yolo v1/v2/v3 | OK | OK
SSD | OK | OK
SSH | OK | OK
densenet | OK | OK
maskrcnn | OK | OK
inception | OK | OK
ICNet | OK | OK
Squeezenet | OK | OK
unet | OK | OK
deeplab_v2 | OK | OK
vanface | OK | OK
shufflenet | OK | OK
segnet | OK | OK
dual | OK | OK
rfcn | OK | OK
faster_rcnn | OK | OK
landmark | OK | OK
pspnet | OK | OK
OCR_detection | OK | OK
Erfnet | OK | OK
Enet | OK | OK
TensorFlow Support

Supported TensorFlow versions: <= 2.6.0.

The supported operators are listed in the following table (they can also be listed with the command `python3 -m bmnett --op_list True`):
TensorFlow Ops Support

Ops Name | Float32 | INT8
All | OK |
Abs | OK | OK
Acos | OK | OK
Acosh | OK | OK
Add | OK | OK
AddV2 | OK | OK
AddN | OK | OK
Any | OK | OK
ArgMax | OK | OK
ArgMin | OK | OK
Asin | OK | OK
Asinh | OK | OK
Assert | OK | OK
Atanh | OK | OK
AvgPool | OK | OK
BatchMatMul | OK |
BatchMatMulV2 | OK |
BatchToSpaceND | OK | OK
BiasAdd | OK | OK
Cast | OK | OK
Ceil | OK | OK
Concat | OK | OK
ConcatV2 | OK | OK
Conv2D | OK | OK
Conv2DBackpropInput | OK | OK
Conv3D | OK | OK
Conv3DBackpropInput | OK | OK
Cos | OK | OK
Cosh | OK | OK
CropAndResize | OK | OK
DepthToSpace | OK | OK
DepthwiseConv2dNative | OK | OK
Div | OK | OK
Elu | OK | OK
Enter | OK |
Equal | OK | OK
Erf | OK | OK
Exit | OK |
Exp | OK | OK
ExpandDims | OK | OK
Expm1 | OK | OK
Fill | OK | OK
Floor | OK | OK
FloorMod | OK | OK
FloorDiv | OK | OK
FusedBatchNorm | OK | OK
FusedBatchNormV3 | OK | OK
Gather | OK |
GatherNd | OK |
GatherV2 | OK |
Greater | OK | OK
GreaterEqual | OK | OK
Identity | OK | OK
IsFinite | OK |
LeakyRelu | OK | OK
Less | OK | OK
LessEqual | OK | OK
Log | OK | OK
Log1p | OK | OK
LogSoftmax | OK | OK
LogicalAnd | OK | OK
LogicalNot | OK | OK
LogicalOr | OK | OK
LoopCond | OK |
LRN | OK | OK
MatMul | OK | OK
Max | OK | OK
Maximum | OK | OK
MaxPool | OK | OK
Mean | OK | OK
Merge | OK |
Minimum | OK | OK
MirrorPad | OK | OK
Mul | OK | OK
Neg | OK | OK
NextIteration | OK |
NoOp | OK | OK
NonMaxSuppressionV2 | OK |
NonMaxSuppressionV3 | OK |
NonMaxSuppressionV5 | OK |
NotEqual | OK | OK
OneHot | OK | OK
OnesLike | OK | OK
Pack | OK | OK
Pad | OK | OK
PadV2 | OK | OK
Placeholder | OK | OK
PlaceholderWithDefault | OK | OK
Pow | OK | OK
Prod | OK | OK
RandomUniform | OK | OK
RandomUniformInt | OK | OK
Range | OK | OK
Rank | OK |
Reciprocal | OK | OK
Relu | OK | OK
Relu6 | OK | OK
Reshape | OK | OK
ResizeBilinear | OK | OK
ResizeNearestNeighbor | OK | OK
ReverseV2 | OK | OK
Round | OK | OK
Rsqrt | OK | OK
ScatterNd | OK | OK
Select | OK | OK
Shape | OK | OK
Sigmoid | OK | OK
Sin | OK | OK
Sinh | OK | OK
Size | OK | OK
Slice | OK | OK
Softmax | OK | OK
Softplus | OK | OK
Softsign | OK | OK
SpaceToBatchND | OK | OK
SpaceToDepth | OK | OK
Split | OK | OK
SplitV | OK | OK
Sqrt | OK | OK
Square | OK | OK
SquaredDifference | OK | OK
Squeeze | OK | OK
StopGradient | OK | OK
StridedSlice | OK | OK
Sub | OK | OK
Sum | OK | OK
Switch | OK | OK
Tan | OK | OK
Tanh | OK | OK
TensorArrayConcatV3 | OK |
TensorArrayGatherV3 | OK |
TensorArrayReadV3 | OK |
TensorArrayScatterV3 | OK |
TensorArraySizeV3 | OK |
TensorArraySplitV3 | OK |
TensorArrayV3 | OK |
TensorArrayWriteV3 | OK |
Tile | OK | OK
TopKV2 | OK | OK
Transpose | OK | OK
Unpack | OK | OK
Where | OK |
ZerosLike | OK | OK
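To see which of these ops a particular frozen graph actually uses, the op types can be read directly from the GraphDef and compared against the table above. A minimal sketch; `model.pb` is a placeholder path:

```python
# Minimal sketch: list the op types used by a frozen TensorFlow graph.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:   # placeholder path
    graph_def.ParseFromString(f.read())

op_types = sorted({node.op for node in graph_def.node})
print(op_types)  # e.g. ['BiasAdd', 'Conv2D', 'Relu', ...]
```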
The tested TensorFlow models are listed in the following table:
TensorFlow Model Support

Model Name | Float32 | INT8
Inception v1/v2/v3/v4 | OK | OK
Mobilenet v1/v2 | OK | OK
Resnet 50/101/152 v1/v2 | OK | OK
Vggnet 16/19 | OK | OK
Nasnet large/mobile | OK | OK
Pnasnet_mobile | OK | OK
SSD Resnet50 fpn | OK |
SSD Mobile v2 | OK |
SSD Inception v2 | OK |
SSD Vgg 300 | OK |
SSD Mobilenet 300 | OK |
Segmentation | OK |
Faster rcnn | OK |
Bert | OK |
Gan | OK |
vqvae | OK |
rl_apex | OK |
Pedes Resnet50 V2 | OK |
Yolo v3 | OK | OK
Ocr | OK |
Mask_rcnn | OK |
Espcn | OK |
Lstm | OK |
Fcnn | OK |
Deeplabv3 | OK | OK
EfficientNet | OK | OK
EfficientDet | OK |
PyTorch Support

Supported PyTorch versions: <= 1.8.x.

The supported operators are listed in the following table (they can also be listed with the command `python3 -m bmnetp --op_list True`):
PyTorch Ops Support

Ops Name | Float32 | INT8
aten::_convolution | OK | OK
aten::abs | OK | OK
aten::abs_ | OK | OK
aten::adaptive_avg_pool1d | OK |
aten::adaptive_avg_pool2d | OK |
aten::adaptive_max_pool1d | OK |
aten::adaptive_max_pool2d | OK |
aten::add | OK | OK
aten::add_ | OK | OK
aten::addmm | OK | OK
aten::affine_grid_generator | OK |
aten::alpha_dropout | OK | OK
aten::alpha_dropout_ | OK | OK
aten::arange | OK | OK
aten::avg_pool1d | OK | OK
aten::avg_pool2d | OK | OK
aten::avg_pool3d | OK |
aten::batch_norm | OK | OK
caffe2::BatchPermutation | OK |
caffe2::BBoxTransform | OK |
aten::bmm | OK | OK
aten::cat | OK | OK
aten::celu | OK | OK
aten::celu_ | OK | OK
aten::chunk | OK | OK
aten::clamp | OK | OK
aten::clamp_ | OK | OK
aten::clamp_max | OK | OK
aten::clamp_max_ | OK | OK
aten::clamp_min | OK | OK
aten::clamp_min_ | OK | OK
aten::clone | OK | OK
caffe2::CollectRpnProposals | OK |
aten::constant_pad_nd | OK | OK
aten::contiguous | OK | OK
aten::copy | OK | OK
aten::cos | OK | OK
aten::cos_ | OK | OK
aten::cumsum | OK | OK
aten::detach | OK | OK
aten::detach_ | OK | OK
caffe2::DistributeFpnProposals | OK | OK
aten::div | OK | OK
aten::div_ | OK | OK
aten::dropout | OK | OK
aten::dropout_ | OK | OK
aten::einsum | OK | OK
aten::elu | OK | OK
aten::elu_ | OK | OK
aten::embedding | OK | OK
aten::empty | OK | OK
aten::eq | OK | OK
aten::eq_ | OK | OK
aten::erf | OK | OK
aten::erf_ | OK | OK
aten::erfc | OK | OK
aten::erfc_ | OK | OK
aten::exp | OK | OK
aten::exp_ | OK | OK
aten::expand | OK | OK
aten::expand_as | OK | OK
aten::expm1 | OK | OK
aten::expm1_ | OK | OK
aten::feature_dropout | OK | OK
aten::feature_dropout_ | OK | OK
aten::flatten | OK | OK
aten::floor | OK | OK
aten::floor_ | OK | OK
aten::floor_divide | OK | OK
aten::floor_divide_ | OK | OK
aten::gather | OK | OK
aten::ge | OK | OK
aten::ge_ | OK | OK
aten::gelu | OK | OK
aten::gelu_ | OK | OK
caffe2::GenerateProposals | OK | OK
aten::grid_sampler | OK | OK
aten::gru | OK | OK
aten::gt | OK | OK
aten::gt_ | OK | OK
aten::hardshrink | OK | OK
aten::hardshrink_ | OK | OK
aten::hardtanh | OK | OK
aten::hardtanh_ | OK | OK
aten::index | OK | OK
aten::index_put | OK | OK
aten::index_put_ | OK | OK
aten::instance_norm | OK | OK
aten::Int | OK | OK
aten::layer_norm | OK | OK
aten::le | OK | OK
aten::le_ | OK | OK
aten::leaky_relu | OK | OK
aten::leaky_relu_ | OK | OK
aten::lerp | OK | OK
aten::lerp_ | OK | OK
aten::log | OK | OK
aten::log_ | OK | OK
aten::log10 | OK | OK
aten::log10_ | OK | OK
aten::log1p | OK | OK
aten::log1p_ | OK | OK
aten::log2 | OK | OK
aten::log2_ | OK | OK
aten::log_sigmoid | OK | OK
aten::log_softmax | OK | OK
aten::lstm | OK | OK
aten::lt | OK | OK
aten::lt_ | OK | OK
aten::matmul | OK | OK
aten::max | OK | OK
aten::max_ | OK | OK
aten::max_pool1d | OK | OK
aten::max_pool1d_with_indices | OK | OK
aten::max_pool2d | OK | OK
aten::max_pool2d_with_indices | OK | OK
aten::mean | OK | OK
aten::meshgrid | OK | OK
aten::min | OK | OK
aten::min_ | OK | OK
aten::mm | OK | OK
aten::mul | OK | OK
aten::mul_ | OK | OK
aten::narrow | OK | OK
aten::ne | OK | OK
aten::ne_ | OK | OK
aten::neg | OK | OK
aten::neg_ | OK | OK
aten::new_full | OK | OK
aten::new_zeros | OK | OK
aten::nonzero | OK | OK
aten::norm | OK | OK
aten::ones | OK | OK
aten::ones_like | OK | OK
aten::permute | OK | OK
aten::pow | OK | OK
aten::pow_ | OK | OK
aten::prelu | OK | OK
aten::prelu_ | OK | OK
aten::reciprocal | OK | OK
aten::reciprocal_ | OK | OK
aten::reflection_pad1d | OK | OK
aten::reflection_pad2d | OK | OK
aten::relu | OK | OK
aten::relu_ | OK | OK
aten::repeat | OK | OK
aten::reshape | OK | OK
aten::reshape_ | OK | OK
torchvision::roi_align | OK | OK
caffe2::RoIAlign | OK | OK
aten::rsqrt | OK | OK
aten::rsqrt_ | OK | OK
aten::ScalarImplicit | OK | OK
aten::select | OK | OK
aten::selu | OK | OK
aten::selu_ | OK | OK
aten::sigmoid | OK | OK
aten::sigmoid_ | OK | OK
aten::silu | OK | OK
aten::silu_ | OK | OK
aten::sin | OK | OK
aten::sin_ | OK | OK
aten::size | OK | OK
aten::slice | OK | OK
aten::softmax | OK | OK
aten::softplus | OK | OK
aten::softshrink | OK | OK
aten::sort | OK | OK
aten::split | OK | OK
aten::split_with_sizes | OK | OK
aten::sqrt | OK | OK
aten::sqrt_ | OK | OK
aten::squeeze | OK | OK
aten::squeeze_ | OK | OK
aten::stack | OK | OK
aten::sub | OK | OK
aten::sub_ | OK | OK
aten::sum | OK | OK
aten::t | OK | OK
aten::t_ | OK | OK
aten::tanh | OK | OK
aten::tanh_ | OK | OK
aten::threshold | OK | OK
aten::threshold_ | OK | OK
aten::to | OK | OK
aten::topk | OK | OK
aten::transpose | OK | OK
aten::transpose_ | OK | OK
aten::true_divide | OK | OK
aten::true_divide_ | OK | OK
aten::type_as | OK | OK
aten::unfold | OK | OK
aten::unsqueeze | OK | OK
aten::upsample_nearest2d | OK | OK
aten::upsample_bilinear2d | OK | OK
aten::view | OK | OK
aten::view_ | OK | OK
aten::view_as | OK | OK
aten::view_as_ | OK | OK
aten::zero_ | OK | OK
aten::zeros | OK | OK
aten::zeros_like | OK | OK
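The names in the table follow TorchScript conventions, so the ops a model uses can be inspected by tracing it and listing the graph node kinds. A minimal sketch; `resnet50` and the input shape are placeholders for the user's own model and input:

```python
# Minimal sketch: trace a module with TorchScript and list the "aten::..."
# node kinds, which use the same naming as the op table above.
import torch
import torchvision

model = torchvision.models.resnet50().eval()        # placeholder model
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

kinds = sorted({node.kind() for node in traced.graph.nodes()})
print([k for k in kinds if k.startswith("aten::")])
```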
The tested PyTorch models are listed in the following table:
PyTorch Model Support

Model Name | Float32 | INT8
Alexnet | OK | OK
Darknet | OK | OK
Densenet | OK | OK
Inception v2/v3 | OK | OK
Resnet 50/101/152 | OK | OK
Mobilenet v2/v3 | OK | OK
Squeezenet | OK | OK
Vggnet 16/19 | OK | OK
DCGAN_generator | OK |
SSD300 Mobilenet v2 | OK | OK
SSD300 vgg16 | OK | OK
Yolo v3/v4/v5 | OK | OK
Face_alignment | OK | OK
OCR_EAST | OK | OK
Retrieval_NetVLAD | OK |
Seg_deeplab | OK |
Vot | OK |
Ranknet | OK |
Anchors_v1 | OK |
Eca mobilenet | OK |
Lprnet | OK |
Bert | OK | OK
Se resnet | OK | OK
Shufflenet | OK | OK
Stn | OK |
Ctpn | OK |
GAN | OK |
Mnasnet | OK | OK
Slowfast | OK | OK
Anchors | OK |
CCN | OK | OK
GRU | OK | OK
CRNN | OK | OK
Retinaface | OK | OK
Osd | OK |
MXNet Support

Supported MXNet versions: <= 1.7.0.

The supported operators are listed in the following table (they can also be listed with the command `python3 -m bmnetm --list_ops`):
MXNet Ops Support

Ops Name | Float32 | INT8
Flatten | OK | OK
FullyConnected | OK | OK
SoftmaxOutput | OK | OK
softmax | OK | OK
Pooling | OK | OK
Activation | OK | OK
LeakyReLU | OK | OK
sigmoid | OK | OK
exp | OK |
Convolution | OK | OK
Deconvolution | OK | OK
BatchNorm | OK | OK
max | OK | OK
elemwise_add | OK | OK
elemwise_mul | OK | OK
elemwise_sub | OK | OK
Reshape | OK | OK
Concat | OK | OK
LRN | OK | OK
transpose | OK | OK
slice | OK | OK
slice_axis | OK | OK
broadcast_mul | OK | OK
broadcast_div | OK | OK
broadcast_plus | OK | OK
broadcast_minus | OK | OK
broadcast_sub | OK | OK
broadcast_add | OK | OK
broadcast_maximum | OK | OK
broadcast_minimum | OK | OK
broadcast_greater | OK | OK
broadcast_greater_equal | OK | OK
broadcast_lesser | OK | OK
broadcast_lesser_equal | OK | OK
broadcast_equal | OK | OK
broadcast_not_equal | OK | OK
expand_dims | OK | OK
Pad | OK | OK
contrib_AdaptiveAvgPooling2D | OK |
contrib_BilinearResize2D | OK |
clip | OK | OK
BlockGrad | OK |
_plus_scalar | OK | OK
_sub_scalar | OK | OK
_minus_scalar | OK | OK
_mul_scalar | OK | OK
_div_scalar | OK | OK
_maximum_scalar | OK | OK
_minimum_scalar | OK | OK
_greater_scalar | OK | OK
_greater_equal_scalar | OK | OK
_equal_scalar | OK | OK
_not_equal_scalar | OK | OK
SliceChannel | OK | OK
slice_like | OK |
ones_like | OK | OK
zeros_like | OK | OK
_arange | OK | OK
where | OK |
L2Normalization | OK | OK
shape_array | OK | OK
reverse | OK |
tile | OK | OK
repeat | OK | OK
stack | OK | OK
_contrib_ROIAlign | OK |
_contrib_box_nms | OK |
Cast | OK | OK
SwapAxis | OK | OK
InstanceNorm | OK | OK
squeeze | OK | OK
argsort | OK |
gather_nd | OK |
UpSampling | OK |
topk | OK |
crop | OK |
relu | OK |
equal | OK |
broadcast_like | OK |
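The operators used by an exported MXNet symbol can be read from its JSON representation and compared against the table above. A minimal sketch; `model-symbol.json` is a placeholder path:

```python
# Minimal sketch: list the operators used by an exported MXNet symbol.
# Nodes whose op is "null" are inputs/parameters, not operators.
import json
import mxnet as mx

sym = mx.sym.load("model-symbol.json")              # placeholder path
nodes = json.loads(sym.tojson())["nodes"]
ops = sorted({n["op"] for n in nodes if n["op"] != "null"})
print(ops)  # e.g. ['Activation', 'BatchNorm', 'Convolution', ...]
```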
The tested MXNet models are listed in the following table:
MXNet Model Support

Model Name | Float32 | INT8
Inception_v3 | OK | OK
Mobilenet v1/v2 | OK | OK
Nasnet | OK | OK
Senet | OK | OK
Se_resnet50 | OK | OK
Se_resnext50 | OK | OK
Resnet50_v1 | OK | OK
Resnet50_v2 | OK | OK
Resnext_50 | OK | OK
Densenet121 | OK | OK
Googlenet | OK | OK
Yolo | OK | OK
Alexnet | OK | OK
TSN | OK |
Nin | OK |
Vggnet16 | OK | OK
Squeezenet | OK | OK
fcn_resnet50 | OK |
residual_attention_net | OK |
Yolov3_darknet53 | OK | OK
SSD_512_resnet50 | OK | OK
SSD_512_mobilenet | OK | OK
SSD_512_vgg16 | OK | OK
faster_rcnn_resnet50 | OK |
deeplabv3_resnet101 | OK |
center_net_resnet18_v1b | OK |
arcface_r100_v1 | OK |
alpha_pose_resnet101_v1b | OK |
Darknet Support

There is no version restriction on Darknet; any official open-source version from GitHub works.

The supported Darknet layers are listed in the following table:
Darknet Layer Support

Layer Name | Float32 | INT8
Activate | OK | OK
Route | OK | OK
Upsample | OK | OK
Sum | OK | OK
Batchnorm | OK | OK
Scale | OK | OK
Convolution | OK | OK
Connected | OK | OK
Maxpool | OK | OK
Softmax | OK | OK
Crop | OK | OK
Reorg | OK | OK
Shortcut | OK | OK
Yolo | OK | OK
Region | OK | OK
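A Darknet network is described by the section headers of its .cfg file, so the layer types it uses can be collected with plain text parsing and compared against the table above (the cfg spellings, e.g. `convolutional` or `shortcut`, do not always match the table exactly). A minimal sketch; `yolov3.cfg` is a placeholder path:

```python
# Minimal sketch: collect the section (layer) names from a Darknet .cfg file.
# The "[net]"/"[network]" section holds hyperparameters, not a layer.
layer_names = set()
with open("yolov3.cfg") as f:                        # placeholder path
    for line in f:
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            layer_names.add(line[1:-1])

print(sorted(layer_names - {"net", "network"}))
```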
The tested Darknet models are listed in the following table:
Darknet Model Support

Model Name | Float32 | INT8
Yolo v2 | OK | OK
Yolo v3 | OK | OK
Yolo v4 | OK | OK
Yolov3_tiny | OK | OK
Vggnet16 | OK | OK
Alexnet | OK | OK
ONNX Support

Supported ONNX version: == 1.7.0.

The supported ONNX operators are listed in the following table (op_set == 12; the list can also be viewed with the command `python3 -m bmneto --list_ops`):
ONNX Layer Support

Layer Name | Float32 | INT8
Abs | OK | OK
Acos | OK | OK
Acosh | OK | OK
Add | OK | OK
Asin | OK | OK
Asinh | OK | OK
Atanh | OK | OK
AveragePool | OK | OK
BatchNormalization | OK | OK
Cast | OK | OK
Ceil | OK | OK
Clip | OK | OK
Concat | OK | OK
Constant | OK | OK
ConstantOfShape | OK | OK
Conv | OK | OK
ConvTranspose | OK | OK
Cos | OK | OK
Cosh | OK | OK
Div | OK | OK
Elu | OK | OK
Equal | OK | OK
Erf | OK | OK
Exp | OK | OK
Expand | OK | OK
Flatten | OK | OK
Floor | OK | OK
GRU | OK | OK
Gather | OK | OK
GatherND | OK | OK
Gemm | OK | OK
GlobalAveragePool | OK | OK
GlobalMaxPool | OK | OK
Greater | OK | OK
GreaterOrEqual | OK | OK
Identity | OK | OK
IsInf | OK | OK
LSTM | OK | OK
LeakyRelu | OK | OK
Less | OK | OK
LessOrEqual | OK | OK
Log | OK | OK
MatMul | OK | OK
Max | OK | OK
MaxPool | OK | OK
Mean | OK | OK
Min | OK | OK
Mul | OK | OK
NonMaxSuppression | OK | OK
NonZero | OK | OK
Pad | OK | OK
Pow | OK | OK
Reciprocal | OK | OK
ReduceMax | OK | OK
ReduceMean | OK | OK
ReduceMin | OK | OK
ReduceProd | OK | OK
ReduceSum | OK | OK
Relu | OK | OK
Reshape | OK | OK
Resize | OK | OK
RoiAlign | OK | OK
Round | OK | OK
ScatterND | OK | OK
Shape | OK | OK
Sigmoid | OK | OK
Sign | OK | OK
Sin | OK | OK
Sinh | OK | OK
Slice | OK | OK
Softmax | OK | OK
Softplus | OK | OK
Split | OK | OK
Sqrt | OK | OK
Squeeze | OK | OK
Sub | OK | OK
Sum | OK | OK
Tan | OK | OK
Tanh | OK | OK
Tile | OK | OK
TopK | OK | OK
Transpose | OK | OK
Unsqueeze | OK | OK
Where | OK | OK
ArgMin | OK | OK
ArgMax | OK | OK
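The op types contained in an ONNX graph can be read with the onnx package and compared against the table above before running bmneto. A minimal sketch; `model.onnx` is a placeholder path:

```python
# Minimal sketch: list the op types contained in an ONNX graph.
import onnx

model = onnx.load("model.onnx")                      # placeholder path
op_types = sorted({node.op_type for node in model.graph.node})
print(op_types)  # e.g. ['Add', 'Conv', 'Relu', ...]
```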
The tested ONNX models are listed in the following table:
ONNX Model Support

Model Name | Float32 | INT8
Yolov4 | OK | OK
Yolov5s | OK | OK
Resnet | OK | OK
SSD Resnet34 | OK | OK
Postnet | OK | OK
PADDLE Support

Supported PADDLE versions: <= 2.1.1.

The supported PADDLE layers are listed in the following table:
PADDLE Layer Support

Layer Name | Float32 | INT8
abs | OK | OK
arg_max | OK | OK
arg_min | OK | OK
batch_norm | OK | OK
bilinear_interp | OK | OK
bilinear_interp_v2 | OK | OK
cast | OK | OK
clip | OK | OK
concat | OK | OK
conv2d | OK | OK
conv2d_transpose | OK | OK
deformable_conv | OK | OK
depthwise_conv2d | OK | OK
dropout | OK | OK
elementwise_add | OK | OK
elementwise_div | OK | OK
elementwise_mul | OK | OK
elementwise_pow | OK | OK
elementwise_sub | OK | OK
expand_v2 | OK | OK
fill_constant | OK | OK
fill_constant_batch_size_like | OK | OK
hard_sigmoid | OK | OK
hard_swish | OK | OK
leaky_relu | OK | OK
log | OK | OK
matmul | OK | OK
matmul_v2 | OK | OK
matrix_nms | OK | OK
max_pool2d_with_index | OK | OK
mul | OK | OK
multiclass_nms | OK | OK
multiclass_nms2 | OK | OK
multiclass_nms3 | OK | OK
nearest_interp | OK | OK
nearest_interp_v2 | OK | OK
pool2d | OK | OK
range | OK | OK
relu | OK | OK
reshape | OK | OK
reshape2 | OK | OK
rnn | OK | OK
scale | OK | OK
shape | OK | OK
sigmoid | OK | OK
slice | OK | OK
softmax | OK | OK
split | OK | OK
squeeze2 | OK | OK
transpose | OK | OK
transpose2 | OK | OK
yolo_box | OK | OK
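The operator types in a saved Paddle static inference model can be enumerated from its Program and compared against the table above. A minimal sketch, assuming the Paddle 2.x static API; the path prefix `inference/model` is a placeholder:

```python
# Minimal sketch: enumerate the operator types in a Paddle static
# inference model (Paddle 2.x static API assumed).
import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())
program, feed_names, fetch_targets = paddle.static.load_inference_model(
    "inference/model", exe)                          # placeholder path prefix

op_types = sorted({op.type for block in program.blocks for op in block.ops})
print(op_types)  # e.g. ['batch_norm', 'conv2d', 'relu', ...]
```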
The tested PADDLE models are listed in the following table:
PADDLE Model Support

Model Name | Float32 | INT8
Yolov3 | OK | OK
Resnet | OK | OK
Mobilenet | OK | OK