tf2onnx-1.7.1
Summary
Large model support, conversion performance improvements, and fixes.
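The headline feature is support for models whose weights exceed protobuf's 2 GB limit, exported using ONNX's external tensor storage format. A minimal command-line sketch, assuming a SavedModel as input; the paths are placeholders, and with `--large_model` the output is written as a zip archive containing the model plus its external weight files (if the flag differs in your version, check `python -m tf2onnx.convert --help`):

```bash
# Convert a SavedModel too large for a single protobuf file;
# the result is a zip archive with the ONNX model and external tensor data.
python -m tf2onnx.convert --saved-model my_saved_model --output model.zip --large_model --opset 12
```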
Changes since v1.6.3
- Fixed error in computing shape of unpack op 1125 (TomWildenhain-Microsoft)
- Moves unused class attribute to function local variable. 1124 (xadupre)
- Added cast to float before log 1123 (TomWildenhain-Microsoft)
- cast log to float 1122 (TomWildenhain-Microsoft)
- Add cast after Size if expected type is not int64 1119 (TomWildenhain-Microsoft)
- Fixed bug in run_pretrained_models.py for models with subgraphs 1118 (TomWildenhain-Microsoft)
- Fixes #595, support operator InvertPermutation 1117 (xadupre)
- Better error message when it fails due to RFFT 1116 (xadupre)
- Add support for operators RFFT, ComplexAbs 1114 (xadupre)
- Rescaled get_beach and added get_zeros_int32 and get_zeros_int64 1113 (TomWildenhain-Microsoft)
- Fix --concrete_func for multiple inputs in signature 1110 (TomWildenhain-Microsoft)
- Added support for converting NonMaxSuppression in opset 10 with dynam… 1109 (TomWildenhain-Microsoft)
- Update run_pretrained_models.py to support large models 1107 (TomWildenhain-Microsoft)
- fix bn fuse for fp16 1106 (guschmue)
- Added constant folding using TF for large models 1105 (TomWildenhain-Microsoft)
- Add an end2end example with tf.keras 1103 (xadupre) (see the sketch after this list)
- Hash tensor values in merge_duplicated_nodes to increase conversion speed 1102 (TomWildenhain-Microsoft)
- Made einsum op convert equation string to lower case 1099 (TomWildenhain-Microsoft)
- Fix issue with model esrgan-tf2_1 1098 (xadupre)
- reflect support for python 3.8, tf-2.3 1097 (guschmue)
- Added f to the list of TF function attributes 1094 (TomWildenhain-Microsoft)
- Added support for converting large models 1090 (TomWildenhain-Microsoft)
- Add cast to same type before equal operator 1089 (bedapisl)
- Added graph methods for saving using the external data storage format 1088 (TomWildenhain-Microsoft)
- Created compress_graph_def function 1087 (TomWildenhain-Microsoft)
- Rename coverter into converter 1086 (xadupre)
- Fix the missing axis when LogSoftmax is in graph 1084 (peterjc123)
- Add functionality for QDQ per channel 1081 (peri044)
- Add a script to profile the conversion of a model 1077 (xadupre)
- Feature: Added optimization step to remove upsample layers with all ones in scale 1074 (NikolasMarkou)
- Fixes #1070: replace Reshape and Transpose operators that are invalid in TensorRT with a single reshaped tensor 1071 (NikolasMarkou)
- Fix Gemm rewriter bias add 1069 (phager90)
- Fix an NCHW pb conversion bug in pad_rewriter 1063 (charlieguo0307)
- Faster insertion of operator cast, missing replacement in #1059 1061 (xadupre)
- Faster insertion of operator cast 1059 (xadupre)
- Changes type instead of name in operator clip 1058 (xadupre)
- Replace sys.maxsize by np.iinfo(np.int64).max 1057 (xadupre)
- Use the same syntax to replace a node input 1056 (xadupre)
- Removes unnecessary print 1055 (xadupre)
- Perf gain in tf_utils.py, more efficient error messages, faster comparisons 1054 (xadupre)
- Minor perf gain in graph_matcher.py 1053 (xadupre)
- Added brackets to fix error in QueueDequeueUpToV2 1050 (TomWildenhain-Microsoft)
- [WIP] Improve performance by adding forward indexes, second version. 1049 (xadupre)
- Implemented QueueDequeueUpToV2 1047 (TomWildenhain-Microsoft)
- Fixed bug in padding calculation for padding='SAME' when dilations>1 1046 (TomWildenhain-Microsoft)
- Conv3DBackpropInputV2 1045 (ralovich)
- Adding Constant Folding for Reshape Nodes to Optimizer 1042 (phager90)
- Added support for Conv3DBackpropInputV2 1041 (TomWildenhain-Microsoft)
- [WIP] Improve performance by adding forward indexes. 1035 (xadupre)
- fix label offset in tutorial 1026 (guschmue)
- [WIP] Improve performance by adding identity nodes 1023 (xadupre)
- enable tf-2.3 in ci pipeline 1022 (guschmue)
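Related to the tf.keras end-to-end example (#1103), here is a minimal sketch of one workflow, assuming the Keras model is first exported as a SavedModel and then converted with the command line; the model (MobileNetV2) and paths are placeholders, not the example shipped in the repo:

```bash
# Export a tf.keras model to the SavedModel format (placeholder model and path).
python -c "import tensorflow as tf; tf.keras.applications.MobileNetV2().save('mobilenet_savedmodel')"
# Convert the SavedModel to ONNX.
python -m tf2onnx.convert --saved-model mobilenet_savedmodel --output mobilenet.onnx --opset 12
```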
A huge thank you to our contributors for this release!
bedapisl, peterjc123, peri044, NikolasMarkou, phager90, charlieguo0307, ralovich