Qualcomm AI Engine Direct - add op support list (#10253)
### Summary
- add op support list for the HTP backend
- rearrange a bit to match the QNN document

Fixes #10220.

### Test plan
`python backends/qualcomm/tests/test_qnn_delegate.py TestQNNQuantizedOperator -s $DEVICE_SN -b build-android -m SM8750`
Changes to `backends/qualcomm/builders/README.md` (124 additions, 1 deletion):
```diff
@@ -8,6 +8,7 @@ Thank you for contributing to Qualcomm AI Engine Direct delegate for ExecuTorch.
 * [Check Operator Spec](#check-operator-spec)
 * [Implementation](#implementation)
 * [Quantizer Annotation](#quantizer-annotation)
+* [Operator Support Status](#operator-support-status)
 * [Issues](#issues)
 * [Pull Requests](#pull-requests)
 
```
````diff
@@ -246,7 +247,7 @@ Now, we can start to fill in function body step by step:
         nodes_to_wrappers,
     )
     ```
-   The logic should be similar and straightforward. Please carefully set arguments `tensor_type`
+   The logic should be similar and straightforward. Please carefully set arguments `tensor_type`
     according to tensors' property.
 
 3. Define parameters:
````
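For readers landing on this hunk without the surrounding README: `tensor_type` tells QNN what role a tensor plays in the graph (application-written input, application-read output, static/constant data, or an internal intermediate). The snippet below is a minimal, self-contained sketch of that decision only; the enum and helper names are illustrative assumptions, not the actual builder API in `backends/qualcomm/builders`.

```python
# Minimal sketch of "set tensor_type according to the tensor's property".
# The enum mirrors QNN's tensor categories conceptually; the names here are
# assumptions for illustration, not the real PyQnnWrapper bindings.
from enum import Enum, auto


class TensorType(Enum):
    APP_WRITE = auto()  # graph input, written by the application
    APP_READ = auto()   # graph output, read back by the application
    STATIC = auto()     # constant data such as weights or bias
    NATIVE = auto()     # intermediate tensor internal to the graph


def pick_tensor_type(is_graph_input: bool, is_graph_output: bool, is_constant: bool) -> TensorType:
    """Choose the tensor category from the tensor's role in the graph."""
    if is_constant:
        return TensorType.STATIC
    if is_graph_input:
        return TensorType.APP_WRITE
    if is_graph_output:
        return TensorType.APP_READ
    return TensorType.NATIVE
```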
```diff
@@ -355,6 +356,128 @@ Now, we can start to fill in function body step by step:
 ### Quantizer Annotation
 The operator now should be functional for Qualcomm backends. For operator to work in fixed-precision, we should also make `QnnQuantizer` to correctly insert observers for recording calibrated encodings. Please read more on the [Quantization Annotation Tutorial](../quantizer//README.md).
 
+## Operator Support Status
+Please help update the following table if you are contributing new operators:
+
+| Operators | HTP - 77/116 Enabled |
+|-----------|---------|
+| Argmax |✗|
+| Argmin |✓|
+| BatchNorm |✓|
+| BatchToSpace |✗|
+| Cast |✓|
+| ChannelShuffle |✗|
+| Concat |✓|
+| Conv2d |✓|
+| Conv3d |✗|
+| Convert |✓|
+| CreateSparse |✗|
+| CumulativeSum |✓|
+| DepthToSpace |✓|
+| DepthWiseConv2d |✓|
+| Dequantize |✓|
+| DetectionOutput |✗|
+| ElementWiseAbs |✓|
+| ElementWiseAdd |✓|
+| ElementWiseAnd |✓|
+| ElementWiseAsin |✗|
+| ElementWiseAtan |✗|
+| ElementWiseBinary |✗|
+| ElementWiseCeil |✓|
+| ElementWiseCos |✓|
+| ElementWiseDivide |✓|
+| ElementWiseEqual |✓|
+| ElementWiseExp |✓|
+| ElementWiseFloor |✗|
+| ElementWiseFloorDiv |✗|
+| ElementWiseGreater |✓|
+| ElementWiseGreaterEqual |✓|
+| ElementWiseLess |✓|
+| ElementWiseLessEqual |✓|
+| ElementWiseLog |✓|
+| ElementWiseMaximum |✓|
+| ElementWiseMinimum |✓|
+| ElementWiseMultiply |✓|
+| ElementWiseNeg |✓|
+| ElementWiseNeuron |✓|
+| ElementWiseNot |✓|
+| ElementWiseNotEqual |✓|
+| ElementWiseOr |✓|
+| ElementWisePower |✓|
+| ElementWiseRound |✗|
+| ElementWiseRsqrt |✓|
+| ElementWiseSelect |✓|
+| ElementWiseSign |✗|
+| ElementWiseSin |✓|
+| ElementWiseSquaredDifference |✗|
+| ElementWiseSquareRoot |✓|
+| ElementWiseSubtract |✓|
+| ElementWiseUnary |✗|
+| ElementWiseXor |✗|
+| Elu |✓|
+| ExpandDims |✓|
+| ExtractGlimpse |✗|
+| ExtractPatches |✗|
+| FullyConnected |✓|
+| Gather |✓|
+| GatherElements |✗|
+| GatherNd |✓|
+| Gelu |✓|
+| GetSparseIndices |✗|
+| GetSparseValues |✗|
+| GridSample |✗|
+| GroupNorm |✓|
+| HardSwish |✓|
+| InstanceNorm |✓|
+| L2Norm |✗|
+| LayerNorm |✓|
+| LogSoftmax |✓|
+| Lrn |✗|
+| Lstm |✗|
+| MatMul |✓|
+| MultiClassNms |✗|
+| NonMaxSuppression |✗|
+| Nonzero |✗|
+| OneHot |✗|
+| Pack |✓|
+| Pad |✓|
+| PoolAvg2d |✓|
+| PoolAvg3d |✗|
+| PoolMax2d |✓|
+| Prelu |✓|
+| Quantize |✓|
+| ReduceMax |✓|
+| ReduceMean |✓|
+| ReduceMin |✗|
+| ReduceSum |✓|
+| Relu |✓|
+| Relu1 |✗|
+| Relu6 |✗|
+| ReluMinMax |✓|
+| Reshape |✓|
+| Resize |✗|
+| ResizeBilinear |✓|
+| ResizeNearestNeighbor |✓|
+| RoiAlign |✗|
+| RmsNorm |✓|
+| ScatterElements |✗|
+| ScatterNd |✓|
+| Sigmoid |✓|
+| Softmax |✓|
+| SpaceToBatch |✗|
+| SpaceToDepth |✓|
+| SparseToDense |✗|
+| Split |✓|
+| Squeeze |✓|
+| StridedSlice |✓|
+| Tanh |✓|
+| Tile |✓|
+| TopK |✓|
+| TransPose |✓|
+| TransPoseConv2d |✓|
+| TransPoseConv3d |✗|
+| Unpack |✓|
+
 ## Issues
 Please refer to the [issue section](../README.md#issues) for more information.
```
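Since the header of the new table encodes the enabled count ("HTP - 77/116 Enabled"), a contributor flipping an operator to ✓ also needs to bump that number. Below is a hypothetical helper, not part of this PR, that recounts the table in the README; the file path and header format are taken from the diff above, and the script name and approach are assumptions for illustration.

```python
# Hypothetical helper (not part of this PR): recount the checkmarks in the
# "Operator Support Status" table so the "77/116 Enabled" header can be kept
# in sync when operators are flipped to supported.
from pathlib import Path


def count_htp_support(readme: Path) -> tuple[int, int]:
    """Return (enabled, total) operator rows found in the support table."""
    enabled = total = 0
    in_table = False
    for line in readme.read_text(encoding="utf-8").splitlines():
        if line.startswith("| Operators |"):
            in_table = True  # header row of the support table
            continue
        if in_table:
            if not line.startswith("|"):
                break  # table ended
            if set(line.replace("|", "").strip()) <= {"-"}:
                continue  # separator row
            total += 1
            if "✓" in line:
                enabled += 1
    return enabled, total


if __name__ == "__main__":
    # Run from the repository root; prints the refreshed count so the
    # table header can be updated by hand.
    enabled, total = count_htp_support(Path("backends/qualcomm/builders/README.md"))
    print(f"HTP - {enabled}/{total} Enabled")
```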