Commit 9de4fb2 (parent: bc92906)

digantdesai authored and facebook-github-bot committed
Disable internal fp16 llama unittest
Summary: Some non-XNNPACK op is falling back to the portable kernel library, which fails to handle it:

```
[op_scalar_tensor.cpp:33] In function operator()(), assert failed (false): Unhandled dtype Half for scalar_tensor.out
```

Created from CodeHub with https://fburl.com/edit-in-codehub

Reviewed By: JacobSzwejbka

Differential Revision: D55332891

fbshipit-source-id: 8dcbd79b15aee1e0f1f5f108a77be8e410dab5ce
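For context, the quoted assert is the portable kernels' usual failure mode when a dtype switch has no branch for the requested type. A rough Python analogue of that dispatch pattern (hypothetical; the real kernel is C++ in op_scalar_tensor.cpp, and the handled-dtype set below is illustrative):

```python
import torch

# Hypothetical analogue of a portable kernel's dtype switch: only the
# dtypes listed here have a code path; anything else trips the assert.
_HANDLED_DTYPES = {torch.float32, torch.float64, torch.int32, torch.int64}


def scalar_tensor_out(value: float, dtype: torch.dtype) -> torch.Tensor:
    if dtype not in _HANDLED_DTYPES:
        # Mirrors the log line:
        #   assert failed (false): Unhandled dtype Half for scalar_tensor.out
        raise AssertionError(f"Unhandled dtype {dtype} for scalar_tensor.out")
    return torch.scalar_tensor(value, dtype=dtype)


scalar_tensor_out(1.0, torch.float32)  # handled, returns a tensor
try:
    scalar_tensor_out(1.0, torch.float16)
except AssertionError as e:
    print(e)  # Unhandled dtype torch.float16 for scalar_tensor.out
```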

File tree: 1 file changed (+1, −0)

backends/xnnpack/test/models/llama2_et_example.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -16,6 +16,7 @@ class TestLlama2ETExample(unittest.TestCase):
     def test_f32(self):
         self._test()
 
+    @unittest.skip("T183420542: Add proper fp16 support.")
     def test_f16(self):
         self._test(torch.float16)
 
```
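`@unittest.skip` disables the test at collection time while keeping it visible in reports, so it can be re-enabled once the portable kernel handles Half. A minimal self-contained sketch of the pattern (the `_test` body here is a stand-in, not the real export-and-run pipeline):

```python
import unittest

import torch


class TestLlama2ETExample(unittest.TestCase):
    def _test(self, dtype=torch.float32):
        # Stand-in for the real export-and-run flow: exercise a
        # dtype-dependent op so the skip behavior is observable.
        torch.scalar_tensor(1.0, dtype=dtype)

    def test_f32(self):
        self._test()

    @unittest.skip("T183420542: Add proper fp16 support.")
    def test_f16(self):
        self._test(torch.float16)


if __name__ == "__main__":
    # test_f16 is reported as skipped with the given reason;
    # test_f32 still runs normally.
    unittest.main(verbosity=2)
```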
