[HLSL] add extra scalar vector overloads for clamp #129939


Merged (16 commits, Mar 17, 2025)
Conversation

@spall (Contributor) commented Mar 5, 2025

Add additional vector/scalar overloads for clamp using templates.
Add tests.
Fix up tests which have changed.
Closes #128230
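
For context, these are the kinds of mixed scalar/vector calls this enables (a usage sketch; the function and variable names below are illustrative, not from the patch):

  // vector, vector, scalar: matches the generated clamp(float4, float4, float)
  // overload, which splats hi to float4 and forwards to the elementwise builtin.
  float4 clampUpper(float4 v, float hi) {
    return clamp(v, v, hi);
  }

  // vector, scalar, vector: matches the generated clamp(float4, float, float4).
  float4 clampLowerScalar(float4 v, float lo, float4 hi) {
    return clamp(v, lo, hi);
  }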

@llvmbot added labels on Mar 5, 2025: clang (Clang issues not falling into any other category), backend:X86, clang:headers (Headers provided by Clang, e.g. for intrinsics), HLSL (HLSL Language Support)
@llvmbot (Member) commented Mar 5, 2025

@llvm/pr-subscribers-backend-x86

Author: Sarah Spall (spall)

Changes

Add additional vector/scalar overloads for clamp.
Revamp the macros which generate the overloads (see the expansion sketch below).
Add tests.
Fix one test which now errors instead of warning.
Closes #128230
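
To make the token-pasting dispatch concrete, here is a hand expansion of one invocation (a sketch of the preprocessor output under the macros in this diff, not verbatim compiler output). _HLSL_NUM_ARGS_clamp expands to 3, so _HLSL_VEC_SCALAR_OVERLOADS(clamp, half, 1) routes through _HLSL_BOTH_OVERLOADS_3; for the two-element vector it yields roughly:

  // AVAIL is 1, so _HLSL_IF_TRUE_1 keeps the availability attribute.
  _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
  constexpr half2 clamp(half2 p0, half2 p1, half p2) {
    return __builtin_hlsl_elementwise_clamp(p0, p1, (half2)p2);
  }
  _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
  constexpr half2 clamp(half2 p0, half p1, half2 p2) {
    return __builtin_hlsl_elementwise_clamp(p0, (half2)p1, p2);
  }

The same pair repeats for half3 and half4. min and max dispatch to _HLSL_BOTH_OVERLOADS_2 instead, since their _HLSL_NUM_ARGS_ macros expand to 2.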


Full diff: https://github.com/llvm/llvm-project/pull/129939.diff

3 Files Affected:

  • (modified) clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h (+63-36)
  • (modified) clang/test/CodeGenHLSL/builtins/clamp.hlsl (+32)
  • (modified) clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl (+1-1)
diff --git a/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h b/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
index 75b0c95440461..62262535130cc 100644
--- a/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
+++ b/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
@@ -35,26 +35,44 @@ namespace hlsl {
 #define _HLSL_16BIT_AVAILABILITY_STAGE(environment, version, stage)
 #endif
 
-#define GEN_VEC_SCALAR_OVERLOADS(FUNC_NAME, BASE_TYPE, AVAIL)                  \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##2, AVAIL)                \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##3, AVAIL)                \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##4, AVAIL)
-
-#define GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, VECTOR_TYPE, AVAIL)           \
-  IF_TRUE_##AVAIL(                                                             \
-      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_TYPE        \
-  FUNC_NAME(VECTOR_TYPE p0, BASE_TYPE p1) {                                    \
-    return __builtin_elementwise_##FUNC_NAME(p0, (VECTOR_TYPE)p1);             \
+#define _HLSL_CAT(a,b) a##b
+#define _HLSL_VEC_SCALAR_OVERLOADS(NAME, BASE_T, AVAIL) \
+  _HLSL_ALL_OVERLOADS(NAME, BASE_T, AVAIL, _HLSL_CAT(_HLSL_NUM_ARGS_,NAME))
+
+#define _HLSL_ALL_OVERLOADS(NAME, BASE_T, AVAIL, NUM_ARGS) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,2), AVAIL) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,3), AVAIL) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,4), AVAIL)
+
+#define _HLSL_BOTH_OVERLOADS_2(NAME, BASE_T, VECTOR_T, AVAIL)           \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(					\
+      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_T        \
+  NAME(VECTOR_T p0, BASE_T p1) {                                    \
+    return _HLSL_CAT(__builtin_elementwise_,NAME)(p0, (VECTOR_T)p1);	\
   }                                                                            \
-  IF_TRUE_##AVAIL(                                                             \
-      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_TYPE        \
-  FUNC_NAME(BASE_TYPE p0, VECTOR_TYPE p1) {                                    \
-    return __builtin_elementwise_##FUNC_NAME((VECTOR_TYPE)p0, p1);             \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(					\
+      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_T        \
+  NAME(BASE_T p0, VECTOR_T p1) {                                    \
+    return _HLSL_CAT(__builtin_elementwise_,NAME)((VECTOR_T)p0, p1);	\
   }
 
-#define IF_TRUE_0(EXPR)
-#define IF_TRUE_1(EXPR) EXPR
+#define _HLSL_BOTH_OVERLOADS_3(NAME, BASE_T, VECTOR_T, AVAIL)  \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(_HLSL_16BIT_AVAILABILITY(shadermodel, 6.2))	\
+  constexpr VECTOR_T NAME(VECTOR_T p0, VECTOR_T p1, BASE_T p2) { \
+    return _HLSL_CAT(__builtin_hlsl_elementwise_,NAME)(p0, p1, (VECTOR_T)p2); \
+  } \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(_HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) \
+  constexpr VECTOR_T NAME(VECTOR_T p0, BASE_T p1, VECTOR_T p2) { \
+    return _HLSL_CAT(__builtin_hlsl_elementwise_,NAME)(p0, (VECTOR_T)p1, p2); \
+  }
+
+#define _HLSL_IF_TRUE_0(EXPR)
+#define _HLSL_IF_TRUE_1(EXPR) EXPR
 
+#define _HLSL_NUM_ARGS_min 2
+#define _HLSL_NUM_ARGS_max 2
+#define _HLSL_NUM_ARGS_clamp 3
+  
 //===----------------------------------------------------------------------===//
 // abs builtins
 //===----------------------------------------------------------------------===//
@@ -582,7 +600,8 @@ half3 clamp(half3, half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 half4 clamp(half4, half4, half4);
-
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, half, 1)
+  
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
@@ -596,7 +615,8 @@ int16_t3 clamp(int16_t3, int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int16_t4 clamp(int16_t4, int16_t4, int16_t4);
-
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int16_t, 1)
+  
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint16_t clamp(uint16_t, uint16_t, uint16_t);
@@ -609,6 +629,7 @@ uint16_t3 clamp(uint16_t3, uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint16_t4 clamp(uint16_t4, uint16_t4, uint16_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
@@ -619,6 +640,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int3 clamp(int3, int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int4 clamp(int4, int4, int4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint clamp(uint, uint, uint);
@@ -628,6 +650,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint3 clamp(uint3, uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint4 clamp(uint4, uint4, uint4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t clamp(int64_t, int64_t, int64_t);
@@ -637,6 +660,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t3 clamp(int64_t3, int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t4 clamp(int64_t4, int64_t4, int64_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t clamp(uint64_t, uint64_t, uint64_t);
@@ -646,6 +670,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t3 clamp(uint64_t3, uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t4 clamp(uint64_t4, uint64_t4, uint64_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float clamp(float, float, float);
@@ -655,6 +680,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float3 clamp(float3, float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float4 clamp(float4, float4, float4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double clamp(double, double, double);
@@ -664,6 +690,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double3 clamp(double3, double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double4 clamp(double4, double4, double4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, double, 0)
 
 //===----------------------------------------------------------------------===//
 // clip builtins
@@ -1576,7 +1603,7 @@ half3 max(half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 half4 max(half4, half4);
-GEN_VEC_SCALAR_OVERLOADS(max, half, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, half, 1)
 
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
@@ -1591,7 +1618,7 @@ int16_t3 max(int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int16_t4 max(int16_t4, int16_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, int16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int16_t, 1)
 
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
@@ -1605,7 +1632,7 @@ uint16_t3 max(uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint16_t4 max(uint16_t4, uint16_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
@@ -1616,7 +1643,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int3 max(int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int4 max(int4, int4);
-GEN_VEC_SCALAR_OVERLOADS(max, int, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint max(uint, uint);
@@ -1626,7 +1653,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint3 max(uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint4 max(uint4, uint4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t max(int64_t, int64_t);
@@ -1636,7 +1663,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t3 max(int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t4 max(int64_t4, int64_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, int64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t max(uint64_t, uint64_t);
@@ -1646,7 +1673,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t3 max(uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t4 max(uint64_t4, uint64_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float max(float, float);
@@ -1656,7 +1683,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float3 max(float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float4 max(float4, float4);
-GEN_VEC_SCALAR_OVERLOADS(max, float, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double max(double, double);
@@ -1666,7 +1693,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double3 max(double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double4 max(double4, double4);
-GEN_VEC_SCALAR_OVERLOADS(max, double, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, double, 0)
 
 //===----------------------------------------------------------------------===//
 // min builtins
@@ -1689,7 +1716,7 @@ half3 min(half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 half4 min(half4, half4);
-GEN_VEC_SCALAR_OVERLOADS(min, half, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, half, 1)
 
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
@@ -1704,7 +1731,7 @@ int16_t3 min(int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int16_t4 min(int16_t4, int16_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, int16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int16_t, 1)
 
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
@@ -1718,7 +1745,7 @@ uint16_t3 min(uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint16_t4 min(uint16_t4, uint16_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
@@ -1729,7 +1756,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int3 min(int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int4 min(int4, int4);
-GEN_VEC_SCALAR_OVERLOADS(min, int, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint min(uint, uint);
@@ -1739,7 +1766,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint3 min(uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint4 min(uint4, uint4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float min(float, float);
@@ -1749,7 +1776,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float3 min(float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float4 min(float4, float4);
-GEN_VEC_SCALAR_OVERLOADS(min, float, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t min(int64_t, int64_t);
@@ -1759,7 +1786,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t3 min(int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t4 min(int64_t4, int64_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, int64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t min(uint64_t, uint64_t);
@@ -1769,7 +1796,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t3 min(uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t4 min(uint64_t4, uint64_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double min(double, double);
@@ -1779,7 +1806,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double3 min(double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double4 min(double4, double4);
-GEN_VEC_SCALAR_OVERLOADS(min, double, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, double, 0)
 
 //===----------------------------------------------------------------------===//
 // normalize builtins
diff --git a/clang/test/CodeGenHLSL/builtins/clamp.hlsl b/clang/test/CodeGenHLSL/builtins/clamp.hlsl
index d01c2a45c43c8..51454d6479708 100644
--- a/clang/test/CodeGenHLSL/builtins/clamp.hlsl
+++ b/clang/test/CodeGenHLSL/builtins/clamp.hlsl
@@ -28,6 +28,9 @@ int16_t3 test_clamp_short3(int16_t3 p0, int16_t3 p1) { return clamp(p0, p1,p1);
 // NATIVE_HALF: define [[FNATTRS]] <4 x i16> @_Z17test_clamp_short4
 // NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].sclamp.v4i16
 int16_t4 test_clamp_short4(int16_t4 p0, int16_t4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] <4 x i16> {{.*}}test_clamp_short4_mismatch
+// NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].sclamp.v4i16
+int16_t4 test_clamp_short4_mismatch(int16_t4 p0, int16_t p1) { return clamp(p0, p0,p1); }
 
 // NATIVE_HALF: define [[FNATTRS]] i16 @_Z17test_clamp_ushort
 // NATIVE_HALF: call i16 @llvm.[[TARGET]].uclamp.i16(
@@ -41,6 +44,9 @@ uint16_t3 test_clamp_ushort3(uint16_t3 p0, uint16_t3 p1) { return clamp(p0, p1,p
 // NATIVE_HALF: define [[FNATTRS]] <4 x i16> @_Z18test_clamp_ushort4
 // NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].uclamp.v4i16
 uint16_t4 test_clamp_ushort4(uint16_t4 p0, uint16_t4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] <4 x i16> {{.*}}test_clamp_ushort4_mismatch
+// NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].uclamp.v4i16
+uint16_t4 test_clamp_ushort4_mismatch(uint16_t4 p0, uint16_t p1) { return clamp(p0, p0,p1); }
 #endif
 
 // CHECK: define [[FNATTRS]] i32 @_Z14test_clamp_int
@@ -55,6 +61,9 @@ int3 test_clamp_int3(int3 p0, int3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i32> @_Z15test_clamp_int4
 // CHECK: call <4 x i32> @llvm.[[TARGET]].sclamp.v4i32
 int4 test_clamp_int4(int4 p0, int4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i32> {{.*}}test_clamp_int4_mismatch
+// CHECK: call <4 x i32> @llvm.[[TARGET]].sclamp.v4i32
+int4 test_clamp_int4_mismatch(int4 p0, int p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i32 @_Z15test_clamp_uint
 // CHECK: call i32 @llvm.[[TARGET]].uclamp.i32(
@@ -68,6 +77,9 @@ uint3 test_clamp_uint3(uint3 p0, uint3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i32> @_Z16test_clamp_uint4
 // CHECK: call <4 x i32> @llvm.[[TARGET]].uclamp.v4i32
 uint4 test_clamp_uint4(uint4 p0, uint4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i32> {{.*}}test_clamp_uint4_mismatch
+// CHECK: call <4 x i32> @llvm.[[TARGET]].uclamp.v4i32
+uint4 test_clamp_uint4_mismatch(uint4 p0, uint p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i64 @_Z15test_clamp_long
 // CHECK: call i64 @llvm.[[TARGET]].sclamp.i64(
@@ -81,6 +93,9 @@ int64_t3 test_clamp_long3(int64_t3 p0, int64_t3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i64> @_Z16test_clamp_long4
 // CHECK: call <4 x i64> @llvm.[[TARGET]].sclamp.v4i64
 int64_t4 test_clamp_long4(int64_t4 p0, int64_t4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i64> {{.*}}test_clamp_long4_mismatch
+// CHECK: call <4 x i64> @llvm.[[TARGET]].sclamp.v4i64
+int64_t4 test_clamp_long4_mismatch(int64_t4 p0, int64_t4 p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i64 @_Z16test_clamp_ulong
 // CHECK: call i64 @llvm.[[TARGET]].uclamp.i64(
@@ -94,6 +109,9 @@ uint64_t3 test_clamp_ulong3(uint64_t3 p0, uint64_t3 p1) { return clamp(p0, p1,p1
 // CHECK: define [[FNATTRS]] <4 x i64> @_Z17test_clamp_ulong4
 // CHECK: call <4 x i64> @llvm.[[TARGET]].uclamp.v4i64
 uint64_t4 test_clamp_ulong4(uint64_t4 p0, uint64_t4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i64> {{.*}}test_clamp_ulong4_mismatch
+// CHECK: call <4 x i64> @llvm.[[TARGET]].uclamp.v4i64
+uint64_t4 test_clamp_ulong4_mismatch(uint64_t4 p0, uint64_t4 p1) { return clamp(p0, p0,p1); }
 
 // NATIVE_HALF: define [[FNATTRS]] [[FFNATTRS]] half @_Z15test_clamp_half
 // NATIVE_HALF: call reassoc nnan ninf nsz arcp afn half @llvm.[[TARGET]].nclamp.f16(
@@ -115,6 +133,11 @@ half3 test_clamp_half3(half3 p0, half3 p1) { return clamp(p0, p1,p1); }
 // NO_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x float> @_Z16test_clamp_half4
 // NO_HALF: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32(
 half4 test_clamp_half4(half4 p0, half4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x half> {{.*}}test_clamp_half4_mismatch
+// NATIVE_HALF: call reassoc nnan ninf nsz arcp afn <4 x half> @llvm.[[TARGET]].nclamp.v4f16
+// NO_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x float> {{.*}}test_clamp_half4_mismatch
+// NO_HALF: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32(
+half4 test_clamp_half4_mismatch(half4 p0, half p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] float @_Z16test_clamp_float
 // CHECK: call reassoc nnan ninf nsz arcp afn float @llvm.[[TARGET]].nclamp.f32(
@@ -128,6 +151,9 @@ float3 test_clamp_float3(float3 p0, float3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x float> @_Z17test_clamp_float4
 // CHECK: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32
 float4 test_clamp_float4(float4 p0, float4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x float> {{.*}}test_clamp_float4_mismatch
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32
+float4 test_clamp_float4_mismatch(float4 p0, float p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] double @_Z17test_clamp_double
 // CHECK: call reassoc nnan ninf nsz arcp afn double @llvm.[[TARGET]].nclamp.f64(
@@ -141,3 +167,9 @@ double3 test_clamp_double3(double3 p0, double3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> @_Z18test_clamp_double4
 // CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
 double4 test_clamp_double4(double4 p0, double4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> {{.*}}test_clamp_double4_mismatch
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
+double4 test_clamp_double4_mismatch(double4 p0, double p1) { return clamp(p0, p0,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> {{.*}}test_clamp_double4_mismatch2
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
+double4 test_clamp_double4_mismatch2(double4 p0, double p1) { return clamp(p0, p1,p0); }
diff --git a/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl b/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
index 036f04cdac0b5..a1850d47b105d 100644
--- a/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
+++ b/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
@@ -22,7 +22,7 @@ float2 test_clamp_no_second_arg(float2 p0) {
 
 float2 test_clamp_vector_size_mismatch(float3 p0, float2 p1) {
   return clamp(p0, p0, p1);
-  // expected-warning@-1 {{implicit conversion truncates vector: 'float3' (aka 'vector<float, 3>') to 'vector<float, 2>' (vector of 2 'float' values)}}
+  // expected-error@-1 {{call to 'clamp' is ambiguous}}
 }
 
 float2 test_clamp_builtin_vector_size_mismatch(float3 p0, float2 p1) {

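A note on the clamp-errors.hlsl change above (a sketch of the overload-resolution reasoning, not verbatim diagnostics): before this patch the call below had a single viable candidate, clamp(float2, float2, float2), reached by implicitly truncating the float3 arguments, hence the old truncation warning. With the mixed overloads in scope, the candidate set plausibly grows to include forms such as clamp(float3, float3, float), reached by truncating p1 instead, and no candidate is strictly better:

  float2 test_clamp_vector_size_mismatch(float3 p0, float2 p1) {
    // Viable candidates now include at least:
    //   clamp(float2, float2, float2)  (truncate p0 twice)
    //   clamp(float3, float3, float)   (truncate p1 to a scalar)
    // Neither conversion sequence wins, so Sema reports the call as ambiguous.
    return clamp(p0, p0, p1);
  }
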
@llvmbot (Member) commented Mar 5, 2025

@llvm/pr-subscribers-clang

Author: Sarah Spall (spall)

(Same change summary and full diff as in the @llvm/pr-subscribers-backend-x86 comment above.)
@llvmbot
Copy link
Member

llvmbot commented Mar 5, 2025

@llvm/pr-subscribers-hlsl

Author: Sarah Spall (spall)

Changes

Add additional vector scalar overloads for clamp
Revamp Macros which generate overloads
Add Tests
Fix one test which now errors instead of warns.
Closes #128230


Full diff: https://github.com/llvm/llvm-project/pull/129939.diff

3 Files Affected:

  • (modified) clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h (+63-36)
  • (modified) clang/test/CodeGenHLSL/builtins/clamp.hlsl (+32)
  • (modified) clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl (+1-1)
diff --git a/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h b/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
index 75b0c95440461..62262535130cc 100644
--- a/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
+++ b/clang/lib/Headers/hlsl/hlsl_alias_intrinsics.h
@@ -35,26 +35,44 @@ namespace hlsl {
 #define _HLSL_16BIT_AVAILABILITY_STAGE(environment, version, stage)
 #endif
 
-#define GEN_VEC_SCALAR_OVERLOADS(FUNC_NAME, BASE_TYPE, AVAIL)                  \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##2, AVAIL)                \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##3, AVAIL)                \
-  GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, BASE_TYPE##4, AVAIL)
-
-#define GEN_BOTH_OVERLOADS(FUNC_NAME, BASE_TYPE, VECTOR_TYPE, AVAIL)           \
-  IF_TRUE_##AVAIL(                                                             \
-      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_TYPE        \
-  FUNC_NAME(VECTOR_TYPE p0, BASE_TYPE p1) {                                    \
-    return __builtin_elementwise_##FUNC_NAME(p0, (VECTOR_TYPE)p1);             \
+#define _HLSL_CAT(a,b) a##b
+#define _HLSL_VEC_SCALAR_OVERLOADS(NAME, BASE_T, AVAIL) \
+  _HLSL_ALL_OVERLOADS(NAME, BASE_T, AVAIL, _HLSL_CAT(_HLSL_NUM_ARGS_,NAME))
+
+#define _HLSL_ALL_OVERLOADS(NAME, BASE_T, AVAIL, NUM_ARGS) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,2), AVAIL) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,3), AVAIL) \
+  _HLSL_CAT(_HLSL_BOTH_OVERLOADS_,NUM_ARGS)(NAME, BASE_T, _HLSL_CAT(BASE_T,4), AVAIL)
+
+#define _HLSL_BOTH_OVERLOADS_2(NAME, BASE_T, VECTOR_T, AVAIL)           \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(					\
+      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_T        \
+  NAME(VECTOR_T p0, BASE_T p1) {                                    \
+    return _HLSL_CAT(__builtin_elementwise_,NAME)(p0, (VECTOR_T)p1);	\
   }                                                                            \
-  IF_TRUE_##AVAIL(                                                             \
-      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_TYPE        \
-  FUNC_NAME(BASE_TYPE p0, VECTOR_TYPE p1) {                                    \
-    return __builtin_elementwise_##FUNC_NAME((VECTOR_TYPE)p0, p1);             \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(					\
+      _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) constexpr VECTOR_T        \
+  NAME(BASE_T p0, VECTOR_T p1) {                                    \
+    return _HLSL_CAT(__builtin_elementwise_,NAME)((VECTOR_T)p0, p1);	\
   }
 
-#define IF_TRUE_0(EXPR)
-#define IF_TRUE_1(EXPR) EXPR
+#define _HLSL_BOTH_OVERLOADS_3(NAME, BASE_T, VECTOR_T, AVAIL)  \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(_HLSL_16BIT_AVAILABILITY(shadermodel, 6.2))	\
+  constexpr VECTOR_T NAME(VECTOR_T p0, VECTOR_T p1, BASE_T p2) { \
+    return _HLSL_CAT(__builtin_hlsl_elementwise_,NAME)(p0, p1, (VECTOR_T)p2); \
+  } \
+  _HLSL_CAT(_HLSL_IF_TRUE_,AVAIL)(_HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)) \
+  constexpr VECTOR_T NAME(VECTOR_T p0, BASE_T p1, VECTOR_T p2) { \
+    return _HLSL_CAT(__builtin_hlsl_elementwise_,NAME)(p0, (VECTOR_T)p1, p2); \
+  }
+
+#define _HLSL_IF_TRUE_0(EXPR)
+#define _HLSL_IF_TRUE_1(EXPR) EXPR
 
+#define _HLSL_NUM_ARGS_min 2
+#define _HLSL_NUM_ARGS_max 2
+#define _HLSL_NUM_ARGS_clamp 3
+  
 //===----------------------------------------------------------------------===//
 // abs builtins
 //===----------------------------------------------------------------------===//
@@ -582,7 +600,8 @@ half3 clamp(half3, half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 half4 clamp(half4, half4, half4);
-
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, half, 1)
+  
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
@@ -596,7 +615,8 @@ int16_t3 clamp(int16_t3, int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int16_t4 clamp(int16_t4, int16_t4, int16_t4);
-
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int16_t, 1)
+  
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint16_t clamp(uint16_t, uint16_t, uint16_t);
@@ -609,6 +629,7 @@ uint16_t3 clamp(uint16_t3, uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint16_t4 clamp(uint16_t4, uint16_t4, uint16_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
@@ -619,6 +640,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int3 clamp(int3, int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int4 clamp(int4, int4, int4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint clamp(uint, uint, uint);
@@ -628,6 +650,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint3 clamp(uint3, uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint4 clamp(uint4, uint4, uint4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t clamp(int64_t, int64_t, int64_t);
@@ -637,6 +660,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t3 clamp(int64_t3, int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 int64_t4 clamp(int64_t4, int64_t4, int64_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t clamp(uint64_t, uint64_t, uint64_t);
@@ -646,6 +670,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t3 clamp(uint64_t3, uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 uint64_t4 clamp(uint64_t4, uint64_t4, uint64_t4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float clamp(float, float, float);
@@ -655,6 +680,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float3 clamp(float3, float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 float4 clamp(float4, float4, float4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double clamp(double, double, double);
@@ -664,6 +690,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double3 clamp(double3, double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_hlsl_elementwise_clamp)
 double4 clamp(double4, double4, double4);
+_HLSL_VEC_SCALAR_OVERLOADS(clamp, double, 0)
 
 //===----------------------------------------------------------------------===//
 // clip builtins
@@ -1576,7 +1603,7 @@ half3 max(half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 half4 max(half4, half4);
-GEN_VEC_SCALAR_OVERLOADS(max, half, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, half, 1)
 
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
@@ -1591,7 +1618,7 @@ int16_t3 max(int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int16_t4 max(int16_t4, int16_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, int16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int16_t, 1)
 
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
@@ -1605,7 +1632,7 @@ uint16_t3 max(uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint16_t4 max(uint16_t4, uint16_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
@@ -1616,7 +1643,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int3 max(int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int4 max(int4, int4);
-GEN_VEC_SCALAR_OVERLOADS(max, int, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint max(uint, uint);
@@ -1626,7 +1653,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint3 max(uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint4 max(uint4, uint4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t max(int64_t, int64_t);
@@ -1636,7 +1663,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t3 max(int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 int64_t4 max(int64_t4, int64_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, int64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t max(uint64_t, uint64_t);
@@ -1646,7 +1673,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t3 max(uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 uint64_t4 max(uint64_t4, uint64_t4);
-GEN_VEC_SCALAR_OVERLOADS(max, uint64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float max(float, float);
@@ -1656,7 +1683,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float3 max(float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 float4 max(float4, float4);
-GEN_VEC_SCALAR_OVERLOADS(max, float, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double max(double, double);
@@ -1666,7 +1693,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double3 max(double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_max)
 double4 max(double4, double4);
-GEN_VEC_SCALAR_OVERLOADS(max, double, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(max, double, 0)
 
 //===----------------------------------------------------------------------===//
 // min builtins
@@ -1689,7 +1716,7 @@ half3 min(half3, half3);
 _HLSL_16BIT_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 half4 min(half4, half4);
-GEN_VEC_SCALAR_OVERLOADS(min, half, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, half, 1)
 
 #ifdef __HLSL_ENABLE_16_BIT
 _HLSL_AVAILABILITY(shadermodel, 6.2)
@@ -1704,7 +1731,7 @@ int16_t3 min(int16_t3, int16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int16_t4 min(int16_t4, int16_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, int16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int16_t, 1)
 
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
@@ -1718,7 +1745,7 @@ uint16_t3 min(uint16_t3, uint16_t3);
 _HLSL_AVAILABILITY(shadermodel, 6.2)
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint16_t4 min(uint16_t4, uint16_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint16_t, 1)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint16_t, 1)
 #endif
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
@@ -1729,7 +1756,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int3 min(int3, int3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int4 min(int4, int4);
-GEN_VEC_SCALAR_OVERLOADS(min, int, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint min(uint, uint);
@@ -1739,7 +1766,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint3 min(uint3, uint3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint4 min(uint4, uint4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float min(float, float);
@@ -1749,7 +1776,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float3 min(float3, float3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 float4 min(float4, float4);
-GEN_VEC_SCALAR_OVERLOADS(min, float, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, float, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t min(int64_t, int64_t);
@@ -1759,7 +1786,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t3 min(int64_t3, int64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 int64_t4 min(int64_t4, int64_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, int64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, int64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t min(uint64_t, uint64_t);
@@ -1769,7 +1796,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t3 min(uint64_t3, uint64_t3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 uint64_t4 min(uint64_t4, uint64_t4);
-GEN_VEC_SCALAR_OVERLOADS(min, uint64_t, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, uint64_t, 0)
 
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double min(double, double);
@@ -1779,7 +1806,7 @@ _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double3 min(double3, double3);
 _HLSL_BUILTIN_ALIAS(__builtin_elementwise_min)
 double4 min(double4, double4);
-GEN_VEC_SCALAR_OVERLOADS(min, double, 0)
+_HLSL_VEC_SCALAR_OVERLOADS(min, double, 0)
 
 //===----------------------------------------------------------------------===//
 // normalize builtins
diff --git a/clang/test/CodeGenHLSL/builtins/clamp.hlsl b/clang/test/CodeGenHLSL/builtins/clamp.hlsl
index d01c2a45c43c8..51454d6479708 100644
--- a/clang/test/CodeGenHLSL/builtins/clamp.hlsl
+++ b/clang/test/CodeGenHLSL/builtins/clamp.hlsl
@@ -28,6 +28,9 @@ int16_t3 test_clamp_short3(int16_t3 p0, int16_t3 p1) { return clamp(p0, p1,p1);
 // NATIVE_HALF: define [[FNATTRS]] <4 x i16> @_Z17test_clamp_short4
 // NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].sclamp.v4i16
 int16_t4 test_clamp_short4(int16_t4 p0, int16_t4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] <4 x i16> {{.*}}test_clamp_short4_mismatch
+// NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].sclamp.v4i16
+int16_t4 test_clamp_short4_mismatch(int16_t4 p0, int16_t p1) { return clamp(p0, p0,p1); }
 
 // NATIVE_HALF: define [[FNATTRS]] i16 @_Z17test_clamp_ushort
 // NATIVE_HALF: call i16 @llvm.[[TARGET]].uclamp.i16(
@@ -41,6 +44,9 @@ uint16_t3 test_clamp_ushort3(uint16_t3 p0, uint16_t3 p1) { return clamp(p0, p1,p
 // NATIVE_HALF: define [[FNATTRS]] <4 x i16> @_Z18test_clamp_ushort4
 // NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].uclamp.v4i16
 uint16_t4 test_clamp_ushort4(uint16_t4 p0, uint16_t4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] <4 x i16> {{.*}}test_clamp_ushort4_mismatch
+// NATIVE_HALF: call <4 x i16> @llvm.[[TARGET]].uclamp.v4i16
+uint16_t4 test_clamp_ushort4_mismatch(uint16_t4 p0, uint16_t p1) { return clamp(p0, p0,p1); }
 #endif
 
 // CHECK: define [[FNATTRS]] i32 @_Z14test_clamp_int
@@ -55,6 +61,9 @@ int3 test_clamp_int3(int3 p0, int3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i32> @_Z15test_clamp_int4
 // CHECK: call <4 x i32> @llvm.[[TARGET]].sclamp.v4i32
 int4 test_clamp_int4(int4 p0, int4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i32> {{.*}}test_clamp_int4_mismatch
+// CHECK: call <4 x i32> @llvm.[[TARGET]].sclamp.v4i32
+int4 test_clamp_int4_mismatch(int4 p0, int p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i32 @_Z15test_clamp_uint
 // CHECK: call i32 @llvm.[[TARGET]].uclamp.i32(
@@ -68,6 +77,9 @@ uint3 test_clamp_uint3(uint3 p0, uint3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i32> @_Z16test_clamp_uint4
 // CHECK: call <4 x i32> @llvm.[[TARGET]].uclamp.v4i32
 uint4 test_clamp_uint4(uint4 p0, uint4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i32> {{.*}}test_clamp_uint4_mismatch
+// CHECK: call <4 x i32> @llvm.[[TARGET]].uclamp.v4i32
+uint4 test_clamp_uint4_mismatch(uint4 p0, uint p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i64 @_Z15test_clamp_long
 // CHECK: call i64 @llvm.[[TARGET]].sclamp.i64(
@@ -81,6 +93,9 @@ int64_t3 test_clamp_long3(int64_t3 p0, int64_t3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] <4 x i64> @_Z16test_clamp_long4
 // CHECK: call <4 x i64> @llvm.[[TARGET]].sclamp.v4i64
 int64_t4 test_clamp_long4(int64_t4 p0, int64_t4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i64> {{.*}}test_clamp_long4_mismatch
+// CHECK: call <4 x i64> @llvm.[[TARGET]].sclamp.v4i64
+int64_t4 test_clamp_long4_mismatch(int64_t4 p0, int64_t p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] i64 @_Z16test_clamp_ulong
 // CHECK: call i64 @llvm.[[TARGET]].uclamp.i64(
@@ -94,6 +109,9 @@ uint64_t3 test_clamp_ulong3(uint64_t3 p0, uint64_t3 p1) { return clamp(p0, p1,p1
 // CHECK: define [[FNATTRS]] <4 x i64> @_Z17test_clamp_ulong4
 // CHECK: call <4 x i64> @llvm.[[TARGET]].uclamp.v4i64
 uint64_t4 test_clamp_ulong4(uint64_t4 p0, uint64_t4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] <4 x i64> {{.*}}test_clamp_ulong4_mismatch
+// CHECK: call <4 x i64> @llvm.[[TARGET]].uclamp.v4i64
+uint64_t4 test_clamp_ulong4_mismatch(uint64_t4 p0, uint64_t p1) { return clamp(p0, p0,p1); }
 
 // NATIVE_HALF: define [[FNATTRS]] [[FFNATTRS]] half @_Z15test_clamp_half
 // NATIVE_HALF: call reassoc nnan ninf nsz arcp afn half @llvm.[[TARGET]].nclamp.f16(
@@ -115,6 +133,11 @@ half3 test_clamp_half3(half3 p0, half3 p1) { return clamp(p0, p1,p1); }
 // NO_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x float> @_Z16test_clamp_half4
 // NO_HALF: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32(
 half4 test_clamp_half4(half4 p0, half4 p1) { return clamp(p0, p1,p1); }
+// NATIVE_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x half> {{.*}}test_clamp_half4_mismatch
+// NATIVE_HALF: call reassoc nnan ninf nsz arcp afn <4 x half> @llvm.[[TARGET]].nclamp.v4f16
+// NO_HALF: define [[FNATTRS]] [[FFNATTRS]] <4 x float> {{.*}}test_clamp_half4_mismatch
+// NO_HALF: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32(
+half4 test_clamp_half4_mismatch(half4 p0, half p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] float @_Z16test_clamp_float
 // CHECK: call reassoc nnan ninf nsz arcp afn float @llvm.[[TARGET]].nclamp.f32(
@@ -128,6 +151,9 @@ float3 test_clamp_float3(float3 p0, float3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x float> @_Z17test_clamp_float4
 // CHECK: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32
 float4 test_clamp_float4(float4 p0, float4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x float> {{.*}}test_clamp_float4_mismatch
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x float> @llvm.[[TARGET]].nclamp.v4f32
+float4 test_clamp_float4_mismatch(float4 p0, float p1) { return clamp(p0, p0,p1); }
 
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] double @_Z17test_clamp_double
 // CHECK: call reassoc nnan ninf nsz arcp afn double @llvm.[[TARGET]].nclamp.f64(
@@ -141,3 +167,9 @@ double3 test_clamp_double3(double3 p0, double3 p1) { return clamp(p0, p1,p1); }
 // CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> @_Z18test_clamp_double4
 // CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
 double4 test_clamp_double4(double4 p0, double4 p1) { return clamp(p0, p1,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> {{.*}}test_clamp_double4_mismatch
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
+double4 test_clamp_double4_mismatch(double4 p0, double p1) { return clamp(p0, p0,p1); }
+// CHECK: define [[FNATTRS]] [[FFNATTRS]] <4 x double> {{.*}}test_clamp_double4_mismatch2
+// CHECK: call reassoc nnan ninf nsz arcp afn <4 x double> @llvm.[[TARGET]].nclamp.v4f64
+double4 test_clamp_double4_mismatch2(double4 p0, double p1) { return clamp(p0, p1,p0); }
diff --git a/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl b/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
index 036f04cdac0b5..a1850d47b105d 100644
--- a/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
+++ b/clang/test/SemaHLSL/BuiltIns/clamp-errors.hlsl
@@ -22,7 +22,7 @@ float2 test_clamp_no_second_arg(float2 p0) {
 
 float2 test_clamp_vector_size_mismatch(float3 p0, float2 p1) {
   return clamp(p0, p0, p1);
-  // expected-warning@-1 {{implicit conversion truncates vector: 'float3' (aka 'vector<float, 3>') to 'vector<float, 2>' (vector of 2 'float' values)}}
+  // expected-error@-1 {{call to 'clamp' is ambiguous}}
 }
 
 float2 test_clamp_builtin_vector_size_mismatch(float3 p0, float2 p1) {

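A practical note on the Sema change above (illustrative, not part of the patch): with the mixed scalar/vector overloads in scope, a clamp call whose vector arguments disagree in width no longer silently truncates — it is ambiguous, so callers have to truncate explicitly, for example with a swizzle:

float2 clamp_resolved(float3 p0, float2 p1) {
  // explicit .xy truncation selects the uniform float2 overload
  return clamp(p0.xy, p0.xy, p1);
}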
@bob80905 (Contributor) left a comment

Ok, given the specific conditions that:

  1. there are 3 parameters,
  2. the first parameter is always the vector type, and
  3. there is at most one base (scalar) type among the parameters,

this macro should work for that specific case.
It may need to be adjusted, or more parameters added to it, if later intrinsics don't satisfy these conditions (that is, if there are 2 base types in the parameters that need to be cast to vector types).
Might need to double-check with someone whether the (vector, base, base) case is allowed or not. But otherwise LGTM.
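For reference, a minimal sketch of what the three-argument expansion might produce for one width, assuming it mirrors the two-argument form earlier in the header (the names and casts here are illustrative, not the exact header text):

constexpr float2 clamp(float2 p0, float2 p1, float p2) {
  // splat the scalar bound across the vector, then defer to the builtin
  return __builtin_elementwise_clamp(p0, p1, (float2)p2);
}
constexpr float2 clamp(float2 p0, float p1, float2 p2) {
  return __builtin_elementwise_clamp(p0, (float2)p1, p2);
}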

@spall spall marked this pull request as draft March 7, 2025 00:15
@spall spall marked this pull request as ready for review March 10, 2025 18:55
@llvmbot llvmbot added the clang:frontend Language frontend issues, e.g. anything involving "Sema" label Mar 10, 2025

github-actions bot commented Mar 10, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.


spall commented Mar 10, 2025

I need to add more tests for certain odd cases.


farzonl commented Mar 12, 2025

LGTM, thanks for adding the vec5 test case!
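(The vec5 case exercises widths beyond the 2/3/4 overloads the macros spell out. The exact test isn't shown here, but a hedged sketch of its likely shape, assuming the templated overloads splat the scalar across any vector width:)

typedef vector<float, 5> float5;
// hypothetical test name; the scalar p1 should be splatted to all five lanes
float5 test_clamp_float5_mismatch(float5 p0, float p1) { return clamp(p0, p0, p1); }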

spall added 2 commits March 14, 2025 17:03
…lementwise_clamp, to ensure clamp can't be called on 16 bit types in a shader model before 6.2. Add tests to show this
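(A rough sketch of the gating this commit describes, assuming the generated 16-bit overloads carry the availability attribute the header already uses elsewhere — illustrative, not the exact change:)

_HLSL_16BIT_AVAILABILITY(shadermodel, 6.2) // assumed: same attribute as the other 16-bit overloads
constexpr int16_t2 clamp(int16_t2 p0, int16_t p1, int16_t2 p2) {
  return __builtin_elementwise_clamp(p0, (int16_t2)p1, p2);
}
// a call to this overload when targeting a shader model below 6.2 should then be diagnosed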
@llvm-beanz (Collaborator) left a comment

One small nit on formatting, otherwise looks good.

@spall spall merged commit af5abd9 into llvm:main Mar 17, 2025
12 checks passed
@damyanp damyanp moved this to Closed in HLSL Support Apr 25, 2025
@damyanp damyanp removed this from HLSL Support Jun 25, 2025
Labels
backend:X86 clang:frontend Language frontend issues, e.g. anything involving "Sema" clang:headers Headers provided by Clang, e.g. for intrinsics clang Clang issues not falling into any other category HLSL HLSL Language Support
Development

Successfully merging this pull request may close these issues:

[HLSL] clamp overloads with mixed scalar vector operands