
Commit c16aaf5

ssjia (SS-JIA) authored and committed
[ET-VK][testing] Create dedicated test binary for pointwise convolutions
Pull Request resolved: #17220

This commit creates a dedicated test binary for pointwise (1x1) convolutions (test_q8ta_conv2d_pw), separating them from the general 2D convolution tests. Here are the key changes:

What Changed

1. New test binary: test_q8ta_conv2d_pw.cpp (591 lines)
   - Dedicated test file focusing exclusively on pointwise convolutions (kernel size 1x1)
   - Contains 9 test configurations ranging from accuracy tests to performance cases:
     - Accuracy tests: various channel configurations (32→3, 64→32, 96→64, 13→7, 80→40) with different spatial dimensions
     - Performance tests: larger configurations (160→480, 22→48, 48→48, 128→128) exceeding the 100-dim reference limit
   - Tests all combinations of:
     - Storage types: Texture3D, Buffer
     - Int8 memory layouts: 4C1W, 4W4C, 4C
   - Also tests the legacy 4W4C implementation via impl_selector="legacy_4w4c"
   - Includes a full reference implementation for numerical correctness verification
   - Custom FLOP calculator for performance measurements

2. Removed from test_q8ta_conv2d.cpp (44 lines deleted)
   - Removed 6 pointwise convolution configurations that are now covered by the new dedicated binary
   - General conv2d tests now focus solely on kernels larger than 1x1 (3x3, 5x5, etc.)

3. Build system updates
   - Added the test_q8ta_conv2d_pw target to:
     - targets.bzl (Buck2)
     - CMakeLists.txt (CMake)
   - Both fbcode and xplat paths updated (the files are mirrored)

4. CI/workflow integration
   - Updated executorch_vulkan_eureka_unit_tests.sky to include the new test binary in the on-device testing workflow

Why This Separation?

Pointwise convolutions (1x1 kernels) are a distinct optimization target with different performance characteristics than general convolutions. Separating them enables:
- Focused performance iteration on pointwise-specific shaders
- Cleaner test organization
- Faster test runs when only one convolution type needs testing

ghstack-source-id: 338638548
exported-using-ghexport

Differential Revision: [D92307251](https://our.internmc.facebook.com/intern/diff/D92307251/)
1 parent: 67ff1b8

4 files changed: 593 additions & 44 deletions


backends/vulkan/test/custom_ops/CMakeLists.txt

Lines changed: 1 addition & 0 deletions

@@ -100,6 +100,7 @@ if(TARGET vulkan_backend)
   add_operator_prototype(test_q8ta_qdq)
   add_operator_prototype(test_q8ta_clone)
   add_operator_prototype(test_q8ta_conv2d)
+  add_operator_prototype(test_q8ta_conv2d_pw)
   add_operator_prototype(test_q8ta_conv2d_dw)
   add_operator_prototype(q8ta_q8ta_q8to_add)
 endif()

backends/vulkan/test/custom_ops/targets.bzl

Lines changed: 1 addition & 0 deletions

@@ -94,5 +94,6 @@ def define_common_targets(is_fbcode = False):
     define_custom_op_test_binary("test_q8ta_qdq")
     define_custom_op_test_binary("test_q8ta_clone")
     define_custom_op_test_binary("test_q8ta_conv2d")
+    define_custom_op_test_binary("test_q8ta_conv2d_pw")
     define_custom_op_test_binary("test_q8ta_conv2d_dw")
     define_custom_op_test_binary("q8ta_q8ta_q8to_add")

backends/vulkan/test/custom_ops/test_q8ta_conv2d.cpp

Lines changed: 0 additions & 44 deletions

@@ -258,35 +258,6 @@ static std::vector<TestCase> generate_quantized_conv2d_test_cases() {
 }

   std::vector<Conv2dConfig> configs = {
-      // Pointwise convolutions: kernel size 1x1
-      {OutInChannels(32, 3),
-       InputSize2D(64, 64),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
-      {OutInChannels(64, 32),
-       InputSize2D(32, 32),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
-      {OutInChannels(96, 64),
-       InputSize2D(16, 16),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
-      {OutInChannels(13, 7),
-       InputSize2D(57, 33),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
       // General 2D convolutions
       {OutInChannels(32, 3),
        InputSize2D(64, 64),
@@ -352,21 +323,6 @@ static std::vector<TestCase> generate_quantized_conv2d_test_cases() {
        Padding(2, 2),
        Dilation(1, 1),
        4},
-      // Performance cases (pointwise - will use im2col)
-      {OutInChannels(128, 128),
-       InputSize2D(128, 128),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
-      {OutInChannels(128, 128),
-       InputSize2D(128, 128),
-       KernelSize(1, 1),
-       Stride(1, 1),
-       Padding(0, 0),
-       Dilation(1, 1),
-       1},
       // Performance cases (3x3 convs - will use im2col)
       {OutInChannels(32, 3),
        InputSize2D(256, 256),
