Commit d7c8b3d

[CI] Replace "openai/triton" with "triton-lang/triton" (triton-lang#3900)
1 parent a9f7506 commit d7c8b3d
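
This commit only swaps the repository slug in references; GitHub redirects the old `openai/triton` URLs after a rename, but existing clones can be repointed explicitly. A minimal sketch (the throwaway repo and the `origin` remote name are illustrative):

```shell
# Repoint a clone's remote at the renamed repository.
# GitHub redirects the old openai/triton URL, but updating the remote
# avoids relying on the redirect. Demonstrated in a throwaway repo so
# the commands are self-contained:
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" remote add origin https://github.com/openai/triton.git
git -C "$tmp" remote set-url origin https://github.com/triton-lang/triton.git
git -C "$tmp" remote get-url origin
```

The same `git remote set-url` invocation applies unchanged in a real clone.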

File tree

17 files changed (+27, -27 lines)

.github/workflows/integration-tests.yml (+1, -1)

@@ -103,7 +103,7 @@ jobs:
 id: set-matrix
 if: env.enable_integration == 'true'
 run: |
-if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
 echo '::set-output name=matrix-CUDA::[["self-hosted", "A100"], ["self-hosted", "H100"]]'
 echo '::set-output name=matrix-HIP::[["self-hosted", "gfx90a"]]'
 else
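
The guard edited above selects self-hosted GPU runners only when the workflow runs in the canonical repository. Stripped of workflow syntax, it reduces to a plain string comparison; a sketch, with `REPO` standing in for the `${{ github.repository }}` expression (note that the `::set-output` command used in the workflow has since been deprecated by GitHub in favor of writing to `$GITHUB_OUTPUT`):

```shell
# Sketch of the runner-matrix guard from the workflow step above.
# REPO stands in for the ${{ github.repository }} expression.
REPO="triton-lang/triton"
if [ x"$REPO" = x"triton-lang/triton" ]; then
  # Canonical repository: use the self-hosted GPU runners.
  echo 'matrix-CUDA=[["self-hosted", "A100"], ["self-hosted", "H100"]]'
else
  # Forks fall back to GitHub-hosted runners.
  echo 'matrix-CUDA=["ubuntu-latest"]'
fi
```

The `x"..."` prefix on both sides of the comparison is a portability idiom that keeps the test well-formed even if the variable expands empty.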

.github/workflows/integration-tests.yml.in (+1, -1)

@@ -111,7 +111,7 @@ jobs:
 id: set-matrix
 if: env.enable_integration == 'true'
 run: |
-if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
 echo '::set-output name=matrix-CUDA::[["self-hosted", "A100"], ["self-hosted", "H100"]]'
 echo '::set-output name=matrix-HIP::[["self-hosted", "gfx90a"]]'
 else

.github/workflows/llvm-build.yml (+3, -3)

@@ -287,23 +287,23 @@ jobs:
 ${{ github.workspace }}/llvm-*-${{ matrix.config.target-os }}-${{ matrix.config.arch }}.tar.gz

 - name: Azure Login
-if: ${{ (github.repository == 'openai/triton') }}
+if: ${{ (github.repository == 'triton-lang/triton') }}
 uses: azure/login@v2
 with:
 client-id: ${{ secrets.AZURE_CLIENT_ID }}
 tenant-id: ${{ secrets.AZURE_TENANT_ID }}
 subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

 - name: Upload LLVM Artifacts to Azure
-if: ${{ (github.repository == 'openai/triton') }}
+if: ${{ (github.repository == 'triton-lang/triton') }}
 run: |
 az storage blob upload --account-name tritonlang --auth-mode login --container-name llvm-builds --file "${{ env.llvm_install_dir }}.tar.gz" --name "${{ env.llvm_install_dir }}.tar.gz" --overwrite

 URL=$(az storage blob url --account-name tritonlang --auth-mode login --container-name llvm-builds --name "${{ env.llvm_install_dir }}.tar.gz")
 echo "Blob URL: ${URL}"

 - name: Azure Logout
-if: ${{ (github.repository == 'openai/triton') }}
+if: ${{ (github.repository == 'triton-lang/triton') }}
 run: |
 az logout
 az cache purge

.github/workflows/test-backends.yml (+1, -1)

@@ -16,7 +16,7 @@ jobs:
 - name: Prepare runner matrix
 id: set-matrix
 run: |
-if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
 echo '::set-output name=matrix-optional::[["self-hosted", "gfx90a"], ["self-hosted", "arc770"]]'
 else
 echo '::set-output name=matrix-optional::["ubuntu-latest"]'

CONTRIBUTING.md (+1, -1)

@@ -64,4 +64,4 @@ We are committed to accepting functional bug fixes that meet our quality standar

 ## Controversial Changes

-More controversial design changes (e.g., changes in our IRs/APIs/Passes) are evaluated on a case-by-case basis under the subjective judgment of core maintainers. While it is possible for contributors to propose and land deep design changes upstream (see https://github.com/openai/triton/pull/1305), the community should expect such occurrences to be relatively rare.
+More controversial design changes (e.g., changes in our IRs/APIs/Passes) are evaluated on a case-by-case basis under the subjective judgment of core maintainers. While it is possible for contributors to propose and land deep design changes upstream (see https://github.com/triton-lang/triton/pull/1305), the community should expect such occurrences to be relatively rare.

README.md (+4, -4)

@@ -6,7 +6,7 @@ We're hiring! If you are interested in working on Triton at OpenAI, we have role

 | **`Documentation`** | **`Nightly Wheels`** |
 |-------------------- | -------------------- |
-| [![Documentation](https://github.com/openai/triton/actions/workflows/documentation.yml/badge.svg)](https://triton-lang.org/) | [![Wheels](https://github.com/openai/triton/actions/workflows/wheels.yml/badge.svg?branch=release/2.0.x)](https://github.com/openai/triton/actions/workflows/wheels.yml) |
+| [![Documentation](https://github.com/triton-lang/triton/actions/workflows/documentation.yml/badge.svg)](https://triton-lang.org/) | [![Wheels](https://github.com/triton-lang/triton/actions/workflows/wheels.yml/badge.svg?branch=release/2.0.x)](https://github.com/triton-lang/triton/actions/workflows/wheels.yml) |


 # Triton
@@ -35,7 +35,7 @@ pip install -U --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/
 # Install from source

 ```
-git clone https://github.com/openai/triton.git;
+git clone https://github.com/triton-lang/triton.git;
 cd triton;

 pip install ninja cmake wheel; # build-time dependencies
@@ -45,7 +45,7 @@ pip install -e python
 Or with a virtualenv:

 ```
-git clone https://github.com/openai/triton.git;
+git clone https://github.com/triton-lang/triton.git;
 cd triton;

 python -m venv .venv --prompt triton;
@@ -208,7 +208,7 @@ Version 2.0 is out! New features include:

 # Contributing

-Community contributions are more than welcome, whether it be to fix bugs or to add new features at [github](https://github.com/openai/triton/). For more detailed instructions, please visit our [contributor's guide](CONTRIBUTING.md).
+Community contributions are more than welcome, whether it be to fix bugs or to add new features at [github](https://github.com/triton-lang/triton/). For more detailed instructions, please visit our [contributor's guide](CONTRIBUTING.md).


 # Compatibility

docs/conf.py (+1, -1)

@@ -158,7 +158,7 @@ def documenter(app, obj, parent):
 'filename_pattern': '',
 # TODO: Re-enable the grouped-gemm tutorial. It currently hits this
 # assertion:
-# https://github.com/openai/triton/blob/main/lib/Dialect/TritonNvidiaGPU/Transforms/FenceInsertion.cpp#L127
+# https://github.com/triton-lang/triton/blob/main/lib/Dialect/TritonNvidiaGPU/Transforms/FenceInsertion.cpp#L127
 'ignore_pattern': r'(__init__\.py|11.*.py)',
 'within_subsection_order': FileNameSortKey,
 'reference_url': {

docs/getting-started/installation.rst (+3, -3)

@@ -2,7 +2,7 @@
 Installation
 ============

-For supported platform/OS and supported hardware, review the `Compatibility <https://github.com/openai/triton?tab=readme-ov-file#compatibility>`_ section on Github.
+For supported platform/OS and supported hardware, review the `Compatibility <https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility>`_ section on Github.

 --------------------
 Binary Distributions
@@ -35,14 +35,14 @@ You can install the Python package from source by running the following commands

 .. code-block:: bash

-git clone https://github.com/openai/triton.git;
+git clone https://github.com/triton-lang/triton.git;
 cd triton/python;
 pip install ninja cmake wheel; # build-time dependencies
 pip install -e .

 Note that, if llvm is not present on your system, the setup.py script will download the official LLVM static libraries and link against that.

-For building with a custom LLVM, review the `Building with a custom LLVM <https://github.com/openai/triton?tab=readme-ov-file#building-with-a-custom-llvm>`_ section on Github.
+For building with a custom LLVM, review the `Building with a custom LLVM <https://github.com/triton-lang/triton?tab=readme-ov-file#building-with-a-custom-llvm>`_ section on Github.

 You can then test your installation by running the unit tests:
docs/index.rst

+1-1
Original file line numberDiff line numberDiff line change
@@ -67,4 +67,4 @@ Check out the following documents to learn more about Triton and how it compares
6767
programming-guide/chapter-2/related-work
6868
programming-guide/chapter-3/debugging
6969

70-
.. _Triton: https://github.com/openai/triton
70+
.. _Triton: https://github.com/triton-lang/triton

docs/meetups/08-22-2023/notes.md (+4, -4)

@@ -31,11 +31,11 @@ Recording link [here](https://drive.google.com/file/d/19Nnc0i7zUyn-ni2RSFHbPHHiP
 - Community can start with the latest stable branch and rebase 3rd party plugin on top of that. OAI has no resources to commit to, but community can contribute.
 3. Linalg updates
 - Discussion on Github for Linalg as a middle layer between the language and target hardware. Includes support for block pointers and modulo operators.
-- Please join the conversation [here](https://github.com/openai/triton/discussions/1842)
+- Please join the conversation [here](https://github.com/triton-lang/triton/discussions/1842)
 - Branch pushed is behind the tip, will work on getting it caught up on the tip.
 4. Intel GPU Backend status update.
-- Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
+- Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
 5. Intel working on the CPU backend for Triton.
-- Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
+- Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
 6. AMD updates
-- Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Triton_AMD_update_0823.pdf).
+- Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Triton_AMD_update_0823.pdf).

docs/programming-guide/chapter-3/debugging.rst (+1, -1)

@@ -5,7 +5,7 @@ Debugging Triton
 This tutorial provides guidance for debugging Triton programs.
 It is mostly documented for Triton users.
 Developers interested in exploring Triton's backend, including MLIR code transformation and LLVM code generation,
-can refer to this `section <https://github.com/openai/triton?tab=readme-ov-file#tips-for-hacking>`_ to explore debugging options.
+can refer to this `section <https://github.com/triton-lang/triton?tab=readme-ov-file#tips-for-hacking>`_ to explore debugging options.

 ------------------------------------
 Using Triton's Debugging Operations

lib/Dialect/TritonGPU/Transforms/AccelerateMatmul.cpp (+1, -1)

@@ -85,7 +85,7 @@ warpsPerTileV2(tt::DotOp dotOp, const ArrayRef<int64_t> shape, int numWarps) {
 shapePerWarp[rank - 2] = 16;
 // TODO (@daadaada): double-check.
 // original logic in
-// https://github.com/openai/triton/blob/master/lib/codegen/analysis/layout.cc#L252
+// https://github.com/triton-lang/triton/blob/master/lib/codegen/analysis/layout.cc#L252
 // seems buggy for shape = [32, 16] ?
 do {
 if (ret[0] * ret[1] >= numWarps)

python/setup.py (+1, -1)

@@ -578,7 +578,7 @@ def get_install_requires():
 zip_safe=False,
 # for PyPI
 keywords=["Compiler", "Deep Learning"],
-url="https://github.com/openai/triton/",
+url="https://github.com/triton-lang/triton/",
 classifiers=[
 "Development Status :: 4 - Beta",
 "Intended Audience :: Developers",

python/test/regression/test_cast_matmul.py (+1, -1)

@@ -1,5 +1,5 @@
 """
-issue: https://github.com/openai/triton/issues/2523
+issue: https://github.com/triton-lang/triton/issues/2523
 fused type convert and matmul, base on triton matmul, the different with matmul:
 1. force C's dtype=dot_out_dtype to ["float16", "float32"]
 2. accept A and B with dtype=["float32", "float64"]

python/test/unit/language/test_core.py (+1, -1)

@@ -5124,7 +5124,7 @@ def test_fp8_dot_acc(in_type_str, low_precision_acc, device):
 def test_enable_fp_fusion(enable_fp_fusion, device):
 if is_hip():
 pytest.skip(
-'test_enable_fp_fusion for HIP currently broken in https://github.com/openai/triton. Use https://github.com/ROCmSoftwarePlatform/triton'
+'test_enable_fp_fusion for HIP currently broken in https://github.com/triton-lang/triton. Use https://github.com/ROCmSoftwarePlatform/triton'
 )

 # Sequential multiply add can be fused by backend

test/TritonGPU/loop-pipeline.mlir (+1, -1)

@@ -1146,7 +1146,7 @@ module attributes {"triton_gpu.target" = "cuda:80", "triton_gpu.num-ctas" = 1 :
 %51 = tt.addptr %50, %47 : tensor<64x256x!tt.ptr<i8>, #blocked>, tensor<64x256xi32, #blocked>

 // Check that both loads in the loop are pipelined.
-// TODO(jlebar): https://github.com/openai/triton/pull/3472 disables the
+// TODO(jlebar): https://github.com/triton-lang/triton/pull/3472 disables the
 // relevant optimization. Once we've reenabled it, we can uncomment this test.
 // CHECK: scf.for
 // COM: CHECK-NOT: tt.load

third_party/proton/README.md (+1, -1)

@@ -9,7 +9,7 @@ Proton is a lightweight profiler for Triton, designed to be used for code writte
 The following command installs the latest version of Proton.

 ```bash
-git clone https://github.com/openai/triton
+git clone https://github.com/triton-lang/triton
 cd triton/python
 pip install .
 ```
