diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 000000000..bafd57195
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,103 @@
+name: Bug report
+title: "[Bug] "
+description: Problems and issues with the code of Exchangis
+labels: [bug, triage]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for reporting the problem!
+ Please make sure what you are reporting is a bug with reproducible steps. To ask questions
+ or share ideas, please post on our [Discussion page](https://github.com/WeBankFinTech/Exchangis/discussions) instead.
+
+ - type: checkboxes
+ attributes:
+ label: Search before asking
+ description: >
+ Please make sure to search in the [issues](https://github.com/WeBankFinTech/Exchangis/issues) first to see
+ whether the same issue was reported already.
+ options:
+ - label: >
+ I searched the [issues](https://github.com/WeBankFinTech/Exchangis/issues) and found no similar
+ issues.
+ required: true
+
+ - type: dropdown
+ attributes:
+ label: Exchangis Component
+ description: |
+ What component are you using? Exchangis has many modules, please make sure to choose the module
+ in which you found the bug.
+ multiple: true
+ options:
+ - "exchangis-datasource"
+ - "exchangis-job-launcher"
+ - "exchangis-job-server"
+ - "exchangis-job-builder"
+ - "exchangis-job-metrics"
+ - "exchangis-project"
+ - "exchangis-plugins"
+ - "exchangis-dao"
+ - "exchangis-web"
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: What happened + What you expected to happen
+ description: Describe 1. the bug 2. expected behavior 3. useful information (e.g., logs)
+ placeholder: >
+ Please provide the context in which the problem occurred and explain what happened,
+ including the steps to reproduce the behavior: 1. Go to '...' 2. Click on '...' 3. Scroll down to '...' 4. See the error.
+ Please also explain why you think the behaviour is erroneous. It is extremely helpful if you can
+ copy and paste the fragment of logs showing the exact error messages or wrong behaviour here.
+
+ **NOTE**: Please also add a clear and concise description of what you expected to happen, plus screenshots if applicable.
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Relevant platform
+ description: The platform on which this issue occurred
+ placeholder: >
+ Please specify Desktop or Smartphone, Version / Dependencies / OS / Browser
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Reproduction script
+ description: >
+ Please provide a reproducible script. Providing a narrow reproduction (minimal / no external dependencies) will
+ help us triage and address issues in a timely manner!
+ placeholder: >
+ Please provide a short code snippet (less than 50 lines if possible) that can be copy-pasted to
+ reproduce the issue. The snippet should have **no external library dependencies**
+ (i.e., use fake or mock data / environments).
+
+ **NOTE**: If the code snippet cannot be run by itself, the issue will be marked as "needs-repro-script"
+ until the repro instruction is updated.
+ validations:
+ required: true
+
+ - type: textarea
+ attributes:
+ label: Anything else
+ description: Anything else we need to know?
+ placeholder: >
+ How often does this problem occur? (Once? Every time? Only when certain conditions are met?)
+ Any relevant logs to include? Are there other relevant issues?
+
+ - type: checkboxes
+ attributes:
+ label: Are you willing to submit a PR?
+ description: >
+ This is absolutely not required, but we are happy to guide you in the contribution process
+ especially if you already have a good understanding of how to implement the fix.
+ options:
+ - label: Yes, I am willing to submit a PR!
+
+ - type: markdown
+ attributes:
+ value: "Thanks for completing our form!"
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000000000..7c34114e9
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,5 @@
+blank_issues_enabled: false
+contact_links:
+ - name: Ask a question or get support
+ url: https://github.com/WeBankFinTech/Exchangis/discussions
+ about: Ask a question or request support for using Exchangis
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 000000000..357f173ff
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,63 @@
+name: Exchangis feature request
+description: Suggest an idea for the Exchangis project
+title: "[Feature] "
+labels: [enhancement]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for finding the time to propose a new feature!
+ We really appreciate the community efforts to improve Exchangis.
+ - type: checkboxes
+ attributes:
+ label: Search before asking
+ description: >
+ Please make sure to search in the [issues](https://github.com/WeBankFinTech/Exchangis/issues) first to see
+ whether the same feature was requested already.
+ options:
+ - label: >
+ I searched the [issues](https://github.com/WeBankFinTech/Exchangis/issues) and found no similar
+ feature request.
+ required: true
+ - type: textarea
+ attributes:
+ label: Problem Description
+ description: Is your feature request related to a problem? Please describe.
+
+ - type: textarea
+ attributes:
+ label: Description
+ description: A short description of your feature
+
+ - type: textarea
+ attributes:
+ label: Use case
+ description: >
+ Describe the use case of your feature request.
+ placeholder: >
+ A clear and concise description of the solution you'd like and what you want to happen.
+
+ - type: textarea
+ attributes:
+ label: Alternative solutions
+ description: A clear and concise description of any alternative solutions or features you've considered.
+
+ - type: textarea
+ attributes:
+ label: Anything else
+ description: Anything else we need to know?
+ placeholder: >
+ Add any other context or screenshots about the feature request here.
+
+ - type: checkboxes
+ attributes:
+ label: Are you willing to submit a PR?
+ description: >
+ This is absolutely not required, but we are happy to guide you in the contribution process
+ especially if you already have a good understanding of how to implement the feature.
+ options:
+ - label: Yes, I am willing to submit a PR!
+
+ - type: markdown
+ attributes:
+ value: "Thanks for completing our form!"
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
new file mode 100644
index 000000000..57e883bcd
--- /dev/null
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,28 @@
+### What is the purpose of the change
+(For example: Exchangis-Job defines the core abilities of Exchangis; it provides job management, job transform, and job launch capabilities.
+Related issues: #50. )
+
+### Brief change log
+(for example:)
+- defines the job server module of Exchangis;
+- defines the job launcher module of Exchangis;
+- defines the job metrics module of Exchangis.
+
+### Verifying this change
+(Please pick either of the following options)
+This change is a trivial rework / code cleanup without any test coverage.
+(or)
+This change is already covered by existing tests, such as (please describe tests).
+(or)
+This change added tests and can be verified as follows:
+(example:)
+- Added tests for creating and executing Exchangis jobs, verifying the availability of different Exchangis job types, such as Sqoop jobs and DataX jobs.
+
+### Does this pull request potentially affect one of the following parts:
+- Dependencies (does it add or upgrade a dependency): (yes / no)
+- Anything that affects deployment: (yes / no / don't know)
+- The core framework, i.e., JobManager, Server: (yes / no)
+
+### Documentation
+- Does this pull request introduce a new feature? (yes / no)
+- If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
\ No newline at end of file
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 000000000..d0c49fde7
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,53 @@
+#
+# Copyright 2019 WeBank.
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+name: Exchangis CI Actions
+
+on:
+ push:
+ pull_request:
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ node-version: [16.13.1]
+ # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v2
+ - name: Set up JDK 8
+ uses: actions/setup-java@v2
+ with:
+ distribution: 'adopt'
+ java-version: 8
+ - name: Use Node.js ${{ matrix.node-version }}
+ uses: actions/setup-node@v2
+ with:
+ node-version: ${{ matrix.node-version }}
+ - name: Build backend by maven
+ run: |
+ mvn -N install
+ mvn clean package
+ - name: Build frontend by node.js
+ run: |
+ cd web
+ npm install
+ npm run build
diff --git a/.github/workflows/check_license.yml b/.github/workflows/check_license.yml
new file mode 100644
index 000000000..10e3f9fde
--- /dev/null
+++ b/.github/workflows/check_license.yml
@@ -0,0 +1,48 @@
+#
+# Copyright 2019 WeBank.
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+name: Exchangis License check
+
+on: [push, pull_request]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout source
+ uses: actions/checkout@v2
+ - name: Set up JDK 8
+ uses: actions/setup-java@v2
+ with:
+ java-version: '8'
+ distribution: 'adopt'
+ - name: mvn -N install
+ run:
+ mvn -N install
+ - name: License check with Maven
+ run: |
+ rat_file=`mvn apache-rat:check | { grep -oe "\\S\\+/rat.txt" || true; }`
+ echo "rat_file=$rat_file"
+ if [[ -n "$rat_file" ]];then echo "check error!" && cat $rat_file && exit 123;else echo "check success!" ;fi
+ - name: Upload the report
+ uses: actions/upload-artifact@v2
+ with:
+ name: license-check-report
+ path: "**/target/rat.txt"
diff --git a/.github/workflows/dead-link-checker.yml b/.github/workflows/dead-link-checker.yml
new file mode 100644
index 000000000..8de24aac8
--- /dev/null
+++ b/.github/workflows/dead-link-checker.yml
@@ -0,0 +1,17 @@
+name: Dead Link Check
+
+on: [push]
+
+jobs:
+ dead-links-check:
+ runs-on: ubuntu-latest
+ timeout-minutes: 30
+ if: (github.repository == 'WeBankFinTech/Exchangis')
+ steps:
+ - uses: actions/checkout@v3
+ - uses: gaurav-nelson/github-action-markdown-link-check@v1
+ with:
+ use-quiet-mode: 'no'
+ use-verbose-mode: 'yes'
+ folder-path: '../'
+ config-file: '.github/workflows/dlc.json'
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000000000..a2857cb35
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,40 @@
+/target/
+target
+
+### STS ###
+.apt_generated
+.classpath
+.factorypath
+.project
+.settings
+.springBeans
+.sts4-cache
+
+### IntelliJ IDEA ###
+.idea
+*.log
+*.iws
+*.iml
+*.ipr
+
+### NetBeans ###
+/nbproject/private/
+/build/
+/nbbuild/
+/dist/
+/nbdist/
+/.nb-gradle/
+.mvn/wrapper/maven-wrapper.jar
+.mvn/wrapper/maven-wrapper.properties
+/packages/
+exchangis-server/exchangis-extds
+/logs/
+/web/package-lock.json
+package-lock.json
+.DS_Store
+
+web/dist
+
+workspace/
+
+.flattened-pom.xml
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 000000000..50fd364a9
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,9 @@
+FROM harbor.local.hching.com/library/jdk:8u301
+
+ADD assembly-package/target/wedatasphere-exchangis-1.1.2.tar.gz /opt/wedatasphere-exchangis.tar.gz
+
+RUN cd /opt/wedatasphere-exchangis.tar.gz/packages/ && tar -zxf exchangis-server_1.1.2.tar.gz
+
+WORKDIR /opt/wedatasphere-exchangis.tar.gz/sbin
+
+ENTRYPOINT ["/bin/bash", "start.sh"]
diff --git a/README-ZH.md b/README-ZH.md
new file mode 100644
index 000000000..ffdc9911f
--- /dev/null
+++ b/README-ZH.md
@@ -0,0 +1,70 @@
+# Exchangis
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | 中文
+
+## 介绍
+
+Exchangis是微众银行大数据平台 WeDataSphere 与社区用户共同研发的新版数据交换工具,支持异构数据源之间的结构化和非结构化数据传输同步。
+
+Exchangis 抽象了一套统一的数据源和同步作业定义插件,允许用户快速接入新的数据源,并只需在数据库中简单配置即可在页面中使用。
+
+基于插件化的框架设计,及计算中间件 [Linkis](https://github.com/apache/incubator-linkis),Exchangis 可快速集成对接 Linkis 已集成的数据同步引擎,将 Exchangis 的同步作业转换成 Linkis 数据同步引擎的数据同步作业。
+
+借助于 [Linkis](https://github.com/apache/incubator-linkis) 计算中间件的连接、复用与简化能力,Exchangis 天生便具备了高并发、高可用、多租户隔离和资源管控的金融级数据同步能力。
+
+### 界面预览
+
+![image](images/zh_CN/ch1/frontend_view.png)
+
+## 核心特点
+
+### 1. 轻量化的数据源管理
+
+- 基于 Linkis DataSource,抽象了底层数据源在 Exchangis 作为一个同步作业的 Source 和 Sink 所必须的所有能力。只需简单配置即可完成一个数据源的创建。
+
+- 特有的数据源版本发布管理功能,支持历史版本数据源回滚,一键发布无需再次配置历史数据源。
+
+
+### 2. 高稳定,快响应的数据同步任务执行
+
+- **近实时任务管控**
+快速抓取传输任务日志以及传输速率等信息,对多任务包括CPU使用、内存使用、数据同步记录等各项指标进行监控展示,支持实时关闭任务;
+
+- **任务高并发传输**
+多任务并发执行,并且支持复制子任务,实时展示每个任务的状态,多租户执行功能有效避免执行过程中任务彼此影响;
+
+- **任务状态自检**
+监控长时间运行的任务和状态异常任务,中止任务并及时释放占用的资源。
+
+
+### 3. 与DSS工作流打通,一站式大数据开发的门户
+
+- 实现DSS AppConn包括一级 SSO 规范,二级组织结构规范,三级开发流程规范在内的三级规范;
+- 作为DSS工作流的数据交换节点,是整个工作流链路中的门户流程,为后续的工作流节点运行提供稳固的数据基础;
+
+### 4. 支持多种导数引擎
+
+- 支持Sqoop和DataX引擎进行多种异构数据源之间的导数
+
+## 整体设计
+
+### 架构设计
+
+![架构设计](images/zh_CN/ch1/home_page_zh.png)
+
+
+## 相关文档
+[安装部署文档](docs/zh_CN/ch1/exchangis_deploy_cn.md)
+[用户手册](docs/zh_CN/ch1/exchangis_user_manual_cn.md)
+
+## 交流贡献
+
+如果您想得到最快的响应,请给我们提 issue,或者扫码进群:
+
+![communication](images/zh_CN/ch1/code.png)
+
+## License
+
+Exchangis is under the Apache 2.0 License. See the [License](./LICENSE) file for details.
diff --git a/README.md b/README.md
new file mode 100644
index 000000000..5e211e77a
--- /dev/null
+++ b/README.md
@@ -0,0 +1,67 @@
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+English | [中文](README-ZH.md)
+
+## Introduction
+
+Exchangis is a new version of the data exchange tool jointly developed by WeDataSphere, the big data platform of WeBank, and community users. It supports structured and unstructured data transmission and synchronization between heterogeneous data sources.
+
+Exchangis abstracts a unified set of data source and synchronization job definition plugins, allowing users to quickly access new data sources and use them in the web pages with only simple configuration in the database.
+
+Based on the plugin framework design and the computing middleware [Linkis](https://github.com/apache/incubator-linkis), Exchangis can quickly integrate with the data synchronization engines already connected to Linkis, converting Exchangis synchronization jobs into data synchronization jobs of those engines.
+
+With the help of [Linkis](https://github.com/apache/incubator-linkis) computing middleware's connection, reuse and simplification capabilities, Exchangis is inherently equipped with financial-grade data synchronization capabilities of high concurrency, high availability, multi-tenant isolation and resource control.
+
+### Interface preview
+
+![image](images/zh_CN/ch1/frontend_view.png)
+
+## Core characteristics
+
+### 1. Lightweight datasource management
+
+- Based on Linkis DataSource, Exchangis abstracts all the necessary capabilities of the underlying data source as the Source and Sink of a synchronization job. A data source can be created with simple configuration.
+
+- A dedicated datasource version management function supports rolling back to historical datasource versions; with one-click publishing, there is no need to configure historical datasources again.
+
+
+### 2. High-stability and fast-response data synchronization task execution
+
+- **Near-real-time task management**
+ Quickly capture information such as transmission task log and transmission rate, monitor and display various indicators of multi-task including CPU usage, memory usage, data synchronization record, etc., and support closing tasks in real time.
+
+- **Task high concurrent transmission**
+ Multiple tasks are executed concurrently, sub-tasks can be copied, and the status of each task is displayed in real time. The multi-tenant execution function effectively prevents tasks from affecting each other during execution.
+
+- **Self-check of task status**
+ Monitor long-running tasks and abnormal tasks, stop tasks and release occupied resources in time.
+
+
+### 3. Integrate with DSS workflow, one-stop big data development portal
+
+- Realize DSS AppConn's three-level specification, including the first-level SSO specification, the second-level organizational structure specification and the third-level development process specification.
+
+- As the data exchange node of DSS workflow, it is the fundamental process in the whole workflow link, which provides a solid data foundation for the subsequent operation of workflow nodes.
+
+## Overall Design
+
+### Architecture Design
+
+![Architecture Design](images/zh_CN/ch1/home_page_en.png)
+
+
+## Documents
+
+[Quick Deploy](docs/en_US/ch1/exchangis_deploy_en.md)
+[User Manual](docs/en_US/ch1/exchangis_user_manual_en.md)
+
+## Communication and contribution
+
+If you want to get the fastest response, please raise an issue to us, or scan the QR code below to join our communication group:
+
+![communication](images/en_US/ch1/code.png)
+
+## License
+
+Exchangis is under the Apache 2.0 License. See the [License](./LICENSE) file for details.
+
diff --git a/assembly-package/config/application-exchangis.yml b/assembly-package/config/application-exchangis.yml
new file mode 100644
index 000000000..946ee2cb8
--- /dev/null
+++ b/assembly-package/config/application-exchangis.yml
@@ -0,0 +1,20 @@
+server:
+ port: 9321
+spring:
+ application:
+ name: dss-exchangis-main-server-dev
+eureka:
+ client:
+ serviceUrl:
+ defaultZone: http://{IP}:{PORT}/eureka/
+ instance:
+ metadata-map:
+ test: wedatasphere
+
+management:
+ endpoints:
+ web:
+ exposure:
+ include: refresh,info
+logging:
+ config: classpath:log4j2.xml
diff --git a/assembly-package/config/config.sh b/assembly-package/config/config.sh
new file mode 100644
index 000000000..9a4a0e502
--- /dev/null
+++ b/assembly-package/config/config.sh
@@ -0,0 +1,11 @@
+# IP address of the LINKIS_GATEWAY service, used to locate the linkis-mg-gateway service
+LINKIS_GATEWAY_HOST={IP}
+
+# Port of the LINKIS_GATEWAY service, used to locate the linkis-mg-gateway service
+LINKIS_GATEWAY_PORT={PORT}
+
+# Exchangis service port
+EXCHANGIS_PORT={PORT}
+
+# Eureka service URL
+EUREKA_URL=http://{IP:PORT}/eureka/
\ No newline at end of file
diff --git a/assembly-package/config/db.sh b/assembly-package/config/db.sh
new file mode 100644
index 000000000..cf33388e3
--- /dev/null
+++ b/assembly-package/config/db.sh
@@ -0,0 +1,9 @@
+# Database connection settings,
+# including IP address, database name, username, password, and port
+MYSQL_HOST={IP}
+MYSQL_PORT={PORT}
+MYSQL_USERNAME={username}
+MYSQL_PASSWORD={password}
+DATABASE={dbName}
+
+
diff --git a/assembly-package/config/dss-exchangis-server.properties b/assembly-package/config/dss-exchangis-server.properties
new file mode 100644
index 000000000..70ebaca62
--- /dev/null
+++ b/assembly-package/config/dss-exchangis-server.properties
@@ -0,0 +1,69 @@
+#
+# Copyright 2019 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#
+
+wds.linkis.test.mode=false
+wds.linkis.server.mybatis.datasource.url=jdbc:mysql://{IP}:{PORT}/{database}?useSSL=false&characterEncoding=UTF-8&allowMultiQueries=true&useAffectedRows=true
+wds.linkis.server.mybatis.datasource.username={username}
+wds.linkis.server.mybatis.datasource.password={password}
+
+wds.linkis.gateway.ip={LINKIS_IP}
+wds.linkis.gateway.port={LINKIS_PORT}
+wds.linkis.gateway.url=http://{LINKIS_IP}:{LINKIS_PORT}/
+wds.linkis.log.clear=true
+wds.linkis.server.version=v1
+
+# server rpc
+wds.linkis.ms.service.scan.package=com.webank.wedatasphere.exchangis
+
+# datasource client
+wds.exchangis.datasource.client.server-url=http://{LINKIS_IP}:{LINKIS_PORT}/
+wds.exchangis.datasource.client.token.value=EXCHANGIS-AUTH
+wds.exchangis.datasource.client.dws.version=v1
+
+# launcher client
+wds.exchangis.client.linkis.server-url=http://{LINKIS_IP}:{LINKIS_PORT}/
+wds.exchangis.client.linkis.token.value=EXCHANGIS-AUTH
+wds.exchangis.datasource.extension.dir=exchangis-extds/
+
+##restful
+wds.linkis.server.restful.scan.packages=com.webank.wedatasphere.exchangis.datasource.server.restful.api,\
+ com.webank.wedatasphere.exchangis.project.server.restful,\
+ com.webank.wedatasphere.exchangis.job.server.restful
+
+wds.linkis.server.mybatis.mapperLocations=classpath*:com/webank/wedatasphere/exchangis/job/server/mapper/impl/*.xml,\
+classpath*:com/webank/wedatasphere/exchangis/project/server/mapper/impl/*.xml,\
+classpath*:com/webank/wedatasphere/exchangis/project/provider/mapper/impl/*.xml,\
+classpath*:com/webank/wedatasphere/exchangis/engine/server/mapper/*.xml
+
+wds.linkis.server.mybatis.BasePackage=com.webank.wedatasphere.exchangis.dao,\
+ com.webank.wedatasphere.exchangis.project.server.mapper,\
+ com.webank.wedatasphere.exchangis.project.provider.mapper,\
+ com.webank.wedatasphere.linkis.configuration.dao,\
+ com.webank.wedatasphere.linkis.metadata.dao,\
+ com.webank.wedatasphere.exchangis.job.server.mapper,\
+ com.webank.wedatasphere.exchangis.job.server.dao,\
+ com.webank.wedatasphere.exchangis.engine.dao
+
+wds.exchangis.job.task.scheduler.load-balancer.flexible.segments.min-occupy=0.25
+wds.exchangis.job.task.scheduler.load-balancer.flexible.segments.max-occupy=0.5
+#wds.exchangis.job.scheduler.group.max.running-jobs=4
+
+wds.linkis-session.ticket.key=bdp-user-ticket-id
+wds.exchangis.limit.interface.value=false
+
+wds.exchangis.publicKeyStr=
+wds.exchangis.privateKeyStr=
diff --git a/assembly-package/config/log4j2.xml b/assembly-package/config/log4j2.xml
new file mode 100644
index 000000000..121b48d1d
--- /dev/null
+++ b/assembly-package/config/log4j2.xml
@@ -0,0 +1,53 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/assembly-package/config/transform-processor-templates/datax-processor.java b/assembly-package/config/transform-processor-templates/datax-processor.java
new file mode 100644
index 000000000..e69de29bb
diff --git a/assembly-package/pom.xml b/assembly-package/pom.xml
new file mode 100644
index 000000000..dc473f537
--- /dev/null
+++ b/assembly-package/pom.xml
@@ -0,0 +1,78 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Copyright 2019 WeBank
+  ~
+  ~ Licensed under the Apache License, Version 2.0 (the "License");
+  ~ you may not use this file except in compliance with the License.
+  ~ You may obtain a copy of the License at
+  ~
+  ~ http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>exchangis</artifactId>
+        <groupId>com.webank.wedatasphere.exchangis</groupId>
+        <version>${revision}</version>
+        <relativePath>../pom.xml</relativePath>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>assembly-package</artifactId>
+    <packaging>pom</packaging>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-install-plugin</artifactId>
+                <version>2.4</version>
+                <configuration>
+                    <skip>true</skip>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-antrun-plugin</artifactId>
+                <version>1.3</version>
+                <executions>
+                    <execution>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>run</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-assembly-plugin</artifactId>
+                <version>3.1.0</version>
+                <executions>
+                    <execution>
+                        <id>dist</id>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>single</goal>
+                        </goals>
+                        <configuration>
+                            <skipAssembly>false</skipAssembly>
+                            <finalName>wedatasphere-exchangis-${revision}</finalName>
+                            <appendAssemblyId>false</appendAssemblyId>
+                            <attach>false</attach>
+                            <descriptors>
+                                <descriptor>src/main/assembly/assembly.xml</descriptor>
+                            </descriptors>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/assembly-package/sbin/common.sh b/assembly-package/sbin/common.sh
new file mode 100644
index 000000000..8ee615b64
--- /dev/null
+++ b/assembly-package/sbin/common.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+declare -A MODULE_MAIN_CLASS
+MODULE_MAIN_CLASS["dss-exchangis-main-server-dev"]="com.webank.wedatasphere.exchangis.server.boot.ExchangisServerApplication"
diff --git a/assembly-package/sbin/configure.sh b/assembly-package/sbin/configure.sh
new file mode 100644
index 000000000..e61c428da
--- /dev/null
+++ b/assembly-package/sbin/configure.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# configure modules
+
+configure_main(){
+    : # no-op placeholder, implement module configuration here (an empty bash function body is a syntax error)
+}
+
+configure_server(){
+    : # no-op placeholder, implement server configuration here (an empty bash function body is a syntax error)
+}
\ No newline at end of file
diff --git a/assembly-package/sbin/daemon.sh b/assembly-package/sbin/daemon.sh
new file mode 100644
index 000000000..c7ee8f1e1
--- /dev/null
+++ b/assembly-package/sbin/daemon.sh
@@ -0,0 +1,69 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+load_env_definitions ${ENV_FILE}
+if [[ "x"${EXCHANGIS_HOME} != "x" ]]; then
+ source ${EXCHANGIS_HOME}/sbin/launcher.sh
+ source ${EXCHANGIS_HOME}/sbin/common.sh
+else
+ source ./launcher.sh
+ source ./common.sh
+fi
+
+usage(){
+ echo "Usage is [start|stop|restart {server}]"
+}
+
+start(){
+ # call launcher
+ launcher_start $1 $2
+}
+
+stop(){
+ # call launcher
+ launcher_stop $1 $2
+}
+
+restart(){
+ launcher_stop $1 $2
+ if [[ $? -eq 0 ]]; then
+ sleep 3
+ launcher_start $1 $2
+ fi
+}
+
+COMMAND=$1
+case $COMMAND in
+ start|stop|restart)
+ if [[ ! -z $2 ]]; then
+ SERVICE_NAME=${MODULE_DEFAULT_PREFIX}$2${MODULE_DEFAULT_SUFFIX}
+ MAIN_CLASS=${MODULE_MAIN_CLASS[${SERVICE_NAME}]}
+ if [[ "x"${MAIN_CLASS} != "x" ]]; then
+ $COMMAND ${SERVICE_NAME} ${MAIN_CLASS}
+ else
+ LOG ERROR "Cannot find the main class for [ ${SERVICE_NAME} ]"
+ fi
+ else
+ usage
+ exit 1
+ fi
+ ;;
+ *)
+ usage
+ exit 1
+ ;;
+esac
\ No newline at end of file
diff --git a/assembly-package/sbin/env.properties b/assembly-package/sbin/env.properties
new file mode 100644
index 000000000..c6e528ab4
--- /dev/null
+++ b/assembly-package/sbin/env.properties
@@ -0,0 +1,6 @@
+EXCHANGIS_CONF_PATH=/appcom/config/exchangis-config/background
+EXCHANGIS_LOG_PATH=/appcom/logs/exchangis/background
+MODULE_DEFAULT_PREFIX="dss-exchangis-main-"
+MODULE_DEFAULT_SUFFIX="-dev"
+DEBUG_MODE=false
+DEBUG_PORT=8321
\ No newline at end of file
diff --git a/assembly-package/sbin/install.sh b/assembly-package/sbin/install.sh
new file mode 100644
index 000000000..2ce1569f7
--- /dev/null
+++ b/assembly-package/sbin/install.sh
@@ -0,0 +1,226 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+source ~/.bashrc
+shellDir=`dirname $0`
+workDir=`cd ${shellDir}/..;pwd`
+
+SOURCE_ROOT=${workDir}
+#load config
+source ${SOURCE_ROOT}/config/config.sh
+source ${SOURCE_ROOT}/config/db.sh
+DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+SHELL_LOG="${DIR}/console.out" # console log file
+export SQL_SOURCE_PATH="${DIR}/../db/exchangis_ddl.sql"
+PACKAGE_DIR="${DIR}/../packages"
+# Home Path
+EXCHNGIS_HOME_PATH="${DIR}/../"
+
+CONF_FILE_PATH="sbin/configure.sh"
+FORCE_INSTALL=false
+SKIP_PACKAGE=false
+USER=`whoami`
+SUDO_USER=false
+
+CONF_PATH=${DIR}/../config
+
+usage(){
+ printf "\033[1m Install project, run directly\n\033[0m"
+}
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG} # tee -a 输出是追加到文件里面
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+BIN=`abs_path`
+
+
+is_sudo_user(){
+ sudo -v >/dev/null 2>&1 # sudo prompts for a password on first use or after N idle minutes (N defaults to 5)
+ # "2>&1" redirects stderr to stdout and ">/dev/null" discards stdout,
+ # so ">/dev/null 2>&1" suppresses all output from the command.
+}
+
+uncompress_packages(){
+ LOG INFO "\033[1m package dir is: [${PACKAGE_DIR}]\033[0m"
+ local list=`ls ${PACKAGE_DIR}`
+ LOG INFO "\033[1m package list is: [${list}]\033[0m"
+ for pack in ${list}
+ do
+ local uncompress=true
+ if [ ${#PACKAGE_NAMES[@]} -gt 0 ]; then
+ uncompress=false
+ for server in ${PACKAGE_NAMES[@]}
+ do
+ if [ ${server} == ${pack%%.tar.gz*} ] || [ ${server} == ${pack%%.zip*} ]; then
+ uncompress=true
+ break
+ fi
+ done
+ fi
+ if [ ${uncompress} == true ]; then
+ if [[ ${pack} =~ tar\.gz$ ]]; then
+ local do_uncompress=0
+ #if [ ${FORCE_INSTALL} == false ]; then
+ # interact_echo "Do you want to decompress this package: [${pack}]?"
+ # do_uncompress=$?
+ #fi
+ if [ ${do_uncompress} == 0 ]; then
+ LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
+ tar --skip-old-files -zxf ${PACKAGE_DIR}/${pack} -C ../
+ fi
+ elif [[ ${pack} =~ zip$ ]]; then
+ local do_uncompress=0
+ #if [ ${FORCE_INSTALL} == false ]; then
+ # interact_echo "Do you want to decompress this package: [${pack}]?"
+ # do_uncompress=$?
+ #fi
+ if [ ${do_uncompress} == 0 ]; then
+ LOG INFO "\033[1m Uncompress package: [${pack}] to modules directory\033[0m"
+ unzip -nq ${PACKAGE_DIR}/${pack} -d ../ # -n: never overwrite existing files when uncompressing
+ fi
+ fi
+ # skip other packages
+ fi
+ done
+}
+
+interact_echo(){
+ while [ 1 ]; do
+ read -p "$1 (Y/N)" yn
+ if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
+ return 0
+ elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
+ return 1
+ else
+ echo "Unknown choice: [$yn], please choose again."
+ fi
+ done
+}
+
+# Initialize database
+init_database(){
+ BOOTSTRAP_PROP_FILE="${CONF_PATH}/dss-exchangis-main-server-dev.properties"
+ if [ "x${SQL_SOURCE_PATH}" != "x" ] && [ -f "${SQL_SOURCE_PATH}" ]; then
+ mysql --version >/dev/null 2>&1
+ DATASOURCE_URL="jdbc:mysql:\/\/${MYSQL_HOST}:${MYSQL_PORT}\/${DATABASE}\?useSSL=false\&characterEncoding=UTF-8\&allowMultiQueries=true"
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.username=)\S*!\1${MYSQL_USERNAME}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.password=)\S*!\1${MYSQL_PASSWORD}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.url=)\S*!\1${DATASOURCE_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ interact_echo "Do you want to initialize database with sql: [${SQL_SOURCE_PATH}]?"
+ if [ $? == 0 ]; then
+ LOG INFO "\033[1m Found mysql command, begin to initialize the database\033[0m"
+ mysql -h ${MYSQL_HOST} -P ${MYSQL_PORT} -u ${MYSQL_USERNAME} -p${MYSQL_PASSWORD} --default-character-set=utf8 -e \
+ "CREATE DATABASE IF NOT EXISTS ${DATABASE}; USE ${DATABASE}; source ${SQL_SOURCE_PATH};"
+ fi
+ fi
+}
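The `init_database` and `init_properties` functions both rely on the same GNU `sed -ri` idiom: match an optionally commented `key=value` line and rewrite its value in place. A minimal standalone sketch of that substitution (the temp file and the value `hadoop` are illustrative, not from the script):

```shell
#!/bin/bash
# Demo of the sed substitution used by init_database/init_properties:
# rewrite (and uncomment) a `key=value` line in a properties file.
PROP_FILE=$(mktemp)
echo "#wds.linkis.server.mybatis.datasource.username=old" > "${PROP_FILE}"
# [#]? also matches a commented-out line; \1 keeps the key, only the value is replaced
sed -ri "s![#]?(wds.linkis.server.mybatis.datasource.username=)\S*!\1hadoop!g" "${PROP_FILE}"
cat "${PROP_FILE}"   # -> wds.linkis.server.mybatis.datasource.username=hadoop
rm -f "${PROP_FILE}"
```

Note the `!` delimiter, which keeps the pattern readable when the replacement (e.g. a JDBC URL) contains `/`.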
+
+init_properties(){
+ BOOTSTRAP_PROP_FILE="${CONF_PATH}/dss-exchangis-server.properties"
+ APPLICATION_YML="${CONF_PATH}/application-exchangis.yml"
+ LINKIS_GATEWAY_URL="http:\/\/${LINKIS_GATEWAY_HOST}:${LINKIS_GATEWAY_PORT}\/"
+ if [ "x${LINKIS_SERVER_URL}" == "x" ]; then
+ LINKIS_SERVER_URL="http://127.0.0.1:9001"
+ fi
+
+ sed -ri "s![#]?(wds.exchangis.datasource.client.serverurl=)\S*!\1${LINKIS_GATEWAY_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(wds.exchangis.client.linkis.server-url=)\S*!\1${LINKIS_GATEWAY_URL}!g" ${BOOTSTRAP_PROP_FILE}
+ sed -ri "s![#]?(port: )\S*!\1${EXCHANGIS_PORT}!g" ${APPLICATION_YML}
+ sed -ri "s![#]?(defaultZone: )\S*!\1${EUREKA_URL}!g" ${APPLICATION_YML}
+}
+
+install_modules(){
+ LOG INFO "\033[1m ####### Start To Install project ######\033[0m"
+ echo ""
+ if [ ${FORCE_INSTALL} == false ]; then
+ LOG INFO "\033[1m Install project ......\033[0m"
+ init_database
+ init_properties
+ else
+ LOG INFO "\033[1m Install project ......\033[0m"
+ init_database
+ fi
+ LOG INFO "\033[1m ####### Finish To Install Project ######\033[0m"
+}
+
+
+while [ 1 ]; do
+ case ${!OPTIND} in
+ -h|--help)
+ usage
+ exit 0
+ ;;
+ "")
+ break
+ ;;
+ *)
+ echo "Argument error! " 1>&2
+ exit 1
+ ;;
+ esac
+done
+
+is_sudo_user
+if [ $? == 0 ]; then
+ SUDO_USER=true
+fi
+
+MODULE_LIST_RESOLVED=()
+c=0
+RESOLVED_DIR=${PACKAGE_DIR}
+
+server="exchangis-server"
+LOG INFO "\033[1m ####### server is [${server}] ######\033[0m"
+server_list=`ls ${RESOLVED_DIR} | grep -E "^(${server}|${server}_[0-9]+\\.[0-9]+\\.[0-9]+)" | grep -E "(\\.tar\\.gz|\\.zip|)$"`
+LOG INFO "\033[1m ####### server_list is [${server_list}] ######\033[0m"
+for _server in ${server_list}
+ do
+ # Is there a better way to cut the string?
+ _server=${_server%%.tar.gz*}
+ _server=${_server%%.zip*}
+ MODULE_LIST_RESOLVED[$c]=${_server}
+ c=$(($c + 1))
+ done
+if [ ${SKIP_PACKAGE} == true ]; then
+ MODULE_LIST=("${MODULE_LIST_RESOLVED[@]}")
+else
+ PACKAGE_NAMES=("${MODULE_LIST_RESOLVED[@]}")
+fi
+
+
+LOG INFO "\033[1m ####### Start To Uncompress Packages ######\033[0m"
+LOG INFO "Uncompressing...."
+uncompress_packages
+LOG INFO "\033[1m ####### Finish To Uncompress Packages ######\033[0m"
+
+install_modules
+
+
+exit 0
+
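The module-name resolution above strips `.tar.gz`/`.zip` suffixes with bash `%%` pattern deletion. A self-contained sketch of that idiom (the helper name is ours, not part of install.sh):

```shell
#!/bin/bash
# Illustrative helper: strip package suffixes the same way the
# module-resolution loop in install.sh does.
strip_pkg_suffix(){
    local name="$1"
    name="${name%%.tar.gz*}"   # remove everything from ".tar.gz" onward
    name="${name%%.zip*}"      # remove everything from ".zip" onward
    echo "${name}"
}

strip_pkg_suffix "exchangis-server_1.1.2.tar.gz"   # -> exchangis-server_1.1.2
strip_pkg_suffix "exchangis-server.zip"            # -> exchangis-server
```

`%%pattern` deletes the longest matching suffix, so version dots in the name are untouched because the pattern must begin with the literal `.tar.gz` or `.zip`.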
diff --git a/assembly-package/sbin/launcher.sh b/assembly-package/sbin/launcher.sh
new file mode 100644
index 000000000..4c9530eae
--- /dev/null
+++ b/assembly-package/sbin/launcher.sh
@@ -0,0 +1,252 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Launcher for modules, provided start/stop functions
+
+DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+ENV_FILE="${DIR}/env.properties"
+SHELL_LOG="${DIR}/command.log"
+USER_DIR="${DIR}/../"
+EXCHANGIS_LIB_PATH="${DIR}/../lib"
+EXCHANGIS_PID_PATH="${DIR}/../runtime"
+# Default
+MAIN_CLASS=""
+DEBUG_MODE=False
+DEBUG_PORT="7006"
+SPRING_PROFILE="exchangis"
+SLEEP_TIMEREVAL_S=2
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+verify_java_env(){
+ if [[ "x${JAVA_HOME}" != "x" ]]; then
+ ${JAVA_HOME}/bin/java -version >/dev/null 2>&1
+ else
+ java -version >/dev/null 2>&1
+ fi
+ if [[ $? -ne 0 ]]; then
+ cat 1>&2 <<EOF
+Error: Java environment is not available, please install Java and set JAVA_HOME.
+EOF
+ exit 1
+ fi
+}
+
+# Input: $1:module_name, $2:main class
+status_class(){
+ local p=""
+ local pid_file_path=${EXCHANGIS_PID_PATH}/$1.pid
+ if [ "x"${pid_file_path} != "x" ]; then
+ if [ -f ${pid_file_path} ]; then
+ local pid_in_file=`cat ${pid_file_path} 2>/dev/null`
+ if [ "x"${pid_in_file} != "x" ]; then
+ p=`${JPS} -q | grep ${pid_in_file} | awk '{print $1}'`
+ fi
+ fi
+ else
+ p=`${JPS} -l | grep "$2" | awk '{print $1}'`
+ fi
+ if [ -n "$p" ]; then
+ # echo "$1 ($2) is still running with pid $p"
+ return 0
+ else
+ # echo "$1 ($2) does not appear in the java process table"
+ return 1
+ fi
+}
+
+wait_for_startup(){
+ local now_s=`date '+%s'`
+ local stop_s=$((${now_s} + $1))
+ while [ ${now_s} -le ${stop_s} ];do
+ status_class $2 $3
+ if [ $? -eq 0 ]; then
+ return 0
+ fi
+ sleep ${SLEEP_TIMEREVAL_S}
+ now_s=`date '+%s'`
+ done
+ return 1
+}
+
+wait_for_stop(){
+ local now_s=`date '+%s'`
+ local stop_s=$((${now_s} + $1))
+ while [ ${now_s} -le ${stop_s} ];do
+ status_class $2 $3
+ if [ $? -eq 1 ]; then
+ return 0
+ fi
+ sleep ${SLEEP_TIMEREVAL_S}
+ now_s=`date '+%s'`
+ done
+ return 1
+}
+
+# Input: $1:module_name, $2:main class
+launcher_start(){
+ LOG INFO "Launcher: launch to start server [ $1 ]"
+ status_class $1 $2
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] is already running"
+ return 0
+ fi
+ construct_java_command $1 $2
+ # Execute
+ echo ${EXEC_JAVA}
+ LOG INFO ${EXEC_JAVA}
+ nohup ${EXEC_JAVA} >/dev/null 2>&1 &
+ LOG INFO "Launcher: waiting [ $1 ] to start complete ..."
+ wait_for_startup 20 $1 $2
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] start success"
+ LOG INFO ${EXCHANGIS_CONF_PATH}
+ APPLICATION_YML="${EXCHANGIS_CONF_PATH}/application-exchangis.yml"
+ EUREKA_URL=`grep Zone ${APPLICATION_YML} | sed -n '1p'`
+ echo "${EUREKA_URL}"
+ LOG INFO "Please check exchangis server in EUREKA_ADDRESS: ${EUREKA_URL#*:} "
+ else
+ LOG ERROR "Launcher: [ $1 ] failed to start within 20 seconds, please retry"
+ fi
+}
+
+# Input: $1:module_name, $2:main class
+launcher_stop(){
+ LOG INFO "Launcher: stop the server [ $1 ]"
+ local p=""
+ local pid_file_path=${EXCHANGIS_PID_PATH}/$1.pid
+ if [ "x"${pid_file_path} != "x" ]; then
+ if [ -f ${pid_file_path} ]; then
+ local pid_in_file=`cat ${pid_file_path} 2>/dev/null`
+ if [ "x"${pid_in_file} != "x" ]; then
+ p=`${JPS} -q | grep ${pid_in_file} | awk '{print $1}'`
+ fi
+ fi
+ elif [[ "x"$2 != "x" ]]; then
+ p=`${JPS} -l | grep "$2" | awk '{print $1}'`
+ fi
+ if [[ -z ${p} ]]; then
+ LOG INFO "Launcher: [ $1 ] is not running, nothing to stop (not found in the java process table)"
+ return 0
+ fi
+ case "`uname`" in
+ CYGWIN*) taskkill /PID "${p}" ;;
+ *) kill -SIGTERM "${p}" ;;
+ esac
+ LOG INFO "Launcher: waiting [ $1 ] to stop complete ..."
+ wait_for_stop 20 $1 $2
+ if [[ $? -eq 0 ]]; then
+ LOG INFO "Launcher: [ $1 ] stop success"
+ else
+ LOG ERROR "Launcher: [ $1 ] stop timed out after 20 seconds" >&2
+ return 1
+ fi
+}
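`wait_for_startup` and `wait_for_stop` are both instances of the same poll-until-deadline pattern: check a status command every `SLEEP_TIMEREVAL_S` seconds until it reports the desired state or the timeout elapses. A generic sketch (the `wait_until` name is illustrative, not from launcher.sh):

```shell
#!/bin/bash
# Generic poll-until-deadline helper mirroring wait_for_startup/wait_for_stop.
SLEEP_TIMEREVAL_S=1

wait_until(){
    # $1: timeout in seconds; remaining args: command to poll
    local timeout_s=$1; shift
    local stop_s=$(( $(date '+%s') + timeout_s ))
    while [ "$(date '+%s')" -le "${stop_s}" ]; do
        if "$@"; then
            return 0    # condition met before the deadline
        fi
        sleep ${SLEEP_TIMEREVAL_S}
    done
    return 1            # timed out
}

wait_until 3 true && echo "condition met"   # succeeds on the first poll
```

Passing the probe as a command (rather than hard-coding `status_class`) keeps the loop reusable for both the start and stop cases.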
diff --git a/assembly-package/sbin/start-server.sh b/assembly-package/sbin/start-server.sh
new file mode 100644
index 000000000..5889993c8
--- /dev/null
+++ b/assembly-package/sbin/start-server.sh
@@ -0,0 +1,54 @@
+#!/bin/bash
+#
+# Copyright 2020 WeBank
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Start exchangis-server module
+MODULE_NAME="exchangis-server"
+
+function LOG(){
+ currentTime=`date "+%Y-%m-%d %H:%M:%S.%3N"`
+ echo -e "$currentTime [${1}] ($$) $2" | tee -a ${SHELL_LOG}
+}
+
+abs_path(){
+ SOURCE="${BASH_SOURCE[0]}"
+ while [ -h "${SOURCE}" ]; do
+ DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+ SOURCE="$(readlink "${SOURCE}")"
+ [[ ${SOURCE} != /* ]] && SOURCE="${DIR}/${SOURCE}"
+ done
+ echo "$( cd -P "$( dirname "${SOURCE}" )" && pwd )"
+}
+
+BIN=`abs_path`
+SHELL_LOG="${BIN}/console.out"
+
+interact_echo(){
+ while [ 1 ]; do
+ read -p "$1 (Y/N)" yn
+ if [ "${yn}x" == "Yx" ] || [ "${yn}x" == "yx" ]; then
+ return 0
+ elif [ "${yn}x" == "Nx" ] || [ "${yn}x" == "nx" ]; then
+ return 1
+ else
+ echo "Unknown choice: [$yn], please choose again."
+ fi
+ done
+}
+
+start_main(){
+ # Placeholder: module start logic (e.g. delegating to launcher.sh) is not implemented yet
+ :
+}
+
+start_main
+exit $?
diff --git a/assembly-package/src/main/assembly/assembly.xml b/assembly-package/src/main/assembly/assembly.xml
new file mode 100644
index 000000000..e873afe23
--- /dev/null
+++ b/assembly-package/src/main/assembly/assembly.xml
@@ -0,0 +1,77 @@
+<assembly>
+    <id>exchangis</id>
+    <formats>
+        <format>tar.gz</format>
+    </formats>
+    <includeBaseDirectory>false</includeBaseDirectory>
+    <fileSets>
+        <fileSet>
+            <directory>${basedir}/sbin</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>sbin</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/bin</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>bin</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/config</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>config</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/../db</directory>
+            <includes>
+                <include>*</include>
+            </includes>
+            <fileMode>0777</fileMode>
+            <outputDirectory>db</outputDirectory>
+            <lineEnding>unix</lineEnding>
+        </fileSet>
+        <fileSet>
+            <directory>${basedir}/../exchangis-server/target/packages</directory>
+            <includes>
+                <include>*.tar.gz</include>
+                <include>*.zip</include>
+            </includes>
+            <fileMode>0755</fileMode>
+            <outputDirectory>packages</outputDirectory>
+        </fileSet>
+    </fileSets>
+</assembly>
\ No newline at end of file
diff --git a/assembly/package.xml b/assembly/package.xml
deleted file mode 100644
index cef49a33c..000000000
--- a/assembly/package.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<assembly>
-    <id>main</id>
-    <formats>
-        <format>tar.gz</format>
-    </formats>
-    <includeBaseDirectory>true</includeBaseDirectory>
-    <fileSets>
-        <fileSet>
-            <directory>../packages</directory>
-            <includes>
-                <include>exchangis*</include>
-            </includes>
-            <outputDirectory>packages</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <lineEnding>unix</lineEnding>
-            <directory>../bin</directory>
-            <outputDirectory>bin</outputDirectory>
-            <fileMode>0755</fileMode>
-        </fileSet>
-        <fileSet>
-            <directory>../docs</directory>
-            <outputDirectory>docs</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <directory>../images</directory>
-            <outputDirectory>images</outputDirectory>
-        </fileSet>
-        <fileSet>
-            <directory>../</directory>
-            <lineEnding>unix</lineEnding>
-            <includes>
-                <include>README.md</include>
-                <include>LICENSE</include>
-            </includes>
-            <outputDirectory>/</outputDirectory>
-        </fileSet>
-    </fileSets>
-</assembly>
\ No newline at end of file
diff --git a/db/1.1.1/exchangis_ddl.sql b/db/1.1.1/exchangis_ddl.sql
new file mode 100644
index 000000000..1002aa86b
--- /dev/null
+++ b/db/1.1.1/exchangis_ddl.sql
@@ -0,0 +1,88 @@
+-- exchangis_job_func definition
+DROP TABLE IF EXISTS `exchangis_job_func`;
+CREATE TABLE `exchangis_job_func` (
+ `id` int(11) NOT NULL AUTO_INCREMENT,
+ `func_type` varchar(50) NOT NULL,
+ `func_name` varchar(100) NOT NULL,
+ `tab_name` varchar(50) NOT NULL COMMENT 'Tab',
+ `name_dispaly` varchar(100) DEFAULT NULL,
+ `param_num` int(11) DEFAULT '0',
+ `ref_name` varchar(100) DEFAULT NULL,
+ `description` varchar(200) DEFAULT NULL,
+ `modify_time` datetime DEFAULT NULL,
+ `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `job_func_tab_name_idx` (`tab_name`,`func_name`)
+) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8;
+
+-- exchangis_job_func_params definition
+DROP TABLE IF EXISTS `exchangis_job_func_params`;
+CREATE TABLE IF NOT EXISTS `exchangis_job_func_params`(
+ `func_id` INT(11) NOT NULL,
+ `param_name` VARCHAR(100) NOT NULL,
+ `order` INT(11) DEFAULT 0,
+ `name_display` VARCHAR(100),
+ `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY(`func_id`, `param_name`)
+)Engine=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_job_param_config definition
+DROP TABLE IF EXISTS `exchangis_job_param_config`;
+CREATE TABLE `exchangis_job_param_config` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `config_key` varchar(64) NOT NULL,
+ `config_name` varchar(64) NOT NULL,
+ `config_direction` varchar(16) DEFAULT NULL,
+ `type` varchar(32) NOT NULL,
+ `ui_type` varchar(32) DEFAULT NULL,
+ `ui_field` varchar(64) DEFAULT NULL,
+ `ui_label` varchar(32) DEFAULT NULL,
+ `unit` varchar(32) DEFAULT NULL,
+ `required` bit(1) DEFAULT b'0',
+ `value_type` varchar(32) DEFAULT NULL,
+ `value_range` varchar(255) DEFAULT NULL,
+ `default_value` varchar(255) DEFAULT NULL,
+ `validate_type` varchar(64) DEFAULT NULL,
+ `validate_range` varchar(64) DEFAULT NULL,
+ `validate_msg` varchar(255) DEFAULT NULL,
+ `is_hidden` bit(1) DEFAULT NULL,
+ `is_advanced` bit(1) DEFAULT NULL,
+ `source` varchar(255) DEFAULT NULL,
+ `level` tinyint(4) DEFAULT NULL,
+ `treename` varchar(32) DEFAULT NULL,
+ `sort` int(11) DEFAULT NULL,
+ `description` varchar(255) DEFAULT NULL,
+ `status` tinyint(4) DEFAULT NULL,
+ `ref_id` bigint(20) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=utf8;
+
+-- exchangis_engine_settings definition
+DROP TABLE IF EXISTS `exchangis_engine_settings`;
+CREATE TABLE `exchangis_engine_settings` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `engine_name` varchar(50) NOT NULL,
+ `engine_desc` varchar(500) NOT NULL,
+ `engine_settings_value` text,
+ `engine_direction` varchar(255) NOT NULL,
+ `res_loader_class` varchar(255),
+ `res_uploader_class` varchar(255),
+ `modify_time` datetime DEFAULT NULL,
+ `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `engine_setting_idx` (`engine_name`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_job_transform_rule
+DROP TABLE IF EXISTS `exchangis_job_transform_rule`;
+CREATE TABLE `exchangis_job_transform_rule` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `rule_name` varchar(100) NOT NULL DEFAULT 'transform_rule',
+ `rule_type` varchar(64) NOT NULL DEFAULT 'DEF',
+ `rule_source` varchar(600) DEFAULT '{}',
+ `data_source_type` varchar(64) NOT NULL,
+ `engine_type` varchar(32),
+ `direction` varchar(32) NOT NULL DEFAULT 'NONE',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
\ No newline at end of file
diff --git a/db/1.1.1/exchangis_dml.sql b/db/1.1.1/exchangis_dml.sql
new file mode 100644
index 000000000..3e546d667
--- /dev/null
+++ b/db/1.1.1/exchangis_dml.sql
@@ -0,0 +1,79 @@
+-- job_func records
+INSERT INTO `exchangis_job_func`(func_type,func_name,tab_name,name_dispaly,param_num,ref_name,description,modify_time) VALUES
+('TRANSFORM','dx_substr','DATAX',NULL,2,NULL,NULL,NULL)
+,('TRANSFORM','dx_pad','DATAX',NULL,3,NULL,NULL,NULL)
+,('TRANSFORM','dx_replace','DATAX',NULL,3,NULL,NULL,NULL)
+,('VERIFY','like','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','not like','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','>','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','<','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','!=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','>=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('TRANSFORM','dx_precision','DATAX',NULL,1,NULL,NULL,NULL)
+;
+
+-- job_func_params records
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padType', 'padType(r or l)', 0) ON DUPLICATE KEY UPDATE `name_display` = 'padType(r or l)';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padString', 'padString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'padString';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'replaceString', 'replaceString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'replaceString';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(4, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(5, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(6, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(7, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(8, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(9, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(10, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+
+-- job_param_config records
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('setting.speed.byte','作业速率限制','','DATAX','INPUT','setting.speed.bytes','作业速率限制','Mb/s',1,'NUMBER','','5','REGEX','^[1-9]d*$','作业速率限制输入错误',0,0,'',1,'',1,'',1,NULL)
+,('setting.speed.record','作业记录数限制','','DATAX','INPUT','setting.speed.records','作业记录数限制','条/s',1,'NUMBER','','100','REGEX','^[1-9]d*$','作业记录数限制输入错误',0,0,'',1,'',2,'',1,NULL)
+,('setting.speed.channel','作业最大并行度','','DATAX','INPUT','setting.max.parallelism','作业最大并行度','个',1,'NUMBER','','1','REGEX','^[1-9]d*$','作业最大并行度输入错误',0,0,'',1,'',3,'',1,NULL)
+,('setting.max.memory','作业最大使用内存','','DATAX','INPUT','setting.max.memory','作业最大使用内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]d*$','作业最大使用内存输入错误',0,0,'',1,'',4,'',1,NULL)
+,('setting.errorLimit.record','最多错误记录数','','DATAX','INPUT','setting.errorlimit.record','最多错误记录数','条',0,'NUMBER','','','REGEX','^[0-9]d*$','最多错误记录数输入错误',0,0,'',1,'',5,'',1,NULL)
+,('setting.max.parallelism','作业最大并行数','','SQOOP','INPUT','setting.max.parallelism','作业最大并行数','个',1,'NUMBER','','1','REGEX','^[1-9]d*$','作业最大并行数输入错误',0,0,'',1,'',1,'',1,NULL)
+,('setting.max.memory','作业最大内存','','SQOOP','INPUT','setting.max.memory','作业最大内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]d*$','作业最大内存输入错误',0,0,'',1,'',2,'',1,NULL)
+,('where','WHERE条件','SOURCE','MYSQL','INPUT','where','WHERE条件','',0,'VARCHAR','','','REGEX','^[sS]{0,500}$','WHERE条件输入过长',0,0,'',1,'',2,'',1,NULL)
+,('writeMode','写入方式','SQOOP-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["OVERWRITE","APPEND"]','OVERWRITE','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('partition','分区信息','SINK','HIVE','MAP','partition','分区信息(文本)','',0,'VARCHAR','','','REGEX','^[sS]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1,NULL)
+;
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('partition','分区信息','SOURCE','HIVE','MAP','partition','分区信息(文本)','',0,'VARCHAR','','','REGEX','^[sS]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1,NULL)
+,('writeMode','写入方式','SQOOP-SINK','MYSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('batchSize','批量大小','DATAX-SINK','ELASTICSEARCH','INPUT','batchSize','批量大小','',0,'NUMBER','','','REGEX','^[1-9]d*$','批量大小输入错误',0,0,'',1,'',1,'',1,NULL)
+,('query','query条件','DATAX-SOURCE','MONGODB','INPUT','query','query条件','',0,'VARCHAR','','','REGEX','^[sS]{0,500}$','query条件输入过长',0,0,'',1,'',2,'',1,NULL)
+,('writeMode','写入方式','DATAX-SINK','MONGODB','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","REPLACE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('batchSize','批量大小','DATAX-SINK','MONGODB','INPUT','batchSize','批量大小','',0,'NUMBER','','','REGEX','^[1-9]d*$','批量大小输入错误',0,0,'',1,'',2,'',1,NULL)
+,('transferMode','传输方式','DATAX-SOURCE','HIVE','OPTION','transferMode','传输方式','',1,'OPTION','["二进制","记录"]','二进制','','','该传输方式不可用',0,0,'',1,'',1,'',1,NULL)
+,('nullFormat','空值字符','DATAX-SOURCE','HIVE','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[sS]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,49)
+,('writeMode','写入方式','DATAX-SINK','MYSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('writeMode','写入方式','DATAX-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["append","truncate"]','append','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+;
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('nullFormat','空值字符','DATAX-SINK','HIVE','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[sS]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,49)
+,('nullFormat','空值字符','DATAX-SINK','ELASTICSEARCH','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[sS]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,49)
+;
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('where','WHERE条件','SOURCE','ORACLE','INPUT','where','WHERE条件',NULL,0,'VARCHAR',NULL,NULL,'REGEX','^[\\s\\S]{0,500}$','WHERE条件输入过长',0,0,NULL,1,'',2,NULL,1,NULL)
+,('writeMode','写入方式','DATAX-SINK','ORACLE','OPTION','writeMode','写入方式',NULL,1,'OPTION','["INSERT","UPDATE"]','INSERT',NULL,NULL,'写入方式输入错误',0,0,NULL,1,NULL,1,NULL,1,NULL)
+;
+
+-- engine_settings records
+INSERT INTO `exchangis_engine_settings` (id, engine_name, engine_desc, engine_settings_value, engine_direction, res_loader_class, res_uploader_class, modify_time, create_time) VALUES
+(1, 'datax', 'datax sync engine', '{}', 'mysql->hive,hive->mysql,mysql->oracle,oracle->mysql,oracle->hive,hive->oracle,mongodb->hive,hive->mongodb,mysql->elasticsearch,oracle->elasticsearch,mongodb->elasticsearch,mysql->mongodb,mongodb->mysql,oracle->mongodb,mongodb->oracle', 'com.webank.wedatasphere.exchangis.engine.resource.loader.datax.DataxEngineResourceLoader', NULL, NULL, '2022-08-09 18:20:51.0'),
+(2, 'sqoop', 'hadoop tool', '{}', 'mysql->hive,hive->mysql', '', NULL, NULL, '2022-08-09 18:20:51.0');
+
+-- exchangis_job_transform_rule records
+INSERT INTO `exchangis_job_transform_rule` (rule_name,rule_type,rule_source,data_source_type,engine_type,direction) VALUES
+('es_with_post_processor','DEF','{"types": ["MAPPING", "PROCESSOR"]}','ELASTICSEARCH',NULL,'SINK')
+,('es_fields_not_editable','MAPPING','{"fieldEditEnable": false, "fieldDeleteEnable": false}','ELASTICSEARCH',NULL,'SINK')
+,('hive_sink_not_access','MAPPING','{"fieldEditEnable": false, "fieldDeleteEnable": false, "fieldAddEnable": false}','HIVE',NULL,'SINK')
+,('mongo_field_match','MAPPING','{"fieldMatchStrategyName": "CAMEL_CASE_MATCH"}','MONGODB',NULL,'SINK')
+,('mysql_field_source_match','MAPPING','{"fieldMatchStrategyName": "CAMEL_CASE_MATCH","fieldEditEnable": true, "fieldDeleteEnable": true, "fieldAddEnable": false}','MYSQL',NULL,'SOURCE')
+;
\ No newline at end of file
diff --git a/db/1.1.2/exchangis_ddl.sql b/db/1.1.2/exchangis_ddl.sql
new file mode 100644
index 000000000..3609cadfd
--- /dev/null
+++ b/db/1.1.2/exchangis_ddl.sql
@@ -0,0 +1 @@
+ALTER TABLE exchangis_job_entity MODIFY COLUMN name varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL;
\ No newline at end of file
diff --git a/db/1.1.3/exchangis_ddl.sql b/db/1.1.3/exchangis_ddl.sql
new file mode 100644
index 000000000..a503dba4e
--- /dev/null
+++ b/db/1.1.3/exchangis_ddl.sql
@@ -0,0 +1 @@
+ALTER TABLE exchangis_launchable_task CHANGE linkis_job_content linkis_job_content mediumtext NULL;
\ No newline at end of file
diff --git a/db/1.1.3/exchangis_dml.sql b/db/1.1.3/exchangis_dml.sql
new file mode 100644
index 000000000..8d8530575
--- /dev/null
+++ b/db/1.1.3/exchangis_dml.sql
@@ -0,0 +1,7 @@
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('writeMode','写入方式','DATAX-SINK','STARROCKS','OPTION','writeMode','写入方式','',1,'OPTION','["insert"]','insert','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('batchSize','批量字节数大小','DATAX-SINK','STARROCKS','INPUT','maxBatchSize','批量字节数大小','',0,'NUMBER','','','REGEX','^[1-9]\\d*$','批量大小输入错误',0,0,'',1,'',2,'',1,NULL);
+
+UPDATE exchangis_engine_settings
+SET engine_direction='mysql->hive,hive->mysql,mysql->oracle,oracle->mysql,oracle->hive,hive->oracle,mongodb->hive,hive->mongodb,mysql->elasticsearch,oracle->elasticsearch,mongodb->elasticsearch,mysql->mongodb,mongodb->mysql,oracle->mongodb,mongodb->oracle,hive->starrocks'
+WHERE engine_name='datax';
\ No newline at end of file
diff --git a/db/exchangis_ddl.sql b/db/exchangis_ddl.sql
new file mode 100644
index 000000000..28ce7ac9d
--- /dev/null
+++ b/db/exchangis_ddl.sql
@@ -0,0 +1,253 @@
+-- exchangis_job_ds_bind definition
+DROP TABLE IF EXISTS `exchangis_job_ds_bind`;
+CREATE TABLE `exchangis_job_ds_bind` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `job_id` bigint(20) NOT NULL,
+ `task_index` int(11) NOT NULL,
+ `source_ds_id` bigint(20) NOT NULL,
+ `sink_ds_id` bigint(20) NOT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=59575 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+
+-- exchangis_job_entity definition
+DROP TABLE IF EXISTS `exchangis_job_entity`;
+CREATE TABLE `exchangis_job_entity` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(255) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `engine_type` varchar(45) DEFAULT '',
+ `job_labels` varchar(255) DEFAULT NULL,
+ `create_user` varchar(100) DEFAULT NULL,
+ `job_content` mediumtext,
+ `execute_user` varchar(100) DEFAULT '',
+ `job_params` text,
+ `job_desc` varchar(255) DEFAULT NULL,
+ `job_type` varchar(50) DEFAULT NULL,
+ `project_id` bigint(13) DEFAULT NULL,
+ `source` text,
+ `modify_user` varchar(50) DEFAULT NULL COMMENT '修改用户',
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=5793 DEFAULT CHARSET=utf8;
+
+
+-- exchangis_job_param_config definition
+DROP TABLE IF EXISTS `exchangis_job_param_config`;
+CREATE TABLE `exchangis_job_param_config` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `config_key` varchar(64) NOT NULL,
+ `config_name` varchar(64) NOT NULL,
+ `config_direction` varchar(16) DEFAULT NULL,
+ `type` varchar(32) NOT NULL,
+ `ui_type` varchar(32) DEFAULT NULL,
+ `ui_field` varchar(64) DEFAULT NULL,
+ `ui_label` varchar(32) DEFAULT NULL,
+ `unit` varchar(32) DEFAULT NULL,
+ `required` bit(1) DEFAULT b'0',
+ `value_type` varchar(32) DEFAULT NULL,
+ `value_range` varchar(255) DEFAULT NULL,
+ `default_value` varchar(255) DEFAULT NULL,
+ `validate_type` varchar(64) DEFAULT NULL,
+ `validate_range` varchar(64) DEFAULT NULL,
+ `validate_msg` varchar(255) DEFAULT NULL,
+ `is_hidden` bit(1) DEFAULT NULL,
+ `is_advanced` bit(1) DEFAULT NULL,
+ `source` varchar(255) DEFAULT NULL,
+ `level` tinyint(4) DEFAULT NULL,
+ `treename` varchar(32) DEFAULT NULL,
+ `sort` int(11) DEFAULT NULL,
+ `description` varchar(255) DEFAULT NULL,
+ `status` tinyint(4) DEFAULT NULL,
+ `ref_id` bigint(20) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=utf8;
+
+-- exchangis_project_info definition
+DROP TABLE IF EXISTS `exchangis_project_info`;
+CREATE TABLE `exchangis_project_info` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(64) NOT NULL,
+ `description` varchar(255) DEFAULT NULL,
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ `last_update_time` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+ `create_user` varchar(64) DEFAULT NULL,
+ `last_update_user` varchar(64) DEFAULT NULL,
+ `project_labels` varchar(255) DEFAULT NULL,
+ `domain` varchar(32) DEFAULT NULL,
+ `exec_users` varchar(255) DEFAULT '',
+ `view_users` varchar(255) DEFAULT '',
+ `edit_users` varchar(255) DEFAULT '',
+ `source` text,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB AUTO_INCREMENT=1497870871035974171 DEFAULT CHARSET=utf8;
+
+-- exchangis_project_user definition
+DROP TABLE IF EXISTS `exchangis_project_user`;
+CREATE TABLE `exchangis_project_user` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `project_id` bigint(20) NOT NULL,
+ `priv_user` varchar(32) COLLATE utf8_bin DEFAULT NULL,
+ `priv` int(20) DEFAULT NULL,
+ `last_update_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `exchangis_project_user_un` (`project_id`,`priv_user`,`priv`)
+) ENGINE=InnoDB AUTO_INCREMENT=844 DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=COMPACT;
+
+-- exchangis_launchable_task definition
+DROP TABLE IF EXISTS `exchangis_launchable_task`;
+CREATE TABLE `exchangis_launchable_task` (
+ `id` bigint(13) NOT NULL,
+ `name` varchar(100) NOT NULL,
+ `job_execution_id` varchar(64) DEFAULT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `engine_type` varchar(45) DEFAULT '',
+ `execute_user` varchar(50) DEFAULT '',
+ `linkis_job_name` varchar(100) NOT NULL,
+ `linkis_job_content` mediumtext NOT NULL,
+ `linkis_params` text DEFAULT NULL,
+ `linkis_source` varchar(64) DEFAULT NULL,
+ `labels` varchar(64) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_launched_job_entity definition
+DROP TABLE IF EXISTS `exchangis_launched_job_entity`;
+CREATE TABLE `exchangis_launched_job_entity` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `name` varchar(100) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `job_id` bigint(20) DEFAULT NULL,
+ `launchable_task_num` int(20) DEFAULT '0',
+ `engine_type` varchar(100) DEFAULT NULL,
+ `execute_user` varchar(100) DEFAULT NULL,
+ `job_name` varchar(100) DEFAULT NULL,
+ `status` varchar(100) DEFAULT NULL,
+ `progress` varchar(100) DEFAULT NULL,
+ `error_code` varchar(64) DEFAULT NULL,
+ `error_msg` varchar(255) DEFAULT NULL,
+ `retry_num` bigint(10) DEFAULT NULL,
+ `job_execution_id` varchar(255) DEFAULT NULL,
+ `log_path` varchar(255) DEFAULT NULL,
+ `create_user` varchar(100) DEFAULT NULL,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `job_execution_id_UNIQUE` (`job_execution_id`)
+) ENGINE=InnoDB AUTO_INCREMENT=8380 DEFAULT CHARSET=utf8;
+
+-- exchangis_launched_task_entity definition
+DROP TABLE IF EXISTS `exchangis_launched_task_entity`;
+CREATE TABLE `exchangis_launched_task_entity` (
+ `id` bigint(20) NOT NULL,
+ `name` varchar(100) NOT NULL,
+ `create_time` datetime DEFAULT NULL,
+ `last_update_time` datetime(3) DEFAULT NULL,
+ `job_id` bigint(20) DEFAULT NULL,
+ `engine_type` varchar(100) DEFAULT NULL,
+ `execute_user` varchar(100) DEFAULT NULL,
+ `job_name` varchar(100) DEFAULT NULL,
+ `progress` varchar(64) DEFAULT NULL,
+ `error_code` varchar(64) DEFAULT NULL,
+ `error_msg` varchar(255) DEFAULT NULL,
+ `retry_num` bigint(10) DEFAULT NULL,
+ `task_id` varchar(64) DEFAULT NULL,
+ `linkis_job_id` varchar(200) DEFAULT NULL,
+ `linkis_job_info` varchar(1000) DEFAULT NULL,
+ `job_execution_id` varchar(100) DEFAULT NULL,
+ `launch_time` datetime DEFAULT NULL,
+ `running_time` datetime DEFAULT NULL,
+ `metrics` text,
+ `status` varchar(64) DEFAULT NULL,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_job_func definition
+DROP TABLE IF EXISTS `exchangis_job_func`;
+CREATE TABLE `exchangis_job_func` (
+ `id` int(11) NOT NULL AUTO_INCREMENT,
+ `func_type` varchar(50) NOT NULL,
+ `func_name` varchar(100) NOT NULL,
+ `tab_name` varchar(50) NOT NULL COMMENT 'Tab',
+ `name_dispaly` varchar(100) DEFAULT NULL,
+ `param_num` int(11) DEFAULT '0',
+ `ref_name` varchar(100) DEFAULT NULL,
+ `description` varchar(200) DEFAULT NULL,
+ `modify_time` datetime DEFAULT NULL,
+ `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `job_func_tab_name_idx` (`tab_name`,`func_name`)
+) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8;
+
+-- exchangis_job_func_params definition
+DROP TABLE IF EXISTS `exchangis_job_func_params`;
+CREATE TABLE IF NOT EXISTS `exchangis_job_func_params`(
+ `func_id` INT(11) NOT NULL,
+ `param_name` VARCHAR(100) NOT NULL,
+ `order` INT(11) DEFAULT 0,
+ `name_display` VARCHAR(100),
+ `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY(`func_id`, `param_name`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_engine_resources definition
+DROP TABLE IF EXISTS `exchangis_engine_resources`;
+CREATE TABLE `exchangis_engine_resources` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `engine_type` varchar(50) NOT NULL,
+ `resource_name` varchar(100) NOT NULL,
+ `resource_type` varchar(50) NOT NULL DEFAULT 'file' COMMENT 'resource type',
+ `resource_path` varchar(255) NOT NULL,
+ `store_uri` varchar(500) NOT NULL,
+ `create_user` varchar(50) NOT NULL,
+ `modify_time` datetime DEFAULT NULL,
+ `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `engine_res_idx` (`engine_type`,`resource_path`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_engine_settings definition
+DROP TABLE IF EXISTS `exchangis_engine_settings`;
+CREATE TABLE `exchangis_engine_settings` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `engine_name` varchar(50) NOT NULL,
+ `engine_desc` varchar(500) NOT NULL,
+ `engine_settings_value` text,
+ `engine_direction` varchar(255) NOT NULL,
+ `res_loader_class` varchar(255),
+ `res_uploader_class` varchar(255),
+ `modify_time` datetime DEFAULT NULL,
+ `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `engine_setting_idx` (`engine_name`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_job_transform_rule
+DROP TABLE IF EXISTS `exchangis_job_transform_rule`;
+CREATE TABLE `exchangis_job_transform_rule` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `rule_name` varchar(100) NOT NULL DEFAULT 'transform_rule',
+ `rule_type` varchar(64) NOT NULL DEFAULT 'DEF',
+ `rule_source` varchar(600) DEFAULT '{}',
+ `data_source_type` varchar(64) NOT NULL,
+ `engine_type` varchar(32),
+ `direction` varchar(32) NOT NULL DEFAULT 'NONE',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+-- exchangis_job_transform_processor
+DROP TABLE IF EXISTS `exchangis_job_transform_processor`;
+CREATE TABLE `exchangis_job_transform_processor` (
+ `id` bigint(20) NOT NULL AUTO_INCREMENT,
+ `job_id` bigint(20) NOT NULL,
+ `code_content` text DEFAULT NULL,
+ `code_language` varchar(32) NOT NULL DEFAULT 'java',
+ `code_bml_resourceId` varchar(255) COMMENT 'BML resource id',
+ `code_bml_version` varchar(255) COMMENT 'BML version',
+ `creator` varchar(50) NOT NULL COMMENT 'Owner of processor',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ `update_time` datetime DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
\ No newline at end of file
diff --git a/db/exchangis_dml.sql b/db/exchangis_dml.sql
new file mode 100644
index 000000000..967381fa0
--- /dev/null
+++ b/db/exchangis_dml.sql
@@ -0,0 +1,93 @@
+-- job_func records
+INSERT INTO `exchangis_job_func`(func_type,func_name,tab_name,name_dispaly,param_num,ref_name,description,modify_time) VALUES
+('TRANSFORM','dx_substr','DATAX',NULL,2,NULL,NULL,NULL)
+,('TRANSFORM','dx_pad','DATAX',NULL,3,NULL,NULL,NULL)
+,('TRANSFORM','dx_replace','DATAX',NULL,3,NULL,NULL,NULL)
+,('VERIFY','like','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','not like','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','>','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','<','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','!=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('VERIFY','>=','DATAX',NULL,1,'dx_filter',NULL,NULL)
+,('TRANSFORM','dx_precision','DATAX',NULL,1,NULL,NULL,NULL)
+;
+
+-- job_func_params records
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(1, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padType', 'padType(r or l)', 0) ON DUPLICATE KEY UPDATE `name_display` = 'padType(r or l)';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(2, 'padString', 'padString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'padString';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'startIndex', 'startIndex', 0) ON DUPLICATE KEY UPDATE `name_display` = 'startIndex';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'length', 'length', 1) ON DUPLICATE KEY UPDATE `name_display` = 'length';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`, `order`) VALUES(3, 'replaceString', 'replaceString', 2) ON DUPLICATE KEY UPDATE `name_display` = 'replaceString';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(4, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(5, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(6, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(7, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(8, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(9, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+INSERT INTO `exchangis_job_func_params`(`func_id`, `param_name`, `name_display`) VALUES(10, 'value', 'value') ON DUPLICATE KEY UPDATE `name_display` = 'value';
+
+-- job_param_config records
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('setting.speed.byte','作业速率限制','','DATAX','INPUT','setting.speed.bytes','作业速率限制','Mb/s',1,'NUMBER','','5','REGEX','^[1-9]\\d*$','作业速率限制输入错误',0,0,'',1,'',1,'',1,NULL)
+,('setting.speed.record','作业记录数限制','','DATAX','INPUT','setting.speed.records','作业记录数限制','条/s',1,'NUMBER','','100','REGEX','^[1-9]\\d*$','作业记录数限制输入错误',0,0,'',1,'',2,'',1,NULL)
+,('setting.speed.channel','作业最大并行度','','DATAX','INPUT','setting.max.parallelism','作业最大并行度','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行度输入错误',0,0,'',1,'',3,'',1,NULL)
+,('setting.max.memory','作业最大使用内存','','DATAX','INPUT','setting.max.memory','作业最大使用内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大使用内存输入错误',0,0,'',1,'',4,'',1,NULL)
+,('setting.errorLimit.record','最多错误记录数','','DATAX','INPUT','setting.errorlimit.record','最多错误记录数','条',0,'NUMBER','','','REGEX','^[0-9]\\d*$','最多错误记录数输入错误',0,0,'',1,'',5,'',1,NULL)
+,('setting.max.parallelism','作业最大并行数','','SQOOP','INPUT','setting.max.parallelism','作业最大并行数','个',1,'NUMBER','','1','REGEX','^[1-9]\\d*$','作业最大并行数输入错误',0,0,'',1,'',1,'',1,NULL)
+,('setting.max.memory','作业最大内存','','SQOOP','INPUT','setting.max.memory','作业最大内存','Mb',1,'NUMBER','','1024','REGEX','^[1-9]\\d*$','作业最大内存输入错误',0,0,'',1,'',2,'',1,NULL);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('where','WHERE condition','SOURCE','MYSQL','INPUT','where','WHERE condition','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,500}$','WHERE condition is too long',0,0,'',1,'',2,'',1,NULL)
+,('writeMode','写入方式','SQOOP-SINK','MYSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('writeMode','写入方式','DATAX-SINK','MYSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('where','WHERE condition','SOURCE','TDSQL','INPUT','where','WHERE condition','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,500}$','WHERE condition is too long',0,0,'',1,'',2,'',1,NULL)
+,('writeMode','写入方式','SQOOP-SINK','TDSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('writeMode','写入方式','DATAX-SINK','TDSQL','OPTION','writeMode','写入方式','',1,'OPTION','["INSERT","UPDATE"]','INSERT','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('writeMode','写入方式','SQOOP-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["OVERWRITE","APPEND"]','OVERWRITE','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('partition','分区信息','SINK','HIVE','MAP','partition','分区信息(文本)','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','分区信息过长',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1,NULL)
+,('partition','Partition info','SOURCE','HIVE','MAP','partition','Partition info (text)','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','Partition info is too long',0,0,'/api/rest_j/v1/dss/exchangis/main/datasources/render/partition/element/map',1,'',2,'',1,NULL)
+,('transferMode','传输方式','DATAX-SOURCE','HIVE','OPTION','transferMode','传输方式','',1,'OPTION','["记录"]','二进制','','','该传输方式不可用',0,0,'',1,'',1,'',1,NULL)
+,('nullFormat','空值字符','DATAX-SOURCE','HIVE','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,48)
+,('writeMode','写入方式','DATAX-SINK','HIVE','OPTION','writeMode','写入方式(OVERWRITE只对TEXT类型表生效)','',1,'OPTION','["append","truncate"]','append','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('nullFormat','空值字符','DATAX-SINK','HIVE','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,49);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('batchSize','批量大小','DATAX-SINK','ELASTICSEARCH','INPUT','batchSize','批量大小','',0,'NUMBER','','','REGEX','^[1-9]\\d*$','批量大小输入错误',0,0,'',1,'',1,'',1,NULL)
+,('nullFormat','空值字符','DATAX-SINK','ELASTICSEARCH','INPUT','nullFormat','空值字符','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,50}$','空值字符输入错误',0,0,'',1,'',2,'',1,49);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('where','WHERE条件','SOURCE','ORACLE','INPUT','where','WHERE条件',NULL,0,'VARCHAR',NULL,NULL,'REGEX','^[\\s\\S]{0,500}$','WHERE条件输入过长',0,0,NULL,1,'',2,NULL,1,NULL)
+,('writeMode','写入方式','DATAX-SINK','ORACLE','OPTION','writeMode','写入方式',NULL,1,'OPTION','["INSERT","UPDATE"]','INSERT',NULL,NULL,'写入方式输入错误',0,0,NULL,1,NULL,1,NULL,1,NULL);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('query','query条件','DATAX-SOURCE','MONGODB','INPUT','query','query条件','',0,'VARCHAR','','','REGEX','^[\\s\\S]{0,500}$','query条件输入过长',0,0,'',1,'',2,'',1,NULL)
+,('writeMode','写入方式','DATAX-SINK','MONGODB','OPTION','writeMode','写入方式','',1,'OPTION','["insert","replace"]','insert','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('batchSize','批量大小','DATAX-SINK','MONGODB','INPUT','batchSize','批量大小','',0,'NUMBER','','','REGEX','^[1-9]\\d*$','批量大小输入错误',0,0,'',1,'',2,'',1,NULL);
+
+INSERT INTO `exchangis_job_param_config` (config_key,config_name,config_direction,`type`,ui_type,ui_field,ui_label,unit,required,value_type,value_range,default_value,validate_type,validate_range,validate_msg,is_hidden,is_advanced,source,`level`,treename,sort,description,status,ref_id) VALUES
+('writeMode','写入方式','DATAX-SINK','STARROCKS','OPTION','writeMode','写入方式','',1,'OPTION','["upsert"]','upsert','','','写入方式输入错误',0,0,'',1,'',1,'',1,NULL)
+,('batchSize','批量字节数大小','DATAX-SINK','STARROCKS','INPUT','maxBatchSize','批量字节数大小','',0,'NUMBER','','','REGEX','^[1-9]\\d*$','批量大小输入错误',0,0,'',1,'',2,'',1,NULL);
+
+-- engine_settings records
+INSERT INTO `exchangis_engine_settings` (id, engine_name, engine_desc, engine_settings_value, engine_direction, res_loader_class, res_uploader_class, modify_time) VALUES
+(1, 'datax', 'datax sync engine', '{}', 'mysql->hive,hive->mysql,mysql->oracle,oracle->mysql,oracle->hive,hive->oracle,mongodb->hive,hive->mongodb,mysql->elasticsearch,oracle->elasticsearch,mongodb->elasticsearch,mysql->mongodb,mongodb->mysql,oracle->mongodb,mongodb->oracle,hive->starrocks', 'com.webank.wedatasphere.exchangis.engine.resource.loader.datax.DataxEngineResourceLoader', NULL, NULL),
+(2, 'sqoop', 'hadoop tool', '{}', 'mysql->hive,hive->mysql', '', NULL, NULL);
+
+-- exchangis_job_transform_rule records
+INSERT INTO `exchangis_job_transform_rule` (rule_name,rule_type,rule_source,data_source_type,engine_type,direction) VALUES
+('es_with_post_processor','DEF','{"types": ["MAPPING", "PROCESSOR"]}','ELASTICSEARCH',NULL,'SINK')
+,('es_fields_not_editable','MAPPING','{"fieldEditEnable": true, "fieldDeleteEnable": true}','ELASTICSEARCH',NULL,'SINK')
+,('hive_sink_not_access','MAPPING','{"fieldEditEnable": true, "fieldDeleteEnable": true, "fieldAddEnable": true}','HIVE',NULL,'SINK')
+,('mongo_field_match','MAPPING','{"fieldMatchStrategyName": "CAMEL_CASE_MATCH"}','MONGODB',NULL,'SINK')
+,('mysql_field_source_match','MAPPING','{"fieldMatchStrategyName": "CAMEL_CASE_MATCH","fieldEditEnable": true, "fieldDeleteEnable": true, "fieldAddEnable": true}','MYSQL',NULL,'SOURCE')
+,('starrocks_field_source_match','MAPPING','{"fieldMatchStrategyName": "CAMEL_CASE_MATCH","fieldEditEnable": true, "fieldDeleteEnable": true, "fieldAddEnable": true}','STARROCKS',NULL,'SINK')
+;
+
diff --git a/db/job_content_example.json b/db/job_content_example.json
new file mode 100644
index 000000000..5046a8ed4
--- /dev/null
+++ b/db/job_content_example.json
@@ -0,0 +1,76 @@
+{
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": [
+ {
+ "source_field_name": "name",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_name",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "year",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "d_year",
+ "sink_field_type": "VARCHAR"
+ }
+ ],
+ "settings": [
+ {
+ "config_key": "rate_limit",
+ "config_name": "作业速率限制",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "record_limit",
+ "config_name": "作业记录数限制",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "max_errors",
+ "config_name": "最多错误记录数",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}
\ No newline at end of file
diff --git a/db/job_content_example_batch.json b/db/job_content_example_batch.json
new file mode 100644
index 000000000..864bf89f0
--- /dev/null
+++ b/db/job_content_example_batch.json
@@ -0,0 +1,153 @@
+[{
+ "subJobName": "job0001",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": {
+ "type": "MAPPING",
+ "mapping": [
+ {
+ "source_field_name": "name",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_name",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "year",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "d_year",
+ "sink_field_type": "VARCHAR"
+ }
+ ]
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.percentage",
+ "config_name": "脏数据占比阈值",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}, {
+ "subJobName": "job0002",
+ "dataSources": {
+ "source_id": "HIVE.10001.superman2_db.funny2_table",
+ "sink_id": "MYSQL.10002.ducky2_db.chicken2_table"
+ },
+ "params": {
+ "sources": [
+ {
+ "config_key": "exchangis.job.ds.params.hive.transform_type",
+ "config_name": "传输方式",
+ "config_value": "二进制",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.partitioned_by",
+ "config_name": "分区信息",
+ "config_value": "2021-07-30",
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.job.ds.params.hive.empty_string",
+ "config_name": "空值字符",
+ "config_value": "",
+ "sort": 3
+ }
+ ],
+ "sinks": [
+ {
+ "config_key": "exchangis.job.ds.params.mysql.write_type",
+ "config_name": "写入方式",
+ "config_value": "insert",
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.job.ds.params.mysql.batch_size",
+ "config_name": "批量大小",
+ "config_value": 1000,
+ "sort": 2
+ }
+ ]
+ },
+ "transforms": {
+ "type": "MAPPING",
+ "mapping": [
+ {
+ "source_field_name": "mid",
+ "source_field_type": "VARCHAR",
+ "sink_field_name": "c_mid",
+ "sink_field_type": "VARCHAR"
+ },
+ {
+ "source_field_name": "maxcount",
+ "source_field_type": "INT",
+ "sink_field_name": "c_maxcount",
+ "sink_field_type": "INT"
+ }
+ ]
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 100,
+ "sort": 2
+ }
+ ]
+}]
\ No newline at end of file
diff --git a/db/job_content_example_stream.json b/db/job_content_example_stream.json
new file mode 100644
index 000000000..264147849
--- /dev/null
+++ b/db/job_content_example_stream.json
@@ -0,0 +1,57 @@
+[{
+ "subJobName": "streamjob0001",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {},
+ "transforms": {
+ "type": "SQL",
+ "sql": "select * from aaa"
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 100,
+ "sort": 2
+ }
+ ]
+}, {
+ "subJobName": "streamjob0002",
+ "dataSources": {
+ "source_id": "HIVE.10001.test_db.test_table",
+ "sink_id": "MYSQL.10002.mask_db.mask_table"
+ },
+ "params": {},
+ "transforms": {
+ "type": "SQL",
+ "sql": "insert into xxx"
+ },
+ "settings": [
+ {
+ "config_key": "exchangis.datax.setting.speed.byte",
+ "config_name": "传输速率",
+ "config_value": 102400,
+ "sort": 1
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.record",
+ "config_name": "脏数据最大记录数",
+ "config_value": 10000,
+ "sort": 2
+ },
+ {
+ "config_key": "exchangis.datax.setting.errorlimit.percentage",
+ "config_name": "脏数据占比阈值",
+ "config_value": 100,
+ "sort": 3
+ }
+ ]
+}]
\ No newline at end of file
diff --git a/docs/en_US/ch1/component_upgrade_en.md b/docs/en_US/ch1/component_upgrade_en.md
new file mode 100644
index 000000000..3bc6e3b47
--- /dev/null
+++ b/docs/en_US/ch1/component_upgrade_en.md
@@ -0,0 +1,75 @@
+# Exchangis Component Upgrade Documentation
+This article describes the steps to upgrade an existing Exchangis installation to work with DSS1.1.2 and Linkis1.4.0. The biggest difference between Exchangis1.1.2 and Exchangis1.0.0 is the ExchangisAppConn, which must be replaced and reloaded as a whole.
+
+### 1.Preparation before upgrading Exchangis
+Before upgrading Exchangis, please follow the [DSS1.1.2 installation and deployment documentation](https://github.com/WeBankFinTech/DataSphereStudio-Doc/tree/main/zh_CN/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2)
+and the [Linkis1.4.0 installation and deployment documentation](https://linkis.staged.apache.org/zh-CN/docs/1.4.0/deployment/deploy-quick) to complete the installation and upgrade of DSS and Linkis.
+
+### 2.Exchangis upgrade steps
+
+#### 1) Delete the old ExchangisAppConn package
+
+Go to the following directory, find the exchangis-appconn folder, and delete it:
+```
+{DSS_Install_HOME}/dss/dss-appconns
+```
+
+#### 2)Download binary package
+We provide the ExchangisAppConn upgrade package, which you can download and use directly. [Click to go to the Release page](https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Exchangis/exchangis1.1.2/Exchangis1.1.2_install_package.zip)
+
+#### 3) Compile and package
+
+If you want to compile ExchangisAppConn yourself, the compilation steps are as follows:
+
+1. Clone the Exchangis code
+2. Under the exchangis-plugins module, find exchangis-appconn and compile it separately:
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn
+mvn clean install
+```
+The exchangis-appconn.zip installation package can then be found at:
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn/target/exchangis-appconn.zip
+```
+
+### 3.General steps to deploy and configure the ExchangisAppConn plugin
+1. Get the packaged exchangis-appconn.zip package
+
+2. Put it in the following directory and unzip it:
+
+```
+cd {DSS_Install_HOME}/dss/dss-appconns
+unzip exchangis-appconn.zip
+```
+The decompressed directory structure is:
+```
+conf
+db
+lib
+appconn.properties
+```
+
+3. Run the script to complete the installation automatically:
+
+```shell
+cd {DSS_INSTALL_HOME}/dss/bin
+./install-appconn.sh
+# The script is interactive: enter the string exchangis, then the IP and port of the Exchangis service, to complete the installation
+# The Exchangis port here is the front-end port configured in nginx, not the back-end service port
+```
+
+### 4.After installing exchangis-appconn, run the refresh script to make the AppConn service take effect
+
+#### 4.1)Make the deployed AppConn take effect
+Refresh DSS to make the AppConn take effect. Enter the directory where the script is located, {DSS_INSTALL_HOME}/bin, and run the script with the following command. Note that the DSS services do not need to be restarted:
+```
+sh ./appconn-refresh.sh
+```
+
+#### 4.2)Verify that exchangis-appconn is in effect
+After the installation and deployment of exchangis-appconn is completed, you can preliminarily verify whether the exchangis-appconn is successfully installed by performing the following steps.
+1. Create a new project in the DSS workspace
+![image](https://user-images.githubusercontent.com/27387830/169782142-b2fc2633-e605-4553-9433-67756135a6f1.png)
+
+2. Check whether the project is created synchronously on the Exchangis side. If the creation succeeds, the AppConn is installed successfully.
+![image](https://user-images.githubusercontent.com/27387830/169782337-678f2df0-080a-495a-b59f-a98c5a427cf8.png)
+
+For more usage, please refer to the [Exchangis User Manual](docs/zh_CN/ch1/exchangis_user_manual_cn.md)
diff --git a/docs/en_US/ch1/exchangis_appconn_deploy_en.md b/docs/en_US/ch1/exchangis_appconn_deploy_en.md
new file mode 100644
index 000000000..1b195c0c0
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_appconn_deploy_en.md
@@ -0,0 +1,90 @@
+# ExchangisAppConn installation documentation
+
+This document describes how to deploy, configure and install ExchangisAppConn in DSS (DataSphere Studio) 1.0.1.
+
+### 1. Preparations for the deployment of ExchangisAppConn
+Before you deploy ExchangisAppConn, please follow the [Exchangis installation and deployment document](docs/en_US/ch1/exchangis_deploy_en.md) to complete the installation of Exchangis and other related components, and make sure the basic functions of the project are available.
+
+### 2. Download and compilation of the ExchangisAppConn plugin
+#### 1) Download binary package
+We provide a prebuilt ExchangisAppConn package, which you can download and use directly. [Click to go to the Release page](https://github.com/WeBankFinTech/Exchangis/releases)
+#### 2) Compile and package
+
+If you want to develop and compile ExchangisAppConn yourself, the specific compilation steps are as follows:
+1. Clone Exchangis's source code
+2. In the exchangis-plugins module, find exchangis-appconn and compile it separately:
+
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn
+mvn clean install
+```
+The exchangis-appconn.zip installation package will be found in this path.
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/exchangis-appconn/target/exchangis-appconn.zip
+```
+
+### 3. Overall steps for deployment and configuration of ExchangisAppConn
+1. Get the packaged exchangis-appconn.zip package.
+
+2. Place it in the following directory and unzip it:
+
+```
+cd {DSS_Install_HOME}/dss/dss-appconns
+unzip exchangis-appconn.zip
+```
+ The extracted directory structure is:
+```
+conf
+db
+lib
+appconn.properties
+```
+
+3. Run the script to complete the installation automatically:
+
+```shell
+cd {DSS_INSTALL_HOME}/dss/bin
+./install-appconn.sh
+# The script is interactive: enter the string exchangis, then the IP and port of the Exchangis service, to complete the installation.
+# The Exchangis port here is the front-end port configured in nginx, not the back-end service port.
+```
+
+### 4. After Exchangis-AppConn is installed, restart the DSS services to complete the plugin update.
+
+#### 4.1) Make the deployed AppConn effective
+Make the AppConn effective with the DSS start-stop scripts, which are located in {DSS_INSTALL_HOME}/sbin, by running the following commands in turn:
+```
+sh ./dss-stop-all.sh
+sh ./dss-start-all.sh
+```
+If startup fails or hangs midway, you can exit and re-run the scripts.
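The re-run advice above can be scripted. This is a minimal sketch under stated assumptions: the retry limit of 3 is arbitrary, and in a real deployment the command would be the DSS start script rather than the `true` placeholder.

```shell
# Hedged sketch: retry a start command a few times before giving up, since
# DSS startup can occasionally fail or hang on the first attempt.
start_with_retry() {
  cmd="$1"
  attempts=0
  until $cmd; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 3 ]; then
      echo "giving up after $attempts attempts"
      return 1
    fi
    echo "retrying ($attempts)..."
  done
  echo "started"
}

# In a real deployment this would be: start_with_retry "sh ./dss-start-all.sh"
start_with_retry true   # prints "started"
```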
+
+#### 4.2) Verify that exchangis-appconn is effective.
+After the exchangis-appconn is installed and deployed, the following steps can be taken to preliminarily verify whether exchangis-appconn is successfully installed.
+1. Create a new project in DSS workspace
+![image](https://user-images.githubusercontent.com/27387830/169782142-b2fc2633-e605-4553-9433-67756135a6f1.png)
+
+2. Check whether the project is created synchronously on Exchangis. Successful creation means successful installation of appconn
+![image](https://user-images.githubusercontent.com/27387830/169782337-678f2df0-080a-495a-b59f-a98c5a427cf8.png)
+
+For more operations, please refer to the [Exchangis User Manual](docs/zh_CN/ch1/exchangis_user_manual_cn.md)
+
+### 5.Exchangis AppConn installation principle
+
+The configuration information of Exchangis is inserted into the following tables; configuring them completes the usage configuration of Exchangis. When the Exchangis AppConn is installed, the script executes the init.sql shipped with each AppConn and inserts its records into these tables. (Note: if you only need to install the AppConn quickly, you don't need to pay much attention to these fields; most of the provided init.sql is configured by default. Focus on the operations above.)
+
+| Table name | Table function | Remark |
+| :----: | :----: |-------|
+| dss_application | The application table is mainly used to insert the basic information of Exchangis application | Required |
+| dss_menu | Menu, which stores the displayed contents, such as icons, names, etc | Required |
+| dss_onestop_menu_application| Associates menu and application, used for joint lookup | Required |
+| dss_appconn |Basic information of appconn, used to load appconn | Required |
+| dss_appconn_instance| Information of an instance of AppConn, including its own url information | Required |
+| dss_workflow_node | Information that Exchangis needs to insert as a workflow node | Required |
+
+Exchangis, as a scheduling framework, implements the level-one and level-two specifications. The following micro-services are used by the Exchangis AppConn.
+
+| Micro-service name | Function | Remark |
+| :----: | :----: |-------|
+| dss-framework-project-server | Works with exchangis-appconn to create projects and unify the organization. | Required |
+| dss-workflow-server | The scheduling AppConn is used to achieve workflow publishing and status acquisition. | Required |
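+As a quick way to confirm the principle above, you can spot-check the DSS tables after installation. A sketch, assuming a local MySQL and the column name appconn_name (verify both against your DSS version):
+
+```shell
+# Build the spot-check query for a given AppConn name.
+appconn_check_sql() {
+  printf "SELECT * FROM dss_appconn WHERE appconn_name = '%s';" "$1"
+}
+
+# Print the query; run it against the DSS database, e.g.
+# mysql -h127.0.0.1 -uroot -p dss -e "$(appconn_check_sql exchangis)"
+appconn_check_sql exchangis
+```
+
+A non-empty result means the init.sql rows for Exchangis were inserted.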
diff --git a/docs/en_US/ch1/exchangis_datasource_en.md b/docs/en_US/ch1/exchangis_datasource_en.md
new file mode 100644
index 000000000..ffbffdf98
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_datasource_en.md
@@ -0,0 +1,304 @@
+# DataSource1.0
+
+## 1、Background
+
+The earlier versions of **Exchangis 0.x** and **Linkis 0.x** already integrated data source modules; among them, **linkis-datasource** is used as the blueprint (please refer to the related documents) to reconstruct the data source module.
+
+## 2、Overall architecture design
+
+In order to build a common data source module, it is split into two parts: **datasource-client** and **datasource-server**. The server part lives in the **linkis-datasource** module of **Linkis 1.0** and contains the core service logic; the client part lives under the **exchangis-datasource** module of **Exchangis 1.0** and contains the client calling logic. The overall architecture is as follows:
+
+![linkis_datasource_structure](../../../images/zh_CN/ch1/datasource_structure.png)
+
+
+Figure 2-1 Overall Architecture Design
+
+
+## 3、Detailed explanation of modules
+
+### 3.1 datasource-server
+
+**datasource-server**: As the name implies, it is a module that stores core services, and it follows the original architecture of **linkis-datasource** (split into **datasourcemanager** and **metadatamanager**)
+
+### 3.2 linkis-datasource
+
+Schematic diagram of current architecture :
+
+![linkis_datasource_structure](../../../images/zh_CN/ch1/linkis_datasource_structure.png)
+
+
+Figure 3-1 Schematic diagram of current architecture
+
+
+As seen in the figure above, **linkis-datasource** decouples the data source functions: the basic information part is managed by **datasourcemanager**, and the metadata part by **metadatamanager**. The two sub-modules call each other through RPC requests, and each exposes its own Restful entrance. External service requests are uniformly forwarded by **linkis-gateway** before reaching the corresponding service. Furthermore, **metadatamanager** connects to per-data-source **service** sub-modules in order to plug in third-party metadata management platforms. Each sub-module has its own implementation of the metadata acquisition interface, such as **service/hive**, **service/elasticsearch** and **service/mysql**.
+
+#### 3.2.1 New demand
+
+##### Front end interface requirements
+
+The original **linkis-datasource** did not include a front-end interface, so the data source interface design of **Exchangis 1.0** is now merged in. See the **UI document** and **front-end interaction document** for details. The requirements involved are described below:
+
+- Type of datasource-list acquisition [data source management]
+
+Description:
+
+Get all data source types accessed and show them
+
+- Datasource environment-list acquisition [data source management]
+
+Description:
+
+Get the preset data source environment parameters in the background and display them as a list
+
+- Add/Modify Datasource-Label Settings [Data Source Management]
+
+Description:
+
+Set the label information of the datasource
+
+- Connectivity detection [datasource management]
+
+Description:
+
+Check the connectivity of connected data sources, and click the Connectivity Monitor button in the data source list
+
+- Add/Modify Datasource-Configure and Load [Datasource Management]
+
+Description:
+
+In order to facilitate the introduction of new data sources or the attribute expansion of existing data sources, the form configuration of new/modified data sources is planned to adopt the method of background storage+front-end loading. The background will save the type, default value, loading address and simple cascading relationship of each attribute field, and the front-end will generate abstract data structures according to these, and then convert them into DOM operations.
+
+Process design:
+
+1. The user selects the datasource type, and the front end requests the background for the attribute configuration list of the data source with the datasource type as the parameter;
+
+2. When the front end gets the configuration list, it first judges the type, selects the corresponding control, then sets the default value and refreshes the interface DOM;
+
+3. After the basic configuration information is loaded and rendered, the values are preloaded and the cascading relationship is established;
+
+4. The configuration is completed, waiting for the user to fill it.
+
+ Associated UI:
+
+![datasource_ui](../../../images/zh_CN/ch1/datasource_ui.png)
+
+
+
+- Batch Processing-Batch Import/Export [Datasource Management]
+
+Description:
+
+Batch import and export of datasource configuration.
+
+##### Backend requirements
+
+The **linkis-datasource** backend has already integrated the CRUD logic for data sources; the label- and version-related content is now added:
+
+- datasource permission setting [datasource management]
+
+Description:
+
+The background needs to integrate it with the labeling function of Linkis1.4.0, and give the datasource a labeling relationship.
+
+Process design:
+
+1. Users are allowed to set labels on datasources when they create and modify them;
+
+2. When saving, the tag information is sent to the back end as a character list, and the back end converts the tag characters into tag entities, and inserts and updates the tag;
+
+3. Save the datasource and establish the connection between the datasource and the label.
+
+- datasource version function [datasource management]
+
+Description:
+
+Add the concept of versions to a datasource, used for publishing and updating. An update adds a new version by default; publishing overwrites the latest version with the datasource information of the version being published and marks it as published.
+
+#### 3.2.2 Detailed design
+
+Make some modifications and extensions to the entity objects contained in **linkis-datasource**, as follows:
+
+| Class Name | Role |
+| -------------------------------- | ------------------------------------------------------------ |
+| DataSourceType | Indicates the type of data source |
+| DataSourceParamKeyDefinition | Declare data source attribute configuration definition |
+| DataSourceScope[Add] | There are usually three fields for marking the scope of datasource attributes: datasource, data source environment and default (all) |
+| DataSource | Datasource entity class, including label and attribute configuration definitions |
+| DataSourceEnv | Datasource environment entity class, which also contains attribute configuration definitions |
+| DataSourcePermissonLabel[Delete] | |
+| DataSourceLabelRelation[Add] | Represents the relationship between datasources and permission labels |
+| VersionInfo[Add] | Version information, including datasource version number information |
+
+2.1 Among them, **DataSourceParamKeyDefinition** keeps the original consistent structure, and adds some attributes to support interface rendering. The detailed structure is as follows:
+
+| **Field name** | **Field type** | **Remark** |
+| -------------- | -------------- | ------------------------------------------------------------ |
+| id | string | persistent ID |
+| key | string | attribute name keyword |
+| description | string | description |
+| name | string | attribute display name |
+| defaultValue | string | attribute default value |
+| valueType | string | attribute value type |
+| require | boolean | is it a required attribute |
+| refId | string | another attribute ID of the cascade |
+| dataSrcTypId | string | the associated data source type ID |
+| refMap[Add] | string | cascading relation table, format should be as follows: value1=refValue1, value2=refValue2 |
+| loadUrl[Add] | string | upload URL, which is empty by default |
+
+2.2 The **DataSource** structure is similar, but it contains label information
+
+| **Field name** | **Field type** | **Remark** |
+| ---------------- | -------------- | ------------------------------------------------------------ |
+| serId | string | persistent ID |
+| id | string | system ID |
+| versions[Add] | list-obj | The associated version VersionInfo list |
+| srcVersion[Add] | string | The version from which the data source was created |
+| dataSourceName | string | Data source name |
+| dataSourceDesc | string | Description of data source |
+| dataSourceTypeId | integer | Data source type ID |
+| connectParams | map | Connection parameter dictionary |
+| parameter | string | Connection attribute parameters |
+| createSystem | string | The creating system, usually empty or (exchangis) |
+| dataSourceEnvId | integer | The associated data source environment ID of |
+| keyDefinitions | list-object | List of associated attribute configuration definitions. |
+| labels[Add] | map | Tag string |
+| readOnly[Add] | boolean | Whether it is a read-only data source |
+| expire[Add] | boolean | Whether it is expired |
+| isPub[Add] | boolean | Whether it is published |
+
+2.3 **VersionInfo** version information. Different versions of data sources mainly have different connection parameters. The structure is as follows:
+
+| **Field name** | **Field type** | **Remark** |
+| -------------- | -------------- | ----------------------------- |
+| version | string | version number |
+| source | long | The associated data source ID |
+| connectParams | map | Version parameter dictionary |
+| parameter | string | Version parameter string |
+
+2.4 **DataSourceType** and **DataSourceEnv** are also roughly the same as the original classes, in which **DataSourceType** needs to add **classifier** fields to classify different datasource types, and the others will not be described.
+
+The main service processing classes of **datasource-server** are as follows:
+
+| **Interface name** | **Interface role** | **Single implementation** |
+| ------------------------------- | ------------------------------------------------------------ | ---------------------- |
+| DataSourceRelateService | The operation of declaring datasource association information includes enumerating all datasource types and attribute definition information under different types | Yes |
+| DataSourceInfoService | Declare the basic operation of datasource/datasource environment | Yes |
+| MetadataOperateService | Declare the operation of datasource metadatasource, which is generally used for connection test | Yes |
+| BmlAppService | Declare the remote call to BML module to upload/download the key file of datasource | Yes |
+| DataSourceVersionSupportService | Declare the operations supported by multiple versions of the datasource | Yes |
+| MetadataAppService[Old] | Declare operations on metadata information | Yes |
+| DataSourceBatchOpService[Add] | Declare batch operations on datasources | Yes |
+| MetadataDatabaseService[Add] | Declare operations on metadata information of database classes | Yes |
+| MetadataPropertiesService[Add] | Operation of declaring metadata information of attribute class | Yes |
+
+### 3.3 datasource-client
+
+**datasource-client**: contains the client-side calling logic, which can operate the data source and obtain relevant metadata in the client-side way.
+
+#### 3.3.1 Related demand
+
+##### Backend requirements
+
+As the requesting client, **datasource-client** has no front-end interface requirements, and its backend requirements are relatively simple: build a stable, retryable and traceable client that directly interfaces with all the interfaces supported by the server and supports as many access modes as possible.
+
+#### 3.3.2 Detailed design
+
+Its organizational structure is generally designed as follows :
+
+![datasource_client_scructure](../../../images/zh_CN/ch1/datasource_client_scructure.png)
+
+
+Figure 3-4 Detailed Design of datasource-client
+
+
+The class/interface information involved is as follows:
+
+| Class/interface name | Class/interface role | Single implementation |
+| ----------------------------- | ------------------------------------------------------------ | ------------------ |
+| RemoteClient | The top-level interface of the Client declares the common interface methods of initialization, release and basic permission verification | No |
+| RemoteClientBuilder | Client's construction class is constructed according to different implementation classes of RemoteClient | Yes |
+| AbstractRemoteClient | The abstract implementation of remote involves logic such as retry, statistics and caching | Yes |
+| DataSourceRemoteClient | Declare all operation portals of the data source client | No |
+| MetaDataRemoteClient | Declare all operation portals of metadata client | No |
+| LinkisDataSourceRemoteClient | Datasource client implementation of linkis-datasource | Yes |
+| LinkisMetaDataRemoteClient | Metadata client implementation of linkis-datasource | Yes |
+| MetadataRemoteAccessService | Declare the interface of the bottom layer to access the remote third-party metadata service. | Yes |
+| DataSourceRemoteAccessService | Declare the interface of the bottom layer to access the remote third-party datasource service | Yes |
+
+The class relationship group diagram is as follows:
+
+![datasource_client_class_relation](../../../images/zh_CN/ch1/datasource_client_class_relation.png)
+
+
+Figure 3-5 datasource-client Class Relationship Group Diagram
+
+
+##### Process sequence diagram:
+
+Next, combining all modules, the calling relationship between interfaces/classes in the business process is described in detail :
+
+- Create datasource
+
+Focus:
+
+1. Before creating a datasource, you need to pull the datasource type list and the attribute configuration definition list of the datasource corresponding to the type. In some cases, you also need to pull the datasource environment list ;
+
+2. There are two scenarios for creating datasources, one is created through the interface of **linkis-datasource**, and the other is created through the datasource-client of **exchangis**;
+
+3. Datasource type, attribute configuration definition and datasource environment can be added in the background library by themselves. Currently, there is no interface dynamic configuration method (to be provided).
+
+Now look at the timing diagram of creating a data source:
+
+![datasource_client_create](../../../images/zh_CN/ch1/datasource_client_create.png)
+
+
+Figure 3-6 Sequence diagram of creating a datasource through the linkis-datasource interface
+
+
+Next, look at creating a datasource through **datasource-client**:
+
+![datasource_client_create2](../../../images/zh_CN/ch1/datasource_client_create2.png)
+
+
+Figure 3-7 Sequence diagram of creating a datasource through datasource-client
+
+
+Some auxiliary methods, such as client connection authentication, request recording and life-cycle monitoring, are omitted from the figure above, which simplifies the whole calling process.
+
+- Update datasource
+
+Focus:
+
+1. There are two ways to update: version update and ordinary update. Version update will generate a new version of datasource (which can be deleted or published), while ordinary update will overwrite the current datasource and will not generate a new version;
+
+2. Only the creator of a datasource and administrator users can update a published datasource.
+
+![datasource_client_update](../../../images/zh_CN/ch1/datasource_client_update.png)
+
+
+
+- Query datasource
+
+Focus :
+
+1. When you get the datasource list through datasource-client, you need to attach the operating user information for permission filtering of the datasource
+
+Database design :
+
+![datasource_client_query](../../../images/zh_CN/ch1/datasource_client_query.png)
+
+
+
+Interface design: (refer to the existing interface of linkis-datasource for supplement)
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_datax_deploy_en.md b/docs/en_US/ch1/exchangis_datax_deploy_en.md
new file mode 100644
index 000000000..7458e8212
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_datax_deploy_en.md
@@ -0,0 +1,84 @@
+# DataX engine usage documentation
+
+### Prepare the environment
+
+The DataX engine is an indispensable component for executing Exchangis data synchronization tasks; such tasks can be performed only after the DataX engine is installed and deployed. Also, ensure that DataX itself is installed on the deployment machine.
+
+Before you install and deploy the DataX engine, please complete the installation of Exchangis and related components according to the [Exchangis installation and deployment document](docs/en_US/ch1/exchangis_deploy_en.md), and ensure that the basic functions of the project are available.
+
+It is strongly recommended that you use the native DataX to perform the test task on this node before performing the DataX task, so as to check whether the environment of this node is normal.
+
+| Environment variable name | Environment variable content | Remark |
+| :-----------------------: | :--------------------------: | ------------ |
+| JAVA_HOME | JDK installation path | Required |
+| DATAX_HOME | DataX installation path | Not Required |
+| DATAX_CONF_DIR | DataX config path | Not Required |
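+For reference, the variables above can be set in the deployment user's ~/.bashrc. The paths below are illustrative only:
+
+```shell
+# Example ~/.bashrc entries for the deployment user (illustrative paths):
+export JAVA_HOME=/usr/java/jdk1.8.0_141
+export DATAX_HOME=/appcom/install/datax        # not required
+export DATAX_CONF_DIR=${DATAX_HOME}/conf       # not required
+export PATH=${JAVA_HOME}/bin:${PATH}
+```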
+
+### Prepare installation package
+
+#### 1)Download binary package
+
+Exchangis 1.1.2 and Linkis 1.4.0 support the mainstream DataX versions 1.4.6 and 1.4.7; later versions may require modifying some code and recompiling.
+
+[Click to jump to Release interface](https://github.com/WeBankFinTech/Exchangis/releases/tag/release-1.1.2)
+
+#### 2)Compile and package
+
+If you want to develop and compile datax engine yourself, the specific compilation steps are as follows:
+
+1.clone Exchangis's source code
+
+2.Under the exchangis-plugins module, find the datax engine and compile it separately, as follows:
+
+```
+cd {EXCHANGIS_CODE_HOME}/exchangis-plugins/engine/datax
+mvn clean install
+```
+
+Then the datax engine installation package can be found in this path:
+
+```
+{EXCHANGIS_CODE_HOME}/exchangis-plugins/engine/datax/target/out/datax
+```
+
+
+### Start deployment
+
+#### 1)DataX engine installation
+
+1. Get the packed datax.zip material package; the directory structure is:
+
+```shell
+datax
+-- dist
+-- plugin
+```
+
+2. Place it in the following directory under the Linkis installation path:
+
+```shell
+cd {LINKIS_HOME}/linkis/lib/linkis-engineconn-plugins
+```
+
+(Note: make sure the datax engine directory is owned by and readable to the users who will run it, generally the hadoop user and hadoop user group.)
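+The copy and ownership steps can be sketched as follows; the default path and the hadoop user/group are assumptions to adapt to your deployment:
+
+```shell
+# Copy the engine into place and hand it to the hadoop user/group.
+LINKIS_HOME=${LINKIS_HOME:-/appcom/install/linkis}
+PLUGIN_DIR="${LINKIS_HOME}/linkis/lib/linkis-engineconn-plugins"
+if [ -d datax ] && [ -d "$PLUGIN_DIR" ]; then
+  cp -r datax "$PLUGIN_DIR/"
+  chown -R hadoop:hadoop "$PLUGIN_DIR/datax"
+fi
+```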
+
+
+#### 2)Restart linkis-engineplugin service to make datax engine take effect
+
+A new engine added to Linkis will not take effect until the engineplugin service of Linkis is restarted; the restart script is ./linkis-daemon.sh in the Linkis installation directory. The specific steps are as follows:
+
+```
+cd {LINKIS_INSTALL_HOME}/linkis/sbin/
+./linkis-daemon.sh restart cg-engineplugin
+```
+
+After the service starts successfully, check in the linkis database whether the datax engine has been installed:
+
+```shell
+select * from linkis_cg_engine_conn_plugin_bml_resources where engine_conn_type='datax';
+```
+
+At this point, the datax installation and deployment is complete.
+
+For a more detailed introduction to engineplugin, please refer to the following article:
+https://linkis.apache.org/zh-CN/docs/latest/deployment/install-engineconn
\ No newline at end of file
diff --git a/docs/en_US/ch1/exchangis_deploy_en.md b/docs/en_US/ch1/exchangis_deploy_en.md
new file mode 100644
index 000000000..bcef1232d
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_deploy_en.md
@@ -0,0 +1,307 @@
+## Foreword
+
+Exchangis installation is mainly divided into the following four steps :
+
+1. Exchangis dependency environment preparation
+2. Exchangis installation and deployment
+3. DSS ExchangisAppConn installation and deployment
+4. Linkis Sqoop engine installation and deployment
+
+## 1. Exchangis dependency environment preparation
+
+#### 1.1 Basic software installation
+
+| Dependent components | Must be installed | Install through train |
+|------------------------------------------------------------------------------| ------ | --------------- |
+| JDK (1.8.0_141) | yes | [How to install JDK](https://www.oracle.com/java/technologies/downloads/) |
+| MySQL (5.5+) | yes | [How to install mysql](https://mysql.net.cn/) |
+| Hadoop(3.3.4,Other versions of Hadoop need to compile Linkis by themselves.) | yes | [Hadoop deployment](https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz) |
+| Hive(2.3.3,Other versions of Hive need to compile Linkis by themselves.) | yes | [Hive quick installation](https://www.apache.org/dyn/closer.cgi/hive/) |
+| SQOOP (1.4.6) | yes | [How to install Sqoop](https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html) |
+| DSS1.1.2 | yes | [How to install DSS](https://github.com/WeBankFinTech/DataSphereStudio-Doc/tree/main/zh_CN/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2) |
+| Linkis1.4.0 | yes | [How to install Linkis](https://linkis.apache.org/zh-CN/docs/1.4.0/deployment/deploy-quick) |
+| Nginx | yes | [How to install Nginx](http://nginx.org/) |
+
+**Underlying component checking**
+
+$\color{#FF0000}{Note: be sure to reinstall DSS 1.1.2 and Linkis 1.4.0. Please recompile Linkis and use the package released on June 15th}$
+
+[linkis1.4.0 code address ](https://github.com/apache/incubator-linkis/tree/release-1.4.0)
+
+[DSS1.1.2 code address ](https://github.com/WeBankFinTech/DataSphereStudio)
+
+**Enable the datasource services**
+
+By default, the two datasource-related services (ps-data-source-manager and ps-metadatamanager) are not started by the Linkis startup script. To use them, enable the switch by setting `export ENABLE_METADATA_MANAGER=true` in `$LINKIS_CONF_DIR/linkis-env.sh`. The datasource services will then be started and stopped together with linkis-start-all.sh / linkis-stop-all.sh. For more details about data sources, please refer to [Data Source Function Usage](https://linkis.apache.org/zh-CN/docs/1.1.0/deployment/start-metadatasource)
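+A sketch of the switch change, assuming LINKIS_CONF_DIR points at your Linkis conf directory and the variable is named ENABLE_METADATA_MANAGER as described above:
+
+```shell
+# Turn the datasource services on in linkis-env.sh, if the file is present.
+LINKIS_CONF_DIR=${LINKIS_CONF_DIR:-/appcom/install/linkis/conf}
+if [ -f "$LINKIS_CONF_DIR/linkis-env.sh" ]; then
+  sed -i 's/^export ENABLE_METADATA_MANAGER=.*/export ENABLE_METADATA_MANAGER=true/' \
+    "$LINKIS_CONF_DIR/linkis-env.sh"
+fi
+# Then restart Linkis so ps-data-source-manager and ps-metadatamanager come up:
+# sh $LINKIS_HOME/sbin/linkis-stop-all.sh && sh $LINKIS_HOME/sbin/linkis-start-all.sh
+```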
+
+#### 1.2 Create Linux users
+
+Please keep the deployment user of Exchangis consistent with that of Linkis; for example, use the hadoop account as the deployment user.
+
+#### 1.3 Add a dedicated token for Exchangis in Linkis
+
+###### 1) Add a dedicated token for Exchangis in Linkis:
+
+```sql
+INSERT INTO `linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`) VALUES ('EXCHANGIS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
+```
+
+###### 2) Hive data source authentication for Exchangis
+
+Insert the hive data source environment configuration by executing the following SQL statements in the Linkis database. Note that ${HIVE_METADATA_IP} and ${HIVE_METADATA_PORT} in the statements must be replaced before execution, for example: ${HIVE_METADATA_IP}=127.0.0.1, ${HIVE_METADATA_PORT}=3306:
+
+```sql
+INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境SIT', '开发环境SIT', 4, '{"uris":"thrift://${HIVE_METADATA_IP}:${HIVE_METADATA_PORT}", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}', now(), NULL, now(), NULL);
+INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境UAT', '开发环境UAT', 4, '{"uris":"thrift://${HIVE_METADATA_IP}:${HIVE_METADATA_PORT}", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}', now(), NULL, now(), NULL);
+```
+
+If the hive data source requires kerberos authentication when deployed, you need to specify the parameter keyTab in the parameter field of the linkis_ps_dm_datasource_env table; how to obtain its value is described in: [Setting and authenticating the hive data source in Linkis](https://linkis.apache.org/zh-CN/docs/latest/auth/token).
+
+#### 1.4 Underlying component checking
+
+Please ensure that DSS1.1.2 and Linkis1.4.0 are basically available. HiveQL scripts can be executed in the front-end interface of DSS, and DSS workflows can be created and executed normally.
+
+## 2. Exchangis installation and deployment
+
+### 2.1 Prepare installation package
+
+#### 2.1.1 Download binary package
+
+Download the latest installation package from the Released release of Exchangis [click to jump to the release interface](https://github.com/WeBankFinTech/Exchangis/releases).
+
+#### 2.1.2 Compile and package
+
+ Execute the following command in the root directory of the project:
+
+```shell script
+ mvn clean install
+```
+
+ After successful compilation, the installation package will be generated in the `assembly-package/target` directory of the project.
+
+### 2.2 Unzip the installation package
+
+ Execute the following command to decompress:
+
+```shell script
+ tar -zxvf wedatasphere-exchangis-{VERSION}.tar.gz
+```
+
+ The directory structure after decompression is as follows:
+
+```html
+|-- config:One-click installation deployment parameter configuration directory
+|-- db:Database initialization SQL directory
+|-- exchangis-extds
+|-- packages:Exchangis installation package directory
+ |-- exchangis-extds:exchangis datasource library
+ |-- lib:library
+|-- sbin:Script storage directory
+```
+
+### 2.3 Modify configuration parameters
+
+```shell script
+ vim config/config.sh
+```
+
+```shell script
+#IP of LINKIS_GATEWAY service address, which is used to find linkis-mg-gateway service.
+LINKIS_GATEWAY_HOST=
+
+#The LINKIS_GATEWAY service address port is used to find linkis-mg-gateway service.
+LINKIS_GATEWAY_PORT=
+
+#The URL of LINKIS_GATEWAY service address is composed of the above two parts.
+LINKIS_SERVER_URL=
+
+#Token used by requests to authenticate with the Linkis service; it can be obtained from ${LINKIS_INSTALL_HOME}/conf/token.properties in the Linkis installation directory.
+LINKIS_TOKEN=
+
+#Eureka service port
+EUREKA_PORT=
+
+#Eureka service URL
+DEFAULT_ZONE=
+```
+
+### 2.4 Modify database configuration
+
+```shell script
+ vim config/db.sh
+```
+
+```shell script
+# Set the connection information of the database.
+# Include IP address, port, user name, password and database name.
+MYSQL_HOST=
+MYSQL_PORT=
+MYSQL_USERNAME=
+MYSQL_PASSWORD=
+DATABASE=
+```
+
+### 2.5 Installation and startup
+
+#### 2.5.1 Execute one-click installation script.
+
+ Execute the install.sh script to complete the one-click installation and deployment:
+
+```shell script
+ sh sbin/install.sh
+```
+
+#### 2.5.2 Installation steps
+
+    This script is an interactive installation. After executing the install.sh script, the installation proceeds through the following steps:
+
+1. Initialize database tables.
+
+   When the prompt appears: `Do you want to confiugre and install project?`
+
+   Enter `y` to start installing the Exchangis service, or `n` to skip the installation.
+
+#### 2.5.3 Change the path of the configuration file and log file
+
+In the `env.properties` file in the sbin directory, set the configuration file path and the log file path:
+
+```properties
+EXCHANGIS_CONF_PATH="/appcom/config/exchangis-config/background"
+EXCHANGIS_LOG_PATH="/appcom/logs/exchangis/background"
+MODULE_DEFAULT_PREFIX="dss-exchangis-main-"
+MODULE_DEFAULT_SUFFIX="-dev"
+```
+
+EXCHANGIS_CONF_PATH indicates the configuration file path, and EXCHANGIS_LOG_PATH indicates the log file path. If the preceding configurations are used, perform the following operations:
+
+```shell script
+cd {EXCHANGIS_DEPLOY_PATH}
+cp -r config /appcom/config/exchangis-config/background
+mkdir -p /appcom/logs/exchangis/background
+```
+
+When the service starts, the configuration files under the configured path are used and logs are written to the configured path.
+
+#### 2.5.4 Start service
+
+Execute the following command to start Exchangis Server:
+
+```shell script
+ sh sbin/daemon.sh start server
+```
+
+ You can also use the following command to restart Exchangis Server:
+
+```shell script
+./sbin/daemon.sh restart server
+```
+
+After executing the startup script, the following prompt will appear, eureka address will also be typed in the console when starting the service:
+
+![企业微信截图_16532930262583](../../../images/zh_CN/ch1/register_eureka.png)
+
+### 2.6 Check whether the service started successfully.
+
+You can verify that the services started successfully on the Eureka page, as follows:
+
+Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} (Chrome browser is recommended) and check whether the services registered successfully.
+
+As shown in the figure below:
+
+![补充Eureka截图](../../../images/zh_CN/ch1/eureka_exchangis.png)
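+If no browser is at hand, the same check can be scripted. A sketch, assuming the Eureka address and that the registered Exchangis service name contains `exchangis`:
+
+```shell
+# Grep the Eureka registry for the Exchangis service.
+registered() { echo "$1" | grep -qi "exchangis"; }
+
+apps=$(curl -s "http://127.0.0.1:20303/eureka/apps")
+if registered "$apps"; then
+  echo "Exchangis service registered"
+else
+  echo "Exchangis service not found in Eureka yet"
+fi
+```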
+
+### 2.7 Front-end installation and deployment
+
+#### 2.7.1 Get the front-end installation package
+
+Exchangis has provided compiled front-end installation package by default, which can be downloaded and used directly :[Click to jump to the Release interface](https://github.com/WeBankFinTech/Exchangis/releases)
+
+You can also compile the Exchangis front-end yourself by executing the following commands in the Exchangis root directory:
+
+```shell script
+ cd web
+ npm i
+ npm run build
+```
+
+Get the compiled dist.zip front-end package from the `web/` path.
+
+The acquired front-end package can be placed anywhere on the server; it is recommended to keep it in the same directory as the back-end installation, place it there and unzip it.
+
+#### 2.7.2 Front-end installation and deployment
+
+1. Decompress front-end installation package
+
+   If you plan to deploy the Exchangis front-end package to the directory `/appcom/Install/exchangis/web`, please copy `dist.zip` to that directory and extract it:
+
+```shell script
+   # Please copy the exchangis front-end package to the `/appcom/Install/exchangis/web` directory first.
+   cd /appcom/Install/exchangis/web
+ unzip dist.zip
+```
+
+ Execute the following command:
+
+```shell script
+ vim /etc/nginx/conf.d/exchangis.conf
+```
+
+```
+ server {
+ listen 8098; # Access port If this port is occupied, it needs to be modified.
+ server_name localhost;
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+ location /dist {
+            root /appcom/Install/exchangis/web; # Exchangis front-end deployment directory
+ autoindex on;
+ }
+
+ location /api {
+ proxy_pass http://127.0.0.1:9020; # The address of the back-end Linkis needs to be modified.
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header x_real_ipP $remote_addr;
+ proxy_set_header remote_addr $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_http_version 1.1;
+ proxy_connect_timeout 4s;
+ proxy_read_timeout 600s;
+ proxy_send_timeout 12s;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+```
+
+#### 2.7.3 Start nginx and visit the front page
+
+ After the configuration is complete, reload the nginx configuration with the following command:
+
+```shell script
+ nginx -s reload
+```
+
+Please visit the Exchangis front-end page at http://${EXCHANGIS_INSTALL_IP}:8098/#/projectManage. If the following interface appears, the Exchangis front-end has been installed successfully. To actually use Exchangis, you also need to install DSS and Linkis and log in password-free through DSS, as shown in the following figure:
+
+![image](https://user-images.githubusercontent.com/27387830/170417473-af0b4cbe-758e-4800-a58f-0972f83d87e6.png)
+
+## 3. DSS ExchangisAppConn installation and deployment
+
+If you want to use the Exchangis front-end, you also need to install the DSS ExchangisAppConn plugin. Please refer to: [ExchangisAppConn plugin installation documentation](docs/en_US/ch1/exchangis_appconn_deploy_en.md)
+
+## 4. Linkis Sqoop engine installation and deployment
+
+If you want to execute Sqoop jobs in Exchangis normally, you also need to install the Linkis Sqoop engine. Please refer to: [Linkis Sqoop engine installation documentation](https://linkis.apache.org/zh-CN/docs/1.1.2/engine-usage/sqoop/)
+
+## 5. How to log in and use Exchangis
+
+For more instructions on using Exchangis, please refer to the [Exchangis user manual](docs/en_US/ch1/exchangis_user_manual_en.md).
diff --git a/docs/en_US/ch1/exchangis_interface_en.md b/docs/en_US/ch1/exchangis_interface_en.md
new file mode 100644
index 000000000..019f4f2aa
--- /dev/null
+++ b/docs/en_US/ch1/exchangis_interface_en.md
@@ -0,0 +1,839 @@
+# Exchangis interface document
+
+## Exchangis datasource module
+
+### 1. Get datasource type
+
+Interface description: Get the datasource type according to the request information
+
+Request URL: `/dss/exchangis/main/datasources/type`
+
+Request method: GET
+
+Request parameters:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------------------ | ----------- | ------------- | ------- |
+| request | HttpServletRequest | yes | / | request |
+
+Return parameter:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------ | ----------- | ------------- | ---------------------------- |
+| method | String | yes | | Called method (request path) |
+| status | int | yes | | Response status code |
+| message | String | no | | Information of the response |
+| data | List | yes | | The returned data |
+
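As a concrete illustration, such a request can be sketched with curl. The base URL below (the nginx `/api` proxy from the deployment section) and the session cookie are assumptions; adapt them to your deployment:

```shell script
# Assumed base URL of the gateway; adjust to your deployment
BASE_URL="http://127.0.0.1:8098/api"
URL="${BASE_URL}/dss/exchangis/main/datasources/type"

# The cookie is a placeholder for a real login session
curl -s --connect-timeout 5 -X GET "$URL" \
     -H "Content-Type: application/json" \
     --cookie "your_session_cookie=placeholder" \
  || echo "Request failed; check that ${BASE_URL} is reachable"
```

On success, the response is a JSON object with the `method`, `status`, `message` and `data` fields described above.
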
+### 2. Query datasource
+
+Interface description: Query the required datasources according to the query vo
+
+Request URL: `/dss/exchangis/main/datasources/query`
+
+Request method: GET, POST
+
+Request parameters:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------------------ | ----------- | ------------- | ------- |
+| request | HttpServletRequest | yes | / | request |
+| vo | DataSourceQueryVO | yes | / | |
+
+Return parameter:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------ | ----------- | ------------- | ---------------------------- |
+| method | String | yes | / | Called method (request path) |
+| status | int | yes | / | Response status code |
+| message | String | no | / | Information of the response |
+| data | List | yes | / | The returned data |
+
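Since this interface also accepts POST, it can be sketched as below. The base URL and cookie are assumptions, and the JSON body is an empty placeholder: the concrete fields depend on the `DataSourceQueryVO` definition, which is not listed here.

```shell script
BASE_URL="http://127.0.0.1:8098/api"   # assumed gateway address
URL="${BASE_URL}/dss/exchangis/main/datasources/query"

# Fill the body with the DataSourceQueryVO fields your query needs
curl -s --connect-timeout 5 -X POST "$URL" \
     -H "Content-Type: application/json" \
     --cookie "your_session_cookie=placeholder" \
     -d '{}' \
  || echo "Request failed; check that ${BASE_URL} is reachable"
```
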
+### 3. Query datasource list
+
+Interface description: Query datasources page by page according to the request information
+
+Request URL: `/dss/exchangis/main/datasources`
+
+Request method: GET
+
+Request parameters:
+
+| Name | Type | If required | Default value | Remark |
+| -------- | ------------------ | ----------- | ------------- | ------------------- |
+| request | HttpServletRequest | yes | / | request |
+| typeId | Long | yes | / | datasource typeId |
+| typeName | String | yes | / | datasource Typename |
+| page | Integer | yes | / | page num |
+| size | Integer | yes | / | size per page |
+
+Return parameter:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | --------------------- | ----------- | ------------- | ---------------------------- |
+| method | String | yes | / | Called method (request path) |
+| status | int | yes | / | Response status code |
+| message | String | no | / | Information of the response |
+| data    | List                  | yes         | /             | The returned data            |
+
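For this paged query the parameters are passed in the query string. The values below (`typeId=1`, `typeName=MYSQL`, first page of 10) are hypothetical, as are the base URL and cookie:

```shell script
BASE_URL="http://127.0.0.1:8098/api"   # assumed gateway address
# Hypothetical query values: type 1 named MYSQL, first page, 10 items per page
URL="${BASE_URL}/dss/exchangis/main/datasources?typeId=1&typeName=MYSQL&page=1&size=10"

curl -s --connect-timeout 5 -X GET "$URL" \
     --cookie "your_session_cookie=placeholder" \
  || echo "Request failed; check that ${BASE_URL} is reachable"
```
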
+### 4. Query datasource keydefines
+
+Interface description: Query the key definitions of a datasource according to the datasource type ID
+
+Request URL: `/dss/exchangis/main/datasources/types/{dataSourceTypeId}/keydefines`
+
+Request method: GET
+
+Request parameters:
+
+| Name | Type | If required | Default value | Remark |
+| ---------------- | ------------------ | ----------- | ------------- | ----------------- |
+| request | HttpServletRequest | yes | / | |
+| dataSourceTypeId | Long | yes | / | dataSource typeId |
+
+Return parameter:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ---------------------- | ----------- | ------------- | ---------------------------- |
+| method | String | yes | / | Called method (request path) |
+| status | int | yes | / | Response status code |
+| message | String | no | / | Information of the response |
+| data | List[Map[String, Any]] | yes | / | The returned data |
+
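Here `{dataSourceTypeId}` is a path parameter. A sketch with a hypothetical type ID of 1 (base URL and cookie are again assumptions):

```shell script
BASE_URL="http://127.0.0.1:8098/api"   # assumed gateway address
DATA_SOURCE_TYPE_ID=1                  # hypothetical datasource type id
URL="${BASE_URL}/dss/exchangis/main/datasources/types/${DATA_SOURCE_TYPE_ID}/keydefines"

curl -s --connect-timeout 5 -X GET "$URL" \
     --cookie "your_session_cookie=placeholder" \
  || echo "Request failed; check that ${BASE_URL} is reachable"
```
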
+### 5. Get datasource versions
+
+Interface description: Get the datasource versions according to the datasource ID
+
+Request URL: `/dss/exchangis/main/datasources/{id}/versions`
+
+Request method: GET
+
+Request parameters:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------------------ | ----------- | ------------- | ------------- |
+| request | HttpServletRequest | yes | / | request |
+| id | Long | yes | / | datasource id |
+
+Return parameter:
+
+| Name | Type | If required | Default value | Remark |
+| ------- | ------------------------- | ----------- | ------------- | ---------------------------- |
+| method | String | yes | / | Called method (request path) |
+| status | int | yes | / | Response status code |
+| message | String | no | / | Information of the response |
+| data | List