7 changes: 0 additions & 7 deletions .env

This file was deleted.

49 changes: 49 additions & 0 deletions .github/workflows/fate-push.yml
@@ -0,0 +1,49 @@
name: Push fate images to DockerHub

on:
push:
# Publish `master` as Docker `latest` image.
branches:
- master
- jenkins-integration

# Publish `v1.2.3` tags as releases.
tags:
- v*

jobs:
# no test is required
push:
runs-on: ubuntu-18.04
if: github.event_name == 'push'

steps:
- uses: actions/checkout@v2

- name: Prepare the TAG
id: prepare-the-tag
run: |
# strip git ref prefix from version
TAG=""
VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
if [ "$VERSION" = "master" ]; then
TAG=latest
else
TAG=${VERSION##*v}-release
fi
echo "::notice col=5 title=print tag::TAG=$TAG"
echo "::set-output name=tag::$TAG"
- name: Build image
run: |
export IMG_TAG=${{steps.prepare-the-tag.outputs.tag}}
cd docker-build
bash docker-build.sh all

- name: Log into DockerHub
run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin

- name: Push image
run: |
export IMG_TAG=${{steps.prepare-the-tag.outputs.tag}}
cd docker-build
bash docker-build.sh push
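The tag derivation in the `prepare-the-tag` step can be exercised locally with a minimal sketch (the `refs/...` inputs below are illustrative values, not taken from a real run):

```shell
#!/bin/sh
# Mirrors the workflow's tag logic: strip the git ref prefix, then map
# "master" to "latest" and "vX.Y.Z" tags to "X.Y.Z-release".
derive_tag() {
  ref="$1"
  VERSION=$(echo "$ref" | sed -e 's,.*/\(.*\),\1,')
  if [ "$VERSION" = "master" ]; then
    TAG=latest
  else
    TAG=${VERSION##*v}-release
  fi
  echo "$TAG"
}

derive_tag "refs/heads/master"   # -> latest
derive_tag "refs/tags/v1.8.0"    # -> 1.8.0-release
```

Note that `${VERSION##*v}` strips through the last `v`, so a tag like `v1.8.0` yields `1.8.0-release`.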
@@ -1,11 +1,10 @@
name: Publish
name: Push kubefate service image to DockerHub

on:
push:
# Publish `master` as Docker `latest` image.
branches:
- master
- jenkins-integration

# Publish `v1.2.3` tags as releases.
tags:
1 change: 1 addition & 0 deletions .gitignore
@@ -11,3 +11,4 @@ dist/
*.out
*.tgz
*.tar
release/
2 changes: 1 addition & 1 deletion build/ci/docker-deploy/docker_deploy.sh
@@ -1,5 +1,5 @@
#!/bin/bash
set -e
set -x
dir=$(dirname $0)

CONTAINER_NUM=13
2 changes: 1 addition & 1 deletion build/ci/docker-deploy/generate_config.sh
@@ -13,7 +13,7 @@ sed -i "s/serving_ip_list=(192.168.1.1 192.168.1.2)/serving_ip_list=(${host_ip})

# Replace tag to latest
# TODO should replace the serving as well
# sed -i "s/^TAG=.*/TAG=latest/g" .env
sed -i "s/^TAG=.*/TAG=latest/g" .env
echo "# config prepare is ok"

echo "# generate config"
2 changes: 1 addition & 1 deletion docker-build/.env
@@ -1,2 +1,2 @@
#PREFIX=federatedai
#IMG_TAG=1.7.2-release
#IMG_TAG=1.8.0-release
29 changes: 29 additions & 0 deletions docker-build/README.md
@@ -0,0 +1,29 @@
# KubeFATE docker build

This directory contains the build definitions of several images that KubeFATE uses to deploy FATE.

- client
- nginx
- spark
- python-spark

## Prerequisites

1. A Linux host
2. Docker: 18+

## Build

Build all images:

```bash
IMG_TAG=latest bash docker-build.sh all
```

## Push

Push the built images to DockerHub:

```bash
IMG_TAG=latest bash docker-build.sh push
```
2 changes: 1 addition & 1 deletion docker-build/client/Dockerfile
@@ -2,7 +2,7 @@ ARG SOURCE_PREFIX=federatedai
ARG SOURCE_TAG=1.5.0-release
FROM ${SOURCE_PREFIX}/python:${SOURCE_TAG} as data

FROM python:3.7
FROM python:3.6

COPY pipeline /data/projects/fate/pipeline
RUN pip install notebook fate-client pandas sklearn
8 changes: 6 additions & 2 deletions docker-build/docker-build.sh
@@ -14,8 +14,12 @@

set -e

PREFIX=federatedai
IMG_TAG=latest
if [ -z "$IMG_TAG" ]; then
IMG_TAG=latest
fi
if [ -z "$PREFIX" ]; then
PREFIX=federatedai
fi

source .env

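The `-z` guards above give `docker-build.sh` overridable defaults: an environment variable set by the caller wins, otherwise the script falls back to a built-in value. The same behavior can be written more compactly with the `${VAR:=default}` parameter expansion, sketched here in isolation:

```shell
#!/bin/sh
# Overridable defaults: keep the caller's value if already set,
# otherwise fall back. Equivalent to the if [ -z "$VAR" ] guards
# used in docker-build.sh.
: "${IMG_TAG:=latest}"
: "${PREFIX:=federatedai}"
echo "Building as ${PREFIX}/<image>:${IMG_TAG}"
```

Invoking the script as `IMG_TAG=1.8.0-release sh build.sh` would then override only the tag while keeping the default namespace.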
2 changes: 1 addition & 1 deletion docker-deploy/.env
@@ -1,5 +1,5 @@
RegistryURI=
TAG=1.7.2-release
TAG=1.8.0-release
SERVING_TAG=2.0.4-release

# PREFIX: namespace on the registry's server.
1 change: 1 addition & 0 deletions docker-deploy/README.md
@@ -17,6 +17,7 @@ First, on a Linux host, download KubeFATE from [releases pages](https://github.c

By default, the installation script pulls the images from Docker Hub during the deployment. If the target node is not connected to Internet, refer to the below section to set up a local registry such as Harbor and use the offline images.

***If you have deployed another version of FATE before, please delete and clean it up before deploying the new version; see [Deleting the cluster](#deleting-the-cluster).***
### Setting up a local registry Harbor (Optional)
Please refer to [this guide](../registry/README.md) to install Harbor as a local registry.

2 changes: 2 additions & 0 deletions docker-deploy/README_zh.md
@@ -83,6 +83,8 @@ RegistryURI=192.168.10.1/federatedai

### 用Docker Compose部署FATE

***If you have deployed another version of FATE before, please delete and clean it up before deploying the new version; see [Deleting the deployment](#删除部署).***

#### 配置需要部署的实例数目

部署脚本提供了部署多个FATE实例的功能,下面的例子我们部署在两个机器上,每个机器运行一个FATE实例,这里两台机器的IP分别为*192.168.7.1*和*192.168.7.2*
9 changes: 3 additions & 6 deletions docker-deploy/generate_config.sh
@@ -88,7 +88,7 @@ GenerateConfig() {
cp -r training_template/backends/spark/rabbitmq confs-$party_id/confs/

cp training_template/docker-compose-spark.yml confs-$party_id/docker-compose.yml
sed -i '157,173d' confs-$party_id/docker-compose.yml
sed -i '163,179d' confs-$party_id/docker-compose.yml
fi

if [ "$backend" == "spark_pulsar" ]; then
@@ -98,7 +98,7 @@ GenerateConfig() {
cp -r training_template/backends/spark/pulsar confs-$party_id/confs/

cp training_template/docker-compose-spark.yml confs-$party_id/docker-compose.yml
sed -i '139,155d' confs-$party_id/docker-compose.yml
sed -i '145,161d' confs-$party_id/docker-compose.yml
fi

if [ "$backend" == "spark_local_pulsar" ]; then
@@ -165,15 +165,12 @@ GenerateConfig() {
mkdir -p ${shared_dir}/${value}
done

sed -i "s|{/path/to/host/dir}|${dir}/${shared_dir}|g" ./confs-$party_id/docker-compose.yml
sed -i "s|<path-to-host-dir>|${dir}/${shared_dir}|g" ./confs-$party_id/docker-compose.yml

# Start the general config rendering
# fateboard
sed -i "s#^server.port=.*#server.port=${fateboard_port}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#^fateflow.url=.*#fateflow.url=http://${fate_flow_ip}:${fate_flow_http_port}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#<jdbc.username>#${db_user}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#<jdbc.password>#${db_password}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#<jdbc.url>#jdbc:mysql://${db_ip}:3306/${db_name}?characterEncoding=utf8\&characterSetResults=utf8\&autoReconnect=true\&failOverReadOnly=false\&serverTimezone=GMT%2B8#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#<fateboard.username>#${fateboard_username}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
sed -i "s#<fateboard.password>#${fateboard_password}#g" ./confs-$party_id/confs/fateboard/conf/application.properties
echo fateboard module of $party_id done!
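The `sed -i "s#<placeholder>#value#g"` rendering used throughout `generate_config.sh` can be illustrated in isolation. The temp file path and the `admin` values below are made up for the demo; only the placeholder names mirror the real template:

```shell
#!/bin/sh
# Minimal demo of the placeholder -> value templating done above.
# /tmp/demo-application.properties is a throwaway file, not a real
# KubeFATE config.
cat > /tmp/demo-application.properties <<'EOF'
server.board.login.username=<fateboard.username>
server.board.login.password=<fateboard.password>
EOF

fateboard_username=admin
fateboard_password=admin

# '#' is used as the sed delimiter so values containing '/' stay safe.
sed -i "s#<fateboard.username>#${fateboard_username}#g" /tmp/demo-application.properties
sed -i "s#<fateboard.password>#${fateboard_password}#g" /tmp/demo-application.properties

cat /tmp/demo-application.properties
```

Using `#` (or `|`) as the delimiter is what lets the real script substitute values like JDBC URLs and host paths without escaping every slash.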
14 changes: 10 additions & 4 deletions docker-deploy/training_template/docker-compose-eggroll.yml
@@ -25,19 +25,19 @@ volumes:
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/examples
device: <path-to-host-dir>/examples
shared_dir_federatedml:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/federatedml
device: <path-to-host-dir>/federatedml
shared_dir_data:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/data
device: <path-to-host-dir>/data

services:
rollsite:
@@ -86,7 +86,7 @@ services:
- 4671
volumes:
- ./confs/eggroll/conf:/data/projects/fate/eggroll/conf
- ./confs/fate_flow/conf/service_conf.yaml:/data/projects/fate/fate/conf/service_conf.yaml
- ./confs/fate_flow/conf/service_conf.yaml:/data/projects/fate/conf/service_conf.yaml
- ./shared_dir/data/nodemanager:/data/projects/fate/eggroll/data
networks:
- fate-network
@@ -117,6 +117,12 @@ services:
networks:
fate-network:
ipv4_address: 192.167.0.100
healthcheck:
test: ["CMD", "curl", "-f", "-X", "POST", "http://192.167.0.100:9380/v1/version/get"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
command:
- "/bin/bash"
- "-c"
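As a rough sketch of how Docker evaluates such a healthcheck (a probe exiting 0 means healthy; `retries` consecutive non-zero exits mark the container unhealthy), with the curl call stubbed out so the snippet runs without a live fateflow service:

```shell
#!/bin/sh
# probe() stands in for:
#   curl -f -X POST http://192.167.0.100:9380/v1/version/get
# It is stubbed here (exit code taken from $PROBE_RC, default 1 = failing)
# purely so this sketch is runnable offline.
probe() { return "${PROBE_RC:-1}"; }

retries=3
failures=0
status=starting
for attempt in 1 2 3; do
  if probe; then
    failures=0        # any success resets the failure streak
    status=healthy
  else
    failures=$((failures + 1))
  fi
done
if [ "$failures" -ge "$retries" ]; then
  status=unhealthy
fi
echo "$status"
```

With the stub failing every attempt, three consecutive failures reach the `retries` threshold and the sketch prints `unhealthy`; in the real compose file, `start_period: 40s` additionally gives fateflow time to boot before failures start counting.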
12 changes: 9 additions & 3 deletions docker-deploy/training_template/docker-compose-spark-slim.yml
@@ -26,19 +26,19 @@ volumes:
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/examples
device: <path-to-host-dir>/examples
shared_dir_federatedml:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/federatedml
device: <path-to-host-dir>/federatedml
shared_dir_data:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/data
device: <path-to-host-dir>/data

services:
fateboard:
@@ -71,6 +71,12 @@ services:
networks:
fate-network:
ipv4_address: 192.167.0.100
healthcheck:
test: ["CMD", "curl", "-f", "-X", "POST", "http://192.167.0.100:9380/v1/version/get"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
command:
- "/bin/bash"
- "-c"
12 changes: 9 additions & 3 deletions docker-deploy/training_template/docker-compose-spark.yml
@@ -25,19 +25,19 @@ volumes:
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/examples
device: <path-to-host-dir>/examples
shared_dir_federatedml:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/federatedml
device: <path-to-host-dir>/federatedml
shared_dir_data:
driver: local
driver_opts:
type: none
o: bind
device: {/path/to/host/dir}/data
device: <path-to-host-dir>/data

services:
fateboard:
@@ -70,6 +70,12 @@ services:
networks:
fate-network:
ipv4_address: 192.167.0.100
healthcheck:
test: ["CMD", "curl", "-f", "-X", "POST", "http://192.167.0.100:9380/v1/version/get"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
command:
- "/bin/bash"
- "-c"
@@ -1,29 +1,20 @@
server.port=8080
fateflow.url=
spring.datasource.driver-Class-Name=com.mysql.cj.jdbc.Driver
fateflow.url=http://localhost:9380
fateflow.http_app_key=
fateflow.http_secret_key=
spring.http.encoding.charset=UTF-8
spring.http.encoding.enabled=true
server.tomcat.uri-encoding=UTF-8
fateboard.datasource.jdbc-url=<jdbc.url>
fateboard.datasource.username=<jdbc.username>
fateboard.datasource.password=<jdbc.password>
fateboard.front_end.cors=false
fateboard.front_end.url=http://localhost:8028
server.tomcat.max-threads=1000
server.tomcat.max-connections=20000
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=100MB
spring.datasource.druid.filter.config.enabled=false
spring.datasource.druid.web-stat-filter.enabled=false
spring.datasource.druid.stat-view-servlet.enabled=false
server.compression.enabled=true
server.compression.mime-types=application/json,application/xml,text/html,text/xml,text/plain
server.board.login.username=<fateboard.username>
server.board.login.password=<fateboard.password>
management.endpoints.web.exposure.exclude=*
#server.ssl.key-store=classpath:
#server.ssl.key-store-password=
#server.ssl.key-password=
#server.ssl.key-alias=
spring.session.store-type=jdbc
spring.session.jdbc.initialize-schema=always
#HTTP_APP_KEY=
#HTTP_SECRET_KEY=
server.servlet.session.timeout=4h
server.servlet.session.cookie.max-age=4h
management.endpoints.web.exposure.exclude=*
2 changes: 0 additions & 2 deletions docs/FATE_On_Spark.md
@@ -27,8 +27,6 @@ In current implementation, the `fate_flow` service uses the `spark-submit` binar
"party_id": 10000
},
"job_parameters": {
"work_mode": 1,
"backend": 1,
"spark_run": {
"executor-memory": "4G",
"total-executor-cores": 4
2 changes: 0 additions & 2 deletions docs/FATE_On_Spark_With_Pulsar.md
@@ -70,8 +70,6 @@ When submitting a task, the user can declare in the config file to use Pulsar as
"job_parameters": {
"common": {
"job_type": "train",
"work_mode": 1,
"backend": 2,
"spark_run": {
"num-executors": 1,
"executor-cores": 2