The MPM examples are all based on the same credit scoring example; the goal of the model is to identify users who are likely to default on their loan.

This folder contains three different sets of scripts that showcase MPM:
* `data_processing`: script that processes the raw data and creates a new CSV file with the model's features
* `training`: script that trains a machine learning model and uploads it to Comet's Model Registry
* `serving`: FastAPI inference server that downloads a model from Comet's Model Registry, whose predictions are logged to MPM
## Setup

In order to run these demo scripts, you will need to set the following environment variables:
```bash
export COMET_API_KEY="<Comet API Key>"
export COMET_WORKSPACE="<Comet workspace to log data to>"
export COMET_PROJECT_NAME="<Comet project name>"
export COMET_MODEL_REGISTRY_NAME="<Comet model registry name>"

export COMET_URL_OVERRIDE="<Experiment Management (EM) endpoint, similar format to https://www.comet.com/clientlib/>"
export COMET_URL="<MPM ingestion endpoint, similar format to https://www.comet.com/>"
```
You will also need to install the Python libraries listed in `requirements.txt`.
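For example, with pip (assuming a working Python environment):

```shell
pip install -r requirements.txt
```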
## Data processing
For this demo, we will be using a simple credit scoring dataset available in the `data_processing` folder.
The preprocessing step is quite simple in this demo, but it showcases how you can use Comet's Artifacts feature to track all your data processing steps.
The code can be run using:
```bash
cd data_processing
python data_processing.py
```
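The dataset tracking inside `data_processing.py` follows the general Comet Artifacts pattern, which can be sketched as below. The artifact name and file path here are illustrative assumptions, not the exact values used by the script:

```python
def log_processed_dataset(csv_path, artifact_name="credit-scoring-dataset"):
    """Log a processed CSV file as a versioned Comet Artifact.

    Minimal sketch of the Artifacts pattern used by the demo; the
    artifact name and file path are illustrative only.
    """
    # Imported lazily so this sketch can be loaded without comet_ml installed.
    from comet_ml import Artifact, Experiment

    experiment = Experiment()  # reads COMET_API_KEY etc. from the environment
    artifact = Artifact(artifact_name, "dataset")
    artifact.add(csv_path)  # attach the local file to the artifact
    experiment.log_artifact(artifact)  # upload and version the artifact
    experiment.end()
```

Each run creates a new artifact version, so the exact data used to build the model's features stays reproducible.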
## Training
For this demo, we train a LightGBM model that we then upload to the Model Registry.
The code can be run using:
```bash
cd training
python model_training.py
```
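The upload step follows Comet's log-then-register pattern, sketched below under the assumption that the registry name comes from the `COMET_MODEL_REGISTry_NAME` environment variable; the exact flow in `model_training.py` may differ:

```python
import os


def upload_model_to_registry(model_path):
    """Sketch: attach a trained model file to an experiment, then
    promote it to Comet's Model Registry. The fallback name below
    is an illustrative assumption, not the demo's actual default.
    """
    # Imported lazily so this sketch can be loaded without comet_ml installed.
    from comet_ml import Experiment

    model_name = os.environ.get("COMET_MODEL_REGISTRY_NAME", "credit-scoring")
    experiment = Experiment()  # reads COMET_API_KEY etc. from the environment
    experiment.log_model(model_name, model_path)  # upload the model file
    experiment.register_model(model_name)  # create/update the registry entry
    experiment.end()
```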
## Serving
**Dependency**: In order to use this inference server, you will need to first train a model and upload it to the model registry using the training scripts.
The inference server is built using FastAPI and demonstrates how to use both the model registry to store models as well as MPM to log predictions.
The code can be run using:
```bash
cd serving
uvicorn main:app --reload
```
Once the server is running, it is available at `http://localhost:8000` and exposes the following endpoints:
* `http://localhost:8000/`: returns the string `FastAPI inference service`, indicating that the inference server is running
* `http://localhost:8000/health_check`: simple health check to make sure the server is running and accepting requests
* `http://localhost:8000/prediction`: makes a prediction and logs it to MPM
* `http://localhost:8000/create_demo_data`: creates 10,000 predictions over a one-week period to populate MPM dashboards
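With the server up, the `/prediction` endpoint can be exercised from any HTTP client. The sketch below uses only the standard library; the payload field names are hypothetical, so check the `serving` code for the feature names the model actually expects:

```python
import json
import urllib.request

SERVER = "http://localhost:8000"


def make_prediction_request(features):
    """Build a POST request for the /prediction endpoint.

    `features` is a dict of model inputs; the exact keys the demo
    expects are defined in the serving code, not here.
    """
    return urllib.request.Request(
        f"{SERVER}/prediction",
        data=json.dumps(features).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(make_prediction_request(features))` returns the server's response once the server is running; each such call is also logged to MPM.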
**Note:** It can take a few minutes for the data to appear in the debugger tab in the MPM UI.