diff --git a/docs/guides/clip-classification.mdx b/docs/guides/clip-classification.mdx new file mode 100644 index 0000000..ed0435c --- /dev/null +++ b/docs/guides/clip-classification.mdx @@ -0,0 +1,139 @@ +--- +title: Vision Language Classification +--- +## 1. Objective + +This guide provides step-by-step instructions on fine-tuning a classification model for image pair comparison tasks on Emissary. The model learns to classify whether two images are similar or violating based on visual features. + +*Note: This task is only supported with the dino-v2 and siglip backbones.* + +## 2. Dataset Preparation + +Prepare your dataset in JSON format with the following structure. + +## CLIP Classification Data Format + +**Each entry should contain**: + +- **prompt.content**: A list of exactly 2 image objects to compare. Each object must have `type` set to `"image"` and `image` containing either a URL or a base64-encoded image string. +- **completion**: A dictionary containing a `violation` field with a binary label — `1` for violation and `0` for no violation. + +```json +[ + { + "prompt": { + "content": [ + {"type": "image", "image": ""}, + {"type": "image", "image": ""} + ] + }, + "completion": { + "violation": 1 + } + } ] ``` + +## 3. Finetuning Preparation + +Please refer to the in-depth guide on Finetuning on Emissary here - [Quickstart Guide](../). + +### Create Training Project + +Navigate to **Dashboard** arriving at **Training**, the default page on the Emissary platform. + +1. Click **+ NEW PROJECT** in the dashboard. + + ![new_project](/img/guides/new_project.png) + +2. In the pop-up, enter a new training project name, and click **CREATE**. + + ![create_project](/img/guides/create_new_project_general.png) + +### Uploading Dataset + +A tile is created for your task. Click **Manage** to enter the task workspace. + +![manage_project](/img/guides/mange_project_general.png) + +1. Click **Manage Datasets** in the **Datasets Available** tile. 
+ + ![manage_dataset](/img/guides/manage_dataset_general.png) + +2. Click on **+ UPLOAD DATASET** and select training and test datasets. + + ![upload_dataset](/img/guides/upload_dataset_general.png) + +3. Name the dataset and upload the file. + + ![upload_dataset_button](/img/guides/upload_dataset_button_general.png) + +## 4. Model Finetuning + +Now, go back one panel by clicking **OVERVIEW** and then click **Manage Training Jobs** in the **Training Jobs** tile. + +![manage_training_job](/img/guides/mange_training_jobs_general.png) + +Click the **+ NEW TRAINING JOB** button and fill in the configuration. + +![new_training_job](/img/guides/new_job_clip_classification.png) + +![hyper_parameters](/img/guides/parameters_clip_classification.png) + +**Required Fields** + +- **Name**: Name of your training job (fine-tuned model) +- **Base Model**: Choose the backbone pre-trained / fine-tuned model from the drop-down list +- **Training Technique**: Choose the training technique to use; clip-classification is only supported with SFT +- **Task Type**: Select the task type clip-classification +- **Train Dataset**: Select the dataset you would like to train the backbone model on + +**(Optional)** + +- **Test Dataset**: You can provide a test dataset, which will then be used in the testing (evaluation) phase. If none is selected, the testing phase is skipped. + - **Split Train/Test Dataset**: Use a ratio of the train dataset as the test set + - **Select existing dataset:** Use a separate uploaded dataset for testing +- **Rebalance Dataset**: Not supported for clip-classification. +- **Hyper Parameters**: Hyper parameters are set to good default values, but you can adjust them if needed. + + *Note: For clip-classification, dino only supports the “dino” loss_type.* + +- **Test Functions**: When you select any Test Dataset option, you can also provide your own test functions, which provide aggregate results. 
By default, clip-classification evaluates the violation probability for each image pair in your test dataset and reports the cutoff that maximizes your classification accuracy. + +After initiating the training job, you will see it in the list. + +![training_jobs](/img/guides/training_job_clip_classification.png) + +Clicking the row takes you to the training job detail page. + +![training_job_details](/img/guides/training_job_details_clip_class.png) + +You can check the `Status` and `Progress` in the summary, and view the live logs and loss graph via the tabs on the side. + +![training_job_logs](/img/guides/training_logs_clip_classification.png) + +![training_loss_graph](/img/guides/train_loss_clip_classification.png) + +Go to the Artifacts tab to check checkpoints and test results (if a test dataset and test functions were provided). + +![artifacts](/img/guides/artifacts_clip_classification.png) + +## 5. Deployment + +From the Artifacts tab you can deploy any checkpoint from the training job by clicking the `DEPLOY` button. + +![fine_tuned_model_deployment_modal](/img/guides/deployment_clip_classification.png) + +(Optional) You can also set resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time. + +Once you initiate your deployment, go to the Inference dashboard to see your recent and previous deployments. + +![engine_list](/img/guides/engine_clip_classification.png) + +By clicking the card you can see the details of your deployment (inference engine). + +![deployment_detail](/img/guides/clip_classification_engine_detail.png) + +Once your deployment status becomes `Deployed`, your inference server is ready to be used. You can test your deployment by calling the API; refer to the API Examples tab. 
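The request body for a deployed clip-classification engine mirrors the training data format above. As a rough sketch, with the endpoint URL, auth header, and response fields all assumed rather than confirmed (the API Examples tab has the real contract):

```python
import json

# Placeholder endpoint; the real URL and auth scheme are shown in the API Examples tab.
API_URL = "https://<your-emissary-host>/<engine-endpoint>"

def build_pair_payload(image_a: str, image_b: str) -> dict:
    """Build a request body for one image pair (each a URL or base64 string),
    mirroring the prompt.content structure used in the training data."""
    return {
        "prompt": {
            "content": [
                {"type": "image", "image": image_a},
                {"type": "image", "image": image_b},
            ]
        }
    }

payload = build_pair_payload("https://example.com/a.jpg", "https://example.com/b.jpg")
print(json.dumps(payload, indent=2))

# Sending the request (assumes the third-party `requests` package and a live engine):
# import requests
# resp = requests.post(API_URL, json=payload, headers={"Authorization": "Bearer <token>"})
# print(resp.json())  # expected to include a violation probability for the pair
```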
+ +![api_example](/img/guides/api_example_clip_classification.png) \ No newline at end of file diff --git a/docs/guides/clip-embedding.mdx b/docs/guides/clip-embedding.mdx index 2b63b3f..7068e0f 100644 --- a/docs/guides/clip-embedding.mdx +++ b/docs/guides/clip-embedding.mdx @@ -1,5 +1,5 @@ --- -title: CLIP Embedding +title: Vision Language Embedding --- ## 1. Objective @@ -70,7 +70,7 @@ A tile is created for your task. Click **Manage** to enter the task workspace ![upload_dataset](/img/guides/upload_dataset_clip_embedding.png) -3. . Name dataset and upload files +3. Name dataset and upload files ![upload_dataset_button](/img/guides/upload_button_clip_embedding.png) diff --git a/docs/guides/emissary-regression.mdx b/docs/guides/emissary-regression.mdx index b396c0c..d82acad 100644 --- a/docs/guides/emissary-regression.mdx +++ b/docs/guides/emissary-regression.mdx @@ -1,118 +1,130 @@ --- -title: LLMs for Regression - Guide +title: LLMs as Regressor - Guide --- - ## 1. Objective -This guide provides step-by-step instructions on finetuning a model for **Regression** tasks on Emissary using our regression approach. In this approach, we add a regressive head on top of the base LLMs that returns score based on the given score. We recommend using **Llama3.1-8B-instruct** for this task. +This guide provides step-by-step instructions on fine-tuning a model for **Regression** tasks on Emissary using our novel regression approach. In this approach, we add a regression head on top of the base LLM that returns predicted values. ## 2. Dataset Preparation -Prepare your dataset in the appropriate format for the regression task. +Prepare your dataset in the appropriate format for the Regression task. ## Regression Data Format -**Each entry should contain:** - -- **Prompt**: The input text for Regression. -- **Completion**: A float value . - -**JSONL Format** +Each entry should contain: +- **prompt**: The input text for the regression task. 
+- **completion**: The ground-truth numeric target value (a JSON number, typically a float) that the model should predict for the given prompt. This represents a continuous label (e.g., a score), and the valid range depends on your task definition. ```json { - "prompt": "This is a sample text for regression", - "completion": 0.7 + "prompt": "This is a sample text for regression task.", + "completion": 0.231 } ``` +*Note: For large target values or wide-ranging scales, normalize targets before training for better stability.* ## 3. Finetuning Preparation Please refer to the in-depth guide on Finetuning on Emissary here - [Quickstart Guide](../). -### Create Model Service -Navigate to **Dashboard** arriving at **Model Services**, the default page on the Emissary platform. +### Create Training Project + +Navigate to **Dashboard** arriving at **Training**, the default page on the Emissary platform. + +1. Click **+ NEW PROJECT** in the dashboard. + + ![new_project](/img/guides/new_project.png) -1. Click **+ NEW SERVICE** in the dashboard. -![project_create_1](/img/guides/project_create1.png) -2. In the pop-up, enter a new model service name, and click **CREATE**. -![project_create_2](/img/guides/project_create2.png) +2. In the pop-up, enter a new training project name, and click **CREATE**. -### Uploading Datasets + ![create_project](/img/guides/create_new_project_general.png) + +### Uploading Dataset A tile is created for your task. Click **Manage** to enter the task workspace. -![project_manage_1](/img/guides/project_manage1.png) +![manage_project](/img/guides/mange_project_general.png) + +1. Click **Manage Datasets** in the **Datasets Available** tile. + + ![manage_dataset](/img/guides/manage_dataset_general.png) -1. Click **MANAGE** in the **Datasets Available** tile. -![project_manage_1](/img/guides/project_manage2.png) 2. Click on **+ UPLOAD DATASET** and select training and test datasets. -![dataset1](/img/guides/dataset1.png) -3. 
Name datasets clearly to distinguish between training and test data (e.g., train_regression_data.csv, test_regression_data.csv). + ![upload_dataset](/img/guides/upload_dataset_general.png) + +3. Name the dataset and upload the file. + + ![upload_dataset_button](/img/guides/upload_dataset_button_general.png) ## 4. Model Finetuning -Now, go back one panel by clicking **OVERVIEW** and then click **MANAGE** in the **Training Jobs** tile. -![project_manage_1](/img/guides/project_manage3.png) -Here, we’ll kick off finetuning. The shortest path to finetuning a model is by clicking **+ NEW TRAINING JOB**, naming the output model, picking a backbone (**base model**), selecting the training dataset (you must have uploaded it in the step before), and finally hitting **START NEW TRAINING JOB**. +Now, go back one panel by clicking **OVERVIEW** and then click **Manage Training Jobs** in the **Training Jobs** tile. + +![manage_training_job](/img/guides/mange_training_jobs_general.png) + +Click the **+ NEW TRAINING JOB** button and fill in the configuration. + +![new_training_job](/img/guides/new_training_jobs_regression.png) + +![hyper_parameters](/img/guides/parameters_regression.png) + +**Required Fields** + +- Name: Name of your training job (fine-tuned model) +- Base Model: Choose the backbone pre-trained / fine-tuned model from the drop-down list +- Training Technique: Choose the training technique to use; regression is only supported with SFT +- Task Type: Select the task type regression +- Train Dataset: Select the dataset you would like to train the backbone model on + +**(Optional)** + +- Test Dataset: You can provide a test dataset, which will then be used in the testing (evaluation) phase. If none is selected, the testing phase is skipped. + - **Split Train/Test Dataset**: Use a ratio of the train dataset as the test set + - **Select existing dataset**: Use a separate uploaded dataset for testing +- Hyper Parameters: Hyper parameters are set to good default values, but you can adjust them if needed. 
+- **Test Functions:** When you select any Test Dataset option, you can also provide your own test functions, which provide aggregate results. We recommend trying our `regression mae`. + + ![test_functions](/img/guides/test_functions_regression.png) + +After initiating the training job, you will see it in the list. + +![training_jobs](/img/guides/training_jobs_regression.png) -### Selecting Regression Option -When creating a new training job, you need to specify that you are performing a regression task to utilize the regression approach. +Clicking the row takes you to the training job detail page. -In the Training Job Creation page, locate the Task Type option. -Select **Regression** from the given options. +![training_job_details](/img/guides/training_job_details_regression.png) -This selection ensures that a **regression head** is added on top of the base LLM, enabling the model to return **scores** for the specified text. +You can check the `Status` and `Progress` in the summary, and view the live logs and loss graph via the tabs on the side. -![new_traning_job](/img/guides/train_regression.png) +![training_job_logs](/img/guides/training_logs_regression.png) -A custom function that calculates a matching score for the given expected and predicted outputs. -Uncomment the suitable regression metric function to use it. +![training_loss_graph](/img/guides/training_loss_regression.png) -![classification_metric](/img/guides/regression_metric.png) -### Training Parameter Configuration -Please refer to the in-depth guide on configuring training parameters here - [Finetuning Parameter Guide](../fine-tuning/parameters). +Go to the Artifacts tab to check checkpoints and test results (if a test dataset and test functions were provided). +![artifacts](/img/guides/artifacts_regression.png) -## 5. Model Monitoring & Evaluation +## 5. 
Deployment -### Using Test Datasets +From the Artifacts tab you can deploy any checkpoint from the training job by clicking the `DEPLOY` button. -Including a test dataset allows you to evaluate the model's performance during training. +![fine_tuned_model_deployment_modal](/img/guides/deployment_regression.png) -- **Per Epoch Evaluation**: The platform evaluates the model at each epoch using the test dataset. -- **Metrics and Outputs**: View evaluation metrics and generated outputs for test samples. -- Post completion of training, check scores in **Training Job --> Artifacts**. -

For the **LLM model**, expect the following: +(Optional) You can also set resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time. -![evaluate_classification](/img/guides/evaluate_regression.png) +Once you initiate your deployment, go to the Inference dashboard to see your recent and previous deployments. -## 6. Deployment -Refer to the in-depth walkthrough on deploying a model on Emissary here - [Deployment Guide](../fine-tuning/deployment). +![engine_list](/img/guides/engine_regression.png) -Deploying your models allows you to serve them and integrate them into your applications. +By clicking the card you can see the details of your deployment (inference engine). -### Finetuned Model Deployment -1. Navigate to the **Training Jobs Page**. From the list of finetuning jobs, select the one you want to deploy. -![deployment_fine_tuned](/img/guides/deployment_fine_tuned.png) -2. Go to the **ARTIFACTS** tab. -![artifacts1](/img/guides/artifacts1.png) -3. Select a **Checkpoint** to Deploy. -![checkpoint_evaluate](/img/guides/checkpoint_evaluate.png) -4. Go to **Deployments** to check the status of you deployed model -![checkpoint_evaluate](/img/guides/Deployment.png) -5. Once the model is deployed (as shown in the status), go to the testing tab. -![checkpoint_evaluate](/img/guides/deployment_status.png) -6. Test your samples in the the input box. -![checkpoint_evaluate](/img/guides/testing_tab.png) +![deployment_detail](/img/guides/regression_engine_detail.png) +Once your deployment status becomes `Deployed`, your inference server is ready to be used. You can test your deployment on the Testing tab (UI) or call it via API, referring to the API Examples tab. +![ui_testing](/img/guides/ui_testing_regression.png) -## 7. 
Best Practices -- **Start Small**: Begin with a smaller dataset to validate your setup. -- **Monitor Training**: Keep an eye on training logs and metrics. -- **Iterative Testing**: Use the test dataset to iteratively improve your model. -- **Data Format**: Use the recommended data formats for your chosen model to ensure compatibility and optimal performance. \ No newline at end of file +![api_example](/img/guides/api_example_regression.png) \ No newline at end of file diff --git a/docs/guides/ner.mdx b/docs/guides/ner.mdx new file mode 100644 index 0000000..97cb242 --- /dev/null +++ b/docs/guides/ner.mdx @@ -0,0 +1,128 @@ +--- +title: Named Entity Recognition (NER) +--- +## 1. Objective + +This guide provides step-by-step instructions on fine-tuning a model for **Named Entity Recognition** tasks on Emissary using our novel NER approach. In this approach, we add a token classification head on top of the base LLM that returns probabilities for each token. We recommend using **Qwen3-4B-Base** for this task. + +## 2. Dataset Preparation + +Prepare your dataset in the appropriate format for the NER task. + +## NER Data Format + +**Each entry should contain**: + +- **prompt**: The input text from which named entities should be extracted. +- **completion**: A JSON-serialized string containing a dictionary that maps entity type names to lists of entity mentions found in the prompt. Each key is an entity type, and each value is an array of matched entity strings. Use an empty array [] for entity types with no mentions in the given prompt. + +```json +{ + "prompt": "This is a sample text for NER task.", + "completion": "{\"Entity_Type_A\": [\"sample\"], \"Entity_Type_B\": [\"NER task\"], \"Entity_Type_C\": []}" +} ``` + +## 3. Finetuning Preparation + +Please refer to the in-depth guide on Finetuning on Emissary here - [Quickstart Guide](../). + +### Create Training Project + +Navigate to **Dashboard** arriving at **Training**, the default page on the Emissary platform. 
+ +1. Click **+ NEW PROJECT** in the dashboard. + + ![new_project](/img/guides/new_project.png) + +2. In the pop-up, enter a new training project name, and click **CREATE**. + + ![create_project](/img/guides/create_new_project_general.png) + +### Uploading Dataset + +A tile is created for your task. Click **Manage** to enter the task workspace. + +![manage_project](/img/guides/mange_project_general.png) + +1. Click **Manage Datasets** in the **Datasets Available** tile. + + ![manage_dataset](/img/guides/manage_dataset_general.png) + +2. Click on **+ UPLOAD DATASET** and select training and test datasets. + + ![upload_dataset](/img/guides/upload_dataset_general.png) + +3. Name the dataset and upload the file. + + ![upload_dataset_button](/img/guides/upload_dataset_button_general.png) + +## 4. Model Finetuning + +Now, go back one panel by clicking **OVERVIEW** and then click **Manage Training Jobs** in the **Training Jobs** tile. + +![manage_training_job](/img/guides/mange_training_jobs_general.png) + +Click the **+ NEW TRAINING JOB** button and fill in the configuration. + +![new_training_job](/img/guides/new_training_job_ner.png) + +![hyper_parameters](/img/guides/hyper_parameters_ner.png) + +**Required Fields** + +- Name: Name of your training job (fine-tuned model) +- Base Model: Choose the backbone pre-trained / fine-tuned model from the drop-down list +- Training Technique: Choose the training technique to use; NER is only supported with SFT +- Task Type: Select the task type ner +- Train Dataset: Select the dataset you would like to train the backbone model on + +**(Optional)** + +- Test Dataset: You can provide a test dataset, which will then be used in the testing (evaluation) phase. If none is selected, the testing phase is skipped. 
- **Split Train/Test Dataset**: Use a ratio of the train dataset as the test set + - **Select existing dataset:** Use a separate uploaded dataset for testing +- Hyper Parameters: Hyper parameters are set to good default values, but you can adjust them if needed. +- **Test Functions:** When you select any Test Dataset option, you can also provide your own test functions, which provide aggregate results. We recommend trying our `ner json eval auto`. + + ![test_functions](/img/guides/test_functions_ner.png) + +After initiating the training job, you will see it in the list. + +![training_jobs](/img/guides/training_job_ner.png) + +Clicking the row takes you to the training job detail page. + +![training_job_details](/img/guides/training_job_detail_ner.png) + +You can check the `Status` and `Progress` in the summary, and view the live logs and loss graph via the tabs on the side. + +![training_job_logs](/img/guides/train_logs_ner.png) + +![training_loss_graph](/img/guides/train_loss_ner.png) + +Go to the Artifacts tab to check checkpoints and test results (if a test dataset and test functions were provided). + +![artifacts](/img/guides/artifacts_ner.png) + +## 5. Deployment + +From the Artifacts tab you can deploy any checkpoint from the training job by clicking the `DEPLOY` button. + +![fine_tuned_model_deployment_modal](/img/guides/deployment_ner.png) + +(Optional) You can also set resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time. + +Once you initiate your deployment, go to the Inference dashboard to see your recent and previous deployments. + +![engine_list](/img/guides/engine_ner.png) + +By clicking the card you can see the details of your deployment (inference engine). 
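Before calling the deployed NER model, recall from the dataset section that `completion` is a JSON-serialized string rather than a nested object, so training records and decoded responses alike need an explicit `json` round trip. A minimal sanity-check sketch (the entity type names are purely illustrative):

```python
import json

# One NER record in the documented format; entity type names are illustrative.
entities = {"Entity_Type_A": ["sample"], "Entity_Type_B": ["NER task"], "Entity_Type_C": []}
record = {
    "prompt": "This is a sample text for NER task.",
    "completion": json.dumps(entities),  # note: a JSON *string*, not a nested object
}

# Decoding the completion recovers the entity dictionary.
decoded = json.loads(record["completion"])
assert decoded == entities

# Sanity check: every extracted mention should appear verbatim in the prompt.
for etype, mentions in decoded.items():
    for mention in mentions:
        assert mention in record["prompt"], f"{mention!r} missing from prompt"
```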
+ +![deployment_detail](/img/guides/engine_detail_ner.png) + +Once your deployment status becomes `Deployed`, your inference server is ready to be used. You can test your deployment on the Testing tab (UI) or call it via API, referring to the API Examples tab. + +![ui_testing](/img/guides/ui_testing_ner.png) + +![api_example](/img/guides/api_example_ner.png) \ No newline at end of file diff --git a/docs/guides/text-generation.mdx b/docs/guides/text-generation.mdx new file mode 100644 index 0000000..aac26ed --- /dev/null +++ b/docs/guides/text-generation.mdx @@ -0,0 +1,149 @@ +--- +title: Text Generation +--- +## 1. Objective + +This guide provides step-by-step instructions on fine-tuning a model for **Text Generation** tasks on Emissary. + +## 2. Dataset Preparation + +Prepare your dataset in the appropriate format for the text generation task. + +## Text Generation Data Format + +For text generation, your dataset can use either of two formats. + +### Completion + +- **Prompt**: The input text for generation +- **Completion**: The output text that the model should generate + +```json +{ "prompt": "input", "completion": "output" } ``` + +### Chat + +- **Messages**: A list of messages, each containing a role and its corresponding content. + +```json +{ + "messages": [ + { "role": "user", "content": "input" }, + { "role": "assistant", "content": "response" } + ] +} ``` + +## 3. Finetuning Preparation + +Please refer to the in-depth guide on Finetuning on Emissary here - [Quickstart Guide](../). + +### Create Training Project + +Navigate to **Dashboard** arriving at **Training**, the default page on the Emissary platform. + +1. Click **+ NEW PROJECT** in the dashboard. + + ![new_project](/img/guides/new_project.png) + +2. In the pop-up, enter a new training project name, and click **CREATE**. + + ![create_project](/img/guides/create_new_project_general.png) + +### Upload Dataset + +A tile is created for your task. 
Click **Manage** to enter the task workspace. + +![manage_project](/img/guides/mange_project_general.png) + +1. Click **Manage Datasets** in the **Datasets Available** tile. + + ![manage_dataset](/img/guides/manage_dataset_general.png) + +2. Click on **+ UPLOAD DATASET** and select training and test datasets. + + ![upload_dataset](/img/guides/upload_dataset_general.png) + +3. Name the dataset and upload the file. + + ![upload_dataset_button](/img/guides/upload_dataset_button_general.png) + +## 4. Model Finetuning + +Now, go back one panel by clicking **OVERVIEW** and then click **Manage Training Jobs** in the **Training Jobs** tile. + +![manage_training_job](/img/guides/mange_training_jobs_general.png) + +Click the **+ NEW TRAINING JOB** button and fill in the configuration. + +![new_training_job](/img/guides/new_training_jobs_text_gen.png) + +SFT Hyper Parameters + +![sft_hyper_parameters](/img/guides/sft_hyper_params_text_gen.png) + +GRPO Hyper Parameters + +![grpo_parameters](/img/guides/grpo_hyper_parameters_text_gen.png) + +**Required Fields** + +- Name: Name of your training job (fine-tuned model) +- Base Model: Choose the backbone pre-trained / fine-tuned model from the drop-down list +- Training Technique: Choose the training technique to use +- Task Type: Select the task type text generation +- Train Dataset: Select the dataset you would like to train the backbone model on +- Reward Function **(GRPO ONLY)**: Select or add the reward function used in GRPO training. The total reward must sum to 1. + +**(Optional)** + +- Test Dataset: You can provide a test dataset, which will then be used in the testing (evaluation) phase. If none is selected, the testing phase is skipped. + - **Split Train/Test Dataset**: Use a ratio of the train dataset as the test set + - **Select existing dataset**: Use a separate uploaded dataset for testing +- Hyper Parameters: Hyper parameters are set to good default values, but you can adjust them if needed. 
+- **Test Functions:** When you select any Test Dataset option, you can also provide your own test functions, which provide aggregate results. We recommend trying our `test similarity`. + + ![test_functions](/img/guides/test_function_text_gen.png) + + ![reward_functions](/img/guides/reward_functions_text_gen.png) + +After initiating the training job, you will see it in the list. + +![training_jobs](/img/guides/training_job_list_text_gen.png) + +Clicking the row takes you to the training job detail page. + +![training_job_details](/img/guides/training_job_details_text_gen.png) + +You can check the `Status` and `Progress` in the summary, and view the live logs and loss graph via the tabs on the side. + +![training_job_logs](/img/guides/training_logs_text_gen.png) + +![training_loss_graph](/img/guides/training_loss_text_gen.png) + +Go to the Artifacts tab to check checkpoints and test results (if a test dataset and test functions were provided). + +![artifacts](/img/guides/artifacts_text_gen.png) + +## 5. Deployment + +From the Artifacts tab you can deploy any checkpoint from the training job by clicking the `DEPLOY` button. + +![fine_tuned_model_deployment_modal](/img/guides/deployment_text_gen.png) + +(Optional) You can also set resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time. + +Once you initiate your deployment, go to the Inference dashboard to see your recent and previous deployments. + +![engine_list](/img/guides/sft_engine.png) + +By clicking the card you can see the details of your deployment (inference engine). + +![deployment_detail](/img/guides/text_gen_engine_details.png) + +Once your deployment status becomes `Deployed`, your inference server is ready to be used. 
You can test your deployment on the Testing tab (UI) or call it via API, referring to the API Examples tab. + +![ui_testing](/img/guides/ui_testing_text_gen.png) + +![api_example](/img/guides/api_testing_text_gen.png) \ No newline at end of file diff --git a/sidebars.ts b/sidebars.ts index 57f9d60..2a8e57d 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -29,13 +29,12 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Guides', items: [ - 'guides/data-extraction', - 'guides/classification', - 'guides/embeddings', - 'guides/clip-embedding', 'guides/emissary-classification', 'guides/emissary-regression', - 'guides/finetuning-pdf-docs' + 'guides/text-generation', + 'guides/ner', + 'guides/clip-classification', + 'guides/clip-embedding', ] } ], diff --git a/static/img/guides/api_example_clip_classification.png b/static/img/guides/api_example_clip_classification.png new file mode 100644 index 0000000..2b67ee7 Binary files /dev/null and b/static/img/guides/api_example_clip_classification.png differ diff --git a/static/img/guides/api_example_ner.png b/static/img/guides/api_example_ner.png new file mode 100644 index 0000000..70f4ac5 Binary files /dev/null and b/static/img/guides/api_example_ner.png differ diff --git a/static/img/guides/api_example_regression.png b/static/img/guides/api_example_regression.png new file mode 100644 index 0000000..adc1b3c Binary files /dev/null and b/static/img/guides/api_example_regression.png differ diff --git a/static/img/guides/api_testing_text_gen.png b/static/img/guides/api_testing_text_gen.png new file mode 100644 index 0000000..960af2f Binary files /dev/null and b/static/img/guides/api_testing_text_gen.png differ diff --git a/static/img/guides/artifacts_clip_classification.png b/static/img/guides/artifacts_clip_classification.png new file mode 100644 index 0000000..ffba681 Binary files /dev/null and b/static/img/guides/artifacts_clip_classification.png differ diff --git a/static/img/guides/artifacts_ner.png 
b/static/img/guides/artifacts_ner.png
new file mode 100644
index 0000000..997fa46
Binary files /dev/null and b/static/img/guides/artifacts_ner.png differ
diff --git a/static/img/guides/artifacts_regression.png b/static/img/guides/artifacts_regression.png
new file mode 100644
index 0000000..64a6d7e
Binary files /dev/null and b/static/img/guides/artifacts_regression.png differ
diff --git a/static/img/guides/artifacts_text_gen.png b/static/img/guides/artifacts_text_gen.png
new file mode 100644
index 0000000..00ab7d7
Binary files /dev/null and b/static/img/guides/artifacts_text_gen.png differ
diff --git a/static/img/guides/clip_classification_engine_detail.png b/static/img/guides/clip_classification_engine_detail.png
new file mode 100644
index 0000000..a34864c
Binary files /dev/null and b/static/img/guides/clip_classification_engine_detail.png differ
diff --git a/static/img/guides/create_new_project_general.png b/static/img/guides/create_new_project_general.png
new file mode 100644
index 0000000..25229dc
Binary files /dev/null and b/static/img/guides/create_new_project_general.png differ
diff --git a/static/img/guides/deployment_clip_classification.png b/static/img/guides/deployment_clip_classification.png
new file mode 100644
index 0000000..409ff4b
Binary files /dev/null and b/static/img/guides/deployment_clip_classification.png differ
diff --git a/static/img/guides/deployment_ner.png b/static/img/guides/deployment_ner.png
new file mode 100644
index 0000000..48087b5
Binary files /dev/null and b/static/img/guides/deployment_ner.png differ
diff --git a/static/img/guides/deployment_regression.png b/static/img/guides/deployment_regression.png
new file mode 100644
index 0000000..39cdf4c
Binary files /dev/null and b/static/img/guides/deployment_regression.png differ
diff --git a/static/img/guides/deployment_text_gen.png b/static/img/guides/deployment_text_gen.png
new file mode 100644
index 0000000..1fd2666
Binary files /dev/null and b/static/img/guides/deployment_text_gen.png differ
diff --git a/static/img/guides/engine_clip_classification.png b/static/img/guides/engine_clip_classification.png
new file mode 100644
index 0000000..417b8b8
Binary files /dev/null and b/static/img/guides/engine_clip_classification.png differ
diff --git a/static/img/guides/engine_detail_ner.png b/static/img/guides/engine_detail_ner.png
new file mode 100644
index 0000000..76090b3
Binary files /dev/null and b/static/img/guides/engine_detail_ner.png differ
diff --git a/static/img/guides/engine_ner.png b/static/img/guides/engine_ner.png
new file mode 100644
index 0000000..f97d903
Binary files /dev/null and b/static/img/guides/engine_ner.png differ
diff --git a/static/img/guides/engine_regression.png b/static/img/guides/engine_regression.png
new file mode 100644
index 0000000..1276977
Binary files /dev/null and b/static/img/guides/engine_regression.png differ
diff --git a/static/img/guides/grpo_hyper_parameters_text_gen.png b/static/img/guides/grpo_hyper_parameters_text_gen.png
new file mode 100644
index 0000000..c159318
Binary files /dev/null and b/static/img/guides/grpo_hyper_parameters_text_gen.png differ
diff --git a/static/img/guides/hyper_parameters_ner.png b/static/img/guides/hyper_parameters_ner.png
new file mode 100644
index 0000000..655c21f
Binary files /dev/null and b/static/img/guides/hyper_parameters_ner.png differ
diff --git a/static/img/guides/manage_dataset_general.png b/static/img/guides/manage_dataset_general.png
new file mode 100644
index 0000000..678f4fe
Binary files /dev/null and b/static/img/guides/manage_dataset_general.png differ
diff --git a/static/img/guides/mange_project_general.png b/static/img/guides/mange_project_general.png
new file mode 100644
index 0000000..0bc1ada
Binary files /dev/null and b/static/img/guides/mange_project_general.png differ
diff --git a/static/img/guides/mange_training_jobs_general.png b/static/img/guides/mange_training_jobs_general.png
new file mode 100644
index 0000000..fc7747e
Binary files /dev/null and b/static/img/guides/mange_training_jobs_general.png differ
diff --git a/static/img/guides/new_job_clip_classification.png b/static/img/guides/new_job_clip_classification.png
new file mode 100644
index 0000000..ff09d3f
Binary files /dev/null and b/static/img/guides/new_job_clip_classification.png differ
diff --git a/static/img/guides/new_training_job_ner.png b/static/img/guides/new_training_job_ner.png
new file mode 100644
index 0000000..40be814
Binary files /dev/null and b/static/img/guides/new_training_job_ner.png differ
diff --git a/static/img/guides/new_training_jobs_regression.png b/static/img/guides/new_training_jobs_regression.png
new file mode 100644
index 0000000..fe33527
Binary files /dev/null and b/static/img/guides/new_training_jobs_regression.png differ
diff --git a/static/img/guides/new_training_jobs_text_gen.png b/static/img/guides/new_training_jobs_text_gen.png
new file mode 100644
index 0000000..98b1542
Binary files /dev/null and b/static/img/guides/new_training_jobs_text_gen.png differ
diff --git a/static/img/guides/parameters_clip_classification.png b/static/img/guides/parameters_clip_classification.png
new file mode 100644
index 0000000..28b4310
Binary files /dev/null and b/static/img/guides/parameters_clip_classification.png differ
diff --git a/static/img/guides/parameters_regression.png b/static/img/guides/parameters_regression.png
new file mode 100644
index 0000000..981111a
Binary files /dev/null and b/static/img/guides/parameters_regression.png differ
diff --git a/static/img/guides/regression_engine_detail.png b/static/img/guides/regression_engine_detail.png
new file mode 100644
index 0000000..d10a833
Binary files /dev/null and b/static/img/guides/regression_engine_detail.png differ
diff --git a/static/img/guides/reward_functions_text_gen.png b/static/img/guides/reward_functions_text_gen.png
new file mode 100644
index 0000000..54ee981
Binary files /dev/null and b/static/img/guides/reward_functions_text_gen.png differ
diff --git a/static/img/guides/sft_engine.png b/static/img/guides/sft_engine.png
new file mode 100644
index 0000000..a156db7
Binary files /dev/null and b/static/img/guides/sft_engine.png differ
diff --git a/static/img/guides/sft_hyper_params_text_gen.png b/static/img/guides/sft_hyper_params_text_gen.png
new file mode 100644
index 0000000..da1857b
Binary files /dev/null and b/static/img/guides/sft_hyper_params_text_gen.png differ
diff --git a/static/img/guides/test_function_text_gen.png b/static/img/guides/test_function_text_gen.png
new file mode 100644
index 0000000..81fc24d
Binary files /dev/null and b/static/img/guides/test_function_text_gen.png differ
diff --git a/static/img/guides/test_functions_ner.png b/static/img/guides/test_functions_ner.png
new file mode 100644
index 0000000..83f5441
Binary files /dev/null and b/static/img/guides/test_functions_ner.png differ
diff --git a/static/img/guides/test_functions_regression.png b/static/img/guides/test_functions_regression.png
new file mode 100644
index 0000000..bb4cdbf
Binary files /dev/null and b/static/img/guides/test_functions_regression.png differ
diff --git a/static/img/guides/text_gen_engine_details.png b/static/img/guides/text_gen_engine_details.png
new file mode 100644
index 0000000..7c61817
Binary files /dev/null and b/static/img/guides/text_gen_engine_details.png differ
diff --git a/static/img/guides/train_logs_ner.png b/static/img/guides/train_logs_ner.png
new file mode 100644
index 0000000..ef1cbb5
Binary files /dev/null and b/static/img/guides/train_logs_ner.png differ
diff --git a/static/img/guides/train_loss_clip_classification.png b/static/img/guides/train_loss_clip_classification.png
new file mode 100644
index 0000000..ac1b8fc
Binary files /dev/null and b/static/img/guides/train_loss_clip_classification.png differ
diff --git a/static/img/guides/train_loss_ner.png b/static/img/guides/train_loss_ner.png
new file mode 100644
index 0000000..48a764d
Binary files /dev/null and b/static/img/guides/train_loss_ner.png differ
diff --git a/static/img/guides/training_job_clip_classification.png b/static/img/guides/training_job_clip_classification.png
new file mode 100644
index 0000000..c35f87e
Binary files /dev/null and b/static/img/guides/training_job_clip_classification.png differ
diff --git a/static/img/guides/training_job_detail_ner.png b/static/img/guides/training_job_detail_ner.png
new file mode 100644
index 0000000..d7f1296
Binary files /dev/null and b/static/img/guides/training_job_detail_ner.png differ
diff --git a/static/img/guides/training_job_details_clip_class.png b/static/img/guides/training_job_details_clip_class.png
new file mode 100644
index 0000000..7834153
Binary files /dev/null and b/static/img/guides/training_job_details_clip_class.png differ
diff --git a/static/img/guides/training_job_details_regression.png b/static/img/guides/training_job_details_regression.png
new file mode 100644
index 0000000..6f2e56b
Binary files /dev/null and b/static/img/guides/training_job_details_regression.png differ
diff --git a/static/img/guides/training_job_details_text_gen.png b/static/img/guides/training_job_details_text_gen.png
new file mode 100644
index 0000000..f5d00cf
Binary files /dev/null and b/static/img/guides/training_job_details_text_gen.png differ
diff --git a/static/img/guides/training_job_list_text_gen.png b/static/img/guides/training_job_list_text_gen.png
new file mode 100644
index 0000000..eceebc7
Binary files /dev/null and b/static/img/guides/training_job_list_text_gen.png differ
diff --git a/static/img/guides/training_job_ner.png b/static/img/guides/training_job_ner.png
new file mode 100644
index 0000000..b08245a
Binary files /dev/null and b/static/img/guides/training_job_ner.png differ
diff --git a/static/img/guides/training_jobs_regression.png b/static/img/guides/training_jobs_regression.png
new file mode 100644
index 0000000..3a65404
Binary files /dev/null and b/static/img/guides/training_jobs_regression.png differ
diff --git a/static/img/guides/training_logs_clip_classification.png b/static/img/guides/training_logs_clip_classification.png
new file mode 100644
index 0000000..f20b367
Binary files /dev/null and b/static/img/guides/training_logs_clip_classification.png differ
diff --git a/static/img/guides/training_logs_regression.png b/static/img/guides/training_logs_regression.png
new file mode 100644
index 0000000..09b7d00
Binary files /dev/null and b/static/img/guides/training_logs_regression.png differ
diff --git a/static/img/guides/training_logs_text_gen.png b/static/img/guides/training_logs_text_gen.png
new file mode 100644
index 0000000..4ff4ca3
Binary files /dev/null and b/static/img/guides/training_logs_text_gen.png differ
diff --git a/static/img/guides/training_loss_regression.png b/static/img/guides/training_loss_regression.png
new file mode 100644
index 0000000..29a2f99
Binary files /dev/null and b/static/img/guides/training_loss_regression.png differ
diff --git a/static/img/guides/training_loss_text_gen.png b/static/img/guides/training_loss_text_gen.png
new file mode 100644
index 0000000..12726cc
Binary files /dev/null and b/static/img/guides/training_loss_text_gen.png differ
diff --git a/static/img/guides/ui_testing_ner.png b/static/img/guides/ui_testing_ner.png
new file mode 100644
index 0000000..abc7c0d
Binary files /dev/null and b/static/img/guides/ui_testing_ner.png differ
diff --git a/static/img/guides/ui_testing_regression.png b/static/img/guides/ui_testing_regression.png
new file mode 100644
index 0000000..6bf91f1
Binary files /dev/null and b/static/img/guides/ui_testing_regression.png differ
diff --git a/static/img/guides/ui_testing_text_gen.png b/static/img/guides/ui_testing_text_gen.png
new file mode 100644
index 0000000..90a45fa
Binary files /dev/null and b/static/img/guides/ui_testing_text_gen.png differ
diff --git a/static/img/guides/upload_dataset_button_general.png b/static/img/guides/upload_dataset_button_general.png
new file mode 100644
index 0000000..3f82f98
Binary files /dev/null and b/static/img/guides/upload_dataset_button_general.png differ
diff --git a/static/img/guides/upload_dataset_general.png b/static/img/guides/upload_dataset_general.png
new file mode 100644
index 0000000..23e7c22
Binary files /dev/null and b/static/img/guides/upload_dataset_general.png differ