Commit c484011

Initial commit

23 files changed, 810 additions & 0 deletions

.env.example

Lines changed: 24 additions & 0 deletions
```dotenv
# Local directories where inputs and outputs are found
# When running on the refinement service, files will be mounted to the /input and /output directories of the container
INPUT_DIR=input
OUTPUT_DIR=output

# This key is derived from the user file's original encryption key and is injected into the container automatically by the refinement service
# When developing locally, use any value for testing
REFINEMENT_ENCRYPTION_KEY=0x1234

# Schema configuration
SCHEMA_NAME=Google Drive Analytics
SCHEMA_VERSION=0.0.1
SCHEMA_DESCRIPTION=Schema for the Google Drive DLP, representing some basic analytics of the Google user
SCHEMA_DIALECT=sqlite

# IPFS configuration
# Required if using https://pinata.cloud (IPFS pinning service)
PINATA_API_KEY=your_pinata_api_key_here
PINATA_API_SECRET=your_pinata_api_secret_here

# Public IPFS gateway URL for accessing uploaded files
# Recommended to use your own dedicated IPFS gateway to avoid congestion / rate limiting
# Example: "https://ipfs.my-dao.org/ipfs" (Note: won't work for third-party files)
IPFS_GATEWAY_URL=https://gateway.pinata.cloud/ipfs
```
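These variables are consumed at startup by `refiner/config.py`. As a rough stdlib-only illustration of how they map into Python (the template's actual config module may use a settings library; the defaults below simply mirror the file above):

```python
import os

# Illustrative sketch of loading the variables above; the template's
# refiner/config.py may use a settings library instead of raw os.environ.
INPUT_DIR = os.environ.get("INPUT_DIR", "input")
OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "output")

# Injected by the refinement service in production; any value works locally.
REFINEMENT_ENCRYPTION_KEY = os.environ.get("REFINEMENT_ENCRYPTION_KEY", "0x1234")

SCHEMA_NAME = os.environ.get("SCHEMA_NAME", "Google Drive Analytics")
SCHEMA_VERSION = os.environ.get("SCHEMA_VERSION", "0.0.1")
```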
Lines changed: 72 additions & 0 deletions
```yaml
name: Build and Release

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: write

jobs:
  build-and-release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          load: true
          tags: |
            refiner:${{ github.run_number }}
            refiner:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Export image to file
        run: |
          docker save refiner:latest | gzip > refiner-${{ github.run_number }}.tar.gz

      - name: Generate release body
        run: |
          echo "Image SHA256: $(sha256sum refiner-${{ github.run_number }}.tar.gz | cut -d' ' -f1)" >> release_body.txt

      - name: Upload image
        uses: actions/upload-artifact@v4
        with:
          name: refiner-image
          path: refiner-${{ github.run_number }}.tar.gz

      - name: Create Release and Upload Assets
        uses: softprops/action-gh-release@v1
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: v${{ github.run_number }}
          name: Release v${{ github.run_number }}
          body_path: release_body.txt
          draft: false
          prerelease: false
          files: |
            ./refiner-${{ github.run_number }}.tar.gz

      - name: Log build result
        if: always()
        run: |
          if [ "${{ job.status }}" == "success" ]; then
            echo "Build and release completed successfully"
          else
            echo "Build and release failed"
          fi
```
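The release body above publishes the image tarball's SHA256, so a consumer can verify a downloaded asset before loading it into Docker. A sketch using a demo file (for a real release, substitute the downloaded `refiner-<run>.tar.gz` and the digest from the release body, then pipe through `docker load`):

```shell
# Recreate the digest exactly as the workflow computes it, then verify the file
# the way a release consumer would. The file here is a stand-in, not a real
# release asset; with a real one, finish with: gunzip -c <asset> | docker load
printf 'demo contents' > refiner-demo.tar.gz
DIGEST=$(sha256sum refiner-demo.tar.gz | cut -d' ' -f1)
echo "${DIGEST}  refiner-demo.tar.gz" | sha256sum -c -   # prints: refiner-demo.tar.gz: OK
```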

.gitignore

Lines changed: 12 additions & 0 deletions
```
*.pem
*.tar.gz
__pycache__/
*.pyc

/input/user.json
/output/*
!/output/.gitkeep
.env

.idea/
.DS_Store
```

Dockerfile

Lines changed: 10 additions & 0 deletions
```dockerfile
FROM python:3.12-slim

WORKDIR /app

COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

CMD ["python", "-m", "refiner"]
```

LICENSE

Lines changed: 8 additions & 0 deletions
The MIT License (MIT)
Copyright © 2024 Corsali, Inc. dba Vana

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md

Lines changed: 107 additions & 0 deletions
# Vana Data Refinement Template

This repository serves as a template for creating Dockerized *data refinement instructions* that transform raw user data into normalized (and potentially anonymized) SQLite-compatible databases, so that data in Vana can be queried by Vana's Query Engine.

## Overview

Here is an overview of the data refinement process on Vana.

![How Refinements Work](https://files.readme.io/25f8f6a4c8e785a72105d6eb012d09449f63ab5682d1f385120eaf5af871f9a2-image.png "How Refinements Work")

1. DLPs upload user-contributed data through their UI and run proof-of-contribution against it. Afterwards, they call the refinement service to refine this data point.
2. The refinement service downloads the file from the Data Registry and decrypts it.
3. The refinement container, containing the instructions for data refinement (this repo), is executed.
4. The decrypted data is mounted to the container's `/input` directory.
5. The raw data points are transformed against a normalized SQLite database schema (specifically libSQL, a modern fork of SQLite).
6. Optionally, PII (Personally Identifiable Information) is removed or masked.
7. The refined data is symmetrically encrypted with a derivative of the original file encryption key.
8. The encrypted refined data is uploaded and pinned to a DLP-owned IPFS node.
9. The IPFS CID is written to the refinement container's `/output` directory.
10. The CID of the file is added as a refinement under the original file in the Data Registry.
11. Vana's Query Engine indexes that data point, aggregating it with all other data points of a given refiner. This allows SQL queries to run against all data of a particular refiner (schema).
## Project Structure

- `refiner/`: Contains the main refinement logic
  - `refine.py`: Core refinement implementation
  - `config.py`: Environment variables and settings needed to run your refinement
  - `__main__.py`: Entry point for the refinement execution
  - `models/`: Pydantic and SQLAlchemy data models (for both unrefined and refined data)
  - `transformer/`: Data transformation logic
  - `utils/`: Utility functions for encryption, IPFS upload, etc.
- `input/`: Contains raw data files to be refined
- `output/`: Contains refined outputs:
  - `schema.json`: Database schema definition
  - `db.libsql`: SQLite database file
  - `db.libsql.pgp`: Encrypted database file
- `Dockerfile`: Defines the container image for the refinement task
- `requirements.txt`: Python package dependencies
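As a sketch of how `models/` and `transformer/` fit together — a SQLAlchemy model defining one normalized table, and a transform that maps raw records into it. The class, table, and field names here are hypothetical illustrations, not the template's actual code:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class UserFile(Base):
    """Hypothetical normalized table for one raw record type."""
    __tablename__ = "user_files"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    mime_type = Column(String)  # nullable: not every raw record carries one


def transform(raw: dict, db_path: str) -> int:
    """Map raw records into the normalized SQLite table; return the row count."""
    engine = create_engine(f"sqlite:///{db_path}")
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        for f in raw.get("files", []):
            session.add(UserFile(name=f["name"], mime_type=f.get("mimeType")))
        session.commit()
        return session.query(UserFile).count()
```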
## Getting Started

1. Fork this repository
2. Copy `.env.example` to `.env` and modify the values to match your environment
3. Update the schemas in `refiner/models/` to define your raw and normalized data models
4. Modify the refinement logic in `refiner/transformer/` to match your data structure
5. If needed, modify `refiner/refine.py` with your file(s) that need to be refined
6. Build and test your refinement container

### Environment variables

Copy `.env.example` to `.env` and configure the following variables:
```dotenv
# Local directories where inputs and outputs are found
# When running on the refinement service, files will be mounted to the /input and /output directories of the container
INPUT_DIR=input
OUTPUT_DIR=output

# This key is derived from the user file's original encryption key and is injected into the container automatically by the refinement service
# When developing locally, any string can be used here for testing
REFINEMENT_ENCRYPTION_KEY=0x1234

# Schema configuration
SCHEMA_NAME=Google Drive Analytics
SCHEMA_VERSION=0.0.1
SCHEMA_DESCRIPTION=Schema for the Google Drive DLP, representing some basic analytics of the Google user
SCHEMA_DIALECT=sqlite

# IPFS configuration
# Required if using https://pinata.cloud (IPFS pinning service)
PINATA_API_KEY=your_pinata_api_key_here
PINATA_API_SECRET=your_pinata_api_secret_here

# Public IPFS gateway URL for accessing uploaded files
# Recommended to use your own dedicated IPFS gateway to avoid congestion / rate limiting
# Example: "https://ipfs.my-dao.org/ipfs" (Note: won't work for third-party files)
IPFS_GATEWAY_URL=https://gateway.pinata.cloud/ipfs
```
## Local Development

To run the refinement locally for testing:

```bash
# With Python
pip install --no-cache-dir -r requirements.txt
python -m refiner

# Or with Docker
docker build -t refiner .
docker run \
  --rm \
  --volume $(pwd)/input:/input \
  --volume $(pwd)/output:/output \
  --env PINATA_API_KEY=your_key \
  --env PINATA_API_SECRET=your_secret \
  refiner
```
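After a local run succeeds, the refined database can be inspected directly, assuming the `sqlite3` CLI is installed (libSQL files are SQLite-compatible). A sketch using a throwaway database with an example table; with a real run, point `sqlite3` at `output/db.libsql` instead, and table names depend on your models:

```shell
# Build a small stand-in database, then inspect it the way you would inspect
# output/db.libsql after a real refinement run.
sqlite3 demo.libsql "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 demo.libsql "INSERT INTO users(name) VALUES ('alice'),('bob');"
sqlite3 demo.libsql ".tables"
sqlite3 demo.libsql "SELECT COUNT(*) FROM users;"
```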
## Contributing

If you have suggestions for improving this template, please open an issue or submit a pull request.

## License

[MIT License](LICENSE)

input/user.zip

417 Bytes
Binary file not shown.

output/.gitkeep

Whitespace-only changes.

refiner/__init__.py

Whitespace-only changes.

refiner/__main__.py

Lines changed: 50 additions & 0 deletions
```python
import json
import logging
import os
import sys
import traceback
import zipfile

from refiner.refine import Refiner
from refiner.config import settings

logging.basicConfig(level=logging.INFO, format='%(message)s')


def run() -> None:
    """Transform all input files into the database."""
    input_files_exist = os.path.isdir(settings.INPUT_DIR) and bool(os.listdir(settings.INPUT_DIR))

    if not input_files_exist:
        raise FileNotFoundError(f"No input files found in {settings.INPUT_DIR}")
    extract_input()

    refiner = Refiner()
    output = refiner.transform()

    output_path = os.path.join(settings.OUTPUT_DIR, "output.json")
    with open(output_path, 'w') as f:
        json.dump(output.model_dump(), f, indent=2)
    logging.info(f"Data transformation complete: {output}")


def extract_input() -> None:
    """If the input directory contains any zip files, extract them in place."""
    for input_filename in os.listdir(settings.INPUT_DIR):
        input_file = os.path.join(settings.INPUT_DIR, input_filename)

        if zipfile.is_zipfile(input_file):
            with zipfile.ZipFile(input_file, 'r') as zip_ref:
                zip_ref.extractall(settings.INPUT_DIR)


if __name__ == "__main__":
    try:
        run()
    except Exception as e:
        logging.error(f"Error during data transformation: {e}")
        traceback.print_exc()
        sys.exit(1)
```
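Note that `extract_input()` calls `extractall()` directly, which trusts the member paths inside the archive. That is fine for the service's own decrypted inputs, but if inputs are ever less trusted, a zip-slip-safe variant (a sketch, not part of the template) would reject entries that escape the destination directory:

```python
import os
import zipfile


def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zip, refusing members whose paths escape dest_dir (zip-slip)."""
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest, member))
            if not target.startswith(dest + os.sep):
                raise ValueError(f"Unsafe path in archive: {member}")
        zf.extractall(dest)
```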
