diff --git a/.gitignore b/.gitignore
index e5033edc..18444866 100644
--- a/.gitignore
+++ b/.gitignore
@@ -24,4 +24,30 @@ perf
 # reference ipv4/6-graph files
 ./ipv4-graph
-./ipv6-graph
\ No newline at end of file
+./ipv6-graph
+
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+venv/
+env/
+ENV/
+.venv
+pip-log.txt
+pip-delete-this-directory.txt
+.pytest_cache/
+*.egg-info/
+dist/
+*.egg
+.eggs/
+.tox/
+.coverage
+.coverage.*
+htmlcov/
+.mypy_cache/
+.dmypy.json
+dmypy.json
+.pyre/
\ No newline at end of file
diff --git a/api/v1/README.md b/api/v1/README.md
new file mode 100644
index 00000000..19ba17c8
--- /dev/null
+++ b/api/v1/README.md
@@ -0,0 +1,226 @@
+# Jalapeno API
+
+A FastAPI-based REST API for querying and analyzing network topology data from Jalapeno's ArangoDB graph database.
+
+## Features
+
+- **Graph Operations**: Shortest path, K-shortest paths, graph traversal, neighbors
+- **Path Optimization**: Latency, utilization, load balancing, sovereignty constraints
+- **Flex-Algo Support**: Multi-topology routing with algorithm-aware path computation
+- **Resource Path Optimization (RPO)**: Intelligent destination selection based on metrics
+- **SRv6 Integration**: Automatic SRv6 USID generation for computed paths
+- **Collection Management**: Query and search across all ArangoDB collections
+
+## Prerequisites
+
+- Python 3.9+
+- Access to a Jalapeno ArangoDB instance
+- Kubernetes cluster (for production deployment)
+
+## Quick Start
+
+### Local Development
+
+1. **Create a virtual environment**
+
+```bash
+cd api/v1
+python3 -m venv venv
+source venv/bin/activate  # On Windows: venv\Scripts\activate
+```
+
+2. **Install dependencies**
+
+```bash
+pip install -r requirements.txt
+```
+
+3. **Set environment variables**
+
+```bash
+export LOCAL_DEV=1
+# Optional: override the ArangoDB connection (settings use the JALAPENO_ env prefix)
+export JALAPENO_DATABASE_SERVER=http://localhost:8529
+export JALAPENO_DATABASE_NAME=jalapeno
+export JALAPENO_USERNAME=root
+export JALAPENO_PASSWORD=jalapeno
+```
+
+4. **Run the API**
+
+```bash
+uvicorn app.main:app --reload
+```
+
+5. **Access the API**
+
+- API Documentation: http://localhost:8000/docs
+- Alternative docs: http://localhost:8000/redoc
+- API Root: http://localhost:8000/api/v1
+
+## Project Structure
+
+```
+api/v1/
+├── app/
+│   ├── config/              # Configuration and settings
+│   ├── routes/              # API endpoint definitions
+│   │   ├── collections.py
+│   │   ├── graphs.py
+│   │   ├── instances.py
+│   │   ├── rpo.py
+│   │   └── vpns.py
+│   ├── utils/               # Helper functions
+│   │   ├── load_processor.py
+│   │   └── path_processor.py
+│   └── main.py              # FastAPI application entry point
+├── requirements.txt         # Python dependencies
+└── README.md                # This file
+```
+
+## Example Usage
+
+### Get all collections
+
+```bash
+curl http://localhost:8000/api/v1/collections
+```
+
+### Find shortest path
+
+```bash
+curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound"
+```
+
+### Find shortest path with Flex-Algo 128
+
+```bash
+curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound&algo=128"
+```
+
+### Resource Path Optimization
+
+```bash
+curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph"
+```
+
+### Get topology summary
+
+```bash
+curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/summary
+```
+
+## Production Deployment
+
+### Build Docker Image
+
+```bash
+docker build -t iejalapeno/jalapeno-api:latest -f ../../build/Dockerfile.api .
+```
+
+### Deploy to Kubernetes
+
+```bash
+kubectl apply -f ../../deployment/api-deployment.yaml
+```
+
+## Configuration
+
+The API is configured through environment variables; the `JALAPENO_` prefix comes from the pydantic settings class in `app/config/settings.py`:
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `LOCAL_DEV` | Enable local development mode | unset |
+| `JALAPENO_DATABASE_SERVER` | ArangoDB endpoint URL | `http://arangodb:8529` |
+| `JALAPENO_DATABASE_NAME` | ArangoDB database name | `jalapeno` |
+| `JALAPENO_USERNAME` | ArangoDB username | `root` |
+| `JALAPENO_PASSWORD` | ArangoDB password | `jalapeno` |
+| `JALAPENO_CREDENTIALS_PATH` | Path to the mounted credentials file | `/credentials/auth` |
+
+## API Documentation
+
+For detailed API documentation, see:
+
+- **[API Reference](../../docs/api/reference.md)** - Complete endpoint reference
+- **[Flex-Algo Guide](../../docs/api/flex-algo.md)** - Flex-Algorithm implementation details
+- **[RPO Examples](../../docs/api/rpo.md)** - Resource Path Optimization examples
+- **Interactive Docs** - http://localhost:8000/docs (when running)
+
+## Key Features
+
+### Flex-Algo Support
+
+The API supports Flexible Algorithm (Flex-Algo) for multi-topology routing:
+
+- Query vertices by algorithm participation
+- Compute paths constrained to specific algorithms
+- Automatic SRv6 SID selection based on algorithm
+- Support for algorithm IDs 0-255 (Flex-Algo IDs are typically 128-255)
+
+### Resource Path Optimization (RPO)
+
+Intelligent destination selection combining metrics with path computation:
+
+- Minimize/maximize numeric metrics (CPU, GPU, latency, cost)
+- Exact match for categorical requirements (GPU model, language model)
+- Multi-graph support
+- Flex-Algo integration
+
+### SRv6 USID Generation
+
+Automatic generation of SRv6 micro-SID lists:
+
+- Auto-detects USID block from topology
+- Algo-aware SID selection
+- Compressed USID format output
+- Full SID list for validation
+
+## Development
+
+### Running Tests
+
+```bash
+# Install test dependencies
+pip install pytest pytest-asyncio
+
+# Run tests
+pytest
+```
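+
+No test suite ships with this PR yet. As a starting point, a minimal smoke test against the `/health` endpoint might look like the sketch below (the file name `test_health.py` is illustrative; FastAPI's `TestClient` additionally requires the `httpx` package):
+
+```python
+# test_health.py -- illustrative smoke test; no database connection is needed
+from fastapi.testclient import TestClient
+
+from app.main import app
+
+client = TestClient(app)
+
+
+def test_health_check():
+    response = client.get("/health")
+    assert response.status_code == 200
+    assert response.json()["status"] == "healthy"
+```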
+
+### Code Style
+
+The project follows PEP 8 style guidelines. Format code with:
+
+```bash
+pip install black
+black app/
+```
+
+## Troubleshooting
+
+### Cannot connect to ArangoDB
+
+- Verify ArangoDB is running and accessible
+- Check environment variables for correct connection details
+- In Kubernetes, ensure service DNS resolution is working
+
+### API returns empty results
+
+- Verify ArangoDB contains data
+- Check collection names match your topology
+- Ensure graph collections are properly configured
+
+### SRv6 USID generation fails
+
+- Verify nodes have SRv6 SIDs configured in the `sids` array
+- Check that SIDs include the `algo` field
+- Ensure USID block format matches your topology (e.g., `fc00:0:`)
+
+## Contributing
+
+See the main [Jalapeno contributing guide](../../docs/development/contributing.md) for details.
+
+## License
+
+See [LICENSE](../../LICENSE) for details.
diff --git a/api/v1/app/config/settings.py b/api/v1/app/config/settings.py
new file mode 100644
index 00000000..0ee19ed8
--- /dev/null
+++ b/api/v1/app/config/settings.py
@@ -0,0 +1,22 @@
+from pydantic_settings import BaseSettings
+from typing import Optional
+import os
+
+class Settings(BaseSettings):
+    # Default values for k8s deployment
+    database_server: str = "http://arangodb:8529"
+    database_name: str = "jalapeno"
+    credentials_path: Optional[str] = "/credentials/auth"
+    username: str = "root"
+    password: str = "jalapeno"
+
+    # Local development: export LOCAL_DEV=1
+    # An explicit JALAPENO_DATABASE_SERVER still takes precedence over the dev default
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        if os.getenv("LOCAL_DEV"):
+            self.database_server = os.getenv("JALAPENO_DATABASE_SERVER", "http://198.18.133.112:30852")
+            self.credentials_path = None
+
+    class Config:
+        env_prefix = "JALAPENO_"
diff --git a/api/v1/app/main.py b/api/v1/app/main.py
new file mode 100644
index 00000000..245b88f5
--- /dev/null
+++ b/api/v1/app/main.py
@@ -0,0 +1,38 @@
+from fastapi import FastAPI
+from fastapi.middleware.cors import CORSMiddleware
+from .config.settings import Settings
+from .routes import graphs, instances, collections, vpns, rpo
+
+app = FastAPI(
+    title="Jalapeno API",
+    description="REST API for querying and analyzing network topology data from Jalapeno's ArangoDB graph database",
+    version="1.0.0"
+)
+
+# CORS configuration
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=["*"],  # Configure this appropriately for production
+    allow_credentials=True,
+    allow_methods=["*"],
+    allow_headers=["*"],
+)
+
+# Load settings
+settings = Settings()
+
+# Include routers
+app.include_router(instances.router, prefix="/api/v1", tags=["instances"])
+app.include_router(graphs.router, prefix="/api/v1", tags=["graphs"])
+app.include_router(collections.router, prefix="/api/v1", tags=["collections"])
+app.include_router(vpns.router, prefix="/api/v1", tags=["vpns"])
+app.include_router(rpo.router, prefix="/api/v1", tags=["rpo"])
+
+@app.get("/health")
+async def health_check():
+    return {
+        "status": "healthy",
+        "database_server": settings.database_server,
+        "database_name": settings.database_name
+    }
+
diff --git a/api/v1/app/routes/__init__.py b/api/v1/app/routes/__init__.py
new file mode 100644
index 00000000..46daa07a
--- /dev/null
+++ b/api/v1/app/routes/__init__.py
@@ -0,0 +1,7 @@
+from . import graphs
+from . import instances
+from . import collections
+from . import vpns
+from . import rpo
+
+__all__ = ['graphs', 'instances', 'collections', 'vpns', 'rpo']
\ No newline at end of file
diff --git a/api/v1/app/routes/collections.py b/api/v1/app/routes/collections.py
new file mode 100644
index 00000000..9e56a178
--- /dev/null
+++ b/api/v1/app/routes/collections.py
@@ -0,0 +1,224 @@
+from fastapi import APIRouter, HTTPException
+from arango import ArangoClient
+from ..config.settings import Settings
+from typing import Optional, List
+
+router = APIRouter()
+settings = Settings()
+
+KNOWN_COLLECTIONS = {
+    'graphs': [
+        'ipv4_graph',
+        'ipv6_graph',
+        'igpv4_graph',
+        'igpv6_graph'
+    ],
+    'prefixes': [
+        'ebgp_prefix_v4',
+        'ebgp_prefix_v6'
+    ],
+    'peers': [
+        'bgp_node',
+        'igp_node'
+    ]
+}
+
+def get_db():
+    client = ArangoClient(hosts=settings.database_server)
+    try:
+        db = client.db(
+            settings.database_name,
+            username=settings.username,
+            password=settings.password
+        )
+        return db
+    except Exception as e:
+        raise HTTPException(
+            status_code=500,
+            detail=f"Could not connect to database: {str(e)}"
+        )
+
+@router.get("/collections")
+async def get_collections(filter_graphs: Optional[bool] = None):
+    """
+    Get a list of collections in the database
+    Optional: filter_graphs parameter:
+    - None (default): show all collections
+    - True: show only graph collections
+    - False: show only non-graph collections
+    """
+    try:
+        db = get_db()
+        # Get all collections
+        collections = db.collections()
+
+        # Filter out system collections (those starting with '_')
+        # Then apply graph filter if specified
+        user_collections = [
+            {
+                'name': c['name'],
+                'type': c['type'],
+                'status': c['status'],
+                'count': db.collection(c['name']).count()
+            }
+            for c in collections
+            if not c['name'].startswith('_') and
+               (filter_graphs is None or  # Show all if no filter
+                (filter_graphs and c['name'].endswith('_graph')) or  # Only graphs
+                (not filter_graphs and not c['name'].endswith('_graph')))  # Only non-graphs
+        ]
+
+        # Sort by name
+        user_collections.sort(key=lambda x: x['name'])
+
+        return {
+            'collections': user_collections,
+            'total_count': len(user_collections),
+            'filter_applied': 'all' if filter_graphs is None else ('graphs' if filter_graphs else 'non_graphs')
+        }
+    except Exception as e:
+        raise HTTPException(
+            status_code=500,
+            detail=str(e)
+        )
+
+@router.get("/collections/{collection_name}")
+async def get_collection_data(
+    collection_name: str,
+    limit: Optional[int] = None,
+    skip: Optional[int] = None,
+    filter_key: Optional[str] = None
+):
+    """
+    Query any collection in the database with optional filtering and special handling for graphs
+    """
+    try:
+        db = get_db()
+        if not db.has_collection(collection_name):
+            raise HTTPException(
+                status_code=404,
+                detail=f"Collection {collection_name} not found"
+            )
+
+        # Build AQL query based on parameters
+        aql = f"FOR doc IN {collection_name}"
+
+        # Add filter if specified
+        if filter_key:
+            aql += " FILTER doc._key == @key"
+
+        # Pagination: AQL has no standalone SKIP keyword; offsets use LIMIT offset, count
+        if skip and limit:
+            aql += f" LIMIT {skip}, {limit}"
+        elif limit:
+            aql += f" LIMIT {limit}"
+        elif skip:
+            # Offset without an explicit limit: pair it with a large count
+            aql += f" LIMIT {skip}, 1000000000"
+
+        aql += " RETURN doc"
+
+        # Execute query
+        cursor = db.aql.execute(
+            aql,
+            bind_vars={'key': filter_key} if filter_key else None
+        )
+
+        results = [doc for doc in cursor]
+
+        # If it's a graph collection, also get vertices
+        if collection_name in KNOWN_COLLECTIONS['graphs']:
+            vertex_collections = set()
+            for edge in results:
+                vertex_collections.add(edge['_from'].split('/')[0])
+                vertex_collections.add(edge['_to'].split('/')[0])
+
+            vertices = []
+            for vertex_col in
vertex_collections:
+                try:
+                    if db.has_collection(vertex_col):
+                        vertices.extend([v for v in db.collection(vertex_col).all()])
+                except Exception as e:
+                    print(f"Warning: Could not fetch vertices from {vertex_col}: {e}")
+
+            return {
+                'collection': collection_name,
+                'type': 'graph',
+                'edge_count': len(results),
+                'vertex_count': len(vertices),
+                'edges': results,
+                'vertices': vertices
+            }
+        else:
+            return {
+                'collection': collection_name,
+                'type': 'collection',
+                'count': len(results),
+                'data': results
+            }
+
+    except Exception as e:
+        print(f"Error querying collection: {str(e)}")
+        raise HTTPException(
+            status_code=500,
+            detail=str(e)
+        )
+
+@router.get("/collections/{collection_name}/keys")
+async def get_collection_keys(collection_name: str):
+    """
+    Get just the _key values from a collection
+    """
+    try:
+        db = get_db()
+        if not db.has_collection(collection_name):
+            raise HTTPException(
+                status_code=404,
+                detail=f"Collection {collection_name} not found"
+            )
+
+        aql = f"""
+        FOR doc IN {collection_name}
+            RETURN doc._key
+        """
+
+        cursor = db.aql.execute(aql)
+        keys = [key for key in cursor]
+
+        return {
+            'collection': collection_name,
+            'key_count': len(keys),
+            'keys': keys
+        }
+
+    except Exception as e:
+        print(f"Error getting collection keys: {str(e)}")
+        raise HTTPException(
+            status_code=500,
+            detail=str(e)
+        )
+
+@router.get("/collections/{collection_name}/info")
+async def get_collection_info(collection_name: str):
+    """
+    Get metadata about any collection
+    """
+    try:
+        db = get_db()
+        if not db.has_collection(collection_name):
+            raise HTTPException(
+                status_code=404,
+                detail=f"Collection {collection_name} not found"
+            )
+
+        collection = db.collection(collection_name)
+
+        return {
+            "name": collection_name,
+            "count": collection.count(),
+            "properties": collection.properties()
+        }
+    except Exception as e:
+        raise HTTPException(
+            status_code=500,
+            detail=str(e)
+        )
\ No newline at end of file
diff --git a/api/v1/app/routes/graphs.py b/api/v1/app/routes/graphs.py
new file mode 100644
index 00000000..c4ae4a05
--- /dev/null
+++ b/api/v1/app/routes/graphs.py
@@ -0,0 +1,2543 @@
+from fastapi import APIRouter, HTTPException
+from typing import List, Optional, Dict, Any
+from arango import ArangoClient
+from ..config.settings import Settings
+import logging
+from ..utils.path_processor import process_path_data
+from ..utils.load_processor import process_load_data
+
+# Configure logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+router = APIRouter()
+settings = Settings()
+
+KNOWN_COLLECTIONS = {
+    'graphs': [
+        'ipv4_graph',
+        'ipv6_graph',
+        'igpv4_graph',
+        'igpv6_graph'
+    ],
+    'prefixes': [
+        'ebgp_prefix_v4',
+        'ebgp_prefix_v6'
+    ],
+    'peers': [
+        'bgp_node',
+        'igp_node'
+    ]
+}
+
+def get_db():
+    client = ArangoClient(hosts=settings.database_server)
+    try:
+        db = client.db(
+            settings.database_name,
+            username=settings.username,
+            password=settings.password
+        )
+        return db
+    except Exception as e:
+        raise HTTPException(
+            status_code=500,
+            detail=f"Could not connect to database: {str(e)}"
+        )
+
+
+###################
+# Collection Routes
+###################
+
+@router.get("/graphs")
+async def get_graphs():
+    """
+    Get a list of graph collections in the database
+    """
+    try:
+        db = get_db()
+        # Get all collections
+        collections =
db.collections() + + # Filter out system collections (those starting with '_') + # Then apply graph filter if specified + graph_collections = [ + { + 'name': c['name'], + 'type': c['type'], + 'status': c['status'], + 'count': db.collection(c['name']).count() + } + for c in collections + if not c['name'].startswith('_') and c['name'].endswith('_graph') + ] + + # Sort by name + graph_collections.sort(key=lambda x: x['name']) + + return { + 'collections': graph_collections, + 'total_count': len(graph_collections) + } + except Exception as e: + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}") +async def get_graph(collection_name: str): + """ + Get information about a specific graph collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Graph collection {collection_name} not found" + ) + + if not collection_name.endswith('_graph'): + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a graph collection" + ) + + collection = db.collection(collection_name) + properties = collection.properties() + + return { + 'name': collection_name, + 'type': properties['type'], + 'status': properties['status'], + 'count': collection.count() + } + except HTTPException: + raise + except Exception as e: + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/info") +async def get_graph_info(collection_name: str): + """ + Get detailed information about a graph collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Graph collection {collection_name} not found" + ) + + if not collection_name.endswith('_graph'): + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a graph collection" + ) + + collection = db.collection(collection_name) + properties = collection.properties() + statistics = collection.statistics() + + # Get vertex collections connected to this graph + vertex_collections = set() + for edge in collection: + vertex_collections.add(edge['_from'].split('/')[0]) + vertex_collections.add(edge['_to'].split('/')[0]) + + return { + 'name': collection_name, + 'properties': properties, + 'statistics': statistics, + 'vertex_collections': list(vertex_collections) + } + except HTTPException: + raise + except Exception as e: + raise HTTPException( + status_code=500, + detail=str(e) + ) + +################### +# Collection Routes +################### + +@router.get("/graphs/{collection_name}/vertices") +async def get_vertex_info(collection_name: str): + """ + Get vertex information from a graph collection's edges + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + collection = db.collection(collection_name) + + # Debug print + print(f"Processing vertices for collection: {collection_name}") + + try: + # Get all edges to find vertex collections + vertex_collections = set() + vertex_info = {} + + # First pass: collect all vertex collections + for edge in collection.all(): + if '_from' in edge and '_to' in edge: + from_collection = edge['_from'].split('/')[0] + to_collection = edge['_to'].split('/')[0] + vertex_collections.add(from_collection) + vertex_collections.add(to_collection) + + print(f"Found vertex collections: {vertex_collections}") + + # Second pass: get 
vertices from each collection + for vertex_col in vertex_collections: + try: + if db.has_collection(vertex_col): + vertices = [] + for vertex in db.collection(vertex_col).all(): + vertices.append({ + '_id': vertex['_id'], + '_key': vertex['_key'], + 'collection': vertex_col + }) + vertex_info[vertex_col] = vertices + print(f"Processed {len(vertices)} vertices from {vertex_col}") + except Exception as e: + print(f"Error processing collection {vertex_col}: {str(e)}") + vertex_info[vertex_col] = {"error": str(e)} + + return { + 'collection': collection_name, + 'vertex_collections': list(vertex_collections), + 'total_vertices': sum(len(vertices) for vertices in vertex_info.values() + if isinstance(vertices, list)), + 'vertices_by_collection': vertex_info + } + + except Exception as e: + print(f"Error processing vertices: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing vertices: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_vertex_info: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/vertices/keys") +async def get_vertex_keys(collection_name: str): + """ + Get just the keys of vertices referenced in a graph collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + collection = db.collection(collection_name) + + # Debug print + print(f"Getting vertex keys for collection: {collection_name}") + + try: + # Get all edges to find vertex collections + vertex_keys = set() + + # First pass: collect all unique vertex keys from edges + aql = f""" + FOR edge IN {collection_name} + COLLECT AGGREGATE + from_keys = UNIQUE(PARSE_IDENTIFIER(edge._from).key), + to_keys = UNIQUE(PARSE_IDENTIFIER(edge._to).key) + RETURN {{ + keys: UNION_DISTINCT(from_keys, to_keys) + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + if results and results[0]['keys']: + return { + 'collection': collection_name, + 'vertex_count': len(results[0]['keys']), + 'vertex_keys': sorted(results[0]['keys']) + } + else: + return { + 'collection': collection_name, + 'vertex_count': 0, + 'vertex_keys': [] + } + + except Exception as e: + print(f"Error processing vertex keys: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing vertex keys: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_vertex_keys: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/vertices/ids") +async def get_vertex_ids(collection_name: str): + """ + Get both _key and _id for vertices referenced in a graph collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Debug print + print(f"Getting vertex IDs for collection: {collection_name}") + + try: + aql = f""" + FOR edge IN {collection_name} + COLLECT AGGREGATE + from_vertices = UNIQUE({{_id: edge._from, _key: PARSE_IDENTIFIER(edge._from).key}}), + to_vertices = UNIQUE({{_id: edge._to, _key: PARSE_IDENTIFIER(edge._to).key}}) + RETURN {{ + vertices: UNION_DISTINCT(from_vertices, to_vertices) + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + if results and results[0]['vertices']: + # Sort by _key for consistency + sorted_vertices = sorted(results[0]['vertices'], key=lambda x: x['_key']) + 
return { + 'collection': collection_name, + 'vertex_count': len(sorted_vertices), + 'vertices': sorted_vertices + } + else: + return { + 'collection': collection_name, + 'vertex_count': 0, + 'vertices': [] + } + + except Exception as e: + print(f"Error processing vertex IDs: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing vertex IDs: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_vertex_ids: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/vertices/algo") +async def get_vertices_by_algo(collection_name: str, algo: int = 0): + """ + Get vertices that participate in a specific Flex-Algo. + Filters vertices based on the 'algo' field in their SRv6 endpoint behavior. + + Args: + collection_name: The graph collection name + algo: The algorithm ID to filter by (default: 0) + + Example: + GET /graphs/ipv6_graph/vertices/algo?algo=129 + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + print(f"Getting vertices for collection: {collection_name} with algo: {algo}") + + try: + # Query to find all vertices that have the specified algo in their sids array + aql = f""" + FOR edge IN {collection_name} + // Get unique vertex IDs from both ends of edges + FOR vertex_id IN UNION_DISTINCT([edge._from], [edge._to]) + // Parse the collection and key from the vertex ID + LET vertex_collection = PARSE_IDENTIFIER(vertex_id).collection + LET vertex_key = PARSE_IDENTIFIER(vertex_id).key + + // Fetch the actual vertex document + LET vertex = DOCUMENT(vertex_id) + + // Filter vertices that have sids with matching algo + FILTER vertex != null + FILTER HAS(vertex, 'sids') AND vertex.sids != null + FILTER LENGTH( + FOR sid IN vertex.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + + // Return vertex information with SID details + RETURN DISTINCT {{ + _id: vertex._id, + _key: vertex._key, + collection: vertex_collection, + name: HAS(vertex, 'name') ? vertex.name : null, + router_id: HAS(vertex, 'router_id') ? 
vertex.router_id : null, + sids: ( + FOR sid IN vertex.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN {{ + srv6_sid: sid.srv6_sid, + algo: sid.srv6_endpoint_behavior.algo, + endpoint_behavior: sid.srv6_endpoint_behavior.endpoint_behavior, + flag: sid.srv6_endpoint_behavior.flag + }} + ) + }} + """ + + cursor = db.aql.execute(aql, bind_vars={'algo': algo}) + results = [doc for doc in cursor] + + # Group results by collection for better organization + vertices_by_collection = {} + for vertex in results: + coll = vertex['collection'] + if coll not in vertices_by_collection: + vertices_by_collection[coll] = [] + vertices_by_collection[coll].append(vertex) + + return { + 'graph_collection': collection_name, + 'algo': algo, + 'total_vertices': len(results), + 'vertex_collections': list(vertices_by_collection.keys()), + 'vertices_by_collection': vertices_by_collection + } + + except Exception as e: + print(f"Error processing vertices by algo: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing vertices by algo: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_vertices_by_algo: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/vertices/summary") +async def get_vertex_summary( + collection_name: str, + limit: int = 100, + vertex_collection: str = None # New optional query parameter +): + """ + Get summarized vertex data from any graph in the database. + Returns only key fields that have data. + Optionally filter by specific vertex collection. + """ + try: + db = get_db() + + # First, get the vertex collections for this graph + collections_query = """ + FOR e IN @@graph + COLLECT AGGREGATE + from_cols = UNIQUE(PARSE_IDENTIFIER(e._from).collection), + to_cols = UNIQUE(PARSE_IDENTIFIER(e._to).collection) + RETURN { + vertex_collections: UNION_DISTINCT(from_cols, to_cols) + } + """ + + collections_cursor = db.aql.execute( + collections_query, + bind_vars={ + '@graph': collection_name + } + ) + + collections_result = [doc for doc in collections_cursor] + if not collections_result: + raise HTTPException( + status_code=404, + detail=f"No vertex collections found for graph {collection_name}" + ) + + vertex_collections = collections_result[0]['vertex_collections'] + + # If vertex_collection is specified, validate it exists in the graph + if vertex_collection and vertex_collection not in vertex_collections: + raise HTTPException( + status_code=400, + detail=f"Vertex collection '{vertex_collection}' not found in graph. Available collections: {vertex_collections}" + ) + + # Filter collections if vertex_collection is specified + collections_to_query = [vertex_collection] if vertex_collection else vertex_collections + + # Now query each vertex collection + all_vertices = [] + for vcoll in collections_to_query: + vertex_query = """ + FOR v IN @@collection + LIMIT @limit + RETURN { + collection: @collection_name, + _key: v._key, + _id: v._id, + name: HAS(v, 'name') ? v.name : null, + prefix: HAS(v, 'prefix') ? v.prefix : null, + sids: HAS(v, 'sids') ? v.sids[*].srv6_sid : null, + protocol: HAS(v, 'protocol') ? v.protocol : null, + asn: HAS(v, 'asn') ? 
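/* absent attributes become null; null fields are stripped from the response below */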
v.asn : null + } + """ + + vertex_cursor = db.aql.execute( + vertex_query, + bind_vars={ + '@collection': vcoll, + 'collection_name': vcoll, + 'limit': limit + } + ) + + vertices = [doc for doc in vertex_cursor] + all_vertices.extend(vertices) + + # Remove null fields from the response + cleaned_vertices = [] + for vertex in all_vertices: + cleaned_vertex = {k: v for k, v in vertex.items() if v is not None} + cleaned_vertices.append(cleaned_vertex) + + return { + 'graph': collection_name, + 'vertex_collections': vertex_collections, + 'filtered_collection': vertex_collection, + 'total_vertices': len(cleaned_vertices), + 'vertices': cleaned_vertices + } + + except Exception as e: + print(f"Error getting vertex summary: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +################ +# Edge Routes +################ + +@router.get("/graphs/{collection_name}/edges") +async def get_edge_connections(collection_name: str): + """ + Get only the _from and _to fields from a graph collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + collection = db.collection(collection_name) + + # Debug print + print(f"Collection properties: {collection.properties()}") + + # Get all edges with error handling + try: + edges = [] + cursor = collection.all() + for edge in cursor: + if '_from' in edge and '_to' in edge: + edges.append({ + '_from': edge['_from'], + '_to': edge['_to'] + }) + else: + print(f"Warning: Edge missing _from or _to: {edge}") + + print(f"Found {len(edges)} edges") + + return { + 'collection': collection_name, + 'edge_count': len(edges), + 'edges': edges + } + + except Exception as e: + print(f"Error processing edges: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing edges: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_edge_connections: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/edges/detail") +async def get_detailed_edge_connections(collection_name: str, limit: Optional[int] = None): + """ + Get detailed edge information from a graph collection including metrics and properties + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + collection = db.collection(collection_name) + + # Get edges with additional fields + try: + edges = [] + cursor = collection.all() + for edge in cursor: + if '_from' in edge and '_to' in edge: + edge_detail = { + '_key': edge.get('_key'), + '_from': edge['_from'], + '_to': edge['_to'], + 'name': edge.get('name'), + 'prefix': edge.get('prefix'), + 'protocol': edge.get('protocol'), + 'sids': edge.get('sids', []), + 'country_codes': edge.get('country_codes'), + 'metrics': { + 'unidir_delay': edge.get('unidir_link_delay'), + 'percent_util_out': edge.get('percent_util_out'), + 'percent_util_in': edge.get('percent_util_in'), + 'bandwidth': edge.get('max_link_bandwidth'), + 'reservable_bandwidth': edge.get('max_reservable_link_bandwidth'), + 'load': edge.get('load') + }, + 'timestamps': { + 'first_seen': edge.get('first_seen_at'), + 'last_seen': edge.get('last_seen_at'), + 'updated': edge.get('updated_at') + } + } + + # Remove any metrics that are None + edge_detail['metrics'] = {k: v for k, v in edge_detail['metrics'].items() if v is not None} + edge_detail['timestamps'] = {k: v 
for k, v in edge_detail['timestamps'].items() if v is not None} + + # Only include non-None fields + edges.append({k: v for k, v in edge_detail.items() if v is not None}) + else: + print(f"Warning: Edge missing _from or _to: {edge}") + + # Apply limit if specified + result_edges = edges[:limit] if limit else edges + + return { + 'collection': collection_name, + 'edge_count': len(edges), + 'returned_edges': len(result_edges), + 'edges': result_edges + } + + except Exception as e: + print(f"Error processing edges: {str(e)}") + raise HTTPException( + status_code=500, + detail=f"Error processing edges: {str(e)}" + ) + + except Exception as e: + print(f"Error in get_detailed_edge_connections: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +################ +# Topology Route +################ + +@router.get("/graphs/{collection_name}/topology") +async def get_topology( + collection_name: str, + include_all_fields: bool = True # New optional parameter +): + """ + Get complete topology information with optional field filtering + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Get all edges + collection = db.collection(collection_name) + edges = [] + vertex_ids = set() + + # Get all edges with full data + cursor = collection.all() + for edge in cursor: + if '_from' in edge and '_to' in edge: + # Include all fields if requested + if include_all_fields: + edges.append(edge) + else: + # Include only basic fields + edges.append({ + '_from': edge['_from'], + '_to': edge['_to'] + }) + vertex_ids.add(edge['_from']) + vertex_ids.add(edge['_to']) + + # Get vertex details + vertices = {} + for vertex_id in vertex_ids: + collection_name, key = vertex_id.split('/') + + try: + vertex = db.collection(collection_name).get(key) + if vertex: + if include_all_fields: + # Include all vertex fields + vertices[vertex_id] = vertex + else: + # Include only commonly used fields + vertex_detail = { + 'collection': collection_name, + 'name': vertex.get('name'), + 'prefix': vertex.get('prefix'), + 'protocol': vertex.get('protocol'), + 'sids': [sid.get('srv6_sid') for sid in vertex.get('sids', []) if 'srv6_sid' in sid], + 'asn': vertex.get('asn') + } + # Remove None values + vertices[vertex_id] = {k: v for k, v in vertex_detail.items() if v is not None} + except Exception as vertex_error: + print(f"Error getting vertex {vertex_id}: {str(vertex_error)}") + continue + + return { + 'edges': edges, + 'vertices': vertices, + 'total_edges': len(edges), + 'total_vertices': len(vertices), + 'include_all_fields': include_all_fields + } + + except Exception as e: + print(f"Error in get_topology: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/topology/nodes") +async def get_node_topology( + collection_name: str, + include_all_fields: bool = True # Default to returning all fields +): + """ + Get topology information filtered to only node-to-node connections + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Get edges filtered for node connections + edge_query = """ + FOR edge IN @@collection + FILTER CONTAINS(edge._from, 'node') AND CONTAINS(edge._to, 'node') + RETURN edge + """ + + edge_cursor = db.aql.execute( + edge_query, + bind_vars={ + '@collection': collection_name + } + ) + + # Process 
edges based on include_all_fields + edges = [] + vertex_ids = set() + + for edge in edge_cursor: + if '_from' in edge and '_to' in edge: + if include_all_fields: + edges.append(edge) + else: + edges.append({ + '_from': edge['_from'], + '_to': edge['_to'] + }) + vertex_ids.add(edge['_from']) + vertex_ids.add(edge['_to']) + + # Get vertex details + vertices = {} + for vertex_id in vertex_ids: + collection_name, key = vertex_id.split('/') + + try: + vertex = db.collection(collection_name).get(key) + if vertex: + if include_all_fields: + # Include all vertex fields + vertices[vertex_id] = vertex + else: + # Include only commonly used fields + vertex_detail = { + 'collection': collection_name, + 'name': vertex.get('name'), + 'prefix': vertex.get('prefix'), + 'protocol': vertex.get('protocol'), + 'sids': [sid.get('srv6_sid') for sid in vertex.get('sids', []) if 'srv6_sid' in sid], + 'asn': vertex.get('asn') + } + # Remove None values + vertices[vertex_id] = {k: v for k, v in vertex_detail.items() if v is not None} + except Exception as vertex_error: + print(f"Error getting vertex {vertex_id}: {str(vertex_error)}") + continue + + return { + 'edges': edges, + 'vertices': vertices, + 'total_edges': len(edges), + 'total_vertices': len(vertices), + 'include_all_fields': include_all_fields + } + + except Exception as e: + print(f"Error in get_node_topology: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/topology/nodes/algo") +async def get_node_topology_by_algo( + collection_name: str, + algo: int = 0, + include_all_fields: bool = True +): + """ + Get topology information filtered to only node-to-node connections + that participate in a specific Flex-Algo. + + Args: + collection_name: The graph collection name + algo: The algorithm ID to filter by (default: 0) + include_all_fields: Return all fields or just essential ones (default: True) + + Example: + GET /graphs/ipv6_graph/topology/nodes/algo?algo=128 + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + print(f"Getting topology for collection: {collection_name} with algo: {algo}") + + # First, get all vertices that participate in this algo + vertices_with_algo = set() + + # Query to find vertices with the specified algo + vertex_query = f""" + FOR edge IN {collection_name} + FILTER CONTAINS(edge._from, 'node') AND CONTAINS(edge._to, 'node') + FOR vertex_id IN UNION_DISTINCT([edge._from], [edge._to]) + LET vertex = DOCUMENT(vertex_id) + FILTER vertex != null + FILTER HAS(vertex, 'sids') AND vertex.sids != null + FILTER LENGTH( + FOR sid IN vertex.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + RETURN DISTINCT vertex_id + """ + + vertex_cursor = db.aql.execute(vertex_query, bind_vars={'algo': algo}) + vertices_with_algo = set([vid for vid in vertex_cursor]) + + print(f"Found {len(vertices_with_algo)} vertices with algo {algo}") + + # Now get edges where BOTH endpoints participate in this algo + edge_query = f""" + FOR edge IN {collection_name} + FILTER CONTAINS(edge._from, 'node') AND CONTAINS(edge._to, 'node') + RETURN edge + """ + + edge_cursor = db.aql.execute(edge_query) + + # Filter edges where both endpoints have the algo + edges = [] + filtered_vertex_ids = set() + + for edge in edge_cursor: + if '_from' in edge and '_to' in edge: + # 
Only include edge if both vertices support this algo + if edge['_from'] in vertices_with_algo and edge['_to'] in vertices_with_algo: + if include_all_fields: + edges.append(edge) + else: + edges.append({ + '_from': edge['_from'], + '_to': edge['_to'] + }) + filtered_vertex_ids.add(edge['_from']) + filtered_vertex_ids.add(edge['_to']) + + # Get vertex details for vertices that are actually used in edges + vertices = {} + for vertex_id in filtered_vertex_ids: + vertex_collection, key = vertex_id.split('/') + + try: + vertex = db.collection(vertex_collection).get(key) + if vertex: + if include_all_fields: + # Include all vertex fields + vertices[vertex_id] = vertex + else: + # Filter SIDs to only include those matching the algo + algo_sids = [] + if 'sids' in vertex and vertex['sids']: + for sid in vertex['sids']: + if ('srv6_endpoint_behavior' in sid and + 'algo' in sid['srv6_endpoint_behavior'] and + sid['srv6_endpoint_behavior']['algo'] == algo): + algo_sids.append({ + 'srv6_sid': sid.get('srv6_sid'), + 'algo': sid['srv6_endpoint_behavior'].get('algo'), + 'endpoint_behavior': sid['srv6_endpoint_behavior'].get('endpoint_behavior') + }) + + vertex_detail = { + 'collection': vertex_collection, + 'name': vertex.get('name'), + 'router_id': vertex.get('router_id'), + 'sids': algo_sids if algo_sids else None, + 'asn': vertex.get('asn') + } + # Remove None values + vertices[vertex_id] = {k: v for k, v in vertex_detail.items() if v is not None} + except Exception as vertex_error: + print(f"Error getting vertex {vertex_id}: {str(vertex_error)}") + continue + + return { + 'graph_collection': collection_name, + 'algo': algo, + 'edges': edges, + 'vertices': vertices, + 'total_edges': len(edges), + 'total_vertices': len(vertices), + 'include_all_fields': include_all_fields + } + + except Exception as e: + print(f"Error in get_node_topology_by_algo: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +############################## +# Shortest Path and Traversals +############################## + +# basic shortest path +@router.get("/graphs/{collection_name}/shortest_path") +async def get_shortest_path( + collection_name: str, + source: str, + destination: str, + direction: str = "outbound", # or "inbound", "any" + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find shortest path between two nodes in a graph with detailed vertex and edge information. + + Args: + collection_name: The graph collection to search + source: Source node ID + destination: Destination node ID + direction: Path direction (outbound, inbound, or any) + algo: Flex-Algo ID for SRv6 SID selection (default: 0) + + The algo parameter filters which SRv6 SIDs are used in the srv6_data response. + Only SIDs matching the specified algo will be included in the SRv6 USID calculation. 
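+
+    Example:
+        GET /graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&algo=128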
+ """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build AQL query with optional algo filtering for igp_nodes + # For algo 0 or when algo filtering is not needed, use standard shortest path + # For non-zero algo, filter igp_nodes to only include those participating in that algo + if algo == 0: + # Standard shortest path without algo filtering + aql = f""" + WITH igp_node + LET path = ( + FOR v, e IN {direction.upper()} + SHORTEST_PATH @source TO @destination + @graph_name + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + router_id: v.router_id, + prefix: v.prefix, + name: v.name, + sids: v.sids + }}, + edge: e ? {{ + _id: e._id, + _key: e._key, + _from: e._from, + _to: e._to, + latency: e.latency, + percent_util_out: e.percent_util_out, + load: e.load + }} : null + }} + ) + RETURN {{ + path: path, + hopcount: LENGTH(path) - 1, + vertex_count: LENGTH(path), + source_info: FIRST(path).vertex, + destination_info: LAST(path).vertex + }} + """ + else: + # Algo-aware shortest path - use K_SHORTEST_PATHS to find multiple paths + # and filter to get the first one where all igp_nodes support the algo + aql = f""" + WITH igp_node + FOR path IN {direction.upper()} + K_SHORTEST_PATHS @source TO @destination + @graph_name + // Check if all igp_nodes in this path support the requested algo + LET igp_nodes_in_path = ( + FOR v IN path.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + RETURN node + ) + // Only accept paths where all igp_nodes support the algo + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + LIMIT 1 + + LET formatted_path = ( + FOR i IN 0..LENGTH(path.vertices)-1 + RETURN {{ + vertex: {{ + _id: path.vertices[i]._id, + _key: path.vertices[i]._key, + router_id: path.vertices[i].router_id, + prefix: path.vertices[i].prefix, + name: path.vertices[i].name, + sids: path.vertices[i].sids + }}, + edge: i < LENGTH(path.edges) ? 
{{ + _id: path.edges[i]._id, + _key: path.edges[i]._key, + _from: path.edges[i]._from, + _to: path.edges[i]._to, + latency: path.edges[i].latency, + percent_util_out: path.edges[i].percent_util_out, + load: path.edges[i].load + }} : null + }} + ) + + RETURN {{ + path: formatted_path, + hopcount: LENGTH(path.vertices) - 1, + vertex_count: LENGTH(path.vertices), + source_info: {{ + _id: path.vertices[0]._id, + _key: path.vertices[0]._key, + router_id: path.vertices[0].router_id, + prefix: path.vertices[0].prefix, + name: path.vertices[0].name, + sids: path.vertices[0].sids + }}, + destination_info: {{ + _id: LAST(path.vertices)._id, + _key: LAST(path.vertices)._key, + router_id: LAST(path.vertices).router_id, + prefix: LAST(path.vertices).prefix, + name: LAST(path.vertices).name, + sids: LAST(path.vertices).sids + }} + }} + """ + + # Prepare bind variables + bind_vars = { + 'source': source, + 'destination': destination, + 'graph_name': collection_name + } + + # Add algo to bind vars if filtering is enabled + if algo != 0: + bind_vars['algo'] = algo + + cursor = db.aql.execute(aql, bind_vars=bind_vars) + + results = [doc for doc in cursor] + + if not results or not results[0]['path']: + return { + "found": False, + "message": "No path found between specified nodes" + } + + # Get the existing response + response = { + "found": True, + "path": results[0]['path'], + "hopcount": results[0]['hopcount'], + "vertex_count": results[0]['vertex_count'], + "source_info": results[0]['source_info'], + "destination_info": results[0]['destination_info'], + "direction": direction, + "algo": algo + } + + # Process and append the SRv6 data with algo filtering + srv6_data = process_path_data(results[0]['path'], source, destination, algo=algo) + response["srv6_data"] = srv6_data + + return response + + except Exception as e: + print(f"Error finding shortest path: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +# latency weighted shortest path +@router.get("/graphs/{collection_name}/shortest_path/latency") +async def get_shortest_path_latency( + collection_name: str, + source: str, + destination: str, + direction: str = "outbound", # or "inbound", "any" + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find shortest path between two nodes using latency as weight. + + Args: + collection_name: The graph collection to search + source: Source node ID + destination: Destination node ID + direction: Path direction (outbound, inbound, or any) + algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build AQL query with optional algo filtering + if algo == 0: + # Standard shortest path with latency weight + aql = f""" + WITH igp_node + LET path = ( + FOR v, e IN {direction.upper()} + SHORTEST_PATH @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'latency', + defaultWeight: 1 + }} + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + router_id: v.router_id, + prefix: v.prefix, + name: v.name, + sids: v.sids + }}, + edge: e ? 
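/* e is null for the first vertex emitted by SHORTEST_PATH */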
{{ + _id: e._id, + _key: e._key, + _from: e._from, + _to: e._to, + latency: e.latency, + percent_util_out: e.percent_util_out, + load: e.load + }} : null + }} + ) + + LET total_latency = ( + FOR p IN path + FILTER p.edge != null + COLLECT AGGREGATE total = SUM(p.edge.latency) + RETURN total + ) + + RETURN {{ + path: path, + hopcount: LENGTH(path) - 1, + vertex_count: LENGTH(path), + source_info: FIRST(path).vertex, + destination_info: LAST(path).vertex, + total_latency: FIRST(total_latency) + }} + """ + else: + # Algo-aware shortest path with latency weight + aql = f""" + WITH igp_node + FOR path IN {direction.upper()} + K_SHORTEST_PATHS @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'latency', + defaultWeight: 1 + }} + LET igp_nodes_in_path = ( + FOR v IN path.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + RETURN node + ) + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + LIMIT 1 + + LET formatted_path = ( + FOR i IN 0..LENGTH(path.vertices)-1 + RETURN {{ + vertex: {{ + _id: path.vertices[i]._id, + _key: path.vertices[i]._key, + router_id: path.vertices[i].router_id, + prefix: path.vertices[i].prefix, + name: path.vertices[i].name, + sids: path.vertices[i].sids + }}, + edge: i < LENGTH(path.edges) ? {{ + _id: path.edges[i]._id, + _key: path.edges[i]._key, + _from: path.edges[i]._from, + _to: path.edges[i]._to, + latency: path.edges[i].latency, + percent_util_out: path.edges[i].percent_util_out, + load: path.edges[i].load + }} : null + }} + ) + + LET total_latency = ( + FOR i IN 0..LENGTH(path.edges)-1 + FILTER path.edges[i].latency != null + COLLECT AGGREGATE total = SUM(path.edges[i].latency) + RETURN total + ) + + RETURN {{ + path: formatted_path, + hopcount: LENGTH(path.vertices) - 1, + vertex_count: LENGTH(path.vertices), + source_info: {{ + _id: path.vertices[0]._id, + _key: path.vertices[0]._key, + router_id: path.vertices[0].router_id, + prefix: path.vertices[0].prefix, + name: path.vertices[0].name, + sids: path.vertices[0].sids + }}, + destination_info: {{ + _id: LAST(path.vertices)._id, + _key: LAST(path.vertices)._key, + router_id: LAST(path.vertices).router_id, + prefix: LAST(path.vertices).prefix, + name: LAST(path.vertices).name, + sids: LAST(path.vertices).sids + }}, + total_latency: FIRST(total_latency) + }} + """ + + # Prepare bind variables + bind_vars = { + 'source': source, + 'destination': destination, + 'graph_name': collection_name + } + + if algo != 0: + bind_vars['algo'] = algo + + cursor = db.aql.execute(aql, bind_vars=bind_vars) + + results = [doc for doc in cursor] + + if not results or not results[0]['path']: + return { + "found": False, + "message": "No path found between specified nodes" + } + + # Get the existing response + response = { + "found": True, + "path": results[0]['path'], + "hopcount": results[0]['hopcount'], + "vertex_count": results[0]['vertex_count'], + "source_info": results[0]['source_info'], + "destination_info": results[0]['destination_info'], + "direction": direction, + "total_latency": results[0]['total_latency'], + "algo": algo + } + + # Process and append the SRv6 data with algo filtering + srv6_data = process_path_data(results[0]['path'], source, destination, algo=algo) + 
response["srv6_data"] = srv6_data + + return response + + except Exception as e: + print(f"Error finding shortest path with latency weight: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +# weighted shortest path - outbound utilization +@router.get("/graphs/{collection_name}/shortest_path/utilization") +async def get_shortest_path_utilization( + collection_name: str, + source: str, + destination: str, + direction: str = "outbound", # or "inbound", "any" + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find shortest path between two nodes using utilization as weight. + + Args: + algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build AQL query with optional algo filtering + if algo == 0: + # Standard shortest path with utilization weight + aql = f""" + WITH igp_node + LET path = ( + FOR v, e IN {direction.upper()} + SHORTEST_PATH @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'percent_util_out', + defaultWeight: 1 + }} + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + router_id: v.router_id, + prefix: v.prefix, + name: v.name, + sids: v.sids + }}, + edge: e ? {{ + _id: e._id, + _key: e._key, + _from: e._from, + _to: e._to, + latency: e.latency, + percent_util_out: e.percent_util_out, + load: e.load + }} : null + }} + ) + + LET avg_utilization = ( + FOR p IN path + FILTER p.edge != null + COLLECT AGGREGATE + avg = AVERAGE(p.edge.percent_util_out) + RETURN avg + ) + + RETURN {{ + path: path, + hopcount: LENGTH(path) - 1, + vertex_count: LENGTH(path), + source_info: FIRST(path).vertex, + destination_info: LAST(path).vertex, + average_utilization: FIRST(avg_utilization) + }} + """ + else: + # Algo-aware shortest path with utilization weight + aql = f""" + WITH igp_node + FOR path IN {direction.upper()} + K_SHORTEST_PATHS @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'percent_util_out', + defaultWeight: 1 + }} + LET igp_nodes_in_path = ( + FOR v IN path.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + RETURN node + ) + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + LIMIT 1 + + LET formatted_path = ( + FOR i IN 0..LENGTH(path.vertices)-1 + RETURN {{ + vertex: {{ + _id: path.vertices[i]._id, + _key: path.vertices[i]._key, + router_id: path.vertices[i].router_id, + prefix: path.vertices[i].prefix, + name: path.vertices[i].name, + sids: path.vertices[i].sids + }}, + edge: i < LENGTH(path.edges) ? 
{{ + _id: path.edges[i]._id, + _key: path.edges[i]._key, + _from: path.edges[i]._from, + _to: path.edges[i]._to, + latency: path.edges[i].latency, + percent_util_out: path.edges[i].percent_util_out, + load: path.edges[i].load + }} : null + }} + ) + + LET avg_utilization = ( + FOR i IN 0..LENGTH(path.edges)-1 + FILTER path.edges[i].percent_util_out != null + COLLECT AGGREGATE avg = AVERAGE(path.edges[i].percent_util_out) + RETURN avg + ) + + RETURN {{ + path: formatted_path, + hopcount: LENGTH(path.vertices) - 1, + vertex_count: LENGTH(path.vertices), + source_info: {{ + _id: path.vertices[0]._id, + _key: path.vertices[0]._key, + router_id: path.vertices[0].router_id, + prefix: path.vertices[0].prefix, + name: path.vertices[0].name, + sids: path.vertices[0].sids + }}, + destination_info: {{ + _id: LAST(path.vertices)._id, + _key: LAST(path.vertices)._key, + router_id: LAST(path.vertices).router_id, + prefix: LAST(path.vertices).prefix, + name: LAST(path.vertices).name, + sids: LAST(path.vertices).sids + }}, + average_utilization: FIRST(avg_utilization) + }} + """ + + # Prepare bind variables + bind_vars = { + 'source': source, + 'destination': destination, + 'graph_name': collection_name + } + + if algo != 0: + bind_vars['algo'] = algo + + cursor = db.aql.execute(aql, bind_vars=bind_vars) + + results = [doc for doc in cursor] + + if not results or not results[0]['path']: + return { + "found": False, + "message": "No path found between specified nodes" + } + + # Get the existing response + response = { + "found": True, + "path": results[0]['path'], + "hopcount": results[0]['hopcount'], + "vertex_count": results[0]['vertex_count'], + "source_info": results[0]['source_info'], + "destination_info": results[0]['destination_info'], + "direction": direction, + "average_utilization": results[0]['average_utilization'] + } + + # Process and append the SRv6 data with algo filtering + srv6_data = process_path_data(results[0]['path'], source, destination, algo=algo) + response["srv6_data"] = srv6_data + response["algo"] = algo + + return response + + except Exception as e: + print(f"Error finding shortest path with utilization weight: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/shortest_path/sovereignty") +async def get_shortest_path_sovereignty( + collection_name: str, + source: str, + destination: str, + excluded_countries: str, + direction: str = "outbound", + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find shortest path between two nodes while avoiding specified countries. 
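+
+    Countries are matched against each edge's country_codes attribute; any
+    candidate path that traverses an excluded country is filtered out
+    (excluded_countries is a comma-separated list, e.g. "US,FR").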
+ + Args: + algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Convert comma-separated countries to list and create filter conditions + countries = [c.strip().upper() for c in excluded_countries.split(',')] + country_filters = ' AND '.join([f'p.edges[*].country_codes !like "%{country}%"' for country in countries]) + + # Build algo filtering if needed + if algo == 0: + algo_filter = "" + else: + algo_filter = f""" + LET igp_nodes_in_path = ( + FOR v IN p.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == {algo} + RETURN sid + ) > 0 + RETURN node + ) + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + """ + + # AQL query matching the working manual query but with additional path details + aql = f""" + FOR p IN {direction.upper()} k_shortest_paths + '{source}' TO '{destination}' + {collection_name} + OPTIONS {{uniqueVertices: "path", bfs: true}} + FILTER {country_filters} + {algo_filter} + LIMIT 1 + RETURN {{ + path: ( + FOR v IN p.vertices + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + name: v.name, + sids: v.sids + }} + }} + ), + countries_traversed: p.edges[*].country_codes[*], + hopcount: LENGTH(p.vertices) - 1, + vertex_count: LENGTH(p.vertices), + source_info: FIRST(p.vertices), + destination_info: LAST(p.vertices) + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + if not results: + return { + "found": False, + "message": f"No path found between specified nodes avoiding countries: {excluded_countries}" + } + + # Create response with summary data + response = { + "found": True, + "path": results[0]['path'], + "hopcount": results[0]['hopcount'], + "vertex_count": results[0]['vertex_count'], + "source_info": results[0]['source_info'], + "destination_info": results[0]['destination_info'], + "direction": direction, + "countries_traversed": results[0]['countries_traversed'], + "excluded_countries": countries, + "algo": algo + } + + # Process and append the SRv6 data with algo filtering + srv6_data = process_path_data(results[0]['path'], source, destination, algo=algo) + response["srv6_data"] = srv6_data + + return response + + except Exception as e: + print(f"Error finding path with sovereignty constraints: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +# weighted shortest path - load +@router.get("/graphs/{collection_name}/shortest_path/load") +async def get_shortest_path_load( + collection_name: str, + source: str, + destination: str, + direction: str = "outbound", # or "inbound", "any" + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find shortest path between two nodes using load as weight. 
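+    Note: after computing the path, this endpoint also calls process_load_data,
+    which increments the load value on each traversed edge, so repeated queries
+    are steered away from links that earlier requests already claimed.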
+ + Args: + algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build AQL query with optional algo filtering + if algo == 0: + # Standard shortest path with load weight + aql = f""" + WITH igp_node + LET path = ( + FOR v, e IN {direction.upper()} + SHORTEST_PATH @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'load', + defaultWeight: 1 + }} + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + router_id: v.router_id, + ipv4_address: v.ipv4_address, + ipv6_address: v.ipv6_address, + prefix: v.prefix, + prefix_len: v.prefix_len, + name: v.name, + sids: v.sids + }}, + edge: e ? {{ + _id: e._id, + _key: e._key, + _from: e._from, + _to: e._to, + latency: e.latency, + percent_util_out: e.percent_util_out, + load: e.load + }} : null + }} + ) + + LET avg_load = ( + FOR p IN path + FILTER p.edge != null + COLLECT AGGREGATE + avg = AVERAGE(p.edge.load) + RETURN avg + ) + + RETURN {{ + path: path, + hopcount: LENGTH(path) - 1, + vertex_count: LENGTH(path), + source_info: FIRST(path).vertex, + destination_info: LAST(path).vertex, + average_load: FIRST(avg_load) + }} + """ + else: + # Algo-aware shortest path with load weight + aql = f""" + WITH igp_node + FOR path IN {direction.upper()} + K_SHORTEST_PATHS @source TO @destination + @graph_name + OPTIONS {{ + weightAttribute: 'load', + defaultWeight: 1 + }} + LET igp_nodes_in_path = ( + FOR v IN path.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == @algo + RETURN sid + ) > 0 + RETURN node + ) + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + LIMIT 1 + + LET formatted_path = ( + FOR i IN 0..LENGTH(path.vertices)-1 + RETURN {{ + vertex: {{ + _id: path.vertices[i]._id, + _key: path.vertices[i]._key, + router_id: path.vertices[i].router_id, + ipv4_address: path.vertices[i].ipv4_address, + ipv6_address: path.vertices[i].ipv6_address, + prefix: path.vertices[i].prefix, + prefix_len: path.vertices[i].prefix_len, + name: path.vertices[i].name, + sids: path.vertices[i].sids + }}, + edge: i < LENGTH(path.edges) ? 
+                        {{
+                            _id: path.edges[i]._id,
+                            _key: path.edges[i]._key,
+                            _from: path.edges[i]._from,
+                            _to: path.edges[i]._to,
+                            latency: path.edges[i].latency,
+                            percent_util_out: path.edges[i].percent_util_out,
+                            load: path.edges[i].load
+                        }} : null
+                    }}
+            )
+
+            LET avg_load = (
+                FOR i IN 0..LENGTH(path.edges)-1
+                    FILTER path.edges[i].load != null
+                    COLLECT AGGREGATE avg = AVERAGE(path.edges[i].load)
+                    RETURN avg
+            )
+
+            RETURN {{
+                path: formatted_path,
+                hopcount: LENGTH(path.vertices) - 1,
+                vertex_count: LENGTH(path.vertices),
+                source_info: {{
+                    _id: path.vertices[0]._id,
+                    _key: path.vertices[0]._key,
+                    router_id: path.vertices[0].router_id,
+                    ipv4_address: path.vertices[0].ipv4_address,
+                    ipv6_address: path.vertices[0].ipv6_address,
+                    prefix: path.vertices[0].prefix,
+                    prefix_len: path.vertices[0].prefix_len,
+                    name: path.vertices[0].name,
+                    sids: path.vertices[0].sids
+                }},
+                destination_info: {{
+                    _id: LAST(path.vertices)._id,
+                    _key: LAST(path.vertices)._key,
+                    router_id: LAST(path.vertices).router_id,
+                    ipv4_address: LAST(path.vertices).ipv4_address,
+                    ipv6_address: LAST(path.vertices).ipv6_address,
+                    prefix: LAST(path.vertices).prefix,
+                    prefix_len: LAST(path.vertices).prefix_len,
+                    name: LAST(path.vertices).name,
+                    sids: LAST(path.vertices).sids
+                }},
+                average_load: FIRST(avg_load)
+            }}
+            """
+
+        # Prepare bind variables
+        bind_vars = {
+            'source': source,
+            'destination': destination,
+            'graph_name': collection_name
+        }
+
+        if algo != 0:
+            bind_vars['algo'] = algo
+
+        cursor = db.aql.execute(aql, bind_vars=bind_vars)
+
+        results = [doc for doc in cursor]
+
+        if not results or not results[0]['path']:
+            return {
+                "found": False,
+                "message": "No path found between specified nodes"
+            }
+
+        # Get the existing response
+        response = {
+            "found": True,
+            "path": results[0]['path'],
+            "hopcount": results[0]['hopcount'],
+            "vertex_count": results[0]['vertex_count'],
+            "source_info": results[0]['source_info'],
+            "destination_info": results[0]['destination_info'],
+            "direction": direction,
+            "average_load": results[0]['average_load'],
+            "algo": algo
+        }
+
+        # Process and append the SRv6 data with algo filtering
+        srv6_data = process_path_data(results[0]['path'], source, destination, algo=algo)
+        response["srv6_data"] = srv6_data
+
+        # Process load data, reusing the db connection opened above
+        # (increments edge load values as a side effect)
+        load_data = process_load_data(results[0]['path'], collection_name, db)
+        response["load_data"] = load_data
+
+        return response
+
+    except Exception as e:
+        print(f"Error finding shortest path with load weight: {str(e)}")
+        raise HTTPException(
+            status_code=500,
+            detail=str(e)
+        )
+
+@router.get("/graphs/{collection_name}/shortest_path/best-paths")
+async def get_best_paths(
+    collection_name: str,
+    source: str,
+    destination: str,
+    limit: int = 4,
+    direction: str = "outbound",
+    algo: int = 0  # Flex-Algo to use for SRv6 SID selection
+):
+    """
+    Find multiple best paths between source and destination nodes.
+    Default limit is 4 paths, but user can specify more or fewer.
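+    Paths come from AQL k_shortest_paths, so they are returned in order of
+    increasing path length (shortest first).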
+ + Args: + algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Build algo filtering if needed + if algo == 0: + algo_filter = "" + else: + algo_filter = f""" + LET igp_nodes_in_path = ( + FOR v IN p.vertices + FILTER CONTAINS(v._id, 'igp_node') + RETURN v + ) + LET nodes_with_algo = ( + FOR node IN igp_nodes_in_path + FILTER HAS(node, 'sids') AND node.sids != null + FILTER LENGTH( + FOR sid IN node.sids + FILTER HAS(sid, 'srv6_endpoint_behavior') + FILTER HAS(sid.srv6_endpoint_behavior, 'algo') + FILTER sid.srv6_endpoint_behavior.algo == {algo} + RETURN sid + ) > 0 + RETURN node + ) + FILTER LENGTH(igp_nodes_in_path) == LENGTH(nodes_with_algo) + """ + + # AQL query to get multiple paths + aql = f""" + FOR p IN {direction.upper()} k_shortest_paths + '{source}' TO '{destination}' + {collection_name} + OPTIONS {{uniqueVertices: "path", bfs: true}} + {algo_filter} + LIMIT {limit} + RETURN {{ + path: ( + FOR v IN p.vertices + RETURN {{ + vertex: {{ + _id: v._id, + _key: v._key, + name: v.name, + sids: v.sids + }} + }} + ), + countries_traversed: p.edges[*].country_codes[*], + hopcount: LENGTH(p.vertices) - 1, + vertex_count: LENGTH(p.vertices), + source_info: FIRST(p.vertices), + destination_info: LAST(p.vertices) + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + if not results: + return { + "found": False, + "message": "No paths found between specified nodes" + } + + # Process each path and create response + paths = [] + for result in results: + path_response = { + "path": result['path'], + "hopcount": result['hopcount'], + "vertex_count": result['vertex_count'], + "source_info": result['source_info'], + "destination_info": result['destination_info'], + "countries_traversed": result['countries_traversed'] + } + + # Process and append SRv6 data for each path with algo filtering + srv6_data = process_path_data(result['path'], source, destination, algo=algo) + path_response["srv6_data"] = srv6_data + paths.append(path_response) + + return { + "found": True, + "total_paths_found": len(paths), + "direction": direction, + "algo": algo, + "paths": paths + } + + except Exception as e: + print(f"Error finding best paths: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/shortest_path/next-best-path") +async def get_next_best_paths( + collection_name: str, + source: str, + destination: str, + same_hop_limit: int = 4, + plus_one_limit: int = 8, + direction: str = "outbound", + algo: int = 0 # Flex-Algo to use for SRv6 SID selection +): + """ + Find the shortest path and alternative paths with similar hop counts. + Allows customization of how many paths to return for each hop count. 
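+    Implementation: three queries run in sequence - SHORTEST_PATH to establish
+    the base hop count, then two fixed-depth traversals that collect alternatives
+    at base_hopcount and base_hopcount + 1.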
+
+    Args:
+        algo: Flex-Algo ID for path computation and SRv6 SID selection (default: 0)
+    """
+    try:
+        db = get_db()
+        if not db.has_collection(collection_name):
+            raise HTTPException(
+                status_code=404,
+                detail=f"Collection {collection_name} not found"
+            )
+
+        # Debug prints for shortest path
+        print(f"\nProcessing next-best-path request:")
+        print(f"Source: {source}")
+        print(f"Destination: {destination}")
+        print(f"Direction: {direction}")
+
+        # First query: Get shortest path and its hop count
+        shortest_path_query = f"""
+        WITH igp_node
+        LET path = (
+            FOR v, e IN {direction.upper()}
+                SHORTEST_PATH '{source}' TO '{destination}'
+                {collection_name}
+                RETURN {{
+                    vertex: {{
+                        _id: v._id,
+                        _key: v._key,
+                        router_id: v.router_id,
+                        prefix: v.prefix,
+                        name: v.name,
+                        sids: v.sids
+                    }},
+                    edge: e ? {{
+                        _id: e._id,
+                        _key: e._key,
+                        _from: e._from,
+                        _to: e._to,
+                        latency: e.latency,
+                        percent_util_out: e.percent_util_out,
+                        country_codes: e.country_codes,
+                        load: e.load
+                    }} : null
+                }}
+        )
+        RETURN {{
+            path: path,
+            hopcount: LENGTH(path) - 1
+        }}
+        """
+
+        cursor = db.aql.execute(shortest_path_query)
+        results = [doc for doc in cursor]
+
+        if not results:
+            return {
+                "found": False,
+                "message": "No path found between specified nodes"
+            }
+
+        shortest_result = results[0]
+        base_hopcount = shortest_result['hopcount']
+        print(f"Found shortest path with {base_hopcount} hops")
+
+        # Second query: Get alternative paths with same hop count
+        same_hop_query = f"""
+        WITH igp_node
+        FOR v, e, p IN {base_hopcount}..{base_hopcount} {direction.upper()}
+            '{source}' {collection_name}
+            OPTIONS {{ uniqueVertices: "path", bfs: true }}
+            FILTER v._id == '{destination}'
+            LIMIT {same_hop_limit}
+            RETURN {{
+                path: (
+                    FOR vertex IN p.vertices
+                    RETURN {{
+                        vertex: vertex
+                    }}
+                ),
+                hopcount: LENGTH(p.vertices) - 1
+            }}
+        """
+
+        # Third query: Get paths with hop count + 1
+        plus_one_hop_query = f"""
+        WITH igp_node
+        FOR v, e, p IN {base_hopcount + 1}..{base_hopcount + 1} {direction.upper()}
+            '{source}' {collection_name}
+            OPTIONS {{ uniqueVertices: "path", bfs: true }}
+            FILTER v._id == '{destination}'
+            LIMIT {plus_one_limit}
+            RETURN {{
+                path: (
+                    FOR vertex IN p.vertices
+                    RETURN {{
+                        vertex: vertex
+                    }}
+                ),
+                hopcount: LENGTH(p.vertices) - 1
+            }}
+        """
+
+        # Execute same hop query
+        print(f"\nSearching for paths with same hop count ({base_hopcount})...")
+        same_hop_cursor = db.aql.execute(same_hop_query)
+        same_hop_paths = [doc for doc in same_hop_cursor]
+        print(f"Found {len(same_hop_paths)} alternative paths with {base_hopcount} hops")
+
+        # Execute plus one hop query
+        print(f"\nSearching for paths with hop count + 1 ({base_hopcount + 1})...")
+        plus_one_cursor = db.aql.execute(plus_one_hop_query)
+        plus_one_paths = [doc for doc in plus_one_cursor]
+        print(f"Found {len(plus_one_paths)} paths with {base_hopcount + 1} hops")
+
+        # Process SRv6 data for all paths with algo filtering
+        shortest_srv6 = process_path_data(shortest_result['path'], source, destination, algo=algo)
+        same_hop_srv6_data = [
+            process_path_data(path['path'], source, destination, algo=algo)
+            for path in same_hop_paths
+        ]
+        plus_one_srv6_data = [
+            process_path_data(path['path'], source, destination, algo=algo)
+            for path in plus_one_paths
+        ]
+
+        return {
+            "found": True,
+            "algo": algo,
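+            # base shortest path first, then the two groups of alternatives and a summary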
"shortest_path": { + "path": shortest_result['path'], + "hopcount": shortest_result['hopcount'], + "srv6_data": shortest_srv6 + }, + "same_hopcount_paths": [{ + "path": path['path'], + "hopcount": path['hopcount'], + "srv6_data": srv6 + } for path, srv6 in zip(same_hop_paths, same_hop_srv6_data)], + "plus_one_hopcount_paths": [{ + "path": path['path'], + "hopcount": path['hopcount'], + "srv6_data": srv6 + } for path, srv6 in zip(plus_one_paths, plus_one_srv6_data)], + "summary": { + "base_hopcount": base_hopcount, + "same_hopcount_alternatives": len(same_hop_paths), + "plus_one_hopcount_alternatives": len(plus_one_paths) + } + } + + except Exception as e: + print(f"Error finding next best paths: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/traverse") +async def traverse_graph( + collection_name: str, + source: str, + destination: str = None, + min_depth: int = 1, + max_depth: int = 4, + direction: str = "outbound" # or "inbound", "any" +): + """ + Traverse graph from a source node with optional destination filtering + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build filter clause if destination node is specified + filter_clause = f"FILTER v._id == '{destination}'" if destination else "" + + # AQL query for traversal with detailed information + aql = f""" + LET paths = ( + FOR v, e, p IN {min_depth}..{max_depth} {direction.upper()} + '{source}' + {collection_name} + OPTIONS {{uniqueVertices: "path", bfs: true}} + {filter_clause} + RETURN DISTINCT {{ + path: p.vertices[*]._key, + sids: p.vertices[*].sids[0].srv6_sid, + country_codes: p.edges[*].country_codes, + metrics: {{ + total_latency: SUM(p.edges[*].unidir_link_delay), + avg_util: AVG(p.edges[*].percent_util_out), + load: AVG(p.edges[*].load), + hop_count: LENGTH(p.vertices) - 1 + }}, + vertices: ( + FOR vertex IN p.vertices + RETURN {{ + _id: vertex._id, + _key: vertex._key, + router_id: vertex.router_id, + prefix: vertex.prefix, + name: vertex.name, + sids: vertex.sids[0].srv6_sid + }} + ), + edges: ( + FOR edge IN p.edges + RETURN {{ + _key: edge._key, + latency: edge.unidir_link_delay, + percent_util: edge.percent_util_out, + load: edge.load, + country_codes: edge.country_codes + }} + ) + }} + ) + RETURN {{ + paths: paths, + total_paths: LENGTH(paths) + }} + """ + + cursor = db.aql.execute(aql) + result = [doc for doc in cursor][0] # Get the first (and only) result + + return { + "source": source, + "destination": destination, + "min_depth": min_depth, + "max_depth": max_depth, + "direction": direction, + "traversal_results": result['paths'], + "total_paths": result['total_paths'] + } + + except Exception as e: + print(f"Error traversing graph: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/traverse/simple") +async def traverse_graph_simple( + collection_name: str, + source: str, + destination: str = None, + min_depth: int = 1, + max_depth: int = 5, + direction: str = "any" # or "inbound", "outbound" +): + """ + Simplified graph traversal with basic path information + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + 
status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # Build filter clause if destination node is specified + filter_clause = f"FILTER v._id == '{destination}'" if destination else "" + + # AQL query for simplified traversal + aql = f""" + LET paths = ( + FOR v, e, p IN {min_depth}..{max_depth} {direction.upper()} + '{source}' + {collection_name} + OPTIONS {{uniqueVertices: "path", bfs: true}} + {filter_clause} + RETURN DISTINCT {{ + path: p.vertices[*]._key, + sids: p.vertices[*].sids[0].srv6_sid, + country_codes: p.edges[*].country_codes, + metrics: {{ + total_latency: SUM(p.edges[*].unidir_link_delay), + avg_util: AVG(p.edges[*].percent_util_out), + load: AVG(p.edges[*].load), + hop_count: LENGTH(p.vertices) - 1 + }} + }} + ) + RETURN {{ + paths: paths, + total_paths: LENGTH(paths) + }} + """ + + cursor = db.aql.execute(aql) + result = [doc for doc in cursor][0] # Get the first (and only) result + + return { + "source": source, + "destination": destination, + "min_depth": min_depth, + "max_depth": max_depth, + "direction": direction, + "traversal_results": result['paths'], + "total_paths": result['total_paths'] + } + + except Exception as e: + print(f"Error in simple traversal: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/graphs/{collection_name}/neighbors") +async def get_neighbors( + collection_name: str, + source: str, + direction: str = "outbound", # or "inbound", "any" + depth: int = 1 +): + """ + Get immediate neighbors of a node + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Validate direction parameter + if direction.lower() not in ["outbound", "inbound", "any"]: + raise HTTPException( + status_code=400, + detail="Direction must be 'outbound', 'inbound', or 'any'" + ) + + # AQL query for neighbors + aql = f""" + FOR v, e, p IN 1..{depth} {direction.upper()} + '{source}' + {collection_name} + OPTIONS {{uniqueVertices: "path"}} + RETURN DISTINCT {{ + neighbor: {{ + _id: v._id, + _key: v._key, + router_id: v.router_id, + prefix: v.prefix, + name: v.name, + sids: v.sids[0].srv6_sid + }}, + edge: {{ + _key: e._key, + latency: e.unidir_link_delay, + percent_util: e.percent_util_out, + load: e.load, + country_codes: e.country_codes + }}, + metrics: {{ + hop_count: LENGTH(p.vertices) - 1 + }} + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + return { + "source": source, + "direction": direction, + "depth": depth, + "neighbor_count": len(results), + "neighbors": results + } + + except Exception as e: + print(f"Error getting neighbors: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +# Add this at the bottom of the file +print("\nRegistered routes in graphs.py:") +for route in router.routes: + print(f" {route.methods} {route.path}") + +# Test route to verify routing is working +@router.get("/api/v1/test") +async def test_route(): + return {"message": "Test route working"} \ No newline at end of file diff --git a/api/v1/app/routes/instances.py b/api/v1/app/routes/instances.py new file mode 100644 index 00000000..12e49b61 --- /dev/null +++ b/api/v1/app/routes/instances.py @@ -0,0 +1,36 @@ +from fastapi import APIRouter, HTTPException 
+from arango import ArangoClient +from ..config.settings import Settings + +router = APIRouter() +settings = Settings() + +def get_db(): + client = ArangoClient(hosts=settings.database_server) + try: + db = client.db( + settings.database_name, + username=settings.username, + password=settings.password + ) + return db + except Exception as e: + raise HTTPException( + status_code=500, + detail=f"Could not connect to database: {str(e)}" + ) + +@router.get("/instances") +async def get_instances(): + try: + db = get_db() + # Get list of collections that are graphs + collections = [c['name'] for c in db.collections() + if not c['name'].startswith('_') + and c['type'] == 'edge'] + return collections + except Exception as e: + raise HTTPException( + status_code=500, + detail=str(e) + ) \ No newline at end of file diff --git a/api/v1/app/routes/rpo.py b/api/v1/app/routes/rpo.py new file mode 100644 index 00000000..8e201814 --- /dev/null +++ b/api/v1/app/routes/rpo.py @@ -0,0 +1,452 @@ +from fastapi import APIRouter, HTTPException, Query +from typing import List, Optional, Dict, Any, Union +from arango import ArangoClient +from ..config.settings import Settings +import logging +from .graphs import get_shortest_path + +# Configure logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +router = APIRouter() +settings = Settings() + +# Supported metric types and their optimization strategies +SUPPORTED_METRICS = { + 'cpu_utilization': {'type': 'numeric', 'optimize': 'minimize'}, + 'gpu_utilization': {'type': 'numeric', 'optimize': 'minimize'}, + 'memory_utilization': {'type': 'numeric', 'optimize': 'minimize'}, + 'time_to_first_token': {'type': 'numeric', 'optimize': 'minimize'}, + 'cost_per_million_tokens': {'type': 'numeric', 'optimize': 'minimize'}, + 'cost_per_hour': {'type': 'numeric', 'optimize': 'minimize'}, + 'gpu_model': {'type': 'string', 'optimize': 'exact_match'}, + 'language_model': {'type': 'string', 'optimize': 'exact_match'}, + 'response_time': {'type': 'numeric', 'optimize': 'minimize'} +} + +def get_db(): + client = ArangoClient(hosts=settings.database_server) + try: + db = client.db( + settings.database_name, + username=settings.username, + password=settings.password + ) + return db + except Exception as e: + raise HTTPException( + status_code=500, + detail=f"Could not connect to database: {str(e)}" + ) + +@router.get("/rpo") +async def get_rpo_info(): + """ + Get information about Resource Path Optimization (RPO) capabilities + """ + try: + db = get_db() + + # Get all collections to identify potential graph collections + all_collections = db.collections() + graph_collections = [] + + for collection in all_collections: + collection_name = collection['name'] + # Look for collections that might be graph collections + # Common patterns: *_graph, topology_*, network_*, etc. 
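+            # (name-based heuristic only; adjust these patterns if your deployment
+            # uses different collection naming conventions)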
+ # Exclude vertex collections like igp_domain, igp_node + if (any(pattern in collection_name.lower() for pattern in ['graph', 'topology', 'network']) and + not any(vertex_pattern in collection_name.lower() for vertex_pattern in ['domain', 'node', 'vertex'])): + graph_collections.append(collection_name) + + return { + 'supported_metrics': SUPPORTED_METRICS, + 'description': 'Resource Path Optimization (RPO) API for intelligent destination selection', + 'available_graph_collections': sorted(graph_collections), + 'note': 'Use graphs parameter to specify which topology graph to use for path finding' + } + + except Exception as e: + logger.warning(f"Could not fetch graph collections: {str(e)}") + return { + 'supported_metrics': SUPPORTED_METRICS, + 'description': 'Resource Path Optimization (RPO) API for intelligent destination selection', + 'available_graph_collections': [], + 'note': 'Use graphs parameter to specify which topology graph to use for path finding' + } + +@router.get("/rpo/{collection_name}") +async def get_collection_endpoints( + collection_name: str, + limit: Optional[int] = None +): + """ + Get all endpoints from a specific collection with their metrics + """ + try: + db = get_db() + + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Query all endpoints from the collection + endpoints_query = f""" + FOR doc IN {collection_name} + RETURN doc + """ + + if limit: + endpoints_query = f""" + FOR doc IN {collection_name} + LIMIT {limit} + RETURN doc + """ + + cursor = db.aql.execute(endpoints_query) + endpoints = [doc for doc in cursor] + + return { + 'collection': collection_name, + 'type': 'collection', + 'count': len(endpoints), + 'data': endpoints + } + + except HTTPException: + raise + except Exception as e: + logger.error(f"Error getting collection endpoints: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/rpo/{collection_name}/select-optimal") +async def select_optimal_endpoint( + collection_name: str, + source: str = Query(..., description="Source endpoint ID"), + metric: str = Query(..., description="Metric to optimize for"), + value: Optional[str] = Query(None, description="Required value for exact match metrics"), + graphs: str = Query(..., description="Graph collection to use for path finding"), + direction: str = Query("outbound", description="Direction for path finding"), + algo: Optional[int] = Query(None, description="Flex-Algo ID to use for path finding (default: 0)") +): + """ + Select optimal destination endpoint from a collection based on metrics for Resource Path Optimization + """ + try: + db = get_db() + + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + if metric not in SUPPORTED_METRICS: + raise HTTPException( + status_code=400, + detail=f"Unsupported metric: {metric}. 
Supported metrics: {list(SUPPORTED_METRICS.keys())}" + ) + + # Get all endpoints from the collection + endpoints_query = f""" + FOR doc IN {collection_name} + RETURN doc + """ + + cursor = db.aql.execute(endpoints_query) + endpoints = [doc for doc in cursor] + + if not endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found in collection {collection_name}" + ) + + # Filter endpoints with valid metric values + metric_config = SUPPORTED_METRICS[metric] + optimization_strategy = metric_config['optimize'] + + if optimization_strategy == 'exact_match': + if not value: + raise HTTPException( + status_code=400, + detail=f"Value required for exact match metric: {metric}" + ) + + # Find endpoints that match the exact value + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) == value + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with {metric} = {value}" + ) + + selected_endpoint = valid_endpoints[0] + + elif optimization_strategy == 'minimize': + # Find endpoint with minimum value for the metric (excluding null values) + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) is not None + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with valid {metric} values" + ) + + selected_endpoint = min( + valid_endpoints, + key=lambda x: x.get(metric) + ) + + elif optimization_strategy == 'maximize': + # Find endpoint with maximum value for the metric (excluding null values) + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) is not None + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with valid {metric} values" + ) + + selected_endpoint = max( + valid_endpoints, + key=lambda x: x.get(metric) + ) + + else: + raise HTTPException( + status_code=500, + detail=f"Unknown optimization strategy: {optimization_strategy}" + ) + + # Find shortest path to selected endpoint + destination = selected_endpoint['_id'] + logger.info(f"Finding shortest path from {source} to {destination}...") + + try: + path_result = await get_shortest_path( + collection_name=graphs, + source=source, + destination=destination, + direction=direction, + algo=algo + ) + except Exception as path_error: + logger.warning(f"Could not find path: {str(path_error)}") + path_result = { + "found": False, + "error": str(path_error), + "message": "No path found between specified nodes" + } + + return { + 'collection': collection_name, + 'source': source, + 'selected_endpoint': selected_endpoint, + 'optimization_metric': metric, + 'metric_value': selected_endpoint.get(metric), + 'optimization_strategy': optimization_strategy, + 'algo': algo if algo is not None else 0, + 'total_endpoints_evaluated': len(endpoints), + 'valid_endpoints_count': len(valid_endpoints) if 'valid_endpoints' in locals() else len(endpoints), + 'path_result': path_result, + 'summary': { + 'destination': destination, + 'destination_name': selected_endpoint.get('name', 'Unknown'), + 'path_found': path_result.get('found', False), + 'hop_count': path_result.get('hopcount', 0) + } + } + + except HTTPException: + raise + except Exception as e: + logger.error(f"Error in select_optimal_endpoint: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/rpo/{collection_name}/select-from-list") +async def select_from_specific_endpoints( + collection_name: str, + source: str = Query(..., description="Source endpoint ID"), + 
destinations: str = Query(..., description="Comma-separated list of destination endpoint IDs"), + metric: str = Query(..., description="Metric to optimize for"), + value: Optional[str] = Query(None, description="Required value for exact match metrics"), + graphs: str = Query(..., description="Graph collection to use for path finding"), + direction: str = Query("outbound", description="Direction for path finding"), + algo: Optional[int] = Query(None, description="Flex-Algo ID to use for path finding (default: 0)") +): + """ + Select optimal destination from a specific list of endpoints for Resource Path Optimization + """ + try: + db = get_db() + + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + if metric not in SUPPORTED_METRICS: + raise HTTPException( + status_code=400, + detail=f"Unsupported metric: {metric}. Supported metrics: {list(SUPPORTED_METRICS.keys())}" + ) + + # Parse destination list + destination_list = [dest.strip() for dest in destinations.split(',')] + + # Get endpoint details for each destination + endpoints = [] + for dest_id in destination_list: + # Extract collection and key from dest_id (e.g., "hosts/amsterdam" -> collection="hosts", key="amsterdam") + if '/' in dest_id: + dest_collection, key = dest_id.split('/', 1) + else: + dest_collection = collection_name + key = dest_id + + # Try to get the endpoint from the specific collection + if db.has_collection(dest_collection): + try: + endpoint = db.collection(dest_collection).get(key) + if endpoint: + endpoints.append(endpoint) + else: + logger.warning(f"Could not find endpoint: {dest_id}") + except Exception as e: + logger.warning(f"Error getting endpoint {dest_id}: {str(e)}") + else: + logger.warning(f"Collection {dest_collection} not found for endpoint: {dest_id}") + + if not endpoints: + raise HTTPException( + status_code=404, + detail="No valid endpoints found in the provided list" + ) + + # Apply selection logic + metric_config = SUPPORTED_METRICS[metric] + optimization_strategy = metric_config['optimize'] + + if optimization_strategy == 'exact_match': + if not value: + raise HTTPException( + status_code=400, + detail=f"Value required for exact match metric: {metric}" + ) + + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) == value + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with {metric} = {value}" + ) + + selected_endpoint = valid_endpoints[0] + + elif optimization_strategy == 'minimize': + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) is not None + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with valid {metric} values" + ) + + selected_endpoint = min( + valid_endpoints, + key=lambda x: x.get(metric) + ) + + elif optimization_strategy == 'maximize': + valid_endpoints = [ + ep for ep in endpoints + if ep.get(metric) is not None + ] + + if not valid_endpoints: + raise HTTPException( + status_code=404, + detail=f"No endpoints found with valid {metric} values" + ) + + selected_endpoint = max( + valid_endpoints, + key=lambda x: x.get(metric) + ) + + # Find shortest path to selected endpoint + destination = selected_endpoint['_id'] + logger.info(f"Finding shortest path from {source} to {destination}...") + + try: + path_result = await get_shortest_path( + collection_name=graphs, + source=source, + destination=destination, + direction=direction, + algo=algo + ) + except 
Exception as path_error: + logger.warning(f"Could not find path: {str(path_error)}") + path_result = { + "found": False, + "error": str(path_error), + "message": "No path found between specified nodes" + } + + return { + 'collection': collection_name, + 'source': source, + 'selected_endpoint': selected_endpoint, + 'optimization_metric': metric, + 'metric_value': selected_endpoint.get(metric), + 'optimization_strategy': optimization_strategy, + 'algo': algo if algo is not None else 0, + 'total_candidates': len(endpoints), + 'valid_endpoints_count': len(valid_endpoints) if 'valid_endpoints' in locals() else len(endpoints), + 'path_result': path_result, + 'summary': { + 'destination': destination, + 'destination_name': selected_endpoint.get('name', 'Unknown'), + 'path_found': path_result.get('found', False), + 'hop_count': path_result.get('hopcount', 0) + } + } + + except HTTPException: + raise + except Exception as e: + logger.error(f"Error in select_from_specific_endpoints: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) \ No newline at end of file diff --git a/api/v1/app/routes/vpns.py b/api/v1/app/routes/vpns.py new file mode 100644 index 00000000..a35ef669 --- /dev/null +++ b/api/v1/app/routes/vpns.py @@ -0,0 +1,817 @@ +from fastapi import APIRouter, HTTPException +from typing import List, Optional, Dict, Any +from arango import ArangoClient +from ..config.settings import Settings +import logging + +# Configure logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +router = APIRouter() +settings = Settings() + +# Debug print to see registered routes +print("Available VPN routes:") +for route in router.routes: + print(f" {route.path}") + +# VPN-related collections +VPN_COLLECTIONS = { + 'prefixes': [ + 'l3vpn_v4_prefix', + 'l3vpn_v6_prefix' + ], + 'related': [ + 'igp_node', # PE routers + 'bgp_node' # PE routers in BGP context + ] +} + +def get_db(): + client = ArangoClient(hosts=settings.database_server) + try: + db = client.db( + settings.database_name, + username=settings.username, + password=settings.password + ) + return db + except Exception as e: + raise HTTPException( + status_code=500, + detail=f"Could not connect to database: {str(e)}" + ) + +################### +# VPN Routes +################### + +@router.get("/vpns") +async def get_vpn_collections(): + """ + Get a list of VPN-related collections in the database + """ + try: + db = get_db() + # Get all collections + collections = db.collections() + + # Filter for VPN collections + vpn_collections = [ + { + 'name': c['name'], + 'type': c['type'], + 'status': c['status'], + 'count': db.collection(c['name']).count() + } + for c in collections + if not c['name'].startswith('_') and + (c['name'] in VPN_COLLECTIONS['prefixes'] or + c['name'].startswith('l3vpn_') or + c['name'].startswith('vpn_')) + ] + + # Sort by name + vpn_collections.sort(key=lambda x: x['name']) + + return { + 'collections': vpn_collections, + 'total_count': len(vpn_collections) + } + except Exception as e: + logger.error(f"Error in get_vpn_collections: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}") +async def get_vpn_collection_info(collection_name: str): + """ + Get information about a specific VPN collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN collection + if not 
(collection_name in VPN_COLLECTIONS['prefixes'] or + collection_name.startswith('l3vpn_') or + collection_name.startswith('vpn_')): + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN collection" + ) + + collection = db.collection(collection_name) + properties = collection.properties() + + return { + 'name': collection_name, + 'type': properties['type'], + 'status': properties['status'], + 'count': collection.count() + } + except HTTPException: + raise + except Exception as e: + logger.error(f"Error in get_vpn_collection_info: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/summary") +async def get_vpn_summary(collection_name: str): + """ + Get summary statistics for a VPN collection + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN collection + if not (collection_name in VPN_COLLECTIONS['prefixes'] or + collection_name.startswith('l3vpn_') or + collection_name.startswith('vpn_')): + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN collection" + ) + + # Get summary statistics based on the actual data structure + aql = f""" + LET total_count = LENGTH({collection_name}) + + LET unique_rds = ( + FOR doc IN {collection_name} + COLLECT rd = doc.vpn_rd + RETURN rd + ) + + LET unique_route_targets = ( + FOR doc IN {collection_name} + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + COLLECT target = rt + RETURN target + ) + + LET unique_nexthops = ( + FOR doc IN {collection_name} + COLLECT nexthop = doc.nexthop + RETURN nexthop + ) + + LET unique_peer_asns = ( + FOR doc IN {collection_name} + COLLECT asn = doc.peer_asn + RETURN asn + ) + + LET unique_labels = ( + FOR doc IN {collection_name} + FOR label IN doc.labels + COLLECT l = label + RETURN l + ) + + RETURN {{ + total_prefixes: total_count, + unique_rd_count: LENGTH(unique_rds), + unique_route_target_count: LENGTH(unique_route_targets), + unique_nexthop_count: LENGTH(unique_nexthops), + unique_peer_asn_count: LENGTH(unique_peer_asns), + unique_label_count: LENGTH(unique_labels) + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + if not results: + return { + 'collection': collection_name, + 'total_prefixes': 0, + 'unique_rd_count': 0, + 'unique_route_target_count': 0, + 'unique_nexthop_count': 0, + 'unique_peer_asn_count': 0, + 'unique_label_count': 0 + } + + return { + 'collection': collection_name, + **results[0] + } + + except Exception as e: + logger.error(f"Error in get_vpn_summary: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/pe-routers") +async def get_pe_routers(collection_name: str): + """ + Get a list of PE routers (nexthops) and their prefix counts + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Get PE routers (nexthops) and their prefix counts + aql = f""" + FOR doc IN {collection_name} + COLLECT nexthop = doc.nexthop WITH COUNT INTO count + RETURN {{ + pe_router: nexthop, + 
prefix_count: count + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + return { + 'collection': collection_name, + 'total_pe_routers': len(results), + 'pe_routers': results + } + + except Exception as e: + logger.error(f"Error in get_pe_routers: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/route-targets") +async def get_route_targets(collection_name: str): + """ + Get a list of route targets and their prefix counts + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Get route targets and their prefix counts + aql = f""" + FOR doc IN {collection_name} + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + LET clean_rt = SUBSTRING(rt, 3) + COLLECT route_target = clean_rt WITH COUNT INTO count + RETURN {{ + route_target: route_target, + prefix_count: count + }} + """ + + cursor = db.aql.execute(aql) + results = [doc for doc in cursor] + + return { + 'collection': collection_name, + 'total_route_targets': len(results), + 'route_targets': results + } + + except Exception as e: + logger.error(f"Error in get_route_targets: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/prefixes/by-pe") +async def get_vpn_prefixes_by_pe( + collection_name: str, + pe_router: str, + limit: int = 100 +): + """ + Get VPN prefixes advertised by a specific PE router (nexthop) + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Get prefixes for the specified PE router (nexthop) + aql = f""" + FOR doc IN {collection_name} + FILTER doc.nexthop == @pe_router + LIMIT {limit} + RETURN {{ + _key: doc._key, + prefix: doc.prefix, + prefix_len: doc.prefix_len, + vpn_rd: doc.vpn_rd, + nexthop: doc.nexthop, + labels: doc.labels, + peer_asn: doc.peer_asn, + route_targets: ( + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + RETURN SUBSTRING(rt, 3) + ), + srv6_sid: doc.prefix_sid.srv6_l3_service.sub_tlvs["1"][0].sid + }} + """ + + cursor = db.aql.execute(aql, bind_vars={'pe_router': pe_router}) + results = [doc for doc in cursor] + + # Convert labels to hex in Python and rename to 'function' + for doc in results: + if 'labels' in doc and doc['labels']: + # Convert to hex, trim trailing zeros, and ensure it's at least 4 characters (16 bits) + doc['function'] = [ + format(label, 'x').rstrip('0') or '0' # If all zeros were stripped, return '0' + for label in doc['labels'] + ] + + # Ensure each function value is at least 4 characters (16 bits) + doc['function'] = [ + f if len(f) >= 4 else f.zfill(4) + for f in doc['function'] + ] + + # Create the combined SID field + if 'srv6_sid' in doc and doc['srv6_sid'] and doc['function']: + # Get the base SRv6 SID + base_sid = doc['srv6_sid'] + # Remove trailing colons if present + if base_sid.endswith('::'): + 
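+                    # drop the trailing '::' so the VPN function bits can be appended;
+                    # hypothetical example: 'fc00:0:1::' + '0e04' -> 'fc00:0:1:0e04::'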
base_sid = base_sid[:-2] + elif base_sid.endswith(':'): + base_sid = base_sid[:-1] + + # Create the combined SID for each function + doc['sid'] = [f"{base_sid}:{func}::" for func in doc['function']] + + # Get total count + aql_count = f""" + FOR doc IN {collection_name} + FILTER doc.nexthop == @pe_router + COLLECT AGGREGATE count = COUNT() + RETURN count + """ + + count_cursor = db.aql.execute(aql_count, bind_vars={'pe_router': pe_router}) + total_count = [count for count in count_cursor][0] + + return { + 'collection': collection_name, + 'pe_router': pe_router, + 'total_prefixes': total_count, + 'prefixes': results, + 'limit_applied': limit + } + + except Exception as e: + logger.error(f"Error in get_vpn_prefixes_by_pe: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/prefixes/by-rt") +async def get_vpn_prefixes_by_rt( + collection_name: str, + route_target: str, + limit: int = 100 +): + """ + Get VPN prefixes associated with a specific route target + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Format the route target to match how it's stored + formatted_rt = f"rt={route_target}" + + # Get prefixes for the specified route target + aql = f""" + FOR doc IN {collection_name} + FILTER @route_target IN doc.base_attrs.ext_community_list + LIMIT {limit} + RETURN {{ + _key: doc._key, + prefix: doc.prefix, + prefix_len: doc.prefix_len, + vpn_rd: doc.vpn_rd, + nexthop: doc.nexthop, + labels: doc.labels, + peer_asn: doc.peer_asn, + route_targets: ( + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + RETURN SUBSTRING(rt, 3) + ), + srv6_sid: doc.prefix_sid.srv6_l3_service.sub_tlvs["1"][0].sid + }} + """ + + cursor = db.aql.execute(aql, bind_vars={'route_target': formatted_rt}) + results = [doc for doc in cursor] + + # Convert labels to hex in Python and rename to 'function' + for doc in results: + if 'labels' in doc and doc['labels']: + # Convert to hex, trim trailing zeros, and ensure it's at least 4 characters (16 bits) + doc['function'] = [ + format(label, 'x').rstrip('0') or '0' # If all zeros were stripped, return '0' + for label in doc['labels'] + ] + + # Ensure each function value is at least 4 characters (16 bits) + doc['function'] = [ + f if len(f) >= 4 else f.zfill(4) + for f in doc['function'] + ] + + # Create the combined SID field + if 'srv6_sid' in doc and doc['srv6_sid'] and doc['function']: + # Get the base SRv6 SID + base_sid = doc['srv6_sid'] + # Remove trailing colons if present + if base_sid.endswith('::'): + base_sid = base_sid[:-2] + elif base_sid.endswith(':'): + base_sid = base_sid[:-1] + + # Create the combined SID for each function + doc['sid'] = [f"{base_sid}:{func}::" for func in doc['function']] + + # Get total count + aql_count = f""" + FOR doc IN {collection_name} + FILTER @route_target IN doc.base_attrs.ext_community_list + COLLECT AGGREGATE count = COUNT() + RETURN count + """ + + count_cursor = db.aql.execute(aql_count, bind_vars={'route_target': formatted_rt}) + total_count = [count for count in count_cursor][0] + + # Group by nexthop for summary + nexthop_summary = {} + for prefix in results: + nexthop = prefix['nexthop'] + if nexthop 
not in nexthop_summary: + nexthop_summary[nexthop] = 0 + nexthop_summary[nexthop] += 1 + + nexthop_list = [{"nexthop": nh, "prefix_count": count} for nh, count in nexthop_summary.items()] + + return { + 'collection': collection_name, + 'route_target': route_target, + 'total_prefixes': total_count, + 'advertising_pe_count': len(nexthop_summary), + 'advertising_pes': nexthop_list, + 'prefixes': results, + 'limit_applied': limit + } + + except Exception as e: + logger.error(f"Error in get_vpn_prefixes_by_rt: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/prefixes/search") +async def search_vpn_prefixes( + collection_name: str, + prefix: Optional[str] = None, + prefix_exact: Optional[bool] = False, + route_target: Optional[str] = None, + vpn_rd: Optional[str] = None, + limit: int = 100 +): + """ + Search for VPN prefixes with flexible filtering options. + + Parameters: + - prefix: Search for this prefix (can be partial match if prefix_exact=False) + - prefix_exact: If True, match the prefix exactly; if False, use prefix as a substring + - route_target: Filter by this route target + - vpn_rd: Filter by this VPN Route Distinguisher + - limit: Maximum number of results to return + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Build filter conditions based on provided parameters + filter_conditions = [] + bind_vars = {} + + if prefix: + bind_vars['prefix'] = prefix + if prefix_exact: + filter_conditions.append("doc.prefix == @prefix") + else: + filter_conditions.append("CONTAINS(doc.prefix, @prefix)") + + if route_target: + formatted_rt = f"rt={route_target}" + bind_vars['route_target'] = formatted_rt + filter_conditions.append("@route_target IN doc.base_attrs.ext_community_list") + + if vpn_rd: + bind_vars['vpn_rd'] = vpn_rd + filter_conditions.append("doc.vpn_rd == @vpn_rd") + + # If no filters provided, return an error + if not filter_conditions: + raise HTTPException( + status_code=400, + detail="At least one search parameter (prefix, route_target, or vpn_rd) must be provided" + ) + + # Combine filter conditions + filter_clause = " AND ".join(filter_conditions) + + # Get matching prefixes + aql = f""" + FOR doc IN {collection_name} + FILTER {filter_clause} + LIMIT {limit} + RETURN {{ + _key: doc._key, + prefix: doc.prefix, + prefix_len: doc.prefix_len, + vpn_rd: doc.vpn_rd, + nexthop: doc.nexthop, + labels: doc.labels, + peer_asn: doc.peer_asn, + route_targets: ( + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + RETURN SUBSTRING(rt, 3) + ), + srv6_sid: doc.prefix_sid.srv6_l3_service.sub_tlvs["1"][0].sid + }} + """ + + cursor = db.aql.execute(aql, bind_vars=bind_vars) + results = [doc for doc in cursor] + + # Convert labels to hex in Python and rename to 'function' + for doc in results: + if 'labels' in doc and doc['labels']: + # Convert to hex, trim trailing zeros, and ensure it's at least 4 characters (16 bits) + doc['function'] = [ + format(label, 'x').rstrip('0') or '0' # If all zeros were stripped, return '0' + for label in doc['labels'] + ] + + # Ensure each function value is at least 4 characters (16 bits) + doc['function'] = [ + f if len(f) >= 4 else 
f.zfill(4) + for f in doc['function'] + ] + + # Create the combined SID field + if 'srv6_sid' in doc and doc['srv6_sid'] and doc['function']: + # Get the base SRv6 SID + base_sid = doc['srv6_sid'] + # Remove trailing colons if present + if base_sid.endswith('::'): + base_sid = base_sid[:-2] + elif base_sid.endswith(':'): + base_sid = base_sid[:-1] + + # Create the combined SID for each function + doc['sid'] = [f"{base_sid}:{func}::" for func in doc['function']] + + # Get total count + aql_count = f""" + FOR doc IN {collection_name} + FILTER {filter_clause} + COLLECT AGGREGATE count = COUNT() + RETURN count + """ + + count_cursor = db.aql.execute(aql_count, bind_vars=bind_vars) + total_count = [count for count in count_cursor][0] + + # Build response with search criteria + search_criteria = {} + if prefix: + search_criteria['prefix'] = prefix + search_criteria['prefix_exact'] = prefix_exact + if route_target: + search_criteria['route_target'] = route_target + if vpn_rd: + search_criteria['vpn_rd'] = vpn_rd + + return { + 'collection': collection_name, + 'search_criteria': search_criteria, + 'total_prefixes': total_count, + 'prefixes': results, + 'limit_applied': limit + } + + except Exception as e: + logger.error(f"Error in search_vpn_prefixes: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +@router.get("/vpns/{collection_name}/prefixes/by-pe-rt") +async def get_vpn_prefixes_by_pe_rt( + collection_name: str, + pe_router: str, + route_target: str, + limit: int = 100 +): + """ + Get VPN prefixes that match both a specific PE router (nexthop) and route target. + + Parameters: + - pe_router: The PE router's nexthop address + - route_target: The route target to filter by + - limit: Maximum number of results to return + """ + try: + db = get_db() + if not db.has_collection(collection_name): + raise HTTPException( + status_code=404, + detail=f"Collection {collection_name} not found" + ) + + # Verify it's a VPN prefix collection + if collection_name not in VPN_COLLECTIONS['prefixes']: + raise HTTPException( + status_code=400, + detail=f"Collection {collection_name} is not a VPN prefix collection" + ) + + # Format the route target to match how it's stored + formatted_rt = f"rt={route_target}" + + # Get prefixes matching both PE router and route target + aql = f""" + FOR doc IN {collection_name} + FILTER doc.nexthop == @pe_router + FILTER @route_target IN doc.base_attrs.ext_community_list + LIMIT {limit} + RETURN {{ + _key: doc._key, + prefix: doc.prefix, + prefix_len: doc.prefix_len, + vpn_rd: doc.vpn_rd, + nexthop: doc.nexthop, + labels: doc.labels, + peer_asn: doc.peer_asn, + route_targets: ( + FOR rt IN doc.base_attrs.ext_community_list + FILTER STARTS_WITH(rt, 'rt=') + RETURN SUBSTRING(rt, 3) + ), + srv6_sid: doc.prefix_sid.srv6_l3_service.sub_tlvs["1"][0].sid + }} + """ + + cursor = db.aql.execute(aql, bind_vars={ + 'pe_router': pe_router, + 'route_target': formatted_rt + }) + results = [doc for doc in cursor] + + # Convert labels to hex in Python and rename to 'function' + for doc in results: + if 'labels' in doc and doc['labels']: + # Convert to hex, trim trailing zeros, and ensure it's at least 4 characters (16 bits) + doc['function'] = [ + format(label, 'x').rstrip('0') or '0' # If all zeros were stripped, return '0' + for label in doc['labels'] + ] + + # Ensure each function value is at least 4 characters (16 bits) + doc['function'] = [ + f if len(f) >= 4 else f.zfill(4) + for f in doc['function'] + ] + + # Create the combined SID field + if 'srv6_sid' in doc 
and doc['srv6_sid'] and doc['function']: + # Get the base SRv6 SID + base_sid = doc['srv6_sid'] + # Remove trailing colons if present + if base_sid.endswith('::'): + base_sid = base_sid[:-2] + elif base_sid.endswith(':'): + base_sid = base_sid[:-1] + + # Create the combined SID for each function + doc['sid'] = [f"{base_sid}:{func}::" for func in doc['function']] + + # Get total count + aql_count = f""" + FOR doc IN {collection_name} + FILTER doc.nexthop == @pe_router + FILTER @route_target IN doc.base_attrs.ext_community_list + COLLECT AGGREGATE count = COUNT() + RETURN count + """ + + count_cursor = db.aql.execute(aql_count, bind_vars={ + 'pe_router': pe_router, + 'route_target': formatted_rt + }) + total_count = [count for count in count_cursor][0] + + return { + 'collection': collection_name, + 'pe_router': pe_router, + 'route_target': route_target, + 'total_prefixes': total_count, + 'prefixes': results, + 'limit_applied': limit + } + + except Exception as e: + logger.error(f"Error in get_vpn_prefixes_by_pe_rt: {str(e)}") + raise HTTPException( + status_code=500, + detail=str(e) + ) + +# Add this at the bottom of the file +print("\nRegistered routes in vpns.py:") +for route in router.routes: + print(f" {route.methods} {route.path}") \ No newline at end of file diff --git a/api/v1/app/utils/load_processor.py b/api/v1/app/utils/load_processor.py new file mode 100644 index 00000000..4b2ab06b --- /dev/null +++ b/api/v1/app/utils/load_processor.py @@ -0,0 +1,96 @@ +from typing import List, Dict, Any + +def process_load_data( + path_data: List[Dict[Any, Any]], + collection_name: str, + db, + load_increment: int = 10 +) -> Dict: + """ + Process path data to update and calculate load metrics + + Args: + path_data: List of dictionaries containing path information + collection_name: Name of the graph collection + db: Database connection + load_increment: Amount to increment load by (default: 10) + + Returns: + Dictionary containing load processing results + """ + try: + # Update edge documents with load value + updated_edges = [] + highest_load = 0 + highest_load_edge = None + + for doc in path_data: + if doc.get('edge') and doc['edge'].get('_key'): + edge_key = doc['edge']['_key'] + # Get current edge document + edge_doc = db.collection(collection_name).get({'_key': edge_key}) + if edge_doc: + # Get current load value, default to 0 if it doesn't exist + current_load = edge_doc.get('load', 0) + new_load = current_load + load_increment + + # Track highest load + if new_load > highest_load: + highest_load = new_load + highest_load_edge = edge_key + + # Update with incremented load + db.collection(collection_name).update_match( + {'_key': edge_key}, + {'load': new_load} + ) + updated_edges.append(edge_key) + print(f"Load updated for edge: {edge_key}") + + # Calculate average load after updates + total_load = 0 + edge_count = 0 + updated_loads = [] + + for doc in path_data: + if doc.get('edge') and doc['edge'].get('_key'): + edge_key = doc['edge']['_key'] + edge_doc = db.collection(collection_name).get({'_key': edge_key}) + if edge_doc: + current_load = edge_doc.get('load', 0) + total_load += current_load + edge_count += 1 + updated_loads.append({ + 'edge_key': edge_key, + 'load': current_load + }) + + avg_load = total_load / edge_count if edge_count > 0 else 0 + print(f"Average load across path: {avg_load}") + + return { + 'updated_edges': updated_edges, + 'edge_loads': updated_loads, + 'average_load': avg_load, + 'total_load': total_load, + 'edge_count': edge_count, + 'highest_load': { + 
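+                # most heavily loaded edge on the path after this increment pass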
'edge_key': highest_load_edge, + 'load_value': highest_load + } + } + + except Exception as e: + print(f"Error processing load data: {str(e)}") + return { + 'error': str(e), + 'updated_edges': [], + 'edge_loads': [], + 'average_load': 0, + 'total_load': 0, + 'edge_count': 0, + 'highest_load': { + 'edge_key': None, + 'load_value': 0 + } + } \ No newline at end of file diff --git a/api/v1/app/utils/path_processor.py b/api/v1/app/utils/path_processor.py new file mode 100644 index 00000000..92bf0e3f --- /dev/null +++ b/api/v1/app/utils/path_processor.py @@ -0,0 +1,103 @@ +from math import ceil +from typing import List, Dict, Any +import json + +def process_path_data( + path_data: List[Dict[Any, Any]], + source: str, + destination: str, + usid_block: str = None, + algo: int = 0 +) -> Dict: + """ + Process shortest path data to extract SRv6 information + + Args: + path_data: List of path nodes with vertex/edge information + source: Source node identifier + destination: Destination node identifier + usid_block: Optional USID block prefix (e.g., 'fc00:0:', 'fc00:2:', 'fbbb:0:') + If None, will auto-detect from the first SID matching the algo + algo: Flex-Algo ID to filter SIDs (default: 0) + """ + try: + + # Calculate path metrics + hopcount = len(path_data) + print(f"Hopcount: {hopcount}, Algo: {algo}") + + # Extract SID locators filtered by algo + locators = [] + for node in path_data: + # print(f"Processing node: {json.dumps(node, indent=2)}") + # Check for vertex and sids in the vertex object + if 'vertex' in node and 'sids' in node['vertex']: + vertex_sids = node['vertex']['sids'] + if isinstance(vertex_sids, list) and len(vertex_sids) > 0: + # Filter SIDs by algo + matching_sid = None + for sid_entry in vertex_sids: + if isinstance(sid_entry, dict): + # Check if this SID matches the requested algo + if ('srv6_endpoint_behavior' in sid_entry and + 'algo' in sid_entry['srv6_endpoint_behavior'] and + sid_entry['srv6_endpoint_behavior']['algo'] == algo): + matching_sid = sid_entry.get('srv6_sid') + break + + # If we found a matching SID, add it to locators + if matching_sid: + locators.append(matching_sid) + # print(f"Added SID for algo {algo}: {matching_sid}") + + print(f"Collected locators for algo {algo}: {locators}") + + # Auto-detect USID block from first locator if not provided + if usid_block is None and len(locators) > 0: + # Extract the block from the first SID (everything up to and including the second colon) + first_sid = locators[0] + parts = first_sid.split(':') + if len(parts) >= 3: + # Reconstruct block as first two parts + trailing colon + usid_block = f"{parts[0]}:{parts[1]}:" + print(f"Auto-detected USID block: {usid_block}") + else: + # Fallback to default if format is unexpected + usid_block = 'fc00:0:' + print(f"Could not auto-detect USID block, using default: {usid_block}") + elif usid_block is None: + # No locators and no explicit block provided + usid_block = 'fc00:0:' + print(f"No locators found, using default USID block: {usid_block}") + + # Process USID information + usid = [] + for sid in locators: + if sid and usid_block in sid: + usid_list = sid.split(usid_block) + sid_value = usid_list[1] + usid_int = sid_value.split(':') + usid.append(usid_int[0]) + # print(f"Processed USID: {usid_int[0]}") + + # Build SRv6 USID carrier + sidlist = ":".join(usid) + "::" + srv6_sid = usid_block + sidlist + print(f"Final SRv6 SID for algo {algo}: {srv6_sid}") + + result = { + 'srv6_sid_list': locators, + 'srv6_usid': srv6_sid, + 'usid_block': usid_block, + 'algo': algo + } + # 
print(f"Returning result: {json.dumps(result, indent=2)}") + return result + + except Exception as e: + print(f"Error in path_processor: {str(e)}") + return { + 'error': str(e), + 'srv6_sid_list': [], + 'srv6_usid': '' + } \ No newline at end of file diff --git a/api/v1/requirements.txt b/api/v1/requirements.txt new file mode 100644 index 00000000..02b0d51f --- /dev/null +++ b/api/v1/requirements.txt @@ -0,0 +1,6 @@ +fastapi==0.104.1 +uvicorn==0.24.0 +python-arango==7.9.1 +python-dotenv==1.0.0 +pydantic==2.4.2 +pydantic-settings==2.0.3 \ No newline at end of file diff --git a/build/Dockerfile.api b/build/Dockerfile.api new file mode 100644 index 00000000..9b2b8cfc --- /dev/null +++ b/build/Dockerfile.api @@ -0,0 +1,18 @@ +# Use Python 3.9+ slim image +FROM python:3.9-slim + +# Set working directory +WORKDIR /app + +COPY requirements.txt . + +# Install dependencies +RUN pip install --no-cache-dir -r requirements.txt + +# Copy the rest of the application +COPY app/ app/ + +# Expose port +EXPOSE 8000 + +CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"] \ No newline at end of file diff --git a/deployment/api-deployment.yaml b/deployment/api-deployment.yaml new file mode 100644 index 00000000..6a133cd8 --- /dev/null +++ b/deployment/api-deployment.yaml @@ -0,0 +1,49 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: jalapeno-api + namespace: jalapeno +spec: + replicas: 1 + selector: + matchLabels: + app: jalapeno-api + template: + metadata: + labels: + app: jalapeno-api + spec: + containers: + - name: api + image: iejalapeno/jalapeno-api:latest + imagePullPolicy: Always + env: + - name: JALAPENO_DATABASE_SERVER + value: "http://arangodb:8529" + - name: JALAPENO_DATABASE_NAME + value: "jalapeno" + resources: + requests: + memory: "256Mi" + cpu: "200m" + limits: + memory: "512Mi" + cpu: "500m" + livenessProbe: + httpGet: + path: /health + port: 8000 + initialDelaySeconds: 30 + readinessProbe: + httpGet: + path: /health + port: 8000 + volumeMounts: + - name: credentials + mountPath: /credentials + ports: + - containerPort: 8000 + volumes: + - name: credentials + secret: + secretName: jalapeno \ No newline at end of file diff --git a/deployment/api-service.yaml b/deployment/api-service.yaml new file mode 100644 index 00000000..e78d0079 --- /dev/null +++ b/deployment/api-service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: jalapeno-api + namespace: jalapeno +spec: + type: NodePort + selector: + app: jalapeno-api + ports: + - port: 80 + targetPort: 8000 + nodePort: 30800 \ No newline at end of file diff --git a/deployment/igp-graph.yaml b/deployment/igp-graph.yaml index 26a77723..6e19d4b2 100755 --- a/deployment/igp-graph.yaml +++ b/deployment/igp-graph.yaml @@ -11,6 +11,23 @@ spec: labels: app: igp-graph spec: + initContainers: + - name: wait-for-data + image: curlimages/curl:8.5.0 + command: + - sh + - -c + - | + echo "Waiting 15 seconds for bmp-arango to start base collection processing..." + sleep 15 + echo "Waiting for base link-state collections to have data..." + until [ $(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/ls_node/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) -gt 0 ] 2>/dev/null; do + echo "ls_node collection empty, waiting..."; + sleep 5; + done + echo "Base link-state data available, waiting 10 more seconds for processing..." 
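+          # Settle delay (per the message above): give bmp-arango a little more
+          # time to finish writing the base link-state collections before the
+          # graph processor starts reading them.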
+ sleep 10 + echo "Starting IGP graph processor" containers: - args: - --v diff --git a/deployment/ip-graph.yaml b/deployment/ip-graph.yaml index 26a4575c..f648e76b 100755 --- a/deployment/ip-graph.yaml +++ b/deployment/ip-graph.yaml @@ -11,6 +11,135 @@ spec: labels: app: ip-graph spec: + initContainers: + - name: wait-for-data + image: curlimages/curl:8.5.0 + command: + - sh + - -c + - | + # Check if this is a restart (ip-graph collections already have data) or fresh install + bgp_node_count=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/bgp_node/count" 2>/dev/null | grep -o '"count":[0-9]*' | cut -d':' -f2) + bgp_node_count=${bgp_node_count:-0} + + if [ $bgp_node_count -gt 0 ]; then + echo "Detected pod restart (bgp_node has $bgp_node_count nodes)" + echo "Performing quick sanity checks..." + sleep 5 + echo "Starting IP graph processor" + exit 0 + fi + + echo "Detected fresh install - performing full stabilization checks..." + echo "Waiting 15 seconds for bmp-arango to start base collection processing..." + sleep 15 + + # First, wait for igp_node collection to stabilize (igp-graph must complete first) + echo "Waiting for igp_node collection to stabilize (igp-graph processing)..." + prev_igp=0 + stable_count=0 + while true; do + curr_igp=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/igp_node/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_igp=${curr_igp:-0} + + if [ $curr_igp -eq 0 ]; then + echo "igp_node collection still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_igp - prev_igp)) + echo "igp_node: $curr_igp nodes (delta: +$diff)" + + if [ $diff -eq 0 ] && [ $prev_igp -gt 0 ]; then + stable_count=$((stable_count + 1)) + echo "igp_node stable check $stable_count/2" + if [ $stable_count -ge 2 ]; then + echo "igp_node stabilized at $curr_igp nodes - igp-graph processing complete" + break; + fi + else + stable_count=0 + fi + + prev_igp=$curr_igp + sleep 5; + done + + # Wait for peer collection to have data + echo "Checking for BGP peer data..." + until [ $(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/peer/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) -gt 0 ] 2>/dev/null; do + echo "peer collection empty, waiting..."; + sleep 5; + done + echo "BGP peer data found" + + # Wait for unicast_prefix_v4 collection to stabilize + echo "Waiting for unicast_prefix_v4 collection to stabilize..." + prev_v4=0 + stable_v4=0 + while true; do + curr_v4=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/unicast_prefix_v4/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_v4=${curr_v4:-0} + + if [ $curr_v4 -eq 0 ]; then + echo "unicast_prefix_v4 still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_v4 - prev_v4)) + echo "unicast_prefix_v4: $curr_v4 prefixes (delta: +$diff)" + + if [ $diff -lt 100 ] && [ $prev_v4 -gt 0 ]; then + stable_v4=$((stable_v4 + 1)) + echo "unicast_prefix_v4 stable check $stable_v4/2" + if [ $stable_v4 -ge 2 ]; then + echo "unicast_prefix_v4 stabilized at $curr_v4 prefixes" + break; + fi + else + stable_v4=0 + fi + + prev_v4=$curr_v4 + sleep 5; + done + + # Wait for unicast_prefix_v6 collection to stabilize + echo "Waiting for unicast_prefix_v6 collection to stabilize..." 
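+          # Stabilization heuristic, mirroring the unicast_prefix_v4 loop above:
+          # poll the collection count every 5 seconds and consider it stable once
+          # the delta stays below 100 prefixes for two consecutive checks.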
+ prev_v6=0 + stable_v6=0 + while true; do + curr_v6=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/unicast_prefix_v6/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_v6=${curr_v6:-0} + + if [ $curr_v6 -eq 0 ]; then + echo "unicast_prefix_v6 still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_v6 - prev_v6)) + echo "unicast_prefix_v6: $curr_v6 prefixes (delta: +$diff)" + + if [ $diff -lt 100 ] && [ $prev_v6 -gt 0 ]; then + stable_v6=$((stable_v6 + 1)) + echo "unicast_prefix_v6 stable check $stable_v6/2" + if [ $stable_v6 -ge 2 ]; then + echo "unicast_prefix_v6 stabilized at $curr_v6 prefixes" + break; + fi + else + stable_v6=0 + fi + + prev_v6=$curr_v6 + sleep 5; + done + + echo "All IGP and BGP data stabilized" + echo "Starting IP graph processor" containers: - args: - --v diff --git a/docs/api/flex-algo.md b/docs/api/flex-algo.md new file mode 100644 index 00000000..dcd0806d --- /dev/null +++ b/docs/api/flex-algo.md @@ -0,0 +1,327 @@ +# Flex-Algo Implementation Summary + +## Overview +This document summarizes the Flex-Algo (Flexible Algorithm) support added to the Jalapeno API. Flex-Algo allows network operators to define multiple routing topologies (algorithms) within a single IGP domain, each optimized for different metrics (e.g., latency, bandwidth, sovereignty). + +## What is Flex-Algo? +Flex-Algo is an IGP extension that allows: +- Multiple algorithm IDs (0-255) to coexist in a single IGP domain +- Each algorithm can have different optimization objectives +- Nodes can participate in multiple algorithms +- Each algorithm has its own SRv6 SID space + +Common algorithm IDs: +- **Algo 0**: Default SPF (standard shortest path) +- **Algo 128**: Low latency optimization +- **Algo 129**: High bandwidth optimization +- **Algo 130+**: Custom optimization criteria + +## Implementation Details + +### 1. Data Structure Changes + +#### Vertex SID Structure +Each `igp_node` vertex now contains SIDs with algo information: +```json +{ + "_id": "igp_node/2_0_0_0000.0000.0001", + "name": "xrd01", + "router_id": "10.0.0.1", + "sids": [ + { + "srv6_sid": "fc00:0:1::", + "algo": 0, + "endpoint_behavior": 48, + "flag": 0 + }, + { + "srv6_sid": "fc00:1:1::", + "algo": 128, + "endpoint_behavior": 48, + "flag": 0 + } + ] +} +``` + +### 2. API Endpoints Modified + +#### A. New Algo-Aware Endpoints + +##### `/graphs/{collection_name}/vertices/algo` +Lists all vertices that participate in a specific Flex-Algo. + +**Example:** +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/algo?algo=128" +``` + +**Response:** +```json +{ + "graph_collection": "ipv6_graph", + "algo": 128, + "total_vertices": 12, + "vertex_collections": ["igp_node"], + "vertices_by_collection": { + "igp_node": [ + { + "_id": "igp_node/2_0_0_0000.0000.0001", + "name": "xrd01", + "router_id": "10.0.0.1", + "sids": [ + { + "srv6_sid": "fc00:1:1::", + "algo": 128, + "endpoint_behavior": 48, + "flag": 0 + } + ] + } + ] + } +} +``` + +##### `/graphs/{collection_name}/topology/algo` +Returns topology visualization filtered by algo participation. + +**Example:** +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/topology/algo?algo=128" +``` + +#### B. Updated Shortest Path Endpoints + +All shortest path endpoints now support the `algo` parameter: + +1. **Basic Shortest Path** + - Endpoint: `/graphs/{collection_name}/shortest_path` + - New parameter: `algo` (optional, default: 0) + +2. 
**Latency-Optimized Path** + - Endpoint: `/graphs/{collection_name}/shortest_path/latency` + - New parameter: `algo` (optional, default: 0) + +3. **Utilization-Optimized Path** + - Endpoint: `/graphs/{collection_name}/shortest_path/utilization` + - New parameter: `algo` (optional, default: 0) + +4. **Load-Balanced Path** + - Endpoint: `/graphs/{collection_name}/shortest_path/load` + - New parameter: `algo` (optional, default: 0) + +5. **Sovereignty-Constrained Path** + - Endpoint: `/graphs/{collection_name}/shortest_path/sovereignty` + - New parameter: `algo` (optional, default: 0) + +6. **Best Paths (K-Shortest)** + - Endpoint: `/graphs/{collection_name}/shortest_path/best-paths` + - New parameter: `algo` (optional, default: 0) + +7. **Next Best Paths** + - Endpoint: `/graphs/{collection_name}/shortest_path/next-best-path` + - New parameter: `algo` (optional, default: 0) + +**Example:** +```bash +# Default algo (0) +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound" + +# With Flex-Algo 128 +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound&algo=128" +``` + +#### C. RPO (Resource Path Optimization) Endpoints + +Both RPO endpoints now support Flex-Algo: + +1. **Select Optimal Endpoint** + - Endpoint: `/rpo/{collection_name}/select-optimal` + - New parameter: `algo` (optional, default: 0) + +2. **Select from List** + - Endpoint: `/rpo/{collection_name}/select-from-list` + - New parameter: `algo` (optional, default: 0) + +**Example:** +```bash +# Select endpoint with lowest GPU utilization using Flex-Algo 128 +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph&algo=128" +``` + +### 3. Path Processing Changes + +#### `path_processor.py` +Updated to support dynamic USID block detection and algo-aware SID selection: + +**Key Changes:** +- Removed hardcoded `usid_block` default +- Auto-detects USID block from first SID in path +- Selects SIDs matching the specified algo from each node's SID array +- Falls back to algo 0 if specified algo not found + +**Example:** +```python +# Automatically detects fc00:0: or fc00:1: or any other block +srv6_data = process_path_data( + path_data=results[0]['path'], + source=source, + destination=destination, + algo=128 # Will select SIDs with algo=128 +) +``` + +### 4. AQL Query Logic + +#### Algo Filtering Strategy +Since ArangoDB's `SHORTEST_PATH` doesn't support inline filtering, we use `K_SHORTEST_PATHS` with post-filtering: + +```aql +FOR p IN OUTBOUND K_SHORTEST_PATHS '{source}' TO '{destination}' {collection_name} + OPTIONS {{bfs: true}} + + // Filter to ensure all igp_nodes in path participate in the algo + FILTER ( + FOR v IN p.vertices + FILTER v._id LIKE 'igp_node/%' + FILTER {algo} IN v.sids[*].algo + COLLECT WITH COUNT INTO nodeCount + RETURN nodeCount + )[0] == LENGTH( + FOR v IN p.vertices + FILTER v._id LIKE 'igp_node/%' + COLLECT WITH COUNT INTO nodeCount + RETURN nodeCount + )[0] + + LIMIT 1 + RETURN {{ + path: p.vertices[*], + edges: p.edges[*], + hopcount: LENGTH(p.vertices) - 1 + }} +``` + +**Logic Explanation:** +1. Find multiple shortest paths using `K_SHORTEST_PATHS` +2. For each path, verify that all `igp_node` vertices have the specified algo in their SID array +3. Return the first path that meets the criteria +4. 
If no path found with specified algo, return error + +### 5. Response Format + +All algo-aware endpoints include the algo in their response: + +```json +{ + "found": true, + "algo": 128, + "path": [...], + "hopcount": 7, + "srv6_data": { + "srv6_sid_list": [ + "fc00:1:1::", + "fc00:1:3::", + "fc00:1:7::", + "fc00:1:18::" + ], + "srv6_usid": "fc00:1:1:3:7:18::" + } +} +``` + +Note: The USID block automatically adapts based on the algo (e.g., `fc00:1:` for algo 128). + +## Testing Scenarios + +### Scenario 1: Verify Algo Participation +```bash +# List all nodes in algo 128 +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/algo?algo=128" + +# List all nodes in algo 0 (default) +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/algo?algo=0" +``` + +### Scenario 2: Compare Paths Across Algos +```bash +# Path using default algo +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound" + +# Path using algo 128 (may take different route) +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound&algo=128" +``` + +### Scenario 3: Verify Alternate Path When Node Removed +```bash +# Remove node from algo 128 in the database +# Then verify path finds alternate route: +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0000.0001&destination=igp_node/2_0_0_0000.0000.0018&direction=outbound&algo=128" +``` + +### Scenario 4: RPO with Flex-Algo +```bash +# Select optimal endpoint using low-latency algo +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph&algo=128" +``` + +## Error Handling + +### No Path Found for Specified Algo +If no path exists using the specified algo (e.g., source or destination doesn't participate in that algo): + +```json +{ + "detail": "No path found using algo 128 between specified nodes. Nodes may not participate in this algo." +} +``` + +### Invalid Algo Parameter +If algo parameter is not a valid integer: + +```json +{ + "detail": "Invalid algo parameter. Must be an integer between 0 and 255." +} +``` + +## Benefits + +1. **Flexibility**: Support multiple routing topologies in a single network +2. **Optimization**: Different paths for different traffic types (latency-sensitive, bandwidth-intensive, etc.) +3. **Sovereignty**: Combine with sovereignty constraints for geo-aware routing +4. **Automation**: SRv6 USID generation automatically adapts to the selected algo +5. **Compatibility**: Default behavior (algo 0) maintains backward compatibility + +## Future Enhancements + +Potential future improvements: +1. Support for algo preferences (try algo X, fallback to algo Y) +2. Algo-specific metrics (e.g., latency only for algo 128 paths) +3. Multi-algo path comparison in a single API call +4. Algo validation endpoint (verify node participation before path calculation) +5. Dynamic algo discovery from IGP data + +## Code Files Modified + +1. **`app/routes/graphs.py`** + - Added `algo` parameter to all shortest path endpoints + - Added `/vertices/algo` endpoint + - Added `/topology/algo` endpoint + - Updated AQL queries to filter by algo participation + +2. **`app/routes/rpo.py`** + - Added `algo` parameter to both RPO endpoints + - Pass algo to `get_shortest_path` calls + +3. 
**`app/utils/path_processor.py`** + - Removed hardcoded USID block + - Added auto-detection of USID block + - Added algo-aware SID selection + - Updated to handle multiple SIDs per node + +## Conclusion + +The Flex-Algo implementation provides comprehensive support for multi-topology routing in SRv6 networks. All shortest path and RPO endpoints now support algo-aware path computation, with automatic SRv6 USID generation based on the selected algorithm. The implementation maintains backward compatibility while enabling advanced traffic engineering capabilities. + diff --git a/docs/api/reference.md b/docs/api/reference.md new file mode 100644 index 00000000..cd5ee86e --- /dev/null +++ b/docs/api/reference.md @@ -0,0 +1,371 @@ +# Jalapeno API Reference + +Complete reference for the Jalapeno REST API. + +## Base URL + +``` +http://localhost:8000/api/v1 +``` + +--- + +## Collections + +### Get all collections +```bash +curl http://localhost:8000/api/v1/collections +``` + +### Get only graph collections +```bash +curl http://localhost:8000/api/v1/collections?filter_graphs=true +``` + +### Get only non-graph collections +```bash +curl http://localhost:8000/api/v1/collections?filter_graphs=false +``` + +### Get data from any collection +```bash +curl "http://localhost:8000/api/v1/collection/ls_node" +curl "http://localhost:8000/api/v1/collection/ls_link" +curl "http://localhost:8000/api/v1/collection/ls_prefix" +curl "http://localhost:8000/api/v1/collection/ls_srv6_sid" +curl "http://localhost:8000/api/v1/collection/bgp_node" +curl "http://localhost:8000/api/v1/collection/igp_node" +curl "http://localhost:8000/api/v1/collection/bgp_prefix_v4" +curl "http://localhost:8000/api/v1/collection/bgp_prefix_v6" +``` + +### Get data with limits +```bash +curl "http://localhost:8000/api/v1/collection/bgp_node?limit=10" +curl "http://localhost:8000/api/v1/collection/igp_node?limit=10" +``` + +### Get data with a specific key +```bash +curl "http://localhost:8000/api/v1/collection/bgp_node?filter_key=some_key" +``` + +### Get just the keys from a collection +```bash +curl "http://localhost:8000/api/v1/collection/peer/keys" +``` + +--- + +## Search + +### Search by ASN only +```bash +curl "http://localhost:8000/api/v1/collection/igp_node/search?asn=65001" +``` + +### Search by protocol only +```bash +curl "http://localhost:8000/api/v1/collection/igp_node/search?protocol=IS-IS%20Level%202" +``` + +### Search with multiple filters +```bash +curl "http://localhost:8000/api/v1/collection/igp_node/search?asn=65001&srv6_enabled=true" +``` + +--- + +## Graphs + +### Get specific graph data +```bash +curl http://localhost:8000/api/v1/collections/igpv4_graph +curl http://localhost:8000/api/v1/collections/igpv6_graph +curl http://localhost:8000/api/v1/collections/ipv4_graph +curl http://localhost:8000/api/v1/collections/ipv6_graph +``` + +### Get graph info +```bash +curl http://localhost:8000/api/v1/collections/igpv4_graph/info +curl http://localhost:8000/api/v1/collections/igpv6_graph/info +``` + +### Get graph edges +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/edges +curl http://localhost:8000/api/v1/graphs/ipv4_graph/edges +curl http://localhost:8000/api/v1/graphs/igpv6_graph/edges +curl http://localhost:8000/api/v1/graphs/igpv4_graph/edges +``` + +### Get graph vertices +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices +curl http://localhost:8000/api/v1/graphs/ipv4_graph/vertices +curl http://localhost:8000/api/v1/graphs/igpv6_graph/vertices +curl 
http://localhost:8000/api/v1/graphs/igpv4_graph/vertices +``` + +### Get vertex keys +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/keys +curl http://localhost:8000/api/v1/graphs/ipv4_graph/vertices/keys +``` + +### Get vertex IDs +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/ids +curl http://localhost:8000/api/v1/graphs/ipv4_graph/vertices/ids +``` + +### Get vertices by algorithm (Flex-Algo) +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/algo?algo=128" +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/algo?algo=129" +``` + +--- + +## Topology + +### Get full topology +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/topology +curl http://localhost:8000/api/v1/graphs/ipv6_graph/topology?limit=50 +``` + +### Get node-to-node connections +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/topology/nodes +curl http://localhost:8000/api/v1/graphs/ipv6_graph/topology/nodes?limit=50 +``` + +### Get topology per algorithm +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/topology/nodes/algo?algo=128" +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/topology/nodes/algo?algo=129" +``` + +### Get vertex summary +```bash +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/summary +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/summary?limit=25 +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/summary?vertex_collection=igp_node +curl http://localhost:8000/api/v1/graphs/ipv6_graph/vertices/summary?vertex_collection=igp_node&limit=10 +``` + +--- + +## Shortest Path + +### Basic shortest path +```bash +# Outbound (default) +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0001.0065&destination=igp_node/2_0_0_0000.0002.0067" + +# Inbound +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0001.0065&destination=igp_node/2_0_0_0000.0002.0067&direction=inbound" + +# Any direction +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=igp_node/2_0_0_0000.0001.0065&destination=igp_node/2_0_0_0000.0002.0067&direction=any" +``` + +### Shortest path with Flex-Algo +```bash +# Algo 0 (default SPF) +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path?source=bgp_prefix_v4/10.10.46.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=0" + +# Algo 128 (low latency) +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path?source=bgp_prefix_v4/10.10.46.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=128" + +# Algo 129 (high bandwidth) +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path?source=bgp_prefix_v4/10.10.46.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=129" +``` + +### Prefix to prefix path +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path?source=ls_prefix/2_0_2_0_0_fc00:0:701:1::_64_0000.0001.0065&destination=ls_prefix/2_0_2_0_0_fc00:0:701:1003::_64_0000.0002.0067&direction=any" +``` + +--- + +## Optimized Paths + +### Latency-weighted shortest path +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/latency?source=gpus/host08-gpu02&destination=gpus/host12-gpu02" +``` + +### Utilization-optimized path +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/utilization?source=gpus/host08-gpu02&destination=gpus/host12-gpu02&direction=outbound" +``` + +### Load-balanced path +```bash +curl 
"http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/load?source=gpus/host01-gpu02&destination=gpus/host12-gpu02&direction=any" +``` + +### Load-balanced path with Flex-Algo +```bash +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/load?source=bgp_prefix_v4/10.10.46.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=128" +``` + +### Sovereignty-constrained path +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/sovereignty?source=hosts/berlin-k8s&destination=hosts/rome&excluded_countries=FRA&direction=outbound" +``` + +### Sovereignty with Flex-Algo +```bash +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/sovereignty?source=bgp_prefix_v4/10.10.46.0_24&destination=bgp_prefix_v4/10.17.1.0_24&excluded_countries=FRA&direction=any&algo=0" +``` + +--- + +## K-Shortest Paths + +### Best paths (K-shortest) +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/best-paths?source=hosts/amsterdam&destination=hosts/rome&direction=outbound" +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/best-paths?source=hosts/amsterdam&destination=hosts/rome&direction=outbound&limit=6" +``` + +### Best paths with Flex-Algo +```bash +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/best-paths?source=bgp_prefix_v4/10.17.1.0_24&destination=bgp_prefix_v4/96.1.0.0_24&limit=5&algo=130" +``` + +### Next best path +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/shortest_path/next-best-path?source=hosts/berlin-k8s&destination=hosts/rome&direction=outbound" +``` + +### Next best path with Flex-Algo +```bash +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/next-best-path?source=bgp_prefix_v4/10.17.1.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=0" + +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/next-best-path?source=bgp_prefix_v4/10.17.1.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&same_hop_limit=2&plus_one_limit=5&algo=0" + +curl "http://localhost:8000/api/v1/graphs/ipv4_graph/shortest_path/next-best-path?source=bgp_prefix_v4/10.17.1.0_24&destination=bgp_prefix_v4/96.1.0.0_24&direction=any&algo=128" +``` + +--- + +## Graph Traversal + +### Simple traverse +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/traverse/simple?source=ls_prefix/2_0_2_0_0_fc00:0:701:1::_64_0000.0001.0065&destination=igp_node/2_0_0_0000.0002.0067" + +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/traverse/simple?start_node=ls_prefix/2_0_2_0_0_fc00:0:701:1::_64_0000.0001.0065&target_node=ls_prefix/2_0_2_0_0_fc00:0:701:1003::_64_0000.0002.0067&max_depth=6" + +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/traverse/simple?source=ls_prefix/2_0_2_0_0_fc00:0:701:1::_64_0000.0001.0065&max_depth=5" +``` + +### Complex traverse +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/traverse?source=igp_node/2_0_0_0000.0001.0065&max_depth=3" + +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/traverse?source=ls_prefix/2_0_2_0_0_fc00:0:701:1::_64_0000.0001.0065&destination=igp_node/2_0_0_0000.0002.0067&max_depth=5&direction=any" +``` + +--- + +## Neighbors + +### Get immediate neighbors +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/neighbors?node=igp_node/2_0_0_0000.0001.0001" +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/neighbors?source=igp_node/2_0_0_0000.0001.0065" +``` + +### Get neighbors with specific direction +```bash +curl 
"http://localhost:8000/api/v1/graphs/ipv6_graph/neighbors?source=igp_node/2_0_0_0000.0001.0065&direction=any" +``` + +### Get neighbors with greater depth +```bash +curl "http://localhost:8000/api/v1/graphs/ipv6_graph/neighbors?source=igp_node/2_0_0_0000.0001.0065&depth=2" +``` + +--- + +## Edge Operations + +### Reset load on all edges (with AQL) +```aql +FOR edge IN ipv6_graph + UPDATE edge WITH { load: 0 } IN ipv6_graph +``` + +### Reset load on all edges (with curl) +```bash +curl -X POST "http://localhost:8000/api/v1/graphs/ipv6_graph/edges" \ + -H "Content-Type: application/json" \ + -d '{"attribute": "load", "value": 0}' +``` + +--- + +## Resource Path Optimization (RPO) + +For detailed RPO examples, see [RPO API Documentation](rpo.md). + +### Basic RPO endpoint selection +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph" +``` + +### RPO with Flex-Algo +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph&algo=128" +``` + +--- + +## API Features + +### Common Query Parameters + +- **direction**: Path direction + - `outbound` (default): Follow outbound edges + - `inbound`: Follow inbound edges + - `any`: Follow edges in any direction + +- **algo**: Flex-Algorithm ID (default: 0) + - `0`: Default SPF + - `128`: Typically low latency + - `129`: Typically high bandwidth + - `130+`: Custom algorithms + +- **limit**: Limit number of results returned + +### Response Features + +- **SRv6 USID Generation**: Automatically generates SRv6 Micro-SID lists for paths +- **Hop Count**: Returns number of hops in the path +- **Full Path Data**: Returns complete vertex and edge information for paths +- **Algo-Aware**: SRv6 SIDs automatically selected based on specified algorithm + +--- + +## Additional Documentation + +- [Flex-Algo Implementation](flex-algo.md) - Detailed Flex-Algo support documentation +- [RPO API](rpo.md) - Complete Resource Path Optimization examples +- [ArangoDB Queries](../arango/api-queries.md) - Example AQL queries used by the API + +--- + +## Interactive Documentation + +The API provides interactive Swagger/OpenAPI documentation: + +``` +http://localhost:8000/docs +``` + diff --git a/docs/api/rpo.md b/docs/api/rpo.md new file mode 100644 index 00000000..66af465b --- /dev/null +++ b/docs/api/rpo.md @@ -0,0 +1,262 @@ +# Resource Path Optimization (RPO) API + +## Overview +The Resource Path Optimization (RPO) API provides intelligent destination selection based on metrics, combined with shortest path calculation and SRv6 USID generation. + +## Base URL +``` +http://localhost:8000/api/v1/rpo +``` + +--- + +## 1. Discovery and Information + +### Get RPO capabilities and available graphs +```bash +curl "http://localhost:8000/api/v1/rpo" +``` + +**Response includes:** +- Supported metrics and optimization strategies +- Available graph collections for path finding +- API description and usage notes + +--- + +## 2. Collection Management + +### List all endpoints in a collection +```bash +curl "http://localhost:8000/api/v1/rpo/hosts" +``` + +### List endpoints with limit +```bash +curl "http://localhost:8000/api/v1/rpo/hosts?limit=5" +``` + +--- + +## 3. 
Optimal Endpoint Selection (from all endpoints) + +### Select endpoint with lowest GPU utilization +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph" +``` + +### Select endpoint with lowest CPU utilization +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=cpu_utilization&graphs=ipv6_graph" +``` + +### Select endpoint with lowest time to first token +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=time_to_first_token&graphs=ipv6_graph" +``` + +### Select endpoint with lowest cost per million tokens +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=cost_per_million_tokens&graphs=ipv6_graph" +``` + +### Select endpoint with specific GPU model (exact match) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_model&value=GB300&graphs=ipv6_graph" +``` + +### Select endpoint with specific language model (exact match) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=language_model&value=llama-3-70b&graphs=ipv6_graph" +``` + +--- + +## 4. Selection from Specific List + +### Select from specific destinations (lowest GPU utilization) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv6_graph" +``` + +### Select from specific destinations (lowest cost) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s,hosts/london&metric=cost_per_million_tokens&graphs=ipv6_graph" +``` + +### Select from specific destinations (lowest response time) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=response_time&graphs=ipv6_graph" +``` + +### Select from specific destinations (highest capacity) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=available_capacity&graphs=ipv6_graph" +``` + +--- + +## 5. Different Graph Collections + +### Using IPv4 graph +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv4_graph" +``` + +### Using IPv6 graph +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv6_graph" +``` + +### Using IGPv4 graph +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=igpv4_graph" +``` + +### Using fabric graph +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=fabric_graph" +``` + +--- + +## 6. 
Different Direction Options + +### Outbound direction (default) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv6_graph&direction=outbound" +``` + +### Inbound direction +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv6_graph&direction=inbound" +``` + +--- + +## 7. Flex-Algo Support + +### Select endpoint with default algo (algo 0) +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=gpu_utilization&graphs=ipv6_graph" +``` + +### Select endpoint using Flex-Algo 128 +```bash +curl "http://localhost:8000/api/v1/rpo/bgp_prefix_v4/select-optimal?source=bgp_prefix_v4/10.17.1.0_24&metric=gpu_utilization&graphs=ipv4_graph&algo=128" | jq +``` + +### Select endpoint using Flex-Algo 129 +```bash +curl "http://localhost:8000/api/v1/rpo/bgp_prefix_v4/select-optimal?source=bgp_prefix_v4/10.17.1.0_24&metric=gpu_utilization&graphs=ipv4_graph&algo=129" | jq +``` + +### Select from list with Flex-Algo 128 +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_utilization&graphs=ipv6_graph&algo=128" +``` + +### Low latency path with Flex-Algo 128 +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=time_to_first_token&graphs=ipv6_graph&algo=128" +``` + +### Cost optimization with specific algo +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s,hosts/london&metric=cost_per_million_tokens&graphs=ipv6_graph&algo=129" +``` + +--- + +## 8. 
Complex Scenarios + +### Multi-destination selection with cost optimization +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s,hosts/london,hosts/paris&metric=cost_per_hour&graphs=ipv6_graph" +``` + +### Memory utilization optimization +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-optimal?source=hosts/rome&metric=memory_utilization&graphs=ipv6_graph" +``` + +### Specific model requirement with fallback optimization +```bash +curl "http://localhost:8000/api/v1/rpo/hosts/select-from-list?source=hosts/rome&destinations=hosts/amsterdam,hosts/berlin-k8s&metric=gpu_model&value=V100&graphs=ipv6_graph" +``` + +--- + +## Response Format + +All endpoints return comprehensive information including: + +```json +{ + "collection": "hosts", + "source": "hosts/rome", + "selected_endpoint": { + "_id": "hosts/amsterdam", + "name": "amsterdam", + "gpu_utilization": 0.3, + "cost_per_million_tokens": 4, + "time_to_first_token": 2 + }, + "optimization_metric": "gpu_utilization", + "metric_value": 0.3, + "optimization_strategy": "minimize", + "total_candidates": 2, + "valid_endpoints_count": 2, + "path_result": { + "found": true, + "path": [...], + "hopcount": 7, + "srv6_data": { + "srv6_sid_list": ["fc00:0:7777::", "fc00:0:6666::", "fc00:0:2222::", "fc00:0:1111::"], + "srv6_usid": "fc00:0:7777:6666:2222:1111::" + } + }, + "summary": { + "destination": "hosts/amsterdam", + "destination_name": "amsterdam", + "path_found": true, + "hop_count": 7 + } +} +``` + +--- + +## Supported Metrics + +| Metric | Type | Optimization Strategy | Description | +|--------|------|----------------------|-------------| +| `cpu_utilization` | numeric | minimize | CPU usage percentage | +| `gpu_utilization` | numeric | minimize | GPU usage percentage | +| `memory_utilization` | numeric | minimize | Memory usage percentage | +| `time_to_first_token` | numeric | minimize | Response time in seconds | +| `cost_per_million_tokens` | numeric | minimize | Cost per million tokens | +| `cost_per_hour` | numeric | minimize | Hourly cost | +| `gpu_model` | string | exact_match | Specific GPU model required | +| `language_model` | string | exact_match | Specific language model required | +| `available_capacity` | numeric | maximize | Available processing capacity | +| `response_time` | numeric | minimize | General response time | + +--- + +## Notes + +- **Required Parameters**: `source`, `metric`, `graphs` +- **Optional Parameters**: + - `value` (required for exact_match metrics) + - `direction` (default: outbound) + - `algo` (Flex-Algo ID, default: 0) +- **Graph Collections**: Use the discovery endpoint to see available graphs +- **Flex-Algo**: Specify `algo` parameter to use specific Flex-Algo for path finding + - Default is algo 0 (standard SPF) + - Common values: 128 (low latency), 129 (high bandwidth), etc. + - Path will only traverse nodes that participate in the specified algo + - SRv6 SIDs are automatically selected based on the specified algo +- **SRv6 USID**: Generated automatically for path execution based on algo +- **Error Handling**: Comprehensive error messages for invalid parameters or missing data + diff --git a/docs/arango/api-queries.md b/docs/arango/api-queries.md new file mode 100644 index 00000000..a0ee1d74 --- /dev/null +++ b/docs/arango/api-queries.md @@ -0,0 +1,279 @@ +# ArangoDB Query Examples for API + +This document contains example AQL queries used by the Jalapeno API. 
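+
+The API executes these queries through the python-arango driver with explicit bind variables. Below is a minimal sketch of that execution pattern; the connection values are the local-development defaults documented in the API README, and the `hosts` collection and metric are illustrative:
+
+```python
+from arango import ArangoClient
+
+# Connect with local-dev defaults; adjust host and credentials for your deployment
+client = ArangoClient(hosts="http://localhost:8529")
+db = client.db("jalapeno", username="root", password="jalapeno")
+
+# "@collection" supplies the @@collection bind; plain names supply @value-style binds
+query = """
+FOR doc IN @@collection
+    FILTER doc.gpu_utilization != null
+    SORT doc.gpu_utilization ASC
+    LIMIT 1
+    RETURN doc
+"""
+cursor = db.aql.execute(query, bind_vars={"@collection": "hosts"})
+for doc in cursor:
+    print(doc)
+```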
+
+## Basic Collection Queries
+
+### Query a specific document by key
+
+```aql
+FOR doc IN hosts
+    FILTER doc._key == "amsterdam"
+    RETURN {
+        _id: doc._id,
+        _key: doc._key,
+        cpu_utilization: doc.cpu_utilization,
+        gpu_utilization: doc.gpu_utilization,
+        memory_utilization: doc.memory_utilization,
+        time_to_first_token: doc.time_to_first_token,
+        cost_per_million_tokens: doc.cost_per_million_tokens,
+        cost_per_hour: doc.cost_per_hour,
+        gpu_model: doc.gpu_model,
+        language_model: doc.language_model,
+        available_capacity: doc.available_capacity,
+        response_time: doc.response_time
+    }
+```
+
+### Find document with lowest metric value
+
+```aql
+FOR doc IN hosts
+    FILTER doc.gpu_utilization != null
+    SORT doc.gpu_utilization ASC
+    LIMIT 1
+    RETURN {
+        _id: doc._id,
+        _key: doc._key,
+        name: doc.name,
+        gpu_utilization: doc.gpu_utilization
+    }
+```
+
+## Resource Path Optimization Queries
+
+### Find optimal endpoint based on metric
+
+```aql
+FOR doc IN @@collection
+    FILTER doc.@metric != null
+    SORT doc.@metric ASC
+    LIMIT 1
+    RETURN doc
+```
+
+### Find endpoints matching specific value
+
+```aql
+FOR doc IN @@collection
+    FILTER doc.@metric == @value
+    RETURN doc
+```
+
+## Graph Traversal Queries
+
+### Shortest path query
+
+```aql
+FOR v, e IN OUTBOUND SHORTEST_PATH @source TO @destination @@graph
+    RETURN {
+        vertices: v,
+        edges: e
+    }
+```
+
+### K-shortest paths query
+
+```aql
+FOR p IN OUTBOUND K_SHORTEST_PATHS @source TO @destination @@graph
+    OPTIONS {bfs: true}
+    LIMIT @limit
+    RETURN {
+        path: p.vertices[*],
+        edges: p.edges[*],
+        hopcount: LENGTH(p.vertices) - 1
+    }
+```
+
+### Flex-Algo aware shortest path
+
+```aql
+FOR p IN OUTBOUND K_SHORTEST_PATHS @source TO @destination @@graph
+    OPTIONS {bfs: true}
+
+    // Filter to ensure all igp_nodes in path participate in the algo
+    FILTER (
+        FOR v IN p.vertices
+            FILTER v._id LIKE 'igp_node/%'
+            FILTER @algo IN v.sids[*].algo
+            COLLECT WITH COUNT INTO nodeCount
+            RETURN nodeCount
+    )[0] == LENGTH(
+        FOR v IN p.vertices
+            FILTER v._id LIKE 'igp_node/%'
+            COLLECT WITH COUNT INTO nodeCount
+            RETURN nodeCount
+    )[0]
+
+    LIMIT 1
+    RETURN {
+        path: p.vertices[*],
+        edges: p.edges[*],
+        hopcount: LENGTH(p.vertices) - 1
+    }
+```
+
+### Neighbors query
+
+```aql
+FOR v, e IN 1..@depth OUTBOUND @source @@graph
+    RETURN DISTINCT v
+```
+
+### Traverse with depth limit
+
+```aql
+FOR v, e, p IN 1..@max_depth OUTBOUND @source @@graph
+    FILTER v._id == @destination
+    RETURN {
+        path: p.vertices[*],
+        edges: p.edges[*]
+    }
+```
+
+## Topology Queries
+
+### Get all graph edges
+
+```aql
+FOR edge IN @@graph
+    RETURN edge
+```
+
+### Get all vertices from graph
+
+```aql
+FOR v IN 1..1 ANY @start_vertex @@graph
+    RETURN DISTINCT v
+```
+
+### Get vertices by algorithm
+
+```aql
+FOR collection_name IN @vertex_collections
+    FOR vertex IN @@db[collection_name]
+        FILTER @algo IN vertex.sids[*].algo
+        RETURN vertex
+```
+
+## Path Optimization Queries
+
+### Latency-weighted path
+
+```aql
+FOR v, e IN OUTBOUND SHORTEST_PATH @source TO @destination @@graph
+    OPTIONS {
+        weightAttribute: 'latency',
+        defaultWeight: 1
+    }
+    RETURN {
+        vertices: v,
+        edges: e
+    }
+```
+
+### Utilization-weighted path
+
+```aql
+FOR v, e IN OUTBOUND SHORTEST_PATH @source TO @destination @@graph
+    OPTIONS {
+        weightAttribute: 'percent_util_out',
+        defaultWeight: 1
+    }
+    RETURN {
+        vertices: v,
+        edges: e
+    }
+```
+
+### Sovereignty-constrained path
+
+```aql
+FOR v, e IN OUTBOUND SHORTEST_PATH @source TO @destination @@graph
+    PRUNE v.country IN
@excluded_countries + FILTER v.country NOT IN @excluded_countries + RETURN { + vertices: v, + edges: e + } +``` + +## Bulk Operations + +### Update edge attribute + +```aql +FOR edge IN @@graph + UPDATE edge WITH { @attribute: @value } IN @@graph + RETURN NEW +``` + +### Reset all edge loads + +```aql +FOR edge IN @@graph + UPDATE edge WITH { load: 0 } IN @@graph +``` + +## Search Queries + +### Search by multiple criteria + +```aql +FOR doc IN @@collection + FILTER doc.asn == @asn + FILTER doc.protocol == @protocol + FILTER doc.srv6_enabled == @srv6_enabled + RETURN doc +``` + +### Get collection keys only + +```aql +FOR doc IN @@collection + RETURN doc._key +``` + +### Get collection IDs only + +```aql +FOR doc IN @@collection + RETURN doc._id +``` + +## Summary and Statistics + +### Vertex summary by collection + +```aql +FOR vertex_coll IN @vertex_collections + LET count = LENGTH(@@db[vertex_coll]) + RETURN { + collection: vertex_coll, + count: count + } +``` + +### Get vertices with sample data + +```aql +FOR vertex_coll IN @vertex_collections + LET sample = ( + FOR v IN @@db[vertex_coll] + LIMIT @limit + RETURN v + ) + RETURN { + collection: vertex_coll, + count: LENGTH(@@db[vertex_coll]), + sample: sample + } +``` + +## Notes + +- Bind parameters are prefixed with `@` (e.g., `@source`, `@destination`) +- Collection binds use `@@` (e.g., `@@collection`, `@@graph`) +- All queries should use parameterized inputs to prevent AQL injection +- The API automatically handles parameter binding and escaping + diff --git a/igp-graph/arangodb/arango-conn.go b/igp-graph/arangodb/arango-conn.go index 2d6141b4..6821c17f 100644 --- a/igp-graph/arangodb/arango-conn.go +++ b/igp-graph/arangodb/arango-conn.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/arangodb.go b/igp-graph/arangodb/arangodb.go index 34f177ef..d3b347c8 100644 --- a/igp-graph/arangodb/arangodb.go +++ b/igp-graph/arangodb/arangodb.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. 
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/batch-processor.go b/igp-graph/arangodb/batch-processor.go index 16448ed9..6eca8697 100644 --- a/igp-graph/arangodb/batch-processor.go +++ b/igp-graph/arangodb/batch-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/bmp-helpers.go b/igp-graph/arangodb/bmp-helpers.go index 28019e69..585af905 100644 --- a/igp-graph/arangodb/bmp-helpers.go +++ b/igp-graph/arangodb/bmp-helpers.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/deduplication.go b/igp-graph/arangodb/deduplication.go index 04239919..e76930dc 100644 --- a/igp-graph/arangodb/deduplication.go +++ b/igp-graph/arangodb/deduplication.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. 
+// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/domain-processor.go b/igp-graph/arangodb/domain-processor.go index 92f69a13..c3187840 100644 --- a/igp-graph/arangodb/domain-processor.go +++ b/igp-graph/arangodb/domain-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/errors.go b/igp-graph/arangodb/errors.go index d762ce36..5625a586 100644 --- a/igp-graph/arangodb/errors.go +++ b/igp-graph/arangodb/errors.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import "errors" diff --git a/igp-graph/arangodb/graph-edge-processor.go b/igp-graph/arangodb/graph-edge-processor.go index d6787834..de0b06ff 100644 --- a/igp-graph/arangodb/graph-edge-processor.go +++ b/igp-graph/arangodb/graph-edge-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. 
+// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/prefix-processor.go b/igp-graph/arangodb/prefix-processor.go index 8b1338d9..f5c609fd 100644 --- a/igp-graph/arangodb/prefix-processor.go +++ b/igp-graph/arangodb/prefix-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/srv6-processor.go b/igp-graph/arangodb/srv6-processor.go index ff37c77e..5fae2902 100644 --- a/igp-graph/arangodb/srv6-processor.go +++ b/igp-graph/arangodb/srv6-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. 
+ package arangodb import ( diff --git a/igp-graph/arangodb/types.go b/igp-graph/arangodb/types.go index 3db2d5d0..c2b2ff64 100644 --- a/igp-graph/arangodb/types.go +++ b/igp-graph/arangodb/types.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/arangodb/update-coordinator.go b/igp-graph/arangodb/update-coordinator.go index 6af3750e..0c4b421f 100644 --- a/igp-graph/arangodb/update-coordinator.go +++ b/igp-graph/arangodb/update-coordinator.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/igp-graph/kafkamessenger/kafkamessenger.go b/igp-graph/kafkamessenger/kafkamessenger.go index 63cba7fe..edf133f8 100644 --- a/igp-graph/kafkamessenger/kafkamessenger.go +++ b/igp-graph/kafkamessenger/kafkamessenger.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. 
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package kafkamessenger import ( diff --git a/install/infra/api/api-deployment.yaml b/install/infra/api/api-deployment.yaml new file mode 100644 index 00000000..6a133cd8 --- /dev/null +++ b/install/infra/api/api-deployment.yaml @@ -0,0 +1,49 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: jalapeno-api + namespace: jalapeno +spec: + replicas: 1 + selector: + matchLabels: + app: jalapeno-api + template: + metadata: + labels: + app: jalapeno-api + spec: + containers: + - name: api + image: iejalapeno/jalapeno-api:latest + imagePullPolicy: Always + env: + - name: JALAPENO_DATABASE_SERVER + value: "http://arangodb:8529" + - name: JALAPENO_DATABASE_NAME + value: "jalapeno" + resources: + requests: + memory: "256Mi" + cpu: "200m" + limits: + memory: "512Mi" + cpu: "500m" + livenessProbe: + httpGet: + path: /health + port: 8000 + initialDelaySeconds: 30 + readinessProbe: + httpGet: + path: /health + port: 8000 + volumeMounts: + - name: credentials + mountPath: /credentials + ports: + - containerPort: 8000 + volumes: + - name: credentials + secret: + secretName: jalapeno \ No newline at end of file diff --git a/install/infra/api/api-service.yaml b/install/infra/api/api-service.yaml new file mode 100644 index 00000000..e78d0079 --- /dev/null +++ b/install/infra/api/api-service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: jalapeno-api + namespace: jalapeno +spec: + type: NodePort + selector: + app: jalapeno-api + ports: + - port: 80 + targetPort: 8000 + nodePort: 30800 \ No newline at end of file diff --git a/install/infra/deploy_infrastructure.sh b/install/infra/deploy_infrastructure.sh index 8f413c41..9efe16a0 100755 --- a/install/infra/deploy_infrastructure.sh +++ b/install/infra/deploy_infrastructure.sh @@ -28,6 +28,9 @@ ${KUBE} create -f ${PWD}/${BASEDIR}/influxdb/. echo "Deploying Grafana" ${KUBE} create -f ${PWD}/${BASEDIR}/grafana/. +echo "Deploying API" +${KUBE} create -f ${PWD}/${BASEDIR}/api/. + echo "Finished deploying infra services" diff --git a/install/processors/igp-graph/igp-graph.yaml b/install/processors/igp-graph/igp-graph.yaml index 26a77723..6e19d4b2 100644 --- a/install/processors/igp-graph/igp-graph.yaml +++ b/install/processors/igp-graph/igp-graph.yaml @@ -11,6 +11,23 @@ spec: labels: app: igp-graph spec: + initContainers: + - name: wait-for-data + image: curlimages/curl:8.5.0 + command: + - sh + - -c + - | + echo "Waiting 15 seconds for bmp-arango to start base collection processing..." + sleep 15 + echo "Waiting for base link-state collections to have data..." + until [ $(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/ls_node/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) -gt 0 ] 2>/dev/null; do + echo "ls_node collection empty, waiting..."; + sleep 5; + done + echo "Base link-state data available, waiting 10 more seconds for processing..." 
+ sleep 10 + echo "Starting IGP graph processor" containers: - args: - --v diff --git a/install/processors/ip-graph/ip-graph.yaml b/install/processors/ip-graph/ip-graph.yaml index 26a4575c..f648e76b 100755 --- a/install/processors/ip-graph/ip-graph.yaml +++ b/install/processors/ip-graph/ip-graph.yaml @@ -11,6 +11,135 @@ spec: labels: app: ip-graph spec: + initContainers: + - name: wait-for-data + image: curlimages/curl:8.5.0 + command: + - sh + - -c + - | + # Check if this is a restart (ip-graph collections already have data) or fresh install + bgp_node_count=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/bgp_node/count" 2>/dev/null | grep -o '"count":[0-9]*' | cut -d':' -f2) + bgp_node_count=${bgp_node_count:-0} + + if [ $bgp_node_count -gt 0 ]; then + echo "Detected pod restart (bgp_node has $bgp_node_count nodes)" + echo "Performing quick sanity checks..." + sleep 5 + echo "Starting IP graph processor" + exit 0 + fi + + echo "Detected fresh install - performing full stabilization checks..." + echo "Waiting 15 seconds for bmp-arango to start base collection processing..." + sleep 15 + + # First, wait for igp_node collection to stabilize (igp-graph must complete first) + echo "Waiting for igp_node collection to stabilize (igp-graph processing)..." + prev_igp=0 + stable_count=0 + while true; do + curr_igp=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/igp_node/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_igp=${curr_igp:-0} + + if [ $curr_igp -eq 0 ]; then + echo "igp_node collection still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_igp - prev_igp)) + echo "igp_node: $curr_igp nodes (delta: +$diff)" + + if [ $diff -eq 0 ] && [ $prev_igp -gt 0 ]; then + stable_count=$((stable_count + 1)) + echo "igp_node stable check $stable_count/2" + if [ $stable_count -ge 2 ]; then + echo "igp_node stabilized at $curr_igp nodes - igp-graph processing complete" + break; + fi + else + stable_count=0 + fi + + prev_igp=$curr_igp + sleep 5; + done + + # Wait for peer collection to have data + echo "Checking for BGP peer data..." + until [ $(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/peer/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) -gt 0 ] 2>/dev/null; do + echo "peer collection empty, waiting..."; + sleep 5; + done + echo "BGP peer data found" + + # Wait for unicast_prefix_v4 collection to stabilize + echo "Waiting for unicast_prefix_v4 collection to stabilize..." + prev_v4=0 + stable_v4=0 + while true; do + curr_v4=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/unicast_prefix_v4/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_v4=${curr_v4:-0} + + if [ $curr_v4 -eq 0 ]; then + echo "unicast_prefix_v4 still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_v4 - prev_v4)) + echo "unicast_prefix_v4: $curr_v4 prefixes (delta: +$diff)" + + if [ $diff -lt 100 ] && [ $prev_v4 -gt 0 ]; then + stable_v4=$((stable_v4 + 1)) + echo "unicast_prefix_v4 stable check $stable_v4/2" + if [ $stable_v4 -ge 2 ]; then + echo "unicast_prefix_v4 stabilized at $curr_v4 prefixes" + break; + fi + else + stable_v4=0 + fi + + prev_v4=$curr_v4 + sleep 5; + done + + # Wait for unicast_prefix_v6 collection to stabilize + echo "Waiting for unicast_prefix_v6 collection to stabilize..." 
+ prev_v6=0 + stable_v6=0 + while true; do + curr_v6=$(curl -s -u root:jalapeno "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/unicast_prefix_v6/count" | grep -o '"count":[0-9]*' | cut -d':' -f2) + curr_v6=${curr_v6:-0} + + if [ $curr_v6 -eq 0 ]; then + echo "unicast_prefix_v6 still empty, waiting..."; + sleep 5; + continue; + fi + + diff=$((curr_v6 - prev_v6)) + echo "unicast_prefix_v6: $curr_v6 prefixes (delta: +$diff)" + + if [ $diff -lt 100 ] && [ $prev_v6 -gt 0 ]; then + stable_v6=$((stable_v6 + 1)) + echo "unicast_prefix_v6 stable check $stable_v6/2" + if [ $stable_v6 -ge 2 ]; then + echo "unicast_prefix_v6 stabilized at $curr_v6 prefixes" + break; + fi + else + stable_v6=0 + fi + + prev_v6=$curr_v6 + sleep 5; + done + + echo "All IGP and BGP data stabilized" + echo "Starting IP graph processor" containers: - args: - --v diff --git a/ip-graph/arangodb/arango-conn.go b/ip-graph/arangodb/arango-conn.go index c6ca2209..ba3fc837 100644 --- a/ip-graph/arangodb/arango-conn.go +++ b/ip-graph/arangodb/arango-conn.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/arangodb.go b/ip-graph/arangodb/arangodb.go index 1e4408f9..674ef854 100644 --- a/ip-graph/arangodb/arangodb.go +++ b/ip-graph/arangodb/arangodb.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. 
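Both init containers above implement the same wait-for-stabilization pattern: poll an ArangoDB collection's `/count` endpoint until the document count stops changing across consecutive checks (the prefix loops relax this to a delta under 100). Below is a minimal Go sketch of that idea, assuming the same endpoint and credentials as the scripts; `collectionCount` and `waitForStableCount` are illustrative names, not part of this codebase.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// countResponse models the relevant field of ArangoDB's
// /_api/collection/<name>/count response.
type countResponse struct {
	Count int `json:"count"`
}

// collectionCount fetches the current document count for one collection.
func collectionCount(url string) (int, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	req.SetBasicAuth("root", "jalapeno")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var cr countResponse
	if err := json.NewDecoder(resp.Body).Decode(&cr); err != nil {
		return 0, err
	}
	return cr.Count, nil
}

// waitForStableCount blocks until two consecutive polls return the same
// non-zero count, mirroring the igp_node "stable check 2/2" logic in the
// init-container script (the unicast_prefix loops tolerate small deltas).
func waitForStableCount(url string, interval time.Duration) int {
	prev, stable := 0, 0
	for {
		curr, err := collectionCount(url)
		if err != nil || curr == 0 {
			time.Sleep(interval)
			continue
		}
		if curr == prev {
			stable++
			if stable >= 2 {
				return curr
			}
		} else {
			stable = 0
		}
		prev = curr
		time.Sleep(interval)
	}
}

func main() {
	url := "http://arangodb.jalapeno:8529/_db/jalapeno/_api/collection/igp_node/count"
	fmt.Println("igp_node stabilized at", waitForStableCount(url, 5*time.Second))
}
```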
+ package arangodb import ( @@ -790,6 +812,11 @@ func (a *arangoDB) createPrefixAttachment(ctx context.Context, prefixData map[st case "ebgp_public": // Internet prefixes: Connect to all BGP peer nodes with public ASNs (like original processInetPrefix) return a.attachPrefixToInternetPeers(ctx, prefixData, prefixCollection, isIPv4) + case "ebgp_peer_centric": + // BMP peer-centric approach: Skip during initial load (real-time processor handles these) + // These are created by the real-time update processor which has its own edge creation logic + glog.V(8).Infof("Skipping ebgp_peer_centric prefix during initial load (handled by real-time processor)") + return nil default: return fmt.Errorf("unknown prefix type: %s", prefixType) } diff --git a/ip-graph/arangodb/batch-processor.go b/ip-graph/arangodb/batch-processor.go index a54fa84e..6bd810c8 100644 --- a/ip-graph/arangodb/batch-processor.go +++ b/ip-graph/arangodb/batch-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/bgp-deduplication-processor.go b/ip-graph/arangodb/bgp-deduplication-processor.go index c089ebf9..b9e07ff5 100644 --- a/ip-graph/arangodb/bgp-deduplication-processor.go +++ b/ip-graph/arangodb/bgp-deduplication-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. 
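The `ebgp_peer_centric` case added to `createPrefixAttachment` above is deliberately a no-op during the initial batch load: the real-time update processor creates those edges with its own logic. A reduced, self-contained sketch of that dispatch shape follows; other cases from the full switch are elided, and `handlePrefixType` is a stand-in, not the actual method.

```go
package main

import "fmt"

// handlePrefixType reduces the createPrefixAttachment dispatch to its shape:
// "ebgp_public" prefixes attach to public-ASN peers during the batch load,
// while the new "ebgp_peer_centric" case returns nil so the real-time
// processor owns edge creation for those prefixes.
func handlePrefixType(prefixType string) error {
	switch prefixType {
	case "ebgp_public":
		// initial load: attach prefix to all BGP peers with public ASNs
		return nil
	case "ebgp_peer_centric":
		// skipped here; created by the real-time update processor
		return nil
	default:
		return fmt.Errorf("unknown prefix type: %s", prefixType)
	}
}

func main() {
	for _, t := range []string{"ebgp_public", "ebgp_peer_centric", "bogus"} {
		fmt.Printf("%s -> %v\n", t, handlePrefixType(t))
	}
}
```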
+ package arangodb import ( @@ -142,7 +164,6 @@ func (bdp *BGPDeduplicationProcessor) processEBGPPrivateIPv4Prefixes(ctx context remote_ip: p.remote_ip, router_id: p.remote_bgp_id, prefix_type: "ebgp_private", - is_host: u.prefix_len == 32 } INTO ` + bdp.db.config.BGPPrefixV4 + ` OPTIONS { ignoreErrors: true }` @@ -180,7 +201,6 @@ func (bdp *BGPDeduplicationProcessor) processInternetIPv4Prefixes(ctx context.Co peer_asn: u.peer_asn, nexthop: u.nexthop, prefix_type: "ebgp_public", - is_host: u.prefix_len == 32 } INTO ` + bdp.db.config.BGPPrefixV4 + ` OPTIONS { ignoreErrors: true }` @@ -217,7 +237,6 @@ func (bdp *BGPDeduplicationProcessor) processIBGPIPv4Prefixes(ctx context.Contex asn: u.peer_asn, local_pref: u.base_attrs.local_pref, prefix_type: "ibgp", - is_host: u.prefix_len == 32 } INTO ` + bdp.db.config.BGPPrefixV4 + ` OPTIONS { ignoreErrors: true }` @@ -255,7 +274,6 @@ func (bdp *BGPDeduplicationProcessor) processEBGPPrivateIPv6Prefixes(ctx context remote_ip: p.remote_ip, router_id: p.remote_bgp_id, prefix_type: "ebgp_private", - is_host: u.prefix_len == 128 } INTO ` + bdp.db.config.BGPPrefixV6 + ` OPTIONS { ignoreErrors: true }` @@ -292,7 +310,6 @@ func (bdp *BGPDeduplicationProcessor) processEBGP4BytePrivateIPv6Prefixes(ctx co remote_ip: p.remote_ip, router_id: p.remote_bgp_id, prefix_type: "ebgp_private_4byte", - is_host: u.prefix_len == 128 } INTO ` + bdp.db.config.BGPPrefixV6 + ` OPTIONS { ignoreErrors: true }` @@ -331,7 +348,6 @@ func (bdp *BGPDeduplicationProcessor) processInternetIPv6Prefixes(ctx context.Co peer_asn: u.peer_asn, nexthop: u.nexthop, prefix_type: "ebgp_public", - is_host: u.prefix_len == 128 } INTO ` + bdp.db.config.BGPPrefixV6 + ` OPTIONS { ignoreErrors: true }` @@ -367,7 +383,6 @@ func (bdp *BGPDeduplicationProcessor) processIBGPIPv6Prefixes(ctx context.Contex asn: u.peer_asn, local_pref: u.base_attrs.local_pref, prefix_type: "ibgp", - is_host: u.prefix_len == 128 } INTO ` + bdp.db.config.BGPPrefixV6 + ` OPTIONS { ignoreErrors: true }` diff --git a/ip-graph/arangodb/bgp-peer-processor.go b/ip-graph/arangodb/bgp-peer-processor.go index 12fcbd6a..50a6dd6e 100644 --- a/ip-graph/arangodb/bgp-peer-processor.go +++ b/ip-graph/arangodb/bgp-peer-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/bgp-prefix-processor.go b/ip-graph/arangodb/bgp-prefix-processor.go index e1e35ac5..eadffdf1 100644 --- a/ip-graph/arangodb/bgp-prefix-processor.go +++ b/ip-graph/arangodb/bgp-prefix-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. 
and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( @@ -73,12 +95,8 @@ func (uc *UpdateCoordinator) processPrefixAdvertisement(ctx context.Context, key glog.Infof("Processing %s BGP prefix: %s/%d from AS%d via AS%d (key: %s)", prefixType, prefix, prefixLen, originAS, peerASN, consistentKey) - // Determine if this should be node metadata or separate vertex - if uc.shouldAttachAsNodeMetadata(prefixLen, isIPv4) { - return uc.attachPrefixToOriginNode(ctx, prefix, prefixLen, originAS, prefixData, isIPv4) - } else { - return uc.createBGPPrefixVertex(ctx, consistentKey, prefixData, prefixType, isIPv4) - } + // All prefixes create proper vertices (including /32 and /128 loopbacks) + return uc.createBGPPrefixVertex(ctx, consistentKey, prefixData, prefixType, isIPv4) } func (uc *UpdateCoordinator) processPrefixWithdrawal(ctx context.Context, key string, prefixData map[string]interface{}) error { @@ -104,18 +122,8 @@ func (uc *UpdateCoordinator) processPrefixWithdrawal(ctx context.Context, key st glog.Infof("Withdrawing BGP prefix: %s/%d from AS%d via peer %s (AS%d) (BMP key: %s, consistent key: %s)", prefix, prefixLen, originAS, peerIP, peerASN, key, consistentKey) - // Determine if this was node metadata or separate vertex - if uc.shouldAttachAsNodeMetadata(prefixLen, isIPv4) { - // For loopback prefixes (/32, /128) - remove from node metadata - if originAS == 0 { - glog.V(6).Infof("Origin AS missing for loopback withdrawal %s - skipping node metadata removal", consistentKey) - return nil - } - return uc.removePrefixFromOriginNode(ctx, prefix, prefixLen, originAS, isIPv4) - } else { - // For transit prefixes - remove edges from specific peer only - return uc.removeBGPPrefixFromPeer(ctx, consistentKey, prefixData, isIPv4) - } + // All prefixes are vertices - remove edges from specific peer only + return uc.removeBGPPrefixFromPeer(ctx, consistentKey, prefixData, isIPv4) } // extractOriginASFromPath extracts the origin AS from the base_attrs.as_path @@ -161,204 +169,6 @@ func (uc *UpdateCoordinator) isPrivateASN(asn uint32) bool { return (asn >= 64512 && asn <= 65535) || (asn >= 4200000000 && asn <= 4294967294) } -func (uc *UpdateCoordinator) shouldAttachAsNodeMetadata(prefixLen uint32, isIPv4 bool) bool { - // Following IGP-graph pattern: /32 (IPv4) and /128 (IPv6) loopbacks as node metadata - if isIPv4 && prefixLen == 32 { - return true - } - if !isIPv4 && prefixLen == 128 { - return true - } - return false -} - -func (uc *UpdateCoordinator) attachPrefixToOriginNode(ctx context.Context, prefix string, prefixLen, originAS uint32, prefixData map[string]interface{}, isIPv4 bool) error { - // Find the 
origin node (could be IGP node or BGP node) - originNodeID, err := uc.findOriginNode(ctx, originAS, prefixData) - if err != nil { - return fmt.Errorf("failed to find origin node for AS%d: %w", originAS, err) - } - - if originNodeID == "" { - // Origin node doesn't exist - create BGP node for this AS - originNodeID, err = uc.createOriginBGPNode(ctx, originAS, prefixData) - if err != nil { - return fmt.Errorf("failed to create origin BGP node for AS%d: %w", originAS, err) - } - } - - // Add prefix to node's metadata - return uc.addPrefixToNodeMetadata(ctx, originNodeID, prefix, prefixLen, prefixData) -} - -func (uc *UpdateCoordinator) findOriginNode(ctx context.Context, originAS uint32, prefixData map[string]interface{}) (string, error) { - // Strategy 1: Look for IGP nodes with matching ASN - igpNodeID, err := uc.findIGPNodeByASN(ctx, originAS) - if err != nil { - return "", err - } - if igpNodeID != "" { - return igpNodeID, nil - } - - // Strategy 2: Look for existing BGP peer nodes with matching ASN - bgpNodeID, err := uc.findBGPNodeByASN(ctx, originAS) - if err != nil { - return "", err - } - if bgpNodeID != "" { - return bgpNodeID, nil - } - - // Strategy 3: No existing node found - will need to create one - return "", nil // Indicates node doesn't exist yet -} - -func (uc *UpdateCoordinator) findIGPNodeByASN(ctx context.Context, asn uint32) (string, error) { - // Query IGP nodes for matching peer_asn (the AS of the IGP domain) - query := fmt.Sprintf(` - FOR node IN %s - FILTER node.peer_asn == @asn - LIMIT 1 - RETURN node._id - `, uc.db.config.IGPNode) - - bindVars := map[string]interface{}{ - "asn": asn, - } - - cursor, err := uc.db.db.Query(ctx, query, bindVars) - if err != nil { - return "", fmt.Errorf("failed to query IGP nodes by ASN: %w", err) - } - defer cursor.Close() - - if cursor.HasMore() { - var nodeID string - if _, err := cursor.ReadDocument(ctx, &nodeID); err != nil { - return "", fmt.Errorf("failed to read IGP node ID: %w", err) - } - return nodeID, nil - } - - return "", nil -} - -func (uc *UpdateCoordinator) findBGPNodeByASN(ctx context.Context, asn uint32) (string, error) { - // Query BGP nodes for matching ASN - // Prefer real peer nodes (with router_id) over artificial origin nodes - query := fmt.Sprintf(` - FOR node IN %s - FILTER node.asn == @asn - FILTER node.router_id != null AND node.router_id != "" - FILTER !STARTS_WITH(node._key, "bgp_") OR !CONTAINS(node._key, "_origin") - SORT node.router_id ASC - LIMIT 1 - RETURN node._id - `, uc.db.config.BGPNode) - - bindVars := map[string]interface{}{ - "asn": asn, - } - - cursor, err := uc.db.db.Query(ctx, query, bindVars) - if err != nil { - return "", err - } - defer cursor.Close() - - if cursor.HasMore() { - var nodeID string - if _, err := cursor.ReadDocument(ctx, &nodeID); err != nil { - return "", err - } - return nodeID, nil - } - - return "", nil -} - -func (uc *UpdateCoordinator) createOriginBGPNode(ctx context.Context, originAS uint32, prefixData map[string]interface{}) (string, error) { - // Create a representative BGP node for this AS - // Use origin AS as the router ID since we don't have specific router info - bgpNodeKey := fmt.Sprintf("bgp_%d_origin", originAS) - routerID := fmt.Sprintf("origin_as_%d", originAS) - - bgpNode := &BGPNode{ - Key: bgpNodeKey, - RouterID: routerID, - ASN: originAS, - } - - // Create BGP node - if _, err := uc.db.bgpNode.CreateDocument(ctx, bgpNode); err != nil { - if !driver.IsConflict(err) { - return "", fmt.Errorf("failed to create origin BGP node: %w", err) - } - // Node 
already exists, which is fine - } - - nodeID := fmt.Sprintf("%s/%s", uc.db.config.BGPNode, bgpNodeKey) - glog.V(8).Infof("Created origin BGP node: %s for AS%d", nodeID, originAS) - return nodeID, nil -} - -func (uc *UpdateCoordinator) addPrefixToNodeMetadata(ctx context.Context, nodeID, prefix string, prefixLen uint32, prefixData map[string]interface{}) error { - // Create prefix metadata object - prefixMetadata := map[string]interface{}{ - "prefix": prefix, - "prefix_len": prefixLen, - "origin_as": getUint32FromInterface(prefixData["origin_as"]), - "peer_asn": getUint32FromInterface(prefixData["peer_asn"]), - "nexthop": getStringFromData(prefixData, "nexthop"), - "timestamp": getStringFromData(prefixData, "timestamp"), - } - - // Add AS path if available - if asPath, ok := prefixData["base_attrs"].(map[string]interface{}); ok { - if path, ok := asPath["as_path"].([]interface{}); ok { - prefixMetadata["as_path"] = path - } - } - - // Update node to add prefix to metadata - // This requires reading the node, updating the prefixes array, and writing it back - updateQuery := fmt.Sprintf(` - FOR node IN %s - FILTER node._id == @nodeId - LET currentPrefixes = node.prefixes || [] - LET newPrefixes = APPEND(currentPrefixes, @prefixData) - UPDATE node WITH { prefixes: newPrefixes } IN %s - RETURN NEW - `, uc.getNodeCollectionFromID(nodeID), uc.getNodeCollectionFromID(nodeID)) - - bindVars := map[string]interface{}{ - "nodeId": nodeID, - "prefixData": prefixMetadata, - } - - cursor, err := uc.db.db.Query(ctx, updateQuery, bindVars) - if err != nil { - return fmt.Errorf("failed to add prefix to node metadata: %w", err) - } - defer cursor.Close() - - glog.V(8).Infof("Added prefix %s/%d to node %s metadata", prefix, prefixLen, nodeID) - return nil -} - -func (uc *UpdateCoordinator) getNodeCollectionFromID(nodeID string) string { - // Extract collection name from node ID (format: "collection/key") - if len(nodeID) > 0 { - for i, char := range nodeID { - if char == '/' { - return nodeID[:i] - } - } - } - return uc.db.config.IGPNode // Default fallback -} - func (uc *UpdateCoordinator) createBGPPrefixVertex(ctx context.Context, key string, prefixData map[string]interface{}, prefixType string, isIPv4 bool) error { prefix, _ := prefixData["prefix"].(string) prefixLen := getUint32FromInterface(prefixData["prefix_len"]) @@ -374,7 +184,6 @@ func (uc *UpdateCoordinator) createBGPPrefixVertex(ctx context.Context, key stri PeerASN: peerASN, PrefixType: prefixType, Nexthop: getStringFromData(prefixData, "nexthop"), - IsHost: uc.shouldAttachAsNodeMetadata(prefixLen, isIPv4), } // Add base attributes if available @@ -459,48 +268,6 @@ func (uc *UpdateCoordinator) createPrefixToOriginEdge(ctx context.Context, prefi return nil } -func (uc *UpdateCoordinator) removePrefixFromOriginNode(ctx context.Context, prefix string, prefixLen, originAS uint32, isIPv4 bool) error { - // Find the origin node - originNodeID, err := uc.findOriginNode(ctx, originAS, map[string]interface{}{}) - if err != nil { - return fmt.Errorf("failed to find origin node for prefix removal: %w", err) - } - - if originNodeID == "" { - glog.V(6).Infof("Origin node not found for AS%d during prefix removal", originAS) - return nil // Node doesn't exist, nothing to remove - } - - // Remove prefix from node's metadata - removeQuery := fmt.Sprintf(` - FOR node IN %s - FILTER node._id == @nodeId - LET currentPrefixes = node.prefixes || [] - LET filteredPrefixes = ( - FOR p IN currentPrefixes - FILTER NOT (p.prefix == @prefix AND p.prefix_len == @prefixLen) - 
RETURN p - ) - UPDATE node WITH { prefixes: filteredPrefixes } IN %s - RETURN NEW - `, uc.getNodeCollectionFromID(originNodeID), uc.getNodeCollectionFromID(originNodeID)) - - bindVars := map[string]interface{}{ - "nodeId": originNodeID, - "prefix": prefix, - "prefixLen": prefixLen, - } - - cursor, err := uc.db.db.Query(ctx, removeQuery, bindVars) - if err != nil { - return fmt.Errorf("failed to remove prefix from node metadata: %w", err) - } - defer cursor.Close() - - glog.V(8).Infof("Removed prefix %s/%d from node %s metadata", prefix, prefixLen, originNodeID) - return nil -} - // removeBGPPrefixFromPeer removes edges between a specific peer and a prefix // Only removes the prefix vertex if no more peers are advertising it func (uc *UpdateCoordinator) removeBGPPrefixFromPeer(ctx context.Context, key string, prefixData map[string]interface{}, isIPv4 bool) error { @@ -706,8 +473,8 @@ func (uc *UpdateCoordinator) findBGPPeerNodesForPrefix(ctx context.Context, orig } if isIGPOrigin { - glog.V(6).Infof("Prefix %s/%d originates from internal IGP (AS%d) - attaching to IGP nodes", prefix, prefixLen, originAS) - return uc.findIGPNodesForPrefix(ctx, originAS) + glog.V(6).Infof("Prefix %s/%d originates from internal IGP (AS%d) - attaching to specific IGP node", prefix, prefixLen, originAS) + return uc.findIGPNodesForPrefix(ctx, originAS, prefixData) } // For external prefixes, use peer-centric approach @@ -738,16 +505,26 @@ func (uc *UpdateCoordinator) checkIfIGPOrigin(ctx context.Context, originAS uint } // findIGPNodesForPrefix finds IGP nodes that should be attached to an internal prefix -func (uc *UpdateCoordinator) findIGPNodesForPrefix(ctx context.Context, originAS uint32) ([]string, error) { - // Find IGP nodes with matching peer_asn (the AS of the IGP domain) +func (uc *UpdateCoordinator) findIGPNodesForPrefix(ctx context.Context, originAS uint32, prefixData map[string]interface{}) ([]string, error) { + // For internal prefixes, attach to the SPECIFIC node identified by router_id + // NOT all nodes in the AS domain + routerID := getStringFromData(prefixData, "router_id") + + if routerID == "" { + glog.Warningf("No router_id found for internal prefix from AS%d - cannot attach to specific node", originAS) + return nil, nil + } + + // Find the specific IGP node with matching router_id and peer_asn query := fmt.Sprintf(` FOR node IN %s - FILTER node.peer_asn == @asn + FILTER node.router_id == @routerId AND node.peer_asn == @asn RETURN node._id `, uc.db.config.IGPNode) bindVars := map[string]interface{}{ - "asn": originAS, + "routerId": routerID, + "asn": originAS, } cursor, err := uc.db.db.Query(ctx, query, bindVars) @@ -765,7 +542,11 @@ func (uc *UpdateCoordinator) findIGPNodesForPrefix(ctx context.Context, originAS nodeIDs = append(nodeIDs, nodeID) } - glog.V(7).Infof("Found %d IGP nodes for AS%d", len(nodeIDs), originAS) + if len(nodeIDs) == 0 { + glog.V(6).Infof("No IGP node found with router_id=%s and peer_asn=%d", routerID, originAS) + } else { + glog.V(7).Infof("Found %d IGP node(s) with router_id=%s for AS%d prefix", len(nodeIDs), routerID, originAS) + } return nodeIDs, nil } diff --git a/ip-graph/arangodb/bmp-helpers.go b/ip-graph/arangodb/bmp-helpers.go index cf238152..e7d1ddb4 100644 --- a/ip-graph/arangodb/bmp-helpers.go +++ b/ip-graph/arangodb/bmp-helpers.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. 
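The reworked `findIGPNodesForPrefix` above narrows internal-prefix attachment from every node in the origin AS to the single node whose `router_id` matches the advertisement. A self-contained sketch of the query construction is below; the query text mirrors the hunk, while the function and the placeholder collection name (standing in for `uc.db.config.IGPNode`) are illustrative.

```go
package main

import "fmt"

// igpNodeLookup builds the AQL used after this change: filtering on both
// router_id and peer_asn means an internal /32 or /128 loopback attaches only
// to its originating node rather than to every node in the IGP domain.
func igpNodeLookup(collection, routerID string, asn uint32) (string, map[string]interface{}) {
	query := fmt.Sprintf(`
        FOR node IN %s
            FILTER node.router_id == @routerId AND node.peer_asn == @asn
            RETURN node._id
    `, collection)
	return query, map[string]interface{}{
		"routerId": routerID,
		"asn":      asn,
	}
}

func main() {
	q, vars := igpNodeLookup("igp_node", "0000.0000.0001", 65001)
	fmt.Println(q, vars)
}
```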
+// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/errors.go b/ip-graph/arangodb/errors.go index 19f34bbb..5a107f2b 100644 --- a/ip-graph/arangodb/errors.go +++ b/ip-graph/arangodb/errors.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import "errors" diff --git a/ip-graph/arangodb/ibgp-subnet-processor.go b/ip-graph/arangodb/ibgp-subnet-processor.go index 10fde324..d18b448d 100644 --- a/ip-graph/arangodb/ibgp-subnet-processor.go +++ b/ip-graph/arangodb/ibgp-subnet-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. 
+ package arangodb import ( diff --git a/ip-graph/arangodb/igp-copy-processor.go b/ip-graph/arangodb/igp-copy-processor.go index 97c3e2e2..adf44503 100644 --- a/ip-graph/arangodb/igp-copy-processor.go +++ b/ip-graph/arangodb/igp-copy-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/igp-sync-processor.go b/ip-graph/arangodb/igp-sync-processor.go index 0c5cc601..49670dee 100644 --- a/ip-graph/arangodb/igp-sync-processor.go +++ b/ip-graph/arangodb/igp-sync-processor.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/arangodb/types.go b/ip-graph/arangodb/types.go index fd2069be..600813a4 100644 --- a/ip-graph/arangodb/types.go +++ b/ip-graph/arangodb/types.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. 
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( @@ -101,7 +123,6 @@ type BGPPrefix struct { Name string `json:"name,omitempty"` PeerName string `json:"peer_name,omitempty"` PrefixType string `json:"prefix_type"` // "ibgp", "ebgp_private", "ebgp_public", "inet" - IsHost bool `json:"is_host"` // true for /32 and /128 prefixes } // IPNode represents a node in the full IP topology (can be IGP, BGP, or hybrid) diff --git a/ip-graph/arangodb/update-coordinator.go b/ip-graph/arangodb/update-coordinator.go index 2f750c98..774fe52e 100644 --- a/ip-graph/arangodb/update-coordinator.go +++ b/ip-graph/arangodb/update-coordinator.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package arangodb import ( diff --git a/ip-graph/kafkamessenger/kafkamessenger.go b/ip-graph/kafkamessenger/kafkamessenger.go index 6e2e37c2..0ead1ac9 100644 --- a/ip-graph/kafkamessenger/kafkamessenger.go +++ b/ip-graph/kafkamessenger/kafkamessenger.go @@ -1,3 +1,25 @@ +// Copyright (c) 2022-2025 Cisco Systems, Inc. and its affiliates +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// The contents of this file are licensed under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with the +// License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + package kafkamessenger import (