diff --git a/docs/geos_mesh_docs/doctor.rst b/docs/geos_mesh_docs/doctor.rst index 0e66d84f..123d673a 100644 --- a/docs/geos_mesh_docs/doctor.rst +++ b/docs/geos_mesh_docs/doctor.rst @@ -14,7 +14,7 @@ To use mesh-doctor, you first need to have installed the ``geos-mesh`` package u python -m pip install --upgrade ./geos-mesh -Once done, you can call ``mesh-doctor`` in your command line as presented in the rest of this documentation. +Once done, you can call ``mesh-doctor`` or ``meshDoctor`` in your command line as presented in the rest of this documentation. Modules ^^^^^^^ @@ -24,30 +24,30 @@ To list all the modules available through ``mesh-doctor``, you can simply use th .. code-block:: $ mesh-doctor --help - usage: mesh_doctor.py [-h] [-v] [-q] -i VTK_MESH_FILE - {collocated_nodes,element_volumes,fix_elements_orderings,generate_cube,generate_fractures,generate_global_ids,non_conformal,self_intersecting_elements,supported_elements} + usage: meshDoctor.py [-h] [-v] [-q] -i VTK_MESH_FILE + {collocatedNodes,elementVolumes,fixElementsOrderings,generateCube,generateFractures,generateGlobalIds,nonConformal,selfIntersectingElements,supportedElements} ... Inspects meshes for GEOSX. positional arguments: - {collocated_nodes,element_volumes,fix_elements_orderings,generate_cube,generate_fractures,generate_global_ids,non_conformal,self_intersecting_elements,supported_elements} + {collocatedNodes,elementVolumes,fixElementsOrderings,generateCube,generateFractures,generateGlobalIds,nonConformal,selfIntersectingElements,supportedElements} Modules - collocated_nodes + collocatedNodes Checks if nodes are collocated. - element_volumes + elementVolumes Checks if the volumes of the elements are greater than "min". - fix_elements_orderings + fixElementsOrderings Reorders the support nodes for the given cell types. - generate_cube + generateCube Generate a cube and its fields. - generate_fractures + generateFractures Splits the mesh to generate the faults and fractures. 
[EXPERIMENTAL] - generate_global_ids + generateGlobalIds Adds global ids for points and cells. - non_conformal + nonConformal Detects non conformal elements. [EXPERIMENTAL] - self_intersecting_elements + selfIntersectingElements Checks if the faces of the elements are self intersecting. - supported_elements + supportedElements Check that all the elements of the mesh are supported by GEOSX. options: -h, --help @@ -64,8 +64,8 @@ For example .. code-block:: - $ mesh-doctor collocated_nodes --help - usage: mesh_doctor.py collocated_nodes [-h] --tolerance TOLERANCE + $ mesh-doctor collocatedNodes --help + usage: meshDoctor.py collocatedNodes [-h] --tolerance TOLERANCE options: -h, --help show this help message and exit --tolerance TOLERANCE [float]: The absolute distance between two nodes for them to be considered collocated. @@ -76,51 +76,51 @@ If you see a message like .. code-block:: bash - [1970-04-14 03:07:15,625][WARNING] Could not load module "collocated_nodes": No module named 'vtkmodules' + [1970-04-14 03:07:15,625][WARNING] Could not load module "collocatedNodes": No module named 'vtkmodules' -then most likely ``mesh-doctor`` could not load the ``collocated_nodes`` module, because the ``vtk`` python package was not found. -Thereafter, the documentation for module ``collocated_nodes`` will not be displayed. +then most likely ``mesh-doctor`` could not load the ``collocatedNodes`` module, because the ``vtk`` python package was not found. +Therefore, the documentation for module ``collocatedNodes`` will not be displayed. You can solve this issue by installing the dependencies of ``mesh-doctor`` defined in its ``requirements.txt`` file (``python -m pip install -r requirements.txt``). Here is a list and brief description of all the modules available. 
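The ``allChecks``/``mainChecks`` interface documented below takes a comma-separated ``--checksToPerform`` list and ``name:value`` pairs for ``--setParameters``. As a minimal sketch of how such inputs could be resolved (the helper names and defaults here are illustrative assumptions, not mesh-doctor's actual parsing code):

```python
# Illustrative only: mesh-doctor's real parsing lives in its parsing modules;
# these helpers just mirror the CLI semantics described in the help text below.
MAIN_CHECKS = ["collocatedNodes", "elementVolumes", "selfIntersectingElements"]
ALL_CHECKS = MAIN_CHECKS + ["nonConformal", "supportedElements"]

def resolve_checks(raw: str, available: list[str]) -> list[str]:
    """An empty value selects the default check list; otherwise validate each name."""
    chosen = [name.strip() for name in raw.split(",") if name.strip()]
    if not chosen:
        return list(available)
    unknown = [name for name in chosen if name not in available]
    if unknown:
        raise ValueError(f"unknown checks {unknown}, choose from {available}")
    return chosen

def parse_parameters(raw: str) -> dict[str, float]:
    """Parse 'name:value' pairs, e.g. 'tolerance:0.0,minVolume:1e-6'."""
    params: dict[str, float] = {}
    for pair in (p.strip() for p in raw.split(",")):
        if pair:
            name, _, value = pair.partition(":")
            params[name] = float(value)
    return params
```

For instance, ``resolve_checks("collocatedNodes,elementVolumes", ALL_CHECKS)`` keeps only those two checks, while an empty string falls back to the full default list.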
-``all_checks`` and ``main_checks`` -"""""""""""""""""""""""""""""""""" +``allChecks`` and ``mainChecks`` +"""""""""""""""""""""""""""""""" ``mesh-doctor`` modules are called ``actions`` and they can be split into 2 different categories: ``check actions`` that will give you feedback on a .vtu mesh that you would like to use in GEOS. ``operate actions`` that will either create a new mesh or modify an existing mesh. -``all_checks`` aims at applying every single ``check`` action in one single command. The available list is of check is: -``collocated_nodes``, ``element_volumes``, ``non_conformal``, ``self_intersecting_elements``, ``supported_elements``. +``allChecks`` aims at applying every single ``check`` action in one single command. The available list of checks is: +``collocatedNodes``, ``elementVolumes``, ``nonConformal``, ``selfIntersectingElements``, ``supportedElements``. -``main_checks`` does only the fastest checks ``collocated_nodes``, ``element_volumes`` and ``self_intersecting_elements`` +``mainChecks`` does only the fastest checks ``collocatedNodes``, ``elementVolumes`` and ``selfIntersectingElements`` that can quickly highlight some issues to deal with before investigating the other checks. -Both ``all_checks`` and ``main_checks`` have the same keywords and can be operated in the same way. The example below shows -the case of ``all_checks``, but it can be swapped for ``main_checks``. +Both ``allChecks`` and ``mainChecks`` have the same keywords and can be operated in the same way. The example below shows +the case of ``allChecks``, but it can be swapped for ``mainChecks``. ..
code-block:: - $ mesh-doctor all_checks --help - usage: mesh-doctor all_checks [-h] [--checks_to_perform CHECKS_TO_PERFORM] [--set_parameters SET_PARAMETERS] + $ mesh-doctor allChecks --help + usage: mesh-doctor allChecks [-h] [--checksToPerform checksToPerform] [--setParameters setParameters] options: -h, --help show this help message and exit - --checks_to_perform CHECKS_TO_PERFORM + --checksToPerform checksToPerform Comma-separated list of mesh-doctor checks to perform. - If no input was given, all of the following checks will be executed by default: ['collocated_nodes', 'element_volumes', 'self_intersecting_elements']. - The available choices for checks are ['collocated_nodes', 'element_volumes', 'non_conformal', 'self_intersecting_elements', 'supported_elements']. + If no input was given, all of the following checks will be executed by default: ['collocatedNodes', 'elementVolumes', 'selfIntersectingElements']. + The available choices for checks are ['collocatedNodes', 'elementVolumes', 'nonConformal', 'selfIntersectingElements', 'supportedElements']. If you want to choose only certain of them, you can name them individually. - Example: --checks_to_perform collocated_nodes,element_volumes (default: ) - --set_parameters SET_PARAMETERS + Example: --checksToPerform collocatedNodes,elementVolumes (default: ) + --setParameters setParameters Comma-separated list of parameters to set for the checks (e.g., 'param_name:value'). These parameters override the defaults. - Default parameters are: For collocated_nodes: tolerance:0.0. For element_volumes: min_volume:0.0. - For non_conformal: angle_tolerance:10.0, point_tolerance:0.0, face_tolerance:0.0. - For self_intersecting_elements: min_distance:2.220446049250313e-16. For supported_elements: chunk_size:1, nproc:8. - Example: --set_parameters parameter_name:10.5,other_param:25 (default: ) + Default parameters are: For collocatedNodes: tolerance:0.0. For elementVolumes: minVolume:0.0. 
+ For nonConformal: angleTolerance:10.0, pointTolerance:0.0, faceTolerance:0.0. + For selfIntersectingElements: minDistance:2.220446049250313e-16. For supportedElements: chunkSize:1, nproc:8. + Example: --setParameters parameter_name:10.5,other_param:25 (default: ) -``collocated_nodes`` +``collocatedNodes`` """""""""""""""""""" Displays the neighboring nodes that are closer to each other than a prescribed threshold. @@ -128,37 +128,37 @@ It is not uncommon to define multiple nodes for the exact same position, which w .. code-block:: - $ mesh-doctor collocated_nodes --help - usage: mesh_doctor.py collocated_nodes [-h] --tolerance TOLERANCE + $ mesh-doctor collocatedNodes --help + usage: meshDoctor.py collocatedNodes [-h] --tolerance TOLERANCE options: -h, --help show this help message and exit --tolerance TOLERANCE [float]: The absolute distance between two nodes for them to be considered collocated. -``element_volumes`` -""""""""""""""""""" +``elementVolumes`` +"""""""""""""""""" Computes the volumes of all the cells and displays the ones that are below a prescribed threshold. Cells with negative volumes will typically be an issue for ``geos`` and should be fixed. .. code-block:: - $ mesh-doctor element_volumes --help - usage: mesh_doctor.py element_volumes [-h] --min 0.0 + $ mesh-doctor elementVolumes --help + usage: meshDoctor.py elementVolumes [-h] --minVolume 0.0 options: -h, --help show this help message and exit - --min 0.0 [float]: The minimum acceptable volume. Defaults to 0.0. + --minVolume 0.0 [float]: The minimum acceptable volume. Defaults to 0.0. -``fix_elements_orderings`` -"""""""""""""""""""""""""" +``fixElementsOrderings`` +"""""""""""""""""""""""" It sometimes happens that an exported mesh does not abide by the ``vtk`` orderings. -The ``fix_elements_orderings`` module can rearrange the nodes of given types of elements. +The ``fixElementsOrderings`` module can rearrange the nodes of given types of elements. 
This can be convenient if you cannot regenerate the mesh. .. code-block:: - $ mesh-doctor fix_elements_orderings --help - usage: mesh_doctor.py fix_elements_orderings [-h] [--Hexahedron 1,6,5,4,7,0,2,3] [--Prism5 8,2,0,7,6,9,5,1,4,3] + $ mesh-doctor fixElementsOrderings --help + usage: meshDoctor.py fixElementsOrderings [-h] [--Hexahedron 1,6,5,4,7,0,2,3] [--Prism5 8,2,0,7,6,9,5,1,4,3] [--Prism6 11,2,8,10,5,0,9,7,6,1,4,3] [--Pyramid 3,4,0,2,1] [--Tetrahedron 2,0,3,1] [--Voxel 1,6,5,4,7,0,2,3] [--Wedge 3,5,4,0,2,1] --output OUTPUT [--data-mode binary, ascii] @@ -178,8 +178,8 @@ This can be convenient if you cannot regenerate the mesh. --data-mode binary, ascii [string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary. -``generate_cube`` -""""""""""""""""" +``generateCube`` +"""""""""""""""" This module conveniently generates cubic meshes in ``vtk``. It can also generate fields with simple values. @@ -187,8 +187,8 @@ This tool can also be useful to generate a trial mesh that will later be refined .. code-block:: - $ mesh-doctor generate_cube --help - usage: mesh_doctor.py generate_cube [-h] [--x 0:1.5:3] [--y 0:5:10] [--z 0:1] [--nx 2:2] [--ny 1:1] [--nz 4] + $ mesh-doctor generateCube --help + usage: meshDoctor.py generateCube [-h] [--x 0:1.5:3] [--y 0:5:10] [--z 0:1] [--nx 2:2] [--ny 1:1] [--nz 4] [--fields name:support:dim [name:support:dim ...]] [--cells] [--no-cells] [--points] [--no-points] --output OUTPUT [--data-mode binary, ascii] options: @@ -209,46 +209,46 @@ This tool can also be useful to generate a trial mesh that will later be refined --data-mode binary, ascii [string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary. -``generate_fractures`` -"""""""""""""""""""""" +``generateFractures`` +""""""""""""""""""""" For a conformal fracture to be defined in a mesh, ``geos`` requires the mesh to be split at the faces where the fracture gets across the mesh. 
-The ``generate_fractures`` module will split the mesh and generate the multi-block ``vtk`` files. +The ``generateFractures`` module will split the mesh and generate the multi-block ``vtk`` files. .. code-block:: - $ mesh-doctor generate_fractures --help - usage: mesh_doctor.py generate_fractures [-h] --policy field, internal_surfaces [--name NAME] [--values VALUES] --output OUTPUT - [--data-mode binary, ascii] [--fractures_output_dir FRACTURES_OUTPUT_DIR] + $ mesh-doctor generateFractures --help + usage: meshDoctor.py generateFractures [-h] --policy field, internalSurfaces [--name NAME] [--values VALUES] --output OUTPUT + [--data-mode binary, ascii] [--fracturesOutputDir FRACTURES_OUTPUT_DIR] options: -h, --help show this help message and exit - --policy field, internal_surfaces - [string]: The criterion to define the surfaces that will be changed into fracture zones. Possible values are "field, internal_surfaces" + --policy field, internalSurfaces + [string]: The criterion to define the surfaces that will be changed into fracture zones. Possible values are "field, internalSurfaces" --name NAME [string]: If the "field" policy is selected, defines which field will be considered to define the fractures. - If the "internal_surfaces" policy is selected, defines the name of the attribute will be considered to identify the fractures. + If the "internalSurfaces" policy is selected, defines the name of the attribute that will be considered to identify the fractures. --values VALUES [list of comma separated integers]: If the "field" policy is selected, which changes of the field will be considered as a fracture. - If the "internal_surfaces" policy is selected, list of the fracture attributes. + If the "internalSurfaces" policy is selected, list of the fracture attributes. You can create multiple fractures by separating the values with ':' like shown in this example. 
--values 10,12:13,14,16,18:22 will create 3 fractures identified respectively with the values (10,12), (13,14,16,18) and (22). If no ':' is found, all values specified will be assumed to create only 1 single fracture. --output OUTPUT [string]: The vtk output file destination. --data-mode binary, ascii [string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary. - --fractures_output_dir FRACTURES_OUTPUT_DIR + --fracturesOutputDir FRACTURES_OUTPUT_DIR [string]: The output directory for the fractures meshes that will be generated from the mesh. - --fractures_data_mode FRACTURES_DATA_MODE + --fracturesDataMode FRACTURES_DATA_MODE [string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary. -``generate_global_ids`` -""""""""""""""""""""""" +``generateGlobalIds`` +""""""""""""""""""""" When running ``geos`` in parallel, `global ids` can be used to refer to data across multiple ranks. -The ``generate_global_ids`` can generate `global ids` for the imported ``vtk`` mesh. +The ``generateGlobalIds`` module can generate `global ids` for the imported ``vtk`` mesh. .. code-block:: - $ mesh-doctor generate_global_ids --help - usage: mesh_doctor.py generate_global_ids [-h] [--cells] [--no-cells] [--points] [--no-points] --output OUTPUT + $ mesh-doctor generateGlobalIds --help + usage: meshDoctor.py generateGlobalIds [-h] [--cells] [--no-cells] [--points] [--no-points] --output OUTPUT [--data-mode binary, ascii] options: -h, --help show this help message and exit @@ -260,8 +260,8 @@ The ``generate_global_ids`` can generate `global ids` for the imported ``vtk`` m --data-mode binary, ascii [string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary. -``non_conformal`` -""""""""""""""""" +``nonConformal`` +"""""""""""""""" This module will detect elements which are close enough (there's a user defined threshold) but which are not in front of each other (another threshold can be defined). 
`Close enough` can be defined in terms of proximity of the nodes and faces of the elements. @@ -270,48 +270,48 @@ This module can be a bit time consuming. .. code-block:: - $ mesh-doctor non_conformal --help - usage: mesh_doctor.py non_conformal [-h] [--angle_tolerance 10.0] [--point_tolerance POINT_TOLERANCE] - [--face_tolerance FACE_TOLERANCE] + $ mesh-doctor nonConformal --help + usage: meshDoctor.py nonConformal [-h] [--angleTolerance 10.0] [--pointTolerance POINT_TOLERANCE] + [--faceTolerance FACE_TOLERANCE] options: -h, --help show this help message and exit - --angle_tolerance 10.0 [float]: angle tolerance in degrees. Defaults to 10.0 - --point_tolerance POINT_TOLERANCE + --angleTolerance 10.0 [float]: angle tolerance in degrees. Defaults to 10.0 + --pointTolerance POINT_TOLERANCE [float]: tolerance for two points to be considered collocated. - --face_tolerance FACE_TOLERANCE + --faceTolerance FACE_TOLERANCE [float]: tolerance for two faces to be considered "touching". -``self_intersecting_elements`` -"""""""""""""""""""""""""""""" +``selfIntersectingElements`` +"""""""""""""""""""""""""""" Some meshes can have cells that auto-intersect. This module will display the elements that have faces intersecting. .. code-block:: - $ mesh-doctor self_intersecting_elements --help - usage: mesh_doctor.py self_intersecting_elements [-h] [--min 2.220446049250313e-16] + $ mesh-doctor selfIntersectingElements --help + usage: meshDoctor.py selfIntersectingElements [-h] [--minDistance 2.220446049250313e-16] options: -h, --help show this help message and exit - --min 2.220446049250313e-16 + --minDistance 2.220446049250313e-16 [float]: The tolerance in the computation. Defaults to your machine precision 2.220446049250313e-16. -``supported_elements`` -"""""""""""""""""""""" +``supportedElements`` +""""""""""""""""""""" ``geos`` supports a specific set of elements. Let's cite the standard elements like `tetrahedra`, `wedges`, `pyramids` or `hexahedra`. 
But also prisms up to 11 faces. ``geos`` also supports the generic ``VTK_POLYHEDRON``/``42`` elements, which are converted on the fly into one of the elements just described. -The ``supported_elements`` check will validate that no unsupported element is included in the input mesh. +The ``supportedElements`` check will validate that no unsupported element is included in the input mesh. It will also verify that the ``VTK_POLYHEDRON`` cells can effectively get converted into a supported type of element. .. code-block:: - $ mesh-doctor supported_elements --help - usage: mesh_doctor.py supported_elements [-h] [--chunck_size 1] [--nproc 8] + $ mesh-doctor supportedElements --help + usage: meshDoctor.py supportedElements [-h] [--chunkSize 1] [--nproc 8] options: -h, --help show this help message and exit - --chunck_size 1 [int]: Defaults chunk size for parallel processing to 1 + --chunkSize 1 [int]: Defaults chunk size for parallel processing to 1 --nproc 8 [int]: Number of threads used for parallel processing. Defaults to your CPU count 8. 
\ No newline at end of file diff --git a/geos-mesh/pyproject.toml b/geos-mesh/pyproject.toml index 30df3e5e..2daf4301 100644 --- a/geos-mesh/pyproject.toml +++ b/geos-mesh/pyproject.toml @@ -39,7 +39,8 @@ dependencies = [ ] [project.scripts] - mesh-doctor = "geos.mesh.doctor.mesh_doctor:main" + mesh-doctor = "geos.mesh.doctor.meshDoctor:main" + meshDoctor = "geos.mesh.doctor.meshDoctor:main" convert_abaqus = "geos.mesh.conversion.main:main" [project.urls] diff --git a/geos-mesh/src/geos/mesh/doctor/actions/allChecks.py b/geos-mesh/src/geos/mesh/doctor/actions/allChecks.py new file mode 100644 index 00000000..9b7f7c09 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/allChecks.py @@ -0,0 +1,26 @@ +from dataclasses import dataclass +from geos.mesh.doctor.register import __loadModuleAction +from geos.mesh.doctor.parsing.cliParsing import setupLogger + + +@dataclass( frozen=True ) +class Options: + checksToPerform: list[ str ] + checksOptions: dict[ str, any ] + checkDisplays: dict[ str, any ] + + +@dataclass( frozen=True ) +class Result: + checkResults: dict[ str, any ] + + +def action( vtkInputFile: str, options: Options ) -> Result: + checkResults: dict[ str, any ] = dict() + for checkName in options.checksToPerform: + checkAction = __loadModuleAction( checkName ) + setupLogger.info( f"Performing check '{checkName}'." 
) + option = options.checksOptions[ checkName ] + checkResult = checkAction( vtkInputFile, option ) + checkResults[ checkName ] = checkResult + return Result( checkResults=checkResults ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/all_checks.py b/geos-mesh/src/geos/mesh/doctor/actions/all_checks.py deleted file mode 100644 index 253165d9..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/all_checks.py +++ /dev/null @@ -1,26 +0,0 @@ -from dataclasses import dataclass -from geos.mesh.doctor.register import __load_module_action -from geos.mesh.doctor.parsing.cli_parsing import setup_logger - - -@dataclass( frozen=True ) -class Options: - checks_to_perform: list[ str ] - checks_options: dict[ str, any ] - check_displays: dict[ str, any ] - - -@dataclass( frozen=True ) -class Result: - check_results: dict[ str, any ] - - -def action( vtk_input_file: str, options: Options ) -> list[ Result ]: - check_results: dict[ str, any ] = dict() - for check_name in options.checks_to_perform: - check_action = __load_module_action( check_name ) - setup_logger.info( f"Performing check '{check_name}'." 
) - option = options.checks_options[ check_name ] - check_result = check_action( vtk_input_file, option ) - check_results[ check_name ] = check_result - return Result( check_results=check_results ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/checkFractures.py b/geos-mesh/src/geos/mesh/doctor/actions/checkFractures.py new file mode 100644 index 00000000..6966730f --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/checkFractures.py @@ -0,0 +1,159 @@ +import numpy +from dataclasses import dataclass +from tqdm import tqdm +from typing import Collection, Iterable, Sequence +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, vtkCell +from vtkmodules.vtkCommonCore import vtkPoints +from vtkmodules.vtkIOXML import vtkXMLMultiBlockDataReader +from vtkmodules.util.numpy_support import vtk_to_numpy +from geos.mesh.doctor.actions.generateFractures import Coordinates3D +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.utils.genericHelpers import vtkIter + + +@dataclass( frozen=True ) +class Options: + tolerance: float + matrixName: str + fractureName: str + collocatedNodesFieldName: str + + +@dataclass( frozen=True ) +class Result: + # First index is the local index of the fracture mesh. + # Second is the local index of the matrix mesh. + # Third is the global index in the matrix mesh. 
+ errors: Sequence[ tuple[ int, int, int ] ] + + +def __readMultiblock( vtkInputFile: str, matrixName: str, + fractureName: str ) -> tuple[ vtkUnstructuredGrid, vtkUnstructuredGrid ]: + reader = vtkXMLMultiBlockDataReader() + reader.SetFileName( vtkInputFile ) + reader.Update() + multiBlock = reader.GetOutput() + for b in range( multiBlock.GetNumberOfBlocks() ): + blockName: str = multiBlock.GetMetaData( b ).Get( multiBlock.NAME() ) + if blockName == matrixName: + matrix: vtkUnstructuredGrid = multiBlock.GetBlock( b ) + if blockName == fractureName: + fracture: vtkUnstructuredGrid = multiBlock.GetBlock( b ) + assert matrix and fracture + return matrix, fracture + + +def formatCollocatedNodes( fractureMesh: vtkUnstructuredGrid ) -> Sequence[ Iterable[ int ] ]: + """Extract the collocated nodes information from the mesh and formats it in a python way. + + Args: + fractureMesh (vtkUnstructuredGrid): The mesh of the fracture (with 2d cells). + + Returns: + Sequence[ Iterable[ int ] ]: An iterable over all the buckets of collocated nodes. 
+ """ + collocatedNodes: numpy.ndarray = vtk_to_numpy( fractureMesh.GetPointData().GetArray( "collocatedNodes" ) ) + if len( collocatedNodes.shape ) == 1: + collocatedNodes: numpy.ndarray = collocatedNodes.reshape( ( collocatedNodes.shape[ 0 ], 1 ) ) + generator = ( tuple( sorted( bucket[ bucket > -1 ] ) ) for bucket in collocatedNodes ) + return tuple( generator ) + + +def __checkCollocatedNodesPositions( + matrixPoints: Sequence[ Coordinates3D ], fracturePoints: Sequence[ Coordinates3D ], g2l: Sequence[ int ], + collocatedNodes: Iterable[ Iterable[ int ] ] +) -> Collection[ tuple[ int, Iterable[ int ], Iterable[ Coordinates3D ] ] ]: + issues = [] + for li, bucket in enumerate( collocatedNodes ): + matrix_nodes = ( fracturePoints[ li ], ) + tuple( map( lambda gi: matrixPoints[ g2l[ gi ] ], bucket ) ) + m = numpy.array( matrix_nodes ) + rank: int = numpy.linalg.matrix_rank( m ) + if rank > 1: + issues.append( ( li, bucket, tuple( map( lambda gi: matrixPoints[ g2l[ gi ] ], bucket ) ) ) ) + return issues + + +def myIter( ccc ): + car, cdr = ccc[ 0 ], ccc[ 1: ] + for i in car: + if cdr: + for j in myIter( cdr ): + yield i, *j + else: + yield ( i, ) + + +def __checkNeighbors( matrix: vtkUnstructuredGrid, fracture: vtkUnstructuredGrid, g2l: Sequence[ int ], + collocatedNodes: Sequence[ Iterable[ int ] ] ): + fractureNodes: set[ int ] = set() + for bucket in collocatedNodes: + for gi in bucket: + fractureNodes.add( g2l[ gi ] ) + # For each face of each cell, + # if all the points of the face are "made" of collocated nodes, + # then this is a fracture face. 
+ fractureFaces: set[ frozenset[ int ] ] = set() + for c in range( matrix.GetNumberOfCells() ): + cell: vtkCell = matrix.GetCell( c ) + for f in range( cell.GetNumberOfFaces() ): + face: vtkCell = cell.GetFace( f ) + pointIds = frozenset( vtkIter( face.GetPointIds() ) ) + if pointIds <= fractureNodes: + fractureFaces.add( pointIds ) + # Finding the cells + for c in tqdm( range( fracture.GetNumberOfCells() ), desc="Finding neighbor cell pairs" ): + cell: vtkCell = fracture.GetCell( c ) + cns: set[ frozenset[ int ] ] = set() # subset of collocatedNodes + pointIds = frozenset( vtkIter( cell.GetPointIds() ) ) + for pointId in pointIds: + bucket = collocatedNodes[ pointId ] + localBucket = frozenset( map( g2l.__getitem__, bucket ) ) + cns.add( localBucket ) + found = 0 + tmp = tuple( map( tuple, cns ) ) + for nodeCombinations in myIter( tmp ): + f = frozenset( nodeCombinations ) + if f in fractureFaces: + found += 1 + if found != 2: + setupLogger.warning( "Something went wrong since we should have found 2 fracture faces (we found" + + f" {found}) for collocated nodes {cns}." 
) + + +def __action( vtkInputFile: str, options: Options ) -> Result: + matrix, fracture = __readMultiblock( vtkInputFile, options.matrixName, options.fractureName ) + matrixPoints: vtkPoints = matrix.GetPoints() + fracturePoints: vtkPoints = fracture.GetPoints() + + collocatedNodes: Sequence[ Iterable[ int ] ] = formatCollocatedNodes( fracture ) + assert matrix.GetPointData().GetGlobalIds() and matrix.GetCellData().GetGlobalIds() and \ + fracture.GetPointData().GetGlobalIds() and fracture.GetCellData().GetGlobalIds() + + pointIds = vtk_to_numpy( matrix.GetPointData().GetGlobalIds() ) + g2l = numpy.ones( len( pointIds ), dtype=int ) * -1 + for loc, glo in enumerate( pointIds ): + g2l[ glo ] = loc + g2l.flags.writeable = False + + issues = __checkCollocatedNodesPositions( vtk_to_numpy( matrix.GetPoints().GetData() ), + vtk_to_numpy( fracture.GetPoints().GetData() ), g2l, collocatedNodes ) + assert len( issues ) == 0 + + __checkNeighbors( matrix, fracture, g2l, collocatedNodes ) + + errors = [] + for i, duplicates in enumerate( collocatedNodes ): + for duplicate in filter( lambda i: i > -1, duplicates ): + p0 = matrixPoints.GetPoint( g2l[ duplicate ] ) + p1 = fracturePoints.GetPoint( i ) + if numpy.linalg.norm( numpy.array( p1 ) - numpy.array( p0 ) ) > options.tolerance: + errors.append( ( i, g2l[ duplicate ], duplicate ) ) + return Result( errors=errors ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + try: + return __action( vtkInputFile, options ) + except BaseException as e: + setupLogger.error( e ) + return Result( errors=() ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/check_fractures.py b/geos-mesh/src/geos/mesh/doctor/actions/check_fractures.py deleted file mode 100644 index 17d3f893..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/check_fractures.py +++ /dev/null @@ -1,156 +0,0 @@ -import numpy -from dataclasses import dataclass -from tqdm import tqdm -from typing import Collection, Iterable, Sequence -from 
vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, vtkCell -from vtkmodules.vtkCommonCore import vtkPoints -from vtkmodules.vtkIOXML import vtkXMLMultiBlockDataReader -from vtkmodules.util.numpy_support import vtk_to_numpy -from geos.mesh.doctor.actions.generate_fractures import Coordinates3D -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.utils.genericHelpers import vtk_iter - - -@dataclass( frozen=True ) -class Options: - tolerance: float - matrix_name: str - fracture_name: str - collocated_nodes_field_name: str - - -@dataclass( frozen=True ) -class Result: - # First index is the local index of the fracture mesh. - # Second is the local index of the matrix mesh. - # Third is the global index in the matrix mesh. - errors: Sequence[ tuple[ int, int, int ] ] - - -def __read_multiblock( vtk_input_file: str, matrix_name: str, - fracture_name: str ) -> tuple[ vtkUnstructuredGrid, vtkUnstructuredGrid ]: - reader = vtkXMLMultiBlockDataReader() - reader.SetFileName( vtk_input_file ) - reader.Update() - multi_block = reader.GetOutput() - for b in range( multi_block.GetNumberOfBlocks() ): - block_name: str = multi_block.GetMetaData( b ).Get( multi_block.NAME() ) - if block_name == matrix_name: - matrix: vtkUnstructuredGrid = multi_block.GetBlock( b ) - if block_name == fracture_name: - fracture: vtkUnstructuredGrid = multi_block.GetBlock( b ) - assert matrix and fracture - return matrix, fracture - - -def format_collocated_nodes( fracture_mesh: vtkUnstructuredGrid ) -> Sequence[ Iterable[ int ] ]: - """ - Extract the collocated nodes information from the mesh and formats it in a python way. - :param fracture_mesh: The mesh of the fracture (with 2d cells). - :return: An iterable over all the buckets of collocated nodes. 
- """ - collocated_nodes: numpy.ndarray = vtk_to_numpy( fracture_mesh.GetPointData().GetArray( "collocated_nodes" ) ) - if len( collocated_nodes.shape ) == 1: - collocated_nodes: numpy.ndarray = collocated_nodes.reshape( ( collocated_nodes.shape[ 0 ], 1 ) ) - generator = ( tuple( sorted( bucket[ bucket > -1 ] ) ) for bucket in collocated_nodes ) - return tuple( generator ) - - -def __check_collocated_nodes_positions( - matrix_points: Sequence[ Coordinates3D ], fracture_points: Sequence[ Coordinates3D ], g2l: Sequence[ int ], - collocated_nodes: Iterable[ Iterable[ int ] ] -) -> Collection[ tuple[ int, Iterable[ int ], Iterable[ Coordinates3D ] ] ]: - issues = [] - for li, bucket in enumerate( collocated_nodes ): - matrix_nodes = ( fracture_points[ li ], ) + tuple( map( lambda gi: matrix_points[ g2l[ gi ] ], bucket ) ) - m = numpy.array( matrix_nodes ) - rank: int = numpy.linalg.matrix_rank( m ) - if rank > 1: - issues.append( ( li, bucket, tuple( map( lambda gi: matrix_points[ g2l[ gi ] ], bucket ) ) ) ) - return issues - - -def my_iter( ccc ): - car, cdr = ccc[ 0 ], ccc[ 1: ] - for i in car: - if cdr: - for j in my_iter( cdr ): - yield i, *j - else: - yield ( i, ) - - -def __check_neighbors( matrix: vtkUnstructuredGrid, fracture: vtkUnstructuredGrid, g2l: Sequence[ int ], - collocated_nodes: Sequence[ Iterable[ int ] ] ): - fracture_nodes: set[ int ] = set() - for bucket in collocated_nodes: - for gi in bucket: - fracture_nodes.add( g2l[ gi ] ) - # For each face of each cell, - # if all the points of the face are "made" of collocated nodes, - # then this is a fracture face. 
- fracture_faces: set[ frozenset[ int ] ] = set() - for c in range( matrix.GetNumberOfCells() ): - cell: vtkCell = matrix.GetCell( c ) - for f in range( cell.GetNumberOfFaces() ): - face: vtkCell = cell.GetFace( f ) - point_ids = frozenset( vtk_iter( face.GetPointIds() ) ) - if point_ids <= fracture_nodes: - fracture_faces.add( point_ids ) - # Finding the cells - for c in tqdm( range( fracture.GetNumberOfCells() ), desc="Finding neighbor cell pairs" ): - cell: vtkCell = fracture.GetCell( c ) - cns: set[ frozenset[ int ] ] = set() # subset of collocated_nodes - point_ids = frozenset( vtk_iter( cell.GetPointIds() ) ) - for point_id in point_ids: - bucket = collocated_nodes[ point_id ] - local_bucket = frozenset( map( g2l.__getitem__, bucket ) ) - cns.add( local_bucket ) - found = 0 - tmp = tuple( map( tuple, cns ) ) - for node_combinations in my_iter( tmp ): - f = frozenset( node_combinations ) - if f in fracture_faces: - found += 1 - if found != 2: - setup_logger.warning( "Something went wrong since we should have found 2 fractures faces (we found" + - f" {found}) for collocated nodes {cns}." 
) - - -def __action( vtk_input_file: str, options: Options ) -> Result: - matrix, fracture = __read_multiblock( vtk_input_file, options.matrix_name, options.fracture_name ) - matrix_points: vtkPoints = matrix.GetPoints() - fracture_points: vtkPoints = fracture.GetPoints() - - collocated_nodes: Sequence[ Iterable[ int ] ] = format_collocated_nodes( fracture ) - assert matrix.GetPointData().GetGlobalIds() and matrix.GetCellData().GetGlobalIds() and \ - fracture.GetPointData().GetGlobalIds() and fracture.GetCellData().GetGlobalIds() - - point_ids = vtk_to_numpy( matrix.GetPointData().GetGlobalIds() ) - g2l = numpy.ones( len( point_ids ), dtype=int ) * -1 - for loc, glo in enumerate( point_ids ): - g2l[ glo ] = loc - g2l.flags.writeable = False - - issues = __check_collocated_nodes_positions( vtk_to_numpy( matrix.GetPoints().GetData() ), - vtk_to_numpy( fracture.GetPoints().GetData() ), g2l, collocated_nodes ) - assert len( issues ) == 0 - - __check_neighbors( matrix, fracture, g2l, collocated_nodes ) - - errors = [] - for i, duplicates in enumerate( collocated_nodes ): - for duplicate in filter( lambda i: i > -1, duplicates ): - p0 = matrix_points.GetPoint( g2l[ duplicate ] ) - p1 = fracture_points.GetPoint( i ) - if numpy.linalg.norm( numpy.array( p1 ) - numpy.array( p0 ) ) > options.tolerance: - errors.append( ( i, g2l[ duplicate ], duplicate ) ) - return Result( errors=errors ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - try: - return __action( vtk_input_file, options ) - except BaseException as e: - setup_logger.error( e ) - return Result( errors=() ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/collocatedNodes.py b/geos-mesh/src/geos/mesh/doctor/actions/collocatedNodes.py new file mode 100644 index 00000000..992d23eb --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/collocatedNodes.py @@ -0,0 +1,67 @@ +from collections import defaultdict +from dataclasses import dataclass +import numpy +from typing import Collection, 
Iterable +from vtkmodules.vtkCommonCore import reference, vtkPoints +from vtkmodules.vtkCommonDataModel import vtkIncrementalOctreePointLocator, vtkUnstructuredGrid +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.io.vtkIO import readUnstructuredGrid + + +@dataclass( frozen=True ) +class Options: + tolerance: float + + +@dataclass( frozen=True ) +class Result: + nodesBuckets: Iterable[ Iterable[ int ] ] # Each bucket contains the duplicated node indices. + wrongSupportElements: Collection[ int ] # Element indices with support node indices appearing more than once. + + +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: + points = mesh.GetPoints() + + locator = vtkIncrementalOctreePointLocator() + locator.SetTolerance( options.tolerance ) + output = vtkPoints() + locator.InitPointInsertion( output, points.GetBounds() ) + + # original ids to/from filtered ids. + filteredToOriginal = numpy.ones( points.GetNumberOfPoints(), dtype=int ) * -1 + + rejectedPoints = defaultdict( list ) + pointId = reference( 0 ) + for i in range( points.GetNumberOfPoints() ): + isInserted = locator.InsertUniquePoint( points.GetPoint( i ), pointId ) + if not isInserted: + # If it's not inserted, `pointId` contains the node that was already at that location. + # But in that case, `pointId` is the new numbering in the destination points array. + # It's more useful for the user to get the old index in the original mesh so he can look for it in his data. + setupLogger.debug( f"Point {i} at {points.GetPoint(i)} has been rejected, " + f"point {filteredToOriginal[pointId.get()]} is already inserted." ) + rejectedPoints[ pointId.get() ].append( i ) + else: + # If it's inserted, `pointId` contains the new index in the destination array. + # We store this information to be able to connect the source and destination arrays. 
+ # originalToFiltered[i] = pointId.get() + filteredToOriginal[ pointId.get() ] = i + + tmp = [] + for n, ns in rejectedPoints.items(): + tmp.append( ( n, *ns ) ) + + # Checking that the support node indices appear only once per element. + wrongSupportElements = [] + for c in range( mesh.GetNumberOfCells() ): + cell = mesh.GetCell( c ) + numPointsPerCell = cell.GetNumberOfPoints() + if len( { cell.GetPointId( i ) for i in range( numPointsPerCell ) } ) != numPointsPerCell: + wrongSupportElements.append( c ) + + return Result( nodesBuckets=tmp, wrongSupportElements=wrongSupportElements ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/collocated_nodes.py b/geos-mesh/src/geos/mesh/doctor/actions/collocated_nodes.py deleted file mode 100644 index 4881a1d3..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/collocated_nodes.py +++ /dev/null @@ -1,68 +0,0 @@ -from collections import defaultdict -from dataclasses import dataclass -import numpy -from typing import Collection, Iterable -from vtkmodules.vtkCommonCore import reference, vtkPoints -from vtkmodules.vtkCommonDataModel import vtkIncrementalOctreePointLocator -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.io.vtkIO import read_mesh - - -@dataclass( frozen=True ) -class Options: - tolerance: float - - -@dataclass( frozen=True ) -class Result: - nodes_buckets: Iterable[ Iterable[ int ] ] # Each bucket contains the duplicated node indices. - wrong_support_elements: Collection[ int ] # Element indices with support node indices appearing more than once. 
- - -def __action( mesh, options: Options ) -> Result: - points = mesh.GetPoints() - - locator = vtkIncrementalOctreePointLocator() - locator.SetTolerance( options.tolerance ) - output = vtkPoints() - locator.InitPointInsertion( output, points.GetBounds() ) - - # original ids to/from filtered ids. - filtered_to_original = numpy.ones( points.GetNumberOfPoints(), dtype=int ) * -1 - - rejected_points = defaultdict( list ) - point_id = reference( 0 ) - for i in range( points.GetNumberOfPoints() ): - is_inserted = locator.InsertUniquePoint( points.GetPoint( i ), point_id ) - if not is_inserted: - # If it's not inserted, `point_id` contains the node that was already at that location. - # But in that case, `point_id` is the new numbering in the destination points array. - # It's more useful for the user to get the old index in the original mesh, so he can look for it in his data. - setup_logger.debug( - f"Point {i} at {points.GetPoint(i)} has been rejected, point {filtered_to_original[point_id.get()]} is already inserted." - ) - rejected_points[ point_id.get() ].append( i ) - else: - # If it's inserted, `point_id` contains the new index in the destination array. - # We store this information to be able to connect the source and destination arrays. - # original_to_filtered[i] = point_id.get() - filtered_to_original[ point_id.get() ] = i - - tmp = [] - for n, ns in rejected_points.items(): - tmp.append( ( n, *ns ) ) - - # Checking that the support node indices appear only once per element. 
- wrong_support_elements = [] - for c in range( mesh.GetNumberOfCells() ): - cell = mesh.GetCell( c ) - num_points_per_cell = cell.GetNumberOfPoints() - if len( { cell.GetPointId( i ) for i in range( num_points_per_cell ) } ) != num_points_per_cell: - wrong_support_elements.append( c ) - - return Result( nodes_buckets=tmp, wrong_support_elements=wrong_support_elements ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - mesh = read_mesh( vtk_input_file ) - return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/element_volumes.py b/geos-mesh/src/geos/mesh/doctor/actions/elementVolumes.py similarity index 61% rename from geos-mesh/src/geos/mesh/doctor/actions/element_volumes.py rename to geos-mesh/src/geos/mesh/doctor/actions/elementVolumes.py index e5380c3c..03430c66 100644 --- a/geos-mesh/src/geos/mesh/doctor/actions/element_volumes.py +++ b/geos-mesh/src/geos/mesh/doctor/actions/elementVolumes.py @@ -1,24 +1,23 @@ from dataclasses import dataclass -from typing import List, Tuple import uuid -from vtkmodules.vtkCommonDataModel import VTK_HEXAHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_WEDGE +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, VTK_HEXAHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_WEDGE from vtkmodules.vtkFiltersVerdict import vtkCellSizeFilter, vtkMeshQuality from vtkmodules.util.numpy_support import vtk_to_numpy -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.io.vtkIO import read_mesh +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.io.vtkIO import readUnstructuredGrid @dataclass( frozen=True ) class Options: - min_volume: float + minVolume: float @dataclass( frozen=True ) class Result: - element_volumes: List[ Tuple[ int, float ] ] + elementVolumes: list[ tuple[ int, float ] ] -def __action( mesh, options: Options ) -> Result: +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: cs = vtkCellSizeFilter() cs.ComputeAreaOff() @@ 
-26,8 +25,8 @@ def __action( mesh, options: Options ) -> Result: cs.ComputeSumOff() cs.ComputeVertexCountOff() cs.ComputeVolumeOn() - volume_array_name = "__MESH_DOCTOR_VOLUME-" + str( uuid.uuid4() ) # Making the name unique - cs.SetVolumeArrayName( volume_array_name ) + volumeArrayName = "__MESH_DOCTOR_VOLUME-" + str( uuid.uuid4() ) # Making the name unique + cs.SetVolumeArrayName( volumeArrayName ) cs.SetInputData( mesh ) cs.Update() @@ -43,29 +42,29 @@ def __action( mesh, options: Options ) -> Result: mq.SetWedgeQualityMeasureToVolume() SUPPORTED_TYPES.append( VTK_WEDGE ) else: - setup_logger.warning( + setupLogger.warning( "Your \"pyvtk\" version does not bring pyramid nor wedge support with vtkMeshQuality. Using the fallback solution." ) mq.SetInputData( mesh ) mq.Update() - volume = cs.GetOutput().GetCellData().GetArray( volume_array_name ) + volume = cs.GetOutput().GetCellData().GetArray( volumeArrayName ) quality = mq.GetOutput().GetCellData().GetArray( "Quality" ) # Name is imposed by vtk. 
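Once the VTK filters have produced the per-cell volume (or quality) array, the `elementVolumes` action reduces to a simple threshold. A self-contained sketch of that final filtering step, with a hypothetical `findSmallVolumes` helper standing in for the `vtkCellSizeFilter` / `vtkMeshQuality` pipeline:

```python
def findSmallVolumes(volumes, minVolume):
    """Return (cellIndex, volume) pairs for every cell whose volume is
    below `minVolume`, mirroring the thresholding done on the arrays
    produced by the VTK size/quality filters."""
    return [(i, float(v)) for i, v in enumerate(volumes) if v < minVolume]

# A negative volume typically indicates a badly ordered (inverted) cell.
volumes = [1.2, 0.0004, 3.5, -0.1]
print(findSmallVolumes(volumes, 0.001))  # → [(1, 0.0004), (3, -0.1)]
```

These pairs are exactly what the action packs into `Result.elementVolumes`.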
assert volume is not None assert quality is not None volume = vtk_to_numpy( volume ) quality = vtk_to_numpy( quality ) - small_volumes: List[ Tuple[ int, float ] ] = [] + smallVolumes: list[ tuple[ int, float ] ] = [] for i, pack in enumerate( zip( volume, quality ) ): v, q = pack vol = q if mesh.GetCellType( i ) in SUPPORTED_TYPES else v - if vol < options.min_volume: - small_volumes.append( ( i, float( vol ) ) ) - return Result( element_volumes=small_volumes ) + if vol < options.minVolume: + smallVolumes.append( ( i, float( vol ) ) ) + return Result( elementVolumes=smallVolumes ) -def action( vtk_input_file: str, options: Options ) -> Result: - mesh = read_mesh( vtk_input_file ) +def action( vtkInputFile: str, options: Options ) -> Result: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/fixElementsOrderings.py b/geos-mesh/src/geos/mesh/doctor/actions/fixElementsOrderings.py new file mode 100644 index 00000000..e560551e --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/fixElementsOrderings.py @@ -0,0 +1,53 @@ +from dataclasses import dataclass +from vtkmodules.vtkCommonCore import vtkIdList +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid +from geos.mesh.utils.genericHelpers import toVtkIdList +from geos.mesh.io.vtkIO import VtkOutput, readUnstructuredGrid, writeMesh + + +@dataclass( frozen=True ) +class Options: + vtkOutput: VtkOutput + cellTypeToOrdering: dict[ int, list[ int ] ] + + +@dataclass( frozen=True ) +class Result: + output: str + unchangedCellTypes: frozenset[ int ] + + +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: + # The vtk cell type is an int and will be the key of the following mapping, + # that will point to the relevant permutation. 
+ cellTypeToOrdering: dict[ int, list[ int ] ] = options.cellTypeToOrdering + unchangedCellTypes: set[ int ] = set() # For logging purpose + + # Preparing the output mesh by first keeping the same instance type. + outputMesh = mesh.NewInstance() + outputMesh.CopyStructure( mesh ) + outputMesh.CopyAttributes( mesh ) + + # `outputMesh` now contains a full copy of the input mesh. + # We'll now modify the support nodes orderings in place if needed. + cells = outputMesh.GetCells() + for cellIdx in range( outputMesh.GetNumberOfCells() ): + cellType: int = outputMesh.GetCell( cellIdx ).GetCellType() + newOrdering = cellTypeToOrdering.get( cellType ) + if newOrdering: + supportPointIds = vtkIdList() + cells.GetCellAtId( cellIdx, supportPointIds ) + newSupportPointIds = [] + for i, v in enumerate( newOrdering ): + newSupportPointIds.append( supportPointIds.GetId( newOrdering[ i ] ) ) + cells.ReplaceCellAtId( cellIdx, toVtkIdList( newSupportPointIds ) ) + else: + unchangedCellTypes.add( cellType ) + isWrittenError = writeMesh( outputMesh, options.vtkOutput ) + return Result( output=options.vtkOutput.output if not isWrittenError else "", + unchangedCellTypes=frozenset( unchangedCellTypes ) ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/fix_elements_orderings.py b/geos-mesh/src/geos/mesh/doctor/actions/fix_elements_orderings.py deleted file mode 100644 index 3e00cf52..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/fix_elements_orderings.py +++ /dev/null @@ -1,53 +0,0 @@ -from dataclasses import dataclass -from typing import Dict, FrozenSet, List, Set -from vtkmodules.vtkCommonCore import vtkIdList -from geos.mesh.utils.genericHelpers import to_vtk_id_list -from geos.mesh.io.vtkIO import VtkOutput, read_mesh, write_mesh - - -@dataclass( frozen=True ) -class Options: - vtk_output: VtkOutput - 
cell_type_to_ordering: Dict[ int, List[ int ] ] - - -@dataclass( frozen=True ) -class Result: - output: str - unchanged_cell_types: FrozenSet[ int ] - - -def __action( mesh, options: Options ) -> Result: - # The vtk cell type is an int and will be the key of the following mapping, - # that will point to the relevant permutation. - cell_type_to_ordering: Dict[ int, List[ int ] ] = options.cell_type_to_ordering - unchanged_cell_types: Set[ int ] = set() # For logging purpose - - # Preparing the output mesh by first keeping the same instance type. - output_mesh = mesh.NewInstance() - output_mesh.CopyStructure( mesh ) - output_mesh.CopyAttributes( mesh ) - - # `output_mesh` now contains a full copy of the input mesh. - # We'll now modify the support nodes orderings in place if needed. - cells = output_mesh.GetCells() - for cell_idx in range( output_mesh.GetNumberOfCells() ): - cell_type: int = output_mesh.GetCell( cell_idx ).GetCellType() - new_ordering = cell_type_to_ordering.get( cell_type ) - if new_ordering: - support_point_ids = vtkIdList() - cells.GetCellAtId( cell_idx, support_point_ids ) - new_support_point_ids = [] - for i, v in enumerate( new_ordering ): - new_support_point_ids.append( support_point_ids.GetId( new_ordering[ i ] ) ) - cells.ReplaceCellAtId( cell_idx, to_vtk_id_list( new_support_point_ids ) ) - else: - unchanged_cell_types.add( cell_type ) - is_written_error = write_mesh( output_mesh, options.vtk_output ) - return Result( output=options.vtk_output.output if not is_written_error else "", - unchanged_cell_types=frozenset( unchanged_cell_types ) ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - mesh = read_mesh( vtk_input_file ) - return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/generate_cube.py b/geos-mesh/src/geos/mesh/doctor/actions/generateCube.py similarity index 52% rename from geos-mesh/src/geos/mesh/doctor/actions/generate_cube.py rename to 
geos-mesh/src/geos/mesh/doctor/actions/generateCube.py index f30d2089..4247eddf 100644 --- a/geos-mesh/src/geos/mesh/doctor/actions/generate_cube.py +++ b/geos-mesh/src/geos/mesh/doctor/actions/generateCube.py @@ -5,9 +5,9 @@ from vtkmodules.vtkCommonCore import vtkPoints from vtkmodules.vtkCommonDataModel import ( vtkCellArray, vtkHexahedron, vtkRectilinearGrid, vtkUnstructuredGrid, VTK_HEXAHEDRON ) -from geos.mesh.doctor.actions.generate_global_ids import __build_global_ids -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.io.vtkIO import VtkOutput, write_mesh +from geos.mesh.doctor.actions.generateGlobalIds import __buildGlobalIds +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.io.vtkIO import VtkOutput, writeMesh @dataclass( frozen=True ) @@ -24,9 +24,9 @@ class FieldInfo: @dataclass( frozen=True ) class Options: - vtk_output: VtkOutput - generate_cells_global_ids: bool - generate_points_global_ids: bool + vtkOutput: VtkOutput + generateCellsGlobalIds: bool + generatePointsGlobalIds: bool xs: Sequence[ float ] ys: Sequence[ float ] zs: Sequence[ float ] @@ -43,11 +43,14 @@ class XYZ: z: numpy.ndarray -def build_rectilinear_blocks_mesh( xyzs: Iterable[ XYZ ] ) -> vtkUnstructuredGrid: - """ - Builds an unstructured vtk grid from the `xyzs` blocks. Kind of InternalMeshGenerator. - :param xyzs: The blocks. - :return: The unstructured mesh, even if it's topologically structured. +def buildRectilinearBlocksMesh( xyzs: Iterable[ XYZ ] ) -> vtkUnstructuredGrid: + """Builds an unstructured vtk grid from the `xyzs` blocks. Kind of InternalMeshGenerator. + + Args: + xyzs (Iterable[ XYZ ]): The blocks. + + Returns: + vtkUnstructuredGrid: The unstructured mesh, even if it's topologically structured. 
""" rgs = [] for xyz in xyzs: @@ -58,89 +61,89 @@ def build_rectilinear_blocks_mesh( xyzs: Iterable[ XYZ ] ) -> vtkUnstructuredGri rg.SetZCoordinates( numpy_to_vtk( xyz.z ) ) rgs.append( rg ) - num_points = sum( map( lambda r: r.GetNumberOfPoints(), rgs ) ) - num_cells = sum( map( lambda r: r.GetNumberOfCells(), rgs ) ) + numPoints = sum( map( lambda r: r.GetNumberOfPoints(), rgs ) ) + numCells = sum( map( lambda r: r.GetNumberOfCells(), rgs ) ) points = vtkPoints() - points.Allocate( num_points ) + points.Allocate( numPoints ) for rg in rgs: for i in range( rg.GetNumberOfPoints() ): points.InsertNextPoint( rg.GetPoint( i ) ) - cell_types = [ VTK_HEXAHEDRON ] * num_cells + cellTypes = [ VTK_HEXAHEDRON ] * numCells cells = vtkCellArray() - cells.AllocateExact( num_cells, num_cells * 8 ) + cells.AllocateExact( numCells, numCells * 8 ) m = ( 0, 1, 3, 2, 4, 5, 7, 6 ) # VTK_VOXEL and VTK_HEXAHEDRON do not share the same ordering. offset = 0 for rg in rgs: for i in range( rg.GetNumberOfCells() ): c = rg.GetCell( i ) - new_cell = vtkHexahedron() + newCell = vtkHexahedron() for j in range( 8 ): - new_cell.GetPointIds().SetId( j, offset + c.GetPointId( m[ j ] ) ) - cells.InsertNextCell( new_cell ) + newCell.GetPointIds().SetId( j, offset + c.GetPointId( m[ j ] ) ) + cells.InsertNextCell( newCell ) offset += rg.GetNumberOfPoints() mesh = vtkUnstructuredGrid() mesh.SetPoints( points ) - mesh.SetCells( cell_types, cells ) + mesh.SetCells( cellTypes, cells ) return mesh -def __add_fields( mesh: vtkUnstructuredGrid, fields: Iterable[ FieldInfo ] ) -> vtkUnstructuredGrid: - for field_info in fields: - if field_info.support == "CELLS": +def __addFields( mesh: vtkUnstructuredGrid, fields: Iterable[ FieldInfo ] ) -> vtkUnstructuredGrid: + for fieldInfo in fields: + if fieldInfo.support == "CELLS": data = mesh.GetCellData() n = mesh.GetNumberOfCells() - elif field_info.support == "POINTS": + elif fieldInfo.support == "POINTS": data = mesh.GetPointData() n = mesh.GetNumberOfPoints() - 
array = numpy.ones( ( n, field_info.dimension ), dtype=float ) - vtk_array = numpy_to_vtk( array ) - vtk_array.SetName( field_info.name ) - data.AddArray( vtk_array ) + array = numpy.ones( ( n, fieldInfo.dimension ), dtype=float ) + vtkArray = numpy_to_vtk( array ) + vtkArray.SetName( fieldInfo.name ) + data.AddArray( vtkArray ) return mesh def __build( options: Options ): - def build_coordinates( positions, num_elements ): + def buildCoordinates( positions, numElements ): result = [] - it = zip( zip( positions, positions[ 1: ] ), num_elements ) + it = zip( zip( positions, positions[ 1: ] ), numElements ) try: coords, n = next( it ) while True: start, stop = coords - end_point = False - tmp = numpy.linspace( start=start, stop=stop, num=n + end_point, endpoint=end_point ) + endPoint = False + tmp = numpy.linspace( start=start, stop=stop, num=n + endPoint, endpoint=endPoint ) coords, n = next( it ) result.append( tmp ) except StopIteration: - end_point = True - tmp = numpy.linspace( start=start, stop=stop, num=n + end_point, endpoint=end_point ) + endPoint = True + tmp = numpy.linspace( start=start, stop=stop, num=n + endPoint, endpoint=endPoint ) result.append( tmp ) return numpy.concatenate( result ) - x = build_coordinates( options.xs, options.nxs ) - y = build_coordinates( options.ys, options.nys ) - z = build_coordinates( options.zs, options.nzs ) - cube = build_rectilinear_blocks_mesh( ( XYZ( x, y, z ), ) ) - cube = __add_fields( cube, options.fields ) - __build_global_ids( cube, options.generate_cells_global_ids, options.generate_points_global_ids ) + x = buildCoordinates( options.xs, options.nxs ) + y = buildCoordinates( options.ys, options.nys ) + z = buildCoordinates( options.zs, options.nzs ) + cube = buildRectilinearBlocksMesh( ( XYZ( x, y, z ), ) ) + cube = __addFields( cube, options.fields ) + __buildGlobalIds( cube, options.generateCellsGlobalIds, options.generatePointsGlobalIds ) return cube def __action( options: Options ) -> Result: - output_mesh = 
__build( options ) - write_mesh( output_mesh, options.vtk_output ) - return Result( info=f"Mesh was written to {options.vtk_output.output}" ) + outputMesh = __build( options ) + writeMesh( outputMesh, options.vtkOutput ) + return Result( info=f"Mesh was written to {options.vtkOutput.output}" ) -def action( vtk_input_file: str, options: Options ) -> Result: +def action( vtkInputFile: str, options: Options ) -> Result: try: return __action( options ) except BaseException as e: - setup_logger.error( e ) + setupLogger.error( e ) return Result( info="Something went wrong." ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/generateFractures.py b/geos-mesh/src/geos/mesh/doctor/actions/generateFractures.py new file mode 100644 index 00000000..fc8405d1 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/generateFractures.py @@ -0,0 +1,579 @@ +from collections import defaultdict +from dataclasses import dataclass +from enum import Enum +import networkx +from numpy import empty, ones, zeros +from tqdm import tqdm +from typing import Collection, Iterable, Mapping, Optional, Sequence +from vtk import vtkDataArray +from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints +from vtkmodules.vtkCommonDataModel import ( vtkCell, vtkCellArray, vtkPolygon, vtkUnstructuredGrid, VTK_POLYGON, + VTK_POLYHEDRON ) +from vtkmodules.util.numpy_support import numpy_to_vtk, vtk_to_numpy +from vtkmodules.util.vtkConstants import VTK_ID_TYPE +from geos.mesh.doctor.actions.vtkPolyhedron import FaceStream +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.utils.arrayHelpers import hasArray +from geos.mesh.utils.genericHelpers import toVtkIdList, vtkIter +from geos.mesh.io.vtkIO import VtkOutput, readUnstructuredGrid, writeMesh +""" +TypeAliases cannot be used with Python 3.9. 
A simple assignment, as described there, is used instead: +https://docs.python.org/3/library/typing.html#typing.TypeAlias +""" + +IDMapping = Mapping[ int, int ] +CellsPointsCoords = dict[ int, list[ tuple[ float ] ] ] +Coordinates3D = tuple[ float ] + + +class FracturePolicy( Enum ): + FIELD = 0 + INTERNAL_SURFACES = 1 + + +@dataclass( frozen=True ) +class Options: + policy: FracturePolicy + field: str + fieldValuesCombined: frozenset[ int ] + fieldValuesPerFracture: list[ frozenset[ int ] ] + meshVtkOutput: VtkOutput + allFracturesVtkOutput: list[ VtkOutput ] + + +@dataclass( frozen=True ) +class Result: + info: str + + +@dataclass( frozen=True ) +class FractureInfo: + nodeToCells: Mapping[ int, Iterable[ int ] ] # For each fracture node, gives all the cells that use this node. + faceNodes: Iterable[ Collection[ int ] ] # For each fracture face, returns the nodes of this face + faceCellId: Iterable[ int ] # For each fracture face, returns the corresponding id of the cell in the mesh + + +def buildNodeToCells( mesh: vtkUnstructuredGrid, + faceNodes: Iterable[ Iterable[ int ] ] ) -> dict[ int, Iterable[ int ] ]: + # TODO normally, just a list and not a set should be enough.
+ nodeToCells: dict[ int, set[ int ] ] = defaultdict( set ) + + fractureNodes: set[ int ] = set() + for fns in faceNodes: + for n in fns: + fractureNodes.add( n ) + + for cellId in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the node to cells mapping" ): + cellPoints: frozenset[ int ] = frozenset( vtkIter( mesh.GetCell( cellId ).GetPointIds() ) ) + intersection: Iterable[ int ] = cellPoints & fractureNodes + for node in intersection: + nodeToCells[ node ].add( cellId ) + + return nodeToCells + + +def __buildFractureInfoFromFields( mesh: vtkUnstructuredGrid, f: Sequence[ int ], + fieldValues: frozenset[ int ] ) -> FractureInfo: + cellsToFaces: dict[ int, list[ int ] ] = defaultdict( list ) + # For each face of each cell, we search for the unique neighbor cell (if it exists). + # Then, if the 2 values of the two cells match the field requirements, + # we store the cell and its local face index: this is part of the surface that will need to be split. + cell: vtkCell + for cellId in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the cell to faces mapping" ): + # No need to consider a cell if its field value is not in the target range. + if f[ cellId ] not in fieldValues: + continue + cell = mesh.GetCell( cellId ) + for i in range( cell.GetNumberOfFaces() ): + neighborCellIds = vtkIdList() + mesh.GetCellNeighbors( cellId, cell.GetFace( i ).GetPointIds(), neighborCellIds ) + assert neighborCellIds.GetNumberOfIds() < 2 + for j in range( neighborCellIds.GetNumberOfIds() ): # It's 0 or 1... + neighborCellId = neighborCellIds.GetId( j ) + if f[ neighborCellId ] != f[ cellId ] and f[ neighborCellId ] in fieldValues: + # TODO add this (cellIds, faceId) information to the fractureInfo? + cellsToFaces[ cellId ].append( i ) + faceNodes: list[ Collection[ int ] ] = list() + faceNodesHashes: set[ frozenset[ int ] ] = set() # A temporary set to avoid adding the same face multiple times.
+ for cellId, facesIds in tqdm( cellsToFaces.items(), desc="Extracting the faces of the fractures" ): + cell = mesh.GetCell( cellId ) + for faceId in facesIds: + fn: Collection[ int ] = tuple( vtkIter( cell.GetFace( faceId ).GetPointIds() ) ) + fnh = frozenset( fn ) + if fnh not in faceNodesHashes: + faceNodesHashes.add( fnh ) + faceNodes.append( fn ) + nodeToCells: dict[ int, Iterable[ int ] ] = buildNodeToCells( mesh, faceNodes ) + faceCellId: list = list() # no cell of the mesh corresponds to that face when fracture policy is 'field' + + return FractureInfo( nodeToCells=nodeToCells, faceNodes=faceNodes, faceCellId=faceCellId ) + + +def __buildFractureInfoFromInternalSurfaces( mesh: vtkUnstructuredGrid, f: Sequence[ int ], + fieldValues: frozenset[ int ] ) -> FractureInfo: + nodeToCells: dict[ int, list[ int ] ] = defaultdict( list ) + faceNodes: list[ Collection[ int ] ] = list() + faceCellId: list[ int ] = list() + for cellId in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the face to nodes mapping" ): + cell = mesh.GetCell( cellId ) + if cell.GetCellDimension() == 2: + if f[ cellId ] in fieldValues: + nodes = list() + for v in range( cell.GetNumberOfPoints() ): + pointId: int = cell.GetPointId( v ) + nodeToCells[ pointId ] = list() + nodes.append( pointId ) + faceNodes.append( tuple( nodes ) ) + faceCellId.append( cellId ) + + for cellId in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the node to cells mapping" ): + cell = mesh.GetCell( cellId ) + if cell.GetCellDimension() == 3: + for v in range( cell.GetNumberOfPoints() ): + if cell.GetPointId( v ) in nodeToCells: + nodeToCells[ cell.GetPointId( v ) ].append( cellId ) + + return FractureInfo( nodeToCells=nodeToCells, faceNodes=faceNodes, faceCellId=faceCellId ) + + +def buildFractureInfo( mesh: vtkUnstructuredGrid, + options: Options, + combinedFractures: bool, + fractureId: int = 0 ) -> FractureInfo: + field = options.field + if combinedFractures: + fieldValues = 
options.fieldValuesCombined + else: + fieldValues = options.fieldValuesPerFracture[ fractureId ] + cellData = mesh.GetCellData() + if cellData.HasArray( field ): + f = vtk_to_numpy( cellData.GetArray( field ) ) + else: + raise ValueError( f"Cell field {field} does not exist in mesh, nothing done" ) + + if options.policy == FracturePolicy.FIELD: + return __buildFractureInfoFromFields( mesh, f, fieldValues ) + elif options.policy == FracturePolicy.INTERNAL_SURFACES: + return __buildFractureInfoFromInternalSurfaces( mesh, f, fieldValues ) + + +def buildCellToCellGraph( mesh: vtkUnstructuredGrid, fracture: FractureInfo ) -> networkx.Graph: + """Connects all the cells that touch the fracture by at least one node. + Two cells are connected when they share at least a face which is not a face of the fracture. + + Args: + mesh (vtkUnstructuredGrid): The input mesh. + fracture (FractureInfo): The fracture info. + + Returns: + networkx.Graph: The graph: each node of this graph is the index of the cell. + There's an edge between two nodes of the graph if the cells share a face. + """ + # Faces are identified by their nodes. But the order of those nodes may vary while referring to the same face. + # Therefore we compute some kinds of hashes of those face to easily detect if a face is part of the fracture. + tmp: list[ frozenset[ int ] ] = list() + for fn in fracture.faceNodes: + tmp.append( frozenset( fn ) ) + faceHashes: frozenset[ frozenset[ int ] ] = frozenset( tmp ) + + # We extract the list of the cells that touch the fracture by at least one node. + cells: set[ int ] = set() + for cellIds in fracture.nodeToCells.values(): + for cellId in cellIds: + cells.add( cellId ) + + # Using the last precomputed containers, we're now building the dict which connects + # every face (hash) of the fracture to the cells that touch the face... 
+ faceToCells: dict[ frozenset[ int ], list[ int ] ] = defaultdict( list ) + for cellId in tqdm( cells, desc="Computing the cell to cell graph" ): + cell: vtkCell = mesh.GetCell( cellId ) + for faceId in range( cell.GetNumberOfFaces() ): + faceHash: frozenset[ int ] = frozenset( vtkIter( cell.GetFace( faceId ).GetPointIds() ) ) + if faceHash not in faceHashes: + faceToCells[ faceHash ].append( cellId ) + + # ... eventually, when a face touches two cells, this means that those two cells share the same face + # and should be connected in the final cell to cell graph. + cellToCell = networkx.Graph() + cellToCell.add_nodes_from( cells ) + cellToCell.add_edges_from( filter( lambda cs: len( cs ) == 2, faceToCells.values() ) ) + + return cellToCell + + +def _identifySplit( numPoints: int, cellToCell: networkx.Graph, + nodeToCells: dict[ int, Iterable[ int ] ] ) -> dict[ int, IDMapping ]: + """For each cell, compute the node indices replacements. + + Args: + numPoints (int): The number of points in the whole mesh (not the fracture). + cellToCell (networkx.Graph): The cell to cell graph (connection through common faces). + nodeToCells (dict[ int, Iterable[ int ] ]): Maps the nodes of the fracture to the cells relying on this node. + + Returns: + dict[ int, IDMapping ]: For each cell (first key), returns a mapping from the current index + and the new index that should replace the current index. + Note that the current index and the new index can be identical: no replacement should be done then. + """ + + class NewIndex: + """ + Returns the next available index. + Note that the first time an index is met, the index itself is returned: + we do not want to change an index if we do not have to. 
+ """ + + def __init__( self, numNodes: int ): + self.__currentLastIndex = numNodes - 1 + self.__seen: set[ int ] = set() + + def __call__( self, index: int ) -> int: + if index in self.__seen: + self.__currentLastIndex += 1 + return self.__currentLastIndex + else: + self.__seen.add( index ) + return index + + buildNewIndex = NewIndex( numPoints ) + result: dict[ int, IDMapping ] = defaultdict( dict ) + # Iteration over `sorted` nodes to have a predictable result for tests. + for node, cells in tqdm( sorted( nodeToCells.items() ), desc="Identifying the node splits" ): + for connectedCells in networkx.connected_components( cellToCell.subgraph( cells ) ): + # Each group of connect cells need around `node` must consider the same `node`. + # Separate groups must have different (duplicated) nodes. + newIndex: int = buildNewIndex( node ) + for cell in connectedCells: + result[ cell ][ node ] = newIndex + return result + + +def __copyFieldsSplitMesh( oldMesh: vtkUnstructuredGrid, splitMesh: vtkUnstructuredGrid, + addedPointsWithOldId: list[ tuple[ int ] ] ) -> None: + """Copies the fields from the old mesh to the new one. + Point data will be duplicated for collocated nodes. + + Args: + oldMesh (vtkUnstructuredGrid): The mesh before the split._ + splitMesh (vtkUnstructuredGrid): The mesh after the split. Will receive the fields in place. + addedPointsWithOldId (list[ tuple[ int ] ]): _description_ + """ + # Copying the cell data. The cells are the same, just their nodes support have changed. + inputCellData = oldMesh.GetCellData() + for i in range( inputCellData.GetNumberOfArrays() ): + inputArray: vtkDataArray = inputCellData.GetArray( i ) + setupLogger.info( f"Copying cell field \"{inputArray.GetName()}\"." ) + tmp = inputArray.NewInstance() + tmp.DeepCopy( inputArray ) + splitMesh.GetCellData().AddArray( inputArray ) + + # Copying field data. This data is a priori not related to geometry. 
+ inputFieldData = oldMesh.GetFieldData() + for i in range( inputFieldData.GetNumberOfArrays() ): + inputArray = inputFieldData.GetArray( i ) + setupLogger.info( f"Copying field data \"{inputArray.GetName()}\"." ) + tmp = inputArray.NewInstance() + tmp.DeepCopy( inputArray ) + splitMesh.GetFieldData().AddArray( tmp ) + + # Copying point data. Need to take the new points into account. + inputPointData = oldMesh.GetPointData() + newNumberPoints: int = splitMesh.GetNumberOfPoints() + for i in range( inputPointData.GetNumberOfArrays() ): + oldPointsArray = vtk_to_numpy( inputPointData.GetArray( i ) ) + name: str = inputPointData.GetArrayName( i ) + setupLogger.info( f"Copying point data \"{name}\"." ) + oldNrows: int = oldPointsArray.shape[ 0 ] + oldNcols: int = 1 if len( oldPointsArray.shape ) == 1 else oldPointsArray.shape[ 1 ] + # Reshape oldPointsArray if it is 1-dimensional + if len( oldPointsArray.shape ) == 1: + oldPointsArray = oldPointsArray.reshape( ( oldNrows, 1 ) ) + newPointsArray = empty( ( newNumberPoints, oldNcols ) ) + newPointsArray[ :oldNrows, : ] = oldPointsArray + for newAndOldId in addedPointsWithOldId: + newPointsArray[ newAndOldId[ 0 ], : ] = oldPointsArray[ newAndOldId[ 1 ], : ] + # Reshape the VTK array to match the original dimensions + if oldNcols > 1: + vtkArray = numpy_to_vtk( newPointsArray.flatten() ) + vtkArray.SetNumberOfComponents( oldNcols ) + vtkArray.SetNumberOfTuples( newNumberPoints ) + else: + vtkArray = numpy_to_vtk( newPointsArray ) + vtkArray.SetName( name ) + splitMesh.GetPointData().AddArray( vtkArray ) + + +def __copyFieldsFractureMesh( oldMesh: vtkUnstructuredGrid, fractureMesh: vtkUnstructuredGrid, faceCellId: list[ int ], + node3dToNode2d: IDMapping ) -> None: + """Copies the fields from the old mesh to the new fracture when using the internalSurfaces policy. + + Args: + oldMesh (vtkUnstructuredGrid): The mesh before the split. + fractureMesh (vtkUnstructuredGrid): The fracture mesh generated from the fractureInfo.
+ faceCellId (list[ int ]): The list of cell IDs that define the fracture faces. + node3dToNode2d (IDMapping): A mapping from 3D node IDs to 2D node IDs. + """ + # No copy of field data will be done with the fracture mesh because it may lose its relevance compared to the split mesh. + # Copying the cell data. The interesting cells are the ones stored in faceCellId. + newNumberCells: int = fractureMesh.GetNumberOfCells() + inputCellData = oldMesh.GetCellData() + for i in range( inputCellData.GetNumberOfArrays() ): + oldCellsArray = vtk_to_numpy( inputCellData.GetArray( i ) ) + oldNrows: int = oldCellsArray.shape[ 0 ] + if len( oldCellsArray.shape ) == 1: + oldCellsArray = oldCellsArray.reshape( ( oldNrows, 1 ) ) + name: str = inputCellData.GetArrayName( i ) + setupLogger.info( f"Copying cell data \"{name}\"." ) + newArray = oldCellsArray[ faceCellId, : ] + # Reshape the VTK array to match the original dimensions + oldNcols: int = 1 if len( oldCellsArray.shape ) == 1 else oldCellsArray.shape[ 1 ] + if oldNcols > 1: + vtkArray = numpy_to_vtk( newArray.flatten() ) + vtkArray.SetNumberOfComponents( oldNcols ) + vtkArray.SetNumberOfTuples( newNumberCells ) + else: + vtkArray = numpy_to_vtk( newArray ) + vtkArray.SetName( name ) + fractureMesh.GetCellData().AddArray( vtkArray ) + + newNumberPoints: int = fractureMesh.GetNumberOfPoints() + inputPointData = oldMesh.GetPointData() + for i in range( inputPointData.GetNumberOfArrays() ): + oldPointsArray = vtk_to_numpy( inputPointData.GetArray( i ) ) + oldNrows = oldPointsArray.shape[ 0 ] + if len( oldPointsArray.shape ) == 1: + oldPointsArray = oldPointsArray.reshape( ( oldNrows, 1 ) ) + name = inputPointData.GetArrayName( i ) + setupLogger.info( f"Copying point data \"{name}\"." 
) + newArray = oldPointsArray[ list( node3dToNode2d.keys() ), : ] + oldNcols = 1 if len( oldPointsArray.shape ) == 1 else oldPointsArray.shape[ 1 ] + if oldNcols > 1: + vtkArray = numpy_to_vtk( newArray.flatten() ) + vtkArray.SetNumberOfComponents( oldNcols ) + vtkArray.SetNumberOfTuples( newNumberPoints ) + else: + vtkArray = numpy_to_vtk( newArray ) + vtkArray.SetName( name ) + fractureMesh.GetPointData().AddArray( vtkArray ) + + +def __performSplit( oldMesh: vtkUnstructuredGrid, cellToNodeMapping: Mapping[ int, IDMapping ] ) -> vtkUnstructuredGrid: + """Split the main 3d mesh based on the node duplication information contained in @p cellToNodeMapping + + Args: + oldMesh (vtkUnstructuredGrid): The main 3d mesh. + cellToNodeMapping (Mapping[ int, IDMapping ]): For each cell, + gives the nodes that must be duplicated and their new index. + + Returns: + vtkUnstructuredGrid: The main 3d mesh split at the fracture location. + """ + addedPoints: set[ int ] = set() + addedPointsWithOldId: list[ tuple[ int ] ] = list() + for nodeMapping in cellToNodeMapping.values(): + for i, o in nodeMapping.items(): + if i != o: + addedPoints.add( o ) + addedPointsWithOldId.append( ( o, i ) ) + numNewPoints: int = oldMesh.GetNumberOfPoints() + len( addedPoints ) + + # Creating the new points for the new mesh. + oldPoints: vtkPoints = oldMesh.GetPoints() + newPoints = vtkPoints() + newPoints.SetNumberOfPoints( numNewPoints ) + collocatedNodes = ones( numNewPoints, dtype=int ) * -1 + # Copying old points into the new container. + for p in range( oldPoints.GetNumberOfPoints() ): + newPoints.SetPoint( p, oldPoints.GetPoint( p ) ) + collocatedNodes[ p ] = p + # Creating the new collocated/duplicated points based on the old points positions. + for nodeMapping in cellToNodeMapping.values(): + for i, o in nodeMapping.items(): + if i != o: + newPoints.SetPoint( o, oldPoints.GetPoint( i ) ) + collocatedNodes[ o ] = i + collocatedNodes.flags.writeable = False + + # We are creating a new mesh. 
+ # The cells will be the same, except that some of their nodes may be duplicated or renumbered. + # In vtk, the polyhedron and the standard cells are managed differently. + # Also, it looks like the internal representation is being modified + # (see https://gitlab.kitware.com/vtk/vtk/-/merge_requests/9812) + # so we'll try nothing fancy for the moment. + # Maybe in the future using a `DeepCopy` of the vtkCellArray can be considered? + # The cell point ids could be modified in place then. + newMesh = oldMesh.NewInstance() + newMesh.SetPoints( newPoints ) + newMesh.Allocate( oldMesh.GetNumberOfCells() ) + + for c in tqdm( range( oldMesh.GetNumberOfCells() ), desc="Performing the mesh split" ): + nodeMapping: IDMapping = cellToNodeMapping.get( c, {} ) + cell: vtkCell = oldMesh.GetCell( c ) + cellType: int = cell.GetCellType() + # For polyhedron, we'll manipulate the face stream directly. + if cellType == VTK_POLYHEDRON: + faceStream = vtkIdList() + oldMesh.GetFaceStream( c, faceStream ) + newFaceNodes: list[ list[ int ] ] = list() + for faceNodes in FaceStream.buildFromVtkIdList( faceStream ).faceNodes: + newPointIds = list() + for currentPointId in faceNodes: + newPointId: int = nodeMapping.get( currentPointId, currentPointId ) + newPointIds.append( newPointId ) + newFaceNodes.append( newPointIds ) + newMesh.InsertNextCell( cellType, toVtkIdList( FaceStream( newFaceNodes ).dump() ) ) + else: + # For the standard cells, we extract the point ids of the cell directly. + # Then the values will be (potentially) overwritten in place, before being sent back into the cell.
+ cellPointIds: vtkIdList = cell.GetPointIds() + for i in range( cellPointIds.GetNumberOfIds() ): + currentPointId: int = cellPointIds.GetId( i ) + newPointId: int = nodeMapping.get( currentPointId, currentPointId ) + cellPointIds.SetId( i, newPointId ) + newMesh.InsertNextCell( cellType, cellPointIds ) + + __copyFieldsSplitMesh( oldMesh, newMesh, addedPointsWithOldId ) + + return newMesh + + +def __generateFractureMesh( oldMesh: vtkUnstructuredGrid, fractureInfo: FractureInfo, + cellToNodeMapping: Mapping[ int, IDMapping ] ) -> vtkUnstructuredGrid: + """Generates the mesh of the fracture. + + Args: + oldMesh (vtkUnstructuredGrid): The main 3d mesh. + fractureInfo (FractureInfo): The fracture description. + cellToNodeMapping (Mapping[ int, IDMapping ]): For each cell, gives the nodes that must be duplicated + and their new index. + + Returns: + vtkUnstructuredGrid: The fracture mesh. + """ + setupLogger.info( "Generating the meshes" ) + + meshPoints: vtkPoints = oldMesh.GetPoints() + isNodeDuplicated = zeros( meshPoints.GetNumberOfPoints(), dtype=bool ) # defaults to False + for nodeMapping in cellToNodeMapping.values(): + for i, o in nodeMapping.items(): + if not isNodeDuplicated[ i ]: + isNodeDuplicated[ i ] = i != o + + # Some elements may have none of their nodes duplicated. + # In this case, the face must be discarded from the fracture mesh because the neighboring 3d elements won't follow.
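The split above leans on the face-hash trick described in `buildCellToCellGraph`: a face is keyed by the frozenset of its node ids, so the same physical face matches regardless of the order in which each cell lists its nodes, and faces shared by exactly two cells become graph edges. A minimal, vtk-free sketch with made-up cell and node ids:

```python
from collections import defaultdict

# Hypothetical (cell id, face node ids) pairs; the same physical face
# appears twice with different node orderings.
faces = [(0, (1, 2, 6, 5)), (1, (5, 6, 2, 1)), (1, (2, 3, 7, 6))]

face_to_cells: dict[frozenset, list[int]] = defaultdict(list)
for cell_id, nodes in faces:
    face_to_cells[frozenset(nodes)].append(cell_id)

# Faces touched by exactly two cells become edges of the cell-to-cell graph.
edges = [tuple(cells) for cells in face_to_cells.values() if len(cells) == 2]
print(edges)  # [(0, 1)]
```

The real code additionally skips hashes that belong to the fracture itself, so two cells facing each other across the fracture are *not* connected and end up in separate components, each receiving its own copy of the shared nodes.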
+ faceNodes: list[ Collection[ int ] ] = list() + discardedFaceNodes: set[ Iterable[ int ] ] = set() + if fractureInfo.faceCellId != list(): # The fracture policy is 'internalSurfaces' + faceCellId: list[ int ] = list() + for ns, fId in zip( fractureInfo.faceNodes, fractureInfo.faceCellId ): + if any( map( isNodeDuplicated.__getitem__, ns ) ): + faceNodes.append( ns ) + faceCellId.append( fId ) + else: + discardedFaceNodes.add( ns ) + else: # The fracture policy is 'field' + for ns in fractureInfo.faceNodes: + if any( map( isNodeDuplicated.__getitem__, ns ) ): + faceNodes.append( ns ) + else: + discardedFaceNodes.add( ns ) + + if discardedFaceNodes: + msg: str = "(" + '), ('.join( map( lambda dfns: ", ".join( map( str, dfns ) ), discardedFaceNodes ) ) + ")" + setupLogger.info( f"The faces made of nodes [{msg}] were discarded" + " from the fracture mesh because none of their nodes were duplicated." ) + + fractureNodesTmp = ones( meshPoints.GetNumberOfPoints(), dtype=int ) * -1 + for ns in faceNodes: + for n in ns: + fractureNodesTmp[ n ] = n + fractureNodes: Collection[ int ] = tuple( filter( lambda n: n > -1, fractureNodesTmp ) ) + numPoints: int = len( fractureNodes ) + points = vtkPoints() + points.SetNumberOfPoints( numPoints ) + node3dToNode2d: IDMapping = dict() # Building the node mapping, from 3d mesh nodes to 2d fracture nodes. + for i, n in enumerate( fractureNodes ): + coords: Coordinates3D = meshPoints.GetPoint( n ) + points.SetPoint( i, coords ) + node3dToNode2d[ n ] = i + + # The polygons are constructed in the same order as the faces defined in the fractureInfo. Therefore, + # fractureInfo.faceCellId can be used to link old cells to fracture cells for copy with internalSurfaces.
+ polygons = vtkCellArray() + for ns in faceNodes: + polygon = vtkPolygon() + polygon.GetPointIds().SetNumberOfIds( len( ns ) ) + for i, n in enumerate( ns ): + polygon.GetPointIds().SetId( i, node3dToNode2d[ n ] ) + polygons.InsertNextCell( polygon ) + + buckets: dict[ int, set[ int ] ] = defaultdict( set ) + for nodeMapping in cellToNodeMapping.values(): + for i, o in nodeMapping.items(): + k: Optional[ int ] = node3dToNode2d.get( min( i, o ) ) + if k is not None: + buckets[ k ].update( ( i, o ) ) + + assert set( buckets.keys() ) == set( range( numPoints ) ) + maxCollocatedNodes: int = max( map( len, buckets.values() ) ) if buckets.values() else 0 + collocatedNodes = ones( ( numPoints, maxCollocatedNodes ), dtype=int ) * -1 + for i, bucket in buckets.items(): + for j, val in enumerate( bucket ): + collocatedNodes[ i, j ] = val + array = numpy_to_vtk( collocatedNodes, array_type=VTK_ID_TYPE ) + array.SetName( "collocatedNodes" ) + + fractureMesh = vtkUnstructuredGrid() # We could be using vtkPolyData, but it's not supported by GEOS for now. 
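The `collocatedNodes` attribute built just above packs variable-length buckets of collocated node ids into a fixed-width matrix padded with -1 before handing it to `numpy_to_vtk`. A standalone sketch of that padding step (the bucket contents below are hypothetical; the real buckets map each fracture node to the 3d-mesh node ids collocated with it):

```python
import numpy as np

# Hypothetical buckets: fracture-node index -> set of collocated 3d-mesh node ids.
buckets: dict[int, set[int]] = {0: {4, 17}, 1: {5}, 2: {6, 18, 25}}

num_points = len(buckets)
width = max(map(len, buckets.values()))  # widest bucket sets the column count
collocated = np.ones((num_points, width), dtype=int) * -1  # -1 marks "no entry"
for i, bucket in buckets.items():
    for j, val in enumerate(sorted(bucket)):  # sorted for a deterministic layout
        collocated[i, j] = val

print(collocated.tolist())  # [[4, 17, -1], [5, -1, -1], [6, 18, 25]]
```

GEOS reads the -1 padding as "no further collocated node", which is why a rectangular array is enough even though buckets have different sizes.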
+ fractureMesh.SetPoints( points ) + if polygons.GetNumberOfCells() > 0: + fractureMesh.SetCells( [ VTK_POLYGON ] * polygons.GetNumberOfCells(), polygons ) + fractureMesh.GetPointData().AddArray( array ) + + # The copy of fields from the old mesh to the fracture is only available when using the internalSurfaces policy + # because the FractureInfo is linked to 2D elements from the oldMesh + if fractureInfo.faceCellId != list(): + __copyFieldsFractureMesh( oldMesh, fractureMesh, faceCellId, node3dToNode2d ) + + return fractureMesh + + +def __splitMeshOnFractures( mesh: vtkUnstructuredGrid, + options: Options ) -> tuple[ vtkUnstructuredGrid, list[ vtkUnstructuredGrid ] ]: + allFractureInfos: list[ FractureInfo ] = list() + for fractureId in range( len( options.fieldValuesPerFracture ) ): + fractureInfo: FractureInfo = buildFractureInfo( mesh, options, False, fractureId ) + allFractureInfos.append( fractureInfo ) + combinedFractures: FractureInfo = buildFractureInfo( mesh, options, True ) + cellToCell: networkx.Graph = buildCellToCellGraph( mesh, combinedFractures ) + cellToNodeMapping: Mapping[ int, IDMapping ] = _identifySplit( mesh.GetNumberOfPoints(), cellToCell, + combinedFractures.nodeToCells ) + outputMesh: vtkUnstructuredGrid = __performSplit( mesh, cellToNodeMapping ) + fractureMeshes: list[ vtkUnstructuredGrid ] = list() + for fractureInfoSeparated in allFractureInfos: + fractureMesh: vtkUnstructuredGrid = __generateFractureMesh( mesh, fractureInfoSeparated, cellToNodeMapping ) + fractureMeshes.append( fractureMesh ) + return ( outputMesh, fractureMeshes ) + + +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: + outputMesh, fractureMeshes = __splitMeshOnFractures( mesh, options ) + writeMesh( outputMesh, options.meshVtkOutput ) + for i, fractureMesh in enumerate( fractureMeshes ): + writeMesh( fractureMesh, options.allFracturesVtkOutput[ i ] ) + # TODO provide statistics about what was actually performed (size of the fracture, number of 
split nodes...). + return Result( info="OK" ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + try: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + # Mesh cannot contain global ids before splitting. + if hasArray( mesh, [ "GLOBAL_IDS_POINTS", "GLOBAL_IDS_CELLS" ] ): + errMsg: str = ( "The mesh cannot contain global ids for either cells or points. The correct procedure " + + "is to split the mesh and then generate global ids for the new split meshes." ) + setupLogger.error( errMsg ) + raise ValueError( errMsg ) + return __action( mesh, options ) + except BaseException as e: + setupLogger.error( e ) + return Result( info="Something went wrong" ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/generateGlobalIds.py b/geos-mesh/src/geos/mesh/doctor/actions/generateGlobalIds.py new file mode 100644 index 00000000..34948350 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/generateGlobalIds.py @@ -0,0 +1,63 @@ +from dataclasses import dataclass +from vtkmodules.vtkCommonCore import vtkIdTypeArray +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.io.vtkIO import VtkOutput, readUnstructuredGrid, writeMesh + + +@dataclass( frozen=True ) +class Options: + vtkOutput: VtkOutput + generateCellsGlobalIds: bool + generatePointsGlobalIds: bool + + +@dataclass( frozen=True ) +class Result: + info: str + + +def __buildGlobalIds( mesh: vtkUnstructuredGrid, generateCellsGlobalIds: bool, generatePointsGlobalIds: bool ) -> None: + """Adds the global ids for cells and points in place into the mesh instance. + + Args: + mesh (vtkUnstructuredGrid): The mesh to modify. + generateCellsGlobalIds (bool): If True, generates the global ids for cells. Else, does nothing. + generatePointsGlobalIds (bool): If True, generates the global ids for points. Else, does nothing. + """ + # Building GLOBAL_IDS for points and cells.
+ # First for points... + if mesh.GetPointData().GetGlobalIds(): + setupLogger.error( "Mesh already has global ids for points; nothing done." ) + elif generatePointsGlobalIds: + pointGlobalIds = vtkIdTypeArray() + pointGlobalIds.SetName( "GLOBAL_IDS_POINTS" ) + pointGlobalIds.Allocate( mesh.GetNumberOfPoints() ) + for i in range( mesh.GetNumberOfPoints() ): + pointGlobalIds.InsertNextValue( i ) + mesh.GetPointData().SetGlobalIds( pointGlobalIds ) + # ... then for cells. + if mesh.GetCellData().GetGlobalIds(): + setupLogger.error( "Mesh already has global ids for cells; nothing done." ) + elif generateCellsGlobalIds: + cellsGlobalIds = vtkIdTypeArray() + cellsGlobalIds.SetName( "GLOBAL_IDS_CELLS" ) + cellsGlobalIds.Allocate( mesh.GetNumberOfCells() ) + for i in range( mesh.GetNumberOfCells() ): + cellsGlobalIds.InsertNextValue( i ) + mesh.GetCellData().SetGlobalIds( cellsGlobalIds ) + + +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: + __buildGlobalIds( mesh, options.generateCellsGlobalIds, options.generatePointsGlobalIds ) + writeMesh( mesh, options.vtkOutput ) + return Result( info=f"Mesh was written to {options.vtkOutput.output}" ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + try: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + return __action( mesh, options ) + except BaseException as e: + setupLogger.error( e ) + return Result( info="Something went wrong."
) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/generate_fractures.py b/geos-mesh/src/geos/mesh/doctor/actions/generate_fractures.py deleted file mode 100644 index 32f809db..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/generate_fractures.py +++ /dev/null @@ -1,570 +0,0 @@ -from collections import defaultdict -from dataclasses import dataclass -from enum import Enum -import networkx -from numpy import empty, ones, zeros -from tqdm import tqdm -from typing import Collection, Iterable, Mapping, Optional, Sequence -from vtk import vtkDataArray -from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints -from vtkmodules.vtkCommonDataModel import ( vtkCell, vtkCellArray, vtkPolygon, vtkUnstructuredGrid, VTK_POLYGON, - VTK_POLYHEDRON ) -from vtkmodules.util.numpy_support import numpy_to_vtk, vtk_to_numpy -from vtkmodules.util.vtkConstants import VTK_ID_TYPE -from geos.mesh.doctor.actions.vtk_polyhedron import FaceStream -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.utils.arrayHelpers import has_array -from geos.mesh.utils.genericHelpers import to_vtk_id_list, vtk_iter -from geos.mesh.io.vtkIO import VtkOutput, read_mesh, write_mesh -""" -TypeAliases cannot be used with Python 3.9. 
A simple assignment like described there will be used: -https://docs.python.org/3/library/typing.html#typing.TypeAlias:~:text=through%20simple%20assignment%3A-,Vector%20%3D%20list%5Bfloat%5D,-Or%20marked%20with -""" - -IDMapping = Mapping[ int, int ] -CellsPointsCoords = dict[ int, list[ tuple[ float ] ] ] -Coordinates3D = tuple[ float ] - - -class FracturePolicy( Enum ): - FIELD = 0 - INTERNAL_SURFACES = 1 - - -@dataclass( frozen=True ) -class Options: - policy: FracturePolicy - field: str - field_values_combined: frozenset[ int ] - field_values_per_fracture: list[ frozenset[ int ] ] - mesh_VtkOutput: VtkOutput - all_fractures_VtkOutput: list[ VtkOutput ] - - -@dataclass( frozen=True ) -class Result: - info: str - - -@dataclass( frozen=True ) -class FractureInfo: - node_to_cells: Mapping[ int, Iterable[ int ] ] # For each _fracture_ node, gives all the cells that use this node. - face_nodes: Iterable[ Collection[ int ] ] # For each fracture face, returns the nodes of this face - face_cell_id: Iterable[ int ] # For each fracture face, returns the corresponding id of the cell in the mesh - - -def build_node_to_cells( mesh: vtkUnstructuredGrid, - face_nodes: Iterable[ Iterable[ int ] ] ) -> dict[ int, Iterable[ int ] ]: - # TODO normally, just a list and not a set should be enough. 
- node_to_cells: dict[ int, set[ int ] ] = defaultdict( set ) - - fracture_nodes: set[ int ] = set() - for fns in face_nodes: - for n in fns: - fracture_nodes.add( n ) - - for cell_id in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the node to cells mapping" ): - cell_points: frozenset[ int ] = frozenset( vtk_iter( mesh.GetCell( cell_id ).GetPointIds() ) ) - intersection: Iterable[ int ] = cell_points & fracture_nodes - for node in intersection: - node_to_cells[ node ].add( cell_id ) - - return node_to_cells - - -def __build_fracture_info_from_fields( mesh: vtkUnstructuredGrid, f: Sequence[ int ], - field_values: frozenset[ int ] ) -> FractureInfo: - cells_to_faces: dict[ int, list[ int ] ] = defaultdict( list ) - # For each face of each cell, we search for the unique neighbor cell (if it exists). - # Then, if the 2 values of the two cells match the field requirements, - # we store the cell and its local face index: this is indeed part of the surface that we'll need to be split. - cell: vtkCell - for cell_id in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the cell to faces mapping" ): - if f[ cell_id ] not in field_values: # No need to consider a cell if its field value is not in the target range. - continue - cell = mesh.GetCell( cell_id ) - for i in range( cell.GetNumberOfFaces() ): - neighbor_cell_ids = vtkIdList() - mesh.GetCellNeighbors( cell_id, cell.GetFace( i ).GetPointIds(), neighbor_cell_ids ) - assert neighbor_cell_ids.GetNumberOfIds() < 2 - for j in range( neighbor_cell_ids.GetNumberOfIds() ): # It's 0 or 1... - neighbor_cell_id = neighbor_cell_ids.GetId( j ) - if f[ neighbor_cell_id ] != f[ cell_id ] and f[ neighbor_cell_id ] in field_values: - # TODO add this (cell_is, face_id) information to the fracture_info? - cells_to_faces[ cell_id ].append( i ) - face_nodes: list[ Collection[ int ] ] = list() - face_nodes_hashes: set[ frozenset[ int ] ] = set() # A temporary not to add multiple times the same face. 
- for cell_id, faces_ids in tqdm( cells_to_faces.items(), desc="Extracting the faces of the fractures" ): - cell = mesh.GetCell( cell_id ) - for face_id in faces_ids: - fn: Collection[ int ] = tuple( vtk_iter( cell.GetFace( face_id ).GetPointIds() ) ) - fnh = frozenset( fn ) - if fnh not in face_nodes_hashes: - face_nodes_hashes.add( fnh ) - face_nodes.append( fn ) - node_to_cells: dict[ int, Iterable[ int ] ] = build_node_to_cells( mesh, face_nodes ) - face_cell_id: list = list() # no cell of the mesh corresponds to that face when fracture policy is 'field' - - return FractureInfo( node_to_cells=node_to_cells, face_nodes=face_nodes, face_cell_id=face_cell_id ) - - -def __build_fracture_info_from_internal_surfaces( mesh: vtkUnstructuredGrid, f: Sequence[ int ], - field_values: frozenset[ int ] ) -> FractureInfo: - node_to_cells: dict[ int, list[ int ] ] = defaultdict( list ) - face_nodes: list[ Collection[ int ] ] = list() - face_cell_id: list[ int ] = list() - for cell_id in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the face to nodes mapping" ): - cell = mesh.GetCell( cell_id ) - if cell.GetCellDimension() == 2: - if f[ cell_id ] in field_values: - nodes = list() - for v in range( cell.GetNumberOfPoints() ): - point_id: int = cell.GetPointId( v ) - node_to_cells[ point_id ] = list() - nodes.append( point_id ) - face_nodes.append( tuple( nodes ) ) - face_cell_id.append( cell_id ) - - for cell_id in tqdm( range( mesh.GetNumberOfCells() ), desc="Computing the node to cells mapping" ): - cell = mesh.GetCell( cell_id ) - if cell.GetCellDimension() == 3: - for v in range( cell.GetNumberOfPoints() ): - if cell.GetPointId( v ) in node_to_cells: - node_to_cells[ cell.GetPointId( v ) ].append( cell_id ) - - return FractureInfo( node_to_cells=node_to_cells, face_nodes=face_nodes, face_cell_id=face_cell_id ) - - -def build_fracture_info( mesh: vtkUnstructuredGrid, - options: Options, - combined_fractures: bool, - fracture_id: int = 0 ) -> FractureInfo: - field = 
options.field - if combined_fractures: - field_values = options.field_values_combined - else: - field_values = options.field_values_per_fracture[ fracture_id ] - cell_data = mesh.GetCellData() - if cell_data.HasArray( field ): - f = vtk_to_numpy( cell_data.GetArray( field ) ) - else: - raise ValueError( f"Cell field {field} does not exist in mesh, nothing done" ) - - if options.policy == FracturePolicy.FIELD: - return __build_fracture_info_from_fields( mesh, f, field_values ) - elif options.policy == FracturePolicy.INTERNAL_SURFACES: - return __build_fracture_info_from_internal_surfaces( mesh, f, field_values ) - - -def build_cell_to_cell_graph( mesh: vtkUnstructuredGrid, fracture: FractureInfo ) -> networkx.Graph: - """ - Connects all the cells that touch the fracture by at least one node. - Two cells are connected when they share at least a face which is not a face of the fracture. - :param mesh: The input mesh. - :param fracture: The fracture info. - :return: The graph: each node of this graph is the index of the cell. - There's an edge between two nodes of the graph if the cells share a face. - """ - # Faces are identified by their nodes. But the order of those nodes may vary while referring to the same face. - # Therefore we compute some kinds of hashes of those face to easily detect if a face is part of the fracture. - tmp: list[ frozenset[ int ] ] = list() - for fn in fracture.face_nodes: - tmp.append( frozenset( fn ) ) - face_hashes: frozenset[ frozenset[ int ] ] = frozenset( tmp ) - - # We extract the list of the cells that touch the fracture by at least one node. - cells: set[ int ] = set() - for cell_ids in fracture.node_to_cells.values(): - for cell_id in cell_ids: - cells.add( cell_id ) - - # Using the last precomputed containers, we're now building the dict which connects - # every face (hash) of the fracture to the cells that touch the face... 
- face_to_cells: dict[ frozenset[ int ], list[ int ] ] = defaultdict( list ) - for cell_id in tqdm( cells, desc="Computing the cell to cell graph" ): - cell: vtkCell = mesh.GetCell( cell_id ) - for face_id in range( cell.GetNumberOfFaces() ): - face_hash: frozenset[ int ] = frozenset( vtk_iter( cell.GetFace( face_id ).GetPointIds() ) ) - if face_hash not in face_hashes: - face_to_cells[ face_hash ].append( cell_id ) - - # ... eventually, when a face touches two cells, this means that those two cells share the same face - # and should be connected in the final cell to cell graph. - cell_to_cell = networkx.Graph() - cell_to_cell.add_nodes_from( cells ) - cell_to_cell.add_edges_from( filter( lambda cs: len( cs ) == 2, face_to_cells.values() ) ) - - return cell_to_cell - - -def __identify_split( num_points: int, cell_to_cell: networkx.Graph, - node_to_cells: dict[ int, Iterable[ int ] ] ) -> dict[ int, IDMapping ]: - """ - For each cell, compute the node indices replacements. - :param num_points: Number of points in the whole mesh (not the fracture). - :param cell_to_cell: The cell to cell graph (connection through common faces). - :param node_to_cells: Maps the nodes of the fracture to the cells relying on this node. - :return: For each cell (first key), returns a mapping from the current index - and the new index that should replace the current index. - Note that the current index and the new index can be identical: no replacement should be done then. - """ - - class NewIndex: - """ - Returns the next available index. - Note that the first time an index is met, the index itself is returned: - we do not want to change an index if we do not have to. 
- """ - - def __init__( self, num_nodes: int ): - self.__current_last_index = num_nodes - 1 - self.__seen: set[ int ] = set() - - def __call__( self, index: int ) -> int: - if index in self.__seen: - self.__current_last_index += 1 - return self.__current_last_index - else: - self.__seen.add( index ) - return index - - build_new_index = NewIndex( num_points ) - result: dict[ int, IDMapping ] = defaultdict( dict ) - # Iteration over `sorted` nodes to have a predictable result for tests. - for node, cells in tqdm( sorted( node_to_cells.items() ), desc="Identifying the node splits" ): - for connected_cells in networkx.connected_components( cell_to_cell.subgraph( cells ) ): - # Each group of connect cells need around `node` must consider the same `node`. - # Separate groups must have different (duplicated) nodes. - new_index: int = build_new_index( node ) - for cell in connected_cells: - result[ cell ][ node ] = new_index - return result - - -def __copy_fields_splitted_mesh( old_mesh: vtkUnstructuredGrid, splitted_mesh: vtkUnstructuredGrid, - added_points_with_old_id: list[ tuple[ int ] ] ) -> None: - """ - Copies the fields from the old mesh to the new one. - Point data will be duplicated for collocated nodes. - :param old_mesh: The mesh before the split. - :param new_mesh: The mesh after the split. Will receive the fields in place. - :return: None - """ - # Copying the cell data. The cells are the same, just their nodes support have changed. - input_cell_data = old_mesh.GetCellData() - for i in range( input_cell_data.GetNumberOfArrays() ): - input_array: vtkDataArray = input_cell_data.GetArray( i ) - setup_logger.info( f"Copying cell field \"{input_array.GetName()}\"." ) - tmp = input_array.NewInstance() - tmp.DeepCopy( input_array ) - splitted_mesh.GetCellData().AddArray( input_array ) - - # Copying field data. This data is a priori not related to geometry. 
- input_field_data = old_mesh.GetFieldData() - for i in range( input_field_data.GetNumberOfArrays() ): - input_array = input_field_data.GetArray( i ) - setup_logger.info( f"Copying field data \"{input_array.GetName()}\"." ) - tmp = input_array.NewInstance() - tmp.DeepCopy( input_array ) - splitted_mesh.GetFieldData().AddArray( input_array ) - - # Copying copy data. Need to take into account the new points. - input_point_data = old_mesh.GetPointData() - new_number_points: int = splitted_mesh.GetNumberOfPoints() - for i in range( input_point_data.GetNumberOfArrays() ): - old_points_array = vtk_to_numpy( input_point_data.GetArray( i ) ) - name: str = input_point_data.GetArrayName( i ) - setup_logger.info( f"Copying point data \"{name}\"." ) - old_nrows: int = old_points_array.shape[ 0 ] - old_ncols: int = 1 if len( old_points_array.shape ) == 1 else old_points_array.shape[ 1 ] - # Reshape old_points_array if it is 1-dimensional - if len( old_points_array.shape ) == 1: - old_points_array = old_points_array.reshape( ( old_nrows, 1 ) ) - new_points_array = empty( ( new_number_points, old_ncols ) ) - new_points_array[ :old_nrows, : ] = old_points_array - for new_and_old_id in added_points_with_old_id: - new_points_array[ new_and_old_id[ 0 ], : ] = old_points_array[ new_and_old_id[ 1 ], : ] - # Reshape the VTK array to match the original dimensions - if old_ncols > 1: - vtk_array = numpy_to_vtk( new_points_array.flatten() ) - vtk_array.SetNumberOfComponents( old_ncols ) - vtk_array.SetNumberOfTuples( new_number_points ) - else: - vtk_array = numpy_to_vtk( new_points_array ) - vtk_array.SetName( name ) - splitted_mesh.GetPointData().AddArray( vtk_array ) - - -def __copy_fields_fracture_mesh( old_mesh: vtkUnstructuredGrid, fracture_mesh: vtkUnstructuredGrid, - face_cell_id: list[ int ], node_3d_to_node_2d: IDMapping ) -> None: - """ - Copies the fields from the old mesh to the new fracture when using internal_surfaces policy. - :param old_mesh: The mesh before the split. 
- :param fracture: The fracture mesh generated from the fracture_info. - :return: None - """ - # No copy of field data will be done with the fracture mesh because may lose its relevance compared to the splitted. - # Copying the cell data. The interesting cells are the ones stored in face_cell_id. - new_number_cells: int = fracture_mesh.GetNumberOfCells() - input_cell_data = old_mesh.GetCellData() - for i in range( input_cell_data.GetNumberOfArrays() ): - old_cells_array = vtk_to_numpy( input_cell_data.GetArray( i ) ) - old_nrows: int = old_cells_array.shape[ 0 ] - if len( old_cells_array.shape ) == 1: - old_cells_array = old_cells_array.reshape( ( old_nrows, 1 ) ) - name: str = input_cell_data.GetArrayName( i ) - setup_logger.info( f"Copying cell data \"{name}\"." ) - new_array = old_cells_array[ face_cell_id, : ] - # Reshape the VTK array to match the original dimensions - old_ncols: int = 1 if len( old_cells_array.shape ) == 1 else old_cells_array.shape[ 1 ] - if old_ncols > 1: - vtk_array = numpy_to_vtk( new_array.flatten() ) - vtk_array.SetNumberOfComponents( old_ncols ) - vtk_array.SetNumberOfTuples( new_number_cells ) - else: - vtk_array = numpy_to_vtk( new_array ) - vtk_array.SetName( name ) - fracture_mesh.GetCellData().AddArray( vtk_array ) - - new_number_points: int = fracture_mesh.GetNumberOfPoints() - input_point_data = old_mesh.GetPointData() - for i in range( input_point_data.GetNumberOfArrays() ): - old_points_array = vtk_to_numpy( input_point_data.GetArray( i ) ) - old_nrows = old_points_array.shape[ 0 ] - if len( old_points_array.shape ) == 1: - old_points_array = old_points_array.reshape( ( old_nrows, 1 ) ) - name = input_point_data.GetArrayName( i ) - setup_logger.info( f"Copying point data \"{name}\"." 
) - new_array = old_points_array[ list( node_3d_to_node_2d.keys() ), : ] - old_ncols = 1 if len( old_points_array.shape ) == 1 else old_points_array.shape[ 1 ] - if old_ncols > 1: - vtk_array = numpy_to_vtk( new_array.flatten() ) - vtk_array.SetNumberOfComponents( old_ncols ) - vtk_array.SetNumberOfTuples( new_number_points ) - else: - vtk_array = numpy_to_vtk( new_array ) - vtk_array.SetName( name ) - fracture_mesh.GetPointData().AddArray( vtk_array ) - - -def __perform_split( old_mesh: vtkUnstructuredGrid, cell_to_node_mapping: Mapping[ int, - IDMapping ] ) -> vtkUnstructuredGrid: - """ - Split the main 3d mesh based on the node duplication information contained in @p cell_to_node_mapping - :param old_mesh: The main 3d mesh. - :param cell_to_node_mapping: For each cell, gives the nodes that must be duplicated and their new index. - :return: The main 3d mesh split at the fracture location. - """ - added_points: set[ int ] = set() - added_points_with_old_id: list[ tuple[ int ] ] = list() - for node_mapping in cell_to_node_mapping.values(): - for i, o in node_mapping.items(): - if i != o: - added_points.add( o ) - added_points_with_old_id.append( ( o, i ) ) - num_new_points: int = old_mesh.GetNumberOfPoints() + len( added_points ) - - # Creating the new points for the new mesh. - old_points: vtkPoints = old_mesh.GetPoints() - new_points = vtkPoints() - new_points.SetNumberOfPoints( num_new_points ) - collocated_nodes = ones( num_new_points, dtype=int ) * -1 - # Copying old points into the new container. - for p in range( old_points.GetNumberOfPoints() ): - new_points.SetPoint( p, old_points.GetPoint( p ) ) - collocated_nodes[ p ] = p - # Creating the new collocated/duplicated points based on the old points positions. 
- for node_mapping in cell_to_node_mapping.values(): - for i, o in node_mapping.items(): - if i != o: - new_points.SetPoint( o, old_points.GetPoint( i ) ) - collocated_nodes[ o ] = i - collocated_nodes.flags.writeable = False - - # We are creating a new mesh. - # The cells will be the same, except that their nodes may be duplicated or renumbered nodes. - # In vtk, the polyhedron and the standard cells are managed differently. - # Also, it looks like the internal representation is being modified - # (see https://gitlab.kitware.com/vtk/vtk/-/merge_requests/9812) - # so we'll try nothing fancy for the moment. - # Maybe in the future using a `DeepCopy` of the vtkCellArray can be considered? - # The cell point ids could be modified in place then. - new_mesh = old_mesh.NewInstance() - new_mesh.SetPoints( new_points ) - new_mesh.Allocate( old_mesh.GetNumberOfCells() ) - - for c in tqdm( range( old_mesh.GetNumberOfCells() ), desc="Performing the mesh split" ): - node_mapping: IDMapping = cell_to_node_mapping.get( c, {} ) - cell: vtkCell = old_mesh.GetCell( c ) - cell_type: int = cell.GetCellType() - # For polyhedron, we'll manipulate the face stream directly. - if cell_type == VTK_POLYHEDRON: - face_stream = vtkIdList() - old_mesh.GetFaceStream( c, face_stream ) - new_face_nodes: list[ list[ int ] ] = list() - for face_nodes in FaceStream.build_from_vtk_id_list( face_stream ).face_nodes: - new_point_ids = list() - for current_point_id in face_nodes: - new_point_id: int = node_mapping.get( current_point_id, current_point_id ) - new_point_ids.append( new_point_id ) - new_face_nodes.append( new_point_ids ) - new_mesh.InsertNextCell( cell_type, to_vtk_id_list( FaceStream( new_face_nodes ).dump() ) ) - else: - # For the standard cells, we extract the point ids of the cell directly. - # Then the values will be (potentially) overwritten in place, before being sent back into the cell. 
- cell_point_ids: vtkIdList = cell.GetPointIds() - for i in range( cell_point_ids.GetNumberOfIds() ): - current_point_id: int = cell_point_ids.GetId( i ) - new_point_id: int = node_mapping.get( current_point_id, current_point_id ) - cell_point_ids.SetId( i, new_point_id ) - new_mesh.InsertNextCell( cell_type, cell_point_ids ) - - __copy_fields_splitted_mesh( old_mesh, new_mesh, added_points_with_old_id ) - - return new_mesh - - -def __generate_fracture_mesh( old_mesh: vtkUnstructuredGrid, fracture_info: FractureInfo, - cell_to_node_mapping: Mapping[ int, IDMapping ] ) -> vtkUnstructuredGrid: - """ - Generates the mesh of the fracture. - :param mesh_points: The points of the main 3d mesh. - :param fracture_info: The fracture description. - :param cell_to_node_mapping: For each cell, gives the nodes that must be duplicated and their new index. - :return: The fracture mesh. - """ - setup_logger.info( "Generating the meshes" ) - - mesh_points: vtkPoints = old_mesh.GetPoints() - is_node_duplicated = zeros( mesh_points.GetNumberOfPoints(), dtype=bool ) # defaults to False - for node_mapping in cell_to_node_mapping.values(): - for i, o in node_mapping.items(): - if not is_node_duplicated[ i ]: - is_node_duplicated[ i ] = i != o - - # Some elements can have all their nodes not duplicated. - # In this case, it's mandatory not get rid of this element because the neighboring 3d elements won't follow. 
- face_nodes: list[ Collection[ int ] ] = list() - discarded_face_nodes: set[ Iterable[ int ] ] = set() - if fracture_info.face_cell_id != list(): # The fracture policy is 'internal_surfaces' - face_cell_id: list[ int ] = list() - for ns, f_id in zip( fracture_info.face_nodes, fracture_info.face_cell_id ): - if any( map( is_node_duplicated.__getitem__, ns ) ): - face_nodes.append( ns ) - face_cell_id.append( f_id ) - else: - discarded_face_nodes.add( ns ) - else: # The fracture policy is 'field' - for ns in fracture_info.face_nodes: - if any( map( is_node_duplicated.__getitem__, ns ) ): - face_nodes.append( ns ) - else: - discarded_face_nodes.add( ns ) - - if discarded_face_nodes: - # tmp = list() - # for dfns in discarded_face_nodes: - # tmp.append(", ".join(map(str, dfns))) - msg: str = "(" + '), ('.join( map( lambda dfns: ", ".join( map( str, dfns ) ), discarded_face_nodes ) ) + ")" - # setup_logger.info(f"The {len(tmp)} faces made of nodes ({'), ('.join(tmp)}) were/was discarded" - # + "from the fracture mesh because none of their/its nodes were duplicated.") - # print(f"The {len(tmp)} faces made of nodes ({'), ('.join(tmp)}) were/was discarded" - # + "from the fracture mesh because none of their/its nodes were duplicated.") - setup_logger.info( f"The faces made of nodes [{msg}] were/was discarded" + - "from the fracture mesh because none of their/its nodes were duplicated." ) - - fracture_nodes_tmp = ones( mesh_points.GetNumberOfPoints(), dtype=int ) * -1 - for ns in face_nodes: - for n in ns: - fracture_nodes_tmp[ n ] = n - fracture_nodes: Collection[ int ] = tuple( filter( lambda n: n > -1, fracture_nodes_tmp ) ) - num_points: int = len( fracture_nodes ) - points = vtkPoints() - points.SetNumberOfPoints( num_points ) - node_3d_to_node_2d: IDMapping = dict() # Building the node mapping, from 3d mesh nodes to 2d fracture nodes. 
- for i, n in enumerate( fracture_nodes ): - coords: Coordinates3D = mesh_points.GetPoint( n ) - points.SetPoint( i, coords ) - node_3d_to_node_2d[ n ] = i - - # The polygons are constructed in the same order as the faces defined in the fracture_info. Therefore, - # fracture_info.face_cell_id can be used to link old cells to fracture cells for copy with internal_surfaces. - polygons = vtkCellArray() - for ns in face_nodes: - polygon = vtkPolygon() - polygon.GetPointIds().SetNumberOfIds( len( ns ) ) - for i, n in enumerate( ns ): - polygon.GetPointIds().SetId( i, node_3d_to_node_2d[ n ] ) - polygons.InsertNextCell( polygon ) - - buckets: dict[ int, set[ int ] ] = defaultdict( set ) - for node_mapping in cell_to_node_mapping.values(): - for i, o in node_mapping.items(): - k: Optional[ int ] = node_3d_to_node_2d.get( min( i, o ) ) - if k is not None: - buckets[ k ].update( ( i, o ) ) - - assert set( buckets.keys() ) == set( range( num_points ) ) - max_collocated_nodes: int = max( map( len, buckets.values() ) ) if buckets.values() else 0 - collocated_nodes = ones( ( num_points, max_collocated_nodes ), dtype=int ) * -1 - for i, bucket in buckets.items(): - for j, val in enumerate( bucket ): - collocated_nodes[ i, j ] = val - array = numpy_to_vtk( collocated_nodes, array_type=VTK_ID_TYPE ) - array.SetName( "collocated_nodes" ) - - fracture_mesh = vtkUnstructuredGrid() # We could be using vtkPolyData, but it's not supported by GEOS for now. 
- fracture_mesh.SetPoints( points ) - if polygons.GetNumberOfCells() > 0: - fracture_mesh.SetCells( [ VTK_POLYGON ] * polygons.GetNumberOfCells(), polygons ) - fracture_mesh.GetPointData().AddArray( array ) - - # The copy of fields from the old mesh to the fracture is only available when using the internal_surfaces policy - # because the FractureInfo is linked to 2D elements from the old_mesh - if fracture_info.face_cell_id != list(): - __copy_fields_fracture_mesh( old_mesh, fracture_mesh, face_cell_id, node_3d_to_node_2d ) - - return fracture_mesh - - -def __split_mesh_on_fractures( mesh: vtkUnstructuredGrid, - options: Options ) -> tuple[ vtkUnstructuredGrid, list[ vtkUnstructuredGrid ] ]: - all_fracture_infos: list[ FractureInfo ] = list() - for fracture_id in range( len( options.field_values_per_fracture ) ): - fracture_info: FractureInfo = build_fracture_info( mesh, options, False, fracture_id ) - all_fracture_infos.append( fracture_info ) - combined_fractures: FractureInfo = build_fracture_info( mesh, options, True ) - cell_to_cell: networkx.Graph = build_cell_to_cell_graph( mesh, combined_fractures ) - cell_to_node_mapping: Mapping[ int, IDMapping ] = __identify_split( mesh.GetNumberOfPoints(), cell_to_cell, - combined_fractures.node_to_cells ) - output_mesh: vtkUnstructuredGrid = __perform_split( mesh, cell_to_node_mapping ) - fracture_meshes: list[ vtkUnstructuredGrid ] = list() - for fracture_info_separated in all_fracture_infos: - fracture_mesh: vtkUnstructuredGrid = __generate_fracture_mesh( mesh, fracture_info_separated, - cell_to_node_mapping ) - fracture_meshes.append( fracture_mesh ) - return ( output_mesh, fracture_meshes ) - - -def __action( mesh, options: Options ) -> Result: - output_mesh, fracture_meshes = __split_mesh_on_fractures( mesh, options ) - write_mesh( output_mesh, options.mesh_VtkOutput ) - for i, fracture_mesh in enumerate( fracture_meshes ): - write_mesh( fracture_mesh, options.all_fractures_VtkOutput[ i ] ) - # TODO provide 
statistics about what was actually performed (size of the fracture, number of split nodes...). - return Result( info="OK" ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - try: - mesh = read_mesh( vtk_input_file ) - # Mesh cannot contain global ids before splitting. - if has_array( mesh, [ "GLOBAL_IDS_POINTS", "GLOBAL_IDS_CELLS" ] ): - err_msg: str = ( "The mesh cannot contain global ids for neither cells nor points. The correct procedure " + - " is to split the mesh and then generate global ids for new split meshes." ) - setup_logger.error( err_msg ) - raise ValueError( err_msg ) - return __action( mesh, options ) - except BaseException as e: - setup_logger.error( e ) - return Result( info="Something went wrong" ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/generate_global_ids.py b/geos-mesh/src/geos/mesh/doctor/actions/generate_global_ids.py deleted file mode 100644 index f4df1871..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/generate_global_ids.py +++ /dev/null @@ -1,60 +0,0 @@ -from dataclasses import dataclass -from vtkmodules.vtkCommonCore import vtkIdTypeArray -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.io.vtkIO import VtkOutput, read_mesh, write_mesh - - -@dataclass( frozen=True ) -class Options: - vtk_output: VtkOutput - generate_cells_global_ids: bool - generate_points_global_ids: bool - - -@dataclass( frozen=True ) -class Result: - info: str - - -def __build_global_ids( mesh, generate_cells_global_ids: bool, generate_points_global_ids: bool ) -> None: - """ - Adds the global ids for cells and points in place into the mesh instance. - :param mesh: - :return: None - """ - # Building GLOBAL_IDS for points and cells.g GLOBAL_IDS for points and cells. - # First for points... - if mesh.GetPointData().GetGlobalIds(): - setup_logger.error( "Mesh already has globals ids for points; nothing done." 
) - elif generate_points_global_ids: - point_global_ids = vtkIdTypeArray() - point_global_ids.SetName( "GLOBAL_IDS_POINTS" ) - point_global_ids.Allocate( mesh.GetNumberOfPoints() ) - for i in range( mesh.GetNumberOfPoints() ): - point_global_ids.InsertNextValue( i ) - mesh.GetPointData().SetGlobalIds( point_global_ids ) - # ... then for cells. - if mesh.GetCellData().GetGlobalIds(): - setup_logger.error( "Mesh already has globals ids for cells; nothing done." ) - elif generate_cells_global_ids: - cells_global_ids = vtkIdTypeArray() - cells_global_ids.SetName( "GLOBAL_IDS_CELLS" ) - cells_global_ids.Allocate( mesh.GetNumberOfCells() ) - for i in range( mesh.GetNumberOfCells() ): - cells_global_ids.InsertNextValue( i ) - mesh.GetCellData().SetGlobalIds( cells_global_ids ) - - -def __action( mesh, options: Options ) -> Result: - __build_global_ids( mesh, options.generate_cells_global_ids, options.generate_points_global_ids ) - write_mesh( mesh, options.vtk_output ) - return Result( info=f"Mesh was written to {options.vtk_output.output}" ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - try: - mesh = read_mesh( vtk_input_file ) - return __action( mesh, options ) - except BaseException as e: - setup_logger.error( e ) - return Result( info="Something went wrong." 
) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/mainChecks.py b/geos-mesh/src/geos/mesh/doctor/actions/mainChecks.py new file mode 100644 index 00000000..f10cd49d --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/mainChecks.py @@ -0,0 +1 @@ +from geos.mesh.doctor.actions.allChecks import action diff --git a/geos-mesh/src/geos/mesh/doctor/actions/main_checks.py b/geos-mesh/src/geos/mesh/doctor/actions/main_checks.py deleted file mode 100644 index 2ae3b9da..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/main_checks.py +++ /dev/null @@ -1 +0,0 @@ -from geos.mesh.doctor.actions.all_checks import action diff --git a/geos-mesh/src/geos/mesh/doctor/actions/nonConformal.py b/geos-mesh/src/geos/mesh/doctor/actions/nonConformal.py new file mode 100644 index 00000000..b7683540 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/nonConformal.py @@ -0,0 +1,446 @@ +from dataclasses import dataclass +import math +import numpy +from tqdm import tqdm +from vtk import reference as vtkReference +from vtkmodules.vtkCommonCore import vtkDataArray, vtkIdList, vtkPoints +from vtkmodules.vtkCommonDataModel import ( vtkBoundingBox, vtkCell, vtkCellArray, vtkPointSet, vtkPolyData, + vtkStaticCellLocator, vtkStaticPointLocator, vtkUnstructuredGrid, + VTK_POLYHEDRON ) +from vtkmodules.vtkCommonTransforms import vtkTransform +from vtkmodules.vtkFiltersCore import vtkPolyDataNormals +from vtkmodules.vtkFiltersGeometry import vtkDataSetSurfaceFilter +from vtkmodules.vtkFiltersModeling import vtkCollisionDetectionFilter, vtkLinearExtrusionFilter +from geos.mesh.doctor.actions import reorientMesh, triangleDistance +from geos.mesh.utils.genericHelpers import vtkIter +from geos.mesh.io.vtkIO import readUnstructuredGrid + + +@dataclass( frozen=True ) +class Options: + angleTolerance: float + pointTolerance: float + faceTolerance: float + + +@dataclass( frozen=True ) +class Result: + nonConformalCells: list[ tuple[ int, int ] ] + + +class BoundaryMesh: + """ + A 
BoundaryMesh is the envelope of the 3d mesh on which we want to perform the simulations.
+ It is computed by vtk. But we want to be sure that the normals of the envelope are directed outwards.
+ The `vtkDataSetSurfaceFilter` does not have the same behavior for standard vtk cells (like tets or hexes),
+ and for polyhedron meshes, for which the result is a bit brittle.
+ Therefore, we reorient the polyhedron cells ourselves, so we're sure that they point outwards.
+ We then compute the boundary meshes for both the original and the reoriented meshes, since the computation options are not identical between the two.
+ """
+
+ def __init__( self, mesh: vtkUnstructuredGrid ):
+ """Builds a boundary mesh.
+
+ Args:
+ mesh (vtkUnstructuredGrid): The 3d mesh.
+ """
+ # Building the boundary meshes
+ boundaryMesh, __normals, self.__originalCells = BoundaryMesh.__buildBoundaryMesh( mesh )
+ cellsToReorient = filter(
+ lambda c: mesh.GetCell( c ).GetCellType() == VTK_POLYHEDRON,
+ map( self.__originalCells.GetValue, range( self.__originalCells.GetNumberOfValues() ) ) )
+ reorientedMesh = reorientMesh.reorientMesh( mesh, cellsToReorient )
+ self.reBoundaryMesh, reNormals, _ = BoundaryMesh.__buildBoundaryMesh( reorientedMesh, consistency=False )
+ numCells = boundaryMesh.GetNumberOfCells()
+ # Precomputing the underlying cell type
+ self.__isUnderlyingCellTypeAPolyhedron = numpy.zeros( numCells, dtype=bool )
+ for ic in range( numCells ):
+ self.__isUnderlyingCellTypeAPolyhedron[ ic ] = mesh.GetCell(
+ self.__originalCells.GetValue( ic ) ).GetCellType() == VTK_POLYHEDRON
+ # Precomputing the normals
+ self.__normals: numpy.ndarray = numpy.empty( ( numCells, 3 ), dtype=numpy.double,
+ order='C' ) # Do not modify the storage layout
+ for ic in range( numCells ):
+ if self.__isUnderlyingCellTypeAPolyhedron[ ic ]:
+ self.__normals[ ic, : ] = reNormals.GetTuple3( ic )
+ else:
+ self.__normals[ ic, : ] = __normals.GetTuple3( ic )
+
+ @staticmethod
+ def __buildBoundaryMesh( mesh: vtkUnstructuredGrid,
+ consistency=True ) -> 
tuple[ vtkPolyData, vtkDataArray, vtkDataArray ]:
+ """From a 3d mesh, build the envelope meshes.
+
+ Args:
+ mesh (vtkUnstructuredGrid): The input 3d mesh.
+ consistency (bool, optional): The vtk option passed to the `vtkDataSetSurfaceFilter`. Defaults to True.
+
+ Returns:
+ tuple[ vtkPolyData, vtkDataArray, vtkDataArray ]: A tuple containing the boundary mesh, the normal vectors array,
+ and an array that maps the id of the boundary element to the id of the 3d cell it touches.
+ """
+ f = vtkDataSetSurfaceFilter()
+ f.PassThroughCellIdsOn()
+ f.PassThroughPointIdsOff()
+ f.FastModeOff()
+
+ # Note that we do not need the original points, but we could keep them as well if needed
+ originalCellsKey = "ORIGINAL_CELLS"
+ f.SetOriginalCellIdsName( originalCellsKey )
+
+ boundaryMesh = vtkPolyData()
+ f.UnstructuredGridExecute( mesh, boundaryMesh )
+
+ n = vtkPolyDataNormals()
+ n.SetConsistency( consistency )
+ n.SetAutoOrientNormals( consistency )
+ n.FlipNormalsOff()
+ n.ComputeCellNormalsOn()
+ n.SetInputData( boundaryMesh )
+ n.Update()
+ normals: vtkDataArray = n.GetOutput().GetCellData().GetArray( "Normals" )
+ assert normals
+ assert normals.GetNumberOfComponents() == 3
+ assert normals.GetNumberOfTuples() == boundaryMesh.GetNumberOfCells()
+ originalCells: vtkDataArray = boundaryMesh.GetCellData().GetArray( originalCellsKey )
+ assert originalCells
+ return boundaryMesh, normals, originalCells
+
+ def GetNumberOfCells( self ) -> int:
+ """The number of cells.
+
+ Returns:
+ int: An integer.
+ """
+ return self.reBoundaryMesh.GetNumberOfCells()
+
+ def GetNumberOfPoints( self ) -> int:
+ """The number of points.
+
+ Returns:
+ int: An integer.
+ """
+ return self.reBoundaryMesh.GetNumberOfPoints()
+
+ def bounds( self, i: int ) -> tuple[ float, float, float, float, float, float ]:
+ """The bounding box of cell `i`.
+
+ Args:
+ i (int): The boundary cell index.
+
+ Returns:
+ tuple[ float, float, float, float, float, float ]: The bounding box of the cell. 
+ """
+ return self.reBoundaryMesh.GetCell( i ).GetBounds()
+
+ def normals( self, i: int ) -> numpy.ndarray:
+ """The normal of cell `i`. This normal will be directed outwards.
+
+ Args:
+ i (int): The boundary cell index.
+
+ Returns:
+ numpy.ndarray: The normal as a length-3 numpy array.
+ """
+ return self.__normals[ i ]
+
+ def GetCell( self, i: int ) -> vtkCell:
+ """Cell i of the boundary mesh. This cell will have its normal directed outwards.
+
+ Args:
+ i (int): The boundary cell index.
+
+ Returns:
+ vtkCell: The cell instance.
+ """
+ return self.reBoundaryMesh.GetCell( i )
+
+ def GetPoint( self, i: int ) -> tuple[ float, float, float ]:
+ """Point i of the boundary mesh.
+
+ Args:
+ i (int): The boundary point index.
+
+ Returns:
+ tuple[ float, float, float ]: A length-3 tuple containing the coordinates of the point.
+ """
+ return self.reBoundaryMesh.GetPoint( i )
+
+ @property
+ def originalCells( self ) -> vtkDataArray:
+ """Maps the 2d boundary cell index to the 3d cell index of the original mesh.
+
+ Returns:
+ vtkDataArray: A 1d array.
+ """
+ return self.__originalCells
+
+
+def buildPolyDataForExtrusion( i: int, boundaryMesh: BoundaryMesh ) -> vtkPolyData:
+ """Creates a vtkPolyData containing the single cell `i` of the boundary mesh.
+
+ Args:
+ i (int): The boundary cell index that will eventually be extruded.
+ boundaryMesh (BoundaryMesh): The boundary mesh containing the cell.
+
+ Returns:
+ vtkPolyData: The created vtkPolyData. 
+ """
+ cell = boundaryMesh.GetCell( i )
+ copiedCell = cell.NewInstance()
+ copiedCell.DeepCopy( cell )
+ pointsIdsMapping = []
+ # Use a local index so we do not shadow the boundary cell index `i`.
+ for ip in range( copiedCell.GetNumberOfPoints() ):
+ copiedCell.GetPointIds().SetId( ip, ip )
+ pointsIdsMapping.append( cell.GetPointId( ip ) )
+ polygons = vtkCellArray()
+ polygons.InsertNextCell( copiedCell )
+ points = vtkPoints()
+ points.SetNumberOfPoints( len( pointsIdsMapping ) )
+ for ip, v in enumerate( pointsIdsMapping ):
+ points.SetPoint( ip, boundaryMesh.GetPoint( v ) )
+ polygonPolyData = vtkPolyData()
+ polygonPolyData.SetPoints( points )
+ polygonPolyData.SetPolys( polygons )
+ return polygonPolyData
+
+
+def arePointsConformal( pointTolerance: float, cellI: vtkCell, cellJ: vtkCell ) -> bool:
+ """Checks if the points of cell `cellI` match, one by one, the points of cell `cellJ`.
+
+ Args:
+ pointTolerance (float): The point tolerance to consider that two points match.
+ cellI (vtkCell): The first cell.
+ cellJ (vtkCell): The second cell.
+
+ Returns:
+ bool: True if the points are conformal, False otherwise.
+ """
+ # In this last step, we check whether the nodes match each other.
+ if cellI.GetNumberOfPoints() != cellJ.GetNumberOfPoints():
+ return True
+
+ pointLocator = vtkStaticPointLocator()
+ points = vtkPointSet()
+ points.SetPoints( cellI.GetPoints() )
+ pointLocator.SetDataSet( points )
+ pointLocator.BuildLocator()
+ foundPoints = set()
+ for ip in range( cellJ.GetNumberOfPoints() ):
+ p = cellJ.GetPoints().GetPoint( ip )
+ squaredDist = vtkReference( 0. ) # unused
+ foundPoint = pointLocator.FindClosestPointWithinRadius( pointTolerance, p, squaredDist )
+ foundPoints.add( foundPoint )
+ return foundPoints == set( range( cellI.GetNumberOfPoints() ) )
+
+
+class Extruder:
+ """
+ Computes and stores all the extrusions of the boundary faces.
+ The main reason for this class is to be lazy and cache the extrusions. 
+ """
+
+ def __init__( self, boundaryMesh: BoundaryMesh, faceTolerance: float ):
+ self.__extrusions: list[ vtkPolyData ] = [
+ None,
+ ] * boundaryMesh.GetNumberOfCells()
+ self.__boundaryMesh = boundaryMesh
+ self.__faceTolerance = faceTolerance
+
+ def __extrude( self, polygonPolyData: vtkPolyData, normal: numpy.ndarray ) -> vtkPolyData:
+ """Extrude the polygon data to create a surface that will be used for intersection.
+
+ Args:
+ polygonPolyData (vtkPolyData): The data to extrude.
+ normal (numpy.ndarray): The (uniform) direction of the extrusion.
+
+ Returns:
+ vtkPolyData: The extruded surface.
+ """
+ extruder = vtkLinearExtrusionFilter()
+ extruder.SetExtrusionTypeToVectorExtrusion()
+ extruder.SetVector( normal )
+ extruder.SetScaleFactor( self.__faceTolerance / 2. )
+ extruder.SetInputData( polygonPolyData )
+ extruder.Update()
+ return extruder.GetOutput()
+
+ def __getitem__( self, i: int ) -> vtkPolyData:
+ """Returns the vtk extrusion for boundary element i.
+
+ Args:
+ i (int): The cell index.
+
+ Returns:
+ vtkPolyData: The vtk extrusion.
+ """
+ extrusion = self.__extrusions[ i ]
+ if extrusion:
+ return extrusion
+ extrusion = self.__extrude( buildPolyDataForExtrusion( i, self.__boundaryMesh ),
+ self.__boundaryMesh.normals( i ) )
+ self.__extrusions[ i ] = extrusion
+ return extrusion
+
+
+def areFacesConformalUsingExtrusions( extrusions: Extruder, i: int, j: int, boundaryMesh: BoundaryMesh,
+ pointTolerance: float ) -> bool:
+ """Tests if two boundary faces are conformal, checking for intersection between their normal extruded volumes.
+
+ Args:
+ extrusions (Extruder): The extrusions cache.
+ i (int): The cell index of the first cell.
+ j (int): The cell index of the second cell.
+ boundaryMesh (BoundaryMesh): The boundary mesh.
+ pointTolerance (float): The point tolerance to consider that two points match.
+
+ Returns:
+ bool: True if the faces are conformal, False otherwise. 
+ """
+ collision = vtkCollisionDetectionFilter()
+ collision.SetCollisionModeToFirstContact()
+ collision.SetInputData( 0, extrusions[ i ] )
+ collision.SetInputData( 1, extrusions[ j ] )
+ mI = vtkTransform()
+ mJ = vtkTransform()
+ collision.SetTransform( 0, mI )
+ collision.SetTransform( 1, mJ )
+ collision.Update()
+
+ if collision.GetNumberOfContacts() == 0:
+ return True
+
+ # Duplicating the data so as not to risk anything w.r.t. the thread safety of the GetCell function.
+ cellI = boundaryMesh.GetCell( i )
+ copiedCellI = cellI.NewInstance()
+ copiedCellI.DeepCopy( cellI )
+
+ return arePointsConformal( pointTolerance, copiedCellI, boundaryMesh.GetCell( j ) )
+
+
+def areFacesConformalUsingDistances( i: int, j: int, boundaryMesh: BoundaryMesh, faceTolerance: float,
+ pointTolerance: float ) -> bool:
+ """Tests if two boundary faces are conformal, checking the minimal distance between triangulated surfaces.
+
+ Args:
+ i (int): The cell index of the first cell.
+ j (int): The cell index of the second cell.
+ boundaryMesh (BoundaryMesh): The boundary mesh.
+ faceTolerance (float): The tolerance under which we should consider the two faces "touching" each other.
+ pointTolerance (float): The point tolerance to consider that two points match.
+
+ Returns:
+ bool: True if the faces are conformal, False otherwise. 
+ """
+ cpI = boundaryMesh.GetCell( i ).NewInstance()
+ cpI.DeepCopy( boundaryMesh.GetCell( i ) )
+ cpJ = boundaryMesh.GetCell( j ).NewInstance()
+ cpJ.DeepCopy( boundaryMesh.GetCell( j ) )
+
+ def triangulate( cell ):
+ assert cell.GetCellDimension() == 2
+ _pointsIds = vtkIdList()
+ _points = vtkPoints()
+ cell.Triangulate( 0, _pointsIds, _points )
+ _pointsIds = tuple( vtkIter( _pointsIds ) )
+ assert len( _pointsIds ) % 3 == 0
+ assert _points.GetNumberOfPoints() % 3 == 0
+ return _pointsIds, _points
+
+ pointsIdsI, pointsI = triangulate( cpI )
+ pointsIdsJ, pointsJ = triangulate( cpJ )
+
+ def buildNumpyTriangles( pointsIds ):
+ __triangles = []
+ for __i in range( 0, len( pointsIds ), 3 ):
+ __t = []
+ for __pi in pointsIds[ __i:__i + 3 ]:
+ __t.append( boundaryMesh.GetPoint( __pi ) )
+ __triangles.append( numpy.array( __t, dtype=float ) )
+ return __triangles
+
+ trianglesI = buildNumpyTriangles( pointsIdsI )
+ trianglesJ = buildNumpyTriangles( pointsIdsJ )
+
+ minDist = numpy.inf
+ for ti, tj in [ ( ti, tj ) for ti in trianglesI for tj in trianglesJ ]:
+ # Note that here, we compute the exact distance to compare with the threshold.
+ # We could improve by exiting the iterative distance computation as soon as
+ # we're sure we're smaller than the threshold. No need for the exact solution.
+ dist, _, _ = triangleDistance.distanceBetweenTwoTriangles( ti, tj )
+ if dist < minDist:
+ minDist = dist
+ if minDist < faceTolerance:
+ break
+ if minDist > faceTolerance:
+ return True
+
+ return arePointsConformal( pointTolerance, cpI, cpJ )
+
+
+def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result:
+ """Checks if the mesh is "conformal", i.e. reports pairs of boundary faces that are too close
+ to each other without sharing matching nodes.
+
+ Args:
+ mesh (vtkUnstructuredGrid): The vtk mesh.
+ options (Options): The check options.
+
+ Returns:
+ Result: The result of the conformity check. 
+ """
+ boundaryMesh = BoundaryMesh( mesh )
+ cosTheta = abs( math.cos( numpy.deg2rad( options.angleTolerance ) ) )
+ numCells = boundaryMesh.GetNumberOfCells()
+
+ # Computing the exact number of cells per node
+ numCellsPerNode = numpy.zeros( boundaryMesh.GetNumberOfPoints(), dtype=int )
+ for ic in range( boundaryMesh.GetNumberOfCells() ):
+ c = boundaryMesh.GetCell( ic )
+ pointIds = c.GetPointIds()
+ for pointId in vtkIter( pointIds ):
+ numCellsPerNode[ pointId ] += 1
+
+ cellLocator = vtkStaticCellLocator()
+ cellLocator.Initialize()
+ cellLocator.SetNumberOfCellsPerNode( numCellsPerNode.max() )
+ cellLocator.SetDataSet( boundaryMesh.reBoundaryMesh )
+ cellLocator.BuildLocator()
+
+ # Precomputing the bounding boxes.
+ # The dtype and order options are important: vtk interacts directly with this raw memory in C++.
+ boundingBoxes = numpy.empty( ( boundaryMesh.GetNumberOfCells(), 6 ), dtype=numpy.double, order="C" )
+ for i in range( boundaryMesh.GetNumberOfCells() ):
+ bb = vtkBoundingBox( boundaryMesh.bounds( i ) )
+ bb.Inflate( 2 * options.faceTolerance )
+ assert boundingBoxes[
+ i, : ].data.contiguous # Do not modify the storage layout since vtk deals with raw memory here.
+ bb.GetBounds( boundingBoxes[ i, : ] )
+
+ nonConformalCells = []
+ extrusions = Extruder( boundaryMesh, options.faceTolerance )
+ closeCells = vtkIdList()
+ # Looping on all the pairs of boundary cells. We'll hopefully discard most of the pairs.
+ for i in tqdm( range( numCells ), desc="Non conformal elements" ):
+ cellLocator.FindCellsWithinBounds( boundingBoxes[ i ], closeCells )
+ for j in vtkIter( closeCells ):
+ if j < i:
+ continue
+ # Discarding pairs that are not facing each other (with a threshold).
+ normalI, normalJ = boundaryMesh.normals( i ), boundaryMesh.normals( j )
+ if numpy.dot( normalI, normalJ ) > -cosTheta: # opposite directions only (can be facing or not)
+ continue
+ # At this point, back-to-back and face-to-face pairs of elements are considered. 
+ if not areFacesConformalUsingExtrusions( extrusions, i, j, boundaryMesh, options.pointTolerance ): + nonConformalCells.append( ( i, j ) ) + # Extracting the original 3d element index (and not the index of the boundary mesh). + tmp = [] + for i, j in nonConformalCells: + tmp.append( ( boundaryMesh.originalCells.GetValue( i ), boundaryMesh.originalCells.GetValue( j ) ) ) + + return Result( nonConformalCells=tmp ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/non_conformal.py b/geos-mesh/src/geos/mesh/doctor/actions/non_conformal.py deleted file mode 100644 index d1c83a37..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/non_conformal.py +++ /dev/null @@ -1,408 +0,0 @@ -from dataclasses import dataclass -import math -import numpy -from tqdm import tqdm -from typing import List, Tuple, Any -from vtk import reference as vtk_reference -from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints -from vtkmodules.vtkCommonDataModel import ( vtkBoundingBox, vtkCell, vtkCellArray, vtkPointSet, vtkPolyData, - vtkStaticCellLocator, vtkStaticPointLocator, vtkUnstructuredGrid, - VTK_POLYHEDRON ) -from vtkmodules.vtkCommonTransforms import vtkTransform -from vtkmodules.vtkFiltersCore import vtkPolyDataNormals -from vtkmodules.vtkFiltersGeometry import vtkDataSetSurfaceFilter -from vtkmodules.vtkFiltersModeling import vtkCollisionDetectionFilter, vtkLinearExtrusionFilter -from geos.mesh.doctor.actions import reorient_mesh, triangle_distance -from geos.mesh.utils.genericHelpers import vtk_iter -from geos.mesh.io.vtkIO import read_mesh - - -@dataclass( frozen=True ) -class Options: - angle_tolerance: float - point_tolerance: float - face_tolerance: float - - -@dataclass( frozen=True ) -class Result: - non_conformal_cells: List[ Tuple[ int, int ] ] - - -class BoundaryMesh: - """ - A BoundaryMesh is the 
envelope of the 3d mesh on which we want to perform the simulations. - It is computed by vtk. But we want to be sure that the normals of the envelope are directed outwards. - The `vtkDataSetSurfaceFilter` does not have the same behavior for standard vtk cells (like tets or hexs), - and for polyhedron meshes, for which the result is a bit brittle. - Therefore, we reorient the polyhedron cells ourselves, so we're sure that they point outwards. - And then we compute the boundary meshes for both meshes, given that the computing options are not identical. - """ - - def __init__( self, mesh: vtkUnstructuredGrid ): - """ - Builds a boundary mesh. - :param mesh: The 3d mesh. - """ - # Building the boundary meshes - boundary_mesh, __normals, self.__original_cells = BoundaryMesh.__build_boundary_mesh( mesh ) - cells_to_reorient = filter( - lambda c: mesh.GetCell( c ).GetCellType() == VTK_POLYHEDRON, - map( self.__original_cells.GetValue, range( self.__original_cells.GetNumberOfValues() ) ) ) - reoriented_mesh = reorient_mesh.reorient_mesh( mesh, cells_to_reorient ) - self.re_boundary_mesh, re_normals, _ = BoundaryMesh.__build_boundary_mesh( reoriented_mesh, consistency=False ) - num_cells = boundary_mesh.GetNumberOfCells() - # Precomputing the underlying cell type - self.__is_underlying_cell_type_a_polyhedron = numpy.zeros( num_cells, dtype=bool ) - for ic in range( num_cells ): - self.__is_underlying_cell_type_a_polyhedron[ ic ] = mesh.GetCell( - self.__original_cells.GetValue( ic ) ).GetCellType() == VTK_POLYHEDRON - # Precomputing the normals - self.__normals: numpy.ndarray = numpy.empty( ( num_cells, 3 ), dtype=numpy.double, - order='C' ) # Do not modify the storage layout - for ic in range( num_cells ): - if self.__is_underlying_cell_type_a_polyhedron[ ic ]: - self.__normals[ ic, : ] = re_normals.GetTuple3( ic ) - else: - self.__normals[ ic, : ] = __normals.GetTuple3( ic ) - - @staticmethod - def __build_boundary_mesh( mesh: vtkUnstructuredGrid, consistency=True ) -> 
Tuple[ vtkUnstructuredGrid, Any, Any ]: - """ - From a 3d mesh, build the envelope meshes. - :param mesh: The input 3d mesh. - :param consistency: The vtk option passed to the `vtkDataSetSurfaceFilter`. - :return: A tuple containing the boundary mesh, the normal vectors array, - an array that maps the id of the boundary element to the id of the 3d cell it touches. - """ - f = vtkDataSetSurfaceFilter() - f.PassThroughCellIdsOn() - f.PassThroughPointIdsOff() - f.FastModeOff() - - # Note that we do not need the original points, but we could keep them as well if needed - original_cells_key = "ORIGINAL_CELLS" - f.SetOriginalCellIdsName( original_cells_key ) - - boundary_mesh = vtkPolyData() - f.UnstructuredGridExecute( mesh, boundary_mesh ) - - n = vtkPolyDataNormals() - n.SetConsistency( consistency ) - n.SetAutoOrientNormals( consistency ) - n.FlipNormalsOff() - n.ComputeCellNormalsOn() - n.SetInputData( boundary_mesh ) - n.Update() - normals = n.GetOutput().GetCellData().GetArray( "Normals" ) - assert normals - assert normals.GetNumberOfComponents() == 3 - assert normals.GetNumberOfTuples() == boundary_mesh.GetNumberOfCells() - original_cells = boundary_mesh.GetCellData().GetArray( original_cells_key ) - assert original_cells - return boundary_mesh, normals, original_cells - - def GetNumberOfCells( self ) -> int: - """ - The number of cells. - :return: An integer. - """ - return self.re_boundary_mesh.GetNumberOfCells() - - def GetNumberOfPoints( self ) -> int: - """ - The number of points. - :return: An integer. - """ - return self.re_boundary_mesh.GetNumberOfPoints() - - def bounds( self, i ) -> Tuple[ float, float, float, float, float, float ]: - """ - The boundrary box of cell `i`. - :param i: The boundary cell index. - :return: The vtk bounding box. - """ - return self.re_boundary_mesh.GetCell( i ).GetBounds() - - def normals( self, i ) -> numpy.ndarray: - """ - The normal of cell `i`. This normal will be directed outwards - :param i: The boundary cell index. 
- :return: The normal as a length-3 numpy array. - """ - return self.__normals[ i ] - - def GetCell( self, i ) -> vtkCell: - """ - Cell i of the boundary mesh. This cell will have its normal directed outwards. - :param i: The boundary cell index. - :return: The cell instance. - :warning: This member function relies on the vtkUnstructuredGrid.GetCell member function which is not thread safe. - """ - return self.re_boundary_mesh.GetCell( i ) - - def GetPoint( self, i ) -> Tuple[ float, float, float ]: - """ - Point i of the boundary mesh. - :param i: The boundary point index. - :return: A length-3 tuple containing the coordinates of the point. - :warning: This member function relies on the vtkUnstructuredGrid.GetPoint member function which is not thread safe. - """ - return self.re_boundary_mesh.GetPoint( i ) - - @property - def original_cells( self ): - """ - Returns the 2d boundary cell to the 3d cell index of the original mesh. - :return: A 1d array. - """ - return self.__original_cells - - -def build_poly_data_for_extrusion( i: int, boundary_mesh: BoundaryMesh ) -> vtkPolyData: - """ - Creates a vtkPolyData containing the unique cell `i` of the boundary mesh. - This operation is needed to use the vtk extrusion filter. - :param i: The boundary cell index that will eventually be extruded. - :param boundary_mesh: - :return: The created vtkPolyData. 
- """ - cell = boundary_mesh.GetCell( i ) - copied_cell = cell.NewInstance() - copied_cell.DeepCopy( cell ) - points_ids_mapping = [] - for i in range( copied_cell.GetNumberOfPoints() ): - copied_cell.GetPointIds().SetId( i, i ) - points_ids_mapping.append( cell.GetPointId( i ) ) - polygons = vtkCellArray() - polygons.InsertNextCell( copied_cell ) - points = vtkPoints() - points.SetNumberOfPoints( len( points_ids_mapping ) ) - for i, v in enumerate( points_ids_mapping ): - points.SetPoint( i, boundary_mesh.GetPoint( v ) ) - polygon_poly_data = vtkPolyData() - polygon_poly_data.SetPoints( points ) - polygon_poly_data.SetPolys( polygons ) - return polygon_poly_data - - -def are_points_conformal( point_tolerance: float, cell_i: vtkCell, cell_j: vtkCell ) -> bool: - """ - Checks if points of cell `i` matches, one by one, the points of cell `j`. - :param point_tolerance: The point tolerance to consider that two points match. - :param cell_i: The first cell. - :param cell_j: The second cell. - :return: A boolean. - """ - # In this last step, we check that the nodes are (or not) matching each other. - if cell_i.GetNumberOfPoints() != cell_j.GetNumberOfPoints(): - return True - - point_locator = vtkStaticPointLocator() - points = vtkPointSet() - points.SetPoints( cell_i.GetPoints() ) - point_locator.SetDataSet( points ) - point_locator.BuildLocator() - found_points = set() - for ip in range( cell_j.GetNumberOfPoints() ): - p = cell_j.GetPoints().GetPoint( ip ) - squared_dist = vtk_reference( 0. ) # unused - found_point = point_locator.FindClosestPointWithinRadius( point_tolerance, p, squared_dist ) - found_points.add( found_point ) - return found_points == set( range( cell_i.GetNumberOfPoints() ) ) - - -class Extruder: - """ - Computes and stores all the extrusions of the boundary faces. - The main reason for this class is to be lazy and cache the extrusions. 
- """ - - def __init__( self, boundary_mesh: BoundaryMesh, face_tolerance: float ): - self.__extrusions: List[ vtkPolyData ] = [ - None, - ] * boundary_mesh.GetNumberOfCells() - self.__boundary_mesh = boundary_mesh - self.__face_tolerance = face_tolerance - - def __extrude( self, polygon_poly_data, normal ) -> vtkPolyData: - """ - Extrude the polygon data to create a volume that will be used for intersection. - :param polygon_poly_data: The data to extrude - :param normal: The (uniform) direction of the extrusion. - :return: The extrusion. - """ - extruder = vtkLinearExtrusionFilter() - extruder.SetExtrusionTypeToVectorExtrusion() - extruder.SetVector( normal ) - extruder.SetScaleFactor( self.__face_tolerance / 2. ) - extruder.SetInputData( polygon_poly_data ) - extruder.Update() - return extruder.GetOutput() - - def __getitem__( self, i ) -> vtkPolyData: - """ - Returns the vtk extrusion for boundary element i. - :param i: The cell index. - :return: The vtk instance. - """ - extrusion = self.__extrusions[ i ] - if extrusion: - return extrusion - extrusion = self.__extrude( build_poly_data_for_extrusion( i, self.__boundary_mesh ), - self.__boundary_mesh.normals( i ) ) - self.__extrusions[ i ] = extrusion - return extrusion - - -def are_faces_conformal_using_extrusions( extrusions: Extruder, i: int, j: int, boundary_mesh: vtkUnstructuredGrid, - point_tolerance: float ) -> bool: - """ - Tests if two boundary faces are conformal, checking for intersection between their normal extruded volumes. - :param extrusions: The extrusions cache. - :param i: The cell index of the first cell. - :param j: The cell index of the second cell. - :param boundary_mesh: The boundary mesh. - :param point_tolerance: The point tolerance to consider that two points match. - :return: A boolean. 
- """ - collision = vtkCollisionDetectionFilter() - collision.SetCollisionModeToFirstContact() - collision.SetInputData( 0, extrusions[ i ] ) - collision.SetInputData( 1, extrusions[ j ] ) - m_i = vtkTransform() - m_j = vtkTransform() - collision.SetTransform( 0, m_i ) - collision.SetTransform( 1, m_j ) - collision.Update() - - if collision.GetNumberOfContacts() == 0: - return True - - # Duplicating data not to risk anything w.r.t. thread safety of the GetCell function. - cell_i = boundary_mesh.GetCell( i ) - copied_cell_i = cell_i.NewInstance() - copied_cell_i.DeepCopy( cell_i ) - - return are_points_conformal( point_tolerance, copied_cell_i, boundary_mesh.GetCell( j ) ) - - -def are_faces_conformal_using_distances( i: int, j: int, boundary_mesh: vtkUnstructuredGrid, face_tolerance: float, - point_tolerance: float ) -> bool: - """ - Tests if two boundary faces are conformal, checking the minimal distance between triangulated surfaces. - :param i: The cell index of the first cell. - :param j: The cell index of the second cell. - :param boundary_mesh: The boundary mesh. - :param face_tolerance: The tolerance under which we should consider the two faces "touching" each other. - :param point_tolerance: The point tolerance to consider that two points match. - :return: A boolean. 
- """ - cp_i = boundary_mesh.GetCell( i ).NewInstance() - cp_i.DeepCopy( boundary_mesh.GetCell( i ) ) - cp_j = boundary_mesh.GetCell( j ).NewInstance() - cp_j.DeepCopy( boundary_mesh.GetCell( j ) ) - - def triangulate( cell ): - assert cell.GetCellDimension() == 2 - __points_ids = vtkIdList() - __points = vtkPoints() - cell.Triangulate( 0, __points_ids, __points ) - __points_ids = tuple( vtk_iter( __points_ids ) ) - assert len( __points_ids ) % 3 == 0 - assert __points.GetNumberOfPoints() % 3 == 0 - return __points_ids, __points - - points_ids_i, points_i = triangulate( cp_i ) - points_ids_j, points_j = triangulate( cp_j ) - - def build_numpy_triangles( points_ids ): - __triangles = [] - for __i in range( 0, len( points_ids ), 3 ): - __t = [] - for __pi in points_ids[ __i:__i + 3 ]: - __t.append( boundary_mesh.GetPoint( __pi ) ) - __triangles.append( numpy.array( __t, dtype=float ) ) - return __triangles - - triangles_i = build_numpy_triangles( points_ids_i ) - triangles_j = build_numpy_triangles( points_ids_j ) - - min_dist = numpy.inf - for ti, tj in [ ( ti, tj ) for ti in triangles_i for tj in triangles_j ]: - # Note that here, we compute the exact distance to compare with the threshold. - # We could improve by exiting the iterative distance computation as soon as - # we're sure we're smaller than the threshold. No need of the exact solution. - dist, _, _ = triangle_distance.distance_between_two_triangles( ti, tj ) - if dist < min_dist: - min_dist = dist - if min_dist < face_tolerance: - break - if min_dist > face_tolerance: - return True - - return are_points_conformal( point_tolerance, cp_i, cp_j ) - - -def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: - """ - Checks if the mesh is "conformal" (i.e. if some of its boundary faces may not be too close to each other without matching nodes). - :param mesh: The vtk mesh - :param options: The check options. - :return: The Result instance. 
- """ - boundary_mesh = BoundaryMesh( mesh ) - cos_theta = abs( math.cos( numpy.deg2rad( options.angle_tolerance ) ) ) - num_cells = boundary_mesh.GetNumberOfCells() - - # Computing the exact number of cells per node - num_cells_per_node = numpy.zeros( boundary_mesh.GetNumberOfPoints(), dtype=int ) - for ic in range( boundary_mesh.GetNumberOfCells() ): - c = boundary_mesh.GetCell( ic ) - point_ids = c.GetPointIds() - for point_id in vtk_iter( point_ids ): - num_cells_per_node[ point_id ] += 1 - - cell_locator = vtkStaticCellLocator() - cell_locator.Initialize() - cell_locator.SetNumberOfCellsPerNode( num_cells_per_node.max() ) - cell_locator.SetDataSet( boundary_mesh.re_boundary_mesh ) - cell_locator.BuildLocator() - - # Precomputing the bounding boxes. - # The options are important to directly interact with memory in C++. - bounding_boxes = numpy.empty( ( boundary_mesh.GetNumberOfCells(), 6 ), dtype=numpy.double, order="C" ) - for i in range( boundary_mesh.GetNumberOfCells() ): - bb = vtkBoundingBox( boundary_mesh.bounds( i ) ) - bb.Inflate( 2 * options.face_tolerance ) - assert bounding_boxes[ - i, : ].data.contiguous # Do not modify the storage layout since vtk deals with raw memory here. - bb.GetBounds( bounding_boxes[ i, : ] ) - - non_conformal_cells = [] - extrusions = Extruder( boundary_mesh, options.face_tolerance ) - close_cells = vtkIdList() - # Looping on all the pairs of boundary cells. We'll hopefully discard most of the pairs. - for i in tqdm( range( num_cells ), desc="Non conformal elements" ): - cell_locator.FindCellsWithinBounds( bounding_boxes[ i ], close_cells ) - for j in vtk_iter( close_cells ): - if j < i: - continue - # Discarding pairs that are not facing each others (with a threshold). 
- normal_i, normal_j = boundary_mesh.normals( i ), boundary_mesh.normals( j ) - if numpy.dot( normal_i, normal_j ) > -cos_theta: # opposite directions only (can be facing or not) - continue - # At this point, back-to-back and face-to-face pairs of elements are considered. - if not are_faces_conformal_using_extrusions( extrusions, i, j, boundary_mesh, options.point_tolerance ): - non_conformal_cells.append( ( i, j ) ) - # Extracting the original 3d element index (and not the index of the boundary mesh). - tmp = [] - for i, j in non_conformal_cells: - tmp.append( ( boundary_mesh.original_cells.GetValue( i ), boundary_mesh.original_cells.GetValue( j ) ) ) - - return Result( non_conformal_cells=tmp ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - mesh = read_mesh( vtk_input_file ) - return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/reorientMesh.py b/geos-mesh/src/geos/mesh/doctor/actions/reorientMesh.py new file mode 100644 index 00000000..a9fd11a3 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/reorientMesh.py @@ -0,0 +1,166 @@ +import networkx +import numpy +from tqdm import tqdm +from typing import Iterator +from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints +from vtkmodules.vtkCommonDataModel import ( VTK_POLYHEDRON, VTK_TRIANGLE, vtkCellArray, vtkPolyData, vtkPolygon, + vtkUnstructuredGrid, vtkTetra ) +from vtkmodules.vtkFiltersCore import vtkTriangleFilter +from geos.mesh.doctor.actions.vtkPolyhedron import FaceStream, buildFaceToFaceConnectivityThroughEdges +from geos.mesh.doctor.parsing.cliParsing import setupLogger +from geos.mesh.utils.genericHelpers import toVtkIdList + + +def __computeVolume( meshPoints: vtkPoints, faceStream: FaceStream ) -> float: + """Computes the volume of a polyhedron element (defined by its faceStream). + + .. Note:: + The faces of the polyhedron are triangulated and the volumes of the tetrahedra + from the barycenter to the triangular bases are summed. 
+ The normal of each face plays a critical role, + since the volume of each tetrahedron can be positive or negative. + + Args: + meshPoints (vtkPoints): The mesh points, needed to compute the volume. + faceStream (FaceStream): The vtk face stream. + + Returns: + float: The volume of the element. + """ + # Triangulating the envelope of the polyhedron for further volume computation. + polygons = vtkCellArray() + for faceNodes in faceStream.faceNodes: + polygon = vtkPolygon() + polygon.GetPointIds().SetNumberOfIds( len( faceNodes ) ) + # We use the same global points numbering for the polygons as for the input mesh. + # There will be a lot of points in the poly data that won't be used as a support for the polygons. + # But the algorithm deals with it, and it's actually faster (and easier) to do this + # than to renumber and allocate a new fit-for-purpose set of points just for the polygons. + for i, pointId in enumerate( faceNodes ): + polygon.GetPointIds().SetId( i, pointId ) + polygons.InsertNextCell( polygon ) + polygonPolyData = vtkPolyData() + polygonPolyData.SetPoints( meshPoints ) + polygonPolyData.SetPolys( polygons ) + + f = vtkTriangleFilter() + f.SetInputData( polygonPolyData ) + f.Update() + triangles = f.GetOutput() + # Computing the barycenter that will be used as the tip of all the tetra which mesh the polyhedron. + # (The basis of all the tetra being the triangles of the envelope). + # We could take any point, not only the barycenter. + # But in order to work with figures of the same magnitude, let's compute the barycenter. + tmpBarycenter = numpy.empty( ( faceStream.numSupportPoints, 3 ), dtype=float ) + for i, pointId in enumerate( faceStream.supportPointIds ): + tmpBarycenter[ i, : ] = meshPoints.GetPoint( pointId ) + barycenter = tmpBarycenter[ :, 0 ].mean(), tmpBarycenter[ :, 1 ].mean(), tmpBarycenter[ :, 2 ].mean() + # Looping on all the triangles of the envelope of the polyhedron, creating the matching tetra. 
+ # Then the volumes of all the tetra are added to get the final polyhedron volume. + cellVolume = 0. + for i in range( triangles.GetNumberOfCells() ): + triangle = triangles.GetCell( i ) + assert triangle.GetCellType() == VTK_TRIANGLE + p = triangle.GetPoints() + cellVolume += vtkTetra.ComputeVolume( barycenter, p.GetPoint( 0 ), p.GetPoint( 1 ), p.GetPoint( 2 ) ) + return cellVolume + + +def __selectAndFlipFaces( meshPoints: vtkPoints, colors: dict[ frozenset[ int ], int ], + faceStream: FaceStream ) -> FaceStream: + """Given a polyhedron whose faces we were able to paint in two colors, + we now need to select which faces/color to flip such that the volume of the element is positive. + + Args: + meshPoints (vtkPoints): The mesh points, needed to compute the volume. + colors (dict[ frozenset[ int ], int ]): Maps the nodes of each connected component (defined as a frozenset) + to its color. + faceStream (FaceStream): The face stream representing the polyhedron. + + Returns: + FaceStream: The face stream that leads to a positive volume. + """ + # Flipping either color 0 or 1. + colorToNodes: dict[ int, list[ int ] ] = { 0: [], 1: [] } + for connectedComponentsIndices, color in colors.items(): + colorToNodes[ color ] += connectedComponentsIndices + # This implementation works even if there is one unique color. + # Admittedly, there will be one face stream that won't be flipped. + fs: tuple[ FaceStream, + FaceStream ] = ( faceStream.flipFaces( colorToNodes[ 0 ] ), faceStream.flipFaces( colorToNodes[ 1 ] ) ) + volumes = __computeVolume( meshPoints, fs[ 0 ] ), __computeVolume( meshPoints, fs[ 1 ] ) + # We keep the flipped element for which the volume is largest + # (i.e. positive, since they should be the opposite of each other). 
+ return fs[ numpy.argmax( volumes ) ] + + +def __reorientElement( meshPoints: vtkPoints, faceStreamIds: vtkIdList ) -> vtkIdList: + """Considers a vtk face stream and flips the appropriate faces to get an element with normals directed outwards. + + Args: + meshPoints (vtkPoints): The mesh points, needed to compute the volume. + faceStreamIds (vtkIdList): The raw vtk face stream, not converted into a more practical python class. + + Returns: + vtkIdList: The raw vtk face stream with faces properly flipped. + """ + faceStream = FaceStream.buildFromVtkIdList( faceStreamIds ) + faceGraph = buildFaceToFaceConnectivityThroughEdges( faceStream, addCompatibility=True ) + # Removing the non-compatible connections to build the non-connected components. + g = networkx.Graph() + g.add_nodes_from( faceGraph.nodes ) + g.add_edges_from( filter( lambda uvd: uvd[ 2 ][ "compatible" ] == "+", faceGraph.edges( data=True ) ) ) + connectedComponents = tuple( networkx.connected_components( g ) ) + # Squashing all the connected nodes that need to receive the normal direction flip (or not) together. + quotientGraph = networkx.algorithms.quotient_graph( faceGraph, connectedComponents ) + # Coloring the new graph lets us know which clusters of faces need to eventually receive the same flip. + # W.r.t. the nature of our problem (a normal can be directed inwards or outwards), + # two colors should be enough to color the face graph. + # `colors` maps the nodes of each connected component to its color. + colors: dict[ frozenset[ int ], int ] = networkx.algorithms.greedy_color( quotientGraph ) + assert len( colors ) in ( 1, 2 ) + # We now compute the face stream which generates outwards normal vectors. 
+ flippedFaceStream = __selectAndFlipFaces( meshPoints, colors, faceStream ) + return toVtkIdList( flippedFaceStream.dump() ) + + +def reorientMesh( mesh: vtkUnstructuredGrid, cellIndices: Iterator[ int ] ) -> vtkUnstructuredGrid: + """Reorient the polyhedron elements such that they all have their normals directed outwards. + + Args: + mesh (vtkUnstructuredGrid): The input vtk mesh. + cellIndices (Iterator[ int ]): The indices of the cells to reorient. + + Returns: + vtkUnstructuredGrid: The vtk mesh with the desired polyhedron cells directed outwards. + """ + numCells = mesh.GetNumberOfCells() + # Building an indicator/predicate from the list + needsToBeReoriented = numpy.zeros( numCells, dtype=bool ) + for ic in cellIndices: + needsToBeReoriented[ ic ] = True + + outputMesh = mesh.NewInstance() + # I did not manage to call `outputMesh.CopyStructure(mesh)` because I could not modify the polyhedron in place. + # Therefore, I insert the cells one by one... + outputMesh.SetPoints( mesh.GetPoints() ) + setupLogger.info( "Reorienting the polyhedron cells to enforce normals directed outward." ) + with tqdm( total=needsToBeReoriented.sum(), desc="Reorienting polyhedra" + ) as progressBar: # For smoother progress, we only update on reoriented elements. 
+ for ic in range( numCells ): + cell = mesh.GetCell( ic ) + cellType = cell.GetCellType() + if cellType == VTK_POLYHEDRON: + faceStreamIds = vtkIdList() + mesh.GetFaceStream( ic, faceStreamIds ) + if needsToBeReoriented[ ic ]: + newFaceStreamIds = __reorientElement( mesh.GetPoints(), faceStreamIds ) + else: + newFaceStreamIds = faceStreamIds + outputMesh.InsertNextCell( VTK_POLYHEDRON, newFaceStreamIds ) + else: + outputMesh.InsertNextCell( cellType, cell.GetPointIds() ) + if needsToBeReoriented[ ic ]: + progressBar.update( 1 ) + assert outputMesh.GetNumberOfCells() == mesh.GetNumberOfCells() + return outputMesh diff --git a/geos-mesh/src/geos/mesh/doctor/actions/reorient_mesh.py b/geos-mesh/src/geos/mesh/doctor/actions/reorient_mesh.py deleted file mode 100644 index 7b10d313..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/reorient_mesh.py +++ /dev/null @@ -1,151 +0,0 @@ -import networkx -import numpy -from tqdm import tqdm -from typing import Dict, FrozenSet, Iterator, List, Tuple -from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints -from vtkmodules.vtkCommonDataModel import ( VTK_POLYHEDRON, VTK_TRIANGLE, vtkCellArray, vtkPolyData, vtkPolygon, - vtkUnstructuredGrid, vtkTetra ) -from vtkmodules.vtkFiltersCore import vtkTriangleFilter -from geos.mesh.doctor.actions.vtk_polyhedron import FaceStream, build_face_to_face_connectivity_through_edges -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.utils.genericHelpers import to_vtk_id_list - - -def __compute_volume( mesh_points: vtkPoints, face_stream: FaceStream ) -> float: - """ - Computes the volume of a polyhedron element (defined by its face_stream). - :param mesh_points: The mesh points, needed to compute the volume. - :param face_stream: The vtk face stream. - :return: The volume of the element. - :note: The faces of the polyhedron are triangulated and the volumes of the tetrahedra - from the barycenter to the triangular bases are summed. 
- The normal of each face plays critical role, - since the volume of each tetrahedron can be positive or negative. - """ - # Triangulating the envelope of the polyhedron for further volume computation. - polygons = vtkCellArray() - for face_nodes in face_stream.face_nodes: - polygon = vtkPolygon() - polygon.GetPointIds().SetNumberOfIds( len( face_nodes ) ) - # We use the same global points numbering for the polygons than for the input mesh. - # There will be a lot of points in the poly data that won't be used as a support for the polygons. - # But the algorithm deals with it, and it's actually faster (and easier) to do this - # than to renumber and allocate a new fit-for-purpose set of points just for the polygons. - for i, point_id in enumerate( face_nodes ): - polygon.GetPointIds().SetId( i, point_id ) - polygons.InsertNextCell( polygon ) - polygon_poly_data = vtkPolyData() - polygon_poly_data.SetPoints( mesh_points ) - polygon_poly_data.SetPolys( polygons ) - - f = vtkTriangleFilter() - f.SetInputData( polygon_poly_data ) - f.Update() - triangles = f.GetOutput() - # Computing the barycenter that will be used as the tip of all the tetra which mesh the polyhedron. - # (The basis of all the tetra being the triangles of the envelope). - # We could take any point, not only the barycenter. - # But in order to work with figure of the same magnitude, let's compute the barycenter. - tmp_barycenter = numpy.empty( ( face_stream.num_support_points, 3 ), dtype=float ) - for i, point_id in enumerate( face_stream.support_point_ids ): - tmp_barycenter[ i, : ] = mesh_points.GetPoint( point_id ) - barycenter = tmp_barycenter[ :, 0 ].mean(), tmp_barycenter[ :, 1 ].mean(), tmp_barycenter[ :, 2 ].mean() - # Looping on all the triangles of the envelope of the polyhedron, creating the matching tetra. - # Then the volume of all the tetra are added to get the final polyhedron volume. - cell_volume = 0. 
- for i in range( triangles.GetNumberOfCells() ): - triangle = triangles.GetCell( i ) - assert triangle.GetCellType() == VTK_TRIANGLE - p = triangle.GetPoints() - cell_volume += vtkTetra.ComputeVolume( barycenter, p.GetPoint( 0 ), p.GetPoint( 1 ), p.GetPoint( 2 ) ) - return cell_volume - - -def __select_and_flip_faces( mesh_points: vtkPoints, colors: Dict[ FrozenSet[ int ], int ], - face_stream: FaceStream ) -> FaceStream: - """ - Given a polyhedra, given that we were able to paint the faces in two colors, - we now need to select which faces/color to flip such that the volume of the element is positive. - :param mesh_points: The mesh points, needed to compute the volume. - :param colors: Maps the nodes of each connected component (defined as a frozenset) to its color. - :param face_stream: the polyhedron. - :return: The face stream that leads to a positive volume. - """ - # Flipping either color 0 or 1. - color_to_nodes: Dict[ int, List[ int ] ] = { 0: [], 1: [] } - for connected_components_indices, color in colors.items(): - color_to_nodes[ color ] += connected_components_indices - # This implementation works even if there is one unique color. - # Admittedly, there will be one face stream that won't be flipped. - fs: Tuple[ FaceStream, FaceStream ] = ( face_stream.flip_faces( color_to_nodes[ 0 ] ), - face_stream.flip_faces( color_to_nodes[ 1 ] ) ) - volumes = __compute_volume( mesh_points, fs[ 0 ] ), __compute_volume( mesh_points, fs[ 1 ] ) - # We keep the flipped element for which the volume is largest - # (i.e. positive, since they should be the opposite of each other). - return fs[ numpy.argmax( volumes ) ] - - -def __reorient_element( mesh_points: vtkPoints, face_stream_ids: vtkIdList ) -> vtkIdList: - """ - Considers a vtk face stream and flips the appropriate faces to get an element with normals directed outwards. - :param mesh_points: The mesh points, needed to compute the volume. 
- :param face_stream_ids: The raw vtk face stream, not converted into a more practical python class. - :return: The raw vtk face stream with faces properly flipped. - """ - face_stream = FaceStream.build_from_vtk_id_list( face_stream_ids ) - face_graph = build_face_to_face_connectivity_through_edges( face_stream, add_compatibility=True ) - # Removing the non-compatible connections to build the non-connected components. - g = networkx.Graph() - g.add_nodes_from( face_graph.nodes ) - g.add_edges_from( filter( lambda uvd: uvd[ 2 ][ "compatible" ] == "+", face_graph.edges( data=True ) ) ) - connected_components = tuple( networkx.connected_components( g ) ) - # Squashing all the connected nodes that need to receive the normal direction flip (or not) together. - quotient_graph = networkx.algorithms.quotient_graph( face_graph, connected_components ) - # Coloring the new graph lets us know how which cluster of faces need to eventually receive the same flip. - # W.r.t. the nature of our problem (a normal can be directed inwards or outwards), - # two colors should be enough to color the face graph. - # `colors` maps the nodes of each connected component to its color. - colors: Dict[ FrozenSet[ int ], int ] = networkx.algorithms.greedy_color( quotient_graph ) - assert len( colors ) in ( 1, 2 ) - # We now compute the face stream which generates outwards normal vectors. - flipped_face_stream = __select_and_flip_faces( mesh_points, colors, face_stream ) - return to_vtk_id_list( flipped_face_stream.dump() ) - - -def reorient_mesh( mesh, cell_indices: Iterator[ int ] ) -> vtkUnstructuredGrid: - """ - Reorient the polyhedron elements such that they all have their normals directed outwards. - :param mesh: The input vtk mesh. - :param cell_indices: We may need to only flip a limited number of polyhedron cells (only on the boundary for example). - :return: The vtk mesh with the desired polyhedron cells directed outwards. 
- """ - num_cells = mesh.GetNumberOfCells() - # Building an indicator/predicate from the list - needs_to_be_reoriented = numpy.zeros( num_cells, dtype=bool ) - for ic in cell_indices: - needs_to_be_reoriented[ ic ] = True - - output_mesh = mesh.NewInstance() - # I did not manage to call `output_mesh.CopyStructure(mesh)` because I could not modify the polyhedron in place. - # Therefore, I insert the cells one by one... - output_mesh.SetPoints( mesh.GetPoints() ) - setup_logger.info( "Reorienting the polyhedron cells to enforce normals directed outward." ) - with tqdm( total=needs_to_be_reoriented.sum(), desc="Reorienting polyhedra" - ) as progress_bar: # For smoother progress, we only update on reoriented elements. - for ic in range( num_cells ): - cell = mesh.GetCell( ic ) - cell_type = cell.GetCellType() - if cell_type == VTK_POLYHEDRON: - face_stream_ids = vtkIdList() - mesh.GetFaceStream( ic, face_stream_ids ) - if needs_to_be_reoriented[ ic ]: - new_face_stream_ids = __reorient_element( mesh.GetPoints(), face_stream_ids ) - else: - new_face_stream_ids = face_stream_ids - output_mesh.InsertNextCell( VTK_POLYHEDRON, new_face_stream_ids ) - else: - output_mesh.InsertNextCell( cell_type, cell.GetPointIds() ) - if needs_to_be_reoriented[ ic ]: - progress_bar.update( 1 ) - assert output_mesh.GetNumberOfCells() == mesh.GetNumberOfCells() - return output_mesh diff --git a/geos-mesh/src/geos/mesh/doctor/actions/selfIntersectingElements.py b/geos-mesh/src/geos/mesh/doctor/actions/selfIntersectingElements.py new file mode 100644 index 00000000..55956524 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/selfIntersectingElements.py @@ -0,0 +1,80 @@ +from dataclasses import dataclass +from typing import Collection +from vtkmodules.util.numpy_support import vtk_to_numpy +from vtkmodules.vtkFiltersGeneral import vtkCellValidator +from vtkmodules.vtkCommonCore import vtkOutputWindow, vtkFileOutputWindow +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid 
+from geos.mesh.io.vtkIO import readUnstructuredGrid + + +@dataclass( frozen=True ) +class Options: + minDistance: float + + +@dataclass( frozen=True ) +class Result: + wrongNumberOfPointsElements: Collection[ int ] + intersectingEdgesElements: Collection[ int ] + intersectingFacesElements: Collection[ int ] + nonContiguousEdgesElements: Collection[ int ] + nonConvexElements: Collection[ int ] + facesAreOrientedIncorrectlyElements: Collection[ int ] + + +def __action( mesh: vtkUnstructuredGrid, options: Options ) -> Result: + errOut = vtkFileOutputWindow() + errOut.SetFileName( "/dev/null" ) # vtkCellValidator outputs loads for each cell... + vtkStdErrOut = vtkOutputWindow() + vtkStdErrOut.SetInstance( errOut ) + + valid = 0x0 + wrongNumberOfPoints = 0x01 + intersectingEdges = 0x02 + intersectingFaces = 0x04 + nonContiguousEdges = 0x08 + nonConvex = 0x10 + facesAreOrientedIncorrectly = 0x20 + + wrongNumberOfPointsElements: list[ int ] = [] + intersectingEdgesElements: list[ int ] = [] + intersectingFacesElements: list[ int ] = [] + nonContiguousEdgesElements: list[ int ] = [] + nonConvexElements: list[ int ] = [] + facesAreOrientedIncorrectlyElements: list[ int ] = [] + + f = vtkCellValidator() + f.SetTolerance( options.minDistance ) + + f.SetInputData( mesh ) + f.Update() + output = f.GetOutput() + + validity = output.GetCellData().GetArray( "ValidityState" ) # Could not change name using the vtk interface. 
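The action classifies cells by testing individual bits of `vtkCellValidator`'s `ValidityState` array against the constants defined above. A dependency-free sketch of that bitmask decoding, reusing the same flag values (the dict and function names here are illustrative, not part of the codebase):

```python
# Flag values mirror the constants used by the action above.
VALIDITY_FLAGS = {
    0x01: "wrongNumberOfPoints",
    0x02: "intersectingEdges",
    0x04: "intersectingFaces",
    0x08: "nonContiguousEdges",
    0x10: "nonConvex",
    0x20: "facesAreOrientedIncorrectly",
}


def decodeValidityState( state: int ) -> list[ str ]:
    """Return the names of every failure bit set in `state` (0x0 means the cell is valid)."""
    return [ name for bit, name in VALIDITY_FLAGS.items() if state & bit ]
```

A state of `0x12`, for instance, decodes to both `intersectingEdges` and `nonConvex`, which is why the action appends the same cell index to several result lists.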
+ assert validity is not None + validity = vtk_to_numpy( validity ) + for i, v in enumerate( validity ): + if not v & valid: + if v & wrongNumberOfPoints: + wrongNumberOfPointsElements.append( i ) + if v & intersectingEdges: + intersectingEdgesElements.append( i ) + if v & intersectingFaces: + intersectingFacesElements.append( i ) + if v & nonContiguousEdges: + nonContiguousEdgesElements.append( i ) + if v & nonConvex: + nonConvexElements.append( i ) + if v & facesAreOrientedIncorrectly: + facesAreOrientedIncorrectlyElements.append( i ) + return Result( wrongNumberOfPointsElements=wrongNumberOfPointsElements, + intersectingEdgesElements=intersectingEdgesElements, + intersectingFacesElements=intersectingFacesElements, + nonContiguousEdgesElements=nonContiguousEdgesElements, + nonConvexElements=nonConvexElements, + facesAreOrientedIncorrectlyElements=facesAreOrientedIncorrectlyElements ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/self_intersecting_elements.py b/geos-mesh/src/geos/mesh/doctor/actions/self_intersecting_elements.py deleted file mode 100644 index 3b7d313a..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/self_intersecting_elements.py +++ /dev/null @@ -1,79 +0,0 @@ -from dataclasses import dataclass -from typing import Collection, List -from vtkmodules.util.numpy_support import vtk_to_numpy -from vtkmodules.vtkFiltersGeneral import vtkCellValidator -from vtkmodules.vtkCommonCore import vtkOutputWindow, vtkFileOutputWindow -from geos.mesh.io.vtkIO import read_mesh - - -@dataclass( frozen=True ) -class Options: - min_distance: float - - -@dataclass( frozen=True ) -class Result: - wrong_number_of_points_elements: Collection[ int ] - intersecting_edges_elements: Collection[ int ] - intersecting_faces_elements: Collection[ int ] - non_contiguous_edges_elements: Collection[ int ] - 
non_convex_elements: Collection[ int ] - faces_are_oriented_incorrectly_elements: Collection[ int ] - - -def __action( mesh, options: Options ) -> Result: - err_out = vtkFileOutputWindow() - err_out.SetFileName( "/dev/null" ) # vtkCellValidator outputs loads for each cell... - vtk_std_err_out = vtkOutputWindow() - vtk_std_err_out.SetInstance( err_out ) - - valid = 0x0 - wrong_number_of_points = 0x01 - intersecting_edges = 0x02 - intersecting_faces = 0x04 - non_contiguous_edges = 0x08 - non_convex = 0x10 - faces_are_oriented_incorrectly = 0x20 - - wrong_number_of_points_elements: List[ int ] = [] - intersecting_edges_elements: List[ int ] = [] - intersecting_faces_elements: List[ int ] = [] - non_contiguous_edges_elements: List[ int ] = [] - non_convex_elements: List[ int ] = [] - faces_are_oriented_incorrectly_elements: List[ int ] = [] - - f = vtkCellValidator() - f.SetTolerance( options.min_distance ) - - f.SetInputData( mesh ) - f.Update() - output = f.GetOutput() - - validity = output.GetCellData().GetArray( "ValidityState" ) # Could not change name using the vtk interface. 
- assert validity is not None - validity = vtk_to_numpy( validity ) - for i, v in enumerate( validity ): - if not v & valid: - if v & wrong_number_of_points: - wrong_number_of_points_elements.append( i ) - if v & intersecting_edges: - intersecting_edges_elements.append( i ) - if v & intersecting_faces: - intersecting_faces_elements.append( i ) - if v & non_contiguous_edges: - non_contiguous_edges_elements.append( i ) - if v & non_convex: - non_convex_elements.append( i ) - if v & faces_are_oriented_incorrectly: - faces_are_oriented_incorrectly_elements.append( i ) - return Result( wrong_number_of_points_elements=wrong_number_of_points_elements, - intersecting_edges_elements=intersecting_edges_elements, - intersecting_faces_elements=intersecting_faces_elements, - non_contiguous_edges_elements=non_contiguous_edges_elements, - non_convex_elements=non_convex_elements, - faces_are_oriented_incorrectly_elements=faces_are_oriented_incorrectly_elements ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - mesh = read_mesh( vtk_input_file ) - return __action( mesh, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/supportedElements.py b/geos-mesh/src/geos/mesh/doctor/actions/supportedElements.py new file mode 100644 index 00000000..e760c717 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/supportedElements.py @@ -0,0 +1,177 @@ +from dataclasses import dataclass +import multiprocessing +import networkx +from tqdm import tqdm +from typing import Iterable, Mapping, Optional +from vtkmodules.util.numpy_support import vtk_to_numpy +from vtkmodules.vtkCommonCore import vtkIdList +from vtkmodules.vtkCommonDataModel import ( vtkCellTypes, vtkUnstructuredGrid, VTK_HEXAGONAL_PRISM, VTK_HEXAHEDRON, + VTK_PENTAGONAL_PRISM, VTK_POLYHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_VOXEL, + VTK_WEDGE ) +from geos.mesh.doctor.actions.vtkPolyhedron import buildFaceToFaceConnectivityThroughEdges, FaceStream +from geos.mesh.doctor.parsing.cliParsing import 
setupLogger +from geos.mesh.io.vtkIO import readUnstructuredGrid +from geos.mesh.utils.genericHelpers import vtkIter + + +@dataclass( frozen=True ) +class Options: + nproc: int + chunkSize: int + + +@dataclass( frozen=True ) +class Result: + unsupportedStdElementsTypes: frozenset[ int ] # list of unsupported types + unsupportedPolyhedronElements: frozenset[ + int ] # list of polyhedron elements that could not be converted to supported std elements + + +# for multiprocessing, vtkUnstructuredGrid cannot be pickled. Let's use a global variable instead. +MESH: Optional[ vtkUnstructuredGrid ] = None + + +def initWorkerMesh( inputFileForWorker: str ): + """Initializer for multiprocessing.Pool to set the global MESH variable in each worker process. + + Args: + inputFileForWorker (str): Filepath to vtk grid + """ + global MESH + setupLogger.debug( + f"Worker process (PID: {multiprocessing.current_process().pid}) initializing MESH from file: {inputFileForWorker}" + ) + MESH = readUnstructuredGrid( inputFileForWorker ) + if MESH is None: + setupLogger.error( + f"Worker process (PID: {multiprocessing.current_process().pid}) failed to load mesh from {inputFileForWorker}" + ) + # You might want to raise an error here or ensure MESH being None is handled downstream + # For now, the assert MESH is not None in __call__ will catch this. + + +class IsPolyhedronConvertible: + + def __init__( self ): + + def buildPrismGraph( n: int, name: str ) -> networkx.Graph: + """Builds the face to face connectivities (through edges) for prism graphs. + + Args: + n (int): The number of nodes of the basis (i.e. the pentagonal prism gets n = 5) + name (str): A human-readable name for logging purposes. + + Returns: + networkx.Graph: A graph instance.
+ """ + tmp = networkx.cycle_graph( n ) + for node in range( n ): + tmp.add_edge( node, n ) + tmp.add_edge( node, n + 1 ) + tmp.name = name + return tmp + + # Building the reference graphs + tetGraph = networkx.complete_graph( 4 ) + tetGraph.name = "Tetrahedron" + pyrGraph = buildPrismGraph( 4, "Pyramid" ) + pyrGraph.remove_node( 5 ) # Removing a node also removes its associated edges. + self.__referenceGraphs: Mapping[ int, Iterable[ networkx.Graph ] ] = { + 4: ( tetGraph, ), + 5: ( pyrGraph, buildPrismGraph( 3, "Wedge" ) ), + 6: ( buildPrismGraph( 4, "Hexahedron" ), ), + 7: ( buildPrismGraph( 5, "Prism5" ), ), + 8: ( buildPrismGraph( 6, "Prism6" ), ), + 9: ( buildPrismGraph( 7, "Prism7" ), ), + 10: ( buildPrismGraph( 8, "Prism8" ), ), + 11: ( buildPrismGraph( 9, "Prism9" ), ), + 12: ( buildPrismGraph( 10, "Prism10" ), ), + 13: ( buildPrismGraph( 11, "Prism11" ), ), + } + + def __isPolyhedronSupported( self, faceStream ) -> str: + """Checks if a polyhedron can be converted into a supported cell. + If so, returns the name of the type. If not, the returned name will be empty. + + Args: + faceStream (_type_): The polyhedron. + + Returns: + str: The name of the supported type or an empty string. + """ + cellGraph = buildFaceToFaceConnectivityThroughEdges( faceStream, add_compatibility=True ) + if cellGraph.order() not in self.__referenceGraphs: + return "" + for referenceGraph in self.__referenceGraphs[ cellGraph.order() ]: + if networkx.is_isomorphic( referenceGraph, cellGraph ): + return str( referenceGraph.name ) + return "" + + def __call__( self, ic: int ) -> int: + """Checks if a vtk polyhedron cell can be converted into a supported GEOS element. + + Args: + ic (int): The index element. + + Returns: + int: -1 if the polyhedron vtk element can be converted into a supported element type. The index otherwise. + """ + global MESH + assert MESH is not None, f"MESH global variable not initialized in worker process (PID: {multiprocessing.current_process().pid}). 
This should have been set by initWorkerMesh." + if MESH.GetCellType( ic ) != VTK_POLYHEDRON: + return -1 + ptIds = vtkIdList() + MESH.GetFaceStream( ic, ptIds ) + faceStream = FaceStream.buildFromVtkIdList( ptIds ) + convertedTypeName = self.__isPolyhedronSupported( faceStream ) + if convertedTypeName: + setupLogger.debug( f"Polyhedron cell {ic} can be converted into \"{convertedTypeName}\"" ) + return -1 + else: + setupLogger.debug( + f"Polyhedron cell {ic} (in PID {multiprocessing.current_process().pid}) cannot be converted into any supported element." + ) + return ic + + +def __action( vtkInputFile: str, options: Options ) -> Result: + # Main process loads the mesh for its own use + mesh: vtkUnstructuredGrid = readUnstructuredGrid( vtkInputFile ) + if mesh is None: + setupLogger.error( f"Main process failed to load mesh from {vtkInputFile}. Aborting." ) + # Return an empty/error result or raise an exception + return Result( unsupportedStdElementsTypes=frozenset(), unsupportedPolyhedronElements=frozenset() ) + + if hasattr( mesh, "GetDistinctCellTypesArray" ): + cellTypesNumpy = vtk_to_numpy( mesh.GetDistinctCellTypesArray() ) + cellTypes = set( cellTypesNumpy.tolist() ) + else: + vtkCellTypesObj = vtkCellTypes() + mesh.GetCellTypes( vtkCellTypesObj ) + cellTypes = set( vtkIter( vtkCellTypesObj ) ) + + supportedCellTypes = { + VTK_HEXAGONAL_PRISM, VTK_HEXAHEDRON, VTK_PENTAGONAL_PRISM, VTK_POLYHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_VOXEL, + VTK_WEDGE + } + unsupportedStdElementsTypes = cellTypes - supportedCellTypes + + # Dealing with polyhedron elements. 
+ numCells = mesh.GetNumberOfCells() + polyhedronConverter = IsPolyhedronConvertible() + + unsupportedPolyhedronIndices = [] + # Pass the vtkInputFile to the initializer + with multiprocessing.Pool( processes=options.nproc, initializer=initWorkerMesh, + initargs=( vtkInputFile, ) ) as pool: # Comma makes it a tuple + generator = pool.imap_unordered( polyhedronConverter, range( numCells ), chunksize=options.chunkSize ) + for cellIndexOrNegOne in tqdm( generator, total=numCells, desc="Testing support for elements" ): + if cellIndexOrNegOne != -1: + unsupportedPolyhedronIndices.append( cellIndexOrNegOne ) + + return Result( unsupportedStdElementsTypes=frozenset( unsupportedStdElementsTypes ), + unsupportedPolyhedronElements=frozenset( unsupportedPolyhedronIndices ) ) + + +def action( vtkInputFile: str, options: Options ) -> Result: + return __action( vtkInputFile, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/supported_elements.py b/geos-mesh/src/geos/mesh/doctor/actions/supported_elements.py deleted file mode 100644 index 8d9fd46a..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/supported_elements.py +++ /dev/null @@ -1,177 +0,0 @@ -from dataclasses import dataclass -import multiprocessing -import networkx -from tqdm import tqdm -from typing import FrozenSet, Iterable, Mapping, Optional -from vtkmodules.util.numpy_support import vtk_to_numpy -from vtkmodules.vtkCommonCore import vtkIdList -from vtkmodules.vtkCommonDataModel import ( vtkCellTypes, vtkUnstructuredGrid, VTK_HEXAGONAL_PRISM, VTK_HEXAHEDRON, - VTK_PENTAGONAL_PRISM, VTK_POLYHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_VOXEL, - VTK_WEDGE ) -from geos.mesh.doctor.actions.vtk_polyhedron import build_face_to_face_connectivity_through_edges, FaceStream -from geos.mesh.doctor.parsing.cli_parsing import setup_logger -from geos.mesh.io.vtkIO import read_mesh -from geos.mesh.utils.genericHelpers import vtk_iter - - -@dataclass( frozen=True ) -class Options: - nproc: int - chunk_size: int - - 
-@dataclass( frozen=True ) -class Result: - unsupported_std_elements_types: FrozenSet[ int ] # list of unsupported types - unsupported_polyhedron_elements: FrozenSet[ - int ] # list of polyhedron elements that could not be converted to supported std elements - - -# for multiprocessing, vtkUnstructuredGrid cannot be pickled. Let's use a global variable instead. -MESH: Optional[ vtkUnstructuredGrid ] = None - - -def init_worker_mesh( input_file_for_worker: str ): - """Initializer for multiprocessing.Pool to set the global MESH variable in each worker process. - - Args: - input_file_for_worker (str): Filepath to vtk grid - """ - global MESH - setup_logger.debug( - f"Worker process (PID: {multiprocessing.current_process().pid}) initializing MESH from file: {input_file_for_worker}" - ) - MESH = read_mesh( input_file_for_worker ) - if MESH is None: - setup_logger.error( - f"Worker process (PID: {multiprocessing.current_process().pid}) failed to load mesh from {input_file_for_worker}" - ) - # You might want to raise an error here or ensure MESH being None is handled downstream - # For now, the assert MESH is not None in __call__ will catch this. - - -class IsPolyhedronConvertible: - - def __init__( self ): - - def build_prism_graph( n: int, name: str ) -> networkx.Graph: - """Builds the face to face connectivities (through edges) for prism graphs. - - Args: - n (int): The number of nodes of the basis (i.e. the pentagonal prims gets n = 5) - name (str): A human-readable name for logging purpose. - - Returns: - networkx.Graph: A graph instance. - """ - tmp = networkx.cycle_graph( n ) - for node in range( n ): - tmp.add_edge( node, n ) - tmp.add_edge( node, n + 1 ) - tmp.name = name - return tmp - - # Building the reference graphs - tet_graph = networkx.complete_graph( 4 ) - tet_graph.name = "Tetrahedron" - pyr_graph = build_prism_graph( 4, "Pyramid" ) - pyr_graph.remove_node( 5 ) # Removing a node also removes its associated edges. 
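The reference graphs encode the face-to-face adjacency of each supported element. The pattern `buildPrismGraph`/`build_prism_graph` relies on is that an n-gonal prism has n side faces forming a cycle, plus two base faces adjacent to every side face. A `networkx`-free sketch of that adjacency using plain dicts (names are illustrative):

```python
def buildPrismAdjacency( n: int ) -> dict[ int, set[ int ] ]:
    """Face adjacency of an n-gonal prism: faces 0..n-1 are the side faces,
    faces n and n+1 are the two bases."""
    adj: dict[ int, set[ int ] ] = { i: set() for i in range( n + 2 ) }

    def link( a: int, b: int ) -> None:
        adj[ a ].add( b )
        adj[ b ].add( a )

    for i in range( n ):
        link( i, ( i + 1 ) % n )  # consecutive side faces share a vertical edge
        link( i, n )  # each side face shares an edge with the bottom base
        link( i, n + 1 )  # ... and with the top base
    return adj
```

For n = 4 this gives the hexahedron's face graph: 6 faces, each with exactly 4 neighbors, which is what the isomorphism check compares a polyhedron's face graph against.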
- self.__reference_graphs: Mapping[ int, Iterable[ networkx.Graph ] ] = { - 4: ( tet_graph, ), - 5: ( pyr_graph, build_prism_graph( 3, "Wedge" ) ), - 6: ( build_prism_graph( 4, "Hexahedron" ), ), - 7: ( build_prism_graph( 5, "Prism5" ), ), - 8: ( build_prism_graph( 6, "Prism6" ), ), - 9: ( build_prism_graph( 7, "Prism7" ), ), - 10: ( build_prism_graph( 8, "Prism8" ), ), - 11: ( build_prism_graph( 9, "Prism9" ), ), - 12: ( build_prism_graph( 10, "Prism10" ), ), - 13: ( build_prism_graph( 11, "Prism11" ), ), - } - - def __is_polyhedron_supported( self, face_stream ) -> str: - """Checks if a polyhedron can be converted into a supported cell. - If so, returns the name of the type. If not, the returned name will be empty. - - Args: - face_stream (_type_): The polyhedron. - - Returns: - str: The name of the supported type or an empty string. - """ - cell_graph = build_face_to_face_connectivity_through_edges( face_stream, add_compatibility=True ) - if cell_graph.order() not in self.__reference_graphs: - return "" - for reference_graph in self.__reference_graphs[ cell_graph.order() ]: - if networkx.is_isomorphic( reference_graph, cell_graph ): - return str( reference_graph.name ) - return "" - - def __call__( self, ic: int ) -> int: - """Checks if a vtk polyhedron cell can be converted into a supported GEOSX element. - - Args: - ic (int): The index element. - - Returns: - int: -1 if the polyhedron vtk element can be converted into a supported element type. The index otherwise. - """ - global MESH - assert MESH is not None, f"MESH global variable not initialized in worker process (PID: {multiprocessing.current_process().pid}). This should have been set by init_worker_mesh." 
- if MESH.GetCellType( ic ) != VTK_POLYHEDRON: - return -1 - pt_ids = vtkIdList() - MESH.GetFaceStream( ic, pt_ids ) - face_stream = FaceStream.build_from_vtk_id_list( pt_ids ) - converted_type_name = self.__is_polyhedron_supported( face_stream ) - if converted_type_name: - setup_logger.debug( f"Polyhedron cell {ic} can be converted into \"{converted_type_name}\"" ) - return -1 - else: - setup_logger.debug( - f"Polyhedron cell {ic} (in PID {multiprocessing.current_process().pid}) cannot be converted into any supported element." - ) - return ic - - -def __action( vtk_input_file: str, options: Options ) -> Result: - # Main process loads the mesh for its own use - mesh = read_mesh( vtk_input_file ) - if mesh is None: - setup_logger.error( f"Main process failed to load mesh from {vtk_input_file}. Aborting." ) - # Return an empty/error result or raise an exception - return Result( unsupported_std_elements_types=frozenset(), unsupported_polyhedron_elements=frozenset() ) - - if hasattr( mesh, "GetDistinctCellTypesArray" ): - cell_types_numpy = vtk_to_numpy( mesh.GetDistinctCellTypesArray() ) - cell_types = set( cell_types_numpy.tolist() ) - else: - vtk_cell_types_obj = vtkCellTypes() - mesh.GetCellTypes( vtk_cell_types_obj ) - cell_types = set( vtk_iter( vtk_cell_types_obj ) ) - - supported_cell_types = { - VTK_HEXAGONAL_PRISM, VTK_HEXAHEDRON, VTK_PENTAGONAL_PRISM, VTK_POLYHEDRON, VTK_PYRAMID, VTK_TETRA, VTK_VOXEL, - VTK_WEDGE - } - unsupported_std_elements_types = cell_types - supported_cell_types - - # Dealing with polyhedron elements. 
- num_cells = mesh.GetNumberOfCells() - polyhedron_converter = IsPolyhedronConvertible() - - unsupported_polyhedron_indices = [] - # Pass the vtk_input_file to the initializer - with multiprocessing.Pool( processes=options.nproc, initializer=init_worker_mesh, - initargs=( vtk_input_file, ) ) as pool: # Comma makes it a tuple - generator = pool.imap_unordered( polyhedron_converter, range( num_cells ), chunksize=options.chunk_size ) - for cell_index_or_neg_one in tqdm( generator, total=num_cells, desc="Testing support for elements" ): - if cell_index_or_neg_one != -1: - unsupported_polyhedron_indices.append( cell_index_or_neg_one ) - - return Result( unsupported_std_elements_types=frozenset( unsupported_std_elements_types ), - unsupported_polyhedron_elements=frozenset( unsupported_polyhedron_indices ) ) - - -def action( vtk_input_file: str, options: Options ) -> Result: - return __action( vtk_input_file, options ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/triangleDistance.py b/geos-mesh/src/geos/mesh/doctor/actions/triangleDistance.py new file mode 100644 index 00000000..a8e309fe --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/triangleDistance.py @@ -0,0 +1,201 @@ +import itertools +from math import sqrt +import numpy +from numpy.linalg import norm +from typing import Union + + +def __divClamp( num: float, den: float ) -> float: + """Computes the division `num / den`. and clamps the result between 0 and 1. + If `den` is zero, the result of the division is set to 0. + + Args: + num (float): The numerator. + den (float): The denominator. + + Returns: + float: The result between 0 and 1. + """ + if den == 0.: + return 0. + tmp: float = num / den + if tmp < 0: + return 0. + elif tmp > 1: + return 1. + else: + return tmp + + +def distanceBetweenTwoSegments( x0: numpy.ndarray, d0: numpy.ndarray, x1: numpy.ndarray, + d1: numpy.ndarray ) -> tuple[ numpy.ndarray, numpy.ndarray ]: + """Computes the minimum distance between two segments. 
+ + Args: + x0 (numpy.ndarray): First point of segment 0. + d0 (numpy.ndarray): Director vector such that x0 + d0 is the second point of segment 0. + x1 (numpy.ndarray): First point of segment 1. + d1 (numpy.ndarray): Director vector such that x1 + d1 is the second point of segment 1. + + Returns: + tuple[ numpy.ndarray, numpy.ndarray ]: A tuple containing the closest points on segments + 0 and 1 respectively. + """ + # The reference paper is: + # "On fast computation of distance between line segments" by Vladimir J. Lumelsky. + # Information Processing Letters, Vol. 21, number 2, pages 55-61, 08/16/1985. + + # In the reference, the indices start at 1, while in this implementation, they start at 0. + tmp: numpy.ndarray = x1 - x0 + D0: float = numpy.dot( d0, d0 ) # As such, this is D1 in the reference paper. + D1: float = numpy.dot( d1, d1 ) + R: float = numpy.dot( d0, d1 ) + S0: float = numpy.dot( d0, tmp ) + S1: float = numpy.dot( d1, tmp ) + + # `t0` parameterizes line 0: + # - when t0 = 0 the point is p0. + # - when t0 = 1, the point is p0 + u0, the other side of the segment + # Same for `t1` and line 1. + + # Step 1 of the algorithm considers degenerate cases. + # They'll be handled along the way using `__divClamp`. + + # Step 2: Computing t0 using eq (11). + t0: float = __divClamp( S0 * D1 - S1 * R, D0 * D1 - R * R ) + + # Step 3: compute t1 for point on line 1 closest to point at t0. + t1: float = __divClamp( t0 * R - S1, D1 ) # Eq (10, right) + sol1: numpy.ndarray = x1 + t1 * d1 # Eq (3) + t0: float = __divClamp( t1 * R + S0, D0 ) # Eq (10, left) + sol0: numpy.ndarray = x0 + t0 * d0 # Eq (4) + + return sol0, sol1 + + +def __computeNodesToTriangleDistance( + tri0: numpy.ndarray, edges0, tri1: numpy.ndarray +) -> tuple[ Union[ float, None ], Union[ numpy.ndarray, None ], Union[ numpy.ndarray, None ], bool ]: + """Computes the distance from the nodes of `tri1` onto `tri0`. + + Args: + tri0 (numpy.ndarray): First triangle.
+ edges0: The edges of triangle 0. First element being edge [0, 1], etc. + tri1 (numpy.ndarray): Second triangle. + + Returns: + tuple[ Union[ float, None ], Union[ numpy.ndarray, None ], Union[ numpy.ndarray, None ], bool ]: + The distance, the closest point on triangle 0, the closest point on triangle 1 and a boolean indicating whether the + triangles are disjoint. If nothing was found, then the first three arguments are None. + The boolean is always defined. + """ + areDisjoint: bool = False + tri0Normal: numpy.ndarray = numpy.cross( edges0[ 0 ], edges0[ 1 ] ) + tri0NormalNorm: float = numpy.dot( tri0Normal, tri0Normal ) + + # Forget about degenerate cases. + if tri0NormalNorm > numpy.finfo( float ).eps: + # Build projection lengths of `tri1` points. + tri1Proj = numpy.empty( 3, dtype=float ) + for i in range( 3 ): + tri1Proj[ i ] = numpy.dot( tri0[ 0 ] - tri1[ i ], tri0Normal ) + + # Considering `tri0` separates the space in two, + # let's check if `tri1` is on one side only. + # If so, let's take the closest point. + point: int = -1 + if numpy.all( tri1Proj > 0 ): + point = numpy.argmin( tri1Proj ) + elif numpy.all( tri1Proj < 0 ): + point = numpy.argmax( tri1Proj ) + + # So if `tri1` is actually "on one side", + # point `tri1[point]` is a candidate to be the closest point. + if point > -1: + areDisjoint = True + # But we must check that its projection is inside `tri0`. + if numpy.dot( tri1[ point ] - tri0[ 0 ], numpy.cross( tri0Normal, edges0[ 0 ] ) ) > 0: + if numpy.dot( tri1[ point ] - tri0[ 1 ], numpy.cross( tri0Normal, edges0[ 1 ] ) ) > 0: + if numpy.dot( tri1[ point ] - tri0[ 2 ], numpy.cross( tri0Normal, edges0[ 2 ] ) ) > 0: + # It is!
+ sol0 = tri1[ point ] + sol1 = tri1[ point ] + ( tri1Proj[ point ] / tri0NormalNorm ) * tri0Normal + return norm( sol1 - sol0 ), sol0, sol1, areDisjoint + return None, None, None, areDisjoint + + +def distanceBetweenTwoTriangles( tri0: numpy.ndarray, + tri1: numpy.ndarray ) -> tuple[ float, numpy.ndarray, numpy.ndarray ]: + """Returns the minimum distance between two triangles, and the two points where this minimum occurs. + If the two triangles touch, then distance is exactly 0. + But the two points are dummy and cannot be used as contact points (they are still though). + + Args: + tri0 (numpy.ndarray): The first 3x3 triangle points. + tri1 (numpy.ndarray): The second 3x3 triangle points. + + Returns: + tuple[ float, numpy.ndarray, numpy.ndarray ]: The distance and the two points. + """ + # Compute vectors along the 6 sides + edges0 = numpy.empty( ( 3, 3 ), dtype=float ) + edges1 = numpy.empty( ( 3, 3 ), dtype=float ) + for i in range( 3 ): + edges0[ i ][ : ] = tri0[ ( i + 1 ) % 3 ] - tri0[ i ] + edges1[ i ][ : ] = tri1[ ( i + 1 ) % 3 ] - tri1[ i ] + + minSol0 = numpy.empty( 3, dtype=float ) + minSol1 = numpy.empty( 3, dtype=float ) + areDisjoint: bool = False + + minDist = numpy.inf + + # Looping over all the pair of edges. + for i, j in itertools.product( range( 3 ), repeat=2 ): + # Find the closest points on edges i and j. + sol0, sol1 = distanceBetweenTwoSegments( tri0[ i ], edges0[ i ], tri1[ j ], edges1[ j ] ) + # Computing the distance between the two solutions. + deltaSol = sol1 - sol0 + dist: float = numpy.dot( deltaSol, deltaSol ) + # Update minimum if relevant and check if it's the closest pair of points. + if dist <= minDist: + minSol0[ : ] = sol0 + minSol1[ : ] = sol1 + minDist = dist + + # `tri0[(i + 2) % 3]` is the points opposite to edges0[i] where the closest point sol0 lies. 
+ # Computing those scalar products and checking the signs somehow let us determine + # if the triangles are getting closer to each other when approaching the sol_(0|1) nodes. + # If so, we have a minimum. + a: float = numpy.dot( tri0[ ( i + 2 ) % 3 ] - sol0, deltaSol ) + b: float = numpy.dot( tri1[ ( j + 2 ) % 3 ] - sol1, deltaSol ) + if a <= 0 <= b: + return sqrt( dist ), sol0, sol1 + + if a < 0: + a = 0 + if b > 0: + b = 0 + # `dist - a + b` expands to `numpy.dot(tri1[(j + 2) % 3] - tri0[(i + 2) % 3], sol1 - sol0)`. + # If the "directions" of the (sol1 - sol0) vector and the vector joining the extra points of the triangles + # (i.e. not involved in the current edge check) are the "same", then the triangles do not intersect. + if dist - a + b > 0: + areDisjoint = True + # No edge pair contained the closest points. + # Checking the node/face situation. + distance, sol0, sol1, areDisjointTmp = __computeNodesToTriangleDistance( tri0, edges0, tri1 ) + if distance: + return distance, sol0, sol1 + areDisjoint = areDisjoint or areDisjointTmp + + distance, sol0, sol1, areDisjointTmp = __computeNodesToTriangleDistance( tri1, edges1, tri0 ) + if distance: + return distance, sol0, sol1 + areDisjoint = areDisjoint or areDisjointTmp + # It's not a node/face situation. + # If the triangles do not overlap, let's return the minimum found during the edges loop. + # (maybe an edge was parallel to the other face, and we could not decide for a unique closest point). + if areDisjoint: + return sqrt( minDist ), minSol0, minSol1 + else: # Surely overlapping or degenerate triangles.
+ return 0., numpy.zeros( 3, dtype=float ), numpy.zeros( 3, dtype=float ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/triangle_distance.py b/geos-mesh/src/geos/mesh/doctor/actions/triangle_distance.py deleted file mode 100644 index 989fac09..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/triangle_distance.py +++ /dev/null @@ -1,187 +0,0 @@ -import itertools -from math import sqrt -import numpy -from numpy.linalg import norm -from typing import Tuple, Union - - -def __div_clamp( num: float, den: float ) -> float: - """ - Computes the division `num / den`. and clamps the result between 0 and 1. - If `den` is zero, the result of the division is set to 0. - :param num: The numerator. - :param den: The denominator. - :return: The result between 0 and 1. - """ - if den == 0.: - return 0. - tmp: float = num / den - if tmp < 0: - return 0. - elif tmp > 1: - return 1. - else: - return tmp - - -def distance_between_two_segments( x0: numpy.ndarray, d0: numpy.ndarray, x1: numpy.ndarray, - d1: numpy.ndarray ) -> Tuple[ numpy.ndarray, numpy.ndarray ]: - """ - Compute the minimum distance between two segments. - :param x0: First point of segment 0. - :param d0: Director vector such that x0 + d0 is the second point of segment 0. - :param x1: First point of segment 1. - :param d1: Director vector such that x1 + d1 is the second point of segment 1. - :return: A tuple containing the two points closest point for segments 0 and 1 respectively. - """ - # The reference paper is: - # "On fast computation of distance between line segments" by Vladimir J. Lumelsky. - # Information Processing Letters, Vol. 21, number 2, pages 55-61, 08/16/1985. - - # In the reference, the indices start at 1, while in this implementation, they start at 0. - tmp: numpy.ndarray = x1 - x0 - D0: float = numpy.dot( d0, d0 ) # As such, this is D1 in the reference paper. 
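Both the new `__divClamp` and the old `__div_clamp` guard the Lumelsky update steps: each parametric coordinate is clamped to [0, 1] so the solution stays on its segment, and a zero denominator (degenerate segment) falls back to the first endpoint. A standalone sketch of that helper (illustrative name):

```python
def divClamp( num: float, den: float ) -> float:
    """Compute num / den clamped to [0, 1]; a zero denominator maps to 0,
    i.e. the parameter of the segment's first endpoint."""
    if den == 0.:
        return 0.
    return min( 1., max( 0., num / den ) )
```

So `divClamp( 3., 2. )` saturates at 1 (the far endpoint) and `divClamp( -1., 2. )` at 0, which is how step 1's degenerate cases get absorbed into steps 2 and 3.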
- D1: float = numpy.dot( d1, d1 ) - R: float = numpy.dot( d0, d1 ) - S0: float = numpy.dot( d0, tmp ) - S1: float = numpy.dot( d1, tmp ) - - # `t0` parameterizes line 0: - # - when t0 = 0 the point is p0. - # - when t0 = 1, the point is p0 + u0, the other side of the segment - # Same for `t1` and line 1. - - # Step 1 of the algorithm considers degenerate cases. - # They'll be considered along the line using `div_clamp`. - - # Step 2: Computing t0 using eq (11). - t0: float = __div_clamp( S0 * D1 - S1 * R, D0 * D1 - R * R ) - - # Step 3: compute t1 for point on line 1 closest to point at t0. - t1: float = __div_clamp( t0 * R - S1, D1 ) # Eq (10, right) - sol_1: numpy.ndarray = x1 + t1 * d1 # Eq (3) - t0: float = __div_clamp( t1 * R + S0, D0 ) # Eq (10, left) - sol_0: numpy.ndarray = x0 + t0 * d0 # Eq (4) - - return sol_0, sol_1 - - -def __compute_nodes_to_triangle_distance( - tri_0, edges_0, - tri_1 ) -> Tuple[ Union[ float, None ], Union[ numpy.ndarray, None ], Union[ numpy.ndarray, None ], bool ]: - """ - Computes the distance from nodes of `tri_1` points onto `tri_0`. - :param tri_0: First triangle. - :param edges_0: The edges of triangle 0. First element being edge [0, 1], etc. - :param tri_1: Second triangle - :return: The distance, the closest point on triangle 0, the closest on triangle 1 - and a boolean indicating of the triangles are disjoint. If nothing was found, - then the first three arguments are None. The boolean being still defined. - """ - are_disjoint: bool = False - tri_0_normal: numpy.ndarray = numpy.cross( edges_0[ 0 ], edges_0[ 1 ] ) - tri_0_normal_norm: float = numpy.dot( tri_0_normal, tri_0_normal ) - - # Forget about degenerate cases. - if tri_0_normal_norm > numpy.finfo( float ).eps: - # Build projection lengths of `tri_1` points. 
- tri_1_proj = numpy.empty( 3, dtype=float ) - for i in range( 3 ): - tri_1_proj[ i ] = numpy.dot( tri_0[ 0 ] - tri_1[ i ], tri_0_normal ) - - # Considering `tri_0` separates the space in 2, - # let's check if `tri_1` is on one side only. - # If so, let's take the closest point. - point: int = -1 - if numpy.all( tri_1_proj > 0 ): - point = numpy.argmin( tri_1_proj ) - elif numpy.all( tri_1_proj < 0 ): - point = numpy.argmax( tri_1_proj ) - - # So if `tri_1` is actually "on one side", - # point `tri_1[point]` is candidate to be the closest point. - if point > -1: - are_disjoint = True - # But we must check that its projection is inside `tri_0`. - if numpy.dot( tri_1[ point ] - tri_0[ 0 ], numpy.cross( tri_0_normal, edges_0[ 0 ] ) ) > 0: - if numpy.dot( tri_1[ point ] - tri_0[ 1 ], numpy.cross( tri_0_normal, edges_0[ 1 ] ) ) > 0: - if numpy.dot( tri_1[ point ] - tri_0[ 2 ], numpy.cross( tri_0_normal, edges_0[ 2 ] ) ) > 0: - # It is! - sol_0 = tri_1[ point ] - sol_1 = tri_1[ point ] + ( tri_1_proj[ point ] / tri_0_normal_norm ) * tri_0_normal - return norm( sol_1 - sol_0 ), sol_0, sol_1, are_disjoint - return None, None, None, are_disjoint - - -def distance_between_two_triangles( tri_0: numpy.ndarray, - tri_1: numpy.ndarray ) -> Tuple[ float, numpy.ndarray, numpy.ndarray ]: - """ - Returns the minimum distance between two triangles, and the two points where this minimum occurs. - If the two triangles touch, then distance is exactly 0. - But the two points are dummy and cannot be used as contact points (they are still though). - :param tri_0: The first 3x3 triangle points. - :param tri_1: The second 3x3 triangle points. - :return: The distance and the two points. 
- """ - # Compute vectors along the 6 sides - edges_0 = numpy.empty( ( 3, 3 ), dtype=float ) - edges_1 = numpy.empty( ( 3, 3 ), dtype=float ) - for i in range( 3 ): - edges_0[ i ][ : ] = tri_0[ ( i + 1 ) % 3 ] - tri_0[ i ] - edges_1[ i ][ : ] = tri_1[ ( i + 1 ) % 3 ] - tri_1[ i ] - - min_sol_0 = numpy.empty( 3, dtype=float ) - min_sol_1 = numpy.empty( 3, dtype=float ) - are_disjoint: bool = False - - min_dist = numpy.inf - - # Looping over all the pair of edges. - for i, j in itertools.product( range( 3 ), repeat=2 ): - # Find the closest points on edges i and j. - sol_0, sol_1 = distance_between_two_segments( tri_0[ i ], edges_0[ i ], tri_1[ j ], edges_1[ j ] ) - # Computing the distance between the two solutions. - delta_sol = sol_1 - sol_0 - dist: float = numpy.dot( delta_sol, delta_sol ) - # Update minimum if relevant and check if it's the closest pair of points. - if dist <= min_dist: - min_sol_0[ : ] = sol_0 - min_sol_1[ : ] = sol_1 - min_dist = dist - - # `tri_0[(i + 2) % 3]` is the points opposite to edges_0[i] where the closest point sol_0 lies. - # Computing those scalar products and checking the signs somehow let us determine - # if the triangles are getting closer to each other when approaching the sol_(0|1) nodes. - # If so, we have a minimum. - a: float = numpy.dot( tri_0[ ( i + 2 ) % 3 ] - sol_0, delta_sol ) - b: float = numpy.dot( tri_1[ ( j + 2 ) % 3 ] - sol_1, delta_sol ) - if a <= 0 <= b: - return sqrt( dist ), sol_0, sol_1 - - if a < 0: - a = 0 - if b > 0: - b = 0 - # `dist - a + b` expands to `numpy.dot(tri_1[(j + 2) % 3] - tri_0[(i + 2) % 3], sol_1 - sol_0)`. - # If the "directions" of the (sol_1 - sol_0) vector and the vector joining the extra points of the triangles - # (i.e. not involved in the current edge check) re the "same", then the triangles do not intersect. - if dist - a + b > 0: - are_disjoint = True - # No edge pair contained the closest points. - # Checking the node/face situation. 
- distance, sol_0, sol_1, are_disjoint_tmp = __compute_nodes_to_triangle_distance( tri_0, edges_0, tri_1 ) - if distance: - return distance, sol_0, sol_1 - are_disjoint = are_disjoint or are_disjoint_tmp - - distance, sol_0, sol_1, are_disjoint_tmp = __compute_nodes_to_triangle_distance( tri_1, edges_1, tri_0 ) - if distance: - return distance, sol_0, sol_1 - are_disjoint = are_disjoint or are_disjoint_tmp - # It's not a node/face situation. - # If the triangles do not overlap, let's return the minimum found during the edges loop. - # (maybe an edge was parallel to the other face, and we could not decide for a unique closest point). - if are_disjoint: - return sqrt( min_dist ), min_sol_0, min_sol_1 - else: # Surely overlapping or degenerate triangles. - return 0., numpy.zeros( 3, dtype=float ), numpy.zeros( 3, dtype=float ) diff --git a/geos-mesh/src/geos/mesh/doctor/actions/vtkPolyhedron.py b/geos-mesh/src/geos/mesh/doctor/actions/vtkPolyhedron.py new file mode 100644 index 00000000..3093d96a --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/actions/vtkPolyhedron.py @@ -0,0 +1,218 @@ +from collections import defaultdict +from dataclasses import dataclass +import networkx +from typing import Collection, Iterable, Sequence +from vtkmodules.vtkCommonCore import vtkIdList +from geos.mesh.utils.genericHelpers import vtkIter + + +@dataclass( frozen=True ) +class Options: + dummy: float + + +@dataclass( frozen=True ) +class Result: + dummy: float + + +def parseFaceStream( ids: vtkIdList ) -> Sequence[ Sequence[ int ] ]: + """Parses the face stream raw information and converts it into a tuple of tuple of integers, + each tuple of integer being the nodes of a face. + + Args: + ids (vtkIdList): The raw vtk face stream. + + Returns: + Sequence[ Sequence[ int ] ]: The tuple of tuple of integers. 
+ """ + result = [] + it = vtkIter( ids ) + numFaces = next( it ) + try: + while True: + numNodes = next( it ) + tmp = [] + for i in range( numNodes ): + tmp.append( next( it ) ) + result.append( tuple( tmp ) ) + except StopIteration: + pass + assert len( result ) == numFaces + assert sum( map( len, result ) ) + len( result ) + 1 == ids.GetNumberOfIds() + + return tuple( result ) + + +class FaceStream: + """ + Helper class to manipulate the vtk face streams. + """ + + def __init__( self, data: Sequence[ Sequence[ int ] ] ): + # self.__data contains the list of faces nodes, like it appears in vtk face streams. + # Except that the additional size information is removed + # in favor of the __len__ of the containers. + self.__data: Sequence[ Sequence[ int ] ] = data + + @staticmethod + def buildFromVtkIdList( ids: vtkIdList ): + """Builds a FaceStream from the raw vtk face stream. + + Args: + ids (vtkIdList): The vtk face stream. + + Returns: + FaceStream: A new FaceStream instance. + """ + return FaceStream( parseFaceStream( ids ) ) + + @property + def faceNodes( self ) -> Iterable[ Sequence[ int ] ]: + """Iterate on the nodes of all the faces. + + Returns: + Iterable[ Sequence[ int ] ]: An iterator over the nodes of all the faces. + """ + return iter( self.__data ) + + @property + def numFaces( self ) -> int: + """Number of faces in the face stream + + Returns: + int: The number of faces. + """ + return len( self.__data ) + + @property + def supportPointIds( self ) -> Collection[ int ]: + """The list of all (unique) support points of the face stream, in no specific order. + + Returns: + Collection[ int ]: The set of all the point ids. + """ + tmp: list[ int ] = [] + for nodes in self.faceNodes: + tmp += nodes + return frozenset( tmp ) + + @property + def numSupportPoints( self ) -> int: + """The number of unique support nodes of the polyhedron. + + Returns: + int: The number of unique support nodes. 
+ """ + return len( self.supportPointIds ) + + def __getitem__( self, faceIndex: int ) -> Sequence[ int ]: + """The support point ids for the `faceIndex` face. + + Args: + faceIndex (int): The face index (within the face stream). + + Returns: + Sequence[ int ]: A tuple containing all the point ids. + """ + return self.__data[ faceIndex ] + + def flipFaces( self, faceIndices: Collection[ int ] ) -> "FaceStream": + """Returns a new FaceStream instance with the face indices defined in faceIndices flipped. + + Args: + faceIndices (Collection[ int ]): The faces (local) indices to flip. + + Returns: + FaceStream: A newly created instance. + """ + result = [] + for faceIndex, faceNodes in enumerate( self.__data ): + result.append( tuple( reversed( faceNodes ) ) if faceIndex in faceIndices else faceNodes ) + return FaceStream( tuple( result ) ) + + def dump( self ) -> Sequence[ int ]: + """Returns the face stream expected by vtk, but in a python container. + The content can be used, once converted to a vtkIdList, to define another polyhedron in vtk. + + Returns: + Sequence[ int ]: The face stream in a python container. + """ + result = [ len( self.__data ) ] + for faceNodes in self.__data: + result.append( len( faceNodes ) ) + result += faceNodes + return tuple( result ) + + def __repr__( self ): + result = [ str( len( self.__data ) ) ] + for faceNodes in self.__data: + result.append( str( len( faceNodes ) ) ) + result.append( ", ".join( map( str, faceNodes ) ) ) + return ",\n".join( result ) + + +def buildFaceToFaceConnectivityThroughEdges( faceStream: FaceStream, addCompatibility=False ) -> networkx.Graph: + """Given a face stream/polyhedron, builds the connections between the faces. + Those connections happen when two faces share an edge. + + Args: + faceStream (FaceStream): The face stream description of the polyhedron. + addCompatibility (bool, optional): Two faces are considered compatible if their normals point in the same + direction (inwards or outwards).
If `addCompatibility=True`, we add a `compatible={"-", "+"}` flag on the edges + to indicate that the two connected faces are compatible or not. + If `addCompatibility=False`, non-compatible faces are simply not connected by any edge. Defaults to False. + + Returns: + networkx.Graph: A graph whose nodes are actually the faces of the polyhedron. + Two nodes of the graph are connected if they share an edge. + """ + edgesToFaceIndices: dict[ frozenset[ int ], list[ int ] ] = defaultdict( list ) + for faceIndex, faceNodes in enumerate( faceStream.faceNodes ): + # Each edge is defined by two nodes. We do a small trick to loop on consecutive points. + faceIndices: tuple[ int, int ] + for faceIndices in zip( faceNodes, faceNodes[ 1: ] + ( faceNodes[ 0 ], ) ): + edgesToFaceIndices[ frozenset( faceIndices ) ].append( faceIndex ) + # We are doing here some small validations w.r.t. the connections of the faces + # which may only make sense in the context of numerical simulations. + # As such, an error will be thrown in case the polyhedron is not closed. + # So there may be a lack of absolute genericity, and the code may evolve if needed. + for faceIndices in edgesToFaceIndices.values(): + assert len( faceIndices ) == 2 + # Computing the graph degree for validation + degrees: dict[ int, int ] = defaultdict( int ) + for faceIndices in edgesToFaceIndices.values(): + for faceIndex in faceIndices: + degrees[ faceIndex ] += 1 + for faceIndex, degree in degrees.items(): + assert len( faceStream[ faceIndex ] ) == degree + # Validation that there is one unique edge connecting two faces. + faceIndicesToEdgeIndex = defaultdict( list ) + for edgeIndex, faceIndices in edgesToFaceIndices.items(): + faceIndicesToEdgeIndex[ frozenset( faceIndices ) ].append( edgeIndex ) + for edgeIndices in faceIndicesToEdgeIndex.values(): + assert len( edgeIndices ) == 1 + # Connecting the faces. Neighbor faces with consistent normals (i.e. facing both inward or outward) + # will be connected.
This will allow us to extract connected components with consistent orientations. + # Another step will then reconcile all the components such that all the normals of the cell + # will consistently point outward. + graph = networkx.Graph() + graph.add_nodes_from( range( faceStream.numFaces ) ) + for edge, faceIndices in edgesToFaceIndices.items(): + faceIndex0, faceIndex1 = faceIndices + faceNodes0 = faceStream[ faceIndex0 ] + ( faceStream[ faceIndex0 ][ 0 ], ) + faceNodes1 = faceStream[ faceIndex1 ] + ( faceStream[ faceIndex1 ][ 0 ], ) + node0, node1 = edge + order0 = 1 if faceNodes0[ faceNodes0.index( node0 ) + 1 ] == node1 else -1 + order1 = 1 if faceNodes1[ faceNodes1.index( node0 ) + 1 ] == node1 else -1 + # Same order of nodes means that the normals of the faces + # are _not_ both in the same "direction" (inward or outward). + if order0 * order1 == 1: + if addCompatibility: + graph.add_edge( faceIndex0, faceIndex1, compatible="-" ) + else: + if addCompatibility: + graph.add_edge( faceIndex0, faceIndex1, compatible="+" ) + else: + graph.add_edge( faceIndex0, faceIndex1 ) + return graph diff --git a/geos-mesh/src/geos/mesh/doctor/actions/vtk_polyhedron.py b/geos-mesh/src/geos/mesh/doctor/actions/vtk_polyhedron.py deleted file mode 100644 index 8e628a66..00000000 --- a/geos-mesh/src/geos/mesh/doctor/actions/vtk_polyhedron.py +++ /dev/null @@ -1,198 +0,0 @@ -from collections import defaultdict -from dataclasses import dataclass -import networkx -from typing import Collection, Dict, FrozenSet, Iterable, List, Sequence, Tuple -from vtkmodules.vtkCommonCore import vtkIdList -from geos.mesh.utils.genericHelpers import vtk_iter - - -@dataclass( frozen=True ) -class Options: - dummy: float - - -@dataclass( frozen=True ) -class Result: - dummy: float - - -def parse_face_stream( ids: vtkIdList ) -> Sequence[ Sequence[ int ] ]: - """ - Parses the face stream raw information and converts it into a tuple of tuple of integers, - each tuple of integer being the nodes of a 
face. - :param ids: The raw vtk face stream. - :return: The tuple of tuple of integers. - """ - result = [] - it = vtk_iter( ids ) - num_faces = next( it ) - try: - while True: - num_nodes = next( it ) - tmp = [] - for i in range( num_nodes ): - tmp.append( next( it ) ) - result.append( tuple( tmp ) ) - except StopIteration: - pass - assert len( result ) == num_faces - assert sum( map( len, result ) ) + len( result ) + 1 == ids.GetNumberOfIds() - - return tuple( result ) - - -class FaceStream: - """ - Helper class to manipulate the vtk face streams. - """ - - def __init__( self, data: Sequence[ Sequence[ int ] ] ): - # self.__data contains the list of faces nodes, like it appears in vtk face streams. - # Except that the additional size information is removed - # in favor of the __len__ of the containers. - self.__data: Sequence[ Sequence[ int ] ] = data - - @staticmethod - def build_from_vtk_id_list( ids: vtkIdList ): - """ - Builds a FaceStream from the raw vtk face stream. - :param ids: The vtk face stream. - :return: A new FaceStream instance. - """ - return FaceStream( parse_face_stream( ids ) ) - - @property - def face_nodes( self ) -> Iterable[ Sequence[ int ] ]: - """ - Iterate on the nodes of all the faces. - :return: An iterator. - """ - return iter( self.__data ) - - @property - def num_faces( self ) -> int: - """ - Number of faces in the face stream - :return: An integer - """ - return len( self.__data ) - - @property - def support_point_ids( self ) -> Collection[ int ]: - """ - The list of all (unique) support points of the face stream, in no specific order. - :return: The set of all the point ids. - """ - tmp: List[ int ] = [] - for nodes in self.face_nodes: - tmp += nodes - return frozenset( tmp ) - - @property - def num_support_points( self ) -> int: - """ - The number of unique support nodes of the polyhedron. - :return: An integer. 
- """ - return len( self.support_point_ids ) - - def __getitem__( self, face_index ) -> Sequence[ int ]: - """ - The support point ids for the `face_index` face. - :param face_index: The face index (within the face stream). - :return: A tuple containing all the point ids. - """ - return self.__data[ face_index ] - - def flip_faces( self, face_indices ): - """ - Returns a new FaceStream instance with the face indices defined in face_indices flipped., - :param face_indices: The faces (local) indices to flip. - :return: A newly created instance. - """ - result = [] - for face_index, face_nodes in enumerate( self.__data ): - result.append( tuple( reversed( face_nodes ) ) if face_index in face_indices else face_nodes ) - return FaceStream( tuple( result ) ) - - def dump( self ) -> Sequence[ int ]: - """ - Returns the face stream awaited by vtk, but in a python container. - The content can be used, once converted to a vtkIdList, to define another polyhedron in vtk. - :return: The face stream in a python container. - """ - result = [ len( self.__data ) ] - for face_nodes in self.__data: - result.append( len( face_nodes ) ) - result += face_nodes - return tuple( result ) - - def __repr__( self ): - result = [ str( len( self.__data ) ) ] - for face_nodes in self.__data: - result.append( str( len( face_nodes ) ) ) - result.append( ", ".join( map( str, face_nodes ) ) ) - return ",\n".join( result ) - - -def build_face_to_face_connectivity_through_edges( face_stream: FaceStream, add_compatibility=False ) -> networkx.Graph: - """ - Given a face stream/polyhedron, builds the connections between the faces. - Those connections happen when two faces share an edge. - :param face_stream: The face stream description of the polyhedron. - :param add_compatibility: Two faces are considered compatible if their normals point in the same direction (inwards or outwards). 
- If `add_compatibility=True`, we add a `compatible={"-", "+"}` flag on the edges - to indicate that the two connected faces are compatible or not. - If `add_compatibility=False`, non-compatible faces are simply not connected by any edge. - :return: A graph which nodes are actually the faces of the polyhedron. - Two nodes of the graph are connected if they share an edge. - """ - edges_to_face_indices: Dict[ FrozenSet[ int ], List[ int ] ] = defaultdict( list ) - for face_index, face_nodes in enumerate( face_stream.face_nodes ): - # Each edge is defined by two nodes. We do a small trick to loop on consecutive points. - face_indices: Tuple[ int, int ] - for face_indices in zip( face_nodes, face_nodes[ 1: ] + ( face_nodes[ 0 ], ) ): - edges_to_face_indices[ frozenset( face_indices ) ].append( face_index ) - # We are doing here some small validations w.r.t. the connections of the faces - # which may only make sense in the context of numerical simulations. - # As such, an error will be thrown in case the polyhedron is not closed. - # So there may be a lack of absolute genericity, and the code may evolve if needed. - for face_indices in edges_to_face_indices.values(): - assert len( face_indices ) == 2 - # Computing the graph degree for validation - degrees: Dict[ int, int ] = defaultdict( int ) - for face_indices in edges_to_face_indices.values(): - for face_index in face_indices: - degrees[ face_index ] += 1 - for face_index, degree in degrees.items(): - assert len( face_stream[ face_index ] ) == degree - # Validation that there is one unique edge connecting two faces. - face_indices_to_edge_index = defaultdict( list ) - for edge_index, face_indices in edges_to_face_indices.items(): - face_indices_to_edge_index[ frozenset( face_indices ) ].append( edge_index ) - for edge_indices in face_indices_to_edge_index.values(): - assert len( edge_indices ) == 1 - # Connecting the faces. Neighbor faces with consistent normals (i.e. 
facing both inward or outward) - # will be connected. This will allow us to extract connected components with consistent orientations. - # Another step will then reconcile all the components such that all the normals of the cell - # will consistently point outward. - graph = networkx.Graph() - graph.add_nodes_from( range( face_stream.num_faces ) ) - for edge, face_indices in edges_to_face_indices.items(): - face_index_0, face_index_1 = face_indices - face_nodes_0 = face_stream[ face_index_0 ] + ( face_stream[ face_index_0 ][ 0 ], ) - face_nodes_1 = face_stream[ face_index_1 ] + ( face_stream[ face_index_1 ][ 0 ], ) - node_0, node_1 = edge - order_0 = 1 if face_nodes_0[ face_nodes_0.index( node_0 ) + 1 ] == node_1 else -1 - order_1 = 1 if face_nodes_1[ face_nodes_1.index( node_0 ) + 1 ] == node_1 else -1 - # Same order of nodes means that the normals of the faces - # are _not_ both in the same "direction" (inward or outward). - if order_0 * order_1 == 1: - if add_compatibility: - graph.add_edge( face_index_0, face_index_1, compatible="-" ) - else: - if add_compatibility: - graph.add_edge( face_index_0, face_index_1, compatible="+" ) - else: - graph.add_edge( face_index_0, face_index_1 ) - return graph diff --git a/geos-mesh/src/geos/mesh/doctor/meshDoctor.py b/geos-mesh/src/geos/mesh/doctor/meshDoctor.py new file mode 100644 index 00000000..ab05af4b --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/meshDoctor.py @@ -0,0 +1,24 @@ +import sys +from geos.mesh.doctor.parsing import ActionHelper +from geos.mesh.doctor.parsing.cliParsing import parseAndSetVerbosity, setupLogger +from geos.mesh.doctor.register import registerParsingActions + + +def main(): + parseAndSetVerbosity( sys.argv ) + mainParser, allActions, allActionsHelpers = registerParsingActions() + args = mainParser.parse_args( sys.argv[ 1: ] ) + setupLogger.info( f"Working on mesh \"{args.vtkInputFile}\"." 
) + actionOptions = allActionsHelpers[ args.subparsers ].convert( vars( args ) ) + try: + action = allActions[ args.subparsers ] + except KeyError: + setupLogger.error( f"Action {args.subparsers} is not a valid action." ) + sys.exit( 1 ) + helper: ActionHelper = allActionsHelpers[ args.subparsers ] + result = action( args.vtkInputFile, actionOptions ) + helper.displayResults( actionOptions, result ) + + +if __name__ == '__main__': + main() diff --git a/geos-mesh/src/geos/mesh/doctor/mesh_doctor.py b/geos-mesh/src/geos/mesh/doctor/mesh_doctor.py deleted file mode 100644 index 3c6187d4..00000000 --- a/geos-mesh/src/geos/mesh/doctor/mesh_doctor.py +++ /dev/null @@ -1,24 +0,0 @@ -import sys -from geos.mesh.doctor.parsing import ActionHelper -from geos.mesh.doctor.parsing.cli_parsing import parse_and_set_verbosity, setup_logger -from geos.mesh.doctor.register import register_parsing_actions - - -def main(): - parse_and_set_verbosity( sys.argv ) - main_parser, all_actions, all_actions_helpers = register_parsing_actions() - args = main_parser.parse_args( sys.argv[ 1: ] ) - setup_logger.info( f"Working on mesh \"{args.vtk_input_file}\"." ) - action_options = all_actions_helpers[ args.subparsers ].convert( vars( args ) ) - try: - action = all_actions[ args.subparsers ] - except KeyError: - setup_logger.error( f"Action {args.subparsers} is not a valid action." 
) - sys.exit( 1 ) - helper: ActionHelper = all_actions_helpers[ args.subparsers ] - result = action( args.vtk_input_file, action_options ) - helper.display_results( action_options, result ) - - -if __name__ == '__main__': - main() diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/__init__.py b/geos-mesh/src/geos/mesh/doctor/parsing/__init__.py index deb553cf..9c747332 100644 --- a/geos-mesh/src/geos/mesh/doctor/parsing/__init__.py +++ b/geos-mesh/src/geos/mesh/doctor/parsing/__init__.py @@ -2,21 +2,21 @@ from dataclasses import dataclass from typing import Callable, Any -ALL_CHECKS = "all_checks" -MAIN_CHECKS = "main_checks" -COLLOCATES_NODES = "collocated_nodes" -ELEMENT_VOLUMES = "element_volumes" -FIX_ELEMENTS_ORDERINGS = "fix_elements_orderings" -GENERATE_CUBE = "generate_cube" -GENERATE_FRACTURES = "generate_fractures" -GENERATE_GLOBAL_IDS = "generate_global_ids" -NON_CONFORMAL = "non_conformal" -SELF_INTERSECTING_ELEMENTS = "self_intersecting_elements" -SUPPORTED_ELEMENTS = "supported_elements" +ALL_CHECKS = "allChecks" +MAIN_CHECKS = "mainChecks" +COLLOCATES_NODES = "collocatedNodes" +ELEMENT_VOLUMES = "elementVolumes" +FIX_ELEMENTS_ORDERINGS = "fixElementsOrderings" +GENERATE_CUBE = "generateCube" +GENERATE_FRACTURES = "generateFractures" +GENERATE_GLOBAL_IDS = "generateGlobalIds" +NON_CONFORMAL = "nonConformal" +SELF_INTERSECTING_ELEMENTS = "selfIntersectingElements" +SUPPORTED_ELEMENTS = "supportedElements" @dataclass( frozen=True ) class ActionHelper: - fill_subparser: Callable[ [ Any ], argparse.ArgumentParser ] + fillSubparser: Callable[ [ Any ], argparse.ArgumentParser ] convert: Callable[ [ Any ], Any ] - display_results: Callable[ [ Any, Any ], None ] + displayResults: Callable[ [ Any, Any ], None ] diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/_sharedChecksParsingLogic.py b/geos-mesh/src/geos/mesh/doctor/parsing/_sharedChecksParsingLogic.py new file mode 100644 index 00000000..a8c6cfb1 --- /dev/null +++ 
b/geos-mesh/src/geos/mesh/doctor/parsing/_sharedChecksParsingLogic.py @@ -0,0 +1,176 @@ +import argparse +from copy import deepcopy +from dataclasses import dataclass +from typing import Type, Any +from geos.mesh.doctor.actions.allChecks import Options as AllChecksOptions +from geos.mesh.doctor.actions.allChecks import Result as AllChecksResult +from geos.mesh.doctor.parsing.cliParsing import parseCommaSeparatedString, setupLogger + + +# --- Data Structure for Check Features --- +@dataclass( frozen=True ) +class CheckFeature: + """A container for a check's configuration and associated classes.""" + name: str + optionsCls: Type[ Any ] + resultCls: Type[ Any ] + defaultParams: dict[ str, Any ] + display: Type[ Any ] + + +# --- Argument Parser Constants --- +CHECKS_TO_DO_ARG = "checksToPerform" +PARAMETERS_ARG = "setParameters" + + +def _generateParametersHelp( orderedCheckNames: list[ str ], checkFeaturesConfig: dict[ str, CheckFeature ] ) -> str: + """Dynamically generates the help text for the setParameters argument.""" + helpText: str = "" + for checkName in orderedCheckNames: + config = checkFeaturesConfig.get( checkName ) + if config and config.defaultParams: + configParams = [ f"{name}:{value}" for name, value in config.defaultParams.items() ] + helpText += f"For {checkName}: {', '.join(configParams)}. " + return helpText + + +def getOptionsUsedMessage( optionsUsed: dataclass ) -> str: + """Dynamically generates the description of every parameter used when launching a check. + + Args: + optionsUsed (dataclass): The options instance whose fields are reported. + + Returns: + str: A message like "Parameters used: ( param1 = value1 param2 = value2 )", with one entry per parameter found. + """ + optionsMsg: str = "Parameters used: (" + for attrName in optionsUsed.__dataclass_fields__: + attrValue = getattr( optionsUsed, attrName ) + optionsMsg += f" {attrName} = {attrValue}" + return optionsMsg + " )."
+ + +# --- Generic Argument Parser Setup --- +def fillSubparser( subparsers: argparse._SubParsersAction, subparserName: str, helpMessage: str, + orderedCheckNames: list[ str ], checkFeaturesConfig: dict[ str, CheckFeature ] ) -> None: + """ + Fills a subparser with arguments for performing a set of checks. + + Args: + subparsers: The subparsers action from argparse. + subparserName: The name for this specific subparser (e.g., 'all-checks'). + helpMessage: The help message for this subparser. + orderedCheckNames: The list of check names to be used in help messages. + checkFeaturesConfig: The configuration dictionary for the checks. + """ + parser = subparsers.add_parser( subparserName, + help=helpMessage, + formatter_class=argparse.ArgumentDefaultsHelpFormatter ) + + parametersHelp: str = _generateParametersHelp( orderedCheckNames, checkFeaturesConfig ) + + parser.add_argument( f"--{CHECKS_TO_DO_ARG}", + type=str, + default="", + required=False, + help=( "Comma-separated list of checks to perform. " + f"If empty, all of the following are run by default: {orderedCheckNames}. " + f"Available choices: {orderedCheckNames}. " + f"Example: --{CHECKS_TO_DO_ARG} {orderedCheckNames[0]},{orderedCheckNames[1]}" ) ) + parser.add_argument( f"--{PARAMETERS_ARG}", + type=str, + default="", + required=False, + help=( "Comma-separated list of parameters to override defaults (e.g., 'param_name:value'). " + f"Default parameters are: {parametersHelp}" + f"Example: --{PARAMETERS_ARG} parameter_name:10.5,other_param:25" ) ) + + +def convert( parsedArgs: argparse.Namespace, orderedCheckNames: list[ str ], + checkFeaturesConfig: dict[ str, CheckFeature ] ) -> AllChecksOptions: + """ + Converts parsed command-line arguments into an AllChecksOptions object based on the provided configuration. + """ + # 1. 
Determine which checks to perform + if not parsedArgs[ CHECKS_TO_DO_ARG ]: # handles default and if user explicitly provides --checksToPerform "" + finalSelectedCheckNames: list[ str ] = deepcopy( orderedCheckNames ) + setupLogger.info( "All configured checks will be performed by default." ) + else: + userChecks = parseCommaSeparatedString( parsedArgs[ CHECKS_TO_DO_ARG ] ) + finalSelectedCheckNames = list() + for name in userChecks: + if name not in checkFeaturesConfig: + setupLogger.warning( f"Check '{name}' does not exist. Choose from: {orderedCheckNames}." ) + elif name not in finalSelectedCheckNames: + finalSelectedCheckNames.append( name ) + + if not finalSelectedCheckNames: + raise ValueError( "No valid checks were selected. No operations will be configured." ) + + # 2. Prepare parameters for the selected checks + defaultParams = { name: feature.defaultParams.copy() for name, feature in checkFeaturesConfig.items() } + finalCheckParams = { name: defaultParams[ name ] for name in finalSelectedCheckNames } + + if not parsedArgs[ PARAMETERS_ARG ]: # handles default and if user explicitly provides --setParameters "" + setupLogger.info( "Default configuration of parameters adopted for every check to perform." ) + else: + setParameters = parseCommaSeparatedString( parsedArgs[ PARAMETERS_ARG ] ) + for param in setParameters: + if ':' not in param: + setupLogger.warning( f"Parameter '{param}' is not in 'name:value' format. Skipping." ) + continue + + name, _, valueStr = param.partition( ':' ) + name = name.strip() + valueStr = valueStr.strip() + + if not valueStr: + setupLogger.warning( f"Parameter '{name}' has no value. Skipping." ) + continue + + try: + valueFloat = float( valueStr ) + except ValueError: + setupLogger.warning( f"Invalid value for '{name}': '{valueStr}'. Must be a number. Skipping." 
) + continue + + # Apply the parameter override to any check that uses it + for checkNameKey in finalCheckParams: + if name in finalCheckParams[ checkNameKey ]: + finalCheckParams[ checkNameKey ][ name ] = valueFloat + + # 3. Instantiate Options objects for the selected checks + individualCheckOptions: dict[ str, Any ] = dict() + individualCheckDisplay: dict[ str, Any ] = dict() + + for checkName in list( finalCheckParams.keys() ): + params = finalCheckParams[ checkName ] + featureConfig = checkFeaturesConfig[ checkName ] + try: + individualCheckOptions[ checkName ] = featureConfig.optionsCls( **params ) + individualCheckDisplay[ checkName ] = featureConfig.display + except Exception as e: + setupLogger.error( f"Failed to create options for check '{checkName}': {e}. This check will be skipped." ) + finalSelectedCheckNames.remove( checkName ) + + return AllChecksOptions( checksToPerform=finalSelectedCheckNames, + checksOptions=individualCheckOptions, + checkDisplays=individualCheckDisplay ) + + +# Generic display of Results +def displayResults( options: AllChecksOptions, result: AllChecksResult ) -> None: + """Displays the results of all the checks that have been performed.""" + if not options.checksToPerform: + setupLogger.results( "No checks were performed or all failed during configuration." 
 )
+        return
+
+    maxLength: int = max( len( name ) for name in options.checksToPerform )
+    for name, res in result.checkResults.items():
+        setupLogger.results( "" )  # Blank line for visibility
+        setupLogger.results( f"******** {name:<{maxLength}} ********" )
+        displayFunc = options.checkDisplays.get( name )
+        opts = options.checksOptions.get( name )
+        if displayFunc and opts:
+            displayFunc( opts, res )
+        setupLogger.results( "" )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/_shared_checks_parsing_logic.py b/geos-mesh/src/geos/mesh/doctor/parsing/_shared_checks_parsing_logic.py
deleted file mode 100644
index a2aa6538..00000000
--- a/geos-mesh/src/geos/mesh/doctor/parsing/_shared_checks_parsing_logic.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import argparse
-from copy import deepcopy
-from dataclasses import dataclass
-from typing import Type, Any
-
-from geos.mesh.doctor.actions.all_checks import Options as AllChecksOptions
-from geos.mesh.doctor.actions.all_checks import Result as AllChecksResult
-from geos.mesh.doctor.parsing.cli_parsing import parse_comma_separated_string, setup_logger
-
-
-# --- Data Structure for Check Features ---
-@dataclass( frozen=True )
-class CheckFeature:
-    """A container for a check's configuration and associated classes."""
-    name: str
-    options_cls: Type[ Any ]
-    result_cls: Type[ Any ]
-    default_params: dict[ str, Any ]
-    display: Type[ Any ]
-
-
-# --- Argument Parser Constants ---
-CHECKS_TO_DO_ARG = "checks_to_perform"
-PARAMETERS_ARG = "set_parameters"
-
-
-def _generate_parameters_help( ordered_check_names: list[ str ], check_features_config: dict[ str,
-                                                                                              CheckFeature ] ) -> str:
-    """Dynamically generates the help text for the set_parameters argument."""
-    help_text: str = ""
-    for check_name in ordered_check_names:
-        config = check_features_config.get( check_name )
-        if config and config.default_params:
-            config_params = [ f"{name}:{value}" for name, value in config.default_params.items() ]
-            help_text += f"For {check_name}: {', '.join(config_params)}. "
-    return help_text
-
-
-def get_options_used_message( options_used: dataclass ) -> str:
-    """Dynamically generates the description of every parameter used when loaching a check.
-
-    Args:
-        options_used (dataclass)
-
-    Returns:
-        str: A message like "Parameters used: ( param1:value1 param2:value2 )" for as many paramters found.
-    """
-    options_msg: str = "Parameters used: ("
-    for attr_name in options_used.__dataclass_fields__:
-        attr_value = getattr( options_used, attr_name )
-        options_msg += f" {attr_name} = {attr_value}"
-    return options_msg + " )."
-
-
-# --- Generic Argument Parser Setup ---
-def fill_subparser( subparsers: argparse._SubParsersAction, subparser_name: str, help_message: str,
-                    ordered_check_names: list[ str ], check_features_config: dict[ str, CheckFeature ] ) -> None:
-    """
-    Fills a subparser with arguments for performing a set of checks.
-
-    Args:
-        subparsers: The subparsers action from argparse.
-        subparser_name: The name for this specific subparser (e.g., 'all-checks').
-        help_message: The help message for this subparser.
-        ordered_check_names: The list of check names to be used in help messages.
-        check_features_config: The configuration dictionary for the checks.
-    """
-    parser = subparsers.add_parser( subparser_name,
-                                    help=help_message,
-                                    formatter_class=argparse.ArgumentDefaultsHelpFormatter )
-
-    parameters_help: str = _generate_parameters_help( ordered_check_names, check_features_config )
-
-    parser.add_argument( f"--{CHECKS_TO_DO_ARG}",
-                         type=str,
-                         default="",
-                         required=False,
-                         help=( "Comma-separated list of checks to perform. "
-                                f"If empty, all of the following are run by default: {ordered_check_names}. "
-                                f"Available choices: {ordered_check_names}. " f"Example: --{CHECKS_TO_DO_ARG} {ordered_check_names[0]},{ordered_check_names[1]}" ) )
-    parser.add_argument( f"--{PARAMETERS_ARG}",
-                         type=str,
-                         default="",
-                         required=False,
-                         help=( "Comma-separated list of parameters to override defaults (e.g., 'param_name:value'). "
-                                f"Default parameters are: {parameters_help}"
-                                f"Example: --{PARAMETERS_ARG} parameter_name:10.5,other_param:25" ) )
-
-
-def convert( parsed_args: argparse.Namespace, ordered_check_names: list[ str ],
-             check_features_config: dict[ str, CheckFeature ] ) -> AllChecksOptions:
-    """
-    Converts parsed command-line arguments into an AllChecksOptions object based on the provided configuration.
-    """
-    # 1. Determine which checks to perform
-    if not parsed_args[ CHECKS_TO_DO_ARG ]:  # handles default and if user explicitly provides --checks_to_perform ""
-        final_selected_check_names: list[ str ] = deepcopy( ordered_check_names )
-        setup_logger.info( "All configured checks will be performed by default." )
-    else:
-        user_checks = parse_comma_separated_string( parsed_args[ CHECKS_TO_DO_ARG ] )
-        final_selected_check_names = list()
-        for name in user_checks:
-            if name not in check_features_config:
-                setup_logger.warning( f"Check '{name}' does not exist. Choose from: {ordered_check_names}." )
-            elif name not in final_selected_check_names:
-                final_selected_check_names.append( name )
-
-    if not final_selected_check_names:
-        raise ValueError( "No valid checks were selected. No operations will be configured." )
-
-    # 2. Prepare parameters for the selected checks
-    default_params = { name: feature.default_params.copy() for name, feature in check_features_config.items() }
-    final_check_params = { name: default_params[ name ] for name in final_selected_check_names }
-
-    if not parsed_args[ PARAMETERS_ARG ]:  # handles default and if user explicitly provides --set_parameters ""
-        setup_logger.info( "Default configuration of parameters adopted for every check to perform." )
-    else:
-        set_parameters = parse_comma_separated_string( parsed_args[ PARAMETERS_ARG ] )
-        for param in set_parameters:
-            if ':' not in param:
-                setup_logger.warning( f"Parameter '{param}' is not in 'name:value' format. Skipping." )
-                continue
-
-            name, _, value_str = param.partition( ':' )
-            name = name.strip()
-            value_str = value_str.strip()
-
-            if not value_str:
-                setup_logger.warning( f"Parameter '{name}' has no value. Skipping." )
-                continue
-
-            try:
-                value_float = float( value_str )
-            except ValueError:
-                setup_logger.warning( f"Invalid value for '{name}': '{value_str}'. Must be a number. Skipping." )
-                continue
-
-            # Apply the parameter override to any check that uses it
-            for check_name_key in final_check_params:
-                if name in final_check_params[ check_name_key ]:
-                    final_check_params[ check_name_key ][ name ] = value_float
-
-    # 3. Instantiate Options objects for the selected checks
-    individual_check_options: dict[ str, Any ] = dict()
-    individual_check_display: dict[ str, Any ] = dict()
-
-    for check_name in list( final_check_params.keys() ):
-        params = final_check_params[ check_name ]
-        feature_config = check_features_config[ check_name ]
-        try:
-            individual_check_options[ check_name ] = feature_config.options_cls( **params )
-            individual_check_display[ check_name ] = feature_config.display
-        except Exception as e:
-            setup_logger.error( f"Failed to create options for check '{check_name}': {e}. This check will be skipped." )
-            final_selected_check_names.remove( check_name )
-
-    return AllChecksOptions( checks_to_perform=final_selected_check_names,
-                             checks_options=individual_check_options,
-                             check_displays=individual_check_display )
-
-
-# Generic display of Results
-def display_results( options: AllChecksOptions, result: AllChecksResult ) -> None:
-    """Displays the results of all the checks that have been performed."""
-    if not options.checks_to_perform:
-        setup_logger.results( "No checks were performed or all failed during configuration." )
-        return
-
-    max_length: int = max( len( name ) for name in options.checks_to_perform )
-    for name, res in result.check_results.items():
-        setup_logger.results( "" )  # Blank line for visibility
-        setup_logger.results( f"******** {name:<{max_length}} ********" )
-        display_func = options.check_displays.get( name )
-        opts = options.checks_options.get( name )
-        if display_func and opts:
-            display_func( opts, res )
-        setup_logger.results( "" )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/allChecksParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/allChecksParsing.py
new file mode 100644
index 00000000..2411f140
--- /dev/null
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/allChecksParsing.py
@@ -0,0 +1,77 @@
+import argparse
+from copy import deepcopy
+from geos.mesh.doctor.parsing._sharedChecksParsingLogic import ( CheckFeature, convert as sharedConvert, fillSubparser
+                                                                 as sharedFillSubparser, displayResults )
+from geos.mesh.doctor.actions.allChecks import Options as AllChecksOptions
+from geos.mesh.doctor.parsing import (
+    ALL_CHECKS,
+    COLLOCATES_NODES,
+    ELEMENT_VOLUMES,
+    NON_CONFORMAL,
+    SELF_INTERSECTING_ELEMENTS,
+    SUPPORTED_ELEMENTS,
+)
+from geos.mesh.doctor.parsing import collocatedNodesParsing as cnParser
+from geos.mesh.doctor.parsing import elementVolumesParsing as evParser
+from geos.mesh.doctor.parsing import nonConformalParsing as ncParser
+from geos.mesh.doctor.parsing import selfIntersectingElementsParsing as sieParser
+from geos.mesh.doctor.parsing import supportedElementsParsing as seParser
+
+# Ordered list of check names for this configuration
+ORDERED_CHECK_NAMES = [
+    COLLOCATES_NODES,
+    ELEMENT_VOLUMES,
+    NON_CONFORMAL,
+    SELF_INTERSECTING_ELEMENTS,
+    SUPPORTED_ELEMENTS,
+]
+
+# Centralized configuration for the checks managed by this module
+CHECK_FEATURES_CONFIG = {
+    COLLOCATES_NODES:
+        CheckFeature( name=COLLOCATES_NODES,
+                      optionsCls=cnParser.Options,
+                      resultCls=cnParser.Result,
+                      defaultParams=deepcopy( cnParser.__COLLOCATED_NODES_DEFAULT ),
+                      display=cnParser.displayResults ),
+    ELEMENT_VOLUMES:
+        CheckFeature( name=ELEMENT_VOLUMES,
+                      optionsCls=evParser.Options,
+                      resultCls=evParser.Result,
+                      defaultParams=deepcopy( evParser.__ELEMENT_VOLUMES_DEFAULT ),
+                      display=evParser.displayResults ),
+    NON_CONFORMAL:
+        CheckFeature( name=NON_CONFORMAL,
+                      optionsCls=ncParser.Options,
+                      resultCls=ncParser.Result,
+                      defaultParams=deepcopy( ncParser.__NON_CONFORMAL_DEFAULT ),
+                      display=ncParser.displayResults ),
+    SELF_INTERSECTING_ELEMENTS:
+        CheckFeature( name=SELF_INTERSECTING_ELEMENTS,
+                      optionsCls=sieParser.Options,
+                      resultCls=sieParser.Result,
+                      defaultParams=deepcopy( sieParser.__SELF_INTERSECTING_ELEMENTS_DEFAULT ),
+                      display=sieParser.displayResults ),
+    SUPPORTED_ELEMENTS:
+        CheckFeature( name=SUPPORTED_ELEMENTS,
+                      optionsCls=seParser.Options,
+                      resultCls=seParser.Result,
+                      defaultParams=deepcopy( seParser.__SUPPORTED_ELEMENTS_DEFAULT ),
+                      display=seParser.displayResults ),
+}
+
+
+def fillSubparser( subparsers: argparse._SubParsersAction ) -> None:
+    """Fills the subparser by calling the shared logic with the specific 'allChecks' configuration."""
+    sharedFillSubparser( subparsers=subparsers,
+                         subparserName=ALL_CHECKS,
+                         helpMessage="Perform one or multiple mesh-doctor checks from the complete set available.",
+                         orderedCheckNames=ORDERED_CHECK_NAMES,
+                         checkFeaturesConfig=CHECK_FEATURES_CONFIG )
+
+
+def convert( parsedArgs: argparse.Namespace ) -> AllChecksOptions:
+    """Converts arguments by calling the shared logic with the 'allChecks' configuration."""
+    return sharedConvert( parsedArgs=parsedArgs,
+                          orderedCheckNames=ORDERED_CHECK_NAMES,
+                          checkFeaturesConfig=CHECK_FEATURES_CONFIG )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/all_checks_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/all_checks_parsing.py
deleted file mode 100644
index 53782f78..00000000
--- a/geos-mesh/src/geos/mesh/doctor/parsing/all_checks_parsing.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import argparse
-from copy import deepcopy
-from geos.mesh.doctor.parsing._shared_checks_parsing_logic import ( CheckFeature, convert as shared_convert,
-                                                                    fill_subparser as shared_fill_subparser,
-                                                                    display_results )
-from geos.mesh.doctor.actions.all_checks import Options as AllChecksOptions
-
-# Import constants for check names
-from geos.mesh.doctor.parsing import (
-    ALL_CHECKS,
-    COLLOCATES_NODES,
-    ELEMENT_VOLUMES,
-    NON_CONFORMAL,
-    SELF_INTERSECTING_ELEMENTS,
-    SUPPORTED_ELEMENTS,
-)
-
-# Import module-specific parsing components
-from geos.mesh.doctor.parsing import collocated_nodes_parsing as cn_parser
-from geos.mesh.doctor.parsing import element_volumes_parsing as ev_parser
-from geos.mesh.doctor.parsing import non_conformal_parsing as nc_parser
-from geos.mesh.doctor.parsing import self_intersecting_elements_parsing as sie_parser
-from geos.mesh.doctor.parsing import supported_elements_parsing as se_parser
-
-# --- Configuration Specific to "All Checks" ---
-
-# Ordered list of check names for this configuration
-ORDERED_CHECK_NAMES = [
-    COLLOCATES_NODES,
-    ELEMENT_VOLUMES,
-    NON_CONFORMAL,
-    SELF_INTERSECTING_ELEMENTS,
-    SUPPORTED_ELEMENTS,
-]
-
-# Centralized configuration for the checks managed by this module
-CHECK_FEATURES_CONFIG = {
-    COLLOCATES_NODES:
-        CheckFeature( name=COLLOCATES_NODES,
-                      options_cls=cn_parser.Options,
-                      result_cls=cn_parser.Result,
-                      default_params=deepcopy( cn_parser.__COLLOCATED_NODES_DEFAULT ),
-                      display=cn_parser.display_results ),
-    ELEMENT_VOLUMES:
-        CheckFeature( name=ELEMENT_VOLUMES,
-                      options_cls=ev_parser.Options,
-                      result_cls=ev_parser.Result,
-                      default_params=deepcopy( ev_parser.__ELEMENT_VOLUMES_DEFAULT ),
-                      display=ev_parser.display_results ),
-    NON_CONFORMAL:
-        CheckFeature( name=NON_CONFORMAL,
-                      options_cls=nc_parser.Options,
-                      result_cls=nc_parser.Result,
-                      default_params=deepcopy( nc_parser.__NON_CONFORMAL_DEFAULT ),
-                      display=nc_parser.display_results ),
-    SELF_INTERSECTING_ELEMENTS:
-        CheckFeature( name=SELF_INTERSECTING_ELEMENTS,
-                      options_cls=sie_parser.Options,
-                      result_cls=sie_parser.Result,
-                      default_params=deepcopy( sie_parser.__SELF_INTERSECTING_ELEMENTS_DEFAULT ),
-                      display=sie_parser.display_results ),
-    SUPPORTED_ELEMENTS:
-        CheckFeature( name=SUPPORTED_ELEMENTS,
-                      options_cls=se_parser.Options,
-                      result_cls=se_parser.Result,
-                      default_params=deepcopy( se_parser.__SUPPORTED_ELEMENTS_DEFAULT ),
-                      display=se_parser.display_results ),
-}
-
-
-def fill_subparser( subparsers: argparse._SubParsersAction ) -> None:
-    """Fills the subparser by calling the shared logic with the specific 'all_checks' configuration."""
-    shared_fill_subparser( subparsers=subparsers,
-                           subparser_name=ALL_CHECKS,
-                           help_message="Perform one or multiple mesh-doctor checks from the complete set available.",
-                           ordered_check_names=ORDERED_CHECK_NAMES,
-                           check_features_config=CHECK_FEATURES_CONFIG )
-
-
-def convert( parsed_args: argparse.Namespace ) -> AllChecksOptions:
-    """Converts arguments by calling the shared logic with the 'all_checks' configuration."""
-    return shared_convert( parsed_args=parsed_args,
-                           ordered_check_names=ORDERED_CHECK_NAMES,
-                           check_features_config=CHECK_FEATURES_CONFIG )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/check_fractures_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/checkFracturesParsing.py
similarity index 100%
rename from geos-mesh/src/geos/mesh/doctor/parsing/check_fractures_parsing.py
rename to geos-mesh/src/geos/mesh/doctor/parsing/checkFracturesParsing.py
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/cli_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/cliParsing.py
similarity index 52%
rename from geos-mesh/src/geos/mesh/doctor/parsing/cli_parsing.py
rename to geos-mesh/src/geos/mesh/doctor/parsing/cliParsing.py
index e7cd6348..d185d981 100644
--- a/geos-mesh/src/geos/mesh/doctor/parsing/cli_parsing.py
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/cliParsing.py
@@ -1,8 +1,7 @@
 import argparse
 import logging
 import textwrap
-from typing import List
-from geos.utils.Logger import getLogger  # Alias for clarity
+from geos.utils.Logger import getLogger
 
 __VERBOSE_KEY = "verbose"
 __QUIET_KEY = "quiet"
@@ -10,82 +9,78 @@ __VERBOSITY_FLAG = "v"
 __QUIET_FLAG = "q"
 
-# Get a logger for this setup module itself, using your custom logger
-# This ensures its messages (like the "Logger level set to...") use your custom format.
-setup_logger = getLogger( "mesh-doctor" )
-setup_logger.propagate = False
+setupLogger = getLogger( "mesh-doctor" )
+setupLogger.propagate = False
 
 
-# --- Conversion Logic ---
-def parse_comma_separated_string( value: str ) -> list[ str ]:
+def parseCommaSeparatedString( value: str ) -> list[ str ]:
     """Helper to parse comma-separated strings, stripping whitespace and removing empty items."""
     if not value or not value.strip():
         return list()
     return [ item.strip() for item in value.split( ',' ) if item.strip() ]
 
 
-def parse_and_set_verbosity( cli_args: List[ str ] ) -> None:
-    """
-    Parse the verbosity flag only and set the root logger's level accordingly.
+def parseAndSetVerbosity( cliArgs: list[ str ] ) -> None:
+    """Parse the verbosity flag only and set the root logger's level accordingly.
 
     Messages from loggers created with `get_custom_logger` will inherit this level
     if their own level is set to NOTSET.
-    :param cli_args: The list of command-line arguments (e.g., sys.argv)
-    :return: None
+
+    Args:
+        cliArgs (list[ str ]): The list of command-line arguments (e.g., sys.argv)
     """
-    dummy_verbosity_parser = argparse.ArgumentParser( add_help=False )
+    dummyVerbosityParser = argparse.ArgumentParser( add_help=False )
     # Add verbosity arguments to this dummy parser
-    dummy_verbosity_parser.add_argument(
+    dummyVerbosityParser.add_argument(
         '-' + __VERBOSITY_FLAG,
         '--' + __VERBOSE_KEY,
        action='count',
        default=0,  # Base default, actual interpretation depends on help text mapping
        dest=__VERBOSE_KEY )
-    dummy_verbosity_parser.add_argument( '-' + __QUIET_FLAG,
-                                         '--' + __QUIET_KEY,
-                                         action='count',
-                                         default=0,
-                                         dest=__QUIET_KEY )
+    dummyVerbosityParser.add_argument( '-' + __QUIET_FLAG,
+                                       '--' + __QUIET_KEY,
+                                       action='count',
+                                       default=0,
+                                       dest=__QUIET_KEY )
 
     # Parse only known args to extract verbosity/quiet flags
-    # cli_args[1:] is used assuming cli_args[0] is the script name (like sys.argv)
-    args, _ = dummy_verbosity_parser.parse_known_args( cli_args[ 1: ] )
+    # cliArgs[1:] is used assuming cliArgs[0] is the script name (like sys.argv)
+    args, _ = dummyVerbosityParser.parse_known_args( cliArgs[ 1: ] )
 
-    verbose_count = args.verbose
-    quiet_count = args.quiet
+    verboseCount = args.verbose
+    quietCount = args.quiet
 
-    if verbose_count == 0 and quiet_count == 0:
+    if verboseCount == 0 and quietCount == 0:
         # Default level (no -v or -q flags)
-        effective_level = logging.WARNING
-    elif verbose_count == 1:
-        effective_level = logging.INFO
-    elif verbose_count >= 2:
-        effective_level = logging.DEBUG
-    elif quiet_count == 1:
-        effective_level = logging.ERROR
-    elif quiet_count >= 2:
-        effective_level = logging.CRITICAL
+        effectiveLevel = logging.WARNING
+    elif verboseCount == 1:
+        effectiveLevel = logging.INFO
+    elif verboseCount >= 2:
+        effectiveLevel = logging.DEBUG
+    elif quietCount == 1:
+        effectiveLevel = logging.ERROR
+    elif quietCount >= 2:
+        effectiveLevel = logging.CRITICAL
     else:
         # Should not happen with count logic but good to have a fallback
-        effective_level = logging.WARNING
+        effectiveLevel = logging.WARNING
 
     # Set the level on the ROOT logger.
-    # Loggers from get_custom_logger (with level NOTSET) will inherit this.
-    setup_logger.setLevel( effective_level )
-
-    # Use the setup_logger (which uses your custom formatter) for this message
-    setup_logger.info( f"Logger level set to \"{logging.getLevelName( effective_level )}\"" )
+    # Loggers from getCustomLogger (with level NOTSET) will inherit this.
+    setupLogger.setLevel( effectiveLevel )
+    # Use the setupLogger (which uses your custom formatter) for this message
+    setupLogger.info( f"Logger level set to \"{logging.getLevelName( effectiveLevel )}\"" )
 
 
-def init_parser() -> argparse.ArgumentParser:
-    vtk_input_file_key = "vtk_input_file"
-    epilog_msg = f"""\
+def initParser() -> argparse.ArgumentParser:
+    vtkInputFileKey = "vtkInputFile"
+    epilogMsg = f"""\
     Note that checks are dynamically loaded.
     An option may be missing because of an unloaded module.
     Increase verbosity (-{__VERBOSITY_FLAG}, -{__VERBOSITY_FLAG * 2}) to get full information.
    """
    formatter = lambda prog: argparse.RawTextHelpFormatter( prog, max_help_position=8 )
    parser = argparse.ArgumentParser( description='Inspects meshes for GEOS.',
-                                      epilog=textwrap.dedent( epilog_msg ),
+                                      epilog=textwrap.dedent( epilogMsg ),
                                       formatter_class=formatter )
 
     # Nothing will be done with this verbosity/quiet input.
     # It's only here for the `--help` message.
@@ -106,5 +101,5 @@ def init_parser() -> argparse.ArgumentParser:
                          metavar='VTK_MESH_FILE',
                          type=str,
                          required=True,
-                         dest=vtk_input_file_key )
+                         dest=vtkInputFileKey )
     return parser
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/collocatedNodesParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/collocatedNodesParsing.py
new file mode 100644
index 00000000..1b48c6f3
--- /dev/null
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/collocatedNodesParsing.py
@@ -0,0 +1,48 @@
+from geos.mesh.doctor.actions.collocatedNodes import Options, Result
+from geos.mesh.doctor.parsing import COLLOCATES_NODES
+from geos.mesh.doctor.parsing._sharedChecksParsingLogic import getOptionsUsedMessage
+from geos.mesh.doctor.parsing.cliParsing import setupLogger
+
+__TOLERANCE = "tolerance"
+__TOLERANCE_DEFAULT = 0.
+
+__COLLOCATED_NODES_DEFAULT = { __TOLERANCE: __TOLERANCE_DEFAULT }
+
+
+def convert( parsedOptions ) -> Options:
+    return Options( parsedOptions[ __TOLERANCE ] )
+
+
+def fillSubparser( subparsers ) -> None:
+    p = subparsers.add_parser( COLLOCATES_NODES, help="Checks if nodes are collocated." )
+    p.add_argument( '--' + __TOLERANCE,
+                    type=float,
+                    metavar=__TOLERANCE_DEFAULT,
+                    default=__TOLERANCE_DEFAULT,
+                    required=True,
+                    help="[float]: The absolute distance between two nodes for them to be considered collocated." )
+
+
+def displayResults( options: Options, result: Result ):
+    setupLogger.results( getOptionsUsedMessage( options ) )
+    allCollocatedNodes: list[ int ] = []
+    for bucket in result.nodesBuckets:
+        for node in bucket:
+            allCollocatedNodes.append( node )
+    allCollocatedNodes: frozenset[ int ] = frozenset( allCollocatedNodes )  # Surely useless
+    if allCollocatedNodes:
+        setupLogger.results( f"You have {len( allCollocatedNodes )} collocated nodes." )
+        setupLogger.results( "Here are all the buckets of collocated nodes." )
+        tmp: list[ str ] = []
+        for bucket in result.nodesBuckets:
+            tmp.append( f"({', '.join(map(str, bucket))})" )
+        setupLogger.results( f"({', '.join(tmp)})" )
+    else:
+        setupLogger.results( "You have no collocated node." )
+
+    if result.wrongSupportElements:
+        tmp: str = ", ".join( map( str, result.wrongSupportElements ) )
+        setupLogger.results( f"You have {len(result.wrongSupportElements)} elements with duplicated support nodes.\n" +
+                             tmp )
+    else:
+        setupLogger.results( "You have no element with duplicated support nodes." )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/collocated_nodes_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/collocated_nodes_parsing.py
deleted file mode 100644
index ac93feb8..00000000
--- a/geos-mesh/src/geos/mesh/doctor/parsing/collocated_nodes_parsing.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from geos.mesh.doctor.actions.collocated_nodes import Options, Result
-from geos.mesh.doctor.parsing import COLLOCATES_NODES
-from geos.mesh.doctor.parsing._shared_checks_parsing_logic import get_options_used_message
-from geos.mesh.doctor.parsing.cli_parsing import setup_logger
-
-__TOLERANCE = "tolerance"
-__TOLERANCE_DEFAULT = 0.
-
-__COLLOCATED_NODES_DEFAULT = { __TOLERANCE: __TOLERANCE_DEFAULT }
-
-
-def convert( parsed_options ) -> Options:
-    return Options( parsed_options[ __TOLERANCE ] )
-
-
-def fill_subparser( subparsers ) -> None:
-    p = subparsers.add_parser( COLLOCATES_NODES, help="Checks if nodes are collocated." )
-    p.add_argument( '--' + __TOLERANCE,
-                    type=float,
-                    metavar=__TOLERANCE_DEFAULT,
-                    default=__TOLERANCE_DEFAULT,
-                    required=True,
-                    help="[float]: The absolute distance between two nodes for them to be considered collocated." )
-
-
-def display_results( options: Options, result: Result ):
-    setup_logger.results( get_options_used_message( options ) )
-    all_collocated_nodes: list[ int ] = []
-    for bucket in result.nodes_buckets:
-        for node in bucket:
-            all_collocated_nodes.append( node )
-    all_collocated_nodes: frozenset[ int ] = frozenset( all_collocated_nodes )  # Surely useless
-    if all_collocated_nodes:
-        setup_logger.results( f"You have {len( all_collocated_nodes )} collocated nodes." )
-        setup_logger.results( "Here are all the buckets of collocated nodes." )
-        tmp: list[ str ] = []
-        for bucket in result.nodes_buckets:
-            tmp.append( f"({', '.join(map(str, bucket))})" )
-        setup_logger.results( f"({', '.join(tmp)})" )
-    else:
-        setup_logger.results( "You have no collocated node." )
-
-    if result.wrong_support_elements:
-        tmp: str = ", ".join( map( str, result.wrong_support_elements ) )
-        setup_logger.results(
-            f"You have {len(result.wrong_support_elements)} elements with duplicated support nodes.\n" + tmp )
-    else:
-        setup_logger.results( "You have no element with duplicated support nodes." )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/elementVolumesParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/elementVolumesParsing.py
new file mode 100644
index 00000000..102fbebd
--- /dev/null
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/elementVolumesParsing.py
@@ -0,0 +1,44 @@
+from geos.mesh.doctor.actions.elementVolumes import Options, Result
+from geos.mesh.doctor.parsing import ELEMENT_VOLUMES
+from geos.mesh.doctor.parsing._sharedChecksParsingLogic import getOptionsUsedMessage
+from geos.mesh.doctor.parsing.cliParsing import setupLogger
+
+__MIN_VOLUME = "minVolume"
+__MIN_VOLUME_DEFAULT = 0.
+
+__ELEMENT_VOLUMES_DEFAULT = { __MIN_VOLUME: __MIN_VOLUME_DEFAULT }
+
+
+def fillSubparser( subparsers ) -> None:
+    p = subparsers.add_parser( ELEMENT_VOLUMES,
+                               help=f"Checks if the volumes of the elements are greater than \"{__MIN_VOLUME}\"." )
+    p.add_argument( '--' + __MIN_VOLUME,
+                    type=float,
+                    metavar=__MIN_VOLUME_DEFAULT,
+                    default=__MIN_VOLUME_DEFAULT,
+                    required=True,
+                    help=f"[float]: The minimum acceptable volume. Defaults to {__MIN_VOLUME_DEFAULT}." )
+
+
+def convert( parsedOptions ) -> Options:
+    """From the parsed cli options, return the converted options for elements volumes check.
+
+    Args:
+        parsedOptions: Parsed cli options.
+
+    Returns:
+        Options: The converted options for elements volumes check.
+    """
+    return Options( minVolume=parsedOptions[ __MIN_VOLUME ] )
+
+
+def displayResults( options: Options, result: Result ):
+    setupLogger.results( getOptionsUsedMessage( options ) )
+    setupLogger.results(
+        f"You have {len(result.elementVolumes)} elements with volumes smaller than {options.minVolume}." )
+    if result.elementVolumes:
+        setupLogger.results( "Elements index | Volumes calculated" )
+        setupLogger.results( "-----------------------------------" )
+        maxLength: int = len( "Elements index " )
+        for ( ind, volume ) in result.elementVolumes:
+            setupLogger.results( f"{ind:<{maxLength}}" + "| " + str( volume ) )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/element_volumes_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/element_volumes_parsing.py
deleted file mode 100644
index 23315785..00000000
--- a/geos-mesh/src/geos/mesh/doctor/parsing/element_volumes_parsing.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from geos.mesh.doctor.actions.element_volumes import Options, Result
-from geos.mesh.doctor.parsing import ELEMENT_VOLUMES
-from geos.mesh.doctor.parsing._shared_checks_parsing_logic import get_options_used_message
-from geos.mesh.doctor.parsing.cli_parsing import setup_logger
-
-__MIN_VOLUME = "min_volume"
-__MIN_VOLUME_DEFAULT = 0.
-
-__ELEMENT_VOLUMES_DEFAULT = { __MIN_VOLUME: __MIN_VOLUME_DEFAULT }
-
-
-def fill_subparser( subparsers ) -> None:
-    p = subparsers.add_parser( ELEMENT_VOLUMES,
-                               help=f"Checks if the volumes of the elements are greater than \"{__MIN_VOLUME}\"." )
-    p.add_argument( '--' + __MIN_VOLUME,
-                    type=float,
-                    metavar=__MIN_VOLUME_DEFAULT,
-                    default=__MIN_VOLUME_DEFAULT,
-                    required=True,
-                    help=f"[float]: The minimum acceptable volume. Defaults to {__MIN_VOLUME_DEFAULT}." )
-
-
-def convert( parsed_options ) -> Options:
-    """
-    From the parsed cli options, return the converted options for elements volumes check.
-    :param options_str: Parsed cli options.
-    :return: Options instance.
-    """
-    return Options( min_volume=parsed_options[ __MIN_VOLUME ] )
-
-
-def display_results( options: Options, result: Result ):
-    setup_logger.results( get_options_used_message( options ) )
-    setup_logger.results(
-        f"You have {len(result.element_volumes)} elements with volumes smaller than {options.min_volume}." )
-    if result.element_volumes:
-        setup_logger.results( "Elements index | Volumes calculated" )
-        setup_logger.results( "-----------------------------------" )
-        max_length: int = len( "Elements index " )
-        for ( ind, volume ) in result.element_volumes:
-            setup_logger.results( f"{ind:<{max_length}}" + "| " + str( volume ) )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/fixElementsOrderingsParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/fixElementsOrderingsParsing.py
new file mode 100644
index 00000000..83fbbc34
--- /dev/null
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/fixElementsOrderingsParsing.py
@@ -0,0 +1,85 @@
+import random
+from vtkmodules.vtkCommonDataModel import (
+    VTK_HEXAGONAL_PRISM,
+    VTK_HEXAHEDRON,
+    VTK_PENTAGONAL_PRISM,
+    VTK_PYRAMID,
+    VTK_TETRA,
+    VTK_VOXEL,
+    VTK_WEDGE,
+)
+from geos.mesh.doctor.actions.fixElementsOrderings import Options, Result
+from geos.mesh.doctor.parsing import vtkOutputParsing, FIX_ELEMENTS_ORDERINGS
+from geos.mesh.doctor.parsing.cliParsing import setupLogger
+
+__CELL_TYPE_MAPPING = {
+    "Hexahedron": VTK_HEXAHEDRON,
+    "Prism5": VTK_PENTAGONAL_PRISM,
+    "Prism6": VTK_HEXAGONAL_PRISM,
+    "Pyramid": VTK_PYRAMID,
+    "Tetrahedron": VTK_TETRA,
+    "Voxel": VTK_VOXEL,
+    "Wedge": VTK_WEDGE,
+}
+
+__CELL_TYPE_SUPPORT_SIZE = {
+    VTK_HEXAHEDRON: 8,
+    VTK_PENTAGONAL_PRISM: 10,
+    VTK_HEXAGONAL_PRISM: 12,
+    VTK_PYRAMID: 5,
+    VTK_TETRA: 4,
+    VTK_VOXEL: 8,
+    VTK_WEDGE: 6,
+}
+
+
+def fillSubparser( subparsers ) -> None:
+    p = subparsers.add_parser( FIX_ELEMENTS_ORDERINGS, help="Reorders the support nodes for the given cell types." )
+    for key, vtkKey in __CELL_TYPE_MAPPING.items():
+        tmp = list( range( __CELL_TYPE_SUPPORT_SIZE[ vtkKey ] ) )
+        random.Random( 4 ).shuffle( tmp )
+        p.add_argument( '--' + key,
+                        type=str,
+                        metavar=",".join( map( str, tmp ) ),
+                        default=None,
+                        required=False,
+                        help=f"[list of integers]: node permutation for \"{key}\"." )
+    vtkOutputParsing.fillVtkOutputSubparser( p )
+
+
+def convert( parsedOptions ) -> Options:
+    """From the parsed cli options, return the converted options for the fix elements orderings check.
+
+    Args:
+        parsedOptions: Parsed cli options.
+
+    Raises:
+        ValueError: If the parsed options are invalid.
+
+    Returns:
+        Options: The converted options for the fix elements orderings check.
+    """
+    cellTypeToOrdering = {}
+    for key, vtkKey in __CELL_TYPE_MAPPING.items():
+        rawMapping = parsedOptions[ key ]
+        if rawMapping:
+            tmp = tuple( map( int, rawMapping.split( "," ) ) )
+            if not set( tmp ) == set( range( __CELL_TYPE_SUPPORT_SIZE[ vtkKey ] ) ):
+                errMsg = f"Permutation {rawMapping} for type {key} is not valid."
+                setupLogger.error( errMsg )
+                raise ValueError( errMsg )
+            cellTypeToOrdering[ vtkKey ] = tmp
+    vtkOutput = vtkOutputParsing.convert( parsedOptions )
+    return Options( vtkOutput=vtkOutput, cellTypeToOrdering=cellTypeToOrdering )
+
+
+def displayResults( options: Options, result: Result ):
+    if result.output:
+        setupLogger.info( f"New mesh was written to file '{result.output}'" )
+        if result.unchangedCellTypes:
+            setupLogger.info(
+                f"Those vtk types were not reordered: [{', '.join(map(str, result.unchangedCellTypes))}]." )
+        else:
+            setupLogger.info( "All the cells of the mesh were reordered." )
+    else:
+        setupLogger.info( "No output file was written." )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/fix_elements_orderings_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/fix_elements_orderings_parsing.py
deleted file mode 100644
index 76a6e3e7..00000000
--- a/geos-mesh/src/geos/mesh/doctor/parsing/fix_elements_orderings_parsing.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import random
-from vtkmodules.vtkCommonDataModel import (
-    VTK_HEXAGONAL_PRISM,
-    VTK_HEXAHEDRON,
-    VTK_PENTAGONAL_PRISM,
-    VTK_PYRAMID,
-    VTK_TETRA,
-    VTK_VOXEL,
-    VTK_WEDGE,
-)
-from geos.mesh.doctor.actions.fix_elements_orderings import Options, Result
-from geos.mesh.doctor.parsing import vtk_output_parsing, FIX_ELEMENTS_ORDERINGS
-from geos.mesh.doctor.parsing.cli_parsing import setup_logger
-
-__CELL_TYPE_MAPPING = {
-    "Hexahedron": VTK_HEXAHEDRON,
-    "Prism5": VTK_PENTAGONAL_PRISM,
-    "Prism6": VTK_HEXAGONAL_PRISM,
-    "Pyramid": VTK_PYRAMID,
-    "Tetrahedron": VTK_TETRA,
-    "Voxel": VTK_VOXEL,
-    "Wedge": VTK_WEDGE,
-}
-
-__CELL_TYPE_SUPPORT_SIZE = {
-    VTK_HEXAHEDRON: 8,
-    VTK_PENTAGONAL_PRISM: 10,
-    VTK_HEXAGONAL_PRISM: 12,
-    VTK_PYRAMID: 5,
-    VTK_TETRA: 4,
-    VTK_VOXEL: 8,
-    VTK_WEDGE: 6,
-}
-
-
-def fill_subparser( subparsers ) -> None:
-    p = subparsers.add_parser( FIX_ELEMENTS_ORDERINGS, help="Reorders the support nodes for the given cell types." )
-    for key, vtk_key in __CELL_TYPE_MAPPING.items():
-        tmp = list( range( __CELL_TYPE_SUPPORT_SIZE[ vtk_key ] ) )
-        random.Random( 4 ).shuffle( tmp )
-        p.add_argument( '--' + key,
-                        type=str,
-                        metavar=",".join( map( str, tmp ) ),
-                        default=None,
-                        required=False,
-                        help=f"[list of integers]: node permutation for \"{key}\"." )
-    vtk_output_parsing.fill_vtk_output_subparser( p )
-
-
-def convert( parsed_options ) -> Options:
-    """
-    From the parsed cli options, return the converted options for self intersecting elements check.
-    :param options_str: Parsed cli options.
-    :return: Options instance.
-    """
-    cell_type_to_ordering = {}
-    for key, vtk_key in __CELL_TYPE_MAPPING.items():
-        raw_mapping = parsed_options[ key ]
-        if raw_mapping:
-            tmp = tuple( map( int, raw_mapping.split( "," ) ) )
-            if not set( tmp ) == set( range( __CELL_TYPE_SUPPORT_SIZE[ vtk_key ] ) ):
-                err_msg = f"Permutation {raw_mapping} for type {key} is not valid."
-                setup_logger.error( err_msg )
-                raise ValueError( err_msg )
-            cell_type_to_ordering[ vtk_key ] = tmp
-    vtk_output = vtk_output_parsing.convert( parsed_options )
-    return Options( vtk_output=vtk_output, cell_type_to_ordering=cell_type_to_ordering )
-
-
-def display_results( options: Options, result: Result ):
-    if result.output:
-        setup_logger.info( f"New mesh was written to file '{result.output}'" )
-        if result.unchanged_cell_types:
-            setup_logger.info(
-                f"Those vtk types were not reordered: [{', '.join(map(str, result.unchanged_cell_types))}]." )
-        else:
-            setup_logger.info( "All the cells of the mesh were reordered." )
-    else:
-        setup_logger.info( "No output file was written." )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/generate_cube_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/generateCubeParsing.py
similarity index 61%
rename from geos-mesh/src/geos/mesh/doctor/parsing/generate_cube_parsing.py
rename to geos-mesh/src/geos/mesh/doctor/parsing/generateCubeParsing.py
index c717e99a..f0bc8a38 100644
--- a/geos-mesh/src/geos/mesh/doctor/parsing/generate_cube_parsing.py
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/generateCubeParsing.py
@@ -1,23 +1,23 @@
-from geos.mesh.doctor.actions.generate_cube import Options, Result, FieldInfo
-from geos.mesh.doctor.parsing import vtk_output_parsing, generate_global_ids_parsing, GENERATE_CUBE
-from geos.mesh.doctor.parsing.cli_parsing import setup_logger
-from geos.mesh.doctor.parsing.generate_global_ids_parsing import GlobalIdsInfo
+from geos.mesh.doctor.actions.generateCube import Options, Result, FieldInfo
+from geos.mesh.doctor.parsing import vtkOutputParsing, generateGlobalIdsParsing, GENERATE_CUBE
+from geos.mesh.doctor.parsing.cliParsing import setupLogger
+from geos.mesh.doctor.parsing.generateGlobalIdsParsing import GlobalIdsInfo
 
 __X, __Y, __Z, __NX, __NY, __NZ = "x", "y", "z", "nx", "ny", "nz"
 __FIELDS = "fields"
 
 
-def convert( parsed_options ) -> Options:
+def convert( parsedOptions ) -> Options:
 
-    def check_discretizations( x, nx, title ):
+    def checkDiscretizations( x, nx, title ):
         if len( x ) != len( nx ) + 1:
             raise ValueError( f"{title} information (\"{x}\" and \"{nx}\") does not have consistent size." )
 
-    check_discretizations( parsed_options[ __X ], parsed_options[ __NX ], __X )
-    check_discretizations( parsed_options[ __Y ], parsed_options[ __NY ], __Y )
-    check_discretizations( parsed_options[ __Z ], parsed_options[ __NZ ], __Z )
+    checkDiscretizations( parsedOptions[ __X ], parsedOptions[ __NX ], __X )
+    checkDiscretizations( parsedOptions[ __Y ], parsedOptions[ __NY ], __Y )
+    checkDiscretizations( parsedOptions[ __Z ], parsedOptions[ __NZ ], __Z )
 
-    def parse_fields( s ):
+    def parseFields( s ):
         name, support, dim = s.split( ":" )
         if support not in ( "CELLS", "POINTS" ):
             raise ValueError( f"Support {support} for field \"{name}\" must be one of \"CELLS\" or \"POINTS\"." )
@@ -30,21 +30,21 @@ def parse_fields( s ):
             raise ValueError( f"Dimension {dim} must be a positive integer" )
         return FieldInfo( name=name, support=support, dimension=dim )
 
-    gids: GlobalIdsInfo = generate_global_ids_parsing.convert_global_ids( parsed_options )
+    gids: GlobalIdsInfo = generateGlobalIdsParsing.convertGlobalIds( parsedOptions )
 
-    return Options( vtk_output=vtk_output_parsing.convert( parsed_options ),
-                    generate_cells_global_ids=gids.cells,
-                    generate_points_global_ids=gids.points,
-                    xs=parsed_options[ __X ],
-                    ys=parsed_options[ __Y ],
-                    zs=parsed_options[ __Z ],
-                    nxs=parsed_options[ __NX ],
-                    nys=parsed_options[ __NY ],
-                    nzs=parsed_options[ __NZ ],
-                    fields=tuple( map( parse_fields, parsed_options[ __FIELDS ] ) ) )
+    return Options( vtkOutput=vtkOutputParsing.convert( parsedOptions ),
+                    generateCellsGlobalIds=gids.cells,
+                    generatePointsGlobalIds=gids.points,
+                    xs=parsedOptions[ __X ],
+                    ys=parsedOptions[ __Y ],
+                    zs=parsedOptions[ __Z ],
+                    nxs=parsedOptions[ __NX ],
+                    nys=parsedOptions[ __NY ],
+                    nzs=parsedOptions[ __NZ ],
+                    fields=tuple( map( parseFields, parsedOptions[ __FIELDS ] ) ) )
 
 
-def fill_subparser( subparsers ) -> None:
+def fillSubparser( subparsers ) -> None:
     p = subparsers.add_parser( GENERATE_CUBE, help="Generate a cube and its fields." )
     p.add_argument( '--' + __X,
                     type=lambda s: tuple( map( float, s.split( ":" ) ) ),
@@ -77,9 +77,9 @@ def fill_subparser( subparsers ) -> None:
                     required=False,
                     default=(),
                     help="Create fields on CELLS or POINTS, with given dimension (typically 1 or 3)." )
-    generate_global_ids_parsing.fill_generate_global_ids_subparser( p )
-    vtk_output_parsing.fill_vtk_output_subparser( p )
+    generateGlobalIdsParsing.fillGenerateGlobalIdsSubparser( p )
+    vtkOutputParsing.fillVtkOutputSubparser( p )
 
 
-def display_results( options: Options, result: Result ):
-    setup_logger.info( result.info )
+def displayResults( options: Options, result: Result ):
+    setupLogger.info( result.info )
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/generate_fractures_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/generateFracturesParsing.py
similarity index 53%
rename from geos-mesh/src/geos/mesh/doctor/parsing/generate_fractures_parsing.py
rename to geos-mesh/src/geos/mesh/doctor/parsing/generateFracturesParsing.py
index 85dcb5d4..ffc3f17e 100644
--- a/geos-mesh/src/geos/mesh/doctor/parsing/generate_fractures_parsing.py
+++ b/geos-mesh/src/geos/mesh/doctor/parsing/generateFracturesParsing.py
@@ -1,28 +1,34 @@
 import os
-from geos.mesh.doctor.actions.generate_fractures import Options, Result, FracturePolicy
-from geos.mesh.doctor.parsing import vtk_output_parsing, GENERATE_FRACTURES
+from geos.mesh.doctor.actions.generateFractures import Options, Result, FracturePolicy
+from geos.mesh.doctor.parsing import vtkOutputParsing, GENERATE_FRACTURES
 from geos.mesh.io.vtkIO import VtkOutput
 
 __POLICY = "policy"
 __FIELD_POLICY = "field"
-__INTERNAL_SURFACES_POLICY = "internal_surfaces"
+__INTERNAL_SURFACES_POLICY = "internalSurfaces"
 __POLICIES = ( __FIELD_POLICY, __INTERNAL_SURFACES_POLICY )
 
 __FIELD_NAME = "name"
 __FIELD_VALUES = "values"
 
-__FRACTURES_OUTPUT_DIR = "fractures_output_dir"
-__FRACTURES_DATA_MODE = "fractures_data_mode"
+__FRACTURES_OUTPUT_DIR = "fracturesOutputDir"
+__FRACTURES_DATA_MODE = 
"fracturesDataMode" __FRACTURES_DATA_MODE_VALUES = "binary", "ascii" __FRACTURES_DATA_MODE_DEFAULT = __FRACTURES_DATA_MODE_VALUES[ 0 ] -def convert_to_fracture_policy( s: str ) -> FracturePolicy: - """ - Converts the user input to the proper enum chosen. +def convertToFracturePolicy( s: str ) -> FracturePolicy: + """Converts the user input to the proper enum chosen. I do not want to use the auto conversion already available to force explicit conversion. - :param s: The user input - :return: The matching enum. + + Args: + s (str): The user input + + Raises: + ValueError: If the parsed options are invalid. + + Returns: + FracturePolicy: The matching enum. """ if s == __FIELD_POLICY: return FracturePolicy.FIELD @@ -31,10 +37,10 @@ def convert_to_fracture_policy( s: str ) -> FracturePolicy: raise ValueError( f"Policy {s} is not valid. Please use one of \"{', '.join(map(str, __POLICIES))}\"." ) -def fill_subparser( subparsers ) -> None: +def fillSubparser( subparsers ) -> None: p = subparsers.add_parser( GENERATE_FRACTURES, help="Splits the mesh to generate the faults and fractures." ) p.add_argument( '--' + __POLICY, - type=convert_to_fracture_policy, + type=convertToFracturePolicy, metavar=", ".join( __POLICIES ), required=True, help=f"[string]: The criterion to define the surfaces that will be changed into fracture zones. " @@ -55,7 +61,7 @@ def fill_subparser( subparsers ) -> None: f"You can create multiple fractures by separating the values with ':' like shown in this example. " f"--{__FIELD_VALUES} 10,12:13,14,16,18:22 will create 3 fractures identified respectively with the values (10,12), (13,14,16,18) and (22). " f"If no ':' is found, all values specified will be assumed to create only 1 single fracture." 
) - vtk_output_parsing.fill_vtk_output_subparser( p ) + vtkOutputParsing.fillVtkOutputSubparser( p ) p.add_argument( '--' + __FRACTURES_OUTPUT_DIR, type=str, @@ -68,42 +74,41 @@ def fill_subparser( subparsers ) -> None: help=f'[string]: For ".vtu" output format, the data mode can be binary or ascii. Defaults to binary.' ) -def convert( parsed_options ) -> Options: - policy: str = parsed_options[ __POLICY ] - field: str = parsed_options[ __FIELD_NAME ] - all_values: str = parsed_options[ __FIELD_VALUES ] - if not are_values_parsable( all_values ): +def convert( parsedOptions ) -> Options: + policy: str = parsedOptions[ __POLICY ] + field: str = parsedOptions[ __FIELD_NAME ] + allValues: str = parsedOptions[ __FIELD_VALUES ] + if not areValuesParsable( allValues ): raise ValueError( f"When entering --{__FIELD_VALUES}, respect this given format example:\n--{__FIELD_VALUES} " + "10,12:13,14,16,18:22 to create 3 fractures identified with respectively the values (10,12), (13,14,16,18) and (22)." 
) - all_values_no_separator: str = all_values.replace( ":", "," ) - field_values_combined: frozenset[ int ] = frozenset( map( int, all_values_no_separator.split( "," ) ) ) - mesh_vtk_output = vtk_output_parsing.convert( parsed_options ) + allValuesNoSeparator: str = allValues.replace( ":", "," ) + fieldValuesCombined: frozenset[ int ] = frozenset( map( int, allValuesNoSeparator.split( "," ) ) ) + meshVtkOutput = vtkOutputParsing.convert( parsedOptions ) # create the different fractures - per_fracture: list[ str ] = all_values.split( ":" ) - field_values_per_fracture: list[ frozenset[ int ] ] = [ - frozenset( map( int, fracture.split( "," ) ) ) for fracture in per_fracture + perFracture: list[ str ] = allValues.split( ":" ) + fieldValuesPerFracture: list[ frozenset[ int ] ] = [ + frozenset( map( int, fracture.split( "," ) ) ) for fracture in perFracture ] - fracture_names: list[ str ] = [ "fracture_" + frac.replace( ",", "_" ) + ".vtu" for frac in per_fracture ] - fractures_output_dir: str = parsed_options[ __FRACTURES_OUTPUT_DIR ] - fractures_data_mode: str = parsed_options[ __FRACTURES_DATA_MODE ] == __FRACTURES_DATA_MODE_DEFAULT - all_fractures_VtkOutput: list[ VtkOutput ] = build_all_fractures_VtkOutput( fractures_output_dir, - fractures_data_mode, mesh_vtk_output, - fracture_names ) + fractureNames: list[ str ] = [ "fracture_" + frac.replace( ",", "_" ) + ".vtu" for frac in perFracture ] + fracturesOutputDir: str = parsedOptions[ __FRACTURES_OUTPUT_DIR ] + fracturesDataMode: bool = parsedOptions[ __FRACTURES_DATA_MODE ] == __FRACTURES_DATA_MODE_DEFAULT + allFracturesVtkOutput: list[ VtkOutput ] = buildAllFracturesVtkOutput( fracturesOutputDir, fracturesDataMode, + meshVtkOutput, fractureNames ) return Options( policy=policy, field=field, - field_values_combined=field_values_combined, - field_values_per_fracture=field_values_per_fracture, - mesh_VtkOutput=mesh_vtk_output, - all_fractures_VtkOutput=all_fractures_VtkOutput ) + 
fieldValuesCombined=fieldValuesCombined, + fieldValuesPerFracture=fieldValuesPerFracture, + meshVtkOutput=meshVtkOutput, + allFracturesVtkOutput=allFracturesVtkOutput ) -def display_results( options: Options, result: Result ): +def displayResults( options: Options, result: Result ): pass -def are_values_parsable( values: str ) -> bool: +def areValuesParsable( values: str ) -> bool: if not all( character.isdigit() or character in { ':', ',' } for character in values ): return False if values.startswith( ":" ) or values.startswith( "," ): @@ -113,19 +118,19 @@ def are_values_parsable( values: str ) -> bool: return True -def build_all_fractures_VtkOutput( fracture_output_dir: str, fractures_data_mode: bool, mesh_vtk_output: VtkOutput, - fracture_names: list[ str ] ) -> list[ VtkOutput ]: - if not os.path.exists( fracture_output_dir ): - raise ValueError( f"The --{__FRACTURES_OUTPUT_DIR} given directory '{fracture_output_dir}' does not exist." ) +def buildAllFracturesVtkOutput( fractureOutputDir: str, fracturesDataMode: bool, meshVtkOutput: VtkOutput, + fractureNames: list[ str ] ) -> list[ VtkOutput ]: + if not os.path.exists( fractureOutputDir ): + raise ValueError( f"The --{__FRACTURES_OUTPUT_DIR} given directory '{fractureOutputDir}' does not exist." ) - if not os.access( fracture_output_dir, os.W_OK ): - raise ValueError( f"The --{__FRACTURES_OUTPUT_DIR} given directory '{fracture_output_dir}' is not writable." ) + if not os.access( fractureOutputDir, os.W_OK ): + raise ValueError( f"The --{__FRACTURES_OUTPUT_DIR} given directory '{fractureOutputDir}' is not writable." ) - output_name = os.path.basename( mesh_vtk_output.output ) - splitted_name_without_expension: list[ str ] = output_name.split( "." 
)[ :-1 ] - name_without_extension: str = '_'.join( splitted_name_without_expension ) + "_" - all_fractures_VtkOuput: list[ VtkOutput ] = list() - for fracture_name in fracture_names: - fracture_path = os.path.join( fracture_output_dir, name_without_extension + fracture_name ) - all_fractures_VtkOuput.append( VtkOutput( fracture_path, fractures_data_mode ) ) - return all_fractures_VtkOuput + outputName = os.path.basename( meshVtkOutput.output ) + splittedNameWithoutExtension: list[ str ] = outputName.split( "." )[ :-1 ] + nameWithoutExtension: str = '_'.join( splittedNameWithoutExtension ) + "_" + allFracturesVtkOutput: list[ VtkOutput ] = list() + for fractureName in fractureNames: + fracturePath = os.path.join( fractureOutputDir, nameWithoutExtension + fractureName ) + allFracturesVtkOutput.append( VtkOutput( fracturePath, fracturesDataMode ) ) + return allFracturesVtkOutput diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/generateGlobalIdsParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/generateGlobalIdsParsing.py new file mode 100644 index 00000000..e7e934a5 --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/parsing/generateGlobalIdsParsing.py @@ -0,0 +1,52 @@ +from dataclasses import dataclass +from geos.mesh.doctor.actions.generateGlobalIds import Options, Result +from geos.mesh.doctor.parsing import vtkOutputParsing, GENERATE_GLOBAL_IDS +from geos.mesh.doctor.parsing.cliParsing import setupLogger + +__CELLS, __POINTS = "cells", "points" + + +@dataclass( frozen=True ) +class GlobalIdsInfo: + cells: bool + points: bool + + +def convertGlobalIds( parsedOptions ) -> GlobalIdsInfo: + return GlobalIdsInfo( cells=parsedOptions[ __CELLS ], points=parsedOptions[ __POINTS ] ) + + +def convert( parsedOptions ) -> Options: + gids: GlobalIdsInfo = convertGlobalIds( parsedOptions ) + return Options( vtkOutput=vtkOutputParsing.convert( parsedOptions ), + generateCellsGlobalIds=gids.cells, + generatePointsGlobalIds=gids.points ) + + +def fillGenerateGlobalIdsSubparser( 
p ): + p.add_argument( '--' + __CELLS, + action="store_true", + help="[bool]: Generate global ids for cells. Defaults to true." ) + p.add_argument( '--no-' + __CELLS, + action="store_false", + dest=__CELLS, + help="[bool]: Don't generate global ids for cells." ) + p.set_defaults( **{ __CELLS: True } ) + p.add_argument( '--' + __POINTS, + action="store_true", + help="[bool]: Generate global ids for points. Defaults to true." ) + p.add_argument( '--no-' + __POINTS, + action="store_false", + dest=__POINTS, + help="[bool]: Don't generate global ids for points." ) + p.set_defaults( **{ __POINTS: True } ) + + +def fillSubparser( subparsers ) -> None: + p = subparsers.add_parser( GENERATE_GLOBAL_IDS, help="Adds global ids for points and cells." ) + fillGenerateGlobalIdsSubparser( p ) + vtkOutputParsing.fillVtkOutputSubparser( p ) + + +def displayResults( options: Options, result: Result ): + setupLogger.info( result.info ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/generate_global_ids_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/generate_global_ids_parsing.py deleted file mode 100644 index 2c1a09cd..00000000 --- a/geos-mesh/src/geos/mesh/doctor/parsing/generate_global_ids_parsing.py +++ /dev/null @@ -1,52 +0,0 @@ -from dataclasses import dataclass -from geos.mesh.doctor.actions.generate_global_ids import Options, Result -from geos.mesh.doctor.parsing import vtk_output_parsing, GENERATE_GLOBAL_IDS -from geos.mesh.doctor.parsing.cli_parsing import setup_logger - -__CELLS, __POINTS = "cells", "points" - - -@dataclass( frozen=True ) -class GlobalIdsInfo: - cells: bool - points: bool - - -def convert_global_ids( parsed_options ) -> GlobalIdsInfo: - return GlobalIdsInfo( cells=parsed_options[ __CELLS ], points=parsed_options[ __POINTS ] ) - - -def convert( parsed_options ) -> Options: - gids: GlobalIdsInfo = convert_global_ids( parsed_options ) - return Options( vtk_output=vtk_output_parsing.convert( parsed_options ), - generate_cells_global_ids=gids.cells, - 
generate_points_global_ids=gids.points ) - - -def fill_generate_global_ids_subparser( p ): - p.add_argument( '--' + __CELLS, - action="store_true", - help=f"[bool]: Generate global ids for cells. Defaults to true." ) - p.add_argument( '--no-' + __CELLS, - action="store_false", - dest=__CELLS, - help=f"[bool]: Don't generate global ids for cells." ) - p.set_defaults( **{ __CELLS: True } ) - p.add_argument( '--' + __POINTS, - action="store_true", - help=f"[bool]: Generate global ids for points. Defaults to true." ) - p.add_argument( '--no-' + __POINTS, - action="store_false", - dest=__POINTS, - help=f"[bool]: Don't generate global ids for points." ) - p.set_defaults( **{ __POINTS: True } ) - - -def fill_subparser( subparsers ) -> None: - p = subparsers.add_parser( GENERATE_GLOBAL_IDS, help="Adds globals ids for points and cells." ) - fill_generate_global_ids_subparser( p ) - vtk_output_parsing.fill_vtk_output_subparser( p ) - - -def display_results( options: Options, result: Result ): - setup_logger.info( result.info ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/mainChecksParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/mainChecksParsing.py new file mode 100644 index 00000000..d7d124ff --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/parsing/mainChecksParsing.py @@ -0,0 +1,62 @@ +import argparse +from copy import deepcopy +from geos.mesh.doctor.actions.allChecks import Options as AllChecksOptions +from geos.mesh.doctor.parsing._sharedChecksParsingLogic import ( CheckFeature, convert as sharedConvert, fillSubparser + as sharedFillSubparser, displayResults ) +from geos.mesh.doctor.parsing import ( + MAIN_CHECKS, + COLLOCATES_NODES, + ELEMENT_VOLUMES, + SELF_INTERSECTING_ELEMENTS, +) +from geos.mesh.doctor.parsing import collocatedNodesParsing as cnParser +from geos.mesh.doctor.parsing import elementVolumesParsing as evParser +from geos.mesh.doctor.parsing import selfIntersectingElementsParsing as sieParser + +# Ordered list of check names for this 
configuration +ORDERED_CHECK_NAMES = [ + COLLOCATES_NODES, + ELEMENT_VOLUMES, + SELF_INTERSECTING_ELEMENTS, +] + +# Centralized configuration for the checks managed by this module +CHECK_FEATURES_CONFIG = { + COLLOCATES_NODES: + CheckFeature( name=COLLOCATES_NODES, + optionsCls=cnParser.Options, + resultCls=cnParser.Result, + defaultParams=deepcopy( cnParser.__COLLOCATED_NODES_DEFAULT ), + display=cnParser.displayResults ), + ELEMENT_VOLUMES: + CheckFeature( name=ELEMENT_VOLUMES, + optionsCls=evParser.Options, + resultCls=evParser.Result, + defaultParams=deepcopy( evParser.__ELEMENT_VOLUMES_DEFAULT ), + display=evParser.displayResults ), + SELF_INTERSECTING_ELEMENTS: + CheckFeature( name=SELF_INTERSECTING_ELEMENTS, + optionsCls=sieParser.Options, + resultCls=sieParser.Result, + defaultParams=deepcopy( sieParser.__SELF_INTERSECTING_ELEMENTS_DEFAULT ), + display=sieParser.displayResults ), +} + + +def fillSubparser( subparsers: argparse._SubParsersAction ) -> None: + """Fills the subparser by calling the shared logic with the specific 'main_checks' configuration.""" + sharedFillSubparser( subparsers=subparsers, + subparserName=MAIN_CHECKS, + helpMessage="Perform a curated set of main mesh-doctor checks.", + orderedCheckNames=ORDERED_CHECK_NAMES, + checkFeaturesConfig=CHECK_FEATURES_CONFIG ) + + +def convert( parsedArgs: argparse.Namespace ) -> AllChecksOptions: + """Converts arguments by calling the shared logic with the 'main_checks' configuration.""" + return sharedConvert( parsedArgs=parsedArgs, + orderedCheckNames=ORDERED_CHECK_NAMES, + checkFeaturesConfig=CHECK_FEATURES_CONFIG ) + + +# The displayResults function is imported directly as it needs no special configuration. 
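The new `mainChecksParsing` module drives the shared `_sharedChecksParsingLogic` helpers from a declarative registry of `CheckFeature` entries. A minimal standalone sketch of that pattern (the `CheckFeature` fields and the `collectDefaults` helper here are simplified stand-ins, not the project's actual signatures):

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass(frozen=True)
class CheckFeature:
    """Simplified stand-in describing one check on the parsing side."""
    name: str
    defaultParams: dict[str, Any]
    display: Callable[..., None]


# The ordered list decides which checks run, and in which order.
ORDERED_CHECK_NAMES = ["collocatedNodes", "elementVolumes"]

# The registry centralizes per-check defaults so the shared logic stays generic.
CHECK_FEATURES_CONFIG = {
    "collocatedNodes": CheckFeature("collocatedNodes", {"tolerance": 0.0}, print),
    "elementVolumes": CheckFeature("elementVolumes", {"minVolume": 0.0}, print),
}


def collectDefaults(orderedNames: list[str], config: dict[str, CheckFeature]) -> dict[str, dict]:
    """Shared-logic-style helper: gather the default parameters of each requested check."""
    return {name: dict(config[name].defaultParams) for name in orderedNames}
```

With this shape, adding a check to a curated set is a registry entry plus a name in the ordered list; the shared fill/convert/display code never changes.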
diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/main_checks_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/main_checks_parsing.py deleted file mode 100644 index 8c396930..00000000 --- a/geos-mesh/src/geos/mesh/doctor/parsing/main_checks_parsing.py +++ /dev/null @@ -1,69 +0,0 @@ -import argparse -from copy import deepcopy -from geos.mesh.doctor.actions.all_checks import Options as AllChecksOptions -from geos.mesh.doctor.parsing._shared_checks_parsing_logic import ( CheckFeature, convert as shared_convert, - fill_subparser as shared_fill_subparser, - display_results ) - -# Import constants for check names -from geos.mesh.doctor.parsing import ( - MAIN_CHECKS, - COLLOCATES_NODES, - ELEMENT_VOLUMES, - SELF_INTERSECTING_ELEMENTS, -) - -# Import module-specific parsing components -from geos.mesh.doctor.parsing import collocated_nodes_parsing as cn_parser -from geos.mesh.doctor.parsing import element_volumes_parsing as ev_parser -from geos.mesh.doctor.parsing import self_intersecting_elements_parsing as sie_parser - -# --- Configuration Specific to "Main Checks" --- - -# Ordered list of check names for this configuration -ORDERED_CHECK_NAMES = [ - COLLOCATES_NODES, - ELEMENT_VOLUMES, - SELF_INTERSECTING_ELEMENTS, -] - -# Centralized configuration for the checks managed by this module -CHECK_FEATURES_CONFIG = { - COLLOCATES_NODES: - CheckFeature( name=COLLOCATES_NODES, - options_cls=cn_parser.Options, - result_cls=cn_parser.Result, - default_params=deepcopy( cn_parser.__COLLOCATED_NODES_DEFAULT ), - display=cn_parser.display_results ), - ELEMENT_VOLUMES: - CheckFeature( name=ELEMENT_VOLUMES, - options_cls=ev_parser.Options, - result_cls=ev_parser.Result, - default_params=deepcopy( ev_parser.__ELEMENT_VOLUMES_DEFAULT ), - display=ev_parser.display_results ), - SELF_INTERSECTING_ELEMENTS: - CheckFeature( name=SELF_INTERSECTING_ELEMENTS, - options_cls=sie_parser.Options, - result_cls=sie_parser.Result, - default_params=deepcopy( 
sie_parser.__SELF_INTERSECTING_ELEMENTS_DEFAULT ), - display=sie_parser.display_results ), -} - - -def fill_subparser( subparsers: argparse._SubParsersAction ) -> None: - """Fills the subparser by calling the shared logic with the specific 'main_checks' configuration.""" - shared_fill_subparser( subparsers=subparsers, - subparser_name=MAIN_CHECKS, - help_message="Perform a curated set of main mesh-doctor checks.", - ordered_check_names=ORDERED_CHECK_NAMES, - check_features_config=CHECK_FEATURES_CONFIG ) - - -def convert( parsed_args: argparse.Namespace ) -> AllChecksOptions: - """Converts arguments by calling the shared logic with the 'main_checks' configuration.""" - return shared_convert( parsed_args=parsed_args, - ordered_check_names=ORDERED_CHECK_NAMES, - check_features_config=CHECK_FEATURES_CONFIG ) - - -# The display_results function is imported directly as it needs no special configuration. diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/non_conformal_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/nonConformalParsing.py similarity index 53% rename from geos-mesh/src/geos/mesh/doctor/parsing/non_conformal_parsing.py rename to geos-mesh/src/geos/mesh/doctor/parsing/nonConformalParsing.py index 801e04f4..b57ca504 100644 --- a/geos-mesh/src/geos/mesh/doctor/parsing/non_conformal_parsing.py +++ b/geos-mesh/src/geos/mesh/doctor/parsing/nonConformalParsing.py @@ -1,11 +1,11 @@ -from geos.mesh.doctor.actions.non_conformal import Options, Result +from geos.mesh.doctor.actions.nonConformal import Options, Result from geos.mesh.doctor.parsing import NON_CONFORMAL -from geos.mesh.doctor.parsing._shared_checks_parsing_logic import get_options_used_message -from geos.mesh.doctor.parsing.cli_parsing import setup_logger +from geos.mesh.doctor.parsing._sharedChecksParsingLogic import getOptionsUsedMessage +from geos.mesh.doctor.parsing.cliParsing import setupLogger -__ANGLE_TOLERANCE = "angle_tolerance" -__POINT_TOLERANCE = "point_tolerance" -__FACE_TOLERANCE = 
"face_tolerance" +__ANGLE_TOLERANCE = "angleTolerance" +__POINT_TOLERANCE = "pointTolerance" +__FACE_TOLERANCE = "faceTolerance" __ANGLE_TOLERANCE_DEFAULT = 10. __POINT_TOLERANCE_DEFAULT = 0. @@ -18,13 +18,13 @@ } -def convert( parsed_options ) -> Options: - return Options( angle_tolerance=parsed_options[ __ANGLE_TOLERANCE ], - point_tolerance=parsed_options[ __POINT_TOLERANCE ], - face_tolerance=parsed_options[ __FACE_TOLERANCE ] ) +def convert( parsedOptions ) -> Options: + return Options( angleTolerance=parsedOptions[ __ANGLE_TOLERANCE ], + pointTolerance=parsedOptions[ __POINT_TOLERANCE ], + faceTolerance=parsedOptions[ __FACE_TOLERANCE ] ) -def fill_subparser( subparsers ) -> None: +def fillSubparser( subparsers ) -> None: p = subparsers.add_parser( NON_CONFORMAL, help="Detects non conformal elements. [EXPERIMENTAL]" ) p.add_argument( '--' + __ANGLE_TOLERANCE, type=float, @@ -45,11 +45,11 @@ def fill_subparser( subparsers ) -> None: help=f"[float]: tolerance for two faces to be considered \"touching\". Defaults to {__FACE_TOLERANCE_DEFAULT}" ) -def display_results( options: Options, result: Result ): - setup_logger.results( get_options_used_message( options ) ) - non_conformal_cells: list[ int ] = [] - for i, j in result.non_conformal_cells: - non_conformal_cells += i, j - non_conformal_cells: frozenset[ int ] = frozenset( non_conformal_cells ) - setup_logger.results( f"You have {len( non_conformal_cells )} non conformal cells." ) - setup_logger.results( f"{', '.join( map( str, sorted( non_conformal_cells ) ) )}" ) +def displayResults( options: Options, result: Result ): + setupLogger.results( getOptionsUsedMessage( options ) ) + nonConformalCells: list[ int ] = [] + for i, j in result.nonConformalCells: + nonConformalCells += i, j + nonConformalCells: frozenset[ int ] = frozenset( nonConformalCells ) + setupLogger.results( f"You have {len( nonConformalCells )} non conformal cells." 
) + setupLogger.results( f"{', '.join( map( str, sorted( nonConformalCells ) ) )}" ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/selfIntersectingElementsParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/selfIntersectingElementsParsing.py new file mode 100644 index 00000000..a61e1fbd --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/parsing/selfIntersectingElementsParsing.py @@ -0,0 +1,43 @@ +import numpy +from geos.mesh.doctor.actions.selfIntersectingElements import Options, Result +from geos.mesh.doctor.parsing import SELF_INTERSECTING_ELEMENTS +from geos.mesh.doctor.parsing._sharedChecksParsingLogic import getOptionsUsedMessage +from geos.mesh.doctor.parsing.cliParsing import setupLogger + +__MIN_DISTANCE = "minDistance" +__MIN_DISTANCE_DEFAULT = numpy.finfo( float ).eps + +__SELF_INTERSECTING_ELEMENTS_DEFAULT = { __MIN_DISTANCE: __MIN_DISTANCE_DEFAULT } + + +def convert( parsedOptions ) -> Options: + minDistance = parsedOptions[ __MIN_DISTANCE ] + if minDistance == 0: + setupLogger.warning( + "Having minimum distance set to 0 can induce lots of false positive results (adjacent faces may be considered intersecting)." + ) + elif minDistance < 0: + raise ValueError( + f"Negative minimum distance ({minDistance}) in the {SELF_INTERSECTING_ELEMENTS} check is not allowed." ) + return Options( minDistance=minDistance ) + + +def fillSubparser( subparsers ) -> None: + p = subparsers.add_parser( SELF_INTERSECTING_ELEMENTS, + help="Checks if the faces of the elements are self intersecting." ) + p.add_argument( + '--' + __MIN_DISTANCE, + type=float, + required=False, + metavar=__MIN_DISTANCE_DEFAULT, + default=__MIN_DISTANCE_DEFAULT, + help= + f"[float]: The minimum distance in the computation. Defaults to your machine precision {__MIN_DISTANCE_DEFAULT}." 
+ ) + + +def displayResults( options: Options, result: Result ): + setupLogger.results( getOptionsUsedMessage( options ) ) + setupLogger.results( f"You have {len(result.intersectingFacesElements)} elements with self intersecting faces." ) + if result.intersectingFacesElements: + setupLogger.results( "The elements indices are:\n" + ", ".join( map( str, result.intersectingFacesElements ) ) ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/self_intersecting_elements_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/self_intersecting_elements_parsing.py deleted file mode 100644 index 430a2532..00000000 --- a/geos-mesh/src/geos/mesh/doctor/parsing/self_intersecting_elements_parsing.py +++ /dev/null @@ -1,44 +0,0 @@ -import numpy -from geos.mesh.doctor.actions.self_intersecting_elements import Options, Result -from geos.mesh.doctor.parsing import SELF_INTERSECTING_ELEMENTS -from geos.mesh.doctor.parsing._shared_checks_parsing_logic import get_options_used_message -from geos.mesh.doctor.parsing.cli_parsing import setup_logger - -__MIN_DISTANCE = "min_distance" -__MIN_DISTANCE_DEFAULT = numpy.finfo( float ).eps - -__SELF_INTERSECTING_ELEMENTS_DEFAULT = { __MIN_DISTANCE: __MIN_DISTANCE_DEFAULT } - - -def convert( parsed_options ) -> Options: - min_distance = parsed_options[ __MIN_DISTANCE ] - if min_distance == 0: - setup_logger.warning( - "Having minimum distance set to 0 can induce lots of false positive results (adjacent faces may be considered intersecting)." - ) - elif min_distance < 0: - raise ValueError( - f"Negative minimum distance ({min_distance}) in the {SELF_INTERSECTING_ELEMENTS} check is not allowed." ) - return Options( min_distance=min_distance ) - - -def fill_subparser( subparsers ) -> None: - p = subparsers.add_parser( SELF_INTERSECTING_ELEMENTS, - help="Checks if the faces of the elements are self intersecting." 
) - p.add_argument( - '--' + __MIN_DISTANCE, - type=float, - required=False, - metavar=__MIN_DISTANCE_DEFAULT, - default=__MIN_DISTANCE_DEFAULT, - help= - f"[float]: The minimum distance in the computation. Defaults to your machine precision {__MIN_DISTANCE_DEFAULT}." - ) - - -def display_results( options: Options, result: Result ): - setup_logger.results( get_options_used_message( options ) ) - setup_logger.results( f"You have {len(result.intersecting_faces_elements)} elements with self intersecting faces." ) - if result.intersecting_faces_elements: - setup_logger.results( "The elements indices are:\n" + - ", ".join( map( str, result.intersecting_faces_elements ) ) ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/supportedElementsParsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/supportedElementsParsing.py new file mode 100644 index 00000000..4ed3b1ef --- /dev/null +++ b/geos-mesh/src/geos/mesh/doctor/parsing/supportedElementsParsing.py @@ -0,0 +1,52 @@ +import multiprocessing +from geos.mesh.doctor.actions.supportedElements import Options, Result +from geos.mesh.doctor.parsing import SUPPORTED_ELEMENTS +from geos.mesh.doctor.parsing._sharedChecksParsingLogic import getOptionsUsedMessage +from geos.mesh.doctor.parsing.cliParsing import setupLogger + +__CHUNK_SIZE = "chunkSize" +__NUM_PROC = "nproc" + +__CHUNK_SIZE_DEFAULT = 1 +__NUM_PROC_DEFAULT = multiprocessing.cpu_count() + +__SUPPORTED_ELEMENTS_DEFAULT = { __CHUNK_SIZE: __CHUNK_SIZE_DEFAULT, __NUM_PROC: __NUM_PROC_DEFAULT } + + +def convert( parsedOptions ) -> Options: + return Options( chunkSize=parsedOptions[ __CHUNK_SIZE ], nproc=parsedOptions[ __NUM_PROC ] ) + + +def fillSubparser( subparsers ) -> None: + p = subparsers.add_parser( SUPPORTED_ELEMENTS, + help="Check that all the elements of the mesh are supported by GEOSX." 
) + p.add_argument( '--' + __CHUNK_SIZE, + type=int, + required=False, + metavar=__CHUNK_SIZE_DEFAULT, + default=__CHUNK_SIZE_DEFAULT, + help=f"[int]: Chunk size for parallel processing. Defaults to {__CHUNK_SIZE_DEFAULT}." ) + p.add_argument( + '--' + __NUM_PROC, + type=int, + required=False, + metavar=__NUM_PROC_DEFAULT, + default=__NUM_PROC_DEFAULT, + help=f"[int]: Number of threads used for parallel processing. Defaults to your CPU count {__NUM_PROC_DEFAULT}." + ) + + +def displayResults( options: Options, result: Result ): + setupLogger.results( getOptionsUsedMessage( options ) ) + if result.unsupportedPolyhedronElements: + setupLogger.results( f"There is/are {len(result.unsupportedPolyhedronElements)} polyhedra that may not be " + f"converted to supported elements." ) + setupLogger.results( + f"The list of the unsupported polyhedra is\n{tuple(sorted(result.unsupportedPolyhedronElements))}." ) + else: + setupLogger.results( "All the polyhedra (if any) can be converted to supported elements." ) + if result.unsupportedStdElementsTypes: + setupLogger.results( f"There are unsupported vtk standard element types. The list of those vtk types is " + f"{tuple(sorted(result.unsupportedStdElementsTypes))}." ) + else: + setupLogger.results( "All the standard vtk element types (if any) are supported."
) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/supported_elements_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/supported_elements_parsing.py deleted file mode 100644 index f9f8dd84..00000000 --- a/geos-mesh/src/geos/mesh/doctor/parsing/supported_elements_parsing.py +++ /dev/null @@ -1,54 +0,0 @@ -import multiprocessing -from geos.mesh.doctor.actions.supported_elements import Options, Result -from geos.mesh.doctor.parsing import SUPPORTED_ELEMENTS -from geos.mesh.doctor.parsing._shared_checks_parsing_logic import get_options_used_message -from geos.mesh.doctor.parsing.cli_parsing import setup_logger - -__CHUNK_SIZE = "chunk_size" -__NUM_PROC = "nproc" - -__CHUNK_SIZE_DEFAULT = 1 -__NUM_PROC_DEFAULT = multiprocessing.cpu_count() - -__SUPPORTED_ELEMENTS_DEFAULT = { __CHUNK_SIZE: __CHUNK_SIZE_DEFAULT, __NUM_PROC: __NUM_PROC_DEFAULT } - - -def convert( parsed_options ) -> Options: - return Options( chunk_size=parsed_options[ __CHUNK_SIZE ], nproc=parsed_options[ __NUM_PROC ] ) - - -def fill_subparser( subparsers ) -> None: - p = subparsers.add_parser( SUPPORTED_ELEMENTS, - help="Check that all the elements of the mesh are supported by GEOSX." ) - p.add_argument( '--' + __CHUNK_SIZE, - type=int, - required=False, - metavar=__CHUNK_SIZE_DEFAULT, - default=__CHUNK_SIZE_DEFAULT, - help=f"[int]: Defaults chunk size for parallel processing to {__CHUNK_SIZE_DEFAULT}" ) - p.add_argument( - '--' + __NUM_PROC, - type=int, - required=False, - metavar=__NUM_PROC_DEFAULT, - default=__NUM_PROC_DEFAULT, - help=f"[int]: Number of threads used for parallel processing. Defaults to your CPU count {__NUM_PROC_DEFAULT}." - ) - - -def display_results( options: Options, result: Result ): - setup_logger.results( get_options_used_message( options ) ) - if result.unsupported_polyhedron_elements: - setup_logger.results( - f"There is/are {len(result.unsupported_polyhedron_elements)} polyhedra that may not be converted to supported elements." 
- ) - setup_logger.results( - f"The list of the unsupported polyhedra is\n{tuple(sorted(result.unsupported_polyhedron_elements))}." ) - else: - setup_logger.results( "All the polyhedra (if any) can be converted to supported elements." ) - if result.unsupported_std_elements_types: - setup_logger.results( - f"There are unsupported vtk standard element types. The list of those vtk types is {tuple(sorted(result.unsupported_std_elements_types))}." - ) - else: - setup_logger.results( "All the standard vtk element types (if any) are supported." ) diff --git a/geos-mesh/src/geos/mesh/doctor/parsing/vtk_output_parsing.py b/geos-mesh/src/geos/mesh/doctor/parsing/vtkOutputParsing.py similarity index 53% rename from geos-mesh/src/geos/mesh/doctor/parsing/vtk_output_parsing.py rename to geos-mesh/src/geos/mesh/doctor/parsing/vtkOutputParsing.py index 06fedd0c..1431ce4e 100644 --- a/geos-mesh/src/geos/mesh/doctor/parsing/vtk_output_parsing.py +++ b/geos-mesh/src/geos/mesh/doctor/parsing/vtkOutputParsing.py @@ -1,6 +1,6 @@ import os.path import textwrap -from geos.mesh.doctor.parsing.cli_parsing import setup_logger +from geos.mesh.doctor.parsing.cliParsing import setupLogger from geos.mesh.io.vtkIO import VtkOutput __OUTPUT_FILE = "output" @@ -9,24 +9,24 @@ __OUTPUT_BINARY_MODE_DEFAULT = __OUTPUT_BINARY_MODE_VALUES[ 0 ] -def get_vtk_output_help(): +def getVtkOutputHelp(): msg = \ f"""{__OUTPUT_FILE} [string]: The vtk output file destination. {__OUTPUT_BINARY_MODE} [string]: For ".vtu" output format, the data mode can be {" or ".join(__OUTPUT_BINARY_MODE_VALUES)}. 
Defaults to {__OUTPUT_BINARY_MODE_DEFAULT}.""" return textwrap.dedent( msg ) -def __build_arg( prefix, main ): +def __buildArg( prefix, main ): return "-".join( filter( None, ( prefix, main ) ) ) -def fill_vtk_output_subparser( parser, prefix="" ) -> None: - parser.add_argument( '--' + __build_arg( prefix, __OUTPUT_FILE ), +def fillVtkOutputSubparser( parser, prefix="" ) -> None: + parser.add_argument( '--' + __buildArg( prefix, __OUTPUT_FILE ), type=str, required=True, help=f"[string]: The vtk output file destination." ) parser.add_argument( - '--' + __build_arg( prefix, __OUTPUT_BINARY_MODE ), + '--' + __buildArg( prefix, __OUTPUT_BINARY_MODE ), type=str, metavar=", ".join( __OUTPUT_BINARY_MODE_VALUES ), default=__OUTPUT_BINARY_MODE_DEFAULT, @@ -35,11 +35,11 @@ def fill_vtk_output_subparser( parser, prefix="" ) -> None: ) -def convert( parsed_options, prefix="" ) -> VtkOutput: - output_key = __build_arg( prefix, __OUTPUT_FILE ).replace( "-", "_" ) - binary_mode_key = __build_arg( prefix, __OUTPUT_BINARY_MODE ).replace( "-", "_" ) - output = parsed_options[ output_key ] - if parsed_options[ binary_mode_key ] and os.path.splitext( output )[ -1 ] == ".vtk": - setup_logger.info( "VTK data mode will be ignored for legacy file format \"vtk\"." ) - is_data_mode_binary: bool = parsed_options[ binary_mode_key ] == __OUTPUT_BINARY_MODE_DEFAULT - return VtkOutput( output=output, is_data_mode_binary=is_data_mode_binary ) +def convert( parsedOptions, prefix="" ) -> VtkOutput: + outputKey = __buildArg( prefix, __OUTPUT_FILE ).replace( "-", "_" ) + binaryModeKey = __buildArg( prefix, __OUTPUT_BINARY_MODE ).replace( "-", "_" ) + output = parsedOptions[ outputKey ] + if parsedOptions[ binaryModeKey ] and os.path.splitext( output )[ -1 ] == ".vtk": + setupLogger.info( "VTK data mode will be ignored for legacy file format \"vtk\"." 
) + isDataModeBinary: bool = parsedOptions[ binaryModeKey ] == __OUTPUT_BINARY_MODE_DEFAULT + return VtkOutput( output=output, isDataModeBinary=isDataModeBinary ) diff --git a/geos-mesh/src/geos/mesh/doctor/register.py b/geos-mesh/src/geos/mesh/doctor/register.py index 41c52bd6..e265c0dd 100644 --- a/geos-mesh/src/geos/mesh/doctor/register.py +++ b/geos-mesh/src/geos/mesh/doctor/register.py @@ -1,67 +1,70 @@ import argparse import importlib -from typing import Dict, Callable, Any, Tuple +from typing import Callable, Any import geos.mesh.doctor.parsing as parsing -from geos.mesh.doctor.parsing import ActionHelper, cli_parsing -from geos.mesh.doctor.parsing.cli_parsing import setup_logger +from geos.mesh.doctor.parsing import ActionHelper, cliParsing +from geos.mesh.doctor.parsing.cliParsing import setupLogger -__HELPERS: Dict[ str, Callable[ [ None ], ActionHelper ] ] = dict() -__ACTIONS: Dict[ str, Callable[ [ None ], Any ] ] = dict() +__HELPERS: dict[ str, Callable[ [ None ], ActionHelper ] ] = dict() +__ACTIONS: dict[ str, Callable[ [ None ], Any ] ] = dict() -def __load_module_action( module_name: str, action_fct="action" ): - module = importlib.import_module( "geos.mesh.doctor.actions." + module_name ) - return getattr( module, action_fct ) +def __loadModuleAction( moduleName: str, actionFct="action" ): + module = importlib.import_module( "geos.mesh.doctor.actions." + moduleName ) + return getattr( module, actionFct ) -def __load_module_action_helper( module_name: str, parsing_fct_suffix="_parsing" ): - module = importlib.import_module( "geos.mesh.doctor.parsing." + module_name + parsing_fct_suffix ) - return ActionHelper( fill_subparser=module.fill_subparser, +def __loadModuleActionHelper( moduleName: str, parsingFctSuffix="Parsing" ): + module = importlib.import_module( "geos.mesh.doctor.parsing." 
+ moduleName + parsingFctSuffix ) + return ActionHelper( fillSubparser=module.fillSubparser, convert=module.convert, - display_results=module.display_results ) + displayResults=module.displayResults ) -def __load_actions() -> Dict[ str, Callable[ [ str, Any ], Any ] ]: - """ - Loads all the actions. +def __loadActions() -> dict[ str, Callable[ [ str, Any ], Any ] ]: + """Loads all the actions. This function acts like a protection layer if a module fails to load. An action that fails to load won't stop the process. - :return: The actions. + + Returns: + dict[ str, Callable[ [ str, Any ], Any ] ]: The actions. """ - loaded_actions: Dict[ str, Callable[ [ str, Any ], Any ] ] = dict() - for action_name, action_provider in __ACTIONS.items(): + loadedActions: dict[ str, Callable[ [ str, Any ], Any ] ] = dict() + for actionName, actionProvider in __ACTIONS.items(): try: - loaded_actions[ action_name ] = action_provider() - setup_logger.debug( f"Action \"{action_name}\" is loaded." ) + loadedActions[ actionName ] = actionProvider() + setupLogger.debug( f"Action \"{actionName}\" is loaded." ) except Exception as e: - setup_logger.warning( f"Could not load module \"{action_name}\": {e}" ) + setupLogger.warning( f"Could not load module \"{actionName}\": {e}" ) - return loaded_actions + return loadedActions -def register_parsing_actions( -) -> Tuple[ argparse.ArgumentParser, Dict[ str, Callable[ [ str, Any ], Any ] ], Dict[ str, ActionHelper ] ]: - """ - Register all the parsing actions. Eventually initiate the registration of all the actions too. +def registerParsingActions( +) -> tuple[ argparse.ArgumentParser, dict[ str, Callable[ [ str, Any ], Any ] ], dict[ str, ActionHelper ] ]: + """Register all the parsing actions. This also triggers the registration of the actions themselves.
+ + Returns: + tuple[ argparse.ArgumentParser, dict[ str, Callable[ [ str, Any ], Any ] ], dict[ str, ActionHelper ] ]: + The parser, actions and helpers. """ - parser = cli_parsing.init_parser() + parser = cliParsing.initParser() subparsers = parser.add_subparsers( help="Modules", dest="subparsers" ) - def closure_trick( cn: str ): - __HELPERS[ action_name ] = lambda: __load_module_action_helper( cn ) - __ACTIONS[ action_name ] = lambda: __load_module_action( cn ) + def closureTrick( cn: str ): + __HELPERS[ actionName ] = lambda: __loadModuleActionHelper( cn ) + __ACTIONS[ actionName ] = lambda: __loadModuleAction( cn ) # Register the modules to load here. - for action_name in ( parsing.ALL_CHECKS, parsing.COLLOCATES_NODES, parsing.ELEMENT_VOLUMES, - parsing.FIX_ELEMENTS_ORDERINGS, parsing.GENERATE_CUBE, parsing.GENERATE_FRACTURES, - parsing.GENERATE_GLOBAL_IDS, parsing.MAIN_CHECKS, parsing.NON_CONFORMAL, - parsing.SELF_INTERSECTING_ELEMENTS, parsing.SUPPORTED_ELEMENTS ): - closure_trick( action_name ) - loaded_actions: Dict[ str, Callable[ [ str, Any ], Any ] ] = __load_actions() - loaded_actions_helpers: Dict[ str, ActionHelper ] = dict() - for action_name in loaded_actions.keys(): - h = __HELPERS[ action_name ]() - h.fill_subparser( subparsers ) - loaded_actions_helpers[ action_name ] = h - setup_logger.debug( f"Parsing for action \"{action_name}\" is loaded." 
) - return parser, loaded_actions, loaded_actions_helpers + for actionName in ( parsing.ALL_CHECKS, parsing.COLLOCATES_NODES, parsing.ELEMENT_VOLUMES, + parsing.FIX_ELEMENTS_ORDERINGS, parsing.GENERATE_CUBE, parsing.GENERATE_FRACTURES, + parsing.GENERATE_GLOBAL_IDS, parsing.MAIN_CHECKS, parsing.NON_CONFORMAL, + parsing.SELF_INTERSECTING_ELEMENTS, parsing.SUPPORTED_ELEMENTS ): + closureTrick( actionName ) + loadedActions: dict[ str, Callable[ [ str, Any ], Any ] ] = __loadActions() + loadedActionsHelpers: dict[ str, ActionHelper ] = dict() + for actionName in loadedActions.keys(): + h = __HELPERS[ actionName ]() + h.fillSubparser( subparsers ) + loadedActionsHelpers[ actionName ] = h + setupLogger.debug( f"Parsing for action \"{actionName}\" is loaded." ) + return parser, loadedActions, loadedActionsHelpers diff --git a/geos-mesh/src/geos/mesh/io/vtkIO.py b/geos-mesh/src/geos/mesh/io/vtkIO.py index 1b93648a..55efbc9f 100644 --- a/geos-mesh/src/geos/mesh/io/vtkIO.py +++ b/geos-mesh/src/geos/mesh/io/vtkIO.py @@ -1,192 +1,268 @@ # SPDX-License-Identifier: Apache-2.0 # SPDX-FileCopyrightText: Copyright 2023-2024 TotalEnergies. 
# SPDX-FileContributor: Alexandre Benedicto - -import os.path -import logging from dataclasses import dataclass -from typing import Optional -from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, vtkStructuredGrid, vtkPointSet -from vtkmodules.vtkIOLegacy import vtkUnstructuredGridWriter, vtkUnstructuredGridReader -from vtkmodules.vtkIOXML import ( vtkXMLUnstructuredGridReader, vtkXMLUnstructuredGridWriter, - vtkXMLStructuredGridReader, vtkXMLPUnstructuredGridReader, - vtkXMLPStructuredGridReader, vtkXMLStructuredGridWriter ) +from enum import Enum +from pathlib import Path +from typing import Optional, Type, TypeAlias +from vtkmodules.vtkCommonDataModel import vtkPointSet, vtkUnstructuredGrid +from vtkmodules.vtkIOCore import vtkWriter +from vtkmodules.vtkIOLegacy import vtkDataReader, vtkUnstructuredGridWriter, vtkUnstructuredGridReader +from vtkmodules.vtkIOXML import ( vtkXMLGenericDataObjectReader, vtkXMLUnstructuredGridWriter, vtkXMLWriter, + vtkXMLStructuredGridWriter ) +from geos.utils.Logger import getLogger __doc__ = """ -Input and Ouput methods for VTK meshes: - - VTK, VTU, VTS, PVTU, PVTS readers - - VTK, VTS, VTU writers +Input and Output methods for various VTK mesh formats. +Supports reading: .vtk (legacy), .vtu, .vts, .vti, .vtp, .vtr, .pvtu, .pvts, .pvti, .pvtp, .pvtr +Supports writing: .vtk, .vtu, .vts +Uses vtkXMLGenericDataObjectReader for automatic XML format detection. 
""" +ioLogger = getLogger( "IO for geos-mesh" ) +ioLogger.propagate = False + + +class VtkFormat( Enum ): + """Enumeration for supported VTK file formats and their extensions.""" + VTK = ".vtk" + VTS = ".vts" + VTU = ".vtu" + VTI = ".vti" + VTP = ".vtp" + VTR = ".vtr" + PVTU = ".pvtu" + PVTS = ".pvts" + PVTI = ".pvti" + PVTP = ".pvtp" + PVTR = ".pvtr" + + +# Use TypeAlias for cleaner and more readable type hints +VtkReaderClass: TypeAlias = Type[ vtkDataReader | vtkXMLGenericDataObjectReader ] +VtkWriterClass: TypeAlias = Type[ vtkWriter | vtkXMLWriter ] + +# XML-based formats that can be read by vtkXMLGenericDataObjectReader +XML_FORMATS: set[ VtkFormat ] = { + VtkFormat.VTU, VtkFormat.VTS, VtkFormat.VTI, VtkFormat.VTP, VtkFormat.VTR, VtkFormat.PVTU, VtkFormat.PVTS, + VtkFormat.PVTI, VtkFormat.PVTP, VtkFormat.PVTR +} + +# Centralized mapping of formats to their corresponding writer classes +WRITER_MAP: dict[ VtkFormat, VtkWriterClass ] = { + VtkFormat.VTK: vtkUnstructuredGridWriter, + VtkFormat.VTS: vtkXMLStructuredGridWriter, + VtkFormat.VTU: vtkXMLUnstructuredGridWriter, +} + @dataclass( frozen=True ) class VtkOutput: + """Configuration for writing a VTK file.""" output: str - is_data_mode_binary: bool - - -def __read_vtk( vtk_input_file: str ) -> Optional[ vtkUnstructuredGrid ]: - reader = vtkUnstructuredGridReader() - logging.info( f"Testing file format \"{vtk_input_file}\" using legacy format reader..." ) - reader.SetFileName( vtk_input_file ) - if reader.IsFileUnstructuredGrid(): - logging.info( f"Reader matches. Reading file \"{vtk_input_file}\" using legacy format reader." ) - reader.Update() - return reader.GetOutput() - else: - logging.info( "Reader did not match the input file format." ) - return None + isDataModeBinary: bool = True -def __read_vts( vtk_input_file: str ) -> Optional[ vtkStructuredGrid ]: - reader = vtkXMLStructuredGridReader() - logging.info( f"Testing file format \"{vtk_input_file}\" using XML format reader..." 
) - if reader.CanReadFile( vtk_input_file ): - reader.SetFileName( vtk_input_file ) - logging.info( f"Reader matches. Reading file \"{vtk_input_file}\" using XML format reader." ) - reader.Update() - return reader.GetOutput() - else: - logging.info( "Reader did not match the input file format." ) - return None +def _readData( filepath: str, readerClass: VtkReaderClass ) -> Optional[ vtkPointSet ]: + """Generic helper to read a VTK file using a specific reader class. + Args: + filepath (str): Path to the VTK file. + readerClass (VtkReaderClass): The VTK reader class to use. -def __read_vtu( vtk_input_file: str ) -> Optional[ vtkUnstructuredGrid ]: - reader = vtkXMLUnstructuredGridReader() - logging.info( f"Testing file format \"{vtk_input_file}\" using XML format reader..." ) - if reader.CanReadFile( vtk_input_file ): - reader.SetFileName( vtk_input_file ) - logging.info( f"Reader matches. Reading file \"{vtk_input_file}\" using XML format reader." ) - reader.Update() - return reader.GetOutput() - else: - logging.info( "Reader did not match the input file format." ) - return None + Returns: + Optional[ vtkPointSet ]: The read VTK point set, or None if reading failed. + """ + reader = readerClass() + ioLogger.info( f"Attempting to read '{filepath}' with {readerClass.__name__}..." ) + + reader.SetFileName( str( filepath ) ) + # Note: vtkXMLGenericDataObjectReader's CanReadFile() is unreliable, so we skip it for that reader + # and rely on Update() + error checking instead + if readerClass != vtkXMLGenericDataObjectReader: + # For other XML-based readers, CanReadFile is a reliable and fast pre-check + if hasattr( reader, 'CanReadFile' ) and not reader.CanReadFile( filepath ): + ioLogger.error( f"Reader {readerClass.__name__} reports it cannot read file '{filepath}'." 
) + return None -def __read_pvts( vtk_input_file: str ) -> Optional[ vtkStructuredGrid ]: - reader = vtkXMLPStructuredGridReader() - logging.info( f"Testing file format \"{vtk_input_file}\" using XML format reader..." ) - if reader.CanReadFile( vtk_input_file ): - reader.SetFileName( vtk_input_file ) - logging.info( f"Reader matches. Reading file \"{vtk_input_file}\" using XML format reader." ) - reader.Update() - return reader.GetOutput() - else: - logging.info( "Reader did not match the input file format." ) + reader.Update() + + # Check the reader's error code. This is the most reliable way to + # detect a failed read, as GetOutput() can return a default empty object on failure. + if hasattr( reader, 'GetErrorCode' ) and reader.GetErrorCode() != 0: + ioLogger.warning( + f"VTK reader {readerClass.__name__} reported an error code after attempting to read '{filepath}'." ) return None + output = reader.GetOutput() -def __read_pvtu( vtk_input_file: str ) -> Optional[ vtkUnstructuredGrid ]: - reader = vtkXMLPUnstructuredGridReader() - logging.info( f"Testing file format \"{vtk_input_file}\" using XML format reader..." ) - if reader.CanReadFile( vtk_input_file ): - reader.SetFileName( vtk_input_file ) - logging.info( f"Reader matches. Reading file \"{vtk_input_file}\" using XML format reader." ) - reader.Update() - return reader.GetOutput() - else: - logging.info( "Reader did not match the input file format." ) + if output is None: return None + ioLogger.info( "Read successful." ) + return output -def read_mesh( vtk_input_file: str ) -> vtkPointSet: - """Read vtk file and build either an unstructured grid or a structured grid from it. - Args: - vtk_input_file (str): The file name. Extension will be used to guess file format\ - If first guess fails, other available readers will be tried. +def _writeData( mesh: vtkPointSet, writerClass: VtkWriterClass, output: str, isBinary: bool ) -> int: + """Generic helper to write a VTK file using a specific writer class. 
- Raises: - ValueError: Invalid file path error - ValueError: No appropriate reader available for the file format + Args: + mesh (vtkPointSet): The grid data to write. + writerClass (VtkWriterClass): The VTK writer class to use. + output (str): The output file path. + isBinary (bool): Whether to write the file in binary mode (True) or ASCII (False). Returns: - vtkPointSet: Mesh read + int: 1 if success, 0 otherwise. """ - if not os.path.exists( vtk_input_file ): - err_msg: str = f"Invalid file path. Could not read \"{vtk_input_file}\"." - logging.error( err_msg ) - raise ValueError( err_msg ) - file_extension = os.path.splitext( vtk_input_file )[ -1 ] - extension_to_reader = { - ".vtk": __read_vtk, - ".vts": __read_vts, - ".vtu": __read_vtu, - ".pvtu": __read_pvtu, - ".pvts": __read_pvts - } - # Testing first the reader that should match - if file_extension in extension_to_reader: - output_mesh = extension_to_reader.pop( file_extension )( vtk_input_file ) - if output_mesh: - return output_mesh - # If it does not match, then test all the others. - for reader in extension_to_reader.values(): - output_mesh = reader( vtk_input_file ) - if output_mesh: - return output_mesh - # No reader did work. - err_msg = f"Could not find the appropriate VTK reader for file \"{vtk_input_file}\"." - logging.error( err_msg ) - raise ValueError( err_msg ) - - -def __write_vtk( mesh: vtkUnstructuredGrid, output: str ) -> int: - logging.info( f"Writing mesh into file \"{output}\" using legacy format." ) - writer = vtkUnstructuredGridWriter() + ioLogger.info( f"Writing mesh to '{output}' using {writerClass.__name__}..." ) + writer = writerClass() writer.SetFileName( output ) writer.SetInputData( mesh ) - return writer.Write() + # Set data mode only for XML writers that support it + if isinstance( writer, vtkXMLWriter ): + if isBinary: + writer.SetDataModeToBinary() + ioLogger.info( "Data mode set to Binary." ) + else: + writer.SetDataModeToAscii() + ioLogger.info( "Data mode set to ASCII." 
) -def __write_vts( mesh: vtkStructuredGrid, output: str, toBinary: bool = False ) -> int: - logging.info( f"Writing mesh into file \"{output}\" using XML format." ) - writer = vtkXMLStructuredGridWriter() - writer.SetFileName( output ) - writer.SetInputData( mesh ) - writer.SetDataModeToBinary() if toBinary else writer.SetDataModeToAscii() return writer.Write() -def __write_vtu( mesh: vtkUnstructuredGrid, output: str, toBinary: bool = False ) -> int: - logging.info( f"Writing mesh into file \"{output}\" using XML format." ) - writer = vtkXMLUnstructuredGridWriter() - writer.SetFileName( output ) - writer.SetInputData( mesh ) - writer.SetDataModeToBinary() if toBinary else writer.SetDataModeToAscii() - return writer.Write() +def readMesh( filepath: str ) -> vtkPointSet: + """ + Reads a VTK file, automatically detecting the format. + Uses vtkXMLGenericDataObjectReader for all XML-based formats (.vtu, .vts, .vti, .vtp, .vtr + and their parallel variants) and vtkUnstructuredGridReader for legacy .vtk format. -def write_mesh( mesh: vtkPointSet, vtk_output: VtkOutput, canOverwrite: bool = False ) -> int: - """Write mesh to disk. + Args: + filepath (str): The path to the VTK file. - Nothing is done if file already exists. + Raises: + FileNotFoundError: If the input file does not exist. + ValueError: If no suitable reader can be found for the file. + + Returns: + vtkPointSet: The resulting mesh data. + """ + filepathPath: Path = Path( filepath ) + if not filepathPath.exists(): + raise FileNotFoundError( f"Invalid file path: '{filepath}' does not exist." 
) + + # Determine the appropriate reader based on file extension + try: + fileFormat = VtkFormat( filepathPath.suffix ) + if fileFormat in XML_FORMATS: + # Use the generic XML reader for all XML-based formats + readerClass = vtkXMLGenericDataObjectReader + elif fileFormat == VtkFormat.VTK: + # Use legacy reader for .vtk files + readerClass = vtkUnstructuredGridReader + else: + raise ValueError( f"Unsupported file format: '{filepathPath.suffix}'." ) + except ValueError: + # Unknown extension - try both readers + ioLogger.warning( f"Unknown file extension '{filepathPath.suffix}'. Trying available readers." ) + + # Try XML reader first (more common) + outputMesh = _readData( filepath, vtkXMLGenericDataObjectReader ) + if outputMesh: + return outputMesh + + # Fall back to legacy reader + outputMesh = _readData( filepath, vtkUnstructuredGridReader ) + if outputMesh: + return outputMesh + + raise ValueError( f"Could not find a suitable reader for '{filepath}'." ) + + # Attempt to read with the selected reader + outputMesh = _readData( filepath, readerClass ) + if outputMesh: + return outputMesh + + raise ValueError( f"Failed to read file '{filepath}' with {readerClass.__name__}." ) + + +def readUnstructuredGrid( filepath: str ) -> vtkUnstructuredGrid: + """ + Reads a VTK file and ensures it is a vtkUnstructuredGrid. + + This function uses the general `readMesh` to load the data and then + validates its type. Args: - mesh (vtkPointSet): Grid to write - vtk_output (VtkOutput): File path. File extension will be used to select VTK file format - canOverwrite (bool, optional): Authorize overwriting the file. Defaults to False. + filepath (str): The path to the VTK file. Raises: - ValueError: Invalid VTK format. + FileNotFoundError: If the input file does not exist. + ValueError: If no suitable reader can be found for the file. + TypeError: If the file is read successfully but is not a vtkUnstructuredGrid. 
Returns: - int: 0 if success + vtkUnstructuredGrid: The resulting unstructured grid data. """ - if os.path.exists( vtk_output.output ) and canOverwrite: - logging.error( f"File \"{vtk_output.output}\" already exists, nothing done." ) - return 1 - file_extension = os.path.splitext( vtk_output.output )[ -1 ] - if file_extension == ".vtk": - success_code = __write_vtk( mesh, vtk_output.output ) - elif file_extension == ".vts": - success_code = __write_vts( mesh, vtk_output.output, vtk_output.is_data_mode_binary ) - elif file_extension == ".vtu": - success_code = __write_vtu( mesh, vtk_output.output, vtk_output.is_data_mode_binary ) - else: - # No writer found did work. Dying. - err_msg = f"Could not find the appropriate VTK writer for extension \"{file_extension}\"." - logging.error( err_msg ) - raise ValueError( err_msg ) - return 0 if success_code else 2 # the Write member function return 1 in case of success, 0 otherwise. + ioLogger.info( f"Reading file '{filepath}' and expecting vtkUnstructuredGrid." ) + mesh = readMesh( filepath ) + + if not isinstance( mesh, vtkUnstructuredGrid ): + errorMsg = ( f"File '{filepath}' was read successfully, but it is of type " + f"'{type(mesh).__name__}', not the expected vtkUnstructuredGrid." ) + ioLogger.error( errorMsg ) + raise TypeError( errorMsg ) + + ioLogger.info( "Validation successful. Mesh is a vtkUnstructuredGrid." ) + return mesh + + +def writeMesh( mesh: vtkPointSet, vtkOutput: VtkOutput, canOverwrite: bool = False ) -> int: + """ + Writes a vtkPointSet to a file. + + The format is determined by the file extension in `VtkOutput.output`. + + Args: + mesh (vtkPointSet): The grid data to write. + vtkOutput (VtkOutput): Configuration for the output file. + canOverwrite (bool, optional): If False, raises an error if the file + already exists. Defaults to False. + + Raises: + FileExistsError: If the output file exists and `canOverwrite` is False. + ValueError: If the file extension is not a supported write format. 
+ RuntimeError: If the VTK writer fails to write the file. + + Returns: + int: Returns 1 on success, consistent with the VTK writer's return code. + """ + outputPath = Path( vtkOutput.output ) + if outputPath.exists() and not canOverwrite: + raise FileExistsError( f"File '{outputPath}' already exists. Set canOverwrite=True to replace it." ) + + try: + # Catch the ValueError from an invalid enum to provide a consistent error message. + try: + fileFormat = VtkFormat( outputPath.suffix ) + except ValueError: + # Re-raise with the message expected by the test. + raise ValueError( f"Writing to extension '{outputPath.suffix}' is not supported." ) + + writerClass = WRITER_MAP.get( fileFormat ) + if not writerClass: + raise ValueError( f"Writing to extension '{outputPath.suffix}' is not supported." ) + + successCode = _writeData( mesh, writerClass, str( outputPath ), vtkOutput.isDataModeBinary ) + if not successCode: + raise RuntimeError( f"VTK writer failed to write file '{outputPath}'." ) + + ioLogger.info( f"Successfully wrote mesh to '{outputPath}'." ) + return successCode + + except ( ValueError, RuntimeError ) as e: + ioLogger.error( e ) + raise diff --git a/geos-mesh/src/geos/mesh/utils/arrayHelpers.py b/geos-mesh/src/geos/mesh/utils/arrayHelpers.py index 29443c9d..ac6159b9 100644 --- a/geos-mesh/src/geos/mesh/utils/arrayHelpers.py +++ b/geos-mesh/src/geos/mesh/utils/arrayHelpers.py @@ -269,12 +269,12 @@ def UpdateDictElementMappingFromDataSetToDataSet( idElementFrom += 1 -def has_array( mesh: vtkUnstructuredGrid, array_names: list[ str ] ) -> bool: +def hasArray( mesh: vtkUnstructuredGrid, arrayNames: list[ str ] ) -> bool: """Checks if input mesh contains at least one of input data arrays. Args: mesh (vtkUnstructuredGrid): An unstructured mesh. - array_names (list[str]): List of array names. + arrayNames (list[str]): List of array names. Returns: bool: True if at least one array is found, else False. 
@@ -284,7 +284,7 @@ def has_array( mesh: vtkUnstructuredGrid, array_names: list[ str ] ) -> bool: for data in ( mesh.GetCellData(), mesh.GetFieldData(), mesh.GetPointData() ): if data is None: continue # type: ignore[unreachable] - for arrayName in array_names: + for arrayName in arrayNames: if data.HasArray( arrayName ): logging.error( f"The mesh contains the array named '{arrayName}'." ) return True diff --git a/geos-mesh/src/geos/mesh/utils/genericHelpers.py b/geos-mesh/src/geos/mesh/utils/genericHelpers.py index 481b391f..fb168d44 100644 --- a/geos-mesh/src/geos/mesh/utils/genericHelpers.py +++ b/geos-mesh/src/geos/mesh/utils/genericHelpers.py @@ -20,7 +20,7 @@ """ -def to_vtk_id_list( data: List[ int ] ) -> vtkIdList: +def toVtkIdList( data: List[ int ] ) -> vtkIdList: """Utility function transforming a list of ids into a vtkIdList. Args: @@ -36,7 +36,7 @@ def to_vtk_id_list( data: List[ int ] ) -> vtkIdList: return result -def vtk_iter( vtkContainer: Union[ vtkIdList, vtkCellTypes ] ) -> Iterator[ Any ]: +def vtkIter( vtkContainer: Union[ vtkIdList, vtkCellTypes ] ) -> Iterator[ Any ]: """Utility function transforming a vtk "container" into an iterable. 
Args: diff --git a/geos-mesh/tests/data/fracture_res5_id.vtp b/geos-mesh/tests/data/fracture_res5_id.vtp index 0c6b93c1..e3b1f3d2 100644 --- a/geos-mesh/tests/data/fracture_res5_id.vtp +++ b/geos-mesh/tests/data/fracture_res5_id.vtp @@ -3,7 +3,7 @@ - + 3905.8931117 diff --git a/geos-mesh/tests/data/fracture_res5_id.vtu b/geos-mesh/tests/data/fracture_res5_id.vtu index 0fba5b61..db728e08 100644 --- a/geos-mesh/tests/data/fracture_res5_id.vtu +++ b/geos-mesh/tests/data/fracture_res5_id.vtu @@ -3,7 +3,7 @@ - + AQAAAACAAABADQAABgMAAA==eJwtyMVSFgAAhVFduHDh+Ah2d2GD2I0NdnchJnZiYmCLgYmtKHZid3djgo0dG2f8z918c27B7Jn+LyVzoIX4BBfmk1yET3FRPs3F+AwX57N8Tkv4S+p5fym+wKX5IpfhS1yWL3M5vsJBfJWvaXl/Bb3ur8g3uBLf5Mp8i6vwba7KdziY7/I9DfFX0/v+UH7A1fkh1+BHXJMfcy1+wrX5KT/TOv66muqvx8+5Pr/gBvySG/IrbsSvuTG/4TQN8zfRdH9TfsvN+B035/fcgj9wS/7IrfgTf9Zwf4Rm+FvzF27DX7ktf+N2/J1nZgm0vX8Wd+BY7siddLa/M8/hudrFP4+7chx34/ncnRdwD17IPbmXLvL35sW8RPv4l3JfXsb9OJ7783IewCt4IEfqSv8gXsUJGuVfzYN5DQ/htTyU1/EwXs/DeYRu8EdzIm/Ukf5NPIo382jewmN4K4/lbTyOx+t2/wTewTt1oj+JJ/Eunsy7eQoncwzv4ak8Tff6p/M+3q8z/Ad4Jh/kWXyIY/kwz+YjPIfn6lH/PD7Gcdwya6DzuRUv4HCO0IX+1ryI2/BiXqJt/Uu5HS/j9hzPHXg5d+ROusLfmVdyF17FCdrVv5q78Rruzmu5B6/jntxL1/t78wbuw4m8Ufv6N3E/3sz9eQsP4K08kCN1m38Qb+co3sE7dbA/iYfwLh7Ku3kYJ/NwHqF7/NG8l0fyPt6vo/wHeDQf5DF8iMfyYR7H4/WIfwIf5Yl8jI/rJH8KT+YTPIVPcgyf4qk8TU/7p/MZPqs5sgWaU8/5c/F5vqC5/Xn0ov+S5vVf5nx8hfPzVS7ABfWavxBf5xta2F9Eb/pvaVH/bS7Gd7g43+USXFLv+UvxfX6gpf1l9KH/kZb1P+Zy/ISD+CmX5wr6zF+RU7kSP+fK/IJfahX/K67KrzmY33AIV9M0fyinc3V+yzX4Hb/Xmv4PXIs/cm3+xHW4rn721+MMrs9fuAF/5W/a0P+dG/EPbsw/OYyb6C9/U/7NzfgPN+e//A+qS/z/ diff --git a/geos-mesh/tests/test_AttributeMapping.py b/geos-mesh/tests/test_AttributeMapping.py index 019492f1..8bba913c 100644 --- a/geos-mesh/tests/test_AttributeMapping.py +++ b/geos-mesh/tests/test_AttributeMapping.py @@ -3,16 +3,14 @@ # SPDX-FileContributor: Romain Baville # ruff: noqa: E402 # disable Module level import not at top of file import pytest - from typing import Union, Any from geos.mesh.utils.arrayModifiers 
import fillAllPartialAttributes from geos.mesh.processing.AttributeMapping import AttributeMapping - from vtkmodules.vtkCommonDataModel import vtkMultiBlockDataSet, vtkDataSet @pytest.mark.parametrize( "meshFromName, meshToName, attributeNames, onPoints", [ - ( "fracture", "emptyFracture", { "collocated_nodes" }, True ), + ( "fracture", "emptyFracture", { "collocatedNodes" }, True ), ( "multiblock", "emptyFracture", { "FAULT" }, False ), ( "multiblock", "emptymultiblock", { "FAULT" }, False ), ( "dataset", "emptymultiblock", { "FAULT" }, False ), diff --git a/geos-mesh/tests/test_arrayHelpers.py b/geos-mesh/tests/test_arrayHelpers.py index e1d58ab3..a59bb690 100644 --- a/geos-mesh/tests/test_arrayHelpers.py +++ b/geos-mesh/tests/test_arrayHelpers.py @@ -53,7 +53,7 @@ def test_computeElementMapping( @pytest.mark.parametrize( "onpoints, expected", [ ( True, { 'GLOBAL_IDS_POINTS': 1, - 'collocated_nodes': 2, + 'collocatedNodes': 2, 'PointAttribute': 3 } ), ( False, { 'CELL_MARKERS': 1, @@ -198,7 +198,7 @@ def test_getArrayInObject( request: pytest.FixtureRequest, arrayExpected: npt.ND @pytest.mark.parametrize( "attributeName, vtkDataType, onPoints", [ ( "CellAttribute", 11, False ), ( "PointAttribute", 11, True ), - ( "collocated_nodes", 12, True ), + ( "collocatedNodes", 12, True ), ] ) def test_getVtkArrayTypeInMultiBlock( dataSetTest: vtkMultiBlockDataSet, attributeName: str, vtkDataType: int, onPoints: bool ) -> None: @@ -307,7 +307,7 @@ def test_getComponentNamesMultiBlock( @pytest.mark.parametrize( "attributeNames, onPoints, expected_columns", [ - ( ( "collocated_nodes", ), True, ( "collocated_nodes_0", "collocated_nodes_1" ) ), + ( ( "collocatedNodes", ), True, ( "collocatedNodes_0", "collocatedNodes_1" ) ), ] ) def test_getAttributeValuesAsDF( dataSetTest: vtkPolyData, attributeNames: Tuple[ str, ...], onPoints: bool, expected_columns: Tuple[ str, ...] 
) -> None: diff --git a/geos-mesh/tests/test_arrayModifiers.py b/geos-mesh/tests/test_arrayModifiers.py index 0b2528d8..0e8f44f9 100644 --- a/geos-mesh/tests/test_arrayModifiers.py +++ b/geos-mesh/tests/test_arrayModifiers.py @@ -58,12 +58,12 @@ np.nan ), np.float64( np.nan ), np.float64( np.nan ) ], VTK_DOUBLE ), # Test fill attributes with different number of component with or without component names. ( 3, "PORO", 1, (), False, None, [ np.float32( np.nan ) ], VTK_FLOAT ), - ( 1, "collocated_nodes", 2, ( None, None ), True, None, [ np.int64( -1 ), np.int64( -1 ) ], VTK_ID_TYPE ), + ( 1, "collocatedNodes", 2, ( None, None ), True, None, [ np.int64( -1 ), np.int64( -1 ) ], VTK_ID_TYPE ), # Test fill an attribute with different type of value. ( 3, "FAULT", 1, (), False, None, [ np.int32( -1 ) ], VTK_INT ), ( 3, "FAULT", 1, (), False, [ 4 ], [ np.int32( 4 ) ], VTK_INT ), ( 3, "PORO", 1, (), False, [ 4 ], [ np.float32( 4 ) ], VTK_FLOAT ), - ( 1, "collocated_nodes", 2, ( None, None ), True, [ 4, 4 ], [ np.int64( 4 ), np.int64( 4 ) ], VTK_ID_TYPE ), + ( 1, "collocatedNodes", 2, ( None, None ), True, [ 4, 4 ], [ np.int64( 4 ), np.int64( 4 ) ], VTK_ID_TYPE ), ( 3, "CellAttribute", 3, ( "AX1", "AX2", "AX3" ), False, [ 4, 4, 4 ], [ np.float64( 4 ), np.float64( 4 ), np.float64( 4 ) ], VTK_DOUBLE ), ] ) @@ -140,7 +140,7 @@ def test_FillAllPartialAttributes( for blockIndex in elementaryBlockIndexes: dataSet: vtkDataSet = vtkDataSet.SafeDownCast( multiBlockDataSetTest.GetDataSet( blockIndex ) ) attributeExist: int - for attributeNameOnPoint in [ "PointAttribute", "collocated_nodes" ]: + for attributeNameOnPoint in [ "PointAttribute", "collocatedNodes" ]: attributeExist = dataSet.GetPointData().HasArray( attributeNameOnPoint ) assert attributeExist == 1 for attributeNameOnCell in [ "CELL_MARKERS", "CellAttribute", "FAULT", "PERM", "PORO" ]: @@ -482,7 +482,7 @@ def test_copyAttributeDataSet( @pytest.mark.parametrize( "meshFromName, meshToName, attributeName, onPoints, 
defaultValueTest", [ - ( "fracture", "emptyFracture", "collocated_nodes", True, [ -1, -1 ] ), + ( "fracture", "emptyFracture", "collocatedNodes", True, [ -1, -1 ] ), ( "multiblock", "emptyFracture", "FAULT", False, -1 ), ( "multiblock", "emptymultiblock", "FAULT", False, -1 ), ( "dataset", "emptymultiblock", "FAULT", False, -1 ), diff --git a/geos-mesh/tests/test_cliParsing.py b/geos-mesh/tests/test_cliParsing.py new file mode 100644 index 00000000..fdaff897 --- /dev/null +++ b/geos-mesh/tests/test_cliParsing.py @@ -0,0 +1,77 @@ +import argparse +from dataclasses import dataclass +import pytest +from typing import Iterator, Sequence +from geos.mesh.doctor.actions.generateFractures import FracturePolicy, Options +from geos.mesh.doctor.parsing.generateFracturesParsing import convert, displayResults, fillSubparser +from geos.mesh.io.vtkIO import VtkOutput + + +@dataclass( frozen=True ) +class TestCase: + __test__ = False + cliArgs: Sequence[ str ] + options: Options + exception: bool = False + + +def __generateGenerateFracturesParsingTestData() -> Iterator[ TestCase ]: + field: str = "attribute" + mainMesh: str = "output.vtu" + fractureMesh: str = "fracture.vtu" + + cliGen: str = f"generateFractures --policy {{}} --name {field} --values 0,1 --output {mainMesh} --fracturesOutputDir ." 
+ allCliArgs = cliGen.format( "field" ).split(), cliGen.format( "internalSurfaces" ).split(), cliGen.format( + "dummy" ).split() + policies = FracturePolicy.FIELD, FracturePolicy.INTERNAL_SURFACES, FracturePolicy.FIELD + exceptions = False, False, True + for cliArgs, policy, exception in zip( allCliArgs, policies, exceptions ): + options: Options = Options( policy=policy, + field=field, + fieldValuesCombined=frozenset( ( 0, 1 ) ), + fieldValuesPerFracture=[ frozenset( ( 0, 1 ) ) ], + meshVtkOutput=VtkOutput( output=mainMesh, isDataModeBinary=True ), + allFracturesVtkOutput=[ VtkOutput( output=fractureMesh, isDataModeBinary=True ) ] ) + yield TestCase( cliArgs, options, exception ) + + +def __parseAndValidateOptions( testCase: TestCase ): + """ + Parse CLI arguments and validate that the resulting options match expected values. + + This helper function simulates the CLI parsing process by: + 1. Creating an argument parser with the generateFractures subparser + 2. Parsing the test case's CLI arguments + 3. Converting the parsed arguments to Options + 4. Asserting that key fields match the expected options + + Args: + testCase (TestCase): Test case containing CLI arguments and expected options. + + Raises: + AssertionError: If any of the parsed options don't match expected values. + """ + parser = argparse.ArgumentParser( description='Testing.' ) + subparsers = parser.add_subparsers() + fillSubparser( subparsers ) + args = parser.parse_args( testCase.cliArgs ) + options = convert( vars( args ) ) + assert options.policy == testCase.options.policy + assert options.field == testCase.options.field + assert options.fieldValuesCombined == testCase.options.fieldValuesCombined + + +def test_displayResults(): + # Dummy test for code coverage only. Shame on me! 
+ displayResults( None, None ) + + +@pytest.mark.parametrize( "testCase", __generateGenerateFracturesParsingTestData() ) +def test( testCase: TestCase ): + if testCase.exception: + with pytest.raises( SystemExit ): + pytest.skip( "Test to be fixed" ) + __parseAndValidateOptions( testCase ) + else: + pytest.skip( "Test to be fixed" ) + __parseAndValidateOptions( testCase ) diff --git a/geos-mesh/tests/test_cli_parsing.py b/geos-mesh/tests/test_cli_parsing.py deleted file mode 100644 index 5187d2e5..00000000 --- a/geos-mesh/tests/test_cli_parsing.py +++ /dev/null @@ -1,63 +0,0 @@ -import argparse -from dataclasses import dataclass -import pytest -from typing import Iterator, Sequence -from geos.mesh.doctor.actions.generate_fractures import FracturePolicy, Options -from geos.mesh.doctor.parsing.generate_fractures_parsing import convert, display_results, fill_subparser -from geos.mesh.io.vtkIO import VtkOutput - - -@dataclass( frozen=True ) -class TestCase: - __test__ = False - cli_args: Sequence[ str ] - options: Options - exception: bool = False - - -def __generate_generate_fractures_parsing_test_data() -> Iterator[ TestCase ]: - field: str = "attribute" - main_mesh: str = "output.vtu" - fracture_mesh: str = "fracture.vtu" - - cli_gen: str = f"generate_fractures --policy {{}} --name {field} --values 0,1 --output {main_mesh} --fractures_output_dir ." 
- all_cli_args = cli_gen.format( "field" ).split(), cli_gen.format( "internal_surfaces" ).split(), cli_gen.format( - "dummy" ).split() - policies = FracturePolicy.FIELD, FracturePolicy.INTERNAL_SURFACES, FracturePolicy.FIELD - exceptions = False, False, True - for cli_args, policy, exception in zip( all_cli_args, policies, exceptions ): - options: Options = Options( - policy=policy, - field=field, - field_values_combined=frozenset( ( 0, 1 ) ), - field_values_per_fracture=[ frozenset( ( 0, 1 ) ) ], - mesh_VtkOutput=VtkOutput( output=main_mesh, is_data_mode_binary=True ), - all_fractures_VtkOutput=[ VtkOutput( output=fracture_mesh, is_data_mode_binary=True ) ] ) - yield TestCase( cli_args, options, exception ) - - -def __f( test_case: TestCase ): - parser = argparse.ArgumentParser( description='Testing.' ) - subparsers = parser.add_subparsers() - fill_subparser( subparsers ) - args = parser.parse_args( test_case.cli_args ) - options = convert( vars( args ) ) - assert options.policy == test_case.options.policy - assert options.field == test_case.options.field - assert options.field_values_combined == test_case.options.field_values_combined - - -def test_display_results(): - # Dummy test for code coverage only. Shame on me! 
- display_results( None, None ) - - -@pytest.mark.parametrize( "test_case", __generate_generate_fractures_parsing_test_data() ) -def test( test_case: TestCase ): - if test_case.exception: - with pytest.raises( SystemExit ): - pytest.skip( "Test to be fixed" ) - __f( test_case ) - else: - pytest.skip( "Test to be fixed" ) - __f( test_case ) diff --git a/geos-mesh/tests/test_collocated_nodes.py b/geos-mesh/tests/test_collocatedNodes.py similarity index 61% rename from geos-mesh/tests/test_collocated_nodes.py rename to geos-mesh/tests/test_collocatedNodes.py index 86f798f7..7232a2a8 100644 --- a/geos-mesh/tests/test_collocated_nodes.py +++ b/geos-mesh/tests/test_collocatedNodes.py @@ -2,10 +2,10 @@ from typing import Iterator, Tuple from vtkmodules.vtkCommonCore import vtkPoints from vtkmodules.vtkCommonDataModel import vtkCellArray, vtkTetra, vtkUnstructuredGrid, VTK_TETRA -from geos.mesh.doctor.actions.collocated_nodes import Options, __action +from geos.mesh.doctor.actions.collocatedNodes import Options, __action -def get_points() -> Iterator[ Tuple[ vtkPoints, int ] ]: +def getPoints() -> Iterator[ Tuple[ vtkPoints, int ] ]: """Generates the data for the cases. One case has two nodes at the exact same position. 
The other has two differente nodes @@ -16,26 +16,26 @@ def get_points() -> Iterator[ Tuple[ vtkPoints, int ] ]: points.SetNumberOfPoints( 2 ) points.SetPoint( 0, p0 ) points.SetPoint( 1, p1 ) - num_nodes_bucket = 1 if p0 == p1 else 0 - yield points, num_nodes_bucket + numNodesBucket = 1 if p0 == p1 else 0 + yield points, numNodesBucket -@pytest.mark.parametrize( "data", get_points() ) -def test_simple_collocated_points( data: Tuple[ vtkPoints, int ] ): - points, num_nodes_bucket = data +@pytest.mark.parametrize( "data", getPoints() ) +def test_simpleCollocatedPoints( data: Tuple[ vtkPoints, int ] ): + points, numNodesBucket = data mesh = vtkUnstructuredGrid() mesh.SetPoints( points ) result = __action( mesh, Options( tolerance=1.e-12 ) ) - assert len( result.wrong_support_elements ) == 0 - assert len( result.nodes_buckets ) == num_nodes_bucket - if num_nodes_bucket == 1: - assert len( result.nodes_buckets[ 0 ] ) == points.GetNumberOfPoints() + assert len( result.wrongSupportElements ) == 0 + assert len( result.nodesBuckets ) == numNodesBucket + if numNodesBucket == 1: + assert len( result.nodesBuckets[ 0 ] ) == points.GetNumberOfPoints() -def test_wrong_support_elements(): +def test_wrongSupportElements(): points = vtkPoints() points.SetNumberOfPoints( 4 ) points.SetPoint( 0, ( 0, 0, 0 ) ) @@ -43,7 +43,7 @@ def test_wrong_support_elements(): points.SetPoint( 2, ( 0, 1, 0 ) ) points.SetPoint( 3, ( 0, 0, 1 ) ) - cell_types = [ VTK_TETRA ] + cellTypes = [ VTK_TETRA ] cells = vtkCellArray() cells.AllocateExact( 1, 4 ) @@ -56,10 +56,10 @@ def test_wrong_support_elements(): mesh = vtkUnstructuredGrid() mesh.SetPoints( points ) - mesh.SetCells( cell_types, cells ) + mesh.SetCells( cellTypes, cells ) result = __action( mesh, Options( tolerance=1.e-12 ) ) - assert len( result.nodes_buckets ) == 0 - assert len( result.wrong_support_elements ) == 1 - assert result.wrong_support_elements[ 0 ] == 0 + assert len( result.nodesBuckets ) == 0 + assert len( 
result.wrongSupportElements ) == 1 + assert result.wrongSupportElements[ 0 ] == 0 diff --git a/geos-mesh/tests/test_element_volumes.py b/geos-mesh/tests/test_elementVolumes.py similarity index 58% rename from geos-mesh/tests/test_element_volumes.py rename to geos-mesh/tests/test_elementVolumes.py index dccbda93..f62c74ec 100644 --- a/geos-mesh/tests/test_element_volumes.py +++ b/geos-mesh/tests/test_elementVolumes.py @@ -1,10 +1,10 @@ import numpy from vtkmodules.vtkCommonCore import vtkPoints from vtkmodules.vtkCommonDataModel import VTK_TETRA, vtkCellArray, vtkTetra, vtkUnstructuredGrid -from geos.mesh.doctor.actions.element_volumes import Options, __action +from geos.mesh.doctor.actions.elementVolumes import Options, __action -def test_simple_tet(): +def test_simpleTet(): # creating a simple tetrahedron points = vtkPoints() points.SetNumberOfPoints( 4 ) @@ -13,7 +13,7 @@ def test_simple_tet(): points.SetPoint( 2, ( 0, 1, 0 ) ) points.SetPoint( 3, ( 0, 0, 1 ) ) - cell_types = [ VTK_TETRA ] + cellTypes = [ VTK_TETRA ] cells = vtkCellArray() cells.AllocateExact( 1, 4 ) @@ -26,14 +26,14 @@ def test_simple_tet(): mesh = vtkUnstructuredGrid() mesh.SetPoints( points ) - mesh.SetCells( cell_types, cells ) + mesh.SetCells( cellTypes, cells ) - result = __action( mesh, Options( min_volume=1. ) ) + result = __action( mesh, Options( minVolume=1. ) ) - assert len( result.element_volumes ) == 1 - assert result.element_volumes[ 0 ][ 0 ] == 0 - assert abs( result.element_volumes[ 0 ][ 1 ] - 1. / 6. ) < 10 * numpy.finfo( float ).eps + assert len( result.elementVolumes ) == 1 + assert result.elementVolumes[ 0 ][ 0 ] == 0 + assert abs( result.elementVolumes[ 0 ][ 1 ] - 1. / 6. ) < 10 * numpy.finfo( float ).eps - result = __action( mesh, Options( min_volume=0. ) ) + result = __action( mesh, Options( minVolume=0. 
) ) - assert len( result.element_volumes ) == 0 + assert len( result.elementVolumes ) == 0 diff --git a/geos-mesh/tests/test_generate_cube.py b/geos-mesh/tests/test_generateCube.py similarity index 70% rename from geos-mesh/tests/test_generate_cube.py rename to geos-mesh/tests/test_generateCube.py index d02ef68b..069aba5e 100644 --- a/geos-mesh/tests/test_generate_cube.py +++ b/geos-mesh/tests/test_generateCube.py @@ -1,10 +1,10 @@ -from geos.mesh.doctor.actions.generate_cube import __build, Options, FieldInfo +from geos.mesh.doctor.actions.generateCube import __build, Options, FieldInfo -def test_generate_cube(): - options = Options( vtk_output=None, - generate_cells_global_ids=True, - generate_points_global_ids=False, +def test_generateCube(): + options = Options( vtkOutput=None, + generateCellsGlobalIds=True, + generatePointsGlobalIds=False, xs=( 0, 5, 10 ), ys=( 0, 4, 8 ), zs=( 0, 1 ), diff --git a/geos-mesh/tests/test_generateFractures.py b/geos-mesh/tests/test_generateFractures.py new file mode 100644 index 00000000..cef3f325 --- /dev/null +++ b/geos-mesh/tests/test_generateFractures.py @@ -0,0 +1,360 @@ +from dataclasses import dataclass +import numpy +import pytest +from typing import Iterable, Iterator, Sequence +from vtkmodules.vtkCommonDataModel import ( vtkUnstructuredGrid, vtkQuad, VTK_HEXAHEDRON, VTK_POLYHEDRON, VTK_QUAD ) +from vtkmodules.util.numpy_support import numpy_to_vtk, vtk_to_numpy +from geos.mesh.doctor.actions.checkFractures import formatCollocatedNodes +from geos.mesh.doctor.actions.generateCube import buildRectilinearBlocksMesh, XYZ +from geos.mesh.doctor.actions.generateFractures import ( __splitMeshOnFractures, Options, FracturePolicy, Coordinates3D, + IDMapping ) +from geos.mesh.utils.genericHelpers import toVtkIdList + +FaceNodesCoords = tuple[ tuple[ float ] ] +IDMatrix = Sequence[ Sequence[ int ] ] + + +@dataclass( frozen=True ) +class TestResult: + __test__ = False + mainMeshNumPoints: int + mainMeshNumCells: int + 
fractureMeshNumPoints: int + fractureMeshNumCells: int + + +@dataclass( frozen=True ) +class TestCase: + __test__ = False + inputMesh: vtkUnstructuredGrid + options: Options + collocatedNodes: IDMatrix + result: TestResult + + +def __buildTestCase( xs: tuple[ numpy.ndarray, numpy.ndarray, numpy.ndarray ], + attribute: Iterable[ int ], + fieldValues: Iterable[ int ] = None, + policy: FracturePolicy = FracturePolicy.FIELD ): + xyz = XYZ( *xs ) + + mesh: vtkUnstructuredGrid = buildRectilinearBlocksMesh( ( xyz, ) ) + + ref = numpy.array( attribute, dtype=int ) + if policy == FracturePolicy.FIELD: + assert len( ref ) == mesh.GetNumberOfCells() + attr = numpy_to_vtk( ref ) + attr.SetName( "attribute" ) + mesh.GetCellData().AddArray( attr ) + + if fieldValues is None: + fv = frozenset( attribute ) + else: + fv = frozenset( fieldValues ) + + options = Options( policy=policy, + field="attribute", + fieldValuesCombined=fv, + fieldValuesPerFracture=[ fv ], + meshVtkOutput=None, + allFracturesVtkOutput=None ) + return mesh, options + + +# Utility class to generate the new indices of the newly created collocated nodes. 
+class Incrementor: + + def __init__( self, start ): + self.__val = start + + def next( self, num: int ) -> Iterable[ int ]: + self.__val += num + return range( self.__val - num, self.__val ) + + +def __generateTestData() -> Iterator[ TestCase ]: + twoNodes = numpy.arange( 2, dtype=float ) + threeNodes = numpy.arange( 3, dtype=float ) + fourNodes = numpy.arange( 4, dtype=float ) + + # Split in 2 + mesh, options = __buildTestCase( ( threeNodes, threeNodes, threeNodes ), ( 0, 1, 0, 1, 0, 1, 0, 1 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=tuple( map( lambda i: ( 1 + 3 * i, 27 + i ), range( 9 ) ) ), + result=TestResult( 9 * 4, 8, 9, 4 ) ) + + # Split in 3 + inc = Incrementor( 27 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 3, *inc.next( 1 ) ), ( 4, *inc.next( 2 ) ), + ( 7, *inc.next( 1 ) ), ( 1 + 9, *inc.next( 1 ) ), ( 3 + 9, *inc.next( 1 ) ), + ( 4 + 9, *inc.next( 2 ) ), ( 7 + 9, *inc.next( 1 ) ), ( 1 + 18, *inc.next( 1 ) ), + ( 3 + 18, *inc.next( 1 ) ), ( 4 + 18, *inc.next( 2 ) ), ( 7 + 18, *inc.next( 1 ) ) ) + mesh, options = __buildTestCase( ( threeNodes, threeNodes, threeNodes ), ( 0, 1, 2, 1, 0, 1, 2, 1 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 9 * 4 + 6, 8, 12, 6 ) ) + + # Split in 8 + inc = Incrementor( 27 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 3, *inc.next( 1 ) ), ( 4, *inc.next( 3 ) ), + ( 5, *inc.next( 1 ) ), ( 7, *inc.next( 1 ) ), ( 0 + 9, *inc.next( 1 ) ), + ( 1 + 9, *inc.next( 3 ) ), ( 2 + 9, *inc.next( 1 ) ), ( 3 + 9, *inc.next( 3 ) ), + ( 4 + 9, *inc.next( 7 ) ), ( 5 + 9, *inc.next( 3 ) ), ( 6 + 9, *inc.next( 1 ) ), + ( 7 + 9, *inc.next( 3 ) ), ( 8 + 9, *inc.next( 1 ) ), ( 1 + 18, *inc.next( 1 ) ), + ( 3 + 18, *inc.next( 1 ) ), ( 4 + 18, *inc.next( 3 ) ), ( 5 + 18, *inc.next( 1 ) ), + ( 7 + 18, *inc.next( 1 ) ) ) + mesh, options = __buildTestCase( ( threeNodes, threeNodes, threeNodes ), range( 8 ) ) + yield TestCase( 
inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 8 * 8, 8, 3 * 3 * 3 - 8, 12 ) ) + + # Straight notch + inc = Incrementor( 27 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 4, ), ( 1 + 9, *inc.next( 1 ) ), ( 4 + 9, ), + ( 1 + 18, *inc.next( 1 ) ), ( 4 + 18, ) ) + mesh, options = __buildTestCase( ( threeNodes, threeNodes, threeNodes ), ( 0, 1, 2, 2, 0, 1, 2, 2 ), + fieldValues=( 0, 1 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 3 * 3 * 3 + 3, 8, 6, 2 ) ) + + # L-shaped notch + inc = Incrementor( 27 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 4, *inc.next( 1 ) ), ( 7, *inc.next( 1 ) ), + ( 1 + 9, *inc.next( 1 ) ), ( 4 + 9, ), ( 7 + 9, ), ( 19, *inc.next( 1 ) ), ( 22, ) ) + mesh, options = __buildTestCase( ( threeNodes, threeNodes, threeNodes ), ( 0, 1, 0, 1, 0, 1, 2, 2 ), + fieldValues=( 0, 1 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 3 * 3 * 3 + 5, 8, 8, 3 ) ) + + # 3x1x1 split + inc = Incrementor( 2 * 2 * 4 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 2, *inc.next( 1 ) ), ( 5, *inc.next( 1 ) ), + ( 6, *inc.next( 1 ) ), ( 1 + 8, *inc.next( 1 ) ), ( 2 + 8, *inc.next( 1 ) ), + ( 5 + 8, *inc.next( 1 ) ), ( 6 + 8, *inc.next( 1 ) ) ) + mesh, options = __buildTestCase( ( fourNodes, twoNodes, twoNodes ), ( 0, 1, 2 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 6 * 4, 3, 2 * 4, 2 ) ) + + # Discarded fracture element if no node duplication. 
+ collocatedNodes: IDMatrix = tuple() + mesh, options = __buildTestCase( ( threeNodes, fourNodes, fourNodes ), ( 0, ) * 8 + ( 1, 2 ) + ( 0, ) * 8, + fieldValues=( 1, 2 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 3 * 4 * 4, 2 * 3 * 3, 0, 0 ) ) + + # Fracture on a corner + inc = Incrementor( 3 * 4 * 4 ) + collocatedNodes: IDMatrix = ( ( 1 + 12, ), ( 4 + 12, ), ( 7 + 12, ), ( 1 + 12 * 2, *inc.next( 1 ) ), + ( 4 + 12 * 2, *inc.next( 1 ) ), ( 7 + 12 * 2, ), ( 1 + 12 * 3, *inc.next( 1 ) ), + ( 4 + 12 * 3, *inc.next( 1 ) ), ( 7 + 12 * 3, ) ) + mesh, options = __buildTestCase( ( threeNodes, fourNodes, fourNodes ), + ( 0, ) * 6 + ( 1, 2, 1, 2, 0, 0, 1, 2, 1, 2, 0, 0 ), + fieldValues=( 1, 2 ) ) + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 3 * 4 * 4 + 4, 2 * 3 * 3, 9, 4 ) ) + + # Generate mesh with 2 hexs, one being a standard hex, the other a 42 hex. + inc = Incrementor( 3 * 2 * 2 ) + collocatedNodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 1 + 3, *inc.next( 1 ) ), ( 1 + 6, *inc.next( 1 ) ), + ( 1 + 9, *inc.next( 1 ) ) ) + mesh, options = __buildTestCase( ( threeNodes, twoNodes, twoNodes ), ( 0, 1 ) ) + polyhedronMesh = vtkUnstructuredGrid() + polyhedronMesh.SetPoints( mesh.GetPoints() ) + polyhedronMesh.Allocate( 2 ) + polyhedronMesh.InsertNextCell( VTK_HEXAHEDRON, toVtkIdList( ( 1, 2, 5, 4, 7, 8, 10, 11 ) ) ) + poly = toVtkIdList( [ 6 ] + [ 4, 0, 1, 7, 6 ] + [ 4, 1, 4, 10, 7 ] + [ 4, 4, 3, 9, 10 ] + [ 4, 3, 0, 6, 9 ] + + [ 4, 6, 7, 10, 9 ] + [ 4, 1, 0, 3, 4 ] ) + polyhedronMesh.InsertNextCell( VTK_POLYHEDRON, poly ) + polyhedronMesh.GetCellData().AddArray( mesh.GetCellData().GetArray( "attribute" ) ) + + yield TestCase( inputMesh=polyhedronMesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 4 * 4, 2, 4, 1 ) ) + + # Split in 2 using the internal fracture description + inc = Incrementor( 3 * 2 * 2 ) + collocatedNodes: 
IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 1 + 3, *inc.next( 1 ) ), ( 1 + 6, *inc.next( 1 ) ), + ( 1 + 9, *inc.next( 1 ) ) ) + mesh, options = __buildTestCase( ( threeNodes, twoNodes, twoNodes ), + attribute=( 0, 0, 0 ), + fieldValues=( 0, ), + policy=FracturePolicy.INTERNAL_SURFACES ) + mesh.InsertNextCell( VTK_QUAD, toVtkIdList( ( 1, 4, 7, 10 ) ) ) # Add a fracture on the fly + yield TestCase( inputMesh=mesh, + options=options, + collocatedNodes=collocatedNodes, + result=TestResult( 4 * 4, 3, 4, 1 ) ) + + +@pytest.mark.parametrize( "testCase", __generateTestData() ) +def test_generateFracture( testCase: TestCase ): + mainMesh, fractureMeshes = __splitMeshOnFractures( testCase.inputMesh, testCase.options ) + fractureMesh: vtkUnstructuredGrid = fractureMeshes[ 0 ] + assert mainMesh.GetNumberOfPoints() == testCase.result.mainMeshNumPoints + assert mainMesh.GetNumberOfCells() == testCase.result.mainMeshNumCells + assert fractureMesh.GetNumberOfPoints() == testCase.result.fractureMeshNumPoints + assert fractureMesh.GetNumberOfCells() == testCase.result.fractureMeshNumCells + + res = formatCollocatedNodes( fractureMesh ) + assert res == testCase.collocatedNodes + assert len( res ) == testCase.result.fractureMeshNumPoints + + +def addSimplifiedFieldForCells( mesh: vtkUnstructuredGrid, field_name: str, fieldDimension: int ): + """Reduced version of the functionality of src.geos.mesh.doctor.actions.generateFractures.__add_fields, + whose goal is to add a cell data array with incrementing values. + + Args: + mesh (vtkUnstructuredGrid): Unstructured mesh. + field_name (str): Name of the field to add to CellData. + fieldDimension (int): Number of components for the field.
+ """ + data = mesh.GetCellData() + n = mesh.GetNumberOfCells() + array = numpy.ones( ( n, fieldDimension ), dtype=float ) + array = numpy.arange( 1, n * fieldDimension + 1 ).reshape( n, fieldDimension ) + vtkArray = numpy_to_vtk( array ) + vtkArray.SetName( field_name ) + data.AddArray( vtkArray ) + + +def findBordersFacesRectilinearGrid( mesh: vtkUnstructuredGrid ) -> tuple[ FaceNodesCoords ]: + """ + 6+--------+7 + / /| + / / | + 4+--------+5 | + | | | + | 2+ | +3 + | | / + | |/ + 0+--------+1 + + For a vtk rectilinear grid, gives the coordinates of each of its borders face nodes. + + Args: + mesh (vtkUnstructuredGrid): Unstructured mesh. + + Returns: + tuple[QuadCoords]: For a rectilinear grid, returns a tuple of 6 elements. + """ + meshBounds: tuple[ float ] = mesh.GetBounds() + minBound: Coordinates3D = [ meshBounds[ i ] for i in range( len( meshBounds ) ) if i % 2 == 0 ] + maxBound: Coordinates3D = [ meshBounds[ i ] for i in range( len( meshBounds ) ) if i % 2 == 1 ] + center: Coordinates3D = mesh.GetCenter() + faceDiag: tuple[ float ] = ( ( maxBound[ 0 ] - minBound[ 0 ] ) / 2, ( maxBound[ 1 ] - minBound[ 1 ] ) / 2, + ( maxBound[ 2 ] - minBound[ 2 ] ) / 2 ) + node0: Coordinates3D = ( center[ 0 ] - faceDiag[ 0 ], center[ 1 ] - faceDiag[ 1 ], center[ 2 ] - faceDiag[ 2 ] ) + node1: Coordinates3D = ( center[ 0 ] + faceDiag[ 0 ], center[ 1 ] - faceDiag[ 1 ], center[ 2 ] - faceDiag[ 2 ] ) + node2: Coordinates3D = ( center[ 0 ] - faceDiag[ 0 ], center[ 1 ] + faceDiag[ 1 ], center[ 2 ] - faceDiag[ 2 ] ) + node3: Coordinates3D = ( center[ 0 ] + faceDiag[ 0 ], center[ 1 ] + faceDiag[ 1 ], center[ 2 ] - faceDiag[ 2 ] ) + node4: Coordinates3D = ( center[ 0 ] - faceDiag[ 0 ], center[ 1 ] - faceDiag[ 1 ], center[ 2 ] + faceDiag[ 2 ] ) + node5: Coordinates3D = ( center[ 0 ] + faceDiag[ 0 ], center[ 1 ] - faceDiag[ 1 ], center[ 2 ] + faceDiag[ 2 ] ) + node6: Coordinates3D = ( center[ 0 ] - faceDiag[ 0 ], center[ 1 ] + faceDiag[ 1 ], center[ 2 ] + faceDiag[ 2 ] ) + node7: 
Coordinates3D = ( center[ 0 ] + faceDiag[ 0 ], center[ 1 ] + faceDiag[ 1 ], center[ 2 ] + faceDiag[ 2 ] ) + faces: tuple[ FaceNodesCoords ] = ( ( node0, node1, node3, node2 ), ( node4, node5, node7, node6 ), + ( node0, node2, node6, node4 ), ( node1, node3, node7, node5 ), + ( node0, node1, node5, node4 ), ( node2, node3, node7, node6 ) ) + return faces + + +def addQuad( mesh: vtkUnstructuredGrid, face: FaceNodesCoords ): + """Adds the quad cell defined by the given face node coordinates to an unstructured mesh. + + Args: + mesh (vtkUnstructuredGrid): Unstructured mesh. + face (FaceNodesCoords): Coordinates of the four nodes of the quad to add. + """ + pointsCoords = mesh.GetPoints().GetData() + quad: vtkQuad = vtkQuad() + idsAssociation: IDMapping = {} + for i in range( mesh.GetNumberOfPoints() ): + for j in range( len( face ) ): + if pointsCoords.GetTuple( i ) == face[ j ]: + idsAssociation[ i ] = j + break + if len( idsAssociation ) == 4: + break + + for vertexId, quadCoordIndex in idsAssociation.items(): + quad.GetPoints().InsertNextPoint( face[ quadCoordIndex ] ) + quad.GetPointIds().SetId( quadCoordIndex, vertexId ) + + mesh.InsertNextCell( quad.GetCellType(), quad.GetPointIds() ) + + +@pytest.mark.skip( "Test to be fixed" ) +def test_copyFieldsWhenSplittingMesh(): + """This test is designed to check the __copyFields method from generateFractures, + which is called when using the __splitMeshOnFractures method from generateFractures. + """ + # Generating the rectilinear grid and its quads on all borders + x: numpy.array = numpy.array( [ 0, 1, 2 ] ) + y: numpy.array = numpy.array( [ 0, 1 ] ) + z: numpy.array = numpy.array( [ 0, 1 ] ) + xyzs: XYZ = XYZ( x, y, z ) + mesh: vtkUnstructuredGrid = buildRectilinearBlocksMesh( [ xyzs ] ) + assert mesh.GetCells().GetNumberOfCells() == 2 + borderFaces: tuple[ FaceNodesCoords ] = findBordersFacesRectilinearGrid( mesh ) + for face in borderFaces: + addQuad( mesh, face ) + assert mesh.GetCells().GetNumberOfCells() == 8 + # Create a quad cell to represent the fracture surface.
+ fracture: FaceNodesCoords = ( ( 1.0, 0.0, 0.0 ), ( 1.0, 1.0, 0.0 ), ( 1.0, 1.0, 1.0 ), ( 1.0, 0.0, 1.0 ) ) + addQuad( mesh, fracture ) + assert mesh.GetCells().GetNumberOfCells() == 9 + # Add a "TestField" array + assert mesh.GetCellData().GetNumberOfArrays() == 0 + addSimplifiedFieldForCells( mesh, "TestField", 1 ) + assert mesh.GetCellData().GetNumberOfArrays() == 1 + assert mesh.GetCellData().GetArrayName( 0 ) == "TestField" + testFieldValues: list[ int ] = vtk_to_numpy( mesh.GetCellData().GetArray( 0 ) ).tolist() + assert testFieldValues == [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ] + # Split the mesh along the fracture surface which is number 9 on TestField + options = Options( policy=FracturePolicy.INTERNAL_SURFACES, + field="TestField", + fieldValuesCombined=frozenset( map( int, [ "9" ] ) ), + fieldValuesPerFracture=[ frozenset( map( int, [ "9" ] ) ) ], + meshVtkOutput=None, + allFracturesVtkOutput=None ) + mainMesh, fractureMeshes = __splitMeshOnFractures( mesh, options ) + fractureMesh: vtkUnstructuredGrid = fractureMeshes[ 0 ] + assert mainMesh.GetCellData().GetNumberOfArrays() == 1 + assert fractureMesh.GetCellData().GetNumberOfArrays() == 1 + assert mainMesh.GetCellData().GetArrayName( 0 ) == "TestField" + assert fractureMesh.GetCellData().GetArrayName( 0 ) == "TestField" + # Make sure that only 1 correct value is in "TestField" array for fractureMesh, 9 values for mainMesh + fractureMeshValues: list[ int ] = vtk_to_numpy( fractureMesh.GetCellData().GetArray( 0 ) ).tolist() + mainMeshValues: list[ int ] = vtk_to_numpy( mainMesh.GetCellData().GetArray( 0 ) ).tolist() + assert fractureMeshValues == [ 9 ] # The value for the fracture surface + assert mainMeshValues == [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ] + # Test for invalid point field name + addSimplifiedFieldForCells( mesh, "GLOBAL_IDS_POINTS", 1 ) + with pytest.raises( ValueError ) as pytestWrappedError: + mainMesh, fractureMeshes = __splitMeshOnFractures( mesh, options ) + assert pytestWrappedError.type == ValueError + 
# Test for invalid cell field name + mesh: vtkUnstructuredGrid = buildRectilinearBlocksMesh( [ xyzs ] ) + borderFaces: tuple[ FaceNodesCoords ] = findBordersFacesRectilinearGrid( mesh ) + for face in borderFaces: + addQuad( mesh, face ) + addQuad( mesh, fracture ) + addSimplifiedFieldForCells( mesh, "TestField", 1 ) + addSimplifiedFieldForCells( mesh, "GLOBAL_IDS_CELLS", 1 ) + assert mesh.GetCellData().GetNumberOfArrays() == 2 + with pytest.raises( ValueError ) as pytestWrappedError: + mainMesh, fractureMeshes = __splitMeshOnFractures( mesh, options ) + assert pytestWrappedError.type == ValueError diff --git a/geos-mesh/tests/test_generate_global_ids.py b/geos-mesh/tests/test_generateGlobalIds.py similarity index 55% rename from geos-mesh/tests/test_generate_global_ids.py rename to geos-mesh/tests/test_generateGlobalIds.py index 614f771c..f623ca27 100644 --- a/geos-mesh/tests/test_generate_global_ids.py +++ b/geos-mesh/tests/test_generateGlobalIds.py @@ -1,9 +1,9 @@ from vtkmodules.vtkCommonCore import vtkPoints from vtkmodules.vtkCommonDataModel import vtkCellArray, vtkUnstructuredGrid, vtkVertex, VTK_VERTEX -from geos.mesh.doctor.actions.generate_global_ids import __build_global_ids +from geos.mesh.doctor.actions.generateGlobalIds import __buildGlobalIds -def test_generate_global_ids(): +def test_generateGlobalIds(): points = vtkPoints() points.InsertNextPoint( 0, 0, 0 ) @@ -17,9 +17,9 @@ def test_generate_global_ids(): mesh.SetPoints( points ) mesh.SetCells( [ VTK_VERTEX ], vertices ) - __build_global_ids( mesh, True, True ) + __buildGlobalIds( mesh, True, True ) - global_cell_ids = mesh.GetCellData().GetGlobalIds() - global_point_ids = mesh.GetPointData().GetGlobalIds() - assert global_cell_ids.GetNumberOfValues() == 1 - assert global_point_ids.GetNumberOfValues() == 1 + globalCellIds = mesh.GetCellData().GetGlobalIds() + globalPointIds = mesh.GetPointData().GetGlobalIds() + assert globalCellIds.GetNumberOfValues() == 1 + assert 
globalPointIds.GetNumberOfValues() == 1 diff --git a/geos-mesh/tests/test_generate_fractures.py b/geos-mesh/tests/test_generate_fractures.py deleted file mode 100644 index 66c9496f..00000000 --- a/geos-mesh/tests/test_generate_fractures.py +++ /dev/null @@ -1,360 +0,0 @@ -from dataclasses import dataclass -import numpy -import pytest -from typing import Iterable, Iterator, Sequence -from vtkmodules.vtkCommonDataModel import ( vtkUnstructuredGrid, vtkQuad, VTK_HEXAHEDRON, VTK_POLYHEDRON, VTK_QUAD ) -from vtkmodules.util.numpy_support import numpy_to_vtk, vtk_to_numpy -from geos.mesh.doctor.actions.check_fractures import format_collocated_nodes -from geos.mesh.doctor.actions.generate_cube import build_rectilinear_blocks_mesh, XYZ -from geos.mesh.doctor.actions.generate_fractures import ( __split_mesh_on_fractures, Options, FracturePolicy, - Coordinates3D, IDMapping ) -from geos.mesh.utils.genericHelpers import to_vtk_id_list - -FaceNodesCoords = tuple[ tuple[ float ] ] -IDMatrix = Sequence[ Sequence[ int ] ] - - -@dataclass( frozen=True ) -class TestResult: - __test__ = False - main_mesh_num_points: int - main_mesh_num_cells: int - fracture_mesh_num_points: int - fracture_mesh_num_cells: int - - -@dataclass( frozen=True ) -class TestCase: - __test__ = False - input_mesh: vtkUnstructuredGrid - options: Options - collocated_nodes: IDMatrix - result: TestResult - - -def __build_test_case( xs: tuple[ numpy.ndarray, numpy.ndarray, numpy.ndarray ], - attribute: Iterable[ int ], - field_values: Iterable[ int ] = None, - policy: FracturePolicy = FracturePolicy.FIELD ): - xyz = XYZ( *xs ) - - mesh: vtkUnstructuredGrid = build_rectilinear_blocks_mesh( ( xyz, ) ) - - ref = numpy.array( attribute, dtype=int ) - if policy == FracturePolicy.FIELD: - assert len( ref ) == mesh.GetNumberOfCells() - attr = numpy_to_vtk( ref ) - attr.SetName( "attribute" ) - mesh.GetCellData().AddArray( attr ) - - if field_values is None: - fv = frozenset( attribute ) - else: - fv = frozenset( 
field_values )
-
-    options = Options( policy=policy,
-                       field="attribute",
-                       field_values_combined=fv,
-                       field_values_per_fracture=[ fv ],
-                       mesh_VtkOutput=None,
-                       all_fractures_VtkOutput=None )
-    return mesh, options
-
-
-# Utility class to generate the new indices of the newly created collocated nodes.
-class Incrementor:
-
-    def __init__( self, start ):
-        self.__val = start
-
-    def next( self, num: int ) -> Iterable[ int ]:
-        self.__val += num
-        return range( self.__val - num, self.__val )
-
-
-def __generate_test_data() -> Iterator[ TestCase ]:
-    two_nodes = numpy.arange( 2, dtype=float )
-    three_nodes = numpy.arange( 3, dtype=float )
-    four_nodes = numpy.arange( 4, dtype=float )
-
-    # Split in 2
-    mesh, options = __build_test_case( ( three_nodes, three_nodes, three_nodes ), ( 0, 1, 0, 1, 0, 1, 0, 1 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=tuple( map( lambda i: ( 1 + 3 * i, 27 + i ), range( 9 ) ) ),
-                    result=TestResult( 9 * 4, 8, 9, 4 ) )
-
-    # Split in 3
-    inc = Incrementor( 27 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 3, *inc.next( 1 ) ), ( 4, *inc.next( 2 ) ),
-                                   ( 7, *inc.next( 1 ) ), ( 1 + 9, *inc.next( 1 ) ), ( 3 + 9, *inc.next( 1 ) ),
-                                   ( 4 + 9, *inc.next( 2 ) ), ( 7 + 9, *inc.next( 1 ) ), ( 1 + 18, *inc.next( 1 ) ),
-                                   ( 3 + 18, *inc.next( 1 ) ), ( 4 + 18, *inc.next( 2 ) ), ( 7 + 18, *inc.next( 1 ) ) )
-    mesh, options = __build_test_case( ( three_nodes, three_nodes, three_nodes ), ( 0, 1, 2, 1, 0, 1, 2, 1 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 9 * 4 + 6, 8, 12, 6 ) )
-
-    # Split in 8
-    inc = Incrementor( 27 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 3, *inc.next( 1 ) ), ( 4, *inc.next( 3 ) ),
-                                   ( 5, *inc.next( 1 ) ), ( 7, *inc.next( 1 ) ), ( 0 + 9, *inc.next( 1 ) ),
-                                   ( 1 + 9, *inc.next( 3 ) ), ( 2 + 9, *inc.next( 1 ) ), ( 3 + 9, *inc.next( 3 ) ),
-                                   ( 4 + 9, *inc.next( 7 ) ), ( 5 + 9, *inc.next( 3 ) ), ( 6 + 9, *inc.next( 1 ) ),
-                                   ( 7 + 9, *inc.next( 3 ) ), ( 8 + 9, *inc.next( 1 ) ), ( 1 + 18, *inc.next( 1 ) ),
-                                   ( 3 + 18, *inc.next( 1 ) ), ( 4 + 18, *inc.next( 3 ) ), ( 5 + 18, *inc.next( 1 ) ),
-                                   ( 7 + 18, *inc.next( 1 ) ) )
-    mesh, options = __build_test_case( ( three_nodes, three_nodes, three_nodes ), range( 8 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 8 * 8, 8, 3 * 3 * 3 - 8, 12 ) )
-
-    # Straight notch
-    inc = Incrementor( 27 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 4, ), ( 1 + 9, *inc.next( 1 ) ), ( 4 + 9, ),
-                                   ( 1 + 18, *inc.next( 1 ) ), ( 4 + 18, ) )
-    mesh, options = __build_test_case( ( three_nodes, three_nodes, three_nodes ), ( 0, 1, 2, 2, 0, 1, 2, 2 ),
-                                       field_values=( 0, 1 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 3 * 3 * 3 + 3, 8, 6, 2 ) )
-
-    # L-shaped notch
-    inc = Incrementor( 27 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 4, *inc.next( 1 ) ), ( 7, *inc.next( 1 ) ),
-                                   ( 1 + 9, *inc.next( 1 ) ), ( 4 + 9, ), ( 7 + 9, ), ( 19, *inc.next( 1 ) ), ( 22, ) )
-    mesh, options = __build_test_case( ( three_nodes, three_nodes, three_nodes ), ( 0, 1, 0, 1, 0, 1, 2, 2 ),
-                                       field_values=( 0, 1 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 3 * 3 * 3 + 5, 8, 8, 3 ) )
-
-    # 3x1x1 split
-    inc = Incrementor( 2 * 2 * 4 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 2, *inc.next( 1 ) ), ( 5, *inc.next( 1 ) ),
-                                   ( 6, *inc.next( 1 ) ), ( 1 + 8, *inc.next( 1 ) ), ( 2 + 8, *inc.next( 1 ) ),
-                                   ( 5 + 8, *inc.next( 1 ) ), ( 6 + 8, *inc.next( 1 ) ) )
-    mesh, options = __build_test_case( ( four_nodes, two_nodes, two_nodes ), ( 0, 1, 2 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 6 * 4, 3, 2 * 4, 2 ) )
-
-    # Discarded fracture element if no node duplication.
-    collocated_nodes: IDMatrix = tuple()
-    mesh, options = __build_test_case( ( three_nodes, four_nodes, four_nodes ), ( 0, ) * 8 + ( 1, 2 ) + ( 0, ) * 8,
-                                       field_values=( 1, 2 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 3 * 4 * 4, 2 * 3 * 3, 0, 0 ) )
-
-    # Fracture on a corner
-    inc = Incrementor( 3 * 4 * 4 )
-    collocated_nodes: IDMatrix = ( ( 1 + 12, ), ( 4 + 12, ), ( 7 + 12, ), ( 1 + 12 * 2, *inc.next( 1 ) ),
-                                   ( 4 + 12 * 2, *inc.next( 1 ) ), ( 7 + 12 * 2, ), ( 1 + 12 * 3, *inc.next( 1 ) ),
-                                   ( 4 + 12 * 3, *inc.next( 1 ) ), ( 7 + 12 * 3, ) )
-    mesh, options = __build_test_case( ( three_nodes, four_nodes, four_nodes ),
-                                       ( 0, ) * 6 + ( 1, 2, 1, 2, 0, 0, 1, 2, 1, 2, 0, 0 ),
-                                       field_values=( 1, 2 ) )
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 3 * 4 * 4 + 4, 2 * 3 * 3, 9, 4 ) )
-
-    # Generate mesh with 2 hexs, one being a standard hex, the other a 42 hex.
-    inc = Incrementor( 3 * 2 * 2 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 1 + 3, *inc.next( 1 ) ), ( 1 + 6, *inc.next( 1 ) ),
-                                   ( 1 + 9, *inc.next( 1 ) ) )
-    mesh, options = __build_test_case( ( three_nodes, two_nodes, two_nodes ), ( 0, 1 ) )
-    polyhedron_mesh = vtkUnstructuredGrid()
-    polyhedron_mesh.SetPoints( mesh.GetPoints() )
-    polyhedron_mesh.Allocate( 2 )
-    polyhedron_mesh.InsertNextCell( VTK_HEXAHEDRON, to_vtk_id_list( ( 1, 2, 5, 4, 7, 8, 10, 11 ) ) )
-    poly = to_vtk_id_list( [ 6 ] + [ 4, 0, 1, 7, 6 ] + [ 4, 1, 4, 10, 7 ] + [ 4, 4, 3, 9, 10 ] + [ 4, 3, 0, 6, 9 ] +
-                           [ 4, 6, 7, 10, 9 ] + [ 4, 1, 0, 3, 4 ] )
-    polyhedron_mesh.InsertNextCell( VTK_POLYHEDRON, poly )
-    polyhedron_mesh.GetCellData().AddArray( mesh.GetCellData().GetArray( "attribute" ) )
-
-    yield TestCase( input_mesh=polyhedron_mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 4 * 4, 2, 4, 1 ) )
-
-    # Split in 2 using the internal fracture description
-    inc = Incrementor( 3 * 2 * 2 )
-    collocated_nodes: IDMatrix = ( ( 1, *inc.next( 1 ) ), ( 1 + 3, *inc.next( 1 ) ), ( 1 + 6, *inc.next( 1 ) ),
-                                   ( 1 + 9, *inc.next( 1 ) ) )
-    mesh, options = __build_test_case( ( three_nodes, two_nodes, two_nodes ),
-                                       attribute=( 0, 0, 0 ),
-                                       field_values=( 0, ),
-                                       policy=FracturePolicy.INTERNAL_SURFACES )
-    mesh.InsertNextCell( VTK_QUAD, to_vtk_id_list( ( 1, 4, 7, 10 ) ) )  # Add a fracture on the fly
-    yield TestCase( input_mesh=mesh,
-                    options=options,
-                    collocated_nodes=collocated_nodes,
-                    result=TestResult( 4 * 4, 3, 4, 1 ) )
-
-
-@pytest.mark.parametrize( "test_case", __generate_test_data() )
-def test_generate_fracture( test_case: TestCase ):
-    main_mesh, fracture_meshes = __split_mesh_on_fractures( test_case.input_mesh, test_case.options )
-    fracture_mesh: vtkUnstructuredGrid = fracture_meshes[ 0 ]
-    assert main_mesh.GetNumberOfPoints() == test_case.result.main_mesh_num_points
-    assert main_mesh.GetNumberOfCells() == test_case.result.main_mesh_num_cells
-
-    assert fracture_mesh.GetNumberOfPoints() == test_case.result.fracture_mesh_num_points
-    assert fracture_mesh.GetNumberOfCells() == test_case.result.fracture_mesh_num_cells
-
-    res = format_collocated_nodes( fracture_mesh )
-    assert res == test_case.collocated_nodes
-    assert len( res ) == test_case.result.fracture_mesh_num_points
-
-
-def add_simplified_field_for_cells( mesh: vtkUnstructuredGrid, field_name: str, field_dimension: int ):
-    """Reduce functionality obtained from src.geos.mesh.doctor.actions.generate_fracture.__add_fields
-    where the goal is to add a cell data array with incrementing values.
-
-    Args:
-        mesh (vtkUnstructuredGrid): Unstructured mesh.
-        field_name (str): Name of the field to add to CellData
-        field_dimension (int): Number of components for the field.
-    """
-    data = mesh.GetCellData()
-    n = mesh.GetNumberOfCells()
-    array = numpy.ones( ( n, field_dimension ), dtype=float )
-    array = numpy.arange( 1, n * field_dimension + 1 ).reshape( n, field_dimension )
-    vtk_array = numpy_to_vtk( array )
-    vtk_array.SetName( field_name )
-    data.AddArray( vtk_array )
-
-
-def find_borders_faces_rectilinear_grid( mesh: vtkUnstructuredGrid ) -> tuple[ FaceNodesCoords ]:
-    """
-       6+--------+7
-       /        /|
-      /        / |
-    4+--------+5 |
-     |        |  |
-     |      2+|  +3
-     |        | /
-     |        |/
-    0+--------+1
-
-    For a vtk rectilinear grid, gives the coordinates of each of its borders face nodes.
-
-    Args:
-        mesh (vtkUnstructuredGrid): Unstructured mesh.
-
-    Returns:
-        tuple[QuadCoords]: For a rectilinear grid, returns a tuple of 6 elements.
-    """
-    mesh_bounds: tuple[ float ] = mesh.GetBounds()
-    min_bound: Coordinates3D = [ mesh_bounds[ i ] for i in range( len( mesh_bounds ) ) if i % 2 == 0 ]
-    max_bound: Coordinates3D = [ mesh_bounds[ i ] for i in range( len( mesh_bounds ) ) if i % 2 == 1 ]
-    center: Coordinates3D = mesh.GetCenter()
-    face_diag: tuple[ float ] = ( ( max_bound[ 0 ] - min_bound[ 0 ] ) / 2, ( max_bound[ 1 ] - min_bound[ 1 ] ) / 2,
-                                  ( max_bound[ 2 ] - min_bound[ 2 ] ) / 2 )
-    node0: Coordinates3D = ( center[ 0 ] - face_diag[ 0 ], center[ 1 ] - face_diag[ 1 ], center[ 2 ] - face_diag[ 2 ] )
-    node1: Coordinates3D = ( center[ 0 ] + face_diag[ 0 ], center[ 1 ] - face_diag[ 1 ], center[ 2 ] - face_diag[ 2 ] )
-    node2: Coordinates3D = ( center[ 0 ] - face_diag[ 0 ], center[ 1 ] + face_diag[ 1 ], center[ 2 ] - face_diag[ 2 ] )
-    node3: Coordinates3D = ( center[ 0 ] + face_diag[ 0 ], center[ 1 ] + face_diag[ 1 ], center[ 2 ] - face_diag[ 2 ] )
-    node4: Coordinates3D = ( center[ 0 ] - face_diag[ 0 ], center[ 1 ] - face_diag[ 1 ], center[ 2 ] + face_diag[ 2 ] )
-    node5: Coordinates3D = ( center[ 0 ] + face_diag[ 0 ], center[ 1 ] - face_diag[ 1 ], center[ 2 ] + face_diag[ 2 ] )
-    node6: Coordinates3D = ( center[ 0 ] - face_diag[ 0 ], center[ 1 ] + face_diag[ 1 ], center[ 2 ] + face_diag[ 2 ] )
-    node7: Coordinates3D = ( center[ 0 ] + face_diag[ 0 ], center[ 1 ] + face_diag[ 1 ], center[ 2 ] + face_diag[ 2 ] )
-    faces: tuple[ FaceNodesCoords ] = ( ( node0, node1, node3, node2 ), ( node4, node5, node7, node6 ),
-                                        ( node0, node2, node6, node4 ), ( node1, node3, node7, node5 ),
-                                        ( node0, node1, node5, node4 ), ( node2, node3, node7, node6 ) )
-    return faces
-
-
-def add_quad( mesh: vtkUnstructuredGrid, face: FaceNodesCoords ):
-    """Adds a quad cell to each border of an unstructured mesh.
-
-    Args:
-        mesh (vtkUnstructuredGrid): Unstructured mesh.
-    """
-    points_coords = mesh.GetPoints().GetData()
-    quad: vtkQuad = vtkQuad()
-    ids_association: IDMapping = {}
-    for i in range( mesh.GetNumberOfPoints() ):
-        for j in range( len( face ) ):
-            if points_coords.GetTuple( i ) == face[ j ]:
-                ids_association[ i ] = j
-                break
-        if len( ids_association ) == 4:
-            break
-
-    for vertice_id, quad_coord_index in ids_association.items():
-        quad.GetPoints().InsertNextPoint( face[ quad_coord_index ] )
-        quad.GetPointIds().SetId( quad_coord_index, vertice_id )
-
-    mesh.InsertNextCell( quad.GetCellType(), quad.GetPointIds() )
-
-
-@pytest.mark.skip( "Test to be fixed" )
-def test_copy_fields_when_splitting_mesh():
-    """This test is designed to check the __copy_fields method from generate_fractures,
-    that will be called when using __split_mesh_on_fractures method from generate_fractures.
-    """
-    # Generating the rectilinear grid and its quads on all borders
-    x: numpy.array = numpy.array( [ 0, 1, 2 ] )
-    y: numpy.array = numpy.array( [ 0, 1 ] )
-    z: numpy.array = numpy.array( [ 0, 1 ] )
-    xyzs: XYZ = XYZ( x, y, z )
-    mesh: vtkUnstructuredGrid = build_rectilinear_blocks_mesh( [ xyzs ] )
-    assert mesh.GetCells().GetNumberOfCells() == 2
-    border_faces: tuple[ FaceNodesCoords ] = find_borders_faces_rectilinear_grid( mesh )
-    for face in border_faces:
-        add_quad( mesh, face )
-    assert mesh.GetCells().GetNumberOfCells() == 8
-    # Create a quad cell to represent the fracture surface.
-    fracture: FaceNodesCoords = ( ( 1.0, 0.0, 0.0 ), ( 1.0, 1.0, 0.0 ), ( 1.0, 1.0, 1.0 ), ( 1.0, 0.0, 1.0 ) )
-    add_quad( mesh, fracture )
-    assert mesh.GetCells().GetNumberOfCells() == 9
-    # Add a "TestField" array
-    assert mesh.GetCellData().GetNumberOfArrays() == 0
-    add_simplified_field_for_cells( mesh, "TestField", 1 )
-    assert mesh.GetCellData().GetNumberOfArrays() == 1
-    assert mesh.GetCellData().GetArrayName( 0 ) == "TestField"
-    testField_values: list[ int ] = vtk_to_numpy( mesh.GetCellData().GetArray( 0 ) ).tolist()
-    assert testField_values == [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
-    # Split the mesh along the fracture surface which is number 9 on TestField
-    options = Options( policy=FracturePolicy.INTERNAL_SURFACES,
-                       field="TestField",
-                       field_values_combined=frozenset( map( int, [ "9" ] ) ),
-                       field_values_per_fracture=[ frozenset( map( int, [ "9" ] ) ) ],
-                       mesh_VtkOutput=None,
-                       all_fractures_VtkOutput=None )
-    main_mesh, fracture_meshes = __split_mesh_on_fractures( mesh, options )
-    fracture_mesh: vtkUnstructuredGrid = fracture_meshes[ 0 ]
-    assert main_mesh.GetCellData().GetNumberOfArrays() == 1
-    assert fracture_mesh.GetCellData().GetNumberOfArrays() == 1
-    assert main_mesh.GetCellData().GetArrayName( 0 ) == "TestField"
-    assert fracture_mesh.GetCellData().GetArrayName( 0 ) == "TestField"
-    # Make sure that only 1 correct value is in "TestField" array for fracture_mesh, 9 values for main_mesh
-    fracture_mesh_values: list[ int ] = vtk_to_numpy( fracture_mesh.GetCellData().GetArray( 0 ) ).tolist()
-    main_mesh_values: list[ int ] = vtk_to_numpy( main_mesh.GetCellData().GetArray( 0 ) ).tolist()
-    assert fracture_mesh_values == [ 9 ]  # The value for the fracture surface
-    assert main_mesh_values == [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
-    # Test for invalid point field name
-    add_simplified_field_for_cells( mesh, "GLOBAL_IDS_POINTS", 1 )
-    with pytest.raises( ValueError ) as pytest_wrapped_e:
-        main_mesh, fracture_meshes = __split_mesh_on_fractures( mesh, options )
-    assert pytest_wrapped_e.type == ValueError
-    # Test for invalid cell field name
-    mesh: vtkUnstructuredGrid = build_rectilinear_blocks_mesh( [ xyzs ] )
-    border_faces: tuple[ FaceNodesCoords ] = find_borders_faces_rectilinear_grid( mesh )
-    for face in border_faces:
-        add_quad( mesh, face )
-    add_quad( mesh, fracture )
-    add_simplified_field_for_cells( mesh, "TestField", 1 )
-    add_simplified_field_for_cells( mesh, "GLOBAL_IDS_CELLS", 1 )
-    assert mesh.GetCellData().GetNumberOfArrays() == 2
-    with pytest.raises( ValueError ) as pytest_wrapped_e:
-        main_mesh, fracture_meshes = __split_mesh_on_fractures( mesh, options )
-    assert pytest_wrapped_e.type == ValueError
diff --git a/geos-mesh/tests/test_nonConformal.py b/geos-mesh/tests/test_nonConformal.py
new file mode 100644
index 00000000..231d01ed
--- /dev/null
+++ b/geos-mesh/tests/test_nonConformal.py
@@ -0,0 +1,63 @@
+import numpy
+from geos.mesh.doctor.actions.generateCube import buildRectilinearBlocksMesh, XYZ
+from geos.mesh.doctor.actions.nonConformal import Options, __action
+
+
+def test_twoCloseHexs():
+    delta = 1.e-6
+    tmp = numpy.arange( 2, dtype=float )
+    xyz0 = XYZ( tmp, tmp, tmp )
+    xyz1 = XYZ( tmp + 1 + delta, tmp, tmp )
+    mesh = buildRectilinearBlocksMesh( ( xyz0, xyz1 ) )
+
+    # Close enough, but points tolerance is too strict to consider the faces matching.
+    options = Options( angleTolerance=1., pointTolerance=delta / 2, faceTolerance=delta * 2 )
+    results = __action( mesh, options )
+    assert len( results.nonConformalCells ) == 1
+    assert set( results.nonConformalCells[ 0 ] ) == { 0, 1 }
+
+    # Close enough, and points tolerance is loose enough to consider the faces matching.
+    options = Options( angleTolerance=1., pointTolerance=delta * 2, faceTolerance=delta * 2 )
+    results = __action( mesh, options )
+    assert len( results.nonConformalCells ) == 0
+
+
+def test_twoDistantHexs():
+    delta = 1
+    tmp = numpy.arange( 2, dtype=float )
+    xyz0 = XYZ( tmp, tmp, tmp )
+    xyz1 = XYZ( tmp + 1 + delta, tmp, tmp )
+    mesh = buildRectilinearBlocksMesh( ( xyz0, xyz1 ) )
+
+    options = Options( angleTolerance=1., pointTolerance=delta / 2., faceTolerance=delta / 2. )
+
+    results = __action( mesh, options )
+    assert len( results.nonConformalCells ) == 0
+
+
+def test_twoCloseShiftedHexs():
+    deltaX, deltaY = 1.e-6, 0.5
+    tmp = numpy.arange( 2, dtype=float )
+    xyz0 = XYZ( tmp, tmp, tmp )
+    xyz1 = XYZ( tmp + 1 + deltaX, tmp + deltaY, tmp + deltaY )
+    mesh = buildRectilinearBlocksMesh( ( xyz0, xyz1 ) )
+
+    options = Options( angleTolerance=1., pointTolerance=deltaX * 2, faceTolerance=deltaX * 2 )
+
+    results = __action( mesh, options )
+    assert len( results.nonConformalCells ) == 1
+    assert set( results.nonConformalCells[ 0 ] ) == { 0, 1 }
+
+
+def test_bigElemNextToSmallElem():
+    delta = 1.e-6
+    tmp = numpy.arange( 2, dtype=float )
+    xyz0 = XYZ( tmp, tmp + 1, tmp + 1 )
+    xyz1 = XYZ( 3 * tmp + 1 + delta, 3 * tmp, 3 * tmp )
+    mesh = buildRectilinearBlocksMesh( ( xyz0, xyz1 ) )
+
+    options = Options( angleTolerance=1., pointTolerance=delta * 2, faceTolerance=delta * 2 )
+
+    results = __action( mesh, options )
+    assert len( results.nonConformalCells ) == 1
+    assert set( results.nonConformalCells[ 0 ] ) == { 0, 1 }
diff --git a/geos-mesh/tests/test_non_conformal.py b/geos-mesh/tests/test_non_conformal.py
deleted file mode 100644
index 9f6da41a..00000000
--- a/geos-mesh/tests/test_non_conformal.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy
-from geos.mesh.doctor.actions.generate_cube import build_rectilinear_blocks_mesh, XYZ
-from geos.mesh.doctor.actions.non_conformal import Options, __action
-
-
-def test_two_close_hexs():
-    delta = 1.e-6
-    tmp = numpy.arange( 2, dtype=float )
-    xyz0 = XYZ( tmp, tmp, tmp )
-    xyz1 = XYZ( tmp + 1 + delta, tmp, tmp )
-    mesh = build_rectilinear_blocks_mesh( ( xyz0, xyz1 ) )
-
-    # Close enough, but points tolerance is too strict to consider the faces matching.
-    options = Options( angle_tolerance=1., point_tolerance=delta / 2, face_tolerance=delta * 2 )
-    results = __action( mesh, options )
-    assert len( results.non_conformal_cells ) == 1
-    assert set( results.non_conformal_cells[ 0 ] ) == { 0, 1 }
-
-    # Close enough, and points tolerance is loose enough to consider the faces matching.
-    options = Options( angle_tolerance=1., point_tolerance=delta * 2, face_tolerance=delta * 2 )
-    results = __action( mesh, options )
-    assert len( results.non_conformal_cells ) == 0
-
-
-def test_two_distant_hexs():
-    delta = 1
-    tmp = numpy.arange( 2, dtype=float )
-    xyz0 = XYZ( tmp, tmp, tmp )
-    xyz1 = XYZ( tmp + 1 + delta, tmp, tmp )
-    mesh = build_rectilinear_blocks_mesh( ( xyz0, xyz1 ) )
-
-    options = Options( angle_tolerance=1., point_tolerance=delta / 2., face_tolerance=delta / 2. )
-
-    results = __action( mesh, options )
-    assert len( results.non_conformal_cells ) == 0
-
-
-def test_two_close_shifted_hexs():
-    delta_x, delta_y = 1.e-6, 0.5
-    tmp = numpy.arange( 2, dtype=float )
-    xyz0 = XYZ( tmp, tmp, tmp )
-    xyz1 = XYZ( tmp + 1 + delta_x, tmp + delta_y, tmp + delta_y )
-    mesh = build_rectilinear_blocks_mesh( ( xyz0, xyz1 ) )
-
-    options = Options( angle_tolerance=1., point_tolerance=delta_x * 2, face_tolerance=delta_x * 2 )
-
-    results = __action( mesh, options )
-    assert len( results.non_conformal_cells ) == 1
-    assert set( results.non_conformal_cells[ 0 ] ) == { 0, 1 }
-
-
-def test_big_elem_next_to_small_elem():
-    delta = 1.e-6
-    tmp = numpy.arange( 2, dtype=float )
-    xyz0 = XYZ( tmp, tmp + 1, tmp + 1 )
-    xyz1 = XYZ( 3 * tmp + 1 + delta, 3 * tmp, 3 * tmp )
-    mesh = build_rectilinear_blocks_mesh( ( xyz0, xyz1 ) )
-
-    options = Options( angle_tolerance=1., point_tolerance=delta * 2, face_tolerance=delta * 2 )
-
-    results = __action( mesh, options )
-    assert len( results.non_conformal_cells ) == 1
-    assert set( results.non_conformal_cells[ 0 ] ) == { 0, 1 }
diff --git a/geos-mesh/tests/test_reorient_mesh.py b/geos-mesh/tests/test_reorientMesh.py
similarity index 54%
rename from geos-mesh/tests/test_reorient_mesh.py
rename to geos-mesh/tests/test_reorientMesh.py
index dea8abdc..3f7277a8 100644
--- a/geos-mesh/tests/test_reorient_mesh.py
+++ b/geos-mesh/tests/test_reorientMesh.py
@@ -4,21 +4,21 @@
 from typing import Generator
 from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints
 from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, VTK_POLYHEDRON
-from geos.mesh.doctor.actions.reorient_mesh import reorient_mesh
-from geos.mesh.doctor.actions.vtk_polyhedron import FaceStream
-from geos.mesh.utils.genericHelpers import to_vtk_id_list, vtk_iter
+from geos.mesh.doctor.actions.reorientMesh import reorientMesh
+from geos.mesh.doctor.actions.vtkPolyhedron import FaceStream
+from geos.mesh.utils.genericHelpers import toVtkIdList, vtkIter
 
 
 @dataclass( frozen=True )
 class Expected:
     mesh: vtkUnstructuredGrid
-    face_stream: FaceStream
+    faceStream: FaceStream
 
 
-def __build_test_meshes() -> Generator[ Expected, None, None ]:
+def __buildTestMeshes() -> Generator[ Expected, None, None ]:
     # Creating the support nodes for the polyhedron.
     # It has a C shape and is actually non-convex, non star-shaped.
-    front_nodes = numpy.array( (
+    frontNodes = numpy.array( (
         ( 0, 0, 0 ),
         ( 3, 0, 0 ),
         ( 3, 1, 0 ),
@@ -28,17 +28,17 @@ def __build_test_meshes() -> Generator[ Expected, None, None ]:
         ( 3, 3, 0 ),
         ( 0, 3, 0 ),
     ),
-                               dtype=float )
-    front_nodes = numpy.array( front_nodes, dtype=float )
-    back_nodes = front_nodes - ( 0., 0., 1. )
+                              dtype=float )
+    frontNodes = numpy.array( frontNodes, dtype=float )
+    backNodes = frontNodes - ( 0., 0., 1. )
 
-    n = len( front_nodes )
+    n = len( frontNodes )
     points = vtkPoints()
     points.Allocate( 2 * n )
-    for coords in front_nodes:
+    for coords in frontNodes:
         points.InsertNextPoint( coords )
-    for coords in back_nodes:
+    for coords in backNodes:
         points.InsertNextPoint( coords )
 
     # Creating the polyhedron with faces all directed outward.
@@ -49,7 +49,7 @@
     # Creating the front faces
     faces.append( tuple( range( n ) ) )
     faces.append( tuple( reversed( range( n, 2 * n ) ) ) )
-    face_stream = FaceStream( faces )
+    faceStream = FaceStream( faces )
 
     # Creating multiple meshes, each time with one unique polyhedron,
     # but with different "face flip status".
@@ -57,33 +57,33 @@
     mesh = vtkUnstructuredGrid()
     mesh.Allocate( 1 )
     mesh.SetPoints( points )
-    mesh.InsertNextCell( VTK_POLYHEDRON, to_vtk_id_list( face_stream.dump() ) )
-    yield Expected( mesh=mesh, face_stream=face_stream )
+    mesh.InsertNextCell( VTK_POLYHEDRON, toVtkIdList( faceStream.dump() ) )
+    yield Expected( mesh=mesh, faceStream=faceStream )
 
     # Here, two faces are flipped.
     mesh = vtkUnstructuredGrid()
     mesh.Allocate( 1 )
     mesh.SetPoints( points )
-    mesh.InsertNextCell( VTK_POLYHEDRON, to_vtk_id_list( face_stream.flip_faces( ( 1, 2 ) ).dump() ) )
-    yield Expected( mesh=mesh, face_stream=face_stream )
+    mesh.InsertNextCell( VTK_POLYHEDRON, toVtkIdList( faceStream.flipFaces( ( 1, 2 ) ).dump() ) )
+    yield Expected( mesh=mesh, faceStream=faceStream )
 
     # Last, all faces are flipped.
     mesh = vtkUnstructuredGrid()
     mesh.Allocate( 1 )
     mesh.SetPoints( points )
-    mesh.InsertNextCell( VTK_POLYHEDRON, to_vtk_id_list( face_stream.flip_faces( range( len( faces ) ) ).dump() ) )
-    yield Expected( mesh=mesh, face_stream=face_stream )
+    mesh.InsertNextCell( VTK_POLYHEDRON, toVtkIdList( faceStream.flipFaces( range( len( faces ) ) ).dump() ) )
+    yield Expected( mesh=mesh, faceStream=faceStream )
 
 
-@pytest.mark.parametrize( "expected", __build_test_meshes() )
-def test_reorient_polyhedron( expected: Expected ):
-    output_mesh = reorient_mesh( expected.mesh, range( expected.mesh.GetNumberOfCells() ) )
-    assert output_mesh.GetNumberOfCells() == 1
-    assert output_mesh.GetCell( 0 ).GetCellType() == VTK_POLYHEDRON
-    face_stream_ids = vtkIdList()
-    output_mesh.GetFaceStream( 0, face_stream_ids )
+@pytest.mark.parametrize( "expected", __buildTestMeshes() )
+def test_reorientPolyhedron( expected: Expected ):
+    outputMesh = reorientMesh( expected.mesh, range( expected.mesh.GetNumberOfCells() ) )
+    assert outputMesh.GetNumberOfCells() == 1
+    assert outputMesh.GetCell( 0 ).GetCellType() == VTK_POLYHEDRON
+    faceStreamIds = vtkIdList()
+    outputMesh.GetFaceStream( 0, faceStreamIds )
     # Note that the following makes a raw (but simple) check.
     # But one may need to be more precise some day,
     # since triangular faces (0, 1, 2) and (1, 2, 0) should be considered as equivalent.
     # And the current simpler check does not consider this case.
-    assert tuple( vtk_iter( face_stream_ids ) ) == expected.face_stream.dump()
+    assert tuple( vtkIter( faceStreamIds ) ) == expected.faceStream.dump()
diff --git a/geos-mesh/tests/test_self_intersecting_elements.py b/geos-mesh/tests/test_selfIntersectingElements.py
similarity index 75%
rename from geos-mesh/tests/test_self_intersecting_elements.py
rename to geos-mesh/tests/test_selfIntersectingElements.py
index 45216f01..a1783e59 100644
--- a/geos-mesh/tests/test_self_intersecting_elements.py
+++ b/geos-mesh/tests/test_selfIntersectingElements.py
@@ -1,9 +1,9 @@
 from vtkmodules.vtkCommonCore import vtkPoints
 from vtkmodules.vtkCommonDataModel import vtkCellArray, vtkHexahedron, vtkUnstructuredGrid, VTK_HEXAHEDRON
-from geos.mesh.doctor.actions.self_intersecting_elements import Options, __action
+from geos.mesh.doctor.actions.selfIntersectingElements import Options, __action
 
 
-def test_jumbled_hex():
+def test_jumbledHex():
     # creating a simple hexahedron
     points = vtkPoints()
     points.SetNumberOfPoints( 8 )
@@ -16,7 +16,7 @@
     points.SetPoint( 6, ( 1, 1, 1 ) )
     points.SetPoint( 7, ( 0, 1, 1 ) )
 
-    cell_types = [ VTK_HEXAHEDRON ]
+    cellTypes = [ VTK_HEXAHEDRON ]
     cells = vtkCellArray()
     cells.AllocateExact( 1, 8 )
 
@@ -33,9 +33,9 @@
     mesh = vtkUnstructuredGrid()
     mesh.SetPoints( points )
-    mesh.SetCells( cell_types, cells )
+    mesh.SetCells( cellTypes, cells )
 
-    result = __action( mesh, Options( min_distance=0. ) )
+    result = __action( mesh, Options( minDistance=0. ) )
-    assert len( result.intersecting_faces_elements ) == 1
-    assert result.intersecting_faces_elements[ 0 ] == 0
+    assert len( result.intersectingFacesElements ) == 1
+    assert result.intersectingFacesElements[ 0 ] == 0
diff --git a/geos-mesh/tests/test_sharedChecksParsingLogic.py b/geos-mesh/tests/test_sharedChecksParsingLogic.py
new file mode 100644
index 00000000..ec6420f5
--- /dev/null
+++ b/geos-mesh/tests/test_sharedChecksParsingLogic.py
@@ -0,0 +1,172 @@
+import argparse
+from dataclasses import dataclass
+import pytest
+from unittest.mock import patch
+# Import the module to test
+from geos.mesh.doctor.actions.allChecks import Options as AllChecksOptions
+from geos.mesh.doctor.actions.allChecks import Result as AllChecksResult
+from geos.mesh.doctor.parsing._sharedChecksParsingLogic import ( CheckFeature, _generateParametersHelp,
+                                                                 getOptionsUsedMessage, fillSubparser, convert,
+                                                                 displayResults, CHECKS_TO_DO_ARG, PARAMETERS_ARG )
+
+
+# Mock dataclasses and functions we depend on
+@dataclass
+class MockOptions:
+    param1: float = 1.0
+    param2: float = 2.0
+
+
+@dataclass
+class MockResult:
+    value: str = "testResult"
+
+
+def mockDisplayFunc( options, result ):
+    pass
+
+
+@pytest.fixture
+def checkFeaturesConfig():
+    return {
+        "check1":
+        CheckFeature( name="check1",
+                      optionsCls=MockOptions,
+                      resultCls=MockResult,
+                      defaultParams={
+                          "param1": 1.0,
+                          "param2": 2.0
+                      },
+                      display=mockDisplayFunc ),
+        "check2":
+        CheckFeature( name="check2",
+                      optionsCls=MockOptions,
+                      resultCls=MockResult,
+                      defaultParams={
+                          "param1": 3.0,
+                          "param2": 4.0
+                      },
+                      display=mockDisplayFunc )
+    }
+
+
+@pytest.fixture
+def orderedCheckNames():
+    return [ "check1", "check2" ]
+
+
+def test_generateParametersHelp( checkFeaturesConfig, orderedCheckNames ):
+    helpText = _generateParametersHelp( orderedCheckNames, checkFeaturesConfig )
+    assert "For check1: param1:1.0, param2:2.0" in helpText
+    assert "For check2: param1:3.0, param2:4.0" in helpText
+
+
+def test_getOptionsUsedMessage():
+    options = MockOptions( param1=10.0, param2=20.0 )
+    message = getOptionsUsedMessage( options )
+    assert "Parameters used: (" in message
+    assert "param1 = 10.0" in message
+    assert "param2 = 20.0" in message
+    assert ")." in message
+
+
+def test_fillSubparser( checkFeaturesConfig, orderedCheckNames ):
+    parser = argparse.ArgumentParser()
+    subparsers = parser.add_subparsers( dest="command" )
+    fillSubparser( subparsers, "test-command", "Test help message", orderedCheckNames, checkFeaturesConfig )
+    # Parse with no args should use defaults
+    args = parser.parse_args( [ "test-command" ] )
+    assert getattr( args, CHECKS_TO_DO_ARG ) == ""
+    assert getattr( args, PARAMETERS_ARG ) == ""
+    # Parse with specified args
+    args = parser.parse_args(
+        [ "test-command", f"--{CHECKS_TO_DO_ARG}", "check1", f"--{PARAMETERS_ARG}", "param1:10.5" ] )
+    assert getattr( args, CHECKS_TO_DO_ARG ) == "check1"
+    assert getattr( args, PARAMETERS_ARG ) == "param1:10.5"
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertDefaultChecks( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "" }
+    options = convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+    assert options.checksToPerform == orderedCheckNames
+    assert len( options.checksOptions ) == 2
+    assert options.checksOptions[ "check1" ].param1 == 1.0
+    assert options.checksOptions[ "check2" ].param2 == 4.0
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertSpecificChecks( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "check1", PARAMETERS_ARG: "" }
+    options = convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+    assert options.checksToPerform == [ "check1" ]
+    assert len( options.checksOptions ) == 1
+    assert "check1" in options.checksOptions
+    assert "check2" not in options.checksOptions
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertWithParameters( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "param1:10.5,param2:20.5" }
+    options = convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+    assert options.checksToPerform == orderedCheckNames
+    assert options.checksOptions[ "check1" ].param1 == 10.5
+    assert options.checksOptions[ "check1" ].param2 == 20.5
+    assert options.checksOptions[ "check2" ].param1 == 10.5
+    assert options.checksOptions[ "check2" ].param2 == 20.5
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertWithInvalidParameters( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "param1:invalid,param2:20.5" }
+    options = convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+    # The invalid parameter should be skipped, but the valid one applied
+    assert options.checksOptions[ "check1" ].param1 == 1.0  # Default maintained
+    assert options.checksOptions[ "check1" ].param2 == 20.5  # Updated
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertWithInvalidCheck( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "invalid_check,check1", PARAMETERS_ARG: "" }
+    options = convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+    # The invalid check should be skipped
+    assert options.checksToPerform == [ "check1" ]
+    assert "check1" in options.checksOptions
+    assert "invalid_check" not in options.checksOptions
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_convertWithAllInvalidChecks( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    parsedArgs = { CHECKS_TO_DO_ARG: "invalid_check1,invalid_check2", PARAMETERS_ARG: "" }
+    # Should raise ValueError since no valid checks were selected
+    with pytest.raises( ValueError, match="No valid checks were selected" ):
+        convert( parsedArgs, orderedCheckNames, checkFeaturesConfig )
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_displayResultsWithChecks( mockLogger, checkFeaturesConfig, orderedCheckNames ):
+    options = AllChecksOptions( checksToPerform=[ "check1", "check2" ],
+                                checksOptions={
+                                    "check1": MockOptions(),
+                                    "check2": MockOptions()
+                                },
+                                checkDisplays={
+                                    "check1": mockDisplayFunc,
+                                    "check2": mockDisplayFunc
+                                } )
+    result = AllChecksResult( checkResults={
+        "check1": MockResult( value="result1" ),
+        "check2": MockResult( value="result2" )
+    } )
+    displayResults( options, result )
+    # Check that results logger was called for each check
+    assert mockLogger.results.call_count >= 2
+
+
+@patch( 'geos.mesh.doctor.parsing._sharedChecksParsingLogic.setupLogger' )
+def test_displayResultsNoChecks( mockLogger ):
+    options = AllChecksOptions( checksToPerform=[], checksOptions={}, checkDisplays={} )
+    result = AllChecksResult( checkResults={} )
+    displayResults( options, result )
+    # Should display a message that no checks were performed
+    mockLogger.results.assert_called_with( "No checks were performed or all failed during configuration." )
diff --git a/geos-mesh/tests/test_shared_checks_parsing_logic.py b/geos-mesh/tests/test_shared_checks_parsing_logic.py
deleted file mode 100644
index f02c697f..00000000
--- a/geos-mesh/tests/test_shared_checks_parsing_logic.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import argparse
-from dataclasses import dataclass
-import pytest
-from unittest.mock import patch
-# Import the module to test
-from geos.mesh.doctor.actions.all_checks import Options as AllChecksOptions
-from geos.mesh.doctor.actions.all_checks import Result as AllChecksResult
-from geos.mesh.doctor.parsing._shared_checks_parsing_logic import ( CheckFeature, _generate_parameters_help,
-                                                                    get_options_used_message, fill_subparser, convert,
-                                                                    display_results, CHECKS_TO_DO_ARG, PARAMETERS_ARG )
-
-
-# Mock dataclasses and functions we depend on
-@dataclass
-class MockOptions:
-    param1: float = 1.0
-    param2: float = 2.0
-
-
-@dataclass
-class MockResult:
-    value: str = "test_result"
-
-
-def mock_display_func( options, result ):
-    pass
-
-
-@pytest.fixture
-def check_features_config():
-    return {
-        "check1":
-        CheckFeature( name="check1",
-                      options_cls=MockOptions,
-                      result_cls=MockResult,
-                      default_params={
-                          "param1": 1.0,
-                          "param2": 2.0
-                      },
-                      display=mock_display_func ),
-        "check2":
-        CheckFeature( name="check2",
-                      options_cls=MockOptions,
-                      result_cls=MockResult,
-                      default_params={
-                          "param1": 3.0,
-                          "param2": 4.0
-                      },
-                      display=mock_display_func )
-    }
-
-
-@pytest.fixture
-def ordered_check_names():
-    return [ "check1", "check2" ]
-
-
-def test_generate_parameters_help( check_features_config, ordered_check_names ):
-    help_text = _generate_parameters_help( ordered_check_names, check_features_config )
-    assert "For check1: param1:1.0, param2:2.0" in help_text
-    assert "For check2: param1:3.0, param2:4.0" in help_text
-
-
-def test_get_options_used_message():
-    options = MockOptions( param1=10.0, param2=20.0 )
-    message = get_options_used_message( options )
-    assert "Parameters used: (" in message
-    assert "param1 = 10.0" in message
-    assert "param2 = 20.0" in message
-    assert ")." in message
-
-
-def test_fill_subparser( check_features_config, ordered_check_names ):
-    parser = argparse.ArgumentParser()
-    subparsers = parser.add_subparsers( dest="command" )
-    fill_subparser( subparsers, "test-command", "Test help message", ordered_check_names, check_features_config )
-    # Parse with no args should use defaults
-    args = parser.parse_args( [ "test-command" ] )
-    assert getattr( args, CHECKS_TO_DO_ARG ) == ""
-    assert getattr( args, PARAMETERS_ARG ) == ""
-    # Parse with specified args
-    args = parser.parse_args(
-        [ "test-command", f"--{CHECKS_TO_DO_ARG}", "check1", f"--{PARAMETERS_ARG}", "param1:10.5" ] )
-    assert getattr( args, CHECKS_TO_DO_ARG ) == "check1"
-    assert getattr( args, PARAMETERS_ARG ) == "param1:10.5"
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_default_checks( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "" }
-    options = convert( parsed_args, ordered_check_names, check_features_config )
-    assert options.checks_to_perform == ordered_check_names
-    assert len( options.checks_options ) == 2
-    assert options.checks_options[ "check1" ].param1 == 1.0
-    assert options.checks_options[ "check2" ].param2 == 4.0
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_specific_checks( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "check1", PARAMETERS_ARG: "" }
-    options = convert( parsed_args, ordered_check_names, check_features_config )
-    assert options.checks_to_perform == [ "check1" ]
-    assert len( options.checks_options ) == 1
-    assert "check1" in options.checks_options
-    assert "check2" not in options.checks_options
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_with_parameters( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "param1:10.5,param2:20.5" }
-    options = convert( parsed_args, ordered_check_names, check_features_config )
-    assert options.checks_to_perform == ordered_check_names
-    assert options.checks_options[ "check1" ].param1 == 10.5
-    assert options.checks_options[ "check1" ].param2 == 20.5
-    assert options.checks_options[ "check2" ].param1 == 10.5
-    assert options.checks_options[ "check2" ].param2 == 20.5
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_with_invalid_parameters( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "", PARAMETERS_ARG: "param1:invalid,param2:20.5" }
-    options = convert( parsed_args, ordered_check_names, check_features_config )
-    # The invalid parameter should be skipped, but the valid one applied
-    assert options.checks_options[ "check1" ].param1 == 1.0  # Default maintained
-    assert options.checks_options[ "check1" ].param2 == 20.5  # Updated
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_with_invalid_check( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "invalid_check,check1", PARAMETERS_ARG: "" }
-    options = convert( parsed_args, ordered_check_names, check_features_config )
-    # The invalid check should be skipped
-    assert options.checks_to_perform == [ "check1" ]
-    assert "check1" in options.checks_options
-    assert "invalid_check" not in options.checks_options
-
-
-@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' )
-def test_convert_with_all_invalid_checks( mock_logger, check_features_config, ordered_check_names ):
-    parsed_args = { CHECKS_TO_DO_ARG: "invalid_check1,invalid_check2", PARAMETERS_ARG: "" }
-    # Should raise ValueError since no valid checks were selected
-    with pytest.raises( ValueError,
match="No valid checks were selected" ): - convert( parsed_args, ordered_check_names, check_features_config ) - - -@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' ) -def test_display_results_with_checks( mock_logger, check_features_config, ordered_check_names ): - options = AllChecksOptions( checks_to_perform=[ "check1", "check2" ], - checks_options={ - "check1": MockOptions(), - "check2": MockOptions() - }, - check_displays={ - "check1": mock_display_func, - "check2": mock_display_func - } ) - result = AllChecksResult( check_results={ - "check1": MockResult( value="result1" ), - "check2": MockResult( value="result2" ) - } ) - display_results( options, result ) - # Check that results logger was called for each check - assert mock_logger.results.call_count >= 2 - - -@patch( 'geos.mesh.doctor.parsing._shared_checks_parsing_logic.setup_logger' ) -def test_display_results_no_checks( mock_logger ): - options = AllChecksOptions( checks_to_perform=[], checks_options={}, check_displays={} ) - result = AllChecksResult( check_results={} ) - display_results( options, result ) - # Should display a message that no checks were performed - mock_logger.results.assert_called_with( "No checks were performed or all failed during configuration." 
) diff --git a/geos-mesh/tests/test_supported_elements.py b/geos-mesh/tests/test_supportedElements.py similarity index 70% rename from geos-mesh/tests/test_supported_elements.py rename to geos-mesh/tests/test_supportedElements.py index 07321abc..4e8ecc28 100644 --- a/geos-mesh/tests/test_supported_elements.py +++ b/geos-mesh/tests/test_supportedElements.py @@ -1,29 +1,30 @@ # import os import pytest -from typing import Tuple from vtkmodules.vtkCommonCore import vtkIdList, vtkPoints from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, VTK_POLYHEDRON -# from geos.mesh.doctor.actions.supported_elements import Options, action, __action -from geos.mesh.doctor.actions.vtk_polyhedron import parse_face_stream, FaceStream -from geos.mesh.utils.genericHelpers import to_vtk_id_list +# from geos.mesh.doctor.actions.supportedElements import Options, action, __action +from geos.mesh.doctor.actions.vtkPolyhedron import parseFaceStream, FaceStream +from geos.mesh.utils.genericHelpers import toVtkIdList # TODO Update this test to have access to another meshTests file -@pytest.mark.parametrize( "base_name", ( "supportedElements.vtk", "supportedElementsAsVTKPolyhedra.vtk" ) ) -def test_supported_elements( base_name ) -> None: +@pytest.mark.parametrize( "baseName", ( "supportedElements.vtk", "supportedElementsAsVTKPolyhedra.vtk" ) ) +def test_supportedElements( baseName: str ) -> None: """Testing that the supported elements are properly detected as supported! - :param base_name: Supported elements are provided as standard elements or polyhedron elements. + + Args: + baseName (str): Supported elements are provided as standard elements or polyhedron elements. """ ... 
# directory = os.path.dirname( os.path.realpath( __file__ ) ) - # supported_elements_file_name = os.path.join( directory, "../../../../unitTests/meshTests", base_name ) - # options = Options( chunk_size=1, num_proc=4 ) - # result = check( supported_elements_file_name, options ) - # assert not result.unsupported_std_elements_types - # assert not result.unsupported_polyhedron_elements + # supportedElementsFileName = os.path.join( directory, "../../../../unitTests/meshTests", baseName ) + # options = Options( chunkSize=1, numProc=4 ) + # result = check( supportedElementsFileName, options ) + # assert not result.unsupportedStdElementsTypes + # assert not result.unsupportedPolyhedronElements -def make_dodecahedron() -> Tuple[ vtkPoints, vtkIdList ]: +def makeDodecahedron() -> tuple[ vtkPoints, vtkIdList ]: """Returns the points and faces for a dodecahedron. This code was adapted from an official vtk example. :return: The tuple of points and faces (as vtk instances). @@ -72,7 +73,7 @@ def make_dodecahedron() -> Tuple[ vtkPoints, vtkIdList ]: for coords in points: p.InsertNextPoint( coords ) - f = to_vtk_id_list( faces ) + f = toVtkIdList( faces ) return p, f @@ -81,7 +82,7 @@ def make_dodecahedron() -> Tuple[ vtkPoints, vtkIdList ]: def test_dodecahedron() -> None: """Tests whether a dodecahedron is supported by GEOS or not.
""" - points, faces = make_dodecahedron() + points, faces = makeDodecahedron() mesh = vtkUnstructuredGrid() mesh.Allocate( 1 ) mesh.SetPoints( points ) @@ -93,9 +94,9 @@ def test_dodecahedron() -> None: # assert not result.unsupported_std_elements_types -def test_parse_face_stream() -> None: - _, faces = make_dodecahedron() - result = parse_face_stream( faces ) +def test_parseFaceStream() -> None: + _, faces = makeDodecahedron() + result = parseFaceStream( faces ) # yapf: disable expected = ( (0, 1, 2, 3, 4), @@ -113,6 +114,6 @@ def test_parse_face_stream() -> None: ) # yapf: enable assert result == expected - face_stream = FaceStream.build_from_vtk_id_list( faces ) - assert face_stream.num_faces == 12 - assert face_stream.num_support_points == 20 + face_stream = FaceStream.buildFromVtkIdList( faces ) + assert face_stream.numFaces == 12 + assert face_stream.numSupportPoints == 20 diff --git a/geos-mesh/tests/test_triangle_distance.py b/geos-mesh/tests/test_triangleDistance.py similarity index 50% rename from geos-mesh/tests/test_triangle_distance.py rename to geos-mesh/tests/test_triangleDistance.py index 96274f14..326585b6 100644 --- a/geos-mesh/tests/test_triangle_distance.py +++ b/geos-mesh/tests/test_triangleDistance.py @@ -2,7 +2,7 @@ import numpy from numpy.linalg import norm import pytest -from geos.mesh.doctor.actions.triangle_distance import distance_between_two_segments, distance_between_two_triangles +from geos.mesh.doctor.actions.triangleDistance import distanceBetweenTwoSegments, distanceBetweenTwoTriangles @dataclass( frozen=True ) @@ -15,14 +15,14 @@ class ExpectedSeg: y: numpy.array @classmethod - def from_tuples( cls, p0, u0, p1, u1, x, y ): + def fromTuples( cls, p0, u0, p1, u1, x, y ): return cls( numpy.array( p0 ), numpy.array( u0 ), numpy.array( p1 ), numpy.array( u1 ), numpy.array( x ), numpy.array( y ) ) -def __get_segments_references(): +def __getSegmentsReferences(): # Node to node configuration. 
- yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., 0. ), u0=( 1., 0., 0. ), p1=( 2., 0., 0. ), @@ -31,7 +31,7 @@ def __get_segments_references(): y=( 2., 0., 0. ), ) # Node to edge configuration. - yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., 0. ), u0=( 1., 0., 0. ), p1=( 2., -1., -1. ), @@ -40,7 +40,7 @@ def __get_segments_references(): y=( 2., 0., 0. ), ) # Edge to edge configuration. - yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., -1. ), u0=( 0., 0., 2. ), p1=( 1., -1., -1. ), @@ -51,7 +51,7 @@ def __get_segments_references(): # Example from "On fast computation of distance between line segments" by Vladimir J. Lumelsky. # Information Processing Letters, Vol. 21, number 2, pages 55-61, 08/16/1985. # It's a node to edge configuration. - yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., 0. ), u0=( 1., 2., 1. ), p1=( 1., 0., 0. ), @@ -60,7 +60,7 @@ def __get_segments_references(): y=( 1., 0., 0. ), ) # Overlapping edges. - yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., 0. ), u0=( 2., 0., 0. ), p1=( 1., 0., 0. ), @@ -69,7 +69,7 @@ def __get_segments_references(): y=( 0., 0., 0. ), ) # Crossing edges. - yield ExpectedSeg.from_tuples( + yield ExpectedSeg.fromTuples( p0=( 0., 0., 0. ), u0=( 2., 0., 0. ), p1=( 1., -1., 0. ), @@ -79,10 +79,10 @@ def __get_segments_references(): ) -@pytest.mark.parametrize( "expected", __get_segments_references() ) +@pytest.mark.parametrize( "expected", __getSegmentsReferences() ) def test_segments( expected: ExpectedSeg ): eps = numpy.finfo( float ).eps - x, y = distance_between_two_segments( expected.p0, expected.u0, expected.p1, expected.u1 ) + x, y = distanceBetweenTwoSegments( expected.p0, expected.u0, expected.p1, expected.u1 ) if norm( expected.x - expected.y ) == 0: assert norm( x - y ) == 0. 
else: @@ -99,53 +99,53 @@ class ExpectedTri: p1: numpy.array @classmethod - def from_tuples( cls, t0, t1, d, p0, p1 ): + def fromTuples( cls, t0, t1, d, p0, p1 ): return cls( numpy.array( t0 ), numpy.array( t1 ), float( d ), numpy.array( p0 ), numpy.array( p1 ) ) -def __get_triangles_references(): +def __getTrianglesReferences(): # Node to node configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), - t1=( ( 2., 0., 0. ), ( 3., 0., 0. ), ( 2., 1., 1. ) ), - d=1., - p0=( 1., 0., 0. ), - p1=( 2., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), + t1=( ( 2., 0., 0. ), ( 3., 0., 0. ), ( 2., 1., 1. ) ), + d=1., + p0=( 1., 0., 0. ), + p1=( 2., 0., 0. ) ) # Node to edge configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), - t1=( ( 2., -1., 0. ), ( 3., 0., 0. ), ( 2., 1., 0. ) ), - d=1., - p0=( 1., 0., 0. ), - p1=( 2., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), + t1=( ( 2., -1., 0. ), ( 3., 0., 0. ), ( 2., 1., 0. ) ), + d=1., + p0=( 1., 0., 0. ), + p1=( 2., 0., 0. ) ) # Edge to edge configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 1., 1., 1. ), ( 1., -1., -1. ) ), - t1=( ( 2., -1., 0. ), ( 2., 1., 0. ), ( 3., 0., 0. ) ), - d=1., - p0=( 1., 0., 0. ), - p1=( 2., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 1., 1., 1. ), ( 1., -1., -1. ) ), + t1=( ( 2., -1., 0. ), ( 2., 1., 0. ), ( 3., 0., 0. ) ), + d=1., + p0=( 1., 0., 0. ), + p1=( 2., 0., 0. ) ) # Point to face configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), - t1=( ( 2., -1., 0. ), ( 2., 1., -1. ), ( 2, 1., 1. ) ), - d=1., - p0=( 1., 0., 0. ), - p1=( 2., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), + t1=( ( 2., -1., 0. ), ( 2., 1., -1. ), ( 2, 1., 1. 
) ), + d=1., + p0=( 1., 0., 0. ), + p1=( 2., 0., 0. ) ) # Same triangles configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), - t1=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), - d=0., - p0=( 0., 0., 0. ), - p1=( 0., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), + t1=( ( 0., 0., 0. ), ( 1., 0., 0. ), ( 0., 1., 1. ) ), + d=0., + p0=( 0., 0., 0. ), + p1=( 0., 0., 0. ) ) # Crossing triangles configuration. - yield ExpectedTri.from_tuples( t0=( ( 0., 0., 0. ), ( 2., 0., 0. ), ( 2., 0., 1. ) ), - t1=( ( 1., -1., 0. ), ( 1., 1., 0. ), ( 1., 1., 1. ) ), - d=0., - p0=( 0., 0., 0. ), - p1=( 0., 0., 0. ) ) + yield ExpectedTri.fromTuples( t0=( ( 0., 0., 0. ), ( 2., 0., 0. ), ( 2., 0., 1. ) ), + t1=( ( 1., -1., 0. ), ( 1., 1., 0. ), ( 1., 1., 1. ) ), + d=0., + p0=( 0., 0., 0. ), + p1=( 0., 0., 0. ) ) -@pytest.mark.parametrize( "expected", __get_triangles_references() ) +@pytest.mark.parametrize( "expected", __getTrianglesReferences() ) def test_triangles( expected: ExpectedTri ): eps = numpy.finfo( float ).eps - d, p0, p1 = distance_between_two_triangles( expected.t0, expected.t1 ) + d, p0, p1 = distanceBetweenTwoTriangles( expected.t0, expected.t1 ) assert abs( d - expected.d ) < eps if d != 0: assert norm( p0 - expected.p0 ) < eps diff --git a/geos-mesh/tests/test_vtkIO.py b/geos-mesh/tests/test_vtkIO.py new file mode 100644 index 00000000..c25462cb --- /dev/null +++ b/geos-mesh/tests/test_vtkIO.py @@ -0,0 +1,441 @@ +import pytest +import numpy as np +from vtkmodules.vtkCommonCore import vtkPoints +from vtkmodules.vtkCommonDataModel import vtkUnstructuredGrid, vtkStructuredGrid, VTK_TETRA +from geos.mesh.utils.genericHelpers import createSingleCellMesh +from geos.mesh.io.vtkIO import ( VtkFormat, VtkOutput, readMesh, readUnstructuredGrid, writeMesh, XML_FORMATS, + WRITER_MAP ) + +__doc__ = """ +Test module for vtkIO module. 
+Tests the functionality of reading and writing various VTK file formats. +Note: we use pytest's "tmp_path" fixture in several of these tests to create temporary files. +""" + + +@pytest.fixture( scope="module" ) +def simpleUnstructuredMesh(): + """Fixture for a simple unstructured mesh with a tetrahedron.""" + return createSingleCellMesh( VTK_TETRA, np.array( [ [ 0, 0, 0 ], [ 1, 0, 0 ], [ 0, 1, 0 ], [ 0, 0, 1 ] ] ) ) + + +@pytest.fixture( scope="module" ) +def structuredMesh(): + """Fixture for a simple structured grid.""" + mesh = vtkStructuredGrid() + mesh.SetDimensions( 2, 2, 2 ) + + points = vtkPoints() + for k in range( 2 ): + for j in range( 2 ): + for i in range( 2 ): + points.InsertNextPoint( i, j, k ) + + mesh.SetPoints( points ) + return mesh + + +class TestVtkFormat: + """Test class for VtkFormat enumeration.""" + + def test_vtkFormatValues( self ): + """Test that VtkFormat enum has correct values.""" + assert VtkFormat.VTK.value == ".vtk" + assert VtkFormat.VTS.value == ".vts" + assert VtkFormat.VTU.value == ".vtu" + assert VtkFormat.VTI.value == ".vti" + assert VtkFormat.VTP.value == ".vtp" + assert VtkFormat.VTR.value == ".vtr" + assert VtkFormat.PVTU.value == ".pvtu" + assert VtkFormat.PVTS.value == ".pvts" + assert VtkFormat.PVTI.value == ".pvti" + assert VtkFormat.PVTP.value == ".pvtp" + assert VtkFormat.PVTR.value == ".pvtr" + + def test_vtkFormatFromString( self ): + """Test creating VtkFormat from string values.""" + assert VtkFormat( ".vtk" ) == VtkFormat.VTK + assert VtkFormat( ".vtu" ) == VtkFormat.VTU + assert VtkFormat( ".vts" ) == VtkFormat.VTS + assert VtkFormat( ".vti" ) == VtkFormat.VTI + assert VtkFormat( ".vtp" ) == VtkFormat.VTP + assert VtkFormat( ".vtr" ) == VtkFormat.VTR + assert VtkFormat( ".pvtu" ) == VtkFormat.PVTU + assert VtkFormat( ".pvts" ) == VtkFormat.PVTS + assert VtkFormat( ".pvti" ) == VtkFormat.PVTI + assert VtkFormat( ".pvtp" ) == VtkFormat.PVTP + assert VtkFormat( ".pvtr" ) == VtkFormat.PVTR
+ + def test_invalidFormat( self ): + """Test that invalid format raises ValueError.""" + with pytest.raises( ValueError ): + VtkFormat( ".invalid" ) + + +class TestVtkOutput: + """Test class for VtkOutput dataclass.""" + + def test_vtkOutputCreation( self ): + """Test VtkOutput creation with default parameters.""" + output = VtkOutput( "test.vtu" ) + assert output.output == "test.vtu" + assert output.isDataModeBinary is True + + def test_vtkOutputCreationCustom( self ): + """Test VtkOutput creation with custom parameters.""" + output = VtkOutput( "test.vtu", isDataModeBinary=False ) + assert output.output == "test.vtu" + assert output.isDataModeBinary is False + + def test_vtkOutputImmutable( self ): + """Test that VtkOutput is immutable (frozen dataclass).""" + output = VtkOutput( "test.vtu" ) + with pytest.raises( AttributeError ): + output.output = "new_test.vtu" + + +class TestMappings: + """Test class for reader and writer mappings.""" + + def test_xmlFormatsCompleteness( self ): + """Test that XML_FORMATS contains all XML-based readable formats.""" + expectedFormats = { + VtkFormat.VTU, VtkFormat.VTS, VtkFormat.VTI, VtkFormat.VTP, VtkFormat.VTR, VtkFormat.PVTU, VtkFormat.PVTS, + VtkFormat.PVTI, VtkFormat.PVTP, VtkFormat.PVTR + } + assert XML_FORMATS == expectedFormats + + def test_writerMapCompleteness( self ): + """Test that WRITER_MAP contains all writable formats.""" + expectedFormats = { VtkFormat.VTK, VtkFormat.VTS, VtkFormat.VTU } + assert set( WRITER_MAP.keys() ) == expectedFormats + + def test_writerMapClasses( self ): + """Test that WRITER_MAP contains valid writer classes.""" + for formatType, writerClass in WRITER_MAP.items(): + assert hasattr( writerClass, '__name__' ) + # All writers should be classes + assert isinstance( writerClass, type ) + + +class TestWriteMesh: + """Test class for writeMesh functionality.""" + + def test_writeVtuBinary( self, simpleUnstructuredMesh, tmp_path ): + """Test writing VTU file in binary mode.""" + outputFile = 
tmp_path / "testMesh.vtu" + vtkOutput = VtkOutput( str( outputFile ), isDataModeBinary=True ) + + result = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + assert result == 1 # VTK success code + assert outputFile.exists() + assert outputFile.stat().st_size > 0 + + def test_writeVtuAscii( self, simpleUnstructuredMesh, tmp_path ): + """Test writing VTU file in ASCII mode.""" + outputFile = tmp_path / "testMesh_ascii.vtu" + vtkOutput = VtkOutput( str( outputFile ), isDataModeBinary=False ) + + result = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + assert result == 1 # VTK success code + assert outputFile.exists() + assert outputFile.stat().st_size > 0 + + def test_writeVtkFormat( self, simpleUnstructuredMesh, tmp_path ): + """Test writing VTK legacy format.""" + outputFile = tmp_path / "testMesh.vtk" + vtkOutput = VtkOutput( str( outputFile ) ) + + result = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + assert result == 1 # VTK success code + assert outputFile.exists() + assert outputFile.stat().st_size > 0 + + def test_writeVtsFormat( self, structuredMesh, tmp_path ): + """Test writing VTS (structured grid) format.""" + outputFile = tmp_path / "testMesh.vts" + vtkOutput = VtkOutput( str( outputFile ) ) + + result = writeMesh( structuredMesh, vtkOutput, canOverwrite=True ) + + assert result == 1 # VTK success code + assert outputFile.exists() + assert outputFile.stat().st_size > 0 + + def test_writeFileExistsError( self, simpleUnstructuredMesh, tmp_path ): + """Test that writing to existing file raises error when canOverwrite=False.""" + outputFile = tmp_path / "existingFile.vtu" + outputFile.write_text( "dummy content" ) # Create existing file + + vtkOutput = VtkOutput( str( outputFile ) ) + + with pytest.raises( FileExistsError, match="already exists" ): + writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=False ) + + def test_writeUnsupportedFormat( self, simpleUnstructuredMesh, tmp_path 
): + """Test that writing unsupported format raises ValueError.""" + outputFile = tmp_path / "testMesh.unsupported" + vtkOutput = VtkOutput( str( outputFile ) ) + + with pytest.raises( ValueError, match="not supported" ): + writeMesh( simpleUnstructuredMesh, vtkOutput ) + + def test_writeOverwriteAllowed( self, simpleUnstructuredMesh, tmp_path ): + """Test that overwriting is allowed when canOverwrite=True.""" + outputFile = tmp_path / "overwrite_test.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + # First write + result1 = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert result1 == 1 + assert outputFile.exists() + + # Second write (overwrite) + result2 = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert result2 == 1 + assert outputFile.exists() + + +class TestReadMesh: + """Test class for readMesh functionality.""" + + def test_readNonexistentFile( self ): + """Test that reading nonexistent file raises FileNotFoundError.""" + with pytest.raises( FileNotFoundError, match="does not exist" ): + readMesh( "nonexistentFile.vtu" ) + + def test_readVtuFile( self, simpleUnstructuredMesh, tmp_path ): + """Test reading VTU file.""" + outputFile = tmp_path / "test_read.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + # First write the file + writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + # Then read it back + readMeshResult = readMesh( str( outputFile ) ) + + assert readMeshResult is not None + assert isinstance( readMeshResult, vtkUnstructuredGrid ) + assert readMeshResult.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() + assert readMeshResult.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + def test_readVtkFile( self, simpleUnstructuredMesh, tmp_path ): + """Test reading VTK legacy file.""" + outputFile = tmp_path / "test_read.vtk" + vtkOutput = VtkOutput( str( outputFile ) ) + + # First write the file + writeMesh( simpleUnstructuredMesh, vtkOutput, 
canOverwrite=True ) + + # Then read it back + readMeshResult = readMesh( str( outputFile ) ) + + assert readMeshResult is not None + assert isinstance( readMeshResult, vtkUnstructuredGrid ) + assert readMeshResult.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() + assert readMeshResult.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + def test_readVtsFile( self, structuredMesh, tmp_path ): + """Test reading VTS (structured grid) file.""" + outputFile = tmp_path / "test_read.vts" + vtkOutput = VtkOutput( str( outputFile ) ) + + # First write the file + writeMesh( structuredMesh, vtkOutput, canOverwrite=True ) + + # Then read it back + readMeshResult = readMesh( str( outputFile ) ) + + assert readMeshResult is not None + assert isinstance( readMeshResult, vtkStructuredGrid ) + assert readMeshResult.GetNumberOfPoints() == structuredMesh.GetNumberOfPoints() + + def test_readUnknownExtension( self, simpleUnstructuredMesh, tmp_path ): + """Test reading file with unknown extension falls back to trying all readers.""" + # Create a VTU file but with unknown extension + vtuFile = tmp_path / "test.vtu" + unknownFile = tmp_path / "test.unknown" + + # Write as VTU first + vtkOutput = VtkOutput( str( vtuFile ) ) + writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + # Copy to unknown extension + unknownFile.write_bytes( vtuFile.read_bytes() ) + + # Should still be able to read it + readMeshResult = readMesh( str( unknownFile ) ) + + assert readMeshResult is not None + assert isinstance( readMeshResult, vtkUnstructuredGrid ) + + def test_readInvalidFileContent( self, tmp_path ): + """Test that reading invalid file content raises ValueError.""" + invalidFile = tmp_path / "invalid.vtu" + invalidFile.write_text( "This is not a valid VTU file" ) + + with pytest.raises( ValueError, match="Failed to read file" ): + readMesh( str( invalidFile ) ) + + +class TestReadUnstructuredGrid: + """Test class for readUnstructuredGrid 
functionality.""" + + def test_readUnstructuredGridSuccess( self, simpleUnstructuredMesh, tmp_path ): + """Test successfully reading an unstructured grid.""" + outputFile = tmp_path / "test_ug.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Write unstructured grid + writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + + # Read back as unstructured grid + result = readUnstructuredGrid( str( outputFile ) ) + + assert isinstance( result, vtkUnstructuredGrid ) + assert result.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() + assert result.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + def test_readUnstructuredGridWrongType( self, structuredMesh, tmp_path ): + """Test that reading non-unstructured grid raises TypeError.""" + outputFile = tmp_path / "test_sg.vts" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Write structured grid + writeMesh( structuredMesh, vtkOutput, canOverwrite=True ) + + # Try to read as unstructured grid - should fail + with pytest.raises( TypeError, match="not the expected vtkUnstructuredGrid" ): + readUnstructuredGrid( str( outputFile ) ) + + def test_readUnstructuredGrid_nonexistent( self ): + """Test that reading nonexistent file raises FileNotFoundError.""" + with pytest.raises( FileNotFoundError, match="does not exist" ): + readUnstructuredGrid( "nonexistent.vtu" ) + + +class TestRoundTripReadWrite: + """Test class for round-trip read/write operations.""" + + def test_vtuRoundTripBinary( self, simpleUnstructuredMesh, tmp_path ): + """Test round-trip write and read for VTU binary format.""" + outputFile = tmp_path / "roundtrip_binary.vtu" + vtkOutput = VtkOutput( str( outputFile ), isDataModeBinary=True ) + + # Write + writeResult = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert writeResult == 1 + + # Read back + readResult = readUnstructuredGrid( str( outputFile ) ) + + # Compare + assert readResult.GetNumberOfPoints() == 
simpleUnstructuredMesh.GetNumberOfPoints() + assert readResult.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + # Check point coordinates are preserved + for i in range( readResult.GetNumberOfPoints() ): + origPoint = simpleUnstructuredMesh.GetPoint( i ) + readPoint = readResult.GetPoint( i ) + np.testing.assert_array_almost_equal( origPoint, readPoint, decimal=6 ) + + def test_vtuRoundTripAscii( self, simpleUnstructuredMesh, tmp_path ): + """Test round-trip write and read for VTU ASCII format.""" + outputFile = tmp_path / "roundtrip_ascii.vtu" + vtkOutput = VtkOutput( str( outputFile ), isDataModeBinary=False ) + + # Write + writeResult = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert writeResult == 1 + + # Read back + readResult = readUnstructuredGrid( str( outputFile ) ) + + # Compare + assert readResult.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() + assert readResult.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + def test_vtkRoundTrip( self, simpleUnstructuredMesh, tmp_path ): + """Test round-trip write and read for VTK legacy format.""" + outputFile = tmp_path / "roundtrip.vtk" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Write + writeResult = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert writeResult == 1 + + # Read back + readResult = readUnstructuredGrid( str( outputFile ) ) + + # Compare + assert readResult.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() + assert readResult.GetNumberOfCells() == simpleUnstructuredMesh.GetNumberOfCells() + + def test_vtsRoundTrip( self, structuredMesh, tmp_path ): + """Test round-trip write and read for VTS format.""" + outputFile = tmp_path / "roundtrip.vts" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Write + writeResult = writeMesh( structuredMesh, vtkOutput, canOverwrite=True ) + assert writeResult == 1 + + # Read back + readResult = readMesh( str( outputFile ) ) + + # 
Compare + assert isinstance( readResult, vtkStructuredGrid ) + assert readResult.GetNumberOfPoints() == structuredMesh.GetNumberOfPoints() + + +class TestEdgeCases: + """Test class for edge cases and error conditions.""" + + def test_emptyMeshWrite( self, tmp_path ): + """Test writing an empty mesh.""" + emptyMesh = vtkUnstructuredGrid() + outputFile = tmp_path / "empty.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + result = writeMesh( emptyMesh, vtkOutput, canOverwrite=True ) + assert result == 1 + assert outputFile.exists() + + def test_emptyMeshRoundTrip( self, tmp_path ): + """Test round-trip with empty mesh.""" + emptyMesh = vtkUnstructuredGrid() + outputFile = tmp_path / "empty_roundtrip.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Write + writeResult = writeMesh( emptyMesh, vtkOutput, canOverwrite=True ) + assert writeResult == 1 + + # Read back + readResult = readUnstructuredGrid( str( outputFile ) ) + assert readResult.GetNumberOfPoints() == 0 + assert readResult.GetNumberOfCells() == 0 + + def test_largePathNames( self, simpleUnstructuredMesh, tmp_path ): + """Test handling of long file paths.""" + # Create a deep directory structure + deepDir = tmp_path + for i in range( 5 ): + deepDir = deepDir / f"veryLongDirectoryNameLevel{i}" + deepDir.mkdir( parents=True ) + + outputFile = deepDir / "meshWithVeryLongFilenameThatShouldStillWork.vtu" + vtkOutput = VtkOutput( str( outputFile ) ) + + # Should work fine + result = writeMesh( simpleUnstructuredMesh, vtkOutput, canOverwrite=True ) + assert result == 1 + assert outputFile.exists() + + # And read back + readResult = readUnstructuredGrid( str( outputFile ) ) + assert readResult.GetNumberOfPoints() == simpleUnstructuredMesh.GetNumberOfPoints() diff --git a/pygeos-tools/src/geos/pygeos_tools/mesh/VtkMesh.py b/pygeos-tools/src/geos/pygeos_tools/mesh/VtkMesh.py index 67ce1a88..e1880cc2 100644 --- a/pygeos-tools/src/geos/pygeos_tools/mesh/VtkMesh.py +++ 
b/pygeos-tools/src/geos/pygeos_tools/mesh/VtkMesh.py @@ -21,7 +21,7 @@ from vtkmodules.vtkFiltersCore import vtkExtractCells, vtkResampleWithDataSet from vtkmodules.vtkFiltersExtraction import vtkExtractGrid from geos.mesh.utils.arrayHelpers import getNumpyArrayByName, getNumpyGlobalIdsArray -from geos.mesh.io.vtkIO import VtkOutput, read_mesh, write_mesh +from geos.mesh.io.vtkIO import VtkOutput, readMesh, writeMesh from geos.pygeos_tools.model.pyevtk_tools import cGlobalIds from geos.utils.errors_handling.classes import required_attributes @@ -86,7 +86,7 @@ def read( self: Self ) -> vtkPointSet: vtk.vtkPointSet General representation of VTK mesh data """ - return read_mesh( self.meshfile ) + return readMesh( self.meshfile ) def export( self: Self, data: vtkPointSet = None, rootname: str = None, vtktype: str = None ) -> str: """ @@ -114,7 +114,7 @@ def export( self: Self, data: vtkPointSet = None, rootname: str = None, vtktype: data = self.read() filename: str = ".".join( ( rootname, vtktype ) ) - write_mesh( data, VtkOutput( filename, True ) ) + writeMesh( data, VtkOutput( filename, True ) ) return filename def extractMesh( self: Self,