Merge pull request #2044 from mikedh/rc2
Release Candidate: 4.0.0.rc2
mikedh authored Oct 13, 2023
2 parents b2e59ee + 7373e4d commit 6e3e697
Showing 27 changed files with 221 additions and 187 deletions.
4 changes: 2 additions & 2 deletions docker/trimesh-setup
@@ -136,7 +136,7 @@ def fetch(url, sha256):
Location of remote resource.
sha256: str
The SHA256 hash of the resource once retrieved,
wil raise a `ValueError` if the hash doesn't match.
will raise a `ValueError` if the hash doesn't match.
Returns
-------------
@@ -362,7 +362,7 @@ if __name__ == "__main__":
"apt": lambda x: apt_select.extend(x),
}

# allow comma delimeters and de-duplicate
# allow comma delimiters and de-duplicate
if args.install is None:
parser.print_help()
exit()
2 changes: 1 addition & 1 deletion docs/content/docker.md
@@ -18,7 +18,7 @@ RUN pip install trimesh[easy]

### Using Prebuilt Images

The `trimesh/trimesh` docker images are based on the offical Python base image, currently `python:3.11-slim-bullseye`. They are built and pushed to Docker Hub automatically in Github Actions for every release.
The `trimesh/trimesh` docker images are based on the official Python base image, currently `python:3.11-slim-bullseye`. They are built and pushed to Docker Hub automatically in Github Actions for every release.

If you need some of the more demanding dependencies they can be a good option. The `trimesh/trimesh` images are pushed with three tags: `latest` (for latest :), semantic version (i.e. `3.15.5`), or git short hash (i.e. `1c6178d`). These images include `embree` and `trimesh[all]` which is run in a multi-stage build to avoid including intermediate files in the final image.

5 changes: 2 additions & 3 deletions docs/content/install.md
@@ -29,8 +29,7 @@ Conda Packages

If you prefer a `conda` environment, `trimesh` is available on `conda-forge` ([trimesh-feedstock repo](https://github.com/conda-forge/trimesh-feedstock))


If you install [Miniconda](https://conda.io/docs/install/quick.html) you can then run:
If you install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/) you can then run:

```
conda install -c conda-forge trimesh
@@ -81,4 +80,4 @@ Trimesh has a lot of soft-required upstream packages. We try to make sure they'r
|`pytest`| A test runner. | | `test`|
|`pytest-cov`| A plugin to calculate test coverage. | | `test`|
|`pyinstrument`| A sampling based profiler for performance tweaking. | | `test`|
|`pyvhacd`| A binding for VHACD which provides convex decompositions | | `recommend`|
|`vhacdx`| A binding for VHACD which provides convex decompositions | | `recommend`|
12 changes: 6 additions & 6 deletions docs/content/nricp.md
@@ -2,7 +2,7 @@ Non-Rigid Registration
=====================

Mesh non-rigid registration methods are capable of aligning (*i.e.* superimposing) a *source mesh* on a *target geometry* which can be any 3D structure that enables nearest point query. In Trimesh, the target geometry can either be a mesh `trimesh.Trimesh` or a point cloud `trimesh.PointCloud`. This process is often used to build dense correspondence, needed for the creation of [3D Morphable Models](https://www.face-rec.org/algorithms/3d_morph/morphmod2.pdf).
The "non-rigid" part means that the vertices of the source mesh are not scaled, rotated and translated together to match the target geometry as with [Iterative Closest Points](https://en.wikipedia.org/wiki/Iterative_closest_point) (ICP) methods. Instead, they are allowed to move *more or less independantly* to land on the target geometry.
The "non-rigid" part means that the vertices of the source mesh are not scaled, rotated and translated together to match the target geometry as with [Iterative Closest Points](https://en.wikipedia.org/wiki/Iterative_closest_point) (ICP) methods. Instead, they are allowed to move *more or less independently* to land on the target geometry.

Trimesh implements two mesh non-rigid registrations algorithms which are both extensions of ICP. They are called Non-Rigid ICP methods :

@@ -30,7 +30,7 @@ $\mathbf{x}_4$ is basically the point at the tip of the triangle normal starting
Each deformed vertex $\tilde{\mathbf{v}}_i$ is computed from the vertex $\mathbf{v}_i$ via an affine transformation $\{\mathbf{T}, \mathbf{d}\}_i$ with $\mathbf{T}_i \in \mathbb{R}^{3\times3}$ being its scaling/rotational part and $\mathbf{d}_i$ being its translational part.
We get $\tilde{\mathbf{v}}_i = \mathbf{T}_i\mathbf{v}_i + \mathbf{d}_i$.

The main idea is to subtract $\mathbf{d}$ from the previous equation. To do this, we substract $\mathbf{x}_1$ from each tetrahedron to obtain frames $\mathbf{V}_i$ and $\tilde{\mathbf{V}}_i \in \mathbb{R}^{3\times3}$ :
The main idea is to subtract $\mathbf{d}$ from the previous equation. To do this, we subtract $\mathbf{x}_1$ from each tetrahedron to obtain frames $\mathbf{V}_i$ and $\tilde{\mathbf{V}}_i \in \mathbb{R}^{3\times3}$ :

$$
\begin{matrix}
\mathbf{V}_i = \left[\mathbf{x}_2-\mathbf{x}_1 \;\; \mathbf{x}_3-\mathbf{x}_1 \;\; \mathbf{x}_4-\mathbf{x}_1\right]\\
\tilde{\mathbf{V}}_i = \left[\tilde{\mathbf{x}}_2-\tilde{\mathbf{x}}_1 \;\; \tilde{\mathbf{x}}_3-\tilde{\mathbf{x}}_1 \;\; \tilde{\mathbf{x}}_4-\tilde{\mathbf{x}}_1\right]
\end{matrix}
$$

so that the translation cancels and $\tilde{\mathbf{V}}_i = \mathbf{T}_i\mathbf{V}_i$, giving $\mathbf{T}_i = \tilde{\mathbf{V}}_i\mathbf{V}_i^{-1}$.

@@ -103,7 +103,7 @@ Then we either start the next iteration or return the result.
The number of iterations is determined by the length of the `steps` argument. `steps` should be an iterable of five floats iterables `[[wc_1, wi_1, ws_1, wl_1, wn_1], ..., [wc_n, wi_n, ws_n, wl_n, wn_n]]`. The floats should correspond to $w_C, w_I, w_S, w_L$ and $w_N$. The extra weight $w_N$ is related to outlier robustness.
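
For example, a three-step schedule might look like the following sketch (the values are purely illustrative, not recommended defaults):

```python
# Illustrative `steps` schedule: each row is [wc, wi, ws, wl, wn].
# These numbers are example values only, not tuned defaults.
steps = [
    [0.1, 10.0, 1.0, 10.0, 2.0],  # loose fit: identity/landmark terms dominate
    [1.0,  5.0, 1.0,  5.0, 2.0],  # tighten correspondences
    [5.0,  1.0, 0.5,  1.0, 2.0],  # final fit to the target
]
```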

### Robustness to outliers
The target geometry can be noisy or incomplete which can lead to bad closest points $\mathbf{c}$. To remedy this issue, the linear equations related to $E_C$ are also weighted by *closest point validity weights*. First, if the distance to the closest point greater than the user specified threshold `distance_threshold`, the corresponding linear equations are multiplied by 0 (*i.e.* removed). Second, one may need the normals at the source mesh vertices and the normals at target geoemtry closest points to coincide. We use the dot product to the power $w_N$ to determine if normals are well aligned and use it to weight the linear equations. Eventually, the *closest point validity weights* are :
The target geometry can be noisy or incomplete which can lead to bad closest points $\mathbf{c}$. To remedy this issue, the linear equations related to $E_C$ are also weighted by *closest point validity weights*. First, if the distance to the closest point greater than the user specified threshold `distance_threshold`, the corresponding linear equations are multiplied by 0 (*i.e.* removed). Second, one may need the normals at the source mesh vertices and the normals at target geometry closest points to coincide. We use the dot product to the power $w_N$ to determine if normals are well aligned and use it to weight the linear equations. Eventually, the *closest point validity weights* are :

$$
\boldsymbol{\alpha}=\left[
\begin{matrix}
\alpha_1 & \cdots & \alpha_n
\end{matrix}
\right]^\text{T},
\qquad
\alpha_i =
\begin{cases}
\left(\mathbf{n}_v \cdot \mathbf{n}_c\right)^{w_N} & \text{if } \lVert \mathbf{c}_i - \mathbf{v}_i \rVert \le d_{max}\\
0 & \text{otherwise}
\end{cases}
$$


With $d_{max}$ being the threshold given with the argument `distance_threshold`, and $\mathbf{n}_v$ and $\mathbf{n}_c$ the normals mentionned above.
With $d_{max}$ being the threshold given with the argument `distance_threshold`, and $\mathbf{n}_v$ and $\mathbf{n}_c$ the normals mentioned above.

### Summary

@@ -148,7 +148,7 @@ Three energies are minimized :

$$E_C = \sum\limits^n_{i=1} \boldsymbol{\alpha}_i \lVert \mathbf{w}_i^\text{T}\mathbf{X}_i - \mathbf{c}_i \rVert^2$$

- The **deformation smoothness term** (stiffness term) $E_S$. In the following, $\mathbf{G}=[1,1,1,\gamma]$ is used to weights differences in the rotational and skew part of the deformation, and can be accesed via the argument `gamma`. Two vertices are adjacent if they share an edge.
- The **deformation smoothness term** (stiffness term) $E_S$. In the following, $\mathbf{G}=[1,1,1,\gamma]$ is used to weights differences in the rotational and skew part of the deformation, and can be accessed via the argument `gamma`. Two vertices are adjacent if they share an edge.

$$E_S = \sum\limits^n_{j\in\text{adj}(i)} \lVert (\mathbf{X}_i - \mathbf{X}_j) \mathbf{G} \rVert^2$$

@@ -206,7 +206,7 @@ The [same implementation](#robustness-to-outliers) than `nricp_sumner` is used,
- $j ← j + 1$


> In contrast to `nricp_sumner`, the matrix $\mathbf{A}_C$ is built only once at initilization.
> In contrast to `nricp_sumner`, the matrix $\mathbf{A}_C$ is built only once at initialization.
## Comparison of the two methods
The main difference between `nricp_sumner` and `nricp_amberg` is the kind of transformations that is optimized. `nricp_sumner` involves frames with an extra vertex representing the orientation of the triangles, and solves implicitly for transformations that act on these frames. In `nricp_amberg`, per-vertex transformations are explicitly solved for which allows to construct the correspondence cost matrix $\mathbf{A}_C$ only once. As a result, `nricp_sumner` tends to output smoother results with less high frequencies. The users are advised to try both algorithms with different parameter sets, especially different `steps` arguments, and find which suits better their problem. `nricp_amberg` appears to be easier to tune, though.
2 changes: 1 addition & 1 deletion models/jacked.obj
@@ -1,5 +1,5 @@
# https://github.com/mikedh/trimesh
mtllib nonexistant.mtl
mtllib nonexistent.mtl
usemtl material0
v -0.50000000 -0.50000000 -0.50000000
v -0.50000000 -0.50000000 0.50000000
2 changes: 1 addition & 1 deletion models/plane.xaml
@@ -27,7 +27,7 @@
<ModelVisual3D.Content>
<GeometryModel3D>

<!-- The geometry specifes the shape of the 3D plane. In this sample, a flat sheet is created. -->
<!-- The geometry specifies the shape of the 3D plane. In this sample, a flat sheet is created. -->
<GeometryModel3D.Geometry>
<MeshGeometry3D
TriangleIndices="0,1,2 3,4,5 "
6 changes: 3 additions & 3 deletions pyproject.toml
@@ -5,9 +5,9 @@ requires = ["setuptools >= 61.0", "wheel"]
[project]
name = "trimesh"
requires-python = ">=3.7"
version = "4.0.0.rc1"
version = "4.0.0.rc2"
authors = [{name = "Michael Dawson-Haggerty", email = "[email protected]"}]
license = {text = "MIT"}
license = {file = "LICENSE.md"}
description = "Import, export, process, analyze and view triangular meshes."
keywords = ["graphics", "mesh", "geometry", "3D"]
classifiers = [
@@ -88,7 +88,7 @@ recommend = [
"xatlas",
"scikit-image",
"python-fcl",
"pyVHACD",
"vhacdx",
]

# this is the list of everything that is ever added anywhere
2 changes: 1 addition & 1 deletion tests/generic.py
@@ -197,7 +197,7 @@ def random_transforms(count, translate=1000):
Yields
------------
transform : (4, 4) float
Homogenous transformation matrix
Homogeneous transformation matrix
"""
quaternion = random((count, 3))
translate = (random((count, 3)) - 0.5) * float(translate)
2 changes: 1 addition & 1 deletion tests/test_bounds.py
@@ -294,7 +294,7 @@ def test_obb_corpus(self):
# check the mesh bounds against the claimed OBB bounds
half = o.primitive.extents / 2.0
check_extents = g.np.array([-half, half])
# check that the OBB does countain the mesh
# check that the OBB does contain the mesh
assert g.np.allclose(check.bounds, check_extents, rtol=1e-4)


2 changes: 1 addition & 1 deletion tests/test_cache.py
@@ -299,7 +299,7 @@ def test_method_combinations(self):
# add 2 and 3 length permutations of our guesses
attempts.extend([tuple(G) for G in itertools.product(flat, repeat=2)])

# adding 3-length permuations makes this test 10x slower but if you
# adding 3-length permutations makes this test 10x slower but if you
# are suspicious of a method caching you could uncomment this out:
# attempts.extend([tuple(G) for G in itertools.permutations(flat, 3)])
skip = set()
2 changes: 1 addition & 1 deletion tests/test_decomposition.py
@@ -7,7 +7,7 @@
class DecompositionTest(g.unittest.TestCase):
def test_convex_decomposition(self):
try:
import pyVHACD # noqa
import vhacdx # noqa
except ImportError:
return

2 changes: 1 addition & 1 deletion tests/test_gltf.py
@@ -957,7 +957,7 @@ def test_embed_buffer(self):
path = g.os.path.join(D, "hi.gltf")
scene.export(path, embed_buffers=True)

# should export with embeded bufferes
# should export with embedded buffers
assert len(g.os.listdir(D)) == 1

reloaded = g.trimesh.load(path)
4 changes: 2 additions & 2 deletions tests/test_paths.py
@@ -45,8 +45,8 @@ def test_discrete(self):
if d.metadata["file_name"][-3:] == "dxf":
assert len(d.layers) == len(d.entities)

for path in d.paths:
verts = d.discretize_path(path)
for path, verts in zip(d.paths, d.discrete):
assert len(path) >= 1
dists = g.np.sum((g.np.diff(verts, axis=0)) ** 2, axis=1) ** 0.5

if not g.np.all(dists > g.tol_path.zero):
4 changes: 2 additions & 2 deletions tests/test_polygons.py
@@ -196,7 +196,7 @@ def truth_corner(bh):
)
# check against wikipedia
t = truth(bh)
# for a centered rectangle, the principal axis are alread aligned
# for a centered rectangle, the principal axis are already aligned
# with the frame axis
assert g.np.allclose(O_moments, t)
assert g.np.any(g.np.isclose(O_moments, O_principal_moments[0]))
@@ -207,7 +207,7 @@ def truth_corner(bh):
# now check a rectangle with the corner, so Ixy != 0

# First we test with centering. The results should be same as
# with the initally centered rectangles
# with the initially centered rectangles
C_moments, C_principal_moments, C_alpha, C_transform = second_moments(
poly_corner(bh), return_centered=True
)
4 changes: 2 additions & 2 deletions tests/test_primitives.py
@@ -55,7 +55,7 @@ def setUp(self):
center=[102.20, 0, 102.0], extents=[29, 100, 1000]
)
)
raise ValueError("Box shouldnt have accepted `center`!")
raise ValueError("Box shouldn't have accepted `center`!")
except TypeError:
# this should have raised a TypeError as `center` is not a kwarg
pass
@@ -132,7 +132,7 @@ def test_scaling(self):
# converting to mesh will do all scaling
# and transformations on simple discrete
# copy of primitive and should match with
# only tesselation differences
# only tessellation differences
p = po.copy()
m = p.to_mesh()

2 changes: 1 addition & 1 deletion tests/test_resolvers.py
@@ -10,7 +10,7 @@ def test_filepath_namespace(self):
models = g.dir_models
subdir = "2D"

# create a resolver for the models diretory
# create a resolver for the models directory
resolver = g.trimesh.resolvers.FilePathResolver(models)

# should be able to get an asset
2 changes: 1 addition & 1 deletion tests/test_sample.py
@@ -55,7 +55,7 @@ def test_sample_volume(self):
m = g.trimesh.creation.icosphere()
samples = g.trimesh.sample.volume_mesh(mesh=m, count=100)

# all samples should be approximatly within the sphere
# all samples should be approximately within the sphere
radii = g.np.linalg.norm(samples, axis=1)
assert (radii < 1.00000001).all()

10 changes: 7 additions & 3 deletions trimesh/base.py
@@ -1316,7 +1316,7 @@ def unique_faces(self) -> NDArray[bool]:
Returns
--------
unique : (len(faces),) bool
A mask where the first occurance of a unique face is true.
A mask where the first occurrence of a unique face is true.
"""
mask = np.zeros(len(self.faces), dtype=bool)
mask[grouping.unique_rows(np.sort(self.faces, axis=1))[0]] = True
@@ -2860,7 +2860,7 @@ def to_dict(self) -> Dict[str, Union[str, List[List[float]], List[List[int]]]]:
"faces": self.faces.tolist(),
}

def convex_decomposition(self) -> List["Trimesh"]:
def convex_decomposition(self, **kwargs) -> List["Trimesh"]:
"""
Compute an approximate convex decomposition of a mesh
using `pip install pyVHACD`.
@@ -2869,8 +2869,12 @@ def convex_decomposition(self) -> List["Trimesh"]:
-------
meshes
List of convex meshes that approximate the original
**kwargs : VHACD keyword arguments
"""
return [Trimesh(**kwargs) for kwargs in decomposition.convex_decomposition(self)]
return [
Trimesh(**kwargs)
for kwargs in decomposition.convex_decomposition(self, **kwargs)
]

def union(
self, other: "Trimesh", engine: Optional[str] = None, **kwargs
24 changes: 20 additions & 4 deletions trimesh/decomposition.py
@@ -3,22 +3,38 @@
import numpy as np


def convex_decomposition(mesh) -> List[Dict]:
def convex_decomposition(mesh, **kwargs) -> List[Dict]:
"""
Compute an approximate convex decomposition of a mesh.
VHACD Parameters which can be passed as kwargs:
Name Default
-----------------------------------------
maxConvexHulls 64
resolution 400000
minimumVolumePercentErrorAllowed 1.0
maxRecursionDepth 10
shrinkWrap True
fillMode "flood"
maxNumVerticesPerCH 64
asyncACD True
minEdgeLength 2
findBestPlane False
Parameters
----------
mesh : trimesh.Trimesh
Mesh to be decomposed into convex parts
**kwargs : VHACD keyword arguments
Returns
-------
mesh_args : list
List of **kwargs for Trimeshes that are nearly
convex and approximate the original.
"""
from pyVHACD import compute_vhacd
from vhacdx import compute_vhacd

# the faces are triangulated in a (len(face), ...vertex-index)
# for vtkPolyData
@@ -30,6 +30,6 @@ def convex_decomposition(mesh, **kwargs) -> List[Dict]:
)

return [
{"vertices": v, "faces": f.reshape((-1, 4))[:, 1:]}
for v, f in compute_vhacd(mesh.vertices, faces)
{"vertices": v, "faces": f}
for v, f in compute_vhacd(mesh.vertices, faces, **kwargs)
]
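
A usage sketch for the new keyword pass-through introduced in this diff (assumes `vhacdx` is installed; the input file name and parameter values are hypothetical):

```python
# Sketch of the new **kwargs pass-through to vhacdx.compute_vhacd;
# "model.obj" and the parameter values below are hypothetical.
import trimesh

mesh = trimesh.load("model.obj", force="mesh")

# keyword names come from the VHACD parameter table in the docstring
parts = mesh.convex_decomposition(maxConvexHulls=8, resolution=100000)

# each element is a nearly-convex Trimesh approximating the input
print(len(parts), sum(p.volume for p in parts))
```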
2 changes: 2 additions & 0 deletions trimesh/exchange/dae.py
@@ -191,6 +191,8 @@ def _parse_node(
primitive = primitive.triangleset()
if isinstance(primitive, collada.triangleset.TriangleSet):
vertex = primitive.vertex
if vertex is None:
continue
vertex_index = primitive.vertex_index
vertices = vertex[vertex_index].reshape(len(vertex_index) * 3, 3)

2 changes: 1 addition & 1 deletion trimesh/exchange/threemf.py
@@ -290,7 +290,7 @@ def model_id(x):
with xf.element("object", **attribs):
with xf.element("mesh"):
with xf.element("vertices"):
# vertex nodes are writed directly to the file
# vertex nodes are written directly to the file
# so make sure lxml's buffer is flushed
xf.flush()
for i in range(0, len(m.vertices), batch_size):