+
+```
+python run_part_clustering.py --root exp_results/partfield_features/trellis --dump_dir exp_results/clustering/trellis_bad --source_dir data/trellis_samples --use_agglo True --max_num_clusters 20 --option 0
+```
+
+When this occurs, we explore different options that can lead to better results:
+
+### 1. Preprocess Input Mesh
+
+We can perform a simple cleanup on the input meshes by removing duplicate vertices and faces, and by merging nearby vertices using `pymeshlab`. This preprocessing step can be enabled via a flag when generating PartField features:
+
+```
+python partfield_inference.py -c configs/final/demo.yaml --opts continue_ckpt model/model_objaverse.ckpt result_name partfield_features/trellis_preprocess dataset.data_path data/trellis_samples preprocess_mesh True
+```
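+
+As a rough sketch of what such a cleanup does (a simplified numpy-only illustration, not the actual `pymeshlab`-based implementation): snap vertices to a tolerance grid, remap faces, and drop degenerate and duplicate faces:
+
+```python
+import numpy as np
+
+def clean_mesh(V, F, tol=1e-5):
+    # Snap vertices to a grid of size `tol` and keep one representative per cell.
+    keys = np.round(V / tol).astype(np.int64)
+    _, first, inv = np.unique(keys, axis=0, return_index=True, return_inverse=True)
+    inv = inv.reshape(-1)
+    V2, F2 = V[first], inv[F]  # remap faces to the merged vertices
+    # Drop faces that collapsed onto a single vertex or edge.
+    keep = (F2[:, 0] != F2[:, 1]) & (F2[:, 1] != F2[:, 2]) & (F2[:, 2] != F2[:, 0])
+    F2 = F2[keep]
+    # Remove exact duplicate faces (sorting ignores winding order).
+    F2 = np.unique(np.sort(F2, axis=1), axis=0)
+    return V2, F2
+```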
+
+When running agglomerative clustering on a cleaned-up mesh, we observe improved part segmentation:
+
+
+
+```
+python run_part_clustering.py --root exp_results/partfield_features/trellis_preprocess --dump_dir exp_results/clustering/trellis_preprocess --source_dir data/trellis_samples --use_agglo True --max_num_clusters 20 --option 0
+```
+
+### 2. Cluster with KMeans
+
+If modifying the input mesh is not desirable, an alternative is to use KMeans clustering, which does not rely on an adjacency matrix.
+
+
+
+
+```
+python run_part_clustering.py --root exp_results/partfield_features/trellis --dump_dir exp_results/clustering/trellis_kmeans --source_dir data/trellis_samples --max_num_clusters 20
+```
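+
+Conceptually, KMeans operates directly on the per-face PartField features, so no face adjacency is needed. A minimal scikit-learn sketch (the random array below is a stand-in for the exported 448-D feature file):
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# One PartField feature vector per mesh face; random stand-in data here.
+rng = np.random.default_rng(0)
+feats = rng.normal(size=(1000, 448)).astype(np.float32)
+
+# No adjacency matrix involved: clustering acts on the features alone.
+labels = KMeans(n_clusters=20, n_init=4, random_state=0).fit_predict(feats)
+```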
+
+### 3. MST-based Adjacency Matrix
+
+Instead of simply chaining the connected components of the input mesh, we also explore adding pseudo-edges to the adjacency matrix by constructing a KNN graph using face centroids and computing the minimum spanning tree of that graph.
+
+
+
+```
+python run_part_clustering.py --root exp_results/partfield_features/trellis --dump_dir exp_results/clustering/trellis_faceadj --source_dir data/trellis_samples --use_agglo True --max_num_clusters 20 --option 1 --with_knn True
+```
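+
+The idea above can be sketched as follows (a simplified illustration; the actual logic lives in `run_part_clustering.py`):
+
+```python
+import numpy as np
+from scipy.sparse.csgraph import minimum_spanning_tree
+from sklearn.neighbors import kneighbors_graph
+
+def mst_pseudo_edges(centroids, k=8):
+    # KNN graph over face centroids, weighted by Euclidean distance.
+    knn = kneighbors_graph(centroids, n_neighbors=k, mode="distance")
+    knn = knn.maximum(knn.T)  # symmetrize before taking the MST
+    # MST of the KNN graph (a spanning forest if the KNN graph is itself
+    # disconnected; a larger k makes that less likely).
+    mst = minimum_spanning_tree(knn)
+    rows, cols = mst.nonzero()
+    return np.stack([rows, cols], axis=1)  # pseudo-edges for the adjacency matrix
+```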
+
+
+
+### More Challenging Meshes!
+The proposed approaches improve results for some meshes, but we find that certain cases still do not produce satisfactory segmentations. We leave these challenges for future work. If you're interested, here are some examples of difficult meshes we encountered:
+
+**Challenging Meshes:**
+```
+cd data
+mkdir challenge_samples
+cd challenge_samples
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-007/00790c705e4c4a1fbc0af9bf5c9e9525.glb
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-132/13cc3ffc69964894a2bc94154aed687f.glb
+```
+
+## Citation
+```
+@inproceedings{partfield2025,
+ title={PartField: Learning 3D Feature Fields for Part Segmentation and Beyond},
+ author={Minghua Liu and Mikaela Angelina Uy and Donglai Xiang and Hao Su and Sanja Fidler and Nicholas Sharp and Jun Gao},
+ year={2025}
+}
+```
+
+## References
+PartField borrows code from the following repositories:
+- [OpenLRM](https://github.com/3DTopia/OpenLRM)
+- [PyTorch 3D UNet](https://github.com/wolny/pytorch-3dunet)
+- [PVCNN](https://github.com/mit-han-lab/pvcnn)
+- [SAMPart3D](https://github.com/Pointcept/SAMPart3D) — evaluation script
+
+Many thanks to the authors for sharing their code!
diff --git a/PartField/applications/.polyscope.ini b/PartField/applications/.polyscope.ini
new file mode 100644
index 0000000000000000000000000000000000000000..c186950de8973b9a6541fe7d80dcc267838ba918
--- /dev/null
+++ b/PartField/applications/.polyscope.ini
@@ -0,0 +1,6 @@
+{
+ "windowHeight": 1104,
+ "windowPosX": 66,
+ "windowPosY": 121,
+ "windowWidth": 2215
+}
diff --git a/PartField/applications/README.md b/PartField/applications/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..53289b3e6970da4356079711ce33b64de950ed9b
--- /dev/null
+++ b/PartField/applications/README.md
@@ -0,0 +1,142 @@
+# Interactive Tools and Applications
+
+## Single-Shape Feature and Segmentation Visualization Tool
+We can visualize the output features and segmentation of a single shape by running the script below:
+
+```
+cd applications/
+python single_shape.py --data_root ../exp_results/partfield_features/trellis/ --filename dwarf
+```
+
+- `Mode: pca, feature_viz, cluster_agglo, cluster_kmeans`
+- `pca` : Visualizes the PCA projection of the PartField features of the input model as colors.
+- `feature_viz` : Visualizes each dimension of the PartField features as a colormap.
+- `cluster_agglo` : Visualizes the part segmentation of the input model using Agglomerative clustering.
+ - Number of clusters is specified with the slider.
+ - `Adj Matrix Def`: Specifies how the adjacency matrix is defined for the clustering algorithm by adding dummy edges to make the input mesh a single connected component.
+ - `Add KNN edges` : Adds additional dummy edges based on k nearest neighbors.
+- `cluster_kmeans` : Visualizes the part segmentation of the input model using KMeans clustering.
+ - Number of clusters is specified with the slider.
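+
+The `pca` mode can be sketched as projecting the high-dimensional features to 3 components and rescaling them to `[0, 1]` as RGB (a simplified illustration of the idea, not the tool's exact code):
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA
+
+def pca_colors(feats):
+    xyz = PCA(n_components=3).fit_transform(feats)  # (N, 3) projection
+    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
+    return (xyz - lo) / np.maximum(hi - lo, 1e-8)   # RGB values in [0, 1]
+```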
+
+
+## Shape-Pair Co-Segmentation and Feature Exploration Tool
+We provide a tool to analyze and visualize a pair of shapes. It has two main functionalities: 1) **Co-segmentation** via co-clustering and 2) PartField **feature exploration** and visualization. Try it out as follows:
+
+```
+cd applications/
+python shape_pair.py --data_root ../exp_results/partfield_features/trellis/ --filename dwarf --filename_alt goblin
+```
+
+### Co-Clustering for Co-Segmentation
+
+This section explains the use case for `Mode: co-segmentation`.
+
+
+
+The shape pair is co-segmented via co-clustering; in this application, we use the KMeans clustering algorithm. The `first shape (left)` is separated into parts via **unsupervised clustering** of its features with KMeans, from which the parts of the `second shape (right)` are then defined.
+
+Below is a list of the parameters:
+- `Source init`:
+ - `True`: Initializes the cluster centers of the second shape (right) with the cluster centers of the first shape (left).
+ - `False`: Uses KMeans++ to initialize the cluster centers for KMeans for the second shape.
+- `Independent`:
+ - `True`: Labels after running KMeans clustering are directly used as parts for the second shape. Correspondence with the parts of the first shape is not explicitly computed after KMeans clustering.
+  - `False`: After KMeans clustering is run on the features of the second shape, the mean feature of each unique part is computed; the mean feature of each part of the first shape is computed as well. The parts of the second shape are then assigned labels based on the nearest-neighbor part of the first shape.
+- `Num cluster`:
+ - `Model1` : A slider is used to specify the number of parts for the first shape, i.e. number of clusters for KMeans clustering.
+ - `Model2` : A slider is used to specify the number of parts for the second shape, i.e. number of clusters for KMeans clustering. Note: if `Source init` is set to `True` then this slider is ignored and the number of clusters for Model1 is used.
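+
+The `Source init = True` behavior can be sketched with scikit-learn as a warm-started KMeans (illustrative only; the feature arrays and sizes below are stand-ins):
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+rng = np.random.default_rng(0)
+feat_a = rng.normal(size=(800, 64)).astype(np.float32)  # stand-in features, first shape
+feat_b = rng.normal(size=(600, 64)).astype(np.float32)  # stand-in features, second shape
+
+km_a = KMeans(n_clusters=8, n_init=4, random_state=0).fit(feat_a)
+# Source init: warm-start the second shape's KMeans from the first shape's
+# cluster centers, so cluster indices correspond across the pair.
+km_b = KMeans(n_clusters=8, init=km_a.cluster_centers_, n_init=1).fit(feat_b)
+labels_a, labels_b = km_a.labels_, km_b.labels_
+```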
+
+
+### Feature Exploration and Visualization
+
+This section explains the use case for `Mode: feature_explore`.
+
+
+
+This mode lets us select a query point on the first shape (left); the feature distance to all points on the second shape (right), as well as on the first shape itself, is then visualized as a colormap.
+- `range` : A slider that specifies the distance radius for feature-similarity visualization. Larger values result in larger highlighted areas.
+- `continuous` :
+ - `False` : Query point is specified with a mouse click.
+ - `True` : You can slide your mouse around the first mesh to visualize feature distances.
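+
+The distance computation behind this mode can be sketched as follows (the normalization mirrors the tool's `feature_distance_np` helper in `shape_pair.py`):
+
+```python
+import numpy as np
+
+def feature_distance(feats, query_feat):
+    # Normalize features, then take the L2 distance to the query feature.
+    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
+    q = query_feat / np.linalg.norm(query_feat)
+    return np.linalg.norm(feats - q[None, :], axis=1)  # (N,) values for the colormap
+```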
+
+## Multi-shape Cosegmentation Tool
+We further demonstrate PartField for co-segmentation of a set of shapes. Try out our demo application as follows:
+
+### Dependency Installation
+Let's first install the necessary dependencies for this tool:
+```
+pip install cuml-cu12
+pip install xgboost
+```
+
+### Dataset
+We use the Shape COSEG dataset for our demo. First, download the dataset:
+```
+mkdir data/coseg_guitar
+cd data/coseg_guitar
+wget https://irc.cs.sdu.edu.cn/~yunhai/public_html/ssl/data/Guitars/shapes.zip
+wget https://irc.cs.sdu.edu.cn/~yunhai/public_html/ssl/data/Guitars/gt.zip
+unzip shapes.zip
+unzip gt.zip
+```
+
+Now, let's extract PartField features for the set of shapes:
+```
+python partfield_inference.py -c configs/final/demo.yaml --opts continue_ckpt model/model_objaverse.ckpt result_name partfield_features/coseg_guitar/ dataset.data_path data/coseg_guitar/shapes
+```
+
+Now, we're ready to run the tool! We support two modes: 1) **Few-shot** with click-based annotations and 2) **Supervised** with ground truth labels.
+
+### Annotate Mode
+
+
+We can run our few-shot segmentation tool as follows:
+```
+cd applications/
+python multi_shape_cosegment.py --meshes ../exp_results/partfield_features/coseg_guitar/
+```
+We can annotate the segments with a few clicks. A classifier is then run to obtain the part segmentation.
+- N_class: number of segmentation class labels
+- Annotate:
+ - `00, 01, 02, ...`: Select the segmentation class label, then click on a shape region of that class.
+  - `Undo Last Annotation`: Removes and disregards the last annotation made.
+- Fit:
+  - `Fit Method`: Selects the classification method used for fitting; the default is `Logistic Regression`.
+ - `Update Fit`: By default, the fitting process is automatically updated. This can also be changed to a manual update.
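+
+The few-shot fit can be sketched as follows (using scikit-learn's `LogisticRegression` in place of the tool's cuML classifiers; array shapes are illustrative): fit on the handful of annotated face features, then predict a label for every face of every shape.
+
+```python
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+
+rng = np.random.default_rng(0)
+anno_feat = rng.normal(size=(12, 64))          # features of a few clicked faces
+anno_label = np.array([0, 1] * 6)              # their annotated class labels
+all_face_feats = rng.normal(size=(3000, 64))   # features of every face of every shape
+
+clf = LogisticRegression(max_iter=1000).fit(anno_feat, anno_label)
+pred = clf.predict(all_face_feats)             # a part label for every face
+```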
+
+### Ground Truth Labels Mode
+
+
+
+Alternatively, we can use the ground truth labels of a subset of the shapes to train the classifier.
+
+```
+cd applications/
+python multi_shape_cosegment.py --meshes ../exp_results/partfield_features/coseg_guitar/ --n_train_subset 15
+```
+`Fit Method` can also be selected here to choose the classifier to be used.
+
+
+## 3D Correspondences
+
+First, we clone the repository [SmoothFunctionalMaps](https://github.com/RobinMagnet/SmoothFunctionalMaps) and install additional packages.
+```
+pip install omegaconf robust_laplacian
+git submodule init
+git submodule update --recursive
+```
+
+Download the [DenseCorr3D dataset](https://drive.google.com/file/d/1bpgsNu8JewRafhdRN4woQL7ObQtfgcpu/view?usp=sharing) into the `data` folder. Unzip the contents and ensure that the file structure is organized so that you can access
+`data/DenseCorr3D/animals/071b8_toy_animals_017`.
+
+Extract the PartField features.
+```
+# run in root directory of this repo
+python partfield_inference.py -c configs/final/correspondence_demo.yaml --opts continue_ckpt model/model_objaverse.ckpt preprocess_mesh True
+```
+
+Run the functional map.
+```
+cd applications/
+python run_smooth_functional_map.py -c ../configs/final/correspondence_demo.yaml --opts
+```
\ No newline at end of file
diff --git a/PartField/applications/multi_shape_cosegment.py b/PartField/applications/multi_shape_cosegment.py
new file mode 100644
index 0000000000000000000000000000000000000000..a47d8d8ca402f1d74fe401ab06b301f2e212567e
--- /dev/null
+++ b/PartField/applications/multi_shape_cosegment.py
@@ -0,0 +1,482 @@
+import numpy as np
+import torch
+import argparse
+from dataclasses import dataclass
+
+from arrgh import arrgh
+import polyscope as ps
+import polyscope.imgui as psim
+import potpourri3d as pp3d
+import trimesh
+
+import cuml
+import xgboost as xgb
+
+import os, random
+
+import sys
+sys.path.append("..")
+from partfield.utils import *
+
+@dataclass
+class State:
+
+ objects = None
+ train_objects = None
+
+ # Input options
+ subsample_inputs: int = -1
+ n_train_subset: int = 0
+
+ # Label
+ N_class: int = 2
+
+ # Annotations
+ # A annotations (initially A = 0)
+ anno_feat: np.array = np.zeros((0,448), dtype=np.float32) # [A,F]
+ anno_label: np.array = np.zeros((0,), dtype=np.int32) # [A]
+ anno_pos: np.array = np.zeros((0,3), dtype=np.float32) # [A,3]
+
+ # Intermediate selection data
+ is_selecting: bool = False
+ selection_class: int = 0
+
+ # Fitting algorithm
+ fit_to: str = "Annotations"
+ fit_method : str = "LogisticRegression"
+ auto_update_fit: bool = True
+
+ # Training data
+ # T training datapoints
+ train_feat: np.array = np.zeros((0,448), dtype=np.float32) # [T,F]
+ train_label: np.array = np.zeros((0,), dtype=np.int32) # [T]
+
+ # Viz
+ grid_w : int = 8
+ per_obj_shift : float = 2.
+ anno_radius : float = 0.01
+ ps_cloud_annotation = None
+ ps_structure_name_to_index_map = {}
+
+
+fit_methods_list = ["LinearRegression", "LogisticRegression", "LinearSVC", "RandomForest", "NearestNeighbors", "XGBoost"]
+fit_to_list = ["Annotations", "TrainingSet"]
+
+def load_mesh_and_features(mesh_filepath, ind, require_gt=False, gt_label_fol = ""):
+
+ dirpath, filename = os.path.split(mesh_filepath)
+ filename_core = filename[9:-6] # splits off "feat_pca_" ... "_0.ply"
+ feature_filename = "part_feat_"+ filename_core + "_0_batch.npy"
+ feature_filepath = os.path.join(dirpath, feature_filename)
+
+ gt_filename = filename_core + ".seg"
+ gt_filepath = os.path.join(gt_label_fol, gt_filename)
+ have_gt = os.path.isfile(gt_filepath)
+
+ print(" Reading file:")
+ print(f" Mesh filename: {mesh_filepath}")
+ print(f" Feature filename: {feature_filepath}")
+ print(f" Ground Truth Label filename: {gt_filepath} -- present = {have_gt}")
+
+ # load features
+ feat = np.load(feature_filepath, allow_pickle=False)
+ feat = feat.astype(np.float32)
+
+ # load mesh things
+ # TODO replace this with just loading V/F from numpy archive
+ tm = load_mesh_util(mesh_filepath)
+
+ V = np.array(tm.vertices, dtype=np.float32)
+ F = np.array(tm.faces)
+
+ # load ground truth, if available
+ if have_gt:
+ gt_labels = np.loadtxt(gt_filepath)
+ gt_labels = gt_labels.astype(np.int32) - 1
+ else:
+ if require_gt:
+ raise ValueError("could not find ground-truth file, but it is required")
+ gt_labels = None
+
+ # pca_colors = None
+
+ return {
+ 'nicename' : f"{ind:02d}_{filename_core}",
+ 'mesh_filepath' : mesh_filepath,
+ 'feature_filepath' : feature_filepath,
+ 'V' : V,
+ 'F' : F,
+ 'feat_np' : feat,
+ # 'feat_pt' : torch.tensor(feat, device='cuda'),
+ 'gt_labels' : gt_labels
+ }
+
+def shift_for_ind(state : State, ind):
+
+ x_ind = ind % state.grid_w
+ y_ind = ind // state.grid_w
+
+ shift = np.array([state.per_obj_shift * x_ind, 0, -state.per_obj_shift * y_ind])
+
+ return shift
+
+def viz_upper_limit(state : State, ind_count):
+
+ x_max = min(ind_count, state.grid_w)
+ y_max = ind_count // state.grid_w
+
+ bound = np.array([state.per_obj_shift * x_max, 0, -state.per_obj_shift * y_max])
+
+ return bound
+
+
+def initialize_object_viz(state : State, obj, index=0):
+ obj['ps_mesh'] = ps.register_surface_mesh(obj['nicename'], obj['V'], obj['F'], color=(.8, .8, .8))
+ shift = shift_for_ind(state, index)
+ obj['ps_mesh'].translate(shift)
+ obj['ps_mesh'].set_selection_mode('faces_only')
+ state.ps_structure_name_to_index_map[obj['nicename']] = index
+
+def update_prediction(state: State):
+
+ print("Updating predictions..")
+
+ N_anno = state.anno_label.shape[0]
+
+ # Quick out if we don't have at least two distinct class labels present
+ if(state.fit_to == "Annotations" and len(np.unique(state.anno_label)) <= 1):
+ return state
+
+    # Quick out if we don't have any training data loaded
+ if(state.fit_to == "TrainingSet" and state.train_objects is None):
+ return state
+
+ if state.fit_method == "LinearRegression":
+ classifier = cuml.multiclass.MulticlassClassifier(cuml.linear_model.LinearRegression(), strategy='ovr')
+ elif state.fit_method == "LogisticRegression":
+ classifier = cuml.multiclass.MulticlassClassifier(cuml.linear_model.LogisticRegression(), strategy='ovr')
+ elif state.fit_method == "LinearSVC":
+ classifier = cuml.multiclass.MulticlassClassifier(cuml.svm.LinearSVC(), strategy='ovr')
+ elif state.fit_method == "RandomForest":
+ classifier = cuml.ensemble.RandomForestClassifier()
+ elif state.fit_method == "NearestNeighbors":
+ classifier = cuml.multiclass.MulticlassClassifier(cuml.neighbors.KNeighborsRegressor(n_neighbors=1), strategy='ovr')
+ elif state.fit_method == "XGBoost":
+ classifier = xgb.XGBClassifier(max_depth=7, n_estimators=1000)
+ else:
+ raise ValueError("unrecognized fit method")
+
+ if state.fit_to == "TrainingSet":
+
+ all_train_feats = []
+ all_train_labels = []
+ for obj in state.train_objects:
+ all_train_feats.append(obj['feat_np'])
+ all_train_labels.append(obj['gt_labels'])
+
+ all_train_feats = np.concatenate(all_train_feats, axis=0)
+ all_train_labels = np.concatenate(all_train_labels, axis=0)
+
+ state.N_class = np.max(all_train_labels) + 1
+
+ classifier.fit(all_train_feats, all_train_labels)
+
+
+ elif state.fit_to == "Annotations":
+ classifier.fit(state.anno_feat,state.anno_label)
+ else:
+ raise ValueError("unrecognized fit to")
+
+ n_total = 0
+ n_correct = 0
+
+ for obj in state.objects:
+ obj['pred_label'] = classifier.predict(obj['feat_np'])
+
+ if obj['gt_labels'] is not None:
+ n_total += obj['gt_labels'].shape[0]
+ n_correct += np.sum(obj['pred_label'] == obj['gt_labels'], dtype=np.int32)
+
+ if(state.fit_to == "TrainingSet" and n_total > 0):
+ frac = n_correct / n_total
+ print(f"Test accuracy: {n_correct:d} / {n_total:d} {100*frac:.02f}%")
+
+
+ print("Done updating predictions.")
+
+ return state
+
+def update_prediction_viz(state: State):
+
+ for obj in state.objects:
+ if 'pred_label' in obj:
+ obj['ps_mesh'].add_scalar_quantity("pred labels", obj['pred_label'], defined_on='faces', vminmax=(0,state.N_class-1), cmap='turbo', enabled=True)
+
+ return state
+
+def update_annotation_viz(state: State):
+
+ ps_cloud = ps.register_point_cloud("annotations", state.anno_pos, radius=state.anno_radius, material='candy')
+ ps_cloud.add_scalar_quantity("labels", state.anno_label, vminmax=(0,state.N_class-1), cmap='turbo', enabled=True)
+
+ state.ps_cloud_annotation = ps_cloud
+
+ return state
+
+
+def filter_old_labels(state: State):
+ """
+ Filter out annotations from classes that don't exist any more
+ """
+
+ keep_mask = state.anno_label < state.N_class
+ state.anno_feat = state.anno_feat[keep_mask,:]
+ state.anno_label = state.anno_label[keep_mask]
+ state.anno_pos = state.anno_pos[keep_mask,:]
+
+ return state
+
+def undo_last_annotation(state: State):
+
+ state.anno_feat = state.anno_feat[:-1,:]
+ state.anno_label = state.anno_label[:-1]
+ state.anno_pos = state.anno_pos[:-1,:]
+
+ return state
+
+def ps_callback(state_list):
+ state : State = state_list[0] # hacky pass-by-reference, since we want to edit it below
+
+
+ # If we're in selection mode, that's the only thing we can do
+ if state.is_selecting:
+
+ psim.TextUnformatted(f"Annotating class {state.selection_class:02d}. Click on any mesh face.")
+
+ io = psim.GetIO()
+ if io.MouseClicked[0]:
+ screen_coords = io.MousePos
+ pick_result = ps.pick(screen_coords=screen_coords)
+
+ # Check if we hit one of the meshes
+ if pick_result.is_hit and pick_result.structure_name in state.ps_structure_name_to_index_map:
+ if pick_result.structure_data['element_type'] != "face":
+ # shouldn't be possible
+ raise ValueError("pick returned non-face")
+
+ i_obj = state.ps_structure_name_to_index_map[pick_result.structure_name]
+ f_hit = pick_result.structure_data['index']
+
+ obj = state.objects[i_obj]
+ V = obj['V']
+ F = obj['F']
+ feat = obj['feat_np']
+
+ face_corners = V[F[f_hit,:],:]
+ new_anno_feat = feat[f_hit,:]
+ new_anno_label = state.selection_class
+ new_anno_pos = np.mean(face_corners, axis=0) + shift_for_ind(state, i_obj)
+
+ state.anno_feat = np.concatenate((state.anno_feat, new_anno_feat[None,:]))
+ state.anno_label = np.concatenate((state.anno_label, np.array((new_anno_label,))))
+ state.anno_pos = np.concatenate((state.anno_pos, new_anno_pos[None,:]))
+
+ state = update_annotation_viz(state)
+ state.is_selecting = False
+ needs_pred_update = True
+
+ if state.auto_update_fit:
+ state = update_prediction(state)
+ state = update_prediction_viz(state)
+
+
+ return
+
+ # If not selecting, build the main UI
+ needs_pred_update = False
+
+ psim.PushItemWidth(150)
+ changed, state.N_class = psim.InputInt("N_class", state.N_class, step=1)
+ psim.PopItemWidth()
+ if changed:
+ state = filter_old_labels(state)
+ state = update_annotation_viz(state)
+
+
+ # Check for keypress annotation
+ io = psim.GetIO()
+ class_keys = { 'w' : 0, '1' : 1, '2' : 2, '3' : 3, '4' : 4, '5' : 5, '6' : 6, '7' : 7, '8' : 8, '9' : 9,}
+ for c in class_keys:
+ if class_keys[c] >= state.N_class:
+ continue
+
+ if psim.IsKeyPressed(ps.get_key_code(c)):
+ state.is_selecting = True
+ state.selection_class = class_keys[c]
+
+
+ psim.SetNextItemOpen(True, psim.ImGuiCond_FirstUseEver)
+ if(psim.TreeNode("Annotate")):
+
+        psim.TextUnformatted("New class annotation. Select the class to add an annotation for:")
+ psim.TextUnformatted("(alternately, press key {w,1,2,3,4...})")
+ for i_class in range(state.N_class):
+
+ if i_class > 0:
+ psim.SameLine()
+
+ if psim.Button(f"{i_class:02d}"):
+ state.is_selecting = True
+ state.selection_class = i_class
+
+
+ if psim.Button("Undo Last Annotation"):
+ state = undo_last_annotation(state)
+ state = update_annotation_viz(state)
+ needs_pred_update = True
+
+
+
+ psim.TreePop()
+
+ psim.SetNextItemOpen(True, psim.ImGuiCond_FirstUseEver)
+ if(psim.TreeNode("Fit")):
+
+ psim.PushItemWidth(150)
+
+ changed, ind = psim.Combo("Fit To", fit_to_list.index(state.fit_to), fit_to_list)
+ if changed:
+            state.fit_to = fit_to_list[ind]
+ needs_pred_update = True
+
+ changed, ind = psim.Combo("Fit Method", fit_methods_list.index(state.fit_method), fit_methods_list)
+ if changed:
+ state.fit_method = fit_methods_list[ind]
+ needs_pred_update = True
+
+ if psim.Button("Update fit"):
+ state = update_prediction(state)
+ state = update_prediction_viz(state)
+
+ psim.SameLine()
+
+ changed, state.auto_update_fit = psim.Checkbox("Auto-update fit", state.auto_update_fit)
+ if changed:
+ needs_pred_update = True
+
+
+ psim.PopItemWidth()
+
+ psim.TreePop()
+
+ psim.SetNextItemOpen(True, psim.ImGuiCond_FirstUseEver)
+ if(psim.TreeNode("Visualization")):
+
+ psim.PushItemWidth(150)
+ changed, state.anno_radius = psim.SliderFloat("Annotation Point Radius", state.anno_radius, 0.00001, 0.02)
+ if changed:
+ state = update_annotation_viz(state)
+ psim.PopItemWidth()
+
+ psim.TreePop()
+
+
+ if needs_pred_update and state.auto_update_fit:
+ state = update_prediction(state)
+ state = update_prediction_viz(state)
+
+
+def main():
+
+ state = State()
+
+ ## Parse args
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument('--meshes', nargs='+', help='List of meshes to process.', required=True)
+ parser.add_argument('--n_train_subset', default=0, help='How many meshes to train on.')
+ parser.add_argument('--gt_label_fol', default="../data/coseg_guitar/gt", help='Path where labels are stored.')
+ parser.add_argument('--subsample_inputs', default=state.subsample_inputs, help='Only show a random fraction of inputs')
+ parser.add_argument('--per_obj_shift', default=state.per_obj_shift, help='How to space out objects in UI grid')
+ parser.add_argument('--grid_w', default=state.grid_w, help='Grid width')
+
+ args = parser.parse_args()
+
+
+ state.n_train_subset = int(args.n_train_subset)
+ state.subsample_inputs = int(args.subsample_inputs)
+ state.per_obj_shift = float(args.per_obj_shift)
+ state.grid_w = int(args.grid_w)
+
+ ## Load data
+ # First, resolve directories to load all files in directory
+ all_filepaths = []
+ print("Resolving passed directories")
+ for entry in args.meshes:
+ if os.path.isdir(entry):
+ dir_path = entry
+ print(f" processing directory {dir_path}")
+ for filename in os.listdir(dir_path):
+ file_path = os.path.join(dir_path, filename)
+ if os.path.isfile(file_path) and file_path.endswith(".ply") and "feat_pca" in file_path:
+ print(f" adding file {file_path}")
+ all_filepaths.append(file_path)
+ else:
+ all_filepaths.append(entry)
+
+ random.shuffle(all_filepaths)
+
+ if state.subsample_inputs != -1:
+ all_filepaths = all_filepaths[:state.subsample_inputs]
+
+
+ if state.n_train_subset != 0:
+
+ print(state.n_train_subset)
+
+ train_filepaths = all_filepaths[:state.n_train_subset]
+ all_filepaths = all_filepaths[state.n_train_subset:]
+
+ print(f"Loading {len(train_filepaths)} files")
+ state.train_objects = []
+ for i, file_path in enumerate(train_filepaths):
+ state.train_objects.append(load_mesh_and_features(file_path, i, require_gt=True, gt_label_fol=args.gt_label_fol))
+
+ state.fit_to = "TrainingSet"
+
+ # Load files
+ print(f"Loading {len(all_filepaths)} files")
+ state.objects = []
+ for i, file_path in enumerate(all_filepaths):
+ state.objects.append(load_mesh_and_features(file_path, i))
+
+
+ ## Set up visualization
+ ps.init()
+ ps.set_automatically_compute_scene_extents(False)
+ lim = viz_upper_limit(state, len(state.objects))
+ ps.set_length_scale(np.linalg.norm(lim) / 4.)
+ low = np.array((0, -1., -1.))
+ high = lim
+ ps.set_bounding_box(low, high)
+
+ for ind, o in enumerate(state.objects):
+ initialize_object_viz(state, o, ind)
+
+ print(f"Loaded {len(state.objects)} objects")
+ if state.n_train_subset != 0:
+ print(f"Loaded {len(state.train_objects)} training objects")
+
+ # One first prediction
+    # (does nothing if there are no annotations / training data)
+ state = update_prediction(state)
+ state = update_prediction_viz(state)
+
+ # Start the interactive UI
+ ps.set_user_callback(lambda : ps_callback([state]))
+ ps.show()
+
+
+if __name__ == "__main__":
+ main()
+
diff --git a/PartField/applications/pack_labels_to_obj.py b/PartField/applications/pack_labels_to_obj.py
new file mode 100644
index 0000000000000000000000000000000000000000..1b5925726dfeb675e4944d7dfad59c63505e693b
--- /dev/null
+++ b/PartField/applications/pack_labels_to_obj.py
@@ -0,0 +1,47 @@
+import sys, os, fnmatch, re
+import argparse
+
+import numpy as np
+import matplotlib
+from matplotlib import colors as mcolors
+import matplotlib.cm
+import potpourri3d as pp3d
+import igl
+from arrgh import arrgh
+
+def main():
+
+ parser = argparse.ArgumentParser()
+
+    parser.add_argument("--input_mesh", type=str, required=True, help="The mesh to read from, in a standard mesh file format.")
+ parser.add_argument("--input_labels", type=str, required=True, help="The labels, as a text file with one entry per line")
+ parser.add_argument("--label_count", type=int, default=-1, help="The number of labels to use for the visualization. If -1, computed as max of given labels.")
+ parser.add_argument("--output", type=str, required=True, help="The obj file to write output to")
+
+ args = parser.parse_args()
+
+
+ # Read the mesh
+ V, F = igl.read_triangle_mesh(args.input_mesh)
+
+ # Read the scalar function
+ S = np.loadtxt(args.input_labels)
+
+ # Convert integers to scalars on [0,1]
+ if args.label_count == -1:
+ N_max = np.max(S) + 1
+ else:
+ N_max = args.label_count
+ S = S.astype(np.float32) / max(N_max-1, 1)
+
+ # Validate and write
+ if len(S.shape) != 1 or S.shape[0] != F.shape[0]:
+        raise ValueError(f"the scalar should be a length num-faces numpy array, but it has length {S.shape[0]} while F has {F.shape[0]} faces")
+
+ S = np.stack((S, np.zeros_like(S)), axis=-1)
+
+ pp3d.write_mesh(V, F, args.output, UV_coords=S, UV_type='per-face')
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/PartField/applications/run_smooth_functional_map.py b/PartField/applications/run_smooth_functional_map.py
new file mode 100644
index 0000000000000000000000000000000000000000..8a294156aafb73d435e716b15860a71cd9971295
--- /dev/null
+++ b/PartField/applications/run_smooth_functional_map.py
@@ -0,0 +1,80 @@
+import os, sys
+import numpy as np
+import torch
+import trimesh
+import json
+
+sys.path.append("..")
+sys.path.append("../third_party/SmoothFunctionalMaps")
+sys.path.append("../third_party/SmoothFunctionalMaps/pyFM")
+
+from partfield.config import default_argument_parser, setup
+from pyFM.mesh import TriMesh
+from pyFM.spectral import mesh_FM_to_p2p
+import DiscreteOpt
+
+
+def vertex_color_map(vertices):
+ min_coord, max_coord = np.min(vertices, axis=0, keepdims=True), np.max(vertices, axis=0, keepdims=True)
+ cmap = (vertices - min_coord) / (max_coord - min_coord)
+ return cmap
+
+
+if __name__ == '__main__':
+ parser = default_argument_parser()
+ args = parser.parse_args()
+ cfg = setup(args, freeze=False)
+
+ feature_dir = os.path.join("../exp_results", cfg.result_name)
+
+ all_files = cfg.dataset.all_files
+ assert len(all_files) % 2 == 0
+ num_pairs = len(all_files) // 2
+
+ device = "cuda"
+
+ output_dir = "../exp_results/correspondence/"
+ os.makedirs(output_dir, exist_ok=True)
+
+ for i in range(num_pairs):
+ file0 = all_files[2 * i]
+ file1 = all_files[2 * i + 1]
+
+ uid0 = file0.split(".")[-2].replace("/", "_")
+ uid1 = file1.split(".")[-2].replace("/", "_")
+
+ mesh0 = trimesh.load(os.path.join(feature_dir, f"input_{uid0}_0.ply"), process=True)
+ mesh1 = trimesh.load(os.path.join(feature_dir, f"input_{uid1}_0.ply"), process=True)
+
+ feat0 = np.load(os.path.join(feature_dir, f"part_feat_{uid0}_0_batch.npy"))
+ feat1 = np.load(os.path.join(feature_dir, f"part_feat_{uid1}_0_batch.npy"))
+
+ assert mesh0.vertices.shape[0] == feat0.shape[0], "num of vertices should match num of features"
+ assert mesh1.vertices.shape[0] == feat1.shape[0], "num of vertices should match num of features"
+
+ th_descr0 = torch.tensor(feat0, device=device, dtype=torch.float32)
+ th_descr1 = torch.tensor(feat1, device=device, dtype=torch.float32)
+
+ cdist_01 = torch.cdist(th_descr0, th_descr1, p=2)
+ p2p_10_init = cdist_01.argmin(dim=0).cpu().numpy()
+ p2p_01_init = cdist_01.argmin(dim=1).cpu().numpy()
+
+ fm_mesh0 = TriMesh(mesh0.vertices, mesh0.faces, area_normalize=True, center=True).process(k=200, intrinsic=True)
+ fm_mesh1 = TriMesh(mesh1.vertices, mesh1.faces, area_normalize=True, center=True).process(k=200, intrinsic=True)
+
+ model = DiscreteOpt.SmoothDiscreteOptimization(fm_mesh0, fm_mesh1)
+ model.set_params("zoomout_rhm")
+ model.opt_params.step = 10
+ model.solve_from_p2p(p2p_21=p2p_10_init, p2p_12=p2p_01_init, n_jobs=30, verbose=True)
+
+ p2p_10_FM = mesh_FM_to_p2p(model.FM_12, fm_mesh0, fm_mesh1, use_adj=True)
+
+ color0 = vertex_color_map(mesh0.vertices)
+ color1 = color0[p2p_10_FM]
+
+ output_mesh0 = trimesh.Trimesh(mesh0.vertices, mesh0.faces, vertex_colors=color0)
+ output_mesh1 = trimesh.Trimesh(mesh1.vertices, mesh1.faces, vertex_colors=color1)
+
+ output_mesh0.export(os.path.join(output_dir, f"correspondence_{uid0}_{uid1}_0.ply"))
+ output_mesh1.export(os.path.join(output_dir, f"correspondence_{uid0}_{uid1}_1.ply"))
+
diff --git a/PartField/applications/shape_pair.py b/PartField/applications/shape_pair.py
new file mode 100644
index 0000000000000000000000000000000000000000..6c0ba3e29b5446e121ec426a4a46a8d12c957824
--- /dev/null
+++ b/PartField/applications/shape_pair.py
@@ -0,0 +1,385 @@
+import numpy as np
+import torch
+import polyscope as ps
+import polyscope.imgui as psim
+import potpourri3d as pp3d
+import trimesh
+import igl
+from dataclasses import dataclass
+from simple_parsing import ArgumentParser
+from arrgh import arrgh
+
+### For clustering
+from collections import defaultdict
+from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
+from scipy.sparse import coo_matrix, csr_matrix
+from scipy.spatial import KDTree
+from scipy.sparse.csgraph import connected_components
+from sklearn.neighbors import NearestNeighbors
+import networkx as nx
+
+from scipy.optimize import linear_sum_assignment
+
+import os, sys
+sys.path.append("..")
+from partfield.utils import *
+
+@dataclass
+class Options:
+
+ """ Basic Options """
+ filename: str
+ filename_alt: str = None
+
+ """System Options"""
+ device: str = "cuda" # Device
+ debug: bool = False # enable debug checks
+ extras: bool = False # include extra output for viz/debugging
+
+ """ State """
+ mode: str = 'co-segmentation'
+ m: dict = None # mesh
+ m_alt: dict = None # second mesh
+
+ # pca mode
+
+ # feature explore mode
+ i_feature: int = 0
+
+ i_cluster: int = 1
+ i_cluster2: int = 1
+
+ i_eps: float = 0.6
+
+ ### For mixing in clustering
+ weight_dist: float = 1.0
+ weight_feat: float = 1.0
+
+ ### For clustering visualization
+ independent: bool = True
+ source_init: bool = True
+
+ feature_range: float = 0.1
+ continuous_explore: bool = False
+
+ viz_mode: str = "faces"
+
+ output_fol: str = "results_pair"
+
+ ### counter for screenshot
+ counter: int = 0
+
+modes_list = ['feature_explore', "co-segmentation"]
+
+def load_features(feature_filename, mesh_filename, viz_mode):
+
+ print("Reading features:")
+ print(f" Feature filename: {feature_filename}")
+ print(f" Mesh filename: {mesh_filename}")
+
+ # load features
+ feat = np.load(feature_filename, allow_pickle=True)
+ feat = feat.astype(np.float32)
+
+ # load mesh things
+ tm = load_mesh_util(mesh_filename)
+
+ V = np.array(tm.vertices, dtype=np.float32)
+ F = np.array(tm.faces)
+
+ if viz_mode == "faces":
+ pca_colors = np.array(tm.visual.face_colors, dtype=np.float32)
+ pca_colors = pca_colors[:,:3] / 255.
+
+ else:
+ pca_colors = np.array(tm.visual.vertex_colors, dtype=np.float32)
+ pca_colors = pca_colors[:,:3] / 255.
+
+ arrgh(V, F, pca_colors, feat)
+
+ return {
+ 'V' : V,
+ 'F' : F,
+ 'pca_colors' : pca_colors,
+ 'feat_np' : feat,
+ 'feat_pt' : torch.tensor(feat, device='cuda'),
+ 'trimesh' : tm,
+ 'label' : None,
+ 'num_cluster' : 1,
+ 'scalar' : None
+ }
+
+def prep_feature_mesh(m, name='mesh'):
+ ps_mesh = ps.register_surface_mesh(name, m['V'], m['F'])
+ ps_mesh.set_selection_mode('faces_only')
+ m['ps_mesh'] = ps_mesh
+
+def viz_pca_colors(m):
+ m['ps_mesh'].add_color_quantity('pca colors', m['pca_colors'], enabled=True, defined_on=m["viz_mode"])
+
+def viz_feature(m, ind):
+ m['ps_mesh'].add_scalar_quantity('pca colors', m['feat_np'][:,ind], cmap='turbo', enabled=True, defined_on=m["viz_mode"])
+
+def feature_distance_np(feats, query_feat):
+ # normalize
+ feats = feats / np.linalg.norm(feats,axis=1)[:,None]
+ query_feat = query_feat / np.linalg.norm(query_feat)
+ # cosine distance
+ cos_sim = np.dot(feats, query_feat)
+ cos_dist = (1 - cos_sim) / 2.
+ return cos_dist
+
+def feature_distance_pt(feats, query_feat):
+ return (1. - torch.nn.functional.cosine_similarity(feats, query_feat[None,:], dim=-1)) / 2.
+
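Both helpers compute the same quantity; a self-contained NumPy sketch (restating `feature_distance_np` so it runs on its own) shows the expected range: 0 for identical directions, 0.5 for orthogonal, 1 for opposite.

```python
import numpy as np

def feature_distance_np(feats, query_feat):
    # normalize, then map cosine similarity [-1, 1] to a distance in [0, 1]
    feats = feats / np.linalg.norm(feats, axis=1)[:, None]
    query_feat = query_feat / np.linalg.norm(query_feat)
    return (1.0 - np.dot(feats, query_feat)) / 2.0

feats = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
dists = feature_distance_np(feats, np.array([1.0, 0.0]))
# dists -> [0.0, 0.5, 1.0]
```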
+
+def ps_callback(opts):
+ m = opts.m
+
+ changed, ind = psim.Combo("Mode", modes_list.index(opts.mode), modes_list)
+ if changed:
+ opts.mode = modes_list[ind]
+ m['ps_mesh'].remove_all_quantities()
+ if opts.m_alt is not None:
+ opts.m_alt['ps_mesh'].remove_all_quantities()
+
+ elif opts.mode == 'feature_explore':
+ psim.TextUnformatted("Click on the mesh on the left")
+ psim.TextUnformatted("to highlight all faces within a given radius in feature space.")
+
+ io = psim.GetIO()
+ if io.MouseClicked[0] or opts.continuous_explore:
+ screen_coords = io.MousePos
+ cam_params = ps.get_view_camera_parameters()
+
+ pick_result = ps.pick(screen_coords=screen_coords)
+
+ # Check if we hit one of the meshes
+ if pick_result.is_hit and pick_result.structure_name == "mesh":
+ if pick_result.structure_data['element_type'] != "face":
+ # shouldn't be possible
+ raise ValueError("pick returned non-face")
+
+ f_hit = pick_result.structure_data['index']
+ bary_weights = np.array(pick_result.structure_data['bary_coords'])
+
+ # get the feature via interpolation
+ point_feat = m['feat_np'][f_hit,:]
+ point_feat_pt = torch.tensor(point_feat, device='cuda')
+
+ all_dists1 = feature_distance_pt(m['feat_pt'], point_feat_pt).detach().cpu().numpy()
+ m['ps_mesh'].add_scalar_quantity("distance", all_dists1, cmap='blues', vminmax=(0, opts.feature_range), enabled=True, defined_on=m["viz_mode"])
+ opts.m['scalar'] = all_dists1
+
+ if opts.m_alt is not None:
+ all_dists2 = feature_distance_pt(opts.m_alt['feat_pt'], point_feat_pt).detach().cpu().numpy()
+ opts.m_alt['ps_mesh'].add_scalar_quantity("distance", all_dists2, cmap='blues', vminmax=(0, opts.feature_range), enabled=True, defined_on=m["viz_mode"])
+ opts.m_alt['scalar'] = all_dists2
+
+ else:
+ # not hit
+ pass
+
+ if psim.Button("Export"):
+ ### Save output
+ OUTPUT_FOL = opts.output_fol
+ fname1 = opts.filename
+ out_mesh_file = os.path.join(OUTPUT_FOL, fname1+'.obj')
+
+ igl.write_obj(out_mesh_file, opts.m["V"], opts.m["F"])
+ print("Saved '{}'.".format(out_mesh_file))
+
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname1 + '_feat_dist_' + str(opts.counter) +'.txt')
+ np.savetxt(out_face_ids_file, opts.m['scalar'], fmt='%f')
+ print("Saved '{}'.".format(out_face_ids_file))
+
+
+ fname2 = opts.filename_alt
+ out_mesh_file = os.path.join(OUTPUT_FOL, fname2+'.obj')
+
+ igl.write_obj(out_mesh_file, opts.m_alt["V"], opts.m_alt["F"])
+ print("Saved '{}'.".format(out_mesh_file))
+
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname2 + '_feat_dist_' + str(opts.counter) +'.txt')
+ np.savetxt(out_face_ids_file, opts.m_alt['scalar'], fmt='%f')
+ print("Saved '{}'.".format(out_face_ids_file))
+
+ opts.counter += 1
+
+
+ _, opts.feature_range = psim.SliderFloat('range', opts.feature_range, v_min=0., v_max=1.0, power=3)
+ _, opts.continuous_explore = psim.Checkbox('continuous', opts.continuous_explore)
+
+ # TODO nsharp remember how the keycodes work
+ if io.KeysDown[ord('q')]:
+ opts.feature_range += 0.01
+ if io.KeysDown[ord('w')]:
+ opts.feature_range -= 0.01
+
+
+ elif opts.mode == "co-segmentation":
+
+ changed, opts.source_init = psim.Checkbox("Source Init", opts.source_init)
+ changed, opts.independent = psim.Checkbox("Independent", opts.independent)
+
+ psim.TextUnformatted("Use the slider to toggle the number of desired clusters.")
+ cluster_changed, opts.i_cluster = psim.SliderInt("num clusters for model1", opts.i_cluster, v_min=1, v_max=30)
+ cluster_changed, opts.i_cluster2 = psim.SliderInt("num clusters for model2", opts.i_cluster2, v_min=1, v_max=30)
+
+ # if cluster_changed:
+ if psim.Button("Recompute"):
+
+ ### Run clustering algorithm
+
+ ### Mesh 1
+ num_clusters1 = opts.i_cluster
+ point_feat1 = m['feat_np']
+ point_feat1 = point_feat1 / np.linalg.norm(point_feat1, axis=-1, keepdims=True)
+ clustering1 = KMeans(n_clusters=num_clusters1, random_state=0, n_init="auto").fit(point_feat1)
+
+ ### Get feature means per cluster
+ feature_means1 = []
+ for j in range(num_clusters1):
+ all_cluster_feat = point_feat1[clustering1.labels_==j]
+ mean_feat = np.mean(all_cluster_feat, axis=0)
+ feature_means1.append(mean_feat)
+
+ feature_means1 = np.array(feature_means1)
+ tree = KDTree(feature_means1)
+
+
+ if opts.source_init:
+ num_clusters2 = opts.i_cluster ### must match the number of init centers taken from mesh 1
+ init_mode = np.array(feature_means1)
+
+ ## default is kmeans++
+ else:
+ num_clusters2 = opts.i_cluster2
+ init_mode = "k-means++"
+
+ ### Mesh 2
+ point_feat2 = opts.m_alt['feat_np']
+ point_feat2 = point_feat2 / np.linalg.norm(point_feat2, axis=-1, keepdims=True)
+
+ clustering2 = KMeans(n_clusters=num_clusters2, random_state=0, init=init_mode, n_init="auto").fit(point_feat2)
+
+ ### Get feature means per cluster
+ feature_means2 = []
+ for j in range(num_clusters2):
+ all_cluster_feat = point_feat2[clustering2.labels_==j]
+ mean_feat = np.mean(all_cluster_feat, axis=0)
+ feature_means2.append(mean_feat)
+
+ feature_means2 = np.array(feature_means2)
+ _, nn_idx = tree.query(feature_means2, k=1)
+
+ print(nn_idx)
+ print("Both KMeans")
+ print(np.unique(clustering1.labels_))
+ print(np.unique(clustering2.labels_))
+
+ relabelled_2 = nn_idx[clustering2.labels_]
+
+ print(np.unique(relabelled_2))
+ print()
+
+ m['ps_mesh'].add_scalar_quantity("cluster_both_kmeans", clustering1.labels_, cmap='turbo', vminmax=(0, num_clusters1-1), enabled=True, defined_on=m["viz_mode"])
+ opts.m['label'] = clustering1.labels_
+ opts.m['num_cluster'] = num_clusters1
+
+ if opts.independent:
+ opts.m_alt['ps_mesh'].add_scalar_quantity("cluster", clustering2.labels_, cmap='turbo', vminmax=(0, num_clusters2-1), enabled=True, defined_on=m["viz_mode"])
+ opts.m_alt['label'] = clustering2.labels_
+ opts.m_alt['num_cluster'] = num_clusters2
+ else:
+ opts.m_alt['ps_mesh'].add_scalar_quantity("cluster", relabelled_2, cmap='turbo', vminmax=(0, num_clusters1-1), enabled=True, defined_on=m["viz_mode"])
+ opts.m_alt['label'] = relabelled_2
+ opts.m_alt['num_cluster'] = num_clusters1
+
+
+ if psim.Button("Export"):
+ ### Save output
+ OUTPUT_FOL = opts.output_fol
+ fname1 = opts.filename
+ out_mesh_file = os.path.join(OUTPUT_FOL, fname1+'.obj')
+
+ igl.write_obj(out_mesh_file, opts.m["V"], opts.m["F"])
+ print("Saved '{}'.".format(out_mesh_file))
+
+ if m["viz_mode"] == "faces":
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname1 + "_" + str(opts.m['num_cluster']) + '_pred_face_ids.txt')
+ else:
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname1 + "_" + str(opts.m['num_cluster']) + '_pred_vertices_ids.txt')
+
+ np.savetxt(out_face_ids_file, opts.m['label'], fmt='%d')
+ print("Saved '{}'.".format(out_face_ids_file))
+
+
+ fname2 = opts.filename_alt
+ out_mesh_file = os.path.join(OUTPUT_FOL, fname2 +'.obj')
+
+ igl.write_obj(out_mesh_file, opts.m_alt["V"], opts.m_alt["F"])
+ print("Saved '{}'.".format(out_mesh_file))
+
+ if m["viz_mode"] == "faces":
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname2 + "_" + str(opts.m_alt['num_cluster']) + '_pred_face_ids.txt')
+ else:
+ out_face_ids_file = os.path.join(OUTPUT_FOL, fname2 + "_" + str(opts.m_alt['num_cluster']) + '_pred_vertices_ids.txt')
+
+ np.savetxt(out_face_ids_file, opts.m_alt['label'], fmt='%d')
+ print("Saved '{}'.".format(out_face_ids_file))
+
+
+def main():
+ ## Parse args
+ # Uses simple_parsing library to automatically construct parser from the dataclass Options
+ parser = ArgumentParser()
+ parser.add_arguments(Options, dest="options")
+ parser.add_argument('--data_root', default="../exp_results/partfield_features/trellis", help='Path the model features are stored.')
+ args = parser.parse_args()
+ opts: Options = args.options
+
+ DATA_ROOT = args.data_root
+
+ shape_1 = opts.filename
+ shape_2 = opts.filename_alt
+
+ if os.path.exists(os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0.npy")):
+ feature_fname1 = os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0.npy")
+ feature_fname2 = os.path.join(DATA_ROOT, "part_feat_"+ shape_2 + "_0.npy")
+
+ mesh_fname1 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_1 + "_0.ply")
+ mesh_fname2 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_2 + "_0.ply")
+ else:
+ feature_fname1 = os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0_batch.npy")
+ feature_fname2 = os.path.join(DATA_ROOT, "part_feat_"+ shape_2 + "_0_batch.npy")
+
+ mesh_fname1 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_1 + "_0.ply")
+ mesh_fname2 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_2 + "_0.ply")
+
+ #### To save output ####
+ os.makedirs(opts.output_fol, exist_ok=True)
+ ########################
+
+ # Initialize
+ ps.init()
+
+ mesh_dict = load_features(feature_fname1, mesh_fname1, opts.viz_mode)
+ prep_feature_mesh(mesh_dict)
+ mesh_dict["viz_mode"] = opts.viz_mode
+ opts.m = mesh_dict
+
+ mesh_dict_alt = load_features(feature_fname2, mesh_fname2, opts.viz_mode)
+ prep_feature_mesh(mesh_dict_alt, name='mesh_alt')
+ mesh_dict_alt['ps_mesh'].translate((2.5, 0., 0.))
+ mesh_dict_alt["viz_mode"] = opts.viz_mode
+ opts.m_alt = mesh_dict_alt
+
+ # Start the interactive UI
+ ps.set_user_callback(lambda : ps_callback(opts))
+ ps.show()
+
+
+if __name__ == "__main__":
+ main()
+
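Before moving on to `single_shape.py`: the nearest-centroid label transfer at the heart of the co-segmentation mode can be sketched in isolation. The feature arrays and cluster count below are made-up stand-ins for the PartField features; the relabeling step itself mirrors the "Recompute" handler above.

```python
import numpy as np
from scipy.spatial import KDTree
from sklearn.cluster import KMeans

# made-up stand-ins for per-face PartField features of the two meshes
rng = np.random.default_rng(0)
feat1 = rng.normal(size=(200, 8))
feat2 = rng.normal(size=(180, 8))
n_clusters = 4

km1 = KMeans(n_clusters=n_clusters, random_state=0, n_init=10).fit(feat1)
km2 = KMeans(n_clusters=n_clusters, random_state=0, n_init=10).fit(feat2)

# mean feature per cluster, then match each mesh-2 cluster to its nearest mesh-1 cluster
means1 = np.stack([feat1[km1.labels_ == j].mean(axis=0) for j in range(n_clusters)])
means2 = np.stack([feat2[km2.labels_ == j].mean(axis=0) for j in range(n_clusters)])
_, nn_idx = KDTree(means1).query(means2, k=1)

# mesh-2 faces re-expressed in mesh-1 label ids
relabelled_2 = nn_idx[km2.labels_]
```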
diff --git a/PartField/applications/single_shape.py b/PartField/applications/single_shape.py
new file mode 100644
index 0000000000000000000000000000000000000000..6f6af4969d87be1258fabf8c8a396db3777fffaf
--- /dev/null
+++ b/PartField/applications/single_shape.py
@@ -0,0 +1,758 @@
+import numpy as np
+import torch
+import polyscope as ps
+import polyscope.imgui as psim
+import potpourri3d as pp3d
+import trimesh
+import igl
+from dataclasses import dataclass
+from simple_parsing import ArgumentParser
+from arrgh import arrgh
+
+### For clustering
+from collections import defaultdict
+from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
+from scipy.sparse import coo_matrix, csr_matrix
+from scipy.spatial import KDTree
+from scipy.sparse.csgraph import connected_components
+from sklearn.neighbors import NearestNeighbors
+import networkx as nx
+
+from scipy.optimize import linear_sum_assignment
+
+import os, sys
+sys.path.append("..")
+from partfield.utils import *
+
+@dataclass
+class Options:
+
+ """ Basic Options """
+ filename: str
+
+ """System Options"""
+ device: str = "cuda" # Device
+ debug: bool = False # enable debug checks
+ extras: bool = False # include extra output for viz/debugging
+
+ """ State """
+ mode: str = 'pca'
+ m: dict = None # mesh
+
+ # pca mode
+
+ # feature explore mode
+ i_feature: int = 0
+
+ i_cluster: int = 1
+
+ i_eps: float = 0.6
+
+ ### For mixing in clustering
+ weight_dist: float = 1.0
+ weight_feat: float = 1.0
+
+ ### For clustering visualization
+ feature_range: float = 0.1
+ continuous_explore: bool = False
+
+ viz_mode: str = "faces"
+
+ output_fol: str = "results_single"
+
+ ### For adj_matrix
+ adj_mode: str = "Vanilla"
+ add_knn_edges: bool = False
+
+ ### counter for screenshot
+ counter: int = 0
+
+modes_list = ['pca', 'feature_viz', 'cluster_agglo', 'cluster_kmeans']
+adj_mode_list = ["Vanilla", "Face_MST", "CC_MST"]
+
+#### For clustering
+class UnionFind:
+ def __init__(self, n):
+ self.parent = list(range(n))
+ self.rank = [1] * n
+
+ def find(self, x):
+ if self.parent[x] != x:
+ self.parent[x] = self.find(self.parent[x])
+ return self.parent[x]
+
+ def union(self, x, y):
+ rootX = self.find(x)
+ rootY = self.find(y)
+
+ if rootX != rootY:
+ if self.rank[rootX] > self.rank[rootY]:
+ self.parent[rootY] = rootX
+ elif self.rank[rootX] < self.rank[rootY]:
+ self.parent[rootX] = rootY
+ else:
+ self.parent[rootY] = rootX
+ self.rank[rootX] += 1
+
+#####################################
+## Face adjacency computation options
+#####################################
+def construct_face_adjacency_matrix_ccmst(face_list, vertices, k=10, with_knn=True):
+ """
+ Given a list of faces (each face is a 3-tuple of vertex indices),
+ construct a face-based adjacency matrix of shape (num_faces, num_faces).
+
+ Two faces are adjacent if they share an edge (the "mesh adjacency").
+ If multiple connected components remain, we:
+ 1) Compute the centroid of each connected component as the mean of all face centroids.
+ 2) Build a KNN graph (k neighbors, default 10) over the component centroids.
+ 3) Compute MST of that KNN graph.
+ 4) Add MST edges that connect different components as "dummy" edges
+ in the face adjacency matrix, ensuring one connected component. The selected face for
+ each connected component is the face closest to the component centroid.
+
+ Parameters
+ ----------
+ face_list : list of tuples
+ List of faces, each face is a tuple (v0, v1, v2) of vertex indices.
+ vertices : np.ndarray of shape (num_vertices, 3)
+ Array of vertex coordinates.
+ k : int, optional
+ Number of neighbors to use in centroid KNN. Default is 10.
+ with_knn : bool, optional
+ If True, also add the raw KNN edges between component-representative
+ faces on top of the MST edges. Default is True.
+
+ Returns
+ -------
+ face_adjacency : scipy.sparse.csr_matrix
+ A CSR sparse matrix of shape (num_faces, num_faces),
+ containing 1s for adjacent faces (shared-edge adjacency)
+ plus dummy edges ensuring a single connected component.
+ """
+ num_faces = len(face_list)
+ if num_faces == 0:
+ # Return an empty matrix if no faces
+ return csr_matrix((0, 0))
+
+ #--------------------------------------------------------------------------
+ # 1) Build adjacency based on shared edges.
+ # (Same logic as the original code, plus import statements.)
+ #--------------------------------------------------------------------------
+ edge_to_faces = defaultdict(list)
+ uf = UnionFind(num_faces)
+ for f_idx, (v0, v1, v2) in enumerate(face_list):
+ # Sort each edge’s endpoints so (i, j) == (j, i)
+ edges = [
+ tuple(sorted((v0, v1))),
+ tuple(sorted((v1, v2))),
+ tuple(sorted((v2, v0)))
+ ]
+ for e in edges:
+ edge_to_faces[e].append(f_idx)
+
+ row = []
+ col = []
+ for edge, face_indices in edge_to_faces.items():
+ unique_faces = list(set(face_indices))
+ if len(unique_faces) > 1:
+ # For every pair of distinct faces that share this edge,
+ # mark them as mutually adjacent
+ for i in range(len(unique_faces)):
+ for j in range(i + 1, len(unique_faces)):
+ fi = unique_faces[i]
+ fj = unique_faces[j]
+ row.append(fi)
+ col.append(fj)
+ row.append(fj)
+ col.append(fi)
+ uf.union(fi, fj)
+
+ data = np.ones(len(row), dtype=np.int8)
+ face_adjacency = coo_matrix(
+ (data, (row, col)), shape=(num_faces, num_faces)
+ ).tocsr()
+
+ #--------------------------------------------------------------------------
+ # 2) Check if the graph from shared edges is already connected.
+ #--------------------------------------------------------------------------
+ n_components = 0
+ for i in range(num_faces):
+ if uf.find(i) == i:
+ n_components += 1
+ print("n_components", n_components)
+
+ if n_components == 1:
+ # Already a single connected component, no need for dummy edges
+ return face_adjacency
+
+ #--------------------------------------------------------------------------
+ # 3) Compute centroids of each face for building a KNN graph.
+ #--------------------------------------------------------------------------
+ face_centroids = []
+ for (v0, v1, v2) in face_list:
+ centroid = (vertices[v0] + vertices[v1] + vertices[v2]) / 3.0
+ face_centroids.append(centroid)
+ face_centroids = np.array(face_centroids)
+
+ #--------------------------------------------------------------------------
+ # 4b) Build a KNN graph on connected components
+ #--------------------------------------------------------------------------
+ # Group faces by their root representative in the Union-Find structure
+ component_dict = {}
+ for face_idx in range(num_faces):
+ root = uf.find(face_idx)
+ if root not in component_dict:
+ component_dict[root] = set()
+ component_dict[root].add(face_idx)
+
+ connected_components = list(component_dict.values())
+
+ print("Using connected component MST.")
+ component_centroid_face_idx = []
+ connected_component_centroids = []
+ for component in connected_components:
+ curr_component_faces = list(component)
+ curr_component_face_centroids = face_centroids[curr_component_faces]
+ component_centroid = np.mean(curr_component_face_centroids, axis=0)
+
+ ### Assign a face closest to the centroid
+ face_idx = curr_component_faces[np.argmin(np.linalg.norm(curr_component_face_centroids-component_centroid, axis=-1))]
+
+ connected_component_centroids.append(component_centroid)
+ component_centroid_face_idx.append(face_idx)
+
+ component_centroid_face_idx = np.array(component_centroid_face_idx)
+ connected_component_centroids = np.array(connected_component_centroids)
+
+ if n_components < k:
+ knn = NearestNeighbors(n_neighbors=n_components, algorithm='auto')
+ else:
+ knn = NearestNeighbors(n_neighbors=k, algorithm='auto')
+ knn.fit(connected_component_centroids)
+ distances, indices = knn.kneighbors(connected_component_centroids)
+
+ #--------------------------------------------------------------------------
+ # 5) Build a weighted graph in NetworkX using centroid-distances as edges
+ #--------------------------------------------------------------------------
+ G = nx.Graph()
+ # Add each face as a node in the graph
+ G.add_nodes_from(range(num_faces))
+
+ # For each face i, add edges (i -> j) for each neighbor j in the KNN
+ for idx1 in range(n_components):
+ i = component_centroid_face_idx[idx1]
+ for idx2, dist in zip(indices[idx1], distances[idx1]):
+ j = component_centroid_face_idx[idx2]
+ if i == j:
+ continue # skip self-loop
+ # Add an undirected edge with 'weight' = distance
+ # NetworkX keeps a single edge per (i, j) pair; a repeated
+ # add_edge simply overwrites the stored weight.
+ G.add_edge(i, j, weight=dist)
+
+ #--------------------------------------------------------------------------
+ # 6) Compute MST on that KNN graph
+ #--------------------------------------------------------------------------
+ mst = nx.minimum_spanning_tree(G, weight='weight')
+ # Sort MST edges by ascending weight, so we add the shortest edges first
+ mst_edges_sorted = sorted(
+ mst.edges(data=True), key=lambda e: e[2]['weight']
+ )
+ print("mst edges sorted", len(mst_edges_sorted))
+ #--------------------------------------------------------------------------
+ # 7) Use a union-find structure to add MST edges only if they
+ # connect two currently disconnected components of the adjacency matrix
+ #--------------------------------------------------------------------------
+
+ # Convert face_adjacency to LIL format for efficient edge addition
+ adjacency_lil = face_adjacency.tolil()
+
+ # Now, step through MST edges in ascending order
+ for (u, v, attr) in mst_edges_sorted:
+ if uf.find(u) != uf.find(v):
+ # These belong to different components, so unify them
+ uf.union(u, v)
+ # And add a "dummy" edge to our adjacency matrix
+ adjacency_lil[u, v] = 1
+ adjacency_lil[v, u] = 1
+
+ # Convert back to CSR format and return
+ face_adjacency = adjacency_lil.tocsr()
+
+ if with_knn:
+ print("Adding KNN edges.")
+ ### Add KNN edges graph too
+ dummy_row = []
+ dummy_col = []
+ for idx1 in range(n_components):
+ i = component_centroid_face_idx[idx1]
+ for idx2 in indices[idx1]:
+ j = component_centroid_face_idx[idx2]
+ dummy_row.extend([i, j])
+ dummy_col.extend([j, i]) ### duplicates are handled by coo
+
+ dummy_data = np.ones(len(dummy_row), dtype=np.int16)
+ dummy_mat = coo_matrix(
+ (dummy_data, (dummy_row, dummy_col)),
+ shape=(num_faces, num_faces)
+ ).tocsr()
+ face_adjacency = face_adjacency + dummy_mat
+ ###########################
+
+ return face_adjacency
+#########################
+
+def construct_face_adjacency_matrix_facemst(face_list, vertices, k=10, with_knn=True):
+ """
+ Given a list of faces (each face is a 3-tuple of vertex indices),
+ construct a face-based adjacency matrix of shape (num_faces, num_faces).
+
+ Two faces are adjacent if they share an edge (the "mesh adjacency").
+ If multiple connected components remain, we:
+ 1) Compute the centroid of each face.
+ 2) Use a KNN graph (k=10) based on centroid distances.
+ 3) Compute MST of that KNN graph.
+ 4) Add MST edges that connect different components as "dummy" edges
+ in the face adjacency matrix, ensuring one connected component.
+
+ Parameters
+ ----------
+ face_list : list of tuples
+ List of faces, each face is a tuple (v0, v1, v2) of vertex indices.
+ vertices : np.ndarray of shape (num_vertices, 3)
+ Array of vertex coordinates.
+ k : int, optional
+ Number of neighbors to use in centroid KNN. Default is 10.
+ with_knn : bool, optional
+ If True, also add the raw KNN edges between all faces on top of the
+ MST edges. Default is True.
+
+ Returns
+ -------
+ face_adjacency : scipy.sparse.csr_matrix
+ A CSR sparse matrix of shape (num_faces, num_faces),
+ containing 1s for adjacent faces (shared-edge adjacency)
+ plus dummy edges ensuring a single connected component.
+ """
+ num_faces = len(face_list)
+ if num_faces == 0:
+ # Return an empty matrix if no faces
+ return csr_matrix((0, 0))
+
+ #--------------------------------------------------------------------------
+ # 1) Build adjacency based on shared edges.
+ # (Same logic as the original code, plus import statements.)
+ #--------------------------------------------------------------------------
+ edge_to_faces = defaultdict(list)
+ uf = UnionFind(num_faces)
+ for f_idx, (v0, v1, v2) in enumerate(face_list):
+ # Sort each edge’s endpoints so (i, j) == (j, i)
+ edges = [
+ tuple(sorted((v0, v1))),
+ tuple(sorted((v1, v2))),
+ tuple(sorted((v2, v0)))
+ ]
+ for e in edges:
+ edge_to_faces[e].append(f_idx)
+
+ row = []
+ col = []
+ for edge, face_indices in edge_to_faces.items():
+ unique_faces = list(set(face_indices))
+ if len(unique_faces) > 1:
+ # For every pair of distinct faces that share this edge,
+ # mark them as mutually adjacent
+ for i in range(len(unique_faces)):
+ for j in range(i + 1, len(unique_faces)):
+ fi = unique_faces[i]
+ fj = unique_faces[j]
+ row.append(fi)
+ col.append(fj)
+ row.append(fj)
+ col.append(fi)
+ uf.union(fi, fj)
+
+ data = np.ones(len(row), dtype=np.int8)
+ face_adjacency = coo_matrix(
+ (data, (row, col)), shape=(num_faces, num_faces)
+ ).tocsr()
+
+ #--------------------------------------------------------------------------
+ # 2) Check if the graph from shared edges is already connected.
+ #--------------------------------------------------------------------------
+ n_components = 0
+ for i in range(num_faces):
+ if uf.find(i) == i:
+ n_components += 1
+ print("n_components", n_components)
+
+ if n_components == 1:
+ # Already a single connected component, no need for dummy edges
+ return face_adjacency
+ #--------------------------------------------------------------------------
+ # 3) Compute centroids of each face for building a KNN graph.
+ #--------------------------------------------------------------------------
+ face_centroids = []
+ for (v0, v1, v2) in face_list:
+ centroid = (vertices[v0] + vertices[v1] + vertices[v2]) / 3.0
+ face_centroids.append(centroid)
+ face_centroids = np.array(face_centroids)
+
+ #--------------------------------------------------------------------------
+ # 4) Build a KNN graph (k=10) over face centroids using scikit‐learn
+ #--------------------------------------------------------------------------
+ knn = NearestNeighbors(n_neighbors=k, algorithm='auto')
+ knn.fit(face_centroids)
+ distances, indices = knn.kneighbors(face_centroids)
+ # 'distances[i]' are the distances from face i to each of its 'k' neighbors
+ # 'indices[i]' are the face indices of those neighbors
+
+ #--------------------------------------------------------------------------
+ # 5) Build a weighted graph in NetworkX using centroid-distances as edges
+ #--------------------------------------------------------------------------
+ G = nx.Graph()
+ # Add each face as a node in the graph
+ G.add_nodes_from(range(num_faces))
+
+ # For each face i, add edges (i -> j) for each neighbor j in the KNN
+ for i in range(num_faces):
+ for j, dist in zip(indices[i], distances[i]):
+ if i == j:
+ continue # skip self-loop
+ # Add an undirected edge with 'weight' = distance
+ # NetworkX keeps a single edge per (i, j) pair; a repeated
+ # add_edge simply overwrites the stored weight.
+ G.add_edge(i, j, weight=dist)
+
+ #--------------------------------------------------------------------------
+ # 6) Compute MST on that KNN graph
+ #--------------------------------------------------------------------------
+ mst = nx.minimum_spanning_tree(G, weight='weight')
+ # Sort MST edges by ascending weight, so we add the shortest edges first
+ mst_edges_sorted = sorted(
+ mst.edges(data=True), key=lambda e: e[2]['weight']
+ )
+ print("mst edges sorted", len(mst_edges_sorted))
+ #--------------------------------------------------------------------------
+ # 7) Use a union-find structure to add MST edges only if they
+ # connect two currently disconnected components of the adjacency matrix
+ #--------------------------------------------------------------------------
+
+ # Convert face_adjacency to LIL format for efficient edge addition
+ adjacency_lil = face_adjacency.tolil()
+
+ # Now, step through MST edges in ascending order
+ for (u, v, attr) in mst_edges_sorted:
+ if uf.find(u) != uf.find(v):
+ # These belong to different components, so unify them
+ uf.union(u, v)
+ # And add a "dummy" edge to our adjacency matrix
+ adjacency_lil[u, v] = 1
+ adjacency_lil[v, u] = 1
+
+ # Convert back to CSR format and return
+ face_adjacency = adjacency_lil.tocsr()
+
+ if with_knn:
+ print("Adding KNN edges.")
+ ### Add KNN edges graph too
+ dummy_row = []
+ dummy_col = []
+ for i in range(num_faces):
+ for j in indices[i]:
+ dummy_row.extend([i, j])
+ dummy_col.extend([j, i]) ### duplicates are handled by coo
+
+ dummy_data = np.ones(len(dummy_row), dtype=np.int16)
+ dummy_mat = coo_matrix(
+ (dummy_data, (dummy_row, dummy_col)),
+ shape=(num_faces, num_faces)
+ ).tocsr()
+ face_adjacency = face_adjacency + dummy_mat
+ ###########################
+
+ return face_adjacency
+
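The KNN-plus-MST bridging used in both constructors above can be reproduced with SciPy alone; this is a sketch on synthetic 2D "centroids" (the application code uses scikit-learn and NetworkX instead). Because a minimum spanning tree spans all points, it necessarily contains an edge bridging the two well-separated clumps.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import KDTree

# two well-separated clumps of synthetic "face centroids"
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(10.0, 0.1, (5, 2))])

# KNN graph weighted by centroid distance (k chosen large enough to reach both clumps)
k = 6
dists, idx = KDTree(pts).query(pts, k=k + 1)  # neighbor 0 is the point itself
rows = np.repeat(np.arange(len(pts)), k)
cols = idx[:, 1:].ravel()
weights = dists[:, 1:].ravel()
knn_graph = csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))

# the MST spans all points, so it must contain at least one clump-bridging edge
mst = minimum_spanning_tree(knn_graph).tocoo()
bridges = [(i, j) for i, j in zip(mst.row, mst.col) if (i < 5) != (j < 5)]
```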
+def construct_face_adjacency_matrix_naive(face_list):
+ """
+ Given a list of faces (each face is a 3-tuple of vertex indices),
+ construct a face-based adjacency matrix of shape (num_faces, num_faces).
+ Two faces are adjacent if they share an edge.
+
+ If multiple connected components exist, dummy edges are added to
+ turn them into a single connected component. Edges are added naively by
+ taking one representative face per component and chaining consecutive components -- (comp_0, comp_1), (comp_1, comp_2), ...
+
+ Parameters
+ ----------
+ face_list : list of tuples
+ List of faces, each face is a tuple (v0, v1, v2) of vertex indices.
+
+ Returns
+ -------
+ face_adjacency : scipy.sparse.csr_matrix
+ A CSR sparse matrix of shape (num_faces, num_faces),
+ containing 1s for adjacent faces and 0s otherwise.
+ Additional edges are added if the faces are in multiple components.
+ """
+
+ num_faces = len(face_list)
+ if num_faces == 0:
+ # Return an empty matrix if no faces
+ return csr_matrix((0, 0))
+
+ # Step 1: Map each undirected edge -> list of face indices that contain that edge
+ edge_to_faces = defaultdict(list)
+
+ # Populate the edge_to_faces dictionary
+ for f_idx, (v0, v1, v2) in enumerate(face_list):
+ # For an edge, we always store its endpoints in sorted order
+ # to avoid duplication (e.g. edge (2,5) is the same as (5,2)).
+ edges = [
+ tuple(sorted((v0, v1))),
+ tuple(sorted((v1, v2))),
+ tuple(sorted((v2, v0)))
+ ]
+ for e in edges:
+ edge_to_faces[e].append(f_idx)
+
+ # Step 2: Build the adjacency (row, col) lists among faces
+ row = []
+ col = []
+ for e, faces_sharing_e in edge_to_faces.items():
+ # If an edge is shared by multiple faces, make each pair of those faces adjacent
+ f_indices = list(set(faces_sharing_e)) # unique face indices for this edge
+ if len(f_indices) > 1:
+ # For each pair of faces, mark them as adjacent
+ for i in range(len(f_indices)):
+ for j in range(i + 1, len(f_indices)):
+ f_i = f_indices[i]
+ f_j = f_indices[j]
+ row.append(f_i)
+ col.append(f_j)
+ row.append(f_j)
+ col.append(f_i)
+
+ # Create a COO matrix, then convert it to CSR
+ data = np.ones(len(row), dtype=np.int8)
+ face_adjacency = coo_matrix(
+ (data, (row, col)),
+ shape=(num_faces, num_faces)
+ ).tocsr()
+
+ # Step 3: Ensure single connected component
+ # Use connected_components to see how many components exist
+ n_components, labels = connected_components(face_adjacency, directed=False)
+
+ if n_components > 1:
+ # We have multiple components; let's "connect" them via dummy edges
+ # The simplest approach is to pick one face from each component
+ # and connect them sequentially to enforce a single component.
+ component_representatives = []
+
+ for comp_id in range(n_components):
+ # indices of faces in this component
+ faces_in_comp = np.where(labels == comp_id)[0]
+ if len(faces_in_comp) > 0:
+ # take the first face in this component as a representative
+ component_representatives.append(faces_in_comp[0])
+
+ # Now, add edges between consecutive representatives
+ dummy_row = []
+ dummy_col = []
+ for i in range(len(component_representatives) - 1):
+ f_i = component_representatives[i]
+ f_j = component_representatives[i + 1]
+ dummy_row.extend([f_i, f_j])
+ dummy_col.extend([f_j, f_i])
+
+ if dummy_row:
+ dummy_data = np.ones(len(dummy_row), dtype=np.int8)
+ dummy_mat = coo_matrix(
+ (dummy_data, (dummy_row, dummy_col)),
+ shape=(num_faces, num_faces)
+ ).tocsr()
+ face_adjacency = face_adjacency + dummy_mat
+
+ return face_adjacency
+#####################################
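All three constructors share the same first step: faces become adjacent when they share an edge. As an isolated sketch, the shared-edge adjacency for a toy two-triangle strip comes out as expected.

```python
import numpy as np
from collections import defaultdict
from scipy.sparse import coo_matrix

# two triangles sharing the edge (1, 2)
faces = [(0, 1, 2), (1, 3, 2)]

# map each undirected edge to the faces containing it
edge_to_faces = defaultdict(list)
for f_idx, (v0, v1, v2) in enumerate(faces):
    for e in (tuple(sorted((v0, v1))), tuple(sorted((v1, v2))), tuple(sorted((v2, v0)))):
        edge_to_faces[e].append(f_idx)

# every pair of faces sharing an edge is mutually adjacent
row, col = [], []
for shared in edge_to_faces.values():
    if len(shared) > 1:
        for i in range(len(shared)):
            for j in range(i + 1, len(shared)):
                row += [shared[i], shared[j]]
                col += [shared[j], shared[i]]

adj = coo_matrix((np.ones(len(row), dtype=np.int8), (row, col)), shape=(2, 2)).tocsr()
# adj[0, 1] == adj[1, 0] == 1: the two triangles are adjacent
```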
+
+def load_features(feature_filename, mesh_filename, viz_mode):
+
+ print("Reading features:")
+ print(f" Feature filename: {feature_filename}")
+ print(f" Mesh filename: {mesh_filename}")
+
+ # load features
+ feat = np.load(feature_filename, allow_pickle=True)
+ feat = feat.astype(np.float32)
+
+ # load mesh things
+ tm = load_mesh_util(mesh_filename)
+
+ V = np.array(tm.vertices, dtype=np.float32)
+ F = np.array(tm.faces)
+
+ if viz_mode == "faces":
+ pca_colors = np.array(tm.visual.face_colors, dtype=np.float32)
+ pca_colors = pca_colors[:,:3] / 255.
+
+ else:
+ pca_colors = np.array(tm.visual.vertex_colors, dtype=np.float32)
+ pca_colors = pca_colors[:,:3] / 255.
+
+ arrgh(V, F, pca_colors, feat)
+
+
+ return {
+ 'V' : V,
+ 'F' : F,
+ 'pca_colors' : pca_colors,
+ 'feat_np' : feat,
+ 'feat_pt' : torch.tensor(feat, device='cuda'),
+ 'trimesh' : tm,
+ 'label' : None,
+ 'num_cluster' : 1,
+ 'scalar' : None
+ }
+
+def prep_feature_mesh(m, name='mesh'):
+ ps_mesh = ps.register_surface_mesh(name, m['V'], m['F'])
+ ps_mesh.set_selection_mode('faces_only')
+ m['ps_mesh'] = ps_mesh
+
+def viz_pca_colors(m):
+ m['ps_mesh'].add_color_quantity('pca colors', m['pca_colors'], enabled=True, defined_on=m["viz_mode"])
+
+def viz_feature(m, ind):
+ m['ps_mesh'].add_scalar_quantity('feature', m['feat_np'][:,ind], cmap='turbo', enabled=True, defined_on=m["viz_mode"])
+
+def feature_distance_np(feats, query_feat):
+ # normalize
+ feats = feats / np.linalg.norm(feats,axis=1)[:,None]
+ query_feat = query_feat / np.linalg.norm(query_feat)
+ # cosine distance
+ cos_sim = np.dot(feats, query_feat)
+ cos_dist = (1 - cos_sim) / 2.
+ return cos_dist
+
+def feature_distance_pt(feats, query_feat):
+ return (1. - torch.nn.functional.cosine_similarity(feats, query_feat[None,:], dim=-1)) / 2.
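For reference, both distance helpers map cosine similarity from [-1, 1] to a distance in [0, 1]. A tiny standalone check of that mapping on the NumPy variant (CPU-only, so it avoids the `device='cuda'` dependency of the Torch path):

```python
import numpy as np

def feature_distance_np(feats, query_feat):
    # normalize rows and the query, then map cosine similarity
    # from [-1, 1] to a distance in [0, 1]
    feats = feats / np.linalg.norm(feats, axis=1)[:, None]
    query_feat = query_feat / np.linalg.norm(query_feat)
    return (1 - feats @ query_feat) / 2.

feats = np.array([[1., 0.], [0., 1.], [-1., 0.]])
q = np.array([1., 0.])
print(feature_distance_np(feats, q))  # distances: 0, 0.5, 1
```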
+
+
+def ps_callback(opts):
+ m = opts.m
+
+ changed, ind = psim.Combo("Mode", modes_list.index(opts.mode), modes_list)
+ if changed:
+ opts.mode = modes_list[ind]
+ m['ps_mesh'].remove_all_quantities()
+
+ if opts.mode == 'pca':
+ psim.TextUnformatted("""3-dim PCA embedding of features is shown as RGB color""")
+ viz_pca_colors(m)
+
+ elif opts.mode == 'feature_viz':
+ psim.TextUnformatted("""Use the slider to scrub through all features.\nCtrl-click to type a particular index.""")
+
+ this_changed, opts.i_feature = psim.SliderInt("feature index", opts.i_feature, v_min=0, v_max=(m['feat_np'].shape[-1]-1))
+ this_changed = this_changed or changed
+
+ if this_changed:
+ viz_feature(m, opts.i_feature)
+
+ elif opts.mode == "cluster_agglo":
+ psim.TextUnformatted("""Use the slider to set the desired number of clusters.""")
+ cluster_changed, opts.i_cluster = psim.SliderInt("number of clusters", opts.i_cluster, v_min=1, v_max=30)
+
+ ### To handle different face adjacency options
+ mode_changed, ind = psim.Combo("Adj Matrix Def", adj_mode_list.index(opts.adj_mode), adj_mode_list)
+ knn_changed, opts.add_knn_edges = psim.Checkbox("Add KNN edges", opts.add_knn_edges)
+
+ if mode_changed:
+ opts.adj_mode = adj_mode_list[ind]
+
+ if psim.Button("Recompute"):
+
+ ### Run clustering algorithm
+ num_clusters = opts.i_cluster
+
+ ### Mesh 1
+ point_feat = m['feat_np']
+ point_feat = point_feat / np.linalg.norm(point_feat, axis=-1, keepdims=True)
+
+ ### Compute adjacency matrix ###
+ if opts.adj_mode == "Vanilla":
+ adj_matrix = construct_face_adjacency_matrix_naive(opts.m["F"])
+ elif opts.adj_mode == "Face_MST":
+ adj_matrix = construct_face_adjacency_matrix_facemst(opts.m["F"], opts.m["V"], with_knn=opts.add_knn_edges)
+ elif opts.adj_mode == "CC_MST":
+ adj_matrix = construct_face_adjacency_matrix_ccmst(opts.m["F"], opts.m["V"], with_knn=opts.add_knn_edges)
+ ################################
+
+ ## Agglomerative clustering
+ clustering = AgglomerativeClustering(connectivity=adj_matrix,
+ n_clusters=num_clusters).fit(point_feat)
+
+ m['ps_mesh'].add_scalar_quantity("cluster", clustering.labels_, cmap='turbo', vminmax=(0, num_clusters-1), enabled=True, defined_on=m["viz_mode"])
+ print("Recomputed.")
+
+
+ elif opts.mode == "cluster_kmeans":
+ psim.TextUnformatted("""Use the slider to set the desired number of clusters.""")
+
+ cluster_changed, opts.i_cluster = psim.SliderInt("number of clusters", opts.i_cluster, v_min=1, v_max=30)
+
+ if psim.Button("Recompute"):
+
+ ### Run clustering algorithm
+ num_clusters = opts.i_cluster
+
+ ### Mesh 1
+ point_feat = m['feat_np']
+ point_feat = point_feat / np.linalg.norm(point_feat, axis=-1, keepdims=True)
+ clustering = KMeans(n_clusters=num_clusters, random_state=0, n_init="auto").fit(point_feat)
+
+ m['ps_mesh'].add_scalar_quantity("cluster", clustering.labels_, cmap='turbo', vminmax=(0, num_clusters-1), enabled=True, defined_on=m["viz_mode"])
+
+def main():
+ ## Parse args
+ # Uses simple_parsing library to automatically construct parser from the dataclass Options
+ parser = ArgumentParser()
+ parser.add_arguments(Options, dest="options")
+ parser.add_argument('--data_root', default="../exp_results/partfield_features/trellis/", help='Path where the model features are stored.')
+ args = parser.parse_args()
+ opts: Options = args.options
+
+ DATA_ROOT = args.data_root
+
+ shape_1 = opts.filename
+
+ if os.path.exists(os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0.npy")):
+ feature_fname1 = os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0.npy")
+ mesh_fname1 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_1 + "_0.ply")
+ else:
+ feature_fname1 = os.path.join(DATA_ROOT, "part_feat_"+ shape_1 + "_0_batch.npy")
+ mesh_fname1 = os.path.join(DATA_ROOT, "feat_pca_"+ shape_1 + "_0.ply")
+
+ #### To save output ####
+ os.makedirs(opts.output_fol, exist_ok=True)
+ ########################
+
+ # Initialize
+ ps.init()
+
+ mesh_dict = load_features(feature_fname1, mesh_fname1, opts.viz_mode)
+ prep_feature_mesh(mesh_dict)
+ mesh_dict["viz_mode"] = opts.viz_mode
+ opts.m = mesh_dict
+
+ # Start the interactive UI
+ ps.set_user_callback(lambda : ps_callback(opts))
+ ps.show()
+
+
+if __name__ == "__main__":
+ main()
+
diff --git a/PartField/compute_metric.py b/PartField/compute_metric.py
new file mode 100644
index 0000000000000000000000000000000000000000..818e4a7c55a21dd1c32885d6c146e2c1e834e0eb
--- /dev/null
+++ b/PartField/compute_metric.py
@@ -0,0 +1,97 @@
+import numpy as np
+import json
+from os.path import join
+from typing import List
+import os
+
+def compute_iou(pred, gt):
+ intersection = np.logical_and(pred, gt).sum()
+ union = np.logical_or(pred, gt).sum()
+ if union != 0:
+ return (intersection / union) * 100
+ else:
+ return 0
+
+def eval_single_gt_shape(gt_label, pred_masks):
+ # gt: [N,], label index
+ # pred: [B, N], B is the number of predicted parts, binary label
+ unique_gt_label = np.unique(gt_label)
+ best_ious = []
+ for label in unique_gt_label:
+ best_iou = 0
+ if label == -1:
+ continue
+ for mask in pred_masks:
+ iou = compute_iou(mask, gt_label == label)
+ best_iou = max(best_iou, iou)
+ best_ious.append(best_iou)
+ return np.mean(best_ious)
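The per-label best-IoU logic of `eval_single_gt_shape` can be checked on a toy example. This is a standalone re-implementation for illustration, not the diff's code: each ground-truth label is matched against every predicted binary mask, and the mean of the per-label best IoUs is the shape's mIoU.

```python
import numpy as np

def compute_iou(pred, gt):
    # intersection-over-union in percent; an empty union counts as 0
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union * 100 if union else 0

gt = np.array([0, 0, 1, 1])                  # ground-truth label per face
masks = np.array([[1, 1, 0, 0],              # predicted binary part masks
                  [0, 0, 1, 0]], dtype=bool)
# best IoU per ground-truth label, then the mean over labels
best = [max(compute_iou(m, gt == l) for m in masks) for l in np.unique(gt)]
print(np.mean(best))  # (100 + 50) / 2 = 75.0
```

Label 0 is matched exactly by the first mask (IoU 100); label 1 overlaps the second mask on one of two faces (IoU 50), so the shape mIoU is 75.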
+
+def eval_whole_dataset(pred_folder, merge_parts=False):
+ print(pred_folder)
+ meta = json.load(open("/home/mikaelaangel/Desktop/data/PartObjaverse-Tiny_semantic.json", "r"))
+
+ categories = meta.keys()
+ results_per_cat = {}
+ per_cat_mious = []
+ overall_mious = []
+
+ MAX_NUM_CLUSTERS = 20
+ view_id = 0
+
+ for cat in categories:
+ results_per_cat[cat] = []
+ for shape_id in meta[cat].keys():
+
+ try:
+ all_pred_labels = []
+ for num_cluster in range(2, MAX_NUM_CLUSTERS):
+ ### load each label
+ fname_clustering = os.path.join(pred_folder, "cluster_out", str(shape_id) + "_" + str(view_id) + "_" + str(num_cluster).zfill(2)) + ".npy"
+ pred_label = np.load(fname_clustering)
+ all_pred_labels.append(np.squeeze(pred_label))
+
+ all_pred_labels = np.array(all_pred_labels)
+
+ except FileNotFoundError:
+ continue
+
+ pred_masks = []
+
+ #### Path for PartObjaverseTiny Labels
+ gt_labels_path = "PartObjaverse-Tiny_instance_gt"
+ #################################
+
+ gt_label = np.load(os.path.join(gt_labels_path, shape_id + ".npy"))
+
+ if merge_parts:
+ pred_masks = []
+ for result in all_pred_labels:
+ pred = result
+ assert pred.shape[0] == gt_label.shape[0]
+ for label in np.unique(pred):
+ pred_masks.append(pred == label)
+ miou = eval_single_gt_shape(gt_label, np.array(pred_masks))
+ results_per_cat[cat].append(miou)
+ else:
+ best_miou = 0
+ for result in all_pred_labels:
+ pred_masks = []
+ pred = result
+
+ for label in np.unique(pred):
+ pred_masks.append(pred == label)
+ miou = eval_single_gt_shape(gt_label, np.array(pred_masks))
+ best_miou = max(best_miou, miou)
+ results_per_cat[cat].append(best_miou)
+
+ print(cat, np.mean(results_per_cat[cat]))
+ per_cat_mious.append(np.mean(results_per_cat[cat]))
+ overall_mious += results_per_cat[cat]
+ print("mean per-category mIoU:", np.mean(per_cat_mious))
+ print("overall mIoU:", np.mean(overall_mious), "over", len(overall_mious), "shapes")
+
+
+if __name__ == "__main__":
+ eval_whole_dataset("dump_partobjtiny_clustering")
+
diff --git a/PartField/configs/final/correspondence_demo.yaml b/PartField/configs/final/correspondence_demo.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..42272f38c0b4e06f93a91b04fce9629c2f1b712e
--- /dev/null
+++ b/PartField/configs/final/correspondence_demo.yaml
@@ -0,0 +1,44 @@
+result_name: partfield_features/correspondence_demo
+
+continue_ckpt: model/model.ckpt
+
+triplane_channels_low: 128
+triplane_channels_high: 512
+triplane_resolution: 128
+
+vertex_feature: True
+n_point_per_face: 1000
+n_sample_each: 10000
+is_pc: False
+remesh_demo: False
+correspondence_demo: True
+
+preprocess_mesh: True
+
+dataset:
+ type: "Mix"
+ data_path: data/DenseCorr3D
+ train_batch_size: 1
+ val_batch_size: 1
+ train_num_workers: 8
+ all_files:
+ # pairs of example to run correspondence
+ - animals/071b8_toy_animals_017/simple_mesh.obj
+ - animals/bdfd0_toy_animals_016/simple_mesh.obj
+ - animals/2d6b3_toy_animals_009/simple_mesh.obj
+ - animals/96615_toy_animals_018/simple_mesh.obj
+ - chairs/063d1_chair_006/simple_mesh.obj
+ - chairs/bea57_chair_012/simple_mesh.obj
+ - chairs/fe0fe_chair_004/simple_mesh.obj
+ - chairs/288dc_chair_011/simple_mesh.obj
+ # consider decimating animals/../color_mesh.obj yourself for better mesh topology than the provided simple_mesh.obj
+ # (e.g. <50k vertices for functional map efficiency).
+
+loss:
+ triplet: 1.0
+
+use_2d_feat: False
+pvcnn:
+ point_encoder_type: 'pvcnn'
+ z_triplane_channels: 256
+ z_triplane_resolution: 128
\ No newline at end of file
diff --git a/PartField/configs/final/demo.yaml b/PartField/configs/final/demo.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..010fb8dbef280c6abfbc7fd082654402c2cbfcef
--- /dev/null
+++ b/PartField/configs/final/demo.yaml
@@ -0,0 +1,28 @@
+result_name: demo_test
+
+continue_ckpt: model/model.ckpt
+
+triplane_channels_low: 128
+triplane_channels_high: 512
+triplane_resolution: 128
+
+n_point_per_face: 1000
+n_sample_each: 10000
+is_pc : False
+remesh_demo : False
+
+dataset:
+ type: "Mix"
+ data_path: "objaverse_data"
+ train_batch_size: 1
+ val_batch_size: 1
+ train_num_workers: 8
+
+loss:
+ triplet: 1.0
+
+use_2d_feat: False
+pvcnn:
+ point_encoder_type: 'pvcnn'
+ z_triplane_channels: 256
+ z_triplane_resolution: 128
\ No newline at end of file
diff --git a/PartField/download_demo_data.sh b/PartField/download_demo_data.sh
new file mode 100644
index 0000000000000000000000000000000000000000..b9f31a36d46f3d7e371a8927f80346e127f253df
--- /dev/null
+++ b/PartField/download_demo_data.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+mkdir -p data
+cd data
+mkdir -p objaverse_samples
+cd objaverse_samples
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-050/00200996b8f34f55a2dd2f44d316d107.glb
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-042/002e462c8bfa4267a9c9f038c7966f3b.glb
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-046/0c3ca2b32545416f8f1e6f0e87def1a6.glb
+wget https://huggingface.co/datasets/allenai/objaverse/resolve/main/glbs/000-063/65c6ffa083c6496eb84a0aa3c48d63ad.glb
+
+cd ..
+mkdir -p trellis_samples
+cd trellis_samples
+wget https://github.com/Trellis3D/trellis3d.github.io/raw/refs/heads/main/assets/scenes/blacksmith/glbs/dwarf.glb
+wget https://github.com/Trellis3D/trellis3d.github.io/raw/refs/heads/main/assets/img2/glbs/goblin.glb
+wget https://github.com/Trellis3D/trellis3d.github.io/raw/refs/heads/main/assets/img2/glbs/excavator.glb
+wget https://github.com/Trellis3D/trellis3d.github.io/raw/refs/heads/main/assets/img2/glbs/elephant.glb
+cd ..
+cd ..
diff --git a/PartField/environment.yml b/PartField/environment.yml
new file mode 100644
index 0000000000000000000000000000000000000000..152cafa636f9200abdd156c4f41ac56e0c270e78
--- /dev/null
+++ b/PartField/environment.yml
@@ -0,0 +1,772 @@
+name: partfield
+channels:
+ - nvidia/label/cuda-12.4.0
+ - conda-forge
+ - defaults
+dependencies:
+ - _anaconda_depends=2025.03=py310_mkl_0
+ - _libgcc_mutex=0.1=conda_forge
+ - _openmp_mutex=4.5=2_gnu
+ - aiobotocore=2.21.1=pyhd8ed1ab_0
+ - aiohappyeyeballs=2.6.1=pyhd8ed1ab_0
+ - aiohttp=3.11.14=py310h89163eb_0
+ - aioitertools=0.12.0=pyhd8ed1ab_1
+ - aiosignal=1.3.2=pyhd8ed1ab_0
+ - alabaster=1.0.0=pyhd8ed1ab_1
+ - alsa-lib=1.2.13=hb9d3cd8_0
+ - altair=5.5.0=pyhd8ed1ab_1
+ - anaconda=custom=py310_3
+ - anyio=4.9.0=pyh29332c3_0
+ - aom=3.9.1=hac33072_0
+ - appdirs=1.4.4=pyhd8ed1ab_1
+ - argon2-cffi=23.1.0=pyhd8ed1ab_1
+ - argon2-cffi-bindings=21.2.0=py310ha75aee5_5
+ - arrow=1.3.0=pyhd8ed1ab_1
+ - astroid=3.3.9=py310hff52083_0
+ - astropy=6.1.7=py310hf462985_0
+ - astropy-iers-data=0.2025.3.31.0.36.18=pyhd8ed1ab_0
+ - asttokens=3.0.0=pyhd8ed1ab_1
+ - async-lru=2.0.5=pyh29332c3_0
+ - async-timeout=5.0.1=pyhd8ed1ab_1
+ - asyncssh=2.20.0=pyhd8ed1ab_0
+ - atomicwrites=1.4.1=pyhd8ed1ab_1
+ - attr=2.5.1=h166bdaf_1
+ - attrs=25.3.0=pyh71513ae_0
+ - automat=24.8.1=pyhd8ed1ab_1
+ - autopep8=2.0.4=pyhd8ed1ab_0
+ - aws-c-auth=0.8.6=hd08a7f5_4
+ - aws-c-cal=0.8.7=h043a21b_0
+ - aws-c-common=0.12.0=hb9d3cd8_0
+ - aws-c-compression=0.3.1=h3870646_2
+ - aws-c-event-stream=0.5.4=h04a3f94_2
+ - aws-c-http=0.9.4=hb9b18c6_4
+ - aws-c-io=0.17.0=h3dad3f2_6
+ - aws-c-mqtt=0.12.2=h108da3e_2
+ - aws-c-s3=0.7.13=h822ba82_2
+ - aws-c-sdkutils=0.2.3=h3870646_2
+ - aws-checksums=0.2.3=h3870646_2
+ - aws-crt-cpp=0.31.0=h55f77e1_4
+ - aws-sdk-cpp=1.11.510=h37a5c72_3
+ - azure-core-cpp=1.14.0=h5cfcd09_0
+ - azure-identity-cpp=1.10.0=h113e628_0
+ - azure-storage-blobs-cpp=12.13.0=h3cf044e_1
+ - azure-storage-common-cpp=12.8.0=h736e048_1
+ - azure-storage-files-datalake-cpp=12.12.0=ha633028_1
+ - babel=2.17.0=pyhd8ed1ab_0
+ - backports=1.0=pyhd8ed1ab_5
+ - backports.tarfile=1.2.0=pyhd8ed1ab_1
+ - bcrypt=4.3.0=py310h505e2c1_0
+ - beautifulsoup4=4.13.3=pyha770c72_0
+ - binaryornot=0.4.4=pyhd8ed1ab_2
+ - binutils=2.43=h4852527_4
+ - binutils_impl_linux-64=2.43=h4bf12b8_4
+ - binutils_linux-64=2.43=h4852527_4
+ - black=25.1.0=pyha5154f8_0
+ - blas=1.0=mkl
+ - bleach=6.2.0=pyh29332c3_4
+ - bleach-with-css=6.2.0=h82add2a_4
+ - blinker=1.9.0=pyhff2d567_0
+ - blosc=1.21.6=he440d0b_1
+ - bokeh=3.7.0=pyhd8ed1ab_0
+ - brotli=1.1.0=hb9d3cd8_2
+ - brotli-bin=1.1.0=hb9d3cd8_2
+ - brotli-python=1.1.0=py310hf71b8c6_2
+ - brunsli=0.1=h9c3ff4c_0
+ - bzip2=1.0.8=h4bc722e_7
+ - c-ares=1.34.4=hb9d3cd8_0
+ - c-blosc2=2.15.2=h3122c55_1
+ - c-compiler=1.9.0=h2b85faf_0
+ - ca-certificates=2025.1.31=hbcca054_0
+ - cached-property=1.5.2=hd8ed1ab_1
+ - cached_property=1.5.2=pyha770c72_1
+ - cachetools=5.5.2=pyhd8ed1ab_0
+ - cairo=1.18.4=h3394656_0
+ - certifi=2025.1.31=pyhd8ed1ab_0
+ - cffi=1.17.1=py310h8deb56e_0
+ - chardet=5.2.0=pyhd8ed1ab_3
+ - charls=2.4.2=h59595ed_0
+ - charset-normalizer=3.4.1=pyhd8ed1ab_0
+ - click=8.1.8=pyh707e725_0
+ - cloudpickle=3.1.1=pyhd8ed1ab_0
+ - colorama=0.4.6=pyhd8ed1ab_1
+ - colorcet=3.1.0=pyhd8ed1ab_1
+ - comm=0.2.2=pyhd8ed1ab_1
+ - constantly=15.1.0=py_0
+ - contourpy=1.3.1=py310h3788b33_0
+ - cookiecutter=2.6.0=pyhd8ed1ab_1
+ - cpython=3.10.16=py310hd8ed1ab_1
+ - cryptography=44.0.2=py310h6c63255_0
+ - cssselect=1.2.0=pyhd8ed1ab_1
+ - cuda=12.4.0=0
+ - cuda-cccl_linux-64=12.8.90=ha770c72_1
+ - cuda-command-line-tools=12.8.1=ha770c72_0
+ - cuda-compiler=12.8.1=hbad6d8a_0
+ - cuda-crt-dev_linux-64=12.8.93=ha770c72_1
+ - cuda-crt-tools=12.8.93=ha770c72_1
+ - cuda-cudart=12.8.90=h5888daf_1
+ - cuda-cudart-dev=12.8.90=h5888daf_1
+ - cuda-cudart-dev_linux-64=12.8.90=h3f2d84a_1
+ - cuda-cudart-static=12.8.90=h5888daf_1
+ - cuda-cudart-static_linux-64=12.8.90=h3f2d84a_1
+ - cuda-cudart_linux-64=12.8.90=h3f2d84a_1
+ - cuda-cuobjdump=12.8.90=hbd13f7d_1
+ - cuda-cupti=12.8.90=hbd13f7d_0
+ - cuda-cupti-dev=12.8.90=h5888daf_0
+ - cuda-cuxxfilt=12.8.90=hbd13f7d_1
+ - cuda-demo-suite=12.4.99=0
+ - cuda-driver-dev=12.8.90=h5888daf_1
+ - cuda-driver-dev_linux-64=12.8.90=h3f2d84a_1
+ - cuda-gdb=12.8.90=h50b4baa_0
+ - cuda-libraries=12.8.1=ha770c72_0
+ - cuda-libraries-dev=12.8.1=ha770c72_0
+ - cuda-nsight=12.8.90=h7938cbb_1
+ - cuda-nvcc=12.8.93=hcdd1206_1
+ - cuda-nvcc-dev_linux-64=12.8.93=he91c749_1
+ - cuda-nvcc-impl=12.8.93=h85509e4_1
+ - cuda-nvcc-tools=12.8.93=he02047a_1
+ - cuda-nvcc_linux-64=12.8.93=h04802cd_1
+ - cuda-nvdisasm=12.8.90=hbd13f7d_1
+ - cuda-nvml-dev=12.8.90=hbd13f7d_0
+ - cuda-nvprof=12.8.90=hbd13f7d_0
+ - cuda-nvprune=12.8.90=hbd13f7d_1
+ - cuda-nvrtc=12.8.93=h5888daf_1
+ - cuda-nvrtc-dev=12.8.93=h5888daf_1
+ - cuda-nvtx=12.8.90=hbd13f7d_0
+ - cuda-nvvm-dev_linux-64=12.8.93=ha770c72_1
+ - cuda-nvvm-impl=12.8.93=he02047a_1
+ - cuda-nvvm-tools=12.8.93=he02047a_1
+ - cuda-nvvp=12.8.93=hbd13f7d_1
+ - cuda-opencl=12.8.90=hbd13f7d_0
+ - cuda-opencl-dev=12.8.90=h5888daf_0
+ - cuda-profiler-api=12.8.90=h7938cbb_1
+ - cuda-sanitizer-api=12.8.93=hbd13f7d_1
+ - cuda-toolkit=12.8.1=ha804496_0
+ - cuda-tools=12.8.1=ha770c72_0
+ - cuda-version=12.8=h5d125a7_3
+ - cuda-visual-tools=12.8.1=ha770c72_0
+ - curl=8.12.1=h332b0f4_0
+ - cxx-compiler=1.9.0=h1a2810e_0
+ - cycler=0.12.1=pyhd8ed1ab_1
+ - cyrus-sasl=2.1.27=h54b06d7_7
+ - cytoolz=1.0.1=py310ha75aee5_0
+ - datashader=0.17.0=pyhd8ed1ab_0
+ - dav1d=1.2.1=hd590300_0
+ - dbus=1.13.6=h5008d03_3
+ - debugpy=1.8.13=py310hf71b8c6_0
+ - decorator=5.2.1=pyhd8ed1ab_0
+ - defusedxml=0.7.1=pyhd8ed1ab_0
+ - deprecated=1.2.18=pyhd8ed1ab_0
+ - diff-match-patch=20241021=pyhd8ed1ab_1
+ - dill=0.3.9=pyhd8ed1ab_1
+ - docstring-to-markdown=0.16=pyh29332c3_1
+ - docutils=0.21.2=pyhd8ed1ab_1
+ - double-conversion=3.3.1=h5888daf_0
+ - et_xmlfile=2.0.0=pyhd8ed1ab_1
+ - exceptiongroup=1.2.2=pyhd8ed1ab_1
+ - executing=2.1.0=pyhd8ed1ab_1
+ - expat=2.7.0=h5888daf_0
+ - fcitx-qt5=1.2.7=h748e8b9_2
+ - filelock=3.18.0=pyhd8ed1ab_0
+ - flake8=7.1.2=pyhd8ed1ab_0
+ - font-ttf-dejavu-sans-mono=2.37=hab24e00_0
+ - font-ttf-inconsolata=3.000=h77eed37_0
+ - font-ttf-source-code-pro=2.038=h77eed37_0
+ - font-ttf-ubuntu=0.83=h77eed37_3
+ - fontconfig=2.15.0=h7e30c49_1
+ - fonts-conda-ecosystem=1=0
+ - fonts-conda-forge=1=0
+ - fonttools=4.56.0=py310h89163eb_0
+ - fqdn=1.5.1=pyhd8ed1ab_1
+ - freetype=2.13.3=h48d6fc4_0
+ - frozenlist=1.5.0=py310h89163eb_1
+ - fzf=0.61.0=h59e48b9_0
+ - gcc=13.3.0=h9576a4e_2
+ - gcc_impl_linux-64=13.3.0=h1e990d8_2
+ - gcc_linux-64=13.3.0=hc28eda2_8
+ - gds-tools=1.13.1.3=h5888daf_0
+ - gettext=0.23.1=h5888daf_0
+ - gettext-tools=0.23.1=h5888daf_0
+ - gflags=2.2.2=h5888daf_1005
+ - giflib=5.2.2=hd590300_0
+ - gitdb=4.0.12=pyhd8ed1ab_0
+ - gitpython=3.1.44=pyhff2d567_0
+ - glib=2.84.0=h07242d1_0
+ - glib-tools=2.84.0=h4833e2c_0
+ - glog=0.7.1=hbabe93e_0
+ - gmp=6.3.0=hac33072_2
+ - gmpy2=2.1.5=py310he8512ff_3
+ - graphite2=1.3.13=h59595ed_1003
+ - greenlet=3.1.1=py310hf71b8c6_1
+ - gst-plugins-base=1.24.7=h0a52356_0
+ - gstreamer=1.24.7=hf3bb09a_0
+ - gxx=13.3.0=h9576a4e_2
+ - gxx_impl_linux-64=13.3.0=hae580e1_2
+ - gxx_linux-64=13.3.0=h6834431_8
+ - h11=0.14.0=pyhd8ed1ab_1
+ - h2=4.2.0=pyhd8ed1ab_0
+ - h5py=3.13.0=nompi_py310h60e0fe6_100
+ - harfbuzz=10.4.0=h76408a6_0
+ - hdf5=1.14.3=nompi_h2d575fe_109
+ - holoviews=1.20.2=pyhd8ed1ab_0
+ - hpack=4.1.0=pyhd8ed1ab_0
+ - httpcore=1.0.7=pyh29332c3_1
+ - httpx=0.28.1=pyhd8ed1ab_0
+ - hvplot=0.11.2=pyhd8ed1ab_0
+ - hyperframe=6.1.0=pyhd8ed1ab_0
+ - hyperlink=21.0.0=pyh29332c3_1
+ - icu=75.1=he02047a_0
+ - idna=3.10=pyhd8ed1ab_1
+ - imagecodecs=2024.12.30=py310h78a9a29_0
+ - imageio=2.37.0=pyhfb79c49_0
+ - imagesize=1.4.1=pyhd8ed1ab_0
+ - imbalanced-learn=0.13.0=pyhd8ed1ab_0
+ - importlib-metadata=8.6.1=pyha770c72_0
+ - importlib_resources=6.5.2=pyhd8ed1ab_0
+ - incremental=24.7.2=pyhd8ed1ab_1
+ - inflection=0.5.1=pyhd8ed1ab_1
+ - iniconfig=2.0.0=pyhd8ed1ab_1
+ - intake=2.0.8=pyhd8ed1ab_0
+ - intervaltree=3.1.0=pyhd8ed1ab_1
+ - ipykernel=6.29.5=pyh3099207_0
+ - ipython=8.34.0=pyh907856f_0
+ - ipython_genutils=0.2.0=pyhd8ed1ab_2
+ - isoduration=20.11.0=pyhd8ed1ab_1
+ - isort=6.0.1=pyhd8ed1ab_0
+ - itemadapter=0.11.0=pyhd8ed1ab_0
+ - itemloaders=1.3.2=pyhd8ed1ab_1
+ - itsdangerous=2.2.0=pyhd8ed1ab_1
+ - jaraco.classes=3.4.0=pyhd8ed1ab_2
+ - jaraco.context=6.0.1=pyhd8ed1ab_0
+ - jaraco.functools=4.1.0=pyhd8ed1ab_0
+ - jedi=0.19.2=pyhd8ed1ab_1
+ - jeepney=0.9.0=pyhd8ed1ab_0
+ - jellyfish=1.1.3=py310h505e2c1_0
+ - jinja2=3.1.6=pyhd8ed1ab_0
+ - jmespath=1.0.1=pyhd8ed1ab_1
+ - joblib=1.4.2=pyhd8ed1ab_1
+ - jq=1.7.1=hd590300_0
+ - json5=0.10.0=pyhd8ed1ab_1
+ - jsonpointer=3.0.0=py310hff52083_1
+ - jsonschema=4.23.0=pyhd8ed1ab_1
+ - jsonschema-specifications=2024.10.1=pyhd8ed1ab_1
+ - jsonschema-with-format-nongpl=4.23.0=hd8ed1ab_1
+ - jupyter=1.1.1=pyhd8ed1ab_1
+ - jupyter-lsp=2.2.5=pyhd8ed1ab_1
+ - jupyter_client=8.6.3=pyhd8ed1ab_1
+ - jupyter_console=6.6.3=pyhd8ed1ab_1
+ - jupyter_core=5.7.2=pyh31011fe_1
+ - jupyter_events=0.12.0=pyh29332c3_0
+ - jupyter_server=2.15.0=pyhd8ed1ab_0
+ - jupyter_server_terminals=0.5.3=pyhd8ed1ab_1
+ - jupyterlab=4.3.6=pyhd8ed1ab_0
+ - jupyterlab-variableinspector=3.2.4=pyhd8ed1ab_0
+ - jupyterlab_pygments=0.3.0=pyhd8ed1ab_2
+ - jupyterlab_server=2.27.3=pyhd8ed1ab_1
+ - jxrlib=1.1=hd590300_3
+ - kernel-headers_linux-64=3.10.0=he073ed8_18
+ - keyring=25.6.0=pyha804496_0
+ - keyutils=1.6.1=h166bdaf_0
+ - kiwisolver=1.4.7=py310h3788b33_0
+ - krb5=1.21.3=h659f571_0
+ - lame=3.100=h166bdaf_1003
+ - lazy-loader=0.4=pyhd8ed1ab_2
+ - lazy_loader=0.4=pyhd8ed1ab_2
+ - lcms2=2.17=h717163a_0
+ - ld_impl_linux-64=2.43=h712a8e2_4
+ - lerc=4.0.0=h27087fc_0
+ - libabseil=20250127.1=cxx17_hbbce691_0
+ - libaec=1.1.3=h59595ed_0
+ - libarrow=19.0.1=h120c447_5_cpu
+ - libarrow-acero=19.0.1=hcb10f89_5_cpu
+ - libarrow-dataset=19.0.1=hcb10f89_5_cpu
+ - libarrow-substrait=19.0.1=h1bed206_5_cpu
+ - libasprintf=0.23.1=h8e693c7_0
+ - libasprintf-devel=0.23.1=h8e693c7_0
+ - libavif16=1.2.1=hbb36593_2
+ - libblas=3.9.0=1_h86c2bf4_netlib
+ - libbrotlicommon=1.1.0=hb9d3cd8_2
+ - libbrotlidec=1.1.0=hb9d3cd8_2
+ - libbrotlienc=1.1.0=hb9d3cd8_2
+ - libcap=2.75=h39aace5_0
+ - libcblas=3.9.0=8_h3b12eaf_netlib
+ - libclang-cpp19.1=19.1.7=default_hb5137d0_2
+ - libclang-cpp20.1=20.1.1=default_hb5137d0_0
+ - libclang13=20.1.1=default_h9c6a7e4_0
+ - libcrc32c=1.1.2=h9c3ff4c_0
+ - libcublas=12.8.4.1=h9ab20c4_1
+ - libcublas-dev=12.8.4.1=h9ab20c4_1
+ - libcufft=11.3.3.83=h5888daf_1
+ - libcufft-dev=11.3.3.83=h5888daf_1
+ - libcufile=1.13.1.3=h12f29b5_0
+ - libcufile-dev=1.13.1.3=h5888daf_0
+ - libcups=2.3.3=h4637d8d_4
+ - libcurand=10.3.9.90=h9ab20c4_1
+ - libcurand-dev=10.3.9.90=h9ab20c4_1
+ - libcurl=8.12.1=h332b0f4_0
+ - libcusolver=11.7.3.90=h9ab20c4_1
+ - libcusolver-dev=11.7.3.90=h9ab20c4_1
+ - libcusparse=12.5.8.93=hbd13f7d_0
+ - libcusparse-dev=12.5.8.93=h5888daf_0
+ - libdeflate=1.23=h4ddbbb0_0
+ - libdrm=2.4.124=hb9d3cd8_0
+ - libedit=3.1.20250104=pl5321h7949ede_0
+ - libegl=1.7.0=ha4b6fd6_2
+ - libev=4.33=hd590300_2
+ - libevent=2.1.12=hf998b51_1
+ - libexpat=2.7.0=h5888daf_0
+ - libffi=3.4.6=h2dba641_1
+ - libflac=1.4.3=h59595ed_0
+ - libgcc=14.2.0=h767d61c_2
+ - libgcc-devel_linux-64=13.3.0=hc03c837_102
+ - libgcc-ng=14.2.0=h69a702a_2
+ - libgcrypt-lib=1.11.0=hb9d3cd8_2
+ - libgettextpo=0.23.1=h5888daf_0
+ - libgettextpo-devel=0.23.1=h5888daf_0
+ - libgfortran=14.2.0=h69a702a_2
+ - libgfortran-ng=14.2.0=h69a702a_2
+ - libgfortran5=14.2.0=hf1ad2bd_2
+ - libgl=1.7.0=ha4b6fd6_2
+ - libglib=2.84.0=h2ff4ddf_0
+ - libglvnd=1.7.0=ha4b6fd6_2
+ - libglx=1.7.0=ha4b6fd6_2
+ - libgomp=14.2.0=h767d61c_2
+ - libgoogle-cloud=2.36.0=hc4361e1_1
+ - libgoogle-cloud-storage=2.36.0=h0121fbd_1
+ - libgpg-error=1.51=hbd13f7d_1
+ - libgrpc=1.71.0=he753a82_0
+ - libhwy=1.1.0=h00ab1b0_0
+ - libiconv=1.18=h4ce23a2_1
+ - libjpeg-turbo=3.0.0=hd590300_1
+ - libjxl=0.11.1=hdb8da77_0
+ - liblapack=3.9.0=8_h3b12eaf_netlib
+ - libllvm19=19.1.7=ha7bfdaf_1
+ - libllvm20=20.1.1=ha7bfdaf_0
+ - liblzma=5.6.4=hb9d3cd8_0
+ - libnghttp2=1.64.0=h161d5f1_0
+ - libnl=3.11.0=hb9d3cd8_0
+ - libnpp=12.3.3.100=h9ab20c4_1
+ - libnpp-dev=12.3.3.100=h9ab20c4_1
+ - libnsl=2.0.1=hd590300_0
+ - libntlm=1.8=hb9d3cd8_0
+ - libnuma=2.0.18=h4ab18f5_2
+ - libnvfatbin=12.8.90=hbd13f7d_0
+ - libnvfatbin-dev=12.8.90=h5888daf_0
+ - libnvjitlink=12.8.93=h5888daf_1
+ - libnvjitlink-dev=12.8.93=h5888daf_1
+ - libnvjpeg=12.3.5.92=h97fd463_0
+ - libnvjpeg-dev=12.3.5.92=ha770c72_0
+ - libogg=1.3.5=h4ab18f5_0
+ - libopengl=1.7.0=ha4b6fd6_2
+ - libopentelemetry-cpp=1.19.0=hd1b1c89_0
+ - libopentelemetry-cpp-headers=1.19.0=ha770c72_0
+ - libopus=1.3.1=h7f98852_1
+ - libparquet=19.0.1=h081d1f1_5_cpu
+ - libpciaccess=0.18=hd590300_0
+ - libpng=1.6.47=h943b412_0
+ - libpq=17.4=h27ae623_0
+ - libprotobuf=5.29.3=h501fc15_0
+ - libre2-11=2024.07.02=hba17884_3
+ - libsanitizer=13.3.0=he8ea267_2
+ - libsndfile=1.2.2=hc60ed4a_1
+ - libsodium=1.0.20=h4ab18f5_0
+ - libspatialindex=2.1.0=he57a185_0
+ - libsqlite=3.49.1=hee588c1_2
+ - libssh2=1.11.1=hf672d98_0
+ - libstdcxx=14.2.0=h8f9b012_2
+ - libstdcxx-devel_linux-64=13.3.0=hc03c837_102
+ - libstdcxx-ng=14.2.0=h4852527_2
+ - libsystemd0=257.4=h4e0b6ca_1
+ - libthrift=0.21.0=h0e7cc3e_0
+ - libtiff=4.7.0=hd9ff511_3
+ - libudev1=257.4=hbe16f8c_1
+ - libutf8proc=2.10.0=h4c51ac1_0
+ - libuuid=2.38.1=h0b41bf4_0
+ - libvorbis=1.3.7=h9c3ff4c_0
+ - libwebp=1.5.0=hae8dbeb_0
+ - libwebp-base=1.5.0=h851e524_0
+ - libxcb=1.17.0=h8a09558_0
+ - libxcrypt=4.4.36=hd590300_1
+ - libxkbcommon=1.8.1=hc4a0caf_0
+ - libxkbfile=1.1.0=h166bdaf_1
+ - libxml2=2.13.7=h8d12d68_0
+ - libxslt=1.1.39=h76b75d6_0
+ - libzlib=1.3.1=hb9d3cd8_2
+ - libzopfli=1.0.3=h9c3ff4c_0
+ - linkify-it-py=2.0.3=pyhd8ed1ab_1
+ - locket=1.0.0=pyhd8ed1ab_0
+ - lxml=5.3.1=py310h6ee67d5_0
+ - lz4=4.3.3=py310h80b8a69_2
+ - lz4-c=1.10.0=h5888daf_1
+ - markdown=3.6=pyhd8ed1ab_0
+ - markdown-it-py=3.0.0=pyhd8ed1ab_1
+ - markupsafe=3.0.2=py310h89163eb_1
+ - matplotlib=3.10.1=py310hff52083_0
+ - matplotlib-base=3.10.1=py310h68603db_0
+ - matplotlib-inline=0.1.7=pyhd8ed1ab_1
+ - mccabe=0.7.0=pyhd8ed1ab_1
+ - mdit-py-plugins=0.4.2=pyhd8ed1ab_1
+ - mdurl=0.1.2=pyhd8ed1ab_1
+ - mistune=3.1.3=pyh29332c3_0
+ - more-itertools=10.6.0=pyhd8ed1ab_0
+ - mpc=1.3.1=h24ddda3_1
+ - mpfr=4.2.1=h90cbb55_3
+ - mpg123=1.32.9=hc50e24c_0
+ - mpmath=1.3.0=pyhd8ed1ab_1
+ - msgpack-python=1.1.0=py310h3788b33_0
+ - multidict=6.2.0=py310h89163eb_0
+ - multipledispatch=0.6.0=pyhd8ed1ab_1
+ - munkres=1.1.4=pyh9f0ad1d_0
+ - mypy=1.15.0=py310ha75aee5_0
+ - mypy_extensions=1.0.0=pyha770c72_1
+ - mysql-common=9.0.1=h266115a_5
+ - mysql-libs=9.0.1=he0572af_5
+ - narwhals=1.32.0=pyhd8ed1ab_0
+ - nbclient=0.10.2=pyhd8ed1ab_0
+ - nbconvert=7.16.6=hb482800_0
+ - nbconvert-core=7.16.6=pyh29332c3_0
+ - nbconvert-pandoc=7.16.6=hed9df3c_0
+ - nbformat=5.10.4=pyhd8ed1ab_1
+ - ncurses=6.5=h2d0b736_3
+ - nest-asyncio=1.6.0=pyhd8ed1ab_1
+ - networkx=3.4.2=pyh267e887_2
+ - nlohmann_json=3.11.3=he02047a_1
+ - nltk=3.9.1=pyhd8ed1ab_1
+ - nomkl=1.0=h5ca1d4c_0
+ - notebook=7.3.3=pyhd8ed1ab_0
+ - notebook-shim=0.2.4=pyhd8ed1ab_1
+ - nsight-compute=2025.1.1.2=hb5ebaad_0
+ - nspr=4.36=h5888daf_0
+ - nss=3.110=h159eef7_0
+ - numexpr=2.10.2=py310hdb6e06b_100
+ - numpydoc=1.8.0=pyhd8ed1ab_1
+ - ocl-icd=2.3.2=hb9d3cd8_2
+ - oniguruma=6.9.10=hb9d3cd8_0
+ - opencl-headers=2024.10.24=h5888daf_0
+ - openjpeg=2.5.3=h5fbd93e_0
+ - openldap=2.6.9=he970967_0
+ - openpyxl=3.1.5=py310h0999ad4_1
+ - openssl=3.4.1=h7b32b05_0
+ - orc=2.1.1=h17f744e_1
+ - overrides=7.7.0=pyhd8ed1ab_1
+ - packaging=24.2=pyhd8ed1ab_2
+ - pandas=2.2.3=py310h5eaa309_1
+ - pandoc=3.6.4=ha770c72_0
+ - pandocfilters=1.5.0=pyhd8ed1ab_0
+ - panel=1.6.2=pyhd8ed1ab_0
+ - param=2.2.0=pyhd8ed1ab_0
+ - parsel=1.10.0=pyhd8ed1ab_0
+ - parso=0.8.4=pyhd8ed1ab_1
+ - partd=1.4.2=pyhd8ed1ab_0
+ - pathspec=0.12.1=pyhd8ed1ab_1
+ - patsy=1.0.1=pyhd8ed1ab_1
+ - pcre2=10.44=hba22ea6_2
+ - pexpect=4.9.0=pyhd8ed1ab_1
+ - pickleshare=0.7.5=pyhd8ed1ab_1004
+ - pillow=11.1.0=py310h7e6dc6c_0
+ - pip=25.0.1=pyh8b19718_0
+ - pixman=0.44.2=h29eaf8c_0
+ - pkgutil-resolve-name=1.3.10=pyhd8ed1ab_2
+ - platformdirs=4.3.7=pyh29332c3_0
+ - plotly=6.0.1=pyhd8ed1ab_0
+ - pluggy=1.5.0=pyhd8ed1ab_1
+ - ply=3.11=pyhd8ed1ab_3
+ - prometheus-cpp=1.3.0=ha5d0236_0
+ - prometheus_client=0.21.1=pyhd8ed1ab_0
+ - prompt-toolkit=3.0.50=pyha770c72_0
+ - prompt_toolkit=3.0.50=hd8ed1ab_0
+ - propcache=0.2.1=py310h89163eb_1
+ - protego=0.4.0=pyhd8ed1ab_0
+ - protobuf=5.29.3=py310hcba5963_0
+ - psutil=7.0.0=py310ha75aee5_0
+ - pthread-stubs=0.4=hb9d3cd8_1002
+ - ptyprocess=0.7.0=pyhd8ed1ab_1
+ - pulseaudio-client=17.0=hac146a9_1
+ - pure_eval=0.2.3=pyhd8ed1ab_1
+ - py-cpuinfo=9.0.0=pyhd8ed1ab_1
+ - pyarrow=19.0.1=py310hff52083_0
+ - pyarrow-core=19.0.1=py310hac404ae_0_cpu
+ - pyasn1=0.6.1=pyhd8ed1ab_2
+ - pyasn1-modules=0.4.2=pyhd8ed1ab_0
+ - pycodestyle=2.12.1=pyhd8ed1ab_1
+ - pyconify=0.2.1=pyhd8ed1ab_0
+ - pycparser=2.22=pyh29332c3_1
+ - pyct=0.5.0=pyhd8ed1ab_1
+ - pycurl=7.45.6=py310h6811363_0
+ - pydeck=0.9.1=pyhd8ed1ab_0
+ - pydispatcher=2.0.5=py_1
+ - pydocstyle=6.3.0=pyhd8ed1ab_1
+ - pyerfa=2.0.1.5=py310hf462985_0
+ - pyflakes=3.2.0=pyhd8ed1ab_1
+ - pygithub=2.6.1=pyhd8ed1ab_0
+ - pygments=2.19.1=pyhd8ed1ab_0
+ - pyjwt=2.10.1=pyhd8ed1ab_0
+ - pylint=3.3.5=pyh29332c3_0
+ - pylint-venv=3.0.4=pyhd8ed1ab_1
+ - pyls-spyder=0.4.0=pyhd8ed1ab_1
+ - pynacl=1.5.0=py310ha75aee5_4
+ - pyodbc=5.2.0=py310hf71b8c6_0
+ - pyopenssl=25.0.0=pyhd8ed1ab_0
+ - pyparsing=3.2.3=pyhd8ed1ab_1
+ - pyqt=5.15.9=py310h04931ad_5
+ - pyqt5-sip=12.12.2=py310hc6cd4ac_5
+ - pyqtwebengine=5.15.9=py310h704022c_5
+ - pyside6=6.8.3=py310hfd10a26_0
+ - pysocks=1.7.1=pyha55dd90_7
+ - pytables=3.10.1=py310h1affd9f_4
+ - pytest=8.3.5=pyhd8ed1ab_0
+ - python=3.10.16=he725a3c_1_cpython
+ - python-dateutil=2.9.0.post0=pyhff2d567_1
+ - python-fastjsonschema=2.21.1=pyhd8ed1ab_0
+ - python-gssapi=1.9.0=py310h695cd88_1
+ - python-json-logger=2.0.7=pyhd8ed1ab_0
+ - python-lsp-black=2.0.0=pyhff2d567_1
+ - python-lsp-jsonrpc=1.1.2=pyhff2d567_1
+ - python-lsp-server=1.12.2=pyhff2d567_0
+ - python-lsp-server-base=1.12.2=pyhd8ed1ab_0
+ - python-slugify=8.0.4=pyhd8ed1ab_1
+ - python-tzdata=2025.2=pyhd8ed1ab_0
+ - python_abi=3.10=5_cp310
+ - pytoolconfig=1.2.5=pyhd8ed1ab_1
+ - pytz=2024.1=pyhd8ed1ab_0
+ - pyuca=1.2=pyhd8ed1ab_2
+ - pyviz_comms=3.0.4=pyhd8ed1ab_1
+ - pywavelets=1.8.0=py310hf462985_0
+ - pyxdg=0.28=pyhd8ed1ab_0
+ - pyyaml=6.0.2=py310h89163eb_2
+ - pyzmq=26.3.0=py310h71f11fc_0
+ - qdarkstyle=3.2.3=pyhd8ed1ab_1
+ - qhull=2020.2=h434a139_5
+ - qstylizer=0.2.4=pyhff2d567_0
+ - qt-main=5.15.15=hc3cb62f_2
+ - qt-webengine=5.15.15=h0071231_2
+ - qt6-main=6.8.3=h588cce1_0
+ - qtawesome=1.4.0=pyh9208f05_1
+ - qtconsole=5.6.1=pyhd8ed1ab_1
+ - qtconsole-base=5.6.1=pyha770c72_1
+ - qtpy=2.4.3=pyhd8ed1ab_0
+ - queuelib=1.8.0=pyhd8ed1ab_0
+ - rav1e=0.6.6=he8a937b_2
+ - rdma-core=56.0=h5888daf_0
+ - re2=2024.07.02=h9925aae_3
+ - readline=8.2=h8c095d6_2
+ - referencing=0.36.2=pyh29332c3_0
+ - regex=2024.11.6=py310ha75aee5_0
+ - requests=2.32.3=pyhd8ed1ab_1
+ - requests-file=2.1.0=pyhd8ed1ab_1
+ - rfc3339-validator=0.1.4=pyhd8ed1ab_1
+ - rfc3986-validator=0.1.1=pyh9f0ad1d_0
+ - rich=14.0.0=pyh29332c3_0
+ - rope=1.13.0=pyhd8ed1ab_1
+ - rpds-py=0.24.0=py310hc1293b2_0
+ - rtree=1.4.0=pyh11ca60a_1
+ - s2n=1.5.14=h6c98b2b_0
+ - s3fs=2025.3.1=pyhd8ed1ab_0
+ - scikit-image=0.25.2=py310h5eaa309_0
+ - scikit-learn=1.6.1=py310h27f47ee_0
+ - scipy=1.15.2=py310h1d65ade_0
+ - scrapy=2.12.0=py310hff52083_1
+ - seaborn=0.13.2=hd8ed1ab_3
+ - seaborn-base=0.13.2=pyhd8ed1ab_3
+ - secretstorage=3.3.3=py310hff52083_3
+ - send2trash=1.8.3=pyh0d859eb_1
+ - service-identity=24.2.0=pyha770c72_1
+ - service_identity=24.2.0=hd8ed1ab_1
+ - setuptools=75.8.2=pyhff2d567_0
+ - sip=6.7.12=py310hc6cd4ac_0
+ - six=1.17.0=pyhd8ed1ab_0
+ - sklearn-compat=0.1.3=pyhd8ed1ab_0
+ - smmap=5.0.2=pyhd8ed1ab_0
+ - snappy=1.2.1=h8bd8927_1
+ - sniffio=1.3.1=pyhd8ed1ab_1
+ - snowballstemmer=2.2.0=pyhd8ed1ab_0
+ - sortedcontainers=2.4.0=pyhd8ed1ab_1
+ - soupsieve=2.5=pyhd8ed1ab_1
+ - sphinx=8.1.3=pyhd8ed1ab_1
+ - sphinxcontrib-applehelp=2.0.0=pyhd8ed1ab_1
+ - sphinxcontrib-devhelp=2.0.0=pyhd8ed1ab_1
+ - sphinxcontrib-htmlhelp=2.1.0=pyhd8ed1ab_1
+ - sphinxcontrib-jsmath=1.0.1=pyhd8ed1ab_1
+ - sphinxcontrib-qthelp=2.0.0=pyhd8ed1ab_1
+ - sphinxcontrib-serializinghtml=1.1.10=pyhd8ed1ab_1
+ - spyder=6.0.5=hd8ed1ab_0
+ - spyder-base=6.0.5=linux_pyh62a8a7d_0
+ - spyder-kernels=3.0.3=unix_pyh707e725_0
+ - sqlalchemy=2.0.40=py310ha75aee5_0
+ - stack_data=0.6.3=pyhd8ed1ab_1
+ - statsmodels=0.14.4=py310hf462985_0
+ - streamlit=1.44.0=pyhd8ed1ab_1
+ - superqt=0.7.3=pyhb6d5dde_0
+ - svt-av1=3.0.2=h5888daf_0
+ - sympy=1.13.3=pyh2585a3b_105
+ - sysroot_linux-64=2.17=h0157908_18
+ - tabulate=0.9.0=pyhd8ed1ab_2
+ - tblib=3.0.0=pyhd8ed1ab_1
+ - tenacity=9.0.0=pyhd8ed1ab_1
+ - terminado=0.18.1=pyh0d859eb_0
+ - text-unidecode=1.3=pyhd8ed1ab_2
+ - textdistance=4.6.3=pyhd8ed1ab_1
+ - threadpoolctl=3.6.0=pyhecae5ae_0
+ - three-merge=0.1.1=pyhd8ed1ab_1
+ - tifffile=2025.3.30=pyhd8ed1ab_0
+ - tinycss2=1.4.0=pyhd8ed1ab_0
+ - tk=8.6.13=noxft_h4845f30_101
+ - tldextract=5.1.3=pyhd8ed1ab_1
+ - toml=0.10.2=pyhd8ed1ab_1
+ - tomli=2.2.1=pyhd8ed1ab_1
+ - tomlkit=0.13.2=pyha770c72_1
+ - toolz=1.0.0=pyhd8ed1ab_1
+ - tornado=6.4.2=py310ha75aee5_0
+ - tqdm=4.67.1=pyhd8ed1ab_1
+ - traitlets=5.14.3=pyhd8ed1ab_1
+ - twisted=24.11.0=py310ha75aee5_0
+ - types-python-dateutil=2.9.0.20241206=pyhd8ed1ab_0
+ - typing-extensions=4.13.0=h9fa5a19_1
+ - typing_extensions=4.13.0=pyh29332c3_1
+ - typing_utils=0.1.0=pyhd8ed1ab_1
+ - tzdata=2025b=h78e105d_0
+ - uc-micro-py=1.0.3=pyhd8ed1ab_1
+ - ujson=5.10.0=py310hf71b8c6_1
+ - unicodedata2=16.0.0=py310ha75aee5_0
+ - unixodbc=2.3.12=h661eb56_0
+ - uri-template=1.3.0=pyhd8ed1ab_1
+ - urllib3=2.3.0=pyhd8ed1ab_0
+ - w3lib=2.3.1=pyhd8ed1ab_0
+ - watchdog=6.0.0=py310hff52083_0
+ - wayland=1.23.1=h3e06ad9_0
+ - wcwidth=0.2.13=pyhd8ed1ab_1
+ - webcolors=24.11.1=pyhd8ed1ab_0
+ - webencodings=0.5.1=pyhd8ed1ab_3
+ - websocket-client=1.8.0=pyhd8ed1ab_1
+ - whatthepatch=1.0.7=pyhd8ed1ab_1
+ - wheel=0.45.1=pyhd8ed1ab_1
+ - wrapt=1.17.2=py310ha75aee5_0
+ - wurlitzer=3.1.1=pyhd8ed1ab_1
+ - xarray=2025.3.1=pyhd8ed1ab_0
+ - xcb-util=0.4.1=hb711507_2
+ - xcb-util-cursor=0.1.5=hb9d3cd8_0
+ - xcb-util-image=0.4.0=hb711507_2
+ - xcb-util-keysyms=0.4.1=hb711507_0
+ - xcb-util-renderutil=0.3.10=hb711507_0
+ - xcb-util-wm=0.4.2=hb711507_0
+ - xkeyboard-config=2.43=hb9d3cd8_0
+ - xorg-libice=1.1.2=hb9d3cd8_0
+ - xorg-libsm=1.2.6=he73a12e_0
+ - xorg-libx11=1.8.12=h4f16b4b_0
+ - xorg-libxau=1.0.12=hb9d3cd8_0
+ - xorg-libxcomposite=0.4.6=hb9d3cd8_2
+ - xorg-libxcursor=1.2.3=hb9d3cd8_0
+ - xorg-libxdamage=1.1.6=hb9d3cd8_0
+ - xorg-libxdmcp=1.1.5=hb9d3cd8_0
+ - xorg-libxext=1.3.6=hb9d3cd8_0
+ - xorg-libxfixes=6.0.1=hb9d3cd8_0
+ - xorg-libxi=1.8.2=hb9d3cd8_0
+ - xorg-libxrandr=1.5.4=hb9d3cd8_0
+ - xorg-libxrender=0.9.12=hb9d3cd8_0
+ - xorg-libxtst=1.2.5=hb9d3cd8_3
+ - xorg-libxxf86vm=1.1.6=hb9d3cd8_0
+ - xyzservices=2025.1.0=pyhd8ed1ab_0
+ - yaml=0.2.5=h7f98852_2
+ - yapf=0.43.0=pyhd8ed1ab_1
+ - yarl=1.18.3=py310h89163eb_1
+ - zeromq=4.3.5=h3b0a872_7
+ - zfp=1.0.1=h5888daf_2
+ - zict=3.0.0=pyhd8ed1ab_1
+ - zipp=3.21.0=pyhd8ed1ab_1
+ - zlib=1.3.1=hb9d3cd8_2
+ - zlib-ng=2.2.4=h7955e40_0
+ - zope.interface=7.2=py310ha75aee5_0
+ - zstandard=0.23.0=py310ha75aee5_1
+ - zstd=1.5.7=hb8e6e7a_2
+ - pip:
+ - addict==2.4.0
+ - arrgh==1.0.0
+ - boto3==1.37.24
+ - botocore==1.37.24
+ - configargparse==1.7
+ - cuda-bindings==12.8.0
+ - cuda-python==12.8.0
+ - cudf-cu12==25.2.2
+ - cuml-cu12==25.2.1
+ - cupy-cuda12x==13.4.1
+ - cuvs-cu12==25.2.1
+ - dash==3.0.2
+ - dask==2024.12.1
+ - dask-cuda==25.2.0
+ - dask-cudf-cu12==25.2.2
+ - dask-expr==1.1.21
+ - distributed==2024.12.1
+ - distributed-ucxx-cu12==0.42.0
+ - docstring-parser==0.16
+ - einops==0.8.1
+ - fastrlock==0.8.3
+ - flask==3.0.3
+ - fsspec==2024.12.0
+ - ipywidgets==8.1.5
+ - jupyterlab-widgets==3.0.13
+ - libcudf-cu12==25.2.2
+ - libcuml-cu12==25.2.1
+ - libcuvs-cu12==25.2.1
+ - libigl==2.5.1
+ - libkvikio-cu12==25.2.1
+ - libraft-cu12==25.2.0
+ - libucx-cu12==1.18.0
+ - libucxx-cu12==0.42.0
+ - lightning==2.2.0
+ - lightning-utilities==0.14.2
+ - llvmlite==0.43.0
+ - loguru==0.7.3
+ - mesh2sdf==1.1.0
+ - numba==0.60.0
+ - numba-cuda==0.2.0
+ - numpy==2.0.2
+ - nvidia-cublas-cu12==12.4.2.65
+ - nvidia-cuda-cupti-cu12==12.4.99
+ - nvidia-cuda-nvrtc-cu12==12.4.99
+ - nvidia-cuda-runtime-cu12==12.4.99
+ - nvidia-cudnn-cu12==9.1.0.70
+ - nvidia-cufft-cu12==11.2.0.44
+ - nvidia-curand-cu12==10.3.5.119
+ - nvidia-cusolver-cu12==11.6.0.99
+ - nvidia-cusparse-cu12==12.3.0.142
+ - nvidia-ml-py==12.570.86
+ - nvidia-nccl-cu12==2.20.5
+ - nvidia-nvcomp-cu12==4.2.0.11
+ - nvidia-nvjitlink-cu12==12.4.99
+ - nvidia-nvtx-cu12==12.4.99
+ - nvtx==0.2.11
+ - open3d==0.19.0
+ - plyfile==1.1
+ - polyscope==2.4.0
+ - pooch==1.8.2
+ - potpourri3d==1.2.1
+ - pylibcudf-cu12==25.2.2
+ - pylibraft-cu12==25.2.0
+ - pymeshlab==2023.12.post3
+ - pynvjitlink-cu12==0.5.2
+ - pynvml==12.0.0
+ - pyquaternion==0.9.9
+ - pytorch-lightning==2.5.1
+ - pyvista==0.44.2
+ - raft-dask-cu12==25.2.0
+ - rapids-dask-dependency==25.2.0
+ - retrying==1.3.4
+ - rmm-cu12==25.2.0
+ - s3transfer==0.11.4
+ - scooby==0.10.0
+ - simple-parsing==0.1.7
+ - tetgen==0.6.5
+ - torch==2.4.0+cu124
+ - torch-scatter==2.1.2+pt24cu124
+ - torchaudio==2.4.0+cu124
+ - torchmetrics==1.7.0
+ - torchvision==0.19.0+cu124
+ - treelite==4.4.1
+ - trimesh==4.6.6
+ - triton==3.0.0
+ - ucx-py-cu12==0.42.0
+ - ucxx-cu12==0.42.0
+ - vtk==9.3.1
+ - werkzeug==3.0.6
+ - widgetsnbextension==4.0.13
+ - xgboost==3.0.0
+ - yacs==0.1.8
diff --git a/PartField/partfield/__pycache__/dataloader.cpython-310.pyc b/PartField/partfield/__pycache__/dataloader.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..a0b930e994798c163acdca1f8cb6fa16b90efab4
Binary files /dev/null and b/PartField/partfield/__pycache__/dataloader.cpython-310.pyc differ
diff --git a/PartField/partfield/__pycache__/model_trainer_pvcnn_only_demo.cpython-310.pyc b/PartField/partfield/__pycache__/model_trainer_pvcnn_only_demo.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..db7a74cd4e6b605004dea58eea4d3fc54dbf7c95
Binary files /dev/null and b/PartField/partfield/__pycache__/model_trainer_pvcnn_only_demo.cpython-310.pyc differ
diff --git a/PartField/partfield/__pycache__/utils.cpython-310.pyc b/PartField/partfield/__pycache__/utils.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..47b42cf17bc28405c1a5991e1022884ccdb99bfd
Binary files /dev/null and b/PartField/partfield/__pycache__/utils.cpython-310.pyc differ
diff --git a/PartField/partfield/config/__init__.py b/PartField/partfield/config/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..39582506b85759ab473acedb5d0f15b7d7f26594
--- /dev/null
+++ b/PartField/partfield/config/__init__.py
@@ -0,0 +1,26 @@
+import argparse
+import os.path as osp
+from datetime import datetime
+import pytz
+
+def default_argument_parser(add_help=True, default_config_file=""):
+ parser = argparse.ArgumentParser(add_help=add_help)
+ parser.add_argument("--config-file", '-c', default=default_config_file, metavar="FILE", help="path to config file")
+ parser.add_argument(
+ "--opts",
+ help="Modify config options using the command-line",
+ default=None,
+ nargs=argparse.REMAINDER,
+ )
+ return parser
+
+def setup(args, freeze=True):
+ from .defaults import _C as cfg
+ cfg = cfg.clone()
+ cfg.merge_from_file(args.config_file)
+ cfg.merge_from_list(args.opts)
+ dt = datetime.now(pytz.timezone('America/Los_Angeles')).strftime('%y%m%d-%H%M%S')
+ cfg.output_dir = osp.join(cfg.output_dir, cfg.name, dt)
+ if freeze:
+ cfg.freeze()
+ return cfg
\ No newline at end of file
diff --git a/PartField/partfield/config/__pycache__/__init__.cpython-310.pyc b/PartField/partfield/config/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..5296568b359080307e813e6e5e39e427886716c6
Binary files /dev/null and b/PartField/partfield/config/__pycache__/__init__.cpython-310.pyc differ
diff --git a/PartField/partfield/config/__pycache__/defaults.cpython-310.pyc b/PartField/partfield/config/__pycache__/defaults.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e1f36d317fc60ade8627ce994465025c840367af
Binary files /dev/null and b/PartField/partfield/config/__pycache__/defaults.cpython-310.pyc differ
diff --git a/PartField/partfield/config/defaults.py b/PartField/partfield/config/defaults.py
new file mode 100644
index 0000000000000000000000000000000000000000..14f82ed464b7a9afb6bda92bad66c59b1077289a
--- /dev/null
+++ b/PartField/partfield/config/defaults.py
@@ -0,0 +1,92 @@
+from yacs.config import CfgNode as CN
+
+_C = CN()
+_C.seed = 0
+_C.output_dir = "results"
+_C.result_name = "test_all"
+
+_C.triplet_sampling = "random"
+_C.load_original_mesh = False
+
+_C.num_pos = 64
+_C.num_neg_random = 256
+_C.num_neg_hard_pc = 128
+_C.num_neg_hard_emb = 128
+
+_C.vertex_feature = False # if true, sample feature on vertices; if false, sample feature on faces
+_C.n_point_per_face = 2000
+_C.n_sample_each = 10000
+_C.preprocess_mesh = False
+
+_C.regress_2d_feat = False
+
+_C.is_pc = False
+
+_C.cut_manifold = False
+_C.remesh_demo = False
+_C.correspondence_demo = False
+
+_C.save_every_epoch = 10
+_C.training_epochs = 30
+_C.continue_training = False
+
+_C.continue_ckpt = None
+_C.epoch_selected = "epoch=50.ckpt"
+
+_C.triplane_resolution = 128
+_C.triplane_channels_low = 128
+_C.triplane_channels_high = 512
+_C.lr = 1e-3
+_C.train = True
+_C.test = False
+
+_C.inference_save_pred_sdf_to_mesh=True
+_C.inference_save_feat_pca=True
+_C.name = "test"
+_C.test_subset = False
+_C.test_corres = False
+_C.test_partobjaversetiny = False
+
+_C.dataset = CN()
+_C.dataset.type = "Demo_Dataset"
+_C.dataset.data_path = "objaverse_data/"
+_C.dataset.train_num_workers = 64
+_C.dataset.val_num_workers = 32
+_C.dataset.train_batch_size = 2
+_C.dataset.val_batch_size = 2
+_C.dataset.all_files = [] # only used for correspondence demo
+
+_C.voxel2triplane = CN()
+_C.voxel2triplane.transformer_dim = 1024
+_C.voxel2triplane.transformer_layers = 6
+_C.voxel2triplane.transformer_heads = 8
+_C.voxel2triplane.triplane_low_res = 32
+_C.voxel2triplane.triplane_high_res = 256
+_C.voxel2triplane.triplane_dim = 64
+_C.voxel2triplane.normalize_vox_feat = False
+
+
+_C.loss = CN()
+_C.loss.triplet = 0.0
+_C.loss.sdf = 1.0
+_C.loss.feat = 10.0
+_C.loss.l1 = 0.0
+
+_C.use_pvcnn = False
+_C.use_pvcnnonly = True
+
+_C.pvcnn = CN()
+_C.pvcnn.point_encoder_type = 'pvcnn'
+_C.pvcnn.use_point_scatter = True
+_C.pvcnn.z_triplane_channels = 64
+_C.pvcnn.z_triplane_resolution = 256
+_C.pvcnn.unet_cfg = CN()
+_C.pvcnn.unet_cfg.depth = 3
+_C.pvcnn.unet_cfg.enabled = True
+_C.pvcnn.unet_cfg.rolled = True
+_C.pvcnn.unet_cfg.use_3d_aware = True
+_C.pvcnn.unet_cfg.start_hidden_channels = 32
+_C.pvcnn.unet_cfg.use_initial_conv = False
+
+_C.use_2d_feat = False
+_C.inference_metrics_only = False
diff --git a/PartField/partfield/dataloader.py b/PartField/partfield/dataloader.py
new file mode 100644
index 0000000000000000000000000000000000000000..c305c566618c47bc045a76d928e56f497c642812
--- /dev/null
+++ b/PartField/partfield/dataloader.py
@@ -0,0 +1,366 @@
+import torch
+import boto3
+import json
+from os import path as osp
+# from botocore.config import Config
+# from botocore.exceptions import ClientError
+import h5py
+import io
+import numpy as np
+import skimage
+import trimesh
+import os
+from scipy.spatial import KDTree
+import gc
+from plyfile import PlyData
+
+## For remeshing
+import mesh2sdf
+import tetgen
+import vtk
+import math
+import tempfile
+
+### For mesh processing
+import pymeshlab
+
+from partfield.utils import *
+
+#########################
+## To handle quad inputs
+#########################
+def quad_to_triangle_mesh(F):
+ """
+ Converts a quad-dominant mesh into a pure triangle mesh by splitting quads into two triangles.
+
+    Parameters:
+        F (array-like): Face index list containing quad and/or triangle faces.
+
+    Returns:
+        numpy.ndarray: Face array containing only triangle faces.
+ """
+ faces = F
+
+ ### If already a triangle mesh -- skip
+ if len(faces[0]) == 3:
+ return F
+
+ new_faces = []
+
+ for face in faces:
+ if len(face) == 4: # Quad face
+ # Split into two triangles
+ new_faces.append([face[0], face[1], face[2]]) # Triangle 1
+ new_faces.append([face[0], face[2], face[3]]) # Triangle 2
+        elif len(face) == 3:  # keep existing triangles in mixed quad/triangle meshes
+            new_faces.append(list(face))
+
+ new_faces = np.array(new_faces)
+
+ return new_faces
+#########################
+
+class Demo_Dataset(torch.utils.data.Dataset):
+ def __init__(self, cfg):
+ super().__init__()
+
+ self.data_path = cfg.dataset.data_path
+ self.is_pc = cfg.is_pc
+
+ all_files = os.listdir(self.data_path)
+
+ selected = []
+ for f in all_files:
+ if ".ply" in f and self.is_pc:
+ selected.append(f)
+ elif (".obj" in f or ".glb" in f or ".off" in f) and not self.is_pc:
+ selected.append(f)
+
+ self.data_list = selected
+ self.pc_num_pts = 100000
+
+ self.preprocess_mesh = cfg.preprocess_mesh
+ self.result_name = cfg.result_name
+
+ print("val dataset len:", len(self.data_list))
+
+
+ def __len__(self):
+ return len(self.data_list)
+
+ def load_ply_to_numpy(self, filename):
+ """
+ Load a PLY file and extract the point cloud as a (N, 3) NumPy array.
+
+ Parameters:
+ filename (str): Path to the PLY file.
+
+ Returns:
+ numpy.ndarray: Point cloud array of shape (N, 3).
+ """
+ ply_data = PlyData.read(filename)
+
+ # Extract vertex data
+ vertex_data = ply_data["vertex"]
+
+ # Convert to NumPy array (x, y, z)
+ points = np.vstack([vertex_data["x"], vertex_data["y"], vertex_data["z"]]).T
+
+ return points
+
+ def get_model(self, ply_file):
+
+ uid = ply_file.split(".")[-2].replace("/", "_")
+
+ ####
+ if self.is_pc:
+ ply_file_read = os.path.join(self.data_path, ply_file)
+ pc = self.load_ply_to_numpy(ply_file_read)
+
+ bbmin = pc.min(0)
+ bbmax = pc.max(0)
+ center = (bbmin + bbmax) * 0.5
+ scale = 2.0 * 0.9 / (bbmax - bbmin).max()
+ pc = (pc - center) * scale
+
+ else:
+ obj_path = os.path.join(self.data_path, ply_file)
+ mesh = load_mesh_util(obj_path)
+ vertices = mesh.vertices
+ faces = mesh.faces
+
+ bbmin = vertices.min(0)
+ bbmax = vertices.max(0)
+ center = (bbmin + bbmax) * 0.5
+ scale = 2.0 * 0.9 / (bbmax - bbmin).max()
+ vertices = (vertices - center) * scale
+ mesh.vertices = vertices
+
+ ### Make sure it is a triangle mesh -- just convert the quad
+ mesh.faces = quad_to_triangle_mesh(faces)
+
+ print("before preprocessing...")
+ print(mesh.vertices.shape)
+ print(mesh.faces.shape)
+ print()
+
+ ### Pre-process mesh
+ if self.preprocess_mesh:
+ # Create a PyMeshLab mesh directly from vertices and faces
+ ml_mesh = pymeshlab.Mesh(vertex_matrix=mesh.vertices, face_matrix=mesh.faces)
+
+ # Create a MeshSet and add your mesh
+ ms = pymeshlab.MeshSet()
+ ms.add_mesh(ml_mesh, "from_trimesh")
+
+ # Apply filters
+ ms.apply_filter('meshing_remove_duplicate_faces')
+ ms.apply_filter('meshing_remove_duplicate_vertices')
+ percentageMerge = pymeshlab.PercentageValue(0.5)
+ ms.apply_filter('meshing_merge_close_vertices', threshold=percentageMerge)
+ ms.apply_filter('meshing_remove_unreferenced_vertices')
+
+ # Save or extract mesh
+ processed = ms.current_mesh()
+ mesh.vertices = processed.vertex_matrix()
+ mesh.faces = processed.face_matrix()
+
+ print("after preprocessing...")
+ print(mesh.vertices.shape)
+ print(mesh.faces.shape)
+
+ ### Save input
+ save_dir = f"exp_results/{self.result_name}"
+ os.makedirs(save_dir, exist_ok=True)
+ view_id = 0
+ mesh.export(f'{save_dir}/input_{uid}_{view_id}.ply')
+
+
+ pc, _ = trimesh.sample.sample_surface(mesh, self.pc_num_pts)
+
+ result = {
+ 'uid': uid
+ }
+
+ result['pc'] = torch.tensor(pc, dtype=torch.float32)
+
+ if not self.is_pc:
+ result['vertices'] = mesh.vertices
+ result['faces'] = mesh.faces
+
+ return result
+
+ def __getitem__(self, index):
+
+ gc.collect()
+
+ return self.get_model(self.data_list[index])
+
+##############
+
+###############################
+class Demo_Remesh_Dataset(torch.utils.data.Dataset):
+ def __init__(self, cfg):
+ super().__init__()
+
+ self.data_path = cfg.dataset.data_path
+
+ all_files = os.listdir(self.data_path)
+
+ selected = []
+ for f in all_files:
+ if (".obj" in f or ".glb" in f):
+ selected.append(f)
+
+ self.data_list = selected
+ self.pc_num_pts = 100000
+
+ self.preprocess_mesh = cfg.preprocess_mesh
+ self.result_name = cfg.result_name
+
+ print("val dataset len:", len(self.data_list))
+
+
+ def __len__(self):
+ return len(self.data_list)
+
+
+ def get_model(self, ply_file):
+
+ uid = ply_file.split(".")[-2]
+
+ ####
+ obj_path = os.path.join(self.data_path, ply_file)
+ mesh = load_mesh_util(obj_path)
+ vertices = mesh.vertices
+ faces = mesh.faces
+
+ bbmin = vertices.min(0)
+ bbmax = vertices.max(0)
+ center = (bbmin + bbmax) * 0.5
+ scale = 2.0 * 0.9 / (bbmax - bbmin).max()
+ vertices = (vertices - center) * scale
+ mesh.vertices = vertices
+
+ ### Pre-process mesh
+ if self.preprocess_mesh:
+ # Create a PyMeshLab mesh directly from vertices and faces
+ ml_mesh = pymeshlab.Mesh(vertex_matrix=mesh.vertices, face_matrix=mesh.faces)
+
+ # Create a MeshSet and add your mesh
+ ms = pymeshlab.MeshSet()
+ ms.add_mesh(ml_mesh, "from_trimesh")
+
+ # Apply filters
+ ms.apply_filter('meshing_remove_duplicate_faces')
+ ms.apply_filter('meshing_remove_duplicate_vertices')
+ percentageMerge = pymeshlab.PercentageValue(0.5)
+ ms.apply_filter('meshing_merge_close_vertices', threshold=percentageMerge)
+ ms.apply_filter('meshing_remove_unreferenced_vertices')
+
+
+ # Save or extract mesh
+ processed = ms.current_mesh()
+ mesh.vertices = processed.vertex_matrix()
+ mesh.faces = processed.face_matrix()
+
+ print("after preprocessing...")
+ print(mesh.vertices.shape)
+ print(mesh.faces.shape)
+
+ ### Save input
+ save_dir = f"exp_results/{self.result_name}"
+ os.makedirs(save_dir, exist_ok=True)
+ view_id = 0
+ mesh.export(f'{save_dir}/input_{uid}_{view_id}.ply')
+
+ try:
+ ###### Remesh ######
+            size = 256
+ level = 2 / size
+
+ sdf = mesh2sdf.core.compute(mesh.vertices, mesh.faces, size)
+ # NOTE: the negative value is not reliable if the mesh is not watertight
+ udf = np.abs(sdf)
+ vertices, faces, _, _ = skimage.measure.marching_cubes(udf, level)
+
+ #### Only use SDF mesh ###
+ # new_mesh = trimesh.Trimesh(vertices, faces)
+ ##########################
+
+ #### Make tet #####
+ components = trimesh.Trimesh(vertices, faces).split(only_watertight=False)
+ new_mesh = [] #trimesh.Trimesh()
+ if len(components) > 100000:
+ raise NotImplementedError
+ for i, c in enumerate(components):
+ c.fix_normals()
+ new_mesh.append(c) #trimesh.util.concatenate(new_mesh, c)
+ new_mesh = trimesh.util.concatenate(new_mesh)
+
+ # generate tet mesh
+ tet = tetgen.TetGen(new_mesh.vertices, new_mesh.faces)
+ tet.tetrahedralize(plc=True, nobisect=1., quality=True, fixedvolume=True, maxvolume=math.sqrt(2) / 12 * (2 / size) ** 3)
+ tmp_vtk = tempfile.NamedTemporaryFile(suffix='.vtk', delete=True)
+ tet.grid.save(tmp_vtk.name)
+
+ # extract surface mesh from tet mesh
+ reader = vtk.vtkUnstructuredGridReader()
+ reader.SetFileName(tmp_vtk.name)
+ reader.Update()
+ surface_filter = vtk.vtkDataSetSurfaceFilter()
+ surface_filter.SetInputConnection(reader.GetOutputPort())
+ surface_filter.Update()
+ polydata = surface_filter.GetOutput()
+ writer = vtk.vtkOBJWriter()
+ tmp_obj = tempfile.NamedTemporaryFile(suffix='.obj', delete=True)
+ writer.SetFileName(tmp_obj.name)
+ writer.SetInputData(polydata)
+ writer.Update()
+ new_mesh = load_mesh_util(tmp_obj.name)
+ ##########################
+
+ new_mesh.vertices = new_mesh.vertices * (2.0 / size) - 1.0 # normalize it to [-1, 1]
+
+ mesh = new_mesh
+ ####################
+
+    except Exception:
+        # Remeshing failed (e.g. tetrahedralization error); fall back to the original mesh
+        print("Error in tet; falling back to input mesh.")
+
+ pc, _ = trimesh.sample.sample_surface(mesh, self.pc_num_pts)
+
+ result = {
+ 'uid': uid
+ }
+
+ result['pc'] = torch.tensor(pc, dtype=torch.float32)
+ result['vertices'] = mesh.vertices
+ result['faces'] = mesh.faces
+
+ return result
+
+ def __getitem__(self, index):
+
+ gc.collect()
+
+ return self.get_model(self.data_list[index])
+
+
+class Correspondence_Demo_Dataset(Demo_Dataset):
+ def __init__(self, cfg):
+ super().__init__(cfg)
+
+ self.data_path = cfg.dataset.data_path
+ self.is_pc = cfg.is_pc
+
+ self.data_list = cfg.dataset.all_files
+
+ self.pc_num_pts = 100000
+
+ self.preprocess_mesh = cfg.preprocess_mesh
+ self.result_name = cfg.result_name
+
+ print("val dataset len:", len(self.data_list))
+
\ No newline at end of file
diff --git a/PartField/partfield/model/PVCNN/__pycache__/conv_pointnet.cpython-310.pyc b/PartField/partfield/model/PVCNN/__pycache__/conv_pointnet.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c4f911d6454808f1f07087fa153774d6b7b055e2
Binary files /dev/null and b/PartField/partfield/model/PVCNN/__pycache__/conv_pointnet.cpython-310.pyc differ
diff --git a/PartField/partfield/model/PVCNN/__pycache__/dnnlib_util.cpython-310.pyc b/PartField/partfield/model/PVCNN/__pycache__/dnnlib_util.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..9ccab903a2dd05199bef0d4d8bf311dc54135238
Binary files /dev/null and b/PartField/partfield/model/PVCNN/__pycache__/dnnlib_util.cpython-310.pyc differ
diff --git a/PartField/partfield/model/PVCNN/__pycache__/encoder_pc.cpython-310.pyc b/PartField/partfield/model/PVCNN/__pycache__/encoder_pc.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..84848f5d541c7e5689de3e2bd05103b620c072d6
Binary files /dev/null and b/PartField/partfield/model/PVCNN/__pycache__/encoder_pc.cpython-310.pyc differ
diff --git a/PartField/partfield/model/PVCNN/__pycache__/pc_encoder.cpython-310.pyc b/PartField/partfield/model/PVCNN/__pycache__/pc_encoder.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..61351109ddfb30ee43de8435b6efb3d28709278c
Binary files /dev/null and b/PartField/partfield/model/PVCNN/__pycache__/pc_encoder.cpython-310.pyc differ
diff --git a/PartField/partfield/model/PVCNN/__pycache__/unet_3daware.cpython-310.pyc b/PartField/partfield/model/PVCNN/__pycache__/unet_3daware.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..f77fadc20b13eb440d48700dbd4b2fa6592ba830
Binary files /dev/null and b/PartField/partfield/model/PVCNN/__pycache__/unet_3daware.cpython-310.pyc differ
diff --git a/PartField/partfield/model/PVCNN/conv_pointnet.py b/PartField/partfield/model/PVCNN/conv_pointnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..8c5c806f1e725ed9a75aeb752f3e2ae4a5c606a1
--- /dev/null
+++ b/PartField/partfield/model/PVCNN/conv_pointnet.py
@@ -0,0 +1,251 @@
+"""
+Taken from gensdf
+https://github.com/princeton-computational-imaging/gensdf
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+# from dnnlib.util import printarr
+try:
+ from torch_scatter import scatter_mean, scatter_max
+except ImportError:
+ pass
+# from .unet import UNet
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+# Resnet Blocks
+class ResnetBlockFC(nn.Module):
+ ''' Fully connected ResNet Block class.
+ Args:
+ size_in (int): input dimension
+ size_out (int): output dimension
+ size_h (int): hidden dimension
+ '''
+
+ def __init__(self, size_in, size_out=None, size_h=None):
+ super().__init__()
+ # Attributes
+ if size_out is None:
+ size_out = size_in
+
+ if size_h is None:
+ size_h = min(size_in, size_out)
+
+ self.size_in = size_in
+ self.size_h = size_h
+ self.size_out = size_out
+ # Submodules
+ self.fc_0 = nn.Linear(size_in, size_h)
+ self.fc_1 = nn.Linear(size_h, size_out)
+ self.actvn = nn.ReLU()
+
+ if size_in == size_out:
+ self.shortcut = None
+ else:
+ self.shortcut = nn.Linear(size_in, size_out, bias=False)
+ # Initialization
+ nn.init.zeros_(self.fc_1.weight)
+
+ def forward(self, x):
+ net = self.fc_0(self.actvn(x))
+ dx = self.fc_1(self.actvn(net))
+
+ if self.shortcut is not None:
+ x_s = self.shortcut(x)
+ else:
+ x_s = x
+
+ return x_s + dx
+
+
+class ConvPointnet(nn.Module):
+ ''' PointNet-based encoder network with ResNet blocks for each point.
+ Number of input points are fixed.
+
+ Args:
+ c_dim (int): dimension of latent code c
+ dim (int): input points dimension
+ hidden_dim (int): hidden dimension of the network
+ scatter_type (str): feature aggregation when doing local pooling
+        unet (bool): whether to use U-Net
+ unet_kwargs (str): U-Net parameters
+ plane_resolution (int): defined resolution for plane feature
+ plane_type (str): feature type, 'xz' - 1-plane, ['xz', 'xy', 'yz'] - 3-plane, ['grid'] - 3D grid volume
+        padding (float): conventional padding parameter of ONet for unit cube, so [-0.5, 0.5] -> [-0.55, 0.55]
+        n_blocks (int): number of ResnetBlockFC layers
+ '''
+
+ def __init__(self, c_dim=128, dim=3, hidden_dim=128, scatter_type='max',
+ # unet=False, unet_kwargs=None,
+ plane_resolution=None, plane_type=['xz', 'xy', 'yz'], padding=0.1, n_blocks=5):
+ super().__init__()
+ self.c_dim = c_dim
+
+ self.fc_pos = nn.Linear(dim, 2*hidden_dim)
+ self.blocks = nn.ModuleList([
+ ResnetBlockFC(2*hidden_dim, hidden_dim) for i in range(n_blocks)
+ ])
+ self.fc_c = nn.Linear(hidden_dim, c_dim)
+
+ self.actvn = nn.ReLU()
+ self.hidden_dim = hidden_dim
+
+ # if unet:
+ # self.unet = UNet(c_dim, in_channels=c_dim, **unet_kwargs)
+ # else:
+ # self.unet = None
+
+ self.reso_plane = plane_resolution
+ self.plane_type = plane_type
+ self.padding = padding
+
+ if scatter_type == 'max':
+ self.scatter = scatter_max
+ elif scatter_type == 'mean':
+ self.scatter = scatter_mean
+
+
+ # takes in "p": point cloud and "query": sdf_xyz
+ # sample plane features for unlabeled_query as well
+ def forward(self, p):#, query2):
+ batch_size, T, D = p.size()
+
+ # acquire the index for each point
+ coord = {}
+ index = {}
+ if 'xz' in self.plane_type:
+ coord['xz'] = self.normalize_coordinate(p.clone(), plane='xz', padding=self.padding)
+ index['xz'] = self.coordinate2index(coord['xz'], self.reso_plane)
+ if 'xy' in self.plane_type:
+ coord['xy'] = self.normalize_coordinate(p.clone(), plane='xy', padding=self.padding)
+ index['xy'] = self.coordinate2index(coord['xy'], self.reso_plane)
+ if 'yz' in self.plane_type:
+ coord['yz'] = self.normalize_coordinate(p.clone(), plane='yz', padding=self.padding)
+ index['yz'] = self.coordinate2index(coord['yz'], self.reso_plane)
+
+
+ net = self.fc_pos(p)
+
+ net = self.blocks[0](net)
+ for block in self.blocks[1:]:
+ pooled = self.pool_local(coord, index, net)
+ net = torch.cat([net, pooled], dim=2)
+ net = block(net)
+
+ c = self.fc_c(net)
+
+ fea = {}
+ plane_feat_sum = 0
+ #second_sum = 0
+ if 'xz' in self.plane_type:
+ fea['xz'] = self.generate_plane_features(p, c, plane='xz') # shape: batch, latent size, resolution, resolution (e.g. 16, 256, 64, 64)
+ # plane_feat_sum += self.sample_plane_feature(query, fea['xz'], 'xz')
+ #second_sum += self.sample_plane_feature(query2, fea['xz'], 'xz')
+ if 'xy' in self.plane_type:
+ fea['xy'] = self.generate_plane_features(p, c, plane='xy')
+ # plane_feat_sum += self.sample_plane_feature(query, fea['xy'], 'xy')
+ #second_sum += self.sample_plane_feature(query2, fea['xy'], 'xy')
+ if 'yz' in self.plane_type:
+ fea['yz'] = self.generate_plane_features(p, c, plane='yz')
+ # plane_feat_sum += self.sample_plane_feature(query, fea['yz'], 'yz')
+ #second_sum += self.sample_plane_feature(query2, fea['yz'], 'yz')
+ return fea
+
+ # return plane_feat_sum.transpose(2,1)#, second_sum.transpose(2,1)
+
+
+ def normalize_coordinate(self, p, padding=0.1, plane='xz'):
+ ''' Normalize coordinate to [0, 1] for unit cube experiments
+
+ Args:
+ p (tensor): point
+            padding (float): conventional padding parameter of ONet for unit cube, so [-0.5, 0.5] -> [-0.55, 0.55]
+ plane (str): plane feature type, ['xz', 'xy', 'yz']
+ '''
+ if plane == 'xz':
+ xy = p[:, :, [0, 2]]
+ elif plane =='xy':
+ xy = p[:, :, [0, 1]]
+ else:
+ xy = p[:, :, [1, 2]]
+
+ xy_new = xy / (1 + padding + 10e-6) # (-0.5, 0.5)
+ xy_new = xy_new + 0.5 # range (0, 1)
+
+        # if there are outliers out of the range, clamp them back in
+ if xy_new.max() >= 1:
+ xy_new[xy_new >= 1] = 1 - 10e-6
+ if xy_new.min() < 0:
+ xy_new[xy_new < 0] = 0.0
+ return xy_new
+
+
+ def coordinate2index(self, x, reso):
+        ''' Convert normalized 2D plane coordinates into flattened
+            grid indices at the given resolution.
+
+ Args:
+ x (tensor): coordinate
+ reso (int): defined resolution
+ coord_type (str): coordinate type
+ '''
+ x = (x * reso).long()
+ index = x[:, :, 0] + reso * x[:, :, 1]
+ index = index[:, None, :]
+ return index
+
+
+ # xy is the normalized coordinates of the point cloud of each plane
+ # I'm pretty sure the keys of xy are the same as those of index, so xy isn't needed here as input
+ def pool_local(self, xy, index, c):
+ bs, fea_dim = c.size(0), c.size(2)
+ keys = xy.keys()
+
+ c_out = 0
+ for key in keys:
+ # scatter plane features from points
+ fea = self.scatter(c.permute(0, 2, 1), index[key], dim_size=self.reso_plane**2)
+ if self.scatter == scatter_max:
+ fea = fea[0]
+ # gather feature back to points
+ fea = fea.gather(dim=2, index=index[key].expand(-1, fea_dim, -1))
+ c_out += fea
+ return c_out.permute(0, 2, 1)
+
+
+ def generate_plane_features(self, p, c, plane='xz'):
+ # acquire indices of features in plane
+ xy = self.normalize_coordinate(p.clone(), plane=plane, padding=self.padding) # normalize to the range of (0, 1)
+ index = self.coordinate2index(xy, self.reso_plane)
+
+ # scatter plane features from points
+ fea_plane = c.new_zeros(p.size(0), self.c_dim, self.reso_plane**2)
+ c = c.permute(0, 2, 1) # B x 512 x T
+ fea_plane = scatter_mean(c, index, out=fea_plane) # B x 512 x reso^2
+        fea_plane = fea_plane.reshape(p.size(0), self.c_dim, self.reso_plane, self.reso_plane) # sparse matrix (B x 512 x reso x reso)
+
+ # printarr(fea_plane, c, p, xy, index)
+ # import pdb; pdb.set_trace()
+
+ # process the plane features with UNet
+ # if self.unet is not None:
+ # fea_plane = self.unet(fea_plane)
+
+ return fea_plane
+
+
+ # sample_plane_feature function copied from /src/conv_onet/models/decoder.py
+ # uses values from plane_feature and pixel locations from vgrid to interpolate feature
+ def sample_plane_feature(self, query, plane_feature, plane):
+ xy = self.normalize_coordinate(query.clone(), plane=plane, padding=self.padding)
+ xy = xy[:, :, None].float()
+ vgrid = 2.0 * xy - 1.0 # normalize to (-1, 1)
+ sampled_feat = F.grid_sample(plane_feature, vgrid, padding_mode='border', align_corners=True, mode='bilinear').squeeze(-1)
+ return sampled_feat
+
+
+
\ No newline at end of file
diff --git a/PartField/partfield/model/PVCNN/dnnlib_util.py b/PartField/partfield/model/PVCNN/dnnlib_util.py
new file mode 100644
index 0000000000000000000000000000000000000000..9514fe685275a66fc83bf78fb0cf3c94952678dd
--- /dev/null
+++ b/PartField/partfield/model/PVCNN/dnnlib_util.py
@@ -0,0 +1,1074 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# NVIDIA CORPORATION & AFFILIATES and its licensors retain all intellectual property
+# and proprietary rights in and to this software, related documentation
+# and any modifications thereto. Any use, reproduction, disclosure or
+# distribution of this software and related documentation without an express
+# license agreement from NVIDIA CORPORATION & AFFILIATES is strictly prohibited.
+
+"""Miscellaneous utility classes and functions."""
+from collections import namedtuple
+import time
+import ctypes
+import fnmatch
+import importlib
+import inspect
+import numpy as np
+import json
+import os
+import shutil
+import sys
+import types
+import io
+import pickle
+import re
+import requests  # used below by is_url() and open_url()
+import html
+import hashlib
+import glob
+import tempfile
+import urllib
+import urllib.request
+import uuid
+import boto3
+import threading
+from contextlib import ContextDecorator
+from contextlib import contextmanager, nullcontext
+
+from distutils.util import strtobool
+from typing import Any, List, Tuple, Union
+import importlib
+from loguru import logger
+# import wandb
+import torch
+import psutil
+import subprocess
+
+import random
+import string
+import pdb
+
+# Util classes
+# ------------------------------------------------------------------------------------------
+
+
+class EasyDict(dict):
+ """Convenience class that behaves like a dict but allows access with the attribute syntax."""
+
+ def __getattr__(self, name: str) -> Any:
+ try:
+ return self[name]
+ except KeyError:
+ raise AttributeError(name)
+
+ def __setattr__(self, name: str, value: Any) -> None:
+ self[name] = value
+
+ def __delattr__(self, name: str) -> None:
+ del self[name]
+
+
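`EasyDict` lets dictionary keys be read, written, and deleted with attribute syntax. A minimal usage sketch (the class is re-declared here so the snippet is self-contained):

```python
from typing import Any

class EasyDict(dict):
    """Dict subclass exposing keys as attributes (mirrors the class above)."""
    def __getattr__(self, name: str) -> Any:
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name: str, value: Any) -> None:
        self[name] = value

    def __delattr__(self, name: str) -> None:
        del self[name]

cfg = EasyDict(lr=1e-3)
cfg.epochs = 10               # attribute write is a key write
assert cfg["epochs"] == 10 and cfg.lr == 1e-3
del cfg.lr                    # attribute delete removes the key
assert "lr" not in cfg
```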
+class Logger(object):
+ """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file."""
+
+ def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True):
+ self.file = None
+
+ if file_name is not None:
+ self.file = open(file_name, file_mode)
+
+ self.should_flush = should_flush
+ self.stdout = sys.stdout
+ self.stderr = sys.stderr
+
+ sys.stdout = self
+ sys.stderr = self
+
+ def __enter__(self) -> "Logger":
+ return self
+
+ def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
+ self.close()
+
+ def write(self, text: Union[str, bytes]) -> None:
+ """Write text to stdout (and a file) and optionally flush."""
+ if isinstance(text, bytes):
+ text = text.decode()
+ if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash
+ return
+
+ if self.file is not None:
+ self.file.write(text)
+
+ self.stdout.write(text)
+
+ if self.should_flush:
+ self.flush()
+
+ def flush(self) -> None:
+ """Flush written text to both stdout and a file, if open."""
+ if self.file is not None:
+ self.file.flush()
+
+ self.stdout.flush()
+
+ def close(self) -> None:
+ """Flush, close possible files, and remove stdout/stderr mirroring."""
+ self.flush()
+
+ # if using multiple loggers, prevent closing in wrong order
+ if sys.stdout is self:
+ sys.stdout = self.stdout
+ if sys.stderr is self:
+ sys.stderr = self.stderr
+
+ if self.file is not None:
+ self.file.close()
+ self.file = None
+
+
+# Cache directories
+# ------------------------------------------------------------------------------------------
+
+_dnnlib_cache_dir = None
+
+
+def set_cache_dir(path: str) -> None:
+ global _dnnlib_cache_dir
+ _dnnlib_cache_dir = path
+
+
+def make_cache_dir_path(*paths: str) -> str:
+ if _dnnlib_cache_dir is not None:
+ return os.path.join(_dnnlib_cache_dir, *paths)
+ if 'DNNLIB_CACHE_DIR' in os.environ:
+ return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths)
+ if 'HOME' in os.environ:
+ return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths)
+ if 'USERPROFILE' in os.environ:
+ return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths)
+ return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths)
+
+
+# Small util functions
+# ------------------------------------------------------------------------------------------
+
+
+def format_time(seconds: Union[int, float]) -> str:
+ """Convert the seconds to human readable string with days, hours, minutes and seconds."""
+ s = int(np.rint(seconds))
+
+ if s < 60:
+ return "{0}s".format(s)
+ elif s < 60 * 60:
+ return "{0}m {1:02}s".format(s // 60, s % 60)
+ elif s < 24 * 60 * 60:
+ return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60)
+ else:
+ return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)
+
+
+def format_time_brief(seconds: Union[int, float]) -> str:
+ """Convert the seconds to human readable string with days, hours, minutes and seconds."""
+ s = int(np.rint(seconds))
+
+ if s < 60:
+ return "{0}s".format(s)
+ elif s < 60 * 60:
+ return "{0}m {1:02}s".format(s // 60, s % 60)
+ elif s < 24 * 60 * 60:
+ return "{0}h {1:02}m".format(s // (60 * 60), (s // 60) % 60)
+ else:
+ return "{0}d {1:02}h".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24)
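The two formatters above pick a unit granularity from fixed thresholds (minute, hour, day). A self-contained sketch of `format_time` with the same thresholds (`round` stands in for `np.rint` so the snippet needs no third-party imports):

```python
def format_time(seconds):
    """Mirror of the format_time helper above (round replaces np.rint)."""
    s = int(round(seconds))
    if s < 60:
        return "{0}s".format(s)
    elif s < 60 * 60:
        return "{0}m {1:02}s".format(s // 60, s % 60)
    elif s < 24 * 60 * 60:
        return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60)
    else:
        return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)

assert format_time(59) == "59s"
assert format_time(3661) == "1h 01m 01s"
assert format_time(90061) == "1d 01h 01m"
```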
+
+
+def ask_yes_no(question: str) -> bool:
+ """Ask the user the question until the user inputs a valid answer."""
+ while True:
+ try:
+ print("{0} [y/n]".format(question))
+ return strtobool(input().lower())
+ except ValueError:
+ pass
+
+
+def tuple_product(t: Tuple) -> Any:
+ """Calculate the product of the tuple elements."""
+ result = 1
+
+ for v in t:
+ result *= v
+
+ return result
+
+
+_str_to_ctype = {
+ "uint8": ctypes.c_ubyte,
+ "uint16": ctypes.c_uint16,
+ "uint32": ctypes.c_uint32,
+ "uint64": ctypes.c_uint64,
+ "int8": ctypes.c_byte,
+ "int16": ctypes.c_int16,
+ "int32": ctypes.c_int32,
+ "int64": ctypes.c_int64,
+ "float32": ctypes.c_float,
+ "float64": ctypes.c_double
+}
+
+
+def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]:
+ """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes."""
+ type_str = None
+
+ if isinstance(type_obj, str):
+ type_str = type_obj
+ elif hasattr(type_obj, "__name__"):
+ type_str = type_obj.__name__
+ elif hasattr(type_obj, "name"):
+ type_str = type_obj.name
+ else:
+ raise RuntimeError("Cannot infer type name from input")
+
+ assert type_str in _str_to_ctype.keys()
+
+ my_dtype = np.dtype(type_str)
+ my_ctype = _str_to_ctype[type_str]
+
+ assert my_dtype.itemsize == ctypes.sizeof(my_ctype)
+
+ return my_dtype, my_ctype
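The `itemsize == sizeof` assertion in `get_dtype_and_ctype` only holds because every entry of `_str_to_ctype` pairs a numpy dtype with a ctypes type of the same fixed width. A stdlib-only spot check of a few pairs (byte counts are the standard fixed widths):

```python
import ctypes

# Each ctypes type must occupy the same number of bytes as its numpy
# dtype counterpart (uint8 -> 1, int32 -> 4, float32 -> 4, float64 -> 8).
_expected_sizes = {
    "uint8": (ctypes.c_ubyte, 1),
    "int32": (ctypes.c_int32, 4),
    "float32": (ctypes.c_float, 4),
    "float64": (ctypes.c_double, 8),
}

for name, (ctype, nbytes) in _expected_sizes.items():
    assert ctypes.sizeof(ctype) == nbytes, name
```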
+
+
+def is_pickleable(obj: Any) -> bool:
+ try:
+ with io.BytesIO() as stream:
+ pickle.dump(obj, stream)
+ return True
+ except:
+ return False
+
+
+# Functionality to import modules/objects by name, and call functions by name
+# ------------------------------------------------------------------------------------------
+
+def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]:
+ """Searches for the underlying module behind the name to some python object.
+ Returns the module and the object name (original name with module part removed)."""
+
+ # allow convenience shorthands, substitute them by full names
+ obj_name = re.sub("^np.", "numpy.", obj_name)
+ obj_name = re.sub("^tf.", "tensorflow.", obj_name)
+
+ # list alternatives for (module_name, local_obj_name)
+ parts = obj_name.split(".")
+ name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)]
+
+ # try each alternative in turn
+ for module_name, local_obj_name in name_pairs:
+ try:
+ module = importlib.import_module(module_name) # may raise ImportError
+ get_obj_from_module(module, local_obj_name) # may raise AttributeError
+ return module, local_obj_name
+ except:
+ pass
+
+ # maybe some of the modules themselves contain errors?
+ for module_name, _local_obj_name in name_pairs:
+ try:
+ importlib.import_module(module_name) # may raise ImportError
+ except ImportError:
+ if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"):
+ raise
+
+ # maybe the requested attribute is missing?
+ for module_name, local_obj_name in name_pairs:
+ try:
+ module = importlib.import_module(module_name) # may raise ImportError
+ get_obj_from_module(module, local_obj_name) # may raise AttributeError
+ except ImportError:
+ pass
+
+ # we are out of luck, but we have no idea why
+ raise ImportError(obj_name)
+
+
+def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any:
+ """Traverses the object name and returns the last (rightmost) python object."""
+ if obj_name == '':
+ return module
+ obj = module
+ for part in obj_name.split("."):
+ obj = getattr(obj, part)
+ return obj
+
+
+def get_obj_by_name(name: str) -> Any:
+ """Finds the python object with the given name."""
+ module, obj_name = get_module_from_obj_name(name)
+ return get_obj_from_module(module, obj_name)
+
+
+def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any:
+ """Finds the python object with the given name and calls it as a function."""
+ assert func_name is not None
+ func_obj = get_obj_by_name(func_name)
+ assert callable(func_obj)
+ return func_obj(*args, **kwargs)
+
+
+def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any:
+ """Finds the python class with the given name and constructs it with the given arguments."""
+ return call_func_by_name(*args, func_name=class_name, **kwargs)
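The helpers above resolve dotted names to live Python objects. A simplified sketch (the name `get_obj_by_name_simple` is illustrative, not from the source; unlike the real `get_module_from_obj_name`, it only splits at the last dot instead of trying every module/attribute split):

```python
import importlib

def get_obj_by_name_simple(name: str):
    """Split at the last dot, import the module part, fetch the attribute."""
    module_name, obj_name = name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, obj_name)

sqrt = get_obj_by_name_simple("math.sqrt")
assert sqrt(9.0) == 3.0

# construct_class_by_name is the same idea applied to a class name:
OD = get_obj_by_name_simple("collections.OrderedDict")
d = OD(a=1)
assert list(d) == ["a"]
```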
+
+
+def get_module_dir_by_obj_name(obj_name: str) -> str:
+ """Get the directory path of the module containing the given object name."""
+ module, _ = get_module_from_obj_name(obj_name)
+ return os.path.dirname(inspect.getfile(module))
+
+
+def is_top_level_function(obj: Any) -> bool:
+ """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'."""
+ return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__
+
+
+def get_top_level_function_name(obj: Any) -> str:
+ """Return the fully-qualified name of a top-level function."""
+ assert is_top_level_function(obj)
+ module = obj.__module__
+ if module == '__main__':
+ module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0]
+ return module + "." + obj.__name__
+
+
+# File system helpers
+# ------------------------------------------------------------------------------------------
+
+def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]:
+ """List all files recursively in a given directory while ignoring given file and directory names.
+ Returns list of tuples containing both absolute and relative paths."""
+ assert os.path.isdir(dir_path)
+ base_name = os.path.basename(os.path.normpath(dir_path))
+
+ if ignores is None:
+ ignores = []
+
+ result = []
+
+ for root, dirs, files in os.walk(dir_path, topdown=True):
+ for ignore_ in ignores:
+ dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)]
+
+ # dirs need to be edited in-place
+ for d in dirs_to_remove:
+ dirs.remove(d)
+
+ files = [f for f in files if not fnmatch.fnmatch(f, ignore_)]
+
+ absolute_paths = [os.path.join(root, f) for f in files]
+ relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths]
+
+ if add_base_to_relative:
+ relative_paths = [os.path.join(base_name, p) for p in relative_paths]
+
+ assert len(absolute_paths) == len(relative_paths)
+ result += zip(absolute_paths, relative_paths)
+
+ return result
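The ignore handling above applies `fnmatch`-style patterns to individual file and directory names, not to full paths. A small sketch of that filtering step in isolation:

```python
import fnmatch

# Names (not paths) are matched against each ignore pattern.
files = ["model.py", "model.pyc", "__pycache__"]
ignores = ["*.pyc", "__pycache__"]
kept = [f for f in files
        if not any(fnmatch.fnmatch(f, pat) for pat in ignores)]
assert kept == ["model.py"]
```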
+
+
+def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None:
+ """Takes in a list of tuples of (src, dst) paths and copies files.
+ Will create all necessary directories."""
+ for file in files:
+ target_dir_name = os.path.dirname(file[1])
+
+ # will create all intermediate-level directories
+ if not os.path.exists(target_dir_name):
+ os.makedirs(target_dir_name)
+
+ shutil.copyfile(file[0], file[1])
+
+
+# URL helpers
+# ------------------------------------------------------------------------------------------
+
+def is_url(obj: Any, allow_file_urls: bool = False) -> bool:
+ """Determine whether the given object is a valid URL string."""
+ if not isinstance(obj, str) or not "://" in obj:
+ return False
+ if allow_file_urls and obj.startswith('file://'):
+ return True
+ try:
+ res = requests.compat.urlparse(obj)
+ if not res.scheme or not res.netloc or not "." in res.netloc:
+ return False
+ res = requests.compat.urlparse(requests.compat.urljoin(obj, "/"))
+ if not res.scheme or not res.netloc or not "." in res.netloc:
+ return False
+ except:
+ return False
+ return True
+
+
+def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any:
+ """Download the given URL and return a binary-mode file object to access the data."""
+ assert num_attempts >= 1
+ assert not (return_filename and (not cache))
+
+ # Doesn't look like an URL scheme so interpret it as a local filename.
+ if not re.match('^[a-z]+://', url):
+ return url if return_filename else open(url, "rb")
+
+ # Handle file URLs. This code handles unusual file:// patterns that
+ # arise on Windows:
+ #
+ # file:///c:/foo.txt
+ #
+ # which would translate to a local '/c:/foo.txt' filename that's
+ # invalid. Drop the forward slash for such pathnames.
+ #
+ # If you touch this code path, you should test it on both Linux and
+ # Windows.
+ #
+    # Some internet resources suggest using urllib.request.url2pathname(),
+    # but that converts forward slashes to backslashes and this causes
+ # its own set of problems.
+ if url.startswith('file://'):
+ filename = urllib.parse.urlparse(url).path
+ if re.match(r'^/[a-zA-Z]:', filename):
+ filename = filename[1:]
+ return filename if return_filename else open(filename, "rb")
+
+ assert is_url(url)
+
+ # Lookup from cache.
+ if cache_dir is None:
+ cache_dir = make_cache_dir_path('downloads')
+
+ url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()
+ if cache:
+ cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*"))
+ if len(cache_files) == 1:
+ filename = cache_files[0]
+ return filename if return_filename else open(filename, "rb")
+
+ # Download.
+ url_name = None
+ url_data = None
+ with requests.Session() as session:
+ if verbose:
+ print("Downloading %s ..." % url, end="", flush=True)
+ for attempts_left in reversed(range(num_attempts)):
+ try:
+ with session.get(url) as res:
+ res.raise_for_status()
+ if len(res.content) == 0:
+ raise IOError("No data received")
+
+ if len(res.content) < 8192:
+ content_str = res.content.decode("utf-8")
+ if "download_warning" in res.headers.get("Set-Cookie", ""):
+ links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link]
+ if len(links) == 1:
+ url = requests.compat.urljoin(url, links[0])
+ raise IOError("Google Drive virus checker nag")
+ if "Google Drive - Quota exceeded" in content_str:
+ raise IOError("Google Drive download quota exceeded -- please try again later")
+
+ match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", ""))
+ url_name = match[1] if match else url
+ url_data = res.content
+ if verbose:
+ print(" done")
+ break
+ except KeyboardInterrupt:
+ raise
+ except:
+ if not attempts_left:
+ if verbose:
+ print(" failed")
+ raise
+ if verbose:
+ print(".", end="", flush=True)
+
+ # Save to cache.
+ if cache:
+ safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name)
+ cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name)
+ temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name)
+ os.makedirs(cache_dir, exist_ok=True)
+ with open(temp_file, "wb") as f:
+ f.write(url_data)
+ os.replace(temp_file, cache_file) # atomic
+ if return_filename:
+ return cache_file
+
+ # Return data as file object.
+ assert not return_filename
+ return io.BytesIO(url_data)
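`open_url` caches downloads under a name built from the md5 of the URL plus a sanitized file name. A sketch of that naming scheme (the helper name `cache_file_name` is illustrative, not from the source):

```python
import hashlib
import re

def cache_file_name(url: str, url_name: str) -> str:
    """md5 of the URL, underscore, then the file name with unsafe
    characters replaced -- mirrors the cache naming in open_url above."""
    url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()
    safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name)
    return url_md5 + "_" + safe_name

name = cache_file_name("https://example.com/ckpt?id=1", "model v2.ckpt")
assert name.endswith("_model_v2.ckpt")
assert len(name.split("_")[0]) == 32  # hex md5 digest
```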
+
+# ------------------------------------------------------------------------------------------
+# util function modified from https://github.com/nv-tlabs/LION/blob/0467d2199076e95a7e88bafd99dcd7d48a04b4a7/utils/model_helper.py
+def import_class(model_str):
+ from torch_utils.dist_utils import is_rank0
+ if is_rank0():
+ logger.info('import: {}', model_str)
+ p, m = model_str.rsplit('.', 1)
+ mod = importlib.import_module(p)
+ Model = getattr(mod, m)
+ return Model
+
+class ScopedTorchProfiler(ContextDecorator):
+ """
+ Marks ranges for both nvtx profiling (with nsys) and torch autograd profiler
+ """
+ __global_counts = {}
+ enabled=False
+
+ def __init__(self, unique_name: str):
+ """
+ Names must be unique!
+ """
+ ScopedTorchProfiler.__global_counts[unique_name] = 0
+ self._name = unique_name
+ self._autograd_scope = torch.profiler.record_function(unique_name)
+
+ def __enter__(self):
+ if ScopedTorchProfiler.enabled:
+ torch.cuda.nvtx.range_push(self._name)
+ self._autograd_scope.__enter__()
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ self._autograd_scope.__exit__(exc_type, exc_value, traceback)
+ if ScopedTorchProfiler.enabled:
+ torch.cuda.nvtx.range_pop()
+
+class TimingsMonitor():
+ CUDATimer = namedtuple('CUDATimer', ['start', 'end'])
+ def __init__(self, device, enabled=True, timing_names:List[str]=[], cuda_timing_names:List[str]=[]):
+ """
+ Usage:
+ tmonitor = TimingsMonitor(device)
+ for i in range(n_iter):
+ # Record arbitrary scopes
+ with tmonitor.timing_scope('regular_scope_name'):
+ ...
+ with tmonitor.cuda_timing_scope('nested_scope_name'):
+ ...
+ with tmonitor.cuda_timing_scope('cuda_scope_name'):
+ ...
+ tmonitor.record_timing('duration_name', end_time - start_time)
+
+ # Gather timings
+ tmonitor.record_all_cuda_timings()
+ tmonitor.update_all_averages()
+ averages = tmonitor.get_average_timings()
+ all_timings = tmonitor.get_timings()
+
+ Two types of timers, standard report timing and cuda timings.
+ Cuda timing supports scoped context manager cuda_event_scope.
+ Args:
+ device: device to time on (needed for cuda timers)
+ # enabled: HACK to only report timings from rank 0, set enabled=(global_rank==0)
+ timing_names: timings to report optional (will auto add new names)
+ cuda_timing_names: cuda periods to time optional (will auto add new names)
+ """
+ self.enabled=enabled
+ self.device = device
+
+ # Normal timing
+ # self.all_timings_dict = {k:None for k in timing_names + cuda_timing_names}
+ self.all_timings_dict = {}
+ self.avg_meter_dict = {}
+
+ # Cuda event timers to measure time spent on pushing data to gpu and on training step
+ self.cuda_event_timers = {}
+
+ for k in timing_names:
+ self.add_new_timing(k)
+
+ for k in cuda_timing_names:
+ self.add_new_cuda_timing(k)
+
+ # Running averages
+ # self.avg_meter_dict = {k:AverageMeter() for k in self.all_timings_dict}
+
+ def add_new_timing(self, name):
+ self.avg_meter_dict[name] = AverageMeter()
+ self.all_timings_dict[name] = None
+
+ def add_new_cuda_timing(self, name):
+ start_event = torch.cuda.Event(enable_timing=True)
+ end_event = torch.cuda.Event(enable_timing=True)
+ self.cuda_event_timers[name] = self.CUDATimer(start=start_event, end=end_event)
+ self.add_new_timing(name)
+
+ def clear_timings(self):
+ self.all_timings_dict = {k:None for k in self.all_timings_dict}
+
+ def get_timings(self):
+ return self.all_timings_dict
+
+ def get_average_timings(self):
+ return {k:v.avg for k,v in self.avg_meter_dict.items()}
+
+ def update_all_averages(self):
+ """
+ Once per iter, when timings have been finished recording, one should
+ call update_average_iter to keep running average of timings.
+ """
+ for k,v in self.all_timings_dict.items():
+ if v is None:
+ print("none_timing", k)
+ continue
+ self.avg_meter_dict[k].update(v)
+
+ def record_timing(self, name, value):
+ if name not in self.all_timings_dict: self.add_new_timing(name)
+ # assert name in self.all_timings_dict
+ self.all_timings_dict[name] = value
+
+ def _record_cuda_event_start(self, name):
+ if name in self.cuda_event_timers:
+ self.cuda_event_timers[name].start.record(
+ torch.cuda.current_stream(self.device))
+
+ def _record_cuda_event_end(self, name):
+ if name in self.cuda_event_timers:
+ self.cuda_event_timers[name].end.record(
+ torch.cuda.current_stream(self.device))
+
+ @contextmanager
+ def cuda_timing_scope(self, name, profile=True):
+ if name not in self.all_timings_dict: self.add_new_cuda_timing(name)
+ with ScopedTorchProfiler(name) if profile else nullcontext():
+ self._record_cuda_event_start(name)
+ try:
+ yield
+ finally:
+ self._record_cuda_event_end(name)
+
+ @contextmanager
+ def timing_scope(self, name, profile=True):
+ if name not in self.all_timings_dict: self.add_new_timing(name)
+ with ScopedTorchProfiler(name) if profile else nullcontext():
+ start_time = time.time()
+ try:
+ yield
+ finally:
+ self.record_timing(name, time.time()-start_time)
+
+ def record_all_cuda_timings(self):
+ """ After all the cuda events call this to synchronize and record down the cuda timings. """
+ for k, events in self.cuda_event_timers.items():
+ with torch.no_grad():
+ events.end.synchronize()
+ # Convert to seconds
+ time_elapsed = events.start.elapsed_time(events.end)/1000.
+ self.all_timings_dict[k] = time_elapsed
+
+def init_s3(config_file):
+ config = json.load(open(config_file, 'r'))
+ s3_client = boto3.client("s3", **config)
+ return s3_client
+
+def download_from_s3(file_path, target_path, cfg):
+ tic = time.time()
+    s3_client = init_s3(cfg.checkpoint.write_s3_config)  # verify the s3 client can be initialized
+ bucket_name = file_path.split('/')[2]
+ file_key = file_path.split(bucket_name+'/')[-1]
+ print(bucket_name, file_key)
+ s3_client.download_file(bucket_name, file_key, target_path)
+    logger.info('finished download from s3://%s/%s to %s in %.1f sec' % (
+        bucket_name, file_key, target_path, time.time() - tic))
+
+def upload_to_s3(buffer, bucket_name, key, config_dict):
+ logger.info(f'start upload_to_s3! bucket_name={bucket_name}, key={key}')
+ tic = time.time()
+ s3 = boto3.client('s3', **config_dict)
+ s3.put_object(Bucket=bucket_name, Key=key, Body=buffer.getvalue())
+ logger.info(f'finish upload_to_s3! s3://{bucket_name}/{key} %.1f sec'%(time.time() - tic))
+
+def write_ckpt_to_s3(cfg, all_model_dict, ckpt_name):
+ buffer = io.BytesIO()
+ tic = time.time()
+ torch.save(all_model_dict, buffer) # take ~0.25 sec
+ # logger.info('write ckpt to buffer: %.2f sec'%(time.time() - tic))
+ group, name = cfg.outdir.rstrip("/").split("/")[-2:]
+ key = f"checkpoints/{group}/{name}/ckpt/{ckpt_name}"
+ bucket_name = cfg.checkpoint.write_s3_bucket
+
+    s3_client = init_s3(cfg.checkpoint.write_s3_config)  # used only to verify the s3 client can be initialized
+
+ config_dict = json.load(open(cfg.checkpoint.write_s3_config, 'r'))
+ upload_thread = threading.Thread(target=upload_to_s3, args=(buffer, bucket_name, key, config_dict))
+ upload_thread.start()
+ path = f"s3://{bucket_name}/{key}"
+ return path
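`write_ckpt_to_s3` derives the object key from the last two components of `cfg.outdir`. A sketch of that key layout in isolation (the helper name `ckpt_s3_key` is illustrative, not from the source):

```python
def ckpt_s3_key(outdir: str, ckpt_name: str) -> str:
    """The last two path components of cfg.outdir become the
    group/name prefix, mirroring write_ckpt_to_s3 above."""
    group, name = outdir.rstrip("/").split("/")[-2:]
    return f"checkpoints/{group}/{name}/ckpt/{ckpt_name}"

key = ckpt_s3_key("exp/group1/run2/", "snapshot_iter1000.pt")
assert key == "checkpoints/group1/run2/ckpt/snapshot_iter1000.pt"
```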
+
+def upload_file_to_s3(cfg, file_path, key_name=None):
+    # file_path is the local file path, can be a yaml file
+    # this function is used to upload the checkpoint only
+ tic = time.time()
+ group, name = cfg.outdir.rstrip("/").split("/")[-2:]
+ if key_name is None:
+ key = os.path.basename(file_path)
+ key = f"checkpoints/{group}/{name}/{key}"
+ bucket_name = cfg.checkpoint.write_s3_bucket
+ s3_client = init_s3(cfg.checkpoint.write_s3_config)
+ # Upload the file
+ with open(file_path, 'rb') as f:
+ s3_client.upload_fileobj(f, bucket_name, key)
+ full_s3_path = f"s3://{bucket_name}/{key}"
+ logger.info(f'upload_to_s3: {file_path} {full_s3_path} | use time: {time.time()-tic}')
+
+ return full_s3_path
+
+
+def load_from_s3(file_path, cfg, load_fn):
+ """
+ ckpt_path example:
+ s3://xzeng/checkpoints/2023_0413/vae_kl_5e-1/ckpt/snapshot_epo000163_iter164000.pt
+ """
+    s3_client = init_s3(cfg.checkpoint.write_s3_config)  # verify the s3 client can be initialized
+ bucket_name = file_path.split("s3://")[-1].split('/')[0]
+ key = file_path.split(f'{bucket_name}/')[-1]
+ # logger.info(f"-> try to load s3://{bucket_name}/{key} ")
+ tic = time.time()
+    for attempt in range(10):
+        try:
+            # Download the state dict from S3 into memory (as a binary stream)
+            with io.BytesIO() as buffer:
+                s3_client.download_fileobj(bucket_name, key, buffer)
+                buffer.seek(0)
+
+                # Load the state dict into a PyTorch model
+                # out = torch.load(buffer, map_location=torch.device("cpu"))
+                out = load_fn(buffer)
+            break
+        except Exception:
+            logger.info(f"failed to load s3://{bucket_name}/{key} attempt: {attempt}")
+    else:
+        raise IOError(f"could not load s3://{bucket_name}/{key} after 10 attempts")
+    from torch_utils.dist_utils import is_rank0
+    if is_rank0():
+        logger.info(f'loaded {file_path} | use time: {time.time()-tic:.1f} sec')
+    return out
+
+def load_torch_dict_from_s3(ckpt_path, cfg):
+ """
+ ckpt_path example:
+ s3://xzeng/checkpoints/2023_0413/vae_kl_5e-1/ckpt/snapshot_epo000163_iter164000.pt
+ """
+    s3_client = init_s3(cfg.checkpoint.write_s3_config)  # verify the s3 client can be initialized
+ bucket_name = ckpt_path.split("s3://")[-1].split('/')[0]
+ key = ckpt_path.split(f'{bucket_name}/')[-1]
+    for attempt in range(10):
+        try:
+            # Download the state dict from S3 into memory (as a binary stream)
+            with io.BytesIO() as buffer:
+                s3_client.download_fileobj(bucket_name, key, buffer)
+                buffer.seek(0)
+
+                # Load the state dict into a PyTorch model
+                out = torch.load(buffer, map_location=torch.device("cpu"))
+            break
+        except Exception:
+            logger.info(f"failed to load s3://{bucket_name}/{key} attempt: {attempt}")
+    else:
+        raise IOError(f"could not load s3://{bucket_name}/{key} after 10 attempts")
+    return out
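Both S3 loaders above parse the bucket and key out of an `s3://` path with string splits. A sketch of that parsing (the helper name `split_s3_path` is illustrative, not from the source):

```python
def split_s3_path(path: str):
    """Bucket/key parsing as done in load_from_s3 and
    load_torch_dict_from_s3 above."""
    bucket_name = path.split("s3://")[-1].split("/")[0]
    key = path.split(f"{bucket_name}/")[-1]
    return bucket_name, key

bucket, key = split_s3_path("s3://xzeng/checkpoints/2023_0413/ckpt/snap.pt")
assert bucket == "xzeng"
assert key == "checkpoints/2023_0413/ckpt/snap.pt"
```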
+
+def count_parameters_in_M(model):
+    return sum(np.prod(v.size()) for name, v in model.named_parameters() if "auxiliary" not in name) / 1e6
+
+def printarr(*arrs, float_width=6, **kwargs):
+ """
+ Print a pretty table giving name, shape, dtype, type, and content information for input tensors or scalars.
+
+ Call like: printarr(my_arr, some_other_arr, maybe_a_scalar). Accepts a variable number of arguments.
+
+ Inputs can be:
+ - Numpy tensor arrays
+ - Pytorch tensor arrays
+ - Jax tensor arrays
+ - Python ints / floats
+ - None
+
+ It may also work with other array-like types, but they have not been tested.
+
+ Use the `float_width` option specify the precision to which floating point types are printed.
+
+ Author: Nicholas Sharp (nmwsharp.com)
+ Canonical source: https://gist.github.com/nmwsharp/54d04af87872a4988809f128e1a1d233
+ License: This snippet may be used under an MIT license, and it is also released into the public domain.
+ Please retain this docstring as a reference.
+ """
+
+ frame = inspect.currentframe().f_back
+ default_name = "[temporary]"
+
+ ## helpers to gather data about each array
+ def name_from_outer_scope(a):
+ if a is None:
+ return '[None]'
+ name = default_name
+ for k, v in frame.f_locals.items():
+ if v is a:
+ name = k
+ break
+ return name
+
+ def type_strip(type_str):
+        return type_str.lstrip('