ActivePerception
1 Installation
1.1 Requirements
- Operating System: Linux only. We tested this project on Ubuntu 22.04.
- CUDA Toolkit Version: CUDA 11.8 or higher.
- Python Version: Python 3.9 or higher.
- PyTorch Version: PyTorch 2.0.0+cu118.
1.2 Install ActivePerception Environment
1.2.1 Install Python and requirements.txt
Clone this repo with
git clone https://github.com/Jiyao06/ActivePerception.git
Go to repo root directory
cd ActivePerception
Create and activate a conda environment
conda create --name nbv python=3.9
conda activate nbv
Install the base dependencies with requirements.txt
pip install -r requirements.txt
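Once the requirements are installed, you can optionally verify that PyTorch matches the versions in 1.1 and sees your GPU. A minimal check, assuming torch is pulled in by requirements.txt:

import torch

print(torch.__version__)          # expect something like 2.0.0+cu118
print(torch.version.cuda)         # expect 11.8 or higher
print(torch.cuda.is_available())  # should print True on a working setup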
If you encounter the following error while running the project:
ImportError: cannot import name 'NDArray' from 'numpy.typing'
please upgrade numpy to version 1.23.5 with pip install numpy==1.23.5
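After pinning numpy, you can confirm the import succeeds:

from numpy.typing import NDArray  # fails on numpy versions that predate NDArray
import numpy as np
print(np.__version__)  # expect 1.23.5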
1.2.2 Install pointnet++ for ActivePerception
Go to the bundled pointnet2 directory
cd modules/module_lib/pointnet2_utils/pointnet2
Install pointnet2
pip install -e .
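You can check that the editable install is visible to Python. The module names below are guesses based on the directory layout; consult the package's setup.py if neither is found:

import importlib.util

for name in ("pointnet2", "pointnet2_utils"):
    print(name, "found:", importlib.util.find_spec(name) is not None)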
1.3 Install GSNet Environment for Grasping Evaluation (Optional)
1.3.1 Install MinkowskiEngine (Linux Only)
Install the system dependencies of MinkowskiEngine
sudo apt-get update
sudo apt-get install -y git ninja-build cmake build-essential libopenblas-dev xterm xauth openssh-server tmux wget mate-desktop-environment-core
Clone the MinkowskiEngine git repo
cd <any_path>
git clone --recursive "https://github.com/NVIDIA/MinkowskiEngine"
Install MinkowskiEngine
cd MinkowskiEngine
python setup.py install --force_cuda --blas=openblas
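A quick smoke test for the CUDA build; MinkowskiEngine exposes its version at import time:

import torch
import MinkowskiEngine as ME

print(ME.__version__)             # MinkowskiEngine version string
print(torch.cuda.is_available())  # the --force_cuda build needs a visible GPU at runtime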
1.3.2 Install pointnet++ for GSNet
Go to the GSNet root directory
cd <your_ActivePerception_repo_root>/baselines/grasping/GSNet
Go to the pointnet2 directory and install it
cd pointnet2
python setup.py install
1.3.3 Install KNN operator
cd ../knn
python setup.py install
1.3.4 Install GraspnetAPI for Evaluation
cd ../graspnetAPI
pip install .
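A minimal smoke test for the graspnetAPI install; GraspGroup is part of the package's public API:

from graspnetAPI import GraspGroup

gg = GraspGroup()  # an empty grasp group
print(len(gg))     # expect 0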
1.4 Install FoundationPose Environment for Object Pose Evaluation (Optional)
...
2 Training
2.1 Prepare Datasets
Please download the dataset from the links below:
or directly download a pre-organized dataset to skip "2.2 Organize Dataset Structure"
2.2 Organize Dataset Structure
Please organize the dataset into the following structure:
$ROOT_OF_DATA
- objects
  - obj_name_0
  - obj_name_1
  - ...
- train
  - scene_name_0
  - scene_name_1
  - ...
- test
  - scene_name_0
  - scene_name_1
  - ...
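A small sketch to sanity-check the layout above; the root path is a placeholder you should replace with your own $ROOT_OF_DATA:

from pathlib import Path

root = Path("/path/to/ROOT_OF_DATA")  # placeholder: your $ROOT_OF_DATA
for split in ("objects", "train", "test"):
    sub = root / split
    count = sum(1 for _ in sub.iterdir()) if sub.is_dir() else 0
    print(f"{split}: {count} entries")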
2.3 Training for Grasping
2.3.1 Prepare Pretrained GSNet weights
Please download the pretrained GSNet weights from: pretrained_gsnet_weights
2.3.2 Preprocess View Score
...
2.3.3 Training Configuration
Open the training config file at ActivePerception/configs/server_train_config.yaml.
To run the training task, you need to customize at least the following configuration entries (a sanity-check sketch follows the list):
- experiment name at experiment -> name
- grasp pretrained model path at experiment -> grasp_model_path
- dataset root path at datasets -> general -> data_dir
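As referenced above, here is a sketch that loads the config and prints the three entries. The key layout simply mirrors the paths listed above, and PyYAML is assumed to be available:

import yaml

with open("configs/server_train_config.yaml") as f:  # path relative to the repo root
    cfg = yaml.safe_load(f)

print(cfg["experiment"]["name"])
print(cfg["experiment"]["grasp_model_path"])
print(cfg["datasets"]["general"]["data_dir"])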
2.3.4 Run View Generator Web Server
After each training epoch, the model's predictions are evaluated under newly generated views, which requires a running view generator. Start it with the following command and keep it running during training.
Otherwise, comment out all instances of the grasp_improvement method for the test set in server_train_config.yaml under settings -> test -> dataset_list -> eval_list; in that case, the grasp score improvement will not be reported during training.
python runners/view_generator.py
2.3.5 Start Training
Run the following command to start the training task.
python runners/trainer.py
3 Evaluation
...
4 Data and Results Visualization
The Vue.js framework is required to ...
5 Custom Sim Data Generation
...