Commit 71d4aa7c authored by Nada Beili

source code

parent 1a3a0bcb
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.idea/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# custom
*.pkl
*.log
outputs/
.log/
.tb_log/
data_log/
datasets/
#*zip
[submodule "third_party/zed-ros-wrapper"]
path = third_party/zed-ros-wrapper
url = https://github.com/stereolabs/zed-ros-wrapper.git
[submodule "third_party/kinova-ros"]
path = third_party/kinova-ros
url = git@github.com:ckorbach/kinova-ros.git
# next_best_view_rl_benchmark
This project is an extension of the work presented in this [repository](https://gitlab.uni-koblenz.de/ckorbach/next_best_view_rl). It aims to find sequences of next-best-views for occluded and self-occluded objects using the reinforcement learning algorithms PPO, SAC, TD3, and A2C. We trained each algorithm separately on three different datasets extracted from the [TEOS dataset](https://data.nvision.eecs.yorku.ca/TEOS/), for five runs with random seeds, and compared the performance of the algorithms during training and evaluation. We found that SAC outperforms all the other algorithms.
# Setup
1. Clone the repository: `git clone --recurse-submodules ...`
2. In `third_party/zed-ros-wrapper`, make sure the `devel` branch is checked out: `git checkout devel`
3. `sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev libosmesa6-dev libgl1-mesa-glx libglfw3`
4. Create virtual environment: `conda create --name nbv_env python=3.7`
5. Activate the virtual environment: `conda activate nbv_env`
6. Download and unpack mjpro150 ([link](https://roboti.us/download.html)) and move the `mjpro150` folder into `~/.mujoco/`
7. Download a license key ([link](https://roboti.us/license.html)) and copy it into `~/.mujoco/`
8. Add `LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/USER/.mujoco/mjpro150/bin` to your `.bashrc`
9. Install mujoco-py from `https://github.com/openai/mujoco-py`
10. Install requirements: `pip install -r requirements.txt`
11. Install the package in editable mode: `pip install -e .`
# Execution
- Modify the configs in `config` via the Hydra framework.
- To train and evaluate, for example, the PPO algorithm:
  - start RL training: `python3 scripts/train.py algorithm=ppo`
  - evaluate the RL model: `python3 scripts/evaluate.py algorithm=ppo`
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab.uni-koblenz.de/nbeili/next_best_view_rl_benchmark.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab.uni-koblenz.de/nbeili/next_best_view_rl_benchmark/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context, like a particular programming language version or operating system, or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
"""
SimpleNet in Pytorch
github.com/Coderx7/SimpleNet_Pytorch
github.com/Coderx7/TF_Pytorch_testbed/blob/master/Pytorch/models/simplenet.py
https://heartbeat.fritz.ai/basics-of-image-classification-with-pytorch-2f8973c51864
"""
import torch.nn as nn
class Unit(nn.Module):
def __init__(self, in_channels, out_channels):
super(Unit, self).__init__()
self.conv = nn.Conv2d(in_channels=in_channels, kernel_size=3, out_channels=out_channels, stride=1, padding=1)
self.bn = nn.BatchNorm2d(num_features=out_channels)
self.relu = nn.ReLU()
def forward(self, input):
output = self.conv(input)
output = self.bn(output)
output = self.relu(output)
return output
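# Note (added comment): each Unit is a 3x3 convolution with stride 1 and padding 1 followed by
# batch normalization and ReLU, so it preserves the spatial resolution of its input; all
# downsampling in BasicNet below is done by the interleaved MaxPool2d layers.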
class BasicNet(nn.Module):
def __init__(self, classes=10):
super(BasicNet, self).__init__()
# Create 14 layers of the unit with max pooling in between
self.unit1 = Unit(in_channels=3, out_channels=32)
self.unit2 = Unit(in_channels=32, out_channels=32)
self.unit3 = Unit(in_channels=32, out_channels=32)
self.pool1 = nn.MaxPool2d(kernel_size=2)
self.unit4 = Unit(in_channels=32, out_channels=64)
self.unit5 = Unit(in_channels=64, out_channels=64)
self.unit6 = Unit(in_channels=64, out_channels=64)
self.unit7 = Unit(in_channels=64, out_channels=64)
self.pool2 = nn.MaxPool2d(kernel_size=2)
self.unit8 = Unit(in_channels=64, out_channels=128)
self.unit9 = Unit(in_channels=128, out_channels=128)
self.unit10 = Unit(in_channels=128, out_channels=128)
self.unit11 = Unit(in_channels=128, out_channels=128)
self.pool3 = nn.MaxPool2d(kernel_size=2)
self.unit12 = Unit(in_channels=128, out_channels=128)
self.unit13 = Unit(in_channels=128, out_channels=128)
self.unit14 = Unit(in_channels=128, out_channels=128)
self.avgpool = nn.AvgPool2d(kernel_size=4)
# Add all the units into the Sequential layer in exact order
self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4, self.unit5, self.unit6
, self.unit7, self.pool2, self.unit8, self.unit9, self.unit10, self.unit11, self.pool3,
self.unit12, self.unit13, self.unit14, self.avgpool)
self.fc = nn.Linear(in_features=128*7*7, out_features=classes)
def forward(self, input):
output = self.net(input)
# print(output.shape)
output = output.view(output.size(0), -1)
# output = output.view(-1, 128)
output = self.fc(output)
return output
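# --- Usage sketch (added; not part of the original file): a quick shape check for BasicNet. ---
# With 224x224 RGB inputs, the three 2x2 max-pools shrink the feature map 224 -> 112 -> 56 -> 28
# and the 4x4 average pool shrinks it to 7x7, so the flattened vector has 128 * 7 * 7 entries,
# matching the in_features of the final linear layer.
if __name__ == "__main__":
    import torch
    model = BasicNet(classes=10)
    dummy = torch.randn(2, 3, 224, 224)  # batch of two RGB images
    logits = model(dummy)
    print(logits.shape)  # expected: torch.Size([2, 10])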
import torch
import torch.nn as nn
from torchvision import models
from classification.simplenet import SimpleNet
from classification.basicnet import BasicNet
class NetLoader:
def __init__(self, model_name, num_classes, resize_size=224,
feature_extract=True, use_pretrained=True, custom_pre_model=None):
self.model_name = model_name
self.num_classes = num_classes
self.resize_size = resize_size
self.feature_extract = feature_extract
self.use_pretrained = use_pretrained
self.custom_pre_model = custom_pre_model
self.model, self.input_size = self.initialize_model()
def get_model(self):
return self.model, self.input_size
def initialize_model(self):
# Initialize these variables which will be set in this if statement. Each of these
# variables is model specific.
model_ft = None
input_size = 0
if self.model_name == "basicnet":
""" BasicNet
"""
model_ft = BasicNet(classes=self.num_classes)
input_size = self.resize_size
# TODO check for None
if self.use_pretrained and self.custom_pre_model:
                checkpoint = torch.load(self.custom_pre_model)
model_ft.load_state_dict(checkpoint)
model_ft.train()
elif self.model_name == "simplenet":
""" SimpleNet
"""
model_ft = SimpleNet(classes=self.num_classes)
input_size = self.resize_size
if self.use_pretrained and self.custom_pre_model:
checkpoint = torch.load(self.custom_pre_model)
model_ft.load_state_dict(checkpoint)
model_ft.train()
elif self.model_name == "resnet18":
""" Resnet18
"""
model_ft = models.resnet18(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "resnet34":
""" Resnet34
"""
model_ft = models.resnet34(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "resnet50":
""" Resnet50
"""
model_ft = models.resnet50(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "resnet101":
""" Resnet101
"""
model_ft = models.resnet101(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "resnet152":
""" Resnet152
"""
model_ft = models.resnet152(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "alexnet":
""" Alexnet
"""
model_ft = models.alexnet(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.classifier[6].in_features
model_ft.classifier[6] = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "vgg":
""" VGG11_bn
"""
model_ft = models.vgg11_bn(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.classifier[6].in_features
model_ft.classifier[6] = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "squeezenet":
""" Squeezenet
"""
model_ft = models.squeezenet1_0(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
model_ft.classifier[1] = nn.Conv2d(512, self.num_classes, kernel_size=(1,1), stride=(1,1))
model_ft.num_classes = self.num_classes
input_size = 224
elif self.model_name == "densenet":
""" Densenet
"""
model_ft = models.densenet121(pretrained=self.use_pretrained)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
num_ftrs = model_ft.classifier.in_features
model_ft.classifier = nn.Linear(num_ftrs, self.num_classes)
input_size = 224
elif self.model_name == "inception":
""" Inception v3
Be careful, expects (299,299) sized images and has auxiliary output
"""
            model_ft = models.inception_v3(pretrained=self.use_pretrained, aux_logits=False)
self.set_parameter_requires_grad(model_ft, self.feature_extract)
            # Handle the auxiliary net
# num_ftrs = model_ft.AuxLogits.fc.in_features
# model_ft.AuxLogits.fc = nn.Linear(num_ftrs, self.num_classes)
# Handle the primary net
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, self.num_classes)
input_size = 299
else:
print("Invalid model name, exiting...")
exit()
return model_ft, input_size
@staticmethod
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
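# --- Usage sketch (added; not part of the original file): load a frozen ResNet-18 backbone. ---
# With feature_extract=True, set_parameter_requires_grad freezes every pretrained parameter and
# only the newly created fc layer remains trainable; for "resnet18" the returned input_size is 224.
if __name__ == "__main__":
    loader = NetLoader(model_name="resnet18", num_classes=5,
                       feature_extract=True, use_pretrained=True)
    model, input_size = loader.get_model()
    trainable = [name for name, p in model.named_parameters() if p.requires_grad]
    print(input_size, trainable)  # expected: 224 ['fc.weight', 'fc.bias']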
import os
import numpy as np
import torch
from torchvision.transforms import transforms
from torch.autograd import Variable
from PIL import Image
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sn
import torch.nn as nn
from sklearn.manifold import TSNE
import umap.umap_ as umap
import umap.plot
import json
from classification.netloader import NetLoader
class Predictor:
def __init__(self, cfg, test=False):
self.cfg = cfg
        #torch.manual_seed(self.cfg.classificator.seed)
#torch.backends.cudnn.deterministic = True
#torch.backends.cudnn.benchmark = False
self.print_debug = False
self.test = test
root_dir = self.cfg.system.project_path
if self.test:
self.model_dir_path = self.cfg.classificator.test_model_path
self.model_path = os.path.join(self.model_dir_path, self.cfg.classificator.test_model)
self.class_map_path = os.path.join(self.model_dir_path, self.cfg.classificator.class_map)
self.net = self.cfg.classificator.net
self.resize_size = self.cfg.classificator.resize_size
self.activation = self.cfg.classificator.activation
else:
self.model_dir_path = root_dir + self.cfg.model.path
self.model_path = os.path.join(self.model_dir_path, self.cfg.model.model)
self.class_map_path = os.path.join(self.model_dir_path, self.cfg.model.class_map)
self.net = self.cfg.model.net
self.resize_size = self.cfg.model.resize_size
self.activation = self.cfg.model.activation
self.model_name = self.model_path.split("/")[-1].split(".")[0]
        print("[Predictor] Model: %s" % self.model_path)
self.class_map = json.load(open(self.class_map_path))
self.classes = len(self.class_map.keys())
netloader = NetLoader(model_name=self.net,
num_classes=self.classes,
resize_size=self.resize_size)
self.model, self.resize = netloader.get_model()
self.softmax = nn.Softmax(dim=1)
self.sigmoid = nn.Sigmoid()
self.cuda_avail = torch.cuda.is_available()
if self.cuda_avail:
self.model = self.model.cuda()
checkpoint = torch.load(self.model_path)
self.model.load_state_dict(checkpoint)
self.model.eval()
def predict_image(self, image, resize=224, use_activation=True):
if isinstance(image, str):
image = Image.open(image)
image = Image.fromarray(np.uint8(image)).convert("RGB")
elif isinstance(image, np.ndarray):
image = Image.fromarray(np.uint8(image)).convert("RGB")
        # Define transformations for the image (should match the preprocessing used during training)
transformation = transforms.Compose([
transforms.Resize(resize),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
# Pre-process the image
image_tensor = transformation(image).float()
# Add an extra batch dimension since pytorch treats all images as batches
image_tensor = image_tensor.unsqueeze_(0)
if self.cuda_avail:
            image_tensor = image_tensor.cuda()
dtype = torch.cuda.FloatTensor
input = torch.autograd.Variable(image_tensor.type(dtype))
else:
# Turn the input into a Variable
input = Variable(image_tensor)
# Predict the class of the image
output = self.model(input)
if use_activation:
if self.activation == "softmax":
output = self.softmax(output)
elif self.activation == "sigmoid":
output = self.sigmoid(output)
elif self.activation is None:
pass
else:
                print("[Error] self.activation is not defined correctly, applying no activation")
# print(output)
output_data = output.data.cpu().numpy()
if self.print_debug:
            print(f"output: {output}")
return output_data[0]
def get_predicted_object(self, output_data):
index = output_data.argmax()
accuracy = output_data[index]
acc_rounded = float(str(round(accuracy)))