
robosuite v1.3 release #260

Merged · 624 commits merged into master · Oct 19, 2021
Conversation

@yukezhu (Member) commented Oct 19, 2021

robosuite 1.3.0 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Critical Bug Fixes
  • Other Bug Fixes

Highlights

This release of robosuite brings powerful rendering functionalities, including new renderers and multiple vision modalities, in addition to some general-purpose camera utilities. Below, we discuss the key details of these new features:

Renderers

In addition to the native Mujoco renderer, we present two new renderers, NVISII and iGibson, and introduce a standardized rendering interface to enable easy swapping of renderers.

NVISII is a high-fidelity ray-tracing renderer originally developed by NVIDIA, and adapted for plug-and-play usage in robosuite. It is primarily used for training perception models and visualizing results in high quality. It can run at up to ~0.5 fps using a GTX 1080Ti GPU. Note that NVISII must be installed (pip install nvisii) in order to use this renderer.

iGibson is a much faster renderer that additionally supports physics-based rendering (PBR) and direct rendering to pytorch tensors. While not as high-fidelity as NVISII, it is incredibly fast and can run at up to ~1500 fps using a GTX 1080Ti GPU. Note that iGibson must be installed (pip install igibson) in order to use this renderer.

With the addition of these new renderers, we also introduce a standardized renderer interface for easy usage and customization of the various renderers. During each environment step, the renderer updates its internal state by calling update() and renders by calling render(...). The resulting visual observations can be polled by calling get_pixel_obs() or by calling other methods specific to individual renderers. We provide a demo script for testing each new renderer, and our docs also provide additional information on specific renderer details and installation procedures.
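As a rough illustration of the update()/render()/get_pixel_obs() flow described above, here is a minimal, self-contained sketch. The method names follow the release notes, but the base class and the dummy renderer are illustrative stand-ins, not robosuite's actual class definitions:

```python
from abc import ABC, abstractmethod


class Renderer(ABC):
    """Hypothetical sketch of a standardized renderer interface."""

    @abstractmethod
    def update(self):
        """Sync the renderer's internal state with the simulation."""

    @abstractmethod
    def render(self, **kwargs):
        """Render the current frame."""

    @abstractmethod
    def get_pixel_obs(self):
        """Return the most recent visual observation(s)."""


class DummyRenderer(Renderer):
    """Trivial stand-in that fakes a 2x2 frame, used only to show the flow."""

    def __init__(self):
        self.frame = None

    def update(self):
        self.frame = [[0, 0], [0, 0]]  # pretend to read pixels from the sim

    def render(self, **kwargs):
        pass  # a real renderer would draw the scene here

    def get_pixel_obs(self):
        return self.frame


# Per-step flow from the notes: update internal state, render, then poll pixels.
renderer = DummyRenderer()
renderer.update()
renderer.render()
obs = renderer.get_pixel_obs()
```

Because every renderer exposes the same three calls, swapping Mujoco for NVISII or iGibson does not change the environment-stepping code.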

Vision Modalities

In addition to new renderers, we also provide broad support for multiple vision modalities across all (Mujoco, NVISII, iGibson) renderers:

  • RGB: Standard 3-channel color frames with values in range [0, 255]. This is set during environment construction with the use_camera_obs argument.
  • Depth: 1-channel frame with normalized values in range [0, 1]. This is set during environment construction with the camera_depths argument.
  • Segmentation: 1-channel frames with pixel values corresponding to integer IDs for various objects. Segmentation can occur by class, instance, or geom, and is set during environment construction with the camera_segmentations argument.

In addition to the above modalities, the following modalities are supported by a subset of renderers:

  • Surface Normals: [NVISII, iGibson] 3-channel (x,y,z) normalized direction vectors.
  • Texture Coordinates: [NVISII] 3-channel (x,y,z) coordinate texture mappings for each element.
  • Texture Positioning: [NVISII, iGibson] 3-channel (x,y,z) global coordinates of each pixel.

Specific modalities can be set during environment and renderer construction. We provide a demo script for testing the different modalities supported by NVISII and a demo script for testing the different modalities supported by iGibson.
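Since a segmentation frame is just a 1-channel image of integer object IDs, downstream code can slice out per-object masks with plain NumPy. The frame and IDs below are synthetic stand-ins, not output produced by robosuite:

```python
import numpy as np

# Synthetic 1-channel segmentation frame (integer object IDs), standing in
# for the kind of output the camera_segmentations modality produces.
seg = np.array([
    [0, 0, 2],
    [0, 2, 2],
    [1, 1, 0],
])

target_id = 2                  # hypothetical ID of the object of interest
mask = (seg == target_id)      # boolean mask selecting that object's pixels
pixel_count = int(mask.sum())  # number of pixels belonging to the object
```

The same masking works whether segmentation was requested by class, instance, or geom; only the meaning of the IDs changes.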

Camera Utilities

We provide a set of general-purpose camera utilities intended to make it easy to manipulate environment cameras. Of note, we include transform utilities for mapping between pixel, camera, and world frames, as well as a CameraMover class for dynamically moving a camera during simulation. CameraMover can be used for many purposes; for example, the DemoPlaybackCameraMover subclass enables smooth visualization during demonstration playback.
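The pixel/camera-frame mapping mentioned above boils down to standard pinhole-camera algebra. A minimal, self-contained sketch follows; the intrinsic matrix and test point are made up for illustration, and this is not robosuite's actual API:

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths (fx, fy) on the diagonal
# and principal point (cx, cy) in the last column.
K = np.array([
    [500.0,   0.0, 160.0],
    [  0.0, 500.0, 120.0],
    [  0.0,   0.0,   1.0],
])


def camera_to_pixel(point_cam, K):
    """Project a 3D point in the camera frame to (u, v) pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]  # perspective divide by depth


# A point 2 m in front of the camera, offset 0.1 m and 0.05 m along x and y.
u, v = camera_to_pixel(np.array([0.1, 0.05, 2.0]), K)
```

Mapping pixels back out to the world frame additionally requires the camera's pose (extrinsics), which is what makes bundled transform utilities convenient.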

Improvements

The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but highlights the most notable changes.

Critical Bug Fixes

Other Bug Fixes


Contributor Spotlight

A big thank you to the following community members for spearheading the renderer PRs for this release!
@awesome-aj0123
@divyanshj16

abhihjoshi and others added 30 commits January 5, 2021 10:37
Merge latest public HEAD to robosuite-dev
yukezhu and others added 22 commits October 4, 2021 15:24
update roboturk docs with links to robomimic
addition of NVisII and iGibson renderers
@amandlek (Member) left a comment:
Read through some portions of the code as a sanity check - it's likely that some functionality will break with the default mujoco renderers (see my comments).

README.md: two outdated review comments (resolved)
@@ -58,38 +56,29 @@ def __new__(meta, name, bases, class_dict):
class MujocoEnv(metaclass=EnvMeta):
"""
Initializes a Mujoco Environment.

Args:
has_renderer (bool): If true, render the simulation state in

Are these renderer arguments ignored if not using the mujoco / default renderers? We need to update docstrings + documentation online if that's the case - it's an important detail.

"""
Gets the pixel observations for the environment from the specified renderer
"""
self.viewer.get_pixel_obs()

This seems like it will break for the default / mujoco renderer. We should also probably make sure here that off-screen rendering is enabled (if using the mujoco renderer).

@@ -484,16 +510,22 @@ def visualize(self, vis_settings):
for obj in self.model.mujoco_objects:
obj.set_sites_visibility(sim=self.sim, visible=vis_settings["env"])

def set_camera_pos_quat(self, camera_pos, camera_quat):
It seems odd to only offer this method for nvisii renderer and not others - any particular reason why (a) we need this method for the nvisii renderer or (b) we do not support it for other renderers?


I added this for iG renderer.

Args:
xml_string (str): Filepath to the xml file that will be loaded directly into the sim
"""

# if there is an active viewer window, destroy it
self.close()
if self.renderer != 'nvisii':

why is this not done for the nvisii renderer? more comments would help

@@ -10,6 +10,11 @@
from robosuite.robots import ROBOT_CLASS_MAPPING
from robosuite.controllers import reset_controllers

try:

why is this snippet here? we should be able to delete it right?


yes we can delete this, it was just left when I was cleaning up.


Deleted these lines.

robosuite/scripts/playback_demonstrations_from_hdf5.py: outdated review comment (resolved)

try:
    import torch
    HAS_TORCH = True
except ImportError:
    HAS_TORCH = False

are we still using these flags? I thought we agreed on an alternative, or at least setting a Macro (utils/macros.py)


This flag will only be used when using the iG renderer. If HAS_TORCH is False and someone tries to use render2tensor it will raise an Exception.
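The guard pattern under discussion can be sketched in isolation: probe the optional import once at module load, cache the result in a flag, and raise only when the torch-dependent feature is actually invoked. The stub below is a stand-in; it is not the actual iGibson-renderer method:

```python
# Optional-dependency guard: try the import once, record the outcome in a
# flag, and fail loudly only when the feature that needs it is used.
try:
    import torch  # noqa: F401
    HAS_TORCH = True
except ImportError:
    HAS_TORCH = False


def render2tensor_stub():
    """Hypothetical stand-in for a render-to-tensor call that needs torch."""
    if not HAS_TORCH:
        raise ImportError(
            "render-to-tensor requires torch; install it with `pip install torch`"
        )
    ...  # a real implementation would return a torch tensor here
```

The follow-up commit referenced below removed the flag in favor of another mechanism, but the lazy-failure idea is the same: environments without torch still import cleanly.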

@yukezhu yukezhu merged commit c7d0b51 into master Oct 19, 2021
yukezhu added a commit that referenced this pull request Nov 3, 2021
Address remaining issues in #260

* remove unnecessary code and HAS_TORCH flag
* add set_camera_pos_quat function for iG
6 participants