Releases: Unity-Technologies/ml-agents
ML-Agents Release 22
[3.0.0] - 2024-09-02
Major Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Upgraded to Sentis 2.0.0 (#6137)
- Upgraded to Sentis 1.3.0-pre.3 (#6070)
- Upgraded to Sentis 1.3.0-exp.2 (#6013)
- The minimum supported Unity version was updated to 2023.2. (#6071)
ml-agents / ml-agents-envs
- Upgraded to PyTorch 2.1.1. (#6013)
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Added a `no-graphics-monitor` option. (#6014)
ml-agents / ml-agents-envs
- Updated Installation.md. (#6004)
- Updated Using-Virtual-Environment.md. (#6033)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Fixed failing CI post-upgrade. (#6141)
- Fixed missing assembly reference for Google Protobuf. (#6099)
- Fixed missing tensor Dispose in ModelRunner. (#6028)
- Fixed the 3DBall sample package to remove the Barracuda dependency. (#6030)
ml-agents / ml-agents-envs
- Fixed sample code indentation in migrating.md. (#5840)
- Fixed continuous integration tests. (#6079)
- Fixed bad link format. (#6078)
- Bumped numpy version to >=1.23.5,<1.24.0. (#6082)
- Bumped onnx version to 1.15.0. (#6062)
- Bumped protobuf version to >=3.6,<21. (#6062)
ML-Agents Release 21
[3.0.0-exp.1] - 2023-10-09
Major Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Upgraded ML-Agents to Sentis 1.2.0-exp.2 and deprecated Barracuda. (#5979)
- The minimum supported Unity version was updated to 2022.3. (#5950)
- Added batched raycast sensor option. (#5950)
ml-agents / ml-agents-envs
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Added a DecisionStep parameter to DecisionRequester. (#5940)
  - This allows execution timing to be staggered across multiple agents, leading to more stable performance.
ml-agents / ml-agents-envs
- Added timeout CLI and YAML config file support for specifying the environment timeout. (#5991)
- Added a training config feature to evenly distribute checkpoints throughout training. (#5842)
- Updated the training area replicator to only replicate training areas when running a build. (#5842)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Fixed compiler errors when using IAsyncEnumerable with .NET Standard 2.1 enabled. (#5951)
ml-agents / ml-agents-envs
ML-Agents Release 20
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v2.3.0-exp.3 |
com.unity.ml-agents.extensions (C#) | v0.6.1-preview |
ml-agents (Python) | v0.30.0 |
ml-agents-envs (Python) | v0.30.0 |
gym-unity (Python) | v0.30.0 |
Communicator (C#/Python) | v1.5.0 |
Release Notes
Major Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- The minimum supported Unity version was updated to 2021.3. (#)
ml-agents / ml-agents-envs
- Add your trainers to the package using the ML-Agents Custom Trainers plugin. (#)
  - The ML-Agents Custom Trainers plugin is an extensible plugin system to define new trainers based on the high-level trainer API.
- Refactored core modules to make ML-Agents internal classes more generalizable to various RL algorithms. (#)
- The minimum supported Python version for ML-Agents has changed to 3.8.13. (#)
- The minimum supported version of PyTorch was changed to 1.8.0. (#)
- Add shared critic configurability for PPO. (#)
- We moved `UnityToGymWrapper` and the `PettingZoo` API to the `ml-agents-envs` package. All these environments will be versioned under the `ml-agents-envs` package in the future. (#)
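For readers updating their imports, here is a minimal sketch of driving a Unity build through the relocated gym wrapper. The `mlagents_envs.envs.unity_gym_env` module path and the placeholder build path are assumptions based on the move described above; check the `ml-agents-envs` documentation for your installed version.

```python
# Minimal sketch: use the gym wrapper from its new home in ml-agents-envs.
# The module path below is an assumption based on the package move; the build path is a placeholder.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper  # previously in gym_unity.envs

# Launch a Unity build (file_name=None would attach to the Editor in Play mode).
unity_env = UnityEnvironment(file_name="path/to/EnvBuild")
env = UnityToGymWrapper(unity_env, uint8_visual=False, flatten_branched=False, allow_multiple_obs=False)

obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```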
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Added a switch to RayPerceptionSensor to allow rays to be ordered left to right. (#)
  - The current alternating order is still the default but will be deprecated.
- Added support for enabling/disabling the camera object attached to the camera sensor in order to improve performance. (#)
ml-agents / ml-agents-envs
- Renamed the path that shadows `torch` to `mlagents/trainers/torch_entities` and updated the respective imports. (#)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
ml-agents / ml-agents-envs
ML-Agents Release 19
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v2.2.1-exp.1 |
com.unity.ml-agents.extensions (C#) | v0.6.1-preview |
ml-agents (Python) | v0.28.0 |
ml-agents-envs (Python) | v0.28.0 |
gym-unity (Python) | v0.28.0 |
Communicator (C#/Python) | v1.5.0 |
Release Notes
Major Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- The minimum supported Unity version was updated to 2020.3. (#5673)
- Added a new feature to replicate training areas dynamically during runtime. (#5568)
- Update Barracuda to 2.3.1-preview (#5591)
- Update Input System to 1.3.0 (#5661)
ml-agents / ml-agents-envs / gym-unity (Python)
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Added the capacity to initialize behaviors from any checkpoint and not just the latest one (#5525)
- Added the ability to get a read-only view of the stacked observations (#5523)
ml-agents / ml-agents-envs / gym-unity (Python)
- Set gym version in gym-unity to gym release 0.20.0 (#5540)
- Added support for having `beta`, `epsilon`, and `learning rate` on separate schedules (affects only PPO and POCA). (#5538)
- Changed default behavior to restart crashed Unity environments rather than exiting. (#5553)
  - Rate and lifetime limits on this are configurable via 3 new YAML options:
    - env_params.max_lifetime_restarts (--max-lifetime-restarts) [default=10]
    - env_params.restarts_rate_limit_n (--restarts-rate-limit-n) [default=1]
    - env_params.restarts_rate_limit_period_s (--restarts-rate-limit-period-s) [default=60]
- Deterministic action selection is now supported during training and inference. (#5619)
  - Added a new `--deterministic` CLI flag to deterministically select the most probable actions in the policy. The same thing can be achieved by adding `deterministic: true` under `network_settings` in the run options configuration. (#5597)
  - Extra tensors are now serialized to support deterministic action selection in ONNX. (#5593)
  - Support inference with deterministic action selection in the Editor. (#5599)
- Added minimal analytics collection to LL-API (#5511)
- Update Colab notebooks for GridWorld example with DQN illustrating the use of the Python API and how to export to ONNX (#5643)
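As a companion to the Colab notebooks mentioned above, the following is a minimal sketch of stepping an environment with the low-level Python API in `mlagents_envs`. It assumes a single behavior in the scene and uses a placeholder build path.

```python
# Minimal low-level Python API loop (mlagents_envs). Assumes one behavior in the scene;
# the build path is a placeholder.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/EnvBuild")  # file_name=None connects to the Editor's Play mode
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Sample random actions for every agent that requested a decision this step.
    actions = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```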
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Updated the gRPC native lib to universal for arm64 and x86_64. This change should enable ML-Agents usage on Mac M1. (#5283, #5519)
- Fixed a bug where ml-agents code wouldn't compile on platforms that didn't support analytics (PS4/5, XBoxOne) (#5628)
ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed a bug where the critics were not being normalized during training. (#5595)
- Fixed the bug where curriculum learning would crash because of the incorrect run_options parsing. (#5586)
- Fixed a bug in multi-agent cooperative training where agents might not receive all of the states of terminated teammates. (#5441)
- Fixed wrong attribute name in the argparser for the torch device option. (#5433, #5467)
- Fixed conflicting CLI and yaml options regarding resume & initialize_from (#5495)
- Fixed failing tests for gym-unity due to gym 0.20.0 release (#5540)
- Fixed a bug in VAIL where the variational bottleneck was not properly passing gradients (#5546)
- Harden user PII protection logic and extend TrainingAnalytics to expose detailed configuration parameters. (#5512)
ML-Agents Release 18
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v2.1.0-exp.1 |
com.unity.ml-agents.extensions (C#) | v0.5.0-preview |
ml-agents (Python) | v0.27.0 |
ml-agents-envs (Python) | v0.27.0 |
gym-unity (Python) | v0.27.0 |
Communicator (C#/Python) | v1.5.0 |
Release Notes
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Updated Barracuda to 2.0.0-pre.3. (#5385)
- Fixed `NullReferenceException` when adding Behavior Parameters with no `Agent`. (#5382)
- Added stacking option in the Editor for `VectorSensorComponent`. (#5376)
ml-agents / ml-agents-envs / gym-unity (Python)
- Locked cattrs dependency version to 1.6. (#5397)
- Added a fully connected visual encoder for environments with very small image inputs. (#5351)
- Colab notebooks illustrating the use of the Python API were added to the repository. (#5399)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- `RigidBodySensorComponent` now displays a warning if it's used in a way that won't generate useful observations. (#5387)
- Updated the documentation with a note saying that `GridSensor` does not work in 2D environments. (#5396)
- Fixed an error where sensors would not reset properly before collecting the last observation at the end of an episode. (#5375)
ml-agents / ml-agents-envs / gym-unity (Python)
ML-Agents Release 17
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v2.0.0 |
com.unity.ml-agents.extensions (C#) | v0.4.0-preview |
ml-agents (Python) | v0.26.0 |
ml-agents-envs (Python) | v0.26.0 |
gym-unity (Python) | v0.26.0 |
Communicator (C#/Python) | v1.5.0 |
Breaking Changes
Minimum Version Support
- The minimum supported Unity version was updated to 2019.4. (#5166)
C# API Changes
- Several breaking interface changes were made. See the Migration Guide for more details.
- Some methods previously marked as `Obsolete` have been removed. If you were using these methods, you need to replace them with their supported counterpart. (#5024)
- The interface for disabling discrete actions in `IDiscreteActionMask` has changed. `WriteMask(int branch, IEnumerable<int> actionIndices)` was replaced with `SetActionEnabled(int branch, int actionIndex, bool isEnabled)`. (#5060)
- `IActuator` now implements `IHeuristicProvider`. (#5110)
- `ISensor.GetObservationShape()` has been removed, and `GetObservationSpec()` has been added. The `ITypedSensor` and `IDimensionPropertiesSensor` interfaces have been removed. (#5127)
- `ISensor.GetCompressionType()` has been removed, and `GetCompressionSpec()` has been added. The `ISparseChannelSensor` interface has been removed. (#5164)
- The abstract method `SensorComponent.GetObservationShape()` was no longer being called, so it has been removed. (#5172)
- `SensorComponent.CreateSensor()` has been replaced with `SensorComponent.CreateSensors()`, which returns an `ISensor[]`. (#5181)
- The default `InferenceDevice` is now `InferenceDevice.Default`, which is equivalent to `InferenceDevice.Burst`. If you depend on the previous behavior, you can explicitly set the Agent's `InferenceDevice` to `InferenceDevice.CPU`. (#5175)
Model Format Changes
- Models trained with 1.x versions of ML-Agents no longer work at inference if they were trained using recurrent neural networks (#5254)
- The `.onnx` models' input names have changed. All input placeholders now use the prefix `obs_`, removing the distinction between visual and vector observations. In addition, the inputs and outputs of LSTMs have changed. Models created with this version are not usable with previous versions of the package. (#5080, #5236)
- The `.onnx` models' discrete action output now contains the discrete action values and not the logits. Models created with this version are not usable with previous versions of the package. (#5080)
Features Moved from com.unity.ml-agents.extensions to com.unity.ml-agents
Match3
- The Match-3 integration utilities have been moved from `com.unity.ml-agents.extensions` to `com.unity.ml-agents`. (#5259)
- `Match3Sensor` has been refactored to produce cell and special type observations separately, and `Match3SensorComponent` now produces two `Match3Sensor`s (unless there are no special types). Previously trained models have different observation sizes and need to be retrained. (#5181)
- The `AbstractBoard` class for integration with Match-3 games has been changed to make it easier to support boards with different sizes using the same model. For a summary of the interface changes, please see the Migration Guide. (#5189)
Grid Sensor
- `GridSensor` has been refactored and moved to the main package, with changes to both sensor interfaces and behaviors. Existing GridSensors created by the extensions package do not work in newer versions. Previously trained models need to be retrained. Please see the Migration Guide for more details. (#5256)
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Updated the Barracuda package to version `1.4.0-preview`. (#5236)
- Added ML-Agents package settings. Now you can configure project-level ML-Agents settings in Editor > Project Settings > ML-Agents. (#5027)
- Made `com.unity.modules.unityanalytics` an optional dependency. (#5109)
- Made `com.unity.modules.physics` and `com.unity.modules.physics2d` optional dependencies. (#5112)
- Added support for `Goal Signal` as a type of observation. Trainers can now use HyperNetworks to process `Goal Signal`. Trainers with HyperNetworks are more effective at solving multiple tasks. (#5142, #5159, #5149)
- Modified the GridWorld environment to use the new `Goal Signal` feature. (#5193)
- `DecisionRequester.ShouldRequestDecision()` and `ShouldRequestAction()` methods have been added. These are used to determine whether `Agent.RequestDecision()` and `Agent.RequestAction()` are called (respectively). (#5223)
- `RaycastPerceptionSensor` now caches its raycast results; they can be accessed via `RayPerceptionSensor.RayPerceptionOutput`. (#5222)
- `ActionBuffers` are now reset to zero before being passed to `Agent.Heuristic()` and `IHeuristicProvider.Heuristic()`. (#5227)
- `Agent` now calls `IDisposable.Dispose()` on all `ISensor`s that implement the `IDisposable` interface. (#5233)
- `CameraSensor`, `RenderTextureSensor`, and `Match3Sensor` now reuse their `Texture2D`s, reducing the amount of memory that needs to be allocated during runtime. (#5233)
- Optimized `ObservationWriter.WriteTexture()` so that it doesn't call `Texture2D.GetPixels32()` for `RGB24` textures. This results in much less memory being allocated during inference with `CameraSensor` and `RenderTextureSensor`. (#5233)
ml-agents / ml-agents-envs / gym-unity (Python)
- Some console outputs have been moved from `info` to `debug` and are no longer printed by default. If you want all messages to be printed, you can run `mlagents-learn` with the `--debug` option or add the line `debug: true` at the top of the yaml config file. (#5211)
- The embedding size of attention layers used when a BufferSensor is in the scene has been changed. It is now fixed to 128 units. It might be impossible to resume training from a checkpoint of a previous version. (#5272)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Fixed a potential bug where sensors and actuators could get sorted inconsistently on different systems due to different Culture settings. Unfortunately, this may require retraining models if it changes the resulting order of the sensors or actuators on your system. (#5194)
- Removed additional memory allocations that were occurring due to assert messages and iterating of DemonstrationRecorders. (#5246)
- Fixed a bug where agents were trying to access uninitialized fields when creating a new RayPerceptionSensorComponent on an agent. (#5261)
- Fixed a bug where the DemonstrationRecorder would throw a null reference exception if `Num Steps To Record` > 0 and `Record` was turned off. (#5274)
ml-agents / ml-agents-envs / gym-unity (Python)
ML-Agents Release 16
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v1.9.1 |
com.unity.ml-agents.extensions (C#) | v0.3.1-preview |
ml-agents (Python) | v0.25.1 |
ml-agents-envs (Python) | v0.25.1 |
gym-unity (Python) | v0.25.1 |
Communicator (C#/Python) | v1.5.0 |
Major Changes
ml-agents / ml-agents-envs / gym-unity (Python)
- The `--resume` flag now supports resuming experiments with additional reward providers or loading partial models if the network architecture has changed. See here for more details. (#5213)
Bug Fixes
com.unity.ml-agents (C#)
- Fixed erroneous warnings when using the Demonstration Recorder. (#5216)
ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed an issue which was causing increased variance when using LSTMs. Also fixed an issue with LSTM when used with POCA and `sequence_length` < `time_horizon`. (#5206)
- Fixed a bug where the SAC replay buffer would not be saved out at the end of a run, even if `save_replay_buffer` was enabled. (#5205)
- ELO now correctly resumes when loading from a checkpoint. (#5202)
- In the Python API, fixed `validate_action` to expect the right dimensions when `set_action_single_agent` is called. (#5208)
- In the `GymToUnityWrapper`, raise an appropriate warning if `step()` is called after an environment is done. (#5204)
- Fixed an issue where using one of the `gym` wrappers would override user-set log levels. (#5201)
ML-Agents Release 15
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v1.9.0 |
com.unity.ml-agents.extensions (C#) | v0.3.0-preview |
ml-agents (Python) | v0.25.0 |
ml-agents-envs (Python) | v0.25.0 |
gym-unity (Python) | v0.25.0 |
Communicator (C#/Python) | v1.5.0 |
Major Changes
com.unity.ml-agents (C#)
- The `BufferSensor` and `BufferSensorComponent` have been added (documentation). They allow the Agent to observe a variable number of entities. For an example, see the Sorter environment. (#4909)
- The `SimpleMultiAgentGroup` class and `IMultiAgentGroup` interface have been added (documentation). These allow Agents to be given rewards and end episodes in groups. For examples, see the Cooperative Push Block, Dungeon Escape and Soccer environments. (#4923)
ml-agents / ml-agents-envs / gym-unity (Python)
- The MA-POCA trainer has been added. This is a new trainer that enables Agents to learn how to work together in groups. Configure `poca` as the trainer in the configuration YAML after instantiating a `SimpleMultiAgentGroup` to use this feature. (#5005)
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Updated com.unity.barracuda to 1.3.2-preview. (#5084)
- Added 3D Ball to the `com.unity.ml-agents` samples. (#5077)
ml-agents / ml-agents-envs / gym-unity (Python)
- The `encoding_size` setting for RewardSignals has been deprecated. Please use `network_settings` instead. (#4982)
- Sensor names are now passed through to `ObservationSpec.name`. (#5036)
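As a small illustration of the sensor-name passthrough, the sketch below prints `ObservationSpec.name` for each observation of each behavior via the low-level Python API. The build path is a placeholder.

```python
# Print the sensor-provided name attached to each ObservationSpec (placeholder build path).
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/EnvBuild")
env.reset()

for behavior_name, spec in env.behavior_specs.items():
    names = [obs_spec.name for obs_spec in spec.observation_specs]
    print(behavior_name, names)

env.close()
```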
Bug Fixes
ml-agents / ml-agents-envs / gym-unity (Python)
- An issue that caused GAIL to fail for environments where agents can terminate episodes by self-sacrifice has been fixed. (#4971)
- Made the error message when observations of different shapes are sent to the trainer clearer. (#5030)
- An issue that prevented curriculums from incrementing with self-play has been fixed. (#5098)
ML-Agents Release 14
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v1.8.1 |
com.unity.ml-agents.extensions (C#) | v0.2.0-preview |
ml-agents (Python) | v0.24.1 |
ml-agents-envs (Python) | v0.24.1 |
gym-unity (Python) | v0.24.1 |
Communicator (C#/Python) | v1.4.0 |
Minor Changes
ml-agents / ml-agents-envs / gym-unity (Python)
- The `cattrs` version dependency was updated to allow `>=1.1.0` on Python 3.8 or higher. (#4821)
Bug Fixes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Fixed an issue where queuing InputEvents overwrote data from previous events in the same frame.
ML-Agents Release 13
Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
Package | Version |
---|---|
com.unity.ml-agents (C#) | v1.8.0 |
com.unity.ml-agents.extensions (C#) | v0.1.0-preview |
ml-agents (Python) | v0.24.0 |
ml-agents-envs (Python) | v0.24.0 |
gym-unity (Python) | v0.24.0 |
Communicator (C#/Python) | v1.4.0 |
Major Features and Improvements
com.unity.ml-agents / com.unity.ml-agents.extensions
- Add an InputActuatorComponent to allow the generation of Agent action spaces from an InputActionAsset. Projects wanting to use this feature will need to add the Input System Package at version 1.1.0-preview.3 or later. (#4881)
ml-agents / ml-agents-envs / gym-unity (Python)
- TensorFlow trainers have been removed, please use the Torch trainers instead. (#4707)
- A plugin system for `mlagents-learn` has been added. You can now define custom `StatsWriter` implementations and register them to be called during training. More types of plugins will be added in the future. (#4788)
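The sketch below shows roughly what a custom `StatsWriter` implementation can look like. The import path, the `write_stats` signature, the factory function, and the entry-point registration mentioned in the comments are best-effort assumptions; consult the ML-Agents plugin documentation for the exact API.

```python
# Sketch of a custom StatsWriter for the mlagents-learn plugin system.
# The import path, method signature, and entry-point group named below are assumptions;
# check the ML-Agents plugin documentation for your version.
from typing import Dict, List

from mlagents.trainers.stats import StatsWriter, StatsSummary


class ConsoleCSVStatsWriter(StatsWriter):
    """Example writer that prints each recorded stat as a comma-separated line."""

    def write_stats(self, category: str, values: Dict[str, StatsSummary], step: int) -> None:
        for name, summary in values.items():
            print(f"{category},{name},{step},{summary.mean}")


def get_console_csv_stats_writer(run_options) -> List[StatsWriter]:
    # Factory that the plugin system calls to obtain writer instances.
    return [ConsoleCSVStatsWriter()]
```

Registration is typically done through a setuptools entry point (e.g. a `mlagents.stats_writer` group pointing at the factory function above); treat that group name as an assumption and verify it against the plugin docs.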
Minor Changes
com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- The `ActionSpec` constructor is now public. Previously, it was not possible to create an ActionSpec with both continuous and discrete actions from code. (#4896)
- `StatAggregationMethod.Sum` can now be passed to `StatsRecorder.Add()`. This will result in the values being summed (instead of averaged) when written to TensorBoard. Thanks to @brccabral for the contribution! (#4816)
- The upper limit for the time scale (set by the `--time-scale` parameter in mlagents-learn) was removed when training with a player. The Editor still requires it to be clamped to 100. (#4867)
- Added the IHeuristicProvider interface to allow IActuators as well as Agents to implement the Heuristic function to generate actions. Updated the Basic example and the Match3 example to use Actuators. Changed the namespace and file names of classes in com.unity.ml-agents.extensions. (#4849)
- Added `VectorSensor.AddObservation(IList<float>)`. `VectorSensor.AddObservation(IEnumerable<float>)` is deprecated. The `IList` version is recommended, as it does not generate any additional memory allocations. (#4887)
- Added `ObservationWriter.AddList()` and deprecated `ObservationWriter.AddRange()`. `AddList()` is recommended, as it does not generate any additional memory allocations. (#4887)
- The Barracuda dependency was upgraded to 1.3.0. (#4898)
- Added `ActuatorComponent.CreateActuators`, and deprecated `ActuatorComponent.CreateActuator`. The default implementation will wrap `ActuatorComponent.CreateActuator` in an array and return that. (#4899)
- `InferenceDevice.Burst` was added, indicating that the Agent's model will be run using Barracuda's Burst backend. This is the default for new Agents, but existing ones that use `InferenceDevice.CPU` should update to `InferenceDevice.Burst`. (#4925)
ml-agents / ml-agents-envs / gym-unity (Python)
- Tensorboard now logs the Environment Reward as both a scalar and a histogram. (#4878)
- Added a `--torch-device` commandline option to `mlagents-learn`, which sets the default `torch.device` used for training. (#4888)
- The `--cpu` commandline option had no effect and was removed. Use `--torch-device=cpu` to force CPU training. (#4888)
- The `mlagents_envs` API has changed: `BehaviorSpec` now has an `observation_specs` property containing a list of `ObservationSpec`. For more information on `ObservationSpec`, see here. (#4763, #4825)
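For illustration, a short sketch of inspecting the new `observation_specs` list on a `BehaviorSpec` through the low-level API; the build path is a placeholder, and only the `shape` field is shown since other metadata fields vary by release.

```python
# Inspect the ObservationSpec list exposed by BehaviorSpec.observation_specs.
# The build path is a placeholder.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/EnvBuild")
env.reset()

for behavior_name, spec in env.behavior_specs.items():
    print(behavior_name)
    for obs_spec in spec.observation_specs:
        # Each ObservationSpec carries the per-observation shape plus additional metadata.
        print("  shape:", obs_spec.shape)

env.close()
```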
Bug Fixes
com.unity.ml-agents (C#)
- Fixed a compile warning about using an obsolete enum in `GrpcExtensions.cs`. (#4812)
- CameraSensor now logs an error if the GraphicsDevice is null. (#4880)
- Removed unnecessary memory allocations in `ActuatorManager.UpdateActionArray()`. (#4877)
- Removed unnecessary memory allocations in `SensorShapeValidator.ValidateSensors()`. (#4879)
- Removed unnecessary memory allocations in `SideChannelManager.GetSideChannelMessage()`. (#4886)
- Removed several memory allocations that happened during inference. On a test scene, this reduced the amount of memory allocated by approximately 25%. (#4887)
- Removed several memory allocations that happened during inference with discrete actions. (#4922)
- Properly catch permission errors when writing timer files. (#4921)
- Unexpected exceptions during training initialization and shutdown are now logged. If you see "noisy" logs, please let us know! (#4930, #4935)
ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed a bug that would cause an exception when `RunOptions` was deserialized via `pickle`. (#4842)
- Fixed a bug that can cause a crash if a behavior can appear during training in multi-environment training. (#4872)
- Fixed the computation of entropy for continuous actions. (#4869)
- Fixed a bug that would cause `UnityEnvironment` to wait the full timeout period and report a misleading error message if the executable crashed without closing the connection. It now periodically checks the process status while waiting for a connection, and raises a better error message if it crashes. (#4880)
- Passing a `-logfile` option in the `--env-args` option to `mlagents-learn` is no longer overwritten. (#4880)
- The `load_weights` function was being called unnecessarily often in the Ghost Trainer, leading to training slowdowns. (#4934)