This repository has been archived by the owner on May 27, 2024. It is now read-only.

Update formatting on FAQ section #1

Merged
merged 1 commit on Jan 14, 2022
8 changes: 4 additions & 4 deletions RFC-0020-Unified-Memory-for-Pytorch.md
@@ -119,8 +119,8 @@ It is possible that there are different optimal block sizes when using managed memory

## FAQ

Should we have set_enabled_uvm(bool)? Will there be a use case where the user can set this to false? As of now, UVM will be OFF until it is turned ON. We don’t want to mix the two, so there won’t specifically be the ability to turn it OFF.
#### Should we have `set_enabled_uvm(bool)`? Will there be a use case where the user can set this to false?
- As of now, UVM will be OFF until it is turned ON. We don’t want to mix the two just yet, so there won’t specifically be the ability to turn it OFF.
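
A minimal sketch of how this opt-in could look from Python. The RFC only names `set_enabled_uvm(bool)`; the `torch.cuda` placement below is an assumption for illustration, not a shipped API:

```python
import torch

# Hypothetical usage of the proposed toggle (assumed torch.cuda namespace).
# UVM starts OFF; calling this once opts the process in, and the RFC
# deliberately provides no way to turn it back OFF afterwards.
torch.cuda.set_enabled_uvm(True)

# Subsequent CUDA allocations would then be backed by managed (UVM) memory.
x = torch.empty(1024, device="cuda")
```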

Will this eliminate all calls to cudaMemcpy/hipMemcpy?

No, copies are still expected in many cases, such as when a user assigns a new tensor B to an old tensor A with some modifications (dtype, layout, etc.).
#### Will this eliminate all calls to `cudaMemcpy`/`hipMemcpy`?
- No, copies are still expected in many cases, such as when a user assigns a new tensor B to an old tensor A with some modifications (dtype, layout, etc.).
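
For example, a dtype conversion must materialize new storage regardless of how the underlying memory is allocated, so a copy still runs even with UVM on. This is plain PyTorch behavior, nothing UVM-specific:

```python
import torch

a = torch.ones(4, device="cuda")      # tensor A, float32
b = a.to(dtype=torch.float64)         # tensor B: new dtype => fresh storage + copy

# B cannot alias A's storage because the element size differs, so the
# allocator hands out a new buffer and the data is copied into it.
print(a.data_ptr() == b.data_ptr())   # False: distinct allocations
```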