Fix for Mask Patch Failure and Quantization Issues in Latest transformers Versions
#368
In the latest versions of the `transformers` library (specifically above version 4.34.1), the `_make_causal_mask` function in the `modeling_clip` module has been removed. Previously, `modeling_clip` built the causal attention mask by calling `_make_causal_mask`; in recent releases that call has been replaced by `_create_4d_causal_attention_mask` (see the sketch below). You can see more in this thread: huggingface/transformers#28305.
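For reference, a minimal sketch of the change from `modeling_clip`'s point of view, assuming the helper is now imported from `transformers.modeling_attn_mask_utils`; the exact surrounding code differs between versions and the variable names here are illustrative:

```python
import torch
from transformers.modeling_attn_mask_utils import _create_4d_causal_attention_mask

# In transformers <= 4.34.1, modeling_clip built the causal mask with its own
# module-level helper, roughly:
#     causal_attention_mask = _make_causal_mask(
#         input_shape, hidden_states.dtype, device=hidden_states.device
#     )
#
# In newer releases that helper is gone and modeling_clip calls the shared
# utility below instead:
input_shape = (1, 77)  # (batch_size, sequence_length); 77 is CLIP's text length
causal_attention_mask = _create_4d_causal_attention_mask(
    input_shape, torch.float32, device=torch.device("cpu")
)
print(causal_attention_mask.shape)  # torch.Size([1, 1, 77, 77])
```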
This change disrupts the functionality in `python_coreml_stable_diffusion/torch2coreml.py`, where the existing override can no longer patch the `_make_causal_mask` function as expected, resulting in an error during quantization. See related issues: #331, #303, #325, #246
This PR addresses the issue by adding a monkey patch to `modeling_clip` for the `_create_4d_causal_attention_mask` function, thereby fixing the mask patch failure and restoring compatibility with the `--quantize-nbits` feature in the latest `transformers` versions. It also retains the original function override to maintain support for older `transformers` versions. A sketch of the approach is shown below.
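A minimal sketch of the dual patch, assuming `patched_make_causal_mask` is the existing quantization-friendly override from `torch2coreml.py` (as sketched above); both module attributes point at the same function so the override takes effect regardless of which helper the installed `transformers` version actually calls:

```python
from transformers.models.clip import modeling_clip

# Older transformers (<= 4.34.1): modeling_clip calls _make_causal_mask,
# so the original override is kept.
modeling_clip._make_causal_mask = patched_make_causal_mask

# Newer transformers: modeling_clip calls _create_4d_causal_attention_mask,
# so the same override is additionally installed under that name.
modeling_clip._create_4d_causal_attention_mask = patched_make_causal_mask
```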