
Releases: invoke-ai/InvokeAI

v5.6.0rc4

17 Jan 05:59 · Pre-release

This release brings major improvements to Invoke's memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.

Changes since v5.6.0rc3

  • Fixed issue preventing you from typing in textarea fields in the workflow editor.

Changes since v5.6.0rc2

  • Reduce peak memory during FLUX model load.
  • Add keep_ram_copy_of_weights config option to reduce average RAM usage.
  • Revise the default logic for the model cache RAM limit to be more conservative.
  • Support float, integer and string batch data types.
  • Add batch data generators.
  • Support grouped (aka zipped) batches.
  • Fix image quality degradation when inpainting an image repeatedly.
  • Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.
  • Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.

Low-VRAM mode involves four features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory
  • Keeping a copy of models in RAM

Most users should only need to enable partial loading by adding this line to your invokeai.yaml file:

enable_partial_loading: true

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.

Thanks to @RyanJDick for designing and implementing these improvements!

Workflow Batches

We've expanded the capabilities for Batches in Workflows:

  • Float, integer and string batch data types
  • Batch collection generators
  • Grouped (aka zipped) batches

Float, integer and string batch data types

There's a new batch node for each of the new data types. They work the same as the existing image batch node.


You can add a list of values directly in the node, but you'll probably find generators to be a nicer way to set up your batch.

Batch collection generators

These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. This release includes the following generators for floats and integers (sketched in Python after the list):

  • Arithmetic Sequence: Generate a sequence of count numbers, starting from start, that increase or decrease by step.
  • Linear Distribution: Generate a distribution of count numbers, starting with start and ending with end.
  • Uniform Random Distribution: Generate a random distribution of count numbers from min to max. The values are generated randomly when you click Invoke.
  • Parse String: Split the input on the specified character, parsing each value as a number. Non-numbers are ignored.
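
To make these concrete, here is a minimal Python sketch of what each generator produces. The function names and signatures are illustrative only - they are not Invoke's actual implementation:

import random

def arithmetic_sequence(start: float, step: float, count: int) -> list[float]:
    # `count` numbers, starting from `start`, increasing (or decreasing) by `step`
    return [start + i * step for i in range(count)]

def linear_distribution(start: float, end: float, count: int) -> list[float]:
    # `count` numbers evenly spaced from `start` to `end`, inclusive
    if count == 1:
        return [start]
    return [start + i * (end - start) / (count - 1) for i in range(count)]

def uniform_random_distribution(min_val: float, max_val: float, count: int) -> list[float]:
    # `count` values drawn uniformly at random from [min_val, max_val]
    return [random.uniform(min_val, max_val) for _ in range(count)]

def parse_string(text: str, character: str) -> list[float]:
    # Split on `character`, keeping only the pieces that parse as numbers.
    values = []
    for part in text.split(character):
        try:
            values.append(float(part))
        except ValueError:
            pass  # non-numbers are ignored
    return values

print(parse_string("1, 2, banana, 4", ","))  # [1.0, 2.0, 4.0]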


You'll notice the different handle icon for batch generators. These nodes cannot connect to non-batch nodes, which run in the backend.

In the future, we can explore more batch generators. Some ideas:

  • Parse File (string, float, integer): Select a file and parse it, splitting on the specified character.
  • Board (image): Output all images on a board.

Grouped (aka zipped) batches

When you use multiple batches, we run the graph once for every possible combination of values. In math-y speak, we "take the Cartesian product" of all batch collections.

Consider this simple workflow that joins two strings.

We have two batch collections, each with two strings. This results in 2 * 2 = 4 runs, one for each possible combination of the strings. We get these outputs:

  • "a cute cat"
  • "a cute dog"
  • "a ferocious cat"
  • "a ferocious dog"

But what if we wanted to group or "zip" up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group.

This results in 2 runs, one for each "pair" of strings. We get these outputs:

  • "a cute cat"
  • "a ferocious dog"

It's a bit technical, but if you try it a few times you'll quickly gain an intuition for how things combine. You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you'll end up with for the given batch nodes.

Keep in mind that grouped batch collections must have the same size, else we cannot zip them up into one collection. The Invoke button will grey out and let you know when there is a mismatch.

Details and technical explanation

On the backend, we first zip all grouped batch collections into a single collection. Ungrouped batch collections remain as-is.

Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all in the same group), we still do the product operation, but the result is the same as if we had skipped it.
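
Here's a compact sketch of that two-step zip-then-product process, assuming each batch is just a list of values plus an optional group id. The data and structure are hypothetical - this is not the actual backend code:

from itertools import product

# Hypothetical batches: (group, values); group None means ungrouped.
batches = [
    (1, ["cute", "ferocious"]),  # group 1
    (1, ["cat", "dog"]),         # group 1
    (None, [1, 2, 3]),           # ungrouped
]

# Step 1: zip batches that share a group into a single collection of tuples.
grouped = {}
collections = []
for group, values in batches:
    if group is None:
        collections.append([(v,) for v in values])
    else:
        grouped.setdefault(group, []).append(values)
for values_lists in grouped.values():
    collections.append(list(zip(*values_lists)))

# Step 2: take the Cartesian product of all collections.
runs = [sum(combo, ()) for combo in product(*collections)]
print(len(runs))  # 2 zipped pairs x 3 ungrouped values = 6 runs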

There are 5 slots for groups, plus a 6th ungrouped option:

  • None: Batch nodes will always be used as separate collections for the Cartesian product operation.
  • Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection, before the Cartesian product operation.

All Changes

The launcher itself has been updated to fix a handful of issues, including one that required a reinstall every time you started the launcher and another that caused systems with AMD GPUs to run on the CPU. The latest launcher version is v1.2.1.

Fixes

  • Fix issue where excessively long board names could cause performance issues.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Fix link to Scale setting's support docs.
  • Fix image quality degradation when inpainting an image repeatedly.
  • Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.

Enhancements

  • Support float, integer and string batch data types.
  • Add batch data generators.
  • Support grouped (aka zipped) batches.
  • Reduce peak memory during FLUX model load.
  • Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
  • Reworked error handling when installing models from a URL.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.

Internal

  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!

Docs

  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.


v5.6.0rc3

17 Jan 02:39 · Pre-release

This release brings major improvements to Invoke's memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.

Changes since previous release candidate (v5.6.0rc2)

  • Reduce peak memory during FLUX model load.
  • Add keep_ram_copy_of_weights config option to reduce average RAM usage.
  • Revise the default logic for the model cache RAM limit to be more conservative.
  • Support float, integer and string batch data types.
  • Add batch data generators.
  • Support grouped (aka zipped) batches.
  • Fix image quality degradation when inpainting an image repeatedly.
  • Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.
  • Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.

Low-VRAM mode involves four features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory
  • Keeping a copy of models in RAM

Most users should only need to enable partial loading by adding this line to your invokeai.yaml file:

enable_partial_loading: true

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.

Thanks to @RyanJDick for designing and implementing these improvements!

Workflow Batches

We've expanded the capabilities for Batches in Workflows:

  • Float, integer and string batch data types
  • Batch collection generators
  • Grouped (aka zipped) batches

Float, integer and string batch data types

There's a new batch node for each of the new data types. They work the same as the existing image batch node.


You can add a list of values directly in the node, but you'll probably find generators to be a nicer way to set up your batch.

Batch collection generators

These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. Included in this release are these generators for floats and integers:

  • Arithmetic Sequence: Generate a sequence of count numbers, starting from start, that increase or decrease by step.
  • Linear Distribution: Generate a distribution of count numbers, starting with start and ending with end.
  • Uniform Random Distribution: Generate a random distribution of count numbers from min to max. The values are generated randomly when you click Invoke.
  • Parse String: Split the input on the specified character, parsing each value as a number. Non-numbers are ignored.


You'll notice the different handle icon for batch generators. These nodes cannot connect to non-batch nodes, which run in the backend.

In the future, we can explore more batch generators. Some ideas:

  • Parse File (string, float, integer): Select a file and parse it, splitting on the specified character.
  • Board (image): Output all images on a board.

Grouped (aka zipped) batches

When you use multiple batches, we run the graph once for every possible combination of values. In math-y speak, we "take the Cartesian product" of all batch collections.

Consider this simple workflow that joins two strings.

We have two batch collections, each with two strings. This results in 2 * 2 = 4 runs, one for each possible combination of the strings. We get these outputs:

  • "a cute cat"
  • "a cute dog"
  • "a ferocious cat"
  • "a ferocious dog"

But what if we wanted to group or "zip" up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group.

This results in 2 runs, one for each "pair" of strings. We get these outputs:

  • "a cute cat"
  • "a ferocious dog"

It's a bit technical, but if you try it a few times you'll quickly gain an intuition for how things combine. You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you'll end up with for the given batch nodes.

Keep in mind that grouped batch collections must have the same size, else we cannot zip them up into one collection. The Invoke button will grey out and let you know when there is a mismatch.

Details and technical explanation

On the backend, we first zip all grouped batch collections into a single collection. Ungrouped batch collections remain as-is.

Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all in the same group), we still do the product operation, but the result is the same as if we had skipped it.

There are 5 slots for groups, plus a 6th ungrouped option:

  • None: Batch nodes will always be used as separate collections for the Cartesian product operation.
  • Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection, before the Cartesian product operation.

All Changes

The launcher itself has been updated to fix a handful of issues, including one that required a reinstall every time you started the launcher and another that caused systems with AMD GPUs to run on the CPU. The latest launcher version is v1.2.1.

Fixes

  • Fix issue where excessively long board names could cause performance issues.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Fix link to Scale setting's support docs.
  • Fix image quality degradation when inpainting an image repeatedly.
  • Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.

Enhancements

  • Support float, integer and string batch data types.
  • Add batch data generators.
  • Support grouped (aka zipped) batches.
  • Reduce peak memory during FLUX model load.
  • Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
  • Reworked error handling when installing models from a URL.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.

Internal

  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!

Docs

  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.


v5.6.0rc2

09 Jan 03:53 · Pre-release

This release brings major improvements to Invoke's memory management, plus a few minor fixes.

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.

Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory

Most users should only need to enable partial loading by adding this line to your invokeai.yaml file:

enable_partial_loading: true

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.

Thanks to @RyanJDick for designing and implementing these improvements!

Changes since previous release candidate (v5.6.0rc1)

  • Fix some model loading errors that occurred in edge cases.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Deprecate the ram and vram settings in favor of new max_cache_ram_gb and max_cache_vram_gb settings. This eases the upgrade path for users who had manually configured ram and vram in the past.
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.

The launcher itself has also been updated to fix a handful of issues, including one that required a reinstall every time you started the launcher and another that caused systems with AMD GPUs to run on the CPU.

Other Changes

  • Fixed issue where excessively long board names could cause performance issues.
  • Reworked error handling when installing models from a URL.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
  • Fixed link to Scale setting's support docs.
  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!
  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.


Full Changelog: v5.5.0...v5.6.0rc2

v5.6.0rc1

07 Jan 09:30 · Pre-release

This release brings two major improvements to Invoke's memory management: partial model loading (aka Low-VRAM mode) and dynamic memory limits.

Memory Management Improvements

Thanks to @RyanJDick for designing and implementing these improved memory management features!

Partial Model Loading (Low-VRAM mode)

Invoke's previous "all or nothing" model loading strategy required your GPU to have enough VRAM to hold whole models during generation.

As a result, as image generation models increased in size and auxiliary models (e.g. ControlNet) became critical to workflows, Invoke's VRAM requirements have increased at the same rate. The increased VRAM requirements have prevented many of our users from running Invoke with the latest and greatest models.

Partial model loading allows Invoke to load only the parts of the model that are actively being used onto the GPU, substantially reducing Invoke's VRAM requirements.

  • Applies to systems with a CUDA device.
  • Enables large models to run with limited GPU VRAM (e.g. Full 24GB FLUX dev on an 8GB GPU)
  • When models are too large to fit on the GPU, they will be partially offloaded to RAM. The model weights are still streamed to the GPU for fast inference. Inference speed won't be as fast as when a model is fully loaded, but will be much faster than running on the CPU (see the sketch after this list).
  • The recommended minimum CUDA GPU size is 8GB. An 8GB GPU should now be capable of running all models supported by Invoke (even the full 24GB FLUX models with ControlNet).
  • If there is sufficient demand, we could probably support 4GB cards in the future by moving the VAE decoding operation fully to the CPU.
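
As a rough sketch of the streaming idea above: keep the layers that fit resident in VRAM, and move the rest to the GPU just-in-time during the forward pass. The names and structure are assumed for illustration - this is not Invoke's actual model cache:

import torch
import torch.nn as nn

def forward_partially_loaded(layers: nn.ModuleList, x: torch.Tensor,
                             resident: set[int]) -> torch.Tensor:
    # Layers whose index is in `resident` are assumed to already live on
    # the GPU; every other layer is streamed in for its step, then
    # offloaded back to RAM so peak VRAM stays within budget.
    for i, layer in enumerate(layers):
        if i not in resident:
            layer.to("cuda")  # stream weights to the GPU just-in-time
        x = layer(x)
        if i not in resident:
            layer.to("cpu")   # offload weights back to RAM
    return x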

Dynamic Memory Limits

Previously, the amount of RAM and VRAM used for model caching were set to hard limits. Now, the amount of RAM and VRAM used is adjusted dynamically based on what's available.

For most users, this will result in more effective use of their RAM/VRAM without having to tune configuration values.

Users can expect:

  • Faster average model load times on systems with extra memory
  • Fewer out-of-memory errors when combined with Partial Model Loading

Enabling Partial Model Loading and Dynamic Memory Limits

Partial Model Loading is disabled by default. To enable it, set enable_partial_loading: true in your invokeai.yaml:

enable_partial_loading: true

This is highly recommended for users with limited VRAM. Users with 24GB+ of VRAM may prefer to leave this option disabled to guarantee that models get fully loaded and run at full speed.

Dynamic memory limits are enabled by default, but can be overridden by setting ram or vram in your invokeai.yaml.

# Override the dynamic cache limits to ram=6GB and vram=20GB.
ram: 6
vram: 20

🚨 Note: Users who previously set ram or vram in their invokeai.yaml will need to delete these overrides in order to benefit from the new dynamic memory limits.

All Changes

  • Added support for partial model loading.
  • Added support for dynamic memory limits.
  • Fixed issue where excessively long board names could cause performance issues.
  • Reworked error handling when installing models from a URL.
  • Fixed link to Scale setting's support docs.
  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!
  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.0 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.


Full Changelog: v5.5.0...v5.6.0rc1

v5.5.0

20 Dec 05:47

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

It's also the first stable release alongside the new Invoke Launcher!

Invoke Launcher ✨


The Invoke Launcher is a desktop application that can install, update and run Invoke on Windows, macOS and Linux.

It can manage your existing Invoke installation - even if you previously installed with our legacy scripts.

Download the launcher to get started.

Refer to the new Quick Start guide for more details. It includes a workaround for macOS, which may not let you run the launcher at first.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

Other Changes

Enhancements

  • Support for FLUX Control LoRAs.

  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

  • Reduced logging verbosity when default logging settings are used.

    Previously, all Uvicorn logging occurred at the same level as the app's logging. This logging was very verbose and frequent, and made the app's terminal output difficult to parse, with lots of extra noise.

    The Uvicorn log level is now set independently from the other log namespaces. To control it, set the log_level_network property in invokeai.yaml. The default is warning. To restore the previous log levels, set it to info (e.g. log_level_network: info).

Fixes

  • Image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.
  • Canvas filters could execute twice. Besides being inefficient, on slow network connections, this could cause an error toast to appear even when the filter was successful. They now only execute once.
  • Model install error when the path contains quotes. Thanks @Quadiumm!

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.
  • Fix dynamic invocation values causing non-deterministic OpenAPI schema. This allows us to add a CI check to ensure the OpenAPI schema and TypeScript types are always in sync. Thanks @rikublock!


Installing and Updating

As mentioned above, the new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.


Full Changelog: v5.4.3...v5.5.0

v5.5.0rc1

19 Dec 23:57 · Pre-release

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

All Changes

Enhancements

  • Support for FLUX Control LoRAs.

  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

  • Reduced logging verbosity when default logging settings are used.

    Previously, all Uvicorn logging occurred at the same level as the app's logging. This logging was very verbose and frequent, and made the app's terminal output difficult to parse, with lots of extra noise.

    The Uvicorn log level is now set independently from the other log namespaces. To control it, set the log_level_network property in invokeai.yaml. The default is warning. To restore the previous log levels, set it to info (e.g. log_level_network: info).

Fixes

  • Image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.
  • Canvas filters could execute twice. Besides being inefficient, on slow network connections, this could cause an error toast to appear even when the filter was successful. They now only execute once.
  • Model install error when the path contains quotes. Thanks @Quadiumm!

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.
  • Fix dynamic invocation values causing non-deterministic OpenAPI schema. This allows us to add a CI check to ensure the OpenAPI schema and TypeScript types are always in sync. Thanks @rikublock!


Installation and Updating

This is the first Invoke release since we debuted our launcher, a desktop app that can install, upgrade and run Invoke on Windows, macOS and Linux.

While technically still in a prerelease state, it is working well. Download it from the repo's releases page. It works with your existing Invoke installation, or you can use it to do a fresh install.

macOS users may need to apply this workaround until the first stable release of the launcher.

Legacy installer

You can still use our legacy installer scripts to install and run Invoke, though we do plan to deprecate this at some point.

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


Full Changelog: v5.4.3...v5.5.0rc1

v5.4.4rc1

18 Dec 08:08 · Pre-release

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

All Changes

Enhancements

  • Support for FLUX Control LoRAs.
  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

Fixes

  • Image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.


Installation and Updating

This is the first Invoke release since we debuted our launcher, a desktop app that can install, upgrade and run Invoke on Windows, macOS and Linux.

While technically still in a prerelease state, it is working well. Download it from the repo's releases page. It works with your existing Invoke installation, or you can use it to do a fresh install.

macOS users may need to apply this workaround until the first stable release of the launcher.

Legacy installer

You can still use our legacy installer scripts to install and run Invoke, though we do plan to deprecate this at some point.

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


Full Changelog: v5.4.3...v5.4.4rc1

v5.4.3

03 Dec 23:22

This minor release adds initial support for FLUX Regional Guidance and arrow key nudge on Canvas, plus an assortment of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer it.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from the gallery to set the image.
  • FLUX performance improvements (~10% speed-up).
  • Added ImagePanelLayoutInvocation to facilitate FLUX IC-LoRA workflows.
  • FLUX Regional Guidance support (beta). Only positive prompts are supported; negative prompts, reference images and auto-negative are not supported for FLUX Regional Guidance.
  • Canvas layers now have a warning indicator that flags issues with the layer that could prevent invoking or cause problems.
  • New Layer from Image functions added to Canvas Staging Area Toolbar. These create a new layer without dismissing the rest of the staged images.
  • Improved empty state for Regional Guidance Reference Images.
  • Added missing New from... image context menu actions: Reference Image (Regional) and Reference Image (Global).
  • Added Vietnamese to language picker in Settings.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.
  • Bumped transformers to get a fix for Depth Anything artifacts.
  • False negative edge case with picklescan.
  • Invoke queue actions menu's Cancel Current action erroneously cleared the entire queue. Thanks @rikublock!
  • New Reference Images could inadvertently have the last-used Reference Image populated on creation.
  • Error when importing GGUF models. Thanks @JPPhoto!
  • Canceling any queue item from the Queue tab also erroneously canceled the currently-executing queue item.

Internal

  • Add redux actions for support video modal.
  • Tidied various things related to the queue. Thanks @rikublock!


Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


Full Changelog: v5.4.2...v5.4.3

v5.4.3rc2

03 Dec 00:14 · Pre-release

This minor release adds initial support for FLUX Regional Guidance and arrow key nudge on Canvas, plus an assortment of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer it.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from the gallery to set the image.
  • FLUX performance improvements (~10% speed-up).
  • Added ImagePanelLayoutInvocation to facilitate FLUX IC-LoRA workflows.
  • FLUX Regional Guidance support (beta). Only positive prompts are supported; negative prompts, reference images and auto-negative are not supported for FLUX Regional Guidance.
  • Canvas layers now have a warning indicator that flags issues with the layer that could prevent invoking or cause problems.
  • New Layer from Image functions added to Canvas Staging Area Toolbar. These create a new layer without dismissing the rest of the staged images.
  • Improved empty state for Regional Guidance Reference Images.
  • Added missing New from... image context menu actions: Reference Image (Regional) and Reference Image (Global).
  • Added Vietnamese to language picker in Settings.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.
  • Bumped transformers to get a fix for Depth Anything artifacts.
  • False negative edge case with picklescan.
  • Invoke queue actions menu's Cancel Current action erroneously cleared the entire queue. Thanks @rikublock!
  • New Reference Images could inadvertently have the last-used Reference Image populated on creation.
  • Error when importing GGUF models. Thanks @JPPhoto!
  • Canceling any queue item from the Queue tab also erroneously canceled the currently-executing queue item.

Internal

  • Add redux actions for support video modal.
  • Tidied various things related to the queue. Thanks @rikublock!


Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


Full Changelog: v5.4.2...v5.4.3rc2

v5.4.3rc1

21 Nov 18:57 · Pre-release

This minor release adds arrow key nudge on Canvas, plus a handful of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer it.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from the gallery to set the image.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.

Internal

  • Add redux actions for support video modal.

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


Full Changelog: v5.4.2...v5.4.3rc1