
builtin.conf: modernize internal profiles #12384

Merged
merged 4 commits into from Sep 19, 2023

Conversation

kasper93
Contributor

It's confusing that the mid-quality option is in the gpu-hq profile, while the recommended filter is not. Also prefer the sharper catmull-rom for dscale, as it produces better results.

It's time to retire Spline36.

@Dudemanguy
Member

To be honest the manual is full of imo misleading statements in the scale section. If we do change this profile, it shouldn't be because of what the manual says. As for the actual change itself, I personally use ewa_lanczos, but spline36 is arguably better depending on your preferences. It's not as if this change is going from a definitely inferior filter to a better one. It's an opinion.

@sfan5
Member

sfan5 commented Sep 12, 2023

What is the goal of gpu-hq actually? To provide reasonably high quality settings, or to put everything at maximum?

It is also my personal opinion that anything except bilinear for cscale is wasted gpu power but that's an entirely different discussion.

@kasper93
Contributor Author

kasper93 commented Sep 12, 2023

Arguably, we could use cscale=catmull_rom or cscale=bicubic for performance; no one will notice a difference anyway. I made this PR to spark some discussion (and insults coming my way). I don't intend to force it, but rather would like to modernize it, as these days HQ should mean more than Spline36.

For scale I think ewa_lanczossharp is a good candidate. For dscale I've always preferred the sharper catmull_rom. For cscale it doesn't even matter, but we might as well go overkill for the HQ profile.

It's not as if this change is going from a definitely inferior filter to a better one. It's an opinion.

I agree; if there were one clearly better filter, we wouldn't even need so many options. Still, I think polar lanczos is the better choice for scale.

What is the goal of gpu-hq actually? To provide reasonably high quality settings, or to put everything at maximum?

I would say a reasonably high quality profile, to set mpv to the best output it can produce with built-in processing. Is ewa_lanczossharp that much of a performance bottleneck for today's GPUs?

Frankly, I would like Spline36 to be the default mpv scaler, with maybe a gpu-fast option that defaults to native sampling. Although it is probably good that the default config can work on a potato PC.

It is also my personal opinion that anything except bilinear for cscale is wasted gpu power but that's an entirely different discussion.

I would say anything higher than bicubic. And indeed, if you don't compare test patterns, you won't see a significant difference.
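
To make the proposal concrete, the combination floated in this comment would look roughly like this in builtin.conf/mpv.conf syntax (illustrative sketch only, not necessarily the values that end up merged):

[gpu-hq]
scale=ewa_lanczossharp
dscale=catmull_rom
# cscale barely matters in practice; catmull_rom or bicubic would save some GPU
# time, but an HQ profile might as well go overkill here too
cscale=ewa_lanczossharp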

@Obegg

This comment was marked as off-topic.

@kasper93
Contributor Author

ewa_lanczos4sharpest

Nah, it is too much. It also forces antiringing at 0.8, and produces nasty artifacts in certain cases because of that. IMHO it's not suitable as a general recommendation, but it's fine if someone decides they want sharper results. But at that point, maybe go with a custom shader instead.

@christoph-heinrich
Contributor

I've been using scale=ewa_lanczossharp and dscale=catmull_rom with cscale=bilinear.

Anything beyond bilinear for cscale only becomes noticeable on tiny videos; for 720p and 1080p I can't tell any improvement, even in a direct comparison.

I'm sure there are a lot of people using gpu-hq without anything else, which makes anything beyond bilinear in the preset a waste of energy.

We could point out in the documentation that changing it from bilinear only makes sense for small videos.

@llyyr
Contributor

llyyr commented Sep 13, 2023

Since it's the gpu-hq profile, I don't think leaving cscale at bilinear makes a lot of sense, even though I can't tell the difference half the time anyway. Leaving it at spline36/orthogonal lanczos should be good enough in every case though.

Also, I missed that this changes dscale to catrom too; I strongly disagree with that as well.

@ghost

ghost commented Sep 13, 2023

What is the goal of gpu-hq actually? To provide reasonably high quality settings, or to put everything at maximum?

This is a very good question to ask.
If this is meant to be a generally "good quality" profile then an argument for swapping spline36 out for lanczos can be made, but otherwise I strongly feel it should be left alone.

I feel like anything more than that just goes into personal-preference territory, which is not the goal of a preset IMO.
At the very least, I strongly disagree with changing Mitchell to Catrom. Mitchell is a very competent downscaling filter with minimal aliasing and ringing, while also performing fairly well on edge-case patterns that other good downscale filters (Hermite, Gaussian) have issues with.

@arch1t3cht

Frankly, I would like Spline36 to be the default mpv scaler

Spline36 is not a good scaler and only really popular because it was, well, popular in the past. It doesn't enjoy any good theoretical properties, and depending on your goal various other scalers will perform better: If you care about spatial properties like preserving linear gradients (which is equivalent to quadratic convergence, in practice this amounts to having little aliasing and ringing) you should just use a b+2c=1 bicubic spline like catmull_rom. If you care about sharpness you should just use an actual frequency-based kernel like lanczos. Unlike Spline36, both of these kernels also have a variance of 0, which prevents them from overly emphasizing or suppressing low frequencies.
Of course I'm aware that these theoretical properties are not always the full story, but I'm just trying to bring some objectivity into this. You can come to the same conclusion with direct visual comparisons.

catmull_rom for dscale is also not great. Unlike with upscaling, where negative lobes can improve sharpness and reduce aliasing, with downscaling negative lobes create ringing much faster. mitchell and hermite (b=0, c=0) perform much better for downscaling (less ringing and aliasing).
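
For reference, all of the kernels named here (catmull_rom, mitchell, hermite) are members of the Mitchell–Netravali BC-spline family, which makes the b+2c=1 remark concrete. Quoting the standard Mitchell–Netravali formula (general background, not anything mpv-specific):

\[
k(x) = \frac{1}{6}
\begin{cases}
(12 - 9B - 6C)\,|x|^3 + (-18 + 12B + 6C)\,|x|^2 + (6 - 2B), & |x| < 1 \\
(-B - 6C)\,|x|^3 + (6B + 30C)\,|x|^2 + (-12B - 48C)\,|x| + (8B + 24C), & 1 \le |x| < 2 \\
0, & \text{otherwise}
\end{cases}
\]

catmull_rom is (B, C) = (0, 1/2) and mitchell is (1/3, 1/3), both on the B + 2C = 1 line; hermite is (0, 0), whose first segment reduces to 2|x|^3 - 3|x|^2 + 1.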

@Artoriuz

While I think that modernising the defaults is probably a good idea, deciding which filter to pick is problematic as there isn't a clear winner.

  • lanczos is better than spline36 when it comes to sharpness, but it rings harder (whether the average user would notice the difference is honestly debatable; the filters are very similar).
  • ewa_lanczos/ewa_lanczossharp is much better than spline36 when it comes to aliasing, but the misaligned first zero crossing can make lines visibly thicker and the filter is generally blurrier as well.

On top of this being very subjective you also have to take into account that it highly depends on content. If you're mostly watching live-action content full of high-frequency information/noise, you can usually get away with a sharper filter and never notice any of its shortcomings. If you're mostly watching clean line-art however, all the problems become much easier to see.

For downsampling, mitchell is arguably cleaner than catmull_rom, which is especially true in linear light, but are the artefacts produced by the latter actually detrimental to the user experience when things are moving and you're not zooming in to compare screenshots? I'd say most people would prefer their high-resolution content to remain sharp after downsampling, and mitchell is arguably soft-looking. I can see reasonable arguments being made for both filters here.

@ghost

ghost commented Sep 13, 2023

For downsampling, mitchell is arguably cleaner than catmull_rom, which is especially true in linear light, but are the artefacts produced by the latter actually detrimental to the user experience when things are moving and you're not zooming in to compare screenshots? I'd say most people would prefer their high-resolution content to remain sharp after downsampling, and mitchell is arguably soft-looking. I can see reasonable arguments being made for both filters here.

On "real content" Hermite is "cleaner" (no negative lobe) than both and considerably sharper than Mitchell. If any alternative is to be used over Mitchell, it's Hermite.

@Artoriuz

On "real content" Hermite is "cleaner" (no negative lobe) than both and considerably sharper than Mitchell. If any alternative is to be used over Mitchell, it's Hermite.

I could even vouch for polar hermite if that's an option.

@ghost

ghost commented Sep 13, 2023

The only option that needs to be added to profile=gpu-hq is osd-bar=no.

It empirically, measurably and objectively harms the visual quality of my videos, and should be disabled for high quality video rendering.

--

In all seriousness, my interpretation of gpu-hq has always been that it’s supposed to be a baseline. Now, I could just be ad-hoc rationalizing the purpose of gpu-hq after the fact, but at the very least, that’s the way I’ve always treated it. I feel like that’s the way other users have also treated it, but I could be wrong.

In my opinion, the most important options in gpu-hq have always been:

dither-depth=<value>
correct-downscaling=yes
linear-downscaling=yes
sigmoid-upscaling=yes

…and everything else is more-or-less just subjective, or completely tertiary to the question of what “high quality” is.

As for the discussion about whether anything other than cscale=bilinear is bloat… I honestly don’t really see why gpu-hq needs to downgrade its chroma scaler to be worse than 99% of video rendering applications. For most dedicated GPUs, you’re saving AT MOST like ±0-1W on GPU power draw by using bilinear instead. You may as well just go the full mile, and get better chroma coverage.

@Obegg

This comment was marked as off-topic.

@Jules-A

Jules-A commented Sep 13, 2023

I don't really think changing from Mitchell to Catmull-Rom is a good idea; without good anti-ringing (such as using ravu-zoom-ar-r3), it's notably worse imo. Also, I've noticed Catmull-Rom is bad when downscaling content that originally had a small resolution, but better on 1080p+. Is it possible to decide based on resolution? ewa_lanczossharp doesn't seem to work too great at lower resolutions either (too much aliasing), and ewa_lanczos4sharpest is useless in all scenarios and looks nasty imo.
Maybe the new antiringing can be forced (at lower amounts) in gpu-hq as well, if these scalers are to be used?

Currently I'm doing this with small modifications to the shaders:
[720-1080p]
profile-desc=Shaders for 720/1080p
profile-cond=height <= 1080
profile-cond=height > 540
dscale=catmull_rom
glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/nlmeans_light3.glsl;~~/shaders/FSRCNNX_x2_16-0-4-12.glsl;~~/shaders/CAS4.glsl;~~/shaders/Anime4K_Thin_HQ5.glsl;~~/shaders/KrigBilateral.glsl"


[540p]
profile-desc=Shaders for ~540p
profile-cond=height <= 540
profile-cond=height > 480
dscale=mitchell
glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/FSRCNNX_x2_16-0-4-12.glsl;~~/shaders/nlmeans_light3.glsl;~~/shaders/CAS6.glsl;~~/shaders/Anime4K_Thin_HQ4.glsl;~~/shaders/KrigBilateral.glsl"


[480p]
profile-desc=Shaders for ~480p
profile-cond=height <= 480
profile-cond=height > 360
dscale=mitchell
glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/FSRCNNX_x2_16-0-4-12.glsl;~~/shaders/nlmeans_light3.glsl;~~/shaders/CAS6.glsl;~~/shaders/Anime4K_Thin_HQ4.glsl;~~/shaders/Anime4K_Denoise_CNN_x2_M.glsl;~~/shaders/KrigBilateral.glsl"


[360p]
profile-desc=Shaders for 360p or lower
profile-cond=height <= 360
dscale=mitchell
glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/FSRCNNX_x2_16-0-4-12.glsl;~~/shaders/nlmeans_light2.glsl;~~/shaders/CAS6.glsl;~~/shaders/Anime4K_Thin_HQ4.glsl;~~/shaders/Anime4K_Denoise_CNN_x2_VL.glsl;~~/shaders/KrigBilateral.glsl"

I haven't tried hermite yet but I'm looking for something in between mitchell and catmull_rom.

EDIT: Tried hermite on anime and it's worse than both; way too much aliasing, to the point that any low-res stuff becomes unwatchable, and it can still be noticed with a 1080p source. Probably does better on real content where aliasing isn't as noticeable.

EDIT: NVM, I wasn't testing hermite correctly...

@ghost

ghost commented Sep 13, 2023

way too much aliasing, to the point that any low-res stuff becomes unwatchable

Low-res stuff? ...Are you using it to upscale? It's well known that Hermite is terrible for upscaling, though aliasing isn't quite the right word. The filter creates blocking when upscaling.

@Isaacx123

Isaacx123 commented Sep 13, 2023

Hermite shouldn't have too much aliasing when downsampling, at least not more than Catrom:

Downsampled from 1080p to 480p (your example) using ortho Hermite:

[screenshot: mpv-shot0001]

Can you post the parameters you used?

@Jules-A

Jules-A commented Sep 13, 2023

Hermite shouldn't have too much aliasing when downsampling, at least not more than Catrom:
Downsampled from 1080p to 480p (your example) using ortho Hermite:

That wasn't my example; I was talking about older 480p content, upscaled to 4K with my shaders and then downscaled to my native 1440p with either mitchell or hermite. The anime I was checking was "Get Backers" on HiDive, among others.

[attachment: shaders.zip]

[480p]
profile-desc=Shaders for ~480p
profile-cond=height <= 480
profile-cond=height > 360
dscale=mitchell
glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/FSRCNNX_x2_16-0-4-12.glsl;~~/shaders/nlmeans_light3.glsl;~~/shaders/CAS6.glsl;~~/shaders/Anime4K_Thin_HQ4.glsl;~~/shaders/Anime4K_Denoise_CNN_x2_M.glsl;~~/shaders/KrigBilateral.glsl"

Although it turns out I was incorrect: it doesn't create aliasing. That's the result of sharpening with CAS, but Mitchell does a better job of masking it.

@Isaacx123

I think you should cool it with the meme shaders.

@christoph-heinrich
Contributor

christoph-heinrich commented Sep 13, 2023

For most dedicated GPUs, you’re saving AT MOST like ±0-1W on GPU power draw by using bilinear instead.

Makes a difference of 6W for me on a 1080p 60fps YouTube video on a 1080p 60Hz monitor when comparing bilinear with ewa_lanczossharp.
Granted, my R9 380 certainly isn't as efficient as more recent GPUs, but I'm underclocking it to the lowest I can get away with to get some more efficiency.

@Obegg You're confusing the osd-bar with the osc.

@Jules-A

Jules-A commented Sep 13, 2023

I think you should cool it with the meme shaders.

Why? I've yet to find a better set of shaders that helps get rid of noise and sharpens the image without destroying too much image quality. With mitchell it's fine for me at 480p, although I haven't experimented too much with using a sharper downscaler and turning down CAS; I've only ever tried mitchell, lanczos (all versions) and robidouxsharp thoroughly before.

EDIT: HAHA yeah, turns out I wasn't correctly testing hermite...

@Isaacx123

🤦‍♂️
That's what you are doing wrong. Hermite currently isn't mapped in mpv; you need to use --dscale=mitchell --dscale-param1=0 --dscale-param2=0 to get it.
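
For anyone copying this into a config file, the equivalent would be something like the following (dscale-param1/dscale-param2 tune the bicubic B and C here, so B = C = 0 gives the Hermite kernel):

dscale=mitchell
# B = 0, C = 0 turns the tunable bicubic into Hermite
dscale-param1=0.0
dscale-param2=0.0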

@Artoriuz

@christoph-heinrich I have a similarish power draw difference on a 6600 XT, so it has nothing to do with your GPU being old.

Still though, while I think taking performance into consideration has its merits, gpu-hq has "high quality" in its name so being more power hungry is 100% justified. Downgrading cscale to bilinear makes no sense when it has been set to spline36 for so long.

@haasn
Member

haasn commented Sep 13, 2023

I think ewa_lanczossharp is too heavy for the gpu-hq profile. We need a middle ground for machines in between "complete potato" and "high end desktop GPU", and this is the role gpu-hq has always fulfilled.

That's why, in libplacebo, I opted for three preset levels (fast, default and highquality).

I have no opinion on mitchell vs catrom, but I'd like to at least see some justification for the claim. (What about downscaling HDR sources?)

@ghost

ghost commented Sep 13, 2023

Makes a difference of 6W for me on a 1080p 60fps YouTube video on a 1080p 60Hz monitor when comparing bilinear with ewa_lanczossharp.

I have a similarish power draw difference on a 6600 XT, so it has nothing to do with your GPU being old.

Very interesting, on my setup the difference between cscale=bilinear and cscale=spline36 is in the margin of +0-2W, but my setup is also overkill for mpv, so perhaps my GPU just has higher power-draw overall, and it makes no difference on my end. I'll refrain from commenting on anything power-draw related, I guess. I'd consider +6W a significant enough difference if your main concern is power-draw.

Edit: Just noticed the original post said ewa_lanczossharp instead of spline36, but my power draw with spline36 and ewa_lanczossharp is actually identical on d3d11, so my post still stands, I guess.

@christoph-heinrich
Contributor

I think spline36 is too heavy for the gpu-hq profile. We need a middle ground for machines in between "complete potato" and "high end desktop GPU", and this is the role gpu-hq has always fulfilled.

But the libplacebo default preset also uses spline36 for upscaling, which I presume also targets that middle ground.

That's why, in libplacebo, I opted for three preset levels (fast, default and highquality).

We could make two more profiles for mpv and mirror the ones from libplacebo as closely as vo=gpu compatibility allows. That would remove any ambiguity about which quality level gpu-hq should actually target. Once vo=gpu gets removed they can then simply use the libplacebo presets.

@haasn
Member

haasn commented Sep 13, 2023

If you ask me, we should make mpv defaults match pl_render_default_params and add a --profile=fast to disable all advanced rendering, and a --profile=highquality to enable high quality rendering. And gpu-hq should just be deprecated/removed. (It also makes no sense, I mean why the gpu- prefix? vo=gpu has been the main renderer forever now, and we already promoted all the other options to the main options scope)

Other shit like dithering being disabled by default is also just objectively wrong and stupid. Also, mpv defaults matching libplacebo defaults is what users of gpu-next expect these days, and it's only due to my own lack of energy that I haven't kept updating mpv to match the libplacebo defaults.

Honestly, I would like to overhaul the entire options system to make rendering options directly map to their pl_render_params analogs (ideally via pl_options bridge, to avoid having to keep everything in sync...)... but of course all of this is blocked by vo=gpu existing, and mpv git master not currently requiring libplacebo git master, and gpu-hq not being supported by libmpv/render API. (Again, due to lack of energy and motivation on my part, since I don't use libmpv..)

@haasn
Member

haasn commented Sep 13, 2023

But the libplacebo default preset also uses spline36 for upscaling, which I presume also targets that middle ground.

It was a typo, I meant ewa_lanczossharp.

@llyyr
Contributor

llyyr commented Sep 19, 2023

Minor bikeshed, but I'd prefer the use of dither-depth=no over dither=no, since we often recommend users set dither-depth=8 for 8-bit monitors; with the proposed defaults, they'd need to set both dither=fruit and dither-depth=8.

And in the commit message

dither=yes

This is not a valid value for the option

In the future, we could rename dither to dither-algo, remove the no option from it, and let dither-depth control dithering.
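
A sketch of what that hypothetical future scheme could look like in a config file (these names follow the proposal above; dither-algo is not an existing mpv option here):

# dithering algorithm only, no "no" value anymore
dither-algo=fruit
# the depth setting alone decides whether dithering happens at all
dither-depth=8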

@haasn
Member

haasn commented Sep 19, 2023

since we often recommend users set dither-depth=8 for 8-bit monitors

But why? With the new default, dither depth will be auto-detected. Unless auto-detection doesn't work, in which case, shouldn't we just default to dither-depth=8 on affected platforms?

@llyyr
Contributor

llyyr commented Sep 19, 2023

shouldn't we just default to dither-depth=8 on affected platforms?

We don't know which platforms are affected. I have to manually set dither-depth=8 on my hardware on Windows/X11/Wayland. You're also on AMD but you don't have to. We don't really know what causes it to work for some people and not work for others. Dither depth being auto-detected is fine for people for whom it works, and if it's broken you can simply set one option to change it to a specific bit depth.

@haasn
Member

haasn commented Sep 19, 2023

Minor bikeshed, but I'd prefer the use of dither-depth=no over dither=no, since we often recommend users set dither-depth=8 for 8-bit monitors; with the proposed defaults, they'd need to set both dither=fruit and dither-depth=8.

I don't follow. The new defaults are --dither-depth=auto --dither=fruit. Are you confusing the defaults for --profile=fast? In profile=fast, I argue, we want to forcibly disable dithering even if the user has a dither depth set, no? That's the whole point of the profile, at least.

Or maybe we should set dither=ordered in the fast profile, since ordered dither is much faster (at least on gpu-next, which I have a feeling will soon become the default anyway). Or just remove the dither override altogether. I mean, really, how slow can dithering possibly be that we need to really cut corners here?

@haasn
Member

haasn commented Sep 19, 2023

Or maybe we should set dither=ordered in the fast profile, since ordered dither is much faster (at least on gpu-next, which I have a feeling will soon become the default anyway). Or just remove the dither override altogether. I mean, really, how slow can dithering possibly be that we need to really cut corners here?

On my end, dither=ordered is 5256.41 fps vs dither=no 5890.83 fps. For comparison, dither=fruit is 4635.11 fps. (All numbers on gpu-next only)

@llyyr
Contributor

llyyr commented Sep 19, 2023

The new defaults are --dither-depth=auto

Ah, I missed this since it's part of the same commit, my bad. This is fine then. I guess the point still stands if somebody wants dithering in the fast profile in an environment where auto is broken, but I guess it's on them to read what options the profile sets.

The goal is to provide simple-to-understand quality/performance-level
profiles for users.

Instead of the default and gpu-hq profiles, three main profiles were added:
    - fast: can run on any hardware
    - default: balanced profile between quality and performance
    - high-quality: out-of-the-box high quality experience. Intended
      mostly for dGPUs.

Summary of the three profiles, including the default one:

[fast]
scale=bilinear
cscale=bilinear (implicit)
dscale=bilinear
dither=no
correct-downscaling=no
linear-downscaling=no
sigmoid-upscaling=no
hdr-compute-peak=no

[default] (implicit mpv defaults)
scale=lanczos
cscale=lanczos
dscale=mitchell
dither-depth=auto
correct-downscaling=yes
linear-downscaling=yes
sigmoid-upscaling=yes
hdr-compute-peak=yes

[high-quality] (inherits default options)
scale=ewa_lanczossharp
cscale=ewa_lanczossharp (implicit)
hdr-peak-percentile=99.995
hdr-contrast-recovery=0.30
allow-delayed-peak-detect=no
deband=yes
scaler-lut-size=8
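
For completeness, switching between these would just be a matter of (usage example; the file name is arbitrary):

mpv --profile=fast video.mkv
mpv --profile=high-quality video.mkv

or persistently in mpv.conf:

profile=high-quality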
@Jules-A

Jules-A commented Sep 19, 2023

But why? With the new default, dither depth will be auto-detected. Unless auto-detection doesn't work, in which case, shouldn't we just default to dither-depth=8 on affected platforms?

I don't think auto has ever worked for me on gpu-next; it always detects 10-bit on my 8-bit monitor playing 8-bit content. It used to detect 12-bit or something weird like that before, so at least it's improved. That said, I could never tell the difference anyway, so I'm not sure what was going on.

@kasper93
Contributor Author

I don't think auto has ever worked for me on gpu-next; it always detects 10-bit on my 8-bit monitor playing 8-bit content. It used to detect 12-bit or something weird like that before, so at least it's improved. That said, I could never tell the difference anyway, so I'm not sure what was going on.

auto is not meant to detect your display, but the target backbuffer. It's always better to have it enabled than not. I'm not sure what argument you are making.

@llyyr
Contributor

llyyr commented Sep 19, 2023

But why? With the new default, dither depth will be auto-detected. Unless auto-detection doesn't work, in which case, shouldn't we just default to dither-depth=8 on affected platforms?

I don't think auto has ever worked for me on gpu-next; it always detects 10-bit on my 8-bit monitor playing 8-bit content. It used to detect 12-bit or something weird like that before, so at least it's improved. That said, I could never tell the difference anyway, so I'm not sure what was going on.

Please read #11862 for why this matters. mpv shouldn't need to dither content at all if gpu drivers worked correctly.

@haasn
Member

haasn commented Sep 19, 2023

Please read #11862 for why this matters. mpv shouldn't need to dither content at all if gpu drivers worked correctly.

You always need to dither to the backbuffer depth.

@kasper93
Contributor Author

Please read #11862 for why this matters. mpv shouldn't need to dither content at all if gpu drivers worked correctly.

That's an incorrect statement. If you are using an 8-bit backbuffer, we have to dither to 8 bits from whatever internal precision we have. Not dithering is an error, and in practice there shouldn't even be an option to disable it, given how free dithering is.

@llyyr
Contributor

llyyr commented Sep 19, 2023

Please read #11862 for why this matters. mpv shouldn't need to dither content at all if gpu drivers worked correctly.

You always need to dither to the backbuffer depth.

Yes, but mpv shouldn't need to dither content to the display bit depth. If mpv is offered a 10-bit backbuffer on an 8-bit display, then mpv should be able to pass 10-bit video and just assume it'll work. Instead, on my system mpv is offered a 16-bit backbuffer and I see banding on my 8-bit display unless I explicitly set dither-depth=8.

@kasper93
Contributor Author

Yes, but mpv shouldn't need to dither content to the display bit depth. If mpv is offered a 10-bit backbuffer on an 8-bit display, then mpv should be able to pass 10-bit video and just assume it'll work. Instead, on my system mpv is offered a 16-bit backbuffer and I see banding on my 8-bit display unless I explicitly set dither-depth=8.

How is this related to this PR again?

@llyyr
Contributor

llyyr commented Sep 19, 2023

How is this related to this PR again?

I made an ambiguously worded statement that was interpreted as factually incorrect in response to an off-topic comment, so I felt the need to correct what I meant. It's not related. I'd flag my own comments as off-topic if I could.

@kasper93
Contributor Author

I made an ambiguously worded statement that was interpreted as factually incorrect in response to an off-topic comment, so I felt the need to correct what I meant. It's not related. I'd flag my own comments as off-topic if I could.

No worries, I'm tired and haven't really read everything carefully. Let's focus on things that are relevant to the changes; discussion about specific options/changes can also be moved to follow-up issues for better focus, and so we don't spam everyone subscribed to this thread.

@haasn
Member

haasn commented Sep 19, 2023

Let's get this moo-erged

