
[docs] SeDuMi does natively support complex-valued variables #3257

Closed
wants to merge 2 commits into from

Conversation

araujoms
Contributor

See the manual.

@blegat
Member

blegat commented Feb 28, 2023

SDPT3.jl also supports complex numbers, but I failed to understand how it works: jump-dev/SDPT3.jl#6, help would be appreciated.
Note that if the solver just reformulates everything into real numbers, equivalently to what the MOI bridge does, it's not worth the effort of adding support for complex numbers at the MOI wrapper level.
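For reference, here is a minimal sketch of that real reformulation (the underlying identity, not the bridge's actual code): a Hermitian matrix X = A + iB is positive semidefinite exactly when the real matrix [A -B; B A] is, so each complex d×d block becomes a real 2d×2d block.

using LinearAlgebra
d = 3
M = randn(ComplexF64, d, d)
X = Hermitian(M * M')                    # complex Hermitian PSD matrix
A, B = real(Matrix(X)), imag(Matrix(X))
R = [A -B; B A]                          # real symmetric 2d×2d matrix
@assert isposdef(X) == isposdef(R)       # both hold, or neither does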

@araujoms
Contributor Author

I can't help with SDPT3; I only understand how SeDuMi works. But in the PR you wrote yourself that SDPT3 removed support for complex numbers, doesn't that explain it?

In any case, I have tested a complex SDP with SeDuMi, and it is much faster than mapping the problem into real numbers, since the resulting SDP is smaller.

@blegat
Member

blegat commented Feb 28, 2023

We should try adding that support to the MOI wrapper then. Could you give an MWE calling SeDuMi.sedumi directly from SeDuMi.jl?

@araujoms
Contributor Author

araujoms commented Feb 28, 2023

Sure, it follows below. I couldn't use SeDuMi.sedumi because it doesn't accept complex input, so I called MATLAB directly. Note that SeDuMi currently has a bug that you need to fix with this patch; otherwise you'll get incorrect results.

If you run the code below you'll see that the difference in performance is brutal: with SeDuMi.jl it needs 2 + d + 12d^2 variables, whereas the native complex interface needs only 1 + 4d^2.

using JuMP
using LinearAlgebra
import Random
import SeDuMi
import MATLAB

mutable struct Cone2
    f::Float64 # number of free primal variables / linear dual equality constraints
    l::Float64 # length of LP cone
    q::Vector{Float64} # list of second-order cone dimension
    r::Vector{Float64} # list of rotated second-order cone dimension
    s::Vector{Float64} # list of semidefinite constraints
    scomplex::Vector{Float64} # indices of the SDP blocks in `s` that are complex Hermitian
    ycomplex::Vector{Float64} # indices of the dual variables allowed to be complex
    xcomplex::Vector{Float64} # indices of the primal variables allowed to be complex
end


function random_state(d)
    x = randn(ComplexF64, (d,d))
    y = x*x'
    rho = Hermitian(y/tr(y))
    return rho
end


function discriminate(d)
    # first we solve the SDP with SeDuMi.jl
    model = Model(SeDuMi.Optimizer)
    rho1 = random_state(d)
    rho2 = random_state(d)
    E1 = @variable(model, [1:d, 1:d] in HermitianPSDCone())
    E2 = @variable(model, [1:d, 1:d] in HermitianPSDCone())
    @constraint(model, E1+E2 .== I)
    obj = real(dot(rho1,E1) + dot(rho2,E2))/2
    @objective(model, Max, obj)
    optimize!(model)
    print(value(obj),"\n\n")
    
    # now we do it directly on SeDuMi using complex numbers
    c = -[vec(rho1);vec(rho2)]/2
    smallA = zeros(Float64,Int64(d + d*(d-1)/2),d^2)
    b = zeros(Float64,Int64(d + d*(d-1)/2))
    offdiagonals = zeros(Float64,Int64(d*(d-1)/2))
    counter = 0
    counter2 = 0
    for j = 1:d
        for i = 1:d
            if i >= j
                counter += 1
                smallA[counter,i + (j-1)*d] = 1
                if i == j
                    b[counter] = 1
                end
                if i > j
                    counter2 += 1
                    offdiagonals[counter2] = counter
                end
            end
        end
    end
    A = hcat(smallA,smallA)
    
    K = Cone2(0,0,Float64[],Float64[],[d, d],[1, 2],offdiagonals,Float64[])
    
    x, y, info = MATLAB.mxcall(:sedumi, 3, A, b, c,K)
    print(-real(dot(c,x)),"\n\n")
    
    #we print the analytical answer as well for debugging
    print(0.5 + 0.25*sum(svdvals(rho1-rho2)))
    return
end

@odow
Member

odow commented Feb 28, 2023

Let's update SeDuMi.jl first before updating the JuMP documentation? Currently, no solvers accessible from JuMP support complex values.

@blegat
Member

blegat commented Mar 1, 2023

Let's update SeDuMi.jl first before updating the JuMP documentation? Currently, no solvers accessible from JuMP support complex values.

Yes, that's also what I have in mind

@araujoms
Contributor Author

araujoms commented Mar 1, 2023

The manual is making a factual statement that is incorrect; I just wanted to fix that independently of what JuMP supports.

Nevertheless, I found out that there's no point in using SeDuMi's complex interface. I did some benchmarking with YALMIP, and it turns out that the performance is indistinguishable from the usual technique of mapping the problem to the reals.

There are two unrelated issues causing the low performance of JuMP: the first is that JuMP is solving the dual problem, whereas the low-level interface is solving the primal problem, which is more efficient. Indeed, even when I restrict the problem to reals the performance is still much lower.

Now YALMIP allows you to choose whether you're solving the primal or the dual (by default it solves the dual). If I let it solve the dual problem over the reals, the performance is exactly the same as JuMP over the reals. To fix that, one would need to add an option to JuMP to choose either primal or dual.

The second issue is JuMP's support for complex numbers. When I tell YALMIP to solve the dual problem over the complexes the performance is still much better than JuMP's. Apparently JuMP is generating a bunch of redundant constraints.

@blegat
Member

blegat commented Mar 1, 2023

The manual is making a factual statement that is incorrect; I just wanted to fix that independently of what JuMP supports.

I agree, but fixing it once the SeDuMi interface supports complex numbers would avoid users misunderstanding this.

There are two unrelated issues causing the low performance of JuMP: the first is that JuMP is solving the dual problem, whereas the low-level interface is solving the primal problem, which is more efficient. Indeed, even when I restrict the problem to reals the performance is still much lower.

Now YALMIP allows you to choose whether you're solving the primal or the dual (by default it solves the dual). If I let it solve the dual problem over the reals, the performance is exactly the same as JuMP over the reals. To fix that, one would need to add an option to JuMP to choose either primal or dual.

Yes, choosing between primal and dual is crucial for performance. In JuMP, SeDuMi defaults to geometric form (free variables and affine expressions in cones) and you can use https://github.com/jump-dev/Dualization.jl/ to transform a standard form (variables in cones and equality constraints) into geometric form.
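For concreteness, a minimal sketch of that setup, with the model then built as usual on top of it:

using JuMP
import Dualization
import SeDuMi

# Hand SeDuMi the dualized problem instead of the problem itself:
model = Model(Dualization.dual_optimizer(SeDuMi.Optimizer))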

The second issue is JuMP's support for complex numbers. When I tell YALMIP to solve the dual problem over the complexes the performance is still much better than JuMP's. Apparently JuMP is generating a bunch of redundant constraints.

That is surprising, we should investigate.

@araujoms
Contributor Author

araujoms commented Mar 1, 2023

I've tested Dualization.jl. It was not enough to match YALMIP's performance, it still generated redundant constraints even in the real case.

@blegat
Member

blegat commented Mar 1, 2023

Isn't it simply that JuMP generates identical constraints for the real part of the off-diagonal entries of @constraint(model, E1+E2 .== I)?

@araujoms
Contributor Author

araujoms commented Mar 1, 2023

That's very likely. JuMP generates d^2 constraints, whereas YALMIP generates d(d+1)/2. The difference is precisely the number of constraints that are redundant because E1+E2 is symmetric (or Hermitian).
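(For concreteness: at d = 100 that is 10000 constraints from JuMP versus 5050 from YALMIP, and the gap is exactly the d(d-1)/2 = 4950 duplicated off-diagonal constraints.)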

@blegat
Member

blegat commented Mar 1, 2023

Not sure what YALMIP is doing, but JuMP doesn't do anything magical there: it just gives the solver exactly what the user gives. Either YALMIP directly sees it in the algebraic expression and never produces these constraints, or it presolves them out. JuMP has no presolve; there is an attempt in https://github.com/mtanneau/MathOptPresolve.jl which acts as a layer you can add (similarly to Dualization). I would be curious to know what YALMIP does, could you share the MATLAB script?

@odow
Member

odow commented Mar 1, 2023

Do you have a reproducible example for the YALMIP and JuMP codes? Are they exactly the same?

@odow
Member

odow commented Mar 1, 2023

Since no current solvers other than SeDuMi natively support complex-valued variables

I get that the current text is confusing if you know that SeDuMi (the solver) supports complex values. But equally, if we add this text, then general users will assume that SeDuMi.jl (the Julia package) does and get confused when JuMP adds extra constraints.

@odow
Member

odow commented Mar 1, 2023

How about we just delete the entire sentence?

@odow odow changed the title correction: SeDuMi does natively support complex-valued variables [docs] SeDuMi does natively support complex-valued variables Mar 1, 2023

Review thread on docs/src/manual/complex.md (outdated):

- constraints. Since no current solvers natively support complex-valued variables
- and constraints, JuMP reformulates all complex expressions into their real and
- imaginary parts.
+ constraints.

Member: If you remove the next sentence, it's not clear why you say "limited range".

Member: Suggestions then? Some more general introduction? I think it is limited because it's only Complex-in-EqualTo and HermitianPSD.

Member: Not sure what else we could support. You could say it's limited because the variables are actually real under the hood, but at the JuMP level you don't really see the difference. Maybe we can just remove the fact that it's limited. JuMP doesn't "reformulate all complex expressions into real and imaginary parts"; bridges only do that when the solver does not support complex input, which is in fact the best thing you could do. So it's actually the opposite: JuMP passes the complex expressions on as-is. Additionally, the reformulation is done automatically for the solvers that need it.

Member: I tried afresh: #3262

@araujoms
Contributor Author

araujoms commented Mar 2, 2023

Do you have a reproducible example for the YALMIP and JuMP codes? Are they exactly the same?

Sure. If you don't dualize the model, they generate the same SDP. The YALMIP code is:

function objective = real_sedumi(d)

    rho1 = random_state(d);
    rho2 = random_state(d);

    E1 = sdpvar(d);
    E2 = sdpvar(d);

    objective = real(rho1(:)'*E1(:) + rho2(:)'*E2(:))/2;

    constraints = [E1 >= 0, E2 >= 0, E1 + E2 == eye(d)];

    ops = sdpsettings('verbose',1,'solver','sedumi','dualize',0);
    sol = optimize(constraints,-objective,ops);
    objective = value(objective);
    
end


function rho = random_state(d)

    x = randn(d,d);
    rho = x*x'/trace(x*x');

end

The JuMP code is pretty much the same as before, but I'll paste it here for convenience:

using JuMP
using LinearAlgebra
import SeDuMi
import Dualization

function random_state(d)
    x = randn(Float64, (d,d))
    y = x*x'
    rho = Symmetric(y/tr(y))
    return rho
end


function discriminate(d)
#    model = Model(Dualization.dual_optimizer(SeDuMi.Optimizer))
    model = Model(SeDuMi.Optimizer)
    rho1 = random_state(d)
    rho2 = random_state(d)
    E1 = @variable(model, [1:d, 1:d] in PSDCone())
    E2 = @variable(model, [1:d, 1:d] in PSDCone())
    @constraint(model, E1+E2 .== I)
    obj = real(dot(rho1,E1) + dot(rho2,E2))/2
    @objective(model, Max, obj)
    optimize!(model)
    print(value(obj),"\n\n")
    return
end

@araujoms
Contributor Author

araujoms commented Mar 2, 2023

Not sure what YALMIP is doing, but JuMP doesn't do anything magical there: it just gives the solver exactly what the user gives. Either YALMIP directly sees it in the algebraic expression and never produces these constraints, or it presolves them out. JuMP has no presolve; there is an attempt in https://github.com/mtanneau/MathOptPresolve.jl which acts as a layer you can add (similarly to Dualization). I would be curious to know what YALMIP does, could you share the MATLAB script?

We don't need magic here. When calling @constraint on a Hermitian (or symmetric) variable you just generate the constraints for the lower triangle (and, if you want to be really nice, also throw an error if the lower- and upper-triangle constraints don't match).

In any case, since YALMIP generates the same SDP as JuMP when not dualizing, it must be during dualization that YALMIP eliminates the superfluous constraints.

@blegat
Member

blegat commented Mar 2, 2023

We don't need magic here. When calling @constraint on a Hermitian (or symmetric) variable you just generate the constraints for the lower triangle (and, if you want to be really nice, also throw an error if the lower- and upper-triangle constraints don't match).

Yes, but when you do @constraint(model, E1+E2 .== I), you broadcast over E1 + E2 - I, so it's the Julia broadcast that is the issue:

julia> A = Hermitian(rand(2, 2))
2×2 Hermitian{Float64, Matrix{Float64}}:
 0.698166  0.024486
 0.024486  0.0138144

julia> println.(A + A - I)
0.3963316380355213
0.04897191775630416
0.04897191775630416
-0.972371283566182

What we could do is support the syntax with == instead of .==:

julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, H[1:2, 1:2] in HermitianPSDCone())
2×2 Hermitian{GenericAffExpr{ComplexF64, VariableRef}, Matrix{GenericAffExpr{ComplexF64, VariableRef}}}:
 real(H[1,1])                               real(H[1,2]) + (0.0 + 1.0im) imag(H[1,2])
 real(H[1,2]) + (0.0 - 1.0im) imag(H[1,2])  real(H[2,2])

julia> @constraint(model, H == I)
ERROR: At REPL[14]:1: `@constraint(model, H == I)`: Unexpected vector in scalar constraint. Did you mean to use the dot comparison operators like .==, .<=, and .>= instead?
Stacktrace:
 [1] error(::String, ::String, ::String, ::String)
   @ Base ./error.jl:44
 [2] _macro_error(::Symbol, ::Tuple{Symbol, Expr}, ::LineNumberNode, ::String, ::Vararg{String})
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:1623
 [3] (::JuMP.var"#_error#65"{Tuple{Symbol, Expr}, Symbol, LineNumberNode})(::String, ::Vararg{String})
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:743
 [4] build_constraint(_error::JuMP.var"#_error#65"{Tuple{Symbol, Expr}, Symbol, LineNumberNode}, x::Hermitian{GenericAffExpr{ComplexF64, VariableRef}, Matrix{GenericAffExpr{ComplexF64, VariableRef}}}, set::MathOptInterface.EqualTo{Float64})
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:609
 [5] macro expansion
   @ ~/.julia/dev/JuMP/src/macros.jl:833 [inlined]
 [6] top-level scope
   @ REPL[14]:1

julia> function JuMP.build_constraint(error::Function, H::Hermitian, set::MOI.EqualTo)
           @assert iszero(MOI.constant(set))
           n = LinearAlgebra.checksquare(H)
           shape = HermitianMatrixShape(n)
           x = vectorize(H, shape)
           build_constraint(error, x, MOI.Zeros(length(x)))
       end

julia> @constraint(model, H == I)
[real(H[1,1]) - 1, real(H[1,2]), real(H[2,2]) - 1, imag(H[1,2])] ∈ MathOptInterface.Zeros(4)

There, you don't generate any redundant equality.
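(Counting for an n×n Hermitian matrix: the vectorized constraint has n + 2·n(n-1)/2 = n² scalar entries, one real part per diagonal entry plus a real and an imaginary part per strict upper-triangle entry, whereas the broadcast version produces n² complex equalities in which every off-diagonal constraint appears twice.)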

In any case, since YALMIP generates the same SDP as JuMP when not dualizing, it must be during dualization that YALMIP eliminates the superfluous constraints.

That's interesting, we should find where that is in YALMIP's code to see if it's just finding exact duplicates or doing something more sophisticated.

@araujoms
Contributor Author

araujoms commented Mar 2, 2023

Wow, that's amazing. I didn't think it would be so easy.

As for YALMIP, I find its code rather difficult to read. I would just ask Löfberg instead.

@odow
Member

odow commented Mar 2, 2023

You could also do:

julia> begin
           d = 3
           model = Model()
           rho1 = random_state(d)
           rho2 = random_state(d)
           E1 = @variable(model, [1:d, 1:d] in PSDCone())
           E2 = @variable(model, [1:d, 1:d] in PSDCone())
           @constraint(model, [i=1:d, j=1:i], E1[i, j] + E2[i, j] == LinearAlgebra.I[i, j])
           obj = real(LinearAlgebra.dot(rho1, E1) + LinearAlgebra.dot(rho2, E2)) / 2
           @objective(model, Max, obj)
           model
       end
A JuMP Model
Maximization problem with:
Variables: 12
Objective function type: AffExpr
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 6 constraints
`Vector{VariableRef}`-in-`MathOptInterface.PositiveSemidefiniteConeTriangle`: 2 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

I don't know if JuMP can/should try and be cleverer here. There are too many edge cases:

julia> function JuMP.build_constraint(error::Function, H::Symmetric, set::MOI.EqualTo)
           @assert iszero(MOI.constant(set))
           n = LinearAlgebra.checksquare(H)
           shape = SymmetricMatrixShape(n)
           x = vectorize(H, shape)
           build_constraint(error, x, MOI.Zeros(length(x)))
       end

julia> @constraint(model, E1 + E2 == 2)
ERROR: Operation `sub_mul` between `Symmetric{AffExpr, Matrix{AffExpr}}` and `Int64` is not allowed. You should use broadcast.
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] operate!!(#unused#::typeof(MutableArithmetics.sub_mul), x::Symmetric{AffExpr, Matrix{AffExpr}}, y::Int64)
   @ JuMP ~/.julia/dev/JuMP/src/sets.jl:106
 [3] macro expansion
   @ ~/.julia/packages/MutableArithmetics/9dpep/src/rewrite.jl:322 [inlined]
 [4] macro expansion
   @ ~/.julia/dev/JuMP/src/macros.jl:832 [inlined]
 [5] top-level scope
   @ REPL[42]:1

julia> @constraint(model, E1 + E2 .== 2)
3×3 Matrix{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.EqualTo{Float64}}, ScalarShape}}:
 _[1] + _[7] = 2.0   _[2] + _[8] = 2.0   _[4] + _[10] = 2.0
 _[2] + _[8] = 2.0   _[3] + _[9] = 2.0   _[5] + _[11] = 2.0
 _[4] + _[10] = 2.0  _[5] + _[11] = 2.0  _[6] + _[12] = 2.0

julia> @constraint(model, E1 + E2 == I)
[_[1] + _[7] - 1, _[2] + _[8], _[3] + _[9] - 1, _[4] + _[10], _[5] + _[11], _[6] + _[12] - 1] ∈ MathOptInterface.Zeros(6)

julia> A = 2
2

julia> @constraint(model, E1 + E2 == A)
ERROR: Operation `sub_mul` between `Symmetric{AffExpr, Matrix{AffExpr}}` and `Int64` is not allowed. You should use broadcast.
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] operate!!(#unused#::typeof(MutableArithmetics.sub_mul), x::Symmetric{AffExpr, Matrix{AffExpr}}, y::Int64)
   @ JuMP ~/.julia/dev/JuMP/src/sets.jl:106
 [3] macro expansion
   @ ~/.julia/packages/MutableArithmetics/9dpep/src/rewrite.jl:322 [inlined]
 [4] macro expansion
   @ ~/.julia/dev/JuMP/src/macros.jl:832 [inlined]
 [5] top-level scope
   @ REPL[46]:1

julia> @constraint(model, E1 + E2 .- 2 == 0)
ERROR: At REPL[47]:1: `@constraint(model, (E1 + E2) .- 2 == 0)`: Unexpected vector in scalar constraint. Did you mean to use the dot comparison operators like .==, .<=, and .>= instead?
Stacktrace:
 [1] error(::String, ::String, ::String, ::String)
   @ Base ./error.jl:42
 [2] _macro_error(::Symbol, ::Tuple{Symbol, Expr}, ::LineNumberNode, ::String, ::Vararg{String, N} where N)
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:1623
 [3] (::JuMP.var"#_error#69"{Tuple{Symbol, Expr}, Symbol, LineNumberNode})(::String, ::Vararg{String, N} where N)
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:743
 [4] build_constraint(_error::JuMP.var"#_error#69"{Tuple{Symbol, Expr}, Symbol, LineNumberNode}, x::Matrix{AffExpr}, set::MathOptInterface.EqualTo{Float64})
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:609
 [5] macro expansion
   @ ~/.julia/dev/JuMP/src/macros.jl:833 [inlined]
 [6] top-level scope
   @ REPL[47]:1

@araujoms
Contributor Author

araujoms commented Mar 3, 2023

Of course I could just do @constraint(model, [i=1:d, j=1:i], E1[i, j] + E2[i, j] == LinearAlgebra.I[i, j]), but asking the user to do low-level stuff like this kind of defeats the purpose of a preprocessor. It's perfectly clear what the user means by E1 + E2 == I and JuMP should understand that.

I don't understand your objection. In the first two errors you're trying to constrain a matrix to be equal to a scalar. That's nonsensical, and Julia should indeed throw an error. In the third case the problem is that Julia doesn't understand that elementwise subtraction preserves symmetry. It's easy to fix by instead using @constraint(model, Symmetric(E1 + E2 .- 2) == 0). In any case, that's still an improvement over the current behaviour, where JuMP throws an error when the user tries the perfectly sensible operation of constraining a matrix to be equal to another.

I spoke to Löfberg, btw. YALMIP is a bit smart when setting up constraints. It tests whether the variables are Hermitian, and in that case constrains their upper triangles to be equal:

if ishermitian(X) && ishermitian(Y)
    y = constraint(triu(X),'==',triu(Y));
else
    y = constraint(X,'==',Y);
end

This is not very smart, though, because it adds a bunch of 0 == 0 constraints in the lower triangle. But then, when you call dualize, YALMIP can detect these trivial constraints and eliminate them. This explains why YALMIP generates the same number of constraints as JuMP when working with the primal, but a more efficient SDP when dualizing. Now if we use @blegat's code it will generate a more efficient SDP than YALMIP when working with the primal, and the same when dualizing.

@blegat
Member

blegat commented Mar 3, 2023

@araujoms Thanks for asking, that makes much more sense now.

Maybe JuMP should use JuMP.Zero (or MA.Zero?), which would be like MOI.Zeros/MOI.EqualTo except that no dimension or constant is specified yet (like JuMP.SecondOrderCone), so that it gets transformed into MOI.EqualTo, ...
Note that in #3191, MOI.EqualTo(0.0) is transformed into MOI.EqualTo(false), which is weird, so JuMP.Zero would resolve that as well.

Then the signature would be JuMP.build_constraint(error::Function, ::Hermitian, ::JuMP.Zero), so no edge case would be possible and it's much cleaner.

@odow
Member

odow commented Mar 3, 2023

I've opened an issue to track this: #3265

JuMP.Zeros solves the problem.

JuMP.build_constraint(error::Function, ::Hermitian, ::JuMP.Zero). Yes. This works well.

@araujoms, my main issue was that the method JuMP.build_constraint(error::Function, H::Symmetric, set::MOI.EqualTo) "feels wrong" because it's constraining a matrix to a scalar EqualTo set: @constraint(model, H == 0) would work, but @constraint(model, H == 1) wouldn't, despite both right-hand sides being scalars.

@araujoms araujoms deleted the patch-1 branch March 3, 2023 19:30
@araujoms
Contributor Author

Now that the underlying support has been merged, are you going to add support for @constraint(model, H == I) for Hermitian or symmetric H?

@odow
Member

odow commented Mar 12, 2023

See #3281
