Abseil overhaul, again #45
Can you provide a specific example where this was true?
How is it any different from using stdlib implementations which live in libstdc++.so etc?
Why do you claim that it would have a performance issue? One disadvantage you forgot is interoperability with std types: some code bases using C++17 might assume that absl::any and std::any are interoperable.
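To make that interoperability assumption concrete, here is a minimal sketch (assuming abseil's headers are on the include path; `PrintTypeName` is an invented example name, not anything from abseil):

```cpp
#include <any>
#include <iostream>
#include "absl/types/any.h"

// With abseil's default auto-detection and a C++17 (or later) compiler,
// absl::any is an alias for std::any, so the two are freely interchangeable.
// If the packaged abseil is pinned to always use its own implementation,
// absl::any becomes a distinct type and this code no longer compiles.
void PrintTypeName(const std::any& value) {
  std::cout << value.type().name() << "\n";
}

int main() {
  absl::any boxed = 42;
  PrintTypeName(boxed);                          // relies on absl::any == std::any
  std::cout << std::any_cast<int>(boxed) << "\n";
}
```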
Tagging abseil/abseil-cpp#696. If you want to hear terrible, no-good, very-bad ideas:
Now, most libraries can link with the standard
gRPC itself on Windows, see conda-forge/grpc-cpp-feedstock#237. (Public linkage obviously fails without a rundep, removing the linkage fails with missing symbols - unsurprisingly - and private linkage still fails at link time, because the grpc builds on Windows are static.)
My impression is that the stdlib does better than what abseil can do (it's not a level playing field; the compiler knows the stdlib better than a third-party lib), so falling back on abseil types where it wouldn't strictly be necessary might have an impact. I don't have numbers beyond this impression though, which is why I'm already proposing in the OP to move to the lowest-common-denominator ABI - not least to keep the effort manageable and the migrations moving forward.
Is that a valid (=supported) assumption when using abseil?
I dealt with something similar with pytorch. It is very likely that somebody didn't declare abseil as a dependency of one of the submodules and it succeeded transitively. I don't think this is an issue of "even private symbols leak" - when things are configured correctly, I don't think I've seen this happen. We may have to find the package in question, add a dependency on abseil ourselves, and eventually upstream that.
I know the explanation about this from the cmake mailing list is long, but I think it's worth a read, e.g.:
In particular, building a static target will not be self-contained (as in: bring along all the symbols of its dependencies), and so linking a static target still needs the libs that are "privately" linked to be present. Here's another ML post that lays out why such self-contained static libs are hard to achieve (and some workarounds that I don't think are scalable for us). It could still be that I'm somehow misunderstanding something, but the stuff I found when researching this is consistent with the behaviour of grpc at link time (happily, conda-forge/grpc-cpp-feedstock#237 now contains a test, so we can catch this on the feedstock itself, and not in downstream builds). Not sure if someone has a different way of achieving the "bundling" of necessary-symbols-for-static-targets that would give us back the encapsulation which would make option 3. work out better. This SO question has some answers, but not sure how applicable/scalable this is for us.
Do you happen to know some projects that are relying on this explicitly or implicitly? I'd like to get a feel for how this looks in practice - it feels risky to me because that assumption depends on something outside of the control of that project (how abseil is built). But you're right: if code depends on interop between abseil and standard types based on using the C++17 ABI, then moving to the lowest common denominator might break something. It's another constraint in an already very tangled web, but I'm not sure how to weight it relative to the others, or whether we'd still have a workable solution left at all if we take the lowest common denominator off the table.
Thanks for joining in @coryan! (For context, Carlos is the top committer of https://github.com/googleapis/google-cloud-cpp and also maintains the feedstock on the conda-forge side, and he proposed the lowest-common-denominator ABI independently of @pitrou on the same day.)
I'm not sure what to read out of this, TBH 😅
That would actually sound like a nice way out of this quagmire.
OK, the intro (and mix of C++11 with
sounds like a really good idea to have symbols for two different ABIs that can coexist without blowing up. 🥳
I updated the OP to include this option, and I'm trying to think this through, as it seems like the best of the options by far. If I understood the idea correctly, it would mean that:
This means that at build-time, there will be both
Poking fun at myself mostly, and since I do not have full context on the conda ecosystem, there is a good chance my ideas are bad. I am willing to share them.
Is gRPC static on Windows?
There shouldn't be any duplicate symbols thanks to the different inline namespace references.
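For illustration, a toy sketch of why different inline namespaces avoid symbol clashes. This is not actual abseil code; the tags `lts_20220623` and `cxx_compat` are stand-ins for whatever names the two builds would use:

```cpp
#include <iostream>

// The inline namespace becomes part of each function's mangled name, so the
// "same" function from two differently-tagged builds is two distinct symbols
// to the (dynamic) linker -- no duplicate-symbol clash at that level.
namespace absl {
inline namespace lts_20220623 {   // stand-in tag for the C++17 build
int AbiFlavour() { return 17; }
}  // namespace lts_20220623

inline namespace cxx_compat {     // stand-in tag for the "compat" build
int AbiFlavour() { return 11; }
}  // namespace cxx_compat
}  // namespace absl

int main() {
  // In a real build, any given translation unit only ever sees one of the two
  // tags, so plain absl::AbiFlavour() is unambiguous there; here we qualify
  // explicitly because this toy file declares both side by side.
  std::cout << absl::lts_20220623::AbiFlavour() << " "
            << absl::cxx_compat::AbiFlavour() << "\n";
}
```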
Yes. I've been wanting to try shared builds, but protoc sometimes generates stuff that's not compilable on Windows for shared builds. I've experienced these kinds of issues at least for sentencepiece, and I assume I ran into a similar issue when I tried shared builds of google-cloud-cpp on Windows (some protobuf DLLs couldn't compile).
So, I'll admit I'm pretty far out of my depth around linkers, but on the level of the source code, there are no symbol namespaces. So when the code specifies
I'm guessing there are probably linker options that control this though (certainly, if not passing the respective
Since I proposed this, and it is receiving some consideration, let's talk about the downsides:
On balance, I still like this option. But I thought folks would want to consider the downsides before making a decision.
I'm sorry to ask all these questions about specific packages, but is there a package combination that is currently unable to be installed that you would like to enable? Ultimately, I'm trying to see if there is the 'need' to support these strange combinations of packages, not just the desire to do so.
All this is coming from the desire to rebuild our stack for newer abseil / grpc / google-cloud-cpp, etc. Since grpc 1.47 moved the minimum C++ version to 14, our previous abseil setup wasn't workable anymore, culminating in #35. I've tried to shepherd the respective migrations based on those builds as far as I could, but based on the issues I'm seeing, I'm really beginning to doubt that this is feasible (to say nothing of the issues that came up on the side, like the nonstandard setup for the static libs, no co-installability, etc.). And this has impacts downstream, e.g. arrow having to disable GCS because we haven't concluded several migrations yet, and the old setup is reaching its limits. It seems that either option 4. or 5. would have a substantially higher chance of actually enabling us to proceed.
This is somewhat off-topic: while compiling gRPC and Protobuf themselves as DLLs is possible, using them is a different problem: see googleapis/google-cloud-cpp#5849 for details, including grpc/grpc#937, grpc/grpc#15653, and grpc/grpc#15333
I'm looking for a specific story. Ultimately, if upstream doesn't want to support a common abseil, I think it is out of scope for us to try to force support. That is, the effort is better spent working with upstream to get their packages updated. I think it is definitely possible to say:
So if a certain package chain cannot be fully updated, that is OK. Generally this is how software development takes place. A foundational package gets updated, and it takes time to propagate to downstream libraries.
That is not what is happening here. To sum things up quickly, it's potentially the same Abseil source code everywhere, but with a different ABI depending on which C++ language version is enabled at compile-time. Ideally, every package build moves to C++17 (or later - the issue is really with types that were introduced in C++17) and the problem is solved. The current situation is not that ideal, as some packages still want to be compatible with earlier C++ versions. For example, Arrow C++ until now required C++11, and managed to link to (note the perversity that Arrow C++ itself does not directly depend on
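As a concrete sketch of that mechanism (assuming abseil's headers are available; `CountBytes` is an invented example):

```cpp
#include <cstddef>
#include <iostream>
#include "absl/strings/string_view.h"

// With abseil's default auto-detection, absl::string_view is an alias for
// std::string_view when built as C++17 or later, and abseil's own class when
// built as C++11/14. The parameter type is part of this function's mangled
// name, so a library exporting it and a consumer calling it must have been
// built against the same flavour -- identical sources, two incompatible ABIs.
std::size_t CountBytes(absl::string_view sv) { return sv.size(); }

int main() { std::cout << CountBytes("abseil") << "\n"; }
```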
@h-vetinari Do you know the list of all indirect consumers of abseil-cpp on conda-forge?
... I mean abseil and the C++ ABI.
Great, so the combination that is breaking is:
This transition seems to be painful for packages even outside of conda-forge. I don't see how this is a challenge to be tackled at conda-forge specifically. Instead, I would work with upstream to make sure that in the future you can do an upgrade. Users of the old versions can continue to use the old versions. We aren't "breaking" anything. It's just users that really wish to be on the bleeding edge that might "suffer" temporarily through this transition. To summarize, I understand this situation is painful; transitions are painful. But I don't know if we should keep "overhauling" abseil at conda-forge. Instead, I would suggest working with upstream to get longer-lasting, sustainable fixes:
Again, the key is to be specific about which combinations explicitly break. Otherwise, we are designing for hypothetical situations that might not exist in the wild.
It also seems that these issues are somewhat contained to Windows, and do not affect Unix builds. There are other cases where features are dropped on Windows due to our inability to package them, or due to upstream not supporting it. One package that comes to mind is ffmpeg, where some features just don't exist for Windows on conda-forge, and opencv, where hdf5 is linked only on Unix (though it might be possible to link on Windows now too). Ultimately, maybe arrow users on Windows don't need google-cloud-cpp? Maybe they can continue to use the old versions?
@hmaarrfk, I don't know how to say this politely, but your last comments sound both condescending and irrelevant.
Sorry, I don't mean to sound condescending.
I think you are all very capable. I'm going to take a step back. I've given the feedback that I think I can contribute. You are entirely free to ignore it.
Maybe I'm missing something, but the headers should be identical? The
Alas, nirvana still eludes us. There's a feedstock that already requires C++20 (see conda-forge/obake-feedstock#32), and unfortunately, the ABI is again different between C++20 and C++17. Moving everything to C++20 is clearly out of the question. Alternatives I see would be to make C++17 the new "compat" version, but I also don't see the mentioned difference (
Let's just ask the obake maintainer, who is the author of their own package: conda-forge/obake-feedstock#33
Maybe this shouldn't matter if
Exactly. And "now" in this case was the state as of March; perhaps there are even more divergences by now. I think we should at least ask upstream to provide options for all ABI-relevant types, also for C++20.
I'm not sure that is true. You have all made tremendous progress on ensuring that arrow + tensorflow are co-installable. As you mentioned, opening lines of communication with upstream will help ensure you get compatibility fallbacks when you need them. Maybe at this stage you can write a guide on how to ensure that new packages that use abseil work with Tensorflow and Arrow on conda-forge today.
I was referring to the comment from the upstream maintainer I linked about differences between C++17 and C++20, which was in March. 🙃
That might not be a bad idea. :)
@hmaarrfk xref'd a Chromium presentation about their experience with moving to C++20, see #50. Abseil appears at least as one solution (
PS. I opened abseil/abseil-cpp#1280
conda-forge/abseil-cpp-feedstock#45 (comment)
Or should we only depend on libabseil here?
I noticed a new addition in the
We're going to have to come up with a solution for this soonish, as projects are starting to compile with C++20 more and more. I still like the plan formulated further up, but it hasn't gotten any feedback. I'll just quote the conclusion:
For packages with abseil types in the API (not that many), we could even consider building for both ABIs, and then we should have a pretty complete solution to this issue AFAICT. Thoughts? @conda-forge/abseil-cpp @conda-forge/core @coryan
Oh no :-( Thanks for noticing this... I get the impression that
We don't. Nor do we have any plans to do so.
It's not a question of google-cloud-cpp, but of having diverging ABIs in the first place. Though if we're talking about impact across many feedstocks, the real question is what re2/protobuf/grpc will be doing. We can probably get away with not caring about the ordering types (unlikely to appear in signatures), but hope is not a great strategy. Presumably abseil will also keep adding more divergences over time. AFAICT there's a long-term stable solution possible now, and so I'd like to try fixing this. Mostly I'm interested in whether someone disagrees with the feasibility of the plan, or with the proposed approach in the first place.
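To make "the ordering types" concrete, here's a hedged sketch. The assumption is that this refers to the comparison categories in `absl/types/compare.h`, which under C++20 (with auto-detection) become aliases for the `std::*_ordering` types from `<compare>`, while under C++17 they are abseil's own classes; `SortsBefore` is an invented example:

```cpp
#include <iostream>
#include "absl/types/compare.h"

// Because the parameter type is part of the mangled name, this function's
// symbol differs between a C++17 build (abseil's own absl::weak_ordering) and
// a C++20 build (alias for std::weak_ordering) of the very same source --
// the same flavour of divergence as for string_view & friends, just rarer in
// public signatures.
bool SortsBefore(absl::weak_ordering cmp) { return cmp < 0; }

int main() {
  std::cout << SortsBefore(absl::weak_ordering::less) << "\n";     // 1
  std::cout << SortsBefore(absl::weak_ordering::greater) << "\n";  // 0
}
```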
Since it explicitly says not to use mode
Yeah, I thought so too, though all the options say not to use
Of course, it would be a possible choice (or at least buy us some time) to now settle on the C++17 baseline we've established, and keep using abseil types (
That's a simpler setup than what I proposed above, but it has the limitation that projects relying on interoperability of abseil types with C++20 standard types would be stuck (that's a point you raised further up). However, that might be a bridge we want to cross when we get there, as there's nothing affected on the horizon so far that's anywhere near as ubiquitous as e.g.
I think that's the best choice in the short term (potentially also longer-term; in any case it leaves the door open for introducing another ABI later, if it becomes necessary), so I implemented this in #73.
So, as most people know by now, abseil's ABI (by default) depends on the C++ version used to compile it¹.
This is problematic because we can only have one ABI per shared library (trying to load two different ones would lead to lots of ODR violations at best, segfaults at worst).
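To spell out that failure mode with a hedged sketch (`ScanOptions` and `EffectiveTimeout` are invented; the point is only how an abseil type embedded in a struct behaves):

```cpp
#include "absl/types/optional.h"

// absl::optional<int> is std::optional<int> in a C++17-ABI build but abseil's
// own emulation in a C++11/14-ABI build, and the two are not guaranteed to
// share a layout. The mangled names below mention only ScanOptions, not the
// flavour of optional inside it, so a library built with one ABI and a
// consumer built with the other can still "link fine" -- and then disagree
// about the object's layout at runtime (an ODR violation, i.e. undefined
// behaviour).
struct ScanOptions {
  bool verbose = false;
  absl::optional<int> timeout_ms;
};

int EffectiveTimeout(const ScanOptions& opts) {
  return opts.timeout_ms.value_or(1000);
}

int main() { return EffectiveTimeout(ScanOptions{}) == 1000 ? 0 : 1; }
```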
This left us with the choice of:
I tried 3. in #35, because it seemed like it would have the least painful constraints (and was nicely explicit, in making people aware of the ABI issue in the build string as well as in having to match the C++ version of consuming packages). The idea was that shared builds stay on C++17 for those who can, and those who're stuck with C++11/14 can use a static lib, with the intention to encapsulate that ABI dependence in the package using the static lib (because a run-export for the same C++ version would then conflict with the C++17 shared builds).
This worked for grpc (or seemed to at least) and some other packages, but the encapsulation is leakier than I had hoped. For example, when building a static library (which we have a few of, especially on Windows), even if specifying `target_link_libraries(<target> PRIVATE absl::something)`, this effectively gets turned into public linkage, which makes `libabseil-static` a runtime requirement, and runs into the same virality issue where stuff that run-depends on `libabseil-static=*=cxx14*` cannot be co-installed with `libabseil[=*=cxx17*]`. There are also packages which need public linkage anyway (and thus a run-dependence), because abseil types are part of their public API.

In practice it's been messy to try to work through this, but so far this was the only solution (given that e.g. gRPC bumped its required C++ version to 14, and moving to C++17 seemed too aggressive on Windows, given that some consumers weren't yet ready to change). I've tried as best as I could so far, but I'm getting to the point where I don't think it's really feasible as an overall approach. The remaining options are:

1. use C++11 ABI
3. try to support several ABIs at once using build strings
4. force abseil to never replace its own types with the ones from the stdlib

The last option is something I hadn't been aware of. It works by forcing abseil to never replace its own types with the ones from the stdlib, which means the ABI stays the same regardless of the C++ version that a consuming package uses, but also that there are now a lot more symbols, and that the compiler will have a harder time optimizing things, because those symbols live behind a DLL boundary (and aren't as transparent to the compiler as stdlib types).
[update] option 5 appeared:
5. support C++17 & compat ABI by changing the inline namespace for symbols used by the latter. This should mean that two shared builds with different ABIs could co-exist at runtime without stuff blowing up.
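For reference, a sketch of what options 4./5. could look like in terms of abseil's knobs: the macro names below are the real ones from `absl/base/options.h`, while the chosen values and the `conda_compat` namespace name are made up for this example (applied as a patch to that header in the compat variant):

```cpp
// Hypothetical excerpt of a patched absl/base/options.h for a "compat" build.

// 0 = always use abseil's own type, 1 = always alias the std type,
// 2 = decide based on the C++ version at compile time (the upstream default,
//     which is exactly what makes the ABI depend on -std=...).
#define ABSL_OPTION_USE_STD_ANY 0
#define ABSL_OPTION_USE_STD_OPTIONAL 0
#define ABSL_OPTION_USE_STD_STRING_VIEW 0
#define ABSL_OPTION_USE_STD_VARIANT 0

// Give the compat flavour its own inline namespace, so its symbols can never
// collide with those of the regular C++17 build (option 5.).
#define ABSL_OPTION_USE_INLINE_NAMESPACE 1
#define ABSL_OPTION_INLINE_NAMESPACE_NAME conda_compat
```

If done this way, the two variants' headers would differ only in this one file.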
Some advantages & disadvantages as I see them: the work necessary / complexity / footprint of each option, that some code bases might not be ready yet, interoperability between stdlib & abseil for newer standards, and the potential performance impact.
I don't think 3. is really salvageable (except through moving stuff that breaks the encapsulation to C++17), and I don't think 1. is reasonable given the work involved for a migration (and already, the migrators for abseil, grpc & protobuf are piled on top of each other).
obsoleted section about going with 4.
So I'm planning to move forward with 4., unless there are other opinions. In keeping with the spirit of marking the ABI in the build-string, I'm planning to do:

This intentionally doesn't refer to a standard version (which would IMO be confusing vis-à-vis abseil's default behaviour), and AFAICT there's no guarantee that this has to match the ABI of one of C++11/14/17, because abseil might have more compat types than are necessary even in the oldest C++ version they support (in particular, it'll likely change once abseil drops C++11 in the next release).
I think 4. sounds doable, but might still run into problems as pointed out by Isuru, and 5. just sounds way better overall. More detailed exposition on option 5 here.
Thankfully, the universe has kindly provided us with a new minor version (#44), so that the transition from the current setup can happen sanely (rerun another migration against 20220623.1).
Please comment if you have thoughts
@conda-forge/abseil-cpp @conda-forge/core
Footnotes
¹ Basically because it depends on whether the ABI will use the types from the C++ standard library (if the targeted standard version is high enough), or its own internal emulations of those types (which aren't ABI-compatible with the stdlib types).