
jar hell in test classpath when running with JDK 1.8.0_66 on OS X (ant-javafx.jar, packager.jar) #14348

Closed
robinst opened this issue Oct 29, 2015 · 23 comments


@robinst
Contributor

robinst commented Oct 29, 2015

Trying to run a test that extends ESIntegTestCase with Elasticsearch 2.0.0, getting this:

java.lang.RuntimeException: found jar hell in test classpath
    at org.elasticsearch.bootstrap.BootstrapForTesting.<clinit>(BootstrapForTesting.java:63)
    at org.elasticsearch.test.ESTestCase.<clinit>(ESTestCase.java:106)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:573)
Caused by: java.lang.IllegalStateException: jar hell!
class: jdk.packager.services.UserJvmOptionsService
jar1: /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/ant-javafx.jar
jar2: /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/packager.jar
    at org.elasticsearch.bootstrap.JarHell.checkClass(JarHell.java:267)
    at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:185)
    at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:86)
    at org.elasticsearch.bootstrap.BootstrapForTesting.<clinit>(BootstrapForTesting.java:61)
    ... 4 more

Not sure I can do much about that.
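For context, the check that fails here (JarHell.checkJarHell) essentially walks every classpath entry and aborts when two jars provide the same class, which is exactly the ant-javafx.jar/packager.jar clash in the trace above. A minimal, self-contained sketch of the idea, assuming nothing about ES internals (the class name `JarHellSketch` and its methods are invented for this illustration, not Elasticsearch's actual code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarHellSketch {

    /**
     * Maps every .class entry to the first jar that provides it and reports
     * a conflict whenever a second jar provides the same entry.
     */
    public static List<String> findDuplicates(List<Path> jars) {
        Map<String, Path> seen = new HashMap<>();
        List<String> conflicts = new ArrayList<>();
        for (Path jar : jars) {
            try (JarFile jarFile = new JarFile(jar.toFile())) {
                for (Enumeration<JarEntry> e = jarFile.entries(); e.hasMoreElements(); ) {
                    String name = e.nextElement().getName();
                    if (!name.endsWith(".class")) {
                        continue;
                    }
                    Path previous = seen.putIfAbsent(name, jar);
                    if (previous != null && !previous.equals(jar)) {
                        conflicts.add("class " + name + " in " + previous + " and " + jar);
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        return conflicts;
    }

    /** Demo helper: writes a throwaway jar containing empty entries with the given names. */
    public static Path tempJar(String prefix, String... entryNames) {
        try {
            Path jar = Files.createTempFile(prefix, ".jar");
            try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
                for (String entry : entryNames) {
                    out.putNextEntry(new JarEntry(entry));
                    out.closeEntry();
                }
            }
            return jar;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

IDE setups that push every JDK lib jar onto the test classpath, as happened here with ant-javafx.jar and packager.jar, trip exactly this kind of check.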

@rmuir
Contributor

rmuir commented Oct 29, 2015

You have to fix your IDE config: #13465

@rmuir rmuir closed this as completed Oct 29, 2015
@robinst
Contributor Author

robinst commented Oct 29, 2015

Thanks. I removed "ant-javafx.jar" from the configured JRE classpath, works now.

By the way, I wouldn't be surprised if you guys got more bug reports about this, because that's really weird (and the first time a library forced me to do that).

@nik9000
Member

nik9000 commented Oct 29, 2015

Yeah, I suspect we will. Jar hell checks are too useful to give up though. They detect all kinds of "fun" ways we've seen things broken.

I could see adding some extra help to these checks, because they are much more likely to be hit by new contributors: exactly the people we want to encourage, and the people who are going to have the hardest time figuring out the issue on their own.

@rmuir
Contributor

rmuir commented Oct 29, 2015

Everyone loves IntelliJ, but nobody submits bugs to them for their clearly broken configuration.

So I think it's good if it confuses people; at some point it will encourage someone to fix the damn thing. I don't care about IntelliJ, so it will not be me :)

@jprante
Contributor

jprante commented Dec 10, 2015

FYI, the same config works flawlessly here:

Jorg-Prantes-MacBook-Pro:~ joerg$ uname -a
Darwin Jorg-Prantes-MacBook-Pro.local 13.4.0 Darwin Kernel Version 13.4.0: Wed Mar 18 16:20:14 PDT 2015; root:xnu-2422.115.14~1/RELEASE_X86_64 x86_64
Jorg-Prantes-MacBook-Pro:~ joerg$ java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
Jorg-Prantes-MacBook-Pro:~ joerg$ ls -l  /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/ant-javafx.jar
-rwxrwxr-x  1 root  wheel  1165611  6 Okt 22:59 /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/ant-javafx.jar
Jorg-Prantes-MacBook-Pro:~ joerg$ ls -l  /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/packager.jar
-rwxrwxr-x  1 root  wheel  4646  6 Okt 22:59 /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/packager.jar
Jorg-Prantes-MacBook-Pro:~ joerg$ cd ~es/elasticsearch-2.1.0
Jorg-Prantes-MacBook-Pro:elasticsearch-2.1.0 joerg$ ./bin/elasticsearch
[2015-12-10 14:37:28,445][INFO ][node                     ] [Kiden Nixon] version[2.1.0], pid[83532], build[72cd1f1/2015-11-18T22:40:03Z]
[2015-12-10 14:37:28,446][INFO ][node                     ] [Kiden Nixon] initializing ...
[2015-12-10 14:37:28,495][INFO ][plugins                  ] [Kiden Nixon] loaded [], sites []
[2015-12-10 14:37:28,517][INFO ][env                      ] [Kiden Nixon] using [1] data paths, mounts [[/ (/dev/disk0s2)]], net usable_space [336.9gb], net total_space [931gb], spins? [unknown], types [hfs]
[2015-12-10 14:37:30,217][INFO ][node                     ] [Kiden Nixon] initialized
[2015-12-10 14:37:30,218][INFO ][node                     ] [Kiden Nixon] starting ...
[2015-12-10 14:37:30,274][INFO ][transport                ] [Kiden Nixon] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[fe80::1]:9300}, {[::1]:9300}
[2015-12-10 14:37:30,280][INFO ][discovery                ] [Kiden Nixon] elasticsearch/sr2IKPk7QnizIRE-2yquZQ
[2015-12-10 14:37:33,305][INFO ][cluster.service          ] [Kiden Nixon] new_master {Kiden Nixon}{sr2IKPk7QnizIRE-2yquZQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-10 14:37:33,317][INFO ][http                     ] [Kiden Nixon] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[fe80::1]:9200}, {[::1]:9200}
[2015-12-10 14:37:33,317][INFO ][node                     ] [Kiden Nixon] started
[2015-12-10 14:37:33,344][INFO ][gateway                  ] [Kiden Nixon] recovered [1] indices into cluster_state

@rjernst
Member

rjernst commented Dec 10, 2015

@jprante That is because java (which bin/elasticsearch invokes) does not put all of the jars in the JDK and JRE onto the boot classpath. IntelliJ does, which is wrong.

@jprante
Contributor

jprante commented Dec 10, 2015

After some digging, I hit this error in IntelliJ when selecting "Run" from the "Run" menu with the green triangle. What a strange way to start ES.

It's not the boot class path but the ordinary class path, which can be configured in "File -> Project Structure -> SDKs -> 1.8 -> Classpath".

After editing the configuration in Run > Edit Configurations (setting VM options to -Des.path.home=/var/tmp and the program argument to start), it is indeed possible to select Run > Run 'Elasticsearch' and execute a node without error ...

Good thing I'm used to the console and Maven/Gradle, which IntelliJ supports very well.

@bonitao

bonitao commented Dec 29, 2015

I have been trying to upgrade to Elasticsearch 2.0, but the jar hell check is preventing me from doing so. From #13404, it seems there are no plans to make it optional.

However, I fail to see how it is possible to write tests using ESIntegTestCase in a complex environment with the jar hell check enabled. In production I am usually deploying an uber jar, so everything works fine, since only one instance of each class goes into the flattened jar.

But when running tests from Maven, there isn't much I can do to avoid the jar hell, because the duplicated classes are in the dependency jars and the classloader decides at runtime which class to pick (where maven-shade, which I use to build the uber jar, decides offline). Others on Stack Overflow are hitting the same problem: http://stackoverflow.com/questions/33975807/elasticsearch-jar-hell-error.

I used the Maven duplicate-finder plugin to take a deep look at my duplicated jars to see what I could do, and although I could get rid of many of them, some still seem impossible to fix. Here is just a glimpse of the problem:

mvn clean duplicate-finder:check
...
[WARNING] Found duplicate and different classes in [au.com.bytecode:opencsv:2.4, net.sf.opencsv:opencsv:2.3]:
[WARNING]   au.com.bytecode.opencsv.CSVReader
[WARNING]   au.com.bytecode.opencsv.CSVWriter
[WARNING]   au.com.bytecode.opencsv.bean.HeaderColumnNameTranslateMappingStrategy
[WARNING] Found duplicate and different classes in [com.google.guava:guava:18.0, org.apache.spark:spark-network-common_2.10:1.6.0]:
[WARNING]   com.google.common.base.Absent
[WARNING]   com.google.common.base.Function
[WARNING]   com.google.common.base.Optional
[WARNING]   com.google.common.base.Present
[WARNING]   com.google.common.base.Supplier
...

Unfortunately, the reality is that upstream dependencies can be packaged in some crazy ways (sometimes for good reasons, sometimes not), and as far as I know, in Maven or IntelliJ there isn't an "offline" way to tell the system how to get rid of the jar hell, unlike with deploy jars, where I can use maven-shade or similar.

After struggling with this for quite some time now, the only way out I see is to keep my own locally hacked Elasticsearch, which is a solution I really don't like. Any other suggestions on how to work around this?

If there is a true solution, I will be happy to hear it, but right now I have 45 dependencies contributing to the jar hell, and I have no idea how to fix them.

@jprante
Contributor

jprante commented Dec 29, 2015

@bonitao I think you should take one step back and take a fresh breath. Look at it this way: ES gives you the opportunity to clean up the dependencies of your project.

Are you authoring a plugin? Or do you just need an embedded ES client?

Besides wrestling with Maven, you could use Gradle for your project. There is a learning curve, but the classpath configuration abilities are more flexible than in Maven. The ES team is switching to Gradle (again) because Maven is not flexible enough.

For an example, see my integration test at https://github.com/jprante/elasticsearch-langdetect/blob/master/build.gradle which can test plugin loading; it does not require the ES test framework.

Before thinking about a "hacked" Elasticsearch (which is a good thing), you could also try to just change the Elasticsearch build. From the above, it seems Google Guava has a conflict with Spark. So it may be an alternative to shade dependencies, either in your project, or in Spark, or in Elasticsearch. For example, there is a great Gradle shadow plugin at https://github.com/johnrengelman/shadow

@bonitao

bonitao commented Dec 29, 2015

Hi @jprante,

Thanks for the response. A month ago, when I first tried to upgrade to ES 2.x, I actually tried to approach this as you said: as an opportunity to clean up my dependencies. But it did not work out, and I do not believe it will. Let me try to explain why.

I am not authoring a plugin. I am building a restful service that has an embedded ES client and an extract/load/transform pipeline, which also talks with ES. The code does a bunch of things, like running search queries, doing snapshot/restores through the admin API, and doing bulk indexing with elasticsearch-hadoop-mr. This all works with ES 1.7.x, and from what I have seen, it also works in ES 2.x after I build an uber jar. The only things that fail are my unit tests on ES 2.x, because in the test environment my classpath is built from the individual dependency jars.

The code is built with Maven, but I also have an SBT setup for it. I have a passing knowledge of Gradle, but as I understand it, none of these tools really supports the idea of creating a test classpath by unzipping and manipulating dependency jars. Yes, they all have tricks like the one you used in https://github.com/jprante/elasticsearch-langdetect/blob/master/build.gradle#L121 to remove whole jars from the classpath, but not for manipulating the contents of individual classpath entries for tests.

It is of course possible; after all, both the Maven shade plugin and the Gradle shadow plugin you linked solve that exact problem. But these are tied to the release steps of the build, and as far as I know, there is no natural way to run my unit tests with a classpath pointing solely at an uber jar. So all the jar hell check is buying is forcing me to move my unit tests into integration tests, and forcing the integration tests to work off the flat jar (which I still need to figure out how to do).
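For reference, the offline relocation being described looks roughly like this in a POM (a hedged sketch of a maven-shade-plugin configuration; the Guava pattern, the `myproject.shaded` prefix, and the version are illustrative, not taken from this thread):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Move Guava classes out of the way of conflicting copies -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>myproject.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Note that this runs at package time, so it does not change the classpath the unit tests see.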

As for fixing the jar hell in the dependencies, it is just not possible. I have already fixed many simple cases, like commons-beanutils and Netty, which is brought in both as an uber jar and as individual deps in my build, and for smaller libraries I have worked on upstream fixes to their POMs, but I still have dozens of conflicts (all easily seen with duplicate-finder), and some are really out of my league. I would love to see anyone who has been able to run unit tests with ES 2.x and Spark together. Direct shading is of no help here, because the libraries are used directly in the tests. I would need to shade Guava within Spark, and depend on the resulting jar instead of the original Spark libraries.

And the spark issue is just one of many. For example, hadoop has some fake conflicts with itself:

[WARNING] Found duplicate and different classes in [org.apache.hadoop:hadoop-yarn-client:2.7.1, org.apache.hadoop:hadoop-yarn-common:2.7.1]:
[WARNING]   org.apache.hadoop.yarn.client.api.impl.package-info
[WARNING]   org.apache.hadoop.yarn.client.api.package-info
[WARNING] Found duplicate and different classes in [org.apache.hadoop:hadoop-yarn-api:2.7.1, org.apache.hadoop:hadoop-yarn-common:2.7.1]:
[WARNING]   org.apache.hadoop.yarn.factories.package-info
[WARNING]   org.apache.hadoop.yarn.factory.providers.package-info
[WARNING]   org.apache.hadoop.yarn.util.package-info

And so does spark:

[WARNING] Found duplicate and different classes in [org.apache.spark:spark-catalyst_2.10:1.5.0, org.apache.spark:spark-core_2.10:1.5.0, org.apache.spark:spark-graphx_2.10:1.5.0, org.apache.spark:spark-launcher_2.10:1.5.0, org.apache.spark:spark-mllib_2.10:1.5.0, org.apache.spark:spark-network-common_2.10:1.5.0, org.apache.spark:spark-network-shuffle_2.10:1.5.0, org.apache.spark:spark-repl_2.10:1.5.0, org.apache.spark:spark-sql_2.10:1.5.0, org.apache.spark:spark-streaming_2.10:1.5.0, org.apache.spark:spark-unsafe_2.10:1.5.0, org.spark-project.spark:unused:1.0.0]:
[WARNING]   org.apache.spark.unused.UnusedStubClass

And scala-lang bundles stuff it probably shouldn't:

[WARNING] Found duplicate and different classes in [org.fusesource.jansi:jansi:1.4, org.scala-lang:jline:2.10.4]:
[WARNING]   org.fusesource.jansi.Ansi
[WARNING]   org.fusesource.jansi.AnsiConsole

And my dependencies bring conflicting opencsv versions:

[WARNING] Found duplicate and different classes in [au.com.bytecode:opencsv:2.4, net.sf.opencsv:opencsv:2.3]:
[WARNING]   au.com.bytecode.opencsv.CSVReader
[WARNING]   au.com.bytecode.opencsv.CSVWriter
[WARNING]   au.com.bytecode.opencsv.bean.HeaderColumnNameTranslateMappingStrategy

So I will need to solve each one of those to be able to run Elasticsearch tests, even though I already have a solution for production through the creation of an uber jar. All I really need is a flag to disable the jar hell check in unit tests. With that flag, I could even choose to run the tests again in the integration phase against the flattened jar, but that is later in the development cycle, and slower.

I hope that exposition of the problem helps. ES is a great product, and I will definitely hack the source to get my tests to run if needed, but it seems to me that many people will face similar issues, and it would be best to leave a system property toggle for when fixing the jar hell is not feasible. Again, in production there is nothing to fix if you are running a flattened jar (the damage is already done), and if I understand it right, people not running tests won't even hit the check.

@jprante
Contributor

jprante commented Dec 29, 2015

@bonitao I know this is the wrong place to discuss further because it may be going off-topic. You can email me privately (see my GitHub profile). The only thing I'd like to understand is why it is not possible to assemble a cleaned-up uber jar of your project by exploding all the various messy dependencies you have mentioned here, except the ES dependencies, and later pass the uber jar to the classpath of an ES integration test.

@cff3

cff3 commented Dec 31, 2015

@jprante I would really appreciate it if you could continue the discussion publicly. I'm facing the same issues as @bonitao does (especially the issue with org.apache.spark.unused.UnusedStubClass) and I would be glad to see whether there is a good down-to-earth solution to the problem.
Fighting jar hell is certainly worth a lot of effort, and it's really great that Elastic offers the JarHell checker. It would be even better if people could decide on their own whether it is more important for them to cope with jar hell or to concentrate on writing and running tests. I would argue that this decision is highly context-specific.
Currently I'm hacking around this problem by overriding org.elasticsearch.bootstrap.BootstrapForTesting.ensureInitialized().
I do this by placing my own org.elasticsearch.bootstrap.BootstrapForTesting in my source path:

package org.elasticsearch.bootstrap;

// This no-op copy shadows the real BootstrapForTesting on the classpath,
// so the jar hell check never runs.
public class BootstrapForTesting {
    public static void ensureInitialized() {}
}

which basically means I'm fighting jar hell with jar hell. Far from ideal.

@elastic elastic locked and limited conversation to collaborators Dec 31, 2015
@elastic elastic unlocked this conversation Jan 4, 2016
@PeterLappo

@jprante I'm facing the same issue as @bonitao and @cff3 (as are others, judging by other issues raised on this project), and it's taking a lot of time to sort out when all we are trying to do is write an integration test.

Surely the obvious thing is to have a system property to disable the check, with a warning if duplicates exist. After all, there is no guarantee that a duplicate class will actually be used by the code. The default can still be crash-and-burn. And even if a duplicate was used and the "wrong" class was loaded, isn't it a case of caveat emptor ("let the buyer beware")?
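The toggle being proposed here would amount to something like this (a hypothetical sketch: the property name `tests.jarhell.check` and the class `JarHellToggleSketch` are made up to illustrate the idea, not real Elasticsearch code):

```java
public class JarHellToggleSketch {

    /**
     * Returns true unless the (hypothetical) opt-out property is set to
     * disable the check; the default stays "crash and burn".
     */
    public static boolean jarHellCheckEnabled() {
        return Boolean.parseBoolean(System.getProperty("tests.jarhell.check", "true"));
    }
}
```

A test bootstrap could then warn instead of failing when the property is set to false.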

@rjernst
Member

rjernst commented Jan 5, 2016

@PeterLappo The test framework is meant to check that Elasticsearch works, and we have done a lot of work to ensure we know exactly which classes are being loaded (i.e. the jar hell work). If you are just using Elasticsearch as a service, a much better integration test is to use a real Elasticsearch node (and the test setup already allows this; see the many QA tests or integration tests for plugins in the elasticsearch repo, and the discussion in #11932). If you are embedding Elasticsearch, then it is better to have your own test base classes which initialize an embedded node the way you would in production (and constructing a Node directly does not do a jar hell check).

@s1monw
Contributor

s1monw commented Jan 6, 2016

Surely the obvious thing is to have a system property to disable the check with a warning if duplicates exist. After all there is no guarantee that the duplicate class may actually be used by the code. The default can still be crash and burn. And if even if a duplicate was used and the "wrong" class was loaded isn't it a case of caveat emptor ("let the buyer beware" ) ?

I think 2 years ago I would have agreed with you. Now, after all the things that I have seen working on ES core for a long time, I can tell you any kind of opt-out and/or leniency is the root of all evil. The only thing that I am considering, as a temporary solution until we have a good and released test fixture for a real integ test, is to add an opt-out to the jar hell check in the test bootstrap. On the real node, I am sorry, we won't go there, to protect us and others from even getting to the point where this can be trouble.

@PeterLappo

@rjernst @s1monw
Hi Guys,
I understand that you may want to use JarHell to ensure you have a clean build to test ES or ES plugins, but my use case is different.

I have some client code written with Jest that writes to ES, and I want to create a test that proves my code works. I have tests working, but they rely on a local instance of ES running, which is not ideal, as I'd need to ensure the tests run on integration servers or other people's machines. Our code is not so easy to clean, as there are several conflicts that JarHell detects which are beyond my control, e.g. slf4j in dependent libraries. So I would like to spin up an instance of ES, test my Jest client logic, and verify the data exists in ES as I expect. At the end of the test I then want to shut down ES and clean up any data.

Ideally I want

  • an in-memory version of ES rather than one storing data on disk;
  • a random unused port, since Jest talks to ES over http on a host:port;
  • not to inherit from anything, as I'm using ScalaTest, which has its own base class that I inherit from (it's not a trait, which I must admit is sub-optimal).

Obviously I can start an ES instance in a separate process during my test and do all the setup myself, but I was hoping something may exist already, and maybe an example that I can copy to get me started?

@rjernst
Member

rjernst commented Jan 6, 2016

@PeterLappo Having external services for integration tests is exactly the point of us adding test fixtures (in master). See #15561. For 2.x, you can look at how the client tests start an ES cluster.

@PeterLappo

Thanks, will take a look.

@anjithp

anjithp commented Jan 14, 2016

I'm also facing the same problem as @bonitao. Is there any solution - other than modifying the elasticsearch source code - to this issue? Your help will be much appreciated.

@PeterLappo

We gave up and wrote a script that started an ES instance, ran the tests, and then shut down ES.
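For anyone reaching the same workaround, a wrapper along these lines is straightforward (a hedged sketch, not the script from this thread: ES_HOME, the port, and the test command are assumptions to adapt, and the port wait relies on bash's /dev/tcp):

```shell
#!/usr/bin/env bash
# Start an external ES node, run the test suite against it, then shut it down.
# ES_HOME and the test command are placeholders; adjust for your project.

ES_HOME="${ES_HOME:-$HOME/elasticsearch-2.1.0}"

# Poll until something is listening on localhost:$1, for up to $2 seconds.
wait_for_port() {
  local port=$1 timeout=${2:-30}
  for _ in $(seq "$timeout"); do
    if (echo > "/dev/tcp/localhost/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

start_es() {
  "$ES_HOME/bin/elasticsearch" -d -p es.pid   # daemonize and record the pid
  wait_for_port 9200 60 || { echo "ES did not come up" >&2; return 1; }
}

stop_es() {
  [ -f es.pid ] && kill "$(cat es.pid)" && rm -f es.pid
}

# Typical usage:
# start_es
# mvn test            # or: sbt test
# stop_es
```

Cleaning up test data between runs is left to the suite itself, e.g. deleting the test indices in a teardown step.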


@sawyercade

Echoing what others have said here: we build an uber jar of ES with some shaded/relocated dependencies, mostly due to version conflicts between ES and our project's dependencies. We would LOVE to be able to use ESIntegTestCase to test our mappings, transforms, index requests, and queries, but at the moment we are unable to do so without shading the entire test jar, which raises a whole host of other issues.

@s1monw any chance of adding that opt-out to the jar hell check in the test bootstrap soon?

@ndtreviv

I stumbled across this today, and am now also fighting jar hell in transitive dependencies.

I get the sentiment re: jar hell, I really do, and I want to be on board. At the moment, I can't use Elasticsearch with Apache Storm and Flux, because Flux has commons-cli as a dependency and is "overriding" class behaviour with its own jar hell, a la @cff3:

Caused by: java.lang.IllegalStateException: jar hell!
class: org.apache.commons.cli.AlreadySelectedException
jar1: /Users/me/.m2/repository/org/apache/storm/flux-core/1.0.1/flux-core-1.0.1.jar
jar2: /Users/me/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar


I can't do anything about this, and it's holding up my development.
Like I said, I agree with the sentiment, but it's totally knee-capping my work, it's not my fault, and I'm powerless to do anything about it. This makes me sad. Very sad.

How would others solve this?

@dadoonet
Member

@ndtreviv Maybe you could find some ideas here: https://www.elastic.co/blog/to-shade-or-not-to-shade.
