I'm executing the following snippet in two different environments.
StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.createRemoteEnvironment("xxxxxxxxx", 6123);
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(streamEnv);
DataStream<Tuple2<Integer, String>> stream1 = streamEnv.fromElements(new Tuple2<>(1, "hello"));
DataStream<Row> dataStream = tableEnv.toAppendStream(tableEnv.sql("SELECT f0, f1 from a"), Row.class);
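For reference, a fuller version of that snippet might look as follows. This is a sketch only, not meant to run as-is: it assumes the Flink 1.3-era Table API, and the host, jar path, and table registration are placeholders I added so the SQL query can resolve table "a".

```java
// Sketch, assuming Flink 1.3.x APIs; host and jar path are placeholders.
StreamExecutionEnvironment streamEnv =
        StreamExecutionEnvironment.createRemoteEnvironment("jobmanager-host", 6123, "path/to/job.jar");
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(streamEnv);

DataStream<Tuple2<Integer, String>> stream1 = streamEnv.fromElements(new Tuple2<>(1, "hello"));
// Register the stream as table "a" so the query below can reference it.
tableEnv.registerDataStream("a", stream1, "f0, f1");
DataStream<Row> dataStream = tableEnv.toAppendStream(tableEnv.sql("SELECT f0, f1 FROM a"), Row.class);
dataStream.print();
streamEnv.execute("classloading-repro");
```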
When executing through the IDE everything works fine. However, if I execute the same code loaded by a different class loader used in our application, I get the following error on the Flink side.
org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load user class: ch.qos.logback.classic.Logger
ClassLoader info: URL ClassLoader:
Class not resolvable through given classloader.
On the client side we are using logback, but according to the snippet there should be no need to load logback on the Flink side. Is there any reference created while building the stream graph or job that might pull in logback as a dependency? Or does Flink assume the client's logging mechanism?
Please take a look at FLINK-6767.
On Wed, Jul 26, 2017 at 3:53 AM, nragon <[hidden email]> wrote:
I've changed that line and compiled it into lib/. Error remains.
I'm running a local cluster with start-local.sh
The only difference is that IntelliJ is using log4j and the other application is using logback.
Moreover, the snippet is quite simple; it does not reference any user class other than Flink's.
Does Flink use the client-side logging implementation and try to use it on the server side? That would explain the logback dependency in this case.
You seem to have a reference to the Logback Logger somewhere in your code.
The class for that logger seems not to be in the user code jar file, nor in the Flink lib directory.
Since Flink does not bundle logback by itself, you need to package this dependency explicitly or add the logback jar to the lib folder.
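For the lib-folder option, the deployment would look roughly like this. Paths and the logback version are placeholders, so treat this as a sketch to adapt, not exact commands:

```shell
# Copy the logback jars into Flink's lib/ so the cluster-side
# classloader can resolve ch.qos.logback.classic.Logger.
# FLINK_HOME and jar versions are assumptions; adjust to your installation.
FLINK_HOME=${FLINK_HOME:-/opt/flink}
cp logback-classic-1.2.3.jar logback-core-1.2.3.jar "$FLINK_HOME/lib/"

# Restart the local cluster so the new jars are picked up.
"$FLINK_HOME/bin/stop-local.sh"
"$FLINK_HOME/bin/start-local.sh"
```

The alternative is to bundle logback into the user fat jar, so it travels with the job instead of living on every cluster node.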
On Wed, Jul 26, 2017 at 3:01 PM, nragon <[hidden email]> wrote:
It seems there is a bug in the internal Table API operators: they store the Logger that is available on the client machine, and deserializing that Logger fails on the cluster. I created a Jira issue for this: https://issues.apache.org/jira/browse/FLINK-7398
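The failure mode described here is plain Java serialization behavior: a non-static, non-transient logger field becomes part of the serialized operator, so the logger class must be resolvable on the remote side. A minimal, Flink-free sketch (class names are mine, and it uses the JDK's java.util.logging.Logger, which is not Serializable) shows why static or transient logger fields avoid the problem:

```java
import java.io.*;
import java.util.logging.Logger;

public class LoggerSerialization {
    // Problematic: an instance field is pulled into the serialized object
    // graph, and Logger does not implement Serializable.
    static class BadFunction implements Serializable {
        private final Logger log = Logger.getLogger("bad");
    }

    // Safe: static fields are skipped by Java serialization entirely
    // (transient fields would be skipped as well).
    static class GoodFunction implements Serializable {
        private static final Logger LOG = Logger.getLogger("good");
    }

    // Returns true if the object can be written by Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("instance logger serializes: " + serializes(new BadFunction()));
        System.out.println("static logger serializes: " + serializes(new GoodFunction()));
    }
}
```

This is the usual reason framework functions declare loggers as `private static final`: the logger is then resolved on whichever JVM runs the code, rather than shipped with the serialized object.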