Are there any news on custom trigger support for SQL/Table API?


Theo Diefenthal
Hi there,

I am currently evaluating whether to let our experienced system users write Flink SQL queries directly. At the moment, all queries our users need are implemented programmatically.

There is one major problem preventing us from just giving SQL to our users directly. Almost all of our users' queries are "threshold-based", something like "Notify me immediately if there were (>=)10 machine restarts today". So the time window is one day, but if there are already 10 outages at 10:00, the window must fire immediately.

In our programmed pipelines, that's rather easy to accomplish: we just use count triggers. In Flink SQL, this does not seem to be possible at all. See [1].
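The programmatic approach described above boils down to: count elements per key and window, and fire early as soon as the threshold is reached instead of waiting for the window to close. A minimal sketch of that early-firing logic (plain Python for illustration, not the Flink API; the function name, event shape, and threshold constant are made up for this example):

```python
from collections import defaultdict

RESTART_THRESHOLD = 10  # fire early once 10 restarts are seen in one window


def process_events(events):
    """Simulate a daily tumbling window with an early-firing count trigger.

    `events` is an iterable of (day, machine_id) tuples. Yields
    (day, machine_id) as soon as the count reaches the threshold,
    rather than waiting for the day window to close.
    """
    counts = defaultdict(int)
    for day, machine in events:
        key = (day, machine)
        counts[key] += 1
        if counts[key] == RESTART_THRESHOLD:
            yield key  # early notification; the window is not yet closed


# Example: machine m1 restarts 10 times, m2 only 3 times on the same day.
events = [("2019-08-23", "m1")] * 10 + [("2019-08-23", "m2")] * 3
alerts = list(process_events(events))
print(alerts)  # [('2019-08-23', 'm1')]
```

In the DataStream API this corresponds to attaching a count trigger (e.g. `CountTrigger.of(10)`) to a one-day time window; the sketch only illustrates the early-firing semantics that, per [1], cannot currently be expressed in Flink SQL.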

That mailing list entry references a draft design document (a Word doc) discussing SQL syntax enrichments (an EMIT clause), but as a preliminary step, it seems Flink was planned to provide trigger customization via QueryConfig:

"Flink’s Table API / SQL features a QueryConfig object to configure the execution of a streaming query. So far (Flink 1.3), it only allows to specify a state retention period but we plan to extend it with trigger policies. This way we do not have to extend SQL itself to specify triggering policies. However, an EMIT statement could of course override the configuration of a QueryConfig. So from our point of view it would be an optional alternative."

That mailing list article is more than a year old, and the Word doc refers to Flink 1.3. I checked QueryConfig in my Flink 1.8.0, and sadly it still does not seem to support this. So my question is: Is there any timeline / roadmap / JIRA issue to track this, and is this still a planned feature (near-/mid-/long-term)? Just being able to change the trigger via QueryConfig would already be really great for us, and having the EMIT clause would of course be awesome.

Best regards
Theo

[1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/SQL-Do-Not-Support-Custom-Trigger-td20932.html

Re: Are there any news on custom trigger support for SQL/Table API?

Fabian Hueske-2
Hi Theo,

The work on custom triggers has been put on hold due to some major refactorings (splitting the modules, porting Scala code to Java, new type system, new catalog interfaces, integration of the Blink planner).
It's also not on the near-term roadmap AFAIK.
To be honest, I'm not sure how much support for custom triggers there will be.

Best,
Fabian

On Fri, Aug 23, 2019 at 14:39, Theo Diefenthal <[hidden email]> wrote: