Happy new year from my side, too, and thanks a lot to Hequn for helping out with the weekly updates during the last three weeks! I enjoyed reading these myself for a change.
This week's community digest features an update on Flink 1.10 release testing, a proposal for a SQL catalog to read the schema of relational databases and the Call for Presentations of Flink Forward San Francisco.
* [releases] The community is still testing and fixing bugs for Flink 1.10. You can follow the effort on the release burndown board. It should not be long until a first RC is ready.
* [sql] Bowen proposes to add a JDBC and Postgres Catalog to the Table API. With this, Flink could automatically derive tables from the corresponding tables in relational databases. Currently, users need to manually create these tables (incl. their schema) on the Flink side. [2,3]
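To illustrate what the proposal would remove, this is roughly the kind of DDL users currently have to write by hand for every relational table they want to query. This is only a sketch: the property keys follow the legacy Flink 1.10 JDBC connector, and the schema and connection details are made up.

```sql
-- Manually mirroring a Postgres table in Flink today (illustrative only).
-- With a JDBC/Postgres catalog, this schema would be read from the
-- database automatically instead.
CREATE TABLE orders (
  order_id BIGINT,
  customer VARCHAR,
  amount   DECIMAL(10, 2)
) WITH (
  'connector.type'     = 'jdbc',
  'connector.url'      = 'jdbc:postgresql://localhost:5432/shop',
  'connector.table'    = 'orders',
  'connector.username' = '...',
  'connector.password' = '...'
);
```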
* [configuration] Xintong proposes to change some of the default values for Flink's memory configuration following his work on FLIP-49 and is looking for feedback.
* [datastream api] Congxian proposes to unify the handling of "null" values added to AppendingState across state backends. The proposed behavior is to make all state backends refuse "null" values.
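The proposed semantics can be sketched in plain Java. This is not Flink's actual implementation, just a minimal stand-in for an appending state that fails fast on "null" instead of letting different backends silently diverge.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the proposed behavior (hypothetical class, not a
// Flink API): every add(null) is rejected the same way, no matter which
// backend stores the state.
public class RejectNullAppendingState<T> {
    private final List<T> values = new ArrayList<>();

    public void add(T value) {
        if (value == null) {
            // Unified behavior: refuse null on all state backends.
            throw new NullPointerException(
                "null values are not allowed in AppendingState");
        }
        values.add(value);
    }

    public List<T> get() {
        return values;
    }
}
```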
A lot of activity due to release testing, but I did not catch any new notable bugs for already released versions.
Events, Blog Posts, Misc
===================
* Flink Forward San Francisco Call for Presentations is ending soon, but you still have a chance to submit your talk to the one (and possibly only) Apache Flink community conference in North America. In case of questions or if you are unsure whether to submit, feel free to reach out to me personally. 
* Upcoming Meetups
  * On January 18th, Preetdeep Kumar will introduce the basics of Flink's DataStream API, followed by a hands-on demo. This will be an online event; see the meetup link for more details.
  * On January 22nd, my colleague Alexander Fedulov will talk about Fraud Detection with Apache Flink at the Apache Flink Meetup in Madrid.