Is it possible to sink processed stream data into a database using PyFlink? All of the methods I have found for writing out processed data are limited to text, CSV, or JSON formats, and there seems to be no way to sink data into a database.

1 Answer

You could use SQL DDL within PyFlink to define a JDBC table sink that you can then insert into. That will look something like this:

my_sink_ddl = """
CREATE TABLE MyUserTable (
  id BIGINT,
  name STRING,
  age INT,
  status BOOLEAN,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://localhost:3306/mydatabase',
   'table-name' = 'users'
);
"""

t_env.sql_update(my_sink_ddl)
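
For context, here is a fuller, self-contained sketch of the same idea (not part of the original answer). It assumes a newer PyFlink release where execute_sql has replaced the deprecated sql_update, a MySQL database reachable at jdbc:mysql://localhost:3306/mydatabase with a users table, and the flink-connector-jdbc and MySQL driver JARs on the classpath. The datagen source is a hypothetical stand-in for whatever processed stream you actually have.

from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a streaming table environment.
env_settings = EnvironmentSettings.in_streaming_mode()
t_env = TableEnvironment.create(env_settings)

# Hypothetical source for illustration: the built-in datagen connector
# generates random rows, bounded so the job can finish.
t_env.execute_sql("""
CREATE TABLE user_updates (
  id BIGINT,
  name STRING,
  age INT,
  status BOOLEAN
) WITH (
  'connector' = 'datagen',
  'number-of-rows' = '100'
)
""")

# JDBC sink table, same DDL as in the answer (written as a single
# statement, without a trailing semicolon).
t_env.execute_sql("""
CREATE TABLE MyUserTable (
  id BIGINT,
  name STRING,
  age INT,
  status BOOLEAN,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydatabase',
  'table-name' = 'users'
)
""")

# INSERT INTO pushes the processed stream into the database; wait()
# blocks until the bounded source is exhausted and the job finishes.
t_env.execute_sql(
    "INSERT INTO MyUserTable SELECT id, name, age, status FROM user_updates"
).wait()

In your own pipeline you would replace the datagen table with the source or intermediate table that holds your processed stream; the sink definition and the INSERT INTO statement stay the same.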

1 Comment

I don't understand the point about "where they should be placed". A sink is always a terminal node in the dataflow DAG. But no, there is no Flink connector for MongoDB.
