For the sink side:
I'm a bit more comfortable with "batch mode" than with "run once and it
will do something every hour": the former puts scheduling firmly in the
user's hands (and their cron), while the latter means that connector
developers need to figure out schedules.
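To make "batch mode" concrete, here's a rough sketch of what a cron-driven
sink run could look like against the plain Java consumer API. This is a
sketch only, not copycat code: the topic name, group id, and writeToSink()
are placeholders. The idea is that the process drains everything up to the
log end offsets it saw at startup, commits, and exits; cron owns the
"every hour" part.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class HourlyExport {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "hourly-export");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("export-topic"));
            Map<TopicPartition, Long> end = null;

            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<byte[], byte[]> record : records)
                    writeToSink(record);       // hand off to the external system
                if (!records.isEmpty())
                    consumer.commitSync();     // commit only what we actually wrote

                // Snapshot "the end of the log" once we have an assignment; the
                // run is done when every partition has caught up to that point.
                if (end == null && !consumer.assignment().isEmpty())
                    end = consumer.endOffsets(consumer.assignment());
                if (end != null) {
                    Map<TopicPartition, Long> snapshot = end;
                    boolean caughtUp = consumer.assignment().stream()
                        .allMatch(tp -> consumer.position(tp) >= snapshot.getOrDefault(tp, 0L));
                    if (caughtUp)
                        break;                 // exit; cron starts the next run
                }
            }
        }
    }

    private static void writeToSink(ConsumerRecord<byte[], byte[]> record) {
        // placeholder for the actual sink write
    }
}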
For the source side:
I'm not convinced...
On Fri, Aug 14, 2015 at 10:57 AM, Jay Kreps wrote:
> I thought batch was dead? :-)
>
> Yeah I think this would be really useful. Kafka kind of allows you to unify
> batch and streams since you produce or consume your stream on your own
> schedule so you would want the ingress/egress to work the same.
I thought batch was dead? :-)
Yeah I think this would be really useful. Kafka kind of allows you to unify
batch and streams since you produce or consume your stream on your own
schedule so you would want the ingress/egress to work the same.
Ewen, rather than sleeping, I think the use case is that...
The JDBC connector I started implementing just handles this manually; it
isn't much code (and could be made into a simple utility):
https://github.com/confluentinc/copycat-jdbc/blob/master/src/main/java/io/confluent/copycat/jdbc/JdbcSourceTask.java#L152
Given the current APIs, sources can just handle this themselves.
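The relevant part boils down to roughly this. It's a simplified sketch, not
the actual copycat-jdbc code: PeriodicSourceTask, fetchBatch(), and
poll.interval.ms are placeholder names, and I'm using the current connect
class names rather than the copycat ones.

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.util.List;
import java.util.Map;

public abstract class PeriodicSourceTask extends SourceTask {
    private long pollIntervalMs;
    private long nextPollMs;

    @Override
    public void start(Map<String, String> props) {
        pollIntervalMs = Long.parseLong(
            props.getOrDefault("poll.interval.ms", "3600000")); // default: hourly
        nextPollMs = System.currentTimeMillis();
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        long now = System.currentTimeMillis();
        if (now < nextPollMs)
            Thread.sleep(nextPollMs - now); // the framework calls poll() in a
                                            // loop; blocking here paces it
        nextPollMs = System.currentTimeMillis() + pollIntervalMs;
        return fetchBatch(); // e.g. run the JDBC query and convert rows
    }

    // Subclasses produce one batch of records per scheduled poll.
    protected abstract List<SourceRecord> fetchBatch();

    @Override
    public void stop() { }
}

So periodic behavior lives entirely inside the connector, with no scheduling
support needed from the framework.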
Hi Team Kafka,
(sorry for the flood, this is the last one! promise!)
If you tried out PR-99, you know that CopyCat now does ongoing
export/import. So it will continuously read data from a source and write it
to Kafka (or vice versa). This is great for tailing logs and replicating
from the MySQL binlog.