2020-02-27 09:18:55 UTC - Prashant  Shandilya: @Prashant  Shandilya has joined 
the channel
----
2020-02-27 09:19:39 UTC - Prashant  Shandilya: I'm using Ansible (Python) to 
set up a Pulsar cluster. All works fine, except my broker keeps restarting.
----
2020-02-27 09:19:52 UTC - Prashant  Shandilya: Has anyone faced a similar issue?
----
2020-02-27 11:45:21 UTC - Yuvaraj Loganathan: Please check and share the logs
----
2020-02-27 16:43:53 UTC - matt_innerspace.io: that makes sense - thanks for the 
heads up. Wondering whether a function, acting as a consumer, would be slightly 
more intelligent, especially if the function is configured for parallelism?
----
2020-02-27 17:31:51 UTC - eilonk: @eilonk has joined the channel
----
2020-02-27 19:26:45 UTC - Addison Higham: :thinking_face: so, just ran into a 
case where a developer is looking to do the following with a Pulsar function:
• we have a few hundred topics that map one-to-one to output topics, where we 
just do some minor transformation that is fairly consistent
• we don't want to use a function per topic, as that is hard to manage and 
expensive
• the problem with doing it in one function, though, is that it becomes really 
complex to get good perf (gotta use async), and the Pulsar Functions API model, 
being message-at-a-time with no control over acking, makes that difficult
• two ideas I have (see the sketch after this list):
• 1. instead of using the context to send, make it so that a Pulsar function 
returns a `Record` object that the functions machinery can use to do the work 
of sending to multiple output topics
• 2. have an `asyncFunction` that can return a CompletableFuture, and don't 
ack the message until the future completes
----
2020-02-27 19:28:33 UTC - Tobias Macey: @Tobias Macey has joined the channel
----
2020-02-27 19:30:11 UTC - Devin G. Bost: @Addison Higham 
Can't you listen to the input topics with a regex and then dynamically route to 
the output topics based on message metadata (which you could extract)? The list 
of output topics could be specified in the function config unless it needs to 
be updated more dynamically.
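A minimal sketch of that routing pattern - the `dest` property, tenant/namespace, 
and topic names here are made up for illustration:
```
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Subscribe the function to the inputs with a topic pattern, e.g.
// --topics-pattern 'persistent://tenant/ns/input-.*', then route each
// record to an output topic derived from message metadata.
public class RoutingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) throws Exception {
        // Pull the routing key from a user property on the current message.
        String dest = context.getCurrentRecord().getProperties().get("dest");
        String outputTopic = "persistent://tenant/ns/output-" + dest;

        String transformed = input.trim(); // stand-in for the minor transformation

        context.newOutputMessage(outputTopic, Schema.STRING)
               .value(transformed)
               .sendAsync();
        return null; // null return: nothing goes to the default output topic
    }
}
```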
----
2020-02-27 21:50:20 UTC - ikeda: @ikeda has joined the channel
----
2020-02-27 23:16:56 UTC - John Duffie: could use some help with a roadblock - 
flink+pulsar+AVRO GenericRecord

We have a special capability where common code operates on GenericRecord 
objects. It has no need to know the schema until the schema arrives, just 
prior to the first message. This works b/c the keys into the record are 
injected programmatically at runtime. We've been doing this for years in 
other environments.

The problem is, the Pulsar Flink consumer *must* specify the schema before it 
connects. That “startup” schema is used for compatibility checks. In our 
case, there is no need for a check. Is there a way to get the consumer builder 
to ignore the provided schema?
```PulsarSourceBuilder
        .builder(initialSchema)```

Trying to make a REST call to the schema registry and then use the result when 
starting up the consumer is problematic in a Flink environment.
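For reference, the plain Pulsar client can already defer schema resolution with 
`Schema.AUTO_CONSUME()`; a minimal sketch of the behavior we'd want from the 
Flink source - the topic, subscription, and field names are made up:
```
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.GenericRecord;

public class AutoConsumeExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker URL
                .build();

        // AUTO_CONSUME resolves the schema from the topic at runtime,
        // so no compile-time schema is required.
        Consumer<GenericRecord> consumer = client
                .newConsumer(Schema.AUTO_CONSUME())
                .topic("persistent://public/default/events") // made-up topic
                .subscriptionName("generic-sub")              // made-up name
                .subscribe();

        Message<GenericRecord> msg = consumer.receive();
        // Field names are discovered (or injected) at runtime.
        Object field = msg.getValue().getField("someField"); // hypothetical field
        consumer.acknowledge(msg);
        client.close();
    }
}
```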
----
2020-02-28 00:04:32 UTC - yijie: please try to use 
https://github.com/streamnative/pulsar-flink
----
2020-02-28 07:18:04 UTC - Sijie Guo: @John Duffie you can check out the 
connector that yijie pointed out; that's the one being contributed to upstream 
Flink.
----
