[GitHub] [pulsar-dotpulsar] blankensteiner commented on pull request #95: Add token factory support, respond to server auth challenge on token refresh

2022-02-03 Thread GitBox


blankensteiner commented on pull request #95:
URL: https://github.com/apache/pulsar-dotpulsar/pull/95#issuecomment-1028705632


   Hi @goldenccargill 
   I couldn't run the tests, so I made some changes to them and also 
ensured that only one standalone instance is used for all the tests in that 
project. I am able to run them locally, but they fail in the GitHub workflow. I 
think the problem is the token given to the broker (in standalone.conf), 
which was also the cause of my initial problems running them locally.
   As I wrote in the commit message, I made some changes to bring it closer to 
the Java client (IAuthentication and AuthenticationFactory) and made the 
Connection class responsible for getting the token (instead of splitting that 
responsibility with ConnectionPool).
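   For readers following the thread, the shape being described — an 
IAuthentication abstraction plus a factory wrapping a token supplier, mirroring 
the Java client — can be sketched in Java. This is a hypothetical illustration, 
not the actual DotPulsar (C#) API; the method names and async shape here are 
assumptions:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Assumed sketch of the IAuthentication shape discussed in the thread;
// the real DotPulsar API is C# and may differ in detail.
interface IAuthentication {
    String getAuthMethodName();
    // Called on each CONNECT and on each server auth challenge,
    // so a supplier-backed token can be refreshed transparently.
    CompletableFuture<byte[]> getAuthenticationData();
}

final class AuthenticationFactory {
    private AuthenticationFactory() {}

    /** Token auth backed by a supplier, so a fresh token is fetched when asked. */
    static IAuthentication token(Supplier<String> tokenSupplier) {
        return new IAuthentication() {
            @Override public String getAuthMethodName() { return "token"; }
            @Override public CompletableFuture<byte[]> getAuthenticationData() {
                return CompletableFuture.supplyAsync(() -> tokenSupplier.get().getBytes());
            }
        };
    }
}

public class AuthSketch {
    public static void main(String[] args) throws Exception {
        IAuthentication auth = AuthenticationFactory.token(() -> "jwt-from-your-token-service");
        System.out.println(auth.getAuthMethodName());                     // prints "token"
        System.out.println(new String(auth.getAuthenticationData().get()));
    }
}
```

   With this split, the Connection owns token retrieval: it simply calls 
getAuthenticationData() whenever it needs credentials, which is also what makes 
responding to a server auth challenge a natural fit.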


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [pulsar-dotpulsar] goldenccargill commented on pull request #95: Add token factory support, respond to server auth challenge on token refresh

2022-02-03 Thread GitBox


goldenccargill commented on pull request #95:
URL: https://github.com/apache/pulsar-dotpulsar/pull/95#issuecomment-1028719172


   Ok. I went with separate instances because I wanted one that used token 
auth for the new tests and one without it for all the existing tests.
   
   As long as the token is generated from the same secret key, it should be 
accepted. I remember spending a lot of time getting the instance up and 
running with proper token auth. I also vaguely remember hitting an issue 
similar to the one you mentioned in the commit message, where the default 
namespaces didn't exist, but I can't remember how I fixed it.
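   The "same secret key" point can be illustrated: a JWT validates as long as 
its HMAC signature was produced with the same secret the broker checks against. 
Below is a minimal stdlib-only Java sketch of that property (a real setup would 
generate tokens with `bin/pulsar tokens create` or a JWT library and would 
include expiry claims; `MiniJwt` is purely illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MiniJwt {
    private static String b64url(byte[] in) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(in);
    }

    /** Sign a minimal JWT: base64url(header).base64url(payload).base64url(HMAC-SHA256). */
    public static String sign(String subject, byte[] secretKey) throws Exception {
        String header = b64url("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64url(("{\"sub\":\"" + subject + "\"}").getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] sig = mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + b64url(sig);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "my-shared-secret".getBytes(StandardCharsets.UTF_8);
        String brokerToken = sign("admin", key); // token placed in standalone.conf
        String clientToken = sign("admin", key); // token handed to the client
        // HMAC is deterministic: same secret + same claims -> identical token,
        // so the broker accepts it.
        System.out.println(brokerToken.equals(clientToken)); // prints "true"
    }
}
```

   Conversely, a token signed with a different secret produces a different 
signature and fails validation, which is consistent with the workflow failures 
described above.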
   
   Not sure why the tests didn't run for you; they also passed on the build 
server. The responsibility changes are fine with me; I'd have been happy to 
make that change in the PR as well.






[GitHub] [pulsar-dotpulsar] blankensteiner commented on pull request #95: Add token factory support, respond to server auth challenge on token refresh

2022-02-03 Thread GitBox


blankensteiner commented on pull request #95:
URL: https://github.com/apache/pulsar-dotpulsar/pull/95#issuecomment-1028730259


   We might need multiple instances in the future, but the old tests could just 
have a token added; then we only have to maintain one yml-file and only have 
to spin up and wait for a single instance. I also think that, at some point, it 
would be nice to merge "StressTests" and "IntegrationTests" into one project.
   
   The token didn't work for me locally, but creating a new one did. Now the 
token doesn't work in the workflow. I would have loved for Pulsar to make it 
easy to "one line" an entire test setup with TLS, authentication, and so on. 
I'm not sure how to solve this, as the Metadata exception we are seeing means 
the token the broker is using doesn't validate, for some weird reason.
   
   If you prefer, I can guide you more on the next PR instead of just merging 
and changing stuff myself. I did it this way because experience tells me that 
too much guidance can make some contributors abandon the PR.






[GitHub] [pulsar-dotpulsar] goldenccargill commented on pull request #95: Add token factory support, respond to server auth challenge on token refresh

2022-02-03 Thread GitBox


goldenccargill commented on pull request #95:
URL: https://github.com/apache/pulsar-dotpulsar/pull/95#issuecomment-1028759637


   Fair enough, I hadn't considered using the token for the previous tests.
   I agree that merging the stress and integration tests would be good.
   
   I think over time it might be hard to stick to only one instance, as there 
are many ways to configure the Pulsar service.
   
   I will try to find some time to take a look and see if I hit any issues 
running the tests.
   
   Regarding PRs, I guess you could list the improvements you'd like and 
clarify that you can also make them yourself if that's preferable, so as not 
to scare people away.






[GitHub] [pulsar-dotpulsar] VisualBean commented on issue #92: Client usage

2022-02-03 Thread GitBox


VisualBean commented on issue #92:
URL: 
https://github.com/apache/pulsar-dotpulsar/issues/92#issuecomment-1028808586


   Looks good. From my quick glance, it looks like we can simply implement a 
different auth factory for OAuth or whatever else we want?
   I see another issue, #94, has been created specifically for OAuth.






[GitHub] [pulsar-dotpulsar] blankensteiner commented on issue #92: Client usage

2022-02-03 Thread GitBox


blankensteiner commented on issue #92:
URL: 
https://github.com/apache/pulsar-dotpulsar/issues/92#issuecomment-1028824684


   Yeah, go nuts, you just need to implement IAuthentication :-)
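   As a hedged sketch of what "just implement IAuthentication" might look like 
for OAuth: the interface shape below is assumed (the real DotPulsar API is C# 
and may differ), and the token fetch is stubbed rather than being a real OAuth 
client-credentials call; the point is the cache-until-near-expiry pattern:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Assumed interface shape, mirroring the thread; the real DotPulsar API is C#.
interface IAuthentication {
    String getAuthMethodName();
    CompletableFuture<byte[]> getAuthenticationData();
}

/** A token with an absolute expiry, as an OAuth token endpoint would return. */
final class ExpiringToken {
    final String value;
    final Instant expiresAt;
    ExpiringToken(String value, Instant expiresAt) { this.value = value; this.expiresAt = expiresAt; }
}

/** Caches the access token and only re-fetches when it is about to expire. */
final class OAuthAuthentication implements IAuthentication {
    private final Supplier<ExpiringToken> fetchToken; // stand-in for a real OAuth call
    private final Duration refreshMargin = Duration.ofSeconds(30);
    private ExpiringToken cached;

    OAuthAuthentication(Supplier<ExpiringToken> fetchToken) { this.fetchToken = fetchToken; }

    @Override public String getAuthMethodName() { return "token"; }

    @Override public synchronized CompletableFuture<byte[]> getAuthenticationData() {
        if (cached == null || Instant.now().isAfter(cached.expiresAt.minus(refreshMargin))) {
            cached = fetchToken.get(); // refresh; an auth challenge would hit this path too
        }
        return CompletableFuture.completedFuture(cached.value.getBytes());
    }
}

public class OAuthSketch {
    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        IAuthentication auth = new OAuthAuthentication(() -> {
            calls[0]++;
            return new ExpiringToken("tok-" + calls[0], Instant.now().plusSeconds(3600));
        });
        auth.getAuthenticationData().get();
        auth.getAuthenticationData().get(); // served from cache, no second fetch
        System.out.println(calls[0]);       // prints 1
    }
}
```

   A design note: keeping the expiry check inside the IAuthentication 
implementation means the client's Connection code stays auth-method agnostic.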






[GitHub] [pulsar-dotpulsar] VisualBean closed issue #92: Client usage

2022-02-03 Thread GitBox


VisualBean closed issue #92:
URL: https://github.com/apache/pulsar-dotpulsar/issues/92


   






[GitHub] [pulsar-dotpulsar] VisualBean commented on issue #92: Client usage

2022-02-03 Thread GitBox


VisualBean commented on issue #92:
URL: 
https://github.com/apache/pulsar-dotpulsar/issues/92#issuecomment-1028828300


   Awesome, I will most likely contribute within a couple of days then! 






[GitHub] [pulsar-dotpulsar] VisualBean edited a comment on issue #92: Client usage

2022-02-03 Thread GitBox


VisualBean edited a comment on issue #92:
URL: 
https://github.com/apache/pulsar-dotpulsar/issues/92#issuecomment-1028828300


   Awesome, I will most likely contribute an OAuth factory within a couple of 
days then! 






[GitHub] [pulsar-dotpulsar] blankensteiner commented on issue #92: Client usage

2022-02-03 Thread GitBox


blankensteiner commented on issue #92:
URL: 
https://github.com/apache/pulsar-dotpulsar/issues/92#issuecomment-1028829037


   You are most welcome. :-)






Re: [VOTE] Pulsar Release 2.9.2 Candidate 2

2022-02-03 Thread Nicolò Boschi
Hi Ran, thanks for driving the release.

I haven't tested the RC yet, but I firmly believe we should include this
pull request [1], which fixes a regression introduced in Pulsar 2.9.0.

[1] https://github.com/apache/pulsar/pull/14097



On Wed, Feb 2, 2022 at 08:34, Enrico Olivelli wrote:

> (sorry for the late reply, I am still testing, I had some other
> priorities).
>
> I hope that the community will test this RC and report back
>
>
> Enrico
>
> On Tue, Jan 25, 2022 at 15:07, Ran Gao wrote:
> >
> > Sorry, the 2.9.2 release candidate 1 was signed with the wrong certificate.
> >
> > This is the second release candidate for Apache Pulsar, version 2.9.2.
> >
> > *** Please download, test, and vote on this release. This vote will stay
> > open
> > for at least 72 hours ***
> >
> > Note that we are voting upon the source (tag), binaries are provided for
> > convenience.
> >
> > Source and binary files:
> > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.9.2-candidate-2/
> >
> > SHA-512 checksums:
> >
> >
> 563f65582c5307b4ef1e0322958ed19d7c181fb8bb8d7b8cab06ab0a6adb5520f7d18b6f97960b93c3318815529a8b8721e00e9cc9484532a2e5ed3221450094
> >  ./apache-pulsar-2.9.2-bin.tar.gz
> >
> 60d1049611b938b0ddc769132124d43820728afc8a06813a5ec9efc095c5497c59d9bbcaaf7df5b0c0e97e051d66f59c1f8ee08885d05ca2c635773e0283770a
> >  ./apache-pulsar-2.9.2-src.tar.gz
> >
> > Maven staging repo:
> > https://repository.apache.org/content/repositories/orgapachepulsar-1136
> >
> > The tag to be voted upon:
> > v2.9.2-candidate-2 (8a5d2253b888b3b865a2aedf635d672821c7)
> > https://github.com/apache/pulsar/releases/tag/v2.9.2-candidate-2
> >
> > Pulsar's KEYS file containing PGP keys we use to sign the release:
> > https://dist.apache.org/repos/dist/dev/pulsar/KEYS
> >
> > Please download the source package, and follow the README to build
> > and run the Pulsar standalone service.
>


-- 
Nicolò Boschi


Pulsar Flaky test report 2022-02-03 for PR builds in CI

2022-02-03 Thread Lari Hotari
Dear Pulsar community members,

Here's a report of the flaky tests in Pulsar CI during the observation
period of 2022-01-27 to 2022-02-03.
The full report is available as a Google Sheet,
https://docs.google.com/spreadsheets/d/165FHpHjs5fHccSsmQM4beeg6brn-zfUjcrXf6xAu4yQ

There are many more flaky test failures than what is shown in the report;
the report contains only a subset of the test failures.
The flaky tests are observed from builds of merged PRs: the GitHub Actions
logs are checked for builds where the SHA of the head of the PR matches the
SHA that got merged. This ensures that all failures found are real flakes,
since no changes were made to the PR afterwards to make the tests pass
before the PR was merged successfully.
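The SHA-matching heuristic described above can be sketched as follows; this is
a simplified model for illustration, not the actual report tooling (the Build
class and its fields are assumptions):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified model of one CI build of a PR: which commit it ran on,
// which commit eventually got merged, and which test methods failed.
final class Build {
    final String headSha, mergedSha;
    final List<String> failedTests;
    Build(String headSha, String mergedSha, List<String> failedTests) {
        this.headSha = headSha; this.mergedSha = mergedSha; this.failedTests = failedTests;
    }
}

public class FlakeReport {
    /** Count flaky failures: only builds whose head SHA equals the merged SHA
     *  qualify, because the identical code later passed and was merged unchanged. */
    static Map<String, Integer> countFlakes(List<Build> builds) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Build b : builds) {
            if (!b.headSha.equals(b.mergedSha)) continue; // code changed -> failure may be real
            for (String test : b.failedTests) counts.merge(test, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Build> builds = new ArrayList<>();
        builds.add(new Build("abc", "abc", List.of("LookupTest.testSplitBundle")));
        builds.add(new Build("abc", "abc", List.of("LookupTest.testSplitBundle", "ZKSessionTest.testDisconnection")));
        builds.add(new Build("old", "new", List.of("SomeTest.realFailure"))); // excluded: PR changed
        System.out.println(countFlakes(builds));
        // prints {LookupTest.testSplitBundle=2, ZKSessionTest.testDisconnection=1}
    }
}
```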

Here are the most flaky test methods:
(number of build failures due to each test in parentheses; detailed test
results are linked in the spreadsheet)

org.apache.pulsar.client.api.BrokerServiceLookupTest.testModularLoadManagerSplitBundle (21)
org.apache.pulsar.broker.service.PersistentTopicTest.setup (14)
org.apache.pulsar.metadata.LockManagerTest.revalidateLockOnDifferentSession (13)
org.apache.pulsar.broker.admin.AdminApi2Test.testGetListInBundle (13)
org.apache.pulsar.broker.service.RackAwareTest.testPlacement (12)
org.apache.pulsar.metadata.ZKSessionTest.testDisconnection (12)
org.apache.pulsar.broker.service.persistent.PersistentTopicStreamingDispatcherTest.setup (12)
org.apache.pulsar.client.api.BrokerServiceLookupTest.testPartitionTopicLookup (11)
org.apache.pulsar.broker.service.PersistentTopicE2ETest.testBrokerConnectionStats (11)
org.apache.pulsar.broker.service.ReplicatorTest.testDoNotReplicateSystemTopic (9)
org.apache.pulsar.broker.service.BacklogQuotaManagerTest.testConsumerBacklogEvictionTimeQuota (9)
org.apache.pulsar.broker.service.persistent.PersistentTopicStreamingDispatcherE2ETest.testBrokerConnectionStats (8)
org.apache.pulsar.testclient.PerformanceProducerTest.testMsgKey (7)
org.apache.pulsar.metadata.LockManagerTest.updateValue (6)
org.apache.pulsar.metadata.ZKSessionTest.testSessionLost (5)
org.apache.pulsar.broker.service.BacklogQuotaManagerTest.testConsumerBacklogEvictionTimeQuotaWithEmptyLedger (5)
org.apache.pulsar.broker.service.ReplicatorSubscriptionTest.testGetReplicatedSubscriptionStatus (5)


Blocker issues on 'master' and 'branch-2.9' branches

2022-02-03 Thread Nicolò Boschi
Hi folks,

Just wanted to point out that the master branch currently has two
blocker issues:
1. https://github.com/apache/pulsar/issues/14104
2. https://github.com/apache/pulsar/pull/13891

The 2.9 branch also has one blocker issue, a regression introduced with
Pulsar 2.9.0. I believe the fix should be included in the next 2.9.x release:
3. https://github.com/apache/pulsar/pull/14097

BR,
Nicolò Boschi


Re: [DISCUSS] PIP-136: Sync Pulsar policies across multiple clouds

2022-02-03 Thread Joe F
>On my first reading, it wasn't clear if there was only one topic
required for this feature. I now see that the topic is not tied to a
specific tenant or namespace. As such, we can avoid complicated
authorization questions by putting the required event topic(s) into a
"system" tenant and namespace

We should consider complicated questions. We can say why we chose not to
address one, or why it does not apply, for a particular situation.
Many namespace policies are administered by tenants. As such, any tenant
can load this topic. Is it possible for one abusive tenant to make your
system topic dysfunctional?

Pulsar committers should think about
(1) scenarios where the Pulsar cluster operators and tenant admins are
different entities and tenants can be malicious or, more probably, write
bad code that produces malicious outcomes;
(2) whether the changes introduce additional SPOFs (single points of
failure) into the cluster.

I don't think this PIP has those issues, but as a matter of practice, I
would like to see backend/system PIPs consider these questions and
explicitly state the conclusions with rationale.

Joe


On Wed, Feb 2, 2022 at 9:27 PM Michael Marshall 
wrote:

> Thanks for your responses.
>
> > I don't see a need of protobuf for this particular usecase
>
> If no one else feels strongly on this point, I am good with using a POJO.
>
> > It doesn't matter if it's system-topic or not because it's
> > configurable and the admin of the system can decide and configure it
> > according to the required persistent policy.
>
> On my first reading, it wasn't clear if there was only one topic
> required for this feature. I now see that the topic is not tied to a
> specific tenant or namespace. As such, we can avoid complicated
> authorization questions by putting the required event topic(s) into a
> "system" tenant and namespace, by default. The `pulsar/system` tenant
> and namespace seem appropriate to me.
>
> > I would keep the system topic
> > separate because this topic serves a specific purpose with specific
> schema,
> > replication policy and retention policy.
>
> I think we need a more formal definition for system topics. This topic
> is exactly the kind of topic I would call a system topic: its intended
> producers and consumers are Pulsar components. However, because
> this feature can live on a topic in a system namespace, we can avoid
> the classification discussion for this PIP.
>
> > Source region will have a broker which will create a failover consumer on
> > that topic and a broker with an active consumer will watch the metadata
> > changes and publish the changes to the event topic.
>
> How do we designate the host broker? Is it manual? How does it work
> when the host broker is removed from the cluster?
>
> If we collocate the active consumer with the broker hosting the event
> topic, can we skip creating the failover consumer?
>
> > PIP briefly talks about it but I will update the PIP with more
> > explanation.
>
> I look forward to seeing more about this design for conflict resolution.
>
> Thanks,
> Michael
>
>
>
> On Tue, Feb 1, 2022 at 3:01 AM Rajan Dhabalia 
> wrote:
> >
> > Please find my response inline.
> >
> > On Mon, Jan 31, 2022 at 9:17 PM Michael Marshall 
> > wrote:
> >
> > > I think this is a very appropriate direction to take Pulsar's
> > > geo-replication. Your proposal is essentially to make the
> > > inter-cluster configuration event driven. This increases fault
> > > tolerance and better decouples clusters.
> > >
> > > Thank you for your detailed proposal. After reading through it, I have
> > > some questions :)
> > >
> > > 1. What do you think about using protobuf to define the event
> > > protocol? I know we already have a topic policy event stream
> > > defined with Java POJOs, but since this feature is specifically
> > > designed for egressing cloud providers, ensuring compact data transfer
> > > would keep egress costs down. Additionally, protobuf can help make it
> > > clear that the schema is strict, should evolve thoughtfully, and
> > > should be designed to work between clusters of different versions.
> > >
> >
> > >>> I don't see a need for protobuf for this particular use case, for two
> > reasons:
> > >> a. policy changes don't generate huge traffic (it could be ~1 rps), and
> > >> b. it doesn't need performance optimization.
> > >> It is similar to storing the policy in text instead of protobuf, which
> > >> doesn't impact footprint size or performance due to the limited number
> > >> of update operations and the relatively low complexity. I agree that
> > >> protobuf could be another option, but in this case it's not needed.
> > >> Also, a POJO can also support schema and versioning.
> >
> >
> >
> > >
> > > 2. In your view, which tenant/namespace will host
> > > `metadataSyncEventTopic`? Will there be several of these topics or is
> > > it just hosted in a system tenant/namespace? This question gets back
> > > to my questions about system topics on this mailing list last week
> [0].

Re: Pulsar Flaky test report 2022-02-03 for PR builds in CI

2022-02-03 Thread Lari Hotari
I added the links to GitHub issues to the spreadsheet:
https://docs.google.com/spreadsheets/d/165FHpHjs5fHccSsmQM4beeg6brn-zfUjcrXf6xAu4yQ/edit#gid=456314619
Let's focus on fixing the top 10 most flaky tests asap. Please comment on the
issue you are working on, so that we don't unnecessarily duplicate work
fixing the issues.

-Lari

Test method name -> reported issue:

org.apache.pulsar.client.api.BrokerServiceLookupTest.testModularLoadManagerSplitBundle
  https://github.com/apache/pulsar/issues/13102
org.apache.pulsar.broker.service.PersistentTopicTest.setup
  https://github.com/apache/pulsar/issues/13620
org.apache.pulsar.metadata.LockManagerTest.revalidateLockOnDifferentSession
  https://github.com/apache/pulsar/issues/11690
org.apache.pulsar.broker.admin.AdminApi2Test.testGetListInBundle
  https://github.com/apache/pulsar/issues/14105
org.apache.pulsar.broker.service.RackAwareTest.testPlacement
  https://github.com/apache/pulsar/issues/14106
org.apache.pulsar.metadata.ZKSessionTest.testDisconnection
  https://github.com/apache/pulsar/issues/13008
org.apache.pulsar.broker.service.persistent.PersistentTopicStreamingDispatcherTest.setup
  https://github.com/apache/pulsar/issues/13808
org.apache.pulsar.client.api.BrokerServiceLookupTest.testPartitionTopicLookup
  https://github.com/apache/pulsar/issues/14046
org.apache.pulsar.broker.service.PersistentTopicE2ETest.testBrokerConnectionStats
  https://github.com/apache/pulsar/issues/10150
org.apache.pulsar.broker.service.ReplicatorTest.testDoNotReplicateSystemTopic
  https://github.com/apache/pulsar/issues/12774
org.apache.pulsar.broker.service.BacklogQuotaManagerTest.testConsumerBacklogEvictionTimeQuota
  https://github.com/apache/pulsar/issues/13952
org.apache.pulsar.broker.service.persistent.PersistentTopicStreamingDispatcherE2ETest.testBrokerConnectionStats
  https://github.com/apache/pulsar/issues/10150
org.apache.pulsar.testclient.PerformanceProducerTest.testMsgKey
  https://github.com/apache/pulsar/issues/14052
org.apache.pulsar.metadata.LockManagerTest.updateValue
  https://github.com/apache/pulsar/issues/13663
org.apache.pulsar.metadata.ZKSessionTest.testSessionLost
  https://github.com/apache/pulsar/issues/14107
org.apache.pulsar.broker.service.BacklogQuotaManagerTest.testConsumerBacklogEvictionTimeQuotaWithEmptyLedger
  https://github.com/apache/pulsar/issues/14108
org.apache.pulsar.broker.service.ReplicatorSubscriptionTest.testGetReplicatedSubscriptionStatus
  https://github.com/apache/pulsar/issues/13626
org.apache.pulsar.testclient.PerformanceTransactionTest.testConsumeTxnMessage
  https://github.com/apache/pulsar/issues/14109
org.apache.pulsar.functions.source.batch.BatchSourceExecutorTest.testPushLifeCycle
  https://github.com/apache/pulsar/issues/11735
org.apache.pulsar.metadata.MetadataCacheTest.insertionDeletion
  https://github.com/apache/pulsar/issues/14110



[GitHub] [pulsar-adapters] dlg99 opened a new pull request #34: Removing Flink in favor of the one included with Flink

2022-02-03 Thread GitBox


dlg99 opened a new pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34


   ### Motivation
   
   Removing the Flink adapter in favor of 
https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-pulsar
   The adapter here is based on an old version of Flink, which brings in 
dependencies with various CVEs; since that version, Flink has added a Pulsar 
connector to their own project. 
   
   ### Modifications
   
   Removed Flink adapter, tests, examples, and dependencies.
   
   ### Verifying this change
   
   - [ ] Make sure that the change passes the CI checks.
   
   ### Does this pull request potentially affect one of the following parts:
   
   *If `yes` was chosen, please highlight the changes*
   
 - Dependencies (does it add or upgrade a dependency): *YES*, removed
 - The public API: (yes / no)
 - The schema: (yes / no / don't know)
 - The default values of configurations: (yes / no)
 - The wire protocol: (yes / no)
 - The rest endpoints: (yes / no)
 - The admin cli options: (yes / no)
 - Anything that affects deployment: (yes / no / don't know)
   
   ### Documentation
   
 - Does this pull request introduce a new feature? removes the feature
 - If yes, how is the feature documented? not applicable but should be 
mentioned in the release notes
 - If a feature is not applicable for documentation, explain why?
 - If a feature is not documented yet in this PR, please create a followup 
issue for adding the documentation
   






[PR] pulsar-adapters: remove Flink adapter

2022-02-03 Thread Andrey Yegorov
Hello,

I created a PR to remove Flink from the Pulsar adapters project:
https://github.com/apache/pulsar-adapters/pull/34

Motivation is:
* It is based on an old version of Flink (1.7.2, current is 1.14.3) which
brings in dependencies with various CVEs. 1.7 is not maintained anymore.
* Flink added a pulsar connector in their project
https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-pulsar
* Previous releases of the adapter are still available and can be used, if
needed

Please review and let me know if there is a reason to not deprecate/remove
the Flink adapter.

-- 
Andrey Yegorov


[GitHub] [pulsar-adapters] michaeljmarshall commented on a change in pull request #34: Removing Flink in favor of the one included with Flink

2022-02-03 Thread GitBox


michaeljmarshall commented on a change in pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#discussion_r798892369



##
File path: 
examples/kafka-streams/src/main/java/org/apache/kafka/kafkastreams/pulsar/example/PageViewTypedDemo.java
##
@@ -0,0 +1,279 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.kafka.kafkastreams.pulsar.example;
+
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.common.errors.SerializationException;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.Serializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.connect.json.JsonDeserializer;
+import org.apache.kafka.connect.json.JsonSerializer;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.kstream.Consumed;
+import org.apache.kafka.streams.kstream.Grouped;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.TimeWindows;
+import org.apache.pulsar.client.impl.schema.generic.GenericJsonSchema;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
+
+/**
+ * Slightly adjusted
+ * 
https://github.com/apache/kafka/blob/2.7/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
+ *
+ * Demonstrates how to perform a join between a KStream and a KTable, i.e. an 
example of a stateful computation,
+ * using specific data types (here: JSON POJO; but can also be Avro specific 
bindings, etc.) for serdes
+ * in Kafka Streams.
+ *
+ * In this example, we join a stream of pageviews (aka clickstreams) that 
reads from a topic named "streams-pageview-input"
+ * with a user profile table that reads from a topic named 
"streams-userprofile-input", where the data format
+ * is JSON string representing a record in the stream or table, to compute the 
number of pageviews per user region.
+ *
+ * Before running this example you must create the input topics and the output 
topic (e.g. via
+ * bin/kafka-topics --create ...), and write some data to the input topics 
(e.g. via
+ * bin/kafka-console-producer). Otherwise you won't see any data arriving in 
the output topic.
+ *
+ * The inputs for this example are:
+ * - Topic: streams-pageview-input
+ *   Key Format: (String) USER_ID
+ *   Value Format: (JSON) {"_t": "pv", "user": (String USER_ID), "page": 
(String PAGE_ID), "timestamp": (long ms TIMESTAMP)}
+ *
+ * - Topic: streams-userprofile-input
+ *   Key Format: (String) USER_ID
+ *   Value Format: (JSON) {"_t": "up", "region": (String REGION), "timestamp": 
(long ms TIMESTAMP)}
+ *
+ * To observe the results, read the output topic (e.g., via 
bin/kafka-console-consumer)
+ * - Topic: streams-pageviewstats-typed-output
+ *   Key Format: (JSON) {"_t": "wpvbr", "windowStart": (long ms 
WINDOW_TIMESTAMP), "region": (String REGION)}
+ *   Value Format: (JSON) {"_t": "rc", "count": (long REGION_COUNT), "region": 
(String REGION)}
+ *
+ * Note, the "_t" field is necessary to help Jackson identify the correct 
class for deserialization in the
+ * generic {@link JSONSerde}. If you instead specify a specific serde per 
class, you won't need the extra "_t" field.
+ */
+@SuppressWarnings({"WeakerAccess", "unused"})
+public class PageViewTypedDemo {

Review comment:
   @dlg99 - is this kafka work related to removing flink support? I am not 
finding a connection, but I could be missing something.





[GitHub] [pulsar-adapters] merlimat commented on pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


merlimat commented on pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#issuecomment-1029318181


   We could also update the README to mention the adapters that are now 
developed elsewhere.






[GitHub] [pulsar-adapters] dlg99 commented on a change in pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


dlg99 commented on a change in pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#discussion_r798893867



##
File path: 
examples/kafka-streams/src/main/java/org/apache/kafka/kafkastreams/pulsar/example/PageViewTypedDemo.java
##
@@ -0,0 +1,279 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.kafka.kafkastreams.pulsar.example;
+
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.common.errors.SerializationException;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.Serializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.connect.json.JsonDeserializer;
+import org.apache.kafka.connect.json.JsonSerializer;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.kstream.Consumed;
+import org.apache.kafka.streams.kstream.Grouped;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.TimeWindows;
+import org.apache.pulsar.client.impl.schema.generic.GenericJsonSchema;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
+
+/**
+ * Slightly adjusted
+ * 
https://github.com/apache/kafka/blob/2.7/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java
+ *
+ * Demonstrates how to perform a join between a KStream and a KTable, i.e. an 
example of a stateful computation,
+ * using specific data types (here: JSON POJO; but can also be Avro specific 
bindings, etc.) for serdes
+ * in Kafka Streams.
+ *
+ * In this example, we join a stream of pageviews (aka clickstreams) that 
reads from a topic named "streams-pageview-input"
+ * with a user profile table that reads from a topic named 
"streams-userprofile-input", where the data format
+ * is JSON string representing a record in the stream or table, to compute the 
number of pageviews per user region.
+ *
+ * Before running this example you must create the input topics and the output 
topic (e.g. via
+ * bin/kafka-topics --create ...), and write some data to the input topics 
(e.g. via
+ * bin/kafka-console-producer). Otherwise you won't see any data arriving in 
the output topic.
+ *
+ * The inputs for this example are:
+ * - Topic: streams-pageview-input
+ *   Key Format: (String) USER_ID
+ *   Value Format: (JSON) {"_t": "pv", "user": (String USER_ID), "page": 
(String PAGE_ID), "timestamp": (long ms TIMESTAMP)}
+ *
+ * - Topic: streams-userprofile-input
+ *   Key Format: (String) USER_ID
+ *   Value Format: (JSON) {"_t": "up", "region": (String REGION), "timestamp": 
(long ms TIMESTAMP)}
+ *
+ * To observe the results, read the output topic (e.g., via 
bin/kafka-console-consumer)
+ * - Topic: streams-pageviewstats-typed-output
+ *   Key Format: (JSON) {"_t": "wpvbr", "windowStart": (long ms 
WINDOW_TIMESTAMP), "region": (String REGION)}
+ *   Value Format: (JSON) {"_t": "rc", "count": (long REGION_COUNT), "region": 
(String REGION)}
+ *
+ * Note, the "_t" field is necessary to help Jackson identify the correct 
class for deserialization in the
+ * generic {@link JSONSerde}. If you instead specify a specific serde per 
class, you won't need the extra "_t" field.
+ */
+@SuppressWarnings({"WeakerAccess", "unused"})
+public class PageViewTypedDemo {

Review comment:
   oh no, I'll fix that
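
   As context for the diff quoted above: its javadoc describes a KStream–KTable join that counts pageviews per user region. A simplified, Kafka-free sketch of that aggregation is below; all class, method, and variable names here are hypothetical illustrations, and the real demo additionally uses Kafka Streams serdes, topics, and windowing.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the stream-table join described in the javadoc:
// each pageview (a user id from the stream) is joined against a user-profile
// table (user id -> region) to count pageviews per region.
public class PageViewJoinSketch {

    static Map<String, Long> countByRegion(Map<String, String> userRegion,
                                           List<String> pageViewUsers) {
        Map<String, Long> counts = new HashMap<>();
        for (String user : pageViewUsers) {
            // The join step: look up the region for the user behind this pageview
            String region = userRegion.getOrDefault(user, "UNKNOWN");
            counts.merge(region, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, String> profiles = Map.of("alice", "europe", "bob", "asia");
        List<String> views = List.of("alice", "alice", "bob", "carol");
        System.out.println(countByRegion(profiles, views));
    }
}
```

   In the actual demo this computation is expressed as `KStream.join(KTable)` followed by `groupBy` and a windowed `count`, with the "_t" field steering Jackson deserialization as the javadoc explains.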





[GitHub] [pulsar-adapters] michaeljmarshall commented on a change in pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


michaeljmarshall commented on a change in pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#discussion_r798896872



##
File path: examples/kafka-streams/src/main/java/org/apache/kafka/kafkastreams/pulsar/example/PageViewTypedDemo.java
##
[quoted diff identical to the one shown above snipped]

Review comment:
   Great, thanks. Other than this point, this LGTM.





[GitHub] [pulsar-adapters] dlg99 commented on a change in pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


dlg99 commented on a change in pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#discussion_r798905138



##
File path: examples/kafka-streams/src/main/java/org/apache/kafka/kafkastreams/pulsar/example/PageViewTypedDemo.java
##
[quoted diff identical to the one shown above snipped]
Review comment:
   Fixed





[GitHub] [pulsar-adapters] dlg99 commented on pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


dlg99 commented on pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34#issuecomment-1029332355


   @merlimat I updated the README.






[GitHub] [pulsar-helm-chart] ckdarby commented on pull request #224: Allow us to turn on PodSecurityPolicy for Pulsar clusters with the same name deployed in different K8S namespaces in the same cluster

2022-02-03 Thread GitBox


ckdarby commented on pull request #224:
URL: https://github.com/apache/pulsar-helm-chart/pull/224#issuecomment-1029349997


   Just an FYI, from [the docs](https://kubernetes.io/docs/concepts/policy/pod-security-policy/):
   > PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25.






[GitHub] [pulsar-helm-chart] frankjkelly commented on pull request #224: Allow us to turn on PodSecurityPolicy for Pulsar clusters with the same name deployed in different K8S namespaces in the same cluster
2022-02-03 Thread GitBox


frankjkelly commented on pull request #224:
URL: https://github.com/apache/pulsar-helm-chart/pull/224#issuecomment-1029360884


   @ckdarby good point. `1.25` will be coming at some point later this year: `1.24` looks due in April (https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24), and with a release cycle of roughly every 15 weeks, that puts `1.25` in the July/August timeframe if my math is correct.
   https://kubernetes.io/blog/2021/07/20/new-kubernetes-release-cadence/
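
   That estimate can be sanity-checked with a quick back-of-the-envelope calculation. The mid-April date for 1.24 is an assumption taken from the comment above, and the 15-week cadence comes from the linked release-cadence post:

```python
from datetime import date, timedelta

# Assumed from the discussion: Kubernetes 1.24 due around mid-April 2022,
# with roughly one release every 15 weeks per the release-cadence blog post.
release_1_24 = date(2022, 4, 15)
cadence = timedelta(weeks=15)

# Projected 1.25 release date
release_1_25 = release_1_24 + cadence
print(release_1_25)  # lands in late July 2022, matching the July/August estimate
```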






[GitHub] [pulsar-adapters] michaeljmarshall merged pull request #34: Removing Flink adapter in favor of the one included with Flink

2022-02-03 Thread GitBox


michaeljmarshall merged pull request #34:
URL: https://github.com/apache/pulsar-adapters/pull/34


   






Re: [PR] pulsar-adapters: remove Flink adapter

2022-02-03 Thread Devin Bost
I support this change. Having different connectors in each project is
actually really confusing for new folks since it's not automatically clear
which adapter they should be using.

--
Devin G. Bost

On Thu, Feb 3, 2022, 1:17 PM Andrey Yegorov 
wrote:

> Hello,
>
> I created a PR to remove Flink from the Pulsar adapters project:
> https://github.com/apache/pulsar-adapters/pull/34
>
> Motivation is:
> * It is based on an old version of Flink (1.7.2, current is 1.14.3) which
> brings in dependencies with various CVEs. 1.7 is not maintained anymore.
> * Flink added a pulsar connector in their project
>
> https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-pulsar
> * Previous releases of the adapter are still available and can be used, if
> needed
>
> Please review and let me know if there is a reason to not deprecate/remove
> the Flink adapter.
>
> --
> Andrey Yegorov
>


Re: [PR] pulsar-adapters: remove Flink adapter

2022-02-03 Thread Enrico Olivelli
The patch is merged.


Thank you Andrey. Removing dead code is a good thing, as it saves the
community from wasting energy on it.


Enrico


On Fri, 4 Feb 2022, 06:04 Devin Bost  wrote:

> I support this change. Having different connectors in each project is
> actually really confusing for new folks since it's not automatically clear
> which adapter they should be using.
>
> --
> Devin G. Bost
>
> On Thu, Feb 3, 2022, 1:17 PM Andrey Yegorov 
> wrote:
>
> > Hello,
> >
> > I created a PR to remove Flink from the Pulsar adapters project:
> > https://github.com/apache/pulsar-adapters/pull/34
> >
> > Motivation is:
> > * It is based on an old version of Flink (1.7.2, current is 1.14.3) which
> > brings in dependencies with various CVEs. 1.7 is not maintained anymore.
> > * Flink added a pulsar connector in their project
> >
> >
> https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-pulsar
> > * Previous releases of the adapter are still available and can be used, if
> > needed
> >
> > Please review and let me know if there is a reason to not
> deprecate/remove
> > the Flink adapter.
> >
> > --
> > Andrey Yegorov
> >
>