Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/6181#discussion_r196435400
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/partitioner/FlinkKeyHashPartitioner.java
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/6181
[FLINK-9610] [flink-connector-kafka-base] Add Kafka Partitioner that uses
the hash of the provided key.
## What is the purpose of the change
Add the simple feature of being able to
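The PR description breaks off here, but per the title the partitioner picks a Kafka partition from the hash of the provided key. A minimal sketch of that logic, shown as a plain static helper rather than the real FlinkKafkaPartitioner subclass so it stands alone (the class name, null-key fallback, and hashing choice are illustrative assumptions):

```java
import java.util.Arrays;

// Hypothetical sketch of key-hash partitioning: the real class would extend
// FlinkKafkaPartitioner<T> and implement partition(record, key, value, topic, partitions).
public class KeyHashPartitioning {

    /** Picks a partition from the available ones based on the hash of the serialized key. */
    static int partitionForKey(byte[] key, int[] partitions) {
        if (key == null) {
            return partitions[0]; // assumption: records without a key go to the first partition
        }
        // Mask the sign bit so a negative hashCode cannot yield a negative array index.
        return partitions[(Arrays.hashCode(key) & 0x7FFFFFFF) % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        System.out.println("partition = " + partitionForKey("user-42".getBytes(), partitions));
    }
}
```

The key property this gives you is that equal keys always land on the same partition, regardless of parallelism.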
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2332
FYI:
In close relation to this issue I submitted an enhancement on the HBase
side to better support these kinds of use cases:
https://issues.apache.org/jira/browse/HBASE-19486
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2332
@fhueske: What is "older" ?
I would like a clear statement about the minimum supported versions of
HBase.
I would see 1.1.x as old enough, or do you see 0.98 as still required?
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2332
@fhueske Should the TableInputFormat be updated to use the HBase 1.1.2 API
as well? It would make things a bit cleaner.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2332
Yes, as long as everything is compatible with HBase 1.1.2 it's fine for me.
---
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2330#discussion_r79816123
--- Diff:
flink-batch-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/TableInputFormat.java
---
@@ -67,18 +66,23 @@
protected
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2330#discussion_r79816145
--- Diff:
flink-batch-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/TestTableInputFormatITCase.java
---
@@ -0,0 +1,112
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2330#discussion_r79816134
--- Diff:
flink-batch-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/TestTableInputFormatITCase.java
---
@@ -0,0 +1,112
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2275
Perhaps you are running into similar problems as described here:
http://stackoverflow.com/questions/2890259/running-each-junit-test-in-a-separate-jvm-in-eclipse
#JustTryingToHelp
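For reference, the Maven counterpart of the linked Eclipse workaround is to make Surefire fork a fresh JVM for each test run instead of reusing one; a sketch of the relevant configuration (whether this fits the build here is an assumption):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- One fork at a time, never reused: each test class gets a clean JVM,
         so static state and native libraries cannot leak between tests. -->
    <forkCount>1</forkCount>
    <reuseForks>false</reuseForks>
  </configuration>
</plugin>
```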
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Current version has a problem in building the shaded jars.
It runs into an infinite loop in creating the dependency-reduced-pom.xml as
described here:
Shade Plugin gets stuck in
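One commonly used workaround for this shade-plugin loop is to skip generating the reduced POM entirely (whether that is acceptable depends on how the shaded artifact is consumed downstream):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- Skip writing dependency-reduced-pom.xml, which is where the
         infinite loop occurs during shading. -->
    <createDependencyReducedPom>false</createDependencyReducedPom>
  </configuration>
</plugin>
```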
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
I managed to resolve the problems with running these unit tests.
These problems were caused by version conflicts in guava.
Now we have an HBaseMiniCluster that is started, a table with
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
I did a few serious attempts to create a unit test that fires the
HBaseMiniCluster ... and failed.
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
I will add a unit test for this.
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Question: Is this change good?
Or do you have more things that I need to change before it can be committed?
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
I had another look at the "multiple tables" question. The name of the table
comes from the getTableName method that is to be implemented by the subclass. I
consider it to be extremel
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2330#discussion_r73482199
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileReaderOperator.java
---
@@ -328,7 +328,11
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2330#discussion_r73480826
--- Diff:
flink-batch-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/TableInputFormat.java
---
@@ -237,7 +244,7 @@ private void
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Note that this version still assumes that the single instance will only see
multiple splits for the same table. Is that a safe assumption?
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Now I see why I missed these two; they are newer than the 1.0.3 I was
working with.
Is it a good idea to add 'throws IOException' to these two in
RichInputFormat?
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Yes, that is indeed the right place to do this.
Bummer this method does not allow throwing exceptions.
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2330
Oh damn,
I just noticed a major issue in this: In order to create the input splits
the table needs to be available "before" the call to the 'open' method.
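To make the ordering issue concrete: in Flink's InputFormat lifecycle, createInputSplits() runs on the master before any parallel instance calls open(), so a table handle created in open() does not yet exist when splits are computed. A toy simulation of that call order (the three method names mirror the Flink interface; everything else is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of the call order that causes the problem: split creation
// happens before open(), so initialization done in open() comes too late.
public class LifecycleSketch {
    final List<String> calls = new ArrayList<>();

    void configure()         { calls.add("configure"); }         // early setup belongs here
    void createInputSplits() { calls.add("createInputSplits"); } // needs the table already!
    void open()              { calls.add("open"); }              // too late to create it

    public static void main(String[] args) {
        LifecycleSketch format = new LifecycleSketch();
        format.configure();
        format.createInputSplits();
        format.open();
        System.out.println(format.calls);
    }
}
```

The practical consequence is that the table connection has to be established in configure() (or earlier), not in open().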
---
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/2330
FLINK-4311 Fixed several problems in TableInputFormat
Question: Do you guys want a unit test for this?
In HBase itself I have done this in the past, yet this required a large
chunk of
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/2317#discussion_r72990290
--- Diff: flink-dist/src/main/flink-bin/yarn-bin/yarn-session.sh ---
@@ -52,5 +52,5 @@ log_setting="-Dlog.file="$log"
-Dlog4j.con
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/2317
[FLINK-4287] Ensure the yarn-session.sh classpath contains all Hadoop
related paths
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2275
Question: Does this also work when starting yarn-session.sh?
---
Github user nielsbasjes commented on the issue:
https://github.com/apache/flink/pull/2275
I noticed that the system property java.security.krb5.conf isn't set
anywhere in the code.
Shouldn't there be a config setting/property that allows the user to set
this?
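java.security.krb5.conf is the standard JVM system property for pointing Kerberos at a specific configuration file, so a config option could simply forward a user-supplied path to it. A minimal sketch (the helper name and path are illustrative, not Flink API):

```java
// Hypothetical helper: forwards a configured path to the standard JVM
// Kerberos property. The path would come from the user's configuration.
public class Krb5ConfSetter {

    static void applyKrb5Conf(String path) {
        if (path != null && !path.isEmpty()) {
            System.setProperty("java.security.krb5.conf", path);
        }
    }

    public static void main(String[] args) {
        applyKrb5Conf("/etc/krb5.conf"); // illustrative path
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```

Note the property must be set before the first Kerberos login is attempted, since the JDK reads it when the Kerberos configuration is first loaded.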
---
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/2043
[FLINK-3886] Give a better error when the application Main class is not
public.
A simple fix that reduces the time needed to find the cause of this common
developer error.
You can merge this
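A sketch of the kind of up-front check such a fix could add (the exact message and its location in the Flink codebase are assumptions): detect a non-public main class early and say so plainly, instead of failing later with an obscure access error.

```java
import java.lang.reflect.Modifier;

// Hypothetical validation helper: returns an error message for a non-public
// entry-point class, or null if the class passes the check.
public class MainClassCheck {

    static String validate(Class<?> mainClass) {
        if (!Modifier.isPublic(mainClass.getModifiers())) {
            return "The program's entry point class " + mainClass.getName()
                 + " is not public.";
        }
        return null; // null means the class passed the check
    }

    public static void main(String[] args) {
        System.out.println(validate(MainClassCheck.class)); // public class passes
        System.out.println(validate(HiddenMain.class));     // package-private fails
    }
}

// A package-private class, as produced by forgetting the 'public' keyword.
class HiddenMain {
    public static void main(String[] args) { }
}
```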
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1489#issuecomment-171665915
I read that Kafka 0.9 supports Kerberos authentication (I have not yet
tried this). Is that supported in this first release or should I open a Jira
ticket for that
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/1411
[FLINK-3082] Fixed confusing error about an interface that no longer exists
The ManualTimestampSourceFunction interface does not exist.
Yet there are error messages that tell you to take a
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156952942
FYI: I let my test run over the weekend (i.e. for 65 hours) with the 5 and 10
minute ticket times and it is still running fine.
---
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156423436
I redid my test to make sure it all still works as desired;
I had our IT guys drop the ticket expiry time for my user account down to 5
minutes and the max
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44766715
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java ---
@@ -135,7 +138,54 @@ public static void setTokensFor(ContainerLaunchContext
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44765814
--- Diff: flink-dist/src/main/flink-bin/bin/config.sh ---
@@ -249,7 +249,15 @@ if [ -n "$HADOOP_HOME" ]; then
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156122982
Found it and fixed it.
---
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156117069
I made a mistake: the dependency was still in there as a 'hard'
(compile-scope) dependency. When I switch it to 'provided' or leave it out
it fails with a ClassNotFoun
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156116370
I fixed the authentication to use reflection.
To make this work I had to switch back to an older (deprecated) version of
the token retrieval API because
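The reflection approach mentioned above can be sketched generically: resolving the class and method by name at runtime keeps the dependency off the compile classpath, so the build works whether or not the jar is present (the actual HBase class and method names are not shown here; the JDK method in main is just a stand-in):

```java
import java.lang.reflect.Method;

// Generic sketch of a reflective static call, as one would use to invoke a
// token-retrieval API without compiling against it.
public class ReflectiveCall {

    static Object invokeStatic(String className, String methodName,
                               Class<?> paramType, Object arg) throws Exception {
        Class<?> clazz = Class.forName(className); // fails only if the jar is absent at runtime
        Method method = clazz.getMethod(methodName, paramType);
        return method.invoke(null, arg);           // null receiver: static method
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the real token-retrieval call, using a JDK method instead.
        Object result = invokeStatic("java.lang.Integer", "parseInt", String.class, "42");
        System.out.println(result);
    }
}
```

The trade-off is that signature mismatches surface at runtime (NoSuchMethodException) rather than at compile time, which is why the comment above mentions falling back to an older, more stable method signature.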
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-155891180
@StephanEwen: I agree. I'll get on it.
@mxm : The current code uses the 1.1.2 API and was tested against a 0.98
HBase cluster. I'm confident this p
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-155708227
I created a test topology that puts the current time in an HBase cell
several times per second.
In the cluster I did this on HBase is configured to use Kerberos
Github user nielsbasjes commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44391146
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java ---
@@ -135,7 +142,40 @@ public static void setTokensFor(ContainerLaunchContext
Github user nielsbasjes commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-155376887
I ran this version overnight but the VPN from my system to the cluster
stopped before the Kerberos ticket could expire.
I am quite confident that this patch is
GitHub user nielsbasjes opened a pull request:
https://github.com/apache/flink/pull/1342
[FLINK-2977] Added support for accessing a Kerberos secured HBase
installation.
See https://issues.apache.org/jira/browse/FLINK-2977
You can merge this pull request into a Git repository by