Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
@fhueske
Do you have some time to check this now, considering the Flink 1.3 release is
out?
---
If your project is set up for it, you can reply to this email and have your
reply appear on
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/2332
So I think, at a first level, let us have put/delete mutations alone for
Streaming? Since I am not aware of how Flink users are currently interacting
with HBase, I am not sure what the HBase ops should be
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/2332
@nragon
I agree. But does your use case have increments/appends?
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/2332
I can take this up and come up with a design doc. Reading through the comments
here and the final decision points, I think only puts/deletes can be considered
idempotent. But increments/decrements
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/2332
@delding
Are you still working on this?
@nragon
Let me know how I can be of help here. I can work on this PR too, since I have
some context on the existing PR, though it is stale
Github user ramkrish86 closed the pull request at:
https://github.com/apache/flink/pull/3881
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3881
I won't be available for the next 2 to 3 hours. So feel free to decide based on
your convenience in case you need to make the RC for the 1.3 release. I
am sorry that I could not ma
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3881
@tillrohrmann
Thanks for the new PR. I just executed your change with 101, 99, 100 as
the checkpoint order. In this case 100 should be the latest one, though the
actual ids are not sorted
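For context on the sorting bug being discussed (FLINK-6284), here is a minimal sketch of how lexicographic ordering of ZooKeeper child node names differs from numeric ordering of checkpoint ids. This is only an illustration of the contrast, not the actual ZooKeeperCompletedCheckpointStore code, and it assumes the node names are the bare checkpoint ids:

```java
import java.util.Arrays;
import java.util.Comparator;

public class CheckpointOrder {
    // Sort ZooKeeper child node names lexicographically (the problematic order).
    static String[] sortLexicographic(String[] ids) {
        String[] out = ids.clone();
        Arrays.sort(out);
        return out;
    }

    // Sort by the numeric checkpoint id (the intended order).
    static String[] sortNumeric(String[] ids) {
        String[] out = ids.clone();
        Arrays.sort(out, Comparator.comparingLong(Long::parseLong));
        return out;
    }

    public static void main(String[] args) {
        String[] ids = {"101", "99", "100"};
        // Lexicographically "99" sorts last, because '9' > '1'.
        System.out.println(Arrays.toString(sortLexicographic(ids))); // [100, 101, 99]
        // Numerically 101 is the largest id.
        System.out.println(Arrays.toString(sortNumeric(ids)));       // [99, 100, 101]
    }
}
```

With a string sort, checkpoint 99 would wrongly appear "newest", which is the kind of misordering the comment describes.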
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3881#discussion_r116211254
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper/ZooKeeperStateHandleStore.java
---
@@ -346,17 +346,20 @@ public int exists
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3881#discussion_r116211132
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper/ZooKeeperStateHandleStore.java
---
@@ -346,17 +346,20 @@ public int exists
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3881#discussion_r116211032
--- Diff: pom.xml ---
@@ -101,7 +101,8 @@ under the License.
0.7.4
5.0.4
3.4.6
- 2.12.0
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3881#discussion_r116188234
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper/ZooKeeperStateHandleStore.java
---
@@ -346,11 +346,7 @@ public int exists
GitHub user ramkrish86 opened a pull request:
https://github.com/apache/flink/pull/3881
FLINK-6284 Incorrect sorting of completed checkpoints in
ZooKeeperCompletedCheckpointStore
Thanks for contributing to Apache Flink. Before you open
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
@fhueske
Thanks for the update. Got it.
But I would like to say that if there are any issues/JIRAs that I could be of
help with for the 1.3 release fork, I would be happy to help. Pls point me to
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
I checked the recent failure.
`Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 20.417 sec
<<< FAILURE! - in
org.apache.flink.runtime.jobmanager.JobManagerRegistr
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
@tonycox
Thanks for the comment. Just curious: any reason for the repush - is it for
checking the test failures again?
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
The failures seem not directly related.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3760
@fhueske , @tonycox
Can you have a look at this PR? Thanks.
---
GitHub user ramkrish86 opened a pull request:
https://github.com/apache/flink/pull/3760
FLINK-5752 Support push down projections for HBaseTableSource (Ram)
Ran mvn clean verify -DskipTests
In this patch
`Arrays.sort(nestedFields[i]);`
Am doing this before doing
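The role of that `Arrays.sort(nestedFields[i])` step can be sketched in isolation: sorting each group of projected field indices into ascending order before building the scan. The indices and array shapes below are made up for illustration and are not taken from the PR:

```java
import java.util.Arrays;

public class NestedFieldSort {
    // Normalize each group of projected nested-field indices into ascending
    // order, mirroring the Arrays.sort(nestedFields[i]) step in the patch.
    static int[][] normalize(int[][] nestedFields) {
        int[][] out = new int[nestedFields.length][];
        for (int i = 0; i < nestedFields.length; i++) {
            out[i] = nestedFields[i].clone();
            Arrays.sort(out[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical projection: qualifiers requested out of order per family.
        int[][] requested = { {2, 0, 1}, {3, 1} };
        System.out.println(Arrays.deepToString(normalize(requested)));
        // prints [[0, 1, 2], [1, 3]]
    }
}
```

Sorting first gives a deterministic column order regardless of how the projection listed the fields.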
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
@StephanEwen
No problem. I appreciate your time and efforts.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
@StephanEwen
I saw in another JIRA one of your comments where you talked about
refactoring CheckpointCoordinator and PendingCheckpoint. So would you like this
PR to wait till then?
---
GitHub user ramkrish86 opened a pull request:
https://github.com/apache/flink/pull/3478
Flink 4816 Executions failed from "DEPLOYING" should retain restored
checkpoint information
Thanks for contributing to Apache Flink. Before you open your pull request,
please take the
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
Just updated and did a force push to avoid the merge commit. Now things are
fine.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
Ping for reviews here!!!
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
@StephanEwen , @wenlong88 , @shixiaogang
Pls have a look at the latest push. Now I am tracking the failures in the
checkpointing and incrementing a new counter based on it. Added test cases
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
I think I got a better way to track this. Will update the PR sooner.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
Thanks for the input. I read the code. There are two ways a checkpoint
fails (as per my code understanding). If for some reason checkpointing cannot
be performed, we send a DeclineCheckpoint message
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
I think I got what you are saying here. Execution#triggerCheckpoint
is the actual checkpoint call, and currently we don't track it if there is a
failure. So your point is that it is better
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
@wenlong88
Can you tell me more about what you mean by checkpointing failure and trigger
failure? I think, if you are talking about tracking the number of times the
execution fails after restoring from a
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3334#discussion_r103638771
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
---
@@ -537,12 +562,27 @@ else if
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3334#discussion_r103612421
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
---
@@ -537,12 +562,27 @@ else if
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3334#discussion_r103612320
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
---
@@ -121,6 +121,8 @@
/** The
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3334
@StephanEwen - Ping for initial reviews. Will work on it based on the
feedback.
---
GitHub user ramkrish86 opened a pull request:
https://github.com/apache/flink/pull/3334
FLINK-4810 Checkpoint Coordinator should fail ExecutionGraph after "n"
unsuccessful checkpoints
Thanks for contributing to Apache Flink. Before you
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Thanks to all @fhueske , @tonycox and @wuchong for helping in getting this
in.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske - Are you fine with that pom change? If so we can get this in.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
I think that is not the only reason. Somewhere, either these tests are
creating more static objects or bigger objects that live for the lifetime of
the JVM. Maybe this test exposes it. Actually
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske - so are you ok with @tonycox's suggestion of setting MaxPermSize
for the hbase module?
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Even jhat was not able to view the file, as it had a problem parsing the
hprof file. So my question is: if MaxPermSize is 128M, why does it work with
JDK 8?
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
I tried multiple ways to take the dump file from this mvn test command run.
I get a hprof file which, on opening in a heap dump analyser, throws an EOF
exception or NPE.
@tonycox - were
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
I tried to compare TableInputFormat and the new one. But the interesting
part is that HBaseInputFormat does not fail on JDK 8, which was the default in
my test env.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Thanks to @tonycox for helping me in reproducing this error. Changing to
JDK 7 creates this issue, due to PermGen space running out of
memory. I don't have a solution for this. It
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
This is what I get as test result.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
It is CentOS Linux release 7.0.1406 (Core).
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
I tried even that. The test runs fine for me. I get no OOME.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
`02/02/2017 06:19:20 DataSink (collect())(6/32) switched to SCHEDULED
02/02/2017 06:19:20 DataSink (collect())(2/32) switched to SCHEDULED
02/02/2017 06:19:20 DataSink (collect
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> Will need to figure out what's the reason for that before I can merge the
PR.
I tried running those tests again in my linux box and all went through
without any error.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske - A gentle reminder!
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske - Please have a look at the javadoc.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> For now I'd suggest to keep the scope of the PR as it is right now. A bit
more Java documentation on HBaseTableSource to explain how it is used would be
great.
We can imple
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> ProjectableTableSource works in scan process.
Ya, got it. I was just trying to relate it to this HBase thing, and could find
that we try to read all cols and then do a flatMap and then return
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Going through the PR for
https://issues.apache.org/jira/browse/FLINK-3848, I think we try to project
only the required columns. Similarly we could do that here also. So my and @tonycox
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
`Table result = tableEnv.sql("SELECT test1.f1.q1, test1.f2.q2 FROM test1 where test1.f1.q1 < 103");`
I just tried this query and it
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> Regarding the HBaseTableSchema, we could also use it only internally and
not expose it to the user. The HBaseTableSource would have a method addColumn()
and forward the calls to its inter
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Just a general question
`def projectFields(fields: Array[Int]): ProjectableTableSource[T]`
Is it mandatory to only have int[] here? Can we have a String[] that allows us
to specify
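One possible answer to the int[]-to-name question raised above can be sketched as a small translation layer from field positions to column names. The schema layout, method names, and "family:qualifier" encoding below are hypothetical illustrations, not Flink's actual API:

```java
import java.util.Arrays;

public class FieldNameMapping {
    // Hypothetical flat schema: position -> "family:qualifier".
    static final String[] SCHEMA = {"f1:q1", "f1:q2", "f2:q1", "f2:q3"};

    // Translate the int[] that projectFields receives into column names,
    // so qualifiers under a family can be resolved from positions.
    static String[] toColumnNames(int[] fields) {
        return Arrays.stream(fields)
                .mapToObj(i -> SCHEMA[i])
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(toColumnNames(new int[]{0, 2})));
        // prints [f1:q1, f2:q1]
    }
}
```

Keeping the interface int[]-based while mapping to names internally is one way to reconcile positional projection with HBase's family/qualifier addressing.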
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Updated the PR fixing the comments. The comments were simple, but adding
AbstractTableInputFormat and moving the code back and forth makes this one a
bigger change. But internally they are
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
To understand better
> We could make flat schema an optional mode or implement it as a separate
TableSource as well.
and this one
> This could be solved if we use
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98828506
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,144
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98828345
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,144
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98828283
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@tonycox
I have addressed all your latest comments including making HBaseTableSource
a ProjectableTableSource.
@wuchong , @fhueske
Are you guys fine with the latest updates. If so
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98487401
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,140 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98486712
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,140 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98486522
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,140 @@
+/*
+ * Licensed
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
I am not sure how we will manage this. But one thing that can be done is that
if an invalid family name is added to the scan, HBase internally throws
FamilyNotFoundException, so we can track
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
For ProjectableTableSource I think I need some clarity. Because currently it
is based on an int[] representing the fields, I am not sure how to map them in
terms of qualifiers under a family.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske,
I have fixed the comments as per @wuchong, and he has said +1 after fixing
them. I would like you to take a look at the PR and see if it is fine with you too.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
> Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"RpcServer.reader=6,bindAddress=testing-docker-bb4f2e37-e79f-42a3-a9e9-4995e42c70ba,po
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@wuchong
I think I have addressed your last comments. Thank you for all your
help/support here.
---
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r98145293
--- Diff:
flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseTableSourceITCase.java
---
@@ -0,0 +1,196
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Fixed all the minor comments given above. @tonycox , @wuchong , @fhueske .
---
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934537
--- Diff:
flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseTableSourceITCase.java
---
@@ -0,0 +1,248
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934510
--- Diff:
flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseTableSourceITCase.java
---
@@ -0,0 +1,248
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934502
--- Diff:
flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseTableSourceITCase.java
---
@@ -0,0 +1,248
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934495
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934428
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,137 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934446
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,137 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97934406
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,137 @@
+/*
+ * Licensed
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske , @tonycox , @wuchong - FYI.
---
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Updated the code with the comments and have pushed again. I think I have
addressed all the comments here. Feedback/comments welcome. I also found that
it is better to use the TableInputSplit to
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97738061
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97736554
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709565
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,160
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709590
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -19,99 +19,113 @@
package
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709469
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709482
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709518
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -22,54 +22,63 @@
import
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709503
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -22,54 +22,63 @@
import
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97709488
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
@fhueske , @tonycox , @wuchong
I have updated the PR based on all the feedback here. Now you can see
that we support CompositeRowType and we are able to specify multiple column
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Good news is that, with the help of this composite RowType, modifying my
code accordingly, and debugging things, I could get the basic thing to work. Now
I will work on stitching things together
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Thanks for all the inputs here. I have been trying to make my existing code
work with the composite RowTypeInfo. Once that is done I will try to introduce
the HBaseTableSchema.
Also I would
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97304540
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97055300
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97015379
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,322
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97015321
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSource.java
---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97015267
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,322
Github user ramkrish86 commented on the issue:
https://github.com/apache/flink/pull/3149
Thanks for the ping here @tonycox .
---
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r96790253
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,322
Github user ramkrish86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r96790136
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSourceInputFormat.java
---
@@ -0,0 +1,322