Re: Watermark Alignment on Flink Runner's UnboundedSourceWrapper

2023-05-23 Thread Jan Lukavský

Hi Talat,

your analysis is correct: aligning watermarks for jobs with high 
watermark skew across input partitions really does result in faster 
checkpoints and reduces the size of state. There are generally two 
places you can implement this - in user code (the source) or inside the 
runner. The user code can use some external synchronization service 
(e.g. ZooKeeper) to keep track of the progress of all individual 
sources. Another option is to read the watermark from Flink's REST API 
(some inspiration here [1]).
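To make the first option concrete, here is a minimal, hypothetical sketch of such a tracker: an in-memory map stands in for the external store (ZooKeeper, or watermarks polled from the REST API), and all class and method names are made up for illustration. Each source reports its watermark; the global watermark is the minimum across sources; a source pauses when it runs too far ahead of that minimum.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: a real implementation would back this map
// with an external store (e.g. ZooKeeper) shared by all source subtasks.
class GlobalWatermarkTracker {

    private final Map<String, Long> watermarks = new ConcurrentHashMap<>();

    /** Each source subtask reports its current watermark under its own id. */
    public void report(String sourceId, long watermarkMillis) {
        // Watermarks are monotonic, so keep the maximum seen per source.
        watermarks.merge(sourceId, watermarkMillis, Math::max);
    }

    /** Global watermark = minimum over all reporting sources. */
    public long globalWatermark() {
        return watermarks.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MIN_VALUE);
    }

    /** A source should pause if it is ahead of the global minimum by more than maxDriftMillis. */
    public boolean shouldPause(String sourceId, long maxDriftMillis) {
        Long own = watermarks.get(sourceId);
        if (own == null) {
            return false;
        }
        return own - globalWatermark() > maxDriftMillis;
    }
}
```

The wrapper around each source would call `shouldPause` before polling the next record and back off while it returns true, which bounds the skew between partitions.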


Another option would be to make use of [2] and implement this directly 
in FlinkRunner. I'm not familiar with any possible limitations of this 
approach; watermark alignment was added to Flink quite recently, so we 
would have to support it only when running on Flink 1.15+.
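For reference, on the Flink side the FLIP-182 alignment from [2] is configured roughly as below (a sketch only: `MyEvent`, the timestamp getter, the group name, and the durations are illustrative placeholders, not values from this thread). A FlinkRunner integration would have to surface an equivalent knob through Beam's pipeline options.

```java
// Sketch of Flink 1.15+ watermark alignment (FLIP-182); all concrete
// names and values here are made-up placeholders.
WatermarkStrategy<MyEvent> strategy =
    WatermarkStrategy
        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withTimestampAssigner((event, recordTs) -> event.getEventTime())
        // Sources sharing the group name pause when their watermark drifts
        // more than the max allowed drift ahead of the group minimum; the
        // last argument is how often drift is re-checked.
        .withWatermarkAlignment("alignment-group",
            Duration.ofSeconds(20), Duration.ofSeconds(1));
```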


If you would like to go for the second approach, I'd be happy to help 
with some guidance.


Best,

 Jan

[1] 
https://github.com/O2-Czech-Republic/proxima-platform/blob/master/flink/utils/src/main/java/cz/o2/proxima/flink/utils/FlinkGlobalWatermarkTracker.java
[2] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-182%3A+Support+watermark+alignment+of+FLIP-27+Sources


On 5/23/23 01:05, Talat Uyarer via dev wrote:
Maybe the user list does not have knowledge about this, which is why I 
am also resending on the dev list. Sorry for cross-posting.



Hi All,

I have a stream aggregation job which reads from Kafka and writes to 
some sinks.


When I submit my job, the Flink checkpoint size keeps increasing if I 
use unaligned checkpoint settings, and it does not emit any window 
results. If I use aligned checkpoints, the size is somewhat under 
control (still big), but checkpoint alignment takes a long time.


I would like to implement something similar to [1]. I believe that 
if UnboundedSourceWrapper paused reading partitions whose watermarks are 
ahead, it would reduce the size of the checkpoint and I could use 
unaligned checkpointing. What do you think about this approach? Do you 
have another solution?


One more question: while reading code to implement the above idea, I 
saw this code [2]. Does Flink Runner have a similar implementation?


Thanks

[1] https://github.com/apache/flink/pull/11968
[2] 
https://github.com/apache/beam/blob/master/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/state/FlinkStateInternals.java#L207

Re: Watermark Alignment on Flink Runner's UnboundedSourceWrapper

2023-05-23 Thread Talat Uyarer via dev
Hi Jan,

Yes, my plan is to implement this feature in FlinkRunner. I have one more
question: does Flink Runner support event time or Beam custom watermarks?
Do I need to set AutoWatermarkInterval for stateful Beam Flink jobs, or
can Beam timers handle it without setting that parameter?

Thanks

On Tue, May 23, 2023 at 12:03 AM Jan Lukavský  wrote:

> [...]


Re: Watermark Alignment on Flink Runner's UnboundedSourceWrapper

2023-05-23 Thread Jan Lukavský
Yes, FlinkRunner supports Beam's event-time semantics without any 
additional configuration options.


 Jan

On 5/23/23 09:52, Talat Uyarer via dev wrote:

Hi Jan,

Yes My plan is implementing this feature on FlinkRunner. I have one 
more question. Does Flink Runner support EventTime or 
Beam  Custom Watermark ? Do I need to set AutoWatermarkInterval for 
stateful Beam Flink Jobs. Or Beam timers can handle it without setting 
that param ?


Thanks

On Tue, May 23, 2023 at 12:03 AM Jan Lukavský  wrote:

[...]

Beam High Priority Issue Report (34)

2023-05-23 Thread beamactions
This is your daily summary of Beam's current high priority issues that may need 
attention.

See https://beam.apache.org/contribute/issue-priorities for the meaning and 
expectations around issue priorities.

Unassigned P1 Issues:

https://github.com/apache/beam/issues/26785 [Failing Test]: Tests for Flink 
runners are failing in "Java PreCommit check"
https://github.com/apache/beam/issues/26723 [Failing Test]: Tour of Beam 
Frontend Test suite is perma-red on master
https://github.com/apache/beam/issues/26616 [Failing Test]: 
beam_PostCommit_Java_DataflowV2 SpannerReadIT multiple test failing
https://github.com/apache/beam/issues/26550 [Failing Test]: 
beam_PostCommit_Java_PVR_Spark_Batch
https://github.com/apache/beam/issues/26547 [Failing Test]: 
beam_PostCommit_Java_DataflowV2
https://github.com/apache/beam/issues/26354 [Bug]: BigQueryIO direct read not 
reading all rows when set --setEnableBundling=true
https://github.com/apache/beam/issues/26343 [Bug]: 
apache_beam.io.gcp.bigquery_read_it_test.ReadAllBQTests.test_read_queries is 
flaky
https://github.com/apache/beam/issues/26329 [Bug]: BigQuerySourceBase does not 
propagate a Coder to AvroSource
https://github.com/apache/beam/issues/26041 [Bug]: Unable to create 
exactly-once Flink pipeline with stream source and file sink
https://github.com/apache/beam/issues/25975 [Bug]: Reducing parallelism in 
FlinkRunner leads to a data loss
https://github.com/apache/beam/issues/24776 [Bug]: Race condition in Python SDK 
Harness ProcessBundleProgress
https://github.com/apache/beam/issues/24389 [Failing Test]: 
HadoopFormatIOElasticTest.classMethod ExceptionInInitializerError 
ContainerFetchException
https://github.com/apache/beam/issues/24313 [Flaky]: 
apache_beam/runners/portability/portable_runner_test.py::PortableRunnerTestWithSubprocesses::test_pardo_state_with_custom_key_coder
https://github.com/apache/beam/issues/23944  beam_PreCommit_Python_Cron 
regularily failing - test_pardo_large_input flaky
https://github.com/apache/beam/issues/23709 [Flake]: Spark batch flakes in 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElement and 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundle
https://github.com/apache/beam/issues/22913 [Bug]: 
beam_PostCommit_Java_ValidatesRunner_Flink is flakes in 
org.apache.beam.sdk.transforms.GroupByKeyTest$BasicTests.testAfterProcessingTimeContinuationTriggerUsingState
https://github.com/apache/beam/issues/22605 [Bug]: Beam Python failure for 
dataflow_exercise_metrics_pipeline_test.ExerciseMetricsPipelineTest.test_metrics_it
https://github.com/apache/beam/issues/21714 
PulsarIOTest.testReadFromSimpleTopic is very flaky
https://github.com/apache/beam/issues/21708 beam_PostCommit_Java_DataflowV2, 
testBigQueryStorageWrite30MProto failing consistently
https://github.com/apache/beam/issues/21706 Flaky timeout in github Python unit 
test action 
StatefulDoFnOnDirectRunnerTest.test_dynamic_timer_clear_then_set_timer
https://github.com/apache/beam/issues/21643 FnRunnerTest with non-trivial 
(order 1000 elements) numpy input flakes in non-cython environment
https://github.com/apache/beam/issues/21476 WriteToBigQuery Dynamic table 
destinations returns wrong tableId
https://github.com/apache/beam/issues/21469 beam_PostCommit_XVR_Flink flaky: 
Connection refused
https://github.com/apache/beam/issues/21424 Java VR (Dataflow, V2, Streaming) 
failing: ParDoTest$TimestampTests/OnWindowExpirationTests
https://github.com/apache/beam/issues/21262 Python AfterAny, AfterAll do not 
follow spec
https://github.com/apache/beam/issues/21260 Python DirectRunner does not emit 
data at GC time
https://github.com/apache/beam/issues/21121 
apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT.test_streaming_wordcount_it
 flakey
https://github.com/apache/beam/issues/21104 Flaky: 
apache_beam.runners.portability.fn_api_runner.fn_runner_test.FnApiRunnerTestWithGrpcAndMultiWorkers
https://github.com/apache/beam/issues/20976 
apache_beam.runners.portability.flink_runner_test.FlinkRunnerTestOptimized.test_flink_metrics
 is flaky
https://github.com/apache/beam/issues/20108 Python direct runner doesn't emit 
empty pane when it should
https://github.com/apache/beam/issues/19814 Flink streaming flakes in 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundleStateful and 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElementStateful
https://github.com/apache/beam/issues/19465 Explore possibilities to lower 
in-use IP address quota footprint.


P1 Issues with no update in the last week:

https://github.com/apache/beam/issues/23525 [Bug]: Default PubsubMessage coder 
will drop message id and orderingKey
https://github.com/apache/beam/issues/21645 
beam_PostCommit_XVR_GoUsingJava_Dataflow fails on some test transforms




Local Combiner for GroupByKey on Flink Streaming jobs

2023-05-23 Thread Talat Uyarer via dev
Sorry for cross posting

-- Forwarded message -
From: Talat Uyarer 
Date: Fri, May 19, 2023, 2:25 AM
Subject: Local Combiner for GroupByKey on Flink Streaming jobs
To: 


Hi,

I have a stream aggregation job which is running on Flink 1.13. I generate
the DAG by using Beam SQL. My SQL query has a TUMBLE window. Basically, my
pipeline reads from Kafka, aggregates (counts/sums some values by streaming
aggregation), and writes to a sink.

Beam SQL uses GroupByKey for the aggregation part. When I read the
translation code for the GroupByKey class in the Flink runner [1], I could
not see any local combiner. I see a ReduceFunction, but I believe it works
on the reducer side. If this is true, how can I implement a local reducer
in the source step to improve shuffling performance, or am I missing
something?
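For illustration, the local-combiner idea (combiner lifting: partially aggregating per key before the shuffle) boils down to something like the hypothetical sketch below. All names are made up, and Beam's actual lifted combine is keyed, windowed, and mergeable, which this ignores; it only shows the pre-shuffle reduction in records.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a local (pre-shuffle) combiner: collapse each
// bundle of (key, value) pairs into one partial sum per key, so the
// shuffle carries one record per key per bundle instead of one per input.
class LocalCombiner {

    public static Map<String, Long> preCombine(List<Map.Entry<String, Long>> bundle) {
        Map<String, Long> partials = new HashMap<>();
        for (Map.Entry<String, Long> e : bundle) {
            // Sum within the bundle; the downstream GroupByKey/reduce then
            // only merges partial sums.
            partials.merge(e.getKey(), e.getValue(), Long::sum);
        }
        return partials;
    }
}
```

The partial sums are then merged again after the shuffle, which is what makes the aggregation (SUM here) safe to split this way.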

If you need more information about my pipeline, I have shared some below.

Thanks
[1]
https://github.com/apache/beam/blob/master/runners/flink/src/main/java/org/apache/beam/runners/flink/FlinkStreamingTransformTranslators.java#L905


This is my SQL query: "SELECT log_source_id, SUM(size) AS total_size FROM
PCOLLECTION GROUP BY log_source_id, TUMBLE(log_time, INTERVAL '1' MINUTE)"
When I submit the job, Flink generates two fused steps (Source -> Sink).
I shared the task names below.
First step (source):
Source:
Kafka_IO/KafkaIO.Read.ReadFromKafkaViaUnbounded/Read(KafkaUnboundedSource)
->
Flat Map ->
ParMultiDo(AvroBytesToRowConverter) ->
BeamCalcRel_47/ParDo(Calc)/ParMultiDo(Calc) ->
BeamAggregationRel_48/assignEventTimestamp/AddTimestamps/ParMultiDo(AddTimestamps)
->
BeamAggregationRel_48/Window.Into()/Window.Assign.out ->
BeamAggregationRel_48/Group.CombineFieldsByFields/ToKvs/selectKeys/AddKeys/Map/ParMultiDo(Anonymous)
->
ToBinaryKeyedWorkItem

Second step (aggregation and sink):

BeamAggregationRel_48/Group.CombineFieldsByFields/ToKvs/GroupByKey ->
ToGBKResult ->
BeamAggregationRel_48/Group.CombineFieldsByFields/Combine/ParDo(Anonymous)/ParMultiDo(Anonymous)
->
BeamAggregationRel_48/Group.CombineFieldsByFields/ToRow/ParMultiDo(Anonymous)
->
BeamAggregationRel_48/mergeRecord/ParMultiDo(Anonymous) ->
BeamCalcRel_49/ParDo(Calc)/ParMultiDo(Calc) ->
ParMultiDo(RowToOutputFormat) ->
ParMultiDo(SinkProcessor)


[PMC Request] Add gpg key to release keys file

2023-05-23 Thread Danny McCormick via dev
Hey everyone, as part of automating our release process (see thread here -
https://lists.apache.org/thread/mw9dbbdjtkqlvs0mmrh452z3jsf68sct), could a
PMC member please add the infra-supplied gpg public key to our release KEYS
file? I added it to our dev KEYS file already and pasted the
key below.

Thanks,
Danny

pub rsa4096 2023-05-03 [SC]
913C3392A770C781EDC4DDABD20316F712213422
uid [ unknown] Apache Beam Automated Release Signing <
priv...@beam.apache.org>
sig 3 D20316F712213422 2023-05-03 Apache Beam Automated Release Signing <
priv...@beam.apache.org>

-BEGIN PGP PUBLIC KEY BLOCK-

mQINBGRSdvABEADToWOiUtHpoQiwoQwjZ7V1I3QQwW8NTJUUmdUC/5mSx7f+N2vv
iPP4BRisGe7Jk6RX76duevZ+OopdJCUbi2m4Cp/9MjWet2F0UsrackISi5JiS0Oq
msgqjnUGcnq54dBSv7UFl7SKuic69fuNuqxoEhLqvdK3VqpeDJGIXGJR3y7sCzYh
X/8f3LhcvAxYpSJnwDxsV06ZxZGH4O9mNyh5hj0ovkmo0BxSmNwwvHFr2mHecQ0q
KdozvxZEZJDWTNsZchrxrbD+jTQP+qmdQof/cyocgVilAPb5n3+dlF836DIv45cW
7pnDHXfhYK92Cx9ZmZv8BFVL7/MCHiLPTNtRgcrKjG2swghZxZJs7wZddlkf0A/a
7egvzfrj73UwLQKjr8lr0WfaIuumcAO3ZJYXkfwAcCPW3Gu4ZJydNc0GcRsUuO5c
rr23jMhrHMyPW5BwXfAqn8sSGJuIX6nWk/HYwKFGRYGScOQbE8OmQJa1n0P9ky+E
DaWRLAGG5WSpDIhXYuKM7i+VzgjGPZIlhSyRIz8/1DDVqYKIAAuTYiYoc2UKOwgW
GIszz4GS7yOfb+sVtCdsZAsSUpW5sgp/I3j72xhoeCApAkAFndsDI7NWLyh1PqdZ
VVx4LZQhAGMCmIvSGY4qldODEUEZT6331/IJBrB/whQPPXAznaSao03kjQARAQAB
tD9BcGFjaGUgQmVhbSBBdXRvbWF0ZWQgUmVsZWFzZSBTaWduaW5nIDxwcml2YXRl
QGJlYW0uYXBhY2hlLm9yZz6JAk4EEwEKADgWIQSRPDOSp3DHge3E3avSAxb3EiE0
IgUCZFJ28AIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRDSAxb3EiE0IifC
EADKpgZnU9+hHyCeKrqc5POfSTOgx6Cz0ulnTBO1w3yv4ula5M8xr/pBxL8RVOQy
TkiWuop8tqw4h/evGuY6von6N6WT+bjU541r9lP00Tm3Flaaqd2ZjAYceqnapsM1
umzbtsyXtoaNT8Q57ZZww5QQxHtLcVbWWNIIUi3NudA2LYiOofwvM+G5uKaNaa+2
gwTb5Bse47RjAH2uXxzLmQ8Mr5N1N0u6sNELLf3Dhqys6QBJhuuYR3HSMVXmoD+S
9T0dzZnAQ8Gub+g1GN0HSRott0OHifSN7jtjnLAxLbPh8XQLqoc+CJzOzcKrbMnW
+kVOISQhchRYhePzBtLvb+Wg06+dmbEQIfg4TTOd+iI040Yc70tjZww5yWprVv7N
Q/yvpagwj+QDRhpKZYodcJbyAMmeJmjULqZx+yKmrGVxiCmOxw+Kdr3iRsl+gGKK
7n+jHUclWGTg1R7iOCZdeux+AF8VuspgGyLPIJKUp8uRPQg4J/F2Fw6SglE6sRz9
99WjVprAgCH3rtf1kZeEg+4inOldOf+61d4p4TBxKOd906TtN0X3nay/zwRORlh+
2ptkza9USz6w9bk5hU2OnwpKnZV5K+LIX7JeJWn3HjRQRvU7TYjfqxWomiZAFy1e
v2uLS2NlO1r0LRqk7rHkOZZSzipfAPNfolT/anSWX5D+1w==
=7XQl
-END PGP PUBLIC KEY BLOCK-


Re: [PMC Request] Add gpg key to release keys file

2023-05-23 Thread Robert Bradshaw via dev
Done.

On Tue, May 23, 2023 at 7:36 AM Danny McCormick via dev 
wrote:

> [...]


[Release-2.48.0] Uploading images to dockerhub

2023-05-23 Thread Ritesh Ghorse via dev
Hey everyone,

I'm at the stage of pushing Docker containers to the apache repository on
[Dockerhub](https://hub.docker.com/search?q=apache%2Fbeam&type=image).
Since I'm not a part of the `beammaintainers` group, I'm getting permission
denied.

Could someone in the `beammaintainers` group please let me know how to
proceed with this one?

Thanks!