of these schemes.
>
> Thanks!
> Ramya
>
> On Tue, Sep 26, 2023 at 1:40 AM Moritz Mack wrote:
>
>> Actually, I doubt a provider chain can solve your problem. It will
>> always return the credentials of the first provider in the chain that can
>> provide some rega
ws ProviderChain (which
> is *not* supported, per the Javadocs), is there any other solution that
> currently exists within Beam that I can leverage? Or is the only option for
> me to contribute to the open source project and write my own custom implementation?
>
> Thanks,
> Ramya
>
Hi Ramya,
unfortunately only a subset of AWS credential providers is supported.
Additionally, programmatic configuration is discouraged: such options are hard
to support and there's no decent way to validate them.
Please have a look at the Javadocs to see what is supported:
https://beam.ap
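For illustration, a supported provider is normally configured via pipeline
options as JSON rather than programmatically, roughly like this (a sketch; see
the Javadocs above for the exact set of supported types):

  --awsCredentialsProvider='{"@type": "StaticCredentialsProvider", "accessKeyId": "<key_id>", "secretAccessKey": "<secret_key>"}'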
Hi Jon,
I just want to check in here briefly, are you still looking for support on this?
Sadly yes, this totally lacks documentation and isn't straightforward to set
up.
/Moritz
On 21.06.23, 23:47, "Jon Molle via user" wrote:
Hi Pavel, Thanks for your response! I took a look at running Beam
Hi Jon,
sorry for the late reply.
A while ago I was struggling with this as well. Unfortunately, there's no direct
way to do this per pipeline.
However, you can set default arguments by passing them to the job service
container using the environment variable _JAVA_OPTIONS.
I hope this still helps!
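For illustration, with a Docker-based job service that could look roughly like
this (image name and the options themselves are made up for the example):

  docker run -e "_JAVA_OPTIONS=-Dmy.default.option=value -Xmx2g" \
    apache/beam_flink1.16_job_server:latest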
https://github.com/apache/beam/releases/tag/v2.47.0
Any knowledge in this would be appreciated.
Thanks
Sachin
On Mon, Sep 5, 2022 at 12:
Hi Evan,
Not sure why Maven Central suggests using "compileOnly".
That's certainly wrong; make sure to use "implementation" in your case.
Cheers, Moritz
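For reference, the Gradle declaration would then look like this (version is
just an example):

  dependencies {
      implementation "org.apache.beam:beam-sdks-java-io-parquet:2.46.0"
  }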
On 21.04.23, 01:52, "Evan Galpin" wrote:
Hi all, I'm trying to make use of ParquetIO. Based on what's documented in
maven central, I'm including t
Dear All,
The runner for Spark 2 was deprecated quite a while back in August 2022 with
the release of Beam 2.41.0 [1]. We’re planning to move ahead with this and
finally remove support for Spark 2 (beam-runners-spark) to only maintain
support for Spark 3 (beam-runners-spark-3) going forward.
N
Hi Sankar,
First, as Alexey pointed out, please try to migrate to the Beam AWS SDK v2 as
soon as possible. The SDK v1 (including the Kinesis module) has long been
deprecated and will be removed sometime soon.
The AWS API doesn’t support cross-account access for Kinesis using an ARN. This
is
Hi Sachin,
I’d recommend migrating to the new AWS 2 IOs in
beam-sdks-java-io-amazon-web-services2 (using Amazon’s Java SDK v2) rather soon.
The previous ones (beam-sdks-java-io-amazon-web-services and
beam-sdks-java-io-kinesis) are both deprecated and not actively maintained
anymore.
Please ha
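Switching the dependency is mostly mechanical, roughly (version is just an
example):

  dependencies {
      // replaces beam-sdks-java-io-amazon-web-services and beam-sdks-java-io-kinesis
      implementation "org.apache.beam:beam-sdks-java-io-amazon-web-services2:2.41.0"
  }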
Hi Sigalit,
Could you explain in a bit more detail what you mean by 2 different types of
messages?
Do they share the same schema, e.g. using a union / oneof type? Or are you in
fact talking about different messages with separate schemas (e.g. discriminated
using a message header)?
The recomme
Could you share some more details on what you've tried so far?
I suppose you are using the JdbcIO, right? Have you looked at
JdbcIO.PoolableDataSourceProvider?
/ Moritz
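For illustration, a sketch of how that could look (driver, URL, query, and
credentials are placeholders):

  import org.apache.beam.sdk.coders.StringUtf8Coder;
  import org.apache.beam.sdk.io.jdbc.JdbcIO;

  // Share a pooled DataSource instead of creating connections per bundle,
  // which is what typically exhausts the Oracle connection limit.
  JdbcIO.DataSourceConfiguration config =
      JdbcIO.DataSourceConfiguration.create(
              "oracle.jdbc.OracleDriver", "jdbc:oracle:thin:@//db-host:1521/SERVICE")
          .withUsername("user")
          .withPassword("secret");

  pipeline.apply(
      JdbcIO.<String>read()
          .withDataSourceProviderFn(JdbcIO.PoolableDataSourceProvider.of(config))
          .withQuery("SELECT name FROM customers")
          .withRowMapper(rs -> rs.getString(1))
          .withCoder(StringUtf8Coder.of()));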
On 28.07.22, 17:35, "Koka, Deepthi via dev" wrote:
Hi Team, We have an issue with the Oracle connections being used up and we ha
I noticed there’s also a similar bug open for the Spark runner
https://github.com/apache/beam/issues/21378
Problem seems to be in SimpleDoFnRunner.TimerInternalsTimer#clear(), which
doesn’t work with InMemoryTimerInternals (anymore).
https://github.com/apache/beam/blob/master/runners/core-java/sr
Yes, that’s exactly what I was referring to.
A - hopefully - easy way to avoid this problem might be to change the Spark
configuration to use the following:
--conf
"spark.metrics.conf.driver.sink.jmx.class"="com.salesforce.einstein.data.platform.connectors.JmxSink"
--conf
"spark.metrics.conf.executor.sink.jmx.class"="com.salesforce.einstein.data.platform.connectors.JmxSink"
uire us to make another sink? -Yushu On Thu, Jul 14, 2022 at 1:05 PM
Moritz Mack <
Thanks Mo
Hi Yushu,
Wondering, how did you configure your Spark metrics sink? And what version of
Spark are you using?
Key is to configure Spark to use one of the sinks provided by Beam, e.g.:
"spark.metrics.conf.*.sink.csv.class"="org.apache.beam.runners.spark.metrics.sink.CsvSink"
Currently there’s sup
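A complete sink configuration typically also needs the standard Spark sink
properties, e.g. (directory and period are just examples):

  spark-submit \
    --conf "spark.metrics.conf.*.sink.csv.class"="org.apache.beam.runners.spark.metrics.sink.CsvSink" \
    --conf "spark.metrics.conf.*.sink.csv.directory"="/tmp/spark-metrics" \
    --conf "spark.metrics.conf.*.sink.csv.period"="10" \
    ...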
Hi Yushu,
Have a look at org.apache.beam.runners.spark.translation.EvaluationContext in
the Spark runner. It maintains that mapping between PCollections and RDDs
(wrapped in the BoundedDataset helper). As Reuven just pointed out, values are
timestamped (and windowed) in Beam, therefore BoundedD
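Conceptually (a simplified sketch, not the exact internals):

  // The RDDs tracked by EvaluationContext hold WindowedValue<T>, not bare T:
  // each element carries its timestamp, window(s) and pane info.
  JavaRDD<WindowedValue<String>> rdd = boundedDataset.getRDD(); // hypothetical accessor
  JavaRDD<String> values = rdd.map(wv -> wv.getValue());        // drops that metadata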
Hi Beam AWS user,
as you might know, there are currently two different versions of AWS IO
connectors in Beam for the Java SDK:
* amazon-web-services [1] and kinesis [2] for the AWS Java SDK v1
* amazon-web-services2 (including kinesis) [3] for the AWS Java SDK v2
With the recent release
forward.
Regards,
Moritz
From: Alexey Romanenko
Date: Thursday, 10. March 2022 at 11:09
To: user@beam.apache.org
Cc: Moritz Mack
Subject: Re: Write S3 File with CannedACL
The contributions are very welcome! So, if you decide to go forward with this,
please take a look at these guides [1][2
        .withValueSerializer(TestObjectSerializer.class) // cannot directly use KafkaAvroSerializer
        .withTopic(options.getOutputTopic()) // just need serializer for value
        .values());
}
Regards,
Anjana
From: Moritz Mack
can provide the necessary configuration
(schema.registry.url, …)
But I’m not sure I fully understand your issue. Could you share some code
snippets?
Regards, Moritz
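For illustration, the producer-side configuration could be passed like this
(a sketch; the raw cast works around KafkaAvroSerializer implementing
Serializer<Object>, which is why it can't be used directly, as mentioned
earlier in this thread):

  import io.confluent.kafka.serializers.KafkaAvroSerializer;
  import java.util.Map;
  import org.apache.avro.generic.GenericRecord;
  import org.apache.beam.sdk.io.kafka.KafkaIO;
  import org.apache.kafka.common.serialization.LongSerializer;

  input.apply(
      KafkaIO.<Long, GenericRecord>write()
          .withBootstrapServers("broker_1:9092")
          .withTopic("my_topic")
          .withKeySerializer(LongSerializer.class)
          .withValueSerializer((Class) KafkaAvroSerializer.class) // unchecked cast
          .withProducerConfigUpdates(
              Map.of("schema.registry.url", "http://schema-registry:8081")));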
From: Moritz Mack
Reply to: "user@beam.apache.org"
Date: Friday, 17. December 2021 at 12:57
To: "user@beam.apache.org
Hi Anjana,
Have you checked the Javadocs of KafkaIO?
It is pretty straightforward:
PCollection<KV<Long, GenericRecord>> input = pipeline
  .apply(KafkaIO.<Long, GenericRecord>read()
     .withBootstrapServers("broker_1:9092,broker_2:9092")
     .withTopic("my_topic")
     .withKeyDeserializer(LongDeserializer.class)
     // Use Conflu
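For completeness, reading with the Confluent Schema Registry looks roughly like
this (registry URL and subject are placeholders):

  import org.apache.avro.generic.GenericRecord;
  import org.apache.beam.sdk.io.kafka.ConfluentSchemaRegistryDeserializerProvider;

  pipeline.apply(
      KafkaIO.<Long, GenericRecord>read()
          .withBootstrapServers("broker_1:9092,broker_2:9092")
          .withTopic("my_topic")
          .withKeyDeserializer(LongDeserializer.class)
          // Look up the Avro writer schema for the given subject in the registry
          .withValueDeserializer(
              ConfluentSchemaRegistryDeserializerProvider.of(
                  "http://schema-registry:8081", "my_topic-value")));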