Any help here?
On Sun, 2 Mar 2025 at 1:40 PM, Taher Koitawala wrote:
> Hi All,
> Curious question! Has someone done a benchmark of native Flink
> vs the Beam Flink runner?
>
> I'm wanting to know if there are differences in the following
> areas (Consider S
4. Chaining
I ask this to assess which is the better approach for us, as we are
choosing an engine for our stream-processing platform.
Regards,
Taher Koitawala
Folks, any idea on this?
On Wed, 15 Jan 2025 at 7:08 AM, Taher Koitawala wrote:
> Adding the Flink community here just in case anyone has more info on this.
>
> On Tue, 14 Jan 2025 at 1:00 PM, Taher Koitawala
> wrote:
>
>> Hi All,
>> Been moving from Flink develo
Adding the Flink community here just in case anyone has more info on this.
On Tue, 14 Jan 2025 at 1:00 PM, Taher Koitawala wrote:
> Hi All,
> Been moving from Flink development to Beam development; I am
> using the FileIO Parquet IO.
>
> Folks, in Flink we could use strea
that with Kafka.
How would I do the same in Beam using the FlinkRunner? Can you all help
me apply the same pattern? Is there a specific way to pass a streaming file
sink to FileIO?
Regards,
Taher Koitawala
er project,
> compared to streaming inserts' 1 GB/s [2].
>
> [1] https://cloud.google.com/bigquery/quotas#write-api-limits
> [2] https://cloud.google.com/bigquery/quotas#streaming_inserts
>
> On Thu, Feb 22, 2024 at 8:57 AM Taher Koitawala
> wrote:
>
>> Hi All,
>
Hi All,
I want to ask questions regarding sinking a very high-volume
stream to BigQuery.
I will read messages from a Pub/Sub topic and write to BigQuery. In this
streaming job I am worried about hitting the BigQuery streaming-inserts
limit of 1 GB per second on streaming API writes.
I am fi
Also, auto-creation is not there.
On Thu, Mar 5, 2020 at 3:59 PM Taher Koitawala wrote:
> The proposal is to add more sources and also to have event-time or
> processing-time enhancements on them
>
> On Thu, Mar 5, 2020 at 3:50 PM Andrew Pilloud wrote:
>
>> I believe we ha
-table/
>
> Existing GCP tables can also be loaded through the GCP datacatalog
> metastore. What are you proposing that is new?
>
> Andrew
>
>
> On Thu, Mar 5, 2020, 12:29 AM Taher Koitawala wrote:
>
>> Hi All,
>> We have been using Apache Beam extensively t
_INFO spanner on
(pubsub.card_number = spanner.card_number);
Also consider that if any of the sources or sinks change, we only change
the SQL and we're done.
Please let me know your thoughts about this.
Regards,
Taher Koitawala
performant enough
or not. If someone has already tried this out and can give me a few
caveats, then that would be really awesome.
Regards,
Taher Koitawala
start writing the connector.
>
> Thanks,
> Max
>
> On 26.10.19 12:36, Taher Koitawala wrote:
> > Thank you Alex and Max,
> > My jira id is taherk77. Please add me.
> >
> > Regards,
> > Taher Koitawala
> >
> > On Sat, Oct 26, 2019, 3:53
Thank you Alex and Max,
My jira id is taherk77. Please add me.
Regards,
Taher Koitawala
On Sat, Oct 26, 2019, 3:53 PM Maximilian Michels wrote:
> That sounds great. How about you start looking into this, Taher? If
> necessary, Sijie could provide additional insight into
I would be interested in contributing to the Pulsar Beam connector. That's
one of the reasons I started the email thread.
Regards,
Taher Koitawala
On Sat, Oct 26, 2019, 9:41 AM Sijie Guo wrote:
> This is Sijie Guo from StreamNative and Pulsar PMC.
>
> Maximilian - thank you fo
built-in/
>
> Cheers
>
> Reza
>
> On Thu, 24 Oct 2019 at 13:56, Taher Koitawala wrote:
>
>> Hi All,
>> Been wanting to know if we have a Pulsar connector for Beam.
>> Pulsar is another messaging queue like Kafka and I would like to build a
>&
Hi All,
Been wanting to know if we have a Pulsar connector for Beam.
Pulsar is another messaging queue like Kafka and I would like to build a
streaming pipeline with Pulsar. Any help would be appreciated.
Regards,
Taher Koitawala