Anyone? We are doing a series of tests to be confident, but if there are
folks who have had RAID 5 on Kafka and have data to share, please do.
Regards.
On Mon, Mar 23, 2020 at 11:29 PM Vishal Santoshi
wrote:
<< In RAID 5 one can lose more than only one disk RAID here will be data
corruption.
>> In RAID 5, if one loses more than one disk, there will be data
corruption.
On Mon, Mar 23, 2020 at 11:27 PM Vishal Santoshi
wrote:
> One obvious issue is disk failure tolerance.
different brokers, with the added caveat that we lose the whole broker as
well?
On Mon, Mar 23, 2020 at 10:42 PM Vishal Santoshi
wrote:
We have a pretty busy Kafka cluster with SSDs and plain JBOD. We are planning,
or at least thinking of, using RAID 5 (hardware RAID on 6-drive SSD brokers)
instead of JBOD for various reasons. Has someone used RAID 5? (We know that
there is a write overhead from parity bits on blocks and from recreating a
dead drive.)
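For context, the JBOD-versus-RAID choice surfaces in the broker's log.dirs
setting; a minimal sketch (server.properties, paths are hypothetical, not from
this thread):
  # JBOD: one data directory per physical SSD
  log.dirs=/data/disk1,/data/disk2,/data/disk3,/data/disk4,/data/disk5,/data/disk6
  # RAID 5: the six drives presented to Kafka as a single volume
  # log.dirs=/data/raid5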
Sorry, this was meant to go to flink :)
On Mon, Mar 16, 2020 at 6:47 PM Vishal Santoshi
wrote:
We have been on Flink 1.8.x in production and were planning to go to Flink
1.9 or above. We have always used the Hadoop uber jar from
https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2-uber,
but it seems they go up to 1.8.3 and their distribution ends in 2019. How do or
where do we g
a backup and recovery use case would be a smaller cluster... or maybe I
could do this using mm2.properties? But it is not apparent how.
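For reference, a one-way backup flow in mm2.properties would look roughly like
the sketch below (cluster names and bootstrap addresses are hypothetical):
  clusters = primary, backup
  primary.bootstrap.servers = primary-kafka:9092
  backup.bootstrap.servers = backup-kafka:9092
  # replicate everything from primary into the backup cluster
  primary->backup.enabled = true
  primary->backup.topics = .*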
On Sun, Jan 19, 2020 at 7:58 AM Vishal Santoshi
wrote:
Verified.
On Sat, Jan 18, 2020 at 12:40 PM Ryanne Dolan wrote:
> I think that's right. If there is no per-topic retention configured, Kafka
> will use the cluster default.
>
> Ryanne
>
> On Sat, Jan 18, 2020, 10:21 AM Vishal Santoshi
> wrote:
>
> > Last
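To restate Ryanne's point above as config, roughly:
  # broker server.properties: cluster-wide default, used when a topic has no retention.ms of its own
  log.retention.hours=168
  # a per-topic retention.ms, where present, takes precedence over this default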
retention ?
On Tue, Jan 14, 2020 at 10:11 PM Vishal Santoshi
wrote:
Thanks
On Tue, Jan 14, 2020 at 8:47 AM Ryanne Dolan wrote:
> Take a look at the DefaultConfigPropertyFilter class, which supports
> customizable blacklists via config.properties.blacklist.
>
> Ryanne
>
> On Tue, Jan 14, 2020, 6:05 AM Vishal Santoshi
> wrote:
>
>
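For reference, a sketch of the blacklist Ryanne mentions, assuming it accepts
a comma-separated list of topic-config names that should not be synced to the
target:
  # mm2.properties: do not propagate these source-topic configs
  config.properties.blacklist = retention.ms, retention.bytes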
re by allowing you to control how
> Connect creates topics, to some extent.
>
> Ryanne
>
> On Mon, Jan 13, 2020, 9:55 PM Vishal Santoshi
> wrote:
>
Can I override the retention on target topics through mm2.properties? It
should be as simple as stating retention.ms globally? I am also
curious whether it can be done at a single channel level.
For example, for A->B a topic on B should have a retention of x, and for B->A
the retention is y.
Is tha
And can you share the patch...
On Sun, Dec 22, 2019 at 10:34 PM Vishal Santoshi
wrote:
We also have a large number of topics, 1500 plus, in a cross-DC
replication. How do we increase the default timeouts?
On Wed, Dec 11, 2019 at 2:26 PM Ryanne Dolan wrote:
> Hey Peter. Do you see any timeouts in the logs? The internal scheduler will
> timeout each task after 60 seconds by defa
+1
On Mon, Nov 11, 2019 at 2:07 PM Ryanne Dolan wrote:
> Rajeev, the config errors are unavoidable at present and can be ignored or
> silenced. The Plugin error is concerning, and was previously described by
> Vishal. I suppose it's possible there is a dependency conflict in these
> builds. Can
.properties
* restart mm2
should work provided we do not change the client_id
Thanks.
On Mon, Nov 4, 2019 at 3:08 PM Ryanne Dolan wrote:
> > BTW any ideas when 2.4 is being released
>
> Looks like there are a few blockers still.
>
> On Mon, Nov 4, 2019 at 2:06 PM Vishal
rror/src/test/java/org/apache/kafka/connect/mirror/MirrorMakerConfigTest.java#L182
>
> Keep in mind this should be configured for the target cluster, and it might
> not take effect until the workers finish rebalancing.
>
> Ryanne
>
> On Sun, Nov 3, 2019, 9:40 AM Vishal Santosh
Hello folks,
Was doing stress tests and realized that the replication
to the target cluster, and thus the configuration of the KafkaProducer, has a
default acks of -1 (all), and that was prohibitively expensive. It should
have been as simple as a->b.producer.acks = 1 ( or b.producer.a
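Spelled out, the override being suggested above would look like this in
mm2.properties (which of the two forms is honored is exactly what was being
tested):
  # per replication flow
  a->b.producer.acks = 1
  # or for everything produced to cluster b
  b.producer.acks = 1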
a thread dump and the only thread I see on poll is I think for the
config org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:262)
On Thu, Oct 24, 2019 at 10:36 PM Vishal Santoshi
wrote:
I might have created a build from trunk, rather than the 2.4 branch,
but will confirm.
On Thu, Oct 24, 2019 at 4:44 PM Vishal Santoshi
wrote:
> The above may not be an issue, as it just uses the returned class
> loader to resolve the Connector, I think. What is not obvious,
Config:347)
And from there on nothing..
On Thu, Oct 24, 2019 at 3:02 PM Vishal Santoshi
wrote:
> Hey Ryanne,
>
> Seeing the below ERROR in the logs and then it seems the
> process does not consume (it does not exit with any errors). And this is
> intermittent. As in
asks being created.
>
> Ryanne
>
> On Sat, Oct 19, 2019 at 1:28 AM Vishal Santoshi >
> wrote:
>
> > Here is what I see
> >
> > * The max tasks are a cap on a Connector across the cluster. If I have 8
> > VMs but 8 max tasks, my assumption is that there wo
019 at 8:04 PM Vishal Santoshi
wrote:
I misspoke
>> I now have 8 VMs, 8 CPUs, with 48 max tasks and it did spread to the
8 VMs. I then upscaled to 12 VMs and the tasks *have not* migrated as I
would expect.
On Fri, Oct 18, 2019 at 8:00 PM Vishal Santoshi
wrote:
> OK, You will have to explain :)
>
> I had 12
https://blog.softwaremill.com/docker-support-in-new-java-8-finally-fd595df0ca54
On Fri, Oct 18, 2019 at 4:15 PM Ryanne Dolan wrote:
> What is tasks.max? Consider bumping to something like 48 if you're running
> on a dozen nodes.
>
> Ryanne
>
> On Fri, Oct 18, 2019, 1:43 PM V
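For reference, that suggestion is a one-line change in mm2.properties (48 is
just Ryanne's example figure for a dozen nodes):
  tasks.max = 48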
assigned any
replication. There is plenty to go around (more than a couple of
thousand partitions). Is there something I am missing? As in my
current case 5 of the 12 VMs are idle...
Vishal
On Fri, Oct 18, 2019 at 7:05 AM Vishal Santoshi
wrote:
> Oh sorry a. COUNTER... is more like
hanged. Maybe submit a PR?
>
> Ryanne
>
> On Thu, Oct 17, 2019 at 10:00 AM Vishal Santoshi <
> vishal.santo...@gmail.com>
> wrote:
>
> > Hmm ( I did both )
> >
> > another->another_test.enabled = true
> >
> > another->another_test.topics
Oh sorry a. COUNTER... is more like it
On Fri, Oct 18, 2019, 6:58 AM Vishal Santoshi
wrote:
> Will do
> One more thing: the age/latency metrics seem to be analogous, as in they
> seem to be calculated using similar routines. I would think a metric
> tracking
> the
es like tasks.max are honored without the A->B or A prefix, but
> auto.offset.reset is not one of them.
>
> Ryanne
>
> On Wed, Oct 16, 2019 at 9:13 AM Vishal Santoshi >
> wrote:
>
> > Hey Ryanne,
> >
> >
> > How do I override auto.offset.reset
there is some way in general of overriding consumer and
producer configs through mm2.properties in MM2 ?
Regards.
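A sketch of what such an override might look like, reusing the per-flow
client-config prefix used elsewhere in this thread for producer acks; whether
auto.offset.reset is actually honored in this form is the open question here:
  a->b.consumer.auto.offset.reset = latest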
On Tue, Oct 15, 2019 at 3:44 PM Vishal Santoshi
wrote:
> Thank you so much for all your help. Will keep you posted on tests I do..
> I hope this is helpful to other fol
he.org/jira/browse/KAFKA-6080 for updates on
> exactly-once semantics in Connect.
>
> Ryanne
>
> On Tue, Oct 15, 2019 at 1:24 PM Vishal Santoshi >
> wrote:
>
> > >> You are correct. I'm working on a KIP and PoC to introduce
> > transactions to
r groups there either. In
> fact, they don't commit() either. This is nice, as it eliminates a lot of
> the rebalancing problems legacy MirrorMaker has been plagued with. With
> MM2, rebalancing only occurs when the number of workers changes or when the
> assignments change (e.g. new
g is configured correctly but with too much latency to
> successfully commit within the default timeouts. You may want to increase
> the number of tasks substantially to achieve more parallelism and
> throughput.
>
> Ryanne
>
> On Mon, Oct 14, 2019, 2:30 PM Vishal Santoshi
>
)
On Mon, Oct 14, 2019 at 3:15 PM Vishal Santoshi
wrote:
> I think this might be it.. Could you confirm. It seems to be on the path
> to commit the offsets.. but not sure...
>
> [2019-10-14 15:29:14,531] ERROR Scheduler for MirrorSourceConnector caught
> exception in scheduled task
ing sent.
>
> Ryanne
>
> On Mon, Oct 14, 2019 at 10:46 AM Vishal Santoshi <
> vishal.santo...@gmail.com>
> wrote:
>
> > 2nd/restore issue ( I think I need to solve the offsets topic issue
> > before I go with the scale up and down issue )
> >
topic? It
> should exist alongside the config and status topics. Connect should create
> this topic, but there are various reasons this can fail, e.g. if the
> replication factor is misconfigured. You can try creating this topic
> manually or changing offsets.storage.replicatio
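For reference, a sketch of the internal-topic replication settings Ryanne is
pointing at (the value 3 assumes the target cluster has at least three
brokers; how MM2 scopes these per cluster is not shown in this thread):
  offsets.storage.replication.factor = 3
  status.storage.replication.factor = 3
  config.storage.replication.factor = 3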
Using https://github.com/apache/kafka/tree/trunk/connect/mirror as a guide,
I have built from source the origin/KIP-382 branch of
https://github.com/apache/kafka.git.
I am seeing 2 issues:
* I brought up 2 processes on 2 different nodes (they are actually pods on
k8s, but that should not matter). They s