There was a typo in the question - should have been ...
I can tolerate the [replicant]
Hello,
We are using Kafka version 0.8.1 and the Python Kafka client.
Everything has been working fine, and suddenly this morning I saw
an OffsetOutOfRange error on one of the partitions. (We have 20 partitions in our
Kafka cluster.)
We fixed it by seeking to the head offset and restarting the app.
I dug deeper and saw this during normal operation:
In the kafka broker log:
[2014-11-03 21:39:25,658] ERROR [KafkaApi-8] Error when processing fetch
request for partition [activity.stream,5] offset 7475239 from consumer with
correlation id 69 (kafka.server.KafkaApis)
kafka.common.OffsetOutOfRange
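For anyone hitting the same thing: the thread above used the Python client against 0.8.1, but the recovery idea is the same with the Java consumer. A minimal sketch only, assuming a local broker and a placeholder group id; with auto.offset.reset=none, poll() surfaces the condition as an exception instead of silently resetting:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;

public class SeekOnOutOfRange {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "activity-stream-consumer");  // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // "none" makes poll() throw OffsetOutOfRangeException instead of resetting silently.
        props.put("auto.offset.reset", "none");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("activity.stream"));
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
                } catch (OffsetOutOfRangeException e) {
                    // The stored offset has been aged out of the log: jump to the earliest
                    // offset still held by the broker (the same idea as the fix above).
                    consumer.seekToBeginning(e.partitions());
                }
            }
        }
    }
}
```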
Hello,
I understand what this error means, just not sure why I keep running into
it after 24-48 hrs of running fine consuming > 300 messages / second.
What happens when a kafka log rolls over and some old records are aged
out? I mean what happens to the offsets? We are using a python client w
ting their offset to, e.g. the head of
> the log.
>
> How frequently do your clients read/write the offsets in ZK?
>
> Guozhang
>
> On Thu, Nov 6, 2014 at 6:23 PM, Jimmy John wrote:
>
> > Hello,
> >
> > I understand what this error means, just not sure wh
Livefyre (http://web.livefyre.com/) uses Kafka for real-time
notifications, the analytics pipeline, and as the primary mechanism for general
pub/sub.
thx...
jim
On Sat, Nov 8, 2014 at 7:41 AM, Gwen Shapira wrote:
> Done!
>
> Thank you for using Kafka and letting us know :)
>
> On Sat, Nov 8, 201
How do I configure the application Kafka log dir?
Right now the default is /var/log/upstart/kafka.log. I want to point it to
a different mount dir, e.g.
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
--logdir /mnt/kafka/kafka-app-logs
But the above gives me errors:
im
On Mon, Nov 17, 2014 at 11:29 AM, Harsha wrote:
> you can configure it under /opt/kafka/config/log4j.properties and look
> for kafka.log.dir
>
> On Mon, Nov 17, 2014, at 11:11 AM, Jimmy John wrote:
> > How do I configure the application kafka log dir?
> >
> > Right
Looks like there is already a ticket for this:
https://issues.apache.org/jira/browse/KAFKA-1204
thx
jim
On Mon, Nov 17, 2014 at 1:59 PM, Jimmy John wrote:
> I tried that but it did not work. Dug a little deeper and saw this line
> in bin/kafka-run-class.sh:
>
> KAFKA_LOG4J_OPTS="
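A note for anyone else who lands on this thread: in later Kafka releases bin/kafka-run-class.sh honors a LOG_DIR environment variable and passes it to log4j as kafka.logs.dir, so something like

LOG_DIR=/mnt/kafka/kafka-app-logs /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties

redirects the application logs without editing the scripts. In the 0.8.x scripts the path was effectively hard-coded, which is what KAFKA-1204 tracks; treat the variable name as something to verify against your version.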
I was just fighting this same situation. I never expected the new producer
send() method to block as it returns a Future and accepts a Callback.
However, when I tried my unit test, just replacing the old producer with
the new, I immediately started getting timeouts waiting for metadata. I
struggled
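For later readers: the blocking happens inside send() itself, while the client waits for topic metadata or for space in the record buffer; the returned Future only covers the broker acknowledgement. In current Java clients that pre-send blocking is bounded by max.block.ms. A minimal sketch, assuming a local broker and a placeholder topic:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Bound how long send() may block while fetching metadata or waiting for buffer space.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // e.g. a TimeoutException if metadata never arrived in time
                            exception.printStackTrace();
                        } else {
                            System.out.printf("wrote to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```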
Hi,
I have a general query: as per the code, in the Kafka producer the serialization
happens before partitioning. Is my understanding correct? If yes, what is the
reason for it?
Regards,
Liju John
Hi,
I am new to Kafka and still learning, and I have a query. As per my
understanding, the serialization happens before the partitioning and
grouping of messages per broker. Is my understanding correct, and what is
the reason for this?
Regards,
Liju John
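Not an authoritative answer, but one common explanation: the producer's accumulator batches records by size in bytes, and the default partitioner hashes the serialized key bytes, so a record has to be serialized before its partition can be chosen. The custom Partitioner below is only a sketch to show that the interface already receives the serialized keyBytes/valueBytes; the class name is made up.

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class ByteHashPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // keyless records: pin to partition 0 in this toy example
        }
        // Same idea as the built-in default partitioner: hash the serialized key bytes.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```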
pplication.
Thanks.
John Dong
that my consumers are pulling the msgs from the
topic, but at some point in time it throws an exception that the current offset
is greater than the latest offset of the partition.
Is it because of retention that the latest offset gets reset? How can I handle
this scenario?
Regards,
Liju John
S/W developer
There are various prior questions, including
http://search-hadoop.com/m/4TaT4ts2oz1/disaster+recovery/v=threaded
Is there a clear document on disaster recovery patterns for Kafka and their
respective trade-offs?
How are actual prod deployments dealing with this?
For instance I want my topics replicat
Has anyone done a comparison of these two?
How do they compare in terms of features and scale, but also disaster
recovery provision, convenience, operability, etc.?
Re: KAFKA-1539
Is the community executing random failure testing for Kafka?
It would seem that such testing would have found KAFKA-1539 and some other bugs
that were recently fixed.
Is the community considering such testing?
Thanks John
or press enter to exit:
Ideas?
Thanks
-John
Hi Jun:
Is there a pre-built distribution that I can download? I am having
trouble with Maven on my machine.
-John
On Thu, Dec 6, 2012 at 5:18 PM, Jun Rao wrote:
> Hmm, this seems like a Maven issue. Do you have a local Maven repo?
>
> Thanks,
>
> Jun
>
> On Thu
Regards,
Liju John
Just to add more info:
Our message size = 1.5 MB, so effectively there was no batching, as our
batch size is 200.
Is there any case where the RecordAccumulator can grow beyond the configured
buffer.memory?
Regards,
Liju John
On Wed, Aug 5, 2015 at 12:41 PM, Liju John wrote:
> Hi,
>
It would seem (if the metrics registry is accurate) that replica
fetcher threads can persist after a leadership election, even when the
broker itself is elected leader.
This also seems to occur after a reassignment (as evidenced by the 5
different thread entries for the same partition in the regi
)
[2015-09-18 02:57:25,654] INFO [kafka-log-cleaner-thread-0], Stopped
(kafka.log.LogCleaner)
-John
with this on a reoccurring basis.
-John
On Fri, Sep 18, 2015 at 8:48 AM Todd Palino wrote:
> Yes, this is a known concern, and it should be fixed with recent commits.
> In the meantime, you'll have to do a little manual cleanup.
>
> The problem you're running into is a cor
Hi,
Just wondering, when is the 0.9 version of the Kafka library releasing? I am
particularly interested in the KafkaConsumer pause/resume feature.
Is there any other way to pause a consumer without triggering a rebalancing
process in 0.8.x?
Thanks,
John
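Following up for the archive: the 0.9 consumer did ship pause()/resume(), which stops fetching on the assigned partitions without leaving the group, so no rebalance is triggered as long as poll() keeps being called. A rough sketch against the current Java consumer; the topic, group id, and the backlogTooLarge() check are placeholders, and the exact pause/resume signatures have changed across releases.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "pause-demo");               // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
                if (backlogTooLarge()) {
                    // Stop fetching but keep polling, so the consumer stays in the group
                    // and no rebalance happens while the application catches up.
                    consumer.pause(consumer.assignment());
                } else {
                    consumer.resume(consumer.assignment());
                }
            }
        }
    }

    private static boolean backlogTooLarge() {
        return false; // stand-in for application-specific backpressure logic
    }
}
```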
I've detailed the
issue here https://issues.apache.org/jira/browse/KAFKA-2572. Can anyone
offer any advice or suggestions? Thanks in advance.
John
t into new
Zookeeper ensemble
8. Shutdown old Kafka cluster
9. Restart data feeds into new Kafka cluster
The Kafka documentation is great and I've tested out the topic reassignment
and consumer offset import and export, but, again, just want to ensure I am
not missing anything.
Thanks
--John
Hi Everyone,
Perhaps a silly question... does one need to shut down incoming data feeds
to Kafka prior to moving partitions via the kafka-reassign-partitions.sh
script? My thought is yes, but I just want to be sure.
Thanks
--John
Nice! Thanks Gwen!
--John
On Mon, Nov 2, 2015 at 1:03 PM, Gwen Shapira wrote:
> Actually, no. You can move partitions online.
>
> The way it works is that:
> 1. A new replica is created for the partition in the new broker
> 2. It starts replicating from the leader until it catc
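For anyone following along: the reassignment tool is driven by a JSON file describing the target replica placement, e.g.

bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 --reassignment-json-file reassign.json --execute

followed by the same command with --verify to watch progress. The ZooKeeper address and file name are placeholders, and newer releases take --bootstrap-server instead of --zookeeper.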
intended
to be brokers within the same cluster. Once the data is moved to the new
brokers, the old brokers will be deleted, or at least that's what I am
intending to do.
Please confirm if my approach makes sense or if there is a problem and/or a
better way to do it.
Thanks
--John
On Mon, Nov 2,
Can a correlation ID be created from a ConsumerRecord that will allow for
identification of the corresponding RecordMetadata instance that was returned
from the Producer.send() method?
I am looking at the JavaDocs, and the Producer returns RecordMetadata, which has
the following signature:
Record
r and ConsumerRecord
Correlation ID is for a request (i.e., a separate ID for a produce request and a
fetch request), not a record. So it can't be used in the way you are trying to.
On Wed, Dec 9, 2015 at 9:30 AM, John Menke wrote:
> Can a correlationID be created from a ConsumerRecord that will allow
John,
Your question was a bit confusing because CorrelationID has a specific
meaning in the Kafka protocols, but those are an implementation detail that
you, as a user of the API, should not need to worry about. CorrelationIDs
as defined by the protocol are not exposed to the user (and do not
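A small sketch of what that leaves you with: since correlation IDs live at the request level, the practical way to match a consumed record back to what send() returned is the (topic, partition, offset) coordinates in RecordMetadata, which uniquely identify a record. The helper class name here is made up.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public final class RecordCoordinates {
    // True when the consumed record is the one the producer's send() reported back.
    public static boolean sameRecord(RecordMetadata meta, ConsumerRecord<?, ?> rec) {
        return rec.topic().equals(meta.topic())
                && rec.partition() == meta.partition()
                && rec.offset() == meta.offset();
    }
}
```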
I'm reading the new client design in version 0.9, and I have a question about how
requests go in and out of inFlightRequests.
Here is the basic flow:
when the Sender sends a ClientRequest to the NetworkClient, it is added to
inFlightRequests, which tracks in-flight requests
```
private void doSend(ClientRequest request, long now) {
```
I have set the host.name option in the server.properties file, but the Broker
is still binding to all interfaces, and logging that's what it is doing.
This is with kafka 0.9.0 running on a Solaris 10 server with 3 Virtual
interfaces installed, in addition to the Physical interface.
Hi,
I am writing a Kafka client. I tried to send a produce request and I get
back an error code of "2" on the partition, but I couldn't find any
documentation describing this error code. Any help?
Thanks,
John
Never mind, found the documentation; it was a CRC32 error - I fixed it.
On Mon, Feb 1, 2016 at 9:24 PM john pradeep wrote:
> Hi,
> I am writing a kafka client, I tried to send a produce request and I get
> back an error code on the partition as "*2*". but i couldn't fin
I just ran into this issue in our load environment, and unfortunately I came up
with the same options outlined above. Any better solutions would be most
appreciated; otherwise I now consider the use of delete topic in any
critical environment to be off the table.
On Wed, Feb 3, 2016 at 10:10 AM Ivan Dy
What I ended up doing, after having similar issues to the ones you're having, was:
- stop all the brokers
- rm -rf all the topic data across the brokers
- delete the topic node in ZK (see the note after these steps)
- set auto.create.topics.enable=false in the server.properties
- start the brokers up again
The topic stayed deleted this wa
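If it helps anyone repeating these steps: "the topic node in ZK" is /brokers/topics/<topic-name>, which the bundled shell can remove non-interactively, e.g.

bin/zookeeper-shell.sh zk-host:2181 rmr /brokers/topics/my-topic

Host and topic are placeholders, and newer ZooKeeper versions spell the command deleteall; only do this with the brokers stopped, as in the steps above.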
Looking for a replication scheme whereby a copy of my stream is replicated
into another dc such that the same events appear in the same order, with the
same offsets, in each dc.
This makes it easier for me to build replicated state machines, as I get
exactly the same data in each dc.
Is there any way
Hi, I've seen some discussion on this but nothing definitive.
If I have a 0.8.1.1 back end, can I safely use a 0.8.2 client?
I can't upgrade the back end yet but want to start using Scala 2.11 in my
client app, but the lack of a 2.12 Kafka client dependency is holding me at
2.10. The earliest Kafka
t tested the gradle to install the artifacts today, because I don't
want to break what's working.
What is the correct dependency which I should be using?
Thanks
John
3
Topic[0], offset: 151866, key: key1
Topic[0], offset: 151867, key: key2
Topic[0], offset: 151869, key: key3
The first offset is 151866. Is this correct behavior?
Thanks,
John
Thanks Ismael - I've got a good build installed now.
John
-Original Message-
From: isma...@gmail.com [mailto:isma...@gmail.com] On Behalf Of Ismael Juma
Sent: Tuesday, February 09, 2016 6:00 PM
To: users@kafka.apache.org
Subject: Re: Building the 0.9.0 branch
Hi John,
The core ja
*Use Case: Disaster Recovery & Re-indexing SOLR*
I'm using Kafka to hold messages from a service that prepares "documents"
for SOLR.
A second micro service (a consumer) requests these messages, does any final
processing, and fires them into SOLR.
The whole thing is (in part) designed to be used
verything. If you set the offset to 10,
> I'll read the second and third messages, and so on.
>
> see more here:
>
> http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf
> and here: http://kafka.apache.org/documentatio
3
etc... (and that could be an oversimplification for purposes of the
introduction - I get that...)
Feel free to comment or not - I'm going to keep digging into it as best I
can - any clarifications will be gratefully accepted...
On Wed, Feb 17, 2016 at 1:50 PM, John Bickerstaff
wrote:
keep up with the incoming messages sent over by
Storm? If so, do I add brokers to the cluster, do I add more topics, a
combo thereof or something else?
As always, any thoughts from people who know more than I do are
appreciated. :)
Thanks
--John
This may not be helpful, but the first thing I've learned to check in
similar situations is whether there is significant time-drift between VMs
and actual hardware. Some combination of time-drift and a time-sensitive
security check could be causing this. IIRC, CentOS has a funky issue with
gettin
Hi Alex,
Great info, thanks! I asked a related question this AM--is a full queue
possibly a symptom of back pressure within Kafka?
--John
On Thu, Feb 18, 2016 at 12:38 PM, Alex Loddengaard
wrote:
> Hi Saurabh,
>
> This is occurring because the produce message queue is full when
?
--John
I have tuned the producers
On Thu, Feb 18, 2016 at 3:59 PM, Alex Loddengaard wrote:
> Hi John,
>
> I should preface this by saying I've never used Storm and KafkaBolt and am
> not a streaming expert.
>
> However, if you're running out of buffer i
10X2.
Important notes:
1. Replication factor is 1
2. async producer
3. request.required.acks is 1
Any ideas?
--John
t - a count that bore no relation to the number of
messages in the topic - which worried me because I couldn't explain it --
and things I can't explain make me nervous in the context of disaster
recovery...
I appreciate your confirmation of my theory about what is going on.
--JohnB (aka s
Hmmm... I don't know for sure, but any chance a re-boot of Zookeeper would
help?
Is your topic still in /admin/delete_topics? (On Zookeeper I mean)
Also, how important is it to know what happened as opposed to just getting
to a runnable state again?
In other words, what time/effort will it cos
Guozhang,
Do you know the ticket for changing the "batching criterion from
#.messages to bytes"? I am unable to find it. I'm working on porting
a similar change to pykafka.
John
On Sat, Mar 5, 2016 at 4:29 PM, Guozhang Wang wrote:
> Hello,
>
> Did you have compression tur
I'm running WordCountProcessorDemo with the Processor API and changed a couple of
things:
1. configured 1 stream-thread and 1 replica
2. changed inMemory() to persistent()
My Kafka version is 0.10.0.0. After running the streaming application, I check the
msg output with the console consumer:
➜ kafka_2.10-0.10.0.0 bin/kafka-c
ill output kafka:
so this may be a problem with the console consumer.
2017-06-07 18:24 GMT+08:00 john cheng :
> I'm running WordCountProcessorDemo with Processor API. and change
> something below
> 1. config 1 stream-thread and 1 replicas
> 2. change inMemory() to persistent()
>
001
p:0,o:3,k:msg1,v:0002
p:0,o:4,k:msg3,v:0002
p:1,o:0,k:msg2,v:0001
p:1,o:1,k:msg4,v:0001
p:1,o:2,k:msg2,v:0002
p:1,o:3,k:msg2,v:0003
2017-06-07 18:42 GMT+08:00 john cheng :
> I add some log on StoreChangeLog
>
> for (K k : this.dirty) {
> V v = getter.g
I have two app instances; the input topic has 2 partitions, and each instance is
configured with one thread and one replica.
Also, instance1's state store is /tmp/kafka-streams, and instance2's
state store is /tmp/kafka-streams2.
Now I am doing this experiment to study checkpointing in Kafka Streams (0.10.0.0).
1. start instance1,
A Kafka Streams topology can define one or many SourceNodes.
The pictures in the official documentation <
http://kafka.apache.org/0102/documentation/streams#streams_architecture_tasks
>
only draw one source node in some places:
1. Stream Partitions and Tasks
2. Threading Model
3. Local StateStore
And the topolo
e is only one source node.
>
> There is no 1-to-1 relationship between input topics and source node,
> and thus, the picture is not wrong...
>
> Do you find that the picture is confusing/misleading?
>
>
> -Matthias
>
> On 6/8/17 5:58 PM, john cheng wrote:
> > Kaf
cksDB contains all 6 messages and thus there is nothing
> to restore.
>
> Does this make sense?
>
> -Matthias
>
> On 6/7/17 7:02 PM, john cheng wrote:
> > I have two app instance, input topic has 2 partitions, each instance
> config
> > one thread and one replica
Hi there, I'm testing Kafka Streams's print() method, here is the code:
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dsl-wc1");
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
KStream source = builder.stream("dsl-input1");
KTable countTable = source
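The quoted code is cut off by the archive. As a reference point only, here is a minimal, self-contained word count with print() against the newer Streams DSL; class names such as StreamsBuilder and Printed come from releases after the 0.10.x versions discussed in this thread, and the broker address is a placeholder.

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Printed;

public class PrintWordCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dsl-wc1");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0); // forward every update

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("dsl-input1");
        KTable<String, Long> counts = source
                .flatMapValues(v -> Arrays.asList(v.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word)
                .count();
        // print() is a debugging aid: it writes each count update to stdout.
        counts.toStream().print(Printed.toSysOut());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```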
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L1345
This Line:
log.info("{} Adding assigned standby tasks {}", logPrefix, partitionAssignor
.activeTasks());
The parameter is active task, but the info content is stand
OK, I'll open a PR to fix this.
2017-06-17 0:59 GMT+08:00 Matthias J. Sax :
> Thanks for reporting this!
>
> Would you like to open a MINOR PR to fix it? Don't think we need a Jira
> for this.
>
> -Matthias
>
> On 6/16/17 9:26 AM, john cheng wrote:
> > ht
logs?
Thanks
--John
Hi Everyone,
What causes a broker to leave a cluster even when the broker remains
running? Is it loss of sync with Zookeeper?
--John
.
Thanks
--John
-XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80
On Sun, Jul 9, 2017 at 8:13 AM, John Yost wrote:
> Hey Everyone,
>
> When we originally upgraded from 0.9.0.1 to 0.10.0 with the exact same
> settings we immediately observed OOM errors. I upped the heap size from 6
> GB to 10 GB and that
) of these observations?
--John
Hey Ismael,
Thanks a bunch for responding so quickly--really appreciate the follow-up!
I will have to get those details tomorrow when I return to the office.
Thanks again, will forward details ASAP tomorrow.
--John
On Sun, Jul 9, 2017 at 10:41 AM, Ismael Juma wrote:
> Hi John,
>
>
rence result in consumed and/or produced
messages piling up in a buffer, and, consequently, increase the broker
memory heap size requirement due to the format mismatch? That would be
awesome because that means we just need to update the
log.message.format.version to 0.9.0 until we upgrade the clients.
-
--much, much appreciated!
--John
On Mon, Jul 10, 2017 at 11:52 AM, Matt Andruff
wrote:
> Total shot in the dark but could it be related, this talks about CPU but
> could have an impact on memory as well:
> http://kafka.apache.org/0102/documentation.html#upgrade_10_
> performance_impact
&
appreciated!
--John
On Mon, Jul 10, 2017 at 12:26 PM, Ismael Juma wrote:
> Hi John,
>
> Yes, down conversion when consuming messages does increase JVM heap usage
> as we have to load the data into the JVM heap to convert it. If down
> conversion is not needed, we are able to send t
?
--John
On Tue, Jul 11, 2017 at 6:15 AM, Pierre Coquentin <
pierre.coquen...@gmail.com> wrote:
> Hi,
>
> We are using kafka 0.10.2 with 2 brokers and 2 application nodes composed
> of 6 consumers each (all in one group). And recently we experienced
> disconnection of both
Hi Everyone,
I personally found that the 0.8.x clients do not work with 0.10.0. We
upgraded our clients (KafkaSpout and custom consumers) to 0.9.0.1 and then
Kafka produce/consume worked fine.
--John
On Tue, Jul 18, 2017 at 6:36 AM, Sachin Mittal wrote:
> OK.
>
> Just a doubt I hav
I saw this recently as well. This could result from either really long GC
pauses or slow Zookeeper responses. The former can result from too big of a
memory heap or sub-optimal GC algorithm/GC configuration.
--John
On Tue, Jul 18, 2017 at 3:18 AM, Mackey star wrote:
> [2017-07-15 08:45:19,
During my Kafka installation, I had some questions about some of the
parameter configurations.
I see that log.flush.interval.messages and log.flush.interval.ms are
commented out in the default kafka server properties file. I read two
conflicting statements about these parameters. In one place, I r
*John Medeiros* - Research and Development
*Portoseg S/A - Crédito, Financiamento e Investimento*+55 11 2393 5533
<+55%2011%202393-5533>
Alameda Barão de Piracicaba, 618, 4º Andar, Torre B, Lado B
CEP: 01216- 012 - São Paulo - SP - Brazil
www.portoseguro.com.br
ng the stability of our cluster as well as the
ability to replay topics that will have both 0.9.0.1 and 0.10.0.1-formatted
messages.
Thanks
--John
Ah, cool, thanks Ismael!
--John
On Tue, Sep 19, 2017 at 10:20 AM, Ismael Juma wrote:
> 0.10.0.1 consumers understand the older formats. So, the conversion only
> happens when the message format is newer than what the consumer
> understands. For the producer side, the conversi
The only thing I can think of is message format...do the client and broker
versions match? If the clients are a lower version than brokers (i.e.,
0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
format conversions both for incoming messages as well as for replication.
--John
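For the archive, the remedy that surfaced elsewhere in this thread list: keep the broker's on-disk message format at the clients' version in server.properties, e.g.

log.message.format.version=0.9.0.1

(value illustrative), so the broker does not down-convert on every fetch until the clients themselves are upgraded.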
Oh wow, okay, not sure what it is then.
On Thu, Sep 21, 2017 at 11:57 AM, Elliot Crosby-McCullough <
elliot.crosby-mccullo...@freeagent.com> wrote:
> I cleared out the DB directories so the cluster is empty and no messages
> are being sent or received.
>
> On 21 September 2
s from
one topic to another if errors occur.
Thanks in advance for any pointers you guys can give me,
-John
Hello All,
I have been trying to create an application on top of Kafka Streams. I am a
newbie to Kafka & Kafka Streams, so please excuse me if my understanding is
wrong.
I got the application running fine on a single EC2 instance in
AWS. Now I am looking at scaling and ran into some issue
e when I checked earlier today. Anyway, once again,
thanks a lot for the response. I will raise a JIRA as you suggested, and I
hope this isn't the case with local state stores.
Thanks,
Tony
On Wed, Oct 18, 2017 at 9:21 PM, Tony John wrote:
> Hello All,
>
> I have been trying to create
Hi All,
I am facing a CommitFailedException in my streams application. As per the log,
I tried changing max.poll.interval.ms and max.poll.records, but neither
helped. PFA the full stack trace of the exception; below is the
streams configuration used. What else could be wrong?
val props = Pr
I would say the key thing is that each Kafka server writes to a separate
set of 1..n disks, and plan accordingly.
On Mon, Nov 6, 2017 at 6:23 AM, chidigam . wrote:
> Hi All,
> Let's say I have a big machine which has 120 GB RAM, with lots of cores,
> and very high disk
LUE)
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG),
3)
streams = KafkaStreams(builder, StreamsConfig(props))
streams.start()
Thanks,
Tony
On Thu, Nov 2, 2017 at 4:39 PM, Tony John wrote:
> Hi All,
>
> I am facing CommitFailedException in my streams application. A
In addition, in my experience, a memory heap > 8 GB leads to long GC pauses,
which cause the ISR statuses to constantly change, leading to an unstable
cluster.
--John
On Wed, Nov 8, 2017 at 4:30 AM, chidigam . wrote:
> Meaning, already read the doc, but couldn't relate, having lar
I did and it did not help. The heap size was the issue.
--John
On Wed, Nov 8, 2017 at 9:30 AM, Ted Yu wrote:
> Did you use G1GC ?
> Thanks
> Original message ----From: John Yost
> Date: 11/8/17 5:48 AM (GMT-08:00) To: users@kafka.apache.org Cc:
> ja...@scholz
of memory issues seems not
> related to the CommitFailed error. Do you have any stateful operations in
> your app that use an iterator? Did you close the iterator after complete
> using it?
>
>
> Guozhang
>
>
> On Tue, Nov 7, 2017 at 12:42 AM, Tony John
> wrote
I've seen this before, and it was caused by long GC pauses, due in large part to
a memory heap > 8 GB.
--John
On Thu, Nov 9, 2017 at 8:17 AM, Json Tu wrote:
> Hi,
> we have a kafka cluster which is made of 6 brokers, with 8 cpu and
> 16G memory on each broker’s machine, and w
ing format. Once I set the message format to 0.9.0.1, the memory
requirements went WAY down; I reset the memory heap to 6 GB, and our Kafka
cluster has been awesome since.
--John
On Thu, Nov 9, 2017 at 9:09 AM, Viktor Somogyi
wrote:
> Hi Json.
>
> John might have a point. It is not reason
Yep, the team here, including Ismael, pointed me in the right direction,
which was much appreciated. :)
On Thu, Nov 9, 2017 at 10:02 AM, Viktor Somogyi
wrote:
> I'm happy that it's solved :)
>
> On Thu, Nov 9, 2017 at 3:32 PM, John Yost wrote:
>
> > Excellent point
Great point by Girish--it's the delays in syncing with Zookeeper that are
particularly problematic. Moreover, Zookeeper sync delays and session
timeouts impact other systems as well, such as Storm.
--John
On Thu, Nov 30, 2017 at 10:14 AM, Girish Aher wrote:
> We did not face any problems w
Hello all:
I encountered an issue and have filed a JIRA:
https://issues.apache.org/jira/browse/KAFKA-6510
Has anybody encountered this before?
Thanks.
Hi All,
I have been running a streams application for some time. The application
runs fine for a while, but after a day or two I see the below log getting
printed continuously to the console.
WARN 2018-02-05 02:50:04.060 [kafka-producer-network-thread | producer-1]
org.apache.kafka.clients.Net
80
records. Adjusting up recordsProcessedBeforeCommit=76501
Thanks,
Tony
On Tue, Feb 6, 2018 at 3:21 AM, Guozhang Wang wrote:
> Hello Tony,
>
>
> Could you share your Streams config values so that people can help further
> investigating your issue?
>
>
> Guozhang
>
>
> On Mon,