Re: STOMP binding for Kafka

2013-01-14 Thread Mridul Jain
yes; unless something is available already.

Thanks
Mridul

On Mon, Jan 14, 2013 at 11:09 AM, Jun Rao  wrote:

> Are you thinking of writing a JMS wrapper over Kafka client API?
>
> Thanks,
>
> Jun
>
> On Fri, Jan 11, 2013 at 9:43 PM, Mridul Jain  wrote:
>
> > I have to replace an existing messaging system with Kafka, and the
> > applications running on top are/would be either JMS or STOMP compliant,
> > in different languages. I am looking for the simple capabilities that
> > the Kafka high-level client APIs give, not anything fancy/transactional
> > like ActiveMQ, as our main requirements are scale and performance.
> > I have to start working on the exact list of features though... just
> > have a high-level requirement right now and am evaluating options. I do
> > use Kafka for my platform with Storm and the high-level client APIs;
> > but other apps which want to get onto this for subscription use
> > ActiveMQ via JMS APIs today, or might even be STOMP compliant.
> >
> > Thanks
> >
> > On Sat, Jan 12, 2013 at 10:55 AM, Jun Rao  wrote:
> >
> > > Right now, there is no such plan. What features are you looking for by
> > > using STOMP?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Fri, Jan 11, 2013 at 9:15 PM, Mridul Jain 
> > > wrote:
> > >
> > > > Is there any plan or interest in the following... if yes, I might
> > > > solicit some review, as I have a need for a STOMP binding and might
> > > > have to develop it anyway.
> > > >
> > > > Thanks
> > > > Mridul
> > > >
> > > > On Sat, Jan 12, 2013 at 10:38 AM, Jun Rao  wrote:
> > > >
> > > > > Kafka currently doesn't support those.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Fri, Jan 11, 2013 at 8:37 PM, Mridul Jain <jain.mri...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > hi,
> > > > > > Is there a STOMP/JMS binding for Kafka? I was following the
> thread
> > > > below:
> > > > > >
> > > > > > http://mail-archives.apache.org/mod_mbox/incubator-kafka-dev/201109.mbox/%3ccahpe1evmz5ghbsmgbqyrj+q3s17te_yfjuckv+e8jshykp+...@mail.gmail.com%3E
> > > > > >
> > > > > > But it is inconclusive.
> > > > > >
> > > > > > - Mridul


Re: STOMP binding for Kafka

2013-01-14 Thread Jun Rao
I am not aware of anything like that available right now. Let us know if
you hit any issue.

Thanks,

Jun

On Mon, Jan 14, 2013 at 12:47 AM, Mridul Jain  wrote:

> yes; unless something is available already.
>
> Thanks
> Mridul
>
> On Mon, Jan 14, 2013 at 11:09 AM, Jun Rao  wrote:
>
> > Are you thinking of writing a JMS wrapper over Kafka client API?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Jan 11, 2013 at 9:43 PM, Mridul Jain 
> > wrote:
> >
> > > I have to replace an existing messaging system with Kafka, and the
> > > applications running on top are/would be either JMS or STOMP compliant,
> > > in different languages. I am looking for the simple capabilities that
> > > the Kafka high-level client APIs give, not anything fancy/transactional
> > > like ActiveMQ, as our main requirements are scale and performance.
> > > I have to start working on the exact list of features though... just
> > > have a high-level requirement right now and am evaluating options. I do
> > > use Kafka for my platform with Storm and the high-level client APIs;
> > > but other apps which want to get onto this for subscription use
> > > ActiveMQ via JMS APIs today, or might even be STOMP compliant.
> > >
> > > Thanks
> > >
> > > On Sat, Jan 12, 2013 at 10:55 AM, Jun Rao  wrote:
> > >
> > > > Right now, there is no such plan. What features are you looking for
> > > > by using STOMP?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Fri, Jan 11, 2013 at 9:15 PM, Mridul Jain 
> > > > wrote:
> > > >
> > > > > Is there any plan or interest in the following... if yes, I might
> > > > > solicit some review, as I have a need for a STOMP binding and might
> > > > > have to develop it anyway.
> > > > >
> > > > > Thanks
> > > > > Mridul
> > > > >
> > > > > On Sat, Jan 12, 2013 at 10:38 AM, Jun Rao  wrote:
> > > > >
> > > > > > Kafka currently doesn't support those.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Fri, Jan 11, 2013 at 8:37 PM, Mridul Jain <jain.mri...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > hi,
> > > > > > > Is there a STOMP/JMS binding for Kafka? I was following the
> > > > > > > thread below:
> > > > > > >
> > > > > > > http://mail-archives.apache.org/mod_mbox/incubator-kafka-dev/201109.mbox/%3ccahpe1evmz5ghbsmgbqyrj+q3s17te_yfjuckv+e8jshykp+...@mail.gmail.com%3E
> > > > > > >
> > > > > > > But it is inconclusive.
> > > > > > >
> > > > > > > - Mridul


Re: partitioning

2013-01-14 Thread Stan Rosenberg
On Fri, Jan 11, 2013 at 12:37 AM, Jun Rao  wrote:

> Our current partitioning strategy is to mod the key by the # of partitions,
> not the # of brokers. For better balancing of partitions over brokers, one
> simple strategy is to over-partition, i.e., have a few times more partitions
> than brokers. That way, if one adds more brokers over time, you can just
> move some existing partitions to the new broker.
>
> Consistent hashing requires changing the # of partitions dynamically.
> However, some applications may prefer not to change partitions.
>

> What's your use case for consistent hashing?
>

My use case is essentially the same as above, i.e., dynamic load balancing.
I now understand why the current partitioning strategy is used as opposed
to consistent hashing; partition "stickiness" is definitely desirable for
the sake of moving computation to data.
However, the dynamic rebalancing described in "Kafka Replication Design",
sect. 1.2 looks very similar to what's typically achieved by using
consistent hashing.
Is this rebalancing implemented in 0.8, or am I reading now-obsolete
documentation? :) (If yes, could you please point me to the code?)

Thanks,

stan


Re: partitioning

2013-01-14 Thread Maxime Brugidou
I'm not sure what design doc you are looking at (v1 probably? v3 is here:
https://cwiki.apache.org/KAFKA/kafka-detailed-replication-design-v3.html )
but if I understand correctly, consistent hashing for partitioning is more
about remapping as few keys as possible when adding/deleting partitions,
which you can already implement with a custom partitioner by doing
partition_id = abs(num_partitions * hash(key)/hash_space). But the added
value is mitigated by the fact that if you add/delete partitions you
already destroy your partitioning and make it kind of useless?
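
For concreteness, here is a minimal sketch of that range-division scheme as
a custom partitioner, written against the 0.7-era kafka.producer.Partitioner
interface (the class name and the String key type are illustrative, not from
the Kafka codebase):

import kafka.producer.Partitioner;

// Hypothetical range-division partitioner: scales the key's hash into
// [0, numPartitions) instead of taking hash % numPartitions, so that
// changing the partition count remaps fewer keys than the modulo approach.
public class RangePartitioner implements Partitioner<String> {
    public int partition(String key, int numPartitions) {
        long hash = key.hashCode() & 0xffffffffL;           // hash mapped into [0, 2^32)
        return (int) (hash * numPartitions / 0x100000000L); // num_partitions * hash / hash_space
    }
}

As noted above, this only bounds how many keys move when the partition count
changes; it does not eliminate the reshuffling.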

However, if you are talking about partition assignment to brokers, that's
another matter, and I guess the current state of things is just a simple
round robin to assign partitions on topic creation (to be confirmed?). It
could be interesting to have partitions assigned to brokers with some
consistent hashing, so that adding a broker requires moving as few
partitions as possible. That process is done manually as of now using a
ReassignPartition command, and it could be automated with a tool (provided
that you over-partition as Jun recommends, so that you have some
granularity and the load gets spread evenly over brokers).


On Mon, Jan 14, 2013 at 5:04 PM, Stan Rosenberg wrote:

> On Fri, Jan 11, 2013 at 12:37 AM, Jun Rao  wrote:
>
> > Our current partitioning strategy is to mod the key by the # of
> > partitions, not the # of brokers. For better balancing of partitions over
> > brokers, one simple strategy is to over-partition, i.e., have a few times
> > more partitions than brokers. That way, if one adds more brokers over
> > time, you can just move some existing partitions to the new broker.
> >
> > Consistent hashing requires changing the # of partitions dynamically.
> > However, some applications may prefer not to change partitions.
> >
>
> > What's your use case for consistent hashing?
> >
>
> My use case is essentially the same as above, i.e., dynamic load balancing.
> I now understand why the current partitioning strategy is used as opposed
> to consistent hashing; partition "stickiness" is definitely desirable for
> the sake of moving computation to data.
> However, the dynamic rebalancing described in "Kafka Replication Design",
> sect. 1.2 looks very similar to what's typically achieved by using
> consistent hashing.
> Is this rebalancing implemented in 0.8, or am I reading now-obsolete
> documentation? :) (If yes, could you please point me to the code?)
>
> Thanks,
>
> stan
>


kafka-0.7.2 sbt and remote installs help

2013-01-14 Thread Joseph Crotty
Trying to install kafka on remote machines with no internet access. Not
sure how to approach the sbt update && sbt package pieces. I tried using
sbt (lib/sbt-launch.jar) on a local machine with internet access as follows:

$ sudo su - kafka
$ cd /usr/local/kafka-0.7.2-incubating-src
$ cat ./sbt
java -Xmx1024M -XX:MaxPermSize=512m\
-Dsbt.ivy.home=$HOME/.ivy2/ -Divy.home=$HOME/.ivy2/\
-jar `dirname $0`/lib/sbt-launch.jar "$@"
$ ./sbt publish-local

Looking over /home/kafka/.ivy2/ I see everything I think kafka needs except
for scala itself.

I then copied /home/kafka/.ivy2/ to an internet-less target machine which
is set up identically to my local box. I then run sbt to no avail - gist is
here . At the bottom of the gist I have
included the contents of /home/kafka/.ivy2/.

Ideas?

Joe


Question about kafka consumer stream/partition

2013-01-14 Thread Bae, Jae Hyeon
Hi

I know if the number of kafka consumers is greater than the number of
partitions in the kafka broker cluster, several kafka consumers will
be idle.

My question is, does the number of kafka consumers mean the number of
kafka streams?

For example, I have one broker with one partition. What if I create
the consumer with 4 streams? Will 3 streams be idle? Or will all 4 streams
get a split of the data from the one partition?

Thank you
Best, Jae


Re: hadoop-consumer code in contrib package

2013-01-14 Thread Felix GV
I think you may be misunderstanding the way Kafka works.

A kafka broker is never supposed to clear messages just because a consumer
read them.

The kafka broker will instead clear messages after their retention period
ends, though it will not delete the messages at the exact time when they
expire. Instead, a background process will periodically delete a batch of
expired messages. The retention policies guarantee a minimum retention
time, not an exact retention time.

It is the responsibility of each consumer to keep track of which messages
they have consumed already (by recording an offset for each consumed
partition). The high-level consumer stores these offsets in ZK. The simple
consumer has no built-in capability to store and manage offsets, so it is
the developer's responsibility to do so. In the case of the hadoop consumer
in the contrib package, these offsets are stored in offset files within
HDFS.
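
As a rough sketch of the high-level consumer side of this, offsets can be
committed manually once the consumed data has been safely persisted. The
property names follow the 0.7-era API, and the ZK address and group name
below are made up:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181"); // illustrative ZK address
        props.put("groupid", "hdfs-loader");       // illustrative group name
        props.put("autocommit.enable", "false");   // commit only after a successful write

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... consume messages and persist them ...
        connector.commitOffsets(); // records the consumed offsets in ZooKeeper
    }
}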

I wrote a blog post a while ago that explains how to use the offset files
generated by the contrib consumer to do incremental consumption (so that
you don't get duplicated messages by re-consuming everything in subsequent
runs).

http://felixgv.com/post/69/automating-incremental-imports-with-the-kafka-hadoop-consumer/

I'm not sure how up to date this is, regarding the current Kafka versions,
but it may still give you some useful pointers...

--
Felix


On Mon, Jan 14, 2013 at 1:34 PM, navneet sharma  wrote:

> Hi,
>
> I am trying to use the code supplied in the hadoop-consumer package. I am
> running into the following issues:
>
> 1) This code is using SimpleConsumer, which contacts the Kafka broker
> without Zookeeper. Because of this, messages are not getting cleared from
> the broker, and I am getting duplicate messages in each run.
>
> 2) The retention policy specified as log.retention.hours in
> server.properties is not working. Not sure if it's due to SimpleConsumer.
>
> Is this expected behaviour? Is there any code using the high-level
> consumer for the same work?
>
> Thanks,
> Navneet Sharma
>


Re: kafka-0.7.2 sbt and remote installs help

2013-01-14 Thread Jun Rao
You need to do "./sbt update " and "./sbt package" on your local machine
with internet access. Then, you can copy the whole dir to your remote
machine.

Thanks,

Jun

On Mon, Jan 14, 2013 at 10:23 AM, Joseph Crotty wrote:

> Trying to install kafka on remote machines with no internet access. Not
> sure how to approach the sbt update && sbt package pieces. I tried using
> sbt (lib/sbt-launch.jar) on a local machine with internet access as
> follows:
>
> $ sudo su - kafka
> $ cd /usr/local/kafka-0.7.2-incubating-src
> $ cat ./sbt
> java -Xmx1024M -XX:MaxPermSize=512m\
> -Dsbt.ivy.home=$HOME/.ivy2/ -Divy.home=$HOME/.ivy2/\
> -jar `dirname $0`/lib/sbt-launch.jar "$@"
> $ ./sbt publish-local
>
> Looking over /home/kafka/.ivy2/ I see everything I think kafka needs except
> for scala itself.
>
> I then copied /home/kafka/.ivy2/ to an internet-less target machine which
> is set up identically to my local box. I then run sbt to no avail - gist is
> here . At the bottom of the gist I have
> included the contents of /home/kafka/.ivy2/.
>
> Ideas?
>
> Joe
>


Re: Is this a good overview of kafka?

2013-01-14 Thread Felix GV
Hello,

Your (non-question) statements seem mostly right to me. There is a bit of
confusion regarding your statement about partitions, however.

Partitions are primarily used to represent the smallest unit of
parallelism. If you need to split consumption among a pool of processes,
you need to have enough partitions for each of those consuming processes,
otherwise some of them will receive nothing.

Another property of partitions is that ordering is maintained within a
partition. If your use case requires it, you can implement a custom
partitioner so that a particular field within your produced messages
determines what partition the message is sent to. For example if you
partitioned using a User ID field within the messages, you would be
guaranteed that all messages pertaining to a certain user would end up in
the same partition, and that they would be correctly ordered. You should be
aware, however, that this guarantee is only maintained as long as there are
no consumer re-balance (which happens when adding or removing a consumer or
a broker).
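
To make the User ID example concrete, such a partitioner might look like the
following hedged sketch against the 0.7-era kafka.producer.Partitioner
interface (the class name and key type are illustrative):

import kafka.producer.Partitioner;

// Keeps all of one user's messages in one partition, preserving their
// relative order - as long as the partition count stays stable.
public class UserIdPartitioner implements Partitioner<String> {
    public int partition(String userId, int numPartitions) {
        return (userId.hashCode() & 0x7fffffff) % numPartitions;
    }
}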

Concerning your questions:

A consumer registers for topics, not for partitions, and it always
registers under the name of a consumer group. If there is only one consumer
registered for a given topic and consumer group, then that consumer will
receive messages from every available partition within that topic. If there
are multiple consumers registered under the same consumer group for a given
topic, then they will share that topic's available partitions among
themselves, which ensures that each partition is consumed by only one
consumer.

The high-level consumer uses Zookeeper to coordinate with the other
consumers and make sure that the partitions are appropriately assigned.

--
Felix


On Mon, Jan 14, 2013 at 2:15 PM, S Ahmed  wrote:

> Just want to verify that I understand the various components correctly,
> please chime in where appropriate :)
>
> producer = puts messages on broker(s)
> consumer = pulls messages off a broker
>
> In terms of how messages are organized on a cluster of brokers, producers
> put messages by providing a "topic".
>
> At the broker side of things, messages are stored by topic but can also be
> logically separated by a "partition", so that all messages for a
> particular topic are directed to a particular broker.
>
> On the consumer side, when you pull messages off, I know you can dedicate
> a consumer (or group of consumers) to a particular partition somehow. But
> what if you wanted to just randomly pull messages off?  Say I have 3
> brokers, and 5 consumers.  How does the consumer know which broker to
> connect to, and co-ordinate with the other consumers?
>
> Is there a flow diagram for the above scenario? (or any other scenario so I
> can understand how the communication takes place).
>


Re: SyncProducer vs Producer

2013-01-14 Thread Jun Rao
Producer is the high level api whereas SyncProducer is the lower level api.
Producer takes one or more messages and converts them to a request which is
sent by SyncProducer. Producer is actually the client api that everyone
should be using.

Thanks,

Jun

On Mon, Jan 14, 2013 at 10:26 AM, navneet sharma <navneetsharma0...@gmail.com> wrote:

> Hi,
>
> I am not able to understand the difference between
> kafka.javaapi.producer.Producer and kafka.javaapi.producer.SyncProducer.
>
> It's become even more confusing since Producer expects generics stating
> the message type, whereas SyncProducer doesn't.
>
> Also, SyncProducer deals with ByteBufferMessageSet.
>
> Basically, I am trying to push data from a Kafka broker to Hadoop HDFS,
> and I am trying to understand the contrib/hadoop-consumer code.
>
> Any pointers?
>
> Thanks,
> Navneet Sharma
>
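
For reference, a minimal sketch of the high-level Producer that Jun
recommends above, against the 0.7-era javaapi (the ZK address, topic name,
and message are made up):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181"); // illustrative ZK address
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        // Producer converts this into a request that a SyncProducer
        // sends under the hood.
        producer.send(new ProducerData<String, String>("test-topic", "hello"));
        producer.close();
    }
}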


Re: Question about kafka consumer stream/partition

2013-01-14 Thread Neha Narkhede
> My question is, does the number of kafka consumers mean the number of
> kafka streams?
>

Yes. To know the total number of consumers/streams in a group, you need to
add up the number of streams on every consumer instance.


> For example, I have one broker with one partition. What if I create
> the consumer with 4 streams? Will 3 streams be idle? Or Will 4 streams
> get the split data from the one partition?
>

3 streams will be idle. A partition is the smallest granularity of
consumption. This is because if we allowed streams to share partitions,
there would be a lot of distributed locking involved, affecting throughput
adversely.

Thanks
Neha
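
To illustrate Jae's scenario concretely, here is a hedged sketch against the
0.7-era high-level consumer API (the ZK address, group, and topic names are
made up):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.Message;

public class FourStreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181"); // illustrative
        props.put("groupid", "test-group");        // illustrative

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("test-topic", 4); // ask for 4 streams on this topic
        Map<String, List<KafkaStream<Message>>> streams =
            connector.createMessageStreams(topicCountMap);
        // With a single partition, only one of the 4 streams will receive
        // data; the other 3 sit idle, as explained above.
        System.out.println(streams.get("test-topic").size()); // prints 4
    }
}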


Re: Is this a good overview of kafka?

2013-01-14 Thread Stan Rosenberg
Hi Felix,

Would you mind elaborating on what you said regarding the ordering
guarantees? Inlined below.

Thanks,

stan

On Mon, Jan 14, 2013 at 6:08 PM, Felix GV  wrote:

>
> For example if you partitioned using a User ID field within the messages,
> you would be
> guaranteed that all messages pertaining to a certain user would end up in
> the same partition, and that they would be correctly ordered. You should be
> aware, however, that this guarantee is only maintained as long as there is
> no consumer re-balance (which happens when adding or removing a consumer or
> a broker).
>

Why would consumer re-balance or broker failure alter the above partition
invariant?


Re: Is this a good overview of kafka?

2013-01-14 Thread Felix GV
Sure, I'll try to give a better explanation :)

Little disclaimer though: My knowledge is based on my reading of the Kafka
design paper  more than a year ago, so
right off the bat, it's possible that I may be forgetting or assuming
things which I shouldn't... Also, Kafka was pre-0.7 at the time, and we've
been running 0.7.0-ish in prod for a while now, so it's possible that some
of my understanding is outdated in the context of 0.7.2, and there are
definitely a fair bit of things that changed in 0.8, but I don't know what
changed well enough to make informed statements about 0.8. All that to say
that you should take your version of Kafka into account. And it certainly
doesn't hurt to read the design paper either ;)

So, my understanding is that when a Kafka broker comes online:

   - The broker contacts the ZK ensemble and registers itself. It also
   registers partitions for each of the topics that exist in ZK (according
   to the settings in its own broker config file).
   - Producers are watching the online partitions in ZK, and when the set
   changes, ZK fires off an event to them so that they can update their
   partition count. This partition count is used as a modulo on the hash
   returned by the producer's partitioning function. So even if you have a
   custom partitioning function that deterministically gives out the same
   hash for a given bucket of messages, if you apply a different modulo to
   that hash, then of course it's going to make the messages of that bucket
   go to a different partition. This is done so that all online partitions
   get to have some data.
   - Consumers are also watching the online partitions in ZK. When the set
   changes, ZK fires off an event to them, and they start re-balancing, so
   that the partitions are spread as fairly as possible between the
   consumers. In that process, partitions are assigned to consumers, and
   those partition assignments could (and may very well) be different than
   the ones that were in place before the re-balance.

When a Kafka broker goes offline, it also affects the online partition
count, so the producers will again send their messages to different
partitions (so that all messages have somewhere to go), and the consumers
will re-balance again (to prevent starving a consumer whose partitions
became unavailable).
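
A tiny, self-contained illustration of the modulo effect just described (the
key and the partition counts are made up):

public class ModuloEffect {
    public static void main(String[] args) {
        int hash = "user-42".hashCode() & 0x7fffffff; // non-negative hash of a bucket key
        System.out.println(hash % 8); // partition chosen while 8 partitions are online
        System.out.println(hash % 6); // usually different once a broker's partitions drop out
    }
}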

When a consumer goes online:

   - The consumer registers itself in ZK using its consumer group.
   - If there are other consumers watching that consumer group, then they
   will get notified and a re-balance of the whole group will be triggered,
   just like in the above case.

When a consumer goes offline, a re-balance is triggered as well for the
same reasons.

In the case of consumers going online or offline, this does not change the
ordering guarantees within the partitions per se. BUT, if your consumers
were keeping any sort of internal state in relation to the ordered data
they were consuming, then that state won't be relevant anymore, because
they will start consuming from different partitions after the rebalance.
Depending on the type of processing you're doing, that may or may not break
the work your consumer is doing.

Thus, the only event that has no chance of affecting the stickiness of a
(data bucket ==> consumer process) mapping is producers going online or
offline. Broker changes definitely alter which message buckets go into which
partitions. Consumer changes don't affect the content of partitions, but
they do change which consumer is consuming which partition.

If ordering guarantees are important to you, then I guess the best thing to
do might be to add watches on the same type of stuff that triggers the
changes described above, and to act accordingly when those changes happen
(by flushing the internal state, restarting the consumers, rolling back the
ZK offset to some checkpoint in the past, or whatever else is relevant in
your use case...)
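
As a minimal sketch of such a watch, using the plain ZooKeeper client: Kafka
brokers register under /brokers/ids, but the address, timeout, and reaction
below are illustrative, and a real implementation would re-register the
watch each time it fires:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class BrokerChangeWatch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
            public void process(WatchedEvent event) { /* session events */ }
        });
        List<String> brokers = zk.getChildren("/brokers/ids", new Watcher() {
            public void process(WatchedEvent event) {
                // The broker set changed: flush internal state, restart
                // consumers, or roll offsets back to a checkpoint.
                System.out.println("Broker change: " + event);
            }
        });
        System.out.println("Live brokers: " + brokers);
    }
}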

Hopefully that was clear (and accurate) enough...!

--
Felix


On Mon, Jan 14, 2013 at 9:38 PM, Stan Rosenberg wrote:

> Hi Felix,
>
> Would you mind elaborating on what you said regarding the ordering
> guarantees? Inlined below.
>
> Thanks,
>
> stan
>
> On Mon, Jan 14, 2013 at 6:08 PM, Felix GV  wrote:
>
> >
> > For example if you partitioned using a User ID field within the messages,
> > you would be guaranteed that all messages pertaining to a certain user
> > would end up in the same partition, and that they would be correctly
> > ordered. You should be aware, however, that this guarantee is only
> > maintained as long as there is no consumer re-balance (which happens when
> > adding or removing a consumer or a broker).
> >
>
> Why would consumer re-balance or broker failure alter the above partition
> invariant?
>


About kafka 0.8 producer auto detect broker

2013-01-14 Thread gj1989lh
Hi,
We know that in Kafka 0.7 we can specify zk.connect, and with ZooKeeper the
producer can dynamically detect brokers. But in Kafka 0.8 we can't specify
zk.connect for the producer. How does the producer in Kafka 0.8 auto-detect
brokers? I have done two experiments.
In the first one, I configure broker.list as broker 1 and broker 2, and set
num.partitions=4 for all brokers. Then I start broker 1 and broker 3. Without
creating topic1, I produce messages directly, and 4 partitions are generated:
two on broker 1 and two on broker 3. How can the producer detect broker 3?
In the second one, I configure broker.list as broker 1 and broker 2, and set
num.partitions=4 for all brokers. Then I start broker 3 only. Without creating
topic2, I produce messages directly. As a result, there is no partition created
for topic 2, and the producer reports a ConnectException; it says fetching topic
metadata for topics [Set(topic 2)] from broker 3 failed
(kafka.client.ClientUtils$).


Can I guess that the auto-detecting ability of the producer is weakened in
Kafka 0.8? Why did you get rid of ZooKeeper from the producer in Kafka 0.8?


Many thanks and best regards!




Re: SyncProducer vs Producer

2013-01-14 Thread navneet sharma
If that is the case:
"Producer is actually the client api that everyone
should be using."

Then why is contrib/hadoop-consumer using SyncProducer? Can I modify the
code to use Producer?
Will it have any impact on the system?

Thanks,
Navneet Sharma


On Tue, Jan 15, 2013 at 5:16 AM, Jun Rao  wrote:

> Producer is the high level api whereas SyncProducer is the lower level api.
> Producer takes one or more messages and converts them to a request which is
> sent by SyncProducer. Producer is actually the client api that everyone
> should be using.
>
> Thanks,
>
> Jun
>
> On Mon, Jan 14, 2013 at 10:26 AM, navneet sharma <navneetsharma0...@gmail.com> wrote:
>
> > Hi,
> >
> > I am not able to understand the difference between
> > kafka.javaapi.producer.Producer and kafka.javaapi.producer.SyncProducer.
> >
> > It's become even more confusing since Producer expects generics stating
> > the message type, whereas SyncProducer doesn't.
> >
> > Also, SyncProducer deals with ByteBufferMessageSet.
> >
> > Basically, I am trying to push data from a Kafka broker to Hadoop HDFS,
> > and I am trying to understand the contrib/hadoop-consumer code.
> >
> > Any pointers?
> >
> > Thanks,
> > Navneet Sharma
> >
>


Number of Partitions Per Broker

2013-01-14 Thread Andrew Psaltis
All,
I was re-reading this:
https://cwiki.apache.org/confluence/display/KAFKA/Operations and noticed that
the number of partitions is 1. Is this accurate? In our environment we are
currently running 20+ partitions per topic with two brokers; the gut feel was
that this would speed up our ability to read from many threads in a consumer
group. What I am lacking is a true understanding of the pros/cons of having
more partitions on a given broker - what are they? Are there guidelines to
follow in setting up partitions?

Thanks in advance,
Andrew