I'm using Samza 0.9.1.
Lukas
On 10/29/15, Yi Pan wrote:
> Hi, Lukas,
>
> Which version of checkpoint-tool are you using?
>
> -Yi
>
> On Thu, Oct 29, 2015 at 5:39 PM, Lukas Steiblys
> wrote:
>
>> Hello,
>>
>> I’m trying to write the checkpoint
Hello,
I’m trying to write the checkpoints for a Samza task by supplying these arguments
to the checkpoint tool:
bin/checkpoint-tool.sh
--new-offsets=file:///checkpoints/client-metrics.properties
--config-path=file:///checkpoints/task.properties
However, it doesn’t actually write the checkpoints
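A note that may help here: running the tool once with only --config-path should just read and report the current checkpoints, which is a good way to confirm the key format your version expects before handing it a --new-offsets file. A rough sketch of that round trip, reusing the paths above:

  # Inspect the current checkpoints first (no --new-offsets given):
  bin/checkpoint-tool.sh --config-path=file:///checkpoints/task.properties

  # Then write the edited offsets back:
  bin/checkpoint-tool.sh \
    --config-path=file:///checkpoints/task.properties \
    --new-offsets=file:///checkpoints/client-metrics.properties

The offsets file is a Java properties file; the keys should look roughly like
tasknames.<task>.systems.<system>.streams.<stream>.partitions.<n>=<offset>, but
mirroring whatever the inspection step reports is the safer bet on 0.9.1.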
> Wouldn't we have to come up with a new setting Yarn.ProcessCount
> ?
>
> On Mon, Oct 19, 2015 at 3:49 PM, Lukas Steiblys
> wrote:
>
>> I have been thinking lately about the most non-invasive way to add
>> multithreading capabilities to ThreadJobFactory, as that is the main
>
I have been thinking lately about the most non-invasive way to add
multithreading capabilities to ThreadJobFactory, as that is the main way we
run our jobs in production. Looking at the master branch code in Git, I have
found the following:
a. The best way would be to simply spin up a new
Hi Yan,
If I understand correctly, a good way to increase parallelism in
ThreadJobFactory in 0.9.1 is to simply pass in N as the container count into
JobCoordinator(config, N) and then spin up a new ThreadJob for each
container. Does that sound right?
I am not sure how this is affected by 0.
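For what it's worth, the generic shape of the idea fits in a few lines without touching Samza internals: each container becomes a Runnable and gets its own thread. The part left out below (turning each container from the JobCoordinator into such a Runnable) is exactly what ThreadJobFactory would have to provide, and the 0.9.1 signatures for that are not verified here.

  // Generic sketch of "one thread per container"; how the per-container
  // Runnables get built is deliberately left to the caller.
  import java.util.ArrayList;
  import java.util.List;

  public class MultiContainerRunner {
      public static void runAll(List<Runnable> containerRunnables) throws InterruptedException {
          List<Thread> threads = new ArrayList<>();
          for (Runnable container : containerRunnables) {
              Thread t = new Thread(container, "samza-container-" + threads.size());
              t.start();
              threads.add(t);
          }
          // Block until every container thread exits (or dies).
          for (Thread t : threads) {
              t.join();
          }
      }
  }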
> > I support what Lukas is saying. Samza packaging requirements are not
> friendly,
> > I use the ThreadJobFactory for the same reason.
> >
> > Bruno
> >
> > On Tue, Sep 15, 2015 at 5:39 PM, Lukas Steiblys
> > wrote:
> >
> > > Hi Yan,
> > >
wrote:
>
> > Hi,
> >
> > I support what Lukas is saying. Samza packaging requirements are not
> friendly,
> > I use the ThreadJobFactory for the same reason.
> >
> > Bruno
> >
> > On Tue, Sep 15, 2015 at 5:39 PM, Lukas Steiblys
> > wrote:
Hi Yan,
We use Samza in a production environment with ProcessJobFactory in Docker
containers because it greatly simplifies our deployment process and makes
much better use of resources.
Is there any plan to make the ThreadJobFactory or ProcessJobFactory
multithreaded? I will look into doing
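For anyone comparing the two local factories, the switch is a single config line; the class names below are the real ones, everything else about the job config is omitted:

  # Run the container in-process, on a thread:
  job.factory.class=org.apache.samza.job.local.ThreadJobFactory
  # ...or fork a separate JVM for it:
  # job.factory.class=org.apache.samza.job.local.ProcessJobFactory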
You need to fix your line endings from Windows to Unix format.
Lukas
-Original Message-
From: Raja.Aravapalli
Sent: Wednesday, September 2, 2015 6:48 AM
To: dev@samza.apache.org
Subject: run-job.sh
Hi Team,
When I submit the job using "samza-shell/src/main/bash/run-job.sh"
I am getting
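For the line-endings fix Lukas suggests, either of these converts the script in place (dos2unix may need installing):

  dos2unix samza-shell/src/main/bash/run-job.sh
  # or, without dos2unix:
  sed -i 's/\r$//' samza-shell/src/main/bash/run-job.sh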
s,
> Michael
>
> On Sat, May 30, 2015 at 12:22 AM, Lukas Steiblys
> wrote:
>
> > Yes, I think switching to ThreadJobFactory is a good solution. I think
> the
> > reasons why I switched to ProcessJobFactory earlier no longer hold true.
> >
> > Thanks.
> >
On Fri, May 29, 2015 at 12:59 PM, Lukas Steiblys
wrote:
Yes, I'm talking about the child process crashing. I'd like the parent to
die as well if the child crashes so Docker can understand that the process
failed and restart the container.
Lukas
-Original Message- From: Yi Pan
Sent:
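Since the goal is only for Docker to see the failure, one workable pattern once the job runs as a single process (e.g. via ThreadJobFactory, as discussed elsewhere in this thread) is to lean on Docker's restart policy. A sketch with a made-up image name and config path; the run-job.sh flags are the standard ones:

  docker run --restart=on-failure my-samza-job-image \
    bin/run-job.sh \
      --config-factory=org.apache.samza.config.factories.PropertiesConfigFactory \
      --config-path=file:///opt/samza/config/my-job.properties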
n the JobCoordinator. SAMZA-680 would be a good place to start this kind of
discussion.
Thanks!
-Yi
On Fri, May 29, 2015 at 8:39 AM, Lukas Steiblys
wrote:
Hi Yan,
The memory usage is not very high, but I'm trying to cut the usage any way
I can.
The bigger problem is when the job crashes
>
> The parent process is used to manage the lifecycle of the actual process. I
> am curious how much memory the parent process takes?
>
> Thanks,
>
> Fang, Yan
> yanfang...@gmail.com
>
> On Thu, May 28, 2015 at 2:30 PM, Lukas Steiblys >
> wrote:
>
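To put a number on the parent's footprint, resident set size per process is usually enough; the grep pattern below is only illustrative:

  # RSS (KB) of the Samza-related JVMs, parent and child:
  ps -eo pid,ppid,rss,cmd | grep -i samza | grep -v grep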
Hello,
I’m running Samza tasks using ProcessJobFactory, and after I start the job, the
initial process spawns a child process where the task code actually runs. The
problem is that the parent process stays active even after the job is started,
and that messes with the way I deploy Samza
or a single job instance?
On Thu, May 21, 2015 at 7:46 PM, Lukas Steiblys
wrote:
500 is a bit extreme unless you're planning on running the job on some 200
machines and trying to exploit their full power. I personally run 4 in
production for our system processing 100 messages/s and there's
500 is a bit extreme unless you're planning on running the job on some 200
machines and trying to exploit their full power. I personally run 4 in
production for our system processing 100 messages/s and there's plenty of
room to grow.
Lukas
On Thursday, May 21, 2015, Michael Ravits wrote:
> Hi,
>
>
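The knob being discussed is simply the container count in the job config; on the 0.9.x line the property was, if I remember right, yarn.container.count, e.g.:

  # Number of YARN containers to run for this job (0.9.x-era property name):
  yarn.container.count=4

With the local ThreadJobFactory/ProcessJobFactory there is effectively a single container today, which is what the multithreading discussion above is about.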
I tried running everything from the /vagrant directory directly and it fails as
well so this might actually be a synced folder issue.
Lukas
From: Lukas Steiblys
Sent: Wednesday, February 18, 2015 1:46 AM
To: dev@samza.apache.org
Subject: Re: RocksDBException: IO error: directory: Invalid argument
> Sounds like you're having some sort of permission issue or symbolic link
> issue. Where is the sym link pointing from/to? I just want to rule out the
> case that RocksDB JNI or Samza aren't working with state stores that have a
> symlinked directory.
>
> Cheers,
> Chris
I made a copy of the synced folder instead of having a symbolic link and
that also solved the problem, but it's not an ideal solution.
Lukas
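In other words, something along these lines, with placeholder paths: a real copy outside the shared folder rather than a symlink into it.

  rm ~/samza-jobs                  # previously: ln -s /vagrant ~/samza-jobs
  mkdir ~/samza-jobs
  rsync -a /vagrant/ ~/samza-jobs/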
-Original Message-
From: Lukas Steiblys
Sent: Tuesday, February 17, 2015 3:25 PM
To: dev@samza.apache.org
Subject: Re: RocksDBException: IO error: directory: Invalid argument
, Lukas Steiblys
wrote:
1. I'm running it as another user, but in the user's home directory so it
has no problem writing or reading files.
2. See below.
3. I'm running Windows on my machine so I don't think I'll be able to run it
outside the VM.
Can you try to run it inside
I could try to build a test job if you can’t reproduce it locally.
Lukas
From: Chris Riccomini
Sent: Tuesday, February 17, 2015 2:19 PM
To: Lukas Steiblys
Cc: dev@samza.apache.org ; Chris Riccomini
Subject: Re: RocksDBException: IO error: directory: Invalid argument
Hey Lukas,
Let me try
e the job
starts? This will get fully restored when the changelog restoration happens.
Cheers,
Chris
On Tue, Feb 17, 2015 at 1:38 PM, Lukas Steiblys
wrote:
Actually there's a symlink in the running user's home directory to
/vagrant where the jobs are executed, but even then, it doesn't
Actually there's a symlink in the running user's home directory to /vagrant
where the jobs are executed, but even then, it doesn't have any problems
writing or reading the files.
Lukas
-Original Message-----
From: Lukas Steiblys
Sent: Tuesday, February 17, 2015
permissions for /vagrant and all of its
subdirectories, and make sure that they match up with what you expect (+rw
for the Samza job's user)?
3. If you try running the job outside of the VM, does it work?
Cheers,
Chris
On Tue, Feb 17, 2015 at 12:57 PM, Lukas Steiblys
wrote:
Yeah, I made sure the s
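For the permission check Chris asks about, something like this covers it; the store path at the end is a placeholder:

  # Owner and permissions for everything under the shared folder:
  ls -laR /vagrant
  # Walk every component of the store's path, following symlinks:
  namei -l /home/samza/jobs/state/engaged-store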
Cheers,
Chris
On Tue, Feb 17, 2015 at 12:13 PM, Lukas Steiblys
wrote:
It starts out with a fresh FS. I deleted all the state, but the job still
fails on the first get.
Lukas
-Original Message- From: Chris Riccomini
Sent: Tuesday, February 17, 2015 12:12 PM
To: Chris Riccomini
Cc: dev@samza.apache.org
1. Stop your job.
2. Clear the state from the FS.
3. Start your job.
Does it work?
Cheers,
Chris
On Tue, Feb 17, 2015 at 12:07 PM, Chris Riccomini
wrote:
Hey Lukas,
Could you try clearing out the state, and starting the job?
Cheers,
Chris
On Tue, Feb 17, 2015 at 11:33 AM, Lukas Steiblys
wrote:
Could you try clearing out the state, and starting the job?
Cheers,
Chris
On Tue, Feb 17, 2015 at 11:33 AM, Lukas Steiblys
wrote:
This happens every time even if I spin up a new VM. Happens after a
restart as well.
Lukas
-Original Message- From: Chris Riccomini
Sent: Tuesday, February 17, 2015 11:01 AM
To: dev@samza.apache.org
y, it seems to me that the
directory would continue to exist after a job is restarted. If you delete
your state directory, and restart your job, does the problem temporarily go
away until a subsequent restart happens?
Cheers,
Chris
On Tue, Feb 17, 2015 at 10:55 AM, Lukas Steiblys
wrote:
Hi Chris,
your logs:
info("Got storage engine base directory: %s" format storeBaseDir)
It sounds like something is getting messed up with the directory where the
RocksDB store is trying to keep its data.
Cheers,
Chris
On Mon, Feb 16, 2015 at 3:50 PM, Lukas Steiblys
wrote:
Hello,
I was setting up the
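In practice that means grepping the container logs for the line Chris quotes, to see which base directory the store actually resolved to; the logs/ path is a placeholder:

  grep -r "Got storage engine base directory" logs/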
Hello,
I was setting up the key-value storage engine in Samza and ran into an
exception when querying the data.
I added these properties to the config:
stores.engaged-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
stores.engaged-store.changelog=kafka.eng
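For completeness, a store section of that shape also needs key/msg serdes registered, and the store is then fetched by name in the task's init(). A minimal sketch; the changelog stream and serde names below are illustrative, not copied from the real config:

  stores.engaged-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
  stores.engaged-store.changelog=kafka.engaged-store-changelog
  stores.engaged-store.key.serde=string
  stores.engaged-store.msg.serde=string
  serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory

And on the task side (0.9-era API; the class name is made up and error handling is omitted):

  import org.apache.samza.config.Config;
  import org.apache.samza.storage.kv.KeyValueStore;
  import org.apache.samza.system.IncomingMessageEnvelope;
  import org.apache.samza.task.InitableTask;
  import org.apache.samza.task.MessageCollector;
  import org.apache.samza.task.StreamTask;
  import org.apache.samza.task.TaskContext;
  import org.apache.samza.task.TaskCoordinator;

  public class EngagedMetricsTask implements StreamTask, InitableTask {
      private KeyValueStore<String, String> store;

      @SuppressWarnings("unchecked")
      @Override
      public void init(Config config, TaskContext context) {
          // The name must match the stores.* prefix in the config above.
          store = (KeyValueStore<String, String>) context.getStore("engaged-store");
      }

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
          String value = store.get((String) envelope.getKey());
          // ... use the value ...
      }
  }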