Hi, I need to create a custom EL function.
It accepts:
String StringWithDatetime
String Pattern (to parse Date)
It returns:
time in seconds.
Please tell me:
1. Where can I find an example?
2. Where do I have to put the implementation of this function?
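For what it's worth, such a function can be a plain public static method; Oozie EL functions are just static Java methods. This is a sketch under assumptions: the class and method names are mine, and I assume the input timestamp is UTC.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

// Hypothetical custom EL function: parses a datetime string with the
// given pattern and returns the time as epoch seconds.
public final class DateTimeEL {

    public static long toSeconds(String dateTimeStr, String pattern) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        // Assumption: timestamps are UTC; adjust if yours are local time.
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        try {
            return fmt.parse(dateTimeStr).getTime() / 1000L;
        } catch (ParseException e) {
            throw new RuntimeException("Cannot parse [" + dateTimeStr
                    + "] with pattern [" + pattern + "]", e);
        }
    }
}
```

Once registered (see below in the thread), it would be called from a workflow as something like ${datetime:toSeconds(wf:conf('myDate'), 'yyyy-MM-dd HH:mm:ss')} — the prefix is whatever you register.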
ay only make the function available in workflows, but I'm
> >not sure; if not, there should be a similar property in oozie-site you can
> >set for coordinators if you need that.
> >
> >Once my proper tutorial blog post is posted, I'll add a link to this
&g
zie/libext/ or /var/lib/oozie/ (they're the same location, one
> of them is a symlink but I forget which) and restart Oozie; you should not
> run the oozie-setup.sh command.
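For reference, registering a custom workflow EL function is done in oozie-site.xml via the ELService extension property; the prefix, class, and method names below are illustrative:

```xml
<property>
  <name>oozie.service.ELService.ext.functions.workflow</name>
  <value>datetime:toSeconds=com.example.DateTimeEL#toSeconds</value>
</property>
```

The jar containing the class goes into libext/ as described above, followed by an Oozie restart.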
>
> - Robert
>
>
> On Mon, Jul 15, 2013 at 6:46 AM, Serega Sheypak >wrote:
>
> > Hm... fu
tion and a CDH
> packages/parcel installation is where to put your jar and to run or not run
> the oozie-setup.sh script.
>
> - Robert
>
>
> On Mon, Jul 15, 2013 at 9:43 AM, Serega Sheypak >wrote:
>
> > We are using parcels.
> > Also you said that I h
of the other steps from my original instructions apply to either type
> of installation.
>
>
> - Robert
>
>
>
> On Mon, Jul 15, 2013 at 10:19 AM, Serega Sheypak
> wrote:
>
> > Now I'm confused. Do I have to repackage the war if I use parcels?
> >
> >
> &g
Hi, I have a huge amount of data partitioned by hour:
my/data/archive/yyyy/MM/dd/HH
The problem is that this data can't be processed in parallel.
For example, if I want to process
my/data/archive/2013/07/16/01, I need to process
my/data/archive/2013/07/16/00 first.
I've written a coordinator
your case, both values should be 1. You don't need to set
> concurrency explicitly since default is 1, but throttle you can change
> from 0 to 1.
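In coordinator XML those two knobs live in the <controls> block; a sketch with the values from this thread:

```xml
<controls>
  <!-- at most one running action at a time -->
  <concurrency>1</concurrency>
  <!-- at most one materialized-and-waiting action -->
  <throttle>1</throttle>
</controls>
```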
>
> Question,
>
> If second action depended on output from first action, how did second
> action become 'READY' before t
It would be great if you post your workflow definition.
2013/7/30 Kamesh Bhallamudi
> Hi All,
> I am facing a problem while configuring a pig action. Please help me find
> what I am doing wrong. Please find the exception
>
> Failing Oozie Launcher, Main class
> [org.apache.oozie.action.hadoop.Pig
-param
> UBIDATA=${output}/ubi
> -param
> UPIDATA=${output}/upi
> -param
> OUTPUT=${output}
> -param
> JAR_PATH=${nameNode}/workflows/mr-workflow/lib/
>
>
>
>
>
>
> On Tue, Jul 30, 2013 at 2:05 PM, Serega Sheypak >wrot
s for parameter UBIDATA, UPIDATA and OUTPUT
>
>
> On Tue, Jul 30, 2013 at 2:59 PM, Serega Sheypak >wrote:
>
> > It's not the workflow definition. It's an action tag with nested tags.
> > 1. You have to provide values for the variables. It's hard to guess
> &g
ileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
>
>
> On Tue, Jul 30, 2013 at 4:01 PM
> is running fine.
>
>
> On Tue, Jul 30, 2013 at 4:36 PM, Serega Sheypak >wrote:
>
> > Fix is correct. Now your pig script is trying to run. It has an error.
> > Looks like you have a problem with an alias.
> > You need to fix your pig script.
> >
> &
Hi, we have more than 20 running coordinators and more than 60 workflows
used in these coordinators.
Coordinators materialize on hour/day/week/several weeks manner.
We have 2 major problems:
1. We want a general approach for collecting mapreduce (pig) counters. Many
of our pig UDFs do report counters t
1. Is there any possibility to list all submitted coordinators?
2. Is there any possibility to get coordinator definition for oozie?
3. Is there any possibility to get oozie coordinator configuration?
4. Is there any possibility to get oozie coordinator action materialization
configuration?
CLI: http://oozie.apache.org/docs/4.0.0/DG_CommandLineTool.html
> REST: http://oozie.apache.org/docs/4.0.0/WebServicesAPI.html
>
> e.g. This will return all coordinators:
> $ oozie jobs -jobtype coordinator
>
> - Robert
>
>
> On Mon, Sep 9, 2013 at 11:28 AM, Serega Sheypak
.
>
>
>
> - Robert
>
>
> On Tue, Sep 10, 2013 at 12:19 AM, Serega Sheypak
> wrote:
>
> > Thanks for the reply. I don't want to use the CLI, I want to query plain old
> > REST. In the worst case I would use the Java API.
> > Is a composite filter supp
Hi, I have rather complex input dataset dependency.
My coordinator should run each day
Coordinator does get two input datasets:
-daily result
-weekly result
and produces one daily output dataset.
Example:
Imagine now is 24.09 (the 24th of September)
Coordinator should get:
/daily_dataset/2013/0
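A sketch of the two input datasets for such a coordinator; names, paths, and initial-instance values here are illustrative, not from the original message:

```xml
<datasets>
  <dataset name="daily" frequency="${coord:days(1)}"
           initial-instance="2013-01-01T00:00Z" timezone="UTC">
    <uri-template>/daily_dataset/${YEAR}/${MONTH}/${DAY}</uri-template>
  </dataset>
  <dataset name="weekly" frequency="${coord:days(7)}"
           initial-instance="2013-01-01T00:00Z" timezone="UTC">
    <uri-template>/weekly_dataset/${YEAR}/${MONTH}/${DAY}</uri-template>
  </dataset>
</datasets>
```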
Hi, we are using Oozie version BUILD_VERSION [3.3.2-cdh4.3.0]
compiled by [jenkins] on [2013.05.28-04:29:38GMT]
I want the PurgeService to work.
I did set these props:
oozie.service.PurgeService.older.than
90
oozie.service.PurgeService.coord.older.than 90
oozie.service.PurgeService.purg
The same problem with 3.3.2-CDH-4.3
Don't know what to do. We have more than 2000 materializations and we can't
set up purging policies
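For reference, the purge-related settings usually look like this in oozie-site.xml (property names as in oozie-default.xml; the 90-day values are the ones from this thread):

```xml
<property>
  <name>oozie.service.PurgeService.older.than</name>
  <value>90</value>
</property>
<property>
  <name>oozie.service.PurgeService.coord.older.than</name>
  <value>90</value>
</property>
```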
On Thursday, 9 August 2012 at 21:14:38 UTC+4, Jabir Ahmed wrote:
>
> this is where its failing
>
>@Override
> public void init(Services servi
What do you mean by that?
Write custom SQL to clean up the Oozie DB?
I would like to use the built-in Oozie service and its properties.
2013/10/2 Jabir Ahmed
> It's quite straightforward to purge old records. You can set up your own
> script to clean up
> On Oct 2, 2013 2:47 PM, "
instead. That said, the PurgeService should
> already be listed under "oozie.services" so it should be running by
> default.
>
> - Robert
>
>
> On Wed, Oct 2, 2013 at 2:06 AM, Serega Sheypak >wrote:
>
> > Hi, we are using Oozie vervsion Oozie BUILD_VERSIO
zie-site.xml
> in the oozie.services.ext property; if it does, then the PurgeService is
> enabled.
>
> - Robert
>
>
>
>
> On Wed, Oct 2, 2013 at 10:49 AM, Serega Sheypak >wrote:
>
> > Hi Robert, thanks for reply.
> >
> > Cloudera distribution d
purge.executions = 11
in the counters section, but nothing happens, or I don't see i
2013/10/4 Serega Sheypak
> Hi, I've tried this one:
>
> <property>
>   <name>oozie.services.ext</name>
>   <value>org.apache.oozie.service.PurgeService</value>
> </property>
>
Nothing helps. Looks like I'm missing something. Oozie works, but no
purging happens.
2013/10/4 Serega Sheypak
> purge.executions= 11
> in counters section, but nothing happens or I don't see i
>
>
> 2013/10/4 Serega Sheypak
>
First, you need to post the workflow XML.
On 08.10.2013 at 18:44, "Nitin Pawar" wrote:
> Hello,
>
> I have a working setup as oozie on my vm. (oozie can schedule jobs with
> jobtacker).
> I am using hive-action to run the examples query.
>
> After the job is over, oozie marks the status
>
>
>
>
> Hive failed, error
> message[${wf:errorMessage(wf:lastErrorNode())}]
>
>
>
>
>
>
> On Tue, Oct 8, 2013 at 9:18 PM, Serega Sheypak >wrote:
>
> > First, you need to post the workflow XML.
> > 08.10.2013 18
No problem, hope it helps
2013/10/8 Nitin Pawar
> I would try to do the things mentioned in that thread.
> I do have hive-site.xml on the path but not on workflow.xml
>
> I will give that a try.
>
> Thanks a bunch Serega.
>
> Thanks,
> Nitin
>
>
> On Wed, Oct
ing.
>
> Thanks,
> Nitin
>
>
> On Wed, Oct 9, 2013 at 2:20 AM, Serega Sheypak >wrote:
>
> > No problem, hope it helps
> >
> >
> >
> > 2013/10/8 Nitin Pawar
> >
> > > I would try to do the things mentioned in that thread.
> &g
Hi, I completely fail to understand how timezones work in oozie.
The Oozie server is in GMT+0400.
Oozie timezone property: oozie.processing.timezone=UTC
I want to run a daily coordinator at 00:02 each day.
What do I have to set in the coordinator fields?
Right now I try to set
startTime=00:03
timeZone=GMT+040
=00:03/startTime=04:03.
> Please let me know what you got from this three cases.
>
> After getting your response, i can explain to you (if needed).
>
> Regards,
> Mohammad
>
>
>
>
>
> From: Serega Sheypak
> To: user@oozi
zone are used.
>
> We typically recommend users to leave the "oozie.processing.timezone" at
> UTC and to do the math for setting the times in your coordinator (like
> Mohammad said).
>
>
> - Robert
>
>
>
> On Thu, Oct 10, 2013 at 2:56 AM, Serega Sheypak >
, but this
*${coord:current(3)} looks like a magic number... Why 3, and not -1 or 123?
Because the server has a GMT+04:00 timezone?*
2013/10/13 Serega Sheypak
> Thank you very much. I did it the way you said; now I have another problem. :(
>
> I want my coordinator to run each hour and take data from
; For example, coordinator action at 7:00 UTC (11:00 your time) should use
> 6:00 UTC data as current(-1).
>
> I would request that you not use current(-3). Again, the data directory
> should be based on UTC as well.
>
> I know these are confusing sometimes.
>
> Regards,
> Mohammad
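Concretely, with oozie.processing.timezone left at UTC, a daily 00:02 run in GMT+04:00 is expressed as 20:02 UTC of the previous calendar day. A sketch (app name, dates, and property names are mine):

```xml
<coordinator-app name="daily-coord" frequency="${coord:days(1)}"
                 start="2013-10-09T20:02Z" end="2014-10-09T20:02Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>${wfAppPath}</app-path>
    </workflow>
  </action>
</coordinator-app>
```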
Maybe you are missing the shared lib in HDFS?
Or you didn't set the special property to force oozie to use the shared lib?
http://blog.cloudera.com/blog/2012/12/how-to-use-the-sharelib-in-apache-oozie/
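The property in question goes into job.properties (per the Cloudera post above):

```
oozie.use.system.libpath=true
```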
2013/10/21 Kadir Sert
> Hi,
> To be able to submit a job from a client machine, Client should have
> same co
We use a custom java action.
It works. Just don't forget to put the hive-jdbc dependency into your libs folder.
2013/10/21 Lars Francke
> Hi,
>
> I was looking for a Impala action in Oozie but couldn't find any
> discussion about this at all.
>
> I'm thinking about opening a JIRA but I'm a) not sure if
User "abc" started the coordinator and user "xyz" is trying to kill it.
"xyz" is not allowed to kill coordinators submitted by "abc".
>I also see no way to kill the Coordinator job from the Oozie web interface?
Which interface are you talking about?
2013/10/21
> After launching a coordinator job tha
t;s not the best tool to monitor oozie. I suggest you install a fresh HUE
and you'll get a lot of useful stuff there.
>
> [image: Inline image 1]
>
> You'll see that there are two eternally running coordinator jobs that I
> have no idea how to kill.
>
> Thanks,
>
1. You need to issue INVALIDATE METADATA on your entry impalad daemon (the
one used by your BI tool or Hue to connect) after adding a new partition to a
partitioned table.
2. Impala prepares aggregates very fast and it can insert into a table.
It's a time-saving solution. Hive is too slow.
22.10.2013 7:29 пол
It's by design. The action is run as a map-only job with fake input. Oozie
packages the jar, sends it to HDFS, and then launches it.
2013/10/22 Praveen Sripati
> Hi,
>
> I created a simple Oozie work flow with Sqoop, Hive and Pig actions. For
> each of there actions, Oozie launches a MR lau
Which user did you use to run it?
2013/10/22
> Hi,
>
> I am running OOZIE workflow., but when I am trying to kill it, I get the
> error
>
> Caused by: org.apache.oozie.service.AuthorizationException: E0509: User
> [hue] not authorized for Coord job [--oozie-oozi-C]
>
>
>
> Pl
Did you use the same logged-in user to run the coordinator
as you used to kill it?
2013/10/22
> Actually, I am using HUE to run the OOZIE coordinator. And I am logged into
> HUE as a different user.
>
> Thanks,
> Shouvanik
>
> -Original Message-
&
Is it user [hue]?
2013/10/22
> Yes.
>
> -Original Message-
> From: Serega Sheypak [mailto:serega.shey...@gmail.com]
> Sent: Tuesday, October 22, 2013 4:14 PM
> To: user@oozie.apache.org
> Subject: Re: problem with OOZIE
>
> Did you use the same logged user to
Did you use hue to start the coordinator?
Do you have some kind of security installed?
Do you use the Cloudera distribution?
2013/10/22
> I guess since I am using the HUE console, the user is [hue]. But I am logged
> into hue as a separate user.
>
> -Original Message-----
> From:
> --
>
> hadoop.proxyuser.oozie.hosts
> michael-hadoop-5.tsh.thomson.com
>
>
>
> I have other aspects of oozie working now, but this is the next impediment
> to solve.
>
> -Michael
>
>
>
>
> On Mon, Oct 21, 2013 at 3:57 PM, Serega Shey
hosts
*
oozie.service.ProxyUserService.proxyuser.hue.groups
*
2013/10/22
> Yup, inside core-site.xml
>
> -Original Message-
> From: Serega Sheypak [mailto:serega.shey...@gmail.com]
> Sent: Tuesday, October 22, 2013 7:49 PM
> To: user@oozie.apache.org
>
ion? : NO, it's Hortonworks
>
> -----Original Message-
> From: Serega Sheypak [mailto:serega.shey...@gmail.com]
> Sent: Tuesday, October 22, 2013 6:13 PM
> To: user@oozie.apache.org
> Subject: Re: problem with OOZIE
>
> Did you use hue to start the coordinator?
> Do you have s
be by design, but I don't see any purpose without someone
> telling
> > me why. It's more of an overhead. As I mentioned I ran a work flow with
> > three actions and three more launcher MR jobs ran.
> >
> > Praveen
> >
> >
> > On Tue, Oct 22, 20
already done using AMBARI.
>
> -Original Message-
> From: Serega Sheypak [mailto:serega.shey...@gmail.com]
> Sent: Tuesday, October 22, 2013 7:54 PM
> To: user@oozie.apache.org
> Subject: Re: Why can't I kill an Oozie coordinator job?
>
> I'm using
Serega. I am glad to answer :)
>
> Yes, that's "hdfs"
>
> How to register user with name "hdfs"?
>
> Thanks,
> Shouvanik
>
> -Original Message-
> From: Serega Sheypak [mailto:serega.shey...@gmail.com]
> Sent: Tuesday, October 22, 2013
e (e.g. the Pig client cannot be restarted and resumed).
>
>
> - Robert
>
>
> On Tue, Oct 22, 2013 at 7:40 AM, Serega Sheypak >wrote:
>
> > The other purpose is to have a common launch mechanism for all stuff.
> >
> > My typical workflow brings up to 50MB of a
Command:545 - USER[-] GROUP[-]
> TOKEN[-] APP[-] JOB[0002212-131008235441741-oozie-oozi-C]
> ACTION[0002212-131008235441741-oozie-oozi-C@3] Execute command [wf_end]
> key [0002217-131008235441741-oozie-oozi-W]
> 2013-10-22 13:31:44,940 DEBUG WfEndXCommand:545 - USER[-] GROUP[-]
> TOKEN[-] APP
It's better to run map-reduce using the built-in map-reduce action; don't
try to invent your own.
1. Move your setup/cleanup code into separate custom java actions.
2. Use the built-in map-reduce action.
Oozie knows nothing about an MR job spawned inside your action, and you would
have to reinvent oozie functio
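A minimal built-in map-reduce action looks roughly like this (mapper and reducer class names are illustrative; this uses the old mapred API property names from the Oozie spec):

```xml
<map-reduce>
  <job-tracker>${jobTracker}</job-tracker>
  <name-node>${nameNode}</name-node>
  <configuration>
    <property>
      <name>mapred.mapper.class</name>
      <value>com.example.MyMapper</value>
    </property>
    <property>
      <name>mapred.reducer.class</name>
      <value>com.example.MyReducer</value>
    </property>
  </configuration>
</map-reduce>
```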
Yes, I've seen that.
It's better to fix the problems with inserts into partitioned tables first :)
2013/10/23 Jarek Jarcec Cecho
> Just a quick note, based on the roadmap [1] it seems that the refresh will
> be significantly improved in Impala 1.2.
>
> Jarcec
>
> Links:
> 1:
> http://blog.cloudera
It's MapReduce's duty to select which TT node runs a task.
Try to put your local stuff into HDFS and use the distributed cache.
On 30.10.2013 at 19:22, a user wrote:
> I have two actions that need to run on the same datanode (due to stuff on
> the local filesystem). Is there any way to ensure this
putting the final result into HDFS.
>
> Any other ideas on ways to do this?
> -Michael
>
>
> On Wed, Oct 30, 2013 at 12:20 PM, Serega Sheypak
> wrote:
>
> > It's MapReduce's duty to select which TT node runs a task.
> > Try to put your local stuff into HDFS and
Did you try this:
${nameNode}/user/hadoop/examples/output-data/formal/${YEAR}${MONTH}${DAY}/
?
I don't understand the reason to replace path separators later...
On 16.11.2013 at 17:52, "renguihe" wrote:
> hi,
> I want to get a unique form of the output data uri.
> In my coordinator.xml, I writ
You can try wrapping these steps in an oozie java action.
A java action is executed as a map-only job on a random node using only one
mapper.
1. Dummy dirty approach:
1.1. Download from HDFS locally (use the java api or an sh script).
1.2. Run your R app, feeding it the downloaded data.
Problem: you have to install R on each
I don't see any.
We use a custom rmr build managed with puppet.
2013/11/27 ZORAIDA HIDALGO SANCHEZ
> Thanks Serega,
>
> does the second option have any advantage other than avoiding having to
> install R on each node?
>
> El 27/11/13 13:12, "Serega Sheypak" escr
Did you try this one:
<property>
  <name>oozie.launcher.mapred.child.java.opts</name>
  <value>${oozieLauncherJVMOpts}</value>
</property>
https://issues.apache.org/jira/browse/OOZIE-619
2013/12/20 Robert Kanter
> I remember seeing something similar with java-opts where they were being
> set somewhere else (mapred-site?) and declared
Hi, I'm getting a weird exception while running a Java action.
This log line is produced by the LAST line of code in the main method. There
are no System.exit calls or anything like that. The main method executes
without any exceptions.
***
2014-05-12 00:15:02,238 INFO my.java.MainClass: AFTER!!! ToolRunner.run(new
Tools()
The case is not clear :)
Can you write examples:
I read A, see B and then produce C, D.
>>b which from the data stream categorize data according to
These details don't help me understand what you want from oozie, sorry.
2014-09-25 15:06 GMT+04:00 Jakub Stransky :
> Hello experienced oozie users
No :)
2014-10-21 12:20 GMT+04:00 prabha k :
> Is there any easy option to load 500 tables data into HDFS in one shot
> after validating the data.
>
> Thanks
> PK
>
What are you trying to solve?
oozie.launcher.mapreduce.task.classpath.user.precedence used for
oozie-action
mapreduce.task.classpath.user.precedence used for MR job
2015-02-12 6:39 GMT+03:00 Som Satpathy :
> Has any one been able to successfully apply the
> 'oozie.launcher.mapreduce.task.classpath.u
What are you trying to do?
Generally, it works without any problems.
2015-02-18 21:54 GMT+03:00 xeonmailinglist :
> Hi,
>
> Oozie works with YARN?
>
in Oozie?
>
>
>
> On 18-02-2015 20:50, Serega Sheypak wrote:
>
>> are trying to do?
>> Generally, it works w/o any problems.
>>
>
>
xeonmailinglist :
> It is a pure map-reduce job.
> Can I create an oozie workflow in java with actions like mentioned?
>
>
>
> On 18-02-2015 21:02, Serega Sheypak wrote:
>
>> What is the job?
>> Is it pure map-reduce | pig | hive?
>> Oozie is a workflow runner and a
word-count is a map-reduce job.
Map reads words and sends them as keys to reduce.
Reduce counts occurrences of each word.
It's impossible to suspend map-reduce execution. What are you trying to
verify?
The only way is to split word-count into two map-reduce jobs:
map-only and mr-job.
Map-o
Hi, I have a coordinator. It wakes up each day and consumes hourly-based
paths.
The problem is that the coordinator should consume these paths:
/my/path/2015/05/20
/my/path/2015/05/21
/my/path/2015/05/22
...
/my/path/2015/06/04
How can I specify these instances using
event" section. In " you will need to mention the start and
> end instance. you can write
> ${coord:current(-23)}${coord:current(0)
> HTH.
> Regards,Mohammad
>
>
> On Friday, May 8, 2015 4:33 AM, Serega Sheypak <
> serega.shey...@gmail.com> wrote:
>
&g
pache.org/docs/4.1.0/CoordinatorFunctionalSpec.html#a6.6.1._coord:currentint_n_EL_Function_for_Synchronous_Datasets
>
> Thanks,
> -Idris
>
>
>
> On Sun, May 10, 2015 at 12:31 AM, Serega Sheypak >
> wrote:
>
> > Hi, the problem is that I need to feed last 24 hours + 4 hours more.
> >
Hi,
192MB is not an issue if you are going to process gigabytes of data.
>adding my dependencies to the classpath of my tasktrackers
You should be prepared to resolve weird jar-hell problems.
You might save a few seconds by putting your jars into the tasktracker classpath.
2015-07-20 15:47 GM
Can you go to the oozie server and grab the logs from there? My assumption is
that
${jobTracker}
${nameNode}
have incorrect values. Probably ${jobTracker} is wrong and oozie can't even
start the launcher mapper.
2015-11-04 13:43 GMT+01:00 Jaydeep Vishwakarma <
jaydeep.vishwaka...@inmobi.com>:
> You are doing -
Jeetendra G :
> hey Serega, I have cross-verified that the jobTracker value is correct. I am
> using yarn, so will the resource manager become the job tracker?
>
> On Wed, Nov 4, 2015 at 6:17 PM, Serega Sheypak
> wrote:
>
> > Can you go to oozie server and grab logs from there?
che.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
> > tmp to done: hdfs://
> >
> hadoop01.housing.com:8020/mr-history/tmp/hdfs/job_1446620926630_0017-1446629350323-hdfs-oozie%3Alauncher%3AT%3Djava%3AW%3Djava%2Dmain%2Dwf%3AA%3Djava%2Dnode%3AI-1446629362786-1-0-SUCCEEDED-default
ie its works perfectly.
>
>
> On Wed, Nov 4, 2015 at 7:43 PM, Jeetendra G
> wrote:
>
> > it says job succeeded attaching
> >
> > On Wed, Nov 4, 2015 at 7:37 PM, Serega Sheypak >
> > wrote:
> >
> >> Hm... ok, can you see what happens on YARN RM
Probably you need to increase the memory for the oozie launcher itself?
2016-02-04 20:57 GMT+01:00 Liping Zhang :
> Dear Oozie user and dev,
>
> We have a spark job that needs to be run as a workflow in oozie.
>
>
> 1. Now the spark job can be run successfully from the spark-submit command
> line as below:
>
> spark-subm
Hi, an oozie workflow by default expects a special file layout:
The directory structure looks like this:
- wf-app-dir/workflow.xml
- wf-app-dir/lib
- wf-app-dir/lib/myJavaClasses.JAR
Is there any way to specify a custom wf-app-dir/lib for each oozie workflow
action? My workflow actions are j
Hi, I have a pretty big workflow containing more than 15 steps.
Each step is implemented as a java-action. I don't want to put all workflow
dependencies under the ${oozie.wf.application.path}/lib folder. Is it possible
to specify ${oozie.wf.application.path}/lib
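Per the Files and Archives section of the workflow spec, each action can pull in its own jars; a sketch of a java action doing this (class name and jar paths are illustrative):

```xml
<java>
  <job-tracker>${jobTracker}</job-tracker>
  <name-node>${nameNode}</name-node>
  <main-class>com.example.StepOne</main-class>
  <!-- this jar is added to this action's classpath only -->
  <file>${nameNode}/libs/step-one/step-one.jar#step-one.jar</file>
</java>
```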
ttps://oozie.apache.org/docs/4.2.0/WorkflowFunctionalSpec.html#a3.2.2.1_Adding_Files_and_Archives_for_the_Job
>
> Thanks,
> Abhishek
>
> On Mon, Apr 25, 2016 at 2:21 AM, Serega Sheypak
> wrote:
>
> > Hi, oozie workflow by default expects special file layout:
> >
&
ant to use the
> oozie.action.sharelib.for.java then create those directories under
> /user/oozie/share/lib/lib_*/.
>
> Thanks,
> Abhishek
>
> On Mon, Apr 25, 2016 at 9:12 PM, Serega Sheypak >
> wrote:
>
> > Hi, thanks, I know about archives option
> >
&g
or.java then create those directories under
> /user/oozie/share/lib/lib_*/.
>
> Thanks,
> Abhishek
>
> On Mon, Apr 25, 2016 at 9:12 PM, Serega Sheypak
> wrote:
>
> > Hi, thanks, I know about archives option
> >
> >
> http://blog.cloudera.com/blog/2014/05/ho
Hi, I'm using an oozie java action to start a scalding/spark/mr job. The oozie
java action launcher has the correct classpath here:
oozie.action.conf.xml
actionConf.addResource(new Path("file:///",
System.getProperty("oozie.action.conf.xml")));
Is there any way to pass that conf to the spawned job? Right now star
duce, etc) the conf is passed by Oozie.
> Since java action only has a main method, you will have to load the action
> config yourself.
>
>
> Sent from Yahoo Mail for iPhone <https://yho.com/footer0>
>
>
> On Tuesday, April 26, 2016, 1:58 AM, Serega Sheypak <
&g
Hi, did anyone make it work properly in their project?
I need to do a dry run for my workflows.
The use case is:
A user writes a workflow and wants to:
1. Check if it is valid
2. Do a dryrun and see how it flows without executing steps.
Let's say I have a workflow with three steps:
1. distcp data from $A to $B
2. run sp
> which essentially just checks that everything resolves correctly in the
> workflow.xml without actually running any of the actions. If successful,
> it returns the String "OK". If there's a problem, it throws an exception
> that should contain the details of the p
_of_Workflow_Job>*
> we
> can see that with -dryrun option does not create nor run a job.
>
> So for the killer feature request, I think it's not possible ATM.
>
> Regards,
>
> Andras
>
> --
> Andras PIROS
> Software Engineer
> <http://www.cloudera
Hi, did anyone try to integrate oozie coordinator with kafka?
use case:
System publishes message to kafka topic (sample message)
- cluster: hdfs://prod-cluster
- path: /my/input/data
- format: avro
An Oozie coordinator listens to the kafka topic, consumes the message, and
starts a workflow.
s://oozie.apache.org/docs/4.3.0/CoordinatorFunctionalSpec.
> html#a5._Dataset>*
> .
>
> Andras
>
> On Sat, Dec 16, 2017 at 2:54 PM, Serega Sheypak
> wrote:
>
> > Hi, did anyone try to integrate oozie coordinator with kafka?
> > use case:
> >
> > System pu
ase in that realm.
>
> Thanks
>
> On Mon, Dec 18, 2017 at 7:04 PM, Serega Sheypak
> wrote:
>
> > Hi, I know default coordinator functionality, but it's limited (almost)
> to
> > HDFS.
> > Kafka (any other pub/sub or queue like rabbitMQ, whateve
> @Serega @Artem do you have ideas where Oozie HDFS path handling is way
> inflexible?
I've been using oozie for the last 5 years. It's inflexible. I've explained
why in the initial message.
The whole idea with these input / output events is way too complex and
over-engineered.