Hi all,
I've done a bit of work on this and have modified the doit.groovy to:
import org.apache.tools.ant.Task
import java.lang.reflect.Field

def groovydoit() {
    // The first (and only) <sequential> element nested in this task.
    Task body = (Task) elements.get("sequential").get(0)
    println body.dump()

    Class c = Class.forName("org.apache.tools.ant.taskdefs.Sequential")
    c.getDeclaredMethods().each { println it }

    // nestedTasks is a private field of Sequential, so reflection is needed.
    Field f = c.getDeclaredField("nestedTasks")
    f.setAccessible(true)
    Vector nestedTasks = (Vector) f.get(body)
    for (Enumeration e = nestedTasks.elements(); e.hasMoreElements();) {
        Task nestedTask = (Task) e.nextElement()
        nestedTask.perform()
    }
}
You'll notice that I'm using reflection to get the private field *nestedTasks* in *org.apache.tools.ant.taskdefs.Sequential*. After that, I can do what execute() would do: iterate over the tasks nested in the element body and perform() each one.
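To isolate the reflection pattern, here it is in plain Java on a stand-in class (Holder and its field are made up, just to show the same getDeclaredField/setAccessible calls):

```java
import java.lang.reflect.Field;
import java.util.Enumeration;
import java.util.Vector;

public class ReflectDemo {
    // Stand-in for Sequential: a class with a private Vector field,
    // analogous to Sequential.nestedTasks.
    static class Holder {
        private Vector<String> items = new Vector<>();
        Holder() { items.add("one"); items.add("two"); }
    }

    public static void main(String[] args) throws Exception {
        Holder h = new Holder();
        // Same calls as in the Groovy above: getDeclaredField + setAccessible.
        Field f = Holder.class.getDeclaredField("items");
        f.setAccessible(true);
        @SuppressWarnings("unchecked")
        Vector<String> items = (Vector<String>) f.get(h);
        for (Enumeration<String> e = items.elements(); e.hasMoreElements();) {
            System.out.println(e.nextElement());
        }
    }
}
```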
I'm thinking that if I can do:
* Serialize each nested task to a string representation
* Make my tokenization substitutions
* Take the resultant string and form a Task from it
then this would solve the tokenization challenge. Can anyone suggest how I can do the above?
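For step 2, I imagine something like the following regex pass over the serialized text. This is just a sketch: the token map and its values are placeholders I made up, and steps 1 and 3 (serializing a Task and reforming one from the result) are still the open questions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenizerSketch {
    // Matches %{token} entries, like those in the <mytokenizer> body.
    private static final Pattern TOKEN = Pattern.compile("%\\{([^}]+)\\}");

    static String substitute(String template, Map<String, String> tokens) {
        Matcher m = TOKEN.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = tokens.get(m.group(1));
            // Leave unknown tokens untouched rather than failing.
            m.appendReplacement(sb,
                Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical token values, just to exercise the substitution.
        Map<String, String> tokens = new HashMap<>();
        tokens.put("mytask", "<echo");
        tokens.put("myattribute", "message");
        tokens.put("myvalue", "hello");
        tokens.put("/mytask", "/>");
        String template = "%{mytask} %{myattribute}=\"%{myvalue}\" %{/mytask}";
        System.out.println(substitute(template, tokens));
        // Prints: <echo message="hello" />
    }
}
```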
Thanks,
Steve Amerige
SAS Institute, Deployment Software Development
On 1/25/2012 1:12 PM, Scot P. Floess wrote:
Steve,
Definitely gonna need to think about this ;) At this point, "I got
nothin'" ;)
On Wed, 25 Jan 2012, Steve Amerige wrote:
Hi Flossy,
On 1/25/2012 10:00 AM, Scot P. Floess wrote:
Just at a first glance - one consideration I'd mention is that what you
list is not syntactically correct XML... So, I think if you wanted
something like that you will need to preprocess and convert to an XML that
can be processed. Unless that is what you are saying and I've
misunderstood ;)
Thanks for the reply. You're right in that the syntax wasn't correct. Here
is the revision:
<target name="myentrypoint">
<!-- ... -->
<mytokenizer>
%{mytask} %{myattribute}="%{myvalue}" %{/mytask}
%{myblockofcode}
</mytokenizer>
<!-- ... -->
</target>
The *mytokenizer* task would do substitutions for the %{token} entries. The requirement is that the tokenization happen not as a preprocessing step, but during the execution of the *myentrypoint* target. Compiling code is not an option for me, and changing the requirements I have to live with is not possible. I will define the *mytokenizer* macrodef or scriptdef in the same file as *myentrypoint*, and I can use Ant 1.7, Ant-Contrib, and Groovy code.
I'm trying to figure out how to define *mytokenizer* to solve this problem.
Thanks,
Steve Amerige
SAS Institute, Deployment Software Development
Scot P. Floess RHCT (Certificate Number 605010084735240)
Chief Architect FlossWare http://sourceforge.net/projects/flossware
http://flossware.sourceforge.net
https://github.com/organizations/FlossWare
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@ant.apache.org
For additional commands, e-mail: user-h...@ant.apache.org