Guys,

Last week was a great time in New Orleans at SBL. Many good relationships were forged or rekindled, and I hope some exciting things will come from the time spent there.
There was one session that I think gives us something against which to measure ourselves (not that it's my personal goal to compete with commercial vendors, but nonetheless these are good benchmarks). In the Computer Technology Track there was a one-hour session called Bible Technology Shootout, where Logos, BibleWorks, Accordance, and Olive Tree were given the floor to show how their software solved 5 common Bible research problems. It might be fun to see how each of your frontends can answer these challenges, and where we need to provide better support in the engine to give you the results you need to offer this functionality to your end users.

As we wrap up 1.6.1 of the engine, I have some work I would like to commit to start the next phase of development work on SWORD 1.7.x. Some of this work will help with these problems. Primarily, we've been wanting to move the hardcoded clucene fields out of the core framework and place each field's extraction code into a modular class. This will allow us to add additional fields (besides the current 'key', 'content', 'lemma', 'prox', and 'proxlem') by simply adding a new filter that extracts the data for that field from a verse. Frontends could add these so they can support additional things like indexing footnotes and headings if they feel it is beneficial for their users (I think BibleTime currently includes these), or anything else from the data (most easily from entry attributes) that they feel would be useful to have fast indexed searching against.

The first addition I'm hoping to commit is a field 'lemmamorph' that takes the form: lemma1@morph1 lemma2@morph2 lemma3@morph3 etc. It was apparent that most major commercial Bible software vendors supply searching of data in this format and expect users to know the lemma@morph syntax (though many have wizards and completion to help users arrive at the syntax).
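For what it's worth, here is a rough sketch of the kind of filter class I have in mind. The names and signatures below are made up purely for illustration (the real 1.7.x interface will certainly differ, and the lemma/morph pairs would actually be pulled from a verse's entry attributes rather than passed in directly):

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// (lemma, morph) pairs for one verse -- in the real engine these would
// come from the module's entry attributes, not be handed in like this.
typedef std::vector<std::pair<std::string, std::string> > TaggedWords;

// Hypothetical base class: one extractor per index field. A registry of
// these in the engine would replace the hardcoded handling of 'key',
// 'content', 'lemma', 'prox', and 'proxlem', and a frontend could
// register extras (footnotes, headings, ...) if it wants them indexed.
struct FieldExtractor {
    virtual ~FieldExtractor() {}
    virtual std::string fieldName() const = 0;
    // Returns the token stream to index under fieldName() for one verse.
    virtual std::string extract(const TaggedWords &words) const = 0;
};

// Emits one "lemma@morph" token per word, e.g. "G3588@T-NPM G1161@CONJ",
// so a single field supports combined lemma+parsing queries.
struct LemmaMorphExtractor : public FieldExtractor {
    std::string fieldName() const { return "lemmamorph"; }
    std::string extract(const TaggedWords &words) const {
        std::ostringstream out;
        for (size_t i = 0; i < words.size(); ++i) {
            if (i) out << ' ';
            out << words[i].first << '@' << words[i].second;
        }
        return out.str();
    }
};
```

With tokens shaped like this, a wildcard on either side of the '@' gives the kind of queries the vendors were demonstrating, e.g. every aorist of a given lemma, or every word carrying a given parsing regardless of lemma.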
So, here are the challenges presented to the vendors at that session:

1) Give the parsing of a word and its meaning from a standard source.

2) Show all the occurrences of a word in the NT and LXX, and show the Hebrew word which corresponds with the Greek in the LXX (if there is a correspondence).

3) Find all the occurrences of οἱ δὲ in Matthew's Gospel followed by a finite verb within the clause.

4) I want to study a part of speech, e.g., demonstrative pronouns or interjections. How do I get all of the lemmas for that part of speech, get all the occurrences of those lemmas, and have the results organized in such a way that I could write an article/monograph on that part of speech from the data?

5) I want to study the inflections of the Hebrew middle weak verb, and I want to see what the range of possible variations is for each of the conjugations (perfect, imperative, etc.), person, number, gender, and stem. This means I need to find all the middle weak verbs, find all their occurrences, and organize them in such a way that the variations of their inflections are immediately apparent. The goal of the data organization would be to allow me to write an article about variations of the Hebrew middle weak verb.

Have fun with these :)

Troy

_______________________________________________
sword-devel mailing list: sword-devel@crosswire.org
http://www.crosswire.org/mailman/listinfo/sword-devel
Instructions to unsubscribe/change your settings at above page