On Sun, Oct 7, 2012 at 1:02 AM, rusi <rustompm...@gmail.com> wrote:
> On Oct 7, 9:15 am, Ramchandra Apte <maniandra...@gmail.com> wrote:
>> On Sunday, 7 October 2012 00:13:58 UTC+5:30, Darryl Owens wrote:
>> > I am currently starting my PhD in software quality assurance and have
>> > been doing a lot of reading round this subject. I am just trying to
>> > find out if there is any relevant/current research in the production
>> > of a generic quality assurance tool, i.e. a tool/methodology that can
>> > accept many languages for the following areas:
>> >
>> > • Problems in code/coding errors
>> > • Compiler bugs
>> > • Language bugs
>> > • Users' mathematical model
>>
>> The main tests for python is:
>> http://docs.python.org/library/unittest.html

For other languages, and even in Python, you can roll your own. I'd begin by cataloguing each language's calls (prioritized by how widely each language is used, arranged in a hierarchy of usage), its known language bugs, and the mathematical models that need to be exercised; then perform the necessary function call, or series of calls, pass data in, and check the returns. In some cases that means checking command-line errors, or checking error logs from URL calls. I'd also suggest consulting the bug repositories for the OS, browser, or app framework the language runs in (by version/build number, etc.), or scraping that data from their URLs, in order to check for and correct known problems.

--
Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com
--
http://mail.python.org/mailman/listinfo/python-list
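P.S. Here's a minimal sketch of the "pass data in, check the returns" idea using the unittest module linked above. The function `add` is just a hypothetical stand-in for whatever call you'd be exercising:

```python
import unittest

def add(a, b):
    # Stand-in for the function under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_returns_sum(self):
        # Pass data in, check the return value.
        self.assertEqual(add(2, 3), 5)

    def test_rejects_bad_input(self):
        # Check that invalid input raises the expected error.
        with self.assertRaises(TypeError):
            add(2, None)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The same pattern generalizes: one test case per call (or series of calls), with assertions on the returns and on the errors you expect.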