Thank you, Lie and Andrew, for your help. I have looked at NLTK quite closely, but its parsers seem to be intended only for demonstration: the bundled grammars are very limited, and even the parser whose grammar is supposed to be "large" does not cover common words like "I".
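To make the coverage problem concrete, here is roughly the kind of failure I keep hitting (a toy grammar of my own rather than one of the bundled ones, but it shows the same sort of gap): the chart parser gives up with a ValueError as soon as the sentence contains a word the grammar does not list.

    import nltk

    # Toy grammar standing in for NLTK's bundled ones -- deliberately
    # missing the word "I", to illustrate the coverage failure I mean.
    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        NP -> Det N
        VP -> V NP
        Det -> 'the' | 'a'
        N -> 'dog' | 'cat'
        V -> 'saw' | 'chased'
    """)
    parser = nltk.ChartParser(grammar)

    try:
        for tree in parser.parse("I saw the dog".split()):
            print(tree)
    except ValueError as err:
        # NLTK refuses outright when the input contains words the
        # grammar does not cover ("Grammar does not cover some of
        # the input words" or similar).
        print(err)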
I need to parse a large amount of text collected from the web (around a couple of hundred sentences at a time) very quickly, so I need a parser with a broad-coverage grammar, enough to handle all of these texts; that is what I meant by "random". An experienced programmer has advised me that Python is rather slow at processing large amounts of data, which is why there are not many parsers written in Python, and he recommends that I use Jython so that I can call parsers written in Java. What are your views on this? I have pasted my rough understanding of his suggestion below. Thank you very much.
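This is an untested sketch of the mechanism as I understand it: under Jython, any class on the Java CLASSPATH can be imported as if it were a Python module. The java.lang/java.util imports are real; the parser import at the end is hypothetical and only meant to show where a Java parser would plug in.

    # Jython sketch (untested) of driving a Java library from Python code.
    from java.lang import System
    from java.util import ArrayList

    # Plain Java classes can be used directly from Jython:
    words = ArrayList()
    for w in "I saw the dog".split():
        words.add(w)
    System.out.println(words)

    # A Java parser on the CLASSPATH would be imported the same way,
    # e.g. (hypothetical names -- check the parser's own documentation):
    #   from edu.stanford.nlp.parser.lexparser import LexicalizedParser
    #   parser = LexicalizedParser.loadModel("englishPCFG.ser.gz")
    #   tree = parser.parse(words)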