seen = set()
unique_lemma_list = []
for w in lemma_list:
    if w not in seen:
        seen.add(w)
        unique_lemma_list.append(w)
sense_number = 0
for lemma in unique_lemma_list:
    sense_number = sense_number + len(wn.synsets(lemma, pos))
return sense_number / len(unique_lemma_list)
>>> average_polysemy('n')
1
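For readers without the WordNet data installed, here is a self-contained sketch of the same averaging logic. The `toy_synsets` dict and the `average_polysemy_demo` name are made-up stand-ins for illustration; the real code above calls `wn.synsets(lemma, pos)`, which requires NLTK's WordNet corpus.

```python
def average_polysemy_demo(lemmas, synsets):
    # De-duplicate the lemmas while preserving first-seen order,
    # using an auxiliary set for fast membership tests.
    seen = set()
    unique = []
    for w in lemmas:
        if w not in seen:
            seen.add(w)
            unique.append(w)
    # Average number of senses per unique lemma.
    total = sum(len(synsets.get(lemma, [])) for lemma in unique)
    return total / len(unique)

# Toy sense inventory standing in for wn.synsets(lemma, pos).
toy_synsets = {
    'dog': ['dog.n.01', 'frump.n.01'],            # 2 senses
    'cat': ['cat.n.01'],                          # 1 sense
    'run': ['run.n.01', 'run.n.02', 'run.n.03'],  # 3 senses
}

print(average_polysemy_demo(['dog', 'cat', 'dog', 'run'], toy_synsets))  # 2.0
```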
On Sun, Sep 9,
> extra copies of everything.
>
> On Sunday, September 9, 2012 at 2:31 AM, John H. Li wrote:
>
> Thanks first, I could understand the second approach easily. The first
> approach is a bit puzzling. Why are seen=set() and seen.add(x) still
> necessary there if we can use
>
> The difference is: option #1 is more efficient speed-wise, but uses
> more memory (an extraneous set hanging around), whereas the second is slower
> (``in`` is slower in lists than in sets) but uses less memory.
>
> On Sunday, September 9, 2012 at 1:56 AM, John H. Li wrote:
>
> Many thanks. If I want to keep the order:
>
> uniqued = []
> for x in original:
>     if x not in uniqued:
>         uniqued.append(x)
>
> The difference is: option #1 is more efficient speed-wise, but uses
> more memory (an extraneous set hanging around), whereas the second is slower
> (``in`` is slower in lists than in sets) but uses less memory.
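The trade-off described above can be checked directly. This is a sketch with names of my own choosing (`dedupe_with_set`, `dedupe_with_list` are not from the thread): both functions produce the same de-duplicated list, but the first keeps an auxiliary set for O(1) membership tests, while the second rescans the output list on every ``in`` check.

```python
import timeit

def dedupe_with_set(items):
    # Option #1: auxiliary set gives fast membership tests, at the
    # cost of the extra set hanging around in memory.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedupe_with_list(items):
    # Option #2: no extra set, but ``in`` scans the output list
    # on every iteration, so this is quadratic in the worst case.
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

data = list(range(2000)) * 2
print(dedupe_with_set(data) == dedupe_with_list(data))  # True
print(timeit.timeit(lambda: dedupe_with_set(data), number=5))
print(timeit.timeit(lambda: dedupe_with_list(data), number=5))
```

On any non-trivial input the set-backed version is markedly faster, which matches the explanation quoted above.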
First, thanks very much for your kind help.
1) Furthermore, I tested the insert function. It did work, as follows:
>>> text = ['The', 'Fulton', 'County', 'Grand']
>>> text.insert(3,'like')
>>> text
['The', 'Fulton', 'County', 'like', 'Grand']
>>>
2) I tested the text from NLTK. It is actually a list