def tokenizer(str, chr=' '): # here's a useful tool, just like the StringTokenizer feature in Java
    if chr != '':
        chr = chr[0]  # use only the first character as the separator
    else:
        chr = ' '
    tokens = [""]
    z = 0  # index of the token currently being built
    for x in str:
        if x != chr:
            tokens[z] = tokens[z] + x
        else:
            z = z + 1
            tokens.append("")
    return tokens
Maybe I don't understand your intent here, but why don't you just use the builtin str.split?
>>> tokenizer('abc de fgh')
['abc', 'de', 'fgh']
>>> 'abc de fgh'.split()
['abc', 'de', 'fgh']
>>> 'abc.de.fgh'.split('.')
['abc', 'de', 'fgh']
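For what it's worth, the two differ on runs of separators: the tokenizer (like str.split with an explicit separator) keeps empty tokens, while the no-argument str.split collapses whitespace. A quick check (my own example, not from the post above):

>>> tokenizer('abc  de')
['abc', '', 'de']
>>> 'abc  de'.split(' ')
['abc', '', 'de']
>>> 'abc  de'.split()
['abc', 'de']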
list = ['abc', 'def', 'xyz']
str = ''
for x in list:
    str += x + '.'
A loop with += on strings is generally an inefficient way to go: each += can build a brand-new string, so the cost of the loop grows roughly quadratically with the length of the result. You would do better with something like:
>>> lst = ['abc', 'def', 'xyz']
>>> '.'.join(lst)
'abc.def.xyz'
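If you want to see the difference rather than take it on faith, here is a rough timeit sketch (my own code, not from the thread; the names with_plus and with_join are mine, exact numbers vary by interpreter, and recent CPython can sometimes optimize in-place str concatenation, which narrows the gap):

import timeit

lst = ['abc'] * 10000  # a reasonably long list so the cost is visible

def with_plus():
    # repeated +=: may copy the growing string on each iteration
    s = ''
    for x in lst:
        s += x + '.'
    return s

def with_join():
    # single pass: join computes the final size once
    return '.'.join(lst)

print('+= loop:', timeit.timeit(with_plus, number=100))
print('join:   ', timeit.timeit(with_join, number=100))

Note also that join avoids the trailing '.' the += loop leaves behind.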
Steve