On Aug 12, 7:35 pm, Dustan <[EMAIL PROTECTED]> wrote:
> On Aug 12, 5:09 pm, Steven Bethard <[EMAIL PROTECTED]> wrote:
> > def iter_primes():
> >     # an iterator of all numbers between 2 and +infinity
> >     numbers = itertools.count(2)
> >
> >     # generate primes forever
> >     while True:
> >         # get the first number from the iterator (always a prime)
> >         prime = numbers.next()
> >         yield prime
> >
> >         # remove all numbers from the (infinite) iterator that are
> >         # divisible by the prime we just generated
> >         numbers = itertools.ifilter(prime.__rmod__, numbers)
>
> This is kind of OT (from the thread), but in this iter_primes
> function, numbers is becoming an ifilter of an ifilter of an ifilter
> of an ifilter of an ifilter of an ifilter of... Is that really at all
> efficient for larger numbers (millions or billions, or even bigger)?
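For concreteness, the stacking described above can be seen in a Python 3
rendering of the quoted generator (a sketch; itertools.ifilter and the
.next() method are the Python 2 spellings of the builtins filter and
next):

    import itertools

    def iter_primes():
        # candidate stream: 2, 3, 4, ... to infinity
        numbers = itertools.count(2)
        while True:
            # the first number to survive every filter is prime
            prime = next(numbers)
            yield prime
            # wrap the stream in one more filter; prime.__rmod__(n)
            # computes n % prime, which is falsy exactly when prime
            # divides n
            numbers = filter(prime.__rmod__, numbers)

    >>> list(itertools.islice(iter_primes(), 8))
    [2, 3, 5, 7, 11, 13, 17, 19]

After the k-th prime is yielded, a surviving candidate has to be pulled
through all k nested filter objects, so the per-candidate cost grows
with the number of primes produced so far.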
To word my question better: how does this algorithm compare with
maintaining a dictionary or set and checking each candidate against
those structures to see whether it is divisible? All of the algorithms
I've described are O(n^2), if I'm not mistaken, but as far as memory
efficiency and the like go, how do they compare?
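One concrete shape the dictionary-based alternative can take (a sketch
along the lines of the well-known incremental-sieve recipe; the name
dict_primes and the details are my own illustration, not necessarily the
exact scheme meant above): map each upcoming composite to one prime that
divides it, so testing a candidate is a single dict lookup rather than a
trip through a stack of filters.

    import itertools

    def dict_primes():
        # maps each upcoming composite to one prime that divides it
        composites = {}
        for n in itertools.count(2):
            if n not in composites:
                # n was never recorded as a composite, so it is prime;
                # the first multiple worth recording is n*n (smaller
                # multiples have a smaller prime factor already on file)
                yield n
                composites[n * n] = n
            else:
                # n is composite: slide its recorded prime forward to
                # the next multiple not already claimed by another prime
                p = composites.pop(n)
                m = n + p
                while m in composites:
                    m += p
                composites[m] = p

Memory-wise this keeps exactly one dict entry per prime yielded so far
(each pop is followed by a reinsert at a larger key), while the
nested-filter version likewise keeps one filter object per prime but has
to run candidates back through the whole stack on every call to next().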