On Feb 8, 3:15 pm, dw314159 <dw314...@gmail.com> wrote:
> I am observing the same behavior in the Django shell. Here the actual
> query runtime is about the same between Oracle and PostgreSQL
> back-ends, but the total turnaround time is about 18 times longer with
> Oracle. I believe the following code demonstrates this case:
>
>     from django.db import connection
>     import minilims.log.models as log
>     import time
>
>     time_list = []
>     for n in range(0, 20):
>         t1 = time.time()
>         entries = log.Param.objects.filter(log=6).order_by('stuff', 'id')
>         entry = [x for x in entries]
>         t2 = time.time()
>         time_list.append(t2 - t1)
>     print len(connection.queries), 'queries ran.'
>     average_time = sum(time_list) / len(time_list)
>     # display minimum, average, and maximum turnaround time
>     print min(time_list), average_time, max(time_list)
>     # display average query time
>     print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
>
> The above code in the shell using a PostgreSQL back-end reports:
>
> >>> # display minimum, average, and maximum turnaround time
> >>> print min(time_list), average_time, max(time_list)
> 0.203052997589 0.211852610111 0.234575033188
> >>> # display average query time
> >>> print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
> 0.0557
>
> However, running the same code with an Oracle back-end, after
> restarting the shell, results in:
>
> >>> # display minimum, average, and maximum turnaround time
> >>> print min(time_list), average_time, max(time_list)
> 3.59030008316 3.64263659716 4.33223199844
> >>> # display average query time
> >>> print sum([float(x['time']) for x in connection.queries]) / len(connection.queries)
> 0.05825
>
> Any ideas?
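(As an aside, the measurement loop above can be factored into a small reusable helper. This is a Python 3 sketch, independent of Django; the function name is my own. Note that wall-clock turnaround includes driver overhead and row conversion, which the timings recorded in connection.queries do not capture.)

```python
import time

def time_repeatedly(fn, runs=20):
    """Call fn() `runs` times; return (min, avg, max) wall-clock seconds.

    Wall-clock time covers everything the caller waits for, including
    driver round-trips and result conversion, not just the raw query.
    """
    samples = []
    for _ in range(runs):
        start = time.time()
        fn()
        samples.append(time.time() - start)
    return min(samples), sum(samples) / len(samples), max(samples)
```

Something like `time_repeatedly(lambda: list(log.Param.objects.filter(log=6).order_by('stuff', 'id')))` would then reproduce the measurement above in one call.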
What does the model look like, and how many rows does your query return? The Oracle backend has to do some extra processing over the data to get its results into the expected format. This may be what you're seeing, although I would be surprised if it is really dominating the query time by that much.

LOB columns can also slow the backend down significantly, since we have to make extra round-trips to the database to read their contents. Neither of these things would be included in the recorded query time.

If your model includes LOB columns or has a large number of fields, then your best bet is probably to use QuerySet.defer() or QuerySet.only() to limit the fields returned to those that are specifically of interest.

-- 
You received this message because you are subscribed to the Google Groups "Django users" group.
To post to this group, send email to django-users@googlegroups.com.
To unsubscribe from this group, send email to django-users+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/django-users?hl=en.
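To illustrate the general idea behind defer()/only(): deferred columns are simply not fetched until the first time the attribute is accessed, at which point one extra query runs. Here is a loose pure-Python sketch of that lazy-loading pattern (all names hypothetical; this is not Django's actual implementation):

```python
class DeferredAttribute:
    """Non-data descriptor that loads a value on first access only,
    loosely mimicking how Django treats a deferred column."""

    def __init__(self, name, loader):
        self.name = name
        self.loader = loader  # stands in for the extra database round-trip

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # First access: fetch the value and cache it in the instance
        # dict, which shadows this descriptor on later lookups.
        value = self.loader(obj)
        obj.__dict__[self.name] = value
        return value


LOADS = []  # records each simulated database round-trip

def fake_lob_fetch(obj):
    LOADS.append(obj)
    return "large LOB payload"


class Param:
    # In real code, something like
    #   log.Param.objects.filter(log=6).only('id')
    # would leave 'stuff' deferred in this fashion.
    stuff = DeferredAttribute("stuff", fake_lob_fetch)
```

Constructing a Param costs nothing for `stuff`; the payload is fetched exactly once, on first access, and never if the field is never touched, which is why deferring LOB columns avoids the per-row round-trips described above.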