I had the same kind of problem when I started with Django.  The tracebacks
wanted to include the full contents of any QuerySet variables I had in my
functions, and that just wasn't going to work since they sometimes had more
than 500,000 rows.  I "fixed" it by putting my QuerySet variables in a
little wrapper object, which prevented the debug code from evaluating them
and including their string representation in the traceback information.
Perhaps a little inelegant, but it made the code debuggable.
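
A rough sketch of the sort of wrapper I mean is below (the class, attribute,
and model names are just for illustration; the point is only that the local
variable Django's debug view sees has a cheap repr):

class QuerySetWrapper(object):
    """Keep Django's debug traceback from repr()-ing the QuerySet,
    which would evaluate it and pull every row back."""
    def __init__(self, queryset):
        self.queryset = queryset

    def __repr__(self):
        # Report only the model name, never the rows themselves.
        return "<QuerySetWrapper for %s>" % self.queryset.model.__name__

# Usage in a view: keep the wrapper in the local variable and unwrap
# only where you actually iterate (Phone is a stand-in model here):
#   phones = QuerySetWrapper(Phone.objects.filter(company=company))
#   for phone in phones.queryset:
#       ...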

Cheers,
Karen

On 3/1/07, Tim Chase <[EMAIL PROTECTED]> wrote:
>
>
> I've run into an odd problem that I'm hoping someone out there
> has encountered and can offer tips on.
>
> I've got a fairly gargantuan database of phone information (my
> company manages cell-phone accounts for other companies).
> However, when I try to troubleshoot my views/templates, the
> tracebacks seem to want to pull back some humongous portion of
> the data (600k+ records in one table, and this is the "small"
> testing DB).  This eats heaps of system memory as Python/Django
> builds the multi-megabyte strings it ships off in the response,
> and then drags browsers to a crawl as they try to hang on to
> this voluminous output.
>
> While it's helpful to see some of the data, is there a way to
> keep traceback extraction from taking 10+ minutes? :)
>
> Thanks,
>
> -tkc
