@Josh
OK, I tested against a table with 1.8M records, joined in 3 other tables, 
and then queried out anywhere from 10K up to 1.2M records with 20 columns 
and had no problems. It worked fine with both as_dict=False and 
as_dict=True passed to executesql(). This was with the latest web2py trunk 
on Python 2.7.5 and MS SQL 2012 Express.
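
In case it helps, here's roughly what the test looked like. The table name 
and row counts below are made up, but executesql() and its as_dict argument 
are the real web2py API:

    # sketch of the test - 'big_table' is a stand-in name
    rows = db.executesql('SELECT TOP 1200000 * FROM big_table')  # list of tuples
    dicts = db.executesql('SELECT TOP 1200000 * FROM big_table',
                          as_dict=True)  # list of dicts, keyed by column name
    print len(rows), len(dicts)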

I'm a bit confused how you're even able to get a None value returned by 
executesql() - if the DB returns no records you don't get None, you get an 
empty list []. Try adding
print len(test)

to your code immediately after your 
test = db.executesql('SELECT * from Table')

line. Verify whether it is really returning no records from the DB (and if 
test truly is None, the len() call will raise a TypeError instead). Do this 
before any other processing of the results, to be certain it isn't 
something later in your code that is accidentally altering the value of 
test under certain conditions. (Yeah, been there, done that, wasn't much 
fun.)
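
For example, something like this (the query is just a stand-in for whatever 
you're actually running, and it assumes Python 2's print statement since 
that's what we're both on):

    test = db.executesql('SELECT * from Table')
    # check the result immediately, before anything else touches it
    if test is None:
        print 'executesql() really did return None'
    else:
        print 'got %d rows of type %s' % (len(test), type(test))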

On Tuesday, November 19, 2013 4:43:14 PM UTC-6, Brian M wrote:
>
> Wow, you actually need to pull 50,000 rows at a time?! Are you sure there 
> isn't some sort of aggregating that could be done at the database level 
> to cut that down? While I work with large tables, I have not been retrieving 
> anywhere near that many rows at once. I will give it a shot and see if I 
> can reproduce.
>
