Anyone storing Facebook IDs will hit that limit, so as a PostgreSQL shop
developing lots of FB apps, we can't wait for this to be released.
Should have some test results on Monday.
On Friday, 27 April 2012 04:40:06 UTC+10, villas wrote:
>
> Thinking more about this, an INTEGER will generally store over 2 billion
> records. Is the r
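For context on the limit being discussed: a signed 32-bit INTEGER column tops out at 2,147,483,647, and Facebook-era 64-bit IDs overflow it, which is why a big-integer field type is needed. A minimal sketch (the `fb_id` value and field name are hypothetical, purely illustrative):

```python
# Why 32-bit INTEGER columns overflow: the signed range is -2**31 .. 2**31 - 1.
INT_MAX = 2**31 - 1  # 2147483647

# Illustrative (not real) Facebook-style 64-bit ID.
fb_id = 100004000000000

print(INT_MAX)          # 2147483647
print(fb_id > INT_MAX)  # True

# In web2py's DAL the usual workaround is a big-integer field, e.g.:
#   Field('fb_id', 'bigint')
```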
>
>
>
> As I understand it, the IDs get rebuilt if you export a table as a CSV file
> and import it on another computer. When the whole database gets exported
> and imported, the relationships stay consistent.
>
>
Yes, I've learned this the hard way. Moving the database between computers
is OK.
In my experience you're better off using the native database backup/restore
mechanism if you want to retain IDs.
web2py's CSV import keeps references but assigns new sequential IDs, as long as
you include all the relevant tables in the export. From memory, if you don't want
the actual data of a ref
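As a rough illustration of the re-keying described above (this is a sketch of the idea, not web2py's actual import code; all table and field names are hypothetical), the import assigns fresh sequential IDs and rewrites references through an old-to-new mapping:

```python
# Two "exported" tables; items.category references categories.id.
categories = [{"id": 7, "name": "books"}, {"id": 9, "name": "music"}]
items = [{"id": 3, "title": "Dune", "category": 7}]

# Import assigns fresh sequential IDs and records the old -> new mapping.
id_map = {}
new_categories = []
for new_id, row in enumerate(categories, start=1):
    id_map[row["id"]] = new_id
    new_categories.append({"id": new_id, "name": row["name"]})

# References are rewritten through the same mapping, so they stay consistent
# even though every row got a new ID.
new_items = [
    {"id": i, "title": r["title"], "category": id_map[r["category"]]}
    for i, r in enumerate(items, start=1)
]

print(new_items[0]["category"])  # 1
```

This only works if every referenced table is part of the same export, which matches the caveat above about including all the relevant tables.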
I also did Python first, web second and was fortunate enough to have
the time to compare pretty much every single framework out there. The
main reasons web2py is my preferred framework:
- it is lean and easy to understand 'all the way down'
- this means you are not forced into doing anything the
Does anyone have an opinion about the web2py video lectures from
codeschool.org? I had a look at the first video and I thought it was
quite a nice intro:
http://www.youtube.com/playlist?list=PL978B2CE2D788F745
As a 'bystander', I personally think that Niphlod's response is of
such good quality that the gist of it deserves inclusion in the book.
+1
Having this option would make it really simple to switch between the
full-blown DAL result set and a faster stripped-down one (which could
then be adapted with the processor to keep the rest of the code
working).
> I've been thinking about something like this as well. Instead of a separate
> se
Not sure if there is a difference in the resulting file, but I usually
use db.category.export_to_csv_file(open(...))
On Feb 16, 3:24 pm, Richard Penman wrote:
> I exported a table from sqlite with:
> open(filename, 'w').write(str(db(db.category.id).select()))
>
> Output file looks as expected.
>
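For what it's worth, the difference matters once field values contain commas. A stdlib sketch (not web2py's actual implementation, and the sample rows are made up) of why a real CSV writer beats a str() dump:

```python
import csv
import io

rows = [(1, "books"), (2, "rock, pop")]  # note the embedded comma

# Naive str() dump: ambiguous once a value contains the delimiter.
naive = "\n".join(str(r) for r in rows)

# Proper CSV: the embedded comma is quoted, so a re-import can parse it.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "name"])
writer.writerows(rows)

print(buf.getvalue())
```

export_to_csv_file goes through the DAL's CSV machinery, so it produces output that import_from_csv_file can read back, which the str() form does not guarantee.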
> We could not call it 1.100 because it would break the old automatic
> upgrade mechanism (1 comes before 9, yikes).
>
Why not make the upgrade from the old version a two-step process? Or is
this too high a price to pay for being absolutely clear that the API
has not been broken?
I know bumping major ve
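The "1 comes before 9" problem above is lexicographic string comparison. A small sketch of the pitfall and a numeric-tuple fix (the version_key helper is hypothetical, not web2py's actual upgrade code):

```python
# String comparison goes character by character, so "1.100" sorts
# before "1.99" -- the newer version looks "older".
print("1.100" < "1.99")  # True

def version_key(v):
    """Turn '1.100' into (1, 100) so components compare numerically."""
    return tuple(int(part) for part in v.split("."))

print(version_key("1.100") > version_key("1.99"))  # True
```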