How do I prevent locking problems with cron jobs?
I have a cron job that goes through a database queue and slowly
processes and deletes each item on the queue. There are webpages that
read and display that queue, and other pages that add to the queue. I
want to prevent the slow cron job from blocking those pages.
I routinely run into migration problems. I suspect this happens when I
change a column's datatype, or when I remove a table from db.py and
then later make a new table with the same name as the old one.
In these situations, the migrations get messed up and I get
stack traces in sql.py with errors like
> 1) remove the column, then 2) add
> the column again with the new type. In this case web2py understands
> you do not want to keep the data and will not attempt to do it.
>
> On Nov 14, 7:08 pm, toomim wrote:
>
>
>
> > I routinely run into migration problems. I suspect this happens when I
> >
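To make that recipe concrete, here is a sketch of the two deployments (the 'product' table and 'price' field are invented for illustration, not from the thread):

# Step 1 (first deployment): the field is deleted from db.py, and one
# migration run drops the column along with its data:
#   db.define_table('product', Field('name'))
# Step 2 (second deployment): the field returns with the new type; web2py
# creates a fresh column and does not try to convert the old data:
db.define_table('product',
    Field('name'),
    Field('price', 'double'))   # re-added with the new type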
What's the best way to use web2py to process a queue of data in a cron
job without race conditions?
My code basically does this:
for task in db(status != 'done').select():
    success = do_long_thing_with(task)  # takes a long time, might fail
    if success:
        task.update_record(status = 'done')
> task.update_record(status = 'processing')
> success = do_long_thing_with(task)  # takes a long time, might fail
> if success:
>     task.update_record(status = 'done')
>
> This way, records that have a status of processing will not get pulled
> in by a concurrent thread.
>
> -Thadeus
>
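Pulling the thread's suggestion together, a sketch of the claim-then-process loop (the db.task table, its field names, and the 'pending' status value are assumptions for illustration; do_long_thing_with is the long job from the post):

row = db(db.task.status == 'pending').select(limitby=(0, 1)).first()
if row:
    # Atomically claim the row: a concurrent worker that selected the
    # same row will see 0 rows updated here and move on.
    claimed = db((db.task.id == row.id) &
                 (db.task.status == 'pending')).update(status='processing')
    db.commit()
    if claimed:
        success = do_long_thing_with(row)
        row.update_record(status='done' if success else 'pending')
        db.commit()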
> if success:
>     task.update_record(status = 'done')
>
> -Thadeus
>
> On Tue, Dec 29, 2009 at 1:52 AM, toomim wrote:
> > task.update_record(status = 'processing')
> while queue is not empty:
>     task = db(~status.belongs(('done','failed'))).select(limitby=(0,1))[0]
>     success = do_long_thing(task)
>     if success:
>         task.update(status = 'done')
>     else:
>         task.update(status = 'failed')
rolled back.
>
> You can insert db.commit() (and db.rollback()) everywhere to break the
> transaction in smaller parts.
>
> Massimo
>
> On Dec 29, 8:54 pm, toomim wrote:
>
>
>
> > Massimo, unfortunately this proposal does not solve the multi-
> > threading problem.
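What "breaking the transaction into smaller parts" can look like in practice (a sketch; db.task and do_long_thing_with are placeholders from the thread):

for task in db(db.task.status != 'done').select():
    try:
        do_long_thing_with(task)
        task.update_record(status='done')
        db.commit()        # commit after each item, so a later failure...
    except Exception:
        db.rollback()      # ...only discards that one item's changes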
There have been a couple posts to this list from people who couldn't
get cron's @reboot to work. I found the following problems:
1) the crondance function has an argument for "startup", setting it to
True tells it to run the @reboot tasks. However, crondance is never
called with startup=True, so the @reboot tasks are never run.
I'm building an app that can have ~50 users connecting at a time. I
want to know if pages load slowly for them so I can make it faster.
How can I find and record the page load times my users experience?
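One way to do this, sketched under assumptions (the page_timing table and the timed decorator are my own names, not web2py built-ins). In a model file:

import time
from functools import wraps

db.define_table('page_timing',
    Field('url'),
    Field('seconds', 'double'),
    Field('logged_at', 'datetime', default=request.now))

def timed(action):
    @wraps(action)
    def wrapper(*a, **kw):
        t0 = time.time()
        try:
            return action(*a, **kw)
        finally:
            db.page_timing.insert(url=request.env.path_info,
                                  seconds=time.time() - t0)
    return wrapper

Then decorate controller actions with @timed and query db.page_timing for slow pages.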
[Mon Jul 19 18:55:50 2010] [error] [client 117.204.99.178] mod_wsgi
(pid=7730): Exception occurred processing WSGI script '/home/toomim/
projects/utility/web2py/wsgihandler.py'.
[Mon Jul 19 18:55:50 2010] [error] [c
On Jul 19, 7:01 pm, Michael Toomim wrote:
> I'm getting errors like these in my apache error logs:
>
> [Mon Jul 19 18:55:20 2010] [error] [client 65
And after a while apache completely freezes.
On Jul 19, 7:05 pm, Michael Toomim wrote:
> This message about "bucket brigade" is also appearing in the apache
> error log:
>
> [Mon Jul 19 19:01:53 2010] [error] [client 183.87.223.111] (9)Bad file
> descriptor: mod_wsgi
However, I've gotten around 3000 "premature end of script" errors, and
only 3 of these IOErrors.
Is there a way to identify what is causing the "Premature end of
script" errors?
On Jul 19, 7:50 pm, Graham Dumpleton wrote:
> On Jul 20, 12:01 pm, Michael Toomim wrote:
>
multiple times.
>
> > Multiple registrations of logging handler could occur if it isn't done
> > in a thread safe way, i.e., so as to avoid multiple threads doing it at
> > the same time.
>
> > Graham
>
> > > Can
> > > you tell us more about the version
lable.
>
> The other difference with above is that I think by setting ServerLimit
> to 30, you have effectively overridden MaxClients down to 30 even
> though set to 256. You have thus in part limited the exact problems
> described in:
>
> http://blog.dscpl.com.au/2009/03/lo
somewhere further up the stack. I don't know
any ways to investigate memory consumption to see where it's being
used.
On Jul 20, 8:23 pm, Graham Dumpleton wrote:
> On Jul 21, 1:03 pm, Michael Toomim wrote:
>
> > THANK YOU ALL SO MUCH for your help!
>
> > I jus
Ah, preventing multithreading is a good idea to try too.
It wasn't a file descriptor problem either, I had
Files used: 1376 out of 75556
On Jul 20, 9:14 pm, Graham Dumpleton wrote:
> On Jul 21, 1:41 pm, Michael Toomim wrote:
>
> > I'm using daemon mode... I didn't
for the most part. I am sure that these errors arise from the fact that
> > web2py uses execfile in many places over and over again, which is a
> > discouraged practice among the python community, and you see why now.
>
> > --
> > Thadeus
>
> > On Tue, Jul 20, 20
I'm running a background database processing task, and I only want to
have ONE task running so I don't have to worry about race conditions.
What is the best way to do this?
I run this task from a cron @reboot. It runs this script:
while True:
    time.sleep(10)
    process_queue()
I'm worried th
> On Feb 18, 5:13 pm, Michael Toomim wrote:
>
>
>
> > I'm running a background database processing task, and I only want to
> > have ONE task running so I don't have to worry about race conditions.
> > What is the best way to do this?
>
> > I run th
Here's what I did:
In cron/crontab:
@reboot toomim *applications/init/cron/background_work.py
(Thanks to achipa and massimo for fixing @reboot recently!)
In cron/background_work.py:
import time, sys, commands, datetime
# Lock the database so we don't accidentally run two background tasks
Oh yeah, and that database lock line only works with postgresql. You
might try file locking if you're not using it.
On Feb 26, 2:05 pm, Michael Toomim wrote:
> I got this to work by using @reboot cron. It works great, and super
> simple. Cross-platform, needs no external daemo
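The lock line itself isn't shown; a sketch of the postgres-only mechanism he likely means (pg_try_advisory_lock is a real postgres function; the key 42 and process_queue are arbitrary placeholders):

import sys, time

locked = db.executesql('select pg_try_advisory_lock(42);')[0][0]
if not locked:
    sys.exit(0)   # another copy of this script already holds the lock
while True:
    process_queue()   # the real queue-processing function goes here
    db.commit()
    time.sleep(10)

The advisory lock is held for the life of the database session, so it releases automatically if the script dies.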
How is a database "Null" entry represented in python when using the
DAL? How can you query for null rows? How can you set them?
Is this the same as None?
And if you create a database row without setting a value for a column,
this is set to Null=None, right?
Thank you!
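For reference, NULL does map to Python None in the DAL; a quick sketch (db.thing and its fields are examples):

rows = db(db.thing.color == None).select()   # WHERE color IS NULL
db(db.thing.id == 1).update(color=None)      # SET color = NULL

And yes, a column left unset at insert time (with no default) is stored as NULL and read back as None.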
I'm so excited! I was about to try moving to rocket myself, because I
need the scalability and it is very useful for my app to run without
apache. THANKS GUYS!
On Mar 11, 8:08 am, mdipierro wrote:
> We moved from cherrypy wsgiserver to Rocket, by Timothy Farrell.
>
> I included an older version,
Hi guys, I've found the following functions to be commonly useful in
practice. Has anyone else written anything similar? Is there a better
idiom here, or better names or interfaces for these?
def get_one(query):
    result = db(query).select()
    assert len(result) <= 1, "GAH get_one called when multiple rows match"
    return result[0] if result else None
Did you do anything special to use apachebench on the cherrypy
server? When I run "ab http://localhost/init/" I get an
"apr_socket_recv: Connection refused (111)" error from apachebench.
If I do the same command when running the latest hg tip of web2py
(with rocket), the benchmark works.
I'm try
I can't create an index on postgresql using executesql. Here's what
happens:
>> db.executesql('create index bq_index on bonus_queue (hitid);')
...but the index does not show up in psql. It does not return
anything. It seems like the command might be blocking psql, because if
I run another index
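A likely fix, as a sketch: executesql runs inside the request's transaction, so psql won't see the index (and may block behind the lock) until the transaction commits.

db.executesql('create index bq_index on bonus_queue (hitid);')
db.commit()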
I'm using web2py+rocket to serve jobs on mechanical turk. The server
probably gets a hit per second or so by workers on mechanical turk
using it.
When I have no users, everything is fast. But in active use, I notice
that web pages often load really slowly in my web browser, but the
httpserver.log
Actually it's handling about 5 requests per second, so there is definitely
some concurrency.
On Mar 26, 10:06 pm, Michael Toomim wrote:
> I'm using web2py+rocket to serve jobs on mechanical turk. The server
> probably gets a hit per second or so by workers on mechanical turk
> using
very low.
> Are your models very complex?
>
> On 27 Mar, 00:06, Michael Toomim wrote:
>
>
>
> > I'm using web2py+rocket to serve jobs on mechanical turk. The server
> > probably gets a hit per second or so by workers on mechanical turk
> > using it.
>
>
> >> all other simple selects and inserts, right?
>
> > no, except for sqlite. sqlite serializes all requests because it locks
> > the db. That could explain the 0.20s if you have lots of queries per
> > request, but not the 54s for the server.
>
> > On Mar 28, 4:22
time.
On Mar 29, 12:10 pm, Timothy Farrell wrote:
> On 3/29/2010 1:39 PM, Michael Toomim wrote:
>
> > I was having slowness problems with cherrypy too! That's why I
> > switched to rocket. So perhaps it's something common to cherrypy and
> > rocket, or
I see, thank you. I want to measure the web server's response time
when I deploy this on turk... Unfortunately the rocket log does not
report time to serve a request. Do you think it is easy to get that
information from rocket? Do you store the start and stop times for
each request? I see start
.
On Apr 4, 4:44 pm, Michael Toomim wrote:
> I see, thank you. I want to measure the web server's response time
> when I deploy this on turk... Unfortunately the rocket log does not
> report time to serve a request. Do you think it is easy to get that
> information from rocket?
mdipierro wrote:
> Some more questions:
>
> how much ram?
> can you check memory usage? A memory leak may cause slowness.
> are you using cron? when cron starts it may spike memory usage.
> are you experiencing the slowness from localhost or from remote
> machines?
>
> On Apr 4
the logging a bit more flexible and
> release a 1.1 in the next few days.
>
> In the meantime, look into the cron thing.
>
> -tim
>
> On 4/4/2010 6:44 PM, Michael Toomim wrote:
>
>
>
> > I see, thank you. I want to measure the web server's response t
and I'm using postgres not sqlite.
On Apr 5, 12:44 pm, Michael Toomim wrote:
> Thanks guys. Each time I run a test, though, it costs me money
> because I'm paying people on mechanical turk. And if it's slow, it
> gives me a bad reputation. So I don't want to ru
Now that I'm on apache, I find that the logging library iceberg wrote
no longer works:
http://groups.google.com/group/web2py/browse_thread/thread/ae37920ce03ba165/6e5d746f6222f70a
I suspect this is because of the stdout/stderr problem with wsgi, but
I thought that would only affect print statemen
Oops I'm sorry, when I upgraded the log file moved and I was looking
at the wrong one. It works!
On Apr 7, 11:44 pm, Michael Toomim wrote:
> Now that I'm on apache, I find that the logging library iceberg wrote
> no longer
> works: http://groups.google.com/group/web2py/b
I wanted the equivalent of sqlite's "create index if not exists" on
postgresql. Here's a solution for web2py. It is useful whenever you
set up a new database, or migrate new tables to an existing database
after a code update and want to ensure the right indexes are set up.
def create_indices_on_po
Great! I am also trying to implement this. Richard and Auden, have
you gotten anything working yet? How about we share solutions?
My current difficulty is figuring out where to create the persistent
background thread.
- If I spawn it in a controller or model file will it be limited to
10 seco
I just figured out why transactions were confusing me and would like
to share.
The default for postgresql makes every statement its own transaction
unless you issue "BEGIN TRANSACTION", in which case nothing commits
until you issue "COMMIT". This is called "autocommit" mode in
postgresql, because
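The same distinction shown with psycopg2 directly (a sketch; the connection string and 'tasks' table are placeholders, and the autocommit attribute needs a reasonably recent psycopg2; web2py's DAL manages all of this for you and commits once per request):

import psycopg2
conn = psycopg2.connect('dbname=mydb')
cur = conn.cursor()

conn.autocommit = True    # postgres-style autocommit: every statement commits
cur.execute("update tasks set status = 'done' where id = 1")  # visible at once

conn.autocommit = False   # now statements accumulate in one transaction
cur.execute("update tasks set status = 'done' where id = 2")
conn.commit()             # nothing is visible to other sessions until here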
This actually looks like a problem in interfacing web2py with the
newest ipython, version .10. I am getting this error too.
It happens while you're typing the python, before you execute it. For
instance, type:
In [1]: for i in range(5):
and you will get the error printed to the screen before y
. So... perhaps I really need something different, or perhaps it
would be good to change the cron semantics.
I will think on this and get back to you.
On Jan 1, 3:44 pm, mdipierro wrote:
> Thank you for the patch toomim. I am uploading it in trunk now.
> User achipa wrote cron.py and I believe @rebo
I'm using hg tip and can't add a column to a table. It figures out
what to do, but doesn't alter the postgresql database. Then any
commands that need to use that column fail.
It was able to create the table in the first place, but cannot add a
column now that the table has been created. How should
programmatically. Is this different for each db? Hopefully it's at
least easy for postgresql.
On Nov 18 2009, 11:46 am, mdipierro wrote:
> Yes, I would.
>
> On Nov 18, 2:03 am, toomim wrote:
>
>
>
> > Thanks! Would you accept a patch that makes the error messages more
> >
So I should just wait?
On Jan 6, 8:26 am, mdipierro wrote:
> You are correct and that is the future. It is different for every db.
> The new DAL will have hooks to implement it.
>
> Massimo
>
> On Jan 6, 3:14 am, Michael Toomim wrote:
>
>
>
> > Here's wha
Nope. On step 2, when I run "python web2py ...", I get no errors at
all. It updates the .table file and sql.log, does not change the DB,
and gives me no errors.
On Jan 6, 7:30 am, mdipierro wrote:
> Did you get any operational error at all?
>
> On Jan 6, 3:00 am, Mi
I'm still having this problem too (previous posts linked below). I
would love to find a solution. I'm not sure how to debug.
VP: Can you provide instructions for reproducing this bug using ab? I
had trouble using ab in the past. I am also on a VPS.
Since my last post (linked below), I have tr
Thanks, I just investigated this, but it looks like it did not fix the
problem.
In 8.4.6 Postgres changed the default "wal_sync_method" to "fdatasync,"
because the old default "open_datasync" failed on ext4. I use ext3 (on
ubuntu 9.10), but I tried changing this option in my postgres database
I find it easiest and cleanest to reformat data structures in python,
using list comprehensions. Javascript sucks for loops. So instead of
jsonifying the raw database output, fix it first:
export_optimizer_records = [{'FreezeTime': r.panel_1hrs.FreezeTime,
'StringID': r.panel_1hrs.StringID, 'Po_
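The example is cut off, but the general shape is something like this (table and field names invented; gluon.serializers.json is web2py's JSON helper):

from gluon.serializers import json

rows = db(db.reading).select()
export = [{'time': str(r.measured_at), 'id': r.string_id, 'watts': r.watts}
          for r in rows]
payload = json(export)   # hand this to the template or return it directly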
I have pool_size=100, and get the error.
On Jan 17, 12:20 pm, Massimo Di Pierro wrote:
> You should really have
>
> db = DAL('postgres://name:password@localhost:5432/db',pool_size=20)
>
> The reason is that client-server databases may set a max to number of
> open connections and it takes time to
The problem for me is that this occurs on a webapp used by mechanical
turk, and it fails when I have hundreds of mechanical turkers using my
app... which only happens when I pay them hundreds of dollars. So
it's hard to reproduce right now without hundreds of dollars.
I am excited to try using VP
1.74.5. I will upgrade when I can reproduce the problem locally.
On Jan 17, 5:13 pm, Massimo Di Pierro wrote:
> How old web2py? We have had bugs in the past that may cause your
> problem.
> You should try upgrade.
>
> Massimo
>
> On Jan 17, 6:58 pm, Michael Toomim wrote:
Yes, this echoes my experiences exactly! Using apache ab benchmark
alone would NOT trigger the error. I had plenty of RAM available.
Seems to be a concurrency bug.
On Jan 19, 10:53 am, VP wrote:
> What is curious is that RAM is still available, with this error.
> Monitoring CPU (using top) shows
Thanks for the great work on the scheduler niphlod!
On Wednesday, August 1, 2012 1:19:48 PM UTC-7, Niphlod wrote:
>
> The consideration behind that is that if your function doesn't return
> anything, you don't need the results. Backward compatibility is quite
> broken in that sense (but schedule
Wow, this is cool!
But I'm hitting a bug in rewrite_on_error:
http://code.google.com/p/web2py/issues/detail?id=964
--
I'm really excited about the new scheduler -X option.
What do -E -b -L do? I don't see them in --help or in the widget.py code.
On Wednesday, August 29, 2012 10:17:48 PM UTC-7, Michael Toomim wrote:
>
> Wow, this is cool!
>
> But I'm hitting a bug in rewrite_on_error:
Sometimes you write things that are just really exciting.
--
That is true!
It makes me think, perhaps it would be worthwhile at some point to pause,
take stock of all the features, which ones might be better than which other
ones, and write up a set of best practices to put into the book.
On Wednesday, August 29, 2012 12:49:29 PM UTC-7, Richard wrote:
>
This makes sense to me too!
The simple way would break backwards compatibility. But this could be
avoided if hash function first checks to see if a schema file exists WITH
the password, and returns that, else returns a hash w/o the password.
On Tuesday, August 28, 2012 10:17:02 AM UTC-7, Chris
Oh, I see, these are scheduler.py options!
-b: sets the heartbeat time
-L: sets the logging level
-E: sets the max empty runs
On Wednesday, August 29, 2012 10:23:29 PM UTC-7, Michael Toomim wrote:
>
> What do -E -b -L do? I don't see them in --help or in the widget.py code.
--
I just discovered this sweet hidden improvement:
>> db(db.mytable.id>1).select()
The Rows object now prints out the number of rows in the repr() function!
That's so useful!
Thanks everyone!
--
This is awesome! Thanks for the example!
On Thursday, August 30, 2012 1:56:09 PM UTC-7, Anthony wrote:
>
>
>
> db.define_table('person', Field('name'), Field('email'))
> db.define_table('dog', Field('name'), Field('owner', 'reference person'))
> db.executesql([SQL code returning person.name and do
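Presumably the full pattern looks something like this (the fields argument to executesql is real; the SQL itself is my reconstruction of the cut-off example):

rows = db.executesql(
    'select person.name, dog.name from person '
    'join dog on dog.owner = person.id;',
    fields=[db.person.name, db.dog.name])
for r in rows:
    print r.person.name, r.dog.name   # behaves like a normal joined select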
If you want a realtime solution, you can trigger an email on each error.
Here's a sketch:
1. Add an error handler to routes.py:
routes_onerror = [
    ('appname/*', '/appname/default/show_error')
]
2. and in default.py:
def show_error():
    Scheduler.insert(function_name='send_self_email'
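A guess at how the sketch continues (send_self_email would be your own task function, scheduler an instance defined in a model; queue_task is the scheduler's documented API, and routes_onerror passes the ticket id in request.vars.ticket):

def show_error():
    scheduler.queue_task('send_self_email',
                         pvars=dict(subject='web2py error ticket',
                                    body=str(request.vars.ticket)))
    return 'Sorry! An error occurred. The admin has been notified.'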
This is cool! how do we use it?
On Sunday, January 9, 2011 5:07:28 PM UTC-8, Dane wrote:
>
> Hey all, thought you might be interested to know that I just patched a
> project HamlPy, a library for converting a pythonic haml-like syntax
> to django templates/html, to work with web2py templates.
>
Anyone have a recipe to make the scheduler run on boot? I'm using ubuntu.
Web2py is run in apache (using the recipe in the book), so I can't just use
the cron @reboot line.
This is the line that needs to be run when my system boots:
python /home/web2py/web2py/web2py.py -K
It seems ubuntu us
I think the best combination of web2py and bottle would be, as you
suggested—importing the web2py DAL into bottle.
The DAL is the most important thing that bottle lacks, and the web2py DAL
is great to plug into other projects. I use it a lot for that.
That said, in my experience, you will quick
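A minimal sketch of standalone DAL use (paths and names assumed; in recent releases the import is "from pydal import DAL, Field" instead):

from gluon.dal import DAL, Field

db = DAL('sqlite://storage.db', folder='databases')
db.define_table('note', Field('body'))
db.note.insert(body='hello from bottle')
db.commit()
print db(db.note).count()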
a different network interface.
On Thursday, May 3, 2012 1:22:25 PM UTC-7, Michael Toomim wrote:
>
> Anyone have a recipe to make the scheduler run on boot? I'm using ubuntu.
> Web2py is run in apache (using the recipe in the book), so I can't just use
> the cron @reboot line.
>
Also:
1. replace "friendbo" with the name of your app.
2. To start/stop the scheduler, use
"sudo start web2py-scheduler"
"sudo stop web2py-scheduler"
"sudo status web2py-scheduler"
...etc.
On Saturday, May 5, 2012 6:47:33 PM UTC-7, Michael Toomim
I need to be able to dispatch to a different controller based on a database
lookup. So a user will go to a url (say '/dispatch'), and we'll look up in
the database some information on that user, choose a new controller and
function, and call that controller and function with its view.
I've almo
>> redirect(URL(c=controller, f=function, vars=request.vars,
>> args=request.args))
>>
>>
>> On Friday, 11 May 2012 10:17:19 UTC+1, Michael Toomim wrote:
>>>
>>> I need to be able to dispatch to a different controller based on a
>>> database lookup. So a user will go
2:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/home/toomim/projects/utility/web2py/gluon/newcron.py", line 234, in run
    shell=self.shell)
  File "/usr/lib/python2.6/subprocess.py", l
Here's a common scenario. I'm looking for the best implementation using the
scheduler.
I want to support a set of background tasks (task1, task2...), where each
task:
• processes a queue of items
• waits a few seconds
It's safe to have task1 and task2 running in parallel, but I cannot have
Thanks for the response, niphlod! Let me explain:
The task can be marked FAILED or EXPIRED if:
• The code in the task throws an exception
• A run of the task exceeds the timeout
• The system clock goes past stop_time
And it will just not plain exist if:
• You have just set up the code
•
To respond to your last two points:
You're right that model files only run on each request... I figured if my
website isn't getting any usage then the tasks don't matter anyway. :P
Yes, I think there are design issues here, but I haven't found a better
solution. I'm very interested in hearing bet
I just got bit by the reserved-word problem:
https://groups.google.com/d/msg/web2py/aSPtD_mGXdM/c7et_2l_54wJ
I am trying to port a postgres database to a friend's mysql database, but
we are stuck because the DAL does not quote identifiers.
This problem has been discussed a fair amount:
https://g
over.
Looks like a bug in the scheduler.
I don't recommend using the scheduler as a task queue to anybody.
On Tuesday, June 12, 2012 10:24:15 PM UTC-7, Michael Toomim wrote:
>
> Here's a common scenario. I'm looking for the best implementation using
> the scheduler.
>
Er, let me rephrase: I don't recommend using the scheduler for *infinitely
looping background tasks*.
On Monday, June 25, 2012 4:54:30 PM UTC-7, Michael Toomim wrote:
>
> This scenario is working out worse and worse.
>
> Now I'm getting tasks stuck in the 'RUNNIN
All, thank you for the excellent discussion!
I should explain why I posted that recommendation. The "vision" of using
the scheduler for background tasks was:
"Woohoo, this scheduler will *automatically handle locks*—so I don't need
to worry about stray background processes running in parallel
_name)
tasks[0].update_record(period=period)
db.commit()
check_daemon('process_launch_queue_task')
check_daemon('refresh_hit_status')
check_daemon('process_bonus_queue')
On Tuesday, June 26, 2012 7:57:25 PM UTC-7, Michael Toomim wrote:
>
>
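check_daemon isn't shown in full; my guess at its shape (scheduler is an instance assumed to be defined in a model; queue_task, repeats=0 for "repeat forever", and retry_failed=-1 are real scheduler options):

def check_daemon(function_name, period=10):
    # re-queue the task if the scheduler has lost track of it
    task = db(db.scheduler_task.function_name == function_name).select().first()
    if not task:
        scheduler.queue_task(function_name, period=period,
                             repeats=0, retry_failed=-1)
        db.commit()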
I'm totally interested in solutions! It's a big problem I need to solve.
The recurring maintenance task does not fix the initialization
problem—because now you need to initialize the recurring maintenance task.
This results in the same race condition. It does fine with the 40,000
records proble
The problem with terminating the processes is:
• sometimes they don't respond to control-c, and need a kill -9
• or sometimes that doesn't work, maybe the os is messed up
• or sometimes the developer might run two instances simultaneously,
forgetting that one was already running
You're righ
(while contributing to fix
> current issues)
>
> BTW: - "responding to ctrl+c" fixed in trunk recently
>      - "os messed up maybe" requires you to check the os; python
>        programs can't be omniscient :D
>      - "messy developers", no
Seems to be biting a few of us.
On Jun 20, 2012, at 1:19 PM, Rene Dohmen wrote:
> I'm having the same problem:
> https://groups.google.com/d/msg/web2py/hCsxVaDLfT4/K6UMbG5p5uAJ
>
>
> On Mon, Jun 18, 2012 at 9:30 AM, Michael Toomim wrote:
> I just got bit by t
This is all a great unearthing of the Mystery of Transactions. Thanks for
the investigation, Doug.
This was difficult for me to learn when I got into web2py as well. Perhaps
we could write up all this knowledge somewhere, now that you're figuring it
out?
Can we have a section on Transactions i
On Wednesday, June 27, 2012 5:02:26 PM UTC-7, ptressel wrote:
>
> This won't solve your installation / setup issue, but I wonder if it would
> help with the overrun and timeout problems... Instead of scheduling a
> periodic task, what about having the task reschedule itself? When it's
> done w
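ptressel's idea as code, sketched under assumptions (function name invented; scheduler is an instance from a model; start_time is a real queue_task parameter):

import datetime

def process_queue_once():
    do_work()   # placeholder for the actual queue processing
    # instead of a periodic task, queue the next run explicitly:
    scheduler.queue_task('process_queue_once',
                         start_time=datetime.datetime.now()
                                    + datetime.timedelta(seconds=10))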
Maybe this should go into the docs somewhere. Maybe the scheduler docstring
next to the upstart script? Maybe post an issue on google code to update the
docs? http://code.google.com/p/web2py/issues/list
On Jun 28, 2012, at 5:41 AM, Tyrone wrote:
> Hi Guys,
>
> Although this script works great
This is a nice solution, and clever, thanks!
The upside (compared to postgres locks, as discussed above) is this works
for any database. The downside is it creates a whole new table.
On Thursday, July 5, 2012 2:49:36 PM UTC-7, nick name wrote:
>
> This might have been solved in this week, but in
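My reading of the table-based lock, as a sketch: a unique insert either succeeds (you hold the lock) or violates the constraint (someone else does). The 'mutex' table name is mine:

db.define_table('mutex', Field('name', unique=True))

def try_lock(name):
    try:
        db.mutex.insert(name=name)
        db.commit()
        return True
    except Exception:   # duplicate key: another process holds the lock
        db.rollback()
        return False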
On Friday, July 6, 2012 6:35:43 PM UTC-7, Massimo Di Pierro wrote:
>
> 2. Remove "Share" link from welcome app
>> I think we agreed to remove "Share" link because it's not used very much.
>>
>
> I think we agreed to remove the link to addtoany. Do you really want to
> remove the share tab at the
> that is default behavior
>
> On Tuesday, July 10, 2012 5:04:43 PM UTC-4, Michael Toomim wrote:
> I just upgraded to the trunk. I'm trying to log into the admin, but there's
> no password entry box.
>
> What's wrong? How can I debug this?
>
>
Ah, so I was wrong, great, thank you!
On Jul 10, 2012, at 3:20 PM, Massimo Di Pierro wrote:
> The normal behavior is, as Dave indicated, that you must be over https or
> from localhost. The change in trunk is that if the condition is false, the login
> form is not even displayed to prevent you fro
I just upgraded from a modified 1.98.2 to 1.99.4 and now I'm getting
an infinite redirect when logging in with OAuth20 and facebook.
I'm having trouble debugging. Can someone help?
What happens:
User goes to /user/login
This calls this code in tools.py:
# we need to pass through
Well, I don't need to debug this anymore. I switched to a different
facebook app, and I'm no longer having the problem.
On Dec 21, 7:55 pm, Michael Toomim wrote:
> I just upgraded from a modified 1.98.2 to 1.99.4 and now I'm getting
> an infinite redirect when loggin
Here's an improved way to create indices in the DAL. Works only with
postgresql and sqlite.
def create_indices(*fields):
    '''
    Creates a set of indices if they do not exist
    Use like:
    create_indices(db.posts.created_at,
                   db.users.first_name,
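The body is cut off; a guess at how such a helper might be completed (the pg_class probe and the naming scheme are my assumptions, not the original post's code; db._dbname and str(field) == 'table.field' are real DAL behaviors):

def create_indices(*fields):
    for f in fields:
        tablename, fieldname = str(f).split('.')
        name = 'ix_%s_%s' % (tablename, fieldname)
        if db._dbname == 'sqlite':
            db.executesql('create index if not exists %s on %s (%s);'
                          % (name, tablename, fieldname))
        elif not db.executesql("select 1 from pg_class where relname = '%s';"
                               % name):
            # postgres (pre-9.5) lacks "if not exists", so probe the catalog
            db.executesql('create index %s on %s (%s);'
                          % (name, tablename, fieldname))
    db.commit()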
After some thought, I'm really liking this design for virtual
fields... what if lazy/virtual fields were declared directly in
db.define_table()? Like so:
db.define_table('item',
    Field('unit_price','double'),
    Field('quantity','integer'),
    VirtualField
I think we need more tools for fixing broken migrations!
When I have something broken, sometimes I go into the sql console,
edit the database manually, and then use these functions to tell
web2py that I've changed the table in sql. (However, I haven't had to
use these for at least a year... maybe
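The functions he mentions aren't shown; web2py's built-in way to resync its migration metadata after hand-editing the database is fake_migrate (real options; the connection string is a placeholder):

db = DAL('postgres://user:pass@localhost/mydb', fake_migrate_all=True)
# or per table:
db.define_table('mytable', Field('name'), fake_migrate=True)

This rewrites the .table files to match the model without issuing any ALTER statements.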