You could queue the tasks in a broker (RabbitMQ, for example) and use a
fixed number of Celery workers to process them and save them to the
database, assuming:
- you don't have 1000 requests per second all the time
- you don't need the data to be stored immediately
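A minimal sketch of that fixed-worker pattern, using Python's stdlib queue and threads as a stand-in for the broker and Celery workers (the task shape, worker count, and queue size here are made-up illustrations, not anything from the thread):

```python
import queue
import threading

task_queue = queue.Queue(maxsize=1000)  # bounded, like a broker queue
results = []  # stand-in for the database

def worker():
    # Each worker pulls tasks and "saves" them; with Celery, this loop
    # is what a worker process runs for you.
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: shut this worker down
            task_queue.task_done()
            break
        results.append(item)  # stand-in for a database insert
        task_queue.task_done()

NUM_WORKERS = 4  # fixed worker count, independent of the request rate
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Stand-in for incoming requests: producers just enqueue and return.
for i in range(100):
    task_queue.put({"request_id": i})

task_queue.join()          # wait until every task has been processed
for _ in threads:          # then tell each worker to stop
    task_queue.put(None)
for t in threads:
    t.join()

print(len(results))  # all 100 tasks processed by 4 workers
```

The point is that a burst of requests only grows the queue, not the number of database writers, so the database sees a steady, bounded load.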
On 1/4/17, Avraham Serour wrote:
Can you elaborate on this scenario? How and why could your database shut down?
I second the suggestion about returning an error and telling the client to
try later, but it should be >= 500, not 400. 4xx codes are for client
errors, 5xx are for server errors. See http://www.restapitutorial.com/httpstatuscodes.ht
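As a sketch of that advice: in practice the usual "try again later" response is 503 Service Unavailable with a Retry-After header. Here `commit` is a hypothetical callable standing in for the database transaction, not anything named in the thread:

```python
def handle_request(commit):
    """Return (status, headers) for a write request.

    `commit` is a hypothetical callable that raises on failure,
    e.g. when the database is unreachable.
    """
    try:
        commit()
        return 201, {}
    except Exception:
        # Server-side failure, so 5xx rather than 4xx. Retry-After
        # tells the client when it is reasonable to retry the request.
        return 503, {"Retry-After": "30"}

def failing_commit():
    raise IOError("database is down")

status, headers = handle_request(failing_commit)
print(status)            # 503
print(headers)           # {'Retry-After': '30'}
print(handle_request(lambda: None)[0])  # 201 when the commit succeeds
```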
Or you could return an HTTP status 400 if the transaction couldn't be
committed, so the client should re-run the request later.
If this is not an option, you should rely on a queue, as you suggested.
Bad things can always happen. So, try your best to avoid I/O problems
if your clients cannot re-make the request.