You're exactly right - I'll probably wind up with two instances: one 
RAM-only for caching, and a persistent one for sessions.
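A minimal sketch of that split, as two separate redis.conf files (the ports, memory limits, and save thresholds here are illustrative, not settings from this thread):

```conf
# cache.conf — RAM-only instance for cached HTML, search results, etc.
port 6379
maxmemory 4gb
maxmemory-policy allkeys-lru   # evict least-recently-used keys when full
save ""                        # no RDB snapshots
appendonly no                  # no AOF either: data lives in RAM only

# sessions.conf — persistent instance for sessions
port 6380
save 900 1                     # snapshot after 15 min if >= 1 key changed
appendonly yes                 # AOF gives finer-grained durability
```

Each file is started with its own `redis-server /path/to/file.conf`, giving two independent processes with independent persistence settings.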

On Friday, 7 June 2019 14:15:35 UTC+1, Lisandro wrote:
>
> I'm not exactly sure how many sessions my app is handling, but these 
> numbers will give you an idea:
>
>  - My websites receive about 500k visits (sessions) in an average day.
>  - The server handles about 2.5 million requests in an average day.
>  - I use RedisSession(session_expiry=36000), that is, sessions handled by 
> Redis expire after 10 hours.
>  - I also use Redis to store in cache the final HTML of public pages for 5 
> minutes.
>  - My Redis instance uses about 12 GB of RAM. 
>  - My Redis instance consumes only about 8% of a single CPU (note that 
> Redis is single-threaded).
>
>
> When you say "I'd want to ensure disk-persistence for them (but not for 
> cached things like search results)", how do you plan to achieve that? I'm 
> no expert, but I think the disk-persistence option in Redis is global. If 
> you want persistence for sessions and not for other cached things, I think 
> you will need two different instances of Redis. 
>
>
> On Friday, 7 June 2019 at 7:09:26 (UTC-3), Tim Nyborg wrote:
>>
>> Thanks for this.  Let me know if you find a resolution to the 'saving to 
>> disk' latency issue.  Redis sessions would be an improvement, but I'd want 
>> to ensure disk-persistence for them (but not for cached things like search 
>> results).  How many sessions are you storing, and how much RAM does it 
>> consume?
>>
>> On Thursday, 6 June 2019 20:33:28 UTC+1, Lisandro wrote:
>>>
>>> If you're going to add Redis, let me add a couple of comments about my 
>>> own experience:
>>>
>>>  - Using Redis to store sessions (not only to cache) was a huge 
>>> improvement in my case. I run public websites, some of them with heavy 
>>> traffic, so my app handles many sessions. I was using the database to 
>>> handle sessions, and when I switched to Redis the performance improvement 
>>> was considerable. 
>>>
>>>  - Do some tests with the "with_lock" argument available in RedisCache 
>>> and RedisSession (from gluon.contrib). In my specific case, 
>>> with_lock=False works better, but of course this depends on each specific 
>>> scenario.
>>>
>>>  - Some advice: choose proper values for the "maxmemory" and 
>>> "maxmemory-policy" options in the Redis configuration. The first sets the 
>>> maximum amount of memory Redis is allowed to use, and "maxmemory-policy" 
>>> lets you choose how Redis should evict keys when it hits that limit: 
>>> https://redis.io/topics/lru-cache. 
>>>
>>>
>>> On Thursday, 6 June 2019 at 12:15:38 (UTC-3), Tim Nyborg wrote:
>>>>
>>>> This is really good to know.  I've a similar architecture to you, and 
>>>> am planning to add redis to the stack soon.  Knowing about issues to be on 
>>>> the lookout for is very helpful.
>>>>
>>>> On Friday, 24 May 2019 16:26:50 UTC+1, Lisandro wrote:
>>>>>
>>>>> I've found the root cause of the issue: the culprit was Redis.
>>>>>
>>>>> This is what was happening: Redis has an option for persistence 
>>>>> <https://redis.io/topics/persistence> which saves the DB to disk at 
>>>>> certain intervals. I had the default Redis configuration, which saves 
>>>>> the DB every 15 minutes if at least 1 key changed, every 5 minutes if 
>>>>> at least 10 keys changed, and every 60 seconds if at least 10,000 keys 
>>>>> changed. My Redis instance was saving the DB to disk every minute, and 
>>>>> the save was taking about 70 seconds. Apparently, during that time, 
>>>>> many requests were hanging. What I did was simply disable the saving 
>>>>> process (I can do that in my case because I don't need persistence). 
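For reference, the default schedule described above corresponds to these redis.conf lines, and disabling snapshots is a one-liner:

```conf
# Default RDB snapshot schedule:
save 900 1       # after 15 min if at least 1 key changed
save 300 10      # after 5 min if at least 10 keys changed
save 60 10000    # after 60 s if at least 10,000 keys changed

# Disable snapshotting entirely:
save ""
```

The same effect can be had at runtime, without a restart, via `redis-cli CONFIG SET save ""`.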
>>>>>
>>>>> I'm not sure why this happens. I know that Redis is single-threaded, 
>>>>> but its documentation states that background saves run in a child 
>>>>> process that Redis forks, so I'm not sure how saving the DB to disk 
>>>>> makes the other Redis operations hang. But this is what was happening, 
>>>>> and I can confirm that, after disabling the DB saving process, my 
>>>>> application's response times have dropped to expected values, with no 
>>>>> more timeouts :)
>>>>>
>>>>> I will continue to investigate this issue with Redis in the proper 
>>>>> forum. I hope this helps anyone facing the same issue.
>>>>>
>>>>> Thanks for the help!
>>>>>
>>>>> On Monday, 13 May 2019 at 13:49:26 (UTC-3), Lisandro wrote:
>>>>>>
>>>>>> After doing a lot of reading about uWSGI, I've discovered that "uWSGI 
>>>>>> cores are not CPU cores" (this was confirmed by the unbit developers 
>>>>>> <https://github.com/unbit/uwsgi/issues/233#issuecomment-16456919>, 
>>>>>> who wrote and maintain uWSGI). This makes me think the issue I'm 
>>>>>> experiencing is due to a misconfiguration of uWSGI. But as I'm a 
>>>>>> developer and not a sysadmin, it's been hard for me to figure out 
>>>>>> exactly which uWSGI options I should tweak. 
>>>>>>
>>>>>> I know this is out of the scope of this group, but I'll post my uWSGI 
>>>>>> app configuration anyway, in case someone still wants to help:
>>>>>>
>>>>>> [uwsgi]
>>>>>> pythonpath = /var/www/medios/
>>>>>> mount = /=wsgihandler:application
>>>>>> master = true
>>>>>> workers = 40
>>>>>> cpu-affinity = 3
>>>>>> lazy-apps = true
>>>>>> harakiri = 60
>>>>>> reload-mercy = 8
>>>>>> max-requests = 4000
>>>>>> no-orphans = true
>>>>>> vacuum = true
>>>>>> buffer-size = 32768
>>>>>> disable-logging = true
>>>>>> ignore-sigpipe = true
>>>>>> ignore-write-errors = true
>>>>>> listen = 65535
>>>>>> disable-write-exception = true
>>>>>>
>>>>>>
>>>>>> As a reminder, this is running on a machine with 16 CPUs.
>>>>>> Maybe I should set *enable-threads*, set the *processes* option, and 
>>>>>> maybe tweak *cpu-affinity*.
>>>>>> My application uses Redis for caching, so I think I can enable 
>>>>>> threads safely. 
>>>>>> What do you think?
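A sketch of the tweaks under consideration, in the same ini format as the configuration above (all values are guesses for a 16-CPU box, not tested settings):

```ini
[uwsgi]
processes = 16          ; one worker per CPU core (illustrative)
enable-threads = true   ; allow application-spawned threads (e.g. the Redis client)
threads = 2             ; per-worker threads, illustrative
cpu-affinity = 1        ; pin each worker to a single core
```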
>>>>>>
>>>>>>
>>>>>> On Thursday, 9 May 2019 at 21:10:57 (UTC-3), Lisandro wrote:
>>>>>>>
>>>>>>> I've checked my app's code once again and I can confirm that it 
>>>>>>> doesn't create threads. It only uses subprocess.call() within 
>>>>>>> functions that are called in the scheduler environment; I understand 
>>>>>>> that's the proper way to do it because those calls don't run in the 
>>>>>>> uwsgi environment.
>>>>>>>
>>>>>>> On the other hand, I can't disable the master process. I use the 
>>>>>>> "lazy-apps" and "touch-chain-reload" options of uwsgi in order to 
>>>>>>> achieve graceful reloading, because according to the documentation on 
>>>>>>> graceful reloading 
>>>>>>> <https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html>
>>>>>>> :
>>>>>>> *"All of the described techniques assume a modern (>= 1.4) uWSGI 
>>>>>>> release with the master process enabled."*
>>>>>>>
>>>>>>> Graceful reloading allows me to update my app's code and reload 
>>>>>>> uwsgi workers smoothly, without downtime or errors. What can I do if 
>>>>>>> I can't disable the master process?
>>>>>>>
>>>>>>> You mentioned the original problem seems to be a locking problem due 
>>>>>>> to threads. If my app doesn't open threads, where else could the 
>>>>>>> cause of the issue be? 
>>>>>>>
>>>>>>> The weirdest thing for me is that the timeouts are always on core 0. 
>>>>>>> I mean, uwsgi runs between 30 and 45 workers over 16 cores; isn't it 
>>>>>>> too much of a coincidence that the requests that hang correspond to a 
>>>>>>> few workers always assigned to core 0?
>>>>>>>
>>>>>>>
>>>>>>> On Thursday, 9 May 2019 at 17:10:19 (UTC-3), Leonel Câmara 
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Yes I meant stuff exactly like that.
>>>>>>>>
>>>>>>>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/web2py/eba6b284-0b24-41ee-98c8-09dd96450a84%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.