Hello,


My name is Lior Brown. I am a research assistant at Ariel University and a 
contributor to Squid-cache.


I am currently working on a research project involving request dispatching and 
peer selection within the Squid-cache core.

While studying the source code, I have encountered several mechanisms and 
logic blocks that deliberately prevent or queue parallel HTTP requests for 
the same URL, effectively serializing them.

I would like to understand the fundamental design rationale behind these 
restrictions. Specifically:

1. Are these blocks in place due to specific architectural constraints (such 
   as memory management or Store Entry state transitions)?

2. Are there known side effects or risks I should be aware of if I attempt 
   to implement parallel requesting for identical objects in a research 
   environment?

I want to ensure I fully understand the system's design philosophy before 
proceeding with any modifications.


Thank you for your time and insights.

Best regards,

Lior Brown



_______________________________________________
squid-dev mailing list
[email protected]
https://lists.squid-cache.org/listinfo/squid-dev
