Yes! I can load the URL
(http://www.aaasports.co.kr/front/productlist.php?code=0010&brandcode=2&listnum=30&sort=&block=0&gotopage=1)
without issue. I think the time to load this URL is longer than for any other
URL, but not by much.
I searched for the Postman tool you suggested.
Thanks!
I had better use an async task worker! I need to study how the worker works...
The error happens with a specific URL.
On Tuesday, January 9, 2018 at 8:07:15 PM UTC+9, Jason wrote:
>
> You really should use an async task worker like Celery for this, to get
> the scraping outside of Django's request-response cycle.
Yes, I use urlopen to crawl some sites,
like this:

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen(page)
bs0bj = BeautifulSoup(html, "html.parser")

and the error is raised in "/usr/lib/python3.5/urllib/request.py", in do_open.
When I open that site, the loading time is longer than for the others; the other
sites I crawl don't time out.
I use PythonAnywhere. At this point, "urlopen error [Errno 110] Connection
timed out" occurs. So I set the cache TIMEOUT to None in Django's settings.py.
However, error 110 still occurs. Do I need to change the timeout value in
urllib/request.py? I would really appreciate your help in this matter.
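For what it's worth, Django's CACHES TIMEOUT controls cache-entry expiry, not socket timeouts, so it would not affect this error. Rather than editing /usr/lib/python3.5/urllib/request.py, urlopen accepts a per-call timeout argument. A small sketch; the `fetch` helper and the 60-second value are my own illustration, not from the thread:

```python
# Sketch: pass a per-call timeout to urlopen instead of patching
# urllib/request.py. The helper name and 60-second default are
# illustrative assumptions.
from urllib.request import urlopen
from urllib.error import URLError


def fetch(page, timeout=60):
    try:
        # timeout is in seconds and applies to the blocking socket
        # operations (connect, read) behind this call
        return urlopen(page, timeout=timeout).read()
    except URLError as exc:
        # "[Errno 110] Connection timed out" surfaces here as a URLError
        print("fetch failed:", exc)
        return None
```

If the remote site genuinely takes that long, raising the timeout only delays the failure; handing the work to a task worker, as suggested above in the thread, is the more robust fix.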