python proxy checker: changing to a threaded version
Hello all,

I have a Python proxy checker, and to speed up the checks I decided to change it to a multithreaded version. The threading module is new to me; I have tried several times to convert the script and read a lot of material, but it is not so easy for a novice Python programmer. If anyone can help, I would really appreciate it. Thanks in advance!

    import urllib2, socket

    socket.setdefaulttimeout(180)

    # read the list of proxy IPs into proxyList, one address per line
    # (plain .read() would make the loop below iterate over characters)
    proxyList = open('listproxy.txt').read().splitlines()

    def is_bad_proxy(pip):
        try:
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request('http://www.yahoo.com')  # check whether the proxy is alive
            sock = urllib2.urlopen(req)
        except urllib2.HTTPError, e:
            print 'Error code: ', e.code
            return e.code
        except Exception, detail:
            print "ERROR:", detail
            return 1
        return 0

    for item in proxyList:
        if is_bad_proxy(item):
            print "Bad Proxy", item
        else:
            print item, "is working"

--
View this message in context: http://old.nabble.com/python-proxy-checker-%2Cchange-to-threaded-version-tp26672548p26672548.html
Sent from the Python - python-list mailing list archive at Nabble.com.
--
http://mail.python.org/mailman/listinfo/python-list
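One way to thread a checker like this is a worker pool. The helper below is only a sketch in modern Python (urllib2 became urllib.request in Python 3, so the check function would need the same renaming); `check_all` and `max_workers` are hypothetical names, not part of the original script:

```python
from concurrent.futures import ThreadPoolExecutor

def check_all(items, check, max_workers=10):
    """Run `check` over all items concurrently; return {item: result}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip pairs each item
        # with its own result
        return dict(zip(items, pool.map(check, items)))

# With the proxy checker this would be called roughly as:
#   results = check_all(proxyList, is_bad_proxy)
#   for item, bad in results.items():
#       print("Bad Proxy" if bad else "working", item)
```

Because the work is network-bound, threads help here despite the GIL: each thread spends most of its time waiting on sockets.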
Re: python proxy checker: changing to a threaded version
r0g wrote:
> elca wrote:
>> Hello all,
>>
>> I have a Python proxy checker, and to speed up the checks I decided to
>> change it to a multithreaded version. The threading module is new to
>> me; I have tried several times to convert the script and read a lot of
>> material, but it is not so easy for a novice Python programmer. If
>> anyone can help, I would really appreciate it. Thanks in advance!
>>
>> [proxy checker script snipped]
>
> The trick to threads is to create a subclass of threading.Thread,
> define the 'run' method and call the 'start()' method. I find
> threading quite generally useful, so I created this simple generic
> function for running things in threads...
>     def run_in_thread(func, func_args=[], callback=None,
>                       callback_args=[]):
>         import threading
>
>         class MyThread(threading.Thread):
>             def run(self):
>                 # Call the function
>                 if func_args:
>                     result = func(*func_args)
>                 else:
>                     result = func()
>
>                 # Call the callback with the result
>                 if callback:
>                     if callback_args:
>                         callback(result, *callback_args)
>                     else:
>                         callback(result)
>
>         MyThread().start()
>
> You need to pass it a test function (+args) and, if you want to get a
> result back from each thread, you also need to provide a callback
> function (+args). The first parameter of the callback function
> receives the result of the test function, so your callback would look
> something like this...
>
>     def cb(result, item):
>         if result:
>             print "Bad Proxy", item
>         else:
>             print item, "is working"
>
> And your calling loop would be something like this...
>
>     for item in proxyList:
>         run_in_thread(is_bad_proxy, func_args=[item], callback=cb,
>                       callback_args=[item])
>
> Also, you might want to limit the number of concurrent threads so as
> not to overload your system; one quick and dirty way to do this is...
>
>     import time
>     if threading.activeCount() > 9: time.sleep(1)
>
> Note, this is far from an exact method, but it works well enough for
> one-off scripting use.
>
> Hope this helps.
>
> Suggestions from hardcore Pythonistas on how to make my run_in_thread
> function more elegant are quite welcome also :)
>
> Roger Heathcote

Hello :)

Thanks for your reply! I will test it now and will comment soon. Thanks again.

--
View this message in context: http://old.nabble.com/python-proxy-checker-%2Cchange-to-threaded-version-tp26672548p26673953.html
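The activeCount()/sleep polling mentioned above can be replaced with a Semaphore, which blocks the caller until a worker slot is free. This is only a sketch; `run_bounded`, `MAX_WORKERS`, and the shared `results` list are illustrative names, not from the original post:

```python
import threading

MAX_WORKERS = 10
slots = threading.Semaphore(MAX_WORKERS)   # limits concurrent workers
results = []
results_lock = threading.Lock()            # guards the shared list

def _worker(func, args):
    try:
        r = func(*args)
        with results_lock:
            results.append(r)
    finally:
        slots.release()                    # free the slot even on error

def run_bounded(func, args):
    """Start func(*args) in a thread, blocking while MAX_WORKERS run."""
    slots.acquire()
    t = threading.Thread(target=_worker, args=(func, args))
    t.start()
    return t
```

Unlike the sleep-poll trick, this never overshoots the limit and wakes up as soon as a slot opens.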
python mechanize proxy support question
Hello all, Happy New Year!

I have a question about Python mechanize's proxy support. I am writing a web client script, and I would like to add a proxy-support function to it. For example, if I have a script like the following, how can I add proxy support to it? I looked for references, but didn't find a good hint on Google.

    params = urllib.urlencode({'id': id, 'passwd': pw})
    rq = mechanize.Request("http://www.example.com", params)
    rs = mechanize.urlopen(rq)

Whenever I open the 'www.example.com' website, I would like the request to go through a proxy. Thanks in advance!

--
View this message in context: http://old.nabble.com/python-mechanize-proxy-support-question-tp27009696p27009696.html
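mechanize mirrors the urllib2 handler API, so a proxy can be installed on the opener. The sketch below uses the Python 3 urllib.request equivalent (the proxy address is a placeholder); with mechanize, `mechanize.ProxyHandler` and `mechanize.build_opener` are the analogous calls:

```python
import urllib.request

# Placeholder proxy address; swap in a real one.
proxy_handler = urllib.request.ProxyHandler({'http': 'http://1.1.1.1:8080'})
opener = urllib.request.build_opener(proxy_handler)
urllib.request.install_opener(opener)   # urlopen() now routes through the proxy
```

After install_opener, every module-level urlopen() call in the process uses this opener, which matches the "whenever I open the site" requirement.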
Re: python mechanize proxy support question
alex23 wrote:
> On Jan 4, 5:42 pm, elca wrote:
>> how can I add proxy support to my mechanize script? I looked for
>> references, but didn't find a good hint on Google.
>
> There are examples on using proxies with mechanize on the module's
> home page: http://wwwsearch.sourceforge.net/mechanize/

Hi. Those examples only cover the mechanize.Browser module; I am actually looking for a way to do it with the mechanize.urlopen method. Thanks.

--
View this message in context: http://old.nabble.com/python-mechanize-proxy-support-question-tp27009696p27011661.html
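For a single request rather than a Browser, the Request object itself can be pointed at a proxy; mechanize inherits this from urllib2. A sketch using the Python 3 urllib.request equivalent, with a placeholder proxy address:

```python
import urllib.request

req = urllib.request.Request('http://www.example.com')
# Route just this one request through the proxy.
req.set_proxy('1.1.1.1:8080', 'http')
# urllib.request.urlopen(req) would now connect via 1.1.1.1:8080
```

This avoids touching global opener state, which suits a urlopen-style flow.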
python mechanize.browser proxy handling question
Hello all,

I ran into a problem while using mechanize.Browser() with proxy handling. I have a working snippet using mechanize.urlopen, but I don't know how to implement the same thing with the mechanize.Browser module. Can anyone show me a sample? If anyone can help, much appreciated. Thanks!

    proxy_handler = mechanize.ProxyHandler({"http": "http://1.1.1.1:8080"})
    opener = mechanize.build_opener(proxy_handler)

--
View this message in context: http://old.nabble.com/python-mechanize.browser-proxy-handdling-question-tp27172870p27172870.html
Re: using cx_freeze on linux
Thanks, Diez.

Paul

Diez B. Roggisch-2 wrote:
> james27 schrieb:
>> Hi, I'm new to cx_freeze. I searched for a long time but couldn't find
>> a solution. I want to make an exe file which can run on Windows. Is it
>> possible to build a Windows exe with cx_freeze on Linux? If anyone can
>> help, much appreciated.
>
> No, that's what py2exe *on windows* is for.
>
> Diez

--
View this message in context: http://www.nabble.com/using-cx_freeze-on-linux-tp25936152p25938094.html
form with 2 iframes: problem entering text into the other iframe
Hello,

I have a form which is split across iframes. The subject field is no problem, but the content field comes from another iframe source, so I can't input text into the content field. I am using the PAMIE and win32com modules. I have to put text into 'contents.contentsValue', but I have had no luck. Can anyone help me? If so, I'd really appreciate it. Some of the HTML source code follows. Thanks in advance.

Paul

--
View this message in context: http://www.nabble.com/form-have-2-iframe.problem-to-input-text-in-other-iframe-tp25989646p25989646.html
PAMIE and beautifulsoup problem
Hello,

I am currently writing a web scraping script, and I chose PAMIE for it. I am new to Python and programming, so I thought PAMIE would be helpful for scripting against win32 Python. While writing the script I ran into two problems.

First, I want BeautifulSoup and PAMIE to work together in my script. I googled and found only one hint; the following is the script I found, but it does not work for me. I am using the PAMIE 3 version, and even when I switched to PAMIE 2b I couldn't make it work.

    from BeautifulSoup import BeautifulSoup
    import cPAMIE
    url = 'http://www.cnn.com'
    ie = cPAMIE.PAMIE(url)
    bs = BeautifulSoup(ie.pageText())

And the following is my script. How can I make it work?

    from BeautifulSoup import BeautifulSoup
    from PAM30 import PAMIE
    url = 'http://www.cnn.com'
    ie = PAMIE(url)
    bs = BeautifulSoup(ie.pageText())

My second problem: while writing the script, I think I sometimes need a normal IE interface. Is it possible to switch PAMIE's IE interface to a plain IE interface (InternetExplorer.Application)? I don't want to open a new IE window to work with the normal interface; I want to keep working with the current PAMIE IE window.

Sorry for my bad English.
Paul

--
View this message in context: http://www.nabble.com/PAMIE-and-beautifulsoup-problem-tp26021305p26021305.html
how can i use lxml with win32com?
Hello,

If anyone knows, please help me! I really want to know; I searched Google for a long time but couldn't find a clear solution, partly because of my lack of Python knowledge. I want to use the IE.Navigate function together with BeautifulSoup or lxml. If anyone knows about this, or has a sample, please help me! Thanks in advance.

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26044339.html
Re: PAMIE and beautifulsoup problem
Hello! I'm very sorry for the late reply. The following is the script I executed:

    from BeautifulSoup import BeautifulSoup
    from PAM30 import PAMIE
    url = 'http://www.cnn.com'
    ie = PAMIE(url)
    bs = BeautifulSoup(ie.pageText())

And I got the following error in Wing IDE. I can guess it's because the current version of PAM30 doesn't have the attribute 'pageText', but I couldn't find the equivalent attribute in the PAM30 module.

    AttributeError: PAMIE instance has no attribute 'pageText'
      File "C:\test12.py", line 7, in <module>
        bs = BeautifulSoup(ie.pageText())

My other question: while using PAMIE, is it possible to switch to a normal Dispatch("InternetExplorer.Application")? I mean, without opening another Internet Explorer window, reusing the current PAMIE session. Thanks in advance!

Gabriel Genellina-7 wrote:
> En Fri, 23 Oct 2009 03:03:56 -0300, elca escribió:
>> the following script is what I found on Google, but it does not work
>> for me. I am using the PAMIE 3 version, and even when I switched to
>> PAMIE 2b I couldn't make it work.
>
> You'll have to provide more details. *What* happened? You got an
> exception? Please post the complete exception traceback.
>
>> from BeautifulSoup import BeautifulSoup
>> import cPAMIE
>> url = 'http://www.cnn.com'
>> ie = cPAMIE.PAMIE(url)
>> bs = BeautifulSoup(ie.pageText())
>
> Also, don't re-type the code. Copy and paste it, directly from the
> program that failed.
>
> Gabriel Genellina

--
View this message in context: http://www.nabble.com/PAMIE-and-beautifulsoup-problem-tp26021305p26044579.html
Re: how can i use lxml with win32com?
Hello,

I'm very sorry. First, my source is a website which consists mainly of HTML, and I want to build a web scraper. I found a script on the internet which is supposed to make BeautifulSoup and PAMIE work together, but when I run it, this error happens:

    AttributeError: PAMIE instance has no attribute 'pageText'
      File "C:\test12.py", line 7, in <module>
        bs = BeautifulSoup(ie.pageText())

And the following is the original source as I found it on the internet:

    from BeautifulSoup import BeautifulSoup
    from PAM30 import PAMIE
    url = 'http://www.cnn.com'
    ie = PAMIE(url)
    bs = BeautifulSoup(ie.pageText())

If possible, I really want to make PAMIE work together with BeautifulSoup or lxml. Sorry for my bad English. Thanks in advance.

Stefan Behnel-3 wrote:
> Hi,
>
> elca, 25.10.2009 02:35:
>> i want to use IE.navigate function with beautifulsoup or lxml..
>> [rest snipped]
>
> You wrote a message with nine lines, only one of which gives a tiny
> hint on what you actually want to do. What about providing an
> explanation of what you want to achieve instead? Try to answer
> questions like: Where does your data come from? Is it XML or HTML?
> What do you want to do with it?
>
> This might help: http://www.catb.org/~esr/faqs/smart-questions.html
>
> Stefan

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045617.html
Re: how can i use lxml with win32com?
Hello,

Yes, there is a reason why I have to insist on the Internet Explorer interface: because of JavaScript, I'm trying to use PAMIE. I tried some other solutions, such as urlopen or mechanize, but with those it is hard to handle JavaScript. Can you show me a sample? :) For example, extracting the 'CNN Shop' and 'Site map' text at the bottom of the CNN website page by using PAMIE. Thanks for your help.

motoom wrote:
> On 25 Oct 2009, at 07:45 , elca wrote:
>> i want to make web scraper. if possible i really want to make it work
>> together with beautifulsoup or lxml with PAMIE.
>
> Scraping information from webpages falls apart in two tasks:
>
> 1. Getting the HTML data
> 2. Extracting information from the HTML data
>
> It looks like you want to use Internet Explorer for getting the HTML
> data; is there any reason you can't use a simpler approach like using
> urllib2.urlopen()?
>
> Once you have the HTML data, you could feed it into BeautifulSoup or
> lxml.
>
> Mixing up 1 and 2 into a single statement created some confusion for
> you, I think.
>
> Greetings,

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045673.html
Re: how can i use lxml with win32com?
Hello,

On the www.cnn.com main page, for example, if you view the HTML source, you can find a line like this:

    <a href="http://www.turnerstoreonline.com/">CNN Shop</a>

For example, I want to extract the 'CNN Shop' text from the HTML source, and I want to add such a function to the following script:

    from BeautifulSoup import BeautifulSoup
    from PAM30 import PAMIE
    from time import sleep

    url = 'http://www.cnn.com'
    ie = PAMIE(url)
    sleep(10)
    bs = BeautifulSoup(ie.getTextArea())
    # from here I want to add a text-extraction function
    # using PAMIE with lxml or BeautifulSoup

Thanks for your help.

motoom wrote:
> On 25 Oct 2009, at 08:06 , elca wrote:
>> because of javascript im trying to insist use PAMIE.
>
> I see, your problem is not with lxml or BeautifulSoup, but getting the
> raw data in the first place.
>
>> i want to extract some text in CNN website with 'CNN Shop' 'Site map'
>> in bottom of CNN website page
>
> What text? Can you give an example? I'd like to be able to reproduce
> it manually in the webbrowser so I get a clear idea what exactly
> you're trying to achieve.
>
> Greetings,

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045766.html
Re: how can i use lxml with win32com?
Hello,

I'm very sorry for my English. Yes, I want to extract the text 'CNN Shop' and the linked page 'http://www.turnerstoreonline.com'. Thanks a lot!

motoom wrote:
> On 25 Oct 2009, at 08:33 , elca wrote:
>> www.cnn.com in main website page. for example, if you see
>> www.cnn.com's html source, maybe you can find such line of html
>> source: http://www.turnerstoreonline.com/ CNN Shop
>> and for example i want to extract 'CNN Shop' text in html source.
>
> So, if I understand you correctly, you want your program to do the
> following:
>
> 1. Retrieve the http://cnn.com webpage
> 2. Look for a link identified by the text "CNN Shop"
> 3. Extract the URL for that link.
>
> The result would be http://www.turnerstoreonline.com
>
> Is that what you want?
>
> Greetings,

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045811.html
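The three steps described above can also be done with nothing but the standard library's html.parser, for cases where BeautifulSoup isn't available. A minimal sketch; `LinkFinder` is an illustrative name, and the HTML string stands in for the fetched page:

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect (text, href) pairs for every <a> tag that has an href."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Stand-in for the page fetched in step 1:
html = '<a href="http://www.turnerstoreonline.com/">CNN Shop</a>'
finder = LinkFinder()
finder.feed(html)
```

Filtering `finder.links` for the text "CNN Shop" then yields the wanted URL (steps 2 and 3).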
Re: how can i use lxml with win32com?
Hello,

Thanks for your reply. Actually, the website I want to parse is in a different language; I quoted a common English website to make it easier to understand. :) By the way, is it possible to make PAMIE and BeautifulSoup work together? Thanks a lot.

motoom wrote:
> elca wrote:
>> yes i want to extract this text 'CNN Shop' and linked page
>> 'http://www.turnerstoreonline.com'.
>
> Well then. First, we'll get the page using urllib2:
>
>     doc = urllib2.urlopen("http://www.cnn.com")
>
> Then we'll feed it into the HTML parser:
>
>     soup = BeautifulSoup(doc)
>
> Next, we'll look at all the links in the page:
>
>     for a in soup.findAll("a"):
>
> and when a link has the text 'CNN Shop', we have a hit, and print the
> URL:
>
>         if a.renderContents() == "CNN Shop":
>             print a["href"]
>
> The complete program is thus:
>
>     import urllib2
>     from BeautifulSoup import BeautifulSoup
>
>     doc = urllib2.urlopen("http://www.cnn.com")
>     soup = BeautifulSoup(doc)
>     for a in soup.findAll("a"):
>         if a.renderContents() == "CNN Shop":
>             print a["href"]
>
> The example above can be condensed because BeautifulSoup's find
> function can also look for texts:
>
>     print soup.find("a", text="CNN Shop")
>
> and since that's a navigable string, we can ascend to its parent and
> display the href attribute:
>
>     print soup.find("a", text="CNN Shop").findParent()["href"]
>
> So eventually the whole program could be collapsed into one line:
>
>     print BeautifulSoup(urllib2.urlopen("http://www.cnn.com")
>         ).find("a", text="CNN Shop").findParent()["href"]
>
> ...but I think this is very ugly!
>
>> im very sorry my english.
>
> Your English is quite understandable. The hard part is figuring out
> what exactly you wanted to achieve ;-)
>
> I have a question too. Why did you think JavaScript was necessary to
> arrive at this result?
>
> Greetings,

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045979.html
Re: how can i use lxml with win32com?
Hello,

Actually, what I want is this: if you run my script, you can reach this page:

    http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=korea+times&x=0&y=0

That is a Korean portal site, where I searched using the keyword 'korea times'. I want to scrape the results as text into a file named 'blogscrap_save.txt'. If you run this search, you can see the following article:

    "Yesan County: How do you like them apples? 코리아헤럴드 | carp fishing at
    the Yedang Reservoir - Korea`s biggest - taking a nice stroll...
    During the curator`s recitation of Yun`s life and times as a
    resistance and freedom fighter, he would emphsize random ..."

and also the following article, and so on:

    "10,000 Nepalese Diaspora Emerging in Korea 코리아타임스 세계 | 2009.10.23
    (금) 오후 9:31 Although the Nepalese community in Korea is worker
    dominated, there are... yoga is popular among Nepalese. These
    festivals are the times when expatriate Nepalese feel nostalgic for
    their..."

So the actual process is: first, search with a keyword, then save the resulting articles as plain text only. I attached the script I am currently writing, but it is not very good and doesn't work well. The extraction part especially is really hard for a novice like me. :) Thanks in advance.

http://www.nabble.com/file/p26046215/untitled-1.py untitled-1.py

motoom wrote:
> elca wrote:
>> actually what i want to parse website is some different language site.
>
> A different website? What website? What text? Please show your actual
> use case, instead of smokescreens.
>
>> so i was quote some common english website for easy understand. :)
>
> And, did you learn something from it? Were you able to apply the
> technique to the other website?
>
>> by the way, is it possible to use with PAMIE and beautifulsoup work
>> together?
>
> If you define 'working together' as 'PAMIE produces an HTML text and
> BeautifulSoup parses it', then maybe yes.
>
> Greetings,
>
> --
> "The ability of the OSS process to collect and harness
> the collective IQ of thousands of individuals across
> the Internet is simply amazing." - Vinod Valloppillil
> http://www.catb.org/~esr/halloween/halloween4.html

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26046215.html
Re: how can i use lxml with win32com?
Hi,

Thanks a lot. Studying alone is a tough thing. :) How can I improve my skill...

paul kölle wrote:
> elca schrieb:
>> Hello,
> Hi,
>
>> following is script source which can make beautifulsoup and PAMIE
>> work together. but if i run this script, this error happens:
>>
>> AttributeError: PAMIE instance has no attribute 'pageText'
>>   File "C:\test12.py", line 7, in <module>
>>     bs = BeautifulSoup(ie.pageText())
>
> You could execute the script line by line in the python console, then
> after the line "ie = PAMIE(url)" look at the "ie" object with "dir(ie)"
> to check if it really looks like a healthy instance. ...got bored, just
> tried it -- looks like pageText() has been renamed to getPageText().
> Try:
>
>     text = PAMIE('http://www.cnn.com').getPageText()
>
> cheers
> Paul
>
>> and following is the original source as i found it on the internet:
>>
>> from BeautifulSoup import BeautifulSoup
>> from PAM30 import PAMIE
>> url = 'http://www.cnn.com'
>> ie = PAMIE(url)
>> bs = BeautifulSoup(ie.pageText())
>>
>> if possible i really want to make it work together with beautifulsoup
>> or lxml with PAMIE. sorry my bad english. thanks in advance.
>
> [earlier quoted messages snipped]

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26046638.html
Re: how can i use lxml with win32com?
paul kölle wrote:
> elca schrieb:
>> Hi, thanks a lot. studying alone is tough thing :) how can i improve
>> my skill...
>
> 1. Stop top-posting.
> 2. Read documentation
> 3. Use the interactive prompt
>
> cheers
> Paul
>
> [earlier quoted messages snipped]

Hello,

I'm sorry; I'm also not familiar with newsgroups. So is this position bottom-posting? If I'm wrong, correct me. Thanks. In addition, I was testing this just before you sent your message:

    text = PAMIE('http://www.naver.com').getPageText()

I have a question: how can I keep only one window open, instead of opening several windows? The following is my scenario: after opening www.cnn.com, I want to go to http://www.cnn.com/2009/US/10/24/teen.jane.doe/index.html while keeping only one window.

    text = PAMIE('http://www.cnn.com').getPageText()
    sleep(5)
    text = PAMIE('http://www.cnn.com/2009/US/10/24/teen.jane.doe/index.html')

Thanks in advance :)

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26046897.html
Re: how can i use lxml with win32com?
motoom wrote:
> elca wrote:
>> http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=korea+times&x=0&y=0
>> that is korea portal site and i was search keyword using 'korea times'
>> and i want to scrap resulted to text name with 'blogscrap_save.txt'
>
> Aha, now we're getting somewhere.
>
> Getting and parsing that page is no problem, and doesn't need
> JavaScript or Internet Explorer.
>
>     import urllib2
>     import BeautifulSoup
>
>     doc = urllib2.urlopen("http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=korea+times&x=0&y=0")
>     soup = BeautifulSoup.BeautifulSoup(doc)
>
> By analyzing the structure of that page you can see that the articles
> are presented in an unordered list which has class "type01". The
> interesting bit in each list item is encapsulated in a <dd> tag with
> class "sh_news_passage". So, to parse the articles:
>
>     ul = soup.find("ul", "type01")
>     for li in ul.findAll("li"):
>         dd = li.find("dd", "sh_news_passage")
>         print dd.renderContents()
>         print
>
> This example prints them, but you could also save them to a file (or a
> database, whatever).
>
> Greetings,

Hi,

Thanks for your help. This thread is getting too long, so I will open another new post. Thanks a lot.

Paul

--
View this message in context: http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26055191.html
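To save the extracted passages to 'blogscrap_save.txt' instead of printing them, a small helper can replace the print calls in the loop above. A sketch; `save_passages` is an illustrative name, and it expects the already-extracted passage strings:

```python
def save_passages(passages, path="blogscrap_save.txt"):
    """Write each extracted passage to `path`, separated by blank lines."""
    with open(path, "w", encoding="utf-8") as f:
        for p in passages:
            f.write(p.strip() + "\n\n")
    return path
```

In the scraper, `passages` would be the strings collected from each list item (the renderContents() values in the loop above); encoding="utf-8" matters here because the Naver results contain Korean text.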
handling PAMIE and lxml
Hello,

I opened another new thread, since the old thread was too long. First of all, I really appreciate the help from the many people in this newsgroup. I am writing a web scraper now, but I still have a problem with my script. I uploaded it here, so anybody can modify it:

    http://elca.pastebin.com/m52e7d8e0

The main problem: if you look at lines 74-75 of my source, you can see this line:

    thepage = urllib.urlopen(theurl).read()

I want to change this line to work with PAMIE instead of urllib. If anyone can help, I'd really appreciate it. Thanks in advance.

Paul

--
View this message in context: http://www.nabble.com/handling-PAMIE-and-lxml-tp26055230p26055230.html
Re: handling PAMIE and lxml
Simon Forman-2 wrote:
> On Mon, Oct 26, 2009 at 3:05 AM, elca wrote:
>> Hello, I opened another new thread; the old one was too long.
>
> Too long for what?
>
>> First of all, I really appreciate the help many people in this
>> newsgroup have given me. I'm writing a web scraper now, but I still
>> have a problem with my script source:
>> http://elca.pastebin.com/m52e7d8e0
>> I uploaded my script there so anybody can modify it.
>> The main problem is on lines 74-75 of my source, where you can see
>> this line: "thepage = urllib.urlopen(theurl).read()".
>> I want to change this line to work with Pamie, not urllib.
>
> Why?
>
>> If anyone can help me, I'd really appreciate it. Thanks in advance.
>
> I just took a look at your code. I don't want to be mean, but your
> code is insane.
>
> 1.) You import HTMLParser and fromstring but don't use them.
>
> 2.) The page_check() function is useless. All it does is sleep for
> len("www.naver.com") seconds. Why are you iterating through the
> characters in that string anyway?
>
> 3.) On line 21 you have a pointless pass statement.
>
> 4.) The whole "if x:" statement on line 19 is pointless because both
> branches do exactly the same thing.
>
> 5.) The variables start_line and end_line you're using are strings.
> This is not PHP. Strings are not automatically converted to integers.
>
> 6.) Because you never change end_line anywhere, and because you don't
> use break anywhere in the loop body, the while loop on line 39 will
> never end.
>
> 7.) The while loop on line 39 defines the getit() function (over and
> over again) but never calls it.
>
> 8.) On line 52 you define a list called "results" and then never use
> it anywhere.
>
> 9.) In getit() the default value for howmany is 0, but on line 68 you
> subtract 1 from it, and on the next line you return if not howmany.
> This means if you ever forget to call getit() with a value of howmany
> above zero, that if statement will never return.
>
> 10.) In the for loop on line 54, in the while loop on line 56, you
> recursively call getit() on line 76. wtf? I suspect lines 73-76 are
> at the wrong indentation level.
>
> 11.) On line 79 you have a "bare" except, which just calls exit(1) on
> the next line. This replaces the exception you had (which contains
> important information about the error encountered) with a SystemExit
> exception (which does not). Note that an uncaught exception will exit
> your script with a non-zero return code, so all you're doing here is
> throwing away debugging information.
>
> 12.) On line 81 you have 'return()'. This line will never be reached
> because you just called exit() on the line before. Also, return is
> not a function; you do not need '()' after it.
>
> 13.) Why do you sleep for half a second on line 83?
>
> I cannot believe that this script does anything useful. I would
> recommend playing with the interactive interpreter for a while until
> you understand Python and what you're doing. Then worry about Pamie
> vs. urllib.

Hi, thanks for your advice; all of your points are correct. First of all, I should say the script isn't a finished version; I just collected pieces from here and there. I'm not very familiar with Python yet, and I'm still learning it. But is there ever an end to learning Python, or programming in general? I don't think so. Also, I do at least know what I'm trying to do with my script. And what does 'wtf' mean? :)
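On Simon's first point (HTMLParser imported but never used): here is a minimal sketch of how the standard-library parser could actually earn its import, collecting the href of every anchor. It is shown in Python 3 syntax as an assumption; on Python 2 the import would be `from HTMLParser import HTMLParser` instead.

```python
from html.parser import HTMLParser  # Python 2: from HTMLParser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag fed to the parser."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

page = ('<html><body>'
        '<a href="http://example.com/1">one</a>'
        '<a href="http://example.com/2">two</a>'
        '</body></html>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # -> ['http://example.com/1', 'http://example.com/2']
```

The fetch step (urllib or Pamie) then stays separate from the parsing step, which is what the thread is really asking for.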
Regular expression question
Hello, I have a text document to parse; a sample of the raw text is below. From this document I would like to extract something like:

SUBJECT = 'NETHERLANDS MUSIC EPA'
CONTENT = 'Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK '

The raw text looks like this:

" NETHERLANDS MUSIC EPA | 36 before Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK "

If anyone can help, I'd much appreciate it.
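One way to split the quoted block into SUBJECT and CONTENT is a single regular expression anchored on the `|` separator and the word "before". This is a sketch against the sample in the post; the exact whitespace of the real document and the meaning of the number 36 are assumptions, so the pattern may need adjusting.

```python
import re

# Raw block as quoted in the post (reconstructed; whitespace is a guess).
raw = ('" NETHERLANDS MUSIC EPA |   36 before '
       'Michael Buble performs in Amsterdam Canadian singer Michael Buble '
       'performs during a concert in Amsterdam, The Netherlands, 30 October '
       "2009. Buble released his new album entitled 'Crazy Love'. "
       'EPA/OLAF KRAAK "')

# subject = text before the pipe; content = text after "<number> before".
pattern = re.compile(
    r'"\s*(?P<subject>[^|]+?)\s*\|\s*\d+\s+before\s+(?P<content>.+?)\s*"',
    re.S)

m = pattern.search(raw)
print('SUBJECT =', repr(m.group('subject')))
print('CONTENT =', repr(m.group('content')))
```

The `re.S` flag lets `.` cross line breaks in case the real document wraps the content over several lines.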
disable image loading to speed up webpage load
Hi, I'm using the web browser module via win32com, and I have a question about it: is it possible to disable image loading to speed up page loads? Any help much appreciated. Thanks in advance.
Re: disable image loading to speed up webpage load
Diez B. Roggisch-2 wrote:
> elca schrieb:
>> Hi,
>> I'm using the web browser module via win32com.
>> I have a question about it: is it possible to disable image
>> loading to speed up page loads?
>> Any help much appreciated. Thanks in advance.
>
> Use urllib2.
>
> Diez

Could you show me a more specific sample or demo?
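Following up on Diez's suggestion: urllib2 (urllib.request in Python 3) only fetches the HTML document itself, so images, stylesheets and scripts referenced by the page are never downloaded at all, unlike a full browser. A minimal sketch, with the dual import included as an assumption so it runs on either Python version:

```python
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen         # Python 2

def fetch_html(url, timeout=15):
    """Fetch only the HTML document at `url`.

    Resources referenced by the page (images, CSS, scripts) are simply
    never requested, which is the speed-up the thread is after.
    """
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    return urlopen(req, timeout=timeout).read()

# Example (needs network access):
#     html = fetch_html('http://www.example.com')
```

The trade-off, as the rest of the thread notes, is that urllib2 does not run JavaScript, so it only fits pages whose content is in the raw HTML.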
Re: join, split question
Jon Clements-2 wrote:
> On Nov 5, 2:08 pm, Bruno Desthuilliers <42.desthuilli...@websiteburo.invalid> wrote:
>> Stuart Murray-Smith wrote:
>>> Hello, I have a text file list in the following format.
>>> I want to change the text to another format.
>>> I uploaded it to the pastebin site:
>>> http://elca.pastebin.com/d71261168
>>
>> Dead link.
>>
>>> I want to change the text to another format.
>>
>> With what?
>>
>>> http://elca.pastebin.com/d4d57929a
>>
>> Yeah, fine. And where's the code you or the OP (if not the same
>> person) have problems with?
>
> I'd certainly be curious to see it, especially with the pagecheck()
> line 22 @ http://elca.pastebin.com/f5c69fe41

Thanks all! It's resolved now. :)
Re: disable image loading to speed up webpage load
Tim Roberts wrote:
> elca wrote:
>>
>> I'm using the web browser module via win32com.
>
> Win32com does not have a webbrowser module. Do you mean you are using
> Internet Explorer via win32com?
>
>> I have a question about it: is it possible to disable image loading
>> to speed up page loads?
>
> If you are using IE, then you need to tell IE to disable image loading.
> I don't know a way to do that through the IE COM interface.
> --
> Tim Roberts, t...@probo.com
> Providenza & Boekelheide, Inc.

Hello, yes, that's right: I mean the IE COM interface. Thanks for your reply. If anyone can help, much appreciated!
how to close not response win32 IE com interface
Hello, these days I'm writing a script that uses the win32com IE COM interface. One problem is that my internet connection is very slow, so sometimes IE.Navigate("http://www.example.com") does not respond in time: it looks hung, stuck in the open state, and never reaches the complete state, so my IE.Navigate call doesn't work correctly. Can anyone help me? In that case, how can I close the browser or restart my script from the beginning? Thanks in advance.

Paul
Re: how to close not response win32 IE com interface
Michel Claveau - MVP-2 wrote:
> Hi!
>
> The only way I know is to use sendkeys.
>
> @+
>
> Michel Claveau

Hello, actually it isn't a true hang; I mean it responds slowly. In that case, I would like to close IE and restart from the beginning. Closing is no problem; the problem is how to set a timeout. For example, if I set 15 seconds and the web page hasn't opened within 15 seconds, I want to close it and restart from the beginning. Is this possible with the IE COM interface? It has been really hard to find a solution.

Paul
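The InternetExplorer.Application COM object exposes Busy and ReadyState properties, so one way to get the 15-second timeout described above is to poll them yourself instead of waiting for Navigate to finish. The polling helper below is plain Python and runs anywhere; the COM usage in the comments uses the standard property names but is an untested sketch (Windows with pywin32 assumed).

```python
import time

def wait_until(condition, timeout, interval=0.5):
    """Poll `condition` (a zero-argument callable) until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# With the IE COM object (hypothetical usage, Windows + pywin32 only):
#     ie.Navigate('http://www.example.com')
#     if not wait_until(lambda: not ie.Busy and ie.ReadyState == 4, timeout=15):
#         ie.Quit()          # close the stuck instance...
#         ie = restart_ie()  # ...and start over (restart_ie is your own helper)

print(wait_until(lambda: True, timeout=1))  # -> True
```

ReadyState 4 is READYSTATE_COMPLETE, which is the "complete status" the original post is waiting for.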
win32com IE interface PAMIE javascript click
Hello, these days I'm writing a script and have run into a problem with it. I want to emulate clicking JavaScript links on the following site:

http://news.naver.com/main/presscenter/category.nhn

This is a news site. The news content changes every day, but the JavaScript does not. For example, I want to click the JavaScript link inside every 'li' element. How can I make this work with PAMIE or the win32com IE interface? Thanks in advance.

Example content from the page:

http://www.bloter.net/wp-content/bloter_html/2009/11/11/19083.html
("Desktop virtualization showdown: Citrix and MS vs. VMware")
http://www.bloter.net/wp-content/bloter_html/2009/11/11/19105.html
http://static.naver.com/newscast/2009//1615301154902609.jpg
("Sharing PLM information through blogs and cafes")
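The loop that PAMIE or win32com would run over the page's DOM can be separated from IE itself: collect the 'li' elements, take the first anchor inside each, and call its click() method. The sketch below uses small stand-in classes instead of the real COM objects (which only exist on Windows with IE running), so the traversal logic can be run and checked anywhere; the COM call in the final comment is an untested assumption.

```python
class FakeAnchor(object):
    """Stand-in for a COM <a> element; the real one also exposes click()."""
    def __init__(self, href):
        self.href = href
        self.clicked = False

    def click(self):
        self.clicked = True

class FakeLI(object):
    """Stand-in for a COM <li> element."""
    def __init__(self, anchors):
        self._anchors = anchors

    def getElementsByTagName(self, tag):
        return self._anchors if tag == 'a' else []

def click_list_links(li_elements):
    """Click the first anchor inside every <li>; return the hrefs clicked."""
    clicked = []
    for li in li_elements:
        anchors = li.getElementsByTagName('a')
        if anchors:
            anchors[0].click()
            clicked.append(anchors[0].href)
    return clicked

lis = [FakeLI([FakeAnchor('http://news.khan.co.kr/1')]),
       FakeLI([FakeAnchor('http://news.khan.co.kr/2')]),
       FakeLI([])]  # an <li> with no link is skipped
print(click_list_links(lis))

# Against real IE the same function would be fed COM elements (untested):
#     lis = ie.Document.getElementsByTagName('li')
#     click_list_links(lis)
```

Because the function only relies on getElementsByTagName() and click(), it works unchanged whether the elements come from win32com or from the fakes above.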
how to install python-spidermonkey on windows
Hello all, I'm writing a script with Python mechanize. One problem is that it's really hard to find a web client scraping or crawling library that supports JavaScript. I did find some, such as python-spidermonkey and PyKHTML, but most of them only run on Linux. I want to build my Python script into an exe file, so I definitely have to install on the Windows platform. My question is: is there any way to install python-spidermonkey or PyKHTML on Windows? I really need Windows support. If anyone can offer a hint or help, I'd really appreciate it! Thanks in advance.

Paul
python win32com problem
Hello, these days I'm very stressed by one strange thing. I want to enumerate a list of URLs and visit every enumerated URL. I uploaded my incomplete script source here: http://elca.pastebin.com/m6f911584 If anyone can help me, I'd really appreciate it. Thanks in advance.

Paul
Re: python win32com problem
Jon Clements-2 wrote:
> On Nov 15, 1:08 pm, elca wrote:
>> Hello, these days I'm very stressed by one strange thing.
>>
>> I want to enumerate a list of URLs and visit every enumerated URL.
>>
>> I uploaded my incomplete script source here:
>>
>> http://elca.pastebin.com/m6f911584
>>
>> If anyone can help me, I'd really appreciate it.
>>
>> Thanks in advance.
>>
>> Paul
>
> How much effort have you put into this? It looks like you've just
> whacked together code (that isn't valid -- where'd the magical
> 'buttons' variable come from), given up and cried for help.
>
> Besides, I would suggest you're taking completely the wrong route.
> You'll find it one hell of a challenge to automate a browser as you
> want, that's if it supports exposing the DOM anyway. And without being
> rude, it would definitely be beyond your abilities, judging from your
> posts to c.l.p.
>
> Download and install BeautifulSoup from
> http://www.crummy.com/software/BeautifulSoup/
> -- you seem to have quite a few HTML-based needs in your pastebin, so
> it'll come in useful in the future.
>
> Here's a snippet to get you started:
>
>     from urllib2 import urlopen
>     from BeautifulSoup import BeautifulSoup as BS
>
>     url = urlopen('http://news.naver.com/main/presscenter/category.nhn')
>     urldata = url.read()
>     soup = BS(urldata)
>     atags = soup('a', attrs={'href': lambda L: L and
>                              L.startswith('http://news.khan.co.kr')})
>     for atag in atags:
>         print atag['href']
>
> I'll leave it to you where you want to go from there (i.e., follow the
> links, or automate IE to open said pages, etc...)
>
> I strongly suggest reading the urllib2 and BeautifulSoup docs, and
> documenting the above code snippet -- you should then understand it,
> should be less stressed, and have something to refer to for similar
> requirements in the future.
>
> hth,
> Jon.

Hello, thanks for your kind reply. Your script works very well. I'm building the scraper with PAMIE now; it is still a slow module, but I have no choice because of the JavaScript support. I tried mechanize before and mostly failed; if mechanize supported JavaScript, it would probably be my best choice. Anyway, since there is almost no other choice, I have to go the "automate IE to open said pages" route. I want to visit every collected link with the IE COM interface; for example, I collected 10 URLs and want to visit all 10. Would you help me some more? Much appreciated, thanks.
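Taking the "automate IE to open said pages" route: the visiting loop only needs three things from the browser object (Navigate(), Busy, ReadyState), so it can be written against that small surface. The same function then drives the real win32com IE object on Windows or a plain test double anywhere else; the Dispatch call in the comment uses the standard ProgID but is untested here.

```python
import time

def visit_all(browser, urls, poll=0.5):
    """Navigate `browser` to each URL in turn, waiting until each page
    reports READYSTATE_COMPLETE (4) before moving on to the next one."""
    for url in urls:
        browser.Navigate(url)
        while browser.Busy or browser.ReadyState != 4:
            time.sleep(poll)

# On Windows the real browser would be created like this (untested sketch):
#     import win32com.client
#     ie = win32com.client.Dispatch('InternetExplorer.Application')
#     ie.Visible = True
#     visit_all(ie, collected_links)   # links from the BeautifulSoup snippet

# A minimal stand-in shows the call pattern without IE:
class FakeBrowser(object):
    Busy = False
    ReadyState = 4
    def __init__(self):
        self.visited = []
    def Navigate(self, url):
        self.visited.append(url)

fb = FakeBrowser()
visit_all(fb, ['http://a', 'http://b'], poll=0)
print(fb.visited)  # -> ['http://a', 'http://b']
```

Combining this with a timeout (rather than looping on Busy forever) is what the "how to close a non-responding IE" thread further up is about.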
DOM related question and problem
Hello, these days I'm writing a Python script that works with the DOM. The problem is that many websites' structure is very complicated these days. What is the best method to inspect the DOM structure and work out the access path to an element? Here is an example: what is the best way to extract an access path like the following quickly?

IE.Document.Frames(1).Document.forms('comment').value = 'hello'

Before, I spent a lot of time extracting this kind of information, and yes, I'm also new to Python and the DOM. If I use a DOM inspector, can I extract this information quickly? If so, could you show me a sample? Here is a site I want to extract some DOM information from; today I spent all day trying to work out the DOM path, but failed:

http://www.segye.com/Articles/News/Politics/Article.asp?aid=20091118001261&ctg1=06&ctg2=00&subctg1=06&subctg2=00&cid=010101060

At the end of that page you can find a comment input box. I want to know what kind of DOM element access I have to use, something like IE.Document.Frames(1).Document.forms('comment').value = 'hello'. Any help much appreciated, thanks.
Re: DOM related question and problem
Chris Rebert-6 wrote:
> On Wed, Nov 18, 2009 at 10:04 AM, elca wrote:
>> Hello, these days I'm writing a Python script that works with the DOM.
>> The problem is that many websites' structure is very complicated.
>> What is the best method to inspect the DOM structure and work out an
>> access path like the following quickly?
>>
>> IE.Document.Frames(1).Document.forms('comment').value = 'hello'
>>
>> [...]
>>
>> At the end of that page you can find a comment input box. I want to
>> know what kind of DOM element access I have to use.
>>
>> Any help much appreciated, thanks.
>
> This sounds suspiciously like a spambot. Why do you want to submit
> comments in an automated fashion exactly?
>
> Cheers,
> Chris
> --
> http://blog.rebertia.com

Hello, this is not a spambot, actually; it is related to my blog scraper. If anyone can help or advise, I'd much appreciate it.
mechanize login problem with website
Hello, I'm writing an auto-login script using Python mechanize. I've used mechanize before with no problem, but on this site, http://www.gmarket.co.kr, I couldn't make it work. Whenever I try to log in, the login page is always returned; even with a correct Gmarket id and password I can't log in. I also saw a suspicious message, "top.location.reload();", which I think is related to my problem, but I don't know exactly how to handle it. I uploaded my script here: http://paste.pocoo.org/show/151607/ If anyone can help, much appreciated. Thanks in advance.
Re: mechanize login problem with website
elca wrote:
> Hello
>
> I'm writing an auto-login script using Python mechanize.
>
> I've used mechanize before with no problem, but on
> http://www.gmarket.co.kr I couldn't make it work.
>
> Whenever I try to log in, the login page is always returned; even with
> a correct Gmarket id and password I can't log in, and I saw a
> suspicious message:
>
> "top.location.reload();"
>
> I think this is related to my problem, but I don't know exactly how to
> handle it. Here is my script:
>
>     # -*- coding: cp949 -*-
>     from lxml.html import parse, fromstring
>     import sys, os
>     import mechanize, urllib
>     import cookielib
>     import re
>     from BeautifulSoup import BeautifulSoup, BeautifulStoneSoup, Tag
>
>     try:
>         params = urllib.urlencode({'command': 'login',
>                                    'url': 'http%3A%2F%2Fwww.gmarket.co.kr%2F',
>                                    'member_type': 'mem',
>                                    'member_yn': 'Y',
>                                    'login_id': 'tgi177',
>                                    'image1.x': '31',
>                                    'image1.y': '26',
>                                    'passwd': 'tk1047',
>                                    'buyer_nm': '',
>                                    'buyer_tel_no1': '',
>                                    'buyer_tel_no2': '',
>                                    'buyer_tel_no3': ''})
>         rq = mechanize.Request("http://www.gmarket.co.kr/challenge/login.asp")
>         rs = mechanize.urlopen(rq)
>         data = rs.read()
>
>         logged_in = r'input_login_check_value' in data
>
>         if logged_in:
>             print ' login success !'
>             rq = mechanize.Request("http://www.gmarket.co.kr")
>             rs = mechanize.urlopen(rq)
>             data = rs.read()
>             print data
>         else:
>             print 'login failed!'
>             quit()
>     except:
>         pass
>
> If anyone can help, much appreciated. Thanks in advance.

I've updated my script source.
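One visible problem in the script above is that `params` is built but never sent: the Request is opened without a POST body, so the server naturally returns the login page again. The sketch below separates out the testable part (building the POST body; the field names are copied from the script in the post and may change if the site does). Note also that the script's 'url' value was already percent-encoded, which urlencode would encode a second time. The mechanize calls in the comments are the library's documented API, but the form index on the site is an assumption.

```python
try:
    from urllib.parse import urlencode, parse_qs   # Python 3
except ImportError:
    from urllib import urlencode                   # Python 2
    from urlparse import parse_qs

def make_login_params(login_id, passwd):
    """Build the POST body for the login form.

    Field names come from the script in the post. 'url' is passed as a
    plain URL here: urlencode does the percent-encoding itself, so
    passing an already-encoded value would double-encode it.
    """
    return urlencode({'command': 'login',
                      'url': 'http://www.gmarket.co.kr/',
                      'member_type': 'mem',
                      'member_yn': 'Y',
                      'login_id': login_id,
                      'passwd': passwd})

print(parse_qs(make_login_params('tgi177', 'tk1047'))['login_id'])  # -> ['tgi177']

# With mechanize itself, the cleaner route is to let it fill the real form,
# so hidden fields and cookies come along automatically (untested sketch;
# the form index on gmarket.co.kr is an assumption):
#     br = mechanize.Browser()
#     br.set_handle_robots(False)
#     br.addheaders = [('User-agent', 'Mozilla/5.0')]
#     br.open('http://www.gmarket.co.kr/challenge/login.asp')
#     br.select_form(nr=0)
#     br['login_id'] = 'your_id'
#     br['passwd'] = 'your_password'
#     resp = br.submit()
```

If the site still bounces back with "top.location.reload();", it is likely doing a JavaScript redirect after login, which mechanize will not execute; the landing page then has to be fetched explicitly.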
Re: DOM related question and problem
Stefan Behnel-3 wrote:
> elca, 18.11.2009 19:04:
>> These days I'm writing a Python script that works with the DOM.
>>
>> The problem is that many websites' structure is very complicated.
>> [...]
>> What is the best method to extract info like the following quickly?
>
> This should help:
>
> http://blog.ianbicking.org/2008/12/10/lxml-an-underappreciated-web-scraping-library/
>
> Stefan

Hello, yes, I know that website already, but I failed to get the lxml solution working.
python socket service related question!
Hello all, I'm totally new to socket programming in Python. I've read some tutorials and the manual, but I didn't find what I need for the socket script I want to write. I want to make a socket script which can send some info to a server and also receive some info back from it. For example, I want to send my login information to the server and receive the result reply, but I have no idea how to send my login information (id and password) to the server.

I captured the login exchange with Wireshark. The port number is 5300 and the server IP is 58.225.56.152. I sent the id 'aaa' and the password 'bbb', and I received a 'User Not Found' result from the server. How can I reproduce this kind of exchange with a Python socket? Any reference, example, or other help would be much appreciated!

Login request:

    0000  00 50 56 f2 c8 cc 00 0c 29 a8 f8 c0 08 00 45 00
    0010  00 e2 2a 19 40 00 80 06 d0 55 c0 a8 cb 85 3a e1
    0020  38 98 05 f3 15 9a b9 86 62 7b 0d ab 0f ba 50 18
    0030  fa f0 26 14 00 00 50 54 3f 09 a2 91 7f 13 00 00
    0040  00 1f 14 00 02 00 00 00 00 00 00 00 07 00 00 00
    0050  61 61 61 61 61 61 61 50 54 3f 09 a2 91 7f 8b 00
    0060  00 00 1f 15 00 08 00 00 00 07 00 00 00 61 61 61
    0070  61 61 61 61 07 00 00 00 62 62 62 62 62 62 62 01
    0080  00 00 00 31 02 00 00 00 4b 52 0f 00 00 00 31 39
    0090  32 2e 31 36 38 2e 32 30 33 2e 31 33 33 30 00 00
    00a0  00 4d 69 63 72 6f 73 6f 66 74 20 57 69 6e 64 6f
    00b0  77 73 20 58 50 20 50 72 6f 66 65 73 73 69 6f 6e
    00c0  61 6c 20 53 65 72 76 69 63 65 20 50 61 63 6b 20
    00d0  32 14 00 00 00 31 30 30 31 33 30 30 35 33 31 35
    00e0  37 38 33 37 32 30 31 32 33 03 00 00 00 34 37 30

Server ACK (no payload):

    0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00
    0010  00 28 ae 37 00 00 80 06 8c f1 3a e1 38 98 c0 a8
    0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 10
    0030  fa f0 5f 8e 00 00 00 00 00 00 00 00

Server reply (containing "User Not Found"):

    0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00
    0010  00 4c ae 38 00 00 80 06 8c cc 3a e1 38 98 c0 a8
    0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 18
    0030  fa f0 3e 75 00 00 50 54 3f 09 a2 91 7f 16 00 00
    0040  00 1f 18 00 01 00 00 00 0e 00 00 00 55 73 65 72
    0050  20 4e 6f 74 20 46 6f 75 6e 64
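In the capture, every variable-length field appears to be preceded by a 4-byte little-endian length: for example, `0e 00 00 00` comes right before the 14 bytes of "User Not Found", and `07 00 00 00` precedes each 7-byte credential string. Under that assumption (the header bytes such as `50 54 3f 09` remain unidentified and would have to be reverse-engineered from more captures), the strings can be packed and unpacked with struct; the actual socket usage in the comments is a sketch only.

```python
import struct

def lp_string(s):
    """Length-prefixed string as seen in the capture:
    4-byte little-endian length, then the raw bytes."""
    data = s.encode('ascii')
    return struct.pack('<I', len(data)) + data

def read_lp_string(buf, offset=0):
    """Inverse of lp_string: return (string, offset_after_field)."""
    (n,) = struct.unpack_from('<I', buf, offset)
    start = offset + 4
    return buf[start:start + n].decode('ascii'), start + n

payload = lp_string('aaa') + lp_string('bbb')  # id field, then password field
print(repr(payload))  # -> b'\x03\x00\x00\x00aaa\x03\x00\x00\x00bbb'

# Hypothetical exchange modeled on the capture (header layout is a guess):
#     import socket
#     sock = socket.create_connection(('58.225.56.152', 5300), timeout=10)
#     sock.sendall(header + payload)   # `header` must be reverse-engineered
#     reply = sock.recv(4096)          # reply holds an lp_string: the result text
```

Reading the reply with read_lp_string at the right offset would then yield the "User Not Found" text directly, instead of eyeballing it in a hexdump.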