Re: Get pexpect to work
Jurian Sluiman wrote:
> Ok, somebody helped me and found with "help(child.sendline)" that the
> number (7) is the number of characters of my password.
>
> Still there doesn't seem to be anything strange happening. With the
> logfile printed out, I found that child.expect places a 0 behind the
> next rule. Is this always what's happening? And is that 0 causing all
> my troubles?
>
> I'm a newbie with Python, so I don't know much about it. This is
> (again) the output, but with a sys.stdout line between it:
>
> >>> child = pexpect.spawn("vpnc-connect tudelft\ nopass.conf")
> >>> child.logfile = sys.stdout
> >>> child.expect(".* password .*: ")
> Enter password for [EMAIL PROTECTED]: 0
> >>> child.sendline("[my_password]")
> 7
>
> Any help is really appreciated! I can search on the Internet, but with
> no clue what to search for and which keywords to use, all results don't
> help me.
>
> Thanks,
> Jurian
>
> PS. Sorry for my bad English, I hope you can understand it.

I use a slightly different approach for starting vpnc-connect, if that helps:

1. Have your script create a temporary vpnc configuration file (including
   your password):

       f1 = open('/tmp/vpn.conf', 'w')
       f1.write('IPSec gateway ' + vpnAddress + '\n')
       f1.write('IPSec ID ' + vpnGroup + '\n')
       # etc.

2. Create a temporary results file, such as:

       f2 = open('/tmp/vpncResults.txt', 'w')

3. Start the VPN client:

       p = subprocess.Popen('/usr/sbin/vpnc-connect /tmp/vpn.conf',
                            shell=True, stdout=f2).stdout

4. Poll the results file to determine whether or not the connection
   succeeded.

5. Delete the temporary configuration file.

Creating a temporary configuration file keeps my password out of clear text
other than for the few seconds the configuration file lives on my hard drive.

In reality, my program runs within the Twisted event-driven framework, so I
just create a deferred object when invoking vpnc-connect and wait for the
callback to see if the connection was successful, but an earlier incarnation
of my program worked with the above code.
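For reference, a minimal, self-contained sketch of that approach pulled
together into one function; the Xauth lines, the polling loop, and the
'Connect Banner' success marker are assumptions on my part rather than
anything taken from the posts above:

import os
import subprocess
import time

def connect_vpn(vpnAddress, vpnGroup, vpnUser, vpnPassword):
    # 1. Write a temporary vpnc configuration file containing the password.
    f1 = open('/tmp/vpn.conf', 'w')
    f1.write('IPSec gateway ' + vpnAddress + '\n')
    f1.write('IPSec ID ' + vpnGroup + '\n')
    f1.write('Xauth username ' + vpnUser + '\n')
    f1.write('Xauth password ' + vpnPassword + '\n')
    f1.close()

    # 2. Create a temporary results file for the client's output.
    f2 = open('/tmp/vpncResults.txt', 'w')

    # 3. Start the VPN client, sending its output to the results file.
    subprocess.Popen('/usr/sbin/vpnc-connect /tmp/vpn.conf',
                     shell=True, stdout=f2)

    # 4. Poll the results file to see whether the connection succeeded.
    #    The success marker is a guess; check what vpnc-connect prints.
    connected = False
    for attempt in range(10):
        time.sleep(1)
        if 'Connect Banner' in open('/tmp/vpncResults.txt').read():
            connected = True
            break

    # 5. Delete the temporary configuration file so the password is only
    #    ever in clear text on disk for a few seconds.
    os.remove('/tmp/vpn.conf')
    f2.close()
    return connected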
Re: Serial port failure
Rob wrote:
> Hi all,
>
> I am fairly new to python, but not programming and embedded. I am
> having an issue which I believe is related to the hardware, triggered
> by the software read I am doing in pySerial. I am sending a short
> message to a group of embedded boxes daisy chained via the serial port.
> When I send a 'global' message, all the connected units should reply
> with their Id and Ack in this format '0 Ack'. To be certain that I
> didn't miss a packet, and hence a unit, I do the procedure three times,
> sending the message and waiting for a timeout before I run through the
> next iteration. Frequently I get through the first two iterations
> without a problem, but the third hangs up and crashes, requiring me to
> remove the Belkin USB to serial adapter, and then reconnect it. Here
> is the code:
>
> import sys, os
> import serial
> import sret
> import time
>
> from serial.serialutil import SerialException
>
> GetAck Procedure
>
> def GetAck(p):
>     response = ""
>
>     try:
>         response = p.readline()
>     except SerialException:
>         print ">>>>>Timed out<<<<<"
>         return -1
>     res = response.split()
>
>     #look for ack in the return message
>     reslen = len(response)
>     if reslen > 5:
>         if res[1] == 'Ack':
>             return res[0]
>         elif res[1] == 'Nak':
>             return 0x7F
>         else:
>             return -1
>
> >>>>> Snip <<<<<<
>
> GetNumLanes Procedure
>
> def GetNumLanes(Lanes):
>     print "Looking for connected units"
>     # give a turn command and wait for responses
>     msg = ".g t 0 336\n"
>
>     for i in range(3):
>         port = OpenPort()
>         time.sleep(3)
>         print port.isOpen()
>         print "Request #%d" % (i+1)
>         try:
>             port.writelines(msg)
>         except OSError:
>             print "Serial port failure. Power cycle units"
>             port.close()
>             sys.exit(1)
>
>         done = False
>         # Run first connection check
>         #Loop through getting responses until we get a -1 from GetAck
>         while done == False:
>             # lane will either be -1 (timeout), 0x7F (Nak),
>             # or the lane number that responded with an Ack
>             lane = GetAck(port)
>             if lane >= '0':
>                 if False == Lanes.has_key(lane):
>                     Lanes[lane] = True
>             else:
>                 done = True
>         port.close()
>         time.sleep(3)
>
>     # Report number of lanes found
>     NumLanes = len(Lanes)
>     if NumLanes == 1:
>         print "\n\nFound 1 unit connected"
>     else:
>         print "\n\nFound %d units connected" % NumLanes
>
>     return NumLanes
>
> >>>>>> Snip <<<<<<
>
> Main Program Code Section
>
> #open the serial port
> # capture serial port errors from trying to open the port
>
> port = OpenPort()
>
> # If we got to here, the port exists. Set the baud rate and timeout values
>
> # I need to determine how many lanes are on this chain
> # First send a turn command
>
> #Create a dictionary of lanes so I can check each lane's responses
> Lanes = {}
> #<><><><><><><><><><><><><><><><>
> # Call the lane finder utility
> NumLanes = GetNumLanes(Lanes)
> #<><><><><><><><><><><><><><><><>
>
> #if no lanes responded, exit from the utility
> if 0 == NumLanes:
>     print "I can't find any units connected."
>     print "Check your connections and try again"
>     sys.exit(1)
>
> # list the lanes we have in our dictionary
> for n in Lanes:
>     print "Lane - %s" % n
>
> Now, here is the error message that I get
>
> [EMAIL PROTECTED]:~/py$ ./Thex.py
> Looking for connected units
> True
> Request #1
> True
> Request #2
> Serial port failure. Power cycle units
> [EMAIL PROTECTED]:~/py$ ./Thex.py
> The serial port is unavailable.
> Disconnect your USB to Serial adapter, Then
> reconnect it and try again.
> [EMAIL PROTECTED]:~/py$
>
> Does anyone have any ideas?
>
> Thanks,
>
> rob < [EMAIL PROTECTED] >

In the second iteration of your loop, you appear to be opening a port that
is already open:

    for i in range(3):
        port = OpenPort()

thus the error message: "the serial port is unavailable".

--Drake Smith
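If reopening really were the problem, a minimal sketch of the rearrangement
would be to open the port once and close it in a finally block, so the
adapter is always released; OpenPort, GetAck, and the message string are from
the code above, and everything else here is an assumption:

import time

def GetNumLanes(Lanes):
    msg = ".g t 0 336\n"
    port = OpenPort()                  # open the port once, outside the loop
    try:
        for i in range(3):
            print "Request #%d" % (i + 1)
            port.writelines(msg)
            done = False
            while not done:
                lane = GetAck(port)    # -1 (timeout), 0x7F (Nak), or lane id
                if lane >= '0':
                    Lanes[lane] = True
                else:
                    done = True
            time.sleep(3)
    finally:
        port.close()                   # always release the adapter
    return len(Lanes)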
Re: Serial port failure
[EMAIL PROTECTED] wrote:
> ..snip
> In the second iteration of your loop, you appear to be opening a port
> that is already open:
>
>     for i in range(3):
>         port = OpenPort()
>
> thus the error message: "the serial port is unavailable".
>
> --Drake Smith

Skip that -- I didn't notice that your port.close() was indented.
Python style: exceptions vs. sys.exit()
I have a general question of Python style, or perhaps just good programming
practice.

My group is developing a medium-sized library of general-purpose Python
functions, some of which do I/O, so many of the library functions can raise
IOError exceptions. The question is: should a library function just dump to
sys.exit() with a message about the error (like "couldn't open this file"),
or should the exception propagate to the calling program, which then handles
the issue?

Thanks in advance to anyone who can either answer my question or point me to
where this question has already been answered.
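For what it's worth, a minimal sketch of the trade-off, using a made-up
read_config() library function: the library lets the IOError propagate, and
the application decides whether that means exiting.

import sys

def read_config(path):
    # Library code: let the IOError propagate. The library cannot know
    # whether the caller wants to exit, retry, or prompt the user.
    with open(path) as f:
        return f.read()

# Application code: the one place that decides the policy on failure.
try:
    text = read_config("/etc/myapp.conf")
except IOError as exc:
    sys.exit("couldn't open configuration file: %s" % exc)

A sys.exit() call inside the library would take that choice away from every
program that uses it; note also that sys.exit() itself just raises
SystemExit, which a caller can catch anyway.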
Intel C Compiler
Hello,

I'm an engineer who has access to the Intel C/C++ compiler (icc), and for the
heck of it I compiled Python 2.7 with it. Unsurprisingly, it compiled fine and
functions correctly as far as I know. However, I was interested to discover
that the icc compile printed literally thousands of various warnings and
remarks. Examples:

Parser/node.c(13): remark #2259: non-pointer conversion from "int" to "short" may lose significant bits
        n->n_type = type;

Parser/metagrammar.c(156): warning #1418: external function definition with no prior declaration
Py_meta_grammar(void)

I was just wondering if anyone from the Python development team uses icc, or
finds any value in the icc compilation info. Similarly, I would be interested
to know if they use icc for benchmarking comparisons (yes, I know that Intel
has been accused of crippling AMD processors, so let's not have a flame war
please).

Regards,
Drake
Scraping multiple web pages help
Hello everyone,

For a research project, I need to scrape a lot of comments from
regulations.gov:

https://www.regulations.gov/docketBrowser?rpp=25&so=DESC&sb=commentDueDate&po=0&dct=PS&D=ED-2018-OCR-0064

But partly what's throwing me is the URL addresses of the comments. They
aren't consistent. I mean, there's some consistency insofar as the numbers
that differentiate the pages all begin after that 0064 number in the URL
listed above. But the differentiating numbers aren't even all the same
length: some have 4 digits (say, 4019) whereas others have 5 (say, 50343).
But I don't think they go over 5. So this is a problem. I don't know how to
write the code to access the multiple pages.

I should also mention I'm new to programming, so that's also a problem (if
you can't already tell by the way I'm describing my problem). I should also
mention that, I think, there's an API on regulations.gov, but I'm such a
beginner that I don't even really know where to find it, or even what to do
with it once I do. That's how helpless I am right now.

Any help anyone could offer would be much appreciated.

D
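As a rough illustration of the looping part, a minimal sketch of the fetch
step, assuming the differentiating numbers are already known (the two below
are just the ones mentioned above); building the full list of numbers is
exactly the kind of thing the site's API is for:

import time
import requests

base_url = "https://www.regulations.gov/document?D=ED-2018-OCR-0064-"

# The differentiating numbers, whether 4 or 5 digits long.
comment_numbers = ["4019", "50343"]

for number in comment_numbers:
    response = requests.get(base_url + number)
    print(number, response.status_code, len(response.text))
    time.sleep(1)  # pause between requests to be polite to the server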
trying to begin a code for web scraping
Hi everyone,

I'm trying to write code to scrape this website
<https://www.regulations.gov/document?D=ED-2018-OCR-0064-5403>
(regulations.gov) of its comments, but I'm having trouble figuring out what
to latch onto in the inspect page (like when I right-click on Inspect with
the mouse). Although I need to write code to scrape all 11,000ish of the
comments related to this event (by putting a code in a loop?), I'm still at
the stage of looking at individual comments.

So, for example, with this comment
<https://www.regulations.gov/document?D=ED-2018-OCR-0064-5403>, I know enough
to right-click on Inspect and to look at the xml? (This is how much of a
beginner I am--what am I looking at when I right-click Inspect?) Then, I
control-F to find where the comment is in the code. For that comment, the
word I used control-F on was "troubling." So, I found the comment buried in
the xml.

But my issue is this. I don't know what to latch onto to scrape the comment
(and I assume that this same sequence of letters would apply to scraping all
of the comments in general). I assume what I grab is GIY1LSJISD. I'm watching
this video, and the person is latching onto "tr" and "td," but mine is not
that easy.

In other words, what is the most essential bit of language (xml? code?), the
copying of which would allow me to extract not only this comment but all of
the comments, were I to put it into my code?

    soup.findAll('?')

In sum, what I need to know is: how do I tell my Python code to ignore all of
the surrounding code and go straight in and grab the comment? Of course, I
need to grab other things too, like the name, category, date, and so on, but
I haven't gotten that far yet. Right now, I'm just trying to figure out what
I need to insert into my code so that I can get the comment.

Help! I'm trying to learn code on the fly. I'm an experienced researcher but
am new to coding. Any help you could give me would be tremendously awesome.

Best,
Drake
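A minimal sketch of what that grab might look like with BeautifulSoup, using
the GIY1LSJISD class name spotted in the inspector; class names like that are
usually auto-generated, so treat it as a guess rather than a stable hook:

import requests
from bs4 import BeautifulSoup

url = "https://www.regulations.gov/document?D=ED-2018-OCR-0064-5403"
html = requests.get(url).text
soup = BeautifulSoup(html, "html.parser")

# Find every element carrying the class seen in the inspector and print
# just its text, ignoring the surrounding markup.
for element in soup.find_all(class_="GIY1LSJISD"):
    print(element.get_text(strip=True))

One caveat: if the page assembles its content with JavaScript, the comment
text may not appear in the HTML that requests downloads at all, in which case
the site's API is the more reliable route.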
trying to retrieve comments with activated API key
Hi everyone,

I'm further along than I was last time. I've installed Python and am running
this in Spyder. This is the code I'm working with:

import requests
import csv
import time
import sys

api_key = 'my api key'
docket_id = 'ED-2018-OCR-0064'
total_docs = 32068
docs_per_page = 1000

Where the 'my api key' is, is actually my API key. I was just told not to
give it out.

But I put this into Spyder and clicked run and nothing happened. I went to
right before "import requests" and clicked run. I think. I'm better in R. But
I'm horrible at both, if you can't already tell. Well, this might have
happened:

runfile('/Users/susan/.spyder-py3/temp.py', wdir='/Users/susan/.spyder-py3')

but I don't know what to do with it, if it actually happened. But I feel like
I'm missing something super important. Like, for instance, how is Python
being told to go to the right website? Again, I'm trying to retrieve these
comments
<https://www.regulations.gov/docketBrowser?rpp=25&so=DESC&sb=commentDueDate&po=0&dct=PS&D=ED-2018-OCR-0064>
off of regulations.gov. I don't know if this helps, but the interactive API
interface is here <https://regulationsgov.github.io/developers/console/>.

I've installed Anaconda. I was told to get Postman. Should I get Postman?
Help!

At the end of the day, I'm trying to use Python to get the comments from
regulations.gov into a csv file so that I can analyze them in R. And then I
think that I only need the name, comment, date, and category in the JSON
dictionary. I can't send a picture through the list, but I have one of what
I'm talking about.

Drake
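On the "how is Python told to go to the right website" question, a minimal
sketch: the URL and the params dictionary handed to requests.get() are what
aim the request. The endpoint and parameter names below are my reading of the
regulations.gov v3 documents endpoint, so double-check them against the
interactive API console:

import requests

api_key = 'my api key'           # the key is sent along as a query parameter
docket_id = 'ED-2018-OCR-0064'
docs_per_page = 1000

r = requests.get(
    "https://api.data.gov/regulations/v3/documents.json",  # where to go
    params={                      # what to ask for
        "api_key": api_key,
        "dktid": docket_id,       # restrict results to this docket
        "rpp": docs_per_page,     # results per page
    },
)
print(r.status_code)              # 200 means the request succeeded
data = r.json()                   # the body, parsed from JSON into Python
print(list(data))                 # top-level keys of the response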
Re: trying to retrieve comments with activated API key
Still having issues:

 7. import requests
 8. import csv
 9. import time
10. import sys
11. api_key = 'PUT API KEY HERE'
12. docket_id = 'ED-2018-OCR-0064'
13. total_docs = 32068
14. docs_per_page = 1000
15.
16. r = requests.get("https://api.data.gov:443/regulations/v3/documents.json",
17.                  params={
18.                      "api_key": api_key,
19.                      "dktid": docket_id,
20.                      "rrp": docs_per_page,
                     })

I'm getting an "imported but unused" warning on 8, 9 & 10.

But then also, on the interactive API
<https://regulationsgov.github.io/developers/console/#!/documents.json/documents_get_0>,
it has something called a "po"--i.e., page offset. I was able to figure out
that "rrp" means results per page, but now I'm unsure how to get all 32000
comments into a response object. Do I have to add an "ro" on line 21? Or
something else?

Also, I ran the code as is starting on line 7 and got this:

runfile('/Users/susan/.spyder-py3/temp.py', wdir='/Users/susan/.spyder-py3')

This was in the terminal, I think. Since nothing popped up in what I assume
is the environment (I'm running this in Spyder), I assume it didn't work.

Drake

On Fri, Mar 8, 2019 at 12:29 PM Chris Angelico wrote:

> On Sat, Mar 9, 2019 at 7:26 AM Drake Gossi wrote:
> >
> > Yea, it looks like the request url is:
> >
> > import requests
> > import csv        <--- on these three, I have an "imported but unused" warning
> > import time       <---
> > import sys        <---
> > api_key = 'PUT API KEY HERE'
> > docket_id = 'ED-2018-OCR-0064'
> > total_docs = 32068   <-- but then also, what happens here? does it have to
> >                          do with po page offset? how do I get all 32000
> >                          instead of a 1000
> > docs_per_page = 1000
>
> Can you post on the list, please? Also - leave the original text as
> is, and add your responses underneath, like this; don't use colour to
> indicate what's your text and what you're replying to, as not everyone
> can see colour. Text is the only form that truly communicates.
>
> ChrisA
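A minimal sketch of the paging loop being asked about, assuming the endpoint
takes "rpp" (results per page) and "po" (page offset) and returns each page's
records under a "documents" key; all three names should be verified against
the interactive API console:

import time
import requests

api_key = 'PUT API KEY HERE'
docket_id = 'ED-2018-OCR-0064'
total_docs = 32068
docs_per_page = 1000

all_documents = []
for offset in range(0, total_docs, docs_per_page):
    r = requests.get(
        "https://api.data.gov/regulations/v3/documents.json",
        params={
            "api_key": api_key,
            "dktid": docket_id,
            "rpp": docs_per_page,
            "po": offset,              # page offset: 0, 1000, 2000, ...
        },
    )
    r.raise_for_status()
    page = r.json()
    # "documents" is the assumed key holding the list of records per page.
    all_documents.extend(page.get("documents", []))
    time.sleep(1)                      # stay under the API's rate limit

print(len(all_documents))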
Re: trying to retrieve comments with activated API key
Ok, I think it worked. Embarrassingly, in my environment, I had clicked on
"help" rather than "variable explorer." This is what is represented:

api_key          string   1   DEMO KEY
docket_id        string   1   ED-2018-OCR-0064
docs_per_page    int      1   1000
total_docs       int      1   32068

Does this mean I can add on the loop? That is, to get all 32000? And is this
in JSON format? It has to be, right? Eventually I'd like it to be in csv, but
that's because I assume I have to manipulate it with R later...

D

On Fri, Mar 8, 2019 at 12:54 PM Chris Angelico wrote:

> On Sat, Mar 9, 2019 at 7:47 AM Drake Gossi wrote:
> >
> > Still having issues:
> >
> > 7. import requests
> > 8. import csv
> > 9. import time
> > 10. import sys
> > 11. api_key = 'PUT API KEY HERE'
> > 12. docket_id = 'ED-2018-OCR-0064'
> > 13. total_docs = 32068
> > 14. docs_per_page = 1000
> > 15.
> > 16. r = requests.get("https://api.data.gov:443/regulations/v3/documents.json",
> > 17.                  params={
> > 18.                      "api_key": api_key,
> > 19.                      "dktid": docket_id,
> > 20.                      "rrp": docs_per_page,
> >                      })
> >
> > I'm getting an "imported but unused" by 8, 9 & 10.
>
> Doesn't matter - may as well leave them in.
>
> > But then also, on the interactive API
> > <https://regulationsgov.github.io/developers/console/#!/documents.json/documents_get_0>,
> > it has something called a "po"--i.e., page offset. I was able to figure out
> > that "rrp" means results per page, but now I'm unsure how to get all 32000
> > comments into a response object. Do I have to add an "ro" on line 21? or
> > something else?
>
> Probably you'll need a loop. For now, don't worry about it, and
> concentrate on getting the first page.
>
> > Also, I ran the code as is starting on line 7 and got this:
> >
> > runfile('/Users/susan/.spyder-py3/temp.py', wdir='/Users/susan/.spyder-py3')
> >
> > This was in the terminal, I think. Since nothing popped up in what I assume
> > is the environment--I'm running this in spyder--I assume it didn't work.
>
> Ahh, that might be making things confusing.
>
> My recommendation is to avoid Spyder altogether. Just run your script
> directly and let it do its downloading. Keep it simple!
>
> ChrisA
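Since the end goal is a csv file for R, a minimal sketch of that last step,
assuming a paging loop like the one sketched earlier has filled all_documents
with one dict per comment; the field names are guesses and need to be checked
against the JSON the API actually returns:

import csv

all_documents = []   # filled in by the paging loop over the API

# Guessed field names -- adjust to whatever keys the JSON really uses.
fieldnames = ["title", "postedDate", "category", "commentText"]

with open("comments.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    for doc in all_documents:
        writer.writerow({key: doc.get(key, "") for key in fieldnames})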
Re: [Python-Dev] [ANN] Python 2.3.7 and 2.4.5, release candidate 1
On Mar 2, 2008, at 4:05 PM, Martin v. Löwis wrote:
> Assuming no major problems crop up, a final release of Python 2.4.4 will
> follow in about a week's time.

I do suppose you mean 2.4.5.

2.4.5 won't build for me from the svn checkout on Mac OS X 10.5.2:

gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd
-DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE
-c ./Modules/posixmodule.c -o Modules/posixmodule.o
./Modules/posixmodule.c: In function ‘posix_setpgrp’:
./Modules/posixmodule.c:3145: error: too few arguments to function ‘setpgrp’
make: *** [Modules/posixmodule.o] Error 1

I can only presume I'm doing something wrong at this point, since I don't
consider myself a Mac OS X developer.

-Fred

--
Fred Drake
Re: [Python-Dev] [ANN] Python 2.3.7 and 2.4.5, release candidate 1
On Mar 2, 2008, at 7:43 PM, Fred Drake wrote:
> 2.4.5 won't build for me from the svn checkout on Mac OS X 10.5.2:

Neither does 2.3.7, now that I've tried that:

gcc -u __dummy -u _PyMac_Error -framework System -framework CoreServices
-framework Foundation -o python.exe \
        Modules/python.o \
        libpython2.3.a -ldl
Undefined symbols:
  "__dummy", referenced from:
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [python.exe] Error 1

Of course, I wasn't using an earlier 2.3.x version on this box. I would
really like to be able to use 2.4.5, since I've been using 2.4.4 for work
for a while now.

-Fred

--
Fred Drake
Re: [Python-Dev] [ANN] Python 2.3.7 and 2.4.5, release candidate 1
On Mar 2, 2008, at 8:35 PM, Kevin Teague wrote:
> This issue was fixed for Python 2.5. As the issue notes, you can
> work around it with:
>
>     ./configure MACOSX_DEPLOYMENT_TARGET=10.5

Indeed, that works wonderfully for me for 2.4.5.

> But it would be really nice if the configure fix for 2.5 was
> backported to 2.4.5 since Zope is still on 2.4 and Mac OS X skipped
> system builds for 2.4 going direct from 2.3 -> 2.5.

Yes, it would be very nice if this worked out of the box on Mac OS X 10.5.2.
It's definitely a surprise for those of us who built our 2.4.4 on
Mac OS X 10.4.x.

-Fred

--
Fred Drake
Re: OpenSource documentation problems
On Thursday 01 September 2005 22:53, Steve Holden wrote:
> So, probably the best outcome of this current dialogue would be a change
> to the bottom-of-page comment so instead of saying
>
>     """Release 2.4, documentation updated on 29 November 2004.
>     See About this document... for information on suggesting changes."""
>
> it said
>
>     """Release 2.4, documentation updated on 29 November 2004.
>     See About this document... for information on suggesting changes, or
>     mail your suggestions to [EMAIL PROTECTED]"""

The reason I changed the text there and on the "About..." page was to avoid
it all coming to the doc team (of one) as email, where it too often was lost
whenever I was swamped by whatever work projects I was involved in at the
time. That's a big reason to continue to emphasize using SourceForge instead
of my mailbox.

Ideally, emails to docs at python.org would result in issues being created
somewhere, simply so they don't get lost. It probably doesn't make sense for
those to land in SourceForge automatically, since then everyone has to read
every plea for a printable version of the documents.

At one time, there was hope that we could get a Roundup tracker running for
the webmaster address, to help make sure that each request received an
appropriate response, and I secretly hoped to point the docs address at that
as well. Unfortunately, not enough time was available from people with
sufficient Roundup know-how to finish that effort. I still think that would
be really nice.

-Fred

--
Fred L. Drake, Jr.