On 09Sep2018 17:06, Chip Wachob <wach...@gmail.com> wrote:
Before I jump in, the 1000 foot view is I have to send an array of 512
bytes down the SPI loop, and read back 512 bytes that were latched in
from a control interface.  Unfortunately, there's a glitch in the FTDI
part and I can't just send the 512 bytes.. the part times out and
causes the script to terminate early...  So, my solution was to
'chunk' the data into smaller groups which the part _can_ handle.
This works fine until I come to the point where I concatenate the
'chunks' into my 512 byte array..  which other functions will then
process.

Sounds good to me.
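
Just so we're picturing the same shape of thing, a chunk-and-reassemble loop usually looks roughly like this. The chunk size here, and the assumption that spi.transfer hands back the read-back bytes, are guesses on my part:

 CHUNK_SIZE = 64    # assumption: whatever size the FT232H copes with

 def chunked_transfer(spi, data, chunk_size=CHUNK_SIZE):
     # send `data` in chunk_size pieces and glue the replies back together
     response = bytearray()
     for offset in range(0, len(data), chunk_size):
         chunk = data[offset:offset + chunk_size]
         response.extend(bytearray(spi.transfer(chunk)))
     return response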

The libraries that I'm using are from the project link below.  I'm
learning from all of you how much code to post, I didn't want to post
the entire library, but the way it's set up it is hard not to.. as has
been pointed out.. methods of classes and then there's other files
that contain some of the low-level workings..

https://github.com/adafruit/Adafruit_Python_GPIO

Thanks.

Yes, I realize now that I left out important information.  I have my
own transfer function which is the supplied transfer function with a
bunch of GPIO wrapped around it, and the 'chunking'.  This way I can
call it when I need to send and receive data from the loop, which is
often.  In the end, I will have to talk with several different SPI
devices, all of which will have to have different GPIO line wiggling
going on before and after they call the spi.transfer.
[...]
   def transfer(self, data):
Ok, this is a method, a particular type of function associated with a class
instance. (All objects are class instances, and the class is their type.)

So to call this you would normally have a control object of some kind. [...]
Ah, it looks like you should have an SpiDev instance, inferring from this
code:
https://github.com/adafruit/Adafruit_Python_GPIO/blob/master/Adafruit_GPIO/SPI.py

So suppose you've got such an object and a variable referring to it, let's
say it is named "spi". You'd normally call the transfer function like this:
 spi.transfer(some_data)
[...]
When you call a Python method, the instance itself is implicitly passed as
the parameter "self" (well, the first parameter - in Python we always call
this "self" like C++ uses "this").

Ah, the one 'thorn' in my side is "this".  I have considerable
difficulty with the 'this' concept.  That probably means that my code
could be 'better', but I tend to avoid "this" like the plague.

It isn't as big a deal as you might imagine. An object method essentially gets the object as a piece of context. So a method call like this:

 spi.transfer(data)

just has the "spi" object available within the method for use, for example is probably is already set up with all the hardware access you're going to make use of.


So lower down in the function, when it goes:

 self._ft232h._write(str(bytearray(data)))

all you're doing is making use of the already initialised _ft232h object to do the write, and that comes along with the "spi" object you called the transfer method through.

So from your point of view it's just context for the transfer function.
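
If it helps, here's a rough Python 2 sketch of that shape; the class and attribute names are just stand-ins, not the real Adafruit code:

 class FakeFT232H(object):
     def _write(self, s):
         print("would write %r to the hardware" % (s,))

 class FakeSpiDev(object):
     def __init__(self, ft232h):
         # the hardware helper gets stored on the instance once, at set up time
         self._ft232h = ft232h
     def transfer(self, data):
         # later calls just reach for the context that came along with "self"
         self._ft232h._write(str(bytearray(data)))

 spi = FakeSpiDev(FakeFT232H())
 spi.transfer([1, 2, 3])    # Python passes spi in as "self" for you

The "spi" you made at set up time carries its helper around with it, so the transfer call needs nothing but the data.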

       logger.debug('SPI transfer with command {0:2X}.'.format(command))

Write a debugging message.

I saw this, but I don't know how to turn it 'on' so I could see the
log messages.  I wouldn't mind doing this with my code as well.
Perhaps creating my own log?  Right now, I have a verbose flag that I
pass to all my functions.. called disp.  Then for each place where I
want to see a value, etc, I have a line that reads:  if(disp): print "
What data is this ", data

Ah, logging. Yes, it is probably worth learning a little about, just to hook into it. You can at least get away from passing your "disp" variable around - just call the right logging call (warning, info, debug, error etc) and tune which messages get out from your main programme.

The logging module is documented here:

 https://docs.python.org/2.7/library/logging.html#module-logging

So your main programme would set up the logging, including a logging level. The module comes with five predefined levels from DEBUG through to CRITICAL. Then in your code you write messages like the debug one above. When you're debugging you might set the system's level to DEBUG and see heaps of messages, or INFO to see progress reporting, or WARNING to just see when things go wrong. Then your code just writes messages at a suitable level and the global setting controls which ones get out.

All you really need to do is "import logging" and then just call "logging.warning" or "logging.debug" etc. There's a tutorial here:

 https://docs.python.org/2.7/howto/logging.html#logging-basic-tutorial

Get off the ground like that and return to your main programme. The whole logging system can be rather complicated if you get sucked into learning it thoroughly.
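
To make that concrete, a minimal sketch (the messages and the 0x9F command byte are just examples):

 import logging

 # in your main programme: pick the threshold once, globally
 logging.basicConfig(level=logging.DEBUG)    # or logging.INFO, logging.WARNING, ...

 # then anywhere in your code, instead of "if(disp): print ...":
 logging.debug('SPI transfer with command %02X.', 0x9F)
 logging.info('starting the 512 byte loop transfer')
 logging.warning('device did not respond, retrying')

Flip the basicConfig level to logging.WARNING and the debug and info lines simply stop coming out; no need to touch the rest of the code.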

[...]
       # Send command and length.
       self._assert_cs()
I would guess that this raises a control signal. Ah, RS-232? So
clear-to-send then.

This is actually a chip select line that electrically wiggles a pin on
a device telling it 'Hey, I'm talking to YOU'.

Ah, hence the "assert". Ok, thanks.

       self._ft232h._write(str(bytearray((command, len_low, len_high))))

Ok, it looks like this is Python 2, not Python 3. Let's unpack it.

Yes, Adafruit insists that this work has to be done in Python 2.7.  I
started working on this and tried to use Python 3.x but things were
failing all over the place.  Unfortunately, at the time I didn't
understand that Python 3 wasn't backward compatible with Python 2.x,
1.x, etc..  Lesson learned.

Python 3 was the big break in the language, making a swathe of changes which broke backward compatibility, all for the good as far as I can see. Before that, changes worked hard to remain backward compatible and not break code. The idea with Python 3 was to make several breaking changes which had been bubbling away for years, and hopefully never make that break again in future.

Bytes versus strings versus unicode was a big change between Python 2 and 3, and code working at the hardware level like yours tends to notice it. So the library works with bytearrays (because Python 2 and 3 both have them and they look like buffers of bytes) but still has to convert these into str to write them to "files".

In Python 2 the "str" type is 8-bit characters (of no specified character set, btw) that look like characters (yea, even unto single characters really just being strings of length 1), and there's a separate "unicode" type for decent strings which support Unicode text.

In Python 3 "str" is Unicode, and there's a bytes types for bytes, which looks like a list if integers with values 0..255.

"(command, len_low, len_high)" is a tuple of 3 values. A tuple is like a
read only list. We're passing that to the bytearray() constructor, which
will accept an iterable of values and make a bytes buffer. And we want to
write that to "self._ft232h". Which expects a str, which is why I think this
is Python 2. In Python 2 "bytes" is just an alias for str, and str is an immutable
array of 8-bit character values. So writing bytes uses str.

Okay, so in Python 2 (the world I'm left to live within) if I want to
handle bytes, they actually have to be strings?  This sort of makes
sense.

Well, yes and no. File I/O .write methods tend to accept str. Which is 8-bit character values, so you can use them for bytes. Things genuinely using bytes (as small integers, as you do when assembling a flags value etc) can get by with bytearrays, which look like arrays of bytes (because they are).

But for historic reasons these interfaces will be writing "str" because that's what they expect to get.
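
A concrete run of that conversion, with made-up values for the command and length bytes:

 command, len_low, len_high = 0x31, 0x00, 0x02   # made-up example values
 ba = bytearray((command, len_low, len_high))    # tuple of small ints -> byte buffer
 print(ba[0])                                    # 49: the elements act as small integers
 print(repr(str(ba)))                            # '1\x00\x02': the str handed to _write

(That output is what Python 2 gives; Python 3's str() of a bytearray is a different beast.)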

When I try to display the contents of my arrays via a print
statement, I will get \xnn for my values, but at some point the nn
becomes what appears to be ASCII characters.  Sometimes I get, what I
would call, extended ASCII 'art' when I try to print the arrays.
Interesting art, but not very helpful in the information department.

Yeah, not so great. Python's trying to be slightly nice here. Values with printable ASCII correspondents get printed as that character and other values get the \xnn treatment. Handy if you're working with text (because many many character sets have the 128 ASCII values as their lower portion, so this is usually not insane), less handy for nontext.

Try importing the hexlify function from the binascii module:

 from binascii import hexlify
 bs = bytearray( (1,2,3,65,66) )
 print(repr(bs))
 print(hexlify(bs))
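
On Python 2 that should print something like bytearray(b'\x01\x02\x03AB') and then 0102034142: every byte comes out as exactly two hex digits, no ASCII art.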

[...]
faffing is a new term, but given the context I'm guessing it is
equivalent to 'mucking about' or more colorful wording which I won't
even attempt to publish here.

You are correct:

https://en.wikipedia.org/wiki/Glossary_of_British_terms_not_widely_used_in_the_United_States#F

Why this faffing about with str and bytearray? Probably for Python 2/3
compatibility, and because you want to deal with bytes (small ints) instead
of characters. Ignore it: we're dealing with bytes.

More questions in the next installment since the reconstruction
methods are discussed there..

Excellent.

Cheers,
Cameron Simpson <c...@cskk.id.au>