ANN: eGenix mxODBC 3.2.0 - Python ODBC Database Interface

2012-08-28 Thread eGenix Team: M.-A. Lemburg


ANNOUNCING

 eGenix.com mxODBC

   Python ODBC Database Interface

   Version 3.2.0


mxODBC is our commercially supported Python extension providing
 ODBC database connectivity to Python applications
on Windows, Mac OS X, Unix and BSD platforms.


This announcement is also available on our web-site for online reading:
http://www.egenix.com/company/news/eGenix-mxODBC-3.2.0-GA.html



INTRODUCTION

mxODBC provides an easy-to-use, high-performance, reliable and robust
Python interface to ODBC compatible databases such as MS SQL Server,
MS Access, Oracle Database, IBM DB2, Informix, Sybase ASE and
Sybase Anywhere, MySQL, PostgreSQL, SAP MaxDB and many more:

 http://www.egenix.com/products/python/mxODBC/

The "eGenix mxODBC - Python ODBC Database Interface" product is a
commercial extension to our open-source eGenix mx Base Distribution:

 http://www.egenix.com/products/python/mxBase/



NEWS

mxODBC 3.2.0 is a new release of our popular Python ODBC interface
for Windows, Linux, Mac OS X and FreeBSD.

New Features in 3.2
-------------------

 * Switched to unixODBC 2.3.1+ API: mxODBC is now compiled against
   unixODBC 2.3.1, which finally removes the problems with the ABI
   change between 2.2 and 2.3 by switching to a new library version
   (libodbc.so.2).

 * mxODBC connection objects can now be used as context managers to
   implicitly commit/rollback transactions.

 * mxODBC cursor objects can now be used as context managers to
   implicitly close the cursor when leaving the block (regardless of
   whether an exception was raised or not).

 * mxODBC added support for adjustable .paramstyles. Both 'qmark'
   (default) and 'named' styles are supported and can be set on
   connections and cursors. The 'named' style allows easier porting of
   e.g. Oracle native interface code to mxODBC.

 * mxODBC now supports a writable connection.autocommit attribute to
   easily turn on/off the connection's auto commit mode.

 * mxODBC added support for adjustable TIMESTAMP precision via the new
   connection/cursor.timestampresolution attribute.

 * mxODBC will round to nearest nanosecond fraction instead of
   truncating the value. This will result in fewer conversion errors
   due to floating point second values.

 * mxODBC's connect APIs Connect() and DriverConnect() support setting
   connection options prior to connecting to the database via a new
   connection_options parameter. This allows enabling e.g. the MARS
   feature in SQL Server Native Client.

 * The connection.cursor() constructor now has a new cursor_options
   parameter, which allows configuring the cursor with a set of cursor
   options.

 * The .scroll() method supports far more ODBC drivers than before.

 * Updated the SQL lookup object to include more ODBC SQL parameter
   codes, including special ones for SQL Server and IBM DB2.

 * mx.ODBC.Manager will now prefer unixODBC over iODBC. Previous
   mxODBC releases used the order iODBC, unixODBC, DataDirect when
   looking for a suitable ODBC manager on Unix platforms. unixODBC is
   more widely supported nowadays and provides better Unicode support
   than iODBC.
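
Since mxODBC itself is a commercial package, here is a minimal sketch of
the same context-manager patterns using the stdlib sqlite3 module, whose
DB-API connection objects likewise commit on success and roll back on
error; sqlite3 cursors are not context managers themselves, so
contextlib.closing stands in for mxODBC's cursor behaviour:

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# Connection as context manager: commits on success, rolls back on exception.
with conn:
    conn.execute("INSERT INTO t (x) VALUES (1)")

# Cursor closed on block exit, whether or not an exception was raised.
with closing(conn.cursor()) as cur:
    cur.execute("SELECT COUNT(*) FROM t")
    count = cur.fetchone()[0]

print(count)  # 1
```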

For the full set of features mxODBC has to offer, please see:

http://www.egenix.com/products/python/mxODBC/#Features
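
The two paramstyles can be illustrated with the stdlib sqlite3 module,
which happens to accept both placeholder forms (mxODBC's own API for
switching styles may differ in detail):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")

# 'qmark' style: positional ? placeholders (the mxODBC default)
conn.execute("INSERT INTO person VALUES (?, ?)", (1, "alice"))

# 'named' style: :name placeholders bound from a mapping
conn.execute("INSERT INTO person VALUES (:id, :name)",
             {"id": 2, "name": "bob"})

names = [row[0] for row in conn.execute("SELECT name FROM person ORDER BY id")]
print(names)
```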

Driver Compatibility Enhancements
---------------------------------

 * Added work-around for Oracle Instant Client to prevent use of
   direct execution. cursor.executedirect() will still work, but won't
   actually use direct execution with the Oracle driver.

 * Added work-around for Oracle Instant Client to prevent segfaults in
   the driver when querying the cursor.rowcount or cursor.rownumber.

 * Added check to make sure that Python type binding mode is not used
   with Oracle Instant Client as this can cause segfaults with the
   driver and generally doesn't work.

 * Added a work-around to have the IBM DB2 driver return correct
   .rowcount values.

 * Improved Sybase ASE driver compatibility: this driver only supports
   Python type binding, which is now enabled by default.

 * Added work-around for PostgreSQL driver, which doesn't support
   scrollable cursors.

 * Added support for the MS SQL Server ODBC Driver 1.0 for Linux.

 * Improved compatibility of the mxODBC native Unicode string format
   handling with Unix ODBC drivers when running UCS4 builds of Python.

 * mxODBC 3.2 now always uses direct execution with the FreeTDS ODBC
   driver. This results in better compatibility with SQL Server and
   faster execution across the board.

 * Added a work-around to have FreeTDS work with 64-bit integers
   outside the 32-bit signed integer range.

 * FreeTDS' .rowcount attribu

Re: Python 2.6 and Sqlite3 - Slow

2012-08-28 Thread Cameron Simpson
On 27Aug2012 13:41, bruceg113...@gmail.com  wrote:
| When using the database on my C Drive, Sqlite performance is great!   (<1S)
| When using the database on a network, Sqlite performance is terrible! (17S)

Let me first echo everyone saying not to use SQLite on a network file.

| I like your idea of trying Python 2.7

I doubt it will change anything.

| Finally, the way my program is written is:
|   loop for all database records:
|  read a database record
|  process data
|  display data (via wxPython)
| 
| Perhaps, this is a better approach:
|  read all database records
|  loop for all records:
| process data
| display data (via wxPython)

Yes, provided the "read all database records" is a single select
statement. In general, with any kind of remote resource you want to
minimise the number of transactions - the to-and-fro part - because each
round trip incurs latency while data is sent to and received from the
remote end. So if you can say "gimme all the records" you get one
"unit" of latency at the start and end, versus latency around each
record fetch.

Having said all that, because SQLite works directly against the file, if
you say to it "give me all the records" and the file is remote, SQLite
will probably _still_ fetch each record individually internally, gaining
you little.

This is why people are suggesting a database "server": then you can say
"get me all the records" over the net, and the server does
local-to-the-server file access to obtain the data. So all the "per
record" latency is at its end, and very small. Not to mention any
cacheing it may do.

Of course, if your requirements are very simple you might be better off
with a flat text file, possibly in CSV format, and avoid SQLite
altogether.
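
The single-select approach can be sketched with the stdlib sqlite3
module (an in-memory database and made-up table names stand in for the
real file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(50)])

# One "gimme all the records" statement, then purely local processing:
rows = conn.execute("SELECT id, payload FROM records").fetchall()
processed = [payload.upper() for _rid, payload in rows]  # stand-in for "process data"
print(len(processed))  # 50
```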

Cheers,
-- 
Cameron Simpson 

I do not trust thee, Cage from Hell, / The reason why I cannot tell, /
But this I know, and know full well: / I do not trust thee, Cage from Hell.
- Leigh Ann Hussey, leigh...@sybase.com, DoD#5913
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What do I do to read html files on my pc?

2012-08-28 Thread mikcec82
On Monday, 27 August 2012 12:59:02 UTC+2, mikcec82 wrote:
> Hallo,
>
> I have an html file on my pc and I want to read it to extract some text.
>
> Can you help on which libs I have to use and how can I do it?
>
> thank you so much.
>
> Michele

Thank you to all.

Hi Chris, thank you for your hint. I'll try to do as you said and to be clear:

I have to work on an HTML file. This file is not a website file, nor does it
come from the internet.
It is a file created by a local software (where "local" means "on my pc").

On this file, I need to do this operation:

1) Open the file
2) Check the occurrences of the strings:
2a) , in this case I have this code:




DTC CODE Read:



 
 
 
 
 





2b) NOT PASSED, in this case I have this code:




CODE CHECK


: NOT PASSED


Note: color in "" 
can be "red" or "orange"

2c) OK or PASSED
   
3) Then, I need to fill an Excel file following these rules:
3a) If 2a or 2b occurs in the HTML file, I'll write NOK in the Excel file
3b) If 2c occurs in the HTML file, I'll write OK in the Excel file

Note:
1) In this example, in the 2b case, I have "CODE CHECK" in the code, but I
could also have "TEXT CHECK" or "CHAR CHECK".
2) The search for occurrences can be done either by tag ("") or via the
strings (NOT PASSED, PASSED). But I would prefer to use the first method.
==

In my script I have used the second way to looking for, i.e.:

**
fileorig = "C:\Users\Mike\Desktop\\2012_05_16_1___p0201_13.html"

f = open(fileorig, 'r')
nomefile = f.read()

for x in nomefile:
    if '' in nomefile:
        print 'NOK'
    else:
        print 'OK'
**
But this one works on characters and not on strings (i.e.: in this way I have
searched not string by string, but character-by-character).

===

I hope I was clear.

Thank for your help
Michele
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What do I do to read html files on my pc?

2012-08-28 Thread Oscar Benjamin
On Tue, 28 Aug 2012 03:09:11 -0700 (PDT), mikcec82 
 wrote:

f = open(fileorig, 'r')
nomefile = f.read()




for x in nomefile:
    if '' in nomefile:
        print 'NOK'
    else:
        print 'OK'


You don't need the for loop. Just do:

nomefile = f.read()
if '' in nomefile:
   print('NOK')


**
But this one works on characters and not on strings (i.e.: in this way I
have searched NOT string by string, but character-by-character).


Oscar

--
http://mail.python.org/mailman/listinfo/python-list


Re: What do I do to read html files on my pc?

2012-08-28 Thread mikcec82
On Monday, 27 August 2012 12:59:02 UTC+2, mikcec82 wrote:
> Hallo,
>
> I have an html file on my pc and I want to read it to extract some text.
>
> Can you help on which libs I have to use and how can I do it?
>
> thank you so much.
>
> Michele

Hi Oscar,
I tried as you said and I've developed the code as you will see.
But, when I have such a situation in an html file, in which there is a
repetition of a string (XX in this case):
CODE Target:0201
CODE Read:  
CODE CHECK  : NOT PASSED
TEXT Target:  13
TEXT Read:XX
TEXT CHECK  : NOT PASSED
CHAR Target:  AA
CHAR Read:XX
CHAR CHECK  : NOT PASSED 

With this code (created starting from yours)

index = nomefile.find('')
print '_ found at location', index

index2 = nomefile.find('XX')
print 'XX_ found at location', index2

found = nomefile.find('XX')
while found > -1:
    print "XX found at location", found
    found = nomefile.find('XX', found+1)

I have an answer like this:

_ found at location 51315
XX_ found at location 51315
XX found at location 51315
XX found at location 51316
XX found at location 51317
XX found at location 52321
XX found at location 53328

I have done it to find all occurrences of the '' and 'XX' strings. But, as you
can see, the script finds occurrences of XX also at locations 51315, 51316,
51317, corresponding to the string .

Is there a way to search for all occurrences of XX while avoiding the  location?

Thank you.
Michele
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What do I do to read html files on my pc?

2012-08-28 Thread Peter Otten
mikcec82 wrote:

> On Monday, 27 August 2012 12:59:02 UTC+2, mikcec82 wrote:
>> Hallo,
>>
>> I have an html file on my pc and I want to read it to extract some text.
>>
>> Can you help on which libs I have to use and how can I do it?
>>
>> thank you so much.
>>
>> Michele
> 
> Hi Oscar,
> I tried as you said and I've developed the code as you will see.
> But, when I have such a situation in an html file, in which there is a
> repetition of a string (XX in this case):
> CODE Target:  0201
> CODE Read:
> CODE CHECK: NOT PASSED
> TEXT Target:  13
> TEXT Read:  XX
> TEXT CHECK: NOT PASSED
> CHAR Target:AA
> CHAR Read:  XX
> CHAR CHECK: NOT PASSED
> 
> With this code (created starting from yours)
> 
> index = nomefile.find('')
> print '_ found at location', index
> 
> index2 = nomefile.find('XX')
> print 'XX_ found at location', index2
> 
> found = nomefile.find('XX')
> while found > -1:
>     print "XX found at location", found
>     found = nomefile.find('XX', found+1)
> 
> I have an answer like this:
> 
> _ found at location 51315
> XX_ found at location 51315
> XX found at location 51315
> XX found at location 51316
> XX found at location 51317
> XX found at location 52321
> XX found at location 53328
> 
> I have done it to find all occurrences of the '' and 'XX' strings. But, as
> you can see, the script finds occurrences of XX also at locations
> 51315, 51316, 51317, corresponding to the string .
> 
> Is there a way to search for all occurrences of XX while avoiding the  location?

Remove the wrong positives afterwards:

start = nomefile.find("XX")
while start != -1:
    if nomefile[start:start+4] == "":
        start += 4
    else:
        print "XX found at location", start
        start += 3
    start = nomefile.find("XX", start)

By the way, what do you want to do if there are runs of "X" with repeats 
other than 2 or 4?
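
An alternative sketch of the same filtering with the stdlib re module,
using lookarounds so that "XX" only matches when it is not embedded in a
longer run of X's (the sample text here is illustrative):

```python
import re

text = "XXXX some XX here XX and XXXX"
# Lookbehind/lookahead reject an 'XX' that is part of a longer X-run.
positions = [m.start() for m in re.finditer(r"(?<!X)XX(?!X)", text)]
print(positions)  # the two standalone XX occurrences
```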

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6 and Sqlite3 - Slow

2012-08-28 Thread bruceg113355
On Tuesday, August 28, 2012 4:27:48 AM UTC-4, Cameron Simpson wrote:
> On 27Aug2012 13:41, bruceg113...@gmail.com  wrote:
> | When using the database on my C Drive, Sqlite performance is great!   (<1S)
> | When using the database on a network, Sqlite performance is terrible! (17S)
>
> Let me first echo everyone saying not to use SQLite on a network file.
>
> | I like your idea of trying Python 2.7
>
> I doubt it will change anything.
>
> | Finally, the way my program is written is:
> |   loop for all database records:
> |  read a database record
> |  process data
> |  display data (via wxPython)
> |
> | Perhaps, this is a better approach:
> |  read all database records
> |  loop for all records:
> | process data
> | display data (via wxPython)
>
> Yes, provided the "read all database records" is a single select
> statement. In general, with any kind of remote resource you want to
> minimise the number of transactions - the to-and-fro part - because each
> round trip incurs latency while data is sent to and received from the
> remote end. So if you can say "gimme all the records" you get one
> "unit" of latency at the start and end, versus latency around each
> record fetch.
>
> Having said all that, because SQLite works directly against the file, if
> you say to it "give me all the records" and the file is remote, SQLite
> will probably _still_ fetch each record individually internally, gaining
> you little.
>
> This is why people are suggesting a database "server": then you can say
> "get me all the records" over the net, and the server does
> local-to-the-server file access to obtain the data. So all the "per
> record" latency is at its end, and very small. Not to mention any
> cacheing it may do.
>
> Of course, if your requirements are very simple you might be better off
> with a flat text file, possibly in CSV format, and avoid SQLite
> altogether.
>
> Cheers,
> -- 
> Cameron Simpson 



Cameron,

I did some testing and approach #1 is significantly faster than approach #2:
Approach #1:
  read all database records
  loop for all records:
 process data
 display data (via wxPython) 

Approach #2:
   loop for all database records:
  read a database record
  process data
  display data (via wxPython)

Various test results to read 50 records from a network drive. 
  #1  0:00:00.078000 
  #2  0:00:04.219000

  #1  0:00:00.875000
  #2  0:00:08.031000

  #1  0:00:00.063000
  #2  0:00:06.109000

  #1  0:00:00.078000
  #2  0:00:05.11

  #1  0:00:00.156000
  #2  0:00:02.625000

This explains some of my slowness issues.

Note: When the network drive is behaving (not slow), approach #2 is close to 
approach #1.


From the site: http://www.sqlite.org/different.html
--
Most SQL database engines are implemented as a separate server process. 
Programs that want to access the database communicate with the server using 
some kind of interprocess communication (typically TCP/IP) to send requests to 
the server and to receive back results. SQLite does not work this way. With 
SQLite, the process that wants to access the database reads and writes directly 
from the database files on disk. There is no intermediary server process.

There are advantages and disadvantages to being serverless. The main 
advantage is that there is no separate server process to install, setup, 
configure, initialize, manage, and troubleshoot. This is one reason why SQLite 
is a "zero-configuration" database engine. Programs that use SQLite require no 
administrative support for setting up the database engine before they are run. 
Any program that is able to access the disk is able to use an SQLite database.

On the other hand, a database engine that uses a server can provide better 
protection from bugs in the client application - stray pointers in a client 
cannot corrupt memory on the server. And because a server is a single 
persistent process, it is able to control database access with more precision, 
allowing for finer grain locking and better concurrency.

Most SQL database engines are client/server based. Of those that are 
serverless, SQLite is the only one that this author knows of that allows 
multiple applications to access the same database at the same time. 
--


Doesn't the last paragraph imply that SQLite can operate on a network drive?

Thanks,
Bruce 

-- 
http://mail.python.org/mailman/listinfo/python-list


ctypes - python2.7.3 vs python3.2.3

2012-08-28 Thread Rolf
ctypes works as I would expect with python2.7.3.

However, when I upgrade to python3.2.3 things don't seem to work right. Look 
below for details.

I am not sure where I am going wrong.

Shared Library
==
#include <stdint.h>
#include <string.h>

extern "C"
{
   int main();
   uint32_t myfunction (char **);
}

uint32_t myfunction (char ** _mydata)
{
   char mydata[16];

   strcpy(mydata, "Hello Dude!");

   *_mydata = mydata;

   return 0;
}

int main()
{
   return 0;
}

Python 2.7.3 which works as I would expect
==
> python2.7 -V
Python 2.7.3

> cat py27.py
#!/usr/bin/env python2.7

from __future__ import print_function
from __future__ import unicode_literals

from ctypes import *

lib = CDLL('libtest.so')
o_result = c_char_p()
lib.myfunction(pointer(o_result))
print(repr(o_result.value))

> ./py27.py
'Hello Dude!'

Python 3.2.3 return string gets mangled
===
> python3 -V
Python 3.2.3

> cat py3.py
#!/usr/bin/env python3

from ctypes import *

lib = CDLL('libtest.so')
o_result = c_char_p()
lib.myfunction(pointer(o_result))
print(repr(o_result.value))

> ./py3.py
b'\xd8\xb0y\to Dude!'

Every time I run it, I get a different set of values.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes - python2.7.3 vs python3.2.3

2012-08-28 Thread John Gordon
In <18eb8025-7545-4d10-9e76-2e41deaad...@googlegroups.com> Rolf 
 writes:

> uint32_t myfunction (char ** _mydata)
> {
>char mydata[16];

>strcpy(mydata, "Hello Dude!");

>*_mydata = mydata;

>return 0;
> }

mydata is an auto variable, which goes out of scope when myfunction()
exits.  *_mydata ends up pointing to garbage.
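
A common fix is to let the caller own the buffer and have the C function
fill it (e.g. a hypothetical signature uint32_t myfunction(char *out,
size_t len)). On the Python side that pattern looks like the sketch
below; ctypes.memmove stands in for the C library's copy, since the real
shared library isn't available here:

```python
import ctypes

# Caller-allocated buffer: Python owns the 16 bytes, just like the C
# array did, but this memory stays valid after the call returns.
buf = ctypes.create_string_buffer(16)

# In real code the library would fill the buffer:
#     lib.myfunction(buf, ctypes.sizeof(buf))   # hypothetical call
ctypes.memmove(buf, b"Hello Dude!\x00", 12)     # stand-in for the C strcpy

print(repr(buf.value))  # b'Hello Dude!'
```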

-- 
John Gordon   A is for Amy, who fell down the stairs
gor...@panix.com  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes - python2.7.3 vs python3.2.3

2012-08-28 Thread MRAB

On 28/08/2012 22:35, Rolf wrote:

ctypes works as I would expect with python2.7.3.

However, when I upgrade to python3.2.3 things don't seem to work right. Look 
below for details.

I am not sure where I am going wrong.

Shared Library
==
#include <stdint.h>
#include <string.h>

extern "C"
{
int main();
uint32_t myfunction (char **);
}

uint32_t myfunction (char ** _mydata)
{
char mydata[16];

strcpy(mydata, "Hello Dude!");

*_mydata = mydata;

return 0;
}

int main()
{
return 0;
}


[snip]
What you're doing in 'myfunction' looks wrong to start with. It's
returning the address of the local array 'mydata' which allocated on
the stack when the function is entered. When the function is left it's
deallocated, so the address becomes a dangling pointer. That it gave a
reasonable result with Python 2.7.3 is down to pure luck.
--
http://mail.python.org/mailman/listinfo/python-list


Re: protobuf + pypy

2012-08-28 Thread Natalia Bidart
On Tue, Aug 21, 2012 at 6:55 PM, Pedro Larroy
 wrote:
> Hi
>
> Anyone knows if it's possible to use protobuffers with pypy?   Seems
> there isn't much info on the web about this.

So, in my experience, the easiest way to confirm whether something works
with PyPy (when you can't find proper documentation on the web) is to
try to install it in a pypy virtualenv [0]:

(my-pypy-env)nessita@dali:~/projects/pypy/my-pypy-env$ pip install protobuf
Downloading/unpacking protobuf
  Downloading protobuf-2.4.1.tar.gz (56Kb): 56Kb downloaded
  Storing download in cache at
/home/nessita/.pip_download_cache/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fprotobuf%2Fprotobuf-2.4.1.tar.gz
  Running setup.py egg_info for package protobuf

Requirement already satisfied (use --upgrade to upgrade): distribute
in ./site-packages/distribute-0.6.24-py2.7.egg (from protobuf)
Installing collected packages: protobuf
  Running setup.py install for protobuf
Skipping installation of
/home/nessita/projects/pypy/my-pypy-env/site-packages/google/__init__.py
(namespace package)

Installing 
/home/nessita/projects/pypy/my-pypy-env/site-packages/protobuf-2.4.1-py2.7-nspkg.pth
Successfully installed protobuf
Cleaning up...

(my-pypy-env)nessita@dali:~/projects/pypy/my-pypy-env$ pypy
Python 2.7.2 (341e1e3821ff, Jun 07 2012, 15:38:48)
[PyPy 1.9.0 with GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``Python 2.x is not dead''
 from google.protobuf import descriptor

Seems to work :-) (though I have no app using it right now).

Cheers, Natalia.

[0] Instructions on how to create a PyPy virtualenv:
http://morepypy.blogspot.com.ar/2010/08/using-virtualenv-with-pypy.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6 and Sqlite3 - Slow

2012-08-28 Thread Pedro Larroy
Try increasing cursor.arraysize a lot.
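
For illustration, arraysize sets the default batch size of fetchmany();
a minimal sqlite3 sketch (the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

cur = conn.cursor()
cur.arraysize = 50          # rows fetched per fetchmany() call
cur.execute("SELECT x FROM t")
batch = cur.fetchmany()     # uses arraysize as the default batch size
print(len(batch))  # 50
```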

Pedro.

On Tue, Aug 28, 2012 at 11:20 PM, Dennis Lee Bieber
 wrote:
> On Tue, 28 Aug 2012 10:25:35 -0700 (PDT), bruceg113...@gmail.com
> declaimed the following in gmane.comp.python.general:
>
>>
>> Doesn't the last paragraph imply that SQLite can operate on a network drive.
>>
>
> Most anything "can operate" on a network drive... But should it?
>
> The main thing the documentation is explaining is that one
> application accessing the database FILE does NOT LOCK OTHERS from
> accessing the file. Nothing about how the file is accessed. A
> low-activity web service would allow lots of people to concurrently
> access it -- but the processes that are doing said access are all local
> to the database file.
>
> Technically, M$ Access/JET (which is also file server database) also
> permits multiple clients -- but the locking becomes a pain.
> --
> Wulfraed Dennis Lee Bieber AF6VN
> wlfr...@ix.netcom.comHTTP://wlfraed.home.netcom.com/
>
> --
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread 9bizy
This is what I have to reproduce the challenge I am having below:


import csv
import struct


data = []

for Node in csv.reader(file('s_data.xls')):
    data.append(list((file('s_data.xls'


data = struct.unpack('!B4HH', data)
print "s_data.csv: ", data

I tried so many formats for struct.unpack but I got this error:

Traceback (most recent call last):
  
data = struct.unpack('!B4HH', data)
struct.error: unpack requires a string argument of length 11

On Saturday, 25 August 2012 19:34:39 UTC+1, 9bizy wrote:
> I am trying to unpack values from sensor data I am retrieving through a
> serial cable, but I get errors while using struct.unpack. How can I use
> struct.unpack to unload the data in a readable format?
>
> I checked the python documentation for struct and I can't seem to find
> any argument for this.
>
> I have data = struct.unpack('char',data) but I still get errors
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread 9bizy
On Saturday, 25 August 2012 20:16:54 UTC+1, MRAB wrote:
> On 25/08/2012 19:34, 9bizy wrote:
> > I am trying to unpack values from sensor data I am retrieving through
> > a serial cable, but I get errors while using struct.unpack, how can I
> > use struct.unpack to unload the data in a readable format?
> >
> > I checked the python documentation for struct and I can't seem to find
> > any argument for this.
> >
> > I have data = struct.unpack('char',data) but I still get errors
> >
> The format strings are described here for Python 3:
>
>  http://docs.python.org/3.2/library/struct.html
>
> and here for Python 2:
>
>  http://docs.python.org/2.7/library/struct.html

I used these documents but they do not explain or provide an example of how to
use struct.unpack for sensor data from an external source, or even data from an
Excel sheet.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread MRAB

On 28/08/2012 23:35, 9bizy wrote:

On Saturday, 25 August 2012 20:16:54 UTC+1, MRAB  wrote:

On 25/08/2012 19:34, 9bizy wrote:

> I am trying to unpack values from sensor data I am retrieving through

> a serial cable, but I get errors while using struct.unpack, how can I

> use struct.unpack to unload the data in a readable format?

>

> I checked the python documentation for struct and I can seen to find

> any argument for this.

>

> I have data = struct.unpack('char',data) but I still get errors

>

The format strings are described here for Python 3:



 http://docs.python.org/3.2/library/struct.html



and here for Python 2:



 http://docs.python.org/2.7/library/struct.html


I used these documents but they do not explain or provide an example of how to
use struct.unpack for sensor data from an external source, or even data from an
Excel sheet.


If you want to read from an Excel file you should be using the 'xlrd'
module. You can find it here: http://www.python-excel.org/
--
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread MRAB

On 28/08/2012 23:34, 9bizy wrote:

This is what I have to reproduce the challenge I am having below:


import csv
import struct


data = []

for Node in csv.reader(file('s_data.xls')):


That tries to read the file as CSV, but, judging from the extension,
it's in Excel's format. You don't even use what is read, i.e. Node.


 data.append(list((file('s_data.xls'


That opens the file again and 'list' causes it to read the file as
though it were a series of lines in a text file, which, as I've said,
it looks like it isn't. The list of 'lines' is appended to the list
'data', so that's a list of lists.


 data = struct.unpack('!B4HH', data)
 print "s_data.csv: ", data

I tried so many formats for struct.unpack but I got this error:

Traceback (most recent call last):

 data = struct.unpack('!B4HH', data)
struct.error: unpack requires a string argument of length 11


[snip]
It's complaining because it's expecting a string argument but you're
giving it a list instead.

--
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread 9bizy
On Tuesday, 28 August 2012 23:49:54 UTC+1, MRAB  wrote:
> On 28/08/2012 23:34, 9bizy wrote:
> > This is what I have to reproduce the challenge I am having below:
> >
> > import csv
> > import struct
> >
> > data = []
> >
> > for Node in csv.reader(file('s_data.xls')):
>
> That tries to read the file as CSV, but, judging from the extension,
> it's in Excel's format. You don't even use what is read, i.e. Node.
>
> >  data.append(list((file('s_data.xls'
> >
> That opens the file again and 'list' causes it to read the file as
> though it were a series of lines in a text file, which, as I've said,
> it looks like it isn't. The list of 'lines' is appended to the list
> 'data', so that's a list of lists.
>
> >  data = struct.unpack('!B4HH', data)
> >  print "s_data.csv: ", data
> >
> > I tried so many formats for struct.unpack but I got this error:
> >
> > Traceback (most recent call last):
> >  data = struct.unpack('!B4HH', data)
> > struct.error: unpack requires a string argument of length 11
> >
> [snip]
> It's complaining because it's expecting a string argument but you're
> giving it a list instead.

How do I then convert data to a string argument in this case?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: popen4 - get exit status

2012-08-28 Thread Tim Johnson
* Dennis Lee Bieber  [120828 07:11]:
> On Mon, 27 Aug 2012 15:43:59 -0800, Tim Johnson 
> declaimed the following in gmane.comp.python.general:
> 
> > * Benjamin Kaplan  [120827 15:20]:
> > > The popen* functions are deprecated. You should use the subprocess module
> > > instead.
> >   No, I'm stuck with py 2.4 on one of the servers I'm using and
> 
>   Shouldn't be a problem:
> 
> -=-=-=-
> 17.1 subprocess -- Subprocess management 
> 
> New in version 2.4. 

  Thanks Dennis, I had misread the docs, you're right - after
  rereading them I was able to implement subprocess - glad to have
  it as I no longer have any servers to 'service' with anything
  older than 2.4. And Ben, please accept my apologies for seeming so
  dismissive.

  cheers
  tj

> The subprocess module allows you to spawn new processes, connect to
> their input/output/error pipes, and obtain their return codes. This
> module intends to replace several other, older modules and functions,
> such as: 
> -=-=-=-
> -- 
>   Wulfraed Dennis Lee Bieber AF6VN
> wlfr...@ix.netcom.comHTTP://wlfraed.home.netcom.com/
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: issue with struct.unpack

2012-08-28 Thread MRAB
On 29/08/2012 00:01, 9bizy wrote:
> On Tuesday, 28 August 2012 23:49:54 UTC+1, MRAB wrote:

>> On 28/08/2012 23:34, 9bizy wrote:
>> > This is what I have to reproduce the challenge I am having below:
>> >
>> > import csv
>> > import struct
>> >
>> > data = []
>> >
>> > for Node in csv.reader(file('s_data.xls')):
>>
>> That tries to read the file as CSV, but, judging from the extension,
>> it's in Excel's format. You don't even use what is read, i.e. Node.
>>
>> >  data.append(list((file('s_data.xls'
>> >
>> That opens the file again and 'list' causes it to read the file as
>> though it were a series of lines in a text file, which, as I've said,
>> it looks like it isn't. The list of 'lines' is appended to the list
>> 'data', so that's a list of lists.
>> >
>> >  data = struct.unpack('!B4HH', data)
>> >  print "s_data.csv: ", data
>> >
>> > I tries so many format for the struct.unpack but I got this errors:
>> >
>> > Traceback (most recent call last):
>> >
>> >  data = struct.unpack('!B4HH', data)
>> > struct.error: unpack requires a string argument of length 11
>> >
>> [snip]
>> It's complaining because it's expecting a string argument but you're
>> giving it a list instead.
>
> How do I then convert data to a string argument in this case?
>
The question is: what are you trying to do?

If you're trying to read an Excel file, then you should be trying the
'xlrd' module. You can find it here: http://www.python-excel.org/

If your trying to 'decode' a binary file, then you should open it in
binary mode (with "rb"), read (some of) it as a byte string and then
pass it to struct.unpack.
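
That advice can be sketched in a few lines. The format '!B4HH' is the OP's own
(1 + 4*2 + 2 = 11 bytes, matching the error message); the filename in the
commented-out part is hypothetical, and in Python 3 the argument must be bytes
rather than a str:

```python
import struct

fmt = '!B4HH'                   # the OP's format: 1 + 4*2 + 2 = 11 bytes
print(struct.calcsize(fmt))     # 11, matching the error message

# Build the 11 bytes in memory instead of reading a real file:
raw = struct.pack(fmt, 7, 1, 2, 3, 4, 5)
print(struct.unpack(fmt, raw))  # (7, 1, 2, 3, 4, 5)

# With a real binary file the pattern would be:
# with open('s_data.bin', 'rb') as f:   # hypothetical filename
#     fields = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
```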


Re: issue with struct.unpack

2012-08-28 Thread 9bizy
On Wednesday, 29 August 2012 00:36:40 UTC+1, MRAB  wrote:
> On 29/08/2012 00:01, 9bizy wrote:
> > On Tuesday, 28 August 2012 23:49:54 UTC+1, MRAB wrote:
> >> On 28/08/2012 23:34, 9bizy wrote:
> >> > This is what I have to reproduce the challenge I am having below:
> >> >
> >> > import csv
> >> > import struct
> >> >
> >> > data = []
> >> >
> >> > for Node in csv.reader(file('s_data.xls')):
> >>
> >> That tries to read the file as CSV, but, judging from the extension,
> >> it's in Excel's format. You don't even use what is read, i.e. Node.
> >>
> >> >  data.append(list((file('s_data.xls'
> >> >
> >> That opens the file again and 'list' causes it to read the file as
> >> though it were a series of lines in a text file, which, as I've said,
> >> it looks like it isn't. The list of 'lines' is appended to the list
> >> 'data', so that's a list of lists.
> >> >
> >> >  data = struct.unpack('!B4HH', data)
> >> >  print "s_data.csv: ", data
> >> >
> >> > I tried so many formats for struct.unpack but I got this error:
> >> >
> >> > Traceback (most recent call last):
> >> >
> >> >  data = struct.unpack('!B4HH', data)
> >> > struct.error: unpack requires a string argument of length 11
> >> >
> >> [snip]
> >> It's complaining because it's expecting a string argument but you're
> >> giving it a list instead.
> >
> > How do I then convert data to a string argument in this case?
> >
> The question is: what are you trying to do?
>
> If you're trying to read an Excel file, then you should be trying the
> 'xlrd' module. You can find it here: http://www.python-excel.org/
>
> If you're trying to 'decode' a binary file, then you should open it in
> binary mode (with "rb"), read (some of) it as a byte string and then
> pass it to struct.unpack.

Thank you MRAB this was helpful.


Sending USB commands with Python

2012-08-28 Thread Adam W.
So I'm trying to get as low level as I can with my Dymo label printer, and the
method described in the PDF
http://sites.dymo.com/Documents/LW450_Series_Technical_Reference.pdf seems to
be it.

I'm unfamiliar with dealing with the USB interface and would greatly appreciate 
it if someone could tell me how to send and receive these commands with Python. 
 Perhaps if you were feeling generous and wanted to write a bit of sample code, 
sending the "Get Printer Status" command and receiving the response (page 17 of 
the PDF) would be perfect to get me on my way.

Thanks,
Adam


Re: Make error when installing Python 1.5

2012-08-28 Thread Jason Swails
On Sun, Aug 26, 2012 at 9:54 PM, Steven D'Aprano <
steve+comp.lang.pyt...@pearwood.info> wrote:

> Yes, you read the subject line right -- Python 1.5. Yes, I am nuts ;)
>
> (I like having old versions of Python around for testing historical
> behaviour.)
>
> On Debian squeeze, when I try to build Python 1.5, I get this error:
>
> fileobject.c:590: error: conflicting types for ‘getline’
> /usr/include/stdio.h:651: note: previous declaration of ‘getline’ was here
> make[1]: *** [fileobject.o] Error 1
> make[1]: Leaving directory `/home/steve/personal/python/Python-1.5.2/
> Objects'
> make: *** [Objects] Error 2
>

FWIW, I got the same error when I tried (Gentoo, with both GCC 4.1.2 and
4.5.3), and it worked just fine when I tried it on a CentOS 5 machine
(consistent with your observations).  There's a reasonably easy fix,
though, that appears to work.

You will need the compile line for that source file (and you'll need to go
into the Objects/ dir).  For me it was:

gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o fileobject.o
fileobject.c

Following Cameron's advice, use the -E flag to produce a pre-processed
source file, as in the command below:

gcc -E -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o fileobject_.c
fileobject.c

Edit this fileobject_.c file and remove the stdio prototype of getline.
 Then recompile using the original compile line (on fileobject_.c):

gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o fileobject.o
fileobject_.c

For me this finishes fine.  Then go back to the top-level directory and
resume "make".  It finished for me (and seems to be working):

Batman src # python1.5
Python 1.5.2 (#1, Aug 28 2012, 20:13:23)  [GCC 4.5.3] on linux3
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import sys
>>> dir(sys)
['__doc__', '__name__', '__stderr__', '__stdin__', '__stdout__', 'argv',
'builtin_module_names', 'copyright', 'exc_info', 'exc_type', 'exec_prefix',
'executable', 'exit', 'getrefcount', 'hexversion', 'maxint', 'modules',
'path', 'platform', 'prefix', 'ps1', 'ps2', 'setcheckinterval',
'setprofile', 'settrace', 'stderr', 'stdin', 'stdout', 'version']
>>> sys.version
'1.5.2 (#1, Aug 28 2012, 20:13:23)  [GCC 4.5.3]'
>>>
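
The manual steps above can also be scripted. This is a sketch, not a tested
build recipe: the sed pattern assumes the getline prototype survives
preprocessing on one matchable line, and GNU-style `sed -i` is assumed (BSD
sed wants `-i ''`). The demo at the end exercises only the sed step, on a
stand-in file:

```shell
# Real workflow (run in Objects/, flags copied from the post):
#   gcc -E -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H -o fileobject_.c fileobject.c
#   sed -i '/getline/d' fileobject_.c
#   gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H -c -o fileobject.o fileobject_.c

# Self-contained demo of the sed step on a stand-in file:
printf 'int x;\nextern int getline(char **lineptr, unsigned *n);\nint y;\n' > demo_fileobject.c
sed -i '/getline/d' demo_fileobject.c
cat demo_fileobject.c
```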

Good luck,
Jason


xlrd-0.8.0 .xlsx formatting_info=True not implemented

2012-08-28 Thread python-excel
hi,

i just tried xlrd-0.8.0 so as to be able to read xlsx files only to discover:

  NotImplementedError: formatting_info=True not yet implemented

there's a post from 2009 stating that the current intention is to not
support formatting_info:

  https://groups.google.com/forum/?fromgroups=#!topic/python-excel/Thso62fdiSk

is that still the current intention?

if so, is there any other way to tell how many digits excel would round to
when displaying a floating point number? that's my only reason for needing
formatting_info=True.

cheers,
raf



Re: Flexible string representation, unicode, typography, ...

2012-08-28 Thread rusi
On Aug 28, 4:57 am, Neil Hodgson  wrote:
> wxjmfa...@gmail.com:
>
> > Go "has" the integers int32 and int64. A rune ensure
> > the usage of int32. "Text libs" use runes. Go has only
> > bytes and runes.
>
>      Go's text libraries use UTF-8 encoded byte strings. Not arrays of
> runes. See, for example,http://golang.org/pkg/regexp/
>
>     Are you claiming that UTF-8 is the optimum string representation and
> therefore should be used by Python?
>
>     Neil




This whole rune/go business is a red-herring.
In the other thread Peter Otten wrote:

> wxjmfa...@gmail.com wrote:
> > By chance and luckily, first attempt.
> > c:\python32\python -m timeit "('€'*100+'€'*100).replace('€'
> > , 'œ')"
> > 100 loops, best of 3: 1.48 usec per loop
> > c:\python33\python -m timeit "('€'*100+'€'*100).replace('€'
> > , 'œ')"
> > 10 loops, best of 3: 7.62 usec per loop
>
> OK, that is roughly factor 5. Let's see what I get:
>
> $ python3.2 -m timeit '("€"*100+"€"*100).replace("€", "œ")'
> 10 loops, best of 3: 1.8 usec per loop
> $ python3.3 -m timeit '("€"*100+"€"*100).replace("€", "œ")'
> 1 loops, best of 3: 9.11 usec per loop
>
> That is factor 5, too. So I can replicate your measurement on an AMD64 Linux
> system with self-built 3.3 versus system 3.2.
>
> > Note
> > The used characters are not members of the latin-1 coding
> > scheme (btw an *unusable* coding).
> > They are however charaters in cp1252 and mac-roman.
>
> You seem to imply that the slowdown is connected to the inability of latin-1
> to encode "œ" and "€" (to take the examples relevant to the above
> microbench). So let's repeat with latin-1 characters:
>
> $ python3.2 -m timeit '("ä"*100+"ä"*100).replace("ä", "ß")'
> 10 loops, best of 3: 1.76 usec per loop
> $ python3.3 -m timeit '("ä"*100+"ä"*100).replace("ä", "ß")'
> 1 loops, best of 3: 10.3 usec per loop
>
> Hm, the slowdown is even a tad bigger. So we can safely dismiss your theory
> that an unfortunate choice of the 8 bit encoding is causing it. Do you

In summary:
1. The problem is not on jmf's computer
2. It is not windows-only
3. It is not directly related to latin-1 encodable or not

The only question which is not yet clear is this:
Given a typical string operation that is complexity O(n), in more
detail it is going to be O(a + bn).
If only a is worse going from 3.2 to 3.3, it may be a small issue.
If b is worse by even a tiny amount, it is likely to be a significant
regression for some use-cases.

So doing some arm-chair thinking (I don't know the code and difficulty
involved):

Clearly there are 3 string-engines in the python 3 world:
- 3.2 narrow
- 3.2 wide
- 3.3 (flexible)

How difficult would it be to give the choice of string engine as a
command-line flag?
This would avoid the nuisance of having two binaries -- narrow and
wide.
And it would give the python programmer a choice of efficiency
profiles.
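
The O(a + bn) framing above can be probed empirically with the stdlib timeit
module. This is a rough sketch only: timings are machine- and build-dependent,
and the two-point linear fit is noisy, so the estimate of a in particular
should not be over-read:

```python
import timeit

def per_call(n, number=200):
    """Average time of one str.replace over a string of length n."""
    return timeit.timeit("s.replace('a', 'b')",
                         setup="s = 'a' * %d" % n,
                         number=number) / number

t_small = per_call(100)
t_large = per_call(10000)

# Fit t(n) = a + b*n through the two points:
b = (t_large - t_small) / (10000 - 100)  # per-character cost
a = t_small - b * 100                    # fixed overhead (noisy estimate)
print("a = %.3g s, b = %.3g s/char" % (a, b))
```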


Re: Sending USB commands with Python

2012-08-28 Thread Jorge Mazzonelli
Hi,
I recommend the use of the module PyUSB in sourceforge:
http://pyusb.sourceforge.net/

Also take a look to the tutorial :
http://pyusb.sourceforge.net/docs/1.0/tutorial.html

As far as I can remember, you'll first need to find the device based on the
idVendor / idProduct (provided in the PDF). Then you'll need to set up the
configuration (usually the default, but in your case you'll need to check
which one, since the doc says there are two exposed), and then the endpoints.
(All of this is in the tutorial.)

With that you need to write the command and then read the result from the
endpoints.
The status command you want will be 0x1B 0x41.

Hope this helps.

Jorge

On Tue, Aug 28, 2012 at 9:04 PM, Adam W.  wrote:

> So I'm trying to get as low level as I can with my Dymo label printer, and
> this method described the PDF
> http://sites.dymo.com/Documents/LW450_Series_Technical_Reference.pdf seems
> to be it.
>
> I'm unfamiliar with dealing with the USB interface and would greatly
> appreciate it if someone could tell me how to send and receive these
> commands with Python.  Perhaps if you were feeling generous and wanted to
> write a bit of sample code, sending the "Get Printer Status" command and
> receiving the response (page 17 of the PDF) would be perfect to get me on
> my way.
>
> Thanks,
> Adam
> --
> http://mail.python.org/mailman/listinfo/python-list
>





Re: Sending USB commands with Python

2012-08-28 Thread hamilton

On 8/28/2012 8:54 PM, Dennis Lee Bieber wrote:
> 2)  does the printer appear as a serial port by the OS? Or as a
> printer device?

The OP posted the link to the manual.

If you're not going to at least look it over, ...


USB Printer Interface

The LabelWriter 450 series printers all communicate with the host 
computer using a full-speed USB 2.0 interface. This interface also 
operates with USB Version 1.1 or later. The printers implement the 
standard USB Printer Class Device interface for communications (see 
http://www.usb.org/developers/devclass/).


hamilton

PS: Page 14


Re: Flexible string representation, unicode, typography, ...

2012-08-28 Thread Chris Angelico
On Wed, Aug 29, 2012 at 12:42 PM, rusi  wrote:
> Clearly there are 3 string-engines in the python 3 world:
> - 3.2 narrow
> - 3.2 wide
> - 3.3 (flexible)
>
> How difficult would it be to give the choice of string engine as a
> command-line flag?
> This would avoid the nuisance of having two binaries -- narrow and
> wide.
> And it would give the python programmer a choice of efficiency
> profiles.

To what benefit?

3.2 narrow is, I would have to say, buggy. It handles everything up to
U+FFFF without problems, but once you have any character beyond that,
your indexing and slicing are wrong.

3.2 wide is fine but memory-inefficient.

3.3 is never worse than 3.2 except for some tiny checks, and will be
more memory-efficient in many cases.

Supporting narrow would require fixing the handling of surrogates.
Potentially a huge job, and you'll end up with ridiculous performance
in many cases.

So what you're really asking for is a command-line option to force all
strings to have their 'kind' set to 11, UCS-4 storage. That would be
doable, I suppose; it wouldn't require many changes (just a quick
check in string creation functions). But what would be the advantage?
Every string requires 4 bytes per character to store; an optimization
has been lost.
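
The memory trade-off being described is easy to observe in CPython 3.3+
(exact byte counts vary by version and platform; only the ordering matters
here):

```python
import sys

# PEP 393: per-character storage width depends on the widest code point.
ascii_s  = 'a' * 100           # 1 byte per character
bmp_s    = '\u20ac' * 100      # 2 bytes per character (euro sign)
astral_s = '\U0001F40D' * 100  # 4 bytes per character (beyond the BMP)

sizes = [sys.getsizeof(s) for s in (ascii_s, bmp_s, astral_s)]
print(sizes)  # strictly increasing on CPython 3.3+
```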

ChrisA


Re: the meaning of r?.......

2012-08-28 Thread Robert Miles

On 7/23/2012 1:10 PM, Dennis Lee Bieber wrote:
> On Mon, 23 Jul 2012 16:42:51 +0200, Henrik Faber
> declaimed the following in gmane.comp.python.general:
>
>> If that was written by my coworkers, I'd strangle them.
>
> My first real assignment, 31 years ago, was porting an application
> to CDC MP-60 FORTRAN (what I called "FORTRAN MINUS TWO"). This was a
> minimal FORTRAN implementation in which one could not do things like:
>
> ix = 20
> call xyz(ix, ix+2, ix-2)
>
> forcing us to produce such abominations as
>
> ix = 20
> jinx = ix + 2
> minx = ix - 2
>
> call xyz(ix, jinx, minx)



One of my first jobs involved helping maintain a Fortran program
originally written for an early IBM 360 with only 64 kilobytes
of memory.  It included an assembler routine to do double precision
floating point (that early computer couldn't do it as hardware
instructions) and another assembler routine to do dynamic overlays -
load one more subroutine into memory just before calling it (and
then usually overwriting it with the next subroutine to be called
after finishing the first one).  Originally, the computer operators
had to reload the operating system when this program finished,
because it had to overwrite the operating system in order to have
enough memory to run.

When I worked on it, it ran under IBM's DOS (for mainframes).
I never saw any attempts to make it run under Microsoft's DOS
(for microcomputers).



Re: Flexible string representation, unicode, typography, ...

2012-08-28 Thread Ian Kelly
On Tue, Aug 28, 2012 at 8:42 PM, rusi  wrote:
> In summary:
> 1. The problem is not on jmf's computer
> 2. It is not windows-only
> 3. It is not directly related to latin-1 encodable or not
>
> The only question which is not yet clear is this:
> Given a typical string operation that is complexity O(n), in more
> detail it is going to be O(a + bn)
> If only a is worse going 3.2 to 3.3, it may be a small issue.
> If b is worse by even a tiny amount, it is likely to be a significant
> regression for some use-cases.

As has been pointed out repeatedly already, this is a microbenchmark.
jmf is focusing on one particular area (string construction) where
Python 3.3 happens to be slower than Python 3.2, ignoring the fact
that real code usually does lots of things other than building
strings, many of which are slower to begin with.  In the real-world
benchmarks that I've seen, 3.3 is as fast as or faster than 3.2.
Here's a much more realistic benchmark that nonetheless still focuses
on strings: word counting.

Source: http://pastebin.com/RDeDsgPd


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc"
"wc.wc('unilang8.htm')"
1000 loops, best of 3: 310 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc"
"wc.wc('unilang8.htm')"
1000 loops, best of 3: 302 usec per loop

"unilang8.htm" is an arbitrary UTF-8 document containing a broad swath
of Unicode characters that I pulled off the web.  Even though this
program is still mostly string processing, Python 3.3 wins.  Of
course, that's not really a very good test -- since it reads the file
on every pass, it probably spends more time in I/O than it does in
actual processing.  Let's try it again with prepared string data:


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding='utf-8').read()" "wc.wc_str(t)"
1 loops, best of 3: 87.3 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding='utf-8').read()" "wc.wc_str(t)"
1 loops, best of 3: 84.6 usec per loop

Nope, 3.3 still wins.  And just for the sake of my own curiosity, I
decided to try it again using str.split() instead of a StringIO.
Since str.split() creates more strings, I expect Python 3.2 might
actually win this time.


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding='utf-8').read()" "wc.wc_split(t)"
1 loops, best of 3: 88 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding='utf-8').read()" "wc.wc_split(t)"
1 loops, best of 3: 76.5 usec per loop

Interestingly, although Python 3.2 performs the splits in about the
same time as the StringIO operation, Python 3.3 is significantly
*faster* using str.split(), at least on this data set.
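
Since the pastebin source isn't reproduced here, this is only a hedged
reconstruction of the two counting strategies being compared (the function
names are mine, not necessarily the pastebin's):

```python
import io
from collections import Counter

def wc_stringio(text):
    """Count words by iterating a StringIO line by line."""
    counts = Counter()
    for line in io.StringIO(text):
        counts.update(line.split())
    return counts

def wc_split(text):
    """Count words with a single str.split() pass."""
    return Counter(text.split())

sample = "\u20ac price \u20ac price list"
print(wc_split(sample)['price'])  # 2
```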


> So doing some arm-chair thinking (I dont know the code and difficulty
> involved):
>
> Clearly there are 3 string-engines in the python 3 world:
> - 3.2 narrow
> - 3.2 wide
> - 3.3 (flexible)
>
> How difficult would it be to give the choice of string engine as a
> command-line flag?
> This would avoid the nuisance of having two binaries -- narrow and
> wide.

Quite difficult.  Even if we avoid having two or three separate
binaries, we would still have separate binary representations of the
string structs.  It makes the maintainability of the software go down
instead of up.

> And it would give the python programmer a choice of efficiency
> profiles.

So instead of having just one test for my Unicode-handling code, I'll
now have to run that same test *three times* -- once for each possible
string engine option.  Choice isn't always a good thing.

Cheers,
Ian


Re: Sending USB commands with Python

2012-08-28 Thread alex23
On Aug 29, 1:03 pm, hamilton  wrote:
> The OP posted the link to the manual.
> If you're not going to at least look it over, ...

Speaking for myself, I _don't_ go out of my way to read extra material
to help someone with a problem here. If it's worth mentioning, mention
it in the question.


Re: Sending USB commands with Python

2012-08-28 Thread hamilton

On 8/28/2012 11:04 PM, alex23 wrote:
> On Aug 29, 1:03 pm, hamilton  wrote:
>> The OP posted the link to the manual.
>> If you're not going to at least look it over, ...
>
> Speaking for myself, I _don't_ go out of my way to read extra material


But, you will give advice that has no value.


Anything you post here from now on will be suspect.

hamilton



Geodetic functions library GeoDLL 32 Bit and 64 Bit

2012-08-28 Thread Fred
Hi developers,

Anyone who develops programs with geodetic functionality, like world-wide
coordinate transformations or distance calculations, can use the geodetic
functions of my GeoDLL. The Dynamic Link Library can easily be used with most
modern programming languages like C, C++, C#, Basic, Delphi, Pascal, Java,
Fortran, Visual-Objects and others to add geodetic functionality to your own
applications. For many programming languages appropriate interfaces are
available.

GeoDLL supports 2D and 3D coordinate transformation, geodetic datum shift and
reference system conversion with Helmert, Molodenski and NTv2 (e.g. BeTA2007,
AT_GIS_GRID, CHENYX06), meridian strip changing, user defined coordinate and
reference systems, distance calculation, Digital Elevation Model, INSPIRE
support, Direct / Inverse Solutions and a lot of other geodetic functions.

The DLL is fast, safe and compact thanks to careful development in C++ with
Microsoft Visual Studio 2010. The geodetic functions of the current version
12.35 are available in 32bit and 64bit architecture. All functions are
prepared for multithreading and server operation.

You find a free downloadable test version on 
http://www.killetsoft.de/p_gdlb_e.htm
Notes about the NTv2 support can be found here: 
http://www.killetsoft.de/p_gdln_e.htm
Report on the quality of the coordinate transformations: 
http://www.killetsoft.de/t_1005_e.htm 

Fred
Email: info_at_killetsoft.de


Re: Geodetic functions library GeoDLL 32 Bit and 64 Bit

2012-08-28 Thread George Silva
Hi Fred.

Do you know about proj4? proj4 is an open-source library that handles the
coordinate-transformation side of geospatial work and is already used and
tested by many, many projects.

Does your library do anything that proj4 does not?

On Wed, Aug 29, 2012 at 2:51 AM, Fred  wrote:

> Hi developers,
>
> who develops programs with geodetic functionality like world-wide
> coordinate transformations or distance calculations, can use geodetic
> functions of my GeoDLL. The Dynamic Link Library can easily be used with
> most of the modern programming languages like C, C++, C#, Basic, Delphi,
> Pascal, Java, Fortran, Visual-Objects and others to add geodetic
> functionality to own applications. For many programming languages
> appropriate Interfaces are available.
>
> GeoDLL supports 2D and 3D coordinate transformation, geodetic datum shift
> and reference system convertion with Helmert, Molodenski and NTv2 (e.g.
> BeTA2007, AT_GIS_GRID, CHENYX06), meridian strip changing, user defined
> coordinate and reference systems, distance calculation, Digital Elevation
> Model, INSPIRE support, Direct / Inverse Solutions and a lot of other
> geodetic functions.
>
> The DLL is very fast, save and compact because of forceful development in
> C++ with Microsoft Visual Studio 2010. The geodetic functions of the
> current version 12.35 are available in 32bit and 64bit architecture. All
> functions are prepared for multithreading and server operating.
>
> You find a free downloadable test version on
> http://www.killetsoft.de/p_gdlb_e.htm
> Notes about the NTv2 support can be found here:
> http://www.killetsoft.de/p_gdln_e.htm
> Report on the quality of the coordinate transformations:
> http://www.killetsoft.de/t_1005_e.htm
>
> Fred
> Email: info_at_killetsoft.de
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
George R. C. Silva

Desenvolvimento em GIS
http://geoprocessamento.net
http://blog.geoprocessamento.net


Re: Sending USB commands with Python

2012-08-28 Thread Tim Roberts
"Adam W."  wrote:
>
>So I'm trying to get as low level as I can with my Dymo label printer, 
>and this method described the PDF 
>http://sites.dymo.com/Documents/LW450_Series_Technical_Reference.pdf 
>seems to be it.
>
>I'm unfamiliar with dealing with the USB interface and would greatly
>appreciate it if someone could tell me how to send and receive these
>commands with Python.  Perhaps if you were feeling generous and 
>wanted to write a bit of sample code, sending the "Get Printer 
>Status" command and receiving the response (page 17 of the PDF) 
>would be perfect to get me on my way.

Well, it's more than "a bit of sample code".  You would essentially be
writing a device driver.

Which operating system are you using?  If you are on Windows, then the
operating system has already loaded a printer driver for this device.  You
can't talk to the USB pipes without uninstalling that driver.  It would be
just about as easy for you to learn to use GDI to write to the printer like
a normal application, and that way the code would work on the NEXT
generation of printer, too.

The libusb or libusbx libraries can be used to talk to USB devices.  There
is a Python binding.  On Windows, you still need to have a driver, but the
libusbx instructions can help you find and install one.
-- 
Tim Roberts, t...@probo.com
Providenza & Boekelheide, Inc.