Re: Need some help here

2006-09-20 Thread di

"Kareem840" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Hello. Unfortunately, I am in need of money to pay my credit card
> bills. If you could spare just $1, I would be grateful. I have a Paypal
> account. [EMAIL PROTECTED] I swear this will go to my card
> balances. Thank you.
>

There's this clown in Africa that will help you, please contact him. 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with python code

2016-03-29 Thread Yum Di
import random
import time

pizzatype = [3.50,4.20,5.20,5.80,5.60]
drinktype = [0.90,0.80,0.90]
topping = [0.50,0.50,0.50,0.50]

def Total_cost_cal (pt ,dt ,t):
total = pt + dt + t
return total

print ("Welcome to Pizza Shed!")

order = raw_input ("\n\nPLEASE PRESS ENTER TO ORDER." )

tablenum = input ("Enter table number from 1-25 \n ")
while tablenum>25 or tablenum <=0:
tablenum = input ("Enter the correct table number, there are only 25 tables 
")

#Pizza menu with prices

print ("-")

print ("Let me help you with your order!")

print ("-")

order = raw_input ("\n\nPLEASE PRESS ENTER TO SELECT YOUR PIZZA." )

print ("Menu")

print (
"1 = cheese and tomato: 3.50, "
"2 = ham and pineapple: 4.20, "
"3 = vegetarian: 5.20, "
"4 = meat feast: 5.80, "
"5 = seafood: 5.60 " )

menu = input("Enter the type of pizza that you want to order from 1-5 \n")
while menu>5 or menu <=1:
menu = input ("Enter the right number ")
pizza_cost = pizzatype[menu]

print ("--")

pizza_amount = input ("Enter the amount of Pizzas that you want to order ")
while pizza_amount > 10 or pizza_amount <=0:
pizza_amount = input ("Maximum amount is 10, Please enter again ")

print ("")

#base

print ("Base")

print (
"1 = thin and crispy,"
"2 = traditional" )

base = input ("Select a base from 1-2 \n")
while base>2 or base<=1:
base = input ("There are only 2 types, Please enter again ")

if base_type == 1:
print "You have chosen thin and crispy"
elif base_type == 2:
print ("You have chosen traditional")

print ("---")

#extra toppings

print ("Extra Toppings")

toppings = input ("Enter a number for your choice of extra topping \n Enter 1 
for extra cheese \n Enter 2 for extra pepperoni \n Enter 3 for extra pineapple 
\n Enter 4 for extra peppers \n" )
while toppings >4 or toppings < 0:
toppings = input ("There are only 4 types of extra toppings, Please try 
again " )
topping_cost = topping[toppings]

print ("-")

#drink

print ("Drink")

print (
"1 = Cola: 0.90, "
"2 = Lemonande: 0.80, "
"3 = Fizzy Orange: 0.90 "
)

drink = input ("Enter a number for your choice of drinks " )
while drink>3 or drink<0:
drink = input ("Choices start from 0 to 3 " )
drink_cost = drinktype[drink]
drink_amount = input ("Enter the amount of drinks")
while drink_amount >10 or drink_amount<1:
drink_amount = input (" You can only have upto 10 drinks, Please try again")


print ("")

pizzatotal = pizza_cost*pizza_amount
drinktotal = drink_cost*drink_amount

total_cost = total_cost_cal(pizzatotal, drinktotal, topping_cost)

print ("")
print ("Calculating bill")
print ("")
print ("")

print ("Thank You for ordering at Pizza Shed! ")

I still don't get it.. Sorry, I'm still quite new to this
I've have made few minor changes, but it still doesn't work
-- 
https://mail.python.org/mailman/listinfo/python-list


Calculate Bill

2016-03-29 Thread Yum Di
import random
import time

print ("Welcome to Pizza Shed!")

order = raw_input ("\n\nPLEASE PRESS ENTER TO ORDER." )

tablenum = input ("Enter table number from 1-25 \n ")
while tablenum>25 or tablenum <=0:
tablenum = input ("Enter the correct table number, there are only 25 tables 
")

#Pizza menu with prices

print ("-")

print ("Let me help you with your order!")

print ("-")

order = raw_input ("\n\nPLEASE PRESS ENTER TO SELECT YOUR PIZZA." )

print ("Menu")

print (
"1 = cheese and tomato: 3.50, "
"2 = ham and pineapple: 4.20, "
"3 = vegetarian: 5.20, "
"4 = meat feast: 5.80, "
"5 = seafood: 5.60 " )

pizza_choice = input("Enter the type of pizza that you want to order from 1-5 
\n")
while pizza_choice>5 or pizza_choice <=1:
pizza_choice = input ("Enter the right number ")

print ("--")

pizza_amount = input ("Enter the amount of Pizzas that you want to order ")
while pizza_amount > 10 or pizza_amount <=0:
pizza_amount = input ("Maximum amount is 10, Please enter again ")

print ("")

#base

print ("Base")

print (
"1 = thin and crispy,"
"2 = traditional" )

base = input ("Select a base from 1-2 \n")
while base>2 or base<=1:
base = input ("There are only 2 types, Please enter again ")

if base == 1:
print "You have chosen thin and crispy"
elif base == 2:
print ("You have chosen traditional")

print ("---")

#extra toppings

print ("Extra Toppings")

toppings = input ("Enter a number for your choice of extra topping \n Enter 1 
for extra cheese \n Enter 2 for extra pepperoni \n Enter 3 for extra pineapple 
\n Enter 4 for extra peppers \n" )
while toppings >4 or toppings < 0:
toppings = input ("There are only 4 types of extra toppings, Please try 
again " )

if toppings == 1:
print "You have chosen extra cheese"
elif toppings == 2:
print ("You have chosen pepperoni")
elif toppings == 3:
print ("You have chosen pineapple")
elif toppings == 4:
print ("You have chosen peppers")

print ("-")

#drink

print ("Drink")

print (
"1 = Cola: 0.90, "
"2 = Lemonande: 0.80, "
"3 = Fizzy Orange: 0.90 "
)

drink = input ("Enter a number for your choice of drinks " )
while drink>3 or drink<0:
drink = input ("Choices start from 0 to 3 " )
drink_amount = input ("Enter the amount of drinks")
while drink_amount >10 or drink_amount<1:
drink_amount = input (" You can only have upto 10 drinks, Please try again")

if drink == 1:
print "You have chosen Cola"
elif drink == 2:
print ("You have chosen Lemonande")
elif drink == 3:
print ("You have chosen Fizzy Orange")

print ("")
print ("Calculating bill")
print ("")
print ("")

print ("Thank You for ordering at Pizza Shed! ")

Hey.. this code works. However, i need it to calculate the total cost.
I dont know how to do that. Can someone help me..
thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


HELP! With calculating

2016-03-30 Thread Yum Di
import random
import time

print ("Welcome to Pizza Shed!")

tablenum = input ("Enter table number from 1-25 \n ")
while tablenum>25 or tablenum <=0:
tablenum = input ("Enter the correct table number, there are only 25 tables 
")

#Pizza menu with prices

print ("-")

print ("Let me help you with your order!")

print ("-")

print ("Menu")

print (
"1 = cheese and tomato: 3.50, "
"2 = ham and pineapple: 4.20, "
"3 = vegetarian: 5.20, "
"4 = meat feast: 5.80, "
"5 = seafood: 5.60 " )

pizza_choice = input("Enter the type of pizza that you want to order from 1-5 
\n")
while pizza_choice>5 or pizza_choice <=1:
pizza_choice = input ("Enter the right number ")

if pizza_choice == 1:
print "You have chosen cheese and tomato. The cost for this is 3.50"
elif pizza_choice == 2:
print ("You have chosen ham and pineapple. The cost for this is 4.20")
elif pizza_choice == 3:
print ("You have chosen vegetarian. The cost for this is 5.20")
elif pizza_choice == 4:
print ("You have chosen meat feast. The cost for this is 5.80")
elif pizza_choice == 5:
print ("You have chosen sea food. The cost for this is 5.60")

print ("--")

pizza_amount = input ("Enter the amount of Pizzas that you want to order ")
while pizza_amount > 10 or pizza_amount <=0:
pizza_amount = input ("Maximum amount is 10, Please enter again ")

print ("")

#base

print ("Base")

print (
"1 = thin and crispy,"
"2 = traditional" )

base = input ("Select a base from 1-2 \n")
while base>2 or base<=1:
base = input ("There are only 2 types, Please enter again ")

if base == 1:
print "You have chosen thin and crispy"
elif base == 2:
print ("You have chosen traditional")

print ("---")

#extra toppings

print ("Extra Toppings")

toppings = input ("Enter a number for your choice of extra topping \n Enter 1 
for extra cheese \n Enter 2 for extra pepperoni \n Enter 3 for extra pineapple 
\n Enter 4 for extra peppers \n" )
while toppings >4 or toppings < 0:
toppings = input ("There are only 4 types of extra toppings, Please try 
again " )

if toppings == 1:
print ("You have chosen extra cheese. The cost for this is 0.50")
elif toppings == 2:
print ("You have chosen pepperoni. The cost for this is 0.50")
elif toppings == 3:
print ("You have chosen pineapple. The cost for this is 0.50")
elif toppings == 4:
print ("You have chosen peppers. The cost for this is 0.50")

print ("-")

#drink

print ("Drink")

print (
"1 = Cola: 0.90, "
"2 = Lemonande: 0.80, "
"3 = Fizzy Orange: 0.90 " )

drink = input ("Enter a number for your choice of drinks " )
while drink>3 or drink<0:
drink = input ("Choices start from 0 to 3 " )

if drink == 1:
print "You have chosen Cola. The cost for this is 0.90"
elif drink == 2:
print ("You have chosen Lemonande. The cost for this is 0.80")
elif drink == 3:
print ("You have chosen Fizzy Orange. The cost for this is 0.90")

drink_amount = input ("Enter the amount of drinks")
while drink_amount >10 or drink_amount<0:
drink_amount = input (" You can only have upto 10 drinks, Please try again")

print ("")
print ("Calculating bill")
print ("")
print ("")

print ("Thank You for ordering at Pizza Shed! ")


Hey, this is my code..
I need to calculate the total cost, but I m not sure how to do that.
I m still new at python.

Can someone please help me
-- 
https://mail.python.org/mailman/listinfo/python-list


dynamic setattr

2012-07-27 Thread Mariano Di Felice
Hi, 
  I have a property file (.ini) that has multiple sections and relative keys, 
as default structure.

Now, I would like to export from my utility class methods getter and setter.
I have started as is:

class Utility:

  keys = {"STANDUP": ["st_key1", "st_key2", "st_key3", "st_key4"],
  "DEFAULT": ["def_key1", "def_key2", "def_key3", "def_key4", 
"def_key5"]}


  def __init__(self):
for section, keyList in  keys .items():
for key in keyList:
setattr(self, "get_%s" % key, self.get_value(section, key))
setattr(self, "set_%s" % key, lambda 
value:self.set_value(section, key, value) )

  def get_value(section, key):
if file_ini.has_option(section, key):
return lambda: file_ini.get(section, key)
return lambda: None

  def set_value(section, key, value):
print "set section: %s - key: %s - value: %s" % (section, key, value)
if file_ini.has_option(section, key):
file_ini.set(section , key ,value)


if __name__ == "__main__":
  utility = Utility()
print "key2: %s" % utility.get_def_key2() ## -> value return 100
print "set value 50 to def_key2"
utility.set_def_key2(50)
print "new value def_key2: %s" % utility.get_def_key2()  ## -> value return 
100


Well, when in code I set value 50 to def_key2, return value of key st_key4 of 
STANDUP section Why?
"set section STANDUP - key st_key4 - value: 200"

Can anyone help me, please???

Thx in Advance
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic setattr

2012-07-27 Thread Mariano Di Felice
Hi Steven,
  Sorry for inconvenients.
I've posted "unsyntax" example just typing from here, just for exaplain my 
problem

Finally, I don't understand why every set_ set value on wrong section/key.
I think setattr syntax is correct, but it doesn't works!

About java/python concept, yeah! You all right!
But I need a conversion class (as Utility) that expose getter/setter of any 
keys.

Thx!

Il giorno venerdì 27 luglio 2012 15:46:59 UTC+2, Steven D'Aprano ha scritto:
> On Fri, 27 Jul 2012 05:49:45 -0700, Mariano Di Felice wrote:
> 
> > Hi,
> >   I have a property file (.ini) that has multiple sections and relative
> >   keys, as default structure.
> 
> Have you looked at Python's standard INI file library?

I already use it!

> 
> http://docs.python.org/library/configparser.html
> 
> 
> > Now, I would like to export from my utility class methods getter and
> > setter. I have started as is:
> > 
> > class Utility:
> > 
> >   keys = {"STANDUP": ["st_key1", 
> "st_key2", "st_key3", "st_key4"],
> >   "DEFAULT": ["def_key1", 
> "def_key2", "def_key3",
> >   "def_key4", "def_key5"]}
> 
> This defines a *shared* class attribute. As it is attached to the class, 
> not an instance, every instance will see the same shared dict.
> 
> 
> >   def __init__(self):
> > for section, keyList in  keys .items(): 
> > for key in keyList:
> 
> As given, this is a SyntaxError. Please do not retype your code from 
> memory, always COPY AND PASTE your actual code.
> 
> In this case, it is easy to fix the syntax error by fixing the 
> indentation. But what other changes have you made by accident?
> 
> Your code:
> 
> def __init__(self):
> for section, keyList in  keys .items(): 
> 
> looks for a *global variable* called keys, *not* the shared class 
> attribute Utility.keys. By design, attributes are not in the function 
> scope. If you want to access an attribute, whether class or instance, you 
> must always refer to them as attributes.
> 
> 
> def __init__(self):
> for section, keyList in  self.keys.items():  # this will work
> 
> 
> > setattr(self, "get_%s" % key, 
> self.get_value(section,
> > key)) 
> > setattr(self, "set_%s" % key, lambda
> > value:self.set_value(section, key, value) )
> 
> 
> What a mess. What is the purpose of this jumble of code?
> 
> My guess is that you are experienced with Java, and you are trying to 
> adapt Java idioms and patterns to Python. Before you do this, you should 
> read these two articles by a top Python developer who also knows Java 
> backwards:
> 
> http://dirtsimple.org/2004/12/python-is-not-java.html
> http://dirtsimple.org/2004/12/java-is-not-python-either.html
> 
> 
> 
> > if __name__ == "__main__":
> >utility = Utility()
> >  print "key2: %s" % utility.get_def_key2() ## -> value 
> return 100
> 
> Again, another SyntaxError. This can be fixed. But the next part cannot.
> 
> Except for two comments, 100 does not exist in your sample code. Python 
> doesn't magically set values to 100. The code you give cannot possibly 
> return 100 since nowhere in your code does it set anything to 100.
> 
> If you actually run the code you provide (after fixing the SyntaxErrors), 
> you get this error:
> 
> py> utility = Utility()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 5, in __init__
> NameError: global name 'keys' is not defined
> 
> 
> If you fix that and try again, you get this error:
> 
> py> utility = Utility()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 7, in __init__
> TypeError: get_value() takes exactly 2 arguments (3 given)
> 
> 
> The results you claim you get are not true.
> 
> 
> Please read this page and then try again:
> 
> http://sscce.org/
> 
> 
> 
> -- 
> Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Class.__class__ magic trick help

2012-08-20 Thread Massimo Di Pierro
I discovered I can do this:

class A(object): pass
class B(object):
__class__ = A #  magic

b = B()
isinstance(b,A) # returns True (as if B derived from A)
isinstance(b,B) # also returns True

I have some reasons I may want to do this (I an object with same
methods as a dict but it is not derived from dict and I want
isinstance(x,dict)==True to use it in place of dict in some other
code).

What are the problems with the magic trick above? Why does it work?

massimo


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class.__class__ magic trick help

2012-08-20 Thread Massimo Di Pierro
The fact is this works:

>>> class B(object):
...__class__ = dict
>>> b=B()

but this does not

>>> class B(object):
...def __init__(self):
...self.__class__ = dict
>>> b=B()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 3, in __init__
TypeError: __class__ assignment: only for heap types



On Aug 20, 1:39 pm, Ian Kelly  wrote:
> On Mon, Aug 20, 2012 at 12:01 PM, Massimo Di Pierro
>
>  wrote:
> > I discovered I can do this:
>
> >     class A(object): pass
> >     class B(object):
> >         __class__ = A # <<<< magic
>
> >     b = B()
> >     isinstance(b,A) # returns True (as if B derived from A)
> >     isinstance(b,B) # also returns True
>
> > I have some reasons I may want to do this (I an object with same
> > methods as a dict but it is not derived from dict and I want
> > isinstance(x,dict)==True to use it in place of dict in some other
> > code).
>
> > What are the problems with the magic trick above? Why does it work?
>
> Normally with __class__ assignment, you would assign to the __class__
> attribute of the *instance*, not the class declaration.  This actually
> changes the class of the object, and so isinstance(b, B) would no
> longer return True.
>
> I've never heard of assigning it in the class declaration, and as far
> as I know, this behavior isn't documented anywhere.  I expect that
> what's happening here is that Python is not actually updating the
> class of the instance, but that A is merely assigned to the
> "__class__" attribute in the class dict, and that isinstance is
> somehow (perhaps accidentally) finding this.  So I think this is
> probably a bug, and I would not rely on it to work correctly in all
> cases.
>
> In any event, the use case that you're looking for is usually
> accomplished using abstract base classes.  Instead of "isinstance(x,
> dict)", you should use "isinstance(x, collections.MutableMapping)",
> and then inherit your class from or register it with the
> MutableMapping ABC.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class.__class__ magic trick help

2012-08-20 Thread Massimo Di Pierro
Consider this code:

class SlowStorage(dict):
def __getattr__(self,key):
  return self[key]
def __setattr__(self,key):
  self[key]=value

class FastStorage(dict):
def __init__(self, __d__=None, **kwargs):
self.update(__d__,**kwargs)
def __getitem__(self,key):
return self.__dict__.get(key,None)
def __setitem__(self,key,value):
self.__dict__[key] = value
def __delitem__(self,key):
delattr(self,key)
def __copy__(self):
return Storage(self)
def __nonzero__(self):
return len(self.__dict__)>0
def pop(self,key,default=None):
if key in self:
default = getattr(self,key)
delattr(self,key)
return default
def clear(self):
self.__dict__.clear()
def __repr__(self):
return repr(self.__dict__)
def keys(self):
return self.__dict__.keys()
def values(self):
return self.__dict__.values()
def items(self):
return self.__dict__.items()
  def iterkeys(self):
return self.__dict__.iterkeys()
def itervalues(self):
return self.__dict__.itervalues()
def iteritems(self):
return self.__dict__.iteritems()
def viewkeys(self):
return self.__dict__.viewkeys()
def viewvalues(self):
return self.__dict__.viewvalues()
def viewitems(self):
return self.__dict__.viewitems()
def fromkeys(self,S,v=None):
return self.__dict__.fromkeys(S,v)
def setdefault(self, key, default=None):
try:
return getattr(self,key)
except AttributeError:
setattr(self,key,default)
return default
def clear(self):
self.__dict__.clear()
def len(self):
return len(self.__dict__)
def __iter__(self):
return self.__dict__.__iter__()
def has_key(self,key):
return key in self.__dict__
def __contains__(self,key):
return key in self.__dict__
def update(self,__d__=None,**kwargs):
if __d__:
for key in __d__:
kwargs[key] = __d__[key]
self.__dict__.update(**kwargs)
def get(self,key,default=None):
return getattr(self,key) if key in self else default

>>> s=SlowStorage()
>>> a.x=1  ### (1)
>>> a.x### (2)
1 # ok
>>> isinstance(a,dict)
True # ok
>>> print dict(a)
{'x':1} # ok (3)


>>> s=FastStorage()
>>> a.x=1  ### (4)
>>> a.x### (5)
1 # ok
>>> isinstance(a,dict)
True # ok
>>> print dict(a)
{} # not ok (6)

Lines (4) and (5) are about 10x faster then lines (1) and (2). I like
FastStorage better but while (3) behaves ok, (6) does not behave as I
want.

I intuitively understand why FastStorage is cannot cast into dict
properly.

What I do not know is how to make it do the casting properly without
losing the 10x speedup of FastStorage over SlowStorage.

Any idea?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class.__class__ magic trick help

2012-08-21 Thread Massimo Di Pierro
On Aug 21, 2:40 am, Oscar Benjamin  wrote:
> On Mon, 20 Aug 2012 21:17:15 -0700 (PDT), Massimo Di Pierro
>
>
>
>
>
>
>
>
>
>  wrote:
> > Consider this code:
> > class SlowStorage(dict):
> >     def __getattr__(self,key):
> >           return self[key]
> >     def __setattr__(self,key):
> >           self[key]=value
> > class FastStorage(dict):
> >     def __init__(self, __d__=None, **kwargs):
> >         self.update(__d__,**kwargs)
> >     def __getitem__(self,key):
> >         return self.__dict__.get(key,None)
> >     def __setitem__(self,key,value):
> >         self.__dict__[key] = value
> >     def __delitem__(self,key):
> >         delattr(self,key)
> >     def __copy__(self):
> >         return Storage(self)
> >     def __nonzero__(self):
> >         return len(self.__dict__)>0
> >     def pop(self,key,default=None):
> >         if key in self:
> >             default = getattr(self,key)
> >             delattr(self,key)
> >         return default
> >     def clear(self):
> >         self.__dict__.clear()
> >     def __repr__(self):
> >         return repr(self.__dict__)
> >     def keys(self):
> >         return self.__dict__.keys()
> >     def values(self):
> >         return self.__dict__.values()
> >     def items(self):
> >         return self.__dict__.items()
> >       def iterkeys(self):
> >         return self.__dict__.iterkeys()
> >     def itervalues(self):
> >         return self.__dict__.itervalues()
> >     def iteritems(self):
> >         return self.__dict__.iteritems()
> >     def viewkeys(self):
> >         return self.__dict__.viewkeys()
> >     def viewvalues(self):
> >         return self.__dict__.viewvalues()
> >     def viewitems(self):
> >         return self.__dict__.viewitems()
> >     def fromkeys(self,S,v=None):
> >         return self.__dict__.fromkeys(S,v)
> >     def setdefault(self, key, default=None):
> >         try:
> >             return getattr(self,key)
> >         except AttributeError:
> >             setattr(self,key,default)
> >             return default
> >     def clear(self):
> >         self.__dict__.clear()
> >     def len(self):
> >         return len(self.__dict__)
> >     def __iter__(self):
> >         return self.__dict__.__iter__()
> >     def has_key(self,key):
> >         return key in self.__dict__
> >     def __contains__(self,key):
> >         return key in self.__dict__
> >     def update(self,__d__=None,**kwargs):
> >         if __d__:
> >             for key in __d__:
> >                 kwargs[key] = __d__[key]
> >         self.__dict__.update(**kwargs)
> >     def get(self,key,default=None):
> >         return getattr(self,key) if key in self else default
> > >>> s=SlowStorage()
> > >>> a.x=1  ### (1)
> > >>> a.x    ### (2)
> > 1 # ok
> > >>> isinstance(a,dict)
> > True # ok
> > >>> print dict(a)
> > {'x':1} # ok (3)
>
> Try:
>
> >>> a.items()
>
> What does that show?
>
>
>
>
>
>
>
>
>
>
>
> > >>> s=FastStorage()
> > >>> a.x=1  ### (4)
> > >>> a.x    ### (5)
> > 1 # ok
> > >>> isinstance(a,dict)
> > True # ok
> > >>> print dict(a)
> > {} # not ok (6)
> > Lines (4) and (5) are about 10x faster then lines (1) and (2). I
> like
> > FastStorage better but while (3) behaves ok, (6) does not behave as
> I
> > want.
> > I intuitively understand why FastStorage is cannot cast into dict
> > properly.
> > What I do not know is how to make it do the casting properly without
> > losing the 10x speedup of FastStorage over SlowStorage.
> > Any idea?
>
> I don't really understand what your trying to do but since you didn't
> add the __setattr__ method to FastStorage the item is not added to
> the dictionary when you do a.x = 1
>
> Oscar

>>> a.items()
[('x',1')]

all the APIs work as expected except casting to dict.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class.__class__ magic trick help

2012-08-21 Thread Massimo Di Pierro
Hello Oscar,

thanks for your help but your proposal of adding:

def __setitem__(self,key,value):
   self.__dict__[key] = value
   dict.__setitem__(self, key, value)

does not help me.

What I have today is a class that works like SlowStorage. I want to
replace it with NewStorage because it is 10x faster. That is the only
reason. NewStorage does everything I want and all the APIs work like
SlowStorage except casting to dict.

By defining __setitem__ as you propose, you solve the casting to dict
issue but you have two unwanted effects: each key,value is store twice
(in different places), accessing the elements becomes slower the
SlowStprage which is my problem in the first place.

The issue for me is understanding how the casting dict(obj) works and
how to change its behavior so that is uses methods exposed by obj to
do the casting, if all possible.

Massimo

-- 
http://mail.python.org/mailman/listinfo/python-list


Secure ssl connection with wrap_socket

2011-07-05 Thread Andrea Di Mario
Hi, I'm a new python user and I'm writing a small web service with ssl.
I want use a self-signed certificate like in wiki:
http://docs.python.org/dev/library/ssl.html#certificates
I've used wrap_socket, but if i try to use
cert_reqs=ssl.CERT_REQUIRED, it doesn't work with error:

urllib2.URLError: 

It works only with CERT_NONE (the default) but with this option i
could access to the service in insicure mode.

Have you some suggestions for my service?

Thanks. Regards.

-- 
Andrea Di Mario
-- 
http://mail.python.org/mailman/listinfo/python-list


Notifications when process is killed

2011-08-01 Thread Andrea Di Mario
Hi, i've created a twisted server application and i want that the
server send me a message when someone stops or kills the process.
I want to override reactor.stop(), but do this way send me message
when the process is stopped by a system kill?
Could you suggest me if there's a way to do this?

Thanks, regards.

-- 
Andrea Di Mario
-- 
http://mail.python.org/mailman/listinfo/python-list


Notifications when process is killed

2011-08-01 Thread Andrea Di Mario
Thanks Thomas, it is what i'm looking for.

Regards

-- 
Andrea Di Mario
-- 
http://mail.python.org/mailman/listinfo/python-list


Notifications when process is killed

2011-08-02 Thread Andrea Di Mario
> You won't be able to catch SIGTERM, as Thomas said, but if you need to
> know what caused a process to end, the best way is to have code in the
> parent process to catch SIGCHLD. When the child ends, for any reason,
> its parent is sent SIGCHLD with some parameters, including the signal
> number that caused the termination; you can then log anything you
> want.

Hi, i understand, i've read that SIGKILL can't catch, but nothing
about SIGTERM.
If i use SIGCHLD, i will have difficult when parent receive a SIGTERM, or not?

Thanks, regards.

--
Andrea Di Mario
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Accuracy of multiprocessing.Queue.qsize before any Queue.get invocations?

2022-05-13 Thread Martin Di Paola

If the queue was not shared to any other process, I would guess that its
size is reliable.

However, a plain counter could be much simpler/safer.

The developer of multiprocessing.Queue, implemented
size() thinking in how to share the size and maintain a reasonable
consistency between process.
He/she probably didn't care how well works in a single-process scenario
as this is a very special case.

Thanks,
Martin.


On Thu, May 12, 2022 at 06:07:02PM -0500, Tim Chase wrote:

The documentation says[1]


Return the approximate size of the queue. Because of
multithreading/multiprocessing semantics, this number is not
reliable.


Are there any circumstances under which it *is* reliable?  Most
germane, if I've added a bunch of items to the Queue, but not yet
launched any processes removing those items from the Queue, does
Queue.qsize accurately (and reliably) reflect the number of items in
the queue?

 q = Queue()
 for fname in os.listdir():
   q.put(fname)
 file_count = q.qsize() # is this reliable?
 # since this hasn't yet started fiddling with it
 for _ in range(os.cpu_count()):
   Process(target=myfunc, args=(q, arg2, arg3)).start()

I'm currently tracking the count as I add them to my Queue,

 file_count = 0
 for fname in os.listdir():
   q.put(fname)
   file_count += 1

but if .qsize is reliably accurate before anything has a chance to
.get data from it, I'd prefer to tidy the code by removing the
redunant counting code if I can.

I'm just not sure what circumstances the "this number is not
reliable" holds.  I get that things might be asynchronously
added/removed once processes are running, but is there anything that
would cause unreliability *before* other processes/consumers run?

Thanks,

-tkc

[1]
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.qsize





--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Changing calling sequence

2022-05-13 Thread Martin Di Paola

You probably want something like overload/multiple dispatch. I quick search
on PyPI yields a 'multipledispatch' package.

I never used, however.

On Wed, May 11, 2022 at 08:36:26AM -0700, Tobiah wrote:

On 5/11/22 06:33, Michael F. Stemper wrote:

I have a function that I use to retrieve daily data from a
home-brew database. Its calling sequence is;

def TempsOneDay( year, month, date ):

After using it (and its friends) for a few years, I've come to
realize that there are times where it would be advantageous to
invoke it with a datetime.date as its single argument.


You could just use all keyword args:

def TempsOneDay(**kwargs):

   if 'date' in kwargs:
   handle_datetime(kwargs['date'])
   elif 'year' in kwargs and 'month' in kwargs and 'day' in kwargs:
   handle_args(kwargs['year'], kwargs['month'], kwargs['day'])
   else:
   raise Exception("Bad keyword args")

TempsOneDay(date=datetime.datetime.now)

TempsOneDay(year=2022, month=11, day=30)

--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Request for assistance (hopefully not OT)

2022-05-17 Thread Martin Di Paola

Try to reinstall python and only python and if you succeeds, then try to
reinstall the other tools.

For this, use "apt-get" instead of "apt"

$ sudo apt-get reinstall python3

When a system is heavily broken, be extra careful and read the output of
the programs. If "apt-get" says than in order to reinstall python it will have
to remove half of your computer, abort. Better ask than sorry.

Best of the lucks.

Martin.

On Tue, May 17, 2022 at 06:20:39AM -0500, o1bigtenor wrote:

Greetings

I was having space issues in my /usr directory so I deleted some
programs thinking that the space taken was more an issue than having
older versions of the program.

So one of the programs I deleted (using rm -r) was python3.9.
Python3.10 was already installed so I thought (naively!!!) that things
should continue working.
(Python 3.6, 3.7 and 3.8 were also part of this cleanup.)

So now I have problems.

Following is the system barf that I get when I run '# apt upgrade'.

What can I do to correct this self-inflicted problem?

(running on debian testing 5.17

Setting up python2.7-minimal (2.7.18-13.1) ...
Could not find platform independent libraries 
Could not find platform dependent libraries 
Consider setting $PYTHONHOME to [:]
/usr/bin/python2.7: can't open file
'/usr/lib/python2.7/py_compile.py': [Errno 2] No such file or
directory
dpkg: error processing package python2.7-minimal (--configure):
installed python2.7-minimal package post-installation script
subprocess returned error exit status 2
Setting up python3.9-minimal (3.9.12-1) ...
update-binfmts: warning: /usr/share/binfmts/python3.9: no executable
/usr/bin/python3.9 found, but continuing anyway as you request
/var/lib/dpkg/info/python3.9-minimal.postinst: 51: /usr/bin/python3.9: not found
dpkg: error processing package python3.9-minimal (--configure):
installed python3.9-minimal package post-installation script
subprocess returned error exit status 127
dpkg: dependency problems prevent configuration of python3.9:
python3.9 depends on python3.9-minimal (= 3.9.12-1); however:
 Package python3.9-minimal is not configured yet.

dpkg: error processing package python3.9 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python2.7:
python2.7 depends on python2.7-minimal (= 2.7.18-13.1); however:
 Package python2.7-minimal is not configured yet.

dpkg: error processing package python2.7 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python3.9-dev:
python3.9-dev depends on python3.9 (= 3.9.12-1); however:
 Package python3.9 is not configured yet.

dpkg: error processing package python3.9-dev (--configure):
dependency problems - leaving unconfigured
. . .
Errors were encountered while processing:
python2.7-minimal
python3.9-minimal
python3.9
python2.7
python3.9-dev
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Filtering XArray Datasets?

2022-06-07 Thread Martin Di Paola

Hi, I'm not an expert on this so this is an educated guess:

You are calling drop=True and I presume that you want to delete the rows
of your dataset that satisfy a condition.

That's a problem.

If the underlying original data is stored in a dense contiguous array,
deleting chunks of it will leave it with "holes". Unless the backend
supports sparse implementations, it is likely that it will go for the
easiest solution: copy the non-deleted rows in a new array.

I don't know the details of you particular problem but most of the time
the trick is in not letting the whole data to be loaded.

Try to see if instead of loading all the dataset and then performing the
filtering/selection, you can do the filtering during the loading.

An alternative could use filtering "before" doing the real work. For
example, if you have a CSV of >100GB you could write a program X that
copies the dataset into a new CSV but doing the filtering. Then, you
load the filtered dataset and do the real work in a program Y.

I explicitly named X and Y as, in principle, they are 2 different programs using
even 2 different technologies.

I hope this email can give you hints of how to fix it. In my last
project I had a similar problem and I ended up doing the filtering on
Python and the "real work" in Julia.

Thanks!
Martin.


On Mon, Jun 06, 2022 at 02:28:41PM -0800, Israel Brewster wrote:

I have some large (>100GB) datasets loaded into memory in a two-dimensional (X 
and Y) NumPy array backed XArray dataset. At one point I want to filter the data 
using a boolean array created by performing a boolean operation on the dataset 
that is, I want to filter the dataset for all points with a longitude value 
greater than, say, 50 and less than 60, just to give an example (hopefully that 
all makes sense?).

Currently I am doing this by creating a boolean array (data[‘latitude’]>50, for 
example), and then applying that boolean array to the dataset using .where(), with 
drop=True. This appears to work, but has two issues:

1) It’s slow. On my large datasets, applying where can take several minutes 
(vs. just seconds to use a boolean array to index a similarly sized numpy array)
2) It uses large amounts of memory (which is REALLY a problem when the array is 
already using 100GB+)

What it looks like is that values corresponding to True in the boolean array 
are copied to a new XArray object, thereby potentially doubling memory usage 
until it is complete, at which point the original object can be dropped, 
thereby freeing the memory.

Is there any solution for these issues? Some way to do an in-place filtering?
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: TENGO PROBLEMAS AL INSTALAR PYTHON

2022-07-08 Thread Martin Di Paola




On Fri, Jul 08, 2022 at 04:15:35PM -0600, Mats Wichmann wrote:


In addition... there is no "Python 10.0" ...



Mmm, perhaps that's the problem :D

@Angie Odette Lima Banguera, vamos a necesitar algun traceback o algo
para guiarte. Podes tambien buscar en internet (youtube) q hay varios
tutoriales para los primeros pasos.

Good luck!
--
https://mail.python.org/mailman/listinfo/python-list


Re: Simple message passing system and thread safe message queue

2022-07-18 Thread Martin Di Paola

Hi, I couldn't read your posts, every time I try to open one I'm
redirected to an index page.

I took a look at the smps project and I as far I understand it is a
SSL client that sends messages to a server that implements a store of
messages.

I would suggest to remove the sleep() calls and as a challenge for you,
make the server single-thread using asyncio and friends.

Thanks,
Martin.

On Mon, Jul 18, 2022 at 06:31:28PM +0200, Morten W. Petersen wrote:

Hi.

I wrote a couple of blog posts as I had to create a message passing system,
and these posts are here:

http://blogologue.com/search?category=1658082823X26

Any comments or suggestions?

Regards,

Morten

--
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: list indices must be integers or slices, not str

2022-07-20 Thread Martin Di Paola

offtopic

If you want a pure-python but definitely a more hacky implementation,
you can play around with inspect.stack() and get the variables from
the outer frames.


# code:
x = 32
y = 42
printf("Hello x={x}, y={y}", x=27)

# output:
Hello x=27, y=42


The implementation of printf() was never released in PyPI (I guess I never saw
it as more than a challenge).

But the implementation is quite simple (I did a post about it):
https://book-of-gehn.github.io/articles/2021/07/11/Home-Made-Python-F-String.html

Thanks,
Martin.
On Wed, Jul 20, 2022 at 10:46:35AM -0600, Mats Wichmann wrote:

On 7/20/22 05:04, Frank Millman wrote:


I think the preferred style these days is f'{x[-1]}' which works."

Unfortunately the 'f' option does not work for me in this case, as I am
using a string object, not a string literal.


For that you could consider

https://pypi.org/project/f-yeah/

(straying a bit off thread subject by now, admittedly)
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: exec() an locals() puzzle

2022-07-20 Thread Martin Di Paola

I did a few tests

# test 1
def f():
i = 1
print(locals())
exec('y = i; print(y); print(locals())')
print(locals())
a = eval('y')
print(locals())
u = a
print(u)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
{'i': 1, 'y': 1, 'a': 1}
1

# test 2
def f():
i = 1
print(locals())
exec('y = i; print(y); print(locals())')
print(locals())
a = eval('y')
print(locals())
y = a
print(y)
f()
{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1}
Traceback (most recent call last):
NameError: name 'y' is not defined


So test 1 and 2 are the same except that the variable 'y' is not
present/present in the f's code.

When it is not present, exec() modifies the f's locals and adds an 'y'
to it but when the variable 'y' is present in the code (even if not
present in the locals()), exec() does not add any 'y' (and the next
eval() then fails)

The interesting part is that if the 'y' variable is in the f's code
*and* it is defined in the f's locals, no error occur but once again the
exec() does not modify f's locals:

# test 3
def f():
i = 1
y = 42
print(locals())
exec('y = i; print(y); print(locals())')
print(locals())
a = eval('y')
print(locals())
y = a
print(y)
f()
{'i': 1, 'y': 42}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 42}
{'i': 1, 'y': 42, 'a': 42}
42

Why does this happen? No idea.

I may be related with this:

# test 4
def f():
i = 1
print(locals())
exec('y = i; print(y); print(locals())')
print(locals())
print(y)
f()
Traceback (most recent call last):
NameError: name 'y' is not defined

Despite exec() adds the 'y' variable to f's locals, the variable is not
accessible/visible from f's code.

So, a few observations (by no means this is how the vm works):

1) each function has a set of variables defined by the code (let's call
this "code-defined locals" or "cdef-locals").
2) each function also has a set of "runtime locals" accessible from
locals().
3) exec() can add variables to locals() (runtime) set but it cannot add
any to cdef-locals.
4) locals() may be a superset of cdef-locals (but entries in cdef-locals
which value is still undefined are not shown in locals())
5) due rule 4, exec() cannot add a variable to locals() if it is already
 present in the in cdef-locals.
6) when eval() runs, it uses locals() set for lookup

Perhaps rule 5 is to prevent exec() to modify any arbitrary variable of
the caller...

Anyways, nice food for our brains.

On Wed, Jul 20, 2022 at 04:56:02PM +, george trojan wrote:

I wish I could understand the following behaviour:

1. This works as I expect it to work:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   exec('y *= 2')
   print('ok:', eval('y'))
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
ok: 2

2. I can access the value of y with eval() too:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   u = eval('y')
   print(u)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
1

3. When I change variable name u -> y, somehow locals() in the body of
the function loses an entry:

def f():
   i = 1
   print(locals())
   exec('y = i; print(y); print(locals())')
   print(locals())
   y = eval('y')
   print(y)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1}

---NameError
   Traceback (most recent call last)
Input In [1], in ()  7 print(y)  8 # y
= eval('y')  9 #print('ok:', eval('y'))---> 10 f()

Input In [1], in f()  4 exec('y = i; print(y); print(locals())')
  5 print(locals())> 6 y = eval('y')  7 print(y)

File :1, in 
NameError: name 'y' is not defined1.

Another thing: within the first exec(), the print order seems
reversed. What is going on?

BTW, I am using python 3.8.13.

George
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Simple TCP proxy

2022-07-27 Thread Martin Di Paola



On Wed, Jul 27, 2022 at 08:32:31PM +0200, Morten W. Petersen wrote:

You're thinking of the backlog argument of listen?


From my understanding, yes, when you set up the "accepter" socket (the
one that you use to listen and accept new connections), you can define
the length of the queue for incoming connections that are not accepted
yet.

This will be the equivalent of your SimpleQueue which basically puts a
limits on how many incoming connections are "accepted" to do a real job.

Using skt.listen(N) the incoming connections are put on hold by the OS
while in your implementation are formally accepted but they are not
allowed to do any meaningful work: they are put on the SimpleQueue and
only when they are popped then they will work (send/recv data).

The difference then between the OS and your impl is minimal. The only
case that I can think is that on the clients' side it may exist a
timeout for the acceptance of the connection so your proxy server will
eagerly accept these connections so no timeout is possible(*)

On a side note, you implementation is too thread-naive: it uses plain
Python lists, integers and boolean variables which are not thread safe.
It is a matter of time until your server will start behave weird.

One option is that you use thread-safe objects. I'll encourage to read
about thread-safety in general and then which sync mechanisms Python
offers.

Another option is to remove the SimpleQueue and the background function
that allows a connection to be "active".

If you think, the handlers are 99% independent except that you want to
allow only N of them to progress (stablish and forward the connection)
and when a handler finishes, another handler "waiting" is activated, "in
a queue fashion" as you said.

If you allow me to not have a strict queue discipline here, you can achieve
the same results coordinating the handlers using semaphores. Once again,
take this email as starting point for your own research.

On a second side note, the use of handlers and threads is inefficient
because while you have N active handlers sending/receiving data, because
you are eagerly accepting new connections you will have much more
handlers created and (if I'm not wrong), each will be a thread.

A more efficient solution could be

1) accept as many connections as you can, saving the socket (not the
handler) in the thread-safe queue.
2) have N threads in the background popping from the queue a socket and
then doing the send/recv stuff. When the thread is done, the thread
closes the socket and pops another from the queue.

So the queue length will be the count of accepted connections but in any
moment your proxy will not activate (forward) more than N connections.

This idea is thread-safe, simpler, efficient and has the queue
discipline (I leave aside the usefulness).

I encourage you to take time to read about the different things
mentioned as concurrency and thread-related stuff is not easy to
master.

Thanks,
Martin.

(*) make your proxy server slow enough and yes, you will get timeouts
anyways.



Well, STP will accept all connections, but can limit how many of the
accepted connections that are active at any given time.

So when I bombed it with hundreds of almost simultaneous connections, all
of them were accepted, but only 25 were actively sending and receiving data
at any given time. First come, first served.

Regards,

Morten

On Wed, Jul 27, 2022 at 8:00 PM Chris Angelico  wrote:


On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen 
wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests
and
> then allowing n number of connections to pass through at a time:

How's this different from what the networking subsystem already does?
When you listen, you can set a queue length. Can you elaborate?

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list




--
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
--

I am https://leavingnorway.info

Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue

Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen

Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/

On Instagram at https://instagram.com/morphexx/
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Which architectures to support in a CI like Travis?

2022-09-19 Thread Martin Di Paola

I would depend on the project.

In the crytoanalysis tool that I developing, "cryptonita", I just
manipule bytes. Nothing that could depend on the distro so my CI picks
one OS and run the tests there.

Project: https://github.com/cryptonitas/cryptonita
CI: 
https://github.com/cryptonitas/cryptonita/blob/master/.github/workflows/test.yml

On the other extreme I have "byexample", a tool that takes the examples
in your docs and run them as automated tests. It supports different
languages (Python, Ruby, Java, ...) and it works using the interpreter
of each languages.

An there is the challenge for its CI. Because byexample highly depends
on the version of the interpreter, the CI config tries a lot of
different scenarios

Project: https://byexamples.github.io/
CI: 
https://github.com/byexamples/byexample/blob/master/.github/workflows/test.yml

I don't tests different distros but I should for some cases that I
suspect that it could be differences in how some interpreters behave.

An about OS, I'm planning to add MacOS to the CI because I know that
some users had problems in the past in that platform because how
byexample interacts with the terminal.

So two projects, both in Python, but with two totally different
dependencies on the environment where they run, so their CI are
different.

The two examples are using Gitlab actions but the same applies to
TravisCI.

Thanks,
Martin.


On Sun, Sep 18, 2022 at 09:46:45AM +, c.bu...@posteo.jp wrote:

Hello,

I am using TravisCI for my project on GitHub. The project is packaged
for Debian, Ubuntu, Arch and several other distros.

All this distros support multiple architectures and they have their own
test machines to take care that all packages working on all archs.

On my side (upstream) I wonder which arch I should "support" in my
TravisCI configuration. I wan't to speed up travis and I want to save
energy and carbon.

I suspect that my Python code should run on much every platform that
offers a Python interpreter. Of course there are edge cases. But they
would be captured by the distros own test environments.

So is there a good and objective reason to support multiple (and maybe)
exotic platforms in a CI pipeline on upstream?

Kind
Christian
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: on the python paradox

2022-12-11 Thread Martin Di Paola

On Mon, Dec 05, 2022 at 10:37:39PM -0300, Sabrina Almodóvar wrote:

The Python Paradox
   Paul Graham
   August 2004

[SNIP]

Hence what, for lack of a better name, I'll call the Python paradox: 
if a company chooses to write its software in a comparatively 
esoteric language, they'll be able to hire better programmers, 
because they'll attract only those who cared enough to learn it. And 
for programmers the paradox is even more pronounced: the language to 
learn, if you want to get a good job, is a language that people 
don't learn merely to get a job.


[SNIP]


I don't think that an esoteric language leads to better programmers.

I know really good people that work mostly in assembly which by today
standard would be considered "esoteric".

They are really good at their field but they write shitty code in higher
languages as python.

That same goes for the other direction: I saw Ruby programmers writing C
code and trust me, it didn't result in good quality code.

I would be more inclined to think that a good programmer is not the one
that knows an esoteric language but the one that can jump from one
programming paradigm to another.

And when I say "jump" I mean that he/she can understand the problem to
solve, find the best tech stack to solve it and do it in an efficient
manner using that tech stack correctly.

It is in the "using that tech stack correctly" where some programmers
that "think" they know languages A, B and C get it wrong.

Just writing code that "compiles" and "it does not immediately crash" is
not enough to say that "you are using the tech stack correctly".


On Wed, Dec 07, 2022 at 10:58:09AM -0500, David Lowry-Duda wrote:

On Mon, Dec 05, 2022 at 10:37:39PM -0300, Sabrina Almodóvar wrote:

The Python Paradox
   Paul Graham
   August 2004

[SNIP]

Hence what, for lack of a better name, I'll call the Python paradox: 
if a company chooses to write its software in a comparatively 
esoteric language, they'll be able to hire better programmers, 
because they'll attract only those who cared enough to learn it. And 
for programmers the paradox is even more pronounced: the language to 
learn, if you want to get a good job, is a language that people 
don't learn merely to get a job.


[SNIP]


I wonder what the appropriately esoteric language is today?

We can sort of think of go/rust as esoteric versions of C/C++. But 
what would be the esoteric python?


Perhaps Julia? I don't know of any large software projects happening 
in julia world that aren't essentially scientific computing libraries 
(but this is because *I* work mostly with scientific computing 
libraries and sometimes live under a rock).


- DLD
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


byexample: free, open source tool to find snippets of code in your docs and execute them as regression tests

2021-05-03 Thread Martin Di Paola

Hi everyone, I would like to share a free, open source tool with you that
I've been developing in the last few years.

You'll be probably familiar with things like this in the Python
documentation:

```
  >>> 1 + 3
  4
```

byexample will find those snippets, it will execute "1 + 3" and the
output will be compared with the expected one (the "4") so you can know
that your docs are in sync with your code.

If you are familiar with Python's doctest module, it is the same idea
but after a few years of using it I found some limitations that I tried
to break.

That's how byexample was born: it allows you find and execute
the snippets/examples written in different languages in different files.

You could run Ruby and C++ code written in the docstrings of your Python
source code or in a fenced code block of a Markdown file.

You could "capture" the output of one example and "paste" it into
another as a way to share data between examples.

```
  $ cat somefile   # a Shell example ( will capture a word)
  Lorem ipsum  sit amet.

  >>> "" == "dolor"  # Python example  # byexample: +paste
  True
```

You could even "type" text when your example is interactive and requires
some input:

```
  >>> name = input("your name please: ")  # byexample: +type
  your name please: [john]

  >>> print(name)
  john
```

There are a few more features but this email is long enough.

The full set of features and tutorials are in https://byexamples.github.io
(by the way, the examples in that web page are the tests of byexample!)

Repo: https://github.com/byexamples/byexample (feel free to submit any
issue or question)

You can install it with pip:

  pip install byexample

And if you are a fan of Python's doctest (as I am), there is a
compatibility mode that you may want to check:
https://byexamples.github.io/byexample/recipes/python-doctest

I would like to receive your feedback.

Thanks for your time!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Use Chrome's / Firefox's dev-tools in python

2021-05-22 Thread Martin Di Paola

Hello,

I'm not 100% sure but I think that I understand what you are trying to
do. I faced the same problem a few months ago.

I wanted to know when a particular blog posted a new article.

My plan was to query the blog every day running a python script, get the
list of articles it has and comparing them with the list of the previous
day.

I used Selenium (https://selenium-python.readthedocs.io/) and on top of
that I implemented a thin layer to manipulate the web page called
"selectq" (https://github.com/SelectQuery/sQ)

You could write a similar script.

Using Selenium or selectq will open a web browser but given that it is
fully automated, it should not be a problem (well, yes, it may run a
little slow however).

The good side is that both can inject javascript if you have to.

Would this work for you or am I saying nonsense?

Thanks!
Martin.

On Fri, May 21, 2021 at 03:46:50AM -0700, max pothier wrote:

Hello,
Thanks for you answer!
Actually my goal is not to automatically get the file once I open the page, but 
more to periodically check the site and get a notification when there's new 
homework or, at the morning, know when an hour is cancelled, so I don't want to 
have to open the browser every time.
I have pretty good javascript knowledge so if you could better explain that 
idea, it would be a great help.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Use Chrome's / Firefox's dev-tools in python

2021-05-23 Thread Martin Di Paola
"unselectable text" not necessary means that it is an image. There is 
a CSS property that you can change to make a text 
selectable/unselectable.


And if it is an image, it very likely that it comes from the server as 
such, so "intercepting" the packet coming from there will be for 
nothing: you will have the same image.


About the "packet listener", you could setup a proxy between your 
browser and the server and use the proxy to see the messages. "Burp" is 
the classical tool for this.


But I have the feeling that the solution is easier.

Try the following: do it manually but take note of the steps you do.

Example:

1) Go to page https://www.parisclassenumerique.fr
2) Click in the upper-right menu button and choose "Tutoriels". Now the 
URL is 
https://www.parisclassenumerique.fr/lutece/jsp/site/Portal.jsp?page_id=9

3) Then click in "Comment démarrer sur PCN ?", on the left panel

... and so on.

Basically you can then translate those steps to Selenium/selectq and 
automate them. It's here where I could help you but I cannot do much 
without more info because I don't know which page you are looking and in 
which link you are trying to click and stuff like that.


On Sun, May 23, 2021 at 01:36:48AM -0700, max pothier wrote:

Hi,
Seems like that could be a method of doing things. Just one clarification: the 
website has unselectable text, looks like it's an image strangely generated, so 
if I can get the packet with it, it would be perfect. As I said (I think), 
logging in with Selenium was already possible, and I could get a screenshot of 
the page after logging in.
If you got this working like a packet listener in browser capable of seeing 
packet data, I'd gladly accept the code.
I've tried to do this for 3 years now (since I came into that school 
basically), looks like it's coming to an end!
Thanks!
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Data structure for plotting monotonically expanding data set

2021-06-05 Thread Martin Di Paola
One way to go is using Pandas as it was mentioned before and Seaborn for 
plotting (built on top of matplotlib)


I would approach this prototyping first with a single file and not with 
the 1000 files that you have.


Using the code that you have for parsing, add the values to a Pandas 
DataFrame (aka, a table).


# load pandas and create a 'date' object to represent the file date
# You'll have "pip install pandas" to use it
import pandas as pd

file_date = pd.to_datetime('20210527')

# data that you parsed as list of lists with each list being
# each line in your file.
data = [
["alice", 123, file_date],
["bob", 4, file_date],
["zebedee", 999, file_date]
]

# then, load it as a pd.DataFrame
df = pd.DataFrame(data, columns=['name', 'kb', 'date'])

# print it
print(df)
name   kb   date
  0alice  123 2021-05-27
  1  bob4 2021-05-27
  2  zebedee  999 2021-05-27

Now, this is the key point: You can save the dataframe in a file
so you don't have to process the same file over and over.

Pandas has different formats, some are more suitable than others.

# I'm going to use "parquet" format which compress really well
# and it is quite fast. You'll have "pip install pyarrow" to use it
df.to_parquet('df-20210527.pq')

Now you repeat this for all your files so you will end up with ~1000 
parquet files.


So, let's say that you want to plot some lines. You'll need to load 
those dataframes from disk.


You read each file, get a Pandas DataFrame for each and then
"concatenate" them into a single Pandas DataFrame

all_dfs = [pd.read_parquet() for  in <...>]
df = pd.concat(all_dfs, ignore_index=True)

Now, the plotting part. You said that you wanted to use matplotlib. I'll 
go one step forward and use seaborn (which it is implemented on top of 
matplotlib)


import matplotlib.pyplot as plt
import seaborn as sns

# plot the mean of 'kb' per date as a point. Per each point
# plot a vertical line showing the "spread" of the values and connect
# the points with lines to show the slope (changes) between days
sns.pointplot(data=df, x="date", y="kb")
plt.show()

# plot the distribution of the 'kb' values per each user 'name'.
sns.violinplot(data=df, x="name", y="kb")
plt.show()

# plot the 'kb' per day for the 'alice' user
sns.lineplot(data=df.query('name == "alice"'), x="date", y="kb")
plt.show()

That's all, a very quick intro to Pandas and Seaborn.

Enjoy the hacking.

Thanks,
Martin.


On Thu, May 27, 2021 at 08:55:11AM -0700, Edmondo Giovannozzi wrote:

Il giorno giovedì 27 maggio 2021 alle 11:28:31 UTC+2 Loris Bennett ha scritto:

Hi,

I currently a have around 3 years' worth of files like

home.20210527
home.20210526
home.20210525
...

so around 1000 files, each of which contains information about data
usage in lines like

name kb
alice 123
bob 4
...
zebedee 999

(there are actually more columns). I have about 400 users and the
individual files are around 70 KB in size.

Once a month I want to plot the historical usage as a line graph for the
whole period for which I have data for each user.

I already have some code to extract the current usage for a single from
the most recent file:

for line in open(file, "r"):
columns = line.split()
if len(columns) < data_column:
logging.debug("no. of cols.: %i less than data col", len(columns))
continue
regex = re.compile(user)
if regex.match(columns[user_column]):
usage = columns[data_column]
logging.info(usage)
return usage
logging.error("unable to find %s in %s", user, file)
return "none"

Obviously I will want to extract all the data for all users from a file
once I have opened it. After looping over all files I would naively end
up with, say, a nested dict like

{"20210527": { "alice" : 123, , ..., "zebedee": 999},
"20210526": { "alice" : 123, "bob" : 3, ..., "zebedee": 9},
"20210525": { "alice" : 123, "bob" : 1, ..., "zebedee": 999},
"20210524": { "alice" : 123, ..., "zebedee": 9},
"20210523": { "alice" : 123, ..., "zebedee": 999},
...}

where the user keys would vary over time as accounts, such as 'bob', are
added and latter deleted.

Is creating a potentially rather large structure like this the best way
to go (I obviously could limit the size by, say, only considering the
last 5 years)? Or is there some better approach for this kind of
problem? For plotting I would probably use matplotlib.

Cheers,

Loris

--
This signature is currently under construction.


Have you tried to use pandas to read the data?
Then you may try to add a column with the date and then join the datasets.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Recommendation for drawing graphs and creating tables, saving as PDF

2021-06-11 Thread Martin Di Paola

You could try https://plantuml.com and http://ditaa.sourceforge.net/.

Plantuml may not sound as the right tool but it is quite flexible and 
after a few tweak you can create a block diagram as you shown.


And the good thing is that you *write* which elements and relations are 
in your diagram and it is Plantuml which will draw it for you.


On the other hand, in Ditaa you have to do the layout but contrary to 
most of the GUI apps, Ditaa processes plaintext (ascii art if you want).


For simple things, Ditaa is probably a good option too.

Finally, I use https://pandoc.org/ to transform my markdowns into PDFs 
for a textbook that I'm writing (and in the short term for my blog).


None of those are "libraries" in the sense that you can load in Python, 
however nothing should prevent you to call them from Python with 
`subprocess`.


By the way, I'm interesting too in to know other tools for making 
diagrams.


On Fri, Jun 11, 2021 at 08:52:20AM -0400, Neal Becker wrote:

Jan Erik Moström wrote:


I'm doing something that I've never done before and need some advise for
suitable libraries.

I want to

a) create diagrams similar to this one
https://www.dropbox.com/s/kyh7rxbcogvecs1/graph.png?dl=0 (but with more
nodes) and save them as PDFs or some format that can easily be converted
to PDFs

b) generate documents that contains text, lists, and tables with some
styling. Here my idea was to save the info as markdown and create PDFs
from those files, but if there is some other tools that gives me better
control over the tables I'm interested in knowing about them.

I looked around around but could only find two types of libraries for a)
libraries for creating histograms, bar charts, etc, b) very basic
drawing tools that requires me to figure out the layout etc. I would
prefer a library that would allow me to state "connect A to B", "connect
C to B", "connect B to D", and the library would do the whole layout.

The closest I've found it to use markdown and mermaid or graphviz but
... PDFs (perhaps I should just forget about PDFs, then it should be
enough to send people to a web page)

(and yes, I could obviously use LaTeX ...)

= jem


Like this?
https://pypi.org/project/blockdiag/

--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: optimization of rule-based model on discrete variables

2021-06-14 Thread Martin Di Paola
From what I'm understanding it is an "optimization problem" like the 
ones that you find in "linear programming".


But in your case the variables are not Real (they are Integers) and the 
function to minimize g() is not linear.


You could try/explore CVXPY (https://www.cvxpy.org/) which it's a solver 
for different kinds of "convex programming". I don't have experience 
with it however.


The other weapon in my arsenal would be Z3 
(https://theory.stanford.edu/~nikolaj/programmingz3.html) which it's 
a SMT/SAT solver with a built-in extension for optimization problems.


I've more experience with this so here is a "draft" of what you may be 
looking for.



from z3 import Integers, Optimize, And, If

# create a Python array X with 3 Z3 Integer variables named x0, x1, x2
X = Integers('x0 x1 x2')
Y = Integers('y0 y1')

# create the solver
solver = Optimize()

# add some restrictions like lower and upper bounds
for x in X:
  solver.add(And(0 <= x, x <= 2)) # each x is between 0 and 2
for y in Y:
  solver.add(And(0 <= y, y <= 2))

def f(X):
  # Conditional expression can be modeled too with "If"
  # These are *not* evaluated like a normal Python "if" but
  # modeled as a whole. It'll be the solver which will "run it"
  return If(
And(x[0] == 0, x[1] == 0),  # the condition
Y[0] == 0,  # Y[0] will *must* be 0 *if* the condition holds
Y[0] == 2   # Y[0] will *must* be 2 *if* the condition doesn't hold
)

solver.add(f(X))

# let's define the function to optimize
g = Y[0]**2
solver.maximize(g)

# check if we have a solution
solver.check() # this should return 'sat'

# get one of the many optimum solutions
solver.model()


I would recommend you to write a very tiny problem with 2 or 3 variables 
and a very simple f() and g() functions, make it work (manually and with 
Z3) and only then build a more complex program.


You may find useful (or not) these two posts that I wrote a month ago 
about Z3. These are not tutorials, just personal experience with 
a concrete example.


Combine Real, Integer and Bool variables:
https://book-of-gehn.github.io/articles/2021/05/02/Planning-Space-Missions.html

Lookup Tables (this may be useful for programming a f() "variable" 
function where the code of f() (the decision tree) is set by Z3 and not 
by you such f() leads to the optimum of g())

https://book-of-gehn.github.io/articles/2021/05/26/Casting-Broadcasting-LUT-and-Bitwise-Ops.html


Happy hacking.
Martin.


On Mon, Jun 14, 2021 at 12:51:34PM +, Elena via Python-list wrote:

Il Mon, 14 Jun 2021 19:39:17 +1200, Greg Ewing ha scritto:


On 14/06/21 4:15 am, Elena wrote:

Given a dataset of X={(x1... x10)} I can calculate Y=f(X) where f is
this rule-based function.

I know an operator g that can calculate a real value from Y: e = g(Y)
g is too complex to be written analytically.

I would like to find a set of rules f able to minimize e on X.


There must be something missing from the problem description.
 From what you've said here, it seems like you could simply find
a value k for Y that minimises g, regardless of X, and then f would
consist of a single rule: y = k.

Can you tell us in more concrete terms what X and g represent?


I see what you mean, so I try to explain it better: Y is a vector say [y1,
y2, ... yn], with large (n>>10), where yi = f(Xi) with Xi = [x1i, x2i, ...
x10i] 1<=i<=n. All yi and xji assume discrete values.

I already have a dataset of X={Xi} and would like to find the rules f able
to minimize a complicated-undifferenciable Real function g(f(X)).
Hope this makes more sense.

x1...x10 are 10 chemical components that can be absent (0), present (1),
modified (2). yi represent a quality index of the mixtures and g is a
global quality of the whole process.

Thank you in advance

ele
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Austin -- CPython frame stack sampler v3.0.0 is now available

2021-07-02 Thread Martin Di Paola

Very nice. I used rbspy for Ruby programs https://rbspy.github.io/
and it can give you some insights about the running code that other 
profiling techniques may not give you.


I'll use it in my next performance-bottleneck challenge.

On Fri, Jul 02, 2021 at 04:04:24PM -0700, Gabriele Tornetta wrote:

I am delighted to announce the release 3.0.0 of Austin. If you haven't heard of 
Austin before, it is an open-source frame stack sampler for CPython, 
distributed under the GPLv3 license. It can be used to obtain statistical 
profiling data out of a running Python application without a single line of 
instrumentation. This means that you can start profiling a Python application 
straight away, even while it's running in a production environment, with 
minimal impact on performance.

The best way to leverage Austin is to use the new extension for VS Code, which 
brings interactive flame graphs straight into the text editor to allow you to 
quickly jump to the source code with a simple click. You can find the extension 
on the Visual Studio Marketplace and install it directly from VS Code:

https://marketplace.visualstudio.com/items?itemName=p403n1x87.austin-vscode

To see how to make the best of Austin with VS Code to find and fix performance 
issues, check out this blog post, which shows you the editor extension in 
action on a real Python project:

https://p403n1x87.github.io/how-to-bust-python-performance-issues.html

The latest release comes with many improvements, including a re-worked 
sleepless mode that now gives an estimate of CPU time, initial support for 
Python 3.10, better support for Python-based binaries like gunicorn, uWSGI, 
etc. on all supported platforms.

Austin is a pure C application that has no dependencies other than the C 
standard library. Its source code is hosted on GitHub at

https://github.com/P403n1x87/austin

The README contains installation and usage details, as well as some examples of 
Austin in action. Details on how to contribute to Austin's development can be 
found at the bottom of the page.

Austin can be installed easily on the following platforms and from the 
following sources:

Linux:
- Snap Store
- Debian repositories

macOS:
- Homebrew

Windows:
- Chocolatey
- Scoop

An Austin image, based on Ubuntu 20.04, is also available from Docker Hub:

https://hub.docker.com/r/p403n1x87/austin

Austin is also simple to compile from sources as it only depends on the 
standard C library if you don't have access to the above-listed sources.


You can stay up-to-date with the project's development by following Austin on 
Twitter (https://twitter.com/AustinSampler).

All the best,
Gabriele
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Empty list as a default param - the problem, and my suggested solution

2021-08-14 Thread Martin Di Paola
I don't know if it is useful but it is an interesting 
metaprogramming/reflection challenge.


You used `inspect` but you didn't take its full potential. Try to see if 
you can simplify your code and see if you can come with a decorator

that does not require special parameters.


from new import NEW
@NEW

... def new_func(a=[]):
... a.append('new appended')
... return a
...

new_func()

['new appended']

new_func()

['new appended']

Spoiler - My solution is at 
https://book-of-gehn.github.io/articles/2021/08/14/Fresh-Python-Defaults.html



On Fri, Aug 13, 2021 at 03:44:20PM -0700, guruyaya wrote:

I am fairly sure all of us know about this python quirk:

def no_new_func(a=[]):

...a.append('new')
...return a


no_new_func()

['new']

no_new_func()

['new', 'new']




For some time I was bothered about that there's no elegant way to use empty 
list or dict as a default parameter. While this can be solved like this:

def no_new_func(a=None):

...if a == None:
   a = []
...a.append('new')
...return a

I have to say I find this solution very far from the spirit of python. Kinda 
ugly, and not explicit. So I've decided to try and create a new module, that 
will try and make, what I think, is a more beautiful and explicit:


from new import NEW
@NEW.parse

... def new_func(a=NEW.new([])):
... a.append('new appended')
... return a
...

new_func()

['new appended']

new_func()

['new appended']

I'd like to hear your thoughts on my solution and code. You can find and give 
your feedback in this project
https://github.com/guruyaya/new
If I see that people like this, I will upload it to pip. I'm not fully sure about the 
name I choose (I thought about the "new" keyword used in JAVA, not sure it 
applies here as well)

Thanks in advance for your feedback
Yair
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: on perhaps unloading modules?

2021-08-17 Thread Martin Di Paola
This may not answer your question but it may provide an alternative 
solution.


I had the same challenge that you an year ago so may be my solution will 
work for you too.


Imagine that you have a Markdown file that *documents* the expected 
results.


--8<---cut here---start->8---
This is the final exam, good luck!

First I'm going to load your code (the student's code):

```python

import student

```

Let's see if you programmed correctly a sort algorithm

```python

data = [3, 2, 1, 3, 1, 9]
student.sort_numbers(data)

[1, 1, 2, 3, 3, 9]
```

Let's now if you can choose the correct answer:

```python

t = ["foo", "bar", "baz"]
student.question1(t)

"baz"
```
--8<---cut here---end--->8---

Now you can run the snippets of code with:

  byexample -l python the_markdown_file.md

What byexample does is to run the Python code, capture the output and 
compare it with the expected result.


In the above example "student.sort_numbers" must return the list sorted.  
That output is compared by byexample with the list written below.


Advantages? Each byexample run is independent of the other and the 
snippet of codes are executed in a separated Python process. byexample 
takes care of the IPC.


I don't know the details of your questions so I'm not sure if byexample 
will be the tool for you. In my case I evaluate my students giving them 
the Markdown and asking them to code the functions so they return the 
expected values.


Depending of how many students you have you may considere to complement 
this with INGInious. It is designed to run students' assignments 
assuming nothing on the untrusted code.


Links:

https://byexamples.github.io/byexample/
https://docs.inginious.org/en/v0.7/


On Sun, Aug 15, 2021 at 12:09:58PM -0300, Hope Rouselle wrote:

Hope Rouselle  writes:

[...]


Of course, you want to see the code.  I need to work on producing a
small example.  Perhaps I will even answer my own question when I do.


[...]

Here's a small-enough case.  We have two students here.  One is called
student.py and the other is called other.py.  They both get question 1
wrong, but they --- by definition --- get question 2 right.  Each
question is worth 10 points, so they both should get losses = 10.

(*) Student student.py

--8<---cut here---start->8---
def question1(t): # right answer is t[2]
 return t[1] # lack of attention, wrong answer
--8<---cut here---end--->8---

(*) Student other.py

--8<---cut here---start->8---
def question1(t): # right answer is t[2]
 return t[0] # also lack of attention, wrong answer
--8<---cut here---end--->8---

(*) Grading

All is good on first run.

Python 3.5.2 [...] on win32
[...]

reproducible_problem()

student.py, total losses 10
other.py, total losses 10

The the problem:


reproducible_problem()

student.py, total losses 0
other.py, total losses 0

They lose nothing because both modules are now permanently modified.

(*) The code of grading.py

--8<---cut here---start->8---
# -*- mode: python; python-indent-offset: 2 -*-
def key_question1(t):
 # Pretty simple.  Student must just return index 2 of a tuple.
 return t[2]

def reproducible_problem(): # grade all students
 okay, m = get_student_module("student.py")
 r = grade_student(m)
 print("student.py, total losses", r) # should be 10
 okay, m = get_student_module("other.py")
 r = grade_student(m)
 print("other.py, total losses", r) # should be 10

def grade_student(m): # grades a single student
 losses  = question1_verifier(m)
 losses += question2_verifier(m)
 return losses

def question1_verifier(m):
 losses = 0
 if m.question1( (0, 1, 2, 3) ) != 2: # wrong answer
   losses = 10
 return losses

def question2_verifier(m):
 m.question1 = key_question1
 # To grade question 2, we overwrite the student's module by giving
 # it the key_question1 procedure.  This way we are able to let the
 # student get question 2 even if s/he got question 1 incorrect.
 losses = 0
 return losses

def get_student_module(fname):
 from importlib import import_module
 mod_name = basename(fname)
 try:
   student = import_module(mod_name)
 except Exception as e:
   return False, str(e)
 return True, student

def basename(fname): # drop the the .py extension
 return "".join(fname.split(".")[ : -1])
--8<---cut here---end--->8---
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: basic auth request

2021-08-21 Thread Martin Di Paola
While it is correct to say that Basic Auth without HTTPS is absolutely 
insecure, using Basic Auth *and* HTTPS is not secure either.


Well, the definition of "secure" depends of your threat model.

HTTPS ensures encryption so the content, including the Basic Auth 
username and password, is secret for any external observer.


But it is *not* secret for the receiver (the server): if it was 
compromised an adversary will have access to your password. It is much 
easier to print a captured password than cracking the hashes.


Other authentication mechanisms exist, like OAuth, which are more 
"secure".


Thanks,
Martin


On Wed, Aug 18, 2021 at 11:05:46PM -, Jon Ribbens via Python-list wrote:

On 2021-08-18, Robin Becker  wrote:

On 17/08/2021 22:47, Jon Ribbens via Python-list wrote:
...

That's only true if you're not using HTTPS - and you should *never*
not be using HTTPS, and that goes double if forms are being filled
in and double again if passwords are being supplied.


I think I agree with most of the replies; I understood from reading
the rfc that the charset is utf8 (presumably without ':')


The username can't contain a ':'. It shouldn't matter in the password.


and that basic auth is considered insecure. It is being used over
https so should avoid the simplest net scanning.


It's not insecure over HTTPS. Bear in mind the Basic Auth RFC was
written when HTTP was the standard and HTTPS was unusual. The positions
are now effectively reversed.


I googled a bunch of ways to do this, but many come down to 1) using
the requests package or 2) setting up an opener. Both of these seem to
be much more complex than is required to add the header.

I thought there might be a shortcut or more elegant way to replace the
old code, but it seems not


It's only a trivial str/bytes difference, it shouldn't be any big deal.
But using 'requests' instead is likely to simplify things and doesn't
tend to be an onerous dependency.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Select columns based on dates - Pandas

2021-09-03 Thread Martin Di Paola
You may want to reshape the dataset to a tidy format: Pandas works 
better with that format.


Let's assume the following dataset (this is what I understood from your 
message):


In [34]: df = pd.DataFrame({
...: 'Country': ['us', 'uk', 'it'],
...: '01/01/2019': [10, 20, 30],
...: '02/01/2019': [12, 22, 32],
...: '03/01/2019': [14, 24, 34],
...: })

In [35]: df
Out[35]:
  Country  01/01/2019  02/01/2019  03/01/2019
0  us  10  12  14
1  uk  20  22  24
2  it  30  32  34

Then, reshape it to a tidy format. Notice how each row now represents 
a single measure.


In [43]: pd.melt(df, id_vars=['Country'], var_name='Date', 
value_name='Cases')

Out[43]:
  CountryDate  Cases
0  us  01/01/2019 10
1  uk  01/01/2019 20
2  it  01/01/2019 30
3  us  02/01/2019 12
4  uk  02/01/2019 22
5  it  02/01/2019 32
6  us  03/01/2019 14
7  uk  03/01/2019 24
8  it  03/01/2019 34

I used strings to represent the dates but it is much handy work
with real date objects.

In [44]: df2 = _
In [45]: df2['Date'] = pd.to_datetime(df2['Date'])

Now we can filter by date:

In [50]: df2[df2['Date'] < '2019-03-01']
Out[50]:
  Country   Date  Cases
0  us 2019-01-01 10
1  uk 2019-01-01 20
2  it 2019-01-01 30
3  us 2019-02-01 12
4  uk 2019-02-01 22
5  it 2019-02-01 32

With that you could create three dataframes, one per month.

Thanks,
Martin.


On Thu, Sep 02, 2021 at 12:28:31PM -0700, Richard Medina wrote:

Hello, forum,
I have a data frame with covid-19 cases per month from 2019 - 2021 like a 
header like this:

Country, 01/01/2019, 2/01/2019, 01/02/2019, 3/01/2019, ... 01/01/2021, 
2/01/2021, 01/02/2021, 3/01/2021

I want to filter my data frame for columns of a specific month range of march 
to September of 2019, 2020, and 2021 only (three data frames).

Any ideas?
Thank you


--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: The task is to invent names for things

2021-10-28 Thread Martin Di Paola

IMHO, I prefer really weird names.

For example if I'm not sure how to name a class that I'm coding, I name it
like XXXYYY (literally). Really ugly.

This is a way to avoid the so called "naming paralysis".

Once I finish coding the class I look back and it should be easy to see "what
it does" and from there, the correct name.

If the "what it does" results in multiple things I refactor it, splitting it
into two or more pieces and name each separately.

Some people prefer using more generic names like "Manager", "Helper",
"Service" but those names are problematic.

Yes, they fit in any place but that's the problem. If I'm coding a class and I
name it as "FooHelper", I may not realize later that the class is doing too
many things (unrelated things), because "it is a helper".

The thing gets wrong with the time; I bet that most of us saw a "Helper" class
with thousands of lines (~5000 lines was my record) that just grows over time.


On Wed, Oct 27, 2021 at 12:41:56PM +0200, Karsten Hilbert wrote:

Am Tue, Oct 26, 2021 at 11:36:33PM + schrieb Stefan Ram:


xyzzy = lambda x: 2 * x

  . Sometimes, this can even lead to "naming paralysis", where
  one thinks excessively long about a good name. To avoid this
  naming paralysis, one can start out with a mediocre name. In
  the course of time, often a better name will come to one's mind.


In that situation, is it preferable to choose a nonsensical
name over a mediocre one ?

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Martin Di Paola

Hi!, in short your code should work.

I think that the join-joined problem is just an interpretation problem.

In pseudo code the background_thread function does:

def background_thread()
  # bla
  print("join?")
  # bla
  print("joined")

When running this function in parallel using threads, you will probably
get a few "join?" first before receiving any "joined?". That is because
the functions are running in parallel.

The order "join?" then "joined" is preserved within a thread but not
preserved globally.

Now, I see another issue in the output (and perhaps you was asking about 
this one):


join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

So you have 4 "join?" that correspond to the 4 background_thread 
function calls in threads but only 2 "myfnc" and 2 "joined".


Could be possible that the output is truncated by accident?

I ran the same program and I got a reasonable output (4 "join?", "myfnc" 
and "joined"):


join?
join?
myfnc
join?
myfnc
join?
joined.
myfnc
joined.
joined.
myfnc
joined.

Another issue that I see is that you are not joining the threads that 
you spawned (background_thread functions).


I hope that this can guide you to fix or at least narrow the issue.

Thanks,
Martin.


On Mon, Dec 06, 2021 at 12:50:11AM +0100, Johannes Bauer wrote:

Hi there,

I'm a bit confused. In my scenario I a mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
actually start and so they deadlock on join.

I've created a minimal example that demonstrates the issue. I'm running
on x86_64 Linux using Python 3.9.5 (default, May 11 2021, 08:20:37)
([GCC 10.3.0] on linux).

Here's the code:


import time
import multiprocessing
import threading

def myfnc():
print("myfnc")

def run(result_queue, callback):
result = callback()
result_queue.put(result)

def start(fnc):
def background_thread():
queue = multiprocessing.Queue()
proc = multiprocessing.Process(target = run, args = (queue, 
fnc))
proc.start()
print("join?")
proc.join()
print("joined.")
result = queue.get()
threading.Thread(target = background_thread).start()

start(myfnc)
start(myfnc)
start(myfnc)
start(myfnc)
while True:
time.sleep(1)


What you'll see is that "join?" and "joined." nondeterministically does
*not* appear in pairs. For example:

join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

What's worse is that when this happens and I Ctrl-C out of Python, the
started Thread is still running in the background:

$ ps ax | grep minimal
370167 pts/0S  0:00 python3 minimal.py
370175 pts/2S+ 0:00 grep minimal

Can someone figure out what is going on there?

Best,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Call julia from Python: which package?

2021-12-18 Thread Martin Di Paola
I played with Julia a few months ago. I was doing some data-science 
stuff with Pandas and the performance was terrible so I decided to give 
Julia a try.


My plan was to do the slow part in Julia and call it from Python.  
I tried juliacall (if I don't remember wrong) but I couldn't set up it.


It wasn't as smooth as it was advertised (but hey! you may have a better 
luck than me).


The other thing that I couldn't figure out is *how the data is shared 
between Python and Julia*. Basically numpy/pandas structures cannot be 
used by Julia own libraries as they are, they need to be copied at least 
and this can be a real performance hit.


I desisted the idea but you may still consider this as an real option.  
Just validate how much data you need to share (in my cases where quite 
large datasets, hence the issue).


Having said that, is Julia a real option? May be.

In my case the processing that I needed was quite basic and Julia did 
a really good job.


But I felt that the library is too fragmented. In Python you can relay 
on Pandas/numpy for processing and matplotlib/seaborn for plotting and 
you will 99% covered.


In Julia I need DataFrames.jl, Parquet.jl, CategoricalArrays.jl, 
StatsBase.jl, Statistics.jl and Peaks.jl


And I'm not including any plotting stuff.

This fragmentation is not just "inconvenient" because you need to 
install more packages but it is more difficult to code.


The API is not consistent so you need to be careful and double check 
that what you are calling is really doing what you think.


About the performance, Julia is not magic. It depends on how well it was 
coded each package.


In my case I had a good experience except with Parquet.jl which it 
didn't understand how to handle categories in the dataset and ended up 
loading a lot of duplicated strings and blew up the memory a few times.


I suggest you to write down what you need to speed up and see if it is 
implemented in Julia (do a proof of concept). Only then consider to do 
the switch.


Good luck and share your results!
Martin.


On Fri, Dec 17, 2021 at 07:12:22AM -0800, Dan Stromberg wrote:

On Fri, Dec 17, 2021 at 7:02 AM Albert-Jan Roskam 
wrote:


Hi,

I have a Python program that uses Tkinter for its GUI. It's rather slow so
I hope to replace many or all of the non-GUI parts by Julia code. Has
anybody experience with this? Any packages you can recommend? I found three
alternatives:

* https://pyjulia.readthedocs.io/en/latest/usage.html#
* https://pypi.org/project/juliacall/
* https://github.com/JuliaPy/PyCall.jl

Thanks in advance!



I have zero Julia experience.

I thought I would share this though:
https://stromberg.dnsalias.org/~strombrg/speeding-python/

Even if you go the Julia route, it's probably still best to profile your
Python code to identify the slow ("hot") spots, and rewrite only them.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: sharing data across Examples docstrings

2022-01-14 Thread Martin Di Paola

Hello,

I understand that you want to share data across examples (docstrings) 
because you are running doctest to validate them (and test).


The doctest implementation evaluates each docstring separately without 
sharing the context so the short answer is "no".


This is a limitation of doctest but it is not the only testing engine 
that you can use.


You could use "byexample" ( https://byexamples.github.io ) which it 
shares the context by default.


byexample has more features and fixes other caveats of doctests, but 
don't take me too serious, I have a natural bias because I'm its author.


If you want to go with byexample, you may want to try its "doctest 
compatibility mode" first so you don't have to rewrite any test.

( https://byexamples.github.io/byexample/recipes/python-doctest )

Let me know if it is useful for you.

Thanks,
Martin.

On Tue, Jan 11, 2022 at 04:09:57PM -0600, Sebastian Luque wrote:

Hello,

I am searching for a mechanism for sharing data across Examples sections
in docstrings within a class.  For instance:

class Foo:

   def foo(self):
   """Method foo title

   The example generating data below may be much more laborious.

   Examples
   
   >>> x = list("abc")  # may be very long and tedious to generate

   """
   pass

   def bar(self):
   """Method bar title

   Examples
   
   >>> # do something else with x from foo Example

   """
   pass


Thanks,
--
Seb
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Waht do you think about my repeated_timer class

2022-02-04 Thread Martin Di Paola




In _run I first set the new timer and then I execute the function. So
that will go mostly OK.


Yes, that's correct however you are not taking into consideration the 
imprecision of the timers.


Timer will call the next _run() after self._interval *plus* some unknown 
arbitrary time (and extra delay).


Let's assume that when you setup an 1 sec Timer but the Timer calls 
_run() after 1.01 secs due this unknown extra delay.


The first time fn() should be called after 1 sec since the begin but it 
is called after 1.01 secs so the extra delay was of 0.01 sec.


The second time fn() should be called after 2 secs since the begin but 
it is called after 2.02 secs. The second fn() was delayed not by 0.01 
but by 0.02 secs.


The third fn() will be delayed by 0.03 secs and so on.

This arbitrary delay is very small however it will sum up on each 
iteration and depending of your application can be a serious problem.


I wrote a post about this and how to create "constant rate loops" which 
fixes this problem:

https://book-of-gehn.github.io/articles/2019/10/23/Constant-Rate-Loop.html

In the post I also describe two solutions (with their trade-offs) for 
when the target function fn() takes longer than the self._interval time.


See if it helps.

Thanks,
Martin.


On Thu, Feb 03, 2022 at 11:41:42PM +0100, Cecil Westerhof via 
Python-list wrote:

Barry  writes:


On 3 Feb 2022, at 04:45, Cecil Westerhof via Python-list 
 wrote:

Have to be careful that timing keeps correct when target takes a 'lot'
of time.
Something to ponder about, but can wait.


You have noticed that your class does call the function at the repeat interval 
but
rather at the repeat interval plus processing time.


Nope:
   def _next(self):
   self._timer = Timer(self._interval, self._run)
   self._timer.start()

   def _run(self):
   self._next()
   self._fn()

In _run I first set the new timer and then I execute the function. So
that will go mostly OK.



The way to fix this is to subtract the last processing elapsed time for the 
next interval.
Sort of a software phase locked loop.

Just before you call the run function record the time.time() as start_time.
Then you can calculate next_interval = max( .001, interval - time.time() - 
start_time)
I use 1ms as the min interval.


But I am working on a complete rewrite to create a more efficient
class. (This means I have to change also the code that uses it.) There
I have to do something like you suggest. (I am already working on it.)


Personally I am also of the opinion that the function should finish in
less as 10% from the interval. (That was one of my rewrites.)

--
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: How do you log in your projects?

2022-02-09 Thread Martin Di Paola

- On a line per line basis? on a function/method basis?


In general I prefer logging line by line instead per function.

It is easy to add a bunch of decorators to the functions and get the 
logs of all the program but I most of the time I end up with very 
confusing logs.


There are exceptions, yes, but I prefer the line by line where the log 
should explain what is doing the code.



- Which kind of variable contents do you write into your logfiles?
- How do you decide, which kind of log message goes into which level?
- How do you prevent logging cluttering your actual code?


These three comes to the same answer: I think on whom is going to read 
the logs.


If the logs are meant to be read by my users I log high level messages,
specially before parts that can take a while (like the classic 
"Loading...").


If I log variables, those must be the ones set by the users so he/she 
can understand how he/she is controlling the behaviour of the program.


For exceptions I print the message but not the traceback. Across the 
code tag some important functions to put an extra message that will 
enhance the final message printed to the user.


https://github.com/byexamples/byexample/blob/master/byexample/common.py#L192-L238

For example:

for example in examples:
with enhance_exceptions(example, ...):
foo()

So if an exception is raised by foo(), enhance_exceptions() will attach 
to it useful information for the user from the example variable.


In the main, then I do the pretty print
https://github.com/byexamples/byexample/blob/master/byexample/byexample.py#L17-L22

If the user of the logs is me or any other developer I write more debugging 
stuff.

My approach is to not log anything and when I have to debug something 
I use a debugger + some prints. When the issue is fixed I review which 
prints would be super useful and I turn them into logs and the rest is 
deleted.



On Tue, Feb 08, 2022 at 09:40:07PM +0100, Marco Sulla wrote:

These are a lot of questions. I hope we're not off topic.
I don't know if mine are best practices. I can tell what I try to do.

On Tue, 8 Feb 2022 at 15:15, Lars Liedtke  wrote:

- On a line per line basis? on a function/method basis?


I usually log the start and end of functions. I could also log inside
a branch or in other parts of the function/method.


- Do you use decorators to mark beginnings and ends of methods/functions
in log files?


No, since I put the function parameters in the first log. But I think
that such a decorator it's not bad.


- Which kind of variable contents do you write into your logfiles? Of
course you shouldn't leak secrets...


Well, all the data that is useful to understand what the code is
doing. It's better to repeat the essential data to identify a specific
call in all the logs of the function, so if it is called
simultaneously by more clients you can distinguish them


- How do you decide, which kind of log message goes into which level?


It depends on the importance, the verbosity and the occurrences of the logs.


- How do you prevent logging cluttering your actual code?


I have the opposite problem, I should log more. So I can't answer your question.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: How do you log in your projects?

2022-02-10 Thread Martin Di Paola

? Logs are not intended to be read by end users. Logs are primarily
used to understand what the code is doing in a production environment.
They could also be used to gather metrics data.

Why should you log to give a message instead of simply using a print?


You are assuming that logs and prints are different but they aren't. It 
is the same technology: some string formatted in a particular way sent 
to some place (console or file generally).


But using the logging machinery instead a plain print() you win a few 
features like thread safety and log levels (think in an user that wants 
to increase the verbose level of the output)


When communicating with an user, the logs that are intended to him/her 
can be sent to the console (like a print) in addition to a file.


For user's perspective, they look just like a print.


Why? Traceback is vital to understand what and where the problem is. I
think you're confusing logs with messages. The stack trace can be
logged (I would say must), but the end user generally sees a vague
message with some hints, unless the program is used internally only.


Yes, that's exactly why the traceback is hidden by default because the 
user don't care about it. If the error is due something that the user 
did wrong, then the message should say that and, if possible, add 
a suggestion of how to do it.


For example "The file 'foo.md' was not found." is quite descriptive. If 
you add to that message a traceback, that will just clutter the console.


Tracebacks and other errors and warnings must be logged in a file.  
I totally agree with that. Specially true when we are talking with 
server-like software.


Tracebacks can be printed to the user if a more verbose output is 
enabled. In that case you could even pretty print the traceback with 
syntax highlighting.


I guess that this should be discussed case by case. May be you are 
thinking more in a case where you have a service running and logging and 
I am more thinking in a program that a human executes by hand.


What info and how is presented to the user changes quite a bit.

-- https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-15 Thread Martin Di Paola

I did a few experiments in my machine. I created the following foo.py

  import pandas
  print("foo")

Now "pandas" is installed under Python 3 outside the venv. I can run it 
successfully calling "python3 foo.py".


If I add the shebang "#!/usr/bin/env python3" (notice the 3), I can also 
run it as "./foo.py".


Calling it as "python foo.py" or using the shebang "#!/usr/bin/env 
python" does not work and it makes sense since "pandas" is installed

only for python 3.

New I create a virtual env with "python3 -m venv xxx" and activate it.

Once inside I can run foo.py in 4 different ways:

 - python foo.py
 - python3 foo.py
 - ./foo.py (using the shebang "#!/usr/bin/env python")
 - ./foo.py (using the shebang "#!/usr/bin/env python3")

Now all of that was with "pandas" installed outside of venv but not 
inside.


I did the same experiments with another library, "cryptonita" which it 
is not installed outside but inside and I was able to executed in 
4 different ways too (inside the venv of course):


 - python foo.py
 - python3 foo.py
 - ./foo.py (using the shebang "#!/usr/bin/env python")
 - ./foo.py (using the shebang "#!/usr/bin/env python3")

Do you have a particular context where you are having troubles? May be 
there is something else going on...


Thanks,
Martin.

On Tue, Feb 15, 2022 at 06:35:18AM +0100, Mirko via Python-list wrote:

Hi,

I have recently started using venv for my hobby-programming. There
is an annoying problem. Since venv modifies $PATH, python programs
that use the "#!/usr/bin/env python" variant of the hashbang often
fail since their additional modules aren't install inside in venv.

How to people here deal with that?

Please note: I'm not interested in discussing whether the
env-variant is good or bad. ;-) It's not that *I* use it, but
several progs in /usr/bin/.

Thanks for your time.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-15 Thread Martin Di Paola
If you have activated the venv then any script that uses /usr/bin/env 
will use executables from the venv

bin folder.


That's correct. I tried to be systematic in the analysis so I tested all 
the possibilities.


I avoid all these issues by not activating the venv. Python has code to 
know

how to use the venv libraries that are installed in it when invoked. It does not
depend on the activate script being run.


I had to test this myself because I didn't believe it but you are right.  
Without having the venv activated, if the shebang explicitly points to 
the python executable of the venv, the program will have access to the 
libs installed in the environment.


The same if I do:

/home/user/venv/bin/python foo.py

Thanks for the info!


Barry




Do you have a particular context where you are having troubles? May be there is 
something else going on...

Thanks,
Martin.

On Tue, Feb 15, 2022 at 06:35:18AM +0100, Mirko via Python-list wrote:

Hi,

I have recently started using venv for my hobby-programming. There
is an annoying problem. Since venv modifies $PATH, python programs
that use the "#!/usr/bin/env python" variant of the hashbang often
fail since their additional modules aren't install inside in venv.

How to people here deal with that?

Please note: I'm not interested in discussing whether the
env-variant is good or bad. ;-) It's not that *I* use it, but
several progs in /usr/bin/.

Thanks for your time.
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list




--
https://mail.python.org/mailman/listinfo/python-list


Re: venv and executing other python programs

2022-02-17 Thread Martin Di Paola


That's correct. I tried to be systematic in the analysis so I tested all
the possibilities.


Your test results were unexpected for `python3 -m venv xxx`. By
default, virtual environments exclude the system and user site
packages. Including them should require the command-line argument
`--system-site-packages`. I'd check sys.path in the environment. Maybe
you have PYTHONPATH set.


Nope, I checked with "echo $PYTHONPATH" and nothing. I also checked 
"sys.path" within and without the environment:


Inside the environment:

['', '/usr/lib/python37.zip', '/usr/lib/python3.7', 
 '/usr/lib/python3.7/lib-dynload', 
 '/home/user/tmp/xxx/lib/python3.7/site-packages']


Outside the environment:

['', '/usr/lib/python37.zip', '/usr/lib/python3.7', 
 '/usr/lib/python3.7/lib-dynload', 
 '/home/user/.local/lib/python3.7/site-packages', 
 '/usr/local/lib/python3.7/dist-packages', 
 '/usr/lib/python3/dist-packages']


Indeed the "sys.path" inside the environment does not include system's 
site-packages.


I'll keep looking


A virtual environment is configured by a "pyvenv.cfg" file that's
either beside the executable or one directory up. Activating an
environment is a convenience, not a requirement.


Thanks, that makes a little more sense!


--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: library not initialized (pygame)

2022-02-19 Thread Martin Di Paola

Could you share the traceback / error that you are seeing?


On Sun, May 02, 2021 at 03:23:21PM -0400, Quentin Bock wrote:

Code:
#imports and variables for game
import pygame
from pygame import mixer
running = True

#initializes pygame
pygame.init()

#creates the pygame window
screen = pygame.display.set_mode((1200, 800))

#Title and Icon of window
pygame.display.set_caption("3.02 Project")
icon = pygame.image.load('3.02 icon.png')
pygame.display.set_icon(icon)

#setting up font
pygame.font.init()
font = pygame.font.Font('C:\Windows\Fonts\OCRAEXT.ttf', 16)
font_x = 10
font_y = 40
items_picked_up = 0
items_left = 3

def main():
   global running, event

   #Game Loop
   while running:
   #sets screen color to black
   screen.fill((0, 0, 0))

   #checks if the user exits the window
   for event in pygame.event.get():
   if event.type == pygame.QUIT:
   running = False
   pygame.quit()

   def display_instruction(x, y):
   instructions = font.render("Each level contains 3 items you
must pick up in each room.", True, (255, 255, 255))
   instructions_2 = font.render("When you have picked up 3 items,
you will advance to the next room, there are 3.", True, (255, 255, 255))
   instructions_3 = font.render("You will be able to change the
direction you are looking in the room, this allows you to find different
objects.", True, (255, 255, 255))
   clear = font.render("Click to clear the text.", True, (255,
255, 255))
   screen.blit(instructions, (10, 40))
   screen.blit(instructions_2, (10, 60))
   screen.blit(instructions_3, (10, 80))
   screen.blit(clear, (10, 120))

   if event.type == pygame.MOUSEBUTTONDOWN:
   if event.type == pygame.MOUSEBUTTONUP:
   screen.fill(pygame.Color('black'))  # clears the screen text

   display_instruction(font_x, font_y)
   pygame.display.update()


main()

the error apparently comes from the first instructions variable saying
library not initialized not sure why, its worked before but not now :/
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola

Hi everyone. I implemented time ago a small plugin engine to load code
dynamically.

So far it worked well but a few days ago an user told me that he wasn't
able to run in parallel a piece of code in MacOS.

He was using multiprocessing.Process to run the code and in MacOS, the
default start method for such process is using "spawn". My understanding
is that Python spawns an independent Python server (the child) which
receives what to execute (the target function) from the parent process.

In pseudo code this would be like:

modules = loader() # load the plugins (Python modules at the end)
objs = init(modules) # initialize the plugins

# One of the plugins wants to execute part of its code in parallel
# In MacOS this fails
ch = multiprocessing.Process(target=objs[0].sayhi)
ch.start()

The code fails with "ModuleNotFoundError: No module named 'foo'" (where
'foo' is the name of the loaded plugin).

This is because the parent program sends to the serve (the child) what
needs to execute (objs[0].sayhi) using pickle as the serialization
mechanism.

Because Python does not really serialize code but only enough
information to reload it, the serialization of "objs[0].sayhi" just
points to its module, "foo".

Module which it cannot be imported by the child process.

So the question is, what would be the alternatives and workarounds?

I came with a hack: use a trampoline() function to load the plugins
in the child before executing the target function.

In pseudo code it is:

modules = loader() # load the plugins (Python modules at the end)
objs = init(modules) # initialize the plugins

def trampoline(target_str):
   loader() # load the plugins now that we are in the child process

   # deserialize the target and call it
   target = reduction.loads(target_str)
   target()

# Serialize the real target function, but call in the child
# trampoline(). Because it can be accessed by the child it will
# not fail
target_str = reduction.dumps(objs[0].sayhi)
ch = multiprocessing.Process(target=trampoline, args=(target_str,))
ch.start()

The hack works but is this the correct way to do it?

The following gist has the minimal example code that triggers the issue
and its workaround:
https://gist.github.com/eldipa/d9b02875a13537e72fbce4cdb8e3f282

Thanks!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola






The way you've described it, it's a hack. Allow me to slightly redescribe it.

modules = loader()
objs = init(modules)

def invoke(mod, func):
   # I'm assuming that the loader is smart enough to not load
   # a module that's already loaded. Alternatively, load just the
   # module you need, if that's a possibility.
   loader()
   target = getattr(modules[mod], func)
   target()

ch = multiprocessing.Process(target=invoke, args=("some_module", "sayhi"))
ch.start()



Yeup, that would be my first choice but the catch is that "sayhi" may
not be a function of the given module. It could be a static method of
some class or any other callable.

And doing the lookup by hand sounds complex.

The thing is that the use of multiprocessing is not something required by me
(by my plugin-engine), it was a decision of the developer of a particular
plugin so I don't have any control on that.

Using multiprocessing.reduction was a practical decision: if the user
wants to call something non-pickleable, it is not my fault, it is
multiprocessing's fault.

It *would* be my fault if multiprocessing.Process fails only because I'm
loading the code dynamically.


[...] I won't say "the" correct way, as there are other valid
ways, but there's certainly nothing wrong with this idea.


Do you have some in mind? Or may be a project that I could read?

Thanks!
Martin
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola

Try to use `fork` as "start method" (instead of "spawn").


Yes but no. Indeed with `fork` there is no need to pickle anything. In
particular the child process will be a copy of the parent so it will
have all the modules loaded, including the dynamic ones. Perfect.

The problem is that `fork` is the default only in Linux. It works in
MacOS but it may lead to crashes if the parent process is multithreaded
(and the my is!) and `fork` does not work in Windows.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola





I'm not so sure about that. The author of the plugin knows they're
writing code that will be dynamically loaded, and can therefore
expect the kind of problem they're having. It could be argued that
it's their responsibility to ensure that all the needed code is loaded
into the subprocess.


Yes but I try to always make my libs/programs as much as usable as
possible. "Ergonomic" would be the word.

In the case of the plugin-engine I'm trying to hide any side-effect or
unexpected behaviour of the engine so the developer of the plugin
does not have take that into account.

I agree that if the developer uses multiprocessing he/she needs to know
its implications. But if I can "smooth" any rough corner, I will try to
do it.

For example, the main project (developed by me) uses threads for
concurrency. It would be simpler to load the plugins and instantiate
them *once* and ask the plugins developers to take care of any
race condition (RC) within their implementation.

Because the plugins were instantiated *once*, it is almost guaranteed
that the plugins will suffer from race conditions and they will require
some sort of locking.

This is quite risky: you may forget to protect something and you will
end up with a RC and/or you may put the lock in the wrong place and the
whole thing will not work concurrently.

My decision back then was to instantiate each plugin N+1 times: once in
the main thread and then once per worker thread.

With this, no single plugin instance will be shared so there is no risk
of RC and no need for locking. (Yes, I know, the developer just needs to
use a module variable or a class attribute and it will get a RC and
these are shared but it is definitely not the default scenario).

If sharing is required I provide an object that minimizes the locking
needed.

It was much complex for me at the design and at the implementation level
but I think that it is safer and requires less from the plugin
developer.

Reference: https://byexamples.github.io/byexample/contrib/concurrency-model
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-06 Thread Martin Di Paola





Yeup, that would be my first choice but the catch is that "sayhi" may
not be a function of the given module. It could be a static method of
some class or any other callable.


Ah, fair. Are you able to define it by a "path", where each step in
the path is a getattr() call?


Yes but I think that unpickle (pickle.loads()) does that plus
importing any module needed
in the path which it is handy because I can preload the plugins
(modules) before the unpickle but the path may contain others
more-standard modules as well.

Something like "myplugin.re.match". unpickle should import 're' module
automatically will it is loading the function "match".


Fair. I guess, then, that the best thing to do is to preload the
modules, then unpickle. So, basically what you already have, but with
more caveats.


Yes, this will not be transparent for the user, just trying to minimize
the changes needed.

And it will require some documentation for those caveats. And tests.

Thanks for the brainstorming!
Martin.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-07 Thread Martin Di Paola

I understand that yes, pickle.loads() imports any necessary module but
only if they can be find in sys.path (like in any "import" statement).

Dynamic code loaded from a plugin (which we presume it is *not* in
sys.path) will not be loaded.

Quick check. Run in one console the following:

import multiprocessing
import multiprocessing.reduction

import pickle
pickle.dumps(multiprocessing.reduction.ForkingPickler)


In a separated Python console run the following:

import pickle
import sys

'multiprocessing' in sys.modules
False

pickle.loads()

'multiprocessing' in sys.modules
True

So the last check proves that pickle.loads imports any necessary module.

Martin.

On Mon, Mar 07, 2022 at 08:28:15AM +, Barry wrote:




On 7 Mar 2022, at 02:33, Martin Di Paola  wrote:

Yes but I think that unpickle (pickle.loads()) does that plus
importing any module needed


Are you sure that unpickle will import code? I thought it did not do that.

Barry

--
https://mail.python.org/mailman/listinfo/python-list


Re: Execute in a multiprocessing child dynamic code loaded by the parent process

2022-03-08 Thread Martin Di Paola

Then, you must put the initialization (dynamically loading the modules)
into the function executed in the foreign process.

You could wrap the payload function into a class instances to achieve this.
In the foreign process, you call the instance which first performs
the initialization and then executes the payload.


That's what I have in mind: loading the modules first, and then unpickle
and call the real target function.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Could frozendict or frozenmap be of some use for PEP 683 (Immortal objects)?

2022-03-09 Thread Martin Di Paola

I perhaps didn't understand the PEP completely but I think that the goal
of marking some objects as immortal is to remove the refcount from they.

For immutable objects that would make them truly immutable.

However I don't think that the immortality could be applied to any
immutable object by default.

Think in the immutable strings (str). What would happen with a program
that does heavy parsing? I imagine that it will generate thousands of
little strings. If those are immortal, the program will fill its memory
very quickly as the GC will not reclaim their memory.

The same could happen with any frozenfoo object.

Leaving immortality to only a few objects, like True and
None makes more sense as they are few (bound if you want).

On Wed, Mar 09, 2022 at 09:16:00PM +0100, Marco Sulla wrote:

As title. dict can't be an immortal object, but hashable frozendict
and frozenmap can. I think this can increase their usefulness.

Another advantage: frozen dataclass will be really immutable if they
could use a frozen(dict|map) instead of a dict as __dict__
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


ArtWork in PyPyBox - Pure Python

2016-03-30 Thread Salvatore DI DIO
In pure Python, here is a nice image (for me at least)

http://salvatore.diodev.fr/pypybox/

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


Transcrypt - Games for Kids

2016-06-27 Thread Salvatore DI DIO
Hello,

I am using Transcrypt  (Python to Javascript translator) to create
games for kids, and introduce them to  programming.
You can give an eye here :

https://github.com/artyprog/GFK

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


Question on ABC classes

2020-10-22 Thread Julio Di Egidio
Hello guys,

I am professional programmer but quite new to Python,
and I am trying to get the grips of some peculiarities
of the language.

Here is a basic question: if I define an ABC class,
I can still instantiate the class unless there are
abstract methods defined in the class.

(In the typical OO language the class would be not
instantiable, period, since it's "abstract".  But
this is not so in Python, to the point that, also
for uniformity, I am feeling compelled to define an
@abstractmethod __init__ in my ABC classes, whether
they need one or not, and whether there are other
abstract methods in the class or not.)

Now, I do read in the docs that that is as intended,
but I am not understanding the rationale of it: why
only if there are abstract methods defined in an ABC
class is instantiation disallowed?  IOW, why isn't
subclassing from ABC enough?

Thanks for any enlightenment,

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question on ABC classes

2020-10-22 Thread Julio Di Egidio
On Thursday, 22 October 2020 23:04:25 UTC+2, Ethan Furman  wrote:
> On 10/22/20 9:25 AM, Julio Di Egidio wrote:
> 
> > Now, I do read in the docs that that is as intended,
> > but I am not understanding the rationale of it: why
> > only if there are abstract methods defined in an ABC
> > class is instantiation disallowed?  IOW, why isn't
> > subclassing from ABC enough?
> 
> Let's say you subclass from ABC:
> 
>class Abstract(ABC):
>pass
> 
> Then you subclass from that:
> 
>class Concrete(Abstract):
>pass
> 
> Then subclass from that:
> 
>class MoreConcrete(Concrete):
>pass
> 
> If you do a
> 
>issubclass(, ABC)
> 
> you'll get
> 
>True

Ah, right, that's the point I was missing: how to tell the
compiler when a more derived class is *not* abstract...

I was indeed making the mistake of inheriting from ABC also
in the derived classes, and omitting it in the classes that
are eventually concrete, not realising that ABC isn't just
a keywork or a decorator, so it gets inherited all the way.

> The idea behind abstract classes is the prevention of creating
> non-functional instances

Well, in Python, not in any other OO language, where abstract
is just synonym with must-override (hence cannot instantiate),
no other constraints.

I am now thinking whether I could achieve the "standard"
behaviour via another approach, say with decorators, somehow
intercepting calls to __new__... maybe.  Anyway, abstract
classes are the gist of most library code, and I remain a bit
puzzled by the behaviour implemented in Python: but I am not
complaining, I know it will take me some time before I actually
understand the language...

For now, thank you and Marco very much for your feedback,

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question on ABC classes

2020-10-22 Thread Julio Di Egidio
On Friday, 23 October 2020 07:36:39 UTC+2, Greg Ewing  wrote:
> On 23/10/20 2:13 pm, Julio Di Egidio wrote:
> > I am now thinking whether I could achieve the "standard"
> > behaviour via another approach, say with decorators, somehow
> > intercepting calls to __new__... maybe.
> 
> I'm inclined to step back and ask -- why do you care about this?
> 
> Would it actually do any harm if someone instantiated your
> base class? If not, then it's probably not worth going out
> of your way to prevent it.

This is the first little library I try to put together
in Python, and it was natural for me to hit it with all
the relevant decorations as well as type annotations in
order to expose *strict* contracts, plus hopefully have
intellisense work all over the place.

After several days of struggling, I am indeed finding that
impossible in Python, at least with the available tools,
and I am indeed going out of my way to just get some
half-decent results...  But, if I give up on strict
contracts, I can as well give up on type annotations
and the whole lot, indeed why even subclass ABC?  Which
is maybe too drastic, maybe not: it's the next thing I
am going to try, and see what I remain with. :)

Of course, any more hints welcome...

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question on ABC classes

2020-10-29 Thread Julio Di Egidio
On Sunday, 25 October 2020 20:55:26 UTC+1, Peter J. Holzer  wrote:
> On 2020-10-22 23:35:21 -0700, Julio Di Egidio wrote:
> > On Friday, 23 October 2020 07:36:39 UTC+2, Greg Ewing  wrote:
> > > On 23/10/20 2:13 pm, Julio Di Egidio wrote:
> > > > I am now thinking whether I could achieve the "standard"
> > > > behaviour via another approach, say with decorators, somehow
> > > > intercepting calls to __new__... maybe.
> > > 
> > > I'm inclined to step back and ask -- why do you care about this?
> > > 
> > > Would it actually do any harm if someone instantiated your
> > > base class? If not, then it's probably not worth going out
> > > of your way to prevent it.
> > 
> > This is the first little library I try to put together
> > in Python, and it was natural for me to hit it with all
> > the relevant decorations as well as type annotations in
> > order to expose *strict* contracts, plus hopefully have
> > intellisense work all over the place.
> 
> I think you are trying to use Python in a way contrary to its nature.
> Python is a dynamically typed language. Its variables don't have types,
> only its objects. And basically everything can be changed at runtime ...

Consider this example:

def abs(x):
  return math.sqrt(x.re**2 + x.im**2)

That of course fails if we pass an object not of the correct
(duck) type.  But the exception is raised by math.sqrt, which,
properly speaking, beside considerations on usability, is a
private detail of the implementation of abs.

The point is more about what the function semantically
represents and to which point the implementation fulfils
that promise/contract.  What is that "abs" really supposed
to provide?  Let's say: << 'abs', given a "complex number"
(duck-typed or not), returns its "real number" magnitude >>.
And that's it, not just the doc but really what *type* of
computation it is, and then the mathematics we can apply
to types.

And an effect is that, by constraining the domains (and
codomains) of functions, we make verification of correctness,
the need for defensive coding, as well as for plain testing,
less onerous by orders of magnitude.

So, at least for a function in a public interface, we *must*
add validation (and the doc string):

def abs(x):
  """...as above..."""
  if not isinstance(x, ComplexNumber):
raise TypeError(...)
  return math.sqrt(x.re**2 + x.im**2)

and then why not rather write:

@typechecked
def abs(x: ComplexNumber) -> RealNumber:
  return math.sqrt(x.re**2 + x.im**2)

i.e. checked by the tools statically and/or at runtime,
with, for easy intellisense, the default doc string that
comes from the signature?

More generally, I do think dynamic typing leads to interesting
patterns of reuse, but I think "unconstrained" is a red herring:
ab initio there is a *requirement* to fulfil.

> It now has type annotations, but they are ignored both by the compiler
> and at run-time. They are only for the benefit of linting tools like
> mypy and editor features like intellisense.

For the chronicle, I have meanwhile switched from pylint to
mypy (in VSCode) and things have improved dramatically, from
configurability to intellisense to static type checking working
pretty well.  I have meanwhile also realised that the built-in
solutions (the typing module and so on) as well as the tools
supporting them (including those for runtime validation) are
still quite young, so instead of venting frustration (and I
must say so far I think the typing module is a mess), as I
have done in the OP, I should be patient, rather possibly
give a hand...

> > and the whole lot, indeed why even subclass ABC?
> 
> Good question. In six years of programming Python, I've never used ABC.
> But then I came from another dynamically typed language to Python.

Back in the day we had a laugh when intellisense was invented...
but now, with the eco-info-system booming with everything and the
contrary of everything (to put it charitably), I think it's about
(formal, as much as possible) verifiability and correctness.

Anyway. my 2c.  Sorry for the long post and thanks for the feedback.

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question on ABC classes

2020-10-29 Thread Julio Di Egidio
On Friday, 30 October 2020 05:09:34 UTC+1, Chris Angelico  wrote:
> On Fri, Oct 30, 2020 at 1:06 PM Julio Di Egidio  wrote:
> > On Sunday, 25 October 2020 20:55:26 UTC+1, Peter J. Holzer  wrote:
> > > I think you are trying to use Python in a way contrary to its nature.
> > > Python is a dynamically typed language. Its variables don't have types,
> > > only its objects. And basically everything can be changed at runtime ...
> >
> > Consider this example:
> >
> > def abs(x):
> >   return math.sqrt(x.re**2 + x.im**2)
> >
> > That of course fails if we pass an object not of the correct
> > (duck) type.  But the exception is raised by math.sqrt, which,
> > properly speaking, beside considerations on usability, is a
> > private detail of the implementation of abs.
> 
> Not sure what sorts of errors you're expecting, but for the most part,
> if it's not a complex number, you should get an AttributeError before
> you get into the sqrt.


Yes, sorry, I was glossing over an example that was meant
to be just general.  I could have written something like:

def abs(x):
  ...bunch of possibly complex operations...

and that that is completely opaque both to the user of the
function as well as, in my opinion, to its author (code must
speak to speak for itself!).

Not to mention, from the point of view of formal verification,
this is the corresponding annotated version, and it is in fact
worse than useless:

def abs(x: Any) -> Any:
  ...some code here...

Useless as in plain incorrect: functions written in a totally
unconstrained style are rather pretty much guaranteed not to
accept arbitrary input... and, what's really worse, to be
unsound on part of the input that they do accept: read
undefined behaviour.

> But here's the thing: even if that IS the case, most Python
> programmers are fine with getting an AttributeError stating exactly
> what the problem is, rather than a TypeError complaining that it has
> to be the exact type some other programmer intended. A good example is
> dictionaries and dict-like objects; if your code type checks for a
> dict, I have to give you a dict, but if you just try to do whichever
> operations you need, I can create something more dynamic and give you
> that instead.

I did say "duck-typing or not". I was not talking of restricting
dynamism (python protocols are an implementation of duck typing
and can be used in typing annotations, even generic protocols),
the point is making intentions explicit: to the benefit of the
user of that code, but also to that of the author, as even the
author has to have some "plan" in mind.  As well as, eventually,
to the various tools and checkers.

> Python doesn't push for heavy type checking because it's almost
> certainly not necessary in the majority of cases. Just use the thing
> as you expect it to be, and if there's a problem, your caller can
> handle it. That's why we have call stacks.

I am learning Pandas and I can rather assure you that it is an
absolute pain as well as loss of productivity that, whenever I
misuse a function (in spite of reading the docs), I indeed get
a massive stack trace down to the private core, and have to
figure out, sometimes by looking at the source code, what I
actually did wrong and what I should do instead.  To the point
that, since I want to become a proficient user, I am ending up
reverse engineering the whole thing...  Now, as I said up-thread,
I am not complaining as the whole ecosystem is pretty young,
but here the point is: "by my book" that code is simply called
not production level.

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: help(list[int]) → TypeError

2020-12-04 Thread Julio Di Egidio
On Thursday, 3 December 2020 at 19:28:19 UTC+1, Paul Bryan wrote:
> Is this the correct behavior? 
> 
> Python 3.9.0 (default, Oct 7 2020, 23:09:01) 
> [GCC 10.2.0] on linux 
> Type "help", "copyright", "credits" or "license" for more information. 
> >>> help(list[int]) 
> Traceback (most recent call last): 
> File "", line 1, in  
> File "/usr/lib/python3.9/_sitebuiltins.py", line 103, in __call__ 
> return pydoc.help(*args, **kwds) 
> File "/usr/lib/python3.9/pydoc.py", line 2001, in __call__ 
> self.help(request) 
> File "/usr/lib/python3.9/pydoc.py", line 2060, in help 
> else: doc(request, 'Help on %s:', output=self._output) 
> File "/usr/lib/python3.9/pydoc.py", line 1779, in doc 
> pager(render_doc(thing, title, forceload)) 
> File "/usr/lib/python3.9/pydoc.py", line 1772, in render_doc 
> return title % desc + '\n\n' + renderer.document(object, name) 
> File "/usr/lib/python3.9/pydoc.py", line 473, in document 
> if inspect.isclass(object): return self.docclass(*args) 
> File "/usr/lib/python3.9/pydoc.py", line 1343, in docclass 
> (str(cls.__name__) for cls in type.__subclasses__(object) 
> TypeError: descriptor '__subclasses__' for 'type' objects doesn't apply to a 
> 'types.GenericAlias' object 
> >>> 
> 
> I would have expected the output to the identical to help(list).

As I get it from the docs (*), these new generics still only work in type 
hinting contexts,
and I'd rather have expected a more useful error message: but, whether that is 
temporary
(possibly a plain bug, as in a forgotten case) or, instead, just "how things 
are", I wouldn't
know... might be a good question for Python developers.

(*) As in this one for a starter, but see also PEP 585:
"*In type annotations* you can now use ...", my emphasis.


Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lambda in parameters

2020-12-18 Thread Julio Di Egidio
On Friday, 18 December 2020 at 15:20:59 UTC+1, Abdur-Rahmaan Janhangeer wrote:
> The Question: 
> 
> # --- 
> This problem was asked by Jane Street. 
> 
> cons(a, b) constructs a pair, and car(pair) and cdr(pair) returns the first 
> and last element of that pair. For example, car(cons(3, 4)) returns 3, and 
> cdr(cons(3, 4)) returns 4. 
> 
> Given this implementation of cons:
> def cons(a, b): 
> def pair(f): 
> return f(a, b) 
> return pair
> Implement car and cdr. 
> # --- 

Notice that you don't need (Python) lambdas to code it, plain function 
definitions are fine:

# ---
def cons(a, b):
def pair(f):
return f(a, b)
return pair

def car(pair):
def left(a, b):
return a
return pair(left)

pair = cons(1, 2)
assert car(pair) == 1
# ---

That said, few basic comments:  In Python, that 'cons' does not construct a 
pair, it rather returns a function with values a and b in its closure that, 
given some function, applies it to those values.  In fact, Python has tuples 
built-in, how to build them as well as how to access their members.  I take it 
the point of the exercise is how to use a purely functional language, such as 
here a fragment of Python, to encode (i.e. formalize) pairs and their 
operations.

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How do you find what exceptions a class can throw?

2020-12-20 Thread Julio Di Egidio
On Sunday, 20 December 2020 at 18:18:26 UTC+1, Chris Green wrote:

> If I ignore the exception then the 
> program just exits, if I want the program to do something useful about 
> it (like try again) then I have to catch the specific exception as I 
> don't want to try again with other exceptions.

As other have hinted at, it's about handling, not so much about catching.

That said, the docs are your friend:


"exception poplib.error_proto -- Exception raised on any errors from this 
module (errors from socket module are not caught) [where "error_proto" I 
suppose stands for "protocol error"]. The reason for the exception is passed to 
the constructor as a string."  (It's also documented in the code docs...)

So that's one exception and the only explicit one (see below) specific to that 
module.  Then you should check the exceptions in the socket module:


Incidentally, I have peeked at the source code for poplib, and the initializer 
of poplib.POP3_SSL also raises ValueError on invalid arguments.  I suppose 
similar should be expected from the socket module.  But these are the 
exceptions that typically are not handled: one has to validate input, so that, 
at that point, a ValueError, or a TypeError (or a KeyError, etc.) is rather a 
bug.  Anyway, this is another story...

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Re: How do you find what exceptions a class can throw?

2020-12-20 Thread Julio Di Egidio
On Sunday, 20 December 2020 at 19:35:21 UTC+1, Karsten Hilbert wrote:

> > If it's a timeout exception I'm going to delay a little while and then 
> > try again. The timeout is probably because the server is busy.
> 
> So what you are looking for is the form of a potential 
> "timeout exception" (say, exception name) ? 
> 
> Provoke one and have a look. 
> 
> Then catch what you saw.



Programmers don't guess...

HTH,

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Re: Re: How do you find what exceptions a class can throw?

2020-12-20 Thread Julio Di Egidio
On Sunday, 20 December 2020 at 19:54:08 UTC+1, Karsten Hilbert wrote:
> > > So what you are looking for is the form of a potential 
> > > "timeout exception" (say, exception name) ? 
> > > 
> > > Provoke one and have a look. 
> > > 
> > > Then catch what you saw. 
> > 
> >  
> > 
> > Programmers don't guess...
> 
> I did not suggest guessing. 

Yes, you did.  :)

> I suggested gathering scientific evidence by 
> running a controlled experiment.

Programming is not a science: in fact, computer science is a mathematics, and 
engineering is engineering.

Rather (speaking generally), the "trial and error" together with "not reading 
the docs" is a typical beginner's mistake.

> Or should I say "Programmers don't trust..." ? 

Trust me: it takes 100x getting anything done plus keep up with your prayers, 
and it takes 100^100x learning anything solid, as in just forget about it.  
Indeed, consider that we are rather going to the formal verification of 
programs, software, and even hardware...

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How do you find what exceptions a class can throw?

2020-12-20 Thread Julio Di Egidio
On Sunday, 20 December 2020 at 23:16:10 UTC+1, cameron...@gmail.com wrote:
> On 20Dec2020 20:34, Karsten Hilbert  wrote: 
> >> Trust me: it takes 100x getting anything done plus keep up with your 
> >> prayers, and it takes 100^100x learning anything solid, as in just forget 
> >> about it. Indeed, consider that we are rather going to the formal 
> >> verification of programs, software, and even hardware... 
> > 
> >I sincerly wish you that your hope becomes reality within your 
> >lifetime.
> 
> Aye, since "we are rather going to the formal verification of programs, 
> software, and even hardware" was true when I was at university. In the 
> 1980s and 1990s. 

You could have taken the chance to pay attention, as after 30 years of fake 
agility, lies over the very state of the art, and of course the chronic 
spectacular failures, the defamation game and the misery of an entire industry, 
we are eventually getting back to where we were.  And, while we are still quite 
far from an end-to-end integrated experience, by now, Dec 2020, it is already 
the case that there are enough systems, libraries/components and of course the 
underlying theory that for the practitioner (i.e. the professional in the 
field) the problem at the moment is rather which ones to commit to (i.e. invest 
money and time into).

> Gathering evidence is indeed part of science, and computer science is 
> indeed mathematics, but alas programmering is just a craft and software 
> engineering often ... isn't. 

Programming is a *discipline*, while you keep echoing cheap and vile marketing 
nonsense.

> Anyway, I would hope we're all for more rigour rather than less. 

I am sure you do, rigour mortis eventually...

EOD.

Julio
-- 
https://mail.python.org/mailman/listinfo/python-list


Problem with embedded python

2005-04-26 Thread Ugo Di Girolamo
I have the following code, that seems to make sense to me. 


However, it crashes about 1/3 of the times. 


My platform is Python 2.4.1 on WXP (I tried the release version from 
the msi and the debug version built by me, both downloaded today to 
have the latest version). 


The crash happens while the main thread is in Py_Finalize. 
I traced the crash to _Py_ForgetReference(op) in object.c at line 1847, 
where I have op->_ob_prev == NULL.


What am I doing wrong? I'm definitely not too sure about the way I'm 
handling the GIL. 


Thanks in adv for any suggestion/ comment


Cheers and ciao 


Ugo 


// TestPyThreads.py // 
#include  
#include "Python.h" 


int main() 
{ 
PyEval_InitThreads(); 
Py_Initialize(); 
PyGILState_STATE main_restore_state = PyGILState_UNLOCKED; 
PyGILState_Release(main_restore_state); 


// start the thread 
{ 
PyGILState_STATE state = PyGILState_Ensure(); 
int trash = PyRun_SimpleString( 
"import thread\n" 
"import time\n" 
"def foo():\n" 
"  f = open('pippo.out', 'w', 0)\n" 
"  i = 0;\n" 
"  while 1:\n" 
"f.write('%d\\n'%i)\n" 
"time.sleep(0.01)\n" 
"i += 1\n" 
"t = thread.start_new_thread(foo, ())\n" 
); 
PyGILState_Release(state); 
} 


// wait 300 ms 
Sleep(300); 


PyGILState_Ensure(); 
Py_Finalize(); 
return 0; 

} 
--
http://mail.python.org/mailman/listinfo/python-list


RE: Problem with embedded python - bug?

2005-04-29 Thread Ugo Di Girolamo
I have been having a few more discussions around about this, and I'm starting 
to think that this is a bug.

My take is that, when I call Py_Finalize, the python thread should be shut down 
 gracefully, closing the file and everything. 
Maybe I'm missing a call to something (?PyEval_FinalizeThreads?) but the docs 
seem to say that just PyFinalize should be called.

The open file seems to be the issue, since if I remove all the references to 
the file I cannot get the program to crash.

I can reproduce the same behavior on two different wxp systems, under python 
2.4 and 2.4.1.

Ugo


-Original Message-
From: Ugo Di Girolamo 
Sent: Tuesday, April 26, 2005 2:16 PM
To: 'python-dev@python.org'
Subject: Problem with embedded python

I have the following code, that seems to make sense to me. 


However, it crashes about 1/3 of the times. 


My platform is Python 2.4.1 on WXP (I tried the release version from 
the msi and the debug version built by me, both downloaded today to 
have the latest version). 


The crash happens while the main thread is in Py_Finalize. 
I traced the crash to _Py_ForgetReference(op) in object.c at line 1847, 
where I have op->_ob_prev == NULL.


What am I doing wrong? I'm definitely not too sure about the way I'm 
handling the GIL. 


Thanks in adv for any suggestion/ comment


Cheers and ciao 


Ugo 

// TestPyThreads.py // 
#include  
#include "Python.h" 


int main() 
{ 
PyEval_InitThreads(); 
Py_Initialize(); 
PyGILState_STATE main_restore_state = PyGILState_UNLOCKED; 
PyGILState_Release(main_restore_state); 


// start the thread 
{ 
PyGILState_STATE state = PyGILState_Ensure(); 
int trash = PyRun_SimpleString( 
"import thread\n" 
"import time\n" 
"def foo():\n" 
"  f = open('pippo.out', 'w', 0)\n" 
"  i = 0;\n" 
"  while 1:\n" 
"f.write('%d\\n'%i)\n" 
"time.sleep(0.01)\n" 
"i += 1\n" 
"t = thread.start_new_thread(foo, ())\n" 
); 
PyGILState_Release(state); 
} 


// wait 300 ms 
Sleep(300); 


PyGILState_Ensure(); 
Py_Finalize(); 
return 0; 

} 
--
http://mail.python.org/mailman/listinfo/python-list


RapydScript : Python to Javascript translator

2013-11-17 Thread Salvatore DI DIO
Hello,

If someone  is interested about a fast Python to Javascript translator (not a 
compiler like Brython which is another beast)

Here is a link of a RapydScript Tester.
For now it's only for windows.

Regards

http://salvatore.pythonanywhere.com/static/Projects/RapydScriptDemo.exe

(I can if there is needs publish it for mac and linux)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Pythonium Core 0.2.5

2013-11-17 Thread Salvatore DI DIO
Thanks Amirouche,

I am now balanced between RapydScript and Pythonium :-)


Le dimanche 17 novembre 2013 20:17:44 UTC+1, Amirouche Boubekki a écrit :
> Héllo Pythonistas from all over the world,
> 
> 
> 
> I'm very proud to announce the immediate availability of Pythonium Core 
> 0.2.5, a Python 3 to Javascript translator (the best) that generates *fast* 
> *portable* code written in Python.
> 
> 
> 
> 
> It use Python 3 parser and translates the code to JavaScript code.
> 
> 
> I did not say “it's fully compliant” because it's not. That's not the point 
> of this flavor. Its point is to make possible to write Python code and use it 
> in the browsers. All the objects stay vanilla Javascript objects. There is no 
> builtins, no stdlib, except what is available in the wild, because Pythonium 
> can access Javascript objects directly, you can use *whatever* JavaScript 
> library you want.
> 
> 
> 
> 
> There is port of the mrdoob webgl cloud demo available 
> 
> watch: http://pythonium.github.io/
> read: https://github.com/pythonium/pythonium.github.io/blob/master/js/app.py
> 
> 
> 
> 
> The project is hosted at github: https://github.com/pythonium/pythonium
> 
> 
> Don't hesitate to watch/star/fork/create/pr ! Like said earlier, it's the 
> best translator I know of, and it's written in Python.
> 
> 
> 
> 
> 
> How do you get started ?
> ==
> 
> 
> If you know JavaScript it's easy you don't need guidance. Don't forget to 
> read the cookook 
> https://github.com/pythonium/pythonium/wiki/Pythonium-Core-Cookbook
> 
> 
> 
> 
> If you only know backend or desktop Python development, it will be a bit more 
> work. What you can do is take a jQuery or Javascript course, and translate 
> the code on the fly to Python, compile it using the pythonium_core and and 
> run it in nodejs or a browser. Good luck!
> 
> 
> 
> 
> 
> What's next?
> ==
> 
> 
> 
> Now, basicly, I don't know what to do!
> 
> Except bugs in requirejs integration, I don't except to commit more on this 
> flavor of Pythonium, so I could work on  more compliant flavors until 
> reaching full compliance with Python 3.
> 
> 
> 
> 
> BUT, this is not very interesting, having full compliance is nice, but you 
> loose native javascript speed (meh!) I'd rather be working on the next killer 
> todo list or some Kivy-like library for the browser using Pythonium Core.
> 
> 
> 
> 
> What do you think?
> 
> 
> 
> Amirouche

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Pythonium Core 0.2.5

2013-11-17 Thread Salvatore DI DIO
Porting Kivy would be really great.


Le dimanche 17 novembre 2013 20:17:44 UTC+1, Amirouche Boubekki a écrit :
> Héllo Pythonistas from all over the world,
> 
> 
> 
> I'm very proud to announce the immediate availability of Pythonium Core 
> 0.2.5, a Python 3 to Javascript translator (the best) that generates *fast* 
> *portable* code written in Python.
> 
> 
> 
> 
> It use Python 3 parser and translates the code to JavaScript code.
> 
> 
> I did not say “it's fully compliant” because it's not. That's not the point 
> of this flavor. Its point is to make possible to write Python code and use it 
> in the browsers. All the objects stay vanilla Javascript objects. There is no 
> builtins, no stdlib, except what is available in the wild, because Pythonium 
> can access Javascript objects directly, you can use *whatever* JavaScript 
> library you want.
> 
> 
> 
> 
> There is port of the mrdoob webgl cloud demo available 
> 
> watch: http://pythonium.github.io/
> read: https://github.com/pythonium/pythonium.github.io/blob/master/js/app.py
> 
> 
> 
> 
> The project is hosted at github: https://github.com/pythonium/pythonium
> 
> 
> Don't hesitate to watch/star/fork/create/pr ! Like said earlier, it's the 
> best translator I know of, and it's written in Python.
> 
> 
> 
> 
> 
> How do you get started ?
> ==
> 
> 
> If you know JavaScript it's easy you don't need guidance. Don't forget to 
> read the cookook 
> https://github.com/pythonium/pythonium/wiki/Pythonium-Core-Cookbook
> 
> 
> 
> 
> If you only know backend or desktop Python development, it will be a bit more 
> work. What you can do is take a jQuery or Javascript course, and translate 
> the code on the fly to Python, compile it using the pythonium_core and and 
> run it in nodejs or a browser. Good luck!
> 
> 
> 
> 
> 
> What's next?
> ==
> 
> 
> 
> Now, basicly, I don't know what to do!
> 
> Except bugs in requirejs integration, I don't except to commit more on this 
> flavor of Pythonium, so I could work on  more compliant flavors until 
> reaching full compliance with Python 3.
> 
> 
> 
> 
> BUT, this is not very interesting, having full compliance is nice, but you 
> loose native javascript speed (meh!) I'd rather be working on the next killer 
> todo list or some Kivy-like library for the browser using Pythonium Core.
> 
> 
> 
> 
> What do you think?
> 
> 
> 
> Amirouche

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Pythonium Core 0.2.5

2013-11-17 Thread Salvatore DI DIO
Are lists comprehensions are featured in Veloce ?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RapydScript : Python to Javascript translator

2013-11-18 Thread Salvatore DI DIO

> 
> I don't know about other people here, but I'm a bit leery of just
> 
> downloading Windows binaries from people and running them. Is your
> 
> source code available? Is this an open source / free project?
> 
> 
> 
> ChrisA

You are completly right :-)
Here is the source code :

https://github.com/charleslaw/rapydscript_online

You can see other demos here :

http://salvatore.pythonanywhere.com/RapydScript

The official site of RapydScript :

http://rapydscript.pyjeon.com/

e assumptions before testing :

Here is the official site:

http://rapydscript.pyjeon.com/ 

Regards 

Salvatore




-- 
https://mail.python.org/mailman/listinfo/python-list


Source code of Python to Javascsript translator

2013-11-18 Thread Salvatore DI DIO

>
> I don't know about other people here, but I'm a bit leery of just
>
> downloading Windows binaries from people and running them. Is your
>
> source code available? Is this an open source / free project?
>
>
>
> ChrisA

You are completly right :-)
Here is the source code :

https://github.com/charleslaw/rapydscript_online

You can see other demos here :

http://salvatore.pythonanywhere.com/RapydScript

The official site of RapydScript :

http://rapydscript.pyjeon.com/

e assumptions before testing :

Here is the official site:

http://rapydscript.pyjeon.com/

Regards

Salvatore 
-- 
https://mail.python.org/mailman/listinfo/python-list


RapydBox

2014-02-04 Thread Salvatore DI DIO
Hello,

For those of you who are interested by tools like NodeBox or Processing.
you can give a try to RapydScript here :

https://github.com/artyprog/RapydBox

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


Explaining names vs variables in Python

2016-03-02 Thread Salvatore DI DIO
Hello,

I know Python does not have variables, but names.
Multiple names cant then be bound to the same objects.

So this behavior 

>>> b = 234
>>> v = 234
>>> b is v
True

according to the above that is ok



But where is the consistency ? if I try :

>>> v = 890
>>> w = 890
>>> v is w
False

It is a little difficult to explain this behavior to a newcommer in Python

Can someone give me the right argument to expose ?

Regards

-- 
https://mail.python.org/mailman/listinfo/python-list


Explaining names vs variables in Python (follow)

2016-03-02 Thread Salvatore DI DIO
Thank you very much all of you.
I better understand now

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Explaining names vs variables in Python

2016-03-02 Thread Salvatore DI DIO
Thank you very much ast and all of you. 

I better understant now

Regards

-- 
https://mail.python.org/mailman/listinfo/python-list


Experimenting with PyPyJS

2016-03-19 Thread Salvatore DI DIO
Hy all,

I am experimenting PyPyJS and found it not so bad at all.
The virtual machine loads on a few seconds (using firefox).

It s really nice for  learning Python, you have all the standard libraries,
and traceback on errors. I don't have to choose anymore with all transpilers 
around

You can try it here, but please don't tell it s too long to load the VM.
After all, don't you wait when you start a desktop application, or an heavy 
game online ?

Just try it and tell your feeling

Regards

http://salvatore.diodev.fr/pypybox/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Experimenting with PyPyJS

2016-03-19 Thread Salvatore DI DIO
Le samedi 19 mars 2016 16:28:36 UTC+1, Salvatore DI DIO a écrit :
> Hy all,
> 
> I am experimenting PyPyJS and found it not so bad at all.
> The virtual machine loads on a few seconds (using firefox).
> 
> It s really nice for  learning Python, you have all the standard libraries,
> and traceback on errors. I don't have to choose anymore with all transpilers 
> around
> 
> You can try it here, but please don't tell it s too long to load the VM.
> After all, don't you wait when you start a desktop application, or an heavy 
> game online ?
> 
> Just try it and tell your feeling
> 
> Regards
> 
> http://salvatore.diodev.fr/pypybox/

Use Firefox...
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Experimenting with PyPyJS

2016-03-19 Thread Salvatore DI DIO
Le samedi 19 mars 2016 18:00:05 UTC+1, Vincent Vande Vyvre a écrit :
> Le 19/03/2016 16:32, Salvatore DI DIO a écrit :
> > Le samedi 19 mars 2016 16:28:36 UTC+1, Salvatore DI DIO a écrit :
> >> Hy all,
> >>
> >> I am experimenting PyPyJS and found it not so bad at all.
> >> The virtual machine loads on a few seconds (using firefox).
> >>
> >> It s really nice for  learning Python, you have all the standard libraries,
> >> and traceback on errors. I don't have to choose anymore with all 
> >> transpilers around
> >>
> >> You can try it here, but please don't tell it s too long to load the VM.
> >> After all, don't you wait when you start a desktop application, or an 
> >> heavy game online ?
> >>
> >> Just try it and tell your feeling
> >>
> >> Regards
> >>
> >> http://salvatore.diodev.fr/pypybox/
> > Use Firefox...
> 
> That's look fine but:
> 
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>   4917 vincent   20   0 1081m 305m  52m R *50.3* 15.4   5:23.75 firefox
>   1094 root  20   0 48152  15m 7180 S *35.0*  0.8   4:43.28 Xorg
>   5421 vincent   20   0  162m  14m  10m R  2.0  0.7   0:02.49 mate-terminal
>  1 root  20   0  3660 1984  ...
> 
> 85.3 % (50.3 + 35) CPU usage just for a rotating square it's too much cost.
> 
> Vincent

Thank you for testing :-)
Strange on Windows I have an average 6% CPU with Firefox 45.0.1

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Experimenting with PyPyJS

2016-03-19 Thread Salvatore DI DIO
Le samedi 19 mars 2016 18:00:05 UTC+1, Vincent Vande Vyvre a écrit :
> Le 19/03/2016 16:32, Salvatore DI DIO a écrit :
> > Le samedi 19 mars 2016 16:28:36 UTC+1, Salvatore DI DIO a écrit :
> >> Hy all,
> >>
> >> I am experimenting PyPyJS and found it not so bad at all.
> >> The virtual machine loads on a few seconds (using firefox).
> >>
> >> It s really nice for  learning Python, you have all the standard libraries,
> >> and traceback on errors. I don't have to choose anymore with all 
> >> transpilers around
> >>
> >> You can try it here, but please don't tell it s too long to load the VM.
> >> After all, don't you wait when you start a desktop application, or an 
> >> heavy game online ?
> >>
> >> Just try it and tell your feeling
> >>
> >> Regards
> >>
> >> http://salvatore.diodev.fr/pypybox/
> > Use Firefox...
> 
> That's look fine but:
> 
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>   4917 vincent   20   0 1081m 305m  52m R *50.3* 15.4   5:23.75 firefox
>   1094 root  20   0 48152  15m 7180 S *35.0*  0.8   4:43.28 Xorg
>   5421 vincent   20   0  162m  14m  10m R  2.0  0.7   0:02.49 mate-terminal
>  1 root  20   0  3660 1984  ...
> 
> 85.3 % (50.3 + 35) CPU usage just for a rotating square it's too much cost.
> 
> Vincent

Here is a screenshot

http://salvatore.diodev.fr/pypybox/static/images/pypyjs.png
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Experimenting with PyPyJS

2016-03-19 Thread Salvatore DI DIO
Le samedi 19 mars 2016 19:52:45 UTC+1, Terry Reedy a écrit :
> On 3/19/2016 1:06 PM, Salvatore DI DIO wrote:
> > Le samedi 19 mars 2016 18:00:05 UTC+1, Vincent Vande Vyvre a écrit :
> >> Le 19/03/2016 16:32, Salvatore DI DIO a écrit :
> >>> Le samedi 19 mars 2016 16:28:36 UTC+1, Salvatore DI DIO a écrit :
> >>>> Hy all,
> >>>>
> >>>> I am experimenting PyPyJS and found it not so bad at all.
> >>>> The virtual machine loads on a few seconds (using firefox).
> >>>>
> >>>> It s really nice for  learning Python, you have all the standard 
> >>>> libraries,
> >>>> and traceback on errors. I don't have to choose anymore with all 
> >>>> transpilers around
> >>>>
> >>>> You can try it here, but please don't tell it s too long to load the VM.
> >>>> After all, don't you wait when you start a desktop application, or an 
> >>>> heavy game online ?
> >>>>
> >>>> Just try it and tell your feeling
> >>>>
> >>>> Regards
> >>>>
> >>>> http://salvatore.diodev.fr/pypybox/
> >>> Use Firefox...
> >>
> >> That's look fine but:
> >>
> >> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> >>4917 vincent   20   0 1081m 305m  52m R *50.3* 15.4   5:23.75 firefox
> >>1094 root  20   0 48152  15m 7180 S *35.0*  0.8   4:43.28 Xorg
> >>5421 vincent   20   0  162m  14m  10m R  2.0  0.7   0:02.49 
> >> mate-terminal
> >>   1 root  20   0  3660 1984  ...
> >>
> >> 85.3 % (50.3 + 35) CPU usage just for a rotating square it's too much cost.
> >>
> >> Vincent
> >
> > Thank you for testing :-)
> > Strange on Windows I have an average 6% CPU with Firefox 45.0.1
> 
> Win10, same FF, 6 core pentium, 1% +- CPU, 110 MB memory increase.
> 
> 
> -- 
> Terry Jan Reedy

Thanks Terry
-- 
https://mail.python.org/mailman/listinfo/python-list


Nodebox(v1) on the web via RapydScript

2013-10-03 Thread Salvatore DI DIO
Hello,

Nodebox is a program in the spirit of Processing but for Python.

The first version runs only on MAC.
Tom, the creator has partly ported it to Javascript.

But many of you dislike Javascript.
The solution was to use a translator,  Python -> Javascript

Of the both two greats solutions Brython / RapydScript, I've choosen RapydScript
(Brython and RapydScript does not achieve the same goal)

You can see a preview of 'Nodebox on the Web' namely 'RapydBox' here :

http://salvatore.pythonanywhere.com/RapydBox

Regards




-- 
https://mail.python.org/mailman/listinfo/python-list


Calulation in lim (1 + 1 /n) ^n when n -> infinite

2015-11-09 Thread Salvatore DI DIO
Hello,

I was trying to show that this limit was 'e'
But when I try large numbers I get errors

def lim(p):
return math.pow(1 + 1.0 / p , p)

>>> lim(5)
2.718281748862504
>>> lim(9)
2.7182820518605446  


What am i doing wrong ?

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Calulation in lim (1 + 1 /n) ^n when n -> infinite

2015-11-09 Thread Salvatore DI DIO
Thank you very much Oscar,I was considerind using Mapple :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Calulation in lim (1 + 1 /n) ^n when n -> infinite

2015-11-09 Thread Salvatore DI DIO
Thank you very much Chris

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Calulation in lim (1 + 1 /n) ^n when n -> infinite

2015-11-09 Thread Salvatore DI DIO
Thank you very much Oscar, I was considering using Mapple :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Calulation in lim (1 + 1 /n) ^n when n -> infinite

2015-11-10 Thread Salvatore DI DIO
Thank you very much Peter
-- 
https://mail.python.org/mailman/listinfo/python-list


Twisted Perspective Broker: get client ip

2011-09-14 Thread Andrea Di Mario
Hi, i'm writing a perspective broker server. Now, i should get the
client IP, that perspective broker writes well in the log. I've tried
to get it from MyRealm with: mind.broker.transport.getPeer(), without
success. I've tried self.transport.getPeer() to, with this result:
exceptions.AttributeError: Listner instance has no attribute 'transport'
It's strange, because PB wrote the client IP, infact in log i've line with:
2011-09-11 16:41:58+0200 [Broker,0,127.0.0.1] 

Could you suggest me something?
Thanks.
Here the code:

from OpenSSL import SSL
from twisted.internet import reactor, ssl
from ConfigParser import SafeConfigParser
from twisted.python import log
from twisted.spread import pb
from twisted.cred import checkers, portal
from zope.interface import implements
import hashlib

class Listner(pb.Avatar):

   def __init__(self, name):
   self.name = name

   def perspective_getDictionary(self, dictionary):
  print dictionary

   def perspective_simplyAccess(self, access):
  print access

def verifyCallback(connection, x509,  errnum, errdepth, ok):
   if not ok:
  log.msg("Certificato non valido: %s" % x509.get_subject())
  return False
   else:
  log.msg("Connessione stabilita, vertificato valido: %s" %
x509.get_subject())
   return True


class MyRealm:
implements(portal.IRealm)
def requestAvatar(self, avatarId, mind, *interfaces):
if pb.IPerspective not in interfaces:
raise NotImplementedError
return pb.IPerspective, Listner(avatarId), lambda:None


if __name__ == "__main__":

   CONFIGURATION = SafeConfigParser()
   CONFIGURATION.read('server.conf')
   PORT = CONFIGURATION.get('general', 'port')
   LOGFILE = CONFIGURATION.get('general', 'log')

log.startLogging(open(LOGFILE,'a'))


   myContextFactory =
ssl.DefaultOpenSSLContextFactory(CONFIGURATION.get('general',
'keypath'), CONFIGURATION.get('general', 'certpath'))
   ctx = myContextFactory.getContext()
   ctx.set_verify(SSL.VERIFY_PEER |
SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verifyCallback)

   ctx.load_verify_locations(CONFIGURATION.get('general', 'cacert'))

   p = portal.Portal(MyRealm())
   c = checkers.FilePasswordDB('passwords.txt',
caseSensitive=True, cache=True)
   p.registerChecker(c)
   factory = pb.PBServerFactory(p)
   reactor.listenSSL(int(PORT), factory, myContextFactory)
   reactor.run()

-- 
Andrea Di Mario
-- 
http://mail.python.org/mailman/listinfo/python-list


Global variables in a C extension for Python

2011-12-28 Thread Lorenzo Di Gregorio
Hello,

I've written a C extension for Python which works so far, but now I've
stumbled onto a simple problem for which I just can't find any example
on the web, so here I am crying for help ;-)

I'll trying to reduce the problem to a minimal example.  Let's say I
need to call from Python functions of a C program like:

static int counter = 0;
void do_something(...) {
... counter++; ...
}
void do_something_else(...) {
... counter++; ...
}

So they access a common global variable.  I've written the wrappers
for the functions, but I'd like to place "counter" in the module's
space and have wrappers accessing it like self->counter.  I do not
need to make "counter" visible to Python, I just need the global
static variable available for C.

I've got somehow a clue of how this should work, but not much more
than a clue, and I'd appreciate to see a simple example.

Best Regards,
Lorenzo
-- 
http://mail.python.org/mailman/listinfo/python-list


__future__ and __rdiv__

2012-01-22 Thread Massimo Di Pierro
Hello everybody,

I hope somebody could help me with this problem. If this is not the right place 
to ask, please direct me to the right place and apologies.
I am using Python 2.7 and I am writing some code I want to work on 3.x as well. 
The problem can be reproduced with this code:

# from __future__ import division
class Number(object):
def __init__(self,number):
self.number=number
def __rdiv__(self,other):
return other/self.number
print 10/Number(5)

It prints 2 as I expect. But if I uncomment the first line, I get:

Traceback (most recent call last):
  File "test.py", line 8, in 
print 10/Number(5)
TypeError: unsupported operand type(s) for /: 'int' and 'Number'

Is this a bug or the __future__ division in 3.x changed the way operators are 
overloaded? Where can I read more?

Massimo
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __future__ and __rdiv__

2012-01-22 Thread Massimo Di Pierro
Thank you. I tried __rtruediv__ and it works.

On Jan 23, 2012, at 12:14 AM, Ian Kelly wrote:

> On Sun, Jan 22, 2012 at 10:22 PM, Massimo Di Pierro 
>  wrote:
> Hello everybody,
> 
> I hope somebody could help me with this problem. If this is not the right 
> place to ask, please direct me to the right place and apologies.
> I am using Python 2.7 and I am writing some code I want to work on 3.x as 
> well. The problem can be reproduced with this code:
> 
> # from __future__ import division
> class Number(object):
>def __init__(self,number):
>self.number=number
>def __rdiv__(self,other):
>return other/self.number
> print 10/Number(5)
> 
> It prints 2 as I expect. But if I uncomment the first line, I get:
> 
> Traceback (most recent call last):
>  File "test.py", line 8, in 
>print 10/Number(5)
> TypeError: unsupported operand type(s) for /: 'int' and 'Number'
> 
> Is this a bug or the __future__ division in 3.x changed the way operators are 
> overloaded? Where can I read more?
> 
> 
> In Python 3, the / operator uses __truediv__ and the // operator uses 
> __floordiv__.
> In Python 2, the / operator uses __div__, unless the future import is in 
> effect, and then it uses __truediv__ like Python 3.
> 
> http://docs.python.org/reference/datamodel.html#emulating-numeric-types
> 
> Cheers,
> Ian

-- 
http://mail.python.org/mailman/listinfo/python-list


I/O Multiplexing and non blocking socket

2006-12-01 Thread Salvatore Di Fazio
Hi guys,
I'm looking for a tutorial to make a client with a i/o multiplexing and
non blocking socket.

Anybody knows where is a tutorial?
Tnx

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I/O Multiplexing and non blocking socket

2006-12-01 Thread Salvatore Di Fazio

Jean-Paul Calderone ha scritto:

> On 1 Dec 2006 06:07:28 -0800, Salvatore Di Fazio <[EMAIL PROTECTED]> wrote:
> >Hi guys,
> >I'm looking for a tutorial to make a client with a i/o multiplexing and
> >non blocking socket.
> >
> >Anybody knows where is a tutorial?
>
> http://twistedmatrix.com/projects/core/documentation/howto/clients.html
>
> Jean-Paul

Thank you guys, but I would like to use the standard libraries

-- 
http://mail.python.org/mailman/listinfo/python-list


Thread help

2006-12-01 Thread Salvatore Di Fazio
Hi guys,
I would make 3 threads for a client application.

Tnx

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Thread help

2006-12-01 Thread Salvatore Di Fazio
Grant Edwards ha scritto:

> You should use 4.

Yes, but I don't know how can I make a thread :)

-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   >