Re: on implementing a toy oop-system

2022-09-28 Thread Meredith Montgomery
Meredith Montgomery  writes:

> r...@zedat.fu-berlin.de (Stefan Ram) writes:
>
>> Meredith Montgomery  writes:
>>>Is that at all possible somehow?  Alternatively, how would you do your
>>>toy oop-system?
>>
>>   Maybe something along those lines:
>>
>> from functools import partial
>>
>> def counter_create( object ):
>>     object[ "n" ]= 0
>> def counter_increment( object ):
>>     object[ "n" ]+= 1
>> def counter_value( object ):
>>     return object[ "n" ]
>>
>> counter_class =( counter_create, counter_increment, counter_value )
>>
>> def inherit_from( class_, target ):
>>     class_[ 0 ]( target )
>>     for method in class_[ 1: ]:
>>         target[ method.__name__ ]= partial( method, target )
>>
>> car = dict()
>>
>> inherit_from( counter_class, car )
>>
>> print( car[ "counter_value" ]() )
>> car[ "counter_increment" ]()
>> print( car[ "counter_value" ]() )
>>
>>   . The "create" part is simplified. I just wanted to show how
>>   to make methods like "counter_increment" act on the object
>>   that inherited them using "partial".
>
> I really liked this idea.  I organized it my way.  Have a look.  (Thank
> you for the lecture!)

But it lacks consistency.

> from functools import partial
>
> def Counter(name = None):
>   o = {"name": name if name else "untitled", "n": 0}
>   def inc(o):
>     o["n"] += 1
>     return o
>   o["inc"] = inc
>   def get(o):
>     return o["n"]
>   o["get"] = get
>   return o

This parent class is not defined in the same way as the child class
below.  The class below uses partial to fix the object in the method,
but the parent one does not.  We need consistency.  

But if we curry the parent class's methods (that is, if we apply partial
to them to fix the object as their first argument), we will curry them a
second time in inherit_from.  That won't work.  I can't see an elegant
solution there, so what I'm going to do is keep a copy of the uncurried
original method.

The code below works, but you can see it's kinda ugly.  I wish I could
uncurry a procedure, but I don't think this is possible.  (Is it?)
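As it happens, "uncurrying" is possible after all: a functools.partial
object keeps a reference to the wrapped callable in its func attribute
(and to the fixed arguments in args).  A small sketch, assuming curry is
just functools.partial:

```python
from functools import partial

curry = partial  # assumption: the snippet's curry is functools.partial

def get(self):
    return self["n"]

obj = {"n": 41}
bound = curry(get, obj)   # curried: obj fixed as the first argument
assert bound() == 41

# partial objects expose the original callable and the fixed arguments,
# so the uncurried method can be recovered instead of being stored twice.
uncurried = bound.func
assert uncurried is get
assert bound.args == (obj,)
```

So the *_uncurried copies below could be replaced by reading .func off
the stored partials, at the cost of a less obvious convention.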

# -*- mode: python; python-indent-offset: 2 -*-
def Counter(name = None):
  self = {"name": name if name else "untitled", "n": 0}
  def inc(self):
    self["n"] += 1
    return self
  self["inc_uncurried"] = inc
  self["inc"] = curry(inc, self)
  def get(self):
    return self["n"]
  self["get_uncurried"] = get
  self["get"] = curry(get, self)
  return self

def Car(maker):
  self = {"maker": maker, "state": "off"}
  inherit_from(Counter, self)
  def on(self):
    if self["is_on"]():
      raise ValueError("oh, no: car is already on")
    self["inc"]()
    print(f"{self['maker']}: bruum!")
    self["state"] = "on"
    return self
  self["on_uncurried"] = on
  self["on"] = curry(on, self)
  def off(self):
    if self["is_off"]():
      raise ValueError("oh, no: car is already off")
    print(f"{self['maker']}: spat!")
    self["state"] = "off"
    return self
  self["off_uncurried"] = off
  self["off"] = curry(off, self)
  def is_on(self):
    return self["state"] == "on"
  self["is_on_uncurried"] = is_on
  self["is_on"] = curry(is_on, self)
  def is_off(self):
    return self["state"] == "off"
  self["is_off_uncurried"] = is_off
  self["is_off"] = curry(is_off, self)
  return self
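For completeness: the snippet above calls curry and inherit_from without
showing them.  A minimal sketch consistent with the *_uncurried
convention (the copying strategy here is my guess, not the original
code) might be:

```python
from functools import partial

def curry(f, *args):
    # assumption: curry is a thin wrapper over functools.partial
    return partial(f, *args)

def inherit_from(parent, target):
    # Build a throwaway parent instance, copy its plain data fields into
    # target, and re-bind every *_uncurried method to target itself.
    proto = parent()
    for key, value in proto.items():
        if key.endswith("_uncurried"):
            target[key] = value
            target[key[:-len("_uncurried")]] = curry(value, target)
        elif not callable(value):
            target.setdefault(key, value)

# tiny demo parent in the same style as Counter
def Parent():
    self = {"n": 0}
    def inc(self):
        self["n"] += 1
        return self
    self["inc_uncurried"] = inc
    self["inc"] = curry(inc, self)
    return self

child = {"kind": "child"}
inherit_from(Parent, child)
child["inc"]()            # bound to child, not to the prototype
assert child["n"] == 1
```

Keeping the uncurried originals around is exactly what lets
inherit_from re-bind them to a new object without double currying.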

def main():
  car1 = Car("Ford")
  car2 = Car("VW")
  for i in range(5):
    car1["on"](); car1["off"]()
  for i in range(3):
    car2["on"](); car2["off"]()
  print(f"car turned on = {car1['get']()} ({car1['maker']})")
  print(f"car turned on = {car2['get']()} ({car2['maker']})")

>>> main()
Ford: bruum!
Ford: spat!
Ford: bruum!
Ford: spat!
Ford: bruum!
Ford: spat!
Ford: bruum!
Ford: spat!
Ford: bruum!
Ford: spat!
VW: bruum!
VW: spat!
VW: bruum!
VW: spat!
VW: bruum!
VW: spat!
car turned on = 5 (Ford)
car turned on = 3 (VW)
-- 
https://mail.python.org/mailman/listinfo/python-list


Implementation of an lru_cache() decorator that ignores the first argument

2022-09-28 Thread Robert Latest via Python-list
Hi all,

in a (Flask) web application I often find that many equal (SQLAlchemy) queries
are executed across subsequent requests. So I tried to cache the results of
those queries on the module level like this:

@lru_cache()
def query_db(db, args):
    # do the "expensive" query
    return result

This obviously doesn't work because each request uses a new database session,
so the db argument always changes from one call to the next, triggering a new
query against the database. But even if that weren't so, the function would
keep returning the same value forever (unless it's kicked out of the cache) and
not reflect the (infrequent) changes on the database. So what I need is some
decorator that can be used like this:

@lru_ignore_first(timeout=10)
def query_db(db, args):
    # do the "expensive" query
    return result

This is what I came up with. I'm quite happy with it so far.  Question: Am I
being too clever? Is it too complicated? Am I overlooking something that will
come back and bite me later? Thanks for any comments!

from functools import wraps, lru_cache
from time import time, sleep

def lru_ignore_first(timeout=0, **lru_args):

    class TimeCloak():
        '''All instances compare equal until timeout expires'''
        __slots__ = ('x', 't', 'timeout')

        def __init__(self, timeout):
            self.timeout = timeout
            self.t = 0
            self.x = None

        def __hash__(self):
            return self.t

        def __eq__(self, other):
            return self.t == other.t

        def update(self, x):
            self.x = x
            if self.timeout:
                t = int(time())
                if t >= self.t + self.timeout:
                    self.t = t

    cloak = TimeCloak(timeout)

    def decorator(func):

        @lru_cache(**lru_args)
        def worker(cloak, *a, **b):
            return func(cloak.x, *a, **b)

        @wraps(func)
        def wrapped(first, *a, **kw):
            cloak.update(first)
            return worker(cloak, *a, **kw)

        return wrapped

    return decorator

@lru_ignore_first(3)
def expensive(first, par):
    '''This takes a long time'''
    print('Expensive:', first, par)
    return par * 2

for i in range(10):
    r = expensive(i, 100)
    sleep(1)
    print(r)


Re: Implementation of an lru_cache() decorator that ignores the first argument

2022-09-28 Thread Chris Angelico
On Thu, 29 Sept 2022 at 05:36, Robert Latest via Python-list
 wrote:
> in a (Flask) web application I often find that many equal (SQLAlchemy) queries
> are executed across subsequent requests. So I tried to cache the results of
> those queries on the module level like this:
>
> @lru_cache()
> def query_db(db, args):
>     # do the "expensive" query
>     return result
>
> ...
> This is what I came up with. I'm quite happy with it so far.  Question: Am I
> being too clever? is it too complicated? Am I overlooking something that will
> come back and bite me later? Thanks for any comments!
>
> def lru_ignore_first(timeout=0, **lru_args):
> ...

I think this code is fairly specific to what you're doing, which means
the decorator won't be as reusable (first hint of that is the entire
"timeout" feature, which isn't mentioned at all in the function's
name). So it's probably not worth trying to do this multi-layered
approach, and it would be as effective, and a lot simpler, to just
have code at the top of the query_db function to do the cache lookup.
But you may find that your database is *itself* able to do this
caching for you, and it will know when to evict from cache. If you
really have to do it yourself, keep it really really simple, but have
an easy way *in your own code* to do the cache purge; that way, you
guarantee correctness, even at the expense of some performance.
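For illustration, that "really really simple" route with an explicit
purge hook might look like this (a sketch; query_db, the TTL value, and
expensive_query are placeholders, not Robert's actual code):

```python
from time import time

_cache = {}        # args -> (timestamp, result)
CACHE_TTL = 10     # seconds; placeholder value

calls = 0
def expensive_query(db, args):
    # stand-in for the real SQLAlchemy query
    global calls
    calls += 1
    return ("rows for", args)

def query_db(db, args):
    hit = _cache.get(args)
    if hit is not None and time() - hit[0] < CACHE_TTL:
        return hit[1]               # still fresh: no database round-trip
    result = expensive_query(db, args)
    _cache[args] = (time(), result)
    return result

def cache_purge():
    # explicit purge point: call after known writes to guarantee correctness
    _cache.clear()

query_db(None, "q1")
query_db(None, "q1")     # served from the cache
assert calls == 1
cache_purge()
query_db(None, "q1")     # purged, so queried again
assert calls == 2
```

No decorators, no hash tricks, and the eviction policy is spelled out in
one place.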

In terms of overall database performance, though: are you using
transactions correctly? With PostgreSQL, especially, the cost of doing
a series of queries in one transaction is barely higher than doing a
single query in a transaction; or, putting it the other way around,
doing several sequential transactions costs several times as much as
doing one combined transaction. Check to see that you aren't
accidentally running in autocommit mode or anything. It could save you
a lot of hassle!
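The transaction point can be demonstrated with stdlib sqlite3 (the same
principle applies to PostgreSQL; the table and data here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# One transaction for many statements: a single commit at the end,
# instead of per-statement commit overhead in autocommit mode.
with conn:                      # opens a transaction, commits on exit
    for i in range(1000):
        conn.execute("INSERT INTO t (x) VALUES (?)", (i,))

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
assert total == 1000
```

Batching the inserts into one transaction is the combined-transaction
pattern Chris describes; the per-statement version pays the commit cost
a thousand times.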

ChrisA


Re: Implementation of an lru_cache() decorator that ignores the first argument

2022-09-28 Thread dn
On 29/09/2022 07.22, Robert Latest via Python-list wrote:
...

> This is what I came up with. I'm quite happy with it so far.  Question: Am I
> being too clever? is it too complicated? Am I overlooking something that will
> come back and bite me later? Thanks for any comments!

Thank you for the chuckle: "Yes", you are clever; and "yes", this is
likely a bit too clever (IMHO).

The impression is that LRU will put something more concrete 'in front
of' SQLAlchemy - which is more abstract, and which is in turn 'in front
of' the RDBMS (which is concrete...). Is this the code-smell making
one's nose suspicious?

The criticism is of SQLAlchemy. If the problem can't be solved with that
tool, perhaps it is not the right-tool-for-the-job...

Bias: With decades of SQL/RDBMS experience, it is easy to say, "drop the
tool".

+1 @Chris: depending upon how many transactions come between, it seems
likely you'll find that the RDBMS will cache sufficiently, as SOP.

YMMV, i.e. there's only one way to find out!
-- 
Regards,
=dn