combinations of all rows and cols from a dataframe

2023-03-29 Thread marc nicole
Hello everyone,

Given a dataframe like this:

2 6
8 5

I want to yield the following list of lists:
[  [[2],[6,5]],
[[2],[6]],
[[2],[5]],
[[8],[6,5]],
[[8],[6]],
[[8],[5]],
[[6],[2,8]],
[[6],[8]],
[[6],[2]],
[[5],[2,8]],
[[5],[2]],
[[5],[8]],
[[6,5],[2,8]]  ]

I have written the following (which doesn't yield the expected results)

import pandas as pd
from itertools import combinations
import numpy as np

resList = []
resListTmp = []
resListTmp2 = []
dataframe = pd.read_excel("C:\\Users\\user\\Desktop\\testData.xlsx",
                          index_col=False, header=None)

for i in range(0, len(dataframe) + 1):
    for j in range(0, len(dataframe.columns)):
        for k in range(0, len(dataframe) + 1):
            for xVals in list(combinations(dataframe.iloc[k:i, j], i)):
                if list(xVals) not in resListTmp:
                    resListTmp.append(list(xVals))
            resListTmp2.append(resListTmp)
    resList.append(resListTmp2)
print(resList)

What is wrong with my code?
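Not a full diagnosis, but one concrete failure mode worth checking: `itertools.combinations(iterable, r)` silently yields nothing when `r` exceeds the number of items, and the inner loop asks for `combinations(..., i)` from the slice `iloc[k:i, j]`, which is usually shorter than `i`. A quick probe (my own, assuming nothing about the spreadsheet contents):

```python
from itertools import combinations

# combinations() returns an empty iterator when r is larger than the
# input, which is one reason the nested loops can leave the result
# lists empty or stale without raising any error.
col_slice = [6, 5]      # e.g. what dataframe.iloc[k:i, j] might hold
print(list(combinations(col_slice, 3)))  # [] -- r=3 > 2 items
print(list(combinations(col_slice, 1)))  # [(6,), (5,)]
print(list(combinations(col_slice, 2)))  # [(6, 5)]
```

So the loop bounds on `i` and `k` need to be derived from the slice length, not from `len(dataframe) + 1`.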
-- 
https://mail.python.org/mailman/listinfo/python-list


cubes library docs are not accurate, first example failing unexpectedly

2023-06-08 Thread marc nicole via Python-list
Hello to All,

I want to create a cube from a CSV data file and perform an aggregation
on it; the code is below:

from sqlalchemy import create_engine
from cubes.tutorial.sql import create_table_from_csv
from cubes import Workspace, Cell, browser
import data

if __name__ == '__main__':
    engine = create_engine('sqlite:///data.sqlite')
    create_table_from_csv(engine,
                          "../data/data.csv",
                          table_name="irbd_balance",
                          fields=[
                              ("category", "string"),
                              ("category_label", "string"),
                              ("subcategory", "string"),
                              ("subcategory_label", "string"),
                              ("line_item", "string"),
                              ("year", "integer"),
                              ("amount", "integer")],
                          create_id=True
                          )
    print("done. file data.sqlite created")

    workspace = Workspace()
    workspace.register_default_store("sql", url="sqlite:///data.sqlite")
    workspace.import_model("../model.json")

    cube = workspace.cube("irbd_balance")

    browser = workspace.browser("irbd_balance")

    cell = Cell(cube)
    result = browser.aggregate(cell, drilldown=["year"])
    for record in result.drilldown:
        print(record)





The tutorial and the library are available here:

https://pythonhosted.org/cubes/tutorial.html
The error stack is:


result = browser.aggregate(cell, drilldown=["year"])
  File "C:\Users\path\venv\lib\site-packages\cubes\browser.py", line
145, in aggregate
result = self.provide_aggregate(cell,
  File "C:\path\venv\lib\site-packages\cubes\sql\browser.py", line
400, in provide_aggregate
(statement, labels) = self.aggregation_statement(cell,
  File "C:\path\venv\lib\site-packages\cubes\sql\browser.py", line
532, in aggregation_statement
raise ArgumentError("List of aggregates should not be empty")
cubes.errors.ArgumentError: List of aggregates should not be empty

It seems the tutorial contains some typos.

Any idea how to fix this? Otherwise, is there another OLAP cubes
library for Python that has better docs?
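For what it's worth (not a confirmed fix, just what the error message suggests): `aggregation_statement` raises because the cube model declares no aggregates. In cubes 1.x the cube entry in model.json is expected to list them explicitly; a minimal sketch, where the aggregate names are my assumption based on the tutorial's `amount` measure:

```json
{
    "cubes": [
        {
            "name": "irbd_balance",
            "dimensions": ["year"],
            "measures": [{"name": "amount"}],
            "aggregates": [
                {"name": "amount_sum", "function": "sum", "measure": "amount"},
                {"name": "record_count", "function": "count"}
            ]
        }
    ],
    "dimensions": [{"name": "year"}]
}
```

If your model.json only has "measures", adding an "aggregates" list like the above is worth trying before switching libraries.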
-- 
https://mail.python.org/mailman/listinfo/python-list


best tool to extract domain hierarchy from a dimension in an OLAP dataset (csv)

2024-01-13 Thread marc nicole via Python-list
Hi all,

I have a CSV OLAP dataset from which I want to extract the domain
hierarchies of each of its dimensions.

Anybody could recommend a Python tool that could manage this properly?

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


How to replace a cell value with each of its contour cells and yield the corresponding datasets separately in a list according to a Pandas-way?

2024-01-21 Thread marc nicole via Python-list
Hello,

I have an initial dataframe with a random list of target cells (each cell
being identified with a couple (x,y)).
I want to yield four different dataframes each containing the value of one
of the contour (surrounding) cells of each specified target cell.

The surrounding cells to consider for a specific target cell are (x-1,y),
(x,y-1), (x+1,y), and (x,y+1); specifically, I randomly choose 1 to 4 cells
from these and consider them for replacement of the target cell.

I want to do that through a pandas-specific approach, without having to
define the contour cells separately and then apply the changes on the
dataframe (but rather using an all-in-one approach).
For now I have written this example, which I think is not pandas-specific:

def select_target_values(dataframe, number_of_target_values):
    target_cells = []
    for _ in range(number_of_target_values):
        row_x = random.randint(0, len(dataframe.columns) - 1)
        col_y = random.randint(0, len(dataframe) - 1)
        target_cells.append((row_x, col_y))
    return target_cells


def select_contours(target_cells):
    contour_coordinates = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    contour_cells = []
    for target_cell in target_cells:
        # random contour count for each cell
        contour_cells_count = random.randint(1, 4)
        try:
            contour_cells.append([tuple(map(lambda i, j: i + j,
                                            (target_cell[0], target_cell[1]),
                                            contour_coordinates[iteration_]))
                                  for iteration_ in range(contour_cells_count)])
        except IndexError:
            continue
    return contour_cells


def apply_contours(target_cells, contour_cells):
    target_cells_with_contour = []
    # create one single list of cells
    for idx, target_cell in enumerate(target_cells):
        target_cell_with_contour = [target_cell]
        target_cell_with_contour.extend(contour_cells[idx])
        target_cells_with_contour.append(target_cell_with_contour)
    return target_cells_with_contour


def create_possible_datasets(dataframe, target_cells_with_contour):
    all_datasets_final = []
    dataframe_original = dataframe.copy()
    # check for nans
    list_tuples_idx_cells_all_datasets = list(filter(
        lambda x: utils_tuple_list_not_contain_nan(x),
        [list(tuples) for tuples in
         list(itertools.product(*target_cells_with_contour))]))
    target_original_cells_coordinates = list(map(
        lambda x: x[0],
        [target_and_contour_cell for target_and_contour_cell in
         target_cells_with_contour]))
    for dataset_index_values in list_tuples_idx_cells_all_datasets:
        all_datasets = []
        for idx_cell in range(len(dataset_index_values)):
            dataframe_cpy = dataframe.copy()
            dataframe_cpy.iat[
                target_original_cells_coordinates[idx_cell][1],
                target_original_cells_coordinates[idx_cell][0]] = \
                dataframe_original.iloc[dataset_index_values[idx_cell][1],
                                        dataset_index_values[idx_cell][0]]
            all_datasets.append(dataframe_cpy)
        all_datasets_final.append(all_datasets)
    return all_datasets_final


If you have a better Pandas approach (unifying all these methods into one
that make use of dataframe methods only) please let me know.
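One vectorized direction-at-a-time sketch (my own, not from the thread): shifting the whole frame aligns every cell with one of its neighbours, so a single `shift` per direction can replace the per-cell coordinate arithmetic.

```python
import pandas as pd

df = pd.DataFrame([[2, 6], [8, 5]])  # tiny frame, same shape as the earlier post

# After shifting, shifted.iat[r, c] holds the neighbour's value for (r, c),
# so one frame per direction replaces the per-cell loops.
neighbours = {
    "below": df.shift(-1, axis=0),  # value from (r + 1, c)
    "above": df.shift(1, axis=0),   # value from (r - 1, c)
    "right": df.shift(-1, axis=1),  # value from (r, c + 1)
    "left":  df.shift(1, axis=1),   # value from (r, c - 1)
}

# one candidate dataset: each target cell takes its "below" neighbour
targets = [(0, 0)]                  # (row, col) pairs
variant = df.astype(float)          # float, since shift pads edges with NaN
for r, c in targets:
    variant.iat[r, c] = neighbours["below"].iat[r, c]
print(variant.iat[0, 0])            # 8.0 (the value that was at (1, 0))
```

Edge cells get NaN neighbours for free, which also gives you the "skip invalid contours" behaviour without the try/except.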

thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to replace a cell value with each of its contour cells and yield the corresponding datasets separately in a list according to a Pandas-way?

2024-01-21 Thread marc nicole via Python-list
Thanks for the reply,

I think using a Pandas (or a Numpy) approach would optimize the execution
of the program.

Target cells could be up to 10% of the size of the dataset; a good example
to start with would have from 10 to 100 values.

Let me know your thoughts, here's a reproducible example which I formatted:



from numpy import random
import pandas as pd
import numpy as np
import operator
import math
from collections import deque
from queue import *
from queue import Queue
from itertools import product


def select_target_values(dataframe, number_of_target_values):
target_cells = []
for _ in range(number_of_target_values):
row_x = random.randint(0, len(dataframe.columns) - 1)
col_y = random.randint(0, len(dataframe) - 1)
target_cells.append((row_x, col_y))
return target_cells


def select_contours(target_cells):
contour_coordinates = [(0, 1), (1, 0), (0, -1), (-1, 0)]
contour_cells = []
for target_cell in target_cells:
# random contour count for each cell
contour_cells_count = random.randint(1, 4)
try:
contour_cells.append(
[
tuple(
map(
lambda i, j: i + j,
(target_cell[0], target_cell[1]),
contour_coordinates[iteration_],
)
)
for iteration_ in range(contour_cells_count)
]
)
except IndexError:
continue
return contour_cells


def create_zipf_distribution():
zipf_dist = random.zipf(2, size=(50, 5)).reshape((50, 5))

zipf_distribution_dataset = pd.DataFrame(zipf_dist).round(3)

return zipf_distribution_dataset


def apply_contours(target_cells, contour_cells):
target_cells_with_contour = []
# create one single list of cells
for idx, target_cell in enumerate(target_cells):
target_cell_with_contour = [target_cell]
target_cell_with_contour.extend(contour_cells[idx])
target_cells_with_contour.append(target_cell_with_contour)
return target_cells_with_contour


def create_possible_datasets(dataframe, target_cells_with_contour):
all_datasets_final = []
dataframe_original = dataframe.copy()

list_tuples_idx_cells_all_datasets = list(
filter(
lambda x: x,
[list(tuples) for tuples in
list(product(*target_cells_with_contour))],
)
)
target_original_cells_coordinates = list(
map(
lambda x: x[0],
[
target_and_contour_cell
for target_and_contour_cell in target_cells_with_contour
],
)
)
for dataset_index_values in list_tuples_idx_cells_all_datasets:
all_datasets = []
for idx_cell in range(len(dataset_index_values)):
dataframe_cpy = dataframe.copy()
dataframe_cpy.iat[
target_original_cells_coordinates[idx_cell][1],
target_original_cells_coordinates[idx_cell][0],
] = dataframe_original.iloc[
dataset_index_values[idx_cell][1],
dataset_index_values[idx_cell][0]
]
all_datasets.append(dataframe_cpy)
all_datasets_final.append(all_datasets)
return all_datasets_final


def main():
zipf_dataset = create_zipf_distribution()

target_cells = select_target_values(zipf_dataset, 5)
print(target_cells)
contour_cells = select_contours(target_cells)
print(contour_cells)
target_cells_with_contour = apply_contours(target_cells, contour_cells)
datasets = create_possible_datasets(zipf_dataset,
target_cells_with_contour)
print(datasets)


main()

On Sun, Jan 21, 2024 at 16:33, Thomas Passin via Python-list
<python-list@python.org> wrote:

> On 1/21/2024 7:37 AM, marc nicole via Python-list wrote:
> > Hello,
> >
> > I have an initial dataframe with a random list of target cells (each cell
> > being identified with a couple (x,y)).
> > I want to yield four different dataframes each containing the value of
> one
> > of the contour (surrounding) cells of each specified target cell.
> >
> > the surrounding cells to consider for a specific target cell are :
> (x-1,y),
> > (x,y-1),(x+1,y);(x,y+1), specifically I randomly choose 1 to 4 cells from
> > these and consider for replacement to the target cell.
> >
> > I want to do that through a pandas-specific approach without having to
> > define the contour cells separately and then apply the changes on the
> > dataframe
>
> 1. Why do you want a Pandas-specific approach?  Many people would rather
> keep code independent of special libraries if possible;
>
> 2. How big can these collections of target cells be, roughly speaking?
> The size could make a big difference in picking a d

Re: How to replace a cell value with each of its contour cells and yield the corresponding datasets separately in a list according to a Pandas-way?

2024-01-21 Thread marc nicole via Python-list
It is part of a larger project aiming at processing data according to a
given algorithm.
Do you have any comments or any enhancing recommendations on the code?

Thanks.

On Sun, Jan 21, 2024 at 18:28, Thomas Passin via Python-list
<python-list@python.org> wrote:

> On 1/21/2024 11:54 AM, marc nicole wrote:
> > Thanks for the reply,
> >
> > I think using a Pandas (or a Numpy) approach would optimize the
> > execution of the program.
> >
> > Target cells could be up to 10% the size of the dataset, a good example
> > to start with would have from 10 to 100 values.
>
> Thanks for the reformatted code.  It's much easier to read and think about.
>
> For say 100 points, it doesn't seem that "optimization" would be much of
> an issue.  On my laptop machine and Python 3.12, your example takes
> around 5 seconds to run and print().  OTOH if you think you will go to
> much larger datasets, certainly execution time could become a factor.
>
> I would think that NumPy arrays and/or matrices would have good potential.
>
> Is this some kind of a cellular automaton, or an image filtering process?
>
> > Let me know your thoughts, here's a reproducible example which I
> formatted:
> >
> >
> >
> > from numpy import random
> > import pandas as pd
> > import numpy as np
> > import operator
> > import math
> > from collections import deque
> > from queue import *
> > from queue import Queue
> > from itertools import product
> >
> >
> > def select_target_values(dataframe, number_of_target_values):
> >  target_cells = []
> >  for _ in range(number_of_target_values):
> >  row_x = random.randint(0, len(dataframe.columns) - 1)
> >  col_y = random.randint(0, len(dataframe) - 1)
> >  target_cells.append((row_x, col_y))
> >  return target_cells
> >
> >
> > def select_contours(target_cells):
> >  contour_coordinates = [(0, 1), (1, 0), (0, -1), (-1, 0)]
> >  contour_cells = []
> >  for target_cell in target_cells:
> >  # random contour count for each cell
> >  contour_cells_count = random.randint(1, 4)
> >  try:
> >  contour_cells.append(
> >  [
> >  tuple(
> >  map(
> >  lambda i, j: i + j,
> >  (target_cell[0], target_cell[1]),
> >  contour_coordinates[iteration_],
> >  )
> >  )
> >  for iteration_ in range(contour_cells_count)
> >  ]
> >  )
> >  except IndexError:
> >  continue
> >  return contour_cells
> >
> >
> > def create_zipf_distribution():
> >  zipf_dist = random.zipf(2, size=(50, 5)).reshape((50, 5))
> >
> >  zipf_distribution_dataset = pd.DataFrame(zipf_dist).round(3)
> >
> >  return zipf_distribution_dataset
> >
> >
> > def apply_contours(target_cells, contour_cells):
> >  target_cells_with_contour = []
> >  # create one single list of cells
> >  for idx, target_cell in enumerate(target_cells):
> >  target_cell_with_contour = [target_cell]
> >  target_cell_with_contour.extend(contour_cells[idx])
> >  target_cells_with_contour.append(target_cell_with_contour)
> >  return target_cells_with_contour
> >
> >
> > def create_possible_datasets(dataframe, target_cells_with_contour):
> >  all_datasets_final = []
> >  dataframe_original = dataframe.copy()
> >
> >  list_tuples_idx_cells_all_datasets = list(
> >  filter(
> >  lambda x: x,
> >  [list(tuples) for tuples in
> > list(product(*target_cells_with_contour))],
> >  )
> >  )
> >  target_original_cells_coordinates = list(
> >  map(
> >  lambda x: x[0],
> >  [
> >  target_and_contour_cell
> >  for target_and_contour_cell in target_cells_with_contour
> >  ],
> >  )
> >  )
> >  for dataset_index_values in list_tuples_idx_cells_all_datasets:
> >  all_datasets = []
> >  for idx_cell in range(len(dataset_index_values)):
> >  dataframe_cpy = dataframe.copy()
> >  dataframe_cpy.iat[
> >  target_original_cells_coordinates[idx_

How to create a binary tree hierarchy given a list of elements as its leaves

2024-01-28 Thread marc nicole via Python-list
So I am trying to build a binary tree hierarchy given numerical elements
serving as its leaves (the last level of the tree to build). From the
leaves I want to randomly create a name for the higher level of the
hierarchy and assign it to the children elements. For example: if the
elements inputted are `0,1,2,3`, then I would first like to create 4
elements (say, by randomly giving each a label composed of a letter and a
number); then for the second level (iteration) I assign each of `0,1` to a
random name label (e.g. `b1`) and `2,3` to another label (`b2`); then for
the third level I assign a parent label `c1` to each of `b1` and `b2`.

An illustration of the example is the following tree:


[image: tree_exp.PNG]

For this I use numpy's `array_split()` to get the chunks of arrays based on
the iteration needs.
for example to get the first iteration arrays I use `np.array_split(input,
(input.size // k))` where `k` is an even number. In order to assign a
parent node to the children the array range should enclose the children's.
For example to assign the parent node with label `a1` to children `b1` and
`b2` with range respectively [0,1] and [2,3], the parent should have the
range [0,3].

All is fine until a certain iteration (k=4) returns parent with range [0,8]
which is overlapping to children ranges and therefore cannot be their
parent.

My question is how to evenly partition such arrays in a binary way and
create such binary tree so that to obtain for k=4 the first range to be
[0,7] instead of [0,8]?

My code is the following:

#!/usr/bin/python
# -*- coding: utf-8 -*-
import string
import random
import numpy as np


def generate_numbers_list_until_number(stop_number):
if str(stop_number).isnumeric():
return np.arange(stop_number)
else:
raise TypeError('Input should be a number!')


def generate_node_label():
return random.choice(string.ascii_lowercase) \
+ str(random.randint(0, 10))


def main():
data = generate_numbers_list_until_number(100)
k = 1
hierarchies = []
cells_arrays = np.array_split(data, data.size // k)
print cells_arrays
used_node_hierarchy_name = []
node_hierarchy_name = [generate_node_label() for _ in range(0,
   len(cells_arrays))]
used_node_hierarchy_name.extend(node_hierarchy_name)
while len(node_hierarchy_name) > 1:
k = k * 2

# bug here in the following line

cells_arrays = list(map(lambda x: [x[0], x[-1]],
np.array_split(data, data.size // k)))
print cells_arrays
node_hierarchy_name = []

# node hierarchy names should not be redundant in another level

for _ in range(0, len(cells_arrays)):
node_name = generate_node_label()
while node_name in used_node_hierarchy_name:
node_name = generate_node_label()
node_hierarchy_name.append(node_name)
used_node_hierarchy_name.extend(node_hierarchy_name)
print used_node_hierarchy_name
hierarchies.append(list(zip(node_hierarchy_name, cells_arrays)))
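One way around the overlapping [0, 8] range (a sketch of my own, not from the thread) is to build each level by pairing consecutive child ranges instead of re-splitting the whole array with `array_split`; a parent then spans exactly its two children, so for k=4 the first range is [0, 7] as wanted:

```python
# Pair consecutive child ranges to get each parent's range: a parent
# spans exactly its two children, so ranges can never overlap a child's.
def parent_ranges(child_ranges):
    parents = []
    for i in range(0, len(child_ranges) - 1, 2):
        parents.append([child_ranges[i][0], child_ranges[i + 1][-1]])
    if len(child_ranges) % 2:            # odd child out: promote it as-is
        parents.append(list(child_ranges[-1]))
    return parents

leaves = [[i, i] for i in range(8)]      # leaf "ranges" for elements 0..7
level1 = parent_ranges(leaves)           # [[0, 1], [2, 3], [4, 5], [6, 7]]
level2 = parent_ranges(level1)           # [[0, 3], [4, 7]]
level3 = parent_ranges(level2)           # [[0, 7]] -- not [0, 8]
```

Labels can then be generated per level exactly as in the post, zipping each level's names with these ranges.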
-- 
https://mail.python.org/mailman/listinfo/python-list


Can you help me with this memoization simple example?

2024-03-30 Thread marc nicole via Python-list
I am creating a memoization example with a function that adds up / averages
the elements of an array and compares it with the cached ones to retrieve
them in case they are already stored.

In addition, I want to store only if the result of the function differs
considerably (passes a threshold e.g. 50 below).

I created an example using a decorator to do so; the results using the
decorator are slightly faster than without the memoization, which is OK,
but is the logic of the decorator correct? Can anybody tell me?

My code is attached below:



import time


def memoize(f):
    cache = {}

    def g(*args):
        if args[1] == "avg":
            sum_key_arr = sum(list(args[0])) / len(list(args[0]))
        elif args[1] == "sum":
            sum_key_arr = sum(list(args[0]))
        if sum_key_arr not in cache:
            # key in dict cannot be an array so I use the sum of the
            # array as the key
            for key, value in cache.items():
                # threshold is great here so that all values are
                # approximated!
                if abs(sum_key_arr - key) <= 50:
                    # print('approximated')
                    return cache[key]
            else:
                # print('not approximated')
                cache[sum_key_arr] = f(args[0], args[1])
        return cache[sum_key_arr]

    return g


@memoize
def aggregate(dict_list_arr, operation):
if operation == "avg":
return sum(list(dict_list_arr)) / len(list(dict_list_arr))
if operation == "sum":
return sum(list(dict_list_arr))
return None


t = time.time()
for i in range(200, 15000):
res = aggregate(list(range(i)), "avg")

elapsed = time.time() - t
print(res)
print(elapsed)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Can you help me with this memoization simple example?

2024-03-31 Thread marc nicole via Python-list
Thanks for the first comment which I incorporated

but when you say "You can't use a list as a key, but you can use a tuple as
a key,
provided that the elements of the tuple are also immutable."

does it mean the result of the sum of the array is not convenient to use
as a key, as I do?
Which tuple should I use to refer to the underlying list value, as you
suggest?

Is everything else in my code OK?

Thanks
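As I understand MRAB's suggestion, the idea is that the whole sequence itself can be the cache key once frozen into a tuple, instead of collapsing it to a (collision-prone) sum; a minimal sketch of my own:

```python
# A list is unhashable, so it cannot be a dict key; a tuple of the same
# values can be, as long as its elements are themselves hashable.
cache = {}
key = tuple([1, 2, 3])     # tuple(...) freezes the list's contents
cache[key] = 6
print(cache[(1, 2, 3)])    # 6 -- equal tuples hash equally

try:
    cache[[1, 2, 3]] = 6   # the list itself fails
except TypeError as e:
    print(e)               # "unhashable type: 'list'"
```

So `cache[tuple(args[0]), args[1]]` would key on the exact data and operation, with no threshold needed for exact hits.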

On Sun, Mar 31, 2024 at 01:44, MRAB via Python-list
wrote:

> On 2024-03-31 00:09, marc nicole via Python-list wrote:
> > I am creating a memoization example with a function that adds up /
> averages
> > the elements of an array and compares it with the cached ones to retrieve
> > them in case they are already stored.
> >
> > In addition, I want to store only if the result of the function differs
> > considerably (passes a threshold e.g. 50 below).
> >
> > I created an example using a decorator to do so, the results using the
> > decorator is slightly faster than without the memoization which is OK,
> but
> > is the logic of the decorator correct ? anybody can tell me ?
> >
> > My code is attached below:
> >
> >
> >
> > import time
> >
> >
> > def memoize(f):
> >  cache = {}
> >
> >  def g(*args):
> >  if args[1] == "avg":
> >  sum_key_arr = sum(list(args[0])) / len(list(args[0]))
>
> 'list' will iterate over args[0] to make a list, and 'sum' will iterate
> over that list.
>
> It would be simpler to just let 'sum' iterate over args[0].
>
> >  elif args[1] == "sum":
> >  sum_key_arr = sum(list(args[0]))
> >  if sum_key_arr not in cache:
> >  for (
> >  key,
> >  value,
> >  ) in (
> >  cache.items()
> >  ):  # key in dict cannot be an array so I use the sum of the
> > array as the key
>
> You can't use a list as a key, but you can use a tuple as a key,
> provided that the elements of the tuple are also immutable.
>
> >  if (
> >  abs(sum_key_arr - key) <= 50
> >  ):  # threshold is great here so that all values are
> > approximated!
> >  # print('approximated')
> >  return cache[key]
> >  else:
> >  # print('not approximated')
> >  cache[sum_key_arr] = f(args[0], args[1])
> >  return cache[sum_key_arr]
> >
> >  return g
> >
> >
> > @memoize
> > def aggregate(dict_list_arr, operation):
> >  if operation == "avg":
> >  return sum(list(dict_list_arr)) / len(list(dict_list_arr))
> >  if operation == "sum":
> >  return sum(list(dict_list_arr))
> >  return None
> >
> >
> > t = time.time()
> > for i in range(200, 15000):
> >  res = aggregate(list(range(i)), "avg")
> >
> > elapsed = time.time() - t
> > print(res)
> > print(elapsed)
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Couldn't install numpy on Python 2.7

2024-06-12 Thread marc nicole via Python-list
I am trying to install numpy library on Python 2.7.15 in PyCharm but the
error message I get is:

ERROR: Could not find a version that satisfies the requirement numpy (from
> versions: none)
> ERROR: No matching distribution found for numpy
> c:\python27\lib\site-packages\pip\_vendor\urllib3\util\ssl_.py:164:
> InsecurePlatformWarning: A true SSLContext object is not available. This
> prevents urllib3 fro
> m configuring SSL appropriately and may cause certain SSL connections to
> fail. You can upgrade to a newer version of Python to solve this. For more
> information, see
> https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
>   InsecurePlatformWarning,


Any clues?
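Not a verified fix, but two version pins usually matter here: the TLS warning suggests the bundled pip is too old to talk to today's PyPI, and to my knowledge pip 20.3.4 was the last pip release supporting Python 2.7, with numpy 1.16.6 the last numpy release. A sketch of the usual sequence, to be run inside the 2.7 environment (version numbers from memory; verify against PyPI):

```shell
# upgrade pip itself first; pip 21+ dropped Python 2.7
python -m pip install --upgrade "pip<21"
# then pin numpy to the last 2.7-compatible line
python -m pip install "numpy==1.16.6"
```

If TLS still fails, the Python 2.7.15 build itself may lack a modern OpenSSL, in which case a newer 2.7.x installer (or Python 3) is the cleaner way out.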
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [Tutor] How to go about a simple object grabbing in python (given coordinates of arms and objects)

2024-06-22 Thread marc nicole via Python-list
My code is just an attempt at the task; it is not exact in what relates to
the coordinates (e.g., it doesn't account for the size of the object). I
would like to have an idea of the general approach to such problems (even
pseudo code would do).

"Get the hands rapidly enough in the vicinity and then do some fine
coordinated motions to capture the object and then presumably move it."
seems to be a good approach indeed,
The grabbing with both hands code should be more precise.

Thanks for the help anyways!

On Sat, Jun 22, 2024 at 23:04, ThreeBlindQuarks
wrote:

> Marc,
>
> Could you specify what is wrong with what you are doing? you show us code
> that uses an environment you point to that is largely outside of basic
> Python.
>
> There is no one way to get from point A to point B and various constraints
> you have not mentioned can apply. How many joints does the assemblage have
> and what are the limits and costs associated with each. Cam there be
> barriers along a route, including to the side where they may brush part of
> your equipment. Are other things moving (independently even) that may end
> up blocking.
>
> You seem to need both "hands" and presumably at the same time. So
> solutions can take that into account. You need to define what is meant by
> contacting the object to move and you don't want to approach it and hit
> with some speed.
>
> So, the problem may be in parts. Get the hands rapidly enough in the
> vicinity and then do some fine coordinated motions to capture the object
> and then presumably move it.
>
> If you could point to what code is not doing what is expected, someone who
> knows the details or is willing to learn, might help, If you want an
> overall algorithm, there may be some people could share but they may not
> easily translate into the package of sorts you are using.
>
> But the web site you point us to may well already contain examples of
> doing some aspects that you might learn from.
>
> For me, this is too detailed to focus on as I struggle to figure out how
> to move my hands to different parts of my keyboard while looking ...
>
> And that may be one variant of an algorithm where instead of trying to
> move all the way, you move art-way and LOOK where you are, then repeat.
>
>
> Sent with Proton Mail secure email.
>
> On Saturday, June 22nd, 2024 at 8:41 AM, marc nicole 
> wrote:
>
> > Hello to all of this magnificent community!
> >
> > I have this problem I had already spent a few days on and still can't
> > figure out a proper solution.
> >
> > So, given the x,y,z coordinates of a target object and the offset x,y,z
> of
> > arms of a robot, what is a good algorithm to perform to grab the object
> > between the hands (either from both sides or from below all using both
> > hands).
> >
> > Specifically, my problem is applied to a NAO robot environment where I
> > retrieve a target object coordinates using the following code:
> >
> > tracker_service= session.service("ALTracker")
> > xyz_pos = tracker_service.getTargetPosition(motion.FRAME_TORSO)
> >
> >
> > src:
> >
> http://doc.aldebaran.com/2-8/naoqi/motion/control-cartesian.html#motion-cartesian-effectors
> >
> >
> > Then I get to move the right arm towards nearby the object using the
> > following code:
> >
> > effector = "RArm"
> >
> > frame = motion.FRAME_TORSO
> > effector_offset =
> > almath.Transform(self.motion.getTransform(effector, frame, False))
> > effector_init_3d_position = almath.position3DFromTransform(
> > effector_offset)
> >
> > target_3d_position = almath.Position3D(target_position)
> > move_3d = target_3d_position - effector_init_3d_position
> > moveTransform = almath.Transform.fromPosition(move_3d.x,
> > move_3d.y, move_3d.z)
> > target_transformer_list = list(moveTransform.toVector())
> > times = [2.0]
> > axis_mask_list = motion.AXIS_MASK_VEL
> > self.motion.transformInterpolations(effector, frame,
> > target_transformer_list, axis_mask_list, times).
> >
> > src:
> http://doc.aldebaran.com/1-14/dev/python/examples/almath/index.html?highlight=offset
> > This question is specific to NAO environment but in general how to go
> > about this task? what is a most common algorithm used in this case? Do
> > I have to also get the side of the object in order to know where
> > exactly the arms should be placed?
> > ___
> > Tutor maillist - tu...@python.org
> > To unsubscribe or change subscription options:
> > https://mail.python.org/mailman/listinfo/tutor
>
-- 
https://mail.python.org/mailman/listinfo/python-list


How to go about a simple object grabbing in python (given coordinates of arms and objects)

2024-06-23 Thread marc nicole via Python-list
Hello to all of this magnificent community!

I have this problem I had already spent a few days on and still can't
figure out a proper solution.

So, given the x,y,z coordinates of a target object and the offset x,y,z of
arms of a robot, what is a good algorithm to perform to grab the object
between the hands (either from both sides or from below all using both
hands).

Specifically, my problem is applied to a NAO robot environment where I
retrieve a target object coordinates using the following code:

tracker_service= session.service("ALTracker")
xyz_pos = tracker_service.getTargetPosition(motion.FRAME_TORSO)


src:
http://doc.aldebaran.com/2-8/naoqi/motion/control-cartesian.html#motion-cartesian-effectors


Then I get to move the right arm towards nearby the object using the
following code:

effector = "RArm"

frame = motion.FRAME_TORSO
effector_offset =
almath.Transform(self.motion.getTransform(effector, frame, False))
effector_init_3d_position = almath.position3DFromTransform(
effector_offset)

target_3d_position = almath.Position3D(target_position)
move_3d = target_3d_position - effector_init_3d_position
moveTransform = almath.Transform.fromPosition(move_3d.x,
move_3d.y, move_3d.z)
target_transformer_list = list(moveTransform.toVector())
times = [2.0]
axis_mask_list = motion.AXIS_MASK_VEL
self.motion.transformInterpolations(effector, frame,
target_transformer_list, axis_mask_list, times).

src: 
http://doc.aldebaran.com/1-14/dev/python/examples/almath/index.html?highlight=offset
This question is specific to the NAO environment, but in general how does
one go about this task? What is the most common algorithm used in this
case? Do I also have to get the side of the object in order to know where
exactly the arms should be placed?
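As a rough starting point only (a sketch of my own, not NAO API code): one common decomposition is to compute pre-grasp points offset to either side of the target, then close in on the object stepwise, executing each step with `transformInterpolations` as in the snippet above. The geometry part can be kept separate from the robot API:

```python
# Hypothetical pre-grasp geometry: points offset to either side of the
# object centre, shrunk stepwise toward the object's half-width so both
# hands approach symmetrically instead of hitting the object at speed.
def grasp_points(target, half_width):
    """Pre-grasp points left/right of the object centre (torso frame)."""
    x, y, z = target
    return (x, y + half_width, z), (x, y - half_width, z)

target = (0.15, 0.0, 0.05)          # example x, y, z in metres
left, right = grasp_points(target, half_width=0.06)

# phase 2: shrink the lateral offset in steps toward the true half-width
steps = [0.06, 0.04, 0.03]
approach = [grasp_points(target, w) for w in steps]
```

Each pair in `approach` would become one short `transformInterpolations` move per arm, re-reading the tracker position between steps so the hands converge on where the object actually is.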
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [Tutor] How to go about a simple object grabbing in python (given coordinates of arms and objects)

2024-06-24 Thread marc nicole via Python-list
What are the parameters to account for in this type of algorithm? Are
there checks to perform for the arm moves, for example angle moves or
cartesian moves based on some distance thresholds? Any idea about the
pseudo-algorithm is welcome.

Thanks.

On Sun, Jun 23, 2024 at 10:33, Alan Gauld via Tutor
wrote:

> On 22/06/2024 13:41, marc nicole wrote:
>
> > So, given the x,y,z coordinates of a target object and the offset x,y,z
> of
> > arms of a robot, what is a good algorithm to perform to grab the object
> > between the hands (either from both sides or from below all using both
> > hands).
> >
> > Specifically, my problem is applied to a NAO robot environment where I
> > retrieve a target object coordinates using the following code:
>
> This is almost entirely outside the Python domain and all within
> your 3rd party environment. Do they have a user forum or mailing
> list? You will probably get better results asking there?
>
> Another possibility is that you are using a Python wrapper around
> a C (or other language) library and there might be FAQs, fora or
> lists supporting that. If so you should be able to translate
> their examples to your Python code?
>
> In terms of generic solutions the only thing I can suggest that
> might help is to research collision detection algorithms.
> Wikipedia is likely a good starting point.
>
> --
> Alan G
> Author of the Learn to Program web site
> http://www.alan-g.me.uk/
> http://www.amazon.com/author/alan_gauld
> Follow my photo-blog on Flickr at:
> http://www.flickr.com/photos/alangauldphotos
>
>
>
> ___
> Tutor maillist  -  tu...@python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor
>
-- 
https://mail.python.org/mailman/listinfo/python-list


How to install tensorflow on Python 2.7 in Windows?

2024-06-26 Thread marc nicole via Python-list
Browsing the available versions of tensorflow dated before January 2021
(when Python 2.7 stopped being supported), I can't find a tensorflow
version for Python 2.7 that works under Windows.

The reference site I use is https://pypi.org/project/tensorflow/

Anybody can point out a compatible .whl file with Python 2.7 and Windows?


Predicting an object over an pretrained model is not working

2024-07-30 Thread marc nicole via Python-list
Hello all,

I want to predict an object by giving an image as input and have my model
predict its label. I have trained a model using tensorflow, based on an
annotated database where the target object to predict was added to the
pretrained model. The code I am using is the following, where I set the
target object image as input and want to get the prediction output:








class MultiObjectDetection():

def __init__(self, classes_name):

self._classes_name = classes_name
self._num_classes = len(classes_name)

self._common_params = {'image_size': 448, 'num_classes':
self._num_classes,
'batch_size':1}
self._net_params = {'cell_size': 7, 'boxes_per_cell':2,
'weight_decay': 0.0005}
self._net = YoloTinyNet(self._common_params, self._net_params,
test=True)

def predict_object(self, image):
predicts = self._net.inference(image)
return predicts

def process_predicts(self, resized_img, predicts, thresh=0.2):
"""
process the predicts of object detection with one image input.

Args:
resized_img: resized source image.
predicts: output of the model.
thresh: thresh of bounding box confidence.
Return:
predicts_dict: {"stick": [[x1, y1, x2, y2, scores1], [...]]}.
"""
cls_num = self._num_classes
bbx_per_cell = self._net_params["boxes_per_cell"]
cell_size = self._net_params["cell_size"]
img_size = self._common_params["image_size"]
p_classes = predicts[0, :, :, 0:cls_num]
        C = predicts[0, :, :, cls_num:cls_num+bbx_per_cell]  # two bounding boxes in one cell
        coordinate = predicts[0, :, :, cls_num+bbx_per_cell:]  # all bounding boxes position

p_classes = np.reshape(p_classes, (cell_size, cell_size, 1, cls_num))
C = np.reshape(C, (cell_size, cell_size, bbx_per_cell, 1))

        P = C * p_classes  # confidence for all classes of all bounding boxes:
        # (cell_size, cell_size, boxes_per_cell, class_num) = (7, 7, 2, 1)

predicts_dict = {}
for i in range(cell_size):
for j in range(cell_size):
temp_data = np.zeros_like(P, np.float32)
temp_data[i, j, :, :] = P[i, j, :, :]
                position = np.argmax(temp_data)  # class num (with maximum confidence) for every bounding box
index = np.unravel_index(position, P.shape)

if P[index] > thresh:
class_num = index[-1]
                    coordinate = np.reshape(coordinate, (cell_size, cell_size, bbx_per_cell, 4))
                    # (cell_size, cell_size, bbox_num_per_cell, coordinate) = [xmin, ymin, xmax, ymax]
max_coordinate = coordinate[index[0], index[1], index[2], :]

xcenter = max_coordinate[0]
ycenter = max_coordinate[1]
w = max_coordinate[2]
h = max_coordinate[3]

xcenter = (index[1] + xcenter) * (1.0*img_size /cell_size)
ycenter = (index[0] + ycenter) * (1.0*img_size /cell_size)

w = w * img_size
h = h * img_size
                    xmin = 0 if (xcenter - w/2.0 < 0) else (xcenter - w/2.0)
                    ymin = 0 if (ycenter - h/2.0 < 0) else (ycenter - h/2.0)
                    xmax = resized_img.shape[0] if (xmin + w) > resized_img.shape[0] else (xmin + w)
                    ymax = resized_img.shape[1] if (ymin + h) > resized_img.shape[1] else (ymin + h)

class_name = self._classes_name[class_num]
predicts_dict.setdefault(class_name, [])
                    predicts_dict[class_name].append([int(xmin), int(ymin), int(xmax), int(ymax), P[index]])

return predicts_dict

def non_max_suppress(self, predicts_dict, threshold=0.5):
"""
implement non-maximum supression on predict bounding boxes.
Args:
predicts_dict: {"stick": [[x1, y1, x2, y2, scores1], [...]]}.
threshhold: iou threshold
Return:
predicts_dict processed by non-maximum suppression
"""
for object_name, bbox in predicts_dict.items():
bbox_array = np.array(bbox, dtype=np.float)
            x1, y1, x2, y2, scores = bbox_array[:, 0], bbox_array[:, 1], bbox_array[:, 2], bbox_array[:, 3], bbox_array[:, 4]
areas = (x2-x1+1) * (y2-y1+1)
order = scores.argsort()[::-1]
keep = []
while order.size > 0:
i = order[0]
keep.append(i)
xx1 = np.maximum(x1[i], x1[order[1:]])
yy1 = np.maximum(y1[i], y1[order[1:]])
xx2 = np.minimum(x2[i], x2[order[1:]])
yy2 = np.minimum(y2[i], y2[order[1:]])
inter = np.maximum(0.0, xx2-xx1+1) * np.maximum(0.0, yy2-yy1+1)
                iou = inter / (areas[i] + areas[order[1:]] - inter)
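For reference, the same greedy IoU-based suppression as a self-contained
sketch (the example boxes are made up for illustration):

```python
import numpy as np

def non_max_suppress(boxes, threshold=0.5):
    """Greedy NMS over rows of [x1, y1, x2, y2, score]."""
    bbox = np.asarray(boxes, dtype=float)
    x1, y1, x2, y2, scores = (bbox[:, 0], bbox[:, 1],
                              bbox[:, 2], bbox[:, 3], bbox[:, 4])
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the kept box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop remaining boxes that overlap the kept one too much
        order = order[1:][iou <= threshold]
    return [boxes[i] for i in keep]

# Two overlapping boxes and one far away: the lower-scored overlap is dropped
boxes = [[0, 0, 10, 10, 0.9], [1, 1, 10, 10, 0.8], [20, 20, 30, 30, 0.7]]
kept = non_max_suppress(boxes)
print(kept)
```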

Re: Predicting an object over an pretrained model is not working

2024-07-30 Thread marc nicole via Python-list
OK, but how is the probability of small_ball greater than the others? I
can't find it anywhere; what is its value?

On Tue, 30 Jul 2024 at 21:37, Thomas Passin via Python-list <
python-list@python.org> wrote:

> On 7/30/2024 2:18 PM, marc nicole via Python-list wrote:
> > Hello all,
> >
> > I want to predict an object by given as input an image and want to have
> my
> > model be able to predict the label. I have trained a model using
> tensorflow
> > based on annotated database where the target object to predict was added
> to
> > the pretrained model. the code I am using is the following where I set
> the
> > target object image as input and want to have the prediction output:
> >
> >
> >
> >
> >
> >
> >
> >
> > class MultiObjectDetection():
> >
> >  def __init__(self, classes_name):
> >
> >  self._classes_name = classes_name
> >  self._num_classes = len(classes_name)
> >
> >  self._common_params = {'image_size': 448, 'num_classes':
> > self._num_classes,
> >  'batch_size':1}
> >  self._net_params = {'cell_size': 7, 'boxes_per_cell':2,
> > 'weight_decay': 0.0005}
> >  self._net = YoloTinyNet(self._common_params, self._net_params,
> > test=True)
> >
> >  def predict_object(self, image):
> >  predicts = self._net.inference(image)
> >  return predicts
> >
> >  def process_predicts(self, resized_img, predicts, thresh=0.2):
> >  """
> >  process the predicts of object detection with one image input.
> >
> >  Args:
> >  resized_img: resized source image.
> >  predicts: output of the model.
> >  thresh: thresh of bounding box confidence.
> >  Return:
> >  predicts_dict: {"stick": [[x1, y1, x2, y2, scores1],
> [...]]}.
> >  """
> >  cls_num = self._num_classes
> >  bbx_per_cell = self._net_params["boxes_per_cell"]
> >  cell_size = self._net_params["cell_size"]
> >  img_size = self._common_params["image_size"]
> >  p_classes = predicts[0, :, :, 0:cls_num]
> >  C = predicts[0, :, :, cls_num:cls_num+bbx_per_cell] # two
> > bounding boxes in one cell.
> >  coordinate = predicts[0, :, :, cls_num+bbx_per_cell:] # all
> > bounding boxes position.
> >
> >  p_classes = np.reshape(p_classes, (cell_size, cell_size, 1,
> cls_num))
> >  C = np.reshape(C, (cell_size, cell_size, bbx_per_cell, 1))
> >
> >  P = C * p_classes # confidencefor all classes of all bounding
> > boxes (cell_size, cell_size, bounding_box_num, class_num) = (7, 7, 2,
> > 1).
> >
> >  predicts_dict = {}
> >  for i in range(cell_size):
> >  for j in range(cell_size):
> >  temp_data = np.zeros_like(P, np.float32)
> >  temp_data[i, j, :, :] = P[i, j, :, :]
> >  position = np.argmax(temp_data) # refer to the class
> > num (with maximum confidence) for every bounding box.
> >  index = np.unravel_index(position, P.shape)
> >
> >  if P[index] > thresh:
> >  class_num = index[-1]
> >  coordinate = np.reshape(coordinate, (cell_size,
> > cell_size, bbx_per_cell, 4)) # (cell_size, cell_size,
> > bbox_num_per_cell, coordinate)[xmin, ymin, xmax, ymax]
> >  max_coordinate = coordinate[index[0], index[1],
> index[2], :]
> >
> >  xcenter = max_coordinate[0]
> >  ycenter = max_coordinate[1]
> >  w = max_coordinate[2]
> >  h = max_coordinate[3]
> >
> >  xcenter = (index[1] + xcenter) * (1.0*img_size
> /cell_size)
> >  ycenter = (index[0] + ycenter) * (1.0*img_size
> /cell_size)
> >
> >  w = w * img_size
> >  h = h * img_size
> >  xmin = 0 if (xcenter - w/2.0 < 0) else (xcenter -
> w/2.0)
> >  ymin = 0 if (xcenter - w/2.0 < 0) else (ycenter -
> h/2.0)
> >  xmax = resized_img.shape[0] if (xmin + w) >
> > resized_img.shape[0] else (xmin + w)
> >  ymax = resized

Re: Predicting an object over an pretrained model is not working

2024-07-31 Thread marc nicole via Python-list
I suppose the meaning of those numbers comes from this line:
predicts_dict[class_name].append([int(xmin), int(ymin), int(xmax), int(ymax),
P[index]]) as well as from the yolo inference call. But I was expecting zeros
for all classes except small_ball, because the image only shows that, and a
train and a sheep won't have any target position or any probability
whatsoever in the image weirdobject.jpg


On Wed, 31 Jul 2024, 00:19 dn via Python-list, 
wrote:

> On 31/07/24 06:18, marc nicole via Python-list wrote:
> > Hello all,
> >
> > I want to predict an object by given as input an image and want to have
> my
> > model be able to predict the label. I have trained a model using
> tensorflow
> > based on annotated database where the target object to predict was added
> to
> > the pretrained model. the code I am using is the following where I set
> the
> > target object image as input and want to have the prediction output:
>
> ...
>
>
> > WHile I expect only the dict to contain the small_ball key
>
> > How's that is possible? where's the prediction output?How to fix the
> code?
>
>
> To save us lots of reading and study to be able to help you, please advise:
>
> 1 what are the meanings of all these numbers?
>
> > 'sheep': [[233.0, 92.0, 448.0, -103.0,
> >> 5.3531270027160645], [167.0, 509.0, 209.0, 101.0, 4.947688579559326],
> >> [0.0, 0.0, 448.0, 431.0, 3.393721580505371]]
>
> 2 (assuming it hasn't) why the dict has not been sorted into a
> meaningful order
>
> 3 how can one tell that the image is more likely to be a sheep than a
> train?
>
> --
> Regards,
> =dn
> --
> https://mail.python.org/mailman/listinfo/python-list
>


Re: Predicting an object over an pretrained model is not working

2024-07-31 Thread marc nicole via Python-list
Your invitation to read up on machine learning is not helping; if you want
to enlighten us on this specific case, please do. Otherwise, please spare me
such comments, which I already know.

On Wed, 31 Jul 2024, 16:00 Grant Edwards via Python-list, <
python-list@python.org> wrote:

> On 2024-07-31, marc nicole via Python-list  wrote:
>
> > I suppose the meaning of those numbers comes from this line
> > predicts_dict[class_name].append([int(xmin), int(ymin), int(xmax),
> > int(ymax), P[index]]) as well as the yolo inference call. But i was
> > expecting zeros for all classes except smallball.
>
> That's not how machine learning and object recognition works.
>
> > Because the image only shows that,
>
> You know that. The machine doesn't.
>
> > and that a train and a sheep wont have any target position or any
> > probability whatsoever in the image weirdobject.jpg
>
> That depends on the training data and how the model works.
>
> You should probably do some reading on neural networks, machine
> learning, and pattern/object recognition. You appear to be trying to
> use tools without understanding what they do or how they work.
>
> --
> Grant
> --
> https://mail.python.org/mailman/listinfo/python-list
>


Getting a Process.start() error pickle.PicklingError: Can't pickle : it's not found as __builtin__.module with Python 2.7

2024-09-02 Thread marc nicole via Python-list
Hello,

I am using Python 2.7 on Windows 10 and I want to launch a process
independently of the rest of the code so that the execution continues while
the started process proceeds. I am using Process().start() from Python 2.7
as follows:

from multiprocessing import Process
def do_something(text):
print(text)
if __name__ == "__main__":
q = Process(target=do_something,args=("somecmd") )
q.start()
# following code should execute right after the q.start() call (not
until it returns)
.


But getting the error at the call of Process().start():
pickle.PicklingError: Can't pickle : it's not found as
__builtin__.module

Could anybody provide an alternative to call the function do_something() in
a separate thread?
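One thing worth checking, independent of the pickling issue: args=("somecmd")
is a plain string, not a one-element tuple, so the child would receive the
wrong arguments even if pickling succeeded. A minimal sketch with the
trailing comma added (Python 3 syntax that also runs on 2.7):

```python
from multiprocessing import Process

def do_something(text):
    print(text)

if __name__ == "__main__":
    # ("somecmd") is just the string "somecmd"; the trailing comma
    # is what makes a one-element tuple: ("somecmd",)
    q = Process(target=do_something, args=("somecmd",))
    q.start()
    print("this line runs right after q.start(), without waiting")
    q.join()  # optional: wait for the child before exiting
```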


Re: [Tutor] Getting a Process.start() error pickle.PicklingError: Can't pickle : it's not found as __builtin__.module with Python 2.7

2024-09-03 Thread marc nicole via Python-list
Hello Alan,

Thanks for the reply, Here's the code I tested for the debug:

import time
from multiprocessing import Process

def do_something():
    print('hello world!')

def start(fn):
    p = Process(target=fn, args=())
    p.start()

def ghello():
    print("hello world g")

def fhello():
    print('hello world f')

if __name__ == "__main__":
    start(do_something)
    print("executed")
    exit(0)

but neither "hello world!" nor "executed" is displayed in the console, which
finishes normally without returning any message.

Module naming is OK and don't think it is a problem related to that.

Now the question: when should one use multiprocessing.Process and when
threading.Thread in Python? Is there a distinctive use case that shows when
to use either? Are they interchangeable? Note that using threading the
console DID display the messages correctly!
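For comparison, the same hello-world using threading.Thread (threads share
the parent's memory, so nothing is pickled), which matches the behaviour I
saw:

```python
import threading

results = []  # threads share the parent's memory, so this list is visible to both

def do_something(text):
    results.append(text)
    print(text)

t = threading.Thread(target=do_something, args=("hello world!",))
t.start()
t.join()  # wait for the thread to finish
print("executed")
```

As a rough rule of thumb (my understanding, not gospel): threading suits
I/O-bound concurrency, since the GIL serialises Python bytecode anyway;
multiprocessing suits CPU-bound work that needs real parallelism. They are
not interchangeable: processes share no state and need picklable targets and
arguments.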

Thanks.

On Tue, 3 Sep 2024 at 10:48, Alan Gauld via Tutor  wrote:

> On 02/09/2024 15:00, marc nicole via Python-list wrote:
> > Hello,
> >
> > I am using Python 2.7 on Windows 10
>
> Others have pointed out that 2.7 is unsupported and has
> been for many years now. Its also inferior in most
> respects including its error reporting.
> If possible you should upgrade to 3.X
>
> > from multiprocessing import Process
> > def do_something(text):
> > print(text)
> > if __name__ == "__main__":
> > q = Process(target=do_something,args=("somecmd") )
> > q.start()
> > # following code should execute right after the q.start() call
>
> So what does happen? If you put a print statement here does it execute
> before or after the error message? It might make things easier to
> debug(clearer error traceback) if you put the code to create the thread
> into a separate function?
>
> def do_Something(text)...
>
> def start(fn):
>q = Process
>q.start()
>
> if __name_
>start(do_something)
>print('Something here')
>
>
> > But getting the error at the call of Process().start():
> > pickle.PicklingError: Can't pickle : it's not found as
> > __builtin__.module
>
> But please show us the full error trace even if its not much.
>
> Also check your module naming, is there a possibility
> you've named your file do_something.py or similar?
> (I'm guessing the function is what is being pickled?)
>
> > anybody could provide an alternative to call the function do_something()
> in
> > a separate thread ?
>
> Why not just use the Threading module?
> If it's as simple as just running something in a
> thread multiprocessing is probably not needed.
>
> --
> Alan G
> Author of the Learn to Program web site
> http://www.alan-g.me.uk/
> http://www.amazon.com/author/alan_gauld
> Follow my photo-blog on Flickr at:
> http://www.flickr.com/photos/alangauldphotos
>
>
>
> ___
> Tutor maillist  -  tu...@python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor
>


How to check whether lip movement is significant using face landmarks in dlib?

2024-10-05 Thread marc nicole via Python-list
I am trying to assess whether the lips of a person are moving too much
while the mouth is closed (to conclude they are chewing).

I try to assess the lip movement through landmarks (dlib) :

Inspired by the mouth example (
https://github.com/mauckc/mouth-open/blob/master/detect_open_mouth.py#L17),
and using it before the following function (as a primary condition for
telling the person is chewing), I wrote the following function:

def lips_aspect_ratio(shape):
# grab the indexes of the facial landmarks for the lip
(mStart, mEnd) = (61, 68)
lip = shape[mStart:mEnd]
print(len(lip))
# compute the euclidean distances between the two sets of
# vertical lip landmarks (x, y)-coordinates
    # to reach landmark 68 I would need lip[7], not lip[6]
    # (but accessing lip[7] raises an IndexError)
    A = dist.euclidean(lip[1], lip[6])  # 62, 68
B = dist.euclidean(lip[3], lip[5])  # 64, 66

# compute the euclidean distance between the horizontal
# lip landmark (x, y)-coordinates
C = dist.euclidean(lip[0], lip[4])  # 61, 65

# compute the lip aspect ratio
mar = (A + B) / (2.0 * C)

# return the lip aspect ratio
return mar


How to define an aspect ratio for the lips to conclude they are moving
significantly? Is the mentioned function able to tell whether the lips
are significantly moving while the mouth is closed?
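One possible direction (an assumption on my part; both thresholds are made
up and would need tuning on real footage): collect the mouth aspect ratio
per video frame in a sliding window, and call the movement "significant"
when its spread exceeds a threshold while the mean stays below an
open-mouth level:

```python
from collections import deque

class LipMovementDetector(object):
    """Flag 'chewing' when the mouth aspect ratio (MAR) varies a lot
    while staying below an open-mouth level. Thresholds are guesses."""

    def __init__(self, window=30, open_thresh=0.6, var_thresh=0.03):
        self.window = deque(maxlen=window)
        self.open_thresh = open_thresh  # mean MAR above this = mouth open
        self.var_thresh = var_thresh    # MAR spread above this = lips moving

    def update(self, mar):
        self.window.append(mar)

    def is_chewing(self):
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        mean = sum(self.window) / len(self.window)
        spread = max(self.window) - min(self.window)
        return mean < self.open_thresh and spread > self.var_thresh

# Feed it one lips_aspect_ratio(shape) value per video frame:
det = LipMovementDetector(window=10)
for i in range(10):
    det.update(0.30 + (0.05 if i % 2 else 0.0))  # oscillating MAR, mouth closed
print(det.is_chewing())
```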


How to check whether audio bytes contain empty noise or actual voice/signal?

2024-10-25 Thread marc nicole via Python-list
Hello Python fellows,

I hope this question is not very far from the main topic of this list, but
I have a hard time finding a way to check whether audio data samples are
containing empty noise or actual significant voice/noise.

I am using PyAudio to collect the sound through my PC mic as follows:

import pyaudio

FRAMES_PER_BUFFER = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 48000
RECORD_SECONDS = 2

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=FRAMES_PER_BUFFER,
                    input_device_index=2)
data = stream.read(FRAMES_PER_BUFFER)


I want to know whether data contains voice signals or just empty sound. Note
that the variable always contains bytes (empty or not) if I print it.

Is there a straightforward "easy way" to check whether data is filled with
empty noise or whether somebody has made noise/spoken?
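One simple heuristic I can think of (the threshold value is a guess and
would need calibrating against a recording of your own "silence"): compute
the RMS energy of the 16-bit samples and compare it to a noise floor:

```python
import math
import struct

def rms(data):
    """Root-mean-square of 16-bit little-endian mono PCM bytes."""
    count = len(data) // 2
    if count == 0:
        return 0.0
    samples = struct.unpack("<%dh" % count, data[:count * 2])
    return math.sqrt(sum(s * s for s in samples) / float(count))

def has_signal(data, threshold=500):
    # 500 is a made-up noise floor for int16 samples; calibrate it
    # against a silent recording from the same microphone
    return rms(data) > threshold

# data = stream.read(FRAMES_PER_BUFFER) would be checked the same way
silence = struct.pack("<4h", 10, -10, 5, -5)
loud = struct.pack("<4h", 20000, -20000, 15000, -15000)
print(has_signal(silence), has_signal(loud))
```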

Thanks.


Re: [Tutor] How to stop a specific thread in Python 2.7?

2024-09-25 Thread marc nicole via Python-list
Could you show a python code example of this?


On Thu, 26 Sept 2024, 03:08 Cameron Simpson,  wrote:

> On 25Sep2024 22:56, marc nicole  wrote:
> >How to create a per-thread event in Python 2.7?
>
> Every time you make a Thread, make an Event. Pass it to the thread
> worker function and keep it to hand for your use outside the thread.
> ___
> Tutor maillist  -  tu...@python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor
>
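A minimal sketch of the suggestion above (one Event per Thread, each worker
watching only its own event; the names are made up):

```python
import threading
import time

def worker(stop_event):
    # each thread watches only ITS OWN event, not a shared one
    while not stop_event.is_set():
        time.sleep(0.01)

stop1 = threading.Event()  # one Event per thread
stop2 = threading.Event()
t1 = threading.Thread(target=worker, args=(stop1,))
t2 = threading.Thread(target=worker, args=(stop2,))
t1.start()
t2.start()

stop1.set()                  # ask ONLY thread 1 to finish
t1.join(timeout=2)
t1_stopped = not t1.is_alive()
t2_running = t2.is_alive()   # thread 2 keeps going, unaffected

stop2.set()                  # now stop thread 2 as well
t2.join(timeout=2)
print(t1_stopped, t2_running)
```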


Re: How to stop a specific thread in Python 2.7?

2024-09-25 Thread marc nicole via Python-list
How to create a per-thread event in Python 2.7?

On Wed, 25 Sept 2024, 22:47 Cameron Simpson via Python-list, <
python-list@python.org> wrote:

> On 25Sep2024 19:24, marc nicole  wrote:
> >I want to know how to kill a specific running thread (say by its id)
> >
> >for now I run and kill a thread like the following:
> ># start thread
> >thread1 = threading.Thread(target= self.some_func(), args=( ...,), )
> >thread1.start()
> ># kill the thread
> >event_thread1 = threading.Event()
> >event_thread1.set()
> >
> >I know that set() will kill all running threads, but if there was thread2
> >as well and I want to kill only thread1?
>
> No, `set()` doesn't kill a thread at all. It sets the `Event`, and each
> thread must be checking that event regularly, and quitting if it becomes
> set.
>
> You just need a per-thread Event instead of a single Event for all the
> threads.
>
> Cheers,
> Cameron Simpson 
> --
> https://mail.python.org/mailman/listinfo/python-list
>


How to stop a specific thread in Python 2.7?

2024-09-25 Thread marc nicole via Python-list
Hello guys,

I want to know how to kill a specific running thread (say by its id)

for now I run and kill a thread like the following:
# start thread
thread1 = threading.Thread(target= self.some_func(), args=( ...,), )
thread1.start()
# kill the thread
event_thread1 = threading.Event()
event_thread1.set()

I know that set() will kill all running threads, but what if there was a
thread2 as well and I wanted to kill only thread1?

Thanks!


How to break while loop based on events raised in a thread (Python 2.7)

2024-11-27 Thread marc nicole via Python-list
I am using the below class1 code to detect some event through the method

check_for_the_event()
# Some class1
def check_for_the_event(self):
    thread1 = threading.Timer(2, self.check_for_the_event)
    thread1.start()
    # some processing
    if event is detected:  # pseudo-condition
        self.event_ok = True
    else:
        self.event_ok = False

then I pass an instance of that class to the below class to know when the
event is on or off and act upon its value accordingly using the following
below code:

  # execution of other part of the program (where
self.another_class_instance.event_ok = False and
self.some_other_condition is true)
  current_time = current_time_some_cond = current_max_time
= time.time()
  while not self.another_class_instance.event_ok and
self.some_other_condition:
        while self.some_other_condition and not self.another_class_instance.event_ok:
            # self.some_other_condition takes up to 10 secs to be checked (minimum 5 secs)
if time.time() > current_time + 5:
current_time = time.time()
# some processing
else:
while not self.another_class_instance.event_ok:
#processing
if time.time() > current_max_time + 60 * 2 and
not self.another_class_instance.event_ok:
#some other processing
if time.time() > current_time_some_cond + 10
and not self.cond1 and not self.another_class_instance.event_ok:
# some processing that takes 2-3 seconds
self.cond1 = True
current_time_some_cond = time.time()
elif self.cond1 and time.time() >
current_time_some_cond + 10 and not
self.another_class_instance.event_ok:
current_time_some_cond = time.time()
#some other processing that takes 2-3 seconds
else:
pass
else:
pass

The problem is that the real-time execution of the program (in class2)
requires an instant check that cannot wait until the current loop
instructions finish; what I want is to break out of the outer while loop as
soon as the event is on, without checking any other condition.

For now I perform multiple checks at each if or while statement, but is
there an IO/async-based method that breaks out of the loop when the event is
raised in the thread?
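One possibility (my assumption, standard library only): replace the plain
event_ok attribute with a threading.Event, and let the loop block in
event.wait(timeout) so it wakes as soon as the event fires, instead of
re-testing the flag in every branch. A toy sketch with made-up timings:

```python
import threading

event = threading.Event()  # plays the role of another_class_instance.event_ok

def watcher():
    # stand-in for the real detection logic: fire the event after 50 ms
    threading.Timer(0.05, event.set).start()

def main_loop():
    iterations = 0
    watcher()
    while not event.is_set():      # one cheap check per pass
        iterations += 1
        # wait() sleeps up to the timeout but wakes EARLY when the
        # event fires, so the loop reacts promptly instead of polling
        if event.wait(timeout=0.01):
            break
    return iterations

n = main_loop()
print(n)
```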


How to catch a fatal error in Python 2.7?

2024-12-09 Thread marc nicole via Python-list
Hello,

The fatal error exits the program with a code -1 while referencing the
memory address involved and nothing else.

How to catch it in Python 2.7?

PS: please note I am not talking about exceptions, but about an error caused
by my Bluetooth microphone disconnecting abruptly and halting the whole
program; I need to be able to do something when that occurs.
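For what it's worth, a hard crash inside a C extension usually kills the
interpreter before any except clause can run, so it cannot be caught
in-process. A common workaround (a sketch, not specific to any audio
library) is to run the fragile part in a child interpreter and react to its
exit status from a supervisor:

```python
import subprocess
import sys

def run_supervised(code):
    """Run `code` in a child interpreter and return its exit status."""
    proc = subprocess.Popen([sys.executable, "-c", code])
    proc.wait()
    return proc.returncode

# Simulate an abrupt, uncatchable death (os._exit skips all handlers)
status = run_supervised("import os; os._exit(255)")
if status != 0:
    # the supervisor survives and can restart or clean up here
    print("child died with status %d" % status)
```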

Thanks for the help!


How to properly use py-webrtcvad?

2025-01-22 Thread marc nicole via Python-list
Hi,

I am getting audio from my mic using PyAudio as follows:

self.stream = audio.open(format=self.FORMAT,
>  channels=self.CHANNELS,
>  rate=self.RATE,
>  input=True,
>  frames_per_buffer=self.FRAMES_PER_BUFFER,
>  input_device_index=1)


then reading data as follows:

for i in range(0, int(self.RATE / self.FRAMES_PER_BUFFER *
> self.RECORD_SECONDS)):
> data = self.stream.read(4800)


on the other hand I am using py-webrtcvad as follows:

self.vad = webrtcvad.Vad()


and want to use *is_speech*() using audio data from PyAudio.
But getting the error:

 return _webrtcvad.process(self._vad, sample_rate, buf, length)
> Error: Error while processing frame


no matter how I changed the input data format (wav: using
speech_recognition's *get_wav_data*(), using numpy...)

Any suggestions (using Python 2.x)?
Thanks.


Re: How to go about describing my software with a component diagram?

2024-12-24 Thread marc nicole via Python-list
the diagram is also attached here

On Tue, 24 Dec 2024 at 18:27, marc nicole  wrote:

> Hello community,
>
> I have created a Python code where a main algorithm uses three different
> modules (.py) after importing them.
>
> To illustrate and describe it I have created the following component
> diagram?
>
>
> [image: checkso.PNG]
>
> Could it be improved for better description and readability?
>
>
> Thanks!
>


Re: How to go about describing my software with a component diagram?

2024-12-24 Thread marc nicole via Python-list
it is here https://i.sstatic.net/ykk5Wd0w.png

On Tue, 24 Dec 2024 at 20:03, Michael Torrie via Python-list <
python-list@python.org> wrote:

> On 12/24/24 10:27 AM, marc nicole via Python-list wrote:
> > the diagram is also attached here
>
> This text-only mailing list does not allow attachments, just FYI.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>


How to go about describing my software with a component diagram?

2024-12-24 Thread marc nicole via Python-list
Hello community,

I have created a Python code where a main algorithm uses three different
modules (.py) after importing them.

To illustrate and describe it I have created the following component
diagram:


[image: checkso.PNG]

Could it be improved for better description and readability?


Thanks!


Re: How to go about describing my software with a component diagram?

2024-12-25 Thread marc nicole via Python-list
the purpose of the diagram is to convey a minimalistic idea about the
structure of the code/implementation/software

On Wed, 25 Dec 2024 at 01:49, Thomas Passin via Python-list <
python-list@python.org> wrote:

> On 12/24/2024 3:42 PM, marc nicole via Python-list wrote:
> > it is here https://i.sstatic.net/ykk5Wd0w.png
>
> This diagram does not make much sense to me:
>
> 1. What is the purpose of the diagram and who is it intended for?
> 2. A module and an algorithm are different kinds of things, yet they are
> connected together as if they are the same.
> 3. Connecting lines should always be labeled, preferably with direction
> indicators that augment the labels.  Otherwise the viewer has to imagine
> what the nature of the connection is.
> 4. It's better if different kinds of things look different.  That could
> be a different box shape, a different color, or some other visual
> difference. Here I am thinking about the box labeled "Algorithm". We
> can't tell if it is intended to mean "A library module that implements a
> certain algorithm", "An algorithm that the three components cooperate to
> implement", "The top-level module for computing an algorithm that
> contains three modules", or something else.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>


Re: How to go about describing my software with a component diagram?

2024-12-24 Thread marc nicole via Python-list
I want to convey the idea that main.py (main algorithm) imports 3 modules
(V, S, M) (each of them containing .py scripts related to
different functionalities) and use their methods accordingly as per the
requirement: basically the structure of my code and how the modules relate
to each other.

Le mar. 24 déc. 2024 à 21:56, dn via Python-list  a
écrit :

> On 25/12/24 06:27, marc nicole via Python-list wrote:
> > Hello community,
> >
> > I have created a Python code where a main algorithm uses three different
> > modules (.py) after importing them.
> >
> > To illustrate and describe it I have created the following component
> > diagram?
> >
> >
> > [image: checkso.PNG]
> >
> > Could it be improved for better description and readability?
>
>
> Possibly - so little detail as to topic and any hints in the diagram
> redacted! What messages do you want to communicate with this diagram?
>
> Given that the three modules are subordinate contributors to the
> script/algorithm, place the three modules inside a larger "Algorithm"
> shape.
>
> --
> Regards,
> =dn
> --
> https://mail.python.org/mailman/listinfo/python-list
>


Re: How to go about describing my software with a component diagram?

2024-12-24 Thread marc nicole via Python-list
The full Python package (PyPI) is represented as the outermost frame here,
including the 4 sub-frames.

On Tue, 24 Dec 2024 at 22:05, marc nicole  wrote:

> I want to convey the idea that main.py (main algorithm) imports 3 modules
> (V, S, M) (each of them containing .py scripts related to
> different functionalities) and use their methods accordingly as per the
> requirement: basically the structure of my code and how the modules relate
> to each other.
>
> On Tue, 24 Dec 2024 at 21:56, dn via Python-list 
> wrote:
>
>> On 25/12/24 06:27, marc nicole via Python-list wrote:
>> > Hello community,
>> >
>> > I have created a Python code where a main algorithm uses three different
>> > modules (.py) after importing them.
>> >
>> > To illustrate and describe it I have created the following component
>> > diagram?
>> >
>> >
>> > [image: checkso.PNG]
>> >
>> > Could it be improved for better description and readability?
>>
>>
>> Possibly - so little detail as to topic and any hints in the diagram
>> redacted! What messages do you want to communicate with this diagram?
>>
>> Given that the three modules are subordinate contributors to the
>> script/algorithm, place the three modules inside a larger "Algorithm"
>> shape.
>>
>> --
>> Regards,
>> =dn
>> --
>> https://mail.python.org/mailman/listinfo/python-list
>>
>


How to weight terms based on semantic importance

2025-01-15 Thread marc nicole via Python-list
Hello,

I want to weight the terms of a large text based on their semantics (not on
their frequency, as TF-IDF does).
Is there a way to do that using NLTK or other means, e.g. through a vectorizer?

For example: a certain term weighs more than others, etc.
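One common approach (my suggestion, not NLTK-specific): weight each term by
the cosine similarity between its word vector and a centroid of seed terms
that describe the domain. The 3-d vectors below are made up for
illustration; real ones would come from word2vec, GloVe or spaCy:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy embeddings, invented for the example; real vectors would come
# from a trained model
vectors = {
    "finance": (0.90, 0.10, 0.00),
    "bank":    (0.80, 0.20, 0.10),
    "loan":    (0.85, 0.15, 0.05),
    "banana":  (0.00, 0.90, 0.40),
}

def semantic_weights(terms, seed_terms):
    """Weight each term by similarity to the centroid of the seed terms."""
    dims = len(next(iter(vectors.values())))
    centroid = [sum(vectors[t][d] for t in seed_terms) / len(seed_terms)
                for d in range(dims)]
    return {t: cosine(vectors[t], centroid) for t in terms}

weights = semantic_weights(["bank", "loan", "banana"], seed_terms=["finance"])
print(weights)
```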

Thanks