Hey,
before you start optimizing, I would suggest that you measure response times, query
times, data search times and so on. In order to save time, you have to know where you
"lose" time.
Does your service really have to load the whole table at once? Yes, that might
lead to quicker responses.
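For instance, something as rough as this is enough to see where the time goes (a sketch only; load_whole_table and query_one_key are placeholders for whatever the service actually calls):

import time

def timed(label, fn, *args):
    """Run fn(*args) once and report how long it took."""
    start = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {(time.perf_counter() - start) * 1000:.3f} ms")
    return result

# Placeholders -- substitute the service's own functions:
# rows = timed("full table load", load_whole_table)
# row  = timed("single lookup", query_one_key, "some_key")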
On 2023-01-14 23:26:27 -0500, Dino wrote:
> Hello, I have built a PoC service in Python Flask for my work, and - now
> that the point is made - I need to make it a little more performant (to be
> honest, chances are that someone else will pick up from where I left off,
> and implement the same service).
On 1/15/2023 6:14 AM, Peter J. Holzer wrote:
> On 2023-01-14 23:26:27 -0500, Dino wrote:
> > Hello, I have built a PoC service in Python Flask for my work, and - now
> > that the point is made - I need to make it a little more performant (to be
> > honest, chances are that someone else will pick up from where I left off,
> > and implement the same service).
<<< Frank Millman>>> My 'aha' moment came when I understood that a python
object has only three properties - a type, an id, and a value. It does *not*
have a name.
Yes, Frank, it is a bit like how some people need to wrap their minds around a
concept like an anonymous function. It has no name at all.
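A two-minute illustration of the type/id/value point, and of a function object that has no name of its own:

a = [1, 2, 3]
b = a                            # a second name bound to the *same* object
print(type(a), id(a) == id(b))   # the object itself carries a type and an id
del a                            # deleting a name does not touch the object
print(b)                         # [1, 2, 3]

double = lambda x: x * 2         # a function object with no name of its own
print(double.__name__)           # '<lambda>' -- 'double' is just a binding to it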
Thank you for your answer, Lars. Just a clarification: I am already
doing a rough measuring of my queries.
A fresh query without any caching: < 4s.
Cached full query: < 5 micro-s (i.e. 6 orders of magnitude faster)
Desired speed for my POC: 10
Also, I didn't want to ask a question with way too much detail.
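To make the fresh-vs-cached comparison reproducible, a self-contained sketch along these lines works (the half-second sleep merely stands in for the slow query):

import time
from functools import lru_cache

def expensive_query(key):
    time.sleep(0.5)                 # stand-in for the slow, uncached query
    return {"key": key, "value": 42}

@lru_cache(maxsize=None)
def cached_query(key):
    return expensive_query(key)

for attempt in ("fresh", "cached"):
    start = time.perf_counter()
    cached_query("some_key")
    print(attempt, f"{time.perf_counter() - start:.6f} s")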
Jen Kris wrote:
Avi,
Your comments go farther afield than my original question, but you made some
interesting additional points. For example, I sometimes work with the C API
and sys.getrefcount may be helpful in deciding when to INCREF and DECREF. But
that’s another issue.
The situation I
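For reference, sys.getrefcount is easy to poke at from plain Python; it is CPython-specific, and the reported count includes the temporary reference created by the call itself:

import sys

x = object()
print(sys.getrefcount(x))    # typically 2: the name 'x' plus the call's argument

refs = [x, x, x]
print(sys.getrefcount(x))    # three more references now exist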
Thank you, Peter. Yes, setting up my own indexes is more or less the
idea of the modular cache that I was considering. Seeing others think in
the same direction makes it look more viable.
About Scalene, thank you for the pointer. I'll do some research.
Do you have any idea about the speed of a SELECT query against a 100k
rows / 300 Mb Sqlite db?
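The "own indexes" idea can be as simple as a few dicts built once at startup over the in-memory rows; the column names below are made up:

from collections import defaultdict

# Hypothetical in-memory "table": a list of dicts, one per row.
rows = [
    {"id": 1, "country": "IT", "name": "foo"},
    {"id": 2, "country": "US", "name": "bar"},
    {"id": 3, "country": "IT", "name": "baz"},
]

by_id = {}
by_country = defaultdict(list)
for row in rows:                      # build the indexes once, at startup
    by_id[row["id"]] = row
    by_country[row["country"]].append(row)

print(by_id[2]["name"])                        # O(1) instead of scanning all rows
print([r["id"] for r in by_country["IT"]])     # all rows matching one key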
That’s about what I got using a Python dictionary on random data on a high
memory machine.
https://github.com/Gerardwx/database_testing.git
It’s not obvious to me how to get it much faster than that.
From: Python-list on behalf of Dino
Date: Sunday, January 15, 2023 at 1:29 PM
To: python-list@python.org
On 2023-01-15 10:38:22 -0500, Thomas Passin wrote:
> On 1/15/2023 6:14 AM, Peter J. Holzer wrote:
> > On 2023-01-14 23:26:27 -0500, Dino wrote:
> > > Anyway, my Flask service initializes by loading a big "table" of 100k rows
> > > and 40 columns or so (memory footprint: order of 300 Mb)
> >
> > 30
I think any performance improvements would have to come from a language change
or better indexing of the data.
From: Python-list on behalf of Weatherby,Gerard
Date: Sunday, January 15, 2023 at 2:25 PM
To: Dino , python-list@python.org
Subject: Re: Fast lookup of bulky "table"
That’s about what I got using a Python dictionary on random data on a high
memory machine.
On 1/15/2023 2:39 PM, Peter J. Holzer wrote:
> On 2023-01-15 10:38:22 -0500, Thomas Passin wrote:
> > On 1/15/2023 6:14 AM, Peter J. Holzer wrote:
> > > On 2023-01-14 23:26:27 -0500, Dino wrote:
> > > > Anyway, my Flask service initializes by loading a big "table" of 100k rows
> > > > and 40 columns or so (memory footprint: order of 300 Mb)
On 16/01/2023 08.36, Weatherby,Gerard wrote:
> I think any performance improvements would have to come from a language change
> or better indexing of the data.
Exactly!
Expanding on @Peter's post: databases (relational or not) are best
organised according to use. Some must accept rapid insert/updates.
On Sun, 15 Jan 2023 08:27:29 -0500, Dino wrote:
> Do you have any idea about the speed of a SELECT query against a 100k
> rows / 300 Mb Sqlite db?
https://www.sqlite.org/speed.html
The site is old but has a number of comparisons. I have not used SQLite
with Python yet, but I have used it with both C and C#.
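A throwaway benchmark along these lines (schema and data invented) gives a feel for SQLite SELECT speed on roughly 100k rows:

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, country TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO t (country, payload) VALUES (?, ?)",
    (("IT" if i % 2 else "US", f"row-{i}") for i in range(100_000)),
)
conn.execute("CREATE INDEX idx_country ON t (country)")
conn.commit()

start = time.perf_counter()
hits = conn.execute("SELECT * FROM t WHERE country = ?", ("IT",)).fetchall()
print(len(hits), f"rows in {(time.perf_counter() - start) * 1000:.1f} ms")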
With Postgresql, one can also do pre-processing in Python.
https://www.postgresql.org/docs/15/plpython.html
While it’s not as convenient to develop as client-side Python, it can be used
to implement complicated constraints or implement filtering on the server side,
which reduces the amount of data that has to be shipped to the client.
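As a sketch only (the table, column and connection string are invented, psycopg2 is assumed as the driver, and the plpython3u extension must be installed in the database), a server-side filter might look like this:

import psycopg2  # assumed driver; any DB-API PostgreSQL driver would do

DDL = """
CREATE OR REPLACE FUNCTION filter_rows(min_score integer)
RETURNS SETOF my_table AS $$
    # Runs inside PostgreSQL; 'plpy' is provided by PL/Python.
    plan = plpy.prepare("SELECT * FROM my_table WHERE score >= $1", ["integer"])
    return list(plpy.execute(plan, [min_score]))
$$ LANGUAGE plpython3u;
"""

with psycopg2.connect("dbname=mydb") as conn, conn.cursor() as cur:
    cur.execute(DDL)                                  # install the function once
    cur.execute("SELECT * FROM filter_rows(%s)", (10,))
    print(cur.fetchmany(5))                           # only filtered rows cross the wire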
On 1/15/2023 4:49 PM, Stefan Ram wrote:
> dn writes:
> > Some programmers don't realise that SQL can also be used for
> > calculations, eg the eponymous COUNT(), which saves (CPU-time and
> > coding-effort) over post-processing in Python.
> Yes, I second that! Sometimes, people only re-invent things
> in Python that the database could already do for them.
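A tiny sqlite3 example of the same point, letting the database do the counting instead of hauling every row into Python:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (country TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("IT",), ("US",), ("IT",), ("DE",)])

# Count in SQL: only a single number comes back.
(n,) = conn.execute("SELECT COUNT(*) FROM t WHERE country = 'IT'").fetchone()
print(n)                                              # 2

# Post-processing in Python: every row is fetched just to be counted.
rows = conn.execute("SELECT country FROM t").fetchall()
print(sum(1 for (c,) in rows if c == "IT"))           # same answer, more data moved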
On 16/01/23 2:27 am, Dino wrote:
> Do you have any idea about the speed of a SELECT query against a 100k
> rows / 300 Mb Sqlite db?
That depends entirely on the nature of the query and how the
data is indexed. If it's indexed in a way that allows sqlite to
home in directly on the wanted data, it will be very fast.
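EXPLAIN QUERY PLAN makes that difference visible (schema invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, country TEXT)")

# No index on country yet: sqlite reports a full-table scan.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE country = 'IT'").fetchall())

conn.execute("CREATE INDEX idx_country ON t (country)")

# With the index, the plan becomes a search using idx_country.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE country = 'IT'").fetchall())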
On 1/3/23 22:57, aapost wrote:
> I am trying to wrap my head around how one goes about working with and
> editing xml elements ... Back to contemplating and tinkering...
For anyone in a similar situation, xmlschema is actually quite nice.
It didn't have the features I was looking for out of the box.
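For anyone curious, basic xmlschema usage looks roughly like this; the file names are placeholders:

import xmlschema   # third-party: pip install xmlschema

schema = xmlschema.XMLSchema("my_schema.xsd")     # placeholder file names

print(schema.is_valid("my_document.xml"))         # True/False, no exception
schema.validate("my_document.xml")                # raises on the first violation

data = schema.to_dict("my_document.xml")          # decode to plain dicts/lists
print(type(data))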
On 1/11/23 13:21, Dieter Maurer wrote:
aapost wrote at 2023-1-10 22:15 -0500:
On 1/4/23 12:13, aapost wrote:
On 1/4/23 09:42, Dieter Maurer wrote:
...
You might have a look at `PyXB`, too.
It tries hard to enforce schema restrictions in Python code.
...
Unfortunately picking it apart for a w
On 1/15/2023 2:23 PM, Weatherby,Gerard wrote:
> That’s about what I got using a Python dictionary on random data on a high
> memory machine.
> https://github.com/Gerardwx/database_testing.git
> It’s not obvious to me how to get it much faster than that.
Gerard, you are a rockstar. This is going to be really useful.
On Mon, 16 Jan 2023 at 16:15, Dino wrote:
> BTW, can you tell me what is going on here? what's := ?
>
> while (increase := add_some(conn,adding)) == 0:
See here:
https://docs.python.org/3/reference/expressions.html#assignment-expressions
https://realpython.com/python-walrus-operator/
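In short, := assigns and yields the value within an expression. A self-contained toy version of the quoted loop (add_some here is just a stand-in that eventually returns non-zero):

import random

def add_some():
    """Stand-in for the add_some(conn, adding) call in the quoted loop."""
    return random.choice([0, 0, 3])

# Bind the call's result to 'increase' and test it in the same expression.
while (increase := add_some()) == 0:
    print("added nothing, trying again")

print("added", increase)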