Re: increasing the page size of a dbm store?

2019-11-27 Thread Peter Otten
Tim Chase wrote:

> Working with the dbm module (using it as a cache), I've gotten the
> following error at least twice now:
> 
>   HASH: Out of overflow pages.  Increase page size
>   Traceback (most recent call last):
>   [snip]
>   File ".py", line 83, in get_data
> db[key] = data
>   _dbm.error: cannot add item to database
> 
> I've read over the py3 docs on dbm
> 
> https://docs.python.org/3/library/dbm.html
> 
> but don't see anything about either "page" or "size" contained
> therein.
> 
> There's nothing particularly complex as far as I can tell.  Nothing
> more than a straightforward
> 
>   import dbm
>   with dbm.open("cache", "c") as db:
> for thing in source:
>   key = extract_key_as_bytes(thing)
>   if key in db:
> data = db[key]
>   else:
> data = long_process(thing)
> db[key] = data
> 
> The keys can get a bit large (think roughly book-title length), but
> not huge. I have 11k records so it seems like it shouldn't be
> overwhelming, but this is the second batch where I've had to nuke the
> cache and start afresh.  Fortunately I've tooled the code so it can
> work incrementally and no more than a hundred or so requests have to
> be re-performed.
> 
> How does one increase the page-size in a dbm mapping?  Or are there
> limits that I should be aware of?
> 
> Thanks,
> 
> -tkc
> 
> PS: FWIW, this is Python 3.6 on FreeBSD in case that exposes any
> germane implementation details.

I found the message here

https://github.com/lattera/freebsd/blob/master/lib/libc/db/hash/hash_page.c#L695

but it's not immediately obvious how to increase the page size, and the
readme

https://github.com/lattera/freebsd/tree/master/lib/libc/db/hash

only states

"""
"bugs" or idiosyncracies

If you have a lot of overflows, it is possible to run out of overflow
pages.  Currently, this will cause a message to be printed on stderr.
Eventually, this will be indicated by a return error code.
"""

which is what you learned the hard way.

Python has its own "dumb and slow but simple dbm clone", dbm.dumb -- maybe
it's smart and fast enough for your purpose?
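
A minimal sketch of the same caching loop pointed at dbm.dumb instead of
the platform dbm (the source list and long_process() below are stand-ins
for the ones in your code):

import dbm.dumb  # pure-Python backend; no BSD-hash overflow pages to run out of

def long_process(thing):
    # stand-in for the expensive computation in the original loop
    return thing.upper().encode()

source = ["book title number %d" % i for i in range(11000)]

with dbm.dumb.open("cache", "c") as db:   # creates cache.dir / cache.dat / cache.bak
    for thing in source:
        key = thing.encode()              # dbm keys and values are bytes
        if key in db:
            data = db[key]
        else:
            data = long_process(thing)
            db[key] = data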

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unable to retrieve data from Juypter notebook

2019-11-27 Thread Pankaj Jangid
> Any help is appreciated..
Could you please elaborate a little bit? I didn't understand where you
want to retrieve data from. Is it hosted somewhere? Or do you want to see
raw files from your own Jupyter notebook running on your own machine?

Regards,
-- 
Pankaj Jangid
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: increasing the page size of a dbm store?

2019-11-27 Thread Barry
Maybe port to SQLite? I would not choose dbm these days.
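
For reference, a rough sketch of what a SQLite-backed cache could look
like using only the standard-library sqlite3 module (the file name, table
layout and sample key are illustrative, not taken from your code):

import sqlite3

con = sqlite3.connect("cache.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS cache (key BLOB PRIMARY KEY, value BLOB)")

def cache_get(key):
    row = con.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

def cache_put(key, value):
    with con:  # commits the statement on success
        con.execute("INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)",
                    (key, value))

# usage mirroring the dbm loop quoted below
key = b"some fairly long book-title-like key"
data = cache_get(key)
if data is None:
    data = b"expensive result"  # stand-in for long_process(thing)
    cache_put(key, data)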

Barry

> On 27 Nov 2019, at 01:48, Tim Chase  wrote:
> 
> Working with the dbm module (using it as a cache), I've gotten the
> following error at least twice now:
> 
>  HASH: Out of overflow pages.  Increase page size
>  Traceback (most recent call last):
>  [snip]
>  File ".py", line 83, in get_data
>db[key] = data
>  _dbm.error: cannot add item to database
> 
> I've read over the py3 docs on dbm
> 
> https://docs.python.org/3/library/dbm.html
> 
> but don't see anything about either "page" or "size" contained
> therein.
> 
> There's nothing particularly complex as far as I can tell.  Nothing
> more than a straightforward
> 
>  import dbm
>  with dbm.open("cache", "c") as db:
>for thing in source:
>  key = extract_key_as_bytes(thing)
>  if key in db:
>data = db[key]
>  else:
>data = long_process(thing)
>db[key] = data
> 
> The keys can get a bit large (think roughly book-title length), but
> not huge. I have 11k records so it seems like it shouldn't be
> overwhelming, but this is the second batch where I've had to nuke the
> cache and start afresh.  Fortunately I've tooled the code so it can
> work incrementally and no more than a hundred or so requests have to
> be re-performed.
> 
> How does one increase the page-size in a dbm mapping?  Or are there
> limits that I should be aware of?
> 
> Thanks,
> 
> -tkc
> 
> PS: FWIW, this is Python 3.6 on FreeBSD in case that exposes any
> germane implementation details.
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Resources related with web security

2019-11-27 Thread Greg Ewing

On 27/11/19 10:54 am, Mr. Gentooer wrote:

> why would I be a troll? I have never used usenet. I am honestly and
> genuinely curious.

The reason people are asking is that wanting a manual on how to
search the Web is a bit like wanting a manual on how to walk.
Most people pick it up by watching others and doing it themselves,
so nobody writes anything down about it.

--
Greg

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python Resources related with web security

2019-11-27 Thread Peter J. Holzer
On 2019-11-28 10:56:58 +1300, Greg Ewing wrote:
> On 27/11/19 10:54 am, Mr. Gentooer wrote:
> > why would I be a troll? I have never used usenet. I am honestly and
> > genuinely curious.
> 
> The reason people are asking is that wanting a manual on how to
> search the Web is a bit like wanting a manual on how to walk.

He asked about Usenet, not the web. 

Specifically, about the usenet groups comp.lang.python and
gmane.comp.python.general.

(I suspect he was called a troll because he was mixed up with
"Pycode" ("Mr. Gentooer" and "Pycode" may or may not be the same person;
the Eliza-like posting style is certainly similar, but in the absence of
evidence I will assume that they are not).)

Usenet used to be very popular in the late 1990s and early 2000s. Even
then you did get instructions on how to install and configure a Usenet
client and get access to a Usenet server; it wasn't assumed that you
would pick that up by osmosis. These days such information is probably
harder to come by (especially since Usenet seems to have splintered into
a text-only discussion Usenet and a file-sharing Usenet, so simply
asking your favourite search engine might get you lots of information
about the wrong Usenet).

For general Usenet access I would recommend using a client like
Thunderbird or Pan (unless you like text-only interfaces like slrn
(which I use)) and getting an account with a reputable free provider
like albasani.net. Don't use google groups, don't use aeiou.

That said, I wouldn't recommend comp.lang.python in particular. Not only
is it infested with spammers (those can be dealt with using a few killfile
rules), but the gateway to the mailing list is broken: it rewrites
message-ids, which makes it very hard to follow threads. And of course
most of the messages come from the mailing list, so you get all the
diversity that implies. In short, you get the disadvantages of a
newsgroup, but not the advantages. (At least that was the situation when
I gave up on comp.lang.python and subscribed to the mailing list instead.)

Gmane isn't part of Usenet proper. It's more like a mailing-list-to-NNTP
gateway. I'm not sure how functional it currently is. It has been down
and up again over the last few years. The website is currently down,
which doesn't bode well.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unable to retrieve data from Juypter notebook

2019-11-27 Thread harirammanohar159
I launch the Jupyter notebook directly from the Windows machine where it's
installed; after launching, python.exe runs in the background and the
browser is automatically redirected to http://localhost:/tree

==log==
[I 11:36:26.234 NotebookApp] JupyterLab extension loaded from 
C:\asdf\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyterlab
[I 11:36:26.238 NotebookApp] JupyterLab application directory is 
C:\asdf\AppData\Local\Continuum\anaconda3\share\jupyter\lab
[I 11:36:26.242 NotebookApp] Serving notebooks from local directory: C:\asdf
[I 11:36:26.242 NotebookApp] The Jupyter Notebook is running at:
[I 11:36:26.242 NotebookApp] 
http://localhost:/?token=xx
[I 11:36:26.243 NotebookApp] Use Control-C to stop this server and shut down 
all kernels (twice to skip confirmation).
[C 11:36:27.633 NotebookApp]

To access the notebook, open this file in a browser:
file:///C:/asdf/AppData/Roaming/jupyter/runtime/nbserver-9196-open.html
Or copy and paste one of these URLs:
http://localhost:/?token=
===

code:
from keras.datasets import cifar10


(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)
print(y_train.shape)

=
From the Jupyter notebook, when I click Run it throws the mentioned error,
but I am able to download the file from the browser.
Please let me know if any more info is needed to help out.
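
For completeness, the same call wrapped so the full traceback can be
captured and pasted here (a sketch; it assumes the failure happens inside
load_data()):

import traceback
from keras.datasets import cifar10

try:
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
except Exception:
    traceback.print_exc()  # full traceback, ready to paste into the thread
else:
    print(x_train.shape, y_train.shape)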

-- 
https://mail.python.org/mailman/listinfo/python-list