Re: Newbie question about Python syntax

2019-08-26 Thread Paul St George

On 25/08/2019 02:39, Cameron Simpson wrote:

On 24Aug2019 21:52, Paul St George  wrote:

[snip]
Aside from "map" being a poor name (it is also a builtin Python 
function), it seems that one creates one of these to control how some 
rendering process is done.
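
A quick illustration of why "map" is a poor choice of name in Python 
(the snippet below is purely illustrative, not from the Blender API):

```python
# The builtin map() applies a function across an iterable.
squares = list(map(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]

# Binding the name "map" to something else shadows the builtin
# in this scope: map is now a dict, not a function.
map = {"use": "render settings"}
try:
    map(str, [1, 2])          # fails: a dict is not callable
except TypeError:
    shadowed = True

del map                       # removing the binding restores the builtin
restored = list(map(str, [1, 2]))  # ['1', '2']
```

Nothing breaks at assignment time, which is what makes the shadowing 
easy to miss; the error only surfaces later when the builtin is needed.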


The class reference page you originally cited then specifies the meaning 
of the various attributes you might set on one of these objects.


Cheers,
Cameron Simpson 


Thanks Cameron. As this list has a low noise-to-signal ratio, I cannot 
thank you enough here.


I could have stayed where I belong, on Blender Artists or similar, but 
those lists tend to just offer solutions, and as Douglas Adams almost 
said, knowledge without understanding is almost meaningless. Here I have 
gained enough understanding (though perhaps not yet enough to make 
sufficient sense in what I say) to transfer knowledge from solving one 
problem to possibly solving many.


Thank you for your patience and tolerance,

Dr Paul St George
--
http://www.paulstgeorge.com
http://www.devices-of-wonder.com


--
https://mail.python.org/mailman/listinfo/python-list


Re: Using the same data for both validation and prediction in Keras

2019-08-26 Thread Pankaj Jangid
Amirreza Heidari  writes:

> I was reading a tutorial on time series prediction with neural
> networks. I found that this code has used the same test data in the
> following call for validation, and later also for prediction.
>
> history = model.fit(train_X, train_y, epochs=50, batch_size=72, 
> validation_data=(test_X, test_y), verbose=2, shuffle=False)
>
> Does it mean that the validation and test data are the same, or is there a 
> default percentage to split the data into validation and test sets?

As per Prof. Andrew Ng, training, cross-validation, and testing should
use three different data sets. If you have a small example set (say,
10,000 or maybe 50,000 samples) then you can split the example set into
a 60:20:20 ratio for train:validation:test. But if you have a very
large data set (1 million, 10 million) then consider using 1% or even
less for validation and testing.
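
A minimal sketch of such a three-way split, written for ordered data
since the original model.fit call uses shuffle=False (for time series,
a chronological split avoids leaking future information into training);
the function name and default fractions here are illustrative:

```python
def chronological_split(X, y, val_frac=0.2, test_frac=0.2):
    """Split ordered data into (train, validation, test) by position.

    Earlier samples train the model; later samples validate and
    test it, so the model never sees the future during training.
    """
    n = len(X)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    n_train = n - n_val - n_test
    return ((X[:n_train], y[:n_train]),
            (X[n_train:n_train + n_val], y[n_train:n_train + n_val]),
            (X[n_train + n_val:], y[n_train + n_val:]))
```

With Keras you would then pass the validation pair to
model.fit(..., validation_data=(val_X, val_y)) and hold the test pair
back for a final model.evaluate, instead of reusing one set for both.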

-- 
Pankaj Jangid