Theano is one of the best deep learning libraries for Python: it lets you define, optimize, and evaluate complex mathematical expressions using a NumPy-like interface.
NumPy provides the standard multidimensional array structure for numerical computation in Python.
Important Features
- Unit testing and self-verification – detects and diagnoses errors.
- Speed and stability optimizations – gets the right answer for log(1+x) even when x is tiny.
- Tight integration with NumPy – compiled Theano functions work with numpy.ndarray.
- Efficient symbolic differentiation – Theano can take derivatives of functions with one or many inputs.
- Transparent use of a GPU – performs data-intensive machine learning computations much faster than a CPU (Central Processing Unit).
- Dynamic C code generation – evaluates expressions faster.
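The stability bullet above is easy to demonstrate in plain Python. The snippet below illustrates the underlying numerical issue rather than Theano itself: for tiny x, a naive log(1+x) loses all the information in x, while a log1p-style evaluation (the kind of rewrite Theano applies automatically) keeps it.

```python
import math

x = 1e-20

# 1 + 1e-20 rounds to exactly 1.0 in double precision,
# so the naive formula returns log(1.0) == 0.0.
naive = math.log(1 + x)

# math.log1p computes log(1+x) accurately even for tiny x.
stable = math.log1p(x)

print(naive)   # 0.0
print(stable)  # 1e-20
```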
Theano is a powerful large-scale computational Python library that has been in development since 2007. It is well documented and widely used in universities, especially for machine learning and deep learning courses.
This is the latest version, and it brings many new features: interface changes, bug fixes, improvements, and more.
See the new features here:
- Support for NumPy 1.13.
- A conda package is now available; install it with conda install -c mila-udem theano pygpu.
- Support for PyGpu 0.7.
- Fixed memory leaks related to element-wise operations on the GPU.
- Removed the old GPU backend theano.sandbox.cuda in favor of the new theano.gpuarray backend.
- Support for Python 3.4 and newer (support for Python 3.3 was dropped).
- Fixed a bug where the matrix dot product could return wrong results in some cases when used with MKL from conda.
- Sped up element-wise ops based on SciPy.
- Compiled modules are now identified with sha256 instead of md5; both are hash functions, and sha256 was chosen for security reasons.
- Many bug fixes, crash fixes, and warning improvements.
- Added RNNBlock documentation.
- Updated the convolution (conv) documentation.
Improved Graph Algorithms
- Faster optimization, with new optional features.
- Improved compilation times through broad use of the op params interface.
- Improved debugging using PdbBreakpoint.
Seventy-one people contributed to this release.
- AllocDiag is now used only for non-scalar input.
- Merged duplicated diagonal functions: ExtractDiag (which extracts a diagonal into a vector) and AllocDiag (which builds a matrix with a given vector on its diagonal).
- Ops that need only the outputs to compute gradients can use the grad() method instead of L_op().
- Moved the ExtractDiag op from theano.tensor.nlinalg to theano.tensor.basic.
- Renamed MultinomialWOReplacementFromUniform to ChoiceFromUniform.
Removed the following Theano configuration flags:
- cublas.lib
- nvcc.*flags
- cuda.enabled
- pycuda.init
- enable_initial_driver_test
- gpuarray.sync
- lib.cnmem
- Improved separable convolution for 2D and 3D.
- Removed the conv3d function.
- Updated grouped convolution for 2D and 3D.
- Improved bilinear upsampling with fractional sampling factors.
- Deprecated the old conv2d interface.
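For intuition, the ExtractDiag/AllocDiag pair mentioned earlier mirrors what NumPy's np.diag does depending on the dimensionality of its input. This sketch uses NumPy, not the Theano ops themselves:

```python
import numpy as np

m = np.array([[1, 2],
              [3, 4]])

# From a matrix, np.diag extracts the main diagonal into a vector
# (the role ExtractDiag plays in Theano).
d = np.diag(m)
print(d)  # [1 4]

# From a vector, np.diag builds a matrix with that vector on its
# diagonal and zeros elsewhere (the role AllocDiag plays).
a = np.diag(np.array([5, 6]))
print(a)
# [[5 0]
#  [0 6]]
```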
Graphics Processing Unit (GPU)
- The GPU is used only when needed.
- GPU support, via the Magma library, for QR, eigh, Cholesky, matrix inverse, and SVD.
- Uses GpuAdvancedIncSubtensor1_dev20.
- GPU support for both OpenCL and CUDA.
- The log-gamma function is supported for all non-complex types.
- Added long long values in GpuCublasTriangularSolve.
- Disk caching of compiled GPU kernels.
- The offset parameter k is supported for GpuEye.
cuDNN Convolution Algorithms
- Improved caching of cuDNN algorithm choices.
- Better loading times on Mac and Windows.
- Official support for v6.* and v7.*.
- Added a Python script to help test cuDNN convolutions.
Added New Features
- Added the matrix covariance function theano.tensor.cov.
- Improved the AbstractBatchNormTrainGrad function.
- Added Boolean indexing for sub-tensors.
- C code is now kept in separate files for ops that implement c_code.
- OpFromGraph now uses the L_op() method.
- Removed a useless warning shown when the profiler is manually disabled.
- Added the trigamma function.
- Removed theano/compat/six.py.
- Updated the half and full modes of the Images2Neibs op.
- Better and faster Travis CI tests.
- Added the sigmoid_binary_crossentropy function.
- Added the tensor6() and tensor7() constructors to the theano.tensor module.
- Added a new flag, cmodule.debug, to enable a debug mode; currently it is used only for cuDNN.
- Removed the COp.get_op_params() method.
- Added a new flag, pickle_test_value, to disable the pickling of test values.
- Added Bessel functions of order 0 and 1 from scipy.special.
- Added the ravel_multi_index and unravel_index functions on the CPU.
- ZeroGrad now uses the R_op() method.
- Stack traces are now tracked through the optimization process in the new GPU backend.
- Removed the ViewOp subclass during optimization.
- Fixed index overflows in theano.tensor.signal.pool.
- Added the disconnected_outputs parameter to Rop.
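The ravel_multi_index/unravel_index pair listed above follows NumPy's functions of the same name, which convert between multidimensional indices and flat (raveled) indices:

```python
import numpy as np

shape = (3, 2)  # 3 rows, 2 columns

# The multi-index (row=2, col=1) maps to flat index 2*2 + 1 = 5
# in row-major (C) order.
flat = np.ravel_multi_index((2, 1), shape)
print(int(flat))  # 5

# And back again: flat index 5 in a (3, 2) array is (2, 1).
row, col = np.unravel_index(5, shape)
print(int(row), int(col))  # 2 1
```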
Theano makes it possible to implement fast solutions to problems involving large amounts of data.
It combines a computer algebra system (CAS) with aspects of an optimizing compiler: it can generate C code for mathematical operations, minimizing the compilation and execution overhead of pure Python code, and it provides symbolic features such as automatic differentiation.
By taking advantage of a GPU, it can outperform a CPU by orders of magnitude.
The compiler can apply many complex optimizations to these expressions, such as:
- Improved numerical stability, for example computing log(1+exp(x)) without overflow
- Arithmetic simplifications, for example x*y/x -> y and --x -> x
- Use of efficient BLAS operations in a variety of contexts
- Using the GPU for computations
- Memory aliasing to avoid redundant calculation
- Loop fusion for element-wise subexpressions
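As a concrete example of the stability rewrite in the first bullet, here is the kind of transformation Theano performs for log(1+exp(x)), sketched by hand in NumPy (Theano applies the equivalent rewrite to the symbolic graph automatically):

```python
import numpy as np

def naive_softplus(x):
    # exp(1000) overflows to inf in double precision,
    # so the whole result becomes inf.
    return np.log(1 + np.exp(x))

def stable_softplus(x):
    # Rewritten form: for any x,
    # log(1+exp(x)) == max(x, 0) + log(1+exp(-|x|)),
    # and exp(-|x|) never overflows.
    return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

x = 1000.0
with np.errstate(over='ignore'):  # silence the overflow warning
    print(naive_softplus(x))      # inf
print(stable_softplus(x))         # 1000.0
```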
Let's see an example:

```python
import theano
from theano import tensor

# declare two symbolic double-precision scalars
a = tensor.dscalar()
b = tensor.dscalar()

# build a simple symbolic expression
c = a + b

# compile a function that takes values for a and b and evaluates c
f = theano.function([a, b], c)

# bind 1.5 to 'a', 2.5 to 'b', and compute 'c'
assert 4.0 == f(1.5, 2.5)
```
Theano itself is not a programming language: it is a deep learning library for Python, with an optimizing compiler for evaluating expressions, especially matrix expressions. Matrix computations of this kind are typically done with the NumPy package; see the Theano GitHub repository for more.
NumPy is the standard package for working with multidimensional arrays in the Python programming language, and its matrix conventions are the ones used in machine learning.
Let's look at an example:
A 3×2 matrix has 3 rows and 2 columns. Consider this array:

```python
>>> numpy.asarray([[10., 11], [12, 13], [14, 15]])
array([[10., 11.],
       [12., 13.],
       [14., 15.]])
>>> numpy.asarray([[10., 11], [12, 13], [14, 15]]).shape
(3, 2)
```
This example shows how to define arrays through the NumPy interface in a Python program.
Here is another example:
```python
>>> a = numpy.asarray([10.0, 11.0, 12.0])
>>> b = 2.0
>>> a * b
array([20., 22., 24.])
```
This is the result of the NumPy program: the scalar b is broadcast across the array, so every element is multiplied by 2. NumPy supports many kinds of arithmetic expressions between arrays in Python.
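Broadcasting goes beyond scalars: a 1-D array can be combined with each row of a 2-D array, a convention Theano shares with NumPy. A small sketch:

```python
import numpy as np

m = np.asarray([[10., 11.],
                [12., 13.],
                [14., 15.]])   # shape (3, 2)
v = np.asarray([1., 2.])       # shape (2,)

# v is broadcast across each of the 3 rows of m.
print(m + v)
# [[11. 13.]
#  [13. 15.]
#  [15. 17.]]
```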
Back in Theano, symbolic variables combine the same way. Here a represents the sum of b and c, and you can use the pp (pretty-print) function to inspect the computation linked to a:

```python
>>> from theano import tensor, pp
>>> b = tensor.dscalar('b')
>>> c = tensor.dscalar('c')
>>> a = b + c
>>> print(pp(a))
(b + c)
```
Theano is one of the best deep learning libraries for Python. It focuses on optimizing arithmetic expressions, multidimensional arrays through the NumPy interface, complex matrix mathematics, and more.
It is a powerful library for deep learning, and it can help you work with varied or complex datasets when training models. You can read more on the Theano wiki.
Python has many well-defined libraries available for complex computation.