MLOps - Day 13

Today's learning :-
  • It was a hands-on day for everything learned yesterday; the theory was already covered last day, as listed below -
  • It was a day of learning both ML and Python: we started with a quick revision and then moved on to Deep Learning, since lazy execution and computation graphs are the basis of DL.
  • Sklearn (scikit-learn), which uses NumPy in the background, doesn't support lazy execution and graphs, hence we need a new data type that supports both of these.
  • For this we have a new data type, the Tensor, which comes from the TensorFlow module; but we don't use TensorFlow directly either.
  • Instead we use a higher-level approach known as Keras.
  • Just as humans have a neural network in the brain, we can create the same kind of neural network through computer programming, enabling the computer to think like a human, which makes it intelligent.
  • Neurons are nodes that receive input from the input nodes (features). A neuron assigns a weight to every input and passes the weighted sum on to a function that filters the features.
  • This function is known as the activation function, and it produces the output. This complete arrangement is known as an Artificial Neural Network (ANN).
  • Features sit in the input layer, the output sits in the output layer, and the whole ANN operates in the hidden layers. If the number of hidden layers is >= 3, that network is known as a deep net, and the learning done by such a network is known as Deep Learning.
  • When we have a single neuron, it is known as a Perceptron (see the small sketch after this list).
  • Writing a Python program to understand all this.
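Before that program, a minimal sketch (my own illustration, not from the session) of what a single neuron does: a weighted sum of the inputs followed by an activation function. The feature values, weights, and the step activation below are all made-up examples.

#Single-neuron (perceptron) forward pass - illustrative sketch only
inputs = [1.0, 2.0]      #hypothetical feature values
weights = [0.5, -0.3]    #hypothetical weights assigned by the neuron
bias = 0.1

#Weighted sum of all inputs
z = sum(w * x for w, x in zip(weights, inputs)) + bias

#Activation function (a simple step function, just for illustration)
output = 1 if z > 0 else 0
print("Weighted sum :", z, ", Output :", output)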
Code :-
import tensorflow as tf

x = tf.constant(6.0)   #input (the recorded outputs below imply x = 6.0, not 3.0)
y = tf.constant(6.0)   #target
w = tf.Variable(20.0)  #weight, deliberately started far from the optimum

#Back propagation
with tf.GradientTape() as tape:
    #Feed forwarding, then the loss / cost function
    loss = tf.math.abs(w*x - y)

#Optimizer: descent, means change the weight against the gradient
dx = tape.gradient(loss, w)
w.assign(w-dx)
print("Weight : ", w.numpy(), ",", "Descent : ", dx.numpy(), ",", "Loss : ", loss.numpy())
 

Output :- (the cell was rerun four times; each rerun starts from the previously updated weight)
1st O/P -
Weight :  14.0 , Descent :  6.0 , Loss :  114.0

2nd O/P -
Weight :  8.0 , Descent :  6.0 , Loss :  78.0

3rd O/P -
Weight :  2.0 , Descent :  6.0 , Loss :  42.0

4th O/P -
Weight :  -4.0 , Descent :  6.0 , Loss :  6.0
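Instead of rerunning the cell by hand, the same update can be wrapped in a loop. A minimal sketch, assuming the same x, y, and w as above:

import tensorflow as tf

x = tf.constant(6.0)
y = tf.constant(6.0)
w = tf.Variable(20.0)

for step in range(4):
    with tf.GradientTape() as tape:
        loss = tf.math.abs(w*x - y)
    dx = tape.gradient(loss, w)  #gradient of the loss w.r.t. the weight
    w.assign(w-dx)               #plain gradient descent, learning rate 1.0
    print("Weight : ", w.numpy(), ",", "Descent : ", dx.numpy(), ",", "Loss : ", loss.numpy())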


  • Another Python program, this time for a single neuron, which is known as a Perceptron.
Code :- ANN Perceptron Linear Regression
import pandas as pd
dataset = pd.read_csv('weight-height.csv')

#A quick look at the dataset
dataset.head(2)

#Summary of the dataset
dataset.info()

Output :-
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 3 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   Gender  10000 non-null  object 
 1   Height  10000 non-null  float64
 2   Weight  10000 non-null  float64
dtypes: float64(2), object(1)
memory usage: 234.5+ KB

dataset.columns
Output :-
Index(['Gender', 'Height', 'Weight'], dtype='object')


#Target
y = dataset['Weight']

#Predictor
X = dataset['Height']
# Regression
from keras.models import Sequential

#Dense is a fully connected layer; we stack Dense layers to build the hidden layers
from keras.layers import Dense

#Optimizer: there are many, but here I am using Adam
from keras.optimizers import Adam

model = Sequential()
#Number of O/P(units=?)
#Number of I/P(input_shape=?)
#Activation function - by default, the linear activation function
model.add(Dense(units=1, input_shape=(1,)))

model.summary()
Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_5 (Dense)              (None, 1)                 2         
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
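Note: the 2 params are exactly the perceptron's single weight plus its bias - which is what makes this model a plain linear regression, y = w*x + b.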

#Adam's first positional argument is the learning rate (0.8 here)
model.compile(loss='mean_squared_error', optimizer=Adam(0.8))
model.fit(X, y, epochs=20)
Epoch 1/20
10000/10000 [==============================] - 1s 88us/step - loss: 1070.4784
Epoch 2/20
10000/10000 [==============================] - 1s 78us/step - loss: 546.4011
Epoch 3/20
10000/10000 [==============================] - 1s 78us/step - loss: 487.8537
Epoch 4/20
10000/10000 [==============================] - 1s 85us/step - loss: 432.6666
Epoch 5/20
10000/10000 [==============================] - 1s 76us/step - loss: 406.0169
Epoch 6/20
10000/10000 [==============================] - 1s 75us/step - loss: 367.2809
Epoch 7/20
10000/10000 [==============================] - 1s 76us/step - loss: 337.0343
Epoch 8/20
10000/10000 [==============================] - 1s 81us/step - loss: 291.5824
Epoch 9/20
10000/10000 [==============================] - 1s 89us/step - loss: 282.7771
Epoch 10/20
10000/10000 [==============================] - 1s 76us/step - loss: 273.4019
Epoch 11/20
10000/10000 [==============================] - 1s 75us/step - loss: 239.5028
Epoch 12/20
10000/10000 [==============================] - 1s 75us/step - loss: 234.7328
Epoch 13/20
10000/10000 [==============================] - 1s 74us/step - loss: 213.1820
Epoch 14/20
10000/10000 [==============================] - 1s 77us/step - loss: 215.1054
Epoch 15/20
10000/10000 [==============================] - 1s 75us/step - loss: 204.5332
Epoch 16/20
10000/10000 [==============================] - 1s 76us/step - loss: 210.3432
Epoch 17/20
10000/10000 [==============================] - 1s 76us/step - loss: 187.2308
Epoch 18/20
10000/10000 [==============================] - 1s 75us/step - loss: 181.1886
Epoch 19/20
10000/10000 [==============================] - 1s 76us/step - loss: 184.3879
Epoch 20/20
10000/10000 [==============================] - 1s 76us/step - loss: 183.0509
Out[138]:
<keras.callbacks.callbacks.History at 0x15ad03c8>
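Once fitted, the model can make predictions. A minimal sketch (the height value of 70 is a made-up example):

import numpy as np

#Predict the weight for a hypothetical height of 70
print(model.predict(np.array([[70.0]])))

#With a linear activation this is just w*x + b
w, b = model.get_weights()
print(w[0, 0] * 70.0 + b[0])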

How to reset the weights and rerun the epochs
w, B = model.get_weights()   #returns [weights array, bias array]
w[0,0] = 0.0
B[0] = 0.0
model.set_weights((w, B))
model.get_weights()
[array([[0.]], dtype=float32), array([0.], dtype=float32)]
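From here the fit can simply be rerun from the zeroed weights. A minimal sketch of reading the learned line afterwards; with a linear activation, the perceptron is exactly a linear regression, Weight ≈ w * Height + b:

#Rerun training from the zeroed weights
model.fit(X, y, epochs=20)

#The single weight and bias are the slope and intercept of the fitted line
w, B = model.get_weights()
print("Slope :", w[0, 0], ", Intercept :", B[0])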
