Today's learning :-
- It was a hands-on day for everything learned yesterday; the theory was already covered the previous day, as listed below -
- It was a day of learning both ML and Python: we started with a quick revision and then moved on to Deep Learning, since lazy execution and computation graphs are the basis of DL.
- Sklearn (Scikit-Learn), which uses NumPy in the background, doesn't support lazy execution and graphs, hence we need a new data type which supports both of these.
- For this we have a new data type, the Tensor, which comes from the TensorFlow module, but we don't use TensorFlow directly either.
- Instead we use a higher-level API on top of it, known as Keras.
- As humans have a neural network in the brain, we can create the same kind of neural network using computer programming, enabling the computer to think like a human, which makes it intelligent.
- Neurons are nodes which receive input from the input nodes (features). A neuron decides the weightage of every input and then sends the result to another neuron, which filters the features on the basis of the function provided in it.
- This function is known as the activation function, and it produces the output. The complete arrangement is known as an Artificial Neural Network (ANN).
- Features sit in the input layer, the output sits in the output layer, and everything in between operates in the hidden layers. If there are 3 or more hidden layers, the network is known as a deep net, and learning by such a network is known as Deep Learning.
- When we have a single neuron it is known as a Perceptron.
- Writing a Python program to understand all of this:
import tensorflow as tf
x = tf.constant(3.0)
y = tf.constant(6.0)
w = tf.Variable(20.0)
#Back propagation
with tf.GradientTape() as tape:
    #Feed forward: loss / cost function
    loss = tf.math.abs(w*x - y)
#Optimizer: descent, means change the weight
dw = tape.gradient(loss, w)
w.assign(w - dw)
print("Weight : ", w.numpy(), ",", "Descent : ", dw.numpy(), ",", "Loss : ", loss.numpy())
Output :-
1st O/P -
Weight : 14.0 , Descent : 6.0 , Loss : 114.0
2nd O/P -
Weight : 8.0 , Descent : 6.0 , Loss : 78.0
3rd O/P -
Weight : 2.0 , Descent : 6.0 , Loss : 42.0
4th O/P -
Weight : -4.0 , Descent : 6.0 , Loss : 6.0
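A quick check of the math behind these numbers: the derivative of |w*x - y| with respect to w is x * sign(w*x - y), so the descent stays constant for as long as the sign does not flip. A minimal sketch verifying this against the tape (with x = 3.0 as in the code, both values come out as 3.0; the printed descent of 6.0 above suggests that particular run used x = 6.0):
import tensorflow as tf
x = tf.constant(3.0)
y = tf.constant(6.0)
w = tf.Variable(20.0)
with tf.GradientTape() as tape:
    loss = tf.math.abs(w*x - y)
#d|w*x - y|/dw = x * sign(w*x - y)
manual = (x * tf.sign(w*x - y)).numpy()
taped = tape.gradient(loss, w).numpy()
print(manual, taped)  #both 3.0 here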
- Another Python program, this one for a single neuron, which is known as a Perceptron.
import pandas as pd
dataset = pd.read_csv('weight-height.csv')
#This is my dataset
dataset.head(2)
#Summary of the dataset
dataset.info()
Output :-
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 3 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   Gender  10000 non-null  object
 1   Height  10000 non-null  float64
 2   Weight  10000 non-null  float64
dtypes: float64(2), object(1)
memory usage: 234.5+ KB
dataset.columns
Output :-
Index(['Gender', 'Height', 'Weight'], dtype='object')
#Target
y = dataset['Weight']
#Predictor
X = dataset['Height']
# Regression
from keras.models import Sequential
#Dense is the fully connected layer used to build the hidden and output layers
from keras.layers import Dense
#There are many optimizers, but here I am using Adam
from keras.optimizers import Adam
model = Sequential()
#Number of O/P(units=?)
#Number of I/P(input_shape=?)
#Activation function - linear by default
model.add(Dense(units=1, input_shape=(1,)))
model.summary()
Model: "sequential_7" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_5 (Dense) (None, 1) 2 ================================================================= Total params: 2 Trainable params: 2 Non-trainable params: 0 _________________________________________________________________
model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.8))
model.fit(X,y, epochs=20)
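Once fitted, the model can be used to predict. A minimal sketch (70.0 is just an example height, assuming the usual inches/pounds units of this weight-height dataset):
import numpy as np
#Predict the weight for an example height of 70
print(model.predict(np.array([[70.0]])))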
How to reset the weights and rerun the epochs:
w, B = model.get_weights()
w[0,0] = 0.0
B[0] = 0.0
model.set_weights([w, B])
model.get_weights()
[array([[0.]], dtype=float32), array([0.], dtype=float32)]
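To repeat this experiment without retyping the steps, the reset can be wrapped in a small helper (a sketch; reset_and_fit is a hypothetical name, not a Keras API):
import numpy as np
def reset_and_fit(model, X, y, epochs=20):
    #Zero every weight array, then train again from scratch
    zeroed = [np.zeros_like(arr) for arr in model.get_weights()]
    model.set_weights(zeroed)
    return model.fit(X, y, epochs=epochs)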