[Andrew Ng's course programming homework] Neural Networks and Deep Learning

Keywords: encoding Python network Windows

This article uses the assignment's dataset; click Download to get it. The running environment is Python 3.7. All code is commented in detail for later review.

1. Packages that need to be imported

# -*- coding: utf-8 -*-    #Encoding declaration

import numpy as np    #Import numpy package
import matplotlib.pyplot as plt   #Import the matplotlib.pyplot package
import h5py #Import h5py

1. Differences between the ANSI, GB2312/GBK, UNICODE and UTF-8 encodings

Encoding  Characteristics
ANSI  Single-byte encoding; at most 255 characters, in the range 0x00-0xFF.
GB2312/GBK  The Chinese national-standard character codes, double-byte encodings whose English-letter part is identical to ISO 8859-1. GB2312 represents only simplified characters; GBK represents both simplified and traditional characters and is backward compatible with GB2312.
UTF-8  Originally designed for network transmission; as a byte-oriented encoding, byte order is not a concern. It is now also widely used as a local file-storage format.
UNICODE  Here refers to UTF-16 LE, the default Unicode encoding on Windows. Commonly used Word documents and similar software store text this way.

Python 2 script files default to ASCII encoding. When a file contains characters outside the ASCII range, an encoding declaration is needed to tell the interpreter how to decode the module. If a .py file contains Chinese characters (strictly, any non-ASCII characters), you must add an encoding declaration on the first or second line: # -*- coding: utf-8 -*- or # coding=utf-8. Other encodings such as gbk and gb2312 can also be declared; otherwise an encoding error occurs. (Python 3 defaults to UTF-8, but the declaration remains common and harmless.)

With the default ASCII encoding, Chinese characters cannot be printed correctly without changing the encoding, so errors occur as soon as Chinese text is read.
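To make this concrete, here is a small illustrative sketch (mine, not part of the assignment) showing that the same character is stored as different bytes under different codecs, which is exactly why a wrong or missing declaration breaks reading Chinese:

# -*- coding: utf-8 -*-
#Illustrative sketch: one character, two byte representations
s = "猫"  #the Chinese character for "cat"
print(s.encode("utf-8"), len(s.encode("utf-8")))  #three bytes under UTF-8
print(s.encode("gbk"), len(s.encode("gbk")))      #two bytes under GBK
print(s.encode("utf-8").decode("utf-8"))          #round-trips only with the matching codec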

2. About import

import statement  Access a module's contents as module_name.function_name.
from...import statement  Python's from statement imports a specified part of a module into the current namespace.
from...import * statement  Imports all of a module's contents into the current namespace.
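A quick sketch of the three forms, using numpy as the example module:

import numpy                 #access contents via the module name: numpy.zeros(...)
from numpy import zeros      #import one name into the current namespace: zeros(...)
from numpy import *          #import everything; convenient but may shadow existing names
import numpy as np           #aliased import, the form used throughout this article
print(numpy.zeros(2), zeros(2), np.zeros(2))  #all three names refer to the same function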

3. About numpy library and matplotlib Library

numpy library reference https://www.runoob.com/numpy/numpy-linear-algebra.html

matplotlib reference https://www.runoob.com/numpy/numpy-matplotlib.html

[Because our dataset is in h5 format]: an h5py file is a container for two kinds of objects. Datasets are array-like collections of data, similar to numpy arrays. Groups are folder-like containers that work like Python dictionaries, with keys and values; they can hold datasets or other groups. The key is the name of the group member, and the value is the member object itself (a group or a dataset).
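For example, the training file used in the next section can be inspected like a dictionary (a small sketch; the datasets/ path is the one assumed by the loading code below):

import h5py
with h5py.File('datasets/train_catvnoncat.h5', 'r') as f:  #open read-only
    print(list(f.keys()))          #member names: ['list_classes', 'train_set_x', 'train_set_y']
    print(f['train_set_x'].shape)  #datasets behave like numpy arrays: (209, 64, 64, 3)
    print(f['train_set_x'].dtype)  #pixel values are stored as integers in 0-255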

2. Functions for loading data and testing them

##Functions to load data
def load_dataset():
    
    #About training sets
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r") #Open the training file read-only
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    #train_set_x_orig: Save image data from the training set (209 64x64 images from this training set)
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels
    #train_set_y_orig: Save the corresponding classification value of the image of the training set ([0 | 1], 0 means not a cat, 1 means a cat)
   
    #About Test Sets
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    #test_set_x_orig: Saves the image data of the test set (the test set has 50 64x64 images)
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels
    #test_set_y_orig: Save the corresponding classification value for the image of the test set ([0 | 1], 0 means not a cat, 1 means a cat)
    
    #The two classes: cat / non-cat
    classes = np.array(test_dataset["list_classes"][:]) # the list of classes
    #classes: Saves two bytes-type strings: [b'non-cat' b'cat']
    
    #Reshape the labels from (m,) vectors into (1, m) row vectors
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    #The _orig suffix marks the raw arrays; the preprocessed train_set_x / test_set_x are built later.
    
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

#Load data into the main program
train_set_x_orig , train_set_y , test_set_x_orig , test_set_y , classes = load_dataset()

#View one of the loaded pictures
#plt.imshow() renders the image array; plt.show() then displays the figure.
index = 2
plt.imshow(train_set_x_orig[index])
plt.show()

print("train_set_y=" + str(train_set_y)) #Check what the labels in the training set look like.
print("test_set_y=" + str(test_set_y)) #Check out what labels are inside the test set.
print(train_set_y.shape)  #Output training set size
print(test_set_y.shape)   #Output Test Set Size
print(train_set_x_orig.shape)  #(209, 64, 64, 3)
print(test_set_x_orig.shape)   #(50, 64, 64, 3)
print(classes)   #[b'non-cat' b'cat']

m_train = train_set_y.shape[1] #Number of pictures in the training set.
m_test = test_set_y.shape[1] #Number of pictures in the test set.
num_px = train_set_x_orig.shape[1] #The width and height of the pictures in the training and test sets (both 64x64).

#Now take a look at what we're loading
print ("Number of training sets: m_train = " + str(m_train))
print ("Number of test sets : m_test = " + str(m_test))
print ("Width of each picture/high : num_px = " + str(num_px))
print ("Size of each picture : (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("training set_Dimension of picture : " + str(train_set_x_orig.shape))
print ("training set_Dimension of label : " + str(train_set_y.shape))
print ("Test Set_Dimension of picture: " + str(test_set_x_orig.shape))
print ("Test Set_Dimension of label: " + str(test_set_y.shape))

Description of variables

Variable Name  Description
train_set_x_orig  Image data of the training set (209 64x64 images)
train_set_y_orig; train_set_y  Classification value for each training image ([0 | 1], 0 = not a cat, 1 = a cat)
test_set_x_orig  Image data of the test set (50 64x64 images)
test_set_y_orig; test_set_y  Classification value for each test image ([0 | 1], 0 = not a cat, 1 = a cat)
classes  Two bytes-type strings: [b'non-cat' b'cat']
m_train  Number of pictures in the training set
m_test  Number of pictures in the test set
num_px  Width and height of the pictures in both sets (64)

Processing data

#X_flatten = X.reshape(X.shape[0], -1).T, where X.T is the transpose of X
#Reduce and transpose the dimensions of the training set.
train_set_x_flatten  = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
#Reduce and transpose the dimensions of the test set.
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T

print ("The last dimension of training set dimension reduction: " + str(train_set_x_flatten.shape))
print ("training set_Dimension of label : " + str(train_set_y.shape))
print ("Dimension after dimension reduction of test set: " + str(test_set_x_flatten.shape))
print ("Test Set_Dimension of label : " + str(test_set_y.shape))
                                                                #Understanding Matrix Changes
#Divide by 255 to place standardized data at[0,1]Between, now standardize our datasets   #The range of pixel values is 0-255
train_set_x = train_set_x_flatten / 255
test_set_x = test_set_x_flatten / 255
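To see what the reshape trick does, here is a tiny sketch on a made-up array (X_demo is hypothetical, not part of the dataset): two 2x2 single-channel "images" are flattened into one column each:

#Tiny sketch with made-up data of shape (m, height, width, channels)
X_demo = np.arange(8).reshape(2, 2, 2, 1)
X_demo_flat = X_demo.reshape(X_demo.shape[0], -1).T  #flatten each image into one column
print(X_demo_flat.shape)  #(4, 2): one column of height*width*channels values per image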

3. Construct sigmoid() function and test it

##Construct sigmoid(). Parameter: z - a scalar or numpy array of any size. Returns: s - sigmoid(z)
def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s
##Test sigmoid()
print("====================test sigmoid====================")
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(9.2) = " + str(sigmoid(9.2)))

4. Initializing the parameters w and b

Purpose  Creates a zero vector of shape (dim, 1) for w and initializes b to 0.
Parameter  dim - the size of the w vector we want
Returns  w - an initialized vector of shape (dim, 1). b - an initialized scalar (the bias)
##Initialize the required parameters w and b
def initialize_with_zeros(dim):
 
    w = np.zeros(shape = (dim,1))
    b = 0
    
    #Use assertions to ensure that the data I want is correct
    assert(w.shape == (dim, 1)) #The dimension of w is (dim,1)
    assert(isinstance(b, float) or isinstance(b, int)) #b is of type float or int
    return (w , b)

5. The cost function and its gradients for forward and backward propagation

Purpose  Computes the cost function and its gradients via forward and backward propagation.
Parameters  w - weights, an array of shape (num_px * num_px * 3, 1)
b - bias, a scalar
X - data matrix of shape (num_px * num_px * 3, number of training examples)
Y - true label vector (0 if non-cat, 1 if cat) of shape (1, number of training examples)
Returns  cost - the negative log-likelihood cost of logistic regression
dw - gradient of the loss with respect to w, hence the same shape as w
db - gradient of the loss with respect to b, hence the same shape as b
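In standard notation, the formulas that propagate() implements below (numbered 2-4 in the course videos) are:

A = \sigma(w^T X + b)
J = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log a^{(i)} + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]
\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T, \qquad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)})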
##Perform Forward and Backward propagation steps to learn parameters.
def propagate(w, b, X, Y):

    m = X.shape[1]

    #Forward Propagation
    A = sigmoid(np.dot(w.T,X) + b) #To calculate the activation value, refer to formula 2.
    cost = (- 1 / m) * np.sum(Y * np.log(A) + (1 - Y) * (np.log(1 - A))) #For cost calculation, refer to formulas 3 and 4.

    #Reverse Propagation
    dw = (1 / m) * np.dot(X, (A - Y).T) #Refer to the partial-derivative formula in the video.
    db = (1 / m) * np.sum(A - Y) #Refer to the partial-derivative formula in the video.

    #Use assertions to make sure my data is correct
    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    #Create a dictionary and save dw and db.
    grads = {
                "dw": dw,
                "db": db
             }
    return (grads , cost)

##Test propagate
print("====================test propagate====================")
#Initialize some parameters
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))

6. Learning w and b by minimizing the cost function J

Purpose  Learns w and b by minimizing the cost function J. For a parameter θ, the update rule is θ = θ - α·dθ, where α is the learning rate.
Parameters  w - weights, an array of shape (num_px * num_px * 3, 1)
b - bias, a scalar
X - data of shape (num_px * num_px * 3, number of training examples)
Y - true label vector (0 if non-cat, 1 if cat) of shape (1, number of training examples)
num_iterations - number of iterations of the optimization loop
learning_rate - learning rate of the gradient-descent update rule
print_cost - print the loss every 100 steps
Returns  params - dictionary containing the weight w and bias b
grads - dictionary containing the gradients of the weight and bias with respect to the cost function
costs - a list of all costs computed during optimization; it will be used to draw the learning curve.
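Concretely, the update applied on every pass of the loop below is:

w := w - \alpha \frac{\partial J}{\partial w}, \qquad b := b - \alpha \frac{\partial J}{\partial b}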
##Learn w and b by minimizing the cost function J. For a parameter θ, the update rule is θ = θ - α·dθ, where α is the learning rate.
def optimize(w , b , X , Y , num_iterations , learning_rate , print_cost = False):

    costs = []

    for i in range(num_iterations):

        grads, cost = propagate(w, b, X, Y)

        dw = grads["dw"]
        db = grads["db"]

        w = w - learning_rate * dw
        b = b - learning_rate * db

        #Record cost
        if i % 100 == 0:
            costs.append(cost)
        #Print cost data
        if (print_cost) and (i % 100 == 0):
            print("Number of iterations: %i , Error value: %f" % (i,cost))

    params  = {
                "w" : w,
                "b" : b }
    grads = {
            "dw": dw,
            "db": db } 
    return (params , grads , costs)


#Test optimize
print("====================test optimize====================")
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
params , grads , costs = optimize(w , b , X , Y , num_iterations=100 , learning_rate = 0.009 , print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))

7. Predicting the labels of a dataset X using w and b

Purpose  Uses w and b to predict the labels for a dataset X. Predictions are stored in the vector Y_prediction.
Parameters  w - weights, an array of shape (num_px * num_px * 3, 1)
b - bias, a scalar
X - data of shape (num_px * num_px * 3, number of examples)
Returns  Y_prediction - a numpy array (vector) containing the [0 | 1] predictions for all pictures in X
##optimize() outputs the learned w and b; we can use them to predict the labels of a dataset X. Predictions are stored in the vector Y_prediction.
def predict(w , b , X ):
    
    m  = X.shape[1] #Number of pictures
    Y_prediction = np.zeros((1,m)) 
    w = w.reshape(X.shape[0],1)

    #Predict the probability of cats appearing in the picture
    A = sigmoid(np.dot(w.T , X) + b)
    for i in range(A.shape[1]):
        #Convert the probability A[0, i] into an actual prediction Y_prediction[0, i]
        Y_prediction[0,i] = 1 if A[0,i] > 0.5 else 0
    #Use assertions
    assert(Y_prediction.shape == (1,m))

    return Y_prediction


#Test predict
print("====================test predict====================")
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
print("predictions = " + str(predict(w, b, X)))

8. Integrate the above functions into a model() function

Functional functions To integrate these functions into a model() function
parameter X_train - numpy array, training set with dimension (num_px * num_px * 3, m_train)
Y_train - numpy array, training label set with dimension (1, m_train) (vector)
X_test - array of numpy, test set with dimension (num_px * num_px * 3, m_test)
Y_test - numpy array, test label set for (vector) dimensions (1, m_test)
num_iterations - A superparameter representing the number of iterations used to optimize parameters
learning_rate - A superparameter representing the learning rate used in the optimize() update rule
print_cost - Set to true to print cost per 100 iterations
Return d - A dictionary containing information about the model.
##Integrate the previous functions into a single model() function
def model(X_train , Y_train , X_test , Y_test , num_iterations = 2000 , learning_rate = 0.5 , print_cost = False):

    w , b = initialize_with_zeros(X_train.shape[0])

    parameters , grads , costs = optimize(w , b , X_train , Y_train,num_iterations , learning_rate , print_cost)

    #Retrieving parameters w and b from the dictionary Parameters
    w , b = parameters["w"] , parameters["b"]

    #Examples of prediction tests/training sets
    Y_prediction_test = predict(w , b, X_test)
    Y_prediction_train = predict(w , b, X_train)

    #Print the accuracy after training
    print("Training set accuracy:"  , format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100) ,"%")
    print("Test Set Accuracy:"  , format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100) ,"%")

    d = {
            "costs" : costs,
            "Y_prediction_test" : Y_prediction_test,
            "Y_prediciton_train" : Y_prediction_train,
            "w" : w,
            "b" : b,
            "learning_rate" : learning_rate,
            "num_iterations" : num_iterations }
    return d
print("====================test model====================")     
##This call uses the real dataset loaded above; see the loading code earlier.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
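The costs list recorded every 100 iterations can now be used to draw the learning curve, for example (an optional sketch using the d returned above):

#Optional: plot the learning curve from the recorded costs
costs = np.squeeze(d["costs"])
plt.plot(costs)
plt.ylabel("cost")
plt.xlabel("iterations (per hundred)")
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()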

9. Complete Code

# -*- coding: utf-8 -*-    #Encoding declaration

import numpy as np    #Import numpy package
import matplotlib.pyplot as plt   #Import the matplotlib.pyplot package
import h5py #Import h5py

##Functions to load data
def load_dataset():
    
    #About training sets
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r") #Open the training file read-only
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    #train_set_x_orig: Save image data from the training set (209 64x64 images from this training set)
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels
    #train_set_y_orig: Save the corresponding classification value of the image of the training set ([0 | 1], 0 means not a cat, 1 means a cat)
   
    #About Test Sets
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    #test_set_x_orig: Saves the image data of the test set (the test set has 50 64x64 images)
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels
    #test_set_y_orig: Save the corresponding classification value for the image of the test set ([0 | 1], 0 means not a cat, 1 means a cat)
    
    #The two classes: cat / non-cat
    classes = np.array(test_dataset["list_classes"][:]) # the list of classes
    #classes: Saves two bytes-type strings: [b'non-cat' b'cat']
    
    #Reshape the labels from (m,) vectors into (1, m) row vectors
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    #The _orig suffix marks the raw arrays; the preprocessed train_set_x / test_set_x are built later.
    
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

#Load data into the main program
train_set_x_orig , train_set_y , test_set_x_orig , test_set_y , classes = load_dataset()

#View one of the loaded pictures
#plt.imshow() renders the image array; plt.show() then displays the figure.
index = 2
plt.imshow(train_set_x_orig[index])
plt.show()

print("train_set_y=" + str(train_set_y)) #Check what the labels in the training set look like.
print("test_set_y=" + str(test_set_y)) #Check out what labels are inside the test set.
print(train_set_y.shape)  #Output training set size
print(test_set_y.shape)   #Output Test Set Size
print(train_set_x_orig.shape)  #(209, 64, 64, 3)
print(test_set_x_orig.shape)   #(50, 64, 64, 3)
print(classes)   #[b'non-cat' b'cat']

m_train = train_set_y.shape[1] #Number of pictures in the training set.
m_test = test_set_y.shape[1] #Number of pictures in the test set.
num_px = train_set_x_orig.shape[1] #The width and height of the pictures in the training and test sets (both 64x64).

#Now take a look at what we're loading
print ("Number of training sets: m_train = " + str(m_train))
print ("Number of test sets : m_test = " + str(m_test))
print ("Width of each picture/high : num_px = " + str(num_px))
print ("Size of each picture : (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("training set_Dimension of picture : " + str(train_set_x_orig.shape))
print ("training set_Dimension of label : " + str(train_set_y.shape))
print ("Test Set_Dimension of picture: " + str(test_set_x_orig.shape))
print ("Test Set_Dimension of label: " + str(test_set_y.shape))

#X_flatten = X.reshape(X.shape[0], -1).T, where X.T is the transpose of X
#Reduce and transpose the dimensions of the training set.
train_set_x_flatten  = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
#Reduce and transpose the dimensions of the test set.
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T

print ("The last dimension of training set dimension reduction: " + str(train_set_x_flatten.shape))
print ("training set_Dimension of label : " + str(train_set_y.shape))
print ("Dimension after dimension reduction of test set: " + str(test_set_x_flatten.shape))
print ("Test Set_Dimension of label : " + str(test_set_y.shape))
                                                                #Understanding Matrix Changes
#Divide by 255 to place standardized data at[0,1]Between, now standardize our datasets   #The range of pixel values is 0-255
train_set_x = train_set_x_flatten / 255
test_set_x = test_set_x_flatten / 255

##To build sigmoid(): we need to compute sigmoid(w^T x + b) to make predictions
def sigmoid(z):
    """
    Parameters:
        z - a scalar or numpy array of any size.

    Returns:
        s - sigmoid(z)
    """
    s = 1 / (1 + np.exp(-z))
    return s
##Test sigmoid()
print("====================test sigmoid====================")
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(9.2) = " + str(sigmoid(9.2)))


##Initialize the required parameters w and b
def initialize_with_zeros(dim):
    """
        Creates a zero vector of shape (dim, 1) for w and initializes b to 0.

        Parameters:
            dim - the size of the w vector we want (the number of parameters in this case)

        Returns:
            w - an initialized vector of shape (dim, 1).
            b - an initialized scalar (the bias)
    """
    w = np.zeros(shape = (dim,1))
    b = 0
    
    #Use assertions to ensure that the data I want is correct
    assert(w.shape == (dim, 1)) #The dimension of w is (dim,1)
    assert(isinstance(b, float) or isinstance(b, int)) #b is of type float or int
    return (w , b)


##Perform Forward and Backward propagation steps to learn parameters.
def propagate(w, b, X, Y):
    """
    Computes the cost function and its gradients via forward and backward propagation.

    Parameters:
        w - weights, an array of shape (num_px * num_px * 3, 1)
        b - bias, a scalar
        X - data matrix of shape (num_px * num_px * 3, number of training examples)
        Y - true label vector (0 for non-cat, 1 for cat) of shape (1, number of training examples)

    Returns:
        cost - the negative log-likelihood cost of logistic regression
        dw - gradient of the loss with respect to w, hence the same shape as w
        db - gradient of the loss with respect to b, hence the same shape as b
    """
    m = X.shape[1]

    #Forward Propagation
    A = sigmoid(np.dot(w.T,X) + b) #To calculate the activation value, refer to formula 2.
    cost = (- 1 / m) * np.sum(Y * np.log(A) + (1 - Y) * (np.log(1 - A))) #For cost calculation, refer to formulas 3 and 4.

    #Reverse Propagation
    dw = (1 / m) * np.dot(X, (A - Y).T) #Refer to the partial-derivative formula in the video.
    db = (1 / m) * np.sum(A - Y) #Refer to the partial-derivative formula in the video.

    #Use assertions to make sure my data is correct
    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    #Create a dictionary and save dw and db.
    grads = {
                "dw": dw,
                "db": db
             }
    return (grads , cost)
##Test propagate
print("====================test propagate====================")
#Initialize some parameters
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))


##Learn w and b by minimizing the cost function J. For a parameter θ, the update rule is θ = θ - α·dθ, where α is the learning rate.
def optimize(w , b , X , Y , num_iterations , learning_rate , print_cost = False):
    """
    Optimizes w and b by running a gradient-descent algorithm.

    Parameters:
        w - weights, an array of shape (num_px * num_px * 3, 1)
        b - bias, a scalar
        X - data of shape (num_px * num_px * 3, number of training examples)
        Y - true label vector (0 for non-cat, 1 for cat) of shape (1, number of training examples)
        num_iterations - number of iterations of the optimization loop
        learning_rate - learning rate of the gradient-descent update rule
        print_cost - print the loss every 100 steps

    Returns:
        params - dictionary containing the weight w and bias b
        grads - dictionary containing the gradients of the weight and bias with respect to the cost function
        costs - a list of all costs computed during optimization; it will be used to draw the learning curve.

    Tips - there are two steps to write and iterate through:
        1) Use propagate() to compute the cost and gradient for the current parameters.
        2) Use the gradient-descent rule to update w and b.
    """

    costs = []

    for i in range(num_iterations):

        grads, cost = propagate(w, b, X, Y)

        dw = grads["dw"]
        db = grads["db"]

        w = w - learning_rate * dw
        b = b - learning_rate * db

        #Record cost
        if i % 100 == 0:
            costs.append(cost)
        #Print cost data
        if (print_cost) and (i % 100 == 0):
            print("Number of iterations: %i , Error value: %f" % (i,cost))

    params  = {
                "w" : w,
                "b" : b }
    grads = {
            "dw": dw,
            "db": db } 
    return (params , grads , costs)
#Test optimize
print("====================test optimize====================")
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
params , grads , costs = optimize(w , b , X , Y , num_iterations=100 , learning_rate = 0.009 , print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
##optimize() outputs the learned w and b; we can use them to predict the labels of a dataset X. Predictions are stored in the vector Y_prediction.
def predict(w , b , X ):
    """
    Predicts whether the label is 0 or 1 using the learned logistic-regression parameters (w, b).

    Parameters:
        w - weights, an array of shape (num_px * num_px * 3, 1)
        b - bias, a scalar
        X - data of shape (num_px * num_px * 3, number of examples)

    Returns:
        Y_prediction - a numpy array (vector) containing the [0 | 1] predictions for all pictures in X
    """

    m  = X.shape[1] #Number of pictures
    Y_prediction = np.zeros((1,m)) 
    w = w.reshape(X.shape[0],1)

    #Predict the probability of cats appearing in the picture
    A = sigmoid(np.dot(w.T , X) + b)
    for i in range(A.shape[1]):
        #Convert the probability A[0, i] into an actual prediction Y_prediction[0, i]
        Y_prediction[0,i] = 1 if A[0,i] > 0.5 else 0
    #Use assertions
    assert(Y_prediction.shape == (1,m))

    return Y_prediction
#Test predict
print("====================test predict====================")
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
print("predictions = " + str(predict(w, b, X)))


##Integrate the previous functions into a single model() function
def model(X_train , Y_train , X_test , Y_test , num_iterations = 2000 , learning_rate = 0.5 , print_cost = False):
    """
    Builds the logistic-regression model by calling the previously implemented functions.

    Parameters:
        X_train - numpy array, training set of shape (num_px * num_px * 3, m_train)
        Y_train - numpy array, training label set (vector) of shape (1, m_train)
        X_test - numpy array, test set of shape (num_px * num_px * 3, m_test)
        Y_test - numpy array, test label set (vector) of shape (1, m_test)
        num_iterations - hyperparameter: the number of iterations used to optimize the parameters
        learning_rate - hyperparameter: the learning rate used in the optimize() update rule
        print_cost - set to True to print the cost every 100 iterations

    Returns:
        d - a dictionary containing information about the model.
    """
    w , b = initialize_with_zeros(X_train.shape[0])

    parameters , grads , costs = optimize(w , b , X_train , Y_train,num_iterations , learning_rate , print_cost)

    #Retrieving parameters w and b from the dictionary Parameters
    w , b = parameters["w"] , parameters["b"]

    #Examples of prediction tests/training sets
    Y_prediction_test = predict(w , b, X_test)
    Y_prediction_train = predict(w , b, X_train)

    #Print the accuracy after training
    print("Training set accuracy:"  , format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100) ,"%")
    print("Test Set Accuracy:"  , format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100) ,"%")

    d = {
            "costs" : costs,
            "Y_prediction_test" : Y_prediction_test,
            "Y_prediciton_train" : Y_prediction_train,
            "w" : w,
            "b" : b,
            "learning_rate" : learning_rate,
            "num_iterations" : num_iterations }
    return d
print("====================test model====================")     
##This call uses the real dataset loaded above; see the loading code earlier.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)

 
