
Human-machine conversation
```python
training = numpy.array(training)
output = numpy.array(output)
```
The `training` and `output` lists are converted to NumPy arrays so TensorFlow can operate on them as matrices. Instead of calling TensorFlow directly, we use tflearn, which wraps TensorFlow's API at a higher level of abstraction.
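As a quick sanity check (a minimal sketch with hypothetical stand-in data, not part of the original script), the resulting arrays are plain 2-D NumPy matrices whose row count is the number of samples:

```python
import numpy

# Hypothetical stand-ins: two bag-of-words rows and their one-hot tag rows.
training = numpy.array([[0, 1, 0, 1], [1, 0, 1, 0]])
output = numpy.array([[1, 0], [0, 1]])

print(training.shape)  # (2, 4): 2 samples, 4-word vocabulary
print(output.shape)    # (2, 2): 2 samples, 2 tags
```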
```python
tensorflow.reset_default_graph()

print(len(training[0]))

net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
```
- `tensorflow.reset_default_graph()` clears the default graph so the network can be rebuilt from a clean state.
- `tflearn.input_data` defines the input layer. Its shape matches the dimensionality of one training sample; `None` means the number of samples per batch is left unspecified.
- `tflearn.fully_connected` adds a hidden layer in which every neuron is connected to every output of the previous layer.
```python
net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(output[0]), activation='softmax')
net = tflearn.regression(net)

model = tflearn.DNN(net)
```
For each sample we have 41 input values (the length of the bag-of-words vector). Every input is fully connected to the first hidden layer of 8 neurons, and that layer is in turn fully connected to a second hidden layer of 8 neurons. The output layer has one neuron per tag, and the softmax activation turns its outputs into a probability distribution over the tags.
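To make the 41-dimensional input concrete, here is a sketch of how one sentence becomes such a vector. The helper name `bag_of_words` is ours, but the stemming and vocabulary lookup mirror the preprocessing in the full script below:

```python
import nltk
import numpy
from nltk.stem.lancaster import LancasterStemmer

stemmer = LancasterStemmer()

def bag_of_words(sentence, words):
    """Return a 0/1 vector with one slot per vocabulary word in `words`."""
    s_words = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(sentence)]
    return numpy.array([1 if w in s_words else 0 for w in words])

# With a 41-word vocabulary this yields the 41 inputs fed to the network.
```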
```python
model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
```
```
Training Step: 2979 | total loss: 0.32779 | time: 0.011s
| Adam | epoch: 993 | loss: 0.32779 - acc: 0.9299 -- iter: 23/23
Training Step: 2982 | total loss: 0.25613 | time: 0.006s
| Adam | epoch: 994 | loss: 0.25613 - acc: 0.9489 -- iter: 23/23
Training Step: 2985 | total loss: 0.44631 | time: 0.008s
| Adam | epoch: 995 | loss: 0.44631 - acc: 0.9140 -- iter: 23/23
Training Step: 2988 | total loss: 0.35128 | time: 0.007s
| Adam | epoch: 996 | loss: 0.35128 - acc: 0.9261 -- iter: 23/23
Training Step: 2991 | total loss: 0.28417 | time: 0.008s
| Adam | epoch: 997 | loss: 0.28417 - acc: 0.9345 -- iter: 23/23
Training Step: 2994 | total loss: 0.24740 | time: 0.011s
| Adam | epoch: 998 | loss: 0.24740 - acc: 0.9279 -- iter: 23/23
Training Step: 2997 | total loss: 0.58711 | time: 0.007s
| Adam | epoch: 999 | loss: 0.58711 - acc: 0.8560 -- iter: 23/23
Training Step: 3000 | total loss: 0.45620 | time: 0.007s
| Adam | epoch: 1000 | loss: 0.45620 - acc: 0.8807 -- iter: 23/23
```
The full script, with the imports it needs (the intents file name is assumed from the earlier steps of this walkthrough):

```python
import json
import pickle

import nltk
import numpy
import tensorflow
import tflearn
from nltk.stem.lancaster import LancasterStemmer

stemmer = LancasterStemmer()

# Assumed file name: the intents JSON prepared earlier.
with open("intents.json") as file:
    data = json.load(file)

try:
    # Reuse the preprocessed data if it was pickled on a previous run.
    with open("data.pickle", "rb") as f:
        words, labels, training, output = pickle.load(f)
except FileNotFoundError:
    words = []
    labels = []
    docs_x = []
    docs_y = []

    # Tokenize every pattern and remember which tag it belongs to.
    for intent in data["intents"]:
        for pattern in intent["patterns"]:
            wrds = nltk.word_tokenize(pattern)
            words.extend(wrds)
            docs_x.append(wrds)
            docs_y.append(intent["tag"])

        if intent["tag"] not in labels:
            labels.append(intent["tag"])

    # Stem, lowercase, and deduplicate the vocabulary.
    words = [stemmer.stem(w.lower()) for w in words if w != "?"]
    words = sorted(set(words))
    labels = sorted(labels)

    training = []
    output = []
    out_empty = [0 for _ in range(len(labels))]

    # Turn each tokenized pattern into a bag-of-words vector
    # and its tag into a one-hot output row.
    for x, doc in enumerate(docs_x):
        bag = []
        wrds = [stemmer.stem(w) for w in doc]
        for w in words:
            bag.append(1 if w in wrds else 0)

        output_row = out_empty[:]
        output_row[labels.index(docs_y[x])] = 1

        training.append(bag)
        output.append(output_row)

    training = numpy.array(training)
    output = numpy.array(output)

    # Cache the arrays so the next run can skip preprocessing.
    with open("data.pickle", "wb") as f:
        pickle.dump((words, labels, training, output), f)

tensorflow.reset_default_graph()

print(len(training[0]))  # 41 in this example

net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(output[0]), activation='softmax')
net = tflearn.regression(net)

model = tflearn.DNN(net)

try:
    # Reuse a previously trained model if one exists on disk.
    model.load("model.tflearn")
except Exception:
    model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
    model.save("model.tflearn")
```
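Once trained (or loaded), the model can be queried. A minimal usage sketch, assuming the hypothetical `bag_of_words` helper from earlier: `model.predict` returns one softmax row per input, and the index of the largest probability selects the tag.

```python
# Hypothetical user input; bag_of_words is the helper sketched earlier.
sentence = "Hello, how are you?"
results = model.predict([bag_of_words(sentence, words)])[0]

# The highest-probability entry maps back to an intent tag.
tag = labels[numpy.argmax(results)]
print(tag, results[numpy.argmax(results)])
```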