Post by Umar on Nov 30, 2016 0:19:30 GMT
Hey John,
I saw in a previous email of yours that you wanted to use Flask. The model I am looking at also uses Flask, acting as a link between the terminal and the HTML interface. The model is here on my Google Drive: drive.google.com/drive/folders/0B-enYXLoEzulTm1jM21tTmxzcGM?usp=sharing Inside seq2seq_model/ui you can find the files. Can you look at them and see how it works? We can use the same technique for our purposes.
Thanks
john
New Member
Posts: 3
Post by john on Nov 30, 2016 0:38:47 GMT
Okay I will take a look. Thanks Umar
Post by john on Dec 1, 2016 8:11:29 GMT
So I've managed to get the HTML interface to work. We need to add these lines of code in app.py after the two import lines. This will look like the following:
from flask import Flask, render_template, request
from flask import jsonify
import sys
import os

sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
app = Flask(__name__, static_url_path="/static")
...
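To show how the pieces fit together, here's a minimal self-contained sketch of the pattern (the `/reply` route name and the stubbed `decode_line` are my inventions for illustration; the real decoding lives in execute.py):

```python
import os
import sys

# Make the parent directory (where execute.py / seq2seq_model.py live)
# importable. Guarded so the snippet also runs in contexts without __file__.
if "__file__" in globals():
    sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from flask import Flask, jsonify, request

app = Flask(__name__, static_url_path="/static")


def decode_line(text):
    # Stub standing in for the model's decode; replace with the real call.
    return "echo: " + text


@app.route("/reply", methods=["POST"])
def reply():
    # The HTML page POSTs the user's text as JSON and formats the reply.
    user_text = request.get_json().get("text", "")
    return jsonify({"reply": decode_line(user_text)})


if __name__ == "__main__":
    app.run(debug=True)
```

So the browser never talks to the terminal directly; Flask sits in between and hands the model's string back as JSON for the page to format.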
However, I was only able to type in one input. When I try to type a second input, the program freezes. I will look into why this is happening.
EDIT: I get some errors saying that some functions in our model are deprecated. We also need to change line 166 in seq2seq_model.py to self.saver = tf.train.Saver(tf.global_variables()) and line 107 in execute.py to session.run(tf.global_variables_initializer()).
Post by jmewasiuk on Dec 1, 2016 18:31:29 GMT
FYI: I've been playing around with the decode() method in execute.py to see if we can get more than 1-sentence results. The current sample code is basically looking for the first EOS token and just taking the words up to that point.
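To illustrate what I mean (with a made-up EOS id and token lists, not the project's actual variables): the sample effectively cuts at the first EOS, whereas splitting on every EOS would recover multi-sentence output:

```python
EOS_ID = 2  # hypothetical id for the EOS token


def first_sentence(output_ids):
    # What the sample code effectively does: keep tokens up to the first EOS.
    if EOS_ID in output_ids:
        return output_ids[:output_ids.index(EOS_ID)]
    return output_ids


def all_sentences(output_ids):
    # Alternative: split on every EOS to keep all sentences in the output.
    sentences, current = [], []
    for tok in output_ids:
        if tok == EOS_ID:
            if current:
                sentences.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        sentences.append(current)
    return sentences
```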
In addition, I'm looking at caching results so the UI can just send in the input text, call decode(), and have the output "just provided", leaving the UI only the job of formatting the string. (If that makes sense.) Since we are posting a "conversation", having a one-sentence response from only one of the trained personalities is only a subset of what we originally wanted to display.
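The caching part could be as small as a memoized wrapper around decode(). A sketch (the `decode_stub` is a placeholder for the real decode() in execute.py):

```python
from functools import lru_cache


def decode_stub(text):
    # Placeholder for the model's actual response.
    return "response to: " + text


@lru_cache(maxsize=256)
def cached_decode(text):
    # Repeat requests for the same input return the stored string
    # immediately, so the UI only has to format it.
    return decode_stub(text)
```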
Also, I think the UI with the seq2seq sample is a chatbot style, meaning the user needs to keep feeding input into the decode() method.
It shouldn't be too difficult to do this, and it's not really ML-related; it's just a matter of figuring out where and how we need to loop the decode() method.
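The loop itself is just a read-decode-print cycle; a sketch with the I/O and decode passed in as functions (names are mine, not from the sample code):

```python
def chat_loop(decode_fn, read_input, write_output):
    # Keep feeding user input into decode() until the user quits
    # or the input source runs out.
    while True:
        line = read_input()
        if line is None or line.strip().lower() in ("quit", "exit"):
            break
        write_output(decode_fn(line))


# Interactively this would be:
# chat_loop(decode, input, print)
```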