# Lab 05: Finding Structure in Time

## CS371: Cognitive Science, Bryn Mawr College, Fall 2016, Prof. Blank

This is a team-based project.

In the cell below, add your by-line, and delete this cell:

# 1. Predicting the next word

In this lab we will attempt to replicate the results from "Finding Structure in Time":

https://crl.ucsd.edu/~elman/Papers/fsit.pdf

First, we need some text. For this demo, I'll make up a short text. For your assignment, you should generate sentences the way Elman did in his paper: write a program that generates random sentences from the appropriate grammar.
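A sentence generator along those lines might be sketched as follows. This is a minimal sketch: the grammar categories, word lists, and templates below are placeholders for illustration; substitute the lexicon and templates from Elman's paper.

```python
import random

# Toy grammar in the spirit of Elman's generator. These categories and
# word lists are placeholders; use the lexicon from the paper instead.
grammar = {
    "NOUN-HUM": ["man", "woman"],
    "VERB-EAT": ["eat"],
    "NOUN-FOOD": ["cookie", "sandwich"],
    "VERB-PERCEPT": ["see", "smell"],
}

# Each template is a sequence of category names:
templates = [
    ("NOUN-HUM", "VERB-EAT", "NOUN-FOOD"),
    ("NOUN-HUM", "VERB-PERCEPT", "NOUN-FOOD"),
]

def generate_sentence():
    # Pick a template, then fill each slot with a random word:
    return " ".join(random.choice(grammar[category])
                    for category in random.choice(templates))

# Concatenate many generated sentences into one long training text:
text = " ".join(generate_sentence() for _ in range(100))
```

Because the corpus is one long, undelimited word stream, the result can be split and encoded exactly like the demo text that follows.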

In [218]:
text = ("me like you you like me me like apples me like bananas "
        "you like bananas you like apples you hate berries me "
        "like berries me need berries you need apples you need me").strip()


Next, we write some encoding and decoding functions:

In [219]:
text_words = text.split(" ")

words = list(set(text_words))

def encode(word):
    index = words.index(word)
    binary = [0] * len(words)
    binary[index] = 1
    return binary

def decode(pattern):
    winner = max(pattern)
    index = list(pattern).index(winner)
    return label(index)

def label(index):
    for word, pattern in patterns.items():
        if pattern[index] == 1:
            return word
    return None

# The encoding length equals the vocabulary size:
pattern_size = len(encode(words[0]))

patterns = {word: encode(word) for word in words}


text_words is the text corpus, as a list of words. Yours will be too long to display here.

In [220]:
text_words

Out[220]:
['me',
'like',
'you',
'you',
'like',
'me',
'me',
'like',
'apples',
'me',
'like',
'bananas',
'you',
'like',
'bananas',
'you',
'like',
'apples',
'you',
'hate',
'berries',
'me',
'like',
'berries',
'me',
'need',
'berries',
'you',
'need',
'apples',
'you',
'need',
'me']
In [221]:
words

Out[221]:
['berries', 'need', 'me', 'apples', 'like', 'bananas', 'you', 'hate']
In [222]:
patterns.keys()

Out[222]:
dict_keys(['berries', 'bananas', 'need', 'you', 'me', 'like', 'hate', 'apples'])

Testing our encoding and decoding functions:

In [223]:
encode("need")

Out[223]:
[0, 1, 0, 0, 0, 0, 0, 0]
In [224]:
decode(encode("need"))

Out[224]:
'need'
In [247]:
decode([0, 0.6, 0.5, 0, 0.1, 0, 0, 0])

Out[247]:
'need'
In [229]:
label(1)

Out[229]:
'need'

And now, we explore reading through a text, predicting what word comes next.

In [230]:
from conx import SRN

class Predict(SRN):
    def initialize_inputs(self):
        pass

    def inputs_size(self):
        # Return the number of inputs:
        return len(text_words)

    def get_inputs(self, i):
        current_word = text_words[i]
        next_word = text_words[(i + 1) % len(text_words)]
        return (patterns[current_word], patterns[next_word])

In [231]:
net = Predict(len(encode("need")), 5, len(encode("need")))

In [232]:
net.train(max_training_epochs=2000,
          report_rate=100,
          tolerance=0.3,
          epsilon=0.1)

--------------------------------------------------
Training for max trails: 2000 ...
Epoch: 0 TSS error: 101.397531278 %correct: 0.0
Epoch: 100 TSS error: 21.0288009846 %correct: 0.0
Epoch: 200 TSS error: 17.6554494885 %correct: 0.18181818181818182
Epoch: 300 TSS error: 14.6845872245 %correct: 0.30303030303030304
Epoch: 400 TSS error: 12.045306326 %correct: 0.36363636363636365
Epoch: 500 TSS error: 10.2342781218 %correct: 0.3939393939393939
Epoch: 600 TSS error: 9.24119739812 %correct: 0.48484848484848486
Epoch: 700 TSS error: 8.25721892617 %correct: 0.5151515151515151
Epoch: 800 TSS error: 7.19393364174 %correct: 0.5454545454545454
Epoch: 900 TSS error: 6.36376877071 %correct: 0.6363636363636364
Epoch: 1000 TSS error: 5.89511765761 %correct: 0.696969696969697
Epoch: 1100 TSS error: 5.55586028248 %correct: 0.696969696969697
Epoch: 1200 TSS error: 4.87985305528 %correct: 0.696969696969697
Epoch: 1300 TSS error: 4.19677499327 %correct: 0.7272727272727273
Epoch: 1400 TSS error: 3.75729352055 %correct: 0.7878787878787878
Epoch: 1500 TSS error: 3.51941705011 %correct: 0.7878787878787878
Epoch: 1600 TSS error: 3.74943253468 %correct: 0.7272727272727273
Epoch: 1700 TSS error: 4.20421350651 %correct: 0.7575757575757576
Epoch: 1800 TSS error: 3.84047673151 %correct: 0.696969696969697
Epoch: 1900 TSS error: 5.23020456274 %correct: 0.6666666666666666
Epoch: 2000 TSS error: 8.41181206316 %correct: 0.5757575757575758
--------------------------------------------------
Epoch: 2000 TSS error: 8.41181206316 %correct: 0.5757575757575758


And testing the trained network. You may have to train an amount comparable to what Elman did.

In [233]:
net.test()

--------------------------------------------------
Test:
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  2.11047114e-02   1.05353114e-03   1.51418711e-02   1.19526865e-05
8.90758773e-01   1.76233384e-04   2.73609951e-03   9.97449915e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  1.68494595e-01   5.12335672e-07   2.80982943e-03   3.66894766e-01
5.04145564e-02   1.01438571e-01   3.87937518e-01   2.34307921e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Incorrect
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  2.49153116e-03   4.51391506e-06   6.51019511e-04   2.87403159e-05
2.20248631e-02   1.04906100e-04   5.16446136e-01   1.33050121e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Incorrect
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  7.63738588e-03   7.70121481e-05   5.42627790e-05   1.82421577e-02
9.99785179e-01   3.71534302e-06   3.81533227e-03   7.47507437e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  2.72138737e-03   1.03817769e-06   6.68314947e-01   1.97579083e-02
2.29589172e-03   3.11517701e-01   2.24290093e-02   9.05405235e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  2.43303730e-04   9.62138636e-04   7.49287384e-01   7.96864676e-05
4.42785887e-01   4.81448906e-05   8.76438154e-04   1.76318817e-05]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  2.45163579e-03   1.00933830e-04   7.95025710e-02   2.01677608e-05
7.58015260e-01   1.26940071e-03   2.15958286e-03   1.05626171e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  3.01853069e-01   7.01521263e-07   4.03404015e-03   1.44094632e-01
5.55887688e-02   8.51190761e-02   2.98152682e-01   2.33844260e-03]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 0, 1, 0, 0, 0, 0]
Output: [  1.36616366e-03   3.71843192e-05   7.78310091e-01   3.21284946e-04
4.74596843e-03   1.47235183e-03   1.23189297e-02   1.82179371e-05]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  1.02372008e-04   1.38967935e-02   4.76467481e-02   1.19610946e-03
9.81004295e-01   2.58341082e-06   7.96482584e-04   6.69151434e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  2.86566945e-01   1.98456049e-07   4.45602436e-02   2.30433682e-02
3.02662121e-03   7.22371093e-01   2.05122679e-01   1.91805679e-04]
Target: [0, 0, 0, 0, 0, 1, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 1, 0, 0]
Output: [  9.41504760e-03   1.83388393e-06   9.65616367e-05   2.15182954e-04
3.36982844e-04   1.28766213e-04   9.83429747e-01   2.18583078e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  7.97012470e-06   1.06170275e-03   1.58996553e-05   4.68757002e-03
9.94052621e-01   1.68900137e-07   2.45129043e-04   1.40333322e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  1.50275642e-04   5.41158091e-06   4.49034615e-01   3.04748027e-01
1.31470267e-02   2.20919895e-02   1.71914396e-02   1.93466850e-05]
Target: [0, 0, 0, 0, 0, 1, 0, 0] Incorrect
******************************
Input : [0, 0, 0, 0, 0, 1, 0, 0]
Output: [  9.37406148e-02   8.82288042e-04   6.42291232e-05   2.32777862e-04
4.43631892e-06   4.87642341e-03   9.90319422e-01   2.55908234e-02]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  1.83441629e-06   2.32226398e-02   8.77017301e-07   6.04655782e-03
6.69673881e-01   2.08785137e-07   5.71574746e-03   6.48453780e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  4.52176240e-05   4.20894132e-05   5.19098230e-01   6.27089146e-01
9.53376179e-02   4.79575818e-04   5.24011345e-03   3.22883694e-05]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 0, 1, 0, 0, 0, 0]
Output: [  2.65817159e-02   4.63910461e-02   3.30035595e-03   8.91241319e-04
6.64390206e-07   4.08686769e-02   9.87687801e-01   5.37722203e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  1.68474568e-05   4.06370963e-01   4.88651551e-08   2.77661775e-02
2.36846565e-01   1.70497547e-06   5.35921607e-02   5.27398170e-01]
Target: [0, 0, 0, 0, 0, 0, 0, 1] Incorrect
******************************
Input : [0, 0, 0, 0, 0, 0, 0, 1]
Output: [  9.88908880e-01   7.74060325e-07   5.21202381e-03   1.01424465e-02
1.05953996e-01   8.63489687e-03   1.20903664e-02   4.34977768e-04]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Correct
******************************
Input : [1, 0, 0, 0, 0, 0, 0, 0]
Output: [  1.36752918e-03   4.77953113e-06   9.97719464e-01   3.31942383e-04
6.71359503e-03   2.37344984e-03   3.76397287e-02   1.32550461e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  1.51596227e-03   7.87156987e-03   8.53569453e-02   2.54742654e-05
7.20870026e-01   6.18437659e-05   7.34336944e-04   2.75104430e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input : [0, 0, 0, 0, 1, 0, 0, 0]
Output: [  3.44544470e-01   7.04084292e-07   6.45833494e-03   8.13592158e-02
1.95728602e-02   1.16880265e-01   2.97796244e-01   1.55181788e-03]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [1, 0, 0, 0, 0, 0, 0, 0]
Output: [  3.17410440e-04   3.14687237e-05   9.90580751e-01   3.37329048e-04
1.80777849e-03   1.14119762e-03   1.72421593e-03   1.53216105e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  1.32958219e-04   8.20324789e-01   3.19829350e-02   1.11927946e-03
9.66358774e-02   6.74801482e-05   4.44381739e-03   1.56713854e-03]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Correct
******************************
Input : [0, 1, 0, 0, 0, 0, 0, 0]
Output: [  9.87732840e-01   5.43951693e-06   1.78157917e-04   5.90880053e-02
5.17530874e-05   3.13743263e-03   1.93372204e-01   5.27002538e-04]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Correct
******************************
Input : [1, 0, 0, 0, 0, 0, 0, 0]
Output: [  9.92419619e-06   2.41993098e-03   5.75067467e-02   8.12808783e-02
1.65370076e-04   1.33069140e-05   7.71008160e-01   2.78237587e-04]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  3.28256519e-05   4.35508073e-01   1.10829858e-05   2.08800174e-03
1.08769399e-03   9.32113337e-05   9.77990744e-02   4.88744134e-02]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 1, 0, 0, 0, 0, 0, 0]
Output: [  2.15241661e-03   3.47028193e-03   8.16172703e-02   6.80181995e-01
7.56172012e-02   1.49831933e-05   1.66703890e-03   1.58680040e-04]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 0, 1, 0, 0, 0, 0]
Output: [  3.08310623e-02   2.05344187e-02   5.10135153e-02   4.00659795e-03
1.52332171e-06   1.09967567e-01   9.87389738e-01   2.50205606e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input : [0, 0, 0, 0, 0, 0, 1, 0]
Output: [  2.34789454e-04   4.59475180e-02   4.95691855e-09   2.00150300e-02
5.22988282e-01   1.82713086e-06   1.74963293e-01   8.22480171e-01]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 1, 0, 0, 0, 0, 0, 0]
Output: [  2.37807731e-04   4.32556902e-05   4.74251992e-01   1.41518631e-01
5.05144874e-02   2.25766644e-04   3.15199893e-04   9.34517546e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Incorrect
******************************
Input : [0, 0, 1, 0, 0, 0, 0, 0]
Output: [  8.28121014e-04   3.47704943e-04   9.87438973e-01   6.39413460e-05
4.71882961e-02   9.89063295e-04   5.96253336e-04   4.24409629e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
--------------------------------------------------
Epoch: 2000 TSS error: 44.6544609703 %correct: 0.5757575757575758


That is hard to read. conx comes with a way to override the display of the test input:

In [234]:
net.display_test_input = lambda inputs: print("Input:   " + decode(inputs))

In [235]:
net.test()

--------------------------------------------------
Test:
******************************
Input:   me
Output: [  2.19249592e-02   1.17853572e-03   1.51508181e-02   1.17684923e-05
8.80658274e-01   1.78081640e-04   2.75980500e-03   1.02082983e-03]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  1.68633200e-01   5.37501743e-07   2.67303674e-03   3.83480112e-01
5.36311967e-02   9.12961568e-02   3.90337956e-01   2.49953788e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Incorrect
******************************
Input:   you
Output: [  2.16347056e-03   4.82952415e-06   8.08335137e-04   2.89122275e-05
2.06314123e-02   1.06524202e-04   4.95778876e-01   1.15916079e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Incorrect
******************************
Input:   you
Output: [  9.61742864e-03   6.69739761e-05   4.38371364e-05   1.83955093e-02
9.99758159e-01   3.91596066e-06   5.33965782e-03   8.75034860e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  1.59776461e-03   1.29878715e-06   7.29287646e-01   2.11596787e-02
2.29595164e-03   2.55048556e-01   1.74241902e-02   6.95112533e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input:   me
Output: [  3.07335424e-04   1.39058538e-03   7.41881211e-01   5.20402761e-05
3.72726884e-01   6.48180062e-05   7.46687040e-04   1.89878503e-05]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Incorrect
******************************
Input:   me
Output: [  3.87894545e-03   1.10862830e-04   7.04280442e-02   1.92538352e-05
8.21127263e-01   9.34128216e-04   2.25373282e-03   1.48188002e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  2.56192982e-01   5.97268959e-07   3.62678330e-03   1.70199664e-01
4.94225080e-02   1.08877521e-01   3.20287196e-01   2.14924752e-03]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input:   apples
Output: [  1.40579901e-03   3.75261233e-05   7.76999273e-01   3.16727630e-04
5.09474389e-03   1.45644894e-03   1.03626231e-02   1.76586519e-05]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input:   me
Output: [  1.06682722e-04   1.23240065e-02   4.99856722e-02   1.10853709e-03
9.81406606e-01   2.62842450e-06   7.84537135e-04   6.36615205e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  2.85432805e-01   1.99667648e-07   4.40357606e-02   2.27686046e-02
3.04919084e-03   7.22351492e-01   2.06156247e-01   1.94165162e-04]
Target: [0, 0, 0, 0, 0, 1, 0, 0] Correct
******************************
Input:   bananas
Output: [  9.36015374e-03   1.82969909e-06   9.69154178e-05   2.15335855e-04
3.40391797e-04   1.28283504e-04   9.83131423e-01   2.17614662e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input:   you
Output: [  7.98351243e-06   1.06036439e-03   1.58897282e-05   4.68884010e-03
9.94042858e-01   1.69013953e-07   2.45893941e-04   1.40448823e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  1.49736082e-04   5.42130522e-06   4.49134660e-01   3.05200444e-01
1.31620030e-02   2.20300271e-02   1.71746246e-02   1.93392022e-05]
Target: [0, 0, 0, 0, 0, 1, 0, 0] Incorrect
******************************
Input:   bananas
Output: [  9.36684760e-02   8.84306483e-04   6.43202615e-05   2.32915976e-04
4.42099127e-06   4.88098069e-03   9.90338740e-01   2.55761337e-02]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input:   you
Output: [  1.83353213e-06   2.33225277e-02   8.76377696e-07   6.05267999e-03
6.69178789e-01   2.09102421e-07   5.72078856e-03   6.49224323e-02]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Incorrect
******************************
Input:   like
Output: [  4.53289548e-05   4.21343554e-05   5.19203271e-01   6.26932465e-01
9.54529086e-02   4.78352218e-04   5.23400907e-03   3.23054064e-05]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input:   apples
Output: [  2.65812939e-02   4.63874061e-02   3.29794795e-03   8.91123398e-04
6.64966748e-07   4.08808152e-02   9.87676790e-01   5.37781493e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input:   you
Output: [  1.68605982e-05   4.06146673e-01   4.88406973e-08   2.77651713e-02
2.36912270e-01   1.70499941e-06   5.36105997e-02   5.27476511e-01]
Target: [0, 0, 0, 0, 0, 0, 0, 1] Incorrect
******************************
Input:   hate
Output: [  9.88890843e-01   7.74040903e-07   5.21700364e-03   1.01339593e-02
1.05964210e-01   8.64054159e-03   1.20940010e-02   4.35022912e-04]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Correct
******************************
Input:   berries
Output: [  1.36747808e-03   4.77960702e-06   9.97719282e-01   3.31952967e-04
6.71283039e-03   2.37359387e-03   3.76371420e-02   1.32549355e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input:   me
Output: [  1.51576315e-03   7.87347276e-03   8.53596693e-02   2.54780360e-05
7.20835826e-01   6.18437867e-05   7.34368984e-04   2.75107671e-04]
Target: [0, 0, 0, 0, 1, 0, 0, 0] Correct
******************************
Input:   like
Output: [  3.44568631e-01   7.04157396e-07   6.45846718e-03   8.13533260e-02
1.95734872e-02   1.16863140e-01   2.97789985e-01   1.55190772e-03]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input:   berries
Output: [  3.17413297e-04   3.14686255e-05   9.90580720e-01   3.37330770e-04
1.80775522e-03   1.14119261e-03   1.72434594e-03   1.53221006e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
******************************
Input:   me
Output: [  1.32954070e-04   8.20331629e-01   3.19825450e-02   1.11932636e-03
9.66358241e-02   6.74787980e-05   4.44383620e-03   1.56716210e-03]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Correct
******************************
Input:   need
Output: [  9.87733221e-01   5.43966541e-06   1.78155206e-04   5.90902170e-02
5.17517026e-05   3.13741905e-03   1.93369006e-01   5.26993932e-04]
Target: [1, 0, 0, 0, 0, 0, 0, 0] Correct
******************************
Input:   berries
Output: [  9.92407044e-06   2.42001866e-03   5.75066338e-02   8.12811261e-02
1.65360061e-04   1.33070947e-05   7.71018275e-01   2.78239813e-04]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input:   you
Output: [  3.28237683e-05   4.35526262e-01   1.10820617e-05   2.08812196e-03
1.08775153e-03   9.32060592e-05   9.77945742e-02   4.88765141e-02]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input:   need
Output: [  2.15271195e-03   3.47011746e-03   8.16155037e-02   6.80198988e-01
7.56157304e-02   1.49827629e-05   1.66695218e-03   1.58669707e-04]
Target: [0, 0, 0, 1, 0, 0, 0, 0] Incorrect
******************************
Input:   apples
Output: [  3.08317786e-02   2.05337084e-02   5.10131164e-02   4.00662821e-03
1.52320789e-06   1.09967841e-01   9.87391212e-01   2.50207820e-03]
Target: [0, 0, 0, 0, 0, 0, 1, 0] Correct
******************************
Input:   you
Output: [  2.34761297e-04   4.59474863e-02   4.95743532e-09   2.00144274e-02
5.22990532e-01   1.82703964e-06   1.74951905e-01   8.22466840e-01]
Target: [0, 1, 0, 0, 0, 0, 0, 0] Incorrect
******************************
Input:   need
Output: [  2.37808011e-04   4.32540607e-05   4.74280040e-01   1.41513223e-01
5.05119183e-02   2.25784525e-04   3.15222967e-04   9.34475450e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Incorrect
******************************
Input:   me
Output: [  8.28123060e-04   3.47750600e-04   9.87437099e-01   6.39344593e-05
4.71903866e-02   9.88990231e-04   5.96227738e-04   4.24438759e-06]
Target: [0, 0, 1, 0, 0, 0, 0, 0] Correct
--------------------------------------------------
Epoch: 2000 TSS error: 44.7775832351 %correct: 0.6060606060606061


That is better. But we can also do the same for displaying the outputs:

In [236]:
def display_outputs(outputs, result="Outputs", label=None):
    print(result + ": " + decode(outputs))

In [237]:
net.display_test_output = display_outputs

In [238]:
net.test()

--------------------------------------------------
Test:
******************************
Input:   me
Outputs: like
Correct: like
******************************
Input:   like
Outputs: you
Incorrect: you
******************************
Input:   you
Outputs: you
Incorrect: you
******************************
Input:   you
Outputs: like
Correct: like
******************************
Input:   like
Outputs: me
Correct: me
******************************
Input:   me
Outputs: me
Incorrect: me
******************************
Input:   me
Outputs: like
Correct: like
******************************
Input:   like
Outputs: you
Incorrect: apples
******************************
Input:   apples
Outputs: me
Correct: me
******************************
Input:   me
Outputs: like
Correct: like
******************************
Input:   like
Outputs: bananas
Correct: bananas
******************************
Input:   bananas
Outputs: you
Correct: you
******************************
Input:   you
Outputs: like
Correct: like
******************************
Input:   like
Outputs: me
Incorrect: bananas
******************************
Input:   bananas
Outputs: you
Correct: you
******************************
Input:   you
Outputs: like
Incorrect: like
******************************
Input:   like
Outputs: apples
Incorrect: apples
******************************
Input:   apples
Outputs: you
Correct: you
******************************
Input:   you
Outputs: hate
Incorrect: hate
******************************
Input:   hate
Outputs: berries
Correct: berries
******************************
Input:   berries
Outputs: me
Correct: me
******************************
Input:   me
Outputs: like
Correct: like
******************************
Input:   like
Outputs: berries
Incorrect: berries
******************************
Input:   berries
Outputs: me
Correct: me
******************************
Input:   me
Outputs: need
Correct: need
******************************
Input:   need
Outputs: berries
Correct: berries
******************************
Input:   berries
Outputs: you
Correct: you
******************************
Input:   you
Outputs: need
Incorrect: need
******************************
Input:   need
Outputs: apples
Incorrect: apples
******************************
Input:   apples
Outputs: you
Correct: you
******************************
Input:   you
Outputs: hate
Incorrect: need
******************************
Input:   need
Outputs: me
Incorrect: me
******************************
Input:   me
Outputs: me
Correct: me
--------------------------------------------------
Epoch: 2000 TSS error: 44.777553161 %correct: 0.6060606060606061


Better! Why is it that sometimes the "output" may be the same word as "correct" but is still marked "Incorrect"?

# 2. Analysis

Elman produced "dendrograms" (tree plots) to show the similarity of the hidden activations associated with each word. We can do the same.

In [239]:
%matplotlib inline
import io
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy
from scipy.spatial import distance


We will plot how close each word's hidden-layer activation pattern is to the others. This is a way of seeing clustering among high-dimensional numeric representations.

To do this, we need to get the "hidden layer activations" for each word, in a proper order.

In [240]:
net.layer[0].propagate(encode("need"))

Out[240]:
array([ 0.94643207,  0.02253122,  0.75723384,  0.03539211,  0.99686973])

Next, we will go through the words in the text, and get the hidden layer activations.

Note that each time a word recurs, we overwrite its previous activations. A better method would be to average each word's hidden-layer activations over all of its occurrences in the text.
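A sketch of that averaging idea, written as a stand-alone helper. Here `get_activation` is a placeholder for a call such as `net.layer[0].propagate(patterns[word])`:

```python
from collections import defaultdict

import numpy as np

def average_activations(word_sequence, get_activation):
    # Collect every activation each word produces as the text is read,
    # then average them per word:
    collected = defaultdict(list)
    for word in word_sequence:
        collected[word].append(get_activation(word))
    return {word: np.mean(activations, axis=0)
            for word, activations in collected.items()}
```

In the notebook this would be called as `average_activations(text_words, lambda word: net.layer[0].propagate(patterns[word]))`, giving one averaged vector per word instead of just the last one seen.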

In [241]:
hiddens_dict = {}
for word in text_words:
    hiddens_dict[word] = net.layer[0].propagate(patterns[word])


Next, we get those hidden layer activations in the order that matches the "words" list:

In [242]:
hiddens = []
for word in words:
    hiddens.append(hiddens_dict[word])


Now, we are ready to process the hidden layer activations to display as a dendrogram.

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html

In [243]:
linkage = hierarchy.linkage(hiddens)


Let's make the figure big enough to easily see (figsize units are inches; the on-screen pixel size depends on the DPI):

In [244]:
plt.rcParams["figure.figsize"] = (13, 5)

In [245]:
threshold = 0.3
clusters = hierarchy.fcluster(linkage, threshold, criterion="distance")

In [246]:
hierarchy.dendrogram(linkage, color_threshold=0.3, leaf_label_func=label, leaf_rotation=90)
plt.xlabel("Words")
plt.ylabel("Distance")

Out[246]:
<matplotlib.text.Text at 0x7f54a78e25f8>

You may want to explore the options for the dendrogram here:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html
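For example, passing `no_plot=True` makes `dendrogram` return its layout dictionary without drawing, which is handy for inspecting the leaf order; `labels` and `orientation` are also worth trying. The data below is random stand-in data, not the trained network's activations:

```python
import numpy as np
from scipy.cluster import hierarchy

# Random stand-in data: 8 fake "words", 5 hidden-unit activations each.
rng = np.random.RandomState(0)
fake_hiddens = rng.rand(8, 5)
fake_linkage = hierarchy.linkage(fake_hiddens)

# With no_plot=True, dendrogram returns a dict instead of drawing;
# its "ivl" entry lists the leaf labels in left-to-right plot order.
info = hierarchy.dendrogram(fake_linkage,
                            labels=["w%d" % i for i in range(8)],
                            no_plot=True)
print(info["ivl"])
```

The same call on the real `linkage`, with `leaf_label_func=label`, lets you check which words end up adjacent before committing to a particular `color_threshold`.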

# 3. Team Reflections

Write about what you have found. There is much you can discuss here: explain the experiment and your results, and where they agree or disagree with Elman's.

# 4. Reflections

As usual, please reflect deeply on this week's lab. What was challenging, easy, or surprising? Connect the topics to what you already know.