Andrej Karpathy Video
Code
Pulling the dataset we will be working on:
curl https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt -o input.txt
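Equivalently, staying inside Python — a minimal sketch using only the standard library, assuming the same URL and output filename as above:
import urllib.request

url = 'https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt'
urllib.request.urlretrieve(url, 'input.txt')  # download to the current directory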
Reading it into Python:
with open('input.txt', 'r', encoding='utf-8') as f:
    text = f.read()
Data inspection
print("length of dataset in characters: ", len(text))
print("length of data: ", len(data))
print(text[:1000])
chars = sorted(list(set(text)))  # the unique characters that occur in the text
vocab_size = len(chars)
print(''.join(chars))
print(vocab_size)
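For this particular file, the length should come out to roughly 1.1 million characters, and the vocabulary to 65 distinct characters (upper- and lowercase letters plus punctuation and whitespace).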
Tokeniser
stoi = { ch:i for i,ch in enumerate(chars) }  # character -> integer id
itos = { i:ch for i,ch in enumerate(chars) }  # integer id -> character
encode = lambda s: [stoi[c] for c in s]           # string -> list of ints
decode = lambda l: ''.join([itos[i] for i in l])  # list of ints -> string
print(encode("hello world"))
print(decode(encode("hello world")))
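Because encode and decode are built from the same character-to-id bijection, decoding an encoding should always reproduce the input exactly; a quick sanity check:
assert decode(encode("hello world")) == "hello world"
assert decode(encode(text[:100])) == text[:100]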
import torch
data = torch.tensor(encode(text), dtype=torch.long)  # entire text as a 1-D tensor of character ids
print(data.shape, data.dtype)
print(data[:1000])
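The dtype matters here: these integer ids will later index an embedding table, and PyTorch expects int64 (long) tensors for embedding lookups.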
n = int(0.9*len(data))  # first 90% for training, remainder held out for validation
train_data = data[:n]
val_data = data[n:]
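The split is a simple chronological 90/10 cut: the model trains on the first 90% of the text, and the held-out tail is used to check how well it generalises to text it has never seen.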
Understanding how each (n+1)-th token is predicted from its preceding context
block_size = 8
print(train_data[:block_size])
x = train_data[:block_size]
y = train_data[1:block_size+1]
for t in range(block_size):
    context = x[:t+1]
    target = y[t]
    print(f"at input {context}, target {target}")
Note that a single block of size 8 packs in 8 training examples: one for each context length from 1 up to block_size.
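To turn this into training data at scale, many such chunks are sampled at random and stacked into a batch. A minimal sketch of that pattern (batch_size and the get_batch name here are illustrative):
torch.manual_seed(1337)
batch_size = 4  # number of independent sequences processed in parallel

def get_batch(split):
    # pick batch_size random starting offsets, then stack the chunks into (B, T) tensors
    data = train_data if split == 'train' else val_data
    ix = torch.randint(len(data) - block_size, (batch_size,))
    x = torch.stack([data[i:i+block_size] for i in ix])
    y = torch.stack([data[i+1:i+block_size+1] for i in ix])
    return x, y

xb, yb = get_batch('train')
print(xb.shape, yb.shape)  # both (batch_size, block_size), i.e. (4, 8)
Each row of y is the corresponding row of x shifted left by one, so every position in the batch is an independent (context, target) example just like the loop above.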