{"id":183275,"date":"2024-02-21T13:25:45","date_gmt":"2024-02-21T19:25:45","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/02\/lets-build-the-gpt-tokenizer"},"modified":"2024-02-21T13:25:45","modified_gmt":"2024-02-21T19:25:45","slug":"lets-build-the-gpt-tokenizer","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/02\/lets-build-the-gpt-tokenizer","title":{"rendered":"Let\u2019s build the GPT Tokenizer"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/zduSFxRajkE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>W\/ Andrej Karpathy<\/p>\n<hr>\n<p>The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets, training algorithms (Byte Pair Encoding), and after training implement two fundamental functions: encode() from strings to tokens, and decode() back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. 
We\u2019ll go through a number of these issues, discuss why tokenization is at fault, and why, ideally, someone finds a way to delete this stage entirely.<\/p>\n<p>Chapters:<br \/> 00:00:00 intro: Tokenization, GPT-2 paper, tokenization-related issues.<br \/> 00:05:50 tokenization by example in a Web UI (tiktokenizer)<br \/> 00:14:56 strings in Python, Unicode code points.<br \/> 00:18:15 Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32<br \/> 00:22:47 daydreaming: deleting tokenization.<br \/> 00:23:50 Byte Pair Encoding (BPE) algorithm walkthrough.<br \/> 00:27:02 starting the implementation.<br \/> 00:28:35 counting consecutive pairs, finding most common pair.<br \/> 00:30:36 merging the most common pair.<br \/> 00:34:58 training the tokenizer: adding the while loop, compression ratio.<br \/> 00:39:20 tokenizer\/LLM diagram: it is a completely separate stage.<br \/> 00:42:47 decoding tokens to strings.<br \/> 00:48:21 encoding strings to tokens.<br \/> 00:57:36 regex patterns to force splits across categories.<br \/> 01:11:38 tiktoken library intro, differences between GPT-2\/GPT-4 regex.<br \/> 01:14:59 GPT-2 encoder.py released by OpenAI walkthrough.<br \/> 01:18:26 special tokens, tiktoken handling of, GPT-2\/GPT-4 differences.<br \/> 01:25:28 minbpe exercise time! write your own GPT-4 tokenizer.<br \/> 01:28:42 sentencepiece library intro, used to train Llama 2 vocabulary.<br \/> 01:43:27 how to set vocabulary size? revisiting gpt.py transformer.<br \/> 01:48:11 training new tokens, example of prompt compression.<br \/> 01:49:58 multimodal [image, video, audio] tokenization with vector quantization.<br \/> 01:51:41 revisiting and explaining the quirks of LLM tokenization.<br \/> 02:10:20 final recommendations.<br \/> 02:12:50 ??? 
<span class=\"wp-smiley emoji emoji-smile\" title=\":)\">smile<\/span>  <\/p>\n<p>Exercises:<br \/> - Advised flow: reference this document and try to implement the steps before I give away the partial solutions in the video. The full solutions if you\u2019re getting stuck are in the minbpe code <a href=\"https:\/\/github.com\/karpathy\/minbpe\/bl\">https:\/\/github.com\/karpathy\/minbpe\/bl<\/a>\u2026<\/p>\n<p>Links:<\/p>\n<div class=\"more-link-wrapper\"> <a class=\"more-link\" href=\"https:\/\/lifeboat.com\/blog\/2024\/02\/lets-build-the-gpt-tokenizer\">Continue reading \u201cLet\u2019s build the GPT Tokenizer\u201d | &gt;<\/a><\/div><\/p>\n","protected":false},"excerpt":{"rendered":"<p>W\/ Andrej Karpathy The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets, training algorithms (Byte Pair Encoding), and after training implement two fundamental functions: encode() from 
[\u2026]<\/p>\n","protected":false},"author":709,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1495,41,6],"tags":[],"class_list":["post-183275","post","type-post","status-publish","format-standard","hentry","category-health","category-information-science","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/183275","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/709"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=183275"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/183275\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=183275"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=183275"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=183275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}