Text Vectorization

Vuk Dukic
Founder, Senior Software Engineer
October 9, 2023

Introduction

Machine Learning (ML) models are used for making predictions. Predictions could be about the weather, whether a user will click on an ad/movie/song, the answer to a question, etc. To make a prediction, the model needs to be provided with input data that contains information it can use to make that prediction.

The way input data is presented to a model is quite critical and can determine how easily the model extracts information from it. LLMs are no different; today we'll dive into how we need to present input data to them.


Text Vectorization: Converting text to numbers

On receiving input, ML models perform a series of operations, such as multiplications, to produce a numerical output that is translated into a prediction. In the world of LLMs the model is provided with a prompt made up of text, but running the mathematical operations associated with the internal workings of an LLM requires converting that text into numerical values.

The conversion of text into numerical values is called text vectorization. A vector is a sequence of numbers, analogous to an array of numbers in the context of programming. When dealing with ML libraries it's common for arrays of numbers to be converted to vector objects, since these make mathematical operations run more efficiently. For example, in numpy you would turn an array into a vector like this:

import numpy as np

# Array of numbers
a = [1,2,3]

# Convert array to a vector
vector_a = np.asarray(a)

Tokenization

Tokenization is the process of breaking a piece of text into units called tokens. Depending on the methodology, tokens can be individual characters, words, or groups of characters that make up words, often called subwords. A tokenizer is the algorithm responsible for tokenizing text.

The simplest tokenizer one can imagine (for English) splits a document at every space character or punctuation mark:

import re

def tokenize_document(document):
    # Regular expression pattern matching runs of
    # whitespace and punctuation that separate tokens
    pattern = r"[\s.,;!?()]+"

    # Use re.split to tokenize the document
    tokens = re.split(pattern, document)

    # Remove empty tokens
    tokens = [token for token in tokens if token]

    return tokens

text = 'sample sentence. It contains punctuation!'

tokenize_document(text)

>>> ['sample', 'sentence', 'It', 'contains', 'punctuation'] 

Building a vocabulary

A vocabulary is the set of all tokens that an ML model would be able to recognize. The English language has around 170k words. Imagine how huge this number would be for a multilingual use case.

We need to put a cap on the size of our vocabulary to ensure computational efficiency. The vocabulary is often capped by counting the frequency of tokens in a huge corpus of text and choosing the top-k tokens, where k corresponds to the vocabulary size.

from collections import Counter

def build_top_k_vocab(corpus, k):
    # Initialize a Counter to count token frequencies
    token_counter = Counter()

    # Tokenize and count tokens in each document
    for document in corpus:
        tokens = tokenize_document(document)
        token_counter.update(tokens)

    # Get the top k tokens by frequency
    top_k_tokens = [token for token, _ in 
                   token_counter.most_common(k)]

    return set(top_k_tokens)

# Example usage:
corpus = [
    "This is a sample sentence with some words.",
    "Another sample sentence with some repeating words.",
    "And yet another sentence to build the vocabulary.",
]

build_top_k_vocab(corpus, k=5)
>>> {'sample', 'sentence', 'some', 'with', 'words'}

Converting Tokens to Numerical Values

Now that we have a vocabulary, we assign an id to each token in it. This can be done through simple enumeration, maintaining a map/dictionary between the ids and their corresponding tokens. The id map will help us keep track of which tokens are present in a document.
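
As a minimal sketch (reusing the build_top_k_vocab output above, and sorting the vocabulary only to make the ids deterministic), the id map could be built like this:

# Build the vocabulary from the example corpus above
vocab = build_top_k_vocab(corpus, k=5)

# Assign an id to each token via simple enumeration,
# keeping maps in both directions
token_to_id = {token: i for i, token in enumerate(sorted(vocab))}
id_to_token = {i: token for token, i in token_to_id.items()}

token_to_id
>>> {'sample': 0, 'sentence': 1, 'some': 2, 'with': 3, 'words': 4}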

The identified tokens now have to be translated into features that help an ML model extract information. While a token can be represented as a scalar or a vector, a document is always represented as a vector built from the representations of the tokens it is made up of.

Some of the common ways to encode a document into features are:

  1. Binary Document-Term Vector: Each document is represented as a vector whose size is equal to the size of the vocabulary. The id of each token in the vocabulary corresponds to an index position in the vector. A value of 1 is assigned to that index position when the corresponding token is present in the document and 0 if it's absent (see the sketch after this list).

  2. Bag of Words (BoW): Similar to approach 1, but the vector’s indices map to the frequency of the token in the document.

  3. N-gram vectors: This extends the above approaches by allowing index positions that correspond to bi-grams, tri-grams, etc.

  4. Tf-idf: Similar to BoW, but instead of the raw frequency, a token is assigned a value based on both its frequency within a document and how many documents in the corpus it occurs in. The intuition is that tokens that are rare in general but occur many times in a specific document are important, while tokens that are plentiful across all documents (like the words "and, an, the, it etc.") are not.

  5. Embeddings: This approach is used in most deep neural networks, including LLMs. Each id is mapped to a unique n-dimensional vector called an embedding. The advantage of this approach is that rather than relying on a single hand-crafted feature per token, such as the ones mentioned above, an LLM can learn a richer high-dimensional feature via backpropagation. An embedding is meant to capture the meaning of a token and the context in which it occurs.
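
As a rough illustration of the first two approaches (tf-idf and learned embeddings are left out), here is how a binary document-term vector and a bag-of-words vector could be computed with the token_to_id map sketched earlier; in practice you would typically reach for a library such as scikit-learn.

import numpy as np

def binary_vector(document, token_to_id):
    # 1 at a token's index if the token appears in the document, else 0
    vec = np.zeros(len(token_to_id))
    for token in tokenize_document(document):
        if token in token_to_id:
            vec[token_to_id[token]] = 1
    return vec

def bow_vector(document, token_to_id):
    # Each index holds how many times the token occurs in the document
    vec = np.zeros(len(token_to_id))
    for token in tokenize_document(document):
        if token in token_to_id:
            vec[token_to_id[token]] += 1
    return vec

bow_vector("A sample sentence with some sample words.", token_to_id)
>>> array([2., 1., 1., 1., 1.])

For embeddings, each id would instead index a row of a learned matrix of shape (vocabulary size, embedding dimension); deep learning frameworks expose this as an embedding layer whose values are updated during training via backpropagation.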
