Text Vectorization

Vuk Dukic
Founder, Senior Software Engineer
October 9, 2023

Introduction

Machine Learning (ML) models are used to make predictions: about the weather, whether a user will click on an ad, movie, or song, the answer to a question, and so on. To make a prediction, a model needs to be given input data that contains information it can use.

The way input data is presented to a model is critical and can determine how easily the model extracts information from it. LLMs are no different; today we'll dive into how input data needs to be presented to them.


Text Vectorization: Converting text to numbers

On receiving input, ML models perform a series of operations, such as multiplications, to produce a numerical output that is translated into a prediction. In the world of LLMs the model is provided with a prompt made up of text; however, running the mathematical operations behind the internal workings of an LLM requires converting that text to numerical values.

The conversion of text into numerical values is called text vectorization. A vector is a sequence of numbers, analogous to an array of numbers in the context of programming. When working with ML libraries it's common to convert arrays of numbers into vector objects, since these make mathematical operations run more efficiently. For example, in NumPy you'd turn a list of numbers into a vector like this:

import numpy as np

# Python list of numbers
a = [1, 2, 3]

# Convert the list to a NumPy array (vector)
vector_a = np.asarray(a)

Tokenization

Tokenization is the process of breaking a piece of text into units called tokens. Depending on the methodology, tokens can be characters, words, or groups of characters that make up words, often called subwords. A tokenizer is the algorithm responsible for tokenizing text.

The simplest tokenizer one can imagine (for English) is to split a document at every space character or punctuation mark.

import re

def tokenize_document(document):
    # Regular expression matching runs of whitespace and
    # punctuation, which act as delimiters between tokens
    pattern = r"[\s.,;!?()]+"

    # Use re.split to tokenize the document
    tokens = re.split(pattern, document)

    # Remove empty tokens
    tokens = [token for token in tokens if token]

    return tokens

text = 'sample sentence. It contains punctuation!'

tokenize_document(text)

>>> ['sample', 'sentence', 'It', 'contains', 'punctuation'] 

Building a vocabulary

A vocabulary is the set of all tokens that an ML model would be able to recognize. The English language has around 170k words. Imagine how huge this number would be for a multilingual use case.

We need to put a cap on the size of our vocabulary to ensure computational efficiency. The vocabulary size is often capped by counting the frequency of tokens in a huge corpus of text and choosing the top-k tokens, where k corresponds to the vocabulary size.

from collections import Counter

def build_top_k_vocab(corpus, k):
    # Initialize a Counter to count token frequencies
    token_counter = Counter()

    # Tokenize and count tokens in each document
    for document in corpus:
        tokens = tokenize_document(document)
        token_counter.update(tokens)

    # Get the top k tokens by frequency
    top_k_tokens = [token for token, _ in 
                   token_counter.most_common(k)]

    return set(top_k_tokens)

# Example usage:
corpus = [
    "This is a sample sentence with some words.",
    "Another sample sentence with some repeating words.",
    "And yet another sentence to build the vocabulary.",
]

build_top_k_vocab(corpus, k=5)
>>> {'sample', 'sentence', 'some', 'with', 'words'}

Converting Tokens to Numerical Values

Now that we have a vocabulary, we assign an id to each token in it. This can be done through simple enumeration, maintaining a map/dictionary between the ids and their corresponding tokens. The id map will help us keep track of which tokens are present in a document.
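
As a minimal sketch of this step, building on the vocabulary from the previous section (the helper name build_token_id_map is an illustrative choice, not from any particular library):

def build_token_id_map(vocab):
    # Enumerate the (sorted) vocabulary to assign each token an id;
    # sorting keeps the mapping deterministic across runs
    token_to_id = {token: idx for idx, token in enumerate(sorted(vocab))}
    id_to_token = {idx: token for token, idx in token_to_id.items()}
    return token_to_id, id_to_token

vocab = build_top_k_vocab(corpus, k=5)
token_to_id, id_to_token = build_token_id_map(vocab)

token_to_id
>>> {'sample': 0, 'sentence': 1, 'some': 2, 'with': 3, 'words': 4}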

The identified set of tokens now has to be translated into features that help an ML model extract information. While a token can be represented as a scalar or a vector, a document is always represented as a vector built from the representations of the tokens it is made up of.

Some of the common ways to encode a document into features are:

  1. Binary Document-Term Vector: Each document is represented as a vector whose size is equal to the size of the vocabulary. The id of each token in the vocabulary corresponds to an index position in the vector. A value of 1 is assigned to that index position when the corresponding token is present in the document and 0 when it is absent (see the sketch after this list).

  2. Bag of Words (BoW): Similar to approach 1, but each index position holds the frequency of the corresponding token in the document (also shown in the sketch after this list).

  3. N-gram vectors: This approach extends the previous ones by allowing index positions that correspond to bi-grams, tri-grams, etc., in addition to single tokens.

  4. Tf-idf: Similar to BoW, but instead of the raw frequency, each token is assigned a value based on how often it occurs in a document and how many documents in the corpus it occurs in. The intuition is that tokens that are rare in general but appear many times in a specific document are important, while tokens that are plentiful across all documents (words like "and", "an", "the", "it", etc.) are not (a small tf-idf sketch also follows the list).

  5. Embeddings: This approach is used in most deep neural networks, including LLMs. Each id is mapped to a unique n-dimensional vector called an embedding. The advantage is that rather than relying on a single hand-crafted feature per token, like the ones above, the model can learn a richer high-dimensional representation via backpropagation. An embedding is meant to capture the meaning of a token and the context in which it occurs (a toy embedding lookup rounds out the sketches below).
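
To make approaches 1 and 2 concrete, here is a minimal sketch that builds on the tokenizer and id map from earlier. The function name vectorize_document and the binary flag are illustrative choices, not from any particular library:

import numpy as np

def vectorize_document(document, token_to_id, binary=False):
    # One index position per vocabulary token; out-of-vocabulary
    # tokens are simply ignored
    vector = np.zeros(len(token_to_id))
    for token in tokenize_document(document):
        if token in token_to_id:
            vector[token_to_id[token]] += 1

    # Binary document-term vector: presence/absence instead of counts
    if binary:
        return (vector > 0).astype(int)
    return vector

vectorize_document("some words and some more words", token_to_id)
>>> array([0., 0., 2., 0., 2.])

vectorize_document("some words and some more words", token_to_id, binary=True)
>>> array([0, 0, 1, 0, 1])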
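
The tf-idf weighting of approach 4 can be sketched on top of the same BoW vectors. This uses one of the simplest idf formulas; real implementations (e.g. scikit-learn) typically add smoothing and normalization:

def tfidf_vectors(corpus, token_to_id):
    # Term frequencies: one BoW vector per document
    tf = [vectorize_document(doc, token_to_id) for doc in corpus]

    # Document frequency: number of documents each token appears in
    df = np.zeros(len(token_to_id))
    for vector in tf:
        df += (vector > 0)

    # Inverse document frequency: rarer tokens get larger weights
    idf = np.log(len(corpus) / df)

    # Weight each document's counts by the idf of its tokens
    return [vector * idf for vector in tf]

Note that with this simple formula a token that appears in every document (like "sentence" in our toy corpus) gets an idf of zero, which is one reason practical implementations smooth the weights.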
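
Finally, approach 5 can be illustrated with a toy embedding table. Here the table is just randomly initialized with NumPy; in an actual LLM it is a learned parameter matrix, usually managed by a framework such as PyTorch:

embedding_dim = 4   # real LLMs use hundreds or thousands of dimensions

# One row per vocabulary token; in an LLM these values are learned
# during training via backpropagation
embedding_table = np.random.randn(len(token_to_id), embedding_dim)

def embed_document(document, token_to_id, embedding_table):
    # Look up the embedding row for every in-vocabulary token
    ids = [token_to_id[token] for token in tokenize_document(document)
           if token in token_to_id]
    return embedding_table[ids]

embed_document("sample sentence", token_to_id, embedding_table).shape
>>> (2, 4)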
