GUEST POST – Crafting Unique AI Personas: Harnessing the Power of Logit Bias in Large Language Models

Anthony Puppo

Large Language Models (LLMs) have revolutionized our interaction with software. However, there’s a catch – their responses can be monotonous and impersonal. This is where ‘personas’ come in. They add a human touch to LLMs, transforming generic outputs into customized responses that resonate with users. This is particularly handy in applications like customer service bots and virtual assistants. But how do we create these personas without hefty costs or time investments? The good news is, we can tweak a set of common parameters in most LLMs to influence their output, and that’s what we’ll explore today.

The examples in this blog post use C#, Semantic Kernel, SharpToken, and the OpenAI API. If you’d like to follow along and experiment yourself, first create a new console project:

dotnet new console --framework net7.0

Then install dependencies using NuGet:

dotnet add package Microsoft.SemanticKernel --prerelease
dotnet add package SharpToken

Additionally, I’ve written a small demo application that utilizes some of the techniques discussed in this blog. It is open-source and available on GitHub.

Introduction To How LLMs Work

In simplified terms, LLMs function a bit like a predictive text engine. They break input text into ‘tokens’ (each a word or part of a word), which are then mapped to numerical IDs the model can process.

var encoding = GptEncoding.GetEncoding("gpt-3.5-turbo");
var rawTokens = encoding.Encode("Wonderful day we're having!");
var textTokens = rawTokens.Select((x) => $"\"{encoding.Decode(new() { x })}\"").ToList();

Console.WriteLine($"Raw tokens: {string.Join(", ", rawTokens)}");
Console.WriteLine($"Tokenized text: {string.Join(", ", textTokens)}");

// Output:
// Raw tokens: 62372, 1285, 1938, 584, 2351, 3515, 0
// Tokenized text: "Wonder", "ful", " day", " we", "'re", " having", "!"

Based on its prior training, the model predicts what should come next in the sequence. For example, after “I don’t like”, the model might suggest “apples”. This prediction can be thought of as the model’s first draft — it’s good, but can we make it better?
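To make the idea concrete, here is a toy sketch of next-token prediction. The probability table below is invented for illustration; a real model learns these scores from training data rather than a lookup table.

```csharp
// Toy stand-in for a language model: for a given context word,
// a table of possible continuations and their probabilities.
var toyModel = new Dictionary<string, Dictionary<string, double>>
{
    ["like"] = new() { ["apples"] = 0.6, ["coffee"] = 0.3, ["rain"] = 0.1 },
};

// Greedy decoding: always pick the highest-probability continuation.
var next = toyModel["like"].MaxBy((x) => x.Value).Key;
Console.WriteLine($"I don't like {next}"); // I don't like apples
```

The parameters below all work by reshaping (or overriding) that underlying probability distribution before a token is picked.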

LLM Parameters

Consumers of LLMs have the ability to adjust certain parameters. This can yield creative, varied, and engaging results.


Temperature

This parameter adjusts the entropy of the model’s output. A high temperature makes the model’s output diverse and creative, while a lower temperature results in more focused and predictable responses.
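Conceptually, temperature divides the logits before they pass through the softmax. A minimal sketch of the math (the logits below are invented for illustration, and this is a simplification of what inference engines actually do):

```csharp
// Softmax with temperature: divide each logit by T, exponentiate, normalize.
double[] Softmax(double[] logits, double temperature)
{
    var scaled = logits.Select((x) => Math.Exp(x / temperature)).ToArray();
    var sum = scaled.Sum();
    return scaled.Select((x) => x / sum).ToArray();
}

double[] logits = { 2.0, 1.0, 0.5 };

// A low temperature sharpens the distribution (more predictable)...
var cold = Softmax(logits, 0.5);
// ...while a high temperature flattens it (more diverse).
var hot = Softmax(logits, 2.0);

Console.WriteLine($"T=0.5: {string.Join(", ", cold.Select((p) => p.ToString("F3")))}");
Console.WriteLine($"T=2.0: {string.Join(", ", hot.Select((p) => p.ToString("F3")))}");
```

The highest-scoring token ends up with a much larger share of the probability mass at low temperature than at high temperature.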

Top P

Top P (also known as nucleus sampling) guides the selection of the next token based on cumulative probability. It is a more nuanced way of controlling randomness that can often lead to more diverse outputs. For example, if the model is predicting the next word in “The cat climbed up the ___”, and the options tree, roof, and wall together account for around 90% of the probability, then a Top P of 90% restricts the model to selecting among those three options.
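The selection rule can be sketched as follows: keep the smallest set of top-ranked tokens whose cumulative probability reaches Top P, then sample only among those. The candidate probabilities below are invented for the example above.

```csharp
// Hypothetical next-token probabilities for "The cat climbed up the ___".
var candidates = new Dictionary<string, double>
{
    ["tree"] = 0.50, ["roof"] = 0.25, ["wall"] = 0.15,
    ["hill"] = 0.06, ["sky"] = 0.04,
};

const double topP = 0.9;
var nucleus = new List<string>();
var cumulative = 0.0;

// Walk the candidates from most to least likely until Top P is covered.
foreach (var (word, probability) in candidates.OrderByDescending((x) => x.Value))
{
    nucleus.Add(word);
    cumulative += probability;
    if (cumulative >= topP) break;
}

Console.WriteLine(string.Join(", ", nucleus)); // tree, roof, wall
```

Everything outside the nucleus is discarded, so low-probability stragglers never get sampled.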

Frequency and Presence Penalties

Frequency penalty discourages the overuse of specific tokens, and presence penalty penalizes tokens previously used in the output, irrespective of frequency. These mechanisms can be instrumental in quelling repetition and promoting diversity.
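The adjustment can be sketched with the formula described in OpenAI’s API documentation: the frequency penalty scales with how many times a token has already appeared, while the presence penalty is a one-time deduction once it has appeared at all. The numbers below are invented for illustration.

```csharp
// Apply both penalties to a token's logit, given how often that token
// has already appeared in the output so far.
double PenalizedLogit(double logit, int countSoFar,
    double frequencyPenalty, double presencePenalty) =>
    logit
    - countSoFar * frequencyPenalty          // grows with each repetition
    - (countSoFar > 0 ? presencePenalty : 0); // flat, once used at all

// A token already used three times is pushed down harder than a fresh one.
Console.WriteLine(PenalizedLogit(1.5, 3, 0.5, 0.5));
Console.WriteLine(PenalizedLogit(1.5, 0, 0.5, 0.5));
```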

Logit Bias

Logit bias directly manipulates the logits (the raw, unnormalized scores predicted by the model) for specific tokens before they are passed through the softmax function for probability distribution. By adjusting the logit bias, one can promote or demote particular tokens. For instance, if we want the model to avoid using a certain token, we can assign a negative logit bias to it, making it less likely to be chosen. Likewise, assigning a higher bias will have the model favor the token.
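In other words, a logit bias is simply added to a token’s raw score before the softmax. A minimal sketch (the scores and bias values here are invented for illustration):

```csharp
// Hypothetical raw scores for two competing tokens.
var logits = new Dictionary<string, double> { ["."] = 1.0, [" and"] = 0.8 };

// Demote "." with a negative bias; " and" is left untouched.
var bias = new Dictionary<string, double> { ["."] = -10.0 };

var adjusted = logits.ToDictionary(
    (x) => x.Key,
    (x) => x.Value + bias.GetValueOrDefault(x.Key, 0.0));

// "." started ahead of " and" but is now far less likely to be sampled.
Console.WriteLine($"\".\": {adjusted["."]}, \" and\": {adjusted[" and"]}");
```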

Persona Generation Using Logit Bias

For a practical demonstration, let’s consider a scenario where we desire our model to generate shorter sentences. To achieve this, we can manipulate the bias for common punctuation such as “.”, “!”, and “?”.

First, set up the kernel so we can interact with the model:

var kernel = new KernelBuilder()
    .WithOpenAIChatCompletionService("gpt-3.5-turbo", "<your-openai-api-key>")
    .Build();

Then call it with our custom settings:

var result = await kernel.InvokeSemanticFunctionAsync(
    "Describe a rainbow.",
    requestSettings: new OpenAIRequestSettings()
    {
        Temperature = 0,
        TopP = 1,
        FrequencyPenalty = 0,
        PresencePenalty = 0,
        TokenSelectionBiases = new[] { ".", "!", "?" }
            .SelectMany((x) => encoding.Encode(x))
            .ToDictionary((x) => x, (x) => 10),
    });

And we get the following:

A rainbow is a beautiful and natural phenomenon. It appears as a circular arc of colors in the sky. It is formed when sunlight is refracted, or bent, as it passes through raindrops. The sunlight is then reflected inside the raindrop and refracted again. This process causes the light to separate into its component colors. The colors of a rainbow, from top to bottom, are red, orange, yellow, green, blue, indigo, and violet. The colors are vibrant and distinct. The rainbow usually appears after rain showers when the sun is still shining. It can also be seen near waterfalls or fountains. The sight of a rainbow is often associated with joy, hope, and wonder. It is a mesmerizing display of nature’s beauty.

Conversely, if we make the bias negative:

// Surrounding code omitted for brevity...
TokenSelectionBiases = new[] { ".", "!", "?" }
    .SelectMany((x) => encoding.Encode(x))
    .ToDictionary((x) => x, (x) => -10)

We then get something like this:

A rainbow is a beautiful and natural phenomenon that occurs when sunlight is refracted, or bent, by water droplets in the air, creating a spectrum of colors in the sky. Typically, a rainbow appears as a semi-circular arc of vibrant colors, with red being the outermost color and violet being the innermost color, although sometimes a full circle can be seen in certain conditions. The colors of a rainbow, in order, are red, orange, yellow, green, blue, indigo, and violet, often remembered by the acronym ROYGBIV. Each color of the rainbow is distinct and blends seamlessly into the next, creating a stunning display of hues that can be seen against a backdrop of dark clouds or a clear blue sky. Rainbows are often seen after rain showers when the sun emerges from behind the clouds, casting its rays onto the raindrops in the air, causing them to act as tiny prisms that refract the sunlight and create the colorful spectrum. The sight of a rainbow is often associated with feelings of joy, wonder, and hope, as it is a symbol of beauty and harmony in nature. Rainbows are not physical objects that can be touched or approached, but rather optical illusions that appear to be located at a specific distance from the observer, making them seem elusive and magical. Overall, a rainbow is a breathtaking and ephemeral display of colors that captivates the imagination and reminds us of the wonders of the natural world around us.

The first response is more concise and straightforward, providing a clear and simple explanation of a rainbow. It uses a more casual and conversational tone, making it easier to understand for a general audience.

The second is more detailed and comprehensive, providing a more scientific explanation of a rainbow. It uses a more formal and academic tone, making it suitable for a more knowledgeable audience or someone seeking a deeper understanding. The language is more descriptive and the sentences are longer, contributing to a more elaborate and thorough explanation.

The effects of tweaking logit_bias are evident in the given examples, and these modifications show how we can mold the model’s responses to be more in line with a specific persona. By amplifying or diminishing this bias, we can guide the model to generate responses that are concise or verbose, casual or formal, simple or detailed, depending on the desired personality. However, the key lies in balance. Overdoing it might result in an overbearing or inconsistent persona, while underdoing it might make the persona feel generic.

Next Steps

So, what are some of the options for putting the topics discussed here into practice?

  1. Experiment with Logit Bias: Get hands-on experience with this feature. Start with simple tweaks to the bias values and observe how the output changes. As you gain familiarity, attempt to create a more complex persona by adjusting the bias for a wider range of tokens.
  2. Dive into Stylometry: Learn more about stylometry. This field of study can provide insights into how writing styles can be quantified and analyzed, which can be helpful in creating more nuanced personas.
  3. Implement Part-of-Speech Tagging: Incorporate part-of-speech tagging. It can be useful in understanding the grammatical structure of the sentences generated by the model (or text from a pre-existing persona you are attempting to emulate). This understanding can help you tune the logit bias more effectively.
  4. Randomize Character Creation: Create a corpus of words relevant to the desired persona. Use this corpus to randomly assign attributes to the model’s persona. This can add an element of unpredictability to the model, making it more engaging.
  5. Explore Token Frequencies and TF-IDF: Rather than merely looking at the plain text, consider tokenizing the text. This approach can be combined with Term Frequency-Inverse Document Frequency (TF-IDF) to assess the frequency of model tokens. This insight can guide the adjustment of logit bias values more appropriately (since models are operating at the token level).
  6. Combine Parameters: Don’t limit yourself to logit bias. Try combining it with other parameters like temperature and top_p for more nuanced control over the output. Remember, the aim is to create a persona that is consistent, engaging, and believable.
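As a first step toward item 5, SharpToken can count how often each token appears in a persona corpus; the most frequent tokens are natural candidates for a positive bias. A minimal sketch, where the two-line corpus is a stand-in for real sample text from the persona you want to emulate:

```csharp
var encoding = GptEncoding.GetEncoding("gpt-3.5-turbo");
var corpus = new[] { "Ahoy there, matey!", "Ahoy, ye scurvy dogs!" };

// Count how often each token ID appears across the corpus.
var frequencies = corpus
    .SelectMany((text) => encoding.Encode(text))
    .GroupBy((token) => token)
    .ToDictionary((g) => g.Key, (g) => g.Count());

// The most frequent tokens are candidates for a positive logit bias.
foreach (var (token, count) in frequencies.OrderByDescending((x) => x.Value).Take(5))
{
    Console.WriteLine($"{count}x \"{encoding.Decode(new() { token })}\"");
}
```

From there, TF-IDF weighting against a general-purpose corpus would help separate persona-specific tokens from tokens that are merely common in English.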

Closing Thoughts

Crafting unique personas can be a tricky pursuit. Logit bias, however, offers a promising starting point. It’s a tool to help you steer your model’s outputs towards a more personalized touch. Yet, it’s important to note, it’s but one piece of the puzzle. While other parameters might not singularly make a huge impact in persona development, their combined use could unlock more possibilities. The journey to mastering persona creation in LLMs is an intriguing one, and hopefully, this has given you a useful compass to navigate it.

1 comment


  • Andreas Volkmann (Microsoft employee)

    Anthony, very creative use of the token selection bias!
    Exciting to see what else we can achieve with this.

    One idea: Could try to either encourage or prevent it from asking questions by only controlling the logit bias for `?`.
