Tokenization

Tokenization is the process by which an AI Model breaks text into smaller, processable units called tokens.
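
To make this concrete, here is a minimal sketch of tokenizing a sentence, assuming the Hugging Face `transformers` GPT-2 tokenizer as a stand-in for whatever tokenizer the Model actually uses.

```python
# Minimal tokenization sketch; the GPT-2 tokenizer is an assumption,
# not necessarily the tokenizer used by the Model described here.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Tokenization splits text into processable units."
token_ids = tokenizer.encode(text)                    # numeric IDs the Model consumes
tokens = tokenizer.convert_ids_to_tokens(token_ids)   # the text pieces those IDs represent

print(tokens)
print(token_ids)
```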

Many factors influence which tokens the AI considers more probable for a given output, including the Model's training, its Finetune, and any applicable Prompt Tuning.
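
Under the hood, the Model assigns each candidate token a raw score (a logit), and those scores are converted into probabilities. The sketch below shows that conversion with a softmax; the tokens and score values are purely illustrative, not taken from any real Model.

```python
import math

def softmax(logits):
    """Convert raw per-token scores (logits) into probabilities."""
    peak = max(logits.values())
    exps = {tok: math.exp(score - peak) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores the Model might assign to candidate tokens
# after the text "The sky is" -- the numbers are illustrative only.
logits = {" blue": 9.1, " clear": 7.4, " falling": 3.2}
print(softmax(logits))  # " blue" receives the largest probability
```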

Users themselves can also adjust token probabilities through Phrase Biasing, configured with Bias sets.
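
One common way Phrase Biasing can work is by adding a user-chosen offset to a token's score before the probabilities are computed. The sketch below assumes that simple per-token logit-bias model; the `apply_bias` function and the dict-based "Bias set" are illustrative names, not the tool's actual API.

```python
# Minimal Phrase Biasing sketch under the assumption that a Bias set
# is a mapping of token -> bias value added to that token's raw score.
def apply_bias(logits, bias_set):
    """Add user-specified bias values to the Model's raw token scores."""
    return {tok: score + bias_set.get(tok, 0.0) for tok, score in logits.items()}

logits = {" blue": 9.1, " clear": 7.4, " falling": 3.2}
bias_set = {" falling": 4.0, " blue": -2.0}   # boost " falling", suppress " blue"

biased = apply_bias(logits, bias_set)
# The biased scores are then converted to probabilities (e.g. via softmax),
# making " falling" more likely and " blue" less likely in the output.
```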