Presets

The presets provided by NovelAI can be severely lacking at times, so if none of them appeal to you, here is a collection of presets created by the community.

''To avoid redundancy, it's best to remove any presets that end up in NAI proper. If you've got a preset you'd like to share, then by all means, add it to the pile.''

Test 2
by OccultSage
Typical P picks out probable 'dense' words, then Tail-Free Sampling slices off the bottom end, then we apply temperature.

Then we use Top A to slice off after temperature.

With very little rep-pen.

Typical P acts like its own inherent rep-pen, without being rep-pen.


 * Typical Sampling:
 * Tail-Free Sampling:
 * Temperature/Randomness:
 * Top-A:
 * Repetition Penalty:
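For intuition, here is a minimal, purely illustrative sketch of that sampling order in Python. All cutoff values below are made-up assumptions for demonstration, not this preset's actual values, and real implementations operate on logits over the full vocabulary rather than a toy list.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def renorm(probs):
    total = sum(probs)
    return [p / total for p in probs]

def typical_filter(probs, typical_p):
    # Typical Sampling: keep tokens whose surprisal is closest to the
    # distribution's entropy, until their combined mass reaches typical_p.
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    def score(i):
        return abs(-math.log(probs[i]) - entropy) if probs[i] > 0 else math.inf
    keep, mass = set(), 0.0
    for i in sorted(range(len(probs)), key=score):
        keep.add(i)
        mass += probs[i]
        if mass >= typical_p:
            break
    return renorm([p if i in keep else 0.0 for i, p in enumerate(probs)])

def tail_free_filter(probs, z):
    # Tail-Free Sampling: cut the flat tail of the sorted distribution,
    # located via the normalized second differences of sorted probabilities.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    sp = [probs[i] for i in order]
    d2 = [abs(sp[i] - 2 * sp[i + 1] + sp[i + 2]) for i in range(len(sp) - 2)]
    total = sum(d2) or 1.0
    cum, cutoff = 0.0, len(sp)
    for i, v in enumerate(d2):
        cum += v / total
        if cum > z:
            cutoff = i + 1
            break
    keep = set(order[:max(cutoff, 1)])
    return renorm([p if i in keep else 0.0 for i, p in enumerate(probs)])

def apply_temperature(probs, t):
    # Equivalent to dividing the surviving logits by t.
    return renorm([p ** (1.0 / t) for p in probs])

def top_a_filter(probs, a):
    # Top-A: drop tokens below a * (highest probability)^2.
    limit = a * max(probs) ** 2
    return renorm([p if p >= limit else 0.0 for p in probs])

# Toy distribution over a 6-token vocabulary; every cutoff is hypothetical.
probs = softmax([2.0, 1.5, 1.0, 0.2, -1.0, -3.0])
probs = typical_filter(probs, 0.95)    # dense, "typical" words survive
probs = tail_free_filter(probs, 0.95)  # slice off the bottom end
probs = apply_temperature(probs, 1.2)  # then apply temperature
probs = top_a_filter(probs, 0.5)       # final slice after temperature
```

Repetition penalty (kept very low here, per the description above) would adjust the logits for recently seen tokens before any of these filters run.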

Pro Writer 2
by Basileus
See here for more details on the data behind this.


 * Repetition Penalty:
 * High values here strongly increase complexity and improve word choice; however, they can make the output more prone to logical errors, so I wouldn't advise turning this up much higher than it already is
 * R2=0.586 (strong influence on readability)
 * Repetition Penalty Slope:
 * Only the barest hint - this has no impact on readability and a slight negative influence on quality, so use sparingly
 * Tail-Free Sampling:
 * This is very aggressive for TFS, because we want to eliminate the possibility of "bad" tokens that will derail the text as much as possible without making the text boring
 * R2=0.369 (moderate influence on readability)
 * Randomness:
 * This is moderately high because within the pool of "good" tokens that remain after aggressive TFS, we want to be very "creative" with our word choice. As long as you leave repetition and TFS alone, you can use this as a sort of control for creativity/complexity and adjust it as high as you like
 * R2=0.16 (light influence on readability)

The Pro Writer preset is based on mass statistical analysis of (moderately) large volumes of Euterpe output, using ProWritingAid.com's AI-powered analysis to identify "optimum" settings for readability (and quality, though readability has a stronger correlation to generation settings). Readability refers roughly to the complexity of the generated prose: if readability is too low, the text comes across as simple and uncreative, but if it is too high, the text becomes too dense and nonsensical.

Generation settings largely influence things like sentence structure and word choice, rather than storytelling or logic.

Version 2.0 is based on iterative analysis of combined settings, rather than the independent analysis of each setting used for version 1.0 (still available on its own in case anyone prefers it).

Permanent home and potential future minor version updates will be here for ease of updating individual files: https://www.sgreens.net/index.php?view=article&id=112:pro-writer-2&catid=14

Examples

The following are samples with no module, memory, etc., to illustrate a baseline and demonstrate a comparison between Randomness settings:

Default Randomness: [Ragged fingers of sunlight] slanted across the floor, and a small fire burned in an iron brazier. A man sat on the edge of his cot with one leg crossed over another as he stared at me through bloodshot eyes that were too old for their owner's face. His hair was thinning around the temples; it had been long when I saw him last, but now only stubble remained where once there'd been a full head of dark curls. The lines etched into his forehead looked deeper than ever before, giving him something like a permanent scowl. He wore nothing more than breeches and boots—the same clothes I remembered seeing him wear back home years ago.

Max Randomness: [Ragged fingers of sunlight] filtered through the trees and dappled her skin with a golden glow. The morning was crisp, but not cold; the air smelled sweetly clean after last night's rainstorm. A breeze blew over her face as she stepped from the tree line into open ground on one side of a small meadow that sloped down to the edge of an even smaller stream. She glanced around for any sign of movement or danger before lowering herself onto her knees beside the watercourse. There were no sounds save those made by nature—birdsong, wind rustling leaves overhead, insects buzzing in the grasses along the banks. In this place where nothing manmade could intrude upon its natural beauty it seemed possible to hear all things at once: the whisperings of birds, the sighs and moans of distant animals, the gentle trickle of flowing waters, the creak of branches in the breeze above her head, the soft thump of something heavy falling into stillness far away...

A Typical Swansong v1.1
by Minari Swansong: "Updated version replacing all earlier versions posted here and elsewhere. A preset focused on enforcing coherence while maintaining good creativity. Uses the following settings in the stated order:"


 * Typical Sampling
 * Tail-Free Sampling
 * Temperature (Randomness)
 * Top A


 * Repetition Penalty


 * Repetition Penalty Range
 * Repetition Penalty Slope
 * Presence Penalty:
 * Frequency Penalty

"It's recommended to pair these settings with a Phrase Bias to   and   to, and disable preamble in the settings. Also feel free to tweak Temperature to whichever value you wish: When your prompt is relatively empty, it's advised to reduce it to somewhere within 1-1.5, then increase it later as your context fills. If you really want to, you can even max it out at 2.5 and see what happens!"

Example Prompt: You hide in the bushes, waiting for one of the game animals inhabiting the forest to enter the clearing. Half an hour later, you notice a beautiful doe wander out into the open. You stealthily reach for your

Result (Temperature 1.5, No Module): bow and draw it back, your arrow poised like an invisible line connecting your bow to the target animal's neck, when the animal pauses its grazing and looks your way with what appears to be curiosity in her eyes. A cold feeling settles over you as you watch the deer approach you. It's moving slowly, almost as though she knows you are there, waiting for her. She stops right in front of you, facing away, not even bothered by the noise you make as you stand up straight and look her in the eye. There is no fear, only interest in her brown eyes. Her ears are erect and forward, as if she's trying to see you. A twinge of familiarity sets you on edge, as though this creature has looked into your soul before.

Moonlit Chronicler v2.2c
by Nyks
While keeping the same purpose in mind as the original, version 2.2c of Moonlit Chronicler has a completely reorganized sampling order, ditching Top-K entirely. The values themselves differ only slightly from stock. It should be better with its writing style compared to the original, while still staying true to its described purpose.

Notice: If you find repetition on short prompts (and, rarely, on ones with full context) after a dinkus (***), you may want to try disabling the preamble, as it is one of the main culprits of that kind of repetition on Euterpe.


 * Sampling Order:
 * Randomness:
 * Recommended Output Length:
 * Repetition Penalty:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:
 * Tail-Free Sampling:
 * Top-A:
 * Typical Sampling:

Best Guess 2
by Baker-Anon
Hello frens!

Baker here. For your consideration, I'd like to present:

Best Guess 2

It's nothing special, really; just Best Guess settings applied to Euterpe.

But this time around, there are NO changes to the context settings.

Also, the sampling method has been changed to the TFS setting from Basic Coherency.

Optimal for keeping the story on track when you're balls deep in degenerate fetish scenes; results may vary when used in open-ended idea generation situations.

Hopefully, there's at least one anon out there besides me who this works well for; this is for you <3
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Top-K Sampling:
 * Top-P Sampling:
 * Tail-Free Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Turpy
by Anon: "A slightly modified Co-writer has been my go-to."
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Top-K Sampling:
 * Top-P Sampling:
 * TFS Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Storyteller
by ???: "It's Storywriter (Sigurd's default preset), except top-p happens before temperature."
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Top-P Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Sphinx Moth v2
by Nyks
Experimental: uses a custom order of Top-K, Nucleus, and Temperature.

Reborn from its sandy pit, Sphinx rises again with all of its max randomness glory.

Sphinx Moth is now better than ever, picking out the best tokens and giving them an equal chance of being chosen. Truly harnessing the high end of Randomness, you can expect a wide array of creative output that is still written in prose that makes sense!

Be ready to wrangle with this beast, for it may avoid a detail or two in place of a more creative route.
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Top-K Sampling:
 * Nucleus Sampling:
 * Repetition Penalty Range:

Monkey Business
by Belverk
A model-agnostic preset made using the token probabilities viewer, debug options, and some Sage advice from OccultSage, fine-tuned for my personal preferences. Tokens with a likelihood below 2% get mostly culled, while tokens with a likelihood of 94% and above get bumped to 100%. This behavior is familiar to everyone who has used 0.992 TFS presets before, although it works better after adjustments and the TFS overhaul.

With increased randomness applied after filtering, this should give users consistent, natural, yet creative output. The rep penalty curve returns, biased towards my scaffolding method of using lorebook entries, with emphasis on attempting to preserve accurate output of colors.

Why is it named Monkey Business? I made the preset on a whim and tested it on the Monkey World Domination prompt, which proved very useful for testing token logprobs and filtering. Name aside, it's a serious preset and the evolution of my tweaks on Sage's coherent creativity. Currently I am using Sigurd, but I've done testing on early Euterpe, and TFS works the same way for both. Euterpe is more insistent on which tokens come next, so if you want more creativity and plot twists while keeping the same TFS, feel free to bump the randomness up.
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Top-K Sampling:
 * Tail-Free Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:
 * TFS applied first, then randomness/temperature
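As a rough sketch of the cull-and-bump behavior described above, assuming a literal probability threshold (the preset actually approximates this effect through TFS tuning and the order noted above, so the function here is an illustration of the intended result, not the preset's mechanism):

```python
def cull_and_bump(probs, floor=0.02, ceiling=0.94):
    # Hypothetical thresholds matching the description: a token at or
    # above the ceiling wins outright (bumped to 100%); otherwise tokens
    # below the floor are culled and the survivors renormalized.
    top = max(probs)
    if top >= ceiling:
        return [1.0 if p == top else 0.0 for p in probs]
    kept = [p if p >= floor else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

print(cull_and_bump([0.95, 0.03, 0.02]))        # dominant token forced: [1.0, 0.0, 0.0]
print(cull_and_bump([0.50, 0.30, 0.19, 0.01]))  # sub-2% tail token culled
```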

Damn Decent TFS
by chmod007
A generation configuration focused on a subjective model-specific sweet spot.

Generation settings calibrated using New Story defaults with No Module.

Order: top_a, typical_p, tfs, temperature
 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Tail-Free Sampling:
 * Top-A Sampling:
 * Typical Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:
 * Presence Penalty:
 * Frequency Penalty:

The Old Familiar

 * Randomness:
 * Top-K Sampling:
 * Nucleus Sampling:
 * Repetition Penalty:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Optimal Machine
by lion: "A variant of Belverk's Optimal Whitepaper v2 that I like to use in many of my generator scenarios. Tends to work well for generating content based off of examples high up in context."


 * Randomness:
 * Top-K Sampling:
 * Nucleus Sampling:
 * Repetition Penalty:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Fated Outcome
by Pause
This preset will always return the same output until something is changed in the context, allowing a sense of permanence and fate within the world of your narrative.

Fun uses of this preset include "time warping" to see how a character would have reacted if you had said or done something different, and testing the effects of different token associations on the flow of a story. Additionally, lore details and names should have a significantly higher chance of being correct.

NOTE: This Preset makes the Retry button useless while active, as the same output will always be returned until something changes.
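The behavior can be pictured as greedy (argmax) decoding: with randomness effectively removed, token choice becomes a pure function of the context. This sketch is illustrative only; how the preset actually narrows sampling to a single candidate depends on its settings.

```python
import random

def pick_token(probs, deterministic=True, rng=None):
    if deterministic:
        # Greedy decoding: the same context (hence the same probs) always
        # yields the same token, so "Retry" reproduces the previous output.
        return max(range(len(probs)), key=probs.__getitem__)
    # Ordinary sampling, by contrast, varies from retry to retry.
    rng = rng or random.Random()
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

probs = [0.1, 0.7, 0.2]
assert pick_token(probs) == pick_token(probs) == 1  # stable until probs change
```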


 * Randomness:
 * Max Output Length:
 * Repetition Penalty:
 * Top-K Sampling:
 * Nucleus Sampling:
 * Tail-Free Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Pussy Tentacles (Jeral V4)
by HydroStorm


 * Randomness:
 * Max Output Length:
 * Repetition Penalty:
 * Tail-Free Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Damn Decent TFS [Sigurd V4]
by chmod007
A generation configuration focused on a subjective model-specific sweet spot.

Generation settings calibrated using New Story defaults with No Module.

Lorebook, token, and context settings are pristine.


 * Randomness:
 * Output Length:
 * Repetition Penalty:
 * Tail-Free Sampling:
 * Repetition Penalty Range:
 * Repetition Penalty Slope:

Complex
by Orion: "Been getting really good results with these settings, based off of the 'Complex Readability Grade' posted in Basileus' findings in #novelai-research. With good usage of Tone, Word Choice and maybe Author in the Author's Notes, as well as a decent amount of context for the AI to consider after starting a story, you can get some stunningly evocative prose while the story's progression remains pretty consistent. Of course, it will still need guidance from time to time, and it might require some slight adjustments to Randomness and TFS occasionally based on your preferences, but I think this is going to be my go-to for serious stories until somebody finds a setting preset that's even better than this."


 * Randomness:
 * Tail-Free Sampling:
 * Repetition Penalty:
 * Repetition Penalty Range:
 * Repetition Penalty Slope: