⭐ Premium Features

Description of the different premium features

Get a Taste Tier

Skip The Waiting Line

Our waiting room lets us limit how many people can use the platform at the same time and makes sure enough hardware is provisioned to support our active users. We can offer our service for free thanks to premium subscribers who are willing to support the platform financially. This benefit lets you skip the waiting line every time you want to chat with our bots.

Memory Manager (COMING SOON)

While you don't get access to Semantic Memory 2.0, you will soon get access to the Memory Manager, which lets you add memories that you believe are important to your conversation. These memories must be added manually and are not created automatically.

Expected Launch Date: May 30th 2024

True Supporter Tier

4K Context (Memory)

When you chat with our bots, we generate a text prompt composed of the bot personality, example dialogues, and a portion of your conversation history. The longer the prompt, the more server resources it uses. Our current model has a limit of 4096 tokens (words), but for our Free Tier we limit that to 2048 tokens.

When calculating total tokens, we need to account for both the input and the output. Assuming an allowed output of 180 tokens and a bot definition of 900 tokens, in that example our Free Tier users would have roughly 1,000 tokens (2048 - 180 - 900 = 968) left to fit the last few turns of their conversation.

In that same example, premium subscribers would have roughly 3,000 tokens (4096 - 180 - 900 = 3016) to fit the conversation, which means about three times as much of the conversation history can be included in the prompt.

That means the response can better leverage what was discussed before.
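As a rough sketch of the arithmetic above (the 180-token output and 900-token bot definition are just the example values from this page, not fixed platform settings), the leftover conversation budget works out like this:

```python
# Token-budget arithmetic from the example above. The output and bot-definition
# sizes are example values, not fixed platform settings.

def conversation_budget(context_limit: int, output_tokens: int, bot_definition_tokens: int) -> int:
    """Tokens left for conversation history after reserving space for the
    bot definition and the model's response."""
    return context_limit - output_tokens - bot_definition_tokens

free_tier = conversation_budget(context_limit=2048, output_tokens=180, bot_definition_tokens=900)
premium = conversation_budget(context_limit=4096, output_tokens=180, bot_definition_tokens=900)

print(free_tier)            # 968  -> roughly 1,000 tokens
print(premium)              # 3016 -> roughly 3,000 tokens
print(premium / free_tier)  # ~3.1, i.e. about 3x as much conversation history fits
```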

Longer Responses

Thanks to the 4096-token context, the True Supporter and I'm All In tiers benefit from a maximum response of up to 300 tokens instead of the default 180. This means responses are less likely to appear incomplete because they were truncated.

Semantic Memory

Even with 4K Context, we can only include a portion of your conversation history, because the prompt must fit within the 4096 tokens available. With semantic memory, we try to find semantically (meaning-based) relevant portions of your previous conversation and include those tidbits in the prompt, even if they are not the most recent things discussed.

An example of this would be if you were discussing the details of a particular book and then shifted the conversation to a different topic, like music. If you later refer back to the book, even if several turns have passed since the book was last mentioned, the messages most relevant to the book would be added to the prompt.

The advantage of semantic memory is that it doesn't rely solely on how recent a message is, but also on how relevant it is.
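A minimal sketch of how this kind of meaning-based lookup can work, assuming each past message has already been converted into an embedding vector (the function and variable names here are illustrative, not the platform's actual code):

```python
# Illustrative sketch of semantic retrieval: rank past messages by how close
# their embedding vectors are to the latest message, regardless of recency.
# This is not the platform's actual implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_relevant_messages(history, query_embedding, k=3):
    """history is a list of (embedding, message_text) pairs for past turns."""
    scored = [(cosine_similarity(query_embedding, emb), msg) for emb, msg in history]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [msg for _, msg in scored[:k]]
```

The selected messages can then be added to the prompt alongside the most recent turns.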

Semantic Memory 2.0

Soon to be released (May 30th, 2024). Read about it here.

Conversation Images

This new feature lets you generate images within your conversation. Our AI uses the bot image, the bot definition, and the last turn of your conversation to generate images on request. You can read more about this feature.

As a True Supporter, you'll be able to generate images with all trained chatbots; however, the ability to train new chatbots to create images in conversation requires the I'm All In tier.

ChatGPT for SFW Roleplay

With this benefit, we use ChatGPT to formulate the response when possible. Because of OpenAI's terms of service, this feature is only used on SFW bots and only when the conversation does not involve sexual, hateful, or violent themes. ChatGPT has been trained on a much larger amount of data than our own model, which makes it highly versatile and capable of generating more diverse, coherent, and contextually relevant responses.

I'm All In Tier

Priority Generation Queue

This benefit ensures that your requests are given priority over those of other users. Whenever you initiate a conversation or send a message, your request is placed at the front of the queue for immediate processing. This results in faster response times and a smoother chatting experience, especially during peak usage hours or when server demand is high.
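As an illustration only (the platform's actual scheduler is not documented here), a two-level priority queue along these lines would serve subscriber requests before free-tier requests while keeping first-come, first-served order within each group:

```python
# Hypothetical sketch of a priority generation queue: a lower priority number is
# served first, and a counter keeps FIFO order within the same priority level.
import heapq
import itertools

_order = itertools.count()
_queue: list = []

def enqueue(request: str, priority_tier: bool) -> None:
    priority = 0 if priority_tier else 1
    heapq.heappush(_queue, (priority, next(_order), request))

def next_request() -> str:
    _, _, request = heapq.heappop(_queue)
    return request

enqueue("free user's message", priority_tier=False)
enqueue("I'm All In subscriber's message", priority_tier=True)
print(next_request())  # the subscriber's request comes out first
```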

Conversation Images on Private Chatbots

You can train the AI on your private chatbots' images so that they can generate conversation images.

Generation Settings

You can control the inference temperature, top_p, and top_k. This is intended for the most advanced users who like to experiment.
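For reference, these are the standard meanings of those sampling parameters (the values below are examples only; the exact fields and defaults exposed in the app may differ):

```python
# Common LLM sampling settings and what they control. Values are examples,
# not the platform's defaults.
generation_settings = {
    "temperature": 0.8,  # higher = more varied/creative output, lower = more deterministic
    "top_p": 0.9,        # nucleus sampling: only sample from the smallest set of tokens
                         # whose combined probability reaches 0.9
    "top_k": 40,         # only consider the 40 most likely next tokens at each step
}
```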

Access to 70B Model

You get access to test Airoboros70B-2.2, a smarter (but slower) model. It leverages roughly 5x the parameters of our default 13B model to generate responses.

8K Context (Memory)

You get access to 8192 tokens, doubling your bot memory over what can fit in 4K context!
