Handy Augmented AI chatbot - React App

I created a website that connects an AI chatbot to your Handy, with a customizable system prompt. You'll need an OpenAI API key.

**Instructions:**
- Input your Handy connection key and click Connect
- Put your OpenAI API key at the top (the app uses gpt-4o-mini, which is very cheap)
- You can use the default character or create new characters
- There is a character builder model in the dropdown that can quickly create compatible characters
- Copy its output, click Create New Character, paste it in, click Add Character, and you'll have a new character that can control the Handy (a rough sketch of how the model drives the device follows below)
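
Under the hood, the idea is simple: the model replies in character and can also request a function call, which the app translates into a Handy API request. Here is a rough sketch of that loop, purely for illustration: the setStrokeSpeed tool name is made up (the app's real function names differ), and the HAMP endpoint is an assumption based on the public Handy API v2, not necessarily what the app calls.

import axios from 'axios';

// Illustrative sketch only; not the app's actual code.
// setStrokeSpeed is a hypothetical tool name, and the Handy endpoint below assumes
// the public Handy API v2 HAMP mode, which may not match what the app really uses.
const OPENAI_URL = 'https://api.openai.com/v1/chat/completions';
const HANDY_URL = 'https://www.handyfeeling.com/api/handy/v2';

const tools = [{
  type: 'function',
  function: {
    name: 'setStrokeSpeed',
    description: 'Set the stroke speed of the device (0-100 percent)',
    parameters: {
      type: 'object',
      properties: { speed: { type: 'number', minimum: 0, maximum: 100 } },
      required: ['speed'],
    },
  },
}];

async function chatTurn(messages, openaiKey, connectionKey) {
  // Ask gpt-4o-mini for the next in-character reply, offering it the tool above
  const { data } = await axios.post(
    OPENAI_URL,
    { model: 'gpt-4o-mini', messages, tools },
    { headers: { Authorization: `Bearer ${openaiKey}` } }
  );
  const reply = data.choices[0].message;

  // If the model decided to call the tool, forward it to the Handy
  for (const call of reply.tool_calls ?? []) {
    if (call.function.name === 'setStrokeSpeed') {
      const { speed } = JSON.parse(call.function.arguments);
      await axios.put(
        `${HANDY_URL}/hamp/velocity`,
        { velocity: speed },
        { headers: { 'X-Connection-Key': connectionKey } }
      );
    }
  }

  return reply; // append to the conversation history and render the text
}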

Cons:
- Uses OpenAI, so it can't be too R-rated

Pros:
- It works, and I don't think there's anything like this out there.

The default system prompt is that of a femdom mistress, but you can make new characters.

If you hit a censor, click Remove Last Interaction.

There is conversation history, which you can reset; it's unique to each character.
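
For reference, per-character history can be kept as simply as a map from character name to message array, persisted in the browser. This is just an illustrative sketch; the key name and storage choice are assumptions, not necessarily what the app does:

// Hypothetical per-character history helpers using localStorage
const HISTORY_KEY = 'chatHistories';

function loadHistory(characterName) {
  const all = JSON.parse(localStorage.getItem(HISTORY_KEY) || '{}');
  return all[characterName] || []; // each entry: { role: 'user' | 'assistant', content: '...' }
}

function saveMessage(characterName, message) {
  const all = JSON.parse(localStorage.getItem(HISTORY_KEY) || '{}');
  all[characterName] = [...(all[characterName] || []), message];
  localStorage.setItem(HISTORY_KEY, JSON.stringify(all));
}

function resetHistory(characterName) {
  const all = JSON.parse(localStorage.getItem(HISTORY_KEY) || '{}');
  delete all[characterName];
  localStorage.setItem(HISTORY_KEY, JSON.stringify(all));
}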

The function names are kind of corny (an AI came up with them); I might update them later.

Features of Version 2 (work in progress):

- Text-to-speech and speech-to-text (for hands-free communication)
- Ability to customize functions in the UI

Potential features of V3:

- Option to connect SillyTavern or oobabooga API endpoints so you can connect your uncensored LLMs to your Handy
- Dynamically load uncensored models onto a cluster so people without NASA-level hardware can mess around with an R-rated model
- Possibly connect the model to a Unity 3D avatar

Make sure you click Disconnect after use to return the Handy to its default settings.

32 Likes

That's an amazing project! I have a suggestion, however.
Due to OpenAI's censorship and the obvious privacy concerns, would the app be compatible with locally run LLM servers? This can be done with https://lmstudio.ai/ on desktop, for example. That would solve the OpenAI API issue and the censorship/privacy concerns all at once.

That would also allow users to run different models according to their wishes, since some models are made specifically to be good at imitating characters.

5 Likes

Hope this moves forward well; I'm very excited for more development in the LLM direction. SillyTavern + OpenRouter should be a good combination; it's what I always use for my AI chats. It costs money per message, but you can run it on any old laptop and get fast responses.

1 Like

Yeah, probably. I've only worked with oobabooga and know of SillyTavern, both of which can expose an open API endpoint. I'd imagine it's the same for LM Studio.

Endpoints can be exposed with the following launch commands:

oobabooga:
python server.py --api --model <model_name>

SillyTavern:
npm run start -- --host 0.0.0.0 --port 8000

The React code wouldn't be much different from the current setup.

Example:

import { useState } from 'react';
import axios from 'axios';

const OobaboogaChat = () => {
  const [inputText, setInputText] = useState('');
  const [responseText, setResponseText] = useState('');

  // Legacy oobabooga blocking API, enabled with --api (the exact path may differ between versions)
  const apiUrl = 'http://localhost:5000/api/v1/generate';

  const fetchFromOobabooga = async () => {
    try {
      const response = await axios.post(apiUrl, {
        prompt: inputText,   // input text from the user
        max_new_tokens: 150, // limit the length of the response
      });
      setResponseText(response.data.results[0].text); // update state with the generated text
    } catch (error) {
      console.error('Error fetching data from Oobabooga:', error);
    }
  };

  // ...render inputText / responseText in JSX as in the current setup
};
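
LM Studio should be even simpler, since its local server speaks the OpenAI chat-completions format (on http://localhost:1234 by default). A rough sketch, with the model name as a placeholder:

import axios from 'axios';

// Assumes LM Studio's OpenAI-compatible local server on its default port
const lmStudioUrl = 'http://localhost:1234/v1/chat/completions';

const fetchFromLMStudio = async (messages) => {
  const { data } = await axios.post(lmStudioUrl, {
    model: 'local-model', // placeholder; LM Studio serves whichever model is loaded
    messages,             // same [{ role, content }] shape as the OpenAI calls
    max_tokens: 150,
  });
  return data.choices[0].message.content;
};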

I've also made some minor updates to the current version to reduce setup; now the only things required are the Handy connection key and the OpenAI API key.

I'll probably add the local LLM connector in a future update; right now I'm trying to set up my own multi-GPU rig (for neural network stuff lol, but I'm reconfiguring it for LLMs too).

OpenAI is not ideal, but I don't think they really care; from what I understand, the censoring comes from its training data. They're more busy hunting down people trying to understand o1.

1 Like

I'll add a local LLM connector in the future; in the meantime, OpenAI is the simplest way, but it ain't too shabby.

2 Likes

For sure, sounds good. If you want to get familiar with SillyTavern at some point (it's what most people use for NSFW chats), just ping me; I can run you through everything and troubleshoot. Will continue to follow your progress :grin:

1 Like

Yeah, some models are trained with censorship, so even if you set no system prompt, or even order the model to say something censored, it'll decline anyway.
This can be reproduced easily by self-hosting an LLM.


I'm getting an error 429; any idea what causes this?

This looks amazing. I can't interface with it since I don't use a Handy, but being able to connect this to Intiface and pair it with SillyTavern would be amazing. The possibilities…

I think that's related to rate limits. I hit that when I used an API key with $0 of credit on it. I'd check your OpenAI dashboard to see the status of your key and account.

If you go to Profile and Billing, you can see how much credit you have.

If you go to Dashboard and then API Keys, you can generate a new one, which might help.
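
If it helps, the app could also surface this more clearly by checking the status code in the axios catch block. A rough sketch; the helper name and arguments are just placeholders:

import axios from 'axios';

// Hypothetical helper that separates rate-limit/quota errors (HTTP 429) from other failures
async function postWithQuotaCheck(apiUrl, payload, headers) {
  try {
    const response = await axios.post(apiUrl, payload, { headers });
    return response.data;
  } catch (error) {
    if (error.response && error.response.status === 429) {
      // Rate limited or out of credit: check billing, wait, or retry with backoff
      throw new Error('OpenAI returned 429: rate limit hit or no remaining quota.');
    }
    throw error;
  }
}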

Btw, I like the dark format; I'll work on the UI a bit later. I might use Tailwind CSS.

I was hacking around with Ollama the other week, wondering why nobody had done this yet. Do you have any plans to open-source this project? I could try contributing to allow custom local models to be used in place of OpenAI.
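
For example, swapping the OpenAI call for Ollama's local chat endpoint (port 11434 by default) might look roughly like this; the model name is just an example of whatever is pulled locally:

import axios from 'axios';

// Rough sketch against Ollama's default local API
const ollamaUrl = 'http://localhost:11434/api/chat';

const fetchFromOllama = async (messages) => {
  const { data } = await axios.post(ollamaUrl, {
    model: 'llama3', // example; use any model pulled locally
    messages,        // same [{ role, content }] shape as the OpenAI calls
    stream: false,   // single JSON response instead of a stream
  });
  return data.message.content;
};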

1 Like

Push it to GitHub and I'll work on it with you, if you want.

1 Like

This is amazing!! Are you in our Discord server/Reddit thread? Please post this there as well. Our community has been asking about something like this for a while.

1 Like

I hope to get my Handy soon; in the meantime, are there any plans to make it work with vibration-based toys?

Sure, I'll post it there after I update the UI.

1 Like

Sure, I need some time to prepare it. I'll tag you once it's ready.

Sure, I'll tag you once the GitHub repo is set up.

2 Likes

I'm getting a 429 error with a fresh key?
Maybe the default prompt is blocked on ChatGPT? It does get blocked when I try to paste it there.

Hey, would a version for the Autoblow AI Ultra be an idea?

1 Like

Tried it out; it was great as a proof of concept. Looking forward to seeing where you take this!

2 Likes