I saw Generative AI for Beginners from Microsoft on GitHub. I’ve looked at https://fmhy.pages.dev/ai but I’m not sure what I’m really looking for.
I write fiction, and I want a chatbot that will function like ChatGPT 3.5, but not shut down if things get bloody or sexy, as they so often do.
You know ready, aim, fire? I’m in the AIM stage.
Check out the AI Horde: https://aihorde.net, or the direct LLM frontend at https://lite.koboldai.net. Free, FOSS, crowdsourced, with uncensored models that won’t ever be rugpulled.
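And if you ever want to script against it, the Horde is just a plain REST API. A minimal sketch of how I understand the current v2 text endpoints work, using the shared anonymous key (double-check against the live docs at https://aihorde.net/api before relying on any of this):

```python
import time
import requests

HORDE = "https://aihorde.net/api/v2"
HEADERS = {"apikey": "0000000000"}  # shared anonymous key; register for better queue priority

# Submit an async text-generation job to the volunteer pool.
job = requests.post(
    f"{HORDE}/generate/text/async",
    headers=HEADERS,
    json={"prompt": "The door creaked open and", "params": {"max_length": 120}},
).json()

# Poll until a volunteer worker picks it up and finishes.
while True:
    status = requests.get(f"{HORDE}/generate/text/status/{job['id']}").json()
    if status.get("done"):
        break
    time.sleep(5)

print(status["generations"][0]["text"])
```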
I ought to have known you’d have a good answer! Thank you!
Spread the word!
I’ve been using GPT4All on my laptop, mostly with 7B models due to my RAM limitations, and I’m amazed how good some of them are.
It’s been really easy to use. There are models you can download from within the UI, or you can get adventurous and download them from elsewhere; they just need to be in the .gguf format. I get most of mine from TheBloke on Hugging Face.
So far my favourite has been solar-10.7B-instruct-v1.0-uncensored; it has been astonishingly good.
Oooh, do tell me more, please. I’ve been toying with the idea of setting up gpt4all myself, but I haven’t really had the time to look into it very much yet. I have a couple of questions, though:
- I guess it’s safe to assume that it runs on Linux?
- Is it possible, with some scripting, to provide additional training data, such as connecting it to a Wikipedia crawler?
- By combining it with some script-foo, can I have it also look up stuff for me on the fly, for example "extract THIS kind of information from THAT site"?
Yes, it runs on Linux; my laptop is running Manjaro and I installed it from the AUR. I’m not sure about the scripting. There is an OpenAI-compatible web API you can turn on, so it might be possible through that, but you would probably have to feed the content of the site in with the prompt. I’m not sure there is a better way; I guess that sort of behaviour is a bit out of scope for GPT4All.
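For what it’s worth, once that API server is enabled you can do the “feed the page in with the prompt” trick yourself. A rough sketch, assuming GPT4All’s default local port (4891, but check your settings) and whatever model name you have loaded in the UI; the page URL and prompt are just examples:

```python
import requests

PAGE_URL = "https://en.wikipedia.org/wiki/Sword"  # example source page
page_text = requests.get(PAGE_URL).text[:4000]    # crude: raw HTML, truncated to fit the context

# GPT4All's local server speaks the OpenAI chat-completions format.
resp = requests.post(
    "http://localhost:4891/v1/chat/completions",
    json={
        "model": "solar-10.7b-instruct-v1.0-uncensored",  # whatever you loaded in the UI
        "messages": [
            {"role": "user",
             "content": f"Extract the historical facts from this page:\n\n{page_text}"}
        ],
        "max_tokens": 400,
    },
).json()

print(resp["choices"][0]["message"]["content"])
```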
There is a local-documents feature that lets it access text files on your machine that you give it specific access to, but I think it’s fairly limited in its ability.
The GPT services out there use something called ‘tools’.
They get presented to the model, and the model can ‘call’ a tool with arguments; the tool can then extract some data and feed it into the context for the model to continue.
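To make that concrete, here’s roughly what a tool definition looks like against an OpenAI-style API. The fetch_page tool and how you’d handle it are made up for illustration; local frontends wire this plumbing up for you:

```python
from openai import OpenAI

client = OpenAI()  # the same format works against any OpenAI-compatible endpoint

# Describe the tool to the model; the model decides whether to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "fetch_page",  # hypothetical tool for this example
        "description": "Fetch the text of a web page",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

msg = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise https://example.com"}],
    tools=tools,
).choices[0].message

if msg.tool_calls:
    # Your code runs the tool, then feeds the result back into the context.
    print(msg.tool_calls[0].function.name, msg.tool_calls[0].function.arguments)
```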
I found that the models which can run on a normal PC (or even a laptop) are okay, but not super great (around, or a bit worse than, ChatGPT 3). The good stuff (e.g. Nous-Capybara 34B or the Mistral/Mixtral ones) needs some more memory and compute.
From what I’ve heard, Mistral is what you’d want for generating explicit content. Not sure what you’d want to run locally, but be warned that it is slooooww unless you’ve got a beefy laptop.
Ollama helps you easily run LLMs locally: https://ollama.com/
I’m running llama2-uncensored on my laptop with 8GB of memory.
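If you ever want to call it from a script rather than the CLI, Ollama also exposes a small local HTTP API. A minimal sketch, assuming the default port 11434 and that you’ve already pulled llama2-uncensored:

```python
import requests

# Ollama's local generate endpoint; stream=False returns one JSON blob.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2-uncensored",
        "prompt": "Write the opening line of a grim heist story.",
        "stream": False,
    },
).json()

print(resp["response"])
```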
I would look into NovelAI for writing; it’s built quite specifically for that. It’s a paid service similar to ChatGPT, but it’s uncensored and private.
You can run your own lightweight LLM on a laptop, but the output will be useless. Good output requires big-boy compute.
If you do want to run it on your own hardware, look into Ollama. There are also options to run your own LLM in the cloud, with a not-too-difficult process for non-techies.
Frankly, I’d find the right LLM for your needs and just pay for it per month, maybe NovelAI, maybe something else, but ChatGPT is not great for creative fiction.
I got a little TOO MUCH involvement from NovelAI. I guess I want suggestion help, idea-spitballing help, but it’s more specialized than what I’m looking for.
I want my AI to stay on the shelf with my thesaurus until I’m ready to use it.
Interesting, I’m vaguely interested in this too. I have half of a world written that I want to turn into a game, maybe (probably not, but I’m having fun). I have the hardware to turn what I have into embeddings for an open model, and the hardware to run it. So that’s the way I would go about it, though I can’t vouch for how helpful it would be (yet).
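In case it helps anyone, the embedding half of that is surprisingly little code. A sketch with sentence-transformers; the model choice, the lore snippets, and the brute-force lookup are all my own assumptions (a real setup would chunk your actual documents and use a vector store):

```python
from sentence_transformers import SentenceTransformer, util

# Chunks of world lore; in practice you'd split your own documents into these.
lore = [
    "The city of Varn floats on a salt lake and trades in glass.",
    "House Merrow controls the river locks and taxes all grain barges.",
    "The old war ended when the twin kings drowned in the same storm.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
lore_vecs = model.encode(lore, convert_to_tensor=True)

# Embed the question and pull the closest chunk to stuff into the LLM's prompt.
query = "Who controls trade on the river?"
query_vec = model.encode(query, convert_to_tensor=True)
best = util.cos_sim(query_vec, lore_vecs).argmax().item()

print(lore[best])  # goes into the prompt as context for the open model
```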
I’ve been playing a bit with llama2 in Ollama and it does not have any restrictions. Perhaps using Ollama to run models locally would solve some problems for you?
Yeah, there are a bunch of uncensored models on Ollama. It’s stupid easy to use!
GPT4All is the easiest way in, hands down, no contest.
You should probably hook up with the SillyTavern crowd. It’s a frontend for chatting with LLMs that will do what you want. Its main purpose is chat role-play. You can assign a persona to the LLM and ST will handle the prompt to make it work. It also handles jailbreaks if you want to use one of the big services (no idea how well that works). You can also connect to other services that run open models, including the AI Horde.
https://github.com/SillyTavern/SillyTavern
https://www.reddit.com/r/SillyTavernAI/
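If you’re curious what “assigning a persona” looks like under the hood, ST characters are basically a card of prompt fields. A rough sketch written as Python for convenience; the field names follow the usual TavernAI-style card format as I understand it, and Mara is entirely made up, so check ST’s docs for the current spec:

```python
import json

# Minimal character card; SillyTavern turns these fields into the system prompt.
card = {
    "name": "Mara",
    "description": "A weary battlefield surgeon with a dark sense of humour.",
    "personality": "blunt, pragmatic, secretly sentimental",
    "scenario": "A field hospital two miles behind the front line.",
    "first_mes": "Sit down. Bite this. This will hurt.",
    "mes_example": "<START>\n{{user}}: Will I lose the arm?\n{{char}}: Not today.",
}

with open("mara.json", "w") as f:
    json.dump(card, f, indent=2)
```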
If you want to host your own model you can find more help here:
There’s a subreddit called LocalLLaMA
There’s also !localllama@sh.itjust.works on here
It’s not impossible
Here is an alternative Piped link:
https://piped.video/WxYC9-hBM_g?feature=shared
Piped is a privacy-respecting open-source alternative frontend to YouTube.
Whew, that guy has too much energy. He reminds me of the Minecraft videos my kids watch.
Slower.
Calmer.
Better.
He’s energized. You could watch at half speed 😅