Converting default PyTorch LLM Models to GGUF
Once I got it downloaded I tried to use the oobabooga web UI, but ran into issues, so I wanted to convert the model to GGUF format and use it with GPT4All. I found some good instructions: https://www.secondstate.io/articles/convert-pytorch-to-gguf/. I converted the PyTorch model to GGUF with FP16 weights. Then when I got around to trying to quantize it (without …
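For reference, the convert-then-quantize flow described above is typically done with llama.cpp's tooling. This is a rough sketch, not the exact commands from the linked article: the model directory, output filenames, and quantization type below are placeholders, and the script and binary names vary between llama.cpp versions (e.g. `convert.py` vs. `convert-hf-to-gguf.py`, `quantize` vs. `llama-quantize`).

```shell
# Get llama.cpp, which ships the conversion and quantization tools
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Step 1: convert the downloaded PyTorch/Hugging Face model directory
# to GGUF with FP16 weights ("./my-model" is a placeholder path)
python convert-hf-to-gguf.py ./my-model --outtype f16 --outfile my-model-f16.gguf

# Step 2: build the quantize tool, then quantize the FP16 GGUF down
# to a smaller type (Q4_K_M is a common 4-bit choice; pick what fits)
make quantize
./quantize my-model-f16.gguf my-model-q4_k_m.gguf Q4_K_M
```

The FP16 GGUF from step 1 is only an intermediate format; the quantized file from step 2 is the one you'd load into GPT4All.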