Current situation: I’ve got a desktop with 16 GB of DDR4 RAM, a 1st-gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB of VRAM. I can run 7B-13B models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7B Q6 at around 3 tokens/second.

I want to get to the point where I can run Mixtral 8x7B at Q4 quant at an acceptable speed (5+ tokens/sec). I can run the Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and, assuming I don’t run out of memory, it also runs at about 2 tokens per second.
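For a rough sense of why the Q4 quant struggles (back-of-envelope only, assuming ~46.7B total parameters for Mixtral 8x7B and ~4.5 bits per weight on average for a Q4_K_M GGUF):

```
# Hedged sketch: approximate Q4 Mixtral footprint vs. available memory.
awk 'BEGIN {
  gb = 46.7 * 4.5 / 8                 # ~26 GB of quantized weights
  printf "model ~%.1f GB, spill past 16 GB VRAM ~%.1f GB\n", gb, gb - 16
}'
```

Roughly 10 GB of layers spilling into 16 GB of system RAM shared with the OS would explain both the hour-long load (swapping) and the ~2 tokens/sec; another 16 GB would let the spilled layers sit entirely in memory.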

What’s the easiest/cheapest way to get my system to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I also upgrade the CPU?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. I’m not sure if ollama can split across different-brand GPUs yet, but I know this capability is in llama.cpp now.
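For reference, here’s roughly what that looks like in llama.cpp, assuming a build with the Vulkan backend (which can address GPUs from different vendors in one process). The flags are real llama.cpp options; the model filename and the 16,4 ratio are just illustrative:

```
# Sketch: split layers across the RX 6800 XT (16 GB) and GTX 970 (4 GB).
#   -ngl 99              offload as many layers as will fit
#   --split-mode layer   place whole layers on each GPU
#   --tensor-split 16,4  proportions roughly matching each card's VRAM
./llama-cli -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf \
  -ngl 99 --split-mode layer --tensor-split 16,4 \
  -p "Hello"
```

Whether the 970’s small VRAM is worth the added complexity is a separate question, but that’s how llama.cpp expresses the split.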

Thanks for any pointers!

  • OpticalMoose@discuss.tchncs.de · 6 months ago

    Ollama doesn’t currently support mixing CUDA & ROCm. https://github.com/ollama/ollama/issues/3723#issuecomment-2071134571

    One thing to keep in mind about adding RAM: your speed could drop depending on how many slots you populate. I have a 5700G, and with 2x16GB it runs at 3200MHz, but with 4x16GB (same exact product) it only runs at 1800MHz. In my case, RAM speed has a huge effect on tokens/sec whenever a model has to use some system RAM. Rough math below:
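    This is a back-of-envelope model, on the assumption that when layers spill into system RAM, generation is bound by reading those weights once per token, so peak memory bandwidth sets a tokens/sec ceiling:

    ```
    # Peak dual-channel DDR4 bandwidth = MT/s x 2 channels x 8 bytes/transfer.
    echo "DDR4-3200: $(( 3200 * 2 * 8 / 1000 )) GB/s"   # 51.2 GB/s exact
    echo "DDR4-1800: $(( 1800 * 2 * 8 / 1000 )) GB/s"   # 28.8 GB/s exact
    # With ~10 GB of weights in RAM: ~5 vs ~2.9 tokens/sec ceiling.
    ```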

    You can check AMD’s spec page for your processor, but they don’t really document a lot of this stuff.