How to Run a Local LLM as a Thesaurus for Writing

I don’t particularly appreciate AI in the field of writing. My experience with AI as a reader so far is that it’s either so good you could have hired a squadron of ghostwriters for the same budget it took to run the AI, or so bad it amounts to spamming Google with AI-generated content. Ultimately, the LLM output that makes the final cut, in my experience, comes from the hands of professionals. That’s where the thesaurus comes in.

I have multiple thesauruses in all shapes and forms. And a thesaurus, by its very definition I suppose, always sits in a limbo over how academic and professional it should be in listing words and phrases that are synonymous under certain criteria. But that’s not how I, as a writer, always look for a word. I often start searching for a word that’s sitting on the tip of my tongue: what would be a word with a similar meaning, but for a different context? For example, in my previous post, the word “offload” somehow wasn’t on my radar. The best I could come up with was “delegate”, but I wanted a word for things, like software and systems. I thought to myself: why not throw all that context at an LLM and see what it has to say?

Preface

I am well aware ChatGPT offers a free tier, no questions asked. But there’s a limit on the number of messages you can send, and asking for words burns through it quickly once you start conveying the context of each query. I wasn’t going to subscribe to ChatGPT just for a thesaurus; I wanted to make use of my own computer first. The good news is, if you are on a Mac, chances are your Apple Silicon Mac is already powerful enough to run a decent local LLM.

Instructions

For this local LLM, I will use Llama 3 as an example. Llama isn’t the only model available, and Ollama isn’t the only way to run one locally. But for the purpose of replacing a thesaurus, I believe this combination is powerful enough.

  1. Download and install Ollama from the website.
  2. Run the following command to start the model: `ollama run llama3` (a sample session follows below).
  3. The LLM now runs in Terminal. When you are done, simply type `/bye` to quit the client. The Ollama server itself can be stopped from the menu bar.
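Here is roughly what a thesaurus-style session looks like, using the “delegate” example from earlier. The model’s suggestions vary from run to run, so the reply below is illustrative rather than verbatim:

```
% ollama run llama3
>>> I'm looking for a word like "delegate", but for software and systems rather than people, as in handing work off to another component.
A few candidates: "offload", "hand off", "dispatch", or "farm out".
For systems handing work to another component, "offload" is the closest fit.
>>> /bye
```

Ollama also accepts a one-shot prompt as a command-line argument, which is handy for a quick lookup without entering the interactive session: `ollama run llama3 "a word like delegate, but for software"`.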

Afterthoughts

I’ve tried llama3, 3.1, 3.2, and 3.3 just for the purpose of replacing my thesaurus. Each had its ups and downs, but on my Intel Mac, 3.3 was simply too heavy to run locally. As far as I am aware, Ollama doesn’t support GPUs on Intel Macs. The real irony is that even a powerful graphics card on an Intel Mac is still limited by its VRAM. Apple Silicon Macs don’t suffer from the memory issue nearly as much, as the unified memory design lets the GPU grab as much memory as it wants from one single pool of RAM.
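If your machine is on the weaker side, you can pick a smaller variant by its tag instead of the default. These are the tags as I understand them from the Ollama model library (worth double-checking on ollama.com/library, since they change over time):

```
% ollama run llama3.2:1b    # 1B parameters: fastest, lightest on memory
% ollama run llama3.2:3b    # 3B: a reasonable middle ground for older machines
% ollama run llama3.1:8b    # 8B: the default llama3.1 tag
% ollama run llama3.3       # 70B only: too heavy for most Intel Macs
```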

We are truly living in the future, where the writers in films will finally be done with ancient typewriters once and for all. We are also living in a rather bleak present, where a ChatGPT Plus subscription (the cheapest paid tier) starts at $20/month. I suppose you could call it a business expense, but I’m not keen on the idea of paying 20 bucks a month just so I can talk to a Furby version of a thesaurus.

This is also something I expect Apple and its Apple Intelligence to bolster as the default AI platform on Apple devices. Currently, Apple Intelligence is simply incapable of answering such a query and hands it off to ChatGPT instead. Apple Intelligence has yet to be fully deployed; hopefully we will see a contender by the time iOS 19 hits. Now might be a good time to upgrade your machine, if you are so inclined one way or the other.
