Private RAG vs. Gemini Shootout (NIST 800-53 Edition)

(Very silly Gemini-generated images of karate-clad llamas fighting cloud gods)

I've been using the "free" version of Gemini included with my Chromebook Plus for a few months, but I recently decided to upgrade to Advanced and test it against documents in my Google Drive and within Google Docs. The Advanced tier also gives you better image generation. In the past I'd ingested NIST 800-53 into PrivateGPT on my Mac M3 with mixed results, and I thought I'd try again on the ASUS Nvidia 3060 I added to my son's old gaming PC this past week, where I already had Ollama running.

Local Hardware Setup

AMD Ryzen with 12GB Nvidia 3060 Running Ollama
LLM Benchmarks for the Hardware

The stats above were generated by LLM Benchmark.

Private RAG Configuration

Ollama 0.3.12

With the exception of having Ollama listen on 0.0.0.0, I used a default install of the latest Ollama with the CUDA 12 drivers that the install shell script sets up:

root@rogstrix-b450-ryzen5:/etc/systemd/system/ollama.service.d# cat override.conf 
[Service]
Environment="OLLAMA_HOST=0.0.0.0"

I used the following llama3.2 model:

$ ollama show llama3.2:latest
  Model
    architecture        llama     
    parameters          3.2B      
    context length      131072    
    embedding length    3072      
    quantization        Q4_K_M    
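
For completeness, pulling the chat model and the embedding model used throughout this post is a one-liner each (exact tags may differ on your install):

$ ollama pull llama3.2:latest
$ ollama pull nomic-embed-text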

PrivateGPT 0.6.2 (Llama 3.2 + Qdrant + Nomic)

Python is a blessing and a curse for development, but PrivateGPT is the easiest of the local RAG tools I've found to install. It is on my list to get H2oGPT working with Ollama, but that is a little trickier, so I put it off.

Since I'm running Ubuntu 24.04 (which ships with Python 3.12), I had to install Python 3.11 with uv in a virtual environment I created. I tried to make PrivateGPT run with uv alone, but ended up sticking with Poetry, and once I had the right Python version it worked with no issue. A sketch of the steps is below, followed by the resulting pyvenv.cfg.
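
Roughly what the install looked like; the extras list is from the PrivateGPT docs as I remember them, so treat this as a sketch rather than a copy-paste recipe:

$ uv python install 3.11
$ cd private-gpt-0.6.2
$ uv venv --python 3.11
$ source .venv/bin/activate
$ poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"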

ubuntu@privatgpt:~/private-gpt-0.6.2/.venv$ cat pyvenv.cfg 
home = /home/ubuntu/.local/share/uv/python/cpython-3.11.6-linux-x86_64-gnu/bin
implementation = CPython
uv = 0.4.18
version_info = 3.11.6
include-system-site-packages = false
prompt = private-gpt-0.6.2
With the venv sorted, the relevant parts of PrivateGPT's Ollama settings profile (settings-ollama.yaml), pointed at the remote Ollama server, looked like this:

server:
  env_name: ${APP_ENV:ollama}

llm:
  mode: ollama
  max_new_tokens: 512
  context_window: 3900
  temperature: 0.1     #The temperature of the model. Increasing the temperature will make the model answer more creatively. A value of 0.1 would be more factual. (Default: 0.1)

embedding:
  mode: ollama

ollama:
  llm_model: llama3.2
  embedding_model: nomic-embed-text
  api_base: http://100.115.28.90:11434
  embedding_api_base: http://100.115.28.90:11434  # change if your embedding model runs on another ollama
  keep_alive: 5m
  tfs_z: 1.0              # Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.
  top_k: 40               # Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)
  top_p: 0.9              # Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
  repeat_last_n: 64       # Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
  repeat_penalty: 1.2     # Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
  request_timeout: 120.0  # Time elapsed until ollama times out the request. Default is 120s. Format is float.

vectorstore:
  database: qdrant

qdrant:
  path: local_data/private_gpt/qdrant
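
To launch PrivateGPT against this profile, the usual profile switch applies (assuming the stock Makefile from the 0.6.2 release):

$ PGPT_PROFILES=ollama make run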

AnythingLLM (Llama 3.2 + LanceDB + Anything LLM Embedder)

AnythingLLM Desktop is distributed as an AppImage, which I installed on an Ubuntu 24.04 desktop without a GPU, knowing I would point it at the remote Ollama server over my tailnet since the desktop is on a different wireless network.

I pointed it at the Ollama instance on the Ryzen PC described above.
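
A quick sanity check that the desktop can reach the remote Ollama over the tailnet (same api_base as in the PrivateGPT config above) is to hit the tags endpoint, which lists the models the server has pulled:

$ curl http://100.115.28.90:11434/api/tags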

Vector store after ingesting the NIST standard.

This is the first time I've used LanceDB.

AnythingLLM's built-in embedder is based on MiniLM (a CPU model), which I assume is far less capable than Nomic.

Document Ingestion

Gemini, of course, had no issues ingesting the nearly 6 MB NIST standard, whether it was pulled from Google Drive or added to the Gemini chat window on my Chromebook Plus. PrivateGPT was the slowest, taking at least 30-45 minutes to ingest the document, but it at least shows a progress bar so you know it is doing something. I left to grab lunch and it was done when I came back. The initial estimate was 46 seconds, but it went way over, and about 600 MB of GPU VRAM was used during the embedding process. There was minimal CPU usage on the PrivateGPT VM or the bare-metal PC hosting Ollama. AnythingLLM took about 4-5 minutes to ingest, but gave no indication of how long it would take. As I discovered afterwards, AnythingLLM's ingestion and embedding do not even use the GPU.
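
For anyone wanting to reproduce the VRAM observation, polling nvidia-smi on the Ollama host during embedding is the simplest approach; a minimal sketch:

$ watch -n 2 nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv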

Listing the Top Control Families

What are the top level NIST Control Families

Gemini Advanced

Gemini was the only LLM to properly pull out the 20 control families listed on page 8 of the standard. None of the other tools could do this "simple" task.

AnythingLLM

I got two different answers from AnythingLLM, the first with examples of controls.

I created a new thread and it gave me these 15 controls, only getting Access Control right.

Understanding the (-1) Control in Each Family

Anyone who has had to suffer through NIST 800-53 knows the first control in each family is Policies and Procedures. I was curious whether any of the LLMs would get this, so I came up with this not-great prompt to see how they would respond.

What is the similarity across the -1 control in AC-1, CA-1, AU-1, etc.

Gemini Advanced

Gemini quickly and easily answered the prompt, getting the intent of my question.

PrivateGPT

PrivateGPT also answered the question, with an up-front caveat.

AnythingLLM

AnythingLLM really struggled to understand the question.

Vulnerability Scanning

I crafted this intentionally vague prompt to see what the LLMs could understand about vulnerability scanning best practices.

What vulnerability scanning practices should be implemented to ensure most controls are covered?

Gemini Advanced

Shockingly, I found Gemini's response was the most "word salad" of the three. It started off well but went downhill, not really understanding "organization-defined frequency", which is littered throughout NIST 800-53. However, it did mention CVSS, noted that scanners need to be updated, and covered classes of assets (meaning scan targets) in a reasonable manner.

PrivateGPT

The markdown response with a bold-faced, numbered list provided by llama3.2 + Qdrant in PrivateGPT was more helpful. It also didn't really understand "organization-defined frequency", but it did suggest monthly. It captured that different types of vulnerabilities exist (software flaws and improper configurations) as well as that results need to be reviewed and communicated. Genius! It did pick up the idea of "scan coverage", but not "control coverage".

AnythingLLM

Like PrivateGPT, AnythingLLM returned a numbered, bulleted list with short explanations. It did the best on this question compared to the others, and I actually liked that it captured system vs. component (even though it doesn't really understand what that means) as well as the ideas that scanning should be continuous and that you should focus on the most vulnerable systems. Given how poorly this configuration performed on the more basic questions, I was surprised it did this well.