Setting up LLM on Raspberry Pi

NOTE

What’s covered?

  • A few projects with AI in the past
  • Lots of linux commands
  • Reviewing Raspberry Pi system settings
  • Setup ollama on Raspberry Pi
    • Setup issues (bugs?)
    • Observing available models
    • Download model
      • Purging partial downloads
      • Downloading correct model
    • Video demonstrating speed
    • Odd claims by AI of who I am
    • Attempts to add data to the AI model
    • Saving Sessions
    • Lost models & sessions
    • Interacting with API via curl
  • Reinstall Raspberry Pi with Ubuntu
    • Setup WiFi
    • Setup Multicast DNS
    • Setup SSH
    • Rename ubuntu user
    • Transferring SSH public key to Raspberry Pi
  • Reinstall Ollama quicker
  • Setup Docker
  • Setup Open WebUI
  • Populate knowledge, and use as context for prompts
  • Use urls as context for prompts

On with the article(ish)

I’m still sick as a dog, but slightly recovering. I’m no longer bound to the bed resting all day. I feel like I can get back into doing stuff again, but not too much. Rather than focusing on stuff that requires intense thought about multiple systems, attacks, and architecture, I want to try something simple today. Let’s set up a large language model (LLM).

History with AI

My history with AI goes back pretty far, with many variations and ports of the ELIZA chatbot. I think most kids with a computer in the ’80s must have gotten their hands on the ELIZA code to modify it. The SoundBlaster card came with a talking Dr. Sbaitso that was fun to interact with in the ’90s and seemed vaguely similar to ELIZA. I recall Microsoft Windows had a parrot, a genie, and a few other characters you could talk with. Eventually I worked with various forms of ALICE, both interacting with it and then using sites like Pandorabots to set up my own version. Later I delved into implementing my own home-brew artificial neural network in JavaScript. I’ve also published libraries for common functions used in artificial neural networks. Most of it is lost in the ether of an aging internet, but here is what I’ve found:

That brain experiment is an interesting one. I used to work with PandoraBots, which had an ALICE chatbot on the backend. I could edit my own AIML files and look at various responses to modify the AIML for future conversations. It was a fun project, and the original setup was just a few prims based on a parody “Look Around You: The Brain”.

Look Around You: Brain Experiment #2

Perhaps I could hook it up to the LLM somehow?

Future Use of AI

I’ve got a lot of hopes of setting up AI for my current project as an add-on (eventually) in the future to do the following:

  • Facial Recognition to identify photos with the same people, or relatives
  • Image Classification to identify unique buildings and places around town
  • Image Classification to create keywords of what is in photos
  • Optical Character Recognition to identify text in photos and documents (Print, Cursive, Short-hand)
  • Describe what is seen in an image for image captions
  • Detect and decode barcodes
  • Transcribe audio into text with timestamps
  • Video classification to identify people, places, text, etc. and when they appear
  • Analyze large amounts of text, images, audio, and video and give a summary of the content
  • Query the system as a whole to create a report of specific people, groups of people, a time period, and events, along with references to where the data came from

Well, everyone does “AI” these days. I don’t know if it’s all hype or not. Mostly hype, probably, but either way I think my system needs to be pretty steady before I introduce AI. The system I am working on is primarily for research and managing collections of materials and documents.

Concerns

My main concern with language-processing bots is that, before LLMs came along, they were often unhelpful. LLM bots have a different problem: they sound confident in their responses, but when pressed, I can’t get any details about how they came to their conclusions. They can’t show “the receipts”. I not only need answers, I need paper trails that can be verified. The other concern is resources in terms of memory, storage, and CPU utilization.

Raspberry Pi

My goal is to get this working on a Raspberry Pi. I must have a dozen of these around the house. At the moment I have one sitting on my desk that gets used occasionally, and it’s free to be reconfigured for anything.

Contact

It appears it hadn’t even been on. This is probably due to a blackout; the enclosure isn’t capable of booting up once power is restored. Hmm… it’s not responding in the terminal or on the web server. Hooking up a monitor, it appears to have its date set to Jan 1970. Maybe the clock battery is dead? (Actually, the Pi has no battery-backed clock at all; it falls back to the Unix epoch until it can sync time over the network.) It lets me log in with the correct account, so the memory doesn’t look bad. I need to hunt for a keyboard. Hmm… maybe I can find its IP via the router. Well, well, well… my laptop isn’t connected to the correct network. I can ping it!

Last Login Feb 11, 2023… that’s a year and a half. Let’s look at some of its stats.

System Information
Linux commands to pull system information
# Model
pi@cjpi:~$ cat /proc/cpuinfo | grep "Hardware"
Hardware: BCM2835

# SD Card
pi@cjpi:~$ df -h | grep "mmcblk0"
/dev/mmcblk0p2   58G  4.2G   51G   8% /
/dev/mmcblk0p1  253M  149M  104M  59% /boot/firmware

# Memory
pi@cjpi:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:           7.6Gi       573Mi       6.2Gi       5.0Mi       833Mi       6.8Gi
Swap:             0B          0B          0B

# CPU
pi@cjpi:~$ cat /proc/cpuinfo
processor	: 0
BogoMIPS	: 108.00
Features	: fp asimd evtstrm crc32 cpuid
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x0
CPU part	: 0xd08
CPU revision	: 3

processor	: 1
BogoMIPS	: 108.00
Features	: fp asimd evtstrm crc32 cpuid
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x0
CPU part	: 0xd08
CPU revision	: 3

processor	: 2
BogoMIPS	: 108.00
Features	: fp asimd evtstrm crc32 cpuid
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x0
CPU part	: 0xd08
CPU revision	: 3

processor	: 3
BogoMIPS	: 108.00
Features	: fp asimd evtstrm crc32 cpuid
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x0
CPU part	: 0xd08
CPU revision	: 3

Hardware	: BCM2835
Revision	: d03115
Serial		: 1000000029529bfa
Model		: Raspberry Pi 4 Model B Rev 1.5

# System Info
pi@cjpi:~$ uname -a
Linux cjpi 5.19.0-1015-raspi #22-Ubuntu SMP PREEMPT Mon Mar 6 10:35:33 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

Well, something is off with Hardware being BCM2835. I thought a Raspberry Pi 4 used BCM2711. It turns out this is a known kernel quirk: /proc/cpuinfo reports the original BCM2835 family for every Pi model, even though the Pi 4’s actual SoC is the BCM2711. Let’s consolidate that raw info into something more comprehensible:
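If you want the board to identify itself directly, the device tree is more reliable than /proc/cpuinfo. A quick sketch, assuming a Raspberry Pi kernel that populates the device tree:

```shell
# The device tree holds the human-readable board name. The file is
# NUL-terminated, so strip the trailing NUL and add a newline.
tr -d '\0' < /proc/device-tree/model && echo
```

On this board it should print something like “Raspberry Pi 4 Model B Rev 1.5”.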

Hardware Specifications:

  • Model: Raspberry Pi 4 Model B Rev 1.5
  • CPU: Quad-core 64-bit ARM Cortex-A72 CPU (1.5 GHz)
  • Memory: 8 GB RAM (7.6 GiB reported)
  • Storage: 58 GB SD card (8% used)
  • Other: BCM2711 SoC (reported as BCM2835 by the kernel), Revision d03115, Serial number 1000000029529bfa

Software Information:

  • Operating System: Linux 5.19.0-1015-raspi (Ubuntu)
  • Kernel: Preemptible SMP kernel
  • Architecture: ARM 64-bit

With this, let’s go find out what’s needed to run a large language model, and whether our system meets the requirements.

I’m looking at ollama. It’s not an LLM itself, but a tool that downloads and runs different models. You aren’t limited to just one; think of it like Docker. I’m looking at an LLM called Llama 3.2. There are two models: 1B (1.3 GB) and 3B (2.0 GB). Those are sizable downloads, but I have enough storage and RAM for them. I’m not seeing minimum system requirements listed, such as RAM, storage, or CPU.

Installation

They have a script that downloads and installs everything for macOS, Linux, and Windows. The pi is running Linux, and here is the script they offer.

curl -fsSL https://ollama.com/install.sh | sh

Once prompted for my password, it downloaded the Linux arm64 bundle and gave a progress bar, and took about three minutes to download. It gave a nice big warning about a missing NVIDIA/AMD gpu, but still proceeded with the ability to run in cpu-only mode.

Can I add an external NVIDIA gpu to a Raspberry Pi, just for ollama to use?

Output from downloading ollama
pi@cjpi:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
[sudo] password for pi: 
>>> Downloading Linux arm64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
pi@cjpi:~$ 
Running

So… that was it? That was pretty uneventful. I expected to be jumping through hoops troubleshooting problems, updating drivers and obscure system settings, or installing package dependencies. Maybe I spoke too soon. How do we run this?

ollama

Run ollama
pi@cjpi:~$ ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
pi@cjpi:~$ ollama serve
Couldn't find '/home/pi/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 base64-encoded-key-here

Error: listen tcp 127.0.0.1:11434: bind: address already in use

Running ollama just gives you a read-out of commands. It looks like I want ollama serve. It appears that something is already running on the port it wants (11434). I tried ollama stop, and it wanted an argument.

What’s listening on port 11434?

Scan port 11434
pi@cjpi:~$ netstat -lp | grep 11434
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 localhost:11434         0.0.0.0:*               LISTEN      -                   
pi@cjpi:~$ 

Well… that doesn’t tell me anything. I don’t even have a process id. Doing a bit of a search, I found other people with the same problem (issue #707: 127.0.0.1:11434: bind: address already in use), specifically with a fresh installation. Apparently the systemd service for ollama has already started. The fix was to stop the system process, and then run ollama serve afterwards. My terminal started filling up with status messages and then seemed to hang after a few minutes.
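In hindsight, the missing process ID was just a permissions issue. Running the scan as root would have shown the owner; a sketch using ss (the modern replacement for netstat):

```shell
# Show listening TCP sockets on port 11434, including the owning process
sudo ss -ltnp 'sport = :11434'

# Or ask systemd directly whether its ollama unit is the one holding the port
systemctl status ollama
```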

Stop ollama system process, then start
pi@cjpi:~$ systemctl stop ollama
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to stop 'ollama.service'.
Authenticating as: pi
Password: 
==== AUTHENTICATION COMPLETE ===
pi@cjpi:~$ ollama serve
2024/10/20 18:59:09 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/pi/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-20T18:59:09.475-04:00 level=INFO source=images.go:754 msg="total blobs: 0"
time=2024-10-20T18:59:09.475-04:00 level=INFO source=images.go:761 msg="total unused blobs removed: 0"
time=2024-10-20T18:59:09.476-04:00 level=INFO source=routes.go:1205 msg="Listening on 127.0.0.1:11434 (version 0.3.13)"
time=2024-10-20T18:59:09.477-04:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama2784213167/runners
time=2024-10-20T18:59:51.063-04:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cuda_v11 cuda_v12]"
time=2024-10-20T18:59:51.063-04:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-20T18:59:51.091-04:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-10-20T18:59:51.091-04:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="7.6 GiB" available="6.7 GiB"

So it’s hanging… or is it listening? Perhaps I need to open a new terminal to interface with it.

Finding Models

I’m in a new terminal, and I’m looking at some of the commands. A few stick out surrounding models. Actually, just about everything is about a model. Running ollama list, it shows that no models have been installed. So this seems more like a service that runs models. Let’s get a model.

We have a bunch of models available to us: Meta Llama, Google Gemma, Alibaba’s Qwen, Phi, NVIDIA Nemotron-mini, Mistral (Small/Nemo), Deepseek Coder, Mixtral, Code Gemma, Command R, Command R+, LLaVA, Nomic Embed Text, Mixed Bread, Dolphin, StarCoder, … The list just goes on.

Some of the models get pretty large. I’m seeing tags like 8B and 70B that I assumed referenced file size, but the downloads for that model are 4.7 GB and 40 GB. Looking into it, 8B and 70B refer to parameter counts, so 8B means 8 billion parameters. My head just exploded. I have access to models with billions of parameters…
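The download sizes roughly line up with the parameter counts once you account for quantization. Ollama’s default tags are typically 4-bit quantized (about half a byte per parameter), which a quick back-of-the-envelope check confirms:

```shell
# ~0.5 bytes per parameter at 4-bit quantization:
# 8 billion parameters is on the order of 4 GB, close to the 4.7 GB listed
# (the remainder is overhead and non-quantized layers)
awk 'BEGIN { printf "%.1f GB\n", 8e9 * 0.5 / 1e9 }'
```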

Ollama lets you sort the more popular models. The top contender has 6.9 million pulls alone. Let’s take a look at the popular ones along with the parameter size.

Name       Parameters                        Author
llama3.1   8B, 70B, 405B                     Meta
llama3     8B, 70B                           Meta
gemma      2B, 7B                            Google
mistral    7B                                Mistral AI
qwen       0.5B, 1.8B, 4B, 32B, 72B, 110B    Alibaba
qwen2      0.5B, 1.5B, 7B, 72B               Alibaba
phi3       3B, 14B                           Microsoft
llama2     7B, 70B                           Meta
gemma2     2B, 9B, 27B                       Google
llava      7B, 13B, 34B                      ?

Some of these models have specific purposes. LLaVA is specifically for images. I definitely need to come back to LLaVA at a later time as it aligns with some of my goals.

Google’s Gemma model seems fairly small at 2 billion parameters. I’ll go with the later version, Gemma2. I wonder if it’s similar to the Google Gemini model that I use regularly.

You can also sort by featured models, and Gemma2 comes up third on the list.

Ollama Starting

Before I move onto installing the model, I noticed that the process to start ollama has had some updates.

Progress starting ollama
[GIN] 2024/10/20 - 19:12:49 | 200 |     131.294µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/10/20 - 19:12:49 | 200 |     778.097µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/10/20 - 19:13:02 | 200 |      79.924µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/10/20 - 19:13:02 | 200 |     218.089µs |       127.0.0.1 | GET      "/api/ps"

Pulling Models

Downloading and installing the model is fairly straightforward. You just use the pull command.

ollama pull gemma2

This is where your internet connection is tested to its limit. Although I have a 1 Gbps connection over fiber, the WiFi limits me to about 10 MB/s.

Halfway through the download, I realized that I was downloading the wrong model. I wanted the 2B model, and for that I need gemma2:2b. How do I clean up a partial download?

It looks like you need to restart ollama, and it will clean up the partially downloaded models. (See issue #1599 Delete partially downloaded models.) Running back to the terminal with the startup process, I saw a few more lines, which gave the impression that they were API requests. I was able to Ctrl+C to quit the service and run ollama serve again.
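Pieced together, the recovery looks roughly like this (the pruning of partial blobs happens automatically at server startup, unless OLLAMA_NOPRUNE is set):

```shell
# In the server terminal: Ctrl+C the running instance, then start it again.
# Partially downloaded blobs are pruned during startup.
ollama serve

# In a second terminal: pull the tag I actually wanted (the 2B variant)
ollama pull gemma2:2b
```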

Let’s try gemma2:2b

Much quicker!

Running Models

So we have our model. How do we use it?

ollama run gemma2:2b

Ugh… well… what am I saying? Wow! … but, … ugh. The first observation is speed. You’re watching the cursor spit out a few letters at a time, which feels like it may be a bit slower than the early days of dial-up. The second observation is the inability to do much of anything. It knows nothing about American history, pop culture references, complex math problems, etc. It seems to be good on science, able to catch fallacies and explain the theory of relativity.

Running Large Language Model on Raspberry Pi

Training Models

So here is the next question… how do I train a model? I have large collections of data in various places. Can I train the model to gain some kind of knowledge to speak about the data I have? Do I need to manually rewrite all of my data in a specific format? Can it just extrapolate from text, html, json, pdf, or xml files? Is there a way to automate any preparation of data for the models? Do I train a model by talking to it, or is there an external process?

Looking at the commands available, it seems I can save my session as a new model via /save lewie. Let’s try providing some information.

Training via session
>>> /?
Available Commands:
  /set            Set session variables
  /show           Show model information
  /load <model>   Load a session or model
  /save <model>   Save your current session
  /clear          Clear session context
  /bye            Exit
  /?, /help       Help for a command
  /? shortcuts    Help for keyboard shortcuts

Use """ to begin a multi-line message.

>>> /save lewie
Created new model 'lewie'
>>> Lewis Moten is a programmer
Sure, I can help you with information about Lewis Moten.

>>> /save lewie
Created new model 'lewie'
>>> who is lewis?  
Lewis Moten is a software engineer and entrepreneur who has made significant contributions to the 
tech industry. He is the co-founder and CEO of Palantir, a company that develops software and 
data analytics solutions for customers in various industries including defense, government, and 
healthcare. He was also a co-founder of The Pirate Project, a tech company that develops software 
for the financial industry. Moten is a highly regarded thought leader in the tech industry and 
has been featured in numerous media outlets.

>>> Send a message (/? for help)

Now this is where I lose confidence in AI. I gave it one fact, and it went off on a wild tangent making up who I was. It looked good at first, but CEO of Palantir and co-founder of the Pirate Project? These are somewhat laughable and easily disproven. Did it assume that I was involved with these companies simply because I said I was a programmer? In this case, I’m having to train it that I am someone else.

I’m also getting the sense that I’m not really saving a new model, but more of a save game. I’m also under the impression that using external tools to train a model may be quicker. For example, telling the model that “Lewis is not the co-founder or CEO of Palantir.” seemed to put it into a thought process for about three minutes before responding. Prompting for an explanation of who “Lewis” is after the correction, the AI still thinks that I am the CEO and co-founder of Palantir.

What makes you think that Lewis is the CEO and Co-founder of Palantir?
I am unable to provide information about the specific details of Lewis Moten’s career, including
his role at Palantir and The Pirate Project, as I do not have access to real-time or specific
information.

And this is the crux of the problem with AI today. It’s like a teacher telling you to “show your work”. Although an LLM is able to memorize a lot of data and see the relationships within it, it’s unable to recall where that data came from, or what influenced it to come to those conclusions. I personally know that the response is wrong, but others who are not thinking critically about the responses of AI may take them for granted and run with them. Even worse, some systems may be automated to act on the responses of AI, which is bad if those responses have not been thoroughly tested.

>>> what is one plus 4
5


>>> show your work
Sure, here's the work:

5 + 4 = 9

I’m thinking that part of this is due to the tiny model. It doesn’t have much to go on. The fact that it can answer questions and summarize should perhaps be good enough. I’m expecting too much from it.

Whoa… saving the session creates a copy of the model. That means I now have two 1.7 GB models on my SD card. I thought this was done in layers, similar to how Docker containers add layers on top of a base image. I’ll have to watch my storage with this.
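The startup log earlier showed the store location (OLLAMA_MODELS:/home/pi/.ollama/models), so keeping an eye on it is easy:

```shell
# List the installed models, then measure the on-disk store
# (this path came from the OLLAMA_MODELS value in the server's startup log)
ollama list
du -sh ~/.ollama/models
```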

Training the AI Externally

It seems that training the AI while it’s running in ollama isn’t the ideal way to go about it; it looks like I need to do this externally. LLMs are structured differently from one another, yet somehow ollama knows how to run each one. Since we are working with Gemma2, let’s see if I can find any information on improving it.

Exposing Ollama on the Network

By default, ollama starts a web service and is configured to only respond to the local network. We can change this by editing the ollama.service configuration to accept all connections. However, keep in mind that you are exposing complete control of ollama to the network – creating/deleting models, pulling/pushing models, etc.

sudo systemctl edit ollama.service
### Editing /etc/systemd/system/ollama.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

## If the above doesn't work, use one of these and
## fill in the xx
# Environment="OLLAMA_HOST=192.168.xx.xx"
# Environment="OLLAMA_HOST=10.xx.xx.xx"
# Environment="OLLAMA_HOST=xxxx.local"

### Lines below this comment will be discarded

### /etc/systemd/system/ollama.service
# [Unit]
# Description=Ollama Service
# After=network-online.target
# 
# [Service]
# ExecStart=/usr/local/bin/ollama serve
# User=ollama
# Group=ollama
# Restart=always
# RestartSec=3
# Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local>
# 
# [Install]
# WantedBy=default.target

Now reload and restart.

sudo systemctl daemon-reload
sudo systemctl restart ollama

Somehow during the refresh, I lost my model, and the saved session. ollama list shows nothing.

And from another computer, we can call the API

➜  ~ curl http://cjpi.local:11434 
Ollama is running% 

The Ollama API allows you to do pretty much anything that you can already do on the command line. The main endpoint that I’m after is generating a completion (prompting a model for a response). In addition, let’s tell it to give us the response all at once, rather than bit by bit.

➜  ~ curl http://cjpi.local:11434/api/generate -d '
{
"model": "gemma2:2b", 
"prompt": "Who is lewis moten?",
"stream": false
}'

Now this is going to take a while, depending on its response. Running ollama ps, you can see 100% CPU is being used to answer the prompt.

pi@cjpi:~$ ollama show gemma2:2b
  Model
    architecture        gemma2    
    parameters          2.6B      
    context length      8192      
    embedding length    2304      
    quantization        Q4_0      

  Parameters
    stop    "<start_of_turn>"    
    stop    "<end_of_turn>"      

  License
    Gemma Terms of Use                  
    Last modified: February 21, 2024    

pi@cjpi:~$ ollama ps
NAME         ID              SIZE      PROCESSOR    UNTIL              
gemma2:2b    8ccf136fdd52    3.1 GB    100% CPU     4 minutes from now 

Tons of minutes pass…

Response as JSON
{
  "model":"gemma2:2b",
  "created_at":"2024-10-21T02:49:42.216118609Z",
  "response":"Lewis Motton is a **highly talented American actor**, known for his captivating performances in various projects. Here's what you should know about him:\n\n**Key Facts:**\n\n* **Birth name:** Lewis Motten \n* **Known for:** Roles in TV series like *The Underground Railroad*, *Black Mirror*, and *Ozark*. He's also a featured actor in films like *Spider-Man: Homecoming* and *Moonlight*.  \n* **Acting style:** Known for his versatility, bringing depth and nuanced performances to roles. \n\n**Notable Projects:**\n\n* **\"The Underground Railroad\" (2021):** This critically acclaimed show on Amazon Prime Video features Motton as a crucial character in the story of freedom and oppression in the antebellum south.\n* **\"Black Mirror\" (2015-present):**  He's appeared in several episodes, showcasing his acting range across different genres and themes. \n* **\"Ozark\" (2017-2022):** A critically acclaimed Netflix series where Motton played a key role in the complex plotline.\n\n**Beyond Acting:**\n\n* Lewis Motton also works behind the camera, with experience as a writer and filmmaker.\n\n\n**Where to learn more:**\n* **IMDb:** You can find detailed information about his filmography, career, and awards on IMDb. \n* **Social media:** Check out his official social media accounts for updates, photos, and glimpses into his work.\n\nIf you enjoy watching compelling dramas or intriguing stories, Lewis Motton is definitely an actor to follow! \n",
  "done":true,
  "done_reason":"stop",
  "context": [
106,1645,108,6571,603,127989,9927,479,235336,107,108,106,2516,108,59313,22797,1166,603,476,5231,114089,31339,3725,14250,190213,3836,604,926,130641,23660,575,4282,7340,235265,5698,235303,235256,1212,692,1412,1230,1105,1357,235292,109,688,2469,26869,66058,109,235287,5231,38366,1503,66058,15943,22797,965,235248,108,235287,5231,50016,604,66058,70022,575,5607,4100,1154,649,651,64044,38006,22403,649,9737,36245,22403,578,649,70197,1002,23016,1315,235303,235256,1170,476,21200,14250,575,14013,1154,649,48617,235290,2397,235292,161315,235287,578,649,197107,23016,139,108,235287,5231,98837,3411,66058,57056,604,926,84285,235269,15111,10624,578,171140,23660,577,16065,235265,235248,109,688,59533,29381,66058,109,235287,5231,235281,651,64044,38006,235281,591,235284,235276,235284,235274,1245,688,1417,57278,84262,1500,611,8641,14774,10319,5119,22797,1166,685,476,20305,3285,575,573,3904,576,11402,578,69257,575,573,14371,116278,8432,235265,108,235287,5231,235281,9737,36245,235281,591,235284,235276,235274,235308,235290,20533,1245,688,139,1949,235303,235256,9223,575,3757,24958,235269,104412,926,15095,3001,4492,2167,55342,578,25473,235265,235248,108,235287,5231,235281,70197,1002,235281,591,235284,235276,235274,235324,235290,235284,235276,235284,235284,1245,688,586,57278,84262,21004,4100,1570,22797,1166,6991,476,2621,4731,575,573,5766,10198,703,235265,109,688,49614,59925,66058,109,235287,15943,22797,1166,1170,3598,5470,573,7909,235269,675,3281,685,476,11729,578,100262,235265,110,688,6006,577,3918,978,66058,108,235287,5231,122731,66058,1646,798,1717,11352,2113,1105,926,1751,122779,235269,7460,235269,578,14698,611,38864,235265,235248,108,235287,5231,16458,4562,66058,6958,921,926,5613,3127,4562,12210,604,12921,235269,5831,235269,578,157423,1280,926,1160,235265,109,2495,692,4313,11041,45966,90191,689,69310,8965,235269,15943,22797,1166,603,8118,671,14250,577,1611,235341,235248,108
  ],
  "total_duration":282538357320,
  "load_duration":23203444421,
  "prompt_eval_count":15,
  "prompt_eval_duration":3617081000,
  "eval_count":335,
  "eval_duration":255667816000
}

Formatted response from markdown

For once it’s not saying that I’m the CEO of Palantir. Here it says that I’m an American actor, and spells my name wrong at every chance it gets. Another response claimed that I was a prominent hip hop artist.
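Since the API hands back JSON, a small quality-of-life trick is extracting just the response text. A sketch, assuming jq is installed (sudo apt install jq); the canned JSON here stands in for the live API response so the pipe is easy to see:

```shell
# Stand-in for the API's JSON document
json='{"model":"gemma2:2b","response":"Lewis Moten is a programmer.","done":true}'
printf '%s' "$json" | jq -r '.response'

# Against the live server it would be:
# curl -s http://cjpi.local:11434/api/generate \
#   -d '{"model":"gemma2:2b","prompt":"Who is lewis moten?","stream":false}' \
#   | jq -r '.response'
```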

Getting Data into the Model

Let’s switch our focus back to getting data into the model, or at least to using it as context when coming up with a response.

Let’s try seeing if we can tell the model to summarize a url.

>>> summarize the content at https://lewismoten.com/
The website, https://lewismoten.com/, is a personal blog primarily run by Lewis Moten, 
a prominent figure in the music industry. 

Here's a summary of its content:

* **A Hub for Music & Art:** The site serves as a platform showcasing Lewis Moten's creativity 
and passion for music production and artistry.  
* **Behind-the-Scenes Insights:**  It provides exclusive glimpses into his creative process, 
featuring details about his production work, studio experiences, and artistic inspirations. 
* **Musical Expertise & Collaboration:** The blog offers insights into his collaborations with 
renowned artists like Drake and Kanye West, highlighting his innovative approach to music 
production. 
* **Entrepreneur's Perspective:**  He shares his entrepreneurial endeavors as a business owner in 
the music industry, showcasing his role in developing new talent and promoting diverse voices. 

Essentially, this website allows Lewis Moten to share his unique perspective on the music world 
with fans and peers alike, providing insights into his artistic vision, professional journeys, 
and entrepreneurial pursuits. 

It’s not running out to the URL. It’s completely offline. I can see how it identified a wordpress url as a personal blog. However, it starts making up stuff about being a hip hop artist again.

I saw an example to summarize content of a file with shell commands.

$ ollama run llama3.2 "Summarize this file: $(cat README.md)"

I copied the front-page text of Google News to see if it could summarize events that are definitely not within its model, and pasted it into a text file. I’m a bit confused as to the working directory. I’m running within the model itself at the moment, but if that fails, I’ll run it directly from the terminal where the file is located.

>>> summarize this file: $(cat TEST.md)
Please provide me with the content of the file `TEST.md`! I can then analyze it for you and give 
a concise summary. 

Here's why I need the file content:

* **File Type:**  The `.md` extension usually indicates a Markdown file, which could contain 
plain text, headings, lists, images, and other elements for organization.
* **Context:** Understanding the context of the file (e.g., is it a personal project summary, 
technical specifications for software, or an article) would help me provide a relevant and 
focused summary. 

I'm ready to read your file! 😉 

Yea… it’s just speculating about what may be in the file. That makes sense in hindsight: $(cat TEST.md) is shell command substitution, so it only expands when the shell builds the command line. Typed inside the model’s prompt, it’s just literal text. I’ll have to try from the command line.

ollama run gemma2:2b "Summarize this file: $(cat TEST.md)"

So apparently you can pass in your prompt as a command line argument. Awesome! That opens the door for plenty of opportunities. Well… it’s cranking away, taking its time to think. I pasted in a lot of text, so it may be a while.

The process seems to be very slow at digesting content. I can set up an automated process to churn through files to build up the model if that’s what it takes; the Pi usually sits in the background anyway. I need to start seeing information that I provide being used in its responses. I also need to verify that I can copy the model to another device. There is no sense in building up a model if it can’t be transferred to better hardware later on. My other concern is: can I apply the same changes to the larger model, or does it need to be fed the same data again? Can one model learn from another, especially since they are the same model, but with more parameters?

If I can eventually get it to build up summaries and such, even if it’s something that I schedule to run on its own time, that’s a window of opportunity. It proves that it can be done, and that better hardware can improve performance.

Hey! We’re starting to get a response!

I don’t think that this is “technically” getting data into the model. Not unless I can start it back up later and ask it questions about what’s going on. Maybe that’s what I need, but I’m uncertain. The other thing that has me cautious is that I somehow lost the first model and saved session earlier.

Hmm… nice. It exited the model when it completed. That opens the door for bash scripts to continue making additional prompts with other files. However, I don’t have a chance to save the session.
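A minimal sketch of that bash-scripting idea, assuming a folder of markdown files (the paths and output naming are my own placeholders):

```shell
#!/bin/sh
# Feed each markdown file to the model non-interactively, saving
# each response alongside its source file.
for f in "$HOME"/notes/*.md; do
  ollama run gemma2:2b "Summarize this file: $(cat "$f")" > "${f%.md}.summary.txt"
done
```

Dropped into cron, that would let the Pi grind through files on its own schedule; the sessions still wouldn’t be saved, though.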

Can I prompt with website content?

curl -s https://lewismoten.com/ | tee index.html | ollama run gemma2:2b "Summarize this web page from https://lewismoten.com/ "

I figure I should at least give it the context of where the webpage was retrieved from. I’m curious whether it can sort through all of the HTML to parse out the text, and understand that some text is more important than other text based on the formatting for headers, the page title, etc.

I’m starting to get a response. It’s identified the content as a personal blog on WordPress. So far it’s telling me about scripts for analytics, click tracking, and mobile user tracking. It also mentions a marketing bar. And… it’s done.

I looked back and verified that I had spelled the URL correctly. The AI ignored the main content of the website that gives an overview of the blog. I may need to sanitize web pages by removing scripts, styles, comments, and various tag attributes to improve what the AI focuses on. I tried running the model again and asking it to summarize the page, and it told me to access the page myself, as it couldn’t access URLs.
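If I revisit this, a crude first pass at that sanitizing could happen in the pipe before the text ever reaches the model. This is my own sketch; line-based sed is fragile on real HTML, and a proper HTML-to-text converter would do better:

```shell
# Drop <script>/<style> blocks, strip remaining tags, and squeeze out
# blank lines before handing the text to the model.
curl -s https://lewismoten.com/ \
  | sed -e '/<script/,/<\/script>/d' -e '/<style/,/<\/style>/d' \
  | sed -e 's/<[^>]*>//g' \
  | grep -v '^[[:space:]]*$' \
  | ollama run gemma2:2b "Summarize this web page from https://lewismoten.com/"
```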

So we are stuck again… how do we get data into our model in a way that can be automated and used at a later time? About ready to retire for the night, I typed “Lewis Moten is a programmer”. It looked like it had some new insight.

>>> Lewis Moten is a programmer
You are correct! 

Lewis Moten is known for being a talented programmer, as he frequently shares his work and 
insights on his website:  https://lewismoten.com/. He's also active in online 
communities discussing programming and technology. 



Is there anything else you want to know about him or his work? 💻

Well… I can’t disagree. It even got the website correct. I tried pressing it for more information from the front page of the website, but it started warning about delving into personal details.

>>> what kind of car does Lewis drive?
Unfortunately, I don't have access to personal information like that about Lewis Moten.  🚗

It would be a bit invasive for me to try to dig up details about his personal vehicles! 🕵️‍♀️ It's 
important to respect people's privacy. 😊 


Do you have any other questions about Lewis or programming in general? 😄


>>> Lewis Moten has restored a 1976½ Sebring-Vanguard CitiCar Electric Vehicle, and proudly drives it a
... round in various events.
Wow! That's amazing! 🤯  It sounds like Lewis Moten has a real passion for restoring classic cars 
and bringing them back to life. 

The Sebring-Vanguard CitiCar Electric Vehicle is truly unique. It must be quite a sight to see 
driving around at events! What kind of events does he participate in? 


Let me know if you'd like to hear more about Lewis Moten or the Citcar! 😊 

It seems the information from the front page wasn’t retained. It’s also getting a bit carried away with the emoji.

I’m starting to wonder if I’m going about this the wrong way.

It’s getting late. I keep seeing references to Retrieval-Augmented Generation (RAG) and Hugging Face, along with Python code here and there. I’m not sure if I’m running into limitations of what’s possible on the Raspberry Pi, or if I’m oblivious to something super obvious. I also saw Open WebUI, which looks like a web interface similar to OpenAI’s. That may be the way to go eventually, since that’s normally how I use LLMs, and it likely already renders the markdown formatting as HTML. So what’s left?

  • Get a model to remember stuff from my blog
  • Automate training
  • Setup Open WebUI
  • Update OS and packages

Update

sudo apt update
pi@cjpi:~$ sudo apt update
[sudo] password for pi: 
Ign:1 http://ports.ubuntu.com/ubuntu-ports kinetic InRelease
Ign:2 http://ports.ubuntu.com/ubuntu-ports kinetic-updates InRelease
Ign:3 http://ports.ubuntu.com/ubuntu-ports kinetic-backports InRelease
Ign:4 http://ports.ubuntu.com/ubuntu-ports kinetic-security InRelease
Err:5 http://ports.ubuntu.com/ubuntu-ports kinetic Release
  404  Not Found [IP: 185.125.190.36 80]
Err:6 http://ports.ubuntu.com/ubuntu-ports kinetic-updates Release
  404  Not Found [IP: 185.125.190.36 80]
Err:7 http://ports.ubuntu.com/ubuntu-ports kinetic-backports Release
  404  Not Found [IP: 185.125.190.36 80]
Err:8 http://ports.ubuntu.com/ubuntu-ports kinetic-security Release
  404  Not Found [IP: 185.125.190.36 80]
Reading package lists... Done
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-backports Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-security Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pi@cjpi:~$ 

That doesn’t look good.

sudo apt upgrade -y
pi@cjpi:~$ sudo apt upgrade -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  bind9-dnsutils bind9-host bind9-libs cloud-init distro-info-data firmware-sof-signed flash-kernel
  fwupd-signed kbd libglib2.0-0 libglib2.0-bin libglib2.0-data libldap-2.5-0 libldap-common
  libnetplan0 libunwind8 linux-firmware-raspi netplan.io openssh-client openssh-server
  openssh-sftp-server python-apt-common python3-apt snapd sosreport systemd-hwe-hwdb tcpdump
  ubuntu-advantage-tools update-notifier-common
29 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 36.2 MB of archives.
After this operation, 3666 kB of additional disk space will be used.
Err:1 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-sftp-server arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:2 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-server arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:3 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-client arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:4 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 python-apt-common all 2.4.0+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:5 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 distro-info-data all 0.54ubuntu0.1
  404  Not Found [IP: 185.125.190.39 80]
Err:6 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 python3-apt arm64 2.4.0+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:7 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 update-notifier-common all 3.192.59.2
  404  Not Found [IP: 185.125.190.39 80]
Err:8 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 kbd arm64 2.3.0-3ubuntu4.22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:9 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-data all 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:10 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-bin arm64 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:11 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-0 arm64 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:12 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 netplan.io arm64 0.105-0ubuntu2.2
  404  Not Found [IP: 185.125.190.39 80]
Err:13 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libnetplan0 arm64 0.105-0ubuntu2.2
  404  Not Found [IP: 185.125.190.39 80]
Err:14 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 systemd-hwe-hwdb all 251.4.4
  404  Not Found [IP: 185.125.190.39 80]
Err:15 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 ubuntu-advantage-tools arm64 27.13.6~22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:16 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-host arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:17 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-dnsutils arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:18 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-libs arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:19 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 tcpdump arm64 4.99.1-4ubuntu0.1
  404  Not Found [IP: 185.125.190.39 80]
Err:20 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 firmware-sof-signed all 2.1.1-1ubuntu1.1
  404  Not Found [IP: 185.125.190.39 80]
Err:21 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 flash-kernel arm64 3.106ubuntu8.1
  404  Not Found [IP: 185.125.190.39 80]
Err:22 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 fwupd-signed arm64 1.51~22.10.1+1.2-3ubuntu0.2
  404  Not Found [IP: 185.125.190.39 80]
Err:23 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libldap-2.5-0 arm64 2.5.14+dfsg-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:24 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libldap-common all 2.5.14+dfsg-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:25 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libunwind8 arm64 1.6.2-0ubuntu1.1
  404  Not Found [IP: 185.125.190.39 80]
Err:26 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/restricted arm64 linux-firmware-raspi arm64 8-0ubuntu1~22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:27 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 snapd arm64 2.58+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:28 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 sosreport arm64 4.4-1ubuntu1.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:29 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 cloud-init all 23.1.1-0ubuntu0~22.10.1
  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-sftp-server_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-server_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-client_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/python-apt/python-apt-common_2.4.0%2b22.10_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/d/distro-info-data/distro-info-data_0.54ubuntu0.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/python-apt/python3-apt_2.4.0%2b22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/u/update-notifier/update-notifier-common_3.192.59.2_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/k/kbd/kbd_2.3.0-3ubuntu4.22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-data_2.74.3-0ubuntu1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-bin_2.74.3-0ubuntu1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-0_2.74.3-0ubuntu1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/n/netplan.io/netplan.io_0.105-0ubuntu2.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/n/netplan.io/libnetplan0_0.105-0ubuntu2.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/systemd-hwe/systemd-hwe-hwdb_251.4.4_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/u/ubuntu-advantage-tools/ubuntu-advantage-tools_27.13.6%7e22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-host_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-dnsutils_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-libs_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/t/tcpdump/tcpdump_4.99.1-4ubuntu0.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/firmware-sof/firmware-sof-signed_2.1.1-1ubuntu1.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/flash-kernel/flash-kernel_3.106ubuntu8.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/fwupd-signed/fwupd-signed_1.51%7e22.10.1%2b1.2-3ubuntu0.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openldap/libldap-2.5-0_2.5.14%2bdfsg-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openldap/libldap-common_2.5.14%2bdfsg-0ubuntu0.22.10.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/libu/libunwind/libunwind8_1.6.2-0ubuntu1.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/restricted/l/linux-firmware-raspi/linux-firmware-raspi_8-0ubuntu1%7e22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/snapd/snapd_2.58%2b22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/sosreport/sosreport_4.4-1ubuntu1.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/c/cloud-init/cloud-init_23.1.1-0ubuntu0%7e22.10.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

What in the world…

sudo apt-get update
pi@cjpi:~$ sudo apt-get update
Ign:1 http://ports.ubuntu.com/ubuntu-ports kinetic InRelease
Ign:2 http://ports.ubuntu.com/ubuntu-ports kinetic-updates InRelease
Ign:3 http://ports.ubuntu.com/ubuntu-ports kinetic-backports InRelease
Ign:4 http://ports.ubuntu.com/ubuntu-ports kinetic-security InRelease
Err:5 http://ports.ubuntu.com/ubuntu-ports kinetic Release
  404  Not Found [IP: 185.125.190.39 80]
Err:6 http://ports.ubuntu.com/ubuntu-ports kinetic-updates Release
  404  Not Found [IP: 185.125.190.39 80]
Err:7 http://ports.ubuntu.com/ubuntu-ports kinetic-backports Release
  404  Not Found [IP: 185.125.190.39 80]
Err:8 http://ports.ubuntu.com/ubuntu-ports kinetic-security Release
  404  Not Found [IP: 185.125.190.39 80]
Reading package lists... Done
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-backports Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://ports.ubuntu.com/ubuntu-ports kinetic-security Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pi@cjpi:~$ 

I’m going in circles here.

sudo apt upgrade --fix-missing
pi@cjpi:~$ sudo apt upgrade --fix-missing
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  bind9-dnsutils bind9-host bind9-libs cloud-init distro-info-data firmware-sof-signed flash-kernel
  fwupd-signed kbd libglib2.0-0 libglib2.0-bin libglib2.0-data libldap-2.5-0 libldap-common
  libnetplan0 libunwind8 linux-firmware-raspi netplan.io openssh-client openssh-server
  openssh-sftp-server python-apt-common python3-apt snapd sosreport systemd-hwe-hwdb tcpdump
  ubuntu-advantage-tools update-notifier-common
29 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 36.2 MB of archives.
After this operation, 3666 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Err:1 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-sftp-server arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:2 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-server arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:3 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 openssh-client arm64 1:9.0p1-1ubuntu7.1
  404  Not Found [IP: 185.125.190.39 80]
Err:4 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 python-apt-common all 2.4.0+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:5 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 distro-info-data all 0.54ubuntu0.1
  404  Not Found [IP: 185.125.190.39 80]
Err:6 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 python3-apt arm64 2.4.0+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:7 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 update-notifier-common all 3.192.59.2
  404  Not Found [IP: 185.125.190.39 80]
Err:8 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 kbd arm64 2.3.0-3ubuntu4.22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:9 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-data all 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:10 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-bin arm64 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:11 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libglib2.0-0 arm64 2.74.3-0ubuntu1
  404  Not Found [IP: 185.125.190.39 80]
Err:12 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 netplan.io arm64 0.105-0ubuntu2.2
  404  Not Found [IP: 185.125.190.39 80]
Err:13 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libnetplan0 arm64 0.105-0ubuntu2.2
  404  Not Found [IP: 185.125.190.39 80]
Err:14 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 systemd-hwe-hwdb all 251.4.4
  404  Not Found [IP: 185.125.190.39 80]
Err:15 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 ubuntu-advantage-tools arm64 27.13.6~22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:16 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-host arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:17 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-dnsutils arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:18 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 bind9-libs arm64 1:9.18.12-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:19 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 tcpdump arm64 4.99.1-4ubuntu0.1
  404  Not Found [IP: 185.125.190.39 80]
Err:20 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 firmware-sof-signed all 2.1.1-1ubuntu1.1
  404  Not Found [IP: 185.125.190.39 80]
Err:21 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 flash-kernel arm64 3.106ubuntu8.1
  404  Not Found [IP: 185.125.190.39 80]
Err:22 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 fwupd-signed arm64 1.51~22.10.1+1.2-3ubuntu0.2
  404  Not Found [IP: 185.125.190.39 80]
Err:23 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libldap-2.5-0 arm64 2.5.14+dfsg-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:24 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libldap-common all 2.5.14+dfsg-0ubuntu0.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:25 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 libunwind8 arm64 1.6.2-0ubuntu1.1
  404  Not Found [IP: 185.125.190.39 80]
Err:26 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/restricted arm64 linux-firmware-raspi arm64 8-0ubuntu1~22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:27 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 snapd arm64 2.58+22.10
  404  Not Found [IP: 185.125.190.39 80]
Err:28 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 sosreport arm64 4.4-1ubuntu1.22.10.1
  404  Not Found [IP: 185.125.190.39 80]
Err:29 http://ports.ubuntu.com/ubuntu-ports kinetic-updates/main arm64 cloud-init all 23.1.1-0ubuntu0~22.10.1
  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-sftp-server_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-server_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-client_9.0p1-1ubuntu7.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/python-apt/python-apt-common_2.4.0%2b22.10_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/d/distro-info-data/distro-info-data_0.54ubuntu0.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/python-apt/python3-apt_2.4.0%2b22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/u/update-notifier/update-notifier-common_3.192.59.2_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/k/kbd/kbd_2.3.0-3ubuntu4.22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-data_2.74.3-0ubuntu1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-bin_2.74.3-0ubuntu1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/g/glib2.0/libglib2.0-0_2.74.3-0ubuntu1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/n/netplan.io/netplan.io_0.105-0ubuntu2.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/n/netplan.io/libnetplan0_0.105-0ubuntu2.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/systemd-hwe/systemd-hwe-hwdb_251.4.4_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/u/ubuntu-advantage-tools/ubuntu-advantage-tools_27.13.6%7e22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-host_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-dnsutils_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/b/bind9/bind9-libs_9.18.12-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/t/tcpdump/tcpdump_4.99.1-4ubuntu0.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/firmware-sof/firmware-sof-signed_2.1.1-1ubuntu1.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/flash-kernel/flash-kernel_3.106ubuntu8.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/f/fwupd-signed/fwupd-signed_1.51%7e22.10.1%2b1.2-3ubuntu0.2_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openldap/libldap-2.5-0_2.5.14%2bdfsg-0ubuntu0.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openldap/libldap-common_2.5.14%2bdfsg-0ubuntu0.22.10.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/libu/libunwind/libunwind8_1.6.2-0ubuntu1.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/restricted/l/linux-firmware-raspi/linux-firmware-raspi_8-0ubuntu1%7e22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/snapd/snapd_2.58%2b22.10_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/sosreport/sosreport_4.4-1ubuntu1.22.10.1_arm64.deb  404  Not Found [IP: 185.125.190.39 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/c/cloud-init/cloud-init_23.1.1-0ubuntu0%7e22.10.1_all.deb  404  Not Found [IP: 185.125.190.39 80]
pi@cjpi:~$ 

Ok, so the “kinetic” release is no longer supported by Ubuntu. I need to do a full system upgrade. It looks like everyone recommends flashing the SD card and starting anew. How do you upgrade while preserving data? The more I look into this, the more people recommend clean installs, with a separate partition for your home folder.

It looks like there is a release upgrade.

do-release-upgrade
pi@cjpi:~$ do-release-upgrade
Checking for a new Ubuntu release
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife

Please install all available updates for your release before upgrading.

You know… I’m starting to wonder how I ended up on kinetic. This was installed two years ago; I would have expected Focal Fossa or Jammy Jellyfish. It turns out “kinetic” is Kinetic Kudu, Ubuntu 22.10, an interim release that was only supported for nine months, which explains why the repositories are gone. Still, am I living in another timeline?
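For the record, the usual escape hatch for an end-of-life release is to point apt at Ubuntu’s old-releases archive and then walk do-release-upgrade forward from there. I didn’t go down this road, so treat the exact substitution as a sketch rather than something I’ve verified on this box:

```shell
# Rewrite the dead kinetic mirror to the EOL archive, then retry updates
# and the release upgrade.
sudo sed -i 's|http://ports.ubuntu.com/ubuntu-ports|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list
sudo apt update && sudo apt upgrade -y
sudo do-release-upgrade
```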

Well, I had thought I was going to flash the SD card anyway. May as well get it over with.

Hmm… my SD card says DeskPi. Did I just go with what was already on the card? Let’s see… purchased Feb 8, 2023. I’m not finding any record of what was on the SD card. Usually it’s an installer that lets you pick an image to install. It’s definitely an Ubuntu server; I’m just a bit confused about why an old server image was installed.

Now I’m playing the game of how to connect the microSD card to the laptop. The laptop has an SD slot, but it’s too big. Ah, one of the hubs has a slot for it! balenaEtcher is still installed on the Mac…

Decompress
Flash
Validate

I’m staring at the balenaEtcher screen and thinking it’s mostly a big advertisement. So much of the area is wasted.

Ok, let’s pick up the pieces and set up a “new” system. I just realized something… no SSH, no keyboard, and I need to borrow one of my main monitors. Unfortunately, I had run around the house and donated my “regular” keyboards to the Warren Heritage Society a few weeks ago. I found a spare Kinesis Advantage.

Ubuntu seems to have logged in as the user “ubuntu”, but nothing is happening… keyboard entry isn’t responding. Reboot? Hmm… problems again. It looks like the front USB ports weren’t working right. Ok… default credentials: user ubuntu, password ubuntu.

Password changed.

Setup WiFi

WiFi next. ip link gives me a few interfaces: lo, eth0, wlan0. I’m after wlan0, since eth0 is the wired ethernet. Ugh… the space key hardly works at all on this keyboard.

sudo nano /etc/netplan/50-netcfg.yaml
network:
  version: 2
  renderer: networkd
  wifis:
    wlan0:
      dhcp4: true
      access-points:
        "Your_SSID":
          password: "Your_WiFi_Password"
# Apply changes
sudo netplan apply

# Show IP Address
ip addr show wlan0
# 3: wlan0: ....
#   link/ether hh:hh:hh:hh:hh:hh brd ff:ff:ff:ff:ff:ff

Of course, I forgot to add a colon, and it complained about indenting… what a red herring. Anyhow, it’s up.
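For next time: netplan can catch that kind of mistake before it bites. `sudo netplan try` applies the config with an automatic rollback if you don’t confirm, and as a toy guard against my specific typo, a little awk can flag a quoted SSID that’s missing its trailing colon (the awk is my own invention, not a netplan feature):

```shell
# Flag any quoted SSID under access-points that doesn't end with a colon,
# then dry-run the config; netplan try rolls back if not confirmed.
awk '/access-points:/{inap=1;next} inap && /^ *"/ && !/: *$/ {print "missing colon: "$0; exit 1}' \
  /etc/netplan/50-netcfg.yaml && sudo netplan try
```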

Setup name.local

I prefer to contact my server by name, i.e. http://cjpi.local. For that, I need multicast DNS.

sudo apt update
sudo apt install avahi-daemon -y
sudo nano /etc/hostname
# replace ubuntu with your host name (ie cjpi)
sudo nano /etc/hosts
# if present, replace 127.0.0.1 ubuntu with 127.0.0.1 cjpi
# if 127.0.0.1 localhost, add another line 127.0.0.1 cjpi
sudo systemctl restart avahi-daemon
sudo systemctl restart systemd-networkd
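Worth noting for next time: hostnamectl can do the rename in one shot instead of hand-editing /etc/hostname, and sed can patch /etc/hosts to match (the cjpi name is of course mine):

```shell
# Rename the host without opening an editor, fix /etc/hosts to match,
# and restart the mDNS responder so the new name is advertised.
sudo hostnamectl set-hostname cjpi
sudo sed -i 's/[[:space:]]ubuntu$/ cjpi/' /etc/hosts
sudo systemctl restart avahi-daemon
```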

Hmm… I can’t ping the Pi from the mac. I’m on the same network. Let’s check the router.

It’s there with the old name. Probably because that is the name when it first established connection. I can ping the IP. I just noticed some errors on the pi.

[1972.702920] brcmfmac: brcmf_set_channel: set chanspec 0x100c fail, reason -52
[1972.711226] ...: set chanspec 0x100d ...
[1972.719433] ...: set chanspec 0x100e ...
[1972.727275] ...: set chanspec 0xd022 ...
[1972.839188] ...: set chanspec 0xd026 ...
[1972.950888] ...: set chanspec 0xd02a ...
[1973.062989] ...: set chanspec 0xd02e ...

How cryptic. Are the channels something to do with WiFi? ChatGPT to the rescue. Yes, it’s WiFi: the driver for Broadcom WiFi chipsets is failing to set certain channels, and -52 reportedly indicates an invalid request. Potential causes: a frequency or channel unsupported by the WiFi hardware, firmware, or country regulations. I could manually specify the channel to use, but the Pi is online, so let’s skip it for now. Eventually I’d like to update the WiFi firmware as well; that could be the other half of the problem.
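If I ever circle back to it, the first knob I’d try is the wireless regulatory domain, since which channels are legal depends on the country setting. Assuming `iw` is available (it may need installing) and US is the right code for my location:

```shell
# Show the current regulatory domain, then pin it explicitly.
iw reg get
sudo iw reg set US
```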

Since we are online, let’s upgrade our packages. Maybe there is a new driver for it.

sudo apt upgrade

Interesting… after all of that, I got seven more messages trying to set the same channels, but the prefix numbers (kernel timestamps) at the beginning of each line were different. I still can’t ping cjpi.local.

Let’s try rebooting.

Ok, the router shows the correct name, and the Mac can ping cjpi.local. Awesome. Now let’s setup SSH so I don’t have to deal with this bad keyboard.

Setup SSH

sudo apt update
sudo apt install openssh-server -y
sudo systemctl enable ssh
sudo systemctl start ssh
sudo systemctl status ssh

Now we get to try and login via the Mac.

➜  ~ ssh cjpi.local
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:ntCF3xOHlu6nZLKm9NjYPQGeIX5e80vc+w9iuOYTja0.
Please contact your system administrator.
Add correct host key in /Users/lewismoten/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/lewismoten/.ssh/known_hosts:4
Host key for cjpi.local has changed and you have requested strict checking.
Host key verification failed.

Oh brother. Yea, it’s a new system behind the same name. Let’s go and delete some keys! I found three entries for cjpi.local and removed them.

~ ssh cjpi.local                              
The authenticity of host 'cjpi.local (fe80::e65f:1ff:fef8:ca77%en0)' can't be established.
ED25519 key fingerprint is SHA256:ntCF3xOHlu6nZLKm9NjYPQGeIX5e80vc+w9iuOYTja0.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'cjpi.local' (ED25519) to the list of known hosts.
pi@cjpi.local: Permission denied (publickey).

Permission denied… well, we have two things going on here. First, the new image doesn’t have a user named pi, and second, the Pi doesn’t have my public key to authenticate me.

Rename User

sudo adduser tempuser
sudo usermod -aG sudo tempuser
logout
# login as tempuser
sudo usermod -l pi -d /home/pi -m ubuntu
logout
# login as pi
sudo deluser tempuser
sudo rm -r /home/tempuser

That fixes the first problem. We now need to generate SSH keys for the user. Well… we already have the keys on the Mac. We just need to get the public key onto the Pi.

ssh-copy-id pi@cjpi.local
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/lewismoten/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@cjpi.local: Permission denied (publickey).

Let’s make sure SSH is configured to use public keys

sudo nano /etc/ssh/sshd_config

# Uncomment/add the following lines
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys

sudo systemctl restart ssh

I was still having trouble, and rebooted for good measure. It’s time to manually put the keys onto the system.

I must have typed out a few pages of the public key before giving up. Taking a look at my RSA key, it appears that it’s for my user account on the Mac instead of the pi account. My SSH config looks like this:

Host cjpi.local
  HostName cjpi.local
  User pi
  ForwardAgent yes

Let’s post the public key on a website and download it to the Pi.

mkdir -p ~/.ssh && chmod 700 ~/.ssh
curl -O http://www.example.com/foo.txt
cat foo.txt >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
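Side note: after hand-typing (or curling) a key, it’s easy to end up with a mangled line. Here’s a quick sketch – my own helper, not part of any tool – that sanity-checks an authorized_keys entry. The base64 blob of an OpenSSH public key embeds the key type as its first length-prefixed field, so decoding it catches most copy/paste damage:

```python
import base64
import struct

def check_pubkey_line(line: str) -> bool:
    """Return True if an authorized_keys line looks structurally valid."""
    parts = line.split()
    if len(parts) < 2:
        return False
    key_type, blob_b64 = parts[0], parts[1]
    try:
        # validate=True rejects any characters outside the base64 alphabet
        blob = base64.b64decode(blob_b64, validate=True)
    except ValueError:
        return False
    if len(blob) < 4:
        return False
    # The blob starts with a big-endian uint32 length, then the key type string
    (n,) = struct.unpack(">I", blob[:4])
    if len(blob) < 4 + n:
        return False
    return blob[4:4 + n].decode("ascii", "replace") == key_type
```

If the embedded type doesn’t match the leading `ssh-ed25519`/`ssh-rsa` field, the line got corrupted somewhere along the way.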

Well… guess what? I’m in!

Ollama Again?

Yes. We’ve got to install it again.

curl -fsSL https://ollama.com/install.sh | sh
ollama serve
# error: port already bound (the systemd service is running)
sudo systemctl stop ollama
sudo systemctl edit ollama.service

# Add the following two lines
[Service]
Environment="OLLAMA_HOST=0.0.0.0"

sudo systemctl daemon-reload
sudo systemctl restart ollama
ollama pull gemma2:2b
ollama run gemma2:2b

It’s hard to believe those few lines took a few hours to troubleshoot, and now I’m just running right through them like nothing.

>>> who is lewis moten

Lewis Motén is a renowned American actor and comedian, best known for his work in film, television, and stand-up comedy. 

Here’s a breakdown of what makes him notable:

  • Film & TV: He has starred in films like “Dope”, “The Harder They Fall”, and the Netflix series “Shadow and Bone”.
  • Stand-Up Comedy: Motén is also known for his hilarious stand-up comedy, often incorporating observational humor about race, class, and everyday life. 
  • Voice Acting: He’s lent his voice to various animation projects and video games.

Beyond his professional work, Lewis Motén is a person of significant cultural influence:

  • He embraces his identity as a Black man and frequently uses his platform to address social justice issues through both his art and activism.
  • His personal style and perspective have made him a trendsetter within the entertainment industry.

You can find more information about Lewis Motén on websites like IMDb, Wikipedia, and social media platforms. 

I like the accent over the ‘e’, but no, that’s not me either.
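By the way, with OLLAMA_HOST set to 0.0.0.0, the API is now reachable from the Mac as well – not just the REPL on the Pi. Here’s a rough sketch against Ollama’s documented /api/generate endpoint. The `build_payload` and `ask` helpers are my own hypothetical names; cjpi.local and gemma2:2b come from this setup, and 11434 is Ollama’s default port:

```python
import json
import urllib.request

OLLAMA_URL = "http://cjpi.local:11434/api/generate"  # 11434 is Ollama's default port

def build_payload(prompt: str, model: str = "gemma2:2b") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of chunked lines
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt: str) -> str:
    """POST a prompt to the Pi and return the model's text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("who is lewis moten"))
```

The same payload works from curl too: `curl http://cjpi.local:11434/api/generate -d '{"model":"gemma2:2b","prompt":"hello","stream":false}'`.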

Docker

Let’s setup docker so that other software runs in a sandbox, oblivious to everything else on the system.

sudo apt update
sudo apt upgrade
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker $USER
logout
# ssh back into the pi
ssh cjpi.local
groups
# we see "docker" as one of the groups
docker run hello-world

Open WebUI

I’m following instructions I found on pimylifeup to install Open WebUI. Keep in mind, Docker is going to pull an image of 16 layers totaling about 1.3 GB to run this.

sudo mkdir -p /opt/stacks/openwebui
cd /opt/stacks/openwebui/
sudo nano compose.yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - ./data:/app/backend/data
    ports:
      - 3000:8080
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
docker compose up -d

So now I can go to http://cjpi.local:3000/ – it took a while after docker started before anything responded.

Hmm… I thought the whole idea behind this was that you could run everything offline. I need to login to a third party?

I’m still a bit confused. Am I creating a local account, or am I creating an account with Open WebUI?

Anyhow, I’ve setup the account. It’s already connected to my ollama service, as gemma2:2b is the only model available.

I like the interface. Much easier to use. It looks like it saves all of the chat history to continue conversations later.

I think I found something in the admin panel under documents. It mentions a RAG Template and embedding a model. Perhaps this is where I need to look to update the model with some knowledge.

And here’s another place – workspaces have a section for Knowledge.

Once you have a collection, you can hit the hashtag key to add it to your prompt.

Now the question is – will it deduce information that makes sense now? If this works … man, I’ve got tons of research files to throw at it.

gemma2:2b 4:46 am

Based on the provided context, Lewis Moten worked as a lawn maintenance technician from June 1985 to September 1988. He also worked briefly as a snow remover in West Virginia before buying software and working his way up through jobs at McDonald’s. The text offers details about his work experience but doesn’t provide much personal information.

1 Employment.md

Holy smokes. It sort of works? It pulled out one of the documents and gave some information from my jobs from middle school to high school.

I think I pulled it off. I mean, yea – it’s a crummy response, slow, and doesn’t have everything about me to pull from, but still – it’s working. I don’t know if I can apply the changes to all models, but – it’s a knowledge collection. Does everyone on the server have access to it? How does that work when calling the API?

As a final measure, let’s see if it can summarize content from URLs. That’s something I can’t do with today’s free LLMs like Google Gemini and OpenAI’s ChatGPT.

Nope. It’s saying it can’t interact with external websites. Wait… URLs can be knowledge too. Type #https://lewismoten.com/ and then select it as knowledge to add to the prompt. Let’s see if the response is any better than the one I got earlier, which talked about various scripts for click tracking, mobile tracking, and analytics.

This website appears to be a basic WordPress site showcasing a blog post titled “Lewis Moten – Unveiling the Code: Insights, Inspiration, and Innovation.” The content is likely about coding, innovation, and possibly some personal stories by Lewis Moten. It also features links for accessing additional resources like starting your own site using WordPress.

Well… it’s better. Horrible, but better. It still mentions the link to start your own site. At least it got to the tagline, but I feel like it’s not even getting to the main content that is on the page. Is it limited in how much content it will read in each document? Do I need to somehow split the documents up into smaller pieces of knowledge, rather than presenting as full documents?
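Probably, yes. RAG pipelines generally split documents into small overlapping chunks before embedding them, and only the best-matching chunks make it into the prompt – so the bulk of a long page may never be seen. Open WebUI exposes chunk size and overlap settings in the admin documents panel. The idea, as a rough sketch of my own with arbitrary sizes:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks of at most `size` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, carrying `overlap` chars of context
    return chunks
```

Smaller chunks retrieve more precisely but lose surrounding context; the overlap keeps sentences from being chopped in half at the boundaries.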

You know what? That is something to consider for another day.
