Importing data manually into a Longhorn volume

Published on April 09, 2024

I was in the process of migrating Shiori from my docker environment to the new k3s cluster I'm setting up. Shiori is a bookmarks manager that uses an SQLite database and a folder to store the bookmark data. I didn't want to switch database engines just yet, since I want to improve SQLite's performance first, so I decided to move the data directly to a Longhorn volume.

This is probably super simple and widely known, but it wasn't clear to me at first. I'm posting it here for future reference and for anyone who might find it useful.

Considering that I already have the data from the docker volume in a tar.gz file, exported with the correct hierarchy, the migration process is way simpler than I anticipated. I just need to create the Longhorn volume and the volume claim, create a pod that has access to the volume, and pipe the data into the pod at the appropriate location.
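In case you don't have that archive yet, one way to produce it is a throwaway container that mounts the docker volume; this is a sketch assuming the volume is named shiori_data (adjust to your own volume name):

```shell
# Mount the docker volume read-only and write the archive to the
# current directory. "shiori_data" is an assumed volume name.
docker run --rm \
  -v shiori_data:/data:ro \
  -v "$PWD":/backup \
  alpine tar czvf /backup/shiori_data.tar.gz -C /data .
```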

First, create your volume in whatever way you prefer. You can apply the YAML directly or use the Longhorn UI to create the volume. I created mine using the UI beforehand.
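If you'd rather go the YAML route, a claim roughly equivalent to what the UI creates could look like this (the storage size and storage class name here are assumptions; adjust them to your setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shiori-data
  namespace: shiori
spec:
  accessModes:
    - ReadWriteOnce
  # "longhorn" is the default storage class name installed by Longhorn
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```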

With the volume and volume claim (named shiori-data) created, I'm going to create a pod that has access to the volume via the volume claim. I'm going to use the same shiori image that the final pod will use, since I'm lucky enough to have the tar command in there. If you don't have it, you can use a different image that bundles tar.

apiVersion: v1
kind: Pod
metadata:
  name: shiori-import-pod
  namespace: shiori
spec:
  securityContext:
    # In my personal case, I need to specify user, group and filesystem group
    # to match the longhorn volume with the docker image specification.
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shiori-data
  containers:
    - name: shiori
      image: ghcr.io/go-shiori/shiori:latest
      volumeMounts:
        - mountPath: "/tmp/shiori-data"
          name: data

With the pod running, I can copy the data into the volume by piping it into an exec call and unpacking it with tar on the fly:

cat shiori_data.tar.gz | kubectl exec -i -n shiori shiori-import-pod -- tar xzvf - -C /tmp/shiori-data/
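To sanity-check that the files landed where expected, you can list the mount path from outside the pod (using the shiori namespace and mount path from the manifest above):

```shell
# List the extracted files and their ownership; with the securityContext
# above they should belong to uid/gid 1000.
kubectl exec -n shiori shiori-import-pod -- ls -la /tmp/shiori-data/
```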

Note: I tried using kubectl cp first to copy the file into the pod -since it internally uses the same approach-, but I ran into some issues, apparently due to different tar versions on my host machine and in the destination pod, so I went with the pipe approach instead and it worked. The result should be the same.

With the data copied into the volume, I can now delete the import pod and deploy the application using the appropriate volume claim. In my case, I just need to change the mountPath in the deployment's container spec to the path where the application expects the data to be.
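For reference, the relevant fragment of the final deployment's pod spec could look like the sketch below. The /shiori mount path is an assumption for illustration; use whatever directory your image actually reads its data from:

```yaml
# Fragment of the Deployment pod template: same claim as the import pod,
# but mounted where the app expects it ("/shiori" is an assumed path).
containers:
  - name: shiori
    image: ghcr.io/go-shiori/shiori:latest
    volumeMounts:
      - mountPath: "/shiori"
        name: data
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shiori-data
```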

I don't know why I expected this to be harder than it really is, but I am happy that I was able to migrate everything in less than an hour.

If you want to approach me directly about this post use the most appropriate channel from the about page.