 scratch.txt                                        |  30 +-
 src/getting-started-maas.txt                       |  35 +
 src/qutebrowser-notes.txt                          |  17 +
 switching-to-lxc-lxd-for-django-dev-work-cuts.txt  |  42 +
 switching-to-lxc-lxd-for-django-dev-work.txt       | 188 +
 vagrant-custom-box.txt                             | 171 +
 6 files changed, 481 insertions(+), 2 deletions(-)
diff --git a/scratch.txt b/scratch.txt
index d344076..ab479fd 100644
--- a/scratch.txt
+++ b/scratch.txt
@@ -1,7 +1,33 @@
-I don't want you to be like me. I want you to figure out who you are, how to think your own thoughts and maybe, if you're lucky, figure out what you're supposed to be doing. One of the easiest ways to get the kind of perspective you need to figure these things out is to travel, particularly outside your own culture
+## Octavio Paz quote
+> Modern man likes to pretend that his thinking is wide-awake. But this wide-awake thinking has led us into the maze of a nightmare in which the torture chambers are endlessly repeated in the mirrors of reason. When we emerge, perhaps we will realize that we have been dreaming with our eyes open, and that the dreams of reason are intolerable. And then, perhaps, we will begin to dream once more with our eyes closed. <cite>&ndash;Octavio Paz</cite>
+
+## Stopping travel
+Full-time travelers who stop traveling, regardless of how long or why, tend to feel like they've failed somehow. Which is silly, but I'm no exception. I feel it anyway. I have been feeling it lately.
+
+I like living on the road for two main reasons. One, we spend more time outside. There is nothing so valuable as spending all day outside. Two, it satisfies a pretty basic curiosity: what does it look like around that bend? What is the view like from the other side of the hill? What does the river sound like down in that valley? What is it like to wake up in the middle of the desert? How does it feel to fall asleep in the sand listening to the sea? How does it feel sitting in the shade of a sandstone overhang where someone else sat thousands of years ago? What's the scent of an aspen forest in a downpour? How does the sandstone feel on your fingertips after the thunderstorms pass?
+
+So to answer that question everyone keeps asking me: yes I miss living in the bus. And to answer the follow up question, yes, we're going to get back to that eventually. At the moment we're in San Miguel.
+
+
+We were going to spend the winter down here, stay warm, improve our Spanish a bit and go back to the bus when it warmed up a little. Then we were going to spend spring traveling the southwest desert, see some areas of Arizona, New Mexico and Utah that we hadn't seen yet, and then head up to Wyoming, Idaho and Montana when it got hot, and spend summer at higher, cooler elevations. Good plan right? Well.
+
+When we parked the bus last year we knew that before it went much further it was going to need some work. Significant, time and money-eating work. To get to the places we want to get, we need more power and less worry. The only way I've come up with to get to that point is to either drop in a bigger engine, a 440 or the like, or rebuild the 318 to get better compression, which means boring out the engine, new pistons, maybe new manifolds, probably a new transmission and quite a few other things that are not cheap. It's all doable, but it takes time and money. There's also the possibility we could move to a different rig[^1], but that again is time and money.
+
+Time and money we don't have right now.
+I think now that true sweetness can only happen in limbo. I don't know why. Is it because we are so unsure, so tentative and waiting? Like it needs that much room, that much space to expand. The not knowing anything really, the hoping, the aching transience. This is not real, not really, and so we let it alone, let it unfold lightly. Those times that can fly. That's the way it seems now looking back.
-All of life is limits, right now we are up against some hard limits
+[^1]: I have never liked driving with a trailer, but it probably makes more sense for the way we travel. We like to set up camp and then spend a few weeks roaming an area. Certain things about trailers make them better for this, like the ability to haul out your black water and go fetch fresh water without breaking camp. The other marked advantage of the trailer and tow vehicle is that when you do need a mechanic's help, you don't lose your house. But pretty sure my family would abandon me if I tried to sell the bus.
+
+
+## Bird watching as a way to get out
+
+
+"Looking for birds, in this case, means seeing the private gardens of the brightly-colored houses in a small mountain town, with their fiery pink and orange blossoms, their mango and papaya trees, and their tangled blooming vines. Birding gets you to places you can’t otherwise go, or never thought to see. It gives you access to new foods and flavors. For example, birding gives you unparalleled access to taste rare fruits and other micro-local foods." - https://www.notesfromtheroad.com/neotropics/tapir-valley.html
+
+
+
+I don't want you to be like me. I want you to figure out who you are, how to think your own thoughts and maybe, if you're lucky, figure out what you're supposed to be doing. One of the easiest ways to get the kind of perspective you need to figure these things out is to travel, particularly outside your own culture
## Failure of materialism
diff --git a/src/getting-started-maas.txt b/src/getting-started-maas.txt
new file mode 100644
index 0000000..adc446b
--- /dev/null
+++ b/src/getting-started-maas.txt
@@ -0,0 +1,35 @@
+MAAS, which stands for *Metal As A Service*, allows you to run physical servers as if they were virtual machine instances. This means you can configure, deploy and manage bare metal servers just like you would virtual machines in a cloud environment like Amazon AWS or Microsoft Azure. MAAS gives you the management tools that have made the cloud popular, but with the additional benefits of owning your own hardware.
+
+To run MAAS you'll need a server to run the management software and at least one server that can be managed with a BMC (baseboard management controller). Canonical recommends letting the MAAS server provide DHCP and DNS for the network the managed machines are connected to. If your setup requires a different approach to DHCP, see the MAAS documentation for more details on how DHCP works in MAAS and how you can adapt it to your current setup.
+
+To install MAAS first download Ubuntu Server 18.04 LTS and follow the step-by-step installation instructions to set up Ubuntu on your MAAS server. Once you have Ubuntu 18.04 up and running, you can install MAAS. To get the latest development release of MAAS, use the PPA `maas/next`. First add the PPA, then update and install.
+
+~~~~console
+sudo add-apt-repository ppa:maas/next
+sudo apt update
+sudo apt install maas
+~~~~
+
+Once MAAS is installed, you'll need to initialize it and create an admin user.
+
+~~~~console
+sudo maas init
+~~~~
+
+The init command will ask you to create a username and password for the web-based GUI. You can optionally import your SSH keys as well.
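+
+If you want to drive MAAS from the command line as well, the same admin account works with the `maas` CLI client. A minimal sketch, assuming you named your admin user `admin` during init (swap in your own username and server address):
+
+~~~~console
+sudo maas apikey --username=admin
+maas login admin http://<your.maas.ip>:5240/MAAS/ <paste-the-api-key-here>
+~~~~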
+
+Once the installation is done you can log in to the web-based MAAS GUI by pointing your browser to http://<your.maas.ip>:5240/MAAS/.
+
+maas-01.png
+
+Once you log in to the MAAS web UI you'll be presented with the MAAS configuration panel where you can set the region name, configure a DNS forwarder for domains not managed by MAAS, and configure the images and architectures you want available for MAAS.
+
+maas-01.png
+
+Here you can accept the defaults and click continue. If you did not add your SSH keys in the init step, you'll need to upload them now. Then click "Go to Dashboard" to continue.
+
+The last step is to configure DHCP. When the MAAS Dashboard loads it will alert you that "DHCP is not enabled on any VLAN." To set up DHCP click the "Subnets" menu item and then click the VLAN where you want to enable DHCP.
+
+This will bring up a new page where you can configure your DHCP subnet, start and end IP addresses, and Gateway IP. You can also decide how MAAS handles DHCP, whether directly from the rack controller or relayed to another VLAN. If you don't want MAAS to manage DHCP you can disable it here.
+
+To set up your first MAAS instances with MAAS handling DHCP, click the "Configure MAAS-managed DHCP" button.
diff --git a/src/qutebrowser-notes.txt b/src/qutebrowser-notes.txt
new file mode 100644
index 0000000..f7887e6
--- /dev/null
+++ b/src/qutebrowser-notes.txt
@@ -0,0 +1,17 @@
+handy commands:
+ :download
+
+## shortcuts
+xo - open url in background tab
+go - edit current url
+gO - edit current url and open result in new tab
+gf - view source
+;y - yank hinted url
+;i - hint only images
+;b - open hint in background tab
+;d - download hinted url
+PP - Open URL from selection in new tab
+ctrl+a Increment no. in URL
+ctrl+x Decrement no. in URL
+
+Solarized theme: https://bitbucket.org/kartikynwa/dotty2hotty/src/1a9ba9b80f07e1f63b740da5e6970dc5a97f1037/qutebrowser.py?at=master&fileviewer=file-view-default
diff --git a/switching-to-lxc-lxd-for-django-dev-work-cuts.txt b/switching-to-lxc-lxd-for-django-dev-work-cuts.txt
new file mode 100644
index 0000000..5f128fe
--- /dev/null
+++ b/switching-to-lxc-lxd-for-django-dev-work-cuts.txt
@@ -0,0 +1,42 @@
+
+~~~~console
+Error: Failed to run: /usr/bin/lxd forkstart debian /var/lib/lxd/containers /var/log/lxd/debian/lxc.conf:
+Try `lxc info --show-log local:debian` for more info
+~~~~
+
+Hmmm. Nothing like an error right after init. Okay, so run that suggested command:
+
+~~~~console
+lxc info --show-log local:debian
+If this is your first time running LXD on this machine, you should also run: lxd init
+To start your first container, try: lxc launch ubuntu:18.04
+
+Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: permission denied
+~~~~
+
+After a bit of searching I figured out the permissions problem has to do with privileged vs unprivileged containers. I skipped a part in the initial setup on Arch. Out of the box on Arch you'll need to jump through a few extra hoops to run unprivileged containers, which seems odd and even backward to me because, as I understand it, that's exactly what you want to run. For now I have skipped those extra steps until I better understand them. In the meantime I used the workaround suggested in the Arch wiki, which is to append `-c security.privileged=true` to the end of the `launch` command we used a minute ago. However, I believe this defeats one of the major security benefits of containers, by, uh, containing things. So I wouldn't want to run this way permanently.
+
+~~~~console
+sudo lxc launch images:debian/stretch/amd64 debian -c security.privileged=true
+Error: Failed container creation: Create container: Add container info to the database: This container already exists
+~~~~
+
+Okay, so even though we couldn't connect in our previous effort, we did create a container with that name so we need to get rid of it first. Let's see what we have.
+
+~~~~console
+sudo lxc list
++--------+---------+------+------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++--------+---------+------+------+------------+-----------+
+| debian | STOPPED | | | PERSISTENT | |
++--------+---------+------+------+------------+-----------+
+~~~~
+
+Yup, there's our debian container. Now the question is how do you delete a container using the `lxc` command? Curiously, the command `lxc-delete`, which you'll discover if you type `man lxc`, errored out for me. After a bit of searching I found the LXD equivalent is `lxc delete`:
+
+~~~~console
+sudo lxc delete debian
+~~~~
+
+Okay, now back to our create command:
+
+~~~~console
+sudo lxc launch images:debian/stretch/amd64 debian -c security.privileged=true
+~~~~
diff --git a/switching-to-lxc-lxd-for-django-dev-work.txt b/switching-to-lxc-lxd-for-django-dev-work.txt
new file mode 100644
index 0000000..fa856d8
--- /dev/null
+++ b/switching-to-lxc-lxd-for-django-dev-work.txt
@@ -0,0 +1,188 @@
+I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it, and everyone seemed happy with it. While I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box]() to mirror the server running this site.
+
+Despite that, I've never really liked Vagrant. Using Virtualbox as a provider -- pretty much the only option when your team uses Linux, Windows, and Mac -- means running a huge VM that gobbles a ton of memory.
+
+My laptop only has 8GB of RAM. The internet being as bloated as it is, my browser is always taking about 2GB; throw in two Vagrant machines and I'm pretty much maxed out. Plus Django's dev server is painfully slow to reload any changes.
+
+Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, he nodded sagely and told me I needed to use LXD. This stuck in my head the way things sometimes do, and later that day I began to look into LXD and LXC. The more I read on the [LinuxContainers](https://linuxcontainers.org/) site, the more I liked the idea. Since I like to tinker, I dove right in and now, a few days later, there's not a Vagrant machine left on my laptop.
+
+To be fair, you can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great (and I say this as someone very familiar with Vagrant's tools), so why add another tool to the mix?[^1]
+
+LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation."
+
+Here's how I'm using it for Django development on Arch Linux. I've also included instructions for Ubuntu since I set it up there as well.
+
+### What's the difference between LXC, LXD and `lxc`?
+
+I wrote this guide in part because I've been hearing about LXC for ages -- I've even mentioned it in Ubuntu reviews when it got significant updates -- but part of what stopped me from using it is that it sounded overwhelming and confusing, too enterprisey you might say. It's really not though; in fact I found it easier to understand than Vagrant or Docker.
+
+So what is an LXC container, what's LXD, and how is either different from, say, a VM or, for that matter, Docker?
+
+* LXC - low-level tools and a library to create and manage containers; powerful, but complicated.
+* LXD - a daemon which provides a REST API to drive LXC containers; much more user-friendly.
+* `lxc` - the command line client for LXD.
+
+In LXC parlance a container is essentially a virtual machine; if you want to get pedantic, see Stéphane Graber's post on the [various terms and components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/). For the most part though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket; potato, potahto. Mostly.
+
+An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that.
+
+Now what about LXD? LXC is the low-level tool; you don't really need to go there. If you're doing massive enterprise deployments you probably want the nova-lxd OpenStack plugin, so actually, I can't really see where you'd need to interact directly with LXC. Instead you interact with your LXC containers via the LXD API. It uses YAML config files and a command line client, `lxc`.
+
+That's the basic stack, let's install it.
+
+### Install LXD
+
+On Arch I used the version of [LXD in the AUR](), but Ubuntu users should go with the Snap package. Either way you should get DNSMasq and a few other tools you'll need to handle networking between your machine and the LXC containers we'll spin up in a bit. The other thing you'll want is your distro's Btrfs or ZFS tools.
+
+Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file the way Virtualbox and others do, but as a block device. Both filesystems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container.
+
+In the end I set up my Arch machine using Btrfs and Ubuntu using ZFS to see if I could tell any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. Turtles all the way down).
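+
+On Arch the rough equivalent, assuming you build LXD from the AUR with a helper like `yay` (build it by hand if you prefer, and check the AUR page in case the package names have changed), looks something like this:
+
+~~~~console
+sudo pacman -S btrfs-progs dnsmasq
+yay -S lxd
+sudo systemctl enable --now lxd
+~~~~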
+
+Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXC with these commands:
+
+~~~~console
+apt install zfsutils-linux
+~~~~
+
+And then install the snap version of lxd with:
+
+~~~~console
+snap install lxd
+~~~~
+
+Once that's done we need to initialize lxd. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen:
+
+~~~~console
+sudo lxd init
+Would you like to use LXD clustering? (yes/no) [default=no]:
+Do you want to configure a new storage pool? (yes/no) [default=yes]:
+Name of the new storage pool [default=default]:
+Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
+Create a new BTRFS pool? (yes/no) [default=yes]:
+Would you like to use an existing block device? (yes/no) [default=no]:
+Size in GB of the new loop device (1GB minimum) [default=15GB]:
+Would you like to connect to a MAAS server? (yes/no) [default=no]:
+Would you like to create a new local network bridge? (yes/no) [default=yes]:
+What should the new bridge be called? [default=lxdbr0]:
+What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
+What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
+Would you like LXD to be available over the network? (yes/no) [default=no]:
+Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
+Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
+~~~~
+
+LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. To see all installed profiles use:
+
+~~~~console
+lxc profile list
+~~~~
+
+To view the contents of a profile use:
+
+~~~~console
+lxc profile show <profilename>
+~~~~
+
+To edit a profile use:
+
+~~~~console
+lxc profile edit <profilename>
+~~~~
+
+So far I haven't needed to edit the profile by hand.
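+
+For reference, `lxc profile show default` on my machine prints something roughly like this (your bridge name, storage pool, and LXD version will change the details):
+
+~~~~console
+lxc profile show default
+config: {}
+description: Default LXD profile
+devices:
+  eth0:
+    name: eth0
+    nictype: bridged
+    parent: lxdbr0
+    type: nic
+  root:
+    path: /
+    pool: default
+    type: disk
+name: default
+~~~~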
+
+I've also been happy with all the defaults, although when I do this again I will probably enlarge the storage pool, and maybe partition off some dedicated disk space. But for now I'm just trying to figure things out, so defaults it is. One more step in our setup: by default LXD runs as the lxd group, and to interact with containers we'll need to be part of that group.
+
+~~~~console
+sudo usermod -a -G lxd yourusername
+~~~~
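+
+The group change won't apply to your current shell; either log out and back in, or start a new shell with the group active:
+
+~~~~console
+newgrp lxd
+~~~~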
+
+#####Special note for Arch users.
+
+To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch User Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions and then reboot and everything below should work as you'd expect.
+
+### Create Your First LXC Container
+
+Okay, now let's create our first container.
+
+This website runs on a Debian VM currently hosted on Vultr.com so I'm going to spin up a Debian container to mirror this environment for local development and testing.
+
+To create a new LXC container we use the `launch` command of the `lxc` tool.
+
+Out of the box there are four ways you can get LXC containers: local (meaning a container base you've created), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily images). The images on linuxcontainers are unofficial, but the Debian image I used worked perfectly. There's also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images. Pretty much every architecture you could imagine is in there.
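+
+If you want to browse before you commit to an image, you can list the configured remotes and filter what a remote offers. A quick sketch (the exact columns in the output vary by version):
+
+~~~~console
+lxc remote list
+lxc image list images: debian
+~~~~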
+
+I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example:
+
+~~~~console
+lxc launch images:debian/stretch/amd64 debian-base
+Creating debian-base
+Starting debian-base
+~~~~
+
+That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. Now if we look at the list of installed containers we should see this:
+
+~~~~console
+lxc list
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT | |
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+~~~~
+
+Very cool, now what? This is what I love about LXC: we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell:
+
+~~~~console
+lxc exec debian-base -- /bin/bash
+~~~~
+
+Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, virtualenv, and, for this site, all the GeoDjango requirements (PostGIS, GDAL, etc.), along with a few other odds and ends.
+
+You don't have to do it from inside the container though. Part of LXD's charm is being able to run commands without logging into anything. Instead you can do this:
+
+~~~~console
+lxc exec debian-base -- apt update
+lxc exec debian-base -- apt install postgresql postgis virtualenv
+~~~~
+
+LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base --'` so that all I need to type is `luxdev <command>`.
+
+What I haven't figured out how to do is chain commands; this does not seem to work:
+
+~~~~console
+lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000
+~~~~
+
+According to a bug report, it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong.
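+
+My best guess at the culprit: everything after `--` is handed to the container as a single command plus arguments rather than run through a shell, so the `&&` chain is actually being interpreted by the shell on the host. Wrapping the whole chain in one shell invocation inside the container is the form I'd try next (untested on my end, and obviously the user and paths are specific to my setup):
+
+~~~~console
+lxc exec debian-base -- su - lxf -c "cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000"
+~~~~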
+
+One other thing that was not simple to figure out is how to get a directory on your host machine mounted in your LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user. Use the `id` command to get your user id (it's probably 1000 but if not, adjust the commands below). Once you have your user id, add it to the files with this one-liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd):
+
+~~~~console
+echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
+~~~~
+
+Then you need to configure your LXC instance to use the same uid:
+
+~~~~console
+lxc config set debian-base raw.idmap 'both 1000 1000'
+~~~~
+
+The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect.
+
+~~~~console
+lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc
+lxc stop debian-base
+lxc start debian-base
+~~~~
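+
+To double-check that the device stuck, you can ask LXD to show the container's devices, or just look for the folder from inside the container:
+
+~~~~console
+lxc config device show debian-base
+lxc exec debian-base -- ls /path/to/where/you/want/folder/in/lxc
+~~~~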
+
+#####Shoulders stood upon
+
+* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - He wrote LXC and LXD, this is the best resource I found and highly recommend reading it all.
+* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd)
+* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/)
+* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996)
+* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained)
+
+
+[^1]: Because you work with developers who use Windows would be one answer I suppose, but LXC/LXD developer Stéphane Graber has some instructions on how you can [interact with LXD from Mac and Windows](https://stgraber.org/2017/02/09/lxd-client-on-windows-and-macos/).
diff --git a/vagrant-custom-box.txt b/vagrant-custom-box.txt
new file mode 100644
index 0000000..d73019d
--- /dev/null
+++ b/vagrant-custom-box.txt
@@ -0,0 +1,171 @@
+I'm a little old fashioned with my love of Vagrant. I should probably keep up with the kids, dig into Docker and containers, but I like managing servers. I like to have the whole VM at my disposal.
+
+Why Vagrant? Well, I run Arch Linux on my laptop, but I usually deploy sites to either Debian, preferably v9, "Stretch", or (if a client is using AWS) Ubuntu, which means I need a virtual machine to develop and test in. Vagrant is the easiest way I've found to manage that workflow.
+
+When I'm deploying to Ubuntu-based machines I develop using the [Canonical-provided Vagrant box](https://app.vagrantup.com/ubuntu/boxes/bionic64) available through Vagrant's [cloud site](https://app.vagrantup.com/boxes/search). There is, however, no official Debian box provided by Debian. Worse, the most popular Debian 9 box on the Vagrant site has only 512MB of RAM. I prefer to have 1 or 2GB of RAM to mirror the cheap, but surprisingly powerful, [Vultr VPS instances](https://www.vultr.com/?ref=6825229) I generally use (You can use them too, in my experience they're faster and slightly cheaper than Digital Ocean. Here's a referral link that will get you [$50 in credit](https://www.vultr.com/?ref=7857293-4F)).
+
+That means I get to build my own Debian Vagrant box.
+
+Building a Vagrant base box from Debian 9 "Stretch" isn't hard, but most tutorials I found were outdated or relied on third-party tools like Packer. Why you'd want to install, set up and configure a tool like Packer to build one base box is a mystery to me. It's far faster to do it yourself by hand (which is not to slag Packer; it *is* useful when you're building images for AWS or Digital Ocean or another provider).
+
+Here's my guide to building a Debian 9 "Stretch" Vagrant Box.
+
+### Create a Debian 9 Virtual Machine in Virtualbox
+
+We're going to use Virtualbox as our Vagrant provider because, while I prefer qemu for its speed, I run into more compatibility issues with it. Virtualbox seems to work everywhere.
+
+First install Virtualbox, either by [downloading an image](https://www.virtualbox.org/wiki/Downloads) or, preferably, using your package manager/app store. We'll also need the latest version of Debian 9's netinst CD image, which you can [grab from the Debian project](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/) (scroll to the bottom of that page for the actual downloads).
+
+Once you've got a Debian CD, fire up Virtualbox and create a new virtual machine. In the screenshot below I've selected Expert Mode so I can go ahead and up the RAM (in the screenshot version I went with 1GB).
+
+<img src="images/2019/debian9-vagrant-base-box-virtualmachine.jpg" id="image-1859" class="picfull" />
+
+Click "Create" and Virtualbox will ask you about the hard drive, I stick with the default type, but bump the size to 40GB, which matches the VPS instances I use.
+
+<img src="images/2019/debian9-vagrant-base-box-virtualdisk.jpg" id="image-1860" class="picfull" />
+
+Click "Create" and then go to the main Virtualbox screen, select your new machine and click "Settings". Head to the audio tab and uncheck the Enable Audio option. Next go to the USB tab and disable USB.
+
+<img src="images/2019/debian9-vagrant-base-box-no-audio.jpg" id="image-1855" class="picfull" />
+<img src="images/2019/debian9-vagrant-base-box-no-usb.jpg" id="image-1856" class="picfull" />
+
+Now click the network tab and make sure Network Adapter 1 is set to NAT. Click the "Advanced" arrow and then click the button that says Port Forwarding. Add a port forwarding rule. I call mine SSH, but the name isn't important. The important part is that the protocol is TCP, the Host and Guest IP address fields are blank, the Host port is 2222, the Guest port is 22.
+
+<img src="images/2019/debian9-vagrant-base-box-port-forward_EqGwcg4.jpg" id="image-1858" class="picfull" />
+
+Hit okay to save your changes on both of those screens and now we're ready to boot Debian.
+
+### Install Debian
+
+To get Debian installed first click the start button for your new VM and Virtualbox will boot it up and ask you for the install CD. Navigate to wherever you saved the Debian netinst CD we downloaded earlier and select that.
+
+That should boot you to the Debian install screen. The most important thing here is to make sure you choose the second option, "Install", rather than "Graphical Install". Since we disabled USB, we won't have access to the mouse and the Debian graphical installer won't work. Stick with plain "Install".
+
+<img src="images/2019/debian9-vagrant-base-box-vm-install.jpg" id="image-1861" class="picfull" />
+
+From here it's just a standard Debian install. Select the appropriate language, keyboard layout, hostname (doesn't matter), and network name (also doesn't matter). Set the root password to something you'll remember. Debian will then ask you to create a user. Create a user named "vagrant" (I used "vagrant" for the fullname and username) and set the password to "vagrant".
+
+Tip: to select (or unselect) a check box in the Debian installer, hit the space bar.
+
+Then Debian will get the network time, ask what timezone you're in and start setting up the disk. I go with the defaults all the way through. Next Debian will install the base system, which takes a minute or two.
+
+Since we're using the netinst CD, Debian will ask if we want to insert any other CDs (no), and then it will ask you to choose which mirrors to download packages from. I went with the defaults. Debian will then install Linux, udev and some other basic components. At some point it will ask if you want to participate in the Debian package survey. I always go with no because I feel like a virtual machine might skew the results in unhelpful ways, but I don't know, maybe I'm wrong on that.
+
+After that you can install your software. For now I uncheck everything except standard system utils (remember, you can select and unselect items by hitting the space bar). Debian will then go off and install everything, ask if you want to install Grub (you do -- select your virtual disk as the location for grub), and congratulations, you're done installing Debian.
+
+Now let's build a Debian 9 base box for Vagrant.
+
+### Set up Debian 9 Vagrant base box
+
+Since we've gone to the trouble of building our own Debian 9 base box, we may as well customize it.
+
+The first thing to do after you boot into the new system is to install sudo and set up our vagrant user as a passwordless superuser. Log in to your new virtual machine as the root user and install sudo. You may as well add ssh while you're at it:
+
+~~~~console
+apt install sudo ssh
+~~~~
+
+Now we need to add our vagrant user to the sudoers list. To do that we need to create and edit the file:
+
+~~~~console
+visudo -f /etc/sudoers.d/vagrant
+~~~~
+
+That will open a new file where you can add this line:
+
+~~~~console
+vagrant ALL=(ALL) NOPASSWD:ALL
+~~~~
+
+Hit control-x, then "y" and return to save the file and exit nano. Now log out of the root account by typing `exit` and log in as the vagrant user. Double check that you can run commands with `sudo` without a password by typing `sudo ls /etc/` or similar. If you didn't get asked for a password then everything is working.
+
+Now we can install the vagrant insecure SSH key. Vagrant sends commands from the host machine over SSH using what the Vagrant project calls an insecure key, which is so called because everyone has it. We could, in theory, all hack each other's Vagrant boxes. If this concerns you, it's not that complicated to set up your own more secure key, but I suggest doing that in your Vagrant instance, not the base box. For the base box, use the insecure key.
+
+Make sure you're logged in as the vagrant user and then use these commands to set up the insecure SSH key:
+
+~~~~console
+mkdir ~/.ssh
+chmod 0700 ~/.ssh
+wget https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O ~/.ssh/authorized_keys
+chmod 0600 ~/.ssh/authorized_keys
+chown -R vagrant ~/.ssh
+~~~~
+
+Confirm that the key is in fact in the `authorized_keys` file by typing `cat ~/.ssh/authorized_keys`, which should print out the key for you. Now we need to set up SSH to allow our vagrant user to sign in:
+
+~~~~console
+sudo nano /etc/ssh/sshd_config
+~~~~
+
+Uncomment the line `AuthorizedKeysFile ~/.ssh/authorized_keys ~/.ssh/authorized_keys2` and hit `control-x`, `y` and `enter` to save the file. Now restart SSH with this command:
+
+~~~~console
+sudo systemctl restart ssh
+~~~~
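+
+At this point you can sanity check the whole setup from your host machine. Vagrant keeps a copy of the insecure private key at `~/.vagrant.d/insecure_private_key` (assuming Vagrant is already installed on the host), and we forwarded host port 2222 to the guest's port 22 earlier, so this should log you in as the vagrant user without a password:
+
+~~~~console
+ssh -p 2222 -i ~/.vagrant.d/insecure_private_key vagrant@127.0.0.1
+~~~~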
+
+### Install Virtualbox Guest Additions
+
+The Virtualbox Guest Additions allow for nice extras like shared folders, as well as a performance boost. Since the Guest Additions require a compiler and Linux header files, let's first get the prerequisites installed:
+
+~~~~console
+sudo apt install gcc build-essential linux-headers-amd64
+~~~~
+
+Now head to the VirtualBox window menu and click the "Devices" option and choose "Insert Guest Additions CD Image" (note that you should download the latest version if Virtualbox asks[^1]). That will insert an ISO of the Guest Additions into our virtual machine's CDROM drive. We just need to mount it and run the Guest Additions Installer:
+
+~~~~console
+sudo mount /dev/cdrom /mnt
+cd /mnt
+sudo ./VBoxLinuxAdditions.run
+~~~~
+
+Assuming that finishes without error, you're done. Congratulations. Now you can add any extras you want your Debian 9 Vagrant base box to include. I primarily build things in Python with Django and Postgresql, so I always install packages like `postgresql`, `python3-dev`, `python3-pip`, `virtualenv`, and some other software I can't live without. I also edit the .bashrc file to create some aliases and helper scripts. Whatever you want all your future Vagrant boxes to have, now is the time to install it.
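+
+For example, my usual round of extras looks something like this (adjust the list to whatever you can't live without):
+
+~~~~console
+sudo apt install postgresql python3-dev python3-pip virtualenv
+~~~~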
+
+### Packaging your Debian 9 Vagrant Box
+
+Before we package the box, we're going to zero out the drive to save a little space when we compress it down the road. Here are the commands to zero it out (`dd` will eventually exit with a "No space left on device" error; that's expected, since the point is to fill the free space with zeros before deleting the file):
+
+~~~~console
+sudo dd if=/dev/zero of=/zeroed bs=1M
+sudo rm -f /zeroed
+~~~~
+
+Once that's done we can package up our box with this command (the argument to `--base` is the name of the virtual machine in Virtualbox; mine is called debian9-64base):
+
+~~~~console
+vagrant package --base debian9-64base
+==> debian9-64base: Attempting graceful shutdown of VM...
+==> debian9-64base: Clearing any previously set forwarded ports...
+==> debian9-64base: Exporting VM...
+==> debian9-64base: Compressing package to: /home/lxf/vms/package.box
+~~~~
+
+As you can see from the output, I keep my Vagrant boxes in a folder called `vms`; you can put yours wherever you like. Wherever you decide to keep it, move it there now and cd into that folder so you can add the box. Sticking with the `vms` folder I use, the commands would look like this:
+
+~~~console
+cd vms
+vagrant box add debian9-64 package.box
+~~~
+
+Now when you want to create a new vagrant box from this base box, all you need to do is add this to your Vagrantfile:
+
+~~~~console
+Vagrant.configure("2") do |config|
+ config.vm.box = "debian9-64"
+end
+~~~~
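+
+The box will boot with whatever RAM you gave the VM when you built it (1GB in my case). If a particular project needs more, you can override that per Vagrantfile rather than rebuilding the base box; a minimal sketch:
+
+~~~~console
+Vagrant.configure("2") do |config|
+  config.vm.box = "debian9-64"
+  config.vm.provider "virtualbox" do |vb|
+    vb.memory = "2048"
+  end
+end
+~~~~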
+
+Then you start up the box as you always would:
+
+~~~~console
+vagrant up
+vagrant ssh
+~~~~
+
+#####Shoulders stood upon
+
+* [Vagrant docs](https://www.vagrantup.com/docs/virtualbox/boxes.html)
+* [Engineyard's guide to Ubuntu](https://www.engineyard.com/blog/building-a-vagrant-box-from-start-to-finish)
+* [Customizing an existing box](https://scotch.io/tutorials/how-to-create-a-vagrant-base-box-from-an-existing-one) - Good for when you don't need more RAM/disk space, just some software pre-installed.
+
+[^1]: On Arch, using Virtualbox 6.x I have had problems downloading the Guest Additions. Instead I've been using the package `virtualbox-guest-iso`. Note that after you install that, you'll need to reboot to get Virtualbox to find it.