Diffstat (limited to 'src')

 src/getting-started-maas.txt                                   |  29 +-
 src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt | 199 +
 src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt          |  42 +
 3 files changed, 262 insertions, 8 deletions
diff --git a/src/getting-started-maas.txt b/src/getting-started-maas.txt
index adc446b..74622a5 100644
--- a/src/getting-started-maas.txt
+++ b/src/getting-started-maas.txt
@@ -1,8 +1,12 @@
-MAAS, which stands for *Metal As A Service* allows you to run physical servers as if they were virtual machine instance. This means you can configure, deploy and manage bare metal servers just like you would virtual machines in a cloud environment like Amazon AWS or Microsoft Azure. MAAS gives you the management tools that have made the cloud popular, but with the additional benefits of owning your own hardware.
+Canonical's Metal As A Service (MAAS) allows you to deploy and manage physical hardware in the same way you can deploy and manage virtual machines. This means you can configure, deploy and manage bare metal servers just like you would VMs running on Amazon AWS or Microsoft Azure. MAAS gives you the management tools that have made the cloud popular, but with the additional benefits of physical hardware.

-To run MAAS you'll need a server to run the management software and at least one server which can be managed with a BMC. Canonical recommends letting the MAAS server provide DHCP and DNS for the network the managed machines are connected to. If your setup requires a different approach to DHCP, see the MAAS documentation for more details on how DHCP works in MAAS and how you can adapt it to your current setup.
+To use MAAS you'll need a server to run the management software and at least one server which can be managed with a BMC (once MAAS is installed you can select different BMC power types according to your hardware setup).

-To install MAAS first download Ubuntu Server 18.04 LTS and follow the step-by-step installation instructions to set up Ubuntu on your MAAS server. Once you have Ubuntu 18.04 up and running, you can install MAAS. To get the latest development release of MAAS, use the PPA `maas/next`. First add the PPA, then update and install.
+Canonical recommends letting the MAAS server handle DHCP for the network the managed machines are connected to, but if your current infrastructure requires a different approach to DHCP there are other options. The MAAS documentation has more details on [how DHCP works in MAAS](https://docs.maas.io/2.6/en/installconfig-network-dhcp) and how you can adapt it to your current setup.
+
+To install MAAS first download Ubuntu Server 18.04 LTS and follow the step-by-step installation instructions to set up Ubuntu on your server. Once you have Ubuntu 18.04 up and running, you can install MAAS.
+
+To get the latest development release of MAAS, use the [maas/next PPA](https://launchpad.net/~maas/+archive/ubuntu/next). First add the PPA, then update and install.

~~~~console
sudo add-apt-repository ppa:maas/next
@@ -20,16 +24,25 @@ The init command will ask you to create a username and password for the

Once the installation is done you can login to the web-based MAAS GUI by pointing your browser to http://<your.maas.ip>:5240/MAAS/.

-maas-01.png
+<img src="maas-01.png" alt="MAAS web UI login screen" />
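+
+If you prefer the command line, you can also drive MAAS through its CLI once you have the admin API key. A rough sketch (the `admin` username and profile name are examples from this setup, and the exact apikey command can differ between deb and snap installs):
+
+~~~~console
+# grab the API key for the admin user created during maas init
+sudo maas-region apikey --username=admin
+# point a CLI profile at the region controller and log in
+maas login admin http://<your.maas.ip>:5240/MAAS/ <api-key>
+~~~~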

-Once you login to the MAAS web UI you'll be presented with the MAAS configuration panel where you can set the region name, configure a DNS forwarder for domains not managed by MAAS, as well as configure the images and architectures you want available for MAAS.
+Once you login to the MAAS web UI you'll be presented with the MAAS configuration panel where you can set the region name, configure a DNS forwarder for domains not managed by MAAS, as well as configure the images and architectures you want available for MAAS-managed machines.

-maas-01.png
+<img src="maas-02.png" alt="MAAS web UI initial setup screen" />

-Here you can accept the defaults and click continue. If you did not add your SSH keys in the init step, you'll need to upload them now. Then click "Go to Dashboard" to continue.
+For now you can accept the defaults and click continue. If you did not add your SSH keys in the initialization step, you'll need to upload them now. Then click "Go to Dashboard" to continue.
+
+<img src="maas-04.png" alt="MAAS web UI SSH keys screen" />

The last step is to configure DHCP. When the MAAS Dashboard loads it will alert you that "DHCP is not enabled on any VLAN." To set up DHCP click the "Subnets" menu item and then click the VLAN where you want to enable DHCP.

+<img src="maas-07.png" alt="MAAS web UI Subnet screen" />
+
This will bring up a new page where you can configure your DHCP subnet, start and end IP addresses, and Gateway IP. You can also decide how MAAS handles DHCP, whether directly from the rack controller or relayed to another VLAN. If you don't want MAAS to manage DHCP you can disable it here.

-To set up your first MAAS instances with MAAS handling DHCP, click the Configure MAAS-managed DHCP button.
+<img src="maas-08.png" alt="MAAS web UI Subnet screen" />
+
+To set up your first MAAS instances with MAAS handling DHCP, click the "Configure MAAS-managed DHCP" button.
+
+<img src="maas-09.png" alt="MAAS web UI Subnet screen" />
+
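+For what it's worth, everything the DHCP screens do can also be done from the MAAS CLI. A sketch, assuming a logged-in profile named `admin` and the default fabric; the IP range, fabric name and rack controller id below are placeholders you would look up first (for example with `maas admin fabrics read`):
+
+~~~~console
+# reserve a dynamic range the DHCP server can hand out
+maas admin ipranges create type=dynamic start_ip=192.168.1.100 end_ip=192.168.1.200
+# enable MAAS-managed DHCP on the untagged VLAN of fabric-0
+maas admin vlan update fabric-0 untagged dhcp_on=True primary_rack=<rack-controller-id>
+~~~~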
diff --git a/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt b/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
new file mode 100644
index 0000000..837af2b
--- /dev/null
+++ b/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
@@ -0,0 +1,199 @@
+I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box](/src/create-custom-debian-9-vagrant-box) to mirror the server running this site.
+
+The problem with Vagrant is that I have to run a huge memory-hungry virtual machine when all I really want to do is run Django's built-in dev server.
+
+My laptop only has 8GB of RAM. My browser is usually taking around 2GB, which means if I start two Vagrant machines, I'm pretty much maxed out. Django's dev server is also painfully slow to reload when anything changes.
+
+Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, but hadn't tried anything else, he told me I really needed to try LXD. Later that day I began reading through the [LinuxContainers](https://linuxcontainers.org/) site and tinkering with LXD. Now, a few days later, there's not a Vagrant machine left on my laptop.[^1]
+
+Since it's just me, I don't care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation."
+
+Here's how I'm using LXD to manage containers for Django development on Arch Linux. I've also included instructions and commands for Ubuntu since I set it up there as well.
+
+### What's the difference between LXC, LXD and `lxc`
+
+I wrote this guide in part because I've been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It's really not though; in fact I found it easier to understand than Vagrant or Docker.
+
+So what is an LXC container, what's LXD, and how is either different from, say, a VM or, for that matter, Docker?
+
+* LXC - low-level tools and a library to create and manage containers; powerful, but complicated.
+* LXD - a daemon which provides a REST API to drive LXC containers; much more user-friendly.
+* `lxc` - the command line client for LXD.
+
+In LXC parlance a container is essentially a virtual machine; if you want to get pedantic, see Stéphane Graber's post on the [various components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/). For the most part though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket, potato, potahto. Mostly.
+
+An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that.
+
+Now what about LXD? LXC is the low-level tool; you don't really need to go there. Instead you interact with your LXC container via the LXD API. It uses YAML config files and a command line tool, `lxc`.
+
+That's the basic stack. Let's install it.
+
+### Install LXD
+
+On Arch I used the version of [LXD in the AUR](https://aur.archlinux.org/packages/lxd/). Ubuntu users should go with the Snap package. The other thing you'll want is your distro's Btrfs or ZFS tools.
+
+Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file the way VirtualBox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is spent downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container.
+
+In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if there was any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. LXC turtles all the way down).
+
+Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands:
+
+~~~~console
+apt install zfsutils-linux
+~~~~
+
+And then install the snap version of LXD with:
+
+~~~~console
+snap install lxd
+~~~~
+
+Once that's done we need to initialize LXD. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen:
+
+~~~~console
+sudo lxd init
+Would you like to use LXD clustering? (yes/no) [default=no]:
+Do you want to configure a new storage pool? (yes/no) [default=yes]:
+Name of the new storage pool [default=default]:
+Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
+Create a new BTRFS pool? (yes/no) [default=yes]:
+Would you like to use an existing block device? (yes/no) [default=no]:
+Size in GB of the new loop device (1GB minimum) [default=15GB]:
+Would you like to connect to a MAAS server? (yes/no) [default=no]:
+Would you like to create a new local network bridge? (yes/no) [default=yes]:
+What should the new bridge be called? [default=lxdbr0]:
+What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
+What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
+Would you like LXD to be available over the network? (yes/no) [default=no]:
+Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
+Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
+~~~~
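+
+Saying yes to that last question prints the same answers back as YAML. A minimal sketch of how that preseed can be reused to initialize another machine non-interactively (the file name here is just an example):
+
+~~~~console
+# save the printed YAML, then feed it to lxd init on a fresh machine
+cat lxd-preseed.yaml | lxd init --preseed
+~~~~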
+
+LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. To see all installed profiles use:
+
+~~~~console
+lxc profile list
+~~~~
+
+To view the contents of a profile use:
+
+~~~~console
+lxc profile show <profilename>
+~~~~
+
+To edit a profile use:
+
+~~~~console
+lxc profile edit <profilename>
+~~~~
+
+So far I haven't needed to edit a profile by hand. I've also been happy with all the defaults, although when I do this again I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I'm just trying to figure things out, so defaults it is.
+
+The last step in our setup is to add our user to the lxd group. By default LXD runs as the lxd group, so to interact with containers we'll need to make our user part of that group.
+
+~~~~console
+sudo usermod -a -G lxd yourusername
+~~~~
+
+#####Special note for Arch users.
+
+To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch User Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions and then reboot and everything below should work as you'd expect.
+
+### Create Your First LXC Container
+
+Let's create our first container. This website runs on a Debian VM currently hosted on Vultr.com, so I'm going to spin up a Debian container to mirror this environment for local development and testing.
+
+To create a new LXC container we use the `launch` command of the `lxc` tool.
+
+There are four ways you can get LXC containers: local (meaning a container base you've downloaded), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily Ubuntu images). The images on linuxcontainers.org are unofficial, but the Debian image I used worked perfectly. There's also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images. Pretty much every architecture you could imagine is in there too.
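+
+If you want to poke around before launching anything, the client can list those remotes and search their images. A quick sketch (the filter term is just an example):
+
+~~~~console
+# show the image servers LXD knows about (local:, images:, ubuntu:, ubuntu-daily:)
+lxc remote list
+# search the images: remote for Debian images
+lxc image list images:debian
+~~~~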
+
+I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example:
+
+~~~~console
+lxc launch images:debian/stretch/amd64 debian-base
+Creating debian-base
+Starting debian-base
+~~~~
+
+That will grab the amd64 image of Debian 9 Stretch, create a container out of it, and then launch it. Now if we look at the list of installed containers we should see something like this:
+
+~~~~console
+lxc list
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT |           |
++-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+~~~~
+
+Now what? This is what I love about LXC: we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell:
+
+~~~~console
+lxc exec debian-base -- /bin/bash
+~~~~
+
+Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the GeoDjango requirements (PostGIS, GDAL, etc.), along with a few other odds and ends.
+
+You don't have to do it from inside the container though. Part of LXD's charm is being able to run commands without logging into anything. Instead you can do this:
+
+~~~~console
+lxc exec debian-base -- apt update
+lxc exec debian-base -- apt install postgresql postgis virtualenv
+~~~~
+
+LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base -- '` so that all I need to type is `luxdev <command>`.
+
+What I haven't figured out is how to chain commands; this does not work:
+
+~~~~console
+lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000
+~~~~
+
+According to [a bug report](https://github.com/lxc/lxd/issues/2057), it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong.
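+
+One workaround that should do the job is to hand the whole chain to a single shell inside the container, so the `&&`s run there instead of on the host (the `lxf` user and paths are from my setup):
+
+~~~~console
+# run the chain under one login shell inside the container
+lxc exec debian-base -- su - lxf -c 'cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000'
+~~~~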
+
+The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user id. Use the `id` command to get your user and group id (it's probably 1000, but if not, adjust the commands below). Once you have your user id, add it to the files with this one-liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd):
+
+~~~~console
+echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
+~~~~
+
+Then you need to configure your LXC instance to use the same uid:
+
+~~~~console
+lxc config set debian-base raw.idmap 'both 1000 1000'
+~~~~
+
+The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect.
+
+~~~~console
+lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc
+lxc stop debian-base
+lxc start debian-base
+~~~~
+
+That replicates my setup in Vagrant, but we've really just scratched the surface of what you can do with LXD. For example, you'll notice I named the initial container "debian-base". That's because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container:
+
+~~~~console
+lxc snapshot debian-base debian-base-configured
+lxc copy debian-base/debian-base-configured mycontainer
+~~~~
+
+Now you've got a new container named mycontainer. If you'd like to tweak anything, for example mount a different folder specific to this new project you're starting, you can edit the config file like this:
+
+~~~~console
+lxc config edit mycontainer
+~~~~
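+
+Snapshots are also handy on their own: if an experiment in the base container goes sideways, you should be able to roll straight back to the snapshot you just took. A sketch, using the names from the example above:
+
+~~~~console
+# list a container's snapshots along with its other details
+lxc info debian-base
+# roll the container back to the saved snapshot
+lxc restore debian-base debian-base-configured
+~~~~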
+
+I highly suggest reading through Stéphane Graber's 12 part series on LXD to get a better idea of other things you can do: how to manage resources, manage local images, migrate containers, or connect LXD with Juju, OpenStack or, yes, even Docker.
+
+#####Shoulders stood upon
+
+* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - Graber wrote LXC and LXD; this is the best resource I found and I highly recommend reading it all.
+* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd)
+* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/)
+* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996)
+* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained)
+
+[^1]: To be fair, I didn't need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great; why add yet another tool to the mix? Unless you're working with developers who use Windows, in which case LXC, which is short for *Linux Containers*, is not for you.
diff --git a/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt b/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt
new file mode 100644
index 0000000..5f128fe
--- /dev/null
+++ b/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt
@@ -0,0 +1,42 @@
+
+Error: Failed to run: /usr/bin/lxd forkstart debian /var/lib/lxd/containers /var/log/lxd/debian/lxc.conf:
+Try `lxc info --show-log local:debian` for more info
+
+Hmmm. Nothing like an error right after init. Okay, so run that suggested command:
+
+~~~~console
+lxc info --show-log local:debian
+If this is your first time running LXD on this machine, you should also run: lxd init
+To start your first container, try: lxc launch ubuntu:18.04
+
+Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: permission denied
+~~~~
+
+After a bit of searching I figured out the permissions problem has to do with privileged vs unprivileged containers. I skipped a part of the initial setup on Arch: out of the box you'll need to jump through a few extra hoops to run unprivileged containers, which seems odd and even backward to me because, as I understand it, that's exactly what you want to run. For now I have skipped those extra steps until I better understand them. In the meantime I used the workaround suggested in the Arch wiki, which is to append `-c security.privileged=true` to the end of the `launch` command we used a minute ago. However, I believe this defeats one of the major security benefits of containers, by, uh, containing things. So I wouldn't rely on it for anything beyond local experiments.
+
+~~~~console
+sudo lxc launch images:debian/stretch/amd64 debian -c security.privileged=true
+Error: Failed container creation: Create container: Add container info to the database: This container already exists
+~~~~
+
+Okay, so even though we couldn't connect in our previous effort, we did create a container with that name, so we need to get rid of it first. Let's see what we have.
+
+~~~~console
+sudo lxc list
++--------+---------+------+------+------------+-----------+
+|  NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
++--------+---------+------+------+------------+-----------+
+| debian | STOPPED |      |      | PERSISTENT |           |
++--------+---------+------+------+------------+-----------+
+~~~~
+
+Yup, there's our debian container. Now the question is: how do you delete a container using the `lxc` command? Curiously, the command `lxc-delete`, which you'll discover if you type `man lxc`, errored out for me. After a bit of searching I found the LXD equivalent is `lxc delete`:
+
+~~~~console
+sudo lxc delete debian
+~~~~
+
+Okay, now back to our create command:
+
+~~~~console
+sudo lxc launch images:debian/stretch/amd64 debian -c security.privileged=true
+~~~~
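+
+For reference, the unprivileged route the wiki describes boils down to giving root a range of subordinate uids/gids that LXD can map containers into. A sketch of the steps I skipped, untested here; the 100000:65536 range is the wiki's example:
+
+~~~~console
+# allocate a subordinate id range for root (used by LXD's id mapping)
+echo 'root:100000:65536' | sudo tee -a /etc/subuid /etc/subgid
+# restart the daemon, then launch again without security.privileged
+sudo systemctl restart lxd
+~~~~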