diff options
author: luxagraf <sng@luxagraf.net> 2019-12-22 11:50:57 -0500
committer: luxagraf <sng@luxagraf.net> 2019-12-22 11:50:57 -0500
commit: a2128d89bc501071ef1abc83e011f0aa02eca54e (patch)
tree: 1f569e148b630f4073cf236dd62940038b18a763 /tech
parent: 0d1cba91e435b1d613735d4537a64673e5c2731d (diff)
brought notes up-to-date
Diffstat (limited to 'tech')
23 files changed, 2072 insertions, 0 deletions
diff --git a/tech/arch-downgrade.txt b/tech/arch-downgrade.txt new file mode 100644 index 0000000..2e34518 --- /dev/null +++ b/tech/arch-downgrade.txt @@ -0,0 +1,67 @@ +Return to an earlier package version +Using the pacman cache + +If a package was installed at an earlier stage, and the pacman cache was not cleaned, install an earlier version from /var/cache/pacman/pkg/. + +This process will remove the current package and install the older version. Dependency changes will be handled, but pacman will not handle version conflicts. If a library or other package needs to be downgraded with the packages, please be aware that you will have to downgrade this package yourself as well. + +# cd /var/cache/pacman/pkg/ +# pacman -U <file_name_of_the_package> + +Once the package is reverted, temporarily add it to the IgnorePkg section of pacman.conf, until the difficulty with the updated package is resolved. +Downgrading the kernel + +If you are unable to boot after a kernel update, then you can downgrade the kernel via a live CD. Use a fairly recent Arch Linux installation medium. Once it has booted, mount the partition where your system is installed to /mnt, and if you have /boot or /var on separate partitions, mount them there as well (e.g. mount /dev/sdc3 /mnt/boot). Then chroot into the system: + +# arch-chroot /mnt /bin/bash + +Here you can go to /var/cache/pacman/pkg and downgrade the packages. At least downgrade linux, linux-headers and any kernel modules. For example: + +# pacman -U linux-3.5.6-1-x86_64.pkg.tar.xz linux-headers-3.5.6-1-x86_64.pkg.tar.xz virtualbox-host-modules-4.2.0-5-x86_64.pkg.tar.xz + +Exit the chroot (with exit), reboot and you should be done. +Arch Linux Archive + +The Arch Linux Archive is a daily snapshot of the official repositories. + +The ALA can be used to install a previous package version, or restore the system to an earlier date. +Rebuild the package + +If the package is unavailable, find the correct PKGBUILD and rebuild it with makepkg. 
+ +For packages from the official repositories, retrieve the PKGBUILD with ABS and change the software version. Alternatively, find the package on the Packages website, click "View Changes", and navigate to the desired version. The files are available through a .tar.gz snapshot, and via the Tree view. + +See also Getting PKGBUILDs from SVN#Checkout an older revision of a package. + +## How to downgrade one package + +Find the package you want under /packages. Download it and install it using pacman -U. + +See also Downgrading packages#Automation for tools that simplify the process. +How to restore all packages to a specific date + +To restore all packages to their version at a specific date, let's say 30 March 2014, you have to point pacman at this date, either by editing your /etc/pacman.conf to use the following Server directive: + +[core] +SigLevel = PackageRequired +Server=https://archive.archlinux.org/repos/2014/03/30/$repo/os/$arch + +[extra] +SigLevel = PackageRequired +Server=https://archive.archlinux.org/repos/2014/03/30/$repo/os/$arch + +[community] +SigLevel = PackageRequired +Server=https://archive.archlinux.org/repos/2014/03/30/$repo/os/$arch + +or by replacing your /etc/pacman.d/mirrorlist with the following content: + +## +## Arch Linux repository mirrorlist +## Generated on 2042-01-01 +## +Server=https://archive.archlinux.org/repos/2014/03/30/$repo/os/$arch + +Then update the database and force downgrade: + +# pacman -Syyuu diff --git a/tech/debian 7 digital ocean running slowly.txt b/tech/debian 7 digital ocean running slowly.txt new file mode 100755 index 0000000..9cb2e14 --- /dev/null +++ b/tech/debian 7 digital ocean running slowly.txt @@ -0,0 +1,7 @@ +http://unix.stackexchange.com/questions/68597/console-kit-daemon-hogging-cpu-and-ram + +Kill the console-kit-daemon process if it's still running. 
Remove the file + +/usr/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service + +(or move it to some place where you could restore it, if necessary). Reboot and you will see that console-kit-daemon no longer automatically starts up.
\ No newline at end of file diff --git a/tech/django-comment-app-features.txt b/tech/django-comment-app-features.txt new file mode 100755 index 0000000..59d4524 --- /dev/null +++ b/tech/django-comment-app-features.txt @@ -0,0 +1,29 @@ +From NWEdible: "Your participation makes this whole thing work, so join in! Comment policy: Wheaton's Law enforced here." + +For users: + - Optional RSS/Email updates to replies (disabled by default) + - Option to sign up for newsletter as part of posting comment (disabled by default) + - optional link/website field + - Threading max 4 levels + +For me: + - auto spam filtering + - auto-close old threads. maybe old threads just get a really hard captcha + - moderation edit/delete + - load as an iframe or some other form of lazy loading + - or we have to regenerate the page more often, which could DDoS the site if I fuck it up. + + - Pull in Webmentions. Particularly interested in Facebook as a source of discussion that gets pulled automatically to the site and displayed inline with comments. Showing the chatter from Twitter feels slightly less useful. At the same time FB could be not that interesting much of the time too... Pull selected things? That's a lot of manual work. Doesn't scale. + + +Things I thought about but didn't use: + +http://posativ.org/isso/docs/quickstart/ +http://tildehash.com/?page=hashover + +forums: +http://camendesign.com/code/nononsense_forum + + +why: +http://www.jeremyscheff.com/2011/08/jekyll-and-other-static-site-generators-are-currently-harmful-to-the-free-open-source-software-movement/
\ No newline at end of file diff --git a/tech/dump database psql mysql.txt b/tech/dump database psql mysql.txt new file mode 100644 index 0000000..8c265fd --- /dev/null +++ b/tech/dump database psql mysql.txt @@ -0,0 +1,13 @@ + + +For mysql: + +mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql + +To dump multiple at once, use the --databases flag: + +mysqldump -u root -p[root_password] --databases db1 db2 > combined.sql + +Then to restore: + +mysql -u root -p[root_password] [database_name] < dumpfilename.sql diff --git a/tech/how to downsize images and keep them sharp.txt b/tech/how to downsize images and keep them sharp.txt new file mode 100755 index 0000000..b4f3c62 --- /dev/null +++ b/tech/how to downsize images and keep them sharp.txt @@ -0,0 +1,12 @@ +title: How to Downsize Images and Keep Them Sharp +date: 20140731 13:27:28 +tags: #gimp #photos #webdev + + +When you size down an image you may get moiré patterns or jagged edges due to spatial frequency folding. The solution is to pre-blur the picture before sizing down. The rule is simple: if you size down by X, pre-blur with a Gaussian blur with a radius of X. For example, if you take an image from 2000px to 400px (1:5), you would use a 5px blur radius. + +Then resize your image with scale image (resize in PS) and apply smart sharpen to sharpen up the details. + +This works because that pre-blur step (provided you do it right, using just enough, but not too much) doesn't end up softening the result any more than a direct downscale. If done correctly it removes information that will be lost in the resize anyway. + +based on: http://gimpforums.com/thread-image-quality-and-resizing
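The downscale-factor rule is easy to check mentally or in a shell one-liner; `blur_radius` here is a hypothetical helper name, not an existing tool:

```shell
# Hypothetical helper: the Gaussian pre-blur radius equals the downscale factor
# (original dimension divided by target dimension).
blur_radius() {
    echo $(( $1 / $2 ))  # original px / target px
}

blur_radius 2000 400  # prints 5, i.e. a 1:5 downscale wants a 5px blur
```

With ImageMagick the whole recipe would look something like `convert in.jpg -gaussian-blur 0x5 -resize 400 out.jpg` (syntax from memory, adjust to taste).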
\ No newline at end of file diff --git a/tech/how to encrypt a file or directory in linux.txt b/tech/how to encrypt a file or directory in linux.txt new file mode 100644 index 0000000..ebb28c8 --- /dev/null +++ b/tech/how to encrypt a file or directory in linux.txt @@ -0,0 +1,31 @@ +--- +title: encryption - How to encrypt a file or directory in Linux? +date: 2015-10-03T14:06:45Z +source: http://superuser.com/questions/249497/how-to-encrypt-a-file-or-directory-in-linux +tags: security + +--- + +I think it would be gpg. The syntax for files and directories differs though. + +## Encryption + +For files (outputs filename.gpg): + + gpg -c filename + +For dirs: + + gpg-zip -c -o file.gpg dirname + +## Decryption + +For files (decrypts filename.gpg): + + gpg filename.gpg + +For dirs: + + gpg-zip -d file.gpg + +Edit: Corrected as @Mk12 pointed out the mistake of compression/decompression for encryption/decryption.
\ No newline at end of file diff --git a/tech/how to secure nginx with let's encrypt.txt b/tech/how to secure nginx with let's encrypt.txt new file mode 100644 index 0000000..c071fd3 --- /dev/null +++ b/tech/how to secure nginx with let's encrypt.txt @@ -0,0 +1,282 @@ +--- +title: How To Secure Nginx with Let's Encrypt on Ubuntu 14.04 +date: 2016-03-20T02:19:27Z +source: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04 +tags: linux, luxagraf + +--- + +### Introduction + +Let's Encrypt is a new Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, `letsencrypt`, that attempts to automate most (if not all) of the required steps. Currently, as Let's Encrypt is still in open beta, the entire process of obtaining and installing a certificate is fully automated only on Apache web servers. However, Let's Encrypt can be used to easily obtain a free SSL certificate, which can be installed manually, regardless of your choice of web server software. + +In this tutorial, we will show you how to use Let's Encrypt to obtain a free SSL certificate and use it with Nginx on Ubuntu 14.04. We will also show you how to automatically renew your SSL certificate. If you're running a different web server, simply follow your web server's documentation to learn how to use the certificate with your setup. + +![Nginx with Let's Encrypt TLS/SSL Certificate and Auto-renewal][1] + +## Prerequisites + +Before following this tutorial, you'll need a few things. + +You should have an Ubuntu 14.04 server with a non-root user who has `sudo` privileges. You can learn how to set up such a user account by following steps 1-3 in our [initial server setup for Ubuntu 14.04 tutorial][2]. + +You must own or control the registered domain name that you wish to use the certificate with. 
If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.). + +If you haven't already, be sure to create an **A Record** that points your domain to the public IP address of your server. This is required because of how Let's Encrypt validates that you own the domain it is issuing a certificate for. For example, if you want to obtain a certificate for `example.com`, that domain must resolve to your server for the validation process to work. Our setup will use `example.com` and `www.example.com` as the domain names, so **both DNS records are required**. + +Once you have all of the prerequisites out of the way, let's move on to installing the Let's Encrypt client software. + +## Step 1 — Install Let's Encrypt Client + +The first step to using Let's Encrypt to obtain an SSL certificate is to install the `letsencrypt` software on your server. Currently, the best way to install Let's Encrypt is to simply clone it from the official GitHub repository. In the future, it will likely be available via a package manager. + +### Install Git and bc + +Let's install Git and bc now, so we can clone the Let's Encrypt repository. + +Update your server's package manager with this command: + + * sudo apt-get update + +Then install the `git` and `bc` packages with apt-get: + + * sudo apt-get -y install git bc + +With `git` and `bc` installed, we can easily download `letsencrypt` by cloning the repository from GitHub. + +### Clone Let's Encrypt + +We can now clone the Let's Encrypt repository in `/opt` with this command: + + * sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt + +You should now have a copy of the `letsencrypt` repository in the `/opt/letsencrypt` directory. + +## Step 2 — Obtain a Certificate + +Let's Encrypt provides a variety of ways to obtain SSL certificates, through various plugins. 
Unlike the Apache plugin, which is covered in [a different tutorial][3], most of the plugins will only help you with obtaining a certificate which you must manually configure your web server to use. Plugins that only obtain certificates, and don't install them, are referred to as "authenticators" because they are used to authenticate whether a server should be issued a certificate. + +We'll show you how to use the **Webroot** plugin to obtain an SSL certificate. + +### How To Use the Webroot Plugin + +The Webroot plugin works by placing a special file in the `/.well-known` directory within your document root, which can be opened (through your web server) by the Let's Encrypt service for validation. Depending on your configuration, you may need to explicitly allow access to the `/.well-known` directory. + +If you haven't installed Nginx yet, do so with this command: + + * sudo apt-get install nginx + +To ensure that the directory is accessible to Let's Encrypt for validation, let's make a quick change to our Nginx configuration. By default, it's located at `/etc/nginx/sites-available/default`. We'll use `nano` to edit it: + + * sudo nano /etc/nginx/sites-available/default + +Inside the server block, add this location block: + +Add to SSL server block + + location ~ /.well-known { + allow all; + } + +You will also want to look up what your document root is set to by searching for the `root` directive, as the path is required to use the Webroot plugin. If you're using the default configuration file, the root will be `/usr/share/nginx/html`. + +Save and exit. + +Reload Nginx with this command: + + * sudo service nginx reload + +Now that we know our `webroot-path`, we can use the Webroot plugin to request an SSL certificate with these commands. Here, we are also specifying our domain names with the `-d` option. If you want a single cert to work with multiple domain names (e.g. `example.com` and `www.example.com`), be sure to include all of them. 
Also, make sure that you replace the highlighted parts with the appropriate webroot path and domain name(s): + + * cd /opt/letsencrypt + + * ./letsencrypt-auto certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com + +**Note:** The Let's Encrypt software requires superuser privileges, so you will be required to enter your password if you haven't used `sudo` recently. + +After `letsencrypt` initializes, you will be prompted for some information. The exact prompts may vary depending on if you've used Let's Encrypt before, but we'll step you through the first time. + +At the prompt, enter an email address that will be used for notices and lost key recovery: + +![Email prompt][4] + +Then you must agree to the Let's Encrypt Subscribe Agreement. Select Agree: + +![Let's Encrypt Subscriber's Agreement][5] + +If everything was successful, you should see an output message that looks something like this: + + Output: + + IMPORTANT NOTES: + - If you lose your account credentials, you can recover through + e-mails sent to sammy@digitalocean.com + - Congratulations! Your certificate and chain have been saved at + /etc/letsencrypt/live/example.com/fullchain.pem. Your + cert will expire on 2016-03-15. To obtain a new version of the + certificate in the future, simply run Let's Encrypt again. + - Your account credentials have been saved in your Let's Encrypt + configuration directory at /etc/letsencrypt. You should make a + secure backup of this folder now. This configuration directory will + also contain certificates and private keys obtained by Let's + Encrypt so making regular backups of this folder is ideal. + - If like Let's Encrypt, please consider supporting our work by: + + Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate + Donating to EFF: https://eff.org/donate-le + +You will want to note the path and expiration date of your certificate, which was highlighted in the example output. 
+ +**Firewall Note:** If you receive an error like `Failed to connect to host for DVSNI challenge`, your server's firewall may need to be configured to allow TCP traffic on port `80` and `443`. + +**Note:** If your domain is routing through a DNS service like CloudFlare, you will need to temporarily disable it until you have obtained the certificate. + +### Certificate Files + +After obtaining the cert, you will have the following PEM-encoded files: + +* **cert.pem:** Your domain's certificate +* **chain.pem:** The Let's Encrypt chain certificate +* **fullchain.pem:** `cert.pem` and `chain.pem` combined +* **privkey.pem:** Your certificate's private key + +It's important that you are aware of the location of the certificate files that were just created, so you can use them in your web server configuration. The files themselves are placed in a subdirectory in `/etc/letsencrypt/archive`. However, Let's Encrypt creates symbolic links to the most recent certificate files in the `/etc/letsencrypt/live/your_domain_name` directory. Because the links will always point to the most recent certificate files, this is the path that you should use to refer to your certificate files. + +You can check that the files exist by running this command (substituting in your domain name): + + * sudo ls -l /etc/letsencrypt/live/your_domain_name + +The output should be the four previously mentioned certificate files. In a moment, you will configure your web server to use `fullchain.pem` as the certificate file, and `privkey.pem` as the certificate key file. + +### Generate Strong Diffie-Hellman Group + +To further increase security, you should also generate a strong Diffie-Hellman group. To generate a 2048-bit group, use this command: + + * sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048 + +This may take a few minutes but when it's done you will have a strong DH group at `/etc/ssl/certs/dhparam.pem`. 
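If you want to confirm the generation step worked before pointing Nginx at the file, `openssl dhparam -check` can validate a parameter file. This sketch uses a throwaway 512-bit group so it finishes in seconds; the real file above is the 2048-bit one:

```shell
# Generate a small throwaway DH group (fast), then validate it with -check.
# The production file is the 2048-bit one at /etc/ssl/certs/dhparam.pem.
tmp=$(mktemp)
openssl dhparam -out "$tmp" 512 2>/dev/null
result=$(openssl dhparam -in "$tmp" -check -noout 2>&1)
echo "$result"
```

The check should report that the parameters appear to be ok; anything else means the file is truncated or corrupt and should be regenerated.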
+ +## Step 3 — Configure TLS/SSL on Web Server (Nginx) + +Now that you have an SSL certificate, you need to configure your Nginx web server to use it. + +Edit the Nginx configuration that contains your server block. Again, it's at `/etc/nginx/sites-available/default` by default: + + * sudo nano /etc/nginx/sites-available/default + +Find the `server` block. **Comment out** or **delete** the lines that configure this server block to listen on port 80. In the default configuration, these two lines should be deleted: + +Nginx configuration deletions + + listen 80 default_server; + listen [::]:80 default_server ipv6only=on; + +We are going to configure this server block to listen on port 443 with SSL enabled instead. Within your `server {` block, add the following lines but replace all of the instances of `example.com` with your own domain: + +Nginx configuration additions — 1 of 3 + + listen 443 ssl; + + server_name example.com www.example.com; + + ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; + ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; + +This enables your server to use SSL, and tells it to use the Let's Encrypt SSL certificate that we obtained earlier. 
+ +To allow only the most secure SSL protocols and ciphers, and use the strong Diffie-Hellman group we generated, add the following lines to the same server block: + +Nginx configuration additions — 2 of 3 + + ssl_protocols TLSv1 TLSv1.1 TLSv1.2; + ssl_prefer_server_ciphers on; + ssl_dhparam /etc/ssl/certs/dhparam.pem; + ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; + ssl_session_timeout 1d; + ssl_session_cache shared:SSL:50m; + ssl_stapling on; + ssl_stapling_verify on; + add_header Strict-Transport-Security max-age=15768000; + +Lastly, outside of the original server block (that is listening on HTTPS, port 443), add this server block to redirect HTTP (port 80) to HTTPS. Be sure to replace the highlighted part with your own domain name: + +Nginx configuration additions — 3 of 3 + + server { + listen 80; + server_name example.com www.example.com; + return 301 https://$host$request_uri; + } + +Save and exit. + +Now put the changes into effect by reloading Nginx: + + * sudo service nginx reload + +The Let's Encrypt TLS/SSL certificate is now in place. At this point, you should test that the TLS/SSL certificate works by visiting your domain via HTTPS in a web browser. 
+ +You can use the Qualys SSL Labs Report to see how your server configuration scores: + + In a web browser: + + https://www.ssllabs.com/ssltest/analyze.html?d=example.com + +This SSL setup should report an **A+** rating. + +## Step 4 — Set Up Auto Renewal + +Let's Encrypt certificates are valid for 90 days, but it's recommended that you renew the certificates every 60 days to allow a margin of error. At the time of this writing, automatic renewal is still not available as a feature of the client itself, but you can manually renew your certificates by running the Let's Encrypt client with the `renew` option. + +To trigger the renewal process for all installed domains, run this command: + + * /opt/letsencrypt/letsencrypt-auto renew + +Because we recently installed the certificate, the command will only check for the expiration date and print a message informing you that the certificate is not due for renewal yet. The output should look similar to this: + + Output: + + Checking for new version... + Requesting root privileges to run letsencrypt... + /root/.local/share/letsencrypt/bin/letsencrypt renew + Processing /etc/letsencrypt/renewal/example.com.conf + + The following certs are not due for renewal yet: + /etc/letsencrypt/live/example.com/fullchain.pem (skipped) + No renewals were attempted. + +Notice that if you created a bundled certificate with multiple domains, only the base domain name will be shown in the output, but the renewal should be valid for all domains included in this certificate. + +A practical way to ensure your certificates won't get outdated is to create a cron job that will periodically execute the automatic renewal command for you. Since the renewal first checks for the expiration date and only executes the renewal if the certificate is less than 30 days away from expiration, it is safe to create a cron job that runs every week or even every day, for instance. 
+ +Let's edit the crontab to create a new job that will run the renewal command every week. To edit the crontab for the root user, run: + + * sudo crontab -e + +Add the following lines: + + crontab entry + + 30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/le-renew.log + 35 2 * * 1 /etc/init.d/nginx reload + +Save and exit. This will create a new cron job that will execute the `letsencrypt-auto renew` command every Monday at 2:30 am, and reload Nginx at 2:35 am (so the renewed certificate will be used). The output produced by the command will be appended to a log file located at `/var/log/le-renew.log`. + +For more information on how to create and schedule cron jobs, you can check our [How to Use Cron to Automate Tasks in a VPS][6] guide. + +## Step 5 — Updating the Let's Encrypt Client (optional) + +Whenever new updates are available for the client, you can update your local copy by running a `git pull` from inside the Let's Encrypt directory: + + * cd /opt/letsencrypt + + * sudo git pull + +This will download all recent changes to the repository, updating your client. + +## Conclusion + +That's it! Your web server is now using a free Let's Encrypt TLS/SSL certificate to securely serve HTTPS content. 
+ +[1]: https://assets.digitalocean.com/articles/letsencrypt/nginx-letsencrypt.png +[2]: https://www.digitalocean.com/community/articles/initial-server-setup-with-ubuntu-14-04 +[3]: https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04 +[4]: https://assets.digitalocean.com/articles/letsencrypt/le-email.png +[5]: https://assets.digitalocean.com/articles/letsencrypt/le-agreement.png +[6]: https://www.digitalocean.com/community/tutorials/how-to-use-cron-to-automate-tasks-on-a-vps diff --git a/tech/lhp book publishing tools.txt b/tech/lhp book publishing tools.txt new file mode 100755 index 0000000..c138f2c --- /dev/null +++ b/tech/lhp book publishing tools.txt @@ -0,0 +1,40 @@ +lhp-book-publishing-tools + +* Cover mocks: http://mockuphone.com/ +* payment processor: <https://gumroad.com/> +* fonts used: + * Chapter headings: Bodoni + * Body: TisaPro, Bold, Italic + * Headings (h2 $h3) Tradegothic LT CondEighteen + * Code: Inconsolata + +--------------------------- + +* Pandoc settings for html: + + pandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html Draft.txt + +* Prince settings for HTML -> PDF: + + prince rwd.html -o rwd.pdf --javascript + +* Pandoc settings for epub: + + pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --epub-embed-font=lib/TisaPro-Regular.otf --epub-embed-font=lib/TisaPro-Bold.otf --epub-embed-font=lib/TisaPro-Ita.otf --epub-embed-font=lib/TisaPro-BoldIta.otf --epub-embed-font=lib/bodoni-mt-condensed.otf --epub-embed-font=lib/InconsolataforPowerline.otf --epub-embed-font=lib/TradeGothicLTCondensed.otf --toc --toc-depth=2 -o rwd.epub Draft.txt + +* Pandoc settings for Kindle epub: + + pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html 
--template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub Draft.txt + + Then that gets run through the Kindlegen tool + +* Pandoc settings for Sample Chapter: + +pandoc --smart --template=lib/template.html5 --include-before-body=lib/header_simple.html -t html5 --email-obfuscation=none -o samplechapter.html SampleChapter.txt && prince samplechapter.html -o RWDSampleChapter.pdf --javascript + +----------- + +helpful links: + +http://kindlegen.s3.amazonaws.com/AmazonKindlePublishingGuidelines.pdf +http://puppetlabs.com/blog/automated-ebook-generation-convert-markdown-epub-mobi-pandoc-kindlegen
\ No newline at end of file diff --git a/tech/lhp idea book on writing with vim.txt b/tech/lhp idea book on writing with vim.txt new file mode 100755 index 0000000..3bbff6e --- /dev/null +++ b/tech/lhp idea book on writing with vim.txt @@ -0,0 +1,7 @@ +lhp-idea - book on writing with vim + +https://groups.google.com/forum/#!topic/comp.editors/BMP85bjXmVM +http://naperwrimo.org/wiki/index.php?title=Vim_for_Writers +http://therandymon.com/woodnotes/vim-for-writers/node12.html +http://usevim.com/page3/ +http://usevim.com/2013/10/23/vim-nanowrimo/
\ No newline at end of file diff --git a/tech/linux rename drive.txt b/tech/linux rename drive.txt new file mode 100644 index 0000000..da86d5e --- /dev/null +++ b/tech/linux rename drive.txt @@ -0,0 +1,32 @@ +title: Rename a drive in Linux +source: http://askubuntu.com/questions/276911/how-to-rename-partitions + + +From the command line + +Replace /dev/sdxN with your partition (e.g. /dev/sdc1). + + for FAT32: + + sudo mlabel -i /dev/sdxN ::"my_label" + + or: + + sudo fatlabel /dev/sdxN my_label + + for NTFS: + + sudo ntfslabel /dev/sdxN my_label + + for exFAT: + + sudo exfatlabel /dev/sdxN my_label + + for ext2/3/4: + + sudo e2label /dev/sdxN my_label + + for BTRFS: + + sudo btrfs filesystem label /dev/sdxN my_label + diff --git a/tech/lx link to header image.txt b/tech/lx link to header image.txt new file mode 100755 index 0000000..b05dd3c --- /dev/null +++ b/tech/lx link to header image.txt @@ -0,0 +1,3 @@ +lx-arc - link to header image + +http://www.stock-photos-illustrations.com/enlarged/6106-05631332/Man-carrying-backpack-looking-at-map-side-view/7
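The per-filesystem label commands in the rename-drive note above all follow the same tool-device-label pattern, so a small dispatcher is easy to sketch (`label_cmd` is a hypothetical helper, not an existing command):

```shell
# Hypothetical helper: map a filesystem type to its labeling tool.
# FAT32 could just as well use mlabel; fatlabel is picked here for symmetry.
label_cmd() {
    case "$1" in
        fat32)          echo fatlabel ;;
        ntfs)           echo ntfslabel ;;
        exfat)          echo exfatlabel ;;
        ext2|ext3|ext4) echo e2label ;;
        btrfs)          echo "btrfs filesystem label" ;;
        *)              echo "unknown filesystem: $1" >&2; return 1 ;;
    esac
}

label_cmd ext4  # prints e2label
```

In practice you would use it as something like `sudo $(label_cmd ext4) /dev/sdxN my_label`, after checking the type with `lsblk -f`.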
\ No newline at end of file diff --git a/tech/minimal debian install.txt b/tech/minimal debian install.txt new file mode 100755 index 0000000..b63c215 --- /dev/null +++ b/tech/minimal debian install.txt @@ -0,0 +1,107 @@ +Minimal Debian install for a nice, lightweight desktop + +Download and burn Debian netinst CD + +install base system + +Then boot and log in as root. + +vi /etc/network/interfaces +#add these lines: +auto wlan0 +iface wlan0 inet dhcp + wpa-ssid YOUR-SSID-HERE + wpa-psk YOUR-PASSWORD-HERE + +restart and ping google to confirm you have wifi + +#then install the basics + +apt-get install sudo vim-gtk tmux git ufw curl xorg openbox tint2 lxrandr htop terminator zip unzip dmz-cursor-theme python-pip python-dev python3 python3-dev network-manager-gnome clipit vnstat mutt mpd mpc ncmpcpp + +apt-get -t jessie-backports install "package" + +visudo + +> # User privilege specification +> root ALL=(ALL:ALL) ALL +> lxf ALL=(ALL:ALL) ALL + +# at this point you can pretty much log in as your user and startx + +if necessary, use lxrandr to change screen res + +# browsers +apt-get install chromium +vim /etc/apt/sources.list # add: deb http://packages.linuxmint.com debian import +apt-get update +apt-get install firefox +update-alternatives --install /usr/bin/x-www-browser x-www-browser /usr/bin/firefox 100 + +# download crunchbang theme +tar -vxf Downloads/crunchy-dark-grey.tar.gz +git clone https://github.com/CBPP/cbpp-icon-theme.git + +#set a decent desktop: +feh --bg-center ~/pictures/desktops/? 
+xdg-mime default feh.desktop image/png + +#set up vim: +mkdir -p .vim/bundle +git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim + +#Add dropbox: +sudo dpkg -i Downloads/dropbox_2015.02.12_i386.deb +~/.dropbox-dist/dropboxd +echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf; sudo sysctl -p +dropbox start +dropbox status + + + +# download some decent fonts and put them in ~/.fonts then run: +fc-cache -fv + +# then get better font smoothing: +echo "deb http://ppa.launchpad.net/no1wantdthisname/ppa/ubuntu trusty main" | sudo tee /etc/apt/sources.list.d/infinality.list +echo "deb-src http://ppa.launchpad.net/no1wantdthisname/ppa/ubuntu trusty main" | sudo tee -a /etc/apt/sources.list.d/infinality.list +sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E985B27B +sudo apt-get update +sudo apt-get install fontconfig-infinality +sudo bash /etc/fonts/infinality/infctl.sh setstyle +sudo vim /etc/profile.d/infinality-settings.sh + +alsa alsa-tools alsa-utils vrms + +#music: +apt-get install mpd mpc ncmpcpp + +#mail: +apt-get install mutt urlview notmuch gnomekeyring python-gnomekeyring msmtp msmtp-gnome abook w3m +python dotfiles/gnomekeyringstuff/msmtp-gnome-tool.py --username luxagraf@fastmail.fm --server mail.messagingengine.com -s +chmod 600 .msmtprc +python dotfiles/gnomekeyringstuff/msmtp-gnome-tool.py --username luxagraf@fastmail.fm --server mail.messagingengine.com -g + +# misc +apt-get install calibre vlc abiword redshift newsbeuter + +#postgres for psycog2 +apt-get install postgresql postgresql-9.4-postgis-2.1 postgresql-server-dev-9.4 python3-dev + +# this is the basic command for launching standalone browser apps +exo-open ~/.local/share/applications/chrome-confeenhjpkmbceaenohemhdbecmk-Default.desktop" + +# cal stuff I haven't actually used as of this writing: +apt-get install khal +apt-get install +apt-get install vdirsyncer +# fstab for lenovo: + +# /dev/sdc2 +UUID=6adbe77f-06ee-4de0-ac87-50652bb85226 / 
ext4 rw,relatime,data=ordered 0 1 +UUID=cd2a9cf9-52d9-4686-a601-64d92fc25611 /home ext4 defaults,noatime,discard 0 2 +UUID=0998e52d-8651-49f0-8dc7-3961f7b66f5c /mnt/storage ext4 rw,user,auto 0 0 +UUID=803ce209-2084-4be6-990d-fc7b125853ed /mnt/music ext4 rw,user,auto 0 0 +UUID=ca063eba-fbd0-4ba4-b9cc-95b6d5de9479 /mnt/baksites ext4 nofail,x-systemd.device-timeout=1,rw,user,auto 0 0 +UUID=27553cec-ce83-40b7-95a7-06166744fefe /mnt/docbak ext4 nofail,x-systemd.device-timeout=1,rw,user,auto 0 0 +UUID=e17aa13b-5940-4f73-932b-975fb738fbe7 /mnt/storage/swapfile swap defaults 0 0 diff --git a/tech/script pandoc convert html to markdown.txt b/tech/script pandoc convert html to markdown.txt new file mode 100755 index 0000000..eb091b0 --- /dev/null +++ b/tech/script pandoc convert html to markdown.txt @@ -0,0 +1,3 @@ +script- Pandoc convert HTML to Markdown + +find . -name \*.html -type f -exec pandoc -f html -t markdown -o {}.txt {} \;
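The command above leaves you with `page.html.txt` next to each `page.html`. If you'd rather end up with plain `.txt` names, a small follow-up loop can strip the extra extension (a sketch, assuming no filenames contain newlines):

```shell
# rename every foo.html.txt produced by the pandoc run to foo.txt
find . -name '*.html.txt' -type f | while read -r f; do
    mv "$f" "${f%.html.txt}.txt"
done
```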
\ No newline at end of file diff --git a/tech/set up awstats daily links.txt b/tech/set up awstats daily links.txt new file mode 100644 index 0000000..9a2dd93 --- /dev/null +++ b/tech/set up awstats daily links.txt @@ -0,0 +1,3 @@ +https://awstats.sourceforge.io/docs/awstats_faq.html#DAILY +http://www.internetofficer.com/awstats/daily-stats/ +http://www.internetofficer.com/awstats/day-by-day/install/ diff --git a/tech/set up awstats on ubuntu with nginx.txt b/tech/set up awstats on ubuntu with nginx.txt new file mode 100644 index 0000000..37a60ca --- /dev/null +++ b/tech/set up awstats on ubuntu with nginx.txt @@ -0,0 +1,292 @@ +If you'd like some basic data about your site's visitors, but don't want to let spyware vendors track them around the web, AWStats makes a good solution. It parses your server log files and tells you who came by and what they did. There's no spying, no third-party code bloat. AWStats just analyzes your visitors' footprints. + +Here's how I got AWStats up and running on an Ubuntu 18.04 VPS server running over at [Vultr.com](https://www.vultr.com/?ref=6825229) ([non-affiliate link](https://www.vultr.com/) if you prefer). + +### AWStats with GeoIP + +The first step is to install the AWStats package from the Ubuntu repositories: + +~~~~console +sudo apt install awstats +~~~~ + +This will install the various tools and scripts AWStats needs. Because I like to have some geodata in my stats, I also installed the tools necessary to use the AWStats geoip plugin. Here's what worked for me. + +First we need build-essential and libgeoip: + +~~~~console +sudo apt install libgeoip-dev build-essential +~~~~ + +Next you need to fire up the cpan shell: + +~~~~console +cpan +~~~~ + +If this is your first time in cpan you'll need to run two commands to get everything set up. 
If you've already got cpan set up, you can skip to the next step: + +~~~~perl +make install +install Bundle::CPAN +~~~~ + +Once cpan is set up, install GeoIP: + +~~~~perl +install Geo::IP +~~~~ + +That should take care of the GeoIP stuff. You can double-check that the database files exist by looking in the directory `/usr/share/GeoIP/` and verifying that there's a file named `GeoIP.dat`. + +Now, on to the log file setup. + +#### Optional Custom Nginx Log Format + +This part isn't strictly necessary. To get AWStats working the next step is to create our config files and build the stats, but first I like to overcomplicate things with a custom log format for Nginx. If you don't customize your Nginx log format then you can skip this section, but make a note of where Nginx is putting your logs, you'll need that in the next step. + +Open up `/etc/nginx/nginx.conf` and add these lines: + +~~~~nginx +log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; +~~~~ + +Now we need to edit our individual nginx config file to use this log format. If you follow the standard nginx practice, your config file should be in `/etc/nginx/sites-enabled/`. For example this site is served by the file `/etc/nginx/sites-enabled/luxagraf.net.conf`. Wherever that file may be in your setup, open it and add this line somewhere in the `server` block. + +~~~~nginx +server { + # ... all your other config ... + access_log /var/log/nginx/yourdomain.com.access.log main; + # ... all your other config ... +} +~~~~ + +### Configure AWStats for Nginx + +As I said in the beginning, AWStats is ancient, it hails from a very different era of the internet. One legacy from the olden days is that AWStats is very strict about configuration files. You have to have one config file per domain you're tracking and that file has to be named in the following way: `awstats.domain.tld.conf`. 
Those config files must be placed inside the `/etc/awstats/` directory.

If you go take a look at the `/etc/awstats` directory you'll see two files in there: `awstats.conf` and `awstats.conf.local`. The first is a main conf file that serves as a fallback if your own config file doesn't specify a particular setting. The second is an empty file that's meant to be used to share common config settings, which really doesn't make much sense to me.

I took a tip from [this tutorial](https://kamisama.me/2013/03/20/install-configure-and-protect-awstats-for-multiple-nginx-vhost-on-debian/) and dumped the contents of `awstats.conf` into `awstats.conf.local`. That way my actual site config file is very short. If you want to do that, then all you have to put in your config file are a few lines.

Using the naming scheme mentioned above, my config file resides at `/etc/awstats/awstats.luxagraf.net.conf` and it looks like this (drop your actual domain in place of "yourdomain.com"):

~~~~ini
# Path to your nginx log file
LogFile="/var/log/nginx/yourdomain.com.access.log"

# Domain of your vhost
SiteDomain="yourdomain.com"

# Directory where to store the awstats data
DirData="/var/lib/awstats/"

# Other domains/subdomains you want included from your logs, for example the www subdomain
HostAliases="www.yourdomain.com"

# If you customized your log format above add this line:
LogFormat = "%host - %host_r %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"

# If you did not, uncomment and use this line:
# LogFormat = 1
~~~~

Save that file and open the fallback file `awstats.conf.local`. 
Now set a few things:

~~~~ini
# if your site doesn't get a lot of traffic you can leave this at 1
# but it can make things slow
DNSLookup = 0

# find the geoip plugin line and uncomment it:
LoadPlugin="geoip GEOIP_STANDARD /usr/share/GeoIP/GeoIP.dat"
~~~~

Then delete the LogFile, SiteDomain, DirData, and HostAliases settings in your `awstats.conf.local` file. We've got those covered in our site-specific config file.

Okay, that's it for configuring things, let's generate some data to look at.

### Building Stats and Rotating Log Files

Now that we have our log files, and we've told AWStats where they are, what format they're in and where to put its analysis, it's time to actually run AWStats and get the raw data analyzed. To do that we use this command:

~~~~console
sudo /usr/lib/cgi-bin/awstats.pl -config=yourdomain.com -update
~~~~

Alternatively, if you have a bunch of config files you'd like to update all at once, you can use this wrapper script conveniently located in a completely different directory:

~~~~console
/usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
~~~~

You're going to need to run that command regularly to update the AWStats data. One way to do that is with a crontab entry, but there are better ways. Instead of cron we can hook into logrotate, which rotates Nginx's log files periodically anyway and conveniently includes a `prerotate` directive that we can use to execute some code. Technically logrotate runs via /etc/cron.daily under the hood, so we haven't really escaped cron, but it's not a crontab we need to keep track of anyway. 
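For reference, the crontab route would be a single entry pointing at the same wrapper script. This is just a sketch of the road not taken (the schedule is arbitrary); the logrotate hook is what I actually use:

~~~~
# /etc/cron.d/awstats -- update all AWStats configs nightly at 3am
0 3 * * * root /usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
~~~~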
Open up the file `/etc/logrotate.d/nginx` and replace it with this:

~~~~
/var/log/nginx/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        /usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
    endscript
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}
~~~~

The main things we've changed here are the frequency, moving from weekly to daily rotation in line 2, keeping 30 days worth of logs in line 4, and then calling AWStats in line 11.

One thing to bear in mind is that if you re-install Nginx for some reason this file will be overwritten.

Now do a dry run to make sure you don't have any typos or other problems (`-d` runs logrotate in debug mode, which reports what would happen without actually touching the logs):

~~~~console
sudo logrotate -d /etc/logrotate.d/nginx
~~~~

### Serving Up AWStats

Now that all the pieces are in place, we need to put our stats on the web. I used a subdomain, awstats.luxagraf.net. 
Assuming you're using something similar, here's an nginx config file to get you started:

~~~~nginx
server {
    server_name awstats.luxagraf.net;

    root /var/www/awstats.luxagraf.net;
    error_log /var/log/nginx/awstats.luxagraf.net.error.log;
    access_log off;
    log_not_found off;

    location ^~ /awstats-icon {
        alias /usr/share/awstats/icon/;
    }

    location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb) {
        auth_basic "Admin";
        auth_basic_user_file /etc/awstats/awstats.htpasswd;

        gzip off;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; # change this line if necessary
        fastcgi_index cgi-bin.php;
        fastcgi_param SCRIPT_FILENAME /etc/nginx/cgi-bin.php;
        fastcgi_param SCRIPT_NAME /cgi-bin/cgi-bin.php;
        fastcgi_param X_SCRIPT_FILENAME /usr/lib$fastcgi_script_name;
        fastcgi_param X_SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REMOTE_USER $remote_user;
    }

}
~~~~

This config is pretty basic: it passes requests for icons to the AWStats icon dir and then sends the rest of our requests to php-fpm. The only tricky part is that AWStats needs to call a Perl file, but we're calling a PHP file, namely `/etc/nginx/cgi-bin.php`. How's that work?

Well, in a nutshell, this script takes all our server variables and passes them to stdin, calls the Perl script and then reads the response from stdout, passing it on to Nginx. Pretty clever, so clever in fact that I did not write it. 
Here's the file I use, taken straight from the Arch Wiki: + +~~~~php +<?php +$descriptorspec = array( + 0 => array("pipe", "r"), // stdin is a pipe that the child will read from + 1 => array("pipe", "w"), // stdout is a pipe that the child will write to + 2 => array("pipe", "w") // stderr is a file to write to +); +$newenv = $_SERVER; +$newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"]; +$newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"]; +if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) { + $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv); + if (is_resource($process)) { + fclose($pipes[0]); + $head = fgets($pipes[1]); + while (strcmp($head, "\n")) { + header($head); + $head = fgets($pipes[1]); + } + fpassthru($pipes[1]); + fclose($pipes[1]); + fclose($pipes[2]); + $return_value = proc_close($process); + } else { + header("Status: 500 Internal Server Error"); + echo("Internal Server Error"); + } +} else { + header("Status: 404 Page Not Found"); + echo("Page Not Found"); +} +?> +~~~~ + +Save that mess of PHP as `/etc/nginx/cgi-bin.php` and then install php-fpm if you haven't already: + +~~~~console +sudo apt install php-fpm +~~~~ + +Next we need to create the password file referenced in our Nginx config. We can create a .htpasswd file with this little shell command, just replace `yourdomain.com` with the same domain you used in your AWStats config and use an actual username in place of `username`: + +~~~~console +printf "username:`openssl passwd -apr1`\n" >> awstats.htpasswd +~~~~ + +Enter your password when prompted and your password file will be created in the expected format for basic auth files. + +Then move that file to the proper directory: + +~~~~console +sudo mv awstats.htpasswd /etc/awstats/ +~~~~ + +Now we have an Nginx config, a script to pass AWStats from PHP to Perl and some basic password protection for our stats site. The last, totally optional, step is to serve it all over HTTPS instead of HTTP. 
Since we have a password protecting it anyway, this is arguably unnecessary. I do it more out of habit than any real desire for security. I mean, I did write an article [criticizing the push to make everything HTTPS](https://arstechnica.com/information-technology/2016/07/https-is-not-a-magic-bullet-for-web-security/). But habit.

I have a separate guide on [how to set up Certbot for Nginx on Ubuntu 18.04](/src/certbot-nginx-ubuntu-1804) that you can follow. Once that's installed you can just invoke Certbot with:

~~~~console
sudo certbot --nginx
~~~~

Select the domain name you're serving your stats at (for me that's awstats.luxagraf.net), then select 2 to automatically redirect all traffic to HTTPS and Certbot will append some lines to your Nginx config file.

Now restart Nginx:

~~~~console
sudo systemctl restart nginx
~~~~

Visit your new site in the browser at this URL (changing yourdomain.com to the domains you've been using): [https://awstats.yourdomain.com/cgi-bin/cgi-bin.php?config=yourdomain.com](https://awstats.yourdomain.com/cgi-bin/cgi-bin.php?config=yourdomain.com). If all went well you should see AWStats with a few stats in it. If all did not go well, feel free to drop whatever your error message is in a comment here and I'll see if I can help.

### Motivations

And now the why. The "why the hell don't I just use --insert popular spyware here--" part.

My needs are simple. I don't have ads. I don't have to prove to anyone how much traffic I get. And I don't really care how you got here. I don't care where you go after here. I hardly ever look at my stats.

When I do look all I want to see is how many people stop by in a given month and if there's any one article that's getting a lot of visitors. I also enjoy seeing which countries visitors are coming from, though I recognize that VPNs make this information suspect. 
+ +Since *I* don't track you I certainly don't want third-party spyware tracking you, so that means any hosted service is out. Now there are some self-hosted, open source spyware packages that I've used, Matomo being the best. It is nice, but I don't need or use most of what it offers. And I really dislike running MySQL on the cheap, underpowered VPS servers I use. It uses way too much memory. Unfortunately Matomo requires MySQL, as does Open Web Analytics. + +By process of elimination (no MySQL), and my very paltry requirements, the logical choice is a simple log analyzer. I went with AWStats because I'd used it in the past. Way in the past. But you know what, AWStats ain't broke. It doesn't spy. It uses no server resources. And it tells you 95 percent of what any spyware tool will tell you (provided you actually [read the documentation](http://www.awstats.org/docs/)). + +In the end, AWStats is good enough without being too much. But for something as simple as it is, AWStats is surprisingly complex to get up and running, which is what inspired this guide. + +##### Shoulders stood upon: + +* [AWStats Documentation](http://www.awstats.org/docs/awstats_config.html) +* [Ubuntu Community Wiki: AWStats](https://help.ubuntu.com/community/AWStats) +* [Arch Wiki: AWStats](https://wiki.archlinux.org/index.php/Awstats) +* [Install, configure and protect Awstats for multiple nginx vhost on Debian](https://kamisama.me/2013/03/20/install-configure-and-protect-awstats-for-multiple-nginx-vhost-on-debian/) diff --git a/tech/set up certbot nginx ubuntu.txt b/tech/set up certbot nginx ubuntu.txt new file mode 100644 index 0000000..f13b1e9 --- /dev/null +++ b/tech/set up certbot nginx ubuntu.txt @@ -0,0 +1,137 @@ +EFF's free certificate service, Certbot, has greatly simplified the task of setting up HTTPS for your websites. The only downside is that the certificates are only good for 90 days. Fortunately renewing is easy, and we can even automate it all with systemd. 
Here's how to set up Certbot with Nginx *and* make sure your SSL certs renew indefinitely with no input from you.

This tutorial is aimed at anyone using an Ubuntu 18.04 VPS from cheap hosts like DigitalOcean or [Vultr.com](https://www.vultr.com/?ref=6825229), but should also work for other versions of Ubuntu, Debian, Fedora, CentOS and any other system that uses systemd. The only difference will be the commands you use to install Certbot. See the Certbot site for [instructions](https://certbot.eff.org/) specific to your system.

Here's how you get Certbot running on Ubuntu 18.04, then we'll dive into setting up automatic renewals via systemd.

You should not need this with 18.04, but to be on the safe side, make sure you have the `software-properties-common` package installed:

~~~~console
sudo apt install software-properties-common
~~~~

The next part requires that you add a PPA, my least favorite part of Certbot for Ubuntu, as I don't like to rely on PPAs for something as mission-critical as my security certificates. Still, as of this writing, there is not a better way. At least go [look at the code](https://launchpad.net/~certbot/+archive/ubuntu/certbot) before you blindly cut and paste. When you're done, here's your cut and paste:

~~~~console
sudo apt update
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx
~~~~

Now you're ready to install some certs. For this part I'm going to show the commands and the output of the commands since the `certbot` command is interactive. Note that the version below will append some lines to your Nginx config file. If you prefer to edit your config file yourself, use this command: `sudo certbot --nginx certonly`. Otherwise, here's how to get your certs. 
+ +~~~~console +sudo certbot --nginx + +[sudo] password for $youruser: +Saving debug log to /var/log/letsencrypt/letsencrypt.log +Plugins selected: Authenticator nginx, Installer nginx + +Which names would you like to activate HTTPS for? +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +1: luxagraf.net +2: awstats.luxagraf.net +3: origin.luxagraf.net +4: www.luxagraf.net +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Select the appropriate numbers separated by commas and/or spaces, or leave input blank to select all options shown (Enter 'c' to cancel): 4 +Obtaining a new certificate +Performing the following challenges: +http-01 challenge for www.luxagraf.net +Waiting for verification... +Cleaning up challenges +Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/luxagraf.net.conf + +Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access. +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +1: No redirect - Make no further changes to the webserver configuration. +2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for +new sites, or if you're confident your site works on HTTPS. You can undo this +change by editing your web server's configuration. +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2 + +Traffic on port 80 already redirecting to ssl in /etc/nginx/sites-enabled/luxagraf.net.conf +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Congratulations! You have successfully enabled https://www.luxagraf.net. +You should test your configuration at: https://www.ssllabs.com/ssltest/analyze.html?d=www.luxagraf.net +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +IMPORTANT NOTES: + - Congratulations! 
Your certificate and chain have been saved at:
   /etc/letsencrypt/live/www.luxagraf.net/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/www.luxagraf.net/privkey.pem
   Your cert will expire on 2019-01-09. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le
~~~~

And there you have it, SSL certs for all your domains.

That's all good and well, but those new certs are only good for 90 days. The odds of you remembering to renew them every 90 days -- even with reminder emails from the EFF -- are near nil. Plus, do you really want to be renewing certs by hand, [like an animal](http://5by5.tv/hypercritical/17)? No, you want to automate everything so you can do better things with your time.

You could use cron, but the more modern approach is to create a systemd service and a systemd timer to control when that service runs.

I highly recommend reading through the Arch Wiki page on [systemd services and timers](https://wiki.archlinux.org/index.php/Systemd/Timers), as well as the [systemd.timer man pages](https://jlk.fjfi.cvut.cz/arch/manpages/man/systemd.timer.5) to get a better understanding of how you can automate other tasks in your system. But for the purposes of this tutorial all you really need to understand is that timers are just like other systemd unit files, but they include a `[Timer]` block which takes parameters specifying exactly when you want your service file to run.

Timer files can live right next to your service files in `/etc/systemd/system/`. 
There are no hard and fast rules about naming timers, but it makes sense to use the same name as the service file the timer controls, except the timer gets the `.timer` extension. So you'll have two files: `myservice.service` and `myservice.timer`.

Let's start with the service file. I call mine `certbot-renewal`. Open the service file:

~~~~console
sudo nano /etc/systemd/system/certbot-renewal.service
~~~~

This is going to be a super simple service; we'll give it a description and a command to run and that's it:

~~~~ini
[Unit]
Description=Certbot Renewal

[Service]
ExecStart=/usr/bin/certbot renew
~~~~

Next we need to create a .timer file that will run the certbot-renewal service every day. Create this file:

~~~~console
sudo nano /etc/systemd/system/certbot-renewal.timer
~~~~

And now for the slightly more complex timer:

~~~~ini
[Unit]
Description=Certbot Renewal Timer

[Timer]
OnBootSec=500
OnUnitActiveSec=1d

[Install]
WantedBy=multi-user.target
~~~~

The `[Timer]` section can take a number of parameters; the ones we've used constitute what's called a monotonic timer, which means they run "after a time span relative to a varying starting point". In other words, they're not calendar events like cron.

Our monotonic timer has two directives, `OnBootSec` and `OnUnitActiveSec`. The first should be obvious: our timer will run 500 seconds after the system boots. Why 500? No real reason, I just didn't want to bog down the system at boot.

The `OnUnitActiveSec` directive is really what makes this work. It measures time relative to when the service that the timer controls was last activated. In our case the `1d` means run the service one day after it last ran. So our timer will run once a day to make sure our certs stay up to date. 
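If you'd rather pin the renewal to a fixed clock time instead of a rolling one-day interval, the `[Timer]` block can use `OnCalendar` instead. Here's a sketch of that variant (the 3:30am time is arbitrary, and `Persistent=true` makes up runs missed while the machine was off):

~~~~ini
[Unit]
Description=Certbot Renewal Timer

[Timer]
OnCalendar=*-*-* 03:30:00
Persistent=true

[Install]
WantedBy=timers.target
~~~~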
As a kind of footnote, in systemd parlance calendar-based timers are called realtime timers and can be used to replace cron if you want, though there are some disadvantages; see the Arch Wiki for [a good overview of what you get and what you lose](https://wiki.archlinux.org/index.php/Systemd/Timers#As_a_cron_replacement) if you go that route.

Okay, the last step for our certbot renewal system is to enable and then start our timer. Note that we don't have to do either to our actual service file, because we don't want it active; the timer will control when it runs.

~~~~console
sudo systemctl enable certbot-renewal.timer
sudo systemctl start certbot-renewal.timer
~~~~

Run those commands and you're done. Your timer is now active and your Certbot certificates will automatically renew as long as your server is up and running.
diff --git a/tech/set up debian droplet basics + nginx.txt b/tech/set up debian droplet basics + nginx.txt
new file mode 100755
index 0000000..1b1af00
--- /dev/null
+++ b/tech/set up debian droplet basics + nginx.txt

Set Up Debian Droplet - Basics + Nginx

[references:
<http://www.howtoforge.com/building-nginx-from-source-on-debian-squeeze>
<http://www.rosehosting.com/blog/how-to-compile-and-install-nginx-from-source-in-debian-7-wheezy/>
<https://www.digitalocean.com/community/articles/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server>
<https://www.digitalocean.com/community/articles/initial-server-setup-with-debian-7>
<https://www.digitalocean.com/community/articles/how-to-protect-ssh-with-fail2ban-on-debian-7>]

First log in as root and set a new root password:

    passwd

Then create a new user:

    adduser whatever

Then add the user to the sudoers list:

    visudo
    whatever ALL=(ALL:ALL) ALL

Test by sshing in as the new user. 
+ +vultr specific: + +sudo vi /etc/hosts +sudo vi /etc/hostname + +##Secure the server + + vi /etc/ssh/sshd_config + +Add these lines: + +Port 25009 +Protocol 2 +PermitRootLogin no +UseDNS no + +Add this line to the bottom of the document, replacing demo with your username: + + AllowUsers whatever + +reload ssh: + + sudo systemctl restart sshd + +test before you log out: + + ssh -p 25009 whatever@123.45.67.890 + +Add ssh keys + + cat ~/.ssh/id_rsa4096.pub | ssh -p 25034 lxf@63.135.175.3 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" + +--- + +###Install Zsh/Tmux + +(because doing only one thing at a time sucks) + + sudo apt-get update + sudo apt-get install tmux zsh + curl -L https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh + chsh -s /bin/zsh whatever + +###Set up fail2ban and UFW + + sudo apt-get install fail2ban + sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local + sudo vi /etc/fail2ban/jail.local #(add IP to exclusions, up ban time) + sudo systemctl restart fail2ban + + apt-get install ufw + sudo ufw default deny incoming + sudo ufw default deny outgoing + sudo ufw allow 25043/tcp + sudo ufw allow 80/tcp + sudo ufw allow 443/tcp + sudo ufw allow out http + sudo ufw allow out https + sudo ufw allow out 53 + sudo ufw enable + sudo ufw status verbose + +--- + +###Vim + + apt-get install vim + #I point to these in my vimrc, skip if you don't need them + mkdir -p ~/.vim/bundle/ + git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim + +##Setup Nginx + + # check http://nginx.org/en/download.html for the latest version of nginx + # check https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source for latest version of ngx_pagespeed and psol + # latest headers more https://github.com/openresty/headers-more-nginx-module/tags + # naxsi: https://github.com/nbs-system/naxsi/releases + +prereqs for building stuff: + + apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev 
libbz2-dev libssl-dev tar unzip + +prereqs for geo and ssl: + + apt-get install libgeoip1 libgeoip-dev openssl libssl-dev + # then grab the libraries: + sudo mkdir -p /etc/nginx/geoip + cd /etc/nginx/geoip + sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz + sudo gunzip GeoIP.dat.gz + sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz + sudo gunzip GeoLiteCity.dat.gz + + #install the GeoIP C library. + cd /tmp + wget geolite.maxmind.com/download/geoip/api/c/GeoIP.tar.gz + tar -zxvf GeoIP.tar.gz + cd GeoIP-* + ./configure + make + sudo make install + + # That's all the pre-reqs, now cd in to nginx and compile: + cd nginx-* + + +config script for nginx source (debian paths): + + ./configure \ + --prefix=/usr/share/nginx \ + --sbin-path=/usr/sbin/nginx \ + --conf-path=/etc/nginx/nginx.conf \ + --pid-path=/var/run/nginx.pid \ + --lock-path=/var/lock/nginx.lock \ + --error-log-path=/var/log/nginx/error.log \ + --http-log-path=/var/log/access.log \ + --user=www-data \ + --group=www-data \ + --without-mail_pop3_module \ + --without-mail_imap_module \ + --without-mail_smtp_module \ + --with-http_stub_status_module \ + --with-http_ssl_module \ + --with-http_v2_module \ + --with-http_gzip_static_module \ + --with-pcre \ + --with-file-aio \ + + +./configure \ +--user=http \ +--group=http \ +--prefix=/etc/nginx \ +--sbin-path=/usr/sbin/nginx \ +--conf-path=/etc/nginx/nginx.conf \ +--pid-path=/var/run/nginx.pid \ +--lock-path=/var/run/nginx.lock \ +--error-log-path=/var/log/nginx/error.log \ +--http-log-path=/var/log/nginx/access.log \ +--with-http_gzip_static_module \ +--with-http_stub_status_module \ +--with-http_ssl_module \ +--with-pcre \ +--with-file-aio \ +--with-http_v2_module \ +--with-http_realip_module \ +--without-http_scgi_module \ +--without-mail_pop3_module \ +--without-mail_imap_module \ +--without-mail_smtp_module \ +--add-module=$HOME/ngx_pagespeed-${NPS_VERSION} ${PS_NGX_EXTRA_FLAGS} + + make 
+ sudo make install + +The next thing is to enable autostart: + + sudo vim /lib/systemd/system/nginx.service + +# Stop dance for nginx +# ======================= +# +# ExecStop sends SIGSTOP (graceful stop) to the nginx process. +# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control +# and sends SIGTERM (fast shutdown) to the main process. +# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends +# SIGKILL to all the remaining processes in the process group (KillMode=mixed). +# +# nginx signals reference doc: +# http://nginx.org/en/docs/control.html +# +[Unit] +Description=A high performance web server and a reverse proxy server +After=network.target + +[Service] +Type=forking +PIDFile=/run/nginx.pid +ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;' +ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;' +ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload +ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid +TimeoutStopSec=5 +KillMode=mixed + +[Install] +WantedBy=multi-user.target + + +sudo systemctl enable nginx.service +sudo systemctl start nginx.service +sudo systemctl status nginx.service + +sudo vim /etc/nginx/nginx.conf + + +user www-data; +events { + worker_connections 1024; +} +http { + include mime.types; + include /etc/nginx/naxsi_core.rules; + default_type application/octet-stream; + types_hash_bucket_size 64; + server_names_hash_bucket_size 128; + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + #access_log logs/access.log main; + more_set_headers "Server: Graf Industries Custom Server"; + sendfile on; + keepalive_timeout 65; + gzip on; + pagespeed on; + pagespeed FileCachePath /var/ngx_pagespeed_cache; + limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; + include /etc/nginx/sites-enabled/*.conf; +} 
    sudo cp naxsi-0.53-2/naxsi_config/naxsi_core.rules /etc/nginx

diff --git a/tech/set up debian droplet python 3 + gunicorn + supervisor.txt b/tech/set up debian droplet python 3 + gunicorn + supervisor.txt
new file mode 100755
index 0000000..3f199b9
--- /dev/null
+++ b/tech/set up debian droplet python 3 + gunicorn + supervisor.txt

Set Up Debian Droplet - Python 3 + gunicorn + supervisor

[references:
<http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/>
<http://wiki.nginx.org/HttpHeadersMoreModule#more_clear_input_headers>
<https://github.com/nbs-system/naxsi/wiki/basicsetup>
<http://pillow.readthedocs.org/en/latest/installation.html#linux-installation>
<http://codeinthehole.com/writing/how-to-install-postgis-and-geodjango-on-ubuntu/>
<http://docs.gunicorn.org/en/latest/configure.html>
<http://edvanbeinum.com/how-to-install-and-configure-supervisord/>
]

If you really want Python 3.3 you can compile it from scratch. That would eliminate the need to install virtualenv, since it's part of Python as of 3.3. I've gone that route, but for simplicity's sake most of the time I just use Python 3.2, which is available in the Debian stable repos. I also grab pip from the repos, though gunicorn and supervisor I install via pip since I want those on a virtualenv-based per-project basis.

So start with this:

    apt-get install python3.2 python3.2-dev python3-pip

And then:

    pip-3.2 install virtualenv

That gets us a nice working python3 setup, though note that you have to call python with python3 and pip with pip-3.2. To cut down on the typing I just make aliases in my .zshrc along the lines of:

    alias p3="python3 "
    alias p3p="pip-3.2 "

Okay, so we can use that to set up a working Django environment with `virtualenv`. You can use [`virtualenvwrapper`](http://virtualenvwrapper.readthedocs.org/en/latest/) if you like; I find it to be unnecessary. 
I do something like this:

    mkdir -p apps/mydjangoapp
    cd !$
    virtualenv --distribute --python=python3 venv
    source venv/bin/activate

There are a few other things that you may want to install before we get around to actually installing stuff with pip. For example, if you plan to use memcached you'll want to install pylibmc, which needs:

    sudo apt-get install python-dev libmemcached-dev

**Apparently pylibmc doesn't work with Python3 yet**

Then I just load everything I need from my requirements.txt file, which lives in the config folder:

    pip install -r config/requirements.txt

Where did that file come from? Typically I generate it with `pip freeze > requirements.txt` in my local development environment.

Among the requirements will be gunicorn. If you don't already have a requirements file then you'd just do this:

    pip install gunicorn

Okay, so we have our sandboxed python3 environment, along with gunicorn to act as a server for our site. In a minute we'll connect Nginx and gunicorn, but first let's make sure our gunicorn server restarts whenever our machine reboots. To do that we'll use `supervisor`. Here's where it gets tricky, though: `supervisor` doesn't run under Python 3. It has no problem *managing* Python 3 projects, it just doesn't run under Python 3 yet. That means we can't just install it using pip-3.2.

We could install it with the system pip, but Debian (and Ubuntu) have a supervisor package, so we can just do:

    sudo apt-get install supervisor

That will install and start supervisor. Let's add an init script so that supervisord starts up should the server need to reboot. So create the file

    /etc/init.d/supervisord

And grab the appropriate [init script from the supervisor project](https://github.com/Supervisor/initscripts). I use the Debian script from that link. Paste that script into `/etc/init.d/supervisord` and save.
Then make it executable:

    sudo chmod +x /etc/init.d/supervisord

Now, make sure supervisor isn't running:

    supervisorctl shutdown

And add supervisord to the default runlevels so it comes back when the server boots:

    sudo update-rc.d supervisord defaults

With Supervisor installed you can start and watch apps by creating configuration files in the `/etc/supervisor/conf.d` directory. You might do something like this, in, for example, `/etc/supervisor/conf.d/helloworld.conf`:

    [program:helloworld]
    command = /home/<username>/apps/mydjangoapp/venv/bin/gunicorn -c /home/<username>/apps/mydjangoapp/config/gunicorn_config.py config.wsgi
    directory = /home/<username>/apps/mydjangoapp/
    user = <non-privileged-user>
    autostart = true
    autorestart = true
    stdout_logfile = /var/log/supervisor/helloworld.log
    stderr_logfile = /var/log/supervisor/helloworld_err.log

You'll need to fill in the correct paths based on your server setup, replacing <username> with your username and `mydjangoapp/etc...` with the actual path to the gunicorn app. This also assumes your gunicorn config file lives in `mydjangoapp/config/`. We'll get to that file in a minute.

First, let's tell supervisor about our new app:

    sudo supervisorctl reread

You should see the message `helloworld available`. So Supervisor knows about our app; let's actually add it:

    sudo supervisorctl update

Now you should see a message that says something like `helloworld: added process group`. Supervisor is now aware of our hello world app and will make sure it automatically starts up whenever our server reboots. You can check the status of our gunicorn app with:

    sudo supervisorctl status

Right now that will generate an error that looks something like this:

    helloworld FATAL can't find command '/home/<username>/apps/mydjangoapp/venv/bin/gunicorn'

That's expected for the moment; it will sort itself out once the paths line up.

Now we just need to set up that gunicorn_config.py file we referenced earlier.
In my setup that file looks like this:

    from os.path import dirname, abspath, join

    # get the root folder for this project; this file lives in
    # <project>/config/, so the project root is one folder up from here
    PROJ_ROOT = abspath(dirname(dirname(__file__))) + '/'
    # note: the second argument to join() must be relative -- an absolute
    # path like "/venv/bin/gunicorn" makes join() throw away PROJ_ROOT
    command = join(PROJ_ROOT, "venv/bin/gunicorn")
    pythonpath = PROJ_ROOT
    bind = '127.0.0.1:8002'
    workers = 3
    loglevel = "warning"
    errorlog = "/home/<username>/logs/gunicorn.error.log"

This is pretty boilerplate; you just need to adjust the paths and it should work. The other thing to note is the line `bind = '127.0.0.1:8002'`. That's the address we'll pass requests to with Nginx.

Okay, now let's go back to the Nginx tutorial we worked with in the previous part of this series. Here's what this looks like:

    # define an upstream server named gunicorn on localhost port 8002
    upstream gunicorn {
        server localhost:8002;
    }

    server {
        listen 80;
        server_name mydomain.com;
        root /var/www/mydomain.com/;
        error_log <path to logs>/mydomain.error.log;
        access_log <path to logs>/mydomain.access.log main;
        # See http://wiki.nginx.org/HttpCoreModule#client_max_body_size
        client_max_body_size 0;

        # this tries to serve a static file at the requested url
        # if no static file is found, it passes the url to gunicorn
        try_files $uri @gunicorn;

        # define rules for gunicorn
        location @gunicorn {
            # repeated just in case
            client_max_body_size 0;

            # proxy to the gunicorn upstream defined above
            proxy_pass http://gunicorn;

            # makes sure the URLs don't actually say http://gunicorn
            proxy_redirect off;
            # If gunicorn takes > 3 minutes to respond, give up
            proxy_read_timeout 3m;

            # make sure these HTTP headers are set properly
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

diff --git a/tech/set up geodjango on debian 7 digital ocean.txt b/tech/set up geodjango on debian 7 digital ocean.txt
new file mode 100755 index 0000000..475c6eb --- /dev/null +++ b/tech/set up geodjango on debian 7 digital ocean.txt @@ -0,0 +1,98 @@
The first thing we need is Python 3.4+, which I install by hand:

prereqs:
sudo apt-get install build-essential libncursesw5-dev libssl-dev libgdbm-dev libc6-dev libsqlite3-dev tk-dev libreadline6-dev

wget https://www.python.org/ftp/python/3.4.1/Python-3.4.1.tgz
tar -xvzf Python-3.4.1.tgz
cd Python-3.4.1
./configure --prefix=/opt/python3
make
sudo make altinstall

Then we need postgres and the geospatial libs:

apt-get install postgresql postgresql-contrib binutils libproj-dev gdal-bin

PostGIS 2 I also do from source:

wget http://download.osgeo.org/postgis/source/postgis-2.1.3.tar.gz
tar -xvzf postgis-2.1.3.tar.gz
cd postgis-2.1.3

prereqs:

apt-get install libpq-dev postgresql-server-dev-all libxml2 libgeos-dev libxml2-dev gdal-bin libgdal-dev

./configure
make
sudo make install

Then you just need to create a db user and database:

sudo su - postgres
createuser -P -s -e luxagraf

Then with your regular user:
createdb luxagraf -U luxagraf -W -hlocalhost
psql -U luxagraf -W -hlocalhost -d luxagraf
and when you're in postgres:
CREATE EXTENSION postgis;

Then just load the data:
psql -U luxagraf -W -hlocalhost -d luxagraf -f fullbak.sql (or whatever your backup file is)


The last thing is a virtualenv for our project using python3.4:

#make sure you use the right pip (included with python 3.4+)
sudo /opt/python3/bin/pip3.4 install virtualenv
then cd to the project dir and do:
/opt/python3/bin/virtualenv --distribute --python=/opt/python3/bin/python3.4 venv
then activate and install what you need

---
Once that's set up we need to connect gunicorn to nginx via supervisor.

    sudo apt-get install supervisor

That will install and start supervisor. Let's add an init script so that supervisord starts up should the server need to reboot.
So create the file

    /etc/init.d/supervisord

And grab the appropriate [init script from the supervisor project](https://github.com/Supervisor/initscripts). I use the Debian script from that link. Paste that script into `/etc/init.d/supervisord` and save. Then make it executable:

    sudo chmod +x /etc/init.d/supervisord

Now, make sure supervisor isn't running:

    supervisorctl shutdown

And add supervisord to the default runlevels so it comes back when the server boots:

    sudo update-rc.d supervisord defaults

With Supervisor installed you can start and watch apps by creating configuration files in the `/etc/supervisor/conf.d` directory. You might do something like this, in, for example, `/etc/supervisor/conf.d/helloworld.conf`:

    [program:helloworld]
    command = /home/<username>/apps/mydjangoapp/venv/bin/gunicorn -c /home/<username>/apps/mydjangoapp/config/gunicorn_config.py config.wsgi
    directory = /home/<username>/apps/mydjangoapp/
    user = <non-privileged-user>
    autostart = true
    autorestart = true
    stdout_logfile = /var/log/supervisor/helloworld.log
    stderr_logfile = /var/log/supervisor/helloworld_err.log

You'll need to fill in the correct paths based on your server setup, replacing <username> with your username and `mydjangoapp/etc...` with the actual path to the gunicorn app. This also assumes your gunicorn config file lives in `mydjangoapp/config/`. We'll get to that file in a minute.

First, let's tell supervisor about our new app:

    sudo supervisorctl reread

You should see the message `helloworld available`. So Supervisor knows about our app; let's actually add it:

    sudo supervisorctl update

Now you should see a message that says something like `helloworld: added process group`. Supervisor is now aware of our hello world app and will make sure it automatically starts up whenever our server reboots. You can check the status of our gunicorn app with:

    sudo supervisorctl status
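The `<username>` placeholders above have to be filled in by hand for every app; if you deploy more than one, a throwaway template can generate the same conf file. This is just a convenience sketch of mine (the `render_conf` helper is not a supervisor feature):

```python
from string import Template

# Template of the /etc/supervisor/conf.d/<app>.conf file shown above,
# with the <username> and app-name placeholders parameterized.
CONF = Template("""\
[program:$app]
command = /home/$user/apps/$app/venv/bin/gunicorn -c /home/$user/apps/$app/config/gunicorn_config.py config.wsgi
directory = /home/$user/apps/$app/
user = $user
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/$app.log
stderr_logfile = /var/log/supervisor/${app}_err.log
""")

def render_conf(user, app):
    """Return the supervisor program section for one gunicorn app."""
    return CONF.substitute(user=user, app=app)
```

Write the result to `/etc/supervisor/conf.d/<app>.conf` and run `supervisorctl reread` as above.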
\ No newline at end of file diff --git a/tech/set up gitea on ubuntu 18.04.txt b/tech/set up gitea on ubuntu 18.04.txt new file mode 100644 index 0000000..6d1870b --- /dev/null +++ b/tech/set up gitea on ubuntu 18.04.txt @@ -0,0 +1,233 @@
I've never liked hosting my git repos on someone else's servers. GitHub especially is not a company I'd do business with, ever. I do have a repo or two hosted over at [GitLab](https://gitlab.com/luxagraf) because those are projects I want to be easily available to anyone. But I store almost everything in git -- notes, my whole documents folder, all my code projects, all my writing, pretty much everything is in git -- and I like to keep all that private and on my own server.

For years I used [Gitlist](http://gitlist.org/) because it was clean, simple, and did 95 percent of what I needed in a web-based interface for my repos. But Gitlist is abandonware at this point and broken if you're using PHP 7.2. There are a few forks that [patch it](https://github.com/patrikx3/gitlist), but it's copyrighted to the original dev and I don't want to depend on illegitimate forks for something so critical to my workflow. Then there's self-hosted Gitlab, which I like, but the system requirements are ridiculous.

Some searching eventually led me to Gitea, which is lightweight, written in Go, and has everything I need.

Here's a quick guide to getting Gitea up and running on your Ubuntu 18.04 -- or similar -- VPS.

### Set up Gitea

The first thing we're going to do is isolate Gitea from the rest of our server; running it under a separate user seems to be the standard practice. Installing Gitea via the Arch User Repository will create a `git` user, so that's what I used on Ubuntu 18.04 as well.
Here's a shell command to create a user named `git`:

~~~~console
sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git
~~~~

This is pretty much the standard adduser command you'd use when setting up a new VPS; the only difference is that we've added the `--disabled-password` flag so you can't actually log in with a password. While we will use this user to authenticate over SSH, we'll do so with a key, not a password.

Now we need to grab the latest Gitea binary. At the time of writing that's version 1.5.2, but be sure to check the [Gitea downloads page](https://dl.gitea.io/gitea/) for the latest version and adjust the commands below to work with that version number. Let's download the Gitea binary and then we'll verify the signing key. Verifying keys is very important when working with binaries since you can't see the code behind them[^1].

~~~~console
wget -O gitea https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64
gpg --keyserver pgp.mit.edu --recv 0x2D9AE806EC1592E2
wget https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64.asc
gpg --verify gitea-1.5.2-linux-amd64.asc gitea
~~~~

A couple of notes here: GPG should say the keys match, but then it should also warn that "this key is not certified with a trusted signature!" That means, essentially, that this binary could have been signed by anybody. All we know for sure is that it wasn't tampered with in transit.

Now let's make the binary executable and test it to make sure it's working:

~~~~console
chmod +x gitea
./gitea web
~~~~

You can stop Gitea with `Ctrl+C`. Let's move the binary to a more traditional location:

~~~~console
sudo cp gitea /usr/local/bin/gitea
~~~~

The next thing we're going to do is create all the directories we need.
~~~~console
sudo mkdir -p /var/lib/gitea/{custom,data,indexers,public,log}
sudo chown git:git /var/lib/gitea/{data,indexers,log}
sudo chmod 750 /var/lib/gitea/{data,indexers,log}
sudo mkdir /etc/gitea
sudo chown root:git /etc/gitea
sudo chmod 770 /etc/gitea
~~~~

That last line should make you nervous; 770 is too permissive for a config directory. But don't worry: as soon as we're done setting up Gitea we'll tighten the permissions on that directory and the config file inside it.

Before we do that, though, let's create a systemd service file to start and stop Gitea. The Gitea project has a service file that will work well for our purposes, so let's grab it, make a couple of changes, and then add it to our system:

~~~~console
wget https://raw.githubusercontent.com/go-gitea/gitea/master/contrib/systemd/gitea.service
~~~~

Now open that file and uncomment the line `After=postgresql.service` so that Gitea starts after PostgreSQL is running. The resulting config file should look like this:

~~~~ini
[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
#After=mysqld.service
After=postgresql.service
#After=memcached.service
#After=redis.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/var/lib/gitea/
ExecStart=/usr/local/bin/gitea web -c /etc/gitea/app.ini
Restart=always
Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea
# If you want to bind Gitea to a port below 1024 uncomment
# the two values below
###
#CapabilityBoundingSet=CAP_NET_BIND_SERVICE
#AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
~~~~

Now we need to move the service file to somewhere systemd expects it and then start and enable the service so Gitea will launch
automatically when the server boots.

~~~~console
sudo cp gitea.service /etc/systemd/system/
sudo systemctl enable gitea
sudo systemctl start gitea
~~~~

There you have it: Gitea is installed, running, and will automatically start whenever we restart the server. Now we need to set up PostgreSQL and then Nginx to serve our Gitea site to the world. Or at least to us.

### Set up PostgreSQL and Nginx

Gitea needs a database to store all our data in; I use PostgreSQL. You can also use MySQL, but you're on your own there. Install PostgreSQL if you haven't already:

~~~~console
sudo apt install postgresql
~~~~

Now let's create a new user and database for Gitea (the `-P` flag makes createuser prompt for a password, which we'll need during the Gitea install screen later):

~~~~console
sudo su postgres
createuser -P gitea
createdb gitea -O gitea
~~~~

Exit the postgres user shell by hitting `Ctrl+D`.

Now let's set up Nginx to serve our Gitea site.

~~~~console
sudo apt update
sudo apt install nginx
~~~~

For the next part you'll need a domain name. I use a subdomain, git.mydomain.com, but for simplicity's sake I'll refer to `mydomain.com` for the rest of this tutorial. Replace `mydomain.com` in all the instructions below with your actual domain name.

We need to create a config file for our domain. By default Nginx will look for config files in `/etc/nginx/sites-enabled/`, so the config file we'll create is:

~~~~console
nano /etc/nginx/sites-enabled/mydomain.com.conf
~~~~

Here's what that file looks like:

~~~~nginx
server {
    listen 80;
    listen [::]:80;
    server_name <mydomain.com>;

    location / {
        proxy_pass http://localhost:3000;
    }

    proxy_set_header X-Real-IP $remote_addr;
}
~~~~

The main line here is the `proxy_pass` bit, which takes all requests and sends them to Gitea, which is listening on `localhost:3000` by default. You can change that port if you have something else that conflicts with it, but then you'll need to change it both here and in the `[server]` section of Gitea's /etc/gitea/app.ini.
The last step is to add an SSL cert to our site so we can clone over https (and SSH if you keep reading). I have another tutorial on setting up [Certbot for Nginx on Ubuntu](/src/certbot-nginx-ubuntu-1804). You can use that to get Certbot installed and auto-renewing certs. Then all you need to do is run:

~~~~console
sudo certbot --nginx
~~~~

Select your Gitea domain, follow the prompts, and when you're done you'll be ready to set up Gitea.

### Setting up Gitea

Point your browser to `https://mydomain.com/install` and go through the Gitea setup process. That screen looks like this, and you can use these values, except for the domain name (and be sure to enter the password you used when we created the `gitea` user for PostgreSQL).

One note: if you intend your Gitea instance to be for you alone, I strongly recommend you check the "disable self registration" box, which will stop anyone else from being able to sign up. But turning off registration means you'll need to create an administrator account at the bottom of the page.

<img src="images/2018/gitea-install_FAW0kIJ.jpg" id="image-1706" class="picwide" />

Okay, now that we've got Gitea initialized it's time to go back and tighten the permissions on those directories we set up earlier:

~~~~console
sudo chmod 750 /etc/gitea
sudo chmod 644 /etc/gitea/app.ini
~~~~

Now you're ready to create your first repo in Gitea. Click the little button next to the repositories menu on the right side of your Gitea dashboard and that'll walk you through creating your first repo.
Once that's done you can clone that repo with:

~~~~console
git clone https://mydomain.com/giteausername/reponame.git
~~~~

Now if you have an existing repo that you want to push to your new Gitea repo, just edit the `.git/config` file to make your Gitea repo the new url, e.g.:

~~~~ini
[remote "origin"]
    url = https://mydomain.com/giteausername/reponame.git
    fetch = +refs/heads/*:refs/remotes/origin/*
~~~~

Now do this:

~~~~console
git push origin master
~~~~

### Setting up SSH

Working with git over https is pretty good, but I prefer the more secure method of SSH with a key. To get that working we'll need to add our SSH key to Gitea. If you don't have an SSH key already, open the terminal on your local machine and issue this command:

~~~~console
ssh-keygen -o -a 100 -t ed25519
~~~~

That will create a key named `id_ed25519` in the directory `.ssh/`. If you want to know where that command comes from, read [this article](https://blog.g3rt.nl/upgrade-your-ssh-keys.html).

Now we need to add that key to Gitea. First open the file `.ssh/id_ed25519.pub` and copy the contents to your clipboard. Now in the Gitea web interface, click on the user menu at the upper right and select "settings". Then across the top you'll see a bunch of tabs. Click the one that reads "SSH / GPG Keys". Click the add key button, give your key a name, and paste in the contents of the key.

Note: depending on how your VPS was set up, you may need to add the `git` user to your sshd config. Open `/etc/ssh/sshd_config` and look for a line that reads something like this:

~~~~console
AllowUsers myuser myotheruser git
~~~~

Add `git` to the list of allowed users so you'll be able to authenticate with the git user over ssh.
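The edit itself is one word on one line, but since a botched sshd_config can lock you out of the server, here's the transformation spelled out as a small sketch (the `add_allow_user` helper is hypothetical, for illustration only -- and always keep a second session open and run `sshd -t` before restarting sshd):

```python
def add_allow_user(sshd_config: str, user: str) -> str:
    """Append `user` to an AllowUsers line in sshd_config text.
    Hypothetical helper for illustration, not an OpenSSH tool.
    Note: if no AllowUsers line is present, sshd allows all users,
    so there is nothing to change."""
    out = []
    for line in sshd_config.splitlines():
        if line.strip().startswith("AllowUsers"):
            users = line.split()[1:]
            if user not in users:
                line = "AllowUsers " + " ".join(users + [user])
        out.append(line)
    return "\n".join(out) + "\n"
```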
Now test SSH cloning with this line, substituting your SSH clone url:

~~~~console
git clone ssh://git@mydomain/giteausername/reponame.git
~~~~

Assuming that works, you're all set: Gitea is working and you can create all the repos you need. If you have any problems you can drop a comment in the form below and I'll do my best to help you out.

If you want to add some other niceties, the Gitea docs have a good guide to [setting up Fail2Ban for Gitea](https://docs.gitea.io/en-us/fail2ban-setup/) and then there's a whole section on [backing up Gitea](https://docs.gitea.io/en-us/backup-and-restore/) that's well worth a read.

[^1]: You can compile Gitea yourself if you like, there are [instructions on the Gitea site](https://docs.gitea.io/en-us/install-from-source/), but be forewarned it uses quite a bit of RAM to build.

diff --git a/tech/set up mysql php.txt b/tech/set up mysql php.txt new file mode 100644 index 0000000..2856f05 --- /dev/null +++ b/tech/set up mysql php.txt @@ -0,0 +1,144 @@

Everything you need for wordpress and piwik:

apt-get install php5-dev libssh2-1-dev libssh2-php php5-geoip libgeoip-dev mysql-server php5-mysql php5-fpm fcgiwrap

then run:

sudo mysql_install_db
sudo /usr/bin/mysql_secure_installation


mysql -u root -p
CREATE DATABASE local_stats_piwik;
CREATE USER piwiklocalstats@localhost;
SET PASSWORD FOR piwiklocalstats@localhost= PASSWORD("");
GRANT ALL PRIVILEGES ON local_stats_piwik.* TO piwiklocalstats IDENTIFIED BY '';

CREATE DATABASE longhandpixels_lhp_wp;
CREATE USER longhandpixels@localhost;
SET PASSWORD FOR longhandpixels@localhost= PASSWORD("");
GRANT ALL PRIVILEGES ON longhandpixels_lhp_wp.*
TO longhandpixels IDENTIFIED BY '';

FLUSH PRIVILEGES;

next edit:

sudo vim /etc/php5/fpm/php.ini

cgi.fix_pathinfo=0

open_basedir = '/home/wp-user:/tmp:/home/lxf/git:/var/www/stats.luxagraf.net:/var/www/rss.luxagraf.net:/var/www/rss.longhandpixels.net:/var/www/longhandpixels.net:/var/www/git.luxagraf.net:/var/www/storage.luxagraf.net:/var/www/dev.longhandpixels.net'

then:

sudo service php5-fpm restart

last thing to do for wordpress is create a user for secure updates. reference: https://www.digitalocean.com/community/tutorials/how-to-configure-secure-updates-and-installations-in-wordpress-on-ubuntu

sudo adduser wp-user
sudo chown -R wp-user:wp-user ~/apps/longhandpixels.net
sudo su - wp-user
ssh-keygen -t rsa -b 4096 # save in /home/wp-user/wp_rsa
(answer blank to everything, including password)
exit
sudo chown wp-user:www-data /home/wp-user/wp_rsa*
sudo chmod 0640 /home/wp-user/wp_rsa*
sudo mkdir /home/wp-user/.ssh
sudo chown wp-user:wp-user /home/wp-user/.ssh/
sudo chmod 0700 /home/wp-user/.ssh/
sudo cp /home/wp-user/wp_rsa.pub /home/wp-user/.ssh/authorized_keys
sudo chown wp-user:wp-user /home/wp-user/.ssh/authorized_keys
sudo chmod 0644 /home/wp-user/.ssh/authorized_keys
sudo vim /home/wp-user/.ssh/authorized_keys

add this to restrict to local connections:
from="127.0.0.1" ssh-rsa...
then

sudo apt-get update
sudo apt-get install php5-dev libssh2-1-dev libssh2-php
vim apps/longhandpixels.net/wp-config.php

add these lines:

define('FTP_PUBKEY','/home/wp-user/wp_rsa.pub');
define('FTP_PRIKEY','/home/wp-user/wp_rsa');
define('FTP_USER','wp-user');
define('FTP_PASS','');
define('FTP_HOST','127.0.0.1:sshport');

restart nginx and it should work. make sure that wp-user is in the allowed ssh users (AllowUsers in sshd_config) and /home/wp-user/ is in open_basedir in php.ini.
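That open_basedir line is just a colon-separated list of directory prefixes; PHP refuses to touch any file outside of them, which is why every vhost docroot (and /home/wp-user for the update keys) has to be listed. A rough Python sketch of the check (my own simplification -- real PHP also resolves symlinks with realpath first):

```python
def allowed(path: str, open_basedir: str) -> bool:
    """Simplified illustration of PHP's open_basedir check: each
    colon-separated entry is treated as a path prefix. (The real check
    also resolves symlinks, which this sketch skips.)"""
    for base in open_basedir.split(":"):
        base = base.rstrip("/") + "/"
        if (path.rstrip("/") + "/").startswith(base):
            return True
    return False
```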
## Piwik specific:

grab piwik:

wget http://builds.piwik.org/piwik.zip && unzip piwik.zip
mv piwik apps/app.name
sudo chown -R www-data:www-data apps/app.name
mkdir -p /tmp/cache/tracker/


apt-get install php5-gd libfreetype6 # for nice sparklines

How do I install the GeoIP Geo location PECL extension? (from http://piwik.org/faq/how-to/#faq_163)

    sudo pecl install geoip

Finally, add the following to your php.ini file:

    extension=geoip.so
    geoip.custom_directory=/path/to/piwik/misc

Replace /path/to/piwik with the path to your Piwik installation.

And finally, if you are using the GeoLite City database there is one more thing you need to do. The PECL extension won't recognize the database if it's named GeoLiteCity.dat, so make sure it is named GeoIPCity.dat.

in my case:

    cp GeoLiteCity.dat apps/stats.luxagraf.net/misc/GeoIPCity.dat

sudo chown -R www-data:www-data apps/stats.luxagraf.net/misc/GeoIPCity.dat

# postgres, postgis python setup

apt-get install build-essential python python3 python-dev python3-dev python-pip python3-pip python-setuptools
sudo apt-get install postgresql postgresql-server-dev-all
sudo apt-get install binutils libproj-dev gdal-bin postgis postgresql-9.4-postgis-2.1


Stuff for Pillow:
apt-get install libtiff5-dev libjpeg62-turbo-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk

Install uwsgi:

export PIP_REQUIRE_VIRTUALENV=false
sudo pip3 install uwsgi

sudo vi /etc/systemd/system/uwsgi.service


[Unit]
Description=uWSGI Emperor
After=syslog.target

[Service]
ExecStart=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all

[Install]
WantedBy=multi-user.target


sudo mkdir -p /etc/uwsgi/vassals/ && cd /etc/uwsgi/vassals
sudo ln -s ~/apps/luxagraf/config/django.ini /etc/uwsgi/vassals/
sudo systemctl start uwsgi
sudo systemctl enable uwsgi

diff --git
a/tech/set up uwsgi django.txt b/tech/set up uwsgi django.txt new file mode 100644 index 0000000..37df453 --- /dev/null +++ b/tech/set up uwsgi django.txt @@ -0,0 +1,82 @@
How to Set Up Django with uWSGI on Debian 8

First make sure uWSGI is installed in the virtualenv.

virt && pip install uwsgi

Then, because we want to start an emperor with systemd so that the server will come back up on reboot, we need the global version installed as well.

Because I set pip to require a virtualenv by default, I have to first disable that:

export PIP_REQUIRE_VIRTUALENV=false

Then:

pip install uwsgi

Now we need a systemd service file. Here's what I use (again, the paths are the standard Debian install locations; your system may vary, though I believe Ubuntu is the same):

[Unit]
Description=uWSGI Emperor
After=syslog.target

[Service]
ExecStart=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all

[Install]
WantedBy=multi-user.target

Save that to /lib/systemd/system/uwsgi.service

Then try starting it:

sudo systemctl start uwsgi

This should cause an error like so:

Job for uwsgi.service failed. See 'systemctl status uwsgi.service' and 'journalctl -xn' for details.

If you look at the journal you'll see that the problem is that uwsgi can't find the emperor.ini file referenced in our service file. So, let's create that file.

Most likely the directory /etc/uwsgi doesn't exist, so create that and then the emperor.ini file in it:

mkdir /etc/uwsgi
vim /etc/uwsgi/emperor.ini

Here's the contents of my emperor.ini:

[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www-data
gid = www-data
limit-as = 1024
logto = /tmp/uwsgi.log

The last step is to create the vassals directory we just referenced in emperor.ini:

sudo mkdir /etc/uwsgi/vassals

Now go back and try starting again:

sudo systemctl start uwsgi

That should work.
Then stop it and add it to systemd so it will start up with the system:

sudo systemctl stop uwsgi
sudo systemctl enable uwsgi
sudo systemctl start uwsgi

uwsgi is now running.

The next step is to add a vassal. To do that just symlink your uwsgi ini file into /etc/uwsgi/vassals. The exact paths will vary, but something like:

sudo ln -s /path/to/your/project/django.ini /etc/uwsgi/vassals/

Further Reading:

* As mentioned above, [this gist](https://gist.github.com/evildmp/3094281) covers how to set up the Django end of the equation and covers more of what's actually happening in this setup.

diff --git a/tech/use mutt and gnome keyring.txt b/tech/use mutt and gnome keyring.txt new file mode 100755 index 0000000..be068cd --- /dev/null +++ b/tech/use mutt and gnome keyring.txt @@ -0,0 +1,36 @@
First, install the Python bindings for gnome-keyring:

yum install gnome-python2-gnomekeyring

Download the file ~/.offlineimap.py and add the following settings in ~/.offlineimaprc. This assumes that you use a local IMAP server.

[general]

pythonfile = ~/.offlineimap.py

[Repository localhost]

type = IMAP
remotehost = localhost
remoteusereval = get_username("localhost")
remotepasseval = get_password("localhost")

[Repository Zimbra]

type = IMAP
remotehost = mail.example.com
remoteusereval = get_username("mail.example.com")
remotepasseval = get_password("mail.example.com")

Download the script imap-passwords and run it to add the IMAP usernames and passwords to your keyring. It will prompt you for server, username and password - use the same host names as in .offlineimaprc.

Now you can run offlineimap in a loop to automatically restart it in the case of some unrecoverable error, like so:

#!/bin/bash

while true; do
    /usr/bin/offlineimap
    echo Restarting in 60 seconds ...
    sleep 60
done

During the first run, gnome-keyring will ask you to authorize offlineimap to access your IMAP authentication data in the default keyring.
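For reference, the `pythonfile` option points offlineimap at a Python file it imports before evaluating the `remoteusereval`/`remotepasseval` expressions, so that file has to define `get_username` and `get_password`. The real ~/.offlineimap.py talks to gnome-keyring; this stand-in with a hard-coded dict (illustration only, never store real passwords like this) shows the shape of the mechanism:

```python
# Stand-in sketch of ~/.offlineimap.py (the file named by `pythonfile`).
# offlineimap imports it and then evaluates the remoteusereval /
# remotepasseval expressions from .offlineimaprc, which call these
# functions. The real file queries gnome-keyring; the dict below is a
# placeholder for illustration.
_KEYRING = {
    "localhost": ("me", "example-password"),
    "mail.example.com": ("me@example.com", "example-password-2"),
}

def get_username(server):
    return _KEYRING[server][0]

def get_password(server):
    return _KEYRING[server][1]
```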
Thanks to Sebastian Rittau for the Keyring Python module.
\ No newline at end of file