# Notes
## w3m

How do you open a link in a new tab? meh, you don't really need to, just hit "s" for the buffer selection window which has your whole browsing history.

Okay, back is shift-b, s to list buffers, esc-e to edit -- that covers the basics.

Meta-u to get the equivalent of ctrl-l (select the URL bar), then bash shortcuts work:

Ctrl-u to delete everything behind the cursor
Ctrl-a Move cursor to beginning of line -- doesn't work
Ctrl-e Move cursor to end of line
Ctrl-b Move cursor back one letter
Ctrl-f Move cursor forward one letter 

Need to figure out how to save current buffers to file

You can bookmark with esc-a to add, esc-b to view.
# Console-Based Web Browsing With W3M

date:2023-05-15 12:24:54
url:/src/console-based-web-browsing-w3m

Lately I've been browsing the web with a 28-year-old, text-only browser, and it has made me like the web again. Pages load blazingly fast and I find myself using the web like the library it once was -- I connect, I find what I want, I save it offline to read, and I close the browser. It makes me more efficient, less distracted, and I don't ever want to go back to a graphical browser.

The web is a steaming pile of JavaShit though, so I do from time to time have to open pages in a graphical browser. But I start in w3m now. If the page I'm after works, I am happy; if it doesn't I get to decide: begrudgingly open it in a graphical browser or just skip it. It's remarkable how often the second option is the one I choose. It's made me question what I actually do on the web; most of it turns out to be unimportant and unnecessary.

It isn't the lack of JavaScript that makes browsing with w3m great. That does help clear up the clutter, but it's really an entirely different experience, one I love more the more I use it. It returns the web to text, and in some ways I think this may be the ideal form of the web. Text and images.

Perhaps, in hindsight, it would have been better to leave the rest -- email, chat, real-time messaging, and all the other bells and whistles -- to mobile apps. That's how most people use them anyway.

<img src="images/2023/w3m-screen_LGRQBFb.jpg" id="image-3589" class="picwide caption" />

With w3m I find myself focused on a single task in a way that I am not in Vivaldi (my [graphical browser of choice](https://www.wired.com/story/vivaldi-4-2021/)). With w3m I get the information I want faster, and I can save it more easily. I can open a rendered page in Vim with a single keystroke, then copy and paste things to my notes. Another keystroke saves the whole page as text. When I'm done I quit and move on to something different.
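
Both of those are just w3m keymap functions bound to keys. A hypothetical `~/.w3m/keymap` sketch (the function names come from w3m's docs; the key choices here are only examples, not necessarily my bindings):

~~~
# hypothetical ~/.w3m/keymap entries
keymap  E  EDIT_SCREEN   # open the rendered page in $EDITOR (vim for me)
keymap  W  SAVE_SCREEN   # save the rendered page as plain text
~~~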

Opening w3m is so fast I don't keep it open. I use it when I need it and then I close it. 

This, I've come to think, is the key to eliminating distractions, staying focused, getting worthwhile work done: close the browser when you don't need it.

I never thought of an open web browser as multitasking, but that's what tabs are after all. Worse, the web has no edges. An open browser window is a glittering invitation to distraction.

Unitasking is the way forward friends. When you're done with the page, close the browser.

This is very cumbersome with a graphical browser, which has to boot up a ton of stuff and then load all those open tabs you have; it ends up taking long enough that only a crazy person would close it when they were done with a single task. It'd be like shutting off your laptop every time you closed the lid[^1].

With w3m this is not a factor. I shut it down every time I'm done. And I waste less time because of it. Often I even close out the terminal window that it was in because booting up a terminal window is fast too. Then I find myself staring at my desktop, which happens to be a somber image I took a long time ago in the swamps of Florida, and it always makes me want to close my laptop and go outside, which is why I use it as a desktop.

What does this have to do with w3m? Very little I suppose, other than to say: if you find yourself wasting time browsing the internet for hours, try w3m. You might like it, and I can almost guarantee you'll save yourself some time that you'd otherwise waste on pointless internet things. Go make something instead. Or give someone a hug or a high five.

[^1]: Cough. Which I also do.

# How I Work on a $75 Tablet

date:2022-11-24 09:50:58
url:/src/how-to-get-work-done-on-a-75-tablet

Fresh out of the box Amazon's Fire tablets are useless. They're just firehoses designed to shove Amazon content down your throat. That's why Amazon sells them for as little as $55 for the 10-inch model. Technically it's $150, but it frequently goes on sale for around, and sometimes under, $75. The time to buy is major shopping holidays, Prime Day and Black Friday/Cyber Monday are your best bet. 

To do any work you'll also want the Finite keyboard. The tablet-keyboard bundle typically runs about $75-$120 depending on the sale. It's $200 not on sale. Don't do that, it's not worth $200.

For $75 though, I think it's worth it. Once I strip the Amazon crap out and install a few useful apps, I have a workable device. The price is key for me. This is what I take when I head out to the beach or into the woods or up some dusty canyon for the day. I don't want to take my $600 laptop to those places. $75 tablet? Sure. Why not get it a little sandy here and there? So far (going on a year now), it's actually survived. Mostly. I did crack the screen, but it's not too bad yet.

It lets me work in places like this, which happens to be where I am typing right now (picnic tables in the middle of nowhere are rare, but I'll take it).

<img src="images/2023/2023-04-11_152857_st-george.jpg" id="image-3587" class="picwide" />

A Fire HD 10 is not the most pleasant thing to type on. The keyboard is cramped and there's no way to map caps lock to control, which trips me up multiple times a day. Still. After a year. But hey, it enables me to get outside and play and still get a little work done when I need to. 

For anyone else who might be interested, here's what I do.

First you need to disable all of Amazon's crap apps. Before you do that, though, you need to make sure you have a new launcher and a new web browser installed, because if you turn off Amazon's defaults before you have replacements you will have nothing and you'll be stuck. There are millions of browsers and launchers for Android. I happen to like Vivaldi as a web browser, which you can download from UptoDown.com (a source officially supported by Vivaldi). For a launcher I like [Nova Launcher](https://nova-launcher.en.uptodown.com/android).

Once you have those it's time to start shutting off all the Amazon apps and services. To do that I use [these instructions](https://forum.xda-developers.com/t/guide-no-root-remove-amazon-apps-on-fire-10-hd-2019.4009547/) from the XDA forums. You need to install the adb developer tool, connect it to your Fire, and then run a series of commands. The commands themselves are a touch out of date in the XDA article, so to disable some apps on newer tablets you may have to search for the new app names.
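
The general shape of those commands, for the curious (the package name below is a made-up placeholder; use the real ones from the XDA list):

~~~
# with USB debugging enabled, check that the tablet shows up
adb devices
# disable an Amazon app for the current user (package name is hypothetical)
adb shell pm disable-user --user 0 com.amazon.example.app
~~~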

Once you've eliminated Amazon from the Fire HD 10, you have a base on which to build. Over the years I've purposefully built a workflow based around very simple tools that are available everywhere. If it can run a terminal emulator, I can probably work on it. On Android devices, the app I need is Termux. That and a web browser and I can get by. All of those work fine without the Google Play Store installed. If you do need apps from the Play Store I wrote a tutorial on [how to install the Google Play Store](https://www.wired.com/story/how-to-install-google-play-store-on-amazon-fire-tablet/) for Wired that you can use.

For writing and accessing my documents and other files I use Termux, which is available via F-Droid. I write prose and code the same way, using Vim and Git. I track changes using Git and push them to a remote repo I host on a server. When I get back to my laptop, I can pull the work from the tablet and pick up where I left off. To make everything work you also need the Termux:API app, which for some reason is separate.

To set things up the way I like them I install Termux and then configure ssh access to my server. Once that's set up I can clone my dotfiles and configure Termux to mirror the way my laptop is set up. I can also [install git annex]() and clone my documents and notes folders. I don't often access these from the tablet, but I like to have them just in case. The last thing I do is clone my writing repository. That gets me a basic setup, but there are some things I do to make life on Android smoother.
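
The first few of those steps look roughly like this (the server address and repo path are placeholders, not my real ones):

~~~
pkg install openssh git
ssh-keygen -t ed25519      # then copy the public key to the server
git clone you@yourserver.example.com:dotfiles.git ~/dotfiles
~~~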

First install the termux-api package with:

~~~
pkg install termux-api 
~~~ 

This gives you access to the shell commands `termux-clipboard-set` and `termux-clipboard-get` so you can copy and paste from vim. I added this to my Termux .vimrc and use Ctrl-c in visual mode to send the selected text to the system clipboard:

~~~
vnoremap <C-x> :!termux-clipboard-set<CR>
vnoremap <C-c> :w !termux-clipboard-set<CR><CR>
inoremap <C-v> <ESC>:read !termux-clipboard-get<CR>i
~~~ 

That works for updating this site, but some sites I write for want rich text, which I generate using [Pandoc](https://pandoc.org) and then open in the browser using this script:

~~~
#!/data/data/com.termux/files/usr/bin/sh
cat "$1" \
  | pandoc -t html --ascii > /storage/emulated/0/Download/output.html \
  && darkhttpd /storage/emulated/0/Download --daemon --addr 127.0.0.1 \
  && termux-open http://localhost:8080/output.html
~~~

I saved that as rtf.sh, made it executable with `chmod +x`, and put it on my path (which in my setup, includes `~/bin`). Then I run it with whatever file I am working on.

~~~
~/bin/rtf.sh mymarkdown.txt
~~~

That'll open a new window in my browser with the formatted text and then I can copy and paste to where it needs to go. Note that you'll need to install [darkhttpd](https://github.com/emikulic/darkhttpd) (a very simple web server) with `pkg install darkhttpd`.

#### Issues and Some Solutions

There's no `esc` key on the Finite keyboard, which is a problem for Vim users. I get around it by mapping `jj` to escape in my .vimrc. 
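
For anyone who wants it, that's a single line in the .vimrc, something like:

~~~
" no physical Esc key on this keyboard, so jj in insert mode works as Esc
inoremap jj <Esc>
~~~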

The one thing I have not solved is the caps lock key. I am so used to having that key set as both Control and Esc that I hit it several times a day. Not only does the key combo I thought I was running not run, I also end up activating caps lock, which messes up the next commands as well because they're now capital-letter commands instead of lowercase. I've considered just prying off the key so it'd be harder to hit, but so far I haven't resorted to that.

I've tried quite a few key remapping apps but none of them have worked consistently enough to rely on. Such is life. It's $75, what do you want really? I get by. I write and edit in vim, copy/paste things to the browser. That's all I need. Again, part of the reason I can work on a tiny $75 computer is that I have chosen to learn and rely on simple tools that work just about anywhere.

That said, this thing is not perfect. The keyboard is prone to double-typing letters and to not registering a space bar press, so I end up spending more time editing when I write with it. I also constantly reach for the trackpad that isn't there. And sometimes I get to the middle of the woods and realize I don't have the latest version of the document I want to edit. Git comes to the rescue then, though: I just create a new branch, work, push the branch to the remote repo, and then merge it to master by hand when I get back to my laptop.
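
Roughly, that workflow looks like this (the branch and remote names are just examples):

~~~
git checkout -b woods-edits        # throwaway branch on the tablet
git add -A && git commit -m "edits from the woods"
git push origin woods-edits
# later, back on the laptop:
git checkout master && git merge woods-edits
~~~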

If you don't do everything in a terminal you might still be able to get something similar set up using other offline-friendly tools. I'm sure it's possible; I just have no need, so I haven't explored it. Anyway, if there's something you want to know, or you want me to try something to see if it might work for you, feel free to email me, or leave a comment.

# Back to X11

date:2022-05-18 19:11:23
url:/src/back-to-x11

Earlier this year I upgraded my Lenovo laptop with a new, larger SSD. Video takes a staggering amount of disk space. In the process I decided to completely re-install everything. It had probably been at least five years since I'd done that.

Normally I would never say anything about this because really, the software you run is just a tool. If it works for you then that's all that matters. However, since I once disregarded this otherwise excellent advice and wrote about how [I use Arch Linux](https://luxagraf.net/src/why-i-switched-arch-linux) and [Sway](https://luxagraf.net/src/guide-to-switching-i3-to-sway), I feel somewhat obligated to follow up and report that I still love Arch, but I no longer run Sway or Wayland. 

I went back to X.org. Sorry Wayland, but much as I love Sway, I did not love wrestling with MIDI controller drivers, JACK, video codecs and hardware acceleration and all the other elements of an audio/video workflow in Wayland. It can be done, but it's more work. I don't want to work at getting software to work. I'm too old for that shit. 

I want to open a video and edit. I want to plug in a microphone and record. If it's any more complicated than that -- and it was for me in Wayland with the mics I own -- I will find something else. Again, I really don't care what my software stack is, so long as I can create what I want to create with it.

So I went back to running Openbox with a Tint2 status bar. And you know what... I really like it.

Wayland was smoother, less graphically glitchy, but meh, whatever. Ninety-five percent of the time I'm writing in Vim in a Urxvt window. I even started [browsing the web in the terminal](https://luxagraf.net/src/console-based-web-browsing-w3m) half the time. I need smooth scrolling and transitions like I need a hole in my head. 

That said, I did take all of Sway's good ideas and try as best I could to replicate them in Openbox. So I still have the same keyboard shortcuts and honestly, aside from the fact that Tint2 has more icons than Waybar, and creating "desktops" isn't dynamic, I can't tell much difference. Even my battery life seems to have improved in X11, which is ironic, since better battery life was the reason I switched to Wayland in the first place. Apparently that advantage doesn't hold with this laptop (a Lenovo Flex 5, as opposed to the X270, which does get better battery life under Wayland).

Anyway, there you have it. X11 for the win. At least for me. For now.

# Indie Web Companies

date:2021-02-22 09:37:29
url:/src/indie-web-companies

Here's a disturbing factoid: **the world’s ten richest men have made $540 billion so far during the pandemic.** Amazon founder Jeff Bezos' worth went up so much between March and September 2020 that he could afford to give all 876,000 Amazon employees a $105k bonus and still have as much money as he had before the pandemic started ([source](https://oxfamilibrary.openrepository.com/bitstream/handle/10546/621149/bp-the-inequality-virus-summ-250121-en.pdf)).

What does that have to do with code? Well, some of my code used to run on Amazon services. Some of my money is in Jeff Bezos' pocket. I was contributing to the economic inequality that Amazon enables. I decided I did not want to do that.

But more than I didn't want to contribute to Amazon's bottom line, I *wanted* to contribute to someone's bottom line, the emphasis being on *someone*. I wanted to redirect the money I was already spending to small businesses, businesses that need the revenue.

We can help each other instead of Silicon Valley billionaires.

Late last year at [work](https://www.wired.com/author/scott-gilbertson/) we started showcasing some smaller, local businesses in affiliate links. It was a pretty simple idea: find some small companies in our communities making worthwhile things and support them by telling others.[^1]

One woman whose company I linked to called it "life-changing." It's so strange to me that an act as simple as pasting some HTML into the right text box can change someone's life. That's amazing. I bring this up not to toot my own horn, but to say that every day there are ways in which you can use the money you spend to help real people trying to make a living. If you've ever charged a little for a web service you probably know how big a deal even one more customer is. I want to be that one more customer for someone.

### Small business web hosts, email providers, and domain registrars

My online expenses aren't much, just email, web hosting, storage space, and domain registration. I wanted to find some small business replacements for the megacorps I was using.

I did a ton of research. Web hosting and email servers are tricky; these are critical things that run my business and my wife's business. It's great to support small businesses, but above all the services have to *work*. Luckily for us the forums over at [Low End Talk](https://www.lowendtalk.com/) are full of ideas and long-term reviews of exactly these sorts of businesses -- small companies offering cheap web hosting, email hosting, and domain registration.

After a few late nights digging through threads, finding the highlights, and then more research elsewhere on the web, I settled on [BuyVM](https://buyvm.net/) for my web hosting. The owner Francisco is very active on Low End Talk and, in my experience for the last three months, is providing a great service *for less* than I was paying at Vultr. It was so much less I was able to get a much larger block storage disk and have more room for my backups, which eliminated my need for Amazon S3/Glacier as well[^2]. I highly recommend BuyVM for your VPS needs. 

For email hosting I was actually already using a small company, [Migadu](https://www.migadu.com/). I liked their service, and I still recommend them if the pricing works for you, but they discontinued the plan I was on and I would have had to move to a more expensive plan to retain the same functionality.

I jumped ship from Migadu during Black Friday because another small email provider I had heard good things about was having a deal: $100 for life. At that price, so long as it stays in business for two years, I won't lose any money. I moved my email to [MxRoute](https://mxroute.com/) and it has been excellent. I liked it so much I bought my parents a domain and freed them from Google. Highly recommend MxRoute.

That left just one element of my web stack at Amazon: domain registration. I'll confess I gave up here at first. Domain registration is not a space filled with small companies (which to me means 2-8 people). I gave up and complained to a friend, who said: try harder. So I did, and discovered [Porkbun](https://porkbun.com/), the best domain registrar I've used in the past two decades. I moved my small collection of domains over at the beginning of the year and it was a seamless, super-smooth transition. It lives up to its slogan: "an oddly satisfying experience."

And those are my recommendations for small businesses you can support *and* still have a great technology stack: [Porkbun](https://porkbun.com/) (domain registration), [MxRoute](https://mxroute.com/) (email hosting), and [BuyVM](https://buyvm.net/) (VPS hosting).

The thing I didn't replace was AWS CloudFront. I don't have enough traffic to warrant a CDN, so I just dropped it. If I ever change my mind about that, based on my research, I'll go with [KeyCDN](https://www.keycdn.com/pricing), or possibly [Hostry](https://hostry.com/products/cdn/).

I also haven't found a reliable replacement for SES, which I use to send my newsletters. I wish Sendgrid would spin off a company for non-transactional email, but I don't see that happening. I could write another 5,000 words on how the big email providers totally, purposefully fucked up the best distributed communication system around. But I will spare you.

The point is, these are three small companies providing useful services we developers need. If you're feeling like you'd rather your money went to people trying to make cool, useful stuff, rather than massive corporations, give them a try. If you have other suggestions drop them in the comments and maybe I can put together some sort of larger list.

[Note: none of these links are affiliate links, just services I actually use and therefore recommend.]

[^1]: This is something I'd like to do more, unfortunately there are not cottage industries for most of the things I write about (cameras, laptops, etc). Still, you do what you can I guess.
[^2]: I have a second cloud-based backup stored in Backblaze's B2 system. Backblaze is not a small company by any means, but it's one that, from the research I've been able to do, seems ethically run and about as decent as a corporation can be these days.

# How To Use Webster's 1913 Dictionary, Linux Edition

date:2020-12-09 09:22:58
url:/src/how-use-websters-1913-dictionary-linux-edition

I suspect the overlap of Linux users and writers who care about the Webster's 1913 dictionary is vanishingly small. Quite possibly just me. But in case there are others, I am committing these words to the internet. Plus I will need them in the future when I forget how I set this up.

Here is how you install, set up, and configure the command line app `sdcv` so that you too can have the one true dictionary at your fingertips in the command line app of your choosing.

But first, about the one true dictionary.

The one true dictionary is debatable I suppose. Feel free to debate. I have a "compact" version of the Oxford English Dictionary sitting on my desk and it is weighty both literally and figuratively in ways that the Webster's 1913 is not, but any dictionary that deserves consideration as your one true dictionary ought to do more than spit out dry, banal collections of words. 

John McPhee writes eloquently about the power of a dictionary in his famous New Yorker essay, *[Draft No 4](https://www.newyorker.com/magazine/2013/04/29/draft-no-4)*, which you can find in paper in [the compilation of essays by the same name](https://bookshop.org/books/draft-no-4-on-the-writing-process/9780374537975). Fellow New Yorker writer James Somers has [a brilliant essay on the genius of McPhee's dictionary](http://jsomers.net/blog/dictionary) and how you can get it installed on your Mac.

Remarkably, the copy of the Webster's 1913 that Somers put up is still available. So go grab that.

However, while his instructions are great for macOS users, they don't work on Linux and moreover they don't offer access from the shell. I write in Vim, in a tmux session, so I wanted an easy way to look things up without switching apps. 

The answer is named `sdcv`. It is, in the words of its man page, "a simple, cross-platform text-based utility for working with dictionaries in StarDict format." That last bit is key, because the Webster's 1913 file you downloaded from Somers is in StarDict format. I installed `sdcv` from the Arch Community repository, but it's in Debian and Ubuntu's official repos as well. 

Once `sdcv` is installed you need to unzip the dictionary.zip file you should have grabbed from Somers' post. That will give you four files. All we need to do now is move them somewhere `sdcv` can find them. By default that's `$XDG_DATA_HOME/stardict/dic`, although you can customize it by adding the environment variable `STARDICT_DATA_DIR` to your .bashrc. I keep my dictionaries in a `~/bin/dict` folder, so I just drop this in .bashrc:

~~~bash
export STARDICT_DATA_DIR="$HOME/bin/dict"
~~~
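
Putting the unzipped files in place looks something like this on my setup (the exact filenames inside dictionary.zip may differ, and some builds of `sdcv` expect a `dic` subdirectory under `STARDICT_DATA_DIR`, so adjust the path to taste):

~~~bash
mkdir -p ~/bin/dict/dic
unzip dictionary.zip -d ~/bin/dict/dic/
sdcv --list-dicts    # confirm sdcv can see the Webster's 1913 files
~~~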

### How to Look Up Words in Webster's 1913 from the Command Line

To use your new one true dictionary, all you need to do is type `sdcv` and the word you'd like to look up. Add a leading '/' before the word and `sdcv` will use a fuzzy search algorithm, which is handy if you're unsure of the spelling. Search strings can also use `?` and `*` as wildcards. I have never used either.

My use is very simple. I wrote a little Bash function that looks like this:

~~~bash
function d() {
    sdcv "$1" | less
}
~~~

With this I type `d search_term` and get a paged view of the Webster's 1913 entry for that word. Since I always write in a tmux split, I just move my cursor to the blank split, type my search term and I can page through and read it while considering the context in the document in front of me.

### But I Want a GUI

Check out [StarDict](http://www.huzheng.org/stardict/), there are versions for Linux, Windows, and macOS, as well as source code.

# Solving Common Nextcloud Problems

date:2020-11-17 14:27:01
url:/src/solving-common-nextcloud-problems

I love [NextCloud](https://nextcloud.com). Nextcloud allows me to have all the convenience of Dropbox, but hosted by me, controlled by me, and customized to suit my needs. I mainly use the file syncing, calendar, and contacts features, but Nextcloud can do a crazy amount of things.

The problem with NextCloud, and maybe you could argue that this is the price you pay for the freedom and control, is that I find it requires a bit of maintenance to keep it running smoothly. Nextcloud does some decidedly odd things from time to time, and knowing how to deal with them can save you some disk space and maybe avoid syncing headaches.

I should note that while I call these problems, I **have never lost data** using Nextcloud. These are really more annoyances, along with some ways to prevent them, than *problems*.

### How to Get Rid of Huge Thumbnails in Nextcloud

If Nextcloud is taking up more disk space than you think it should, or your Nextcloud storage space is just running low, the first thing to check is the image thumbnails directory. 

At one point I poked around in the Nextcloud `data` directory and found 11-gigabytes worth of image previews for only 6-gigabytes worth of actual images stored. That is crazy. That should never happen. 

Nextcloud's image thumbnail defaults err on the side of "make it look good in the browser," whereas I prefer to err on the side of keeping it really small.

I did some research and came up with a few solutions. First, it looks like my runaway 11-gigabyte problem might have been due to a bug in older versions of Nextcloud. Ideally I will not hit that issue again. But I don't admin servers with hope and optimism, so I figured out how to tell Nextcloud to generate smaller image previews. I almost never look at the images within the web UI, so I really don't care about the previews at all. I made them much, much smaller than the defaults. Here are the values I use:

~~~bash
occ config:app:set previewgenerator squareSizes --value="32 256"
occ config:app:set previewgenerator widthSizes  --value="256 384"
occ config:app:set previewgenerator heightSizes --value="256"
occ config:system:set preview_max_x --value 500
occ config:system:set preview_max_y --value 500
occ config:system:set jpeg_quality --value 60
occ config:app:set preview jpeg_quality --value="60"
~~~

Just ssh into your Nextcloud server and run all these commands. If you followed the basic Nextcloud install instructions you'll want to run these as your web server user. For me, with NextCloud running on Debian 10, the full command looks like this:

~~~bash
sudo -u www-data php /var/www/nextcloud/occ config:app:set previewgenerator squareSizes --value="32 256"
sudo -u www-data php /var/www/nextcloud/occ config:app:set previewgenerator widthSizes  --value="256 384"
# and so on, running all the commands listed above
~~~

This assumes you installed Nextcloud into the directory `/var/www/nextcloud`, if you installed it somewhere else, adjust the path to the Nextcloud command line tool `occ`.

That will stop Nextcloud from generating huge preview files. So far so good. I deleted the existing previews and reclaimed 11-gigabytes. Sweet. You can pre-generate previews, which will make the web UI faster if you browse images in it. I do not, so I didn't generate any previews ahead of time.
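
Deleting the existing previews is just a matter of removing the preview folder under Nextcloud's appdata directory, at least on my install. Something like this (the instance id in the folder name will differ; double-check the path on your server before running an `rm -rf`):

~~~bash
sudo -u www-data rm -rf /var/www/nextcloud/data/appdata_*/preview/*
~~~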

### How to Solve `File is Locked` Issues in Nextcloud

No matter what I do, I always end up with locked file syncing issues. Researching this led me to try using Redis to cache things, but that didn't help. I don't know why this happens. I blame PHP. When in doubt, blame PHP. 

Thankfully it doesn't happen very often, but every six months or so I'll see an error, then two, then they start piling up. Here's how to fix it.

First, put Nextcloud in maintenance mode (again, assuming Debian 10, with Nextcloud in the `/var/www/nextcloud` directory):

~~~bash
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
~~~

Now we're going directly into the database. For me that's PostgreSQL. If you use MySQL or MariaDB, you may need to adjust the syntax a little.

~~~bash
psql -U yournextclouddbuser -h localhost -d yournextclouddbname
password:
nextclouddbname=> DELETE FROM oc_file_locks WHERE True;
~~~
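
If you're on MySQL or MariaDB, the equivalent is roughly this (a sketch, untested on my end since I run Postgres):

~~~bash
mysql -u yournextclouddbuser -p yournextclouddbname
mysql> DELETE FROM oc_file_locks WHERE True;
~~~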

That should get rid of all the locked file problems. For a while anyway.

Don't forget to turn maintenance mode off:

~~~bash
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
~~~

### Force a File Re-Scan

If you frequently add and remove folders from Nextcloud, you may sometimes run into issues. I usually add a folder at the start of a new project, and then delete it when the project is finished. Mostly this just works, even with shared folders on the rare occasions that I use them, but sometimes Nextcloud won't delete a folder. I have no idea why. It just throws an unhelpful error in the web admin and refuses to delete the folder from the server.

I end up manually deleting it on the server using: `rm -rf path/to/storage/folder`. Nextcloud however, doesn't always seem to notice that the folder is gone, and still shows it in the web and sync client interfaces. The solution is to force Nextcloud to rescan all its files with this command:

~~~bash
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
sudo -u www-data php /var/www/nextcloud/occ files:scan --path="yournextcloudusername/files/NameOfYourExternalStorage"
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
~~~

Beware that on large data directories this can take some time. It takes about 30 seconds to scan my roughly 30GB of files.

### Mostly Though, Nextcloud is Awesome

Those are three annoyances I've hit with Nextcloud over the years and the little tricks I've used to solve them. Lest anyone think I am complaining, I am not. Not really anyway. The image thumbnail thing is pretty egregious for a piece of software that aims to be enterprise grade, but mostly Nextcloud is pretty awesome.

I rely on Nextcloud for files syncing, Calendar and Contact hosting, and keeping my notes synced across devices. Aside from these three things, I have never had a problem.

#### Shoulders Stood Upon

* [Nextcloud's documentation](https://docs.nextcloud.com) isn't the best, but can help get you pointed in the right direction.
* I tried a few different solutions to the thumbnail problem; especially helpful was this post on [Understanding and Improving Nextcloud Previews](https://ownyourbits.com/2019/06/29/understanding-and-improving-nextcloud-previews/) by nachoparker.
* The [file lock solution](https://help.nextcloud.com/t/file-is-locked-how-to-unlock/1883) comes from the Nextcloud forums.
* The solution to scanning external storages comes from the [Nextcloud forums](https://help.nextcloud.com/t/automate-occ-filescan/35282/4).

# Why I Built My Own Mailing List Software

date:2020-10-24 08:36:15
url:/src/why-i-built-my-own-mailing-list-software

This is not a tutorial. If you don't already know how to write the code you need to run a mailing list, you probably shouldn't try to do it yourself. Still, I wanted to outline the reasons I built my own mailing list software in 2020, when there are dozens of commercial and open source projects that I could have used.

The short answer is that when I plan to use something as a core piece of what I do, I like to understand it completely. The only way to really understand a thing is to either build it yourself from scratch or completely disassemble it and put it back together. 

This is true for software as well as the rest of the world. I ripped all the electrical, propane, plumbing, and engine systems out of [my home (a 1969 RV)](/1969-dodge-travco-motorhome) because I needed to know how every single piece works and how they all work together. I understand them now, and that makes maintaining them much easier. Otherwise I would always be dependent on someone else to keep my home running.

The same is true with software. If the software you're considering is a core part of your personal or business infrastructure, you need to understand every single part of it and how all those parts fit together. 

The question is, should you deconstruct an existing project or write your own from scratch? The answer depends on the situation; the right choice won't always be the same. I do a mix of both, and I'm sure most other people do too. There's no one right answer, which means you have to think things through in detail ahead of time.

When I decided I wanted to [start a mailing list](/jrnl/2020/11/invitation), I looked around at the software that was available and very quickly realized that I had different goals than most mailing list software. That's when you should write your own.

The available commercial software did not respect users' privacy and did not allow me any control. There are some services that do provide a modicum of privacy for your subscribers, but you're going to be working against the software to enable it. If you know of a dead simple commercial mailing list service that's built with user privacy in mind, please post a link in the comments, I'd love to have somewhere to point people.

A big part of privacy is that I wanted to be in control of the data. I host my own publishing systems. I consider myself a writer first, but publisher is a close second. What sort of publisher doesn't control their own publishing system?[^1]

Email is a wonderful distributed publishing system that no one owns. That's okay, I don't need to control the delivery mechanism, just the product at either end. And email is more or less the inverse of the web. You send a single copy to many readers, rather than many readers coming to a single copy as with a web page. The point is, there's no reason I can't create and host the original email here and send out the copies myself. The hard part -- creating the protocols and low-level tools that power email -- was taken care of decades ago.

With that goal in mind I started looking at open source solutions. I use [Django](https://www.djangoproject.com) to publish what you're reading here, so I looked at some Django-based mailing list software. The two I considered most seriously were [Django Newsletter](https://django-newsletter.readthedocs.io/en/latest/) and [Emencia Django Newsletter](https://github.com/emencia/emencia-django-newsletter). I found a few other smaller projects as well, but those seem to be the big two in what's left of the Django universe. 

Those two, and some others, influenced what I ended up writing in various ways, but none of them were quite what I wanted out of the box. Most of them still used some kind of tracking, whether a pixel embedded in the email or links wrapped with individual identifiers. I didn't want either of those things, and stripping them out while staying up to date with upstream changes would have been cumbersome. So, DIY then.

But running a mail server is... difficult, risky, and probably going to keep you up at night. I tried it, briefly.

One of the big problems with email is that, despite email being an open protocol, Google and other big corps are able to gain some control by using spam as a reason to tightly limit who gets to send email.[^2] That means if I just spin up a VPS at Vultr and try to send some emails with Postfix they're probably all going to end up in, best case, your Spam folder, but more likely they'd never be delivered.

So while I wrote the publishing tools myself, host the newsletter archive myself, and designed everything about it myself, I handed off the sending to Amazon's SES, which has been around long enough, and is used by enough big names, that mail sent through it isn't automatically deleted. It may still end up in some Spam folders, but for the most part in my early testing (thank you to all my friends who helped out with that) that hasn't been an issue.

In the end what I have is a fairly robust, loosely-joined system where I have control over the key elements and it's easy to swap out the sending mechanism down the road should I have problems with Amazon SES. 

### Was it Worth It?

So far absolutely not. But I knew that when I started.

I could have signed up for Mailchimp, picked some pre-made template, and spent the last year sending out newsletters to subscribers, and who knows, maybe I'd have tons of those by now. But that's okay, that was never the goal. 

I am and always have been playing a very long game when it comes to publishing. I am building a thing that I want to last the rest of my life and beyond if I can manage it. 

I am patient. I am not looking for a ton of readers, I am looking for the right readers. The sort of people who are in short supply these days, the sort of people who end up on a piece like this and actually read the whole thing. The people for whom signing up for Mailchimp would be too easy, too boring.

I am looking for those who want some adventure in everything they do, the DIYer, the curious, the explorers, the misfits. There's more of us than most of us realize. If you're interested feel free to [join our club](/newsletter/friends).

[^1]: Sadly, these days almost no publisher retains any control over their systems. They're all beholden to Google AMP, Facebook News, and whatever the flavor of the year happens to be. A few of them are slowly coming around to the idea that it might be better to build their own audiences, which somehow passes for revolutionary in publishing today. But I digress.
[^2]: Not to go too conspiracy theory here, but I suspect that Google and its ilk generate a fair bit of the spam themselves, and do nothing to prevent the rest precisely because it allows for this control. Which is not to say spam isn't a problem, just that it's a convenient problem.

# Replacing Autokey on Wayland

date:2020-06-03 11:35:31
url:/src/replacing-autokey-wayland-plain-text-snippets

Snippets are bits of text you use frequently. Boilerplate email responses, code blocks, and whatever else you regularly need to type. My general rule is, if I type it more than twice, I save it as a snippet.

I have a lot of little snippets of text and code from years of doing this. When I used the i3 desktop (and X11) I used [Autokey](https://github.com/autokey/autokey) to invoke shortcuts and paste these snippets where I need them. In Autokey you define a shortcut for your longer chunk of text, and then whenever you type that shortcut Autokey "expands" it to your longer text.

It's a great app, but I [switched to a Wayland-based desktop](/src/guide-to-switching-i3-to-sway) ([Sway](https://swaywm.org/)) and Autokey doesn't work in Wayland yet. It's unclear to me whether it's even possible to have an Autokey-like app work within Wayland's security model ([Hawck](https://github.com/snyball/Hawck) claims to, but I have not tested it). 

Instead, after giving it some thought, I came up with a way to do everything I need, in a way I like even better, using tools I already have installed.

### Rolling Your Own Text Snippet Manager

Autokey is modeled on the idea of typing shortcuts and having them replaced with a larger chunk of text. It works to a point, but has the mental overhead of needing to remember all those keystroke combos.

Dedicating memory to digital stuff feels like we're doing it wrong. Why not *search* for a snippet instead of trying to remember some key combo? If the searching is fast and seamless there's no loss of "flow," or switching contexts, and no need to remember some obtuse shortcut. 

To work, though, the search must be *fast*. Fortunately there's a great little command line app that offers lightning-fast search: [`fzf`](https://github.com/junegunn/fzf), a command line "fuzzy" finder. `fzf` is a find-as-you-type search interface that's incredibly fast, especially when you pair it with [`ripgrep`](https://github.com/BurntSushi/ripgrep) instead of `find`.
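
Pairing them is just a matter of telling `fzf` to use `ripgrep` for its file listing, something like this in .bashrc (the exact flags are a matter of taste):

~~~bash
export FZF_DEFAULT_COMMAND='rg --files --hidden --glob "!.git"'
~~~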

I already use `fzf` as a DIY application launcher, so I thought why not use it to search for snippets? This way I can keep my snippets in a simple text file, parse them into an array, pass that to `fzf`, search, and then pass the selected result on to the clipboard. 

I combined Alacritty, a Python script, `fzf`, `sed`, and some Sway shortcuts to make a snippet manager I can call up and search through with a single keystroke. 

###Python 

It may be possible to do this entirely in a bash script, but I'm not that great at bash scripting so I did the text parsing in Python, which I know well enough.

I wanted to keep all my snippets in a single text file, with the option to do multiline snippets for readability (in other words I didn't want to be writing `\n` characters just because that's easier to parse). I picked `---` as a delimiter because... no reason really. 

The other thing I wanted was the ability to use tags to simplify searching. Tags become a way of filtering searches. For example, all the snippets I use writing for Wired can be tagged wired and I can see them all in one view by typing "wired" in `fzf`. 

So my snippets file looks something like this:

````
<div class="cluster">
    <span class="row-2">
    </span>
</div>
tags:html cluster code

---
```python

```
tags: python code

---
````

Another goal, which you may notice above, is that I didn't want any format constraints. The snippets can contain just about any ASCII character. The tags line can have spaces or no spaces, commas, semicolons, whatever; it doesn't matter, because either way `fzf` can search it, and the tags will be stripped out before the snippet hits the clipboard. 

Here's the script I cobbled together to parse this text file into an array I can pass to `fzf`:

~~~python
import os
import re

# open() won't expand ~, so do it ourselves
with open(os.path.expanduser('~/.textsnippets.txt'), 'r') as f:
    data = f.read()
snips = re.split("---", data)
for snip in snips:
    # drop the empty lines the --- delimiter leaves at the start and end of each chunk
    s = '\n'.join(snip.split('\n')[1:-1])
    # make sure we output the newlines, but no string-wrapping single quotes
    print(repr(s.strip()).strip('\''))
~~~

All this script does is open a file, read the contents into a variable, split those contents on `---`, strip any extra space and then return the results to stdout. 

The only tricky part is the last line. We need to preserve the linebreaks and to do that I used [`repr`](https://docs.python.org/3.8/library/functions.html#repr), but that means Python literally prints the string, with the single quotes wrapping it. So the last `.strip('\'')` gets rid of those. 
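
If the `repr` dance seems opaque, here's roughly what it's doing, tried straight from the command line:

~~~console
$ python3 -c 'print(repr("line one\nline two"))'
'line one\nline two'
$ python3 -c 'print(repr("line one\nline two").strip(chr(39)))'
line one\nline two
~~~

The escaped `\n` is what lets each multiline snippet show up as a single searchable line in `fzf`.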

I saved that file to `~/bin` which is already on my `$PATH`.
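
If `~/bin` isn't already on your `$PATH`, one line in `.bashrc` (or your shell's equivalent) takes care of that:

~~~.bash
# make scripts in ~/bin callable by name
export PATH="$HOME/bin:$PATH"
~~~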

###Shell Scripting

The next thing we need to do is call this script, and pass the results to `fzf` so we can search them.

To do that I just wrote a bash script. 

~~~.bash
#!/usr/bin/env bash
selected="$(python ~/bin/snippet.py | fzf -i -e )"
#strip tags and any trailing space before sending to wl-copy
echo -e "$selected" | sed -e 's/tags:.*$//;$d' | wl-copy
~~~

What happens here is the Python script gets called, parses the snippets file into chunks of text, and then that is passed to `fzf`. After experimenting with some `fzf` options I settled on case-insensitive, exact match (`-i -e`) searching as the most efficient means of finding what I want. 

Once I search for and find the snippet I want, that selected bit of text is stored in a variable called, creatively, `selected`. The next line prints that variable, passes it to `sed` to strip out the tags, along with any space after that, and then sends that snippet of text to the clipboard via `wl-copy`.

I saved this file in a folder on my `PATH` (`~/bin`) and called it `fzsnip`. At this point I can run `fzsnip` in a terminal and everything works as I'd expect. As a bonus I have my snippets in a plain text file I can access to copy and paste snippets on my phone, tablet, and any other device where I can run [NextCloud](https://nextcloud.com/).
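
One step implied above but worth spelling out: the script has to be executable before you can call it by name, so something like:

~~~.bash
chmod +x ~/bin/fzsnip
~~~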

That's cool, but on my laptop I don't want to have to switch to the terminal every time I need to access a snippet. Instead I invoke a small terminal window wherever I am. To do that, I set up a keybinding in my Sway config file like this:

~~~.bash
bindsym $mod+s exec alacritty --class 'smsearch' --command bash -c 'fzsnip | xargs -r swaymsg -t command exec'
~~~

This is very similar to how I launch apps and search passwords, which I detailed in my post on [switching from i3 to Sway](/src/guide-to-switching-i3-to-sway). The basic idea is whatever virtual desktop I happen to be on, launch a new instance of [Alacritty](https://github.com/alacritty/alacritty), with the class `smsearch`. Assigning that class gives the new instance some styling I'll show below. The rest of the line fires off that shell script `fzsnip`. This allows me to hit `Alt+s` and get a small terminal window with a list of my snippets displayed. I search for the name of the snippet, hit return, the Alacritty window closes and the snippet is on my clipboard, ready to paste wherever I need it.

This line in my Sway config file styles the window class `smsearch`:

~~~.bash
for_window [app_id="^smsearch$"] floating enable, border none, resize set width 80 ppt height 60 ppt, move position 0 px 0 px
~~~

That puts the window in the upper left corner of the screen and, as written, sizes it at 80 percent of the screen width and 60 percent of the height. You can adjust the width and height to suit your tastes.

If you don't use Alacritty, adjust the command to use the terminal app you prefer. If you don't use Sway, you'll need to use whatever system-wide shortcut tool your window manager or desktop environment offers. Another possibility is using [Guake](https://github.com/Guake/guake), which might be able to do this for GNOME users, but I've never used it.

###Conclusion

I hope this gives anyone searching for a way to replace Autokey on Wayland some ideas. If you have any questions or run into problems, don't hesitate to drop a comment below.

Is it as nice as Autokey? I actually like this far better now. I often had trouble remembering my Autokey shortcuts, now I can search instead. 

As I said above, if I were a better bash scripter I'd get rid of the Python file and just use a bash loop. That would make it easy to wrap everything up in a neat package and distribute it, but as it is, there are too many moving parts to make it more than some cut-and-paste code.

####Shoulders Stood Upon

- [Using `fzf` instead of `dmenu`](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f) -- This is the post that got me thinking about ways I could use tools I already use (`fzf`, Alacritty) to accomplish more tasks.

# How to Use Ranger, the Command Line File Browser

date:2020-02-12 14:45:49
url:/src/how-use-ranger-command-line-file-browser

[Ranger](http://nongnu.org/ranger/) is a terminal-based file browser with Vim-style keybindings. It uses ncurses and can hook into all sorts of other command line apps to create an incredibly powerful file manager.

If you prefer a graphical experience, more power to you. I'm lazy. Since I'm already using the terminal for 90 percent of what I do, it makes sense not to leave it just because I want to browse files. 

The keyword here for me is "browse." I do lots of things to files without using Ranger. Moving, copying, creating, things like that I tend to do directly with `cp`, `mv`, `touch`, `mkdir` and so on. But sometimes you want to *browse* files, and in those cases Ranger is the best option I've used.

That said, Ranger is something of a labyrinth of commands and keeping track of them all can be overwhelming. If I had a dollar for every time I've searched "show hidden files in Ranger" I could buy you a couple beers (the answer, fellow searchers, is `zh`). 

I'm going to assume you're familiar with the basics of movement in Ranger like `h`, `j`, `k`, `l`, `gg`, and `G`. Likewise that you're comfortable with `yy`, `dd`, `pp`, and other copy, cut, and paste commands. If you're not, if you're brand new to ranger, check out [the official documentation](https://github.com/ranger/ranger/wiki/Official-user-guide) which has a pretty good overview of how to do all the basic stuff you'll want to do with a file browser. 

Here's a few less obvious shortcuts I use all the time. Despite some overlap with Vim, I do not find these particularly intuitive, and had a difficult time remembering them at first:

- `zh`: toggle hidden files
- `gh`: go home (`cd ~/`)
- `oc`: order by create date (newest at top)
- `7j`: jump down seven lines (any number followed by j or k will jump that many lines)
- `7G`: jump to line 7 (like Vim, any number followed by `G` will jump to that line)
- `.d`: show only directories
- `.f`: show only files
- `.c`: clear any filters (such as either of the previous two commands)

Those are handy, but if you really want to speed up Ranger and bend it to the way you work, the config file is your friend. What follows are a few things I've done to tweak Ranger's config file to make my life easier.

###Ranger Power User Recommendations

Enabling line numbers was a revelation for me. Open `~/.config/ranger/rc.conf`, search for `set line_numbers`, and change the value to either `absolute` or `relative`. The first option numbers lines from the top no matter what; the `relative` option numbers lines relative to the cursor. I can't stand relative, but absolute works great for me, YMMV.
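
For reference, the line in question ends up looking something like this (assuming you go with `absolute` like I did):

~~~bash
set line_numbers absolute
~~~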

Another big leap forward in my Ranger productivity came when I discovered local folder sorting options. As noted above, typing `oc` changes the sort order within a folder to sort by date created[^1]. While typing `oc` is pretty easy, there are some folders that I *always* want sorted by date modified. That's easily done with Ranger's `setlocal` config option. 

Here's a couple lines from my `rc.conf` file as an example:

~~~bash
setlocal path=~/notes sort mtime
setlocal path=~/notes/reading sort mtime
~~~

This means that every time I open `~/notes` or `~/notes/reading`, the files I've worked with most recently are right at the top where I can find them (note that you can also use `sort_reverse` instead of `sort`).

Having my most recent notes at the top of the pane is great, but what makes it even more useful is having line wrapped file previews so I don't even need to open the file to read it. To get that I currently use the latest Git version of Ranger which I installed via [Arch Linux's AUR](https://aur.archlinux.org/packages/ranger-git/).

This feature, which is invaluable to me since one of my common use cases for Ranger is quickly scanning a bunch of text files, has been [merged to master](https://github.com/ranger/ranger/pull/1322), but not released yet. If you don't [use Arch Linux](/src/why-i-switched-arch-linux) you can always build from source, or you can wait for the next release which should include an option to line wrap your previews.

###Bookmarks

Part of what makes Ranger incredibly fast are bookmarks. With two keystrokes I can jump between folders, move/copy files, and so on. 

To set a bookmark, navigate to the directory, then hit `m` and whatever letter you want to serve as the bookmark. Once you've bookmarked it, type `` `<letter>`` to jump straight to that directory. I try to use Vim-like mnemonics for my bookmarks, e.g. `` `d`` takes me to documents, `` `n`` takes me to `~/notes`, `` `l `` takes me to the dev folder for this site, and so on. As with the other commands, typing just `` ` `` will bring up a list of your bookmarks.  

###Conclusion

Ranger is incredibly powerful and almost infinitely customizable. In fact I don't think I really appreciated how customizable it was until I wrote this and dug a little deeper into all the ways you can map shell scripts to one or two character shortcuts. It can end up being a lot to keep track of though. I suggest learning maybe one or two new shortcuts a week. When you no longer have to think about them, move on to the next couple.

Or you can do what I do: wait until there's something you want to do but don't know how, figure out how to do it, then write it down so you remember it.
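
If you're curious what mapping a shell command to a shortcut looks like, it's a one-liner in `rc.conf`. This is a hypothetical example, not something from my own config -- `%s` expands to the currently selected files and the `-w` flag makes Ranger wait for a keypress so you can see the output:

~~~bash
# hypothetical: archive the selected files with two keystrokes
map gz shell -w tar czvf archive.tar.gz %s
~~~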

####Shoulders Stood Upon

* [Dquinton's Ranger setup details](http://dquinton.github.io/debian-install/config/ranger.html) - I have no idea who this person is, but their Ranger setup and detailed notes were hugely helpful.
* [Ranger documentation](https://ranger.github.io/ranger.1.html) - The docs have a pretty good overview of the options available, though sometimes it's challenging to translate that into real world use cases.
* [Arch Wiki Ranger page](https://wiki.archlinux.org/index.php/Ranger) - Where would we be without the Arch Wiki?



[^1]: In fact, just type `o` and you'll get a list of other sorting options (and if you know what `normal` means, drop me a comment below -- I'm still trying to figure that one out).

# A Guide to Switching From i3 to Sway

date:2020-01-14 10:20:01
url:/src/guide-to-switching-i3-to-sway

[*Updated June 2023: While I do still love Sway, fighting to get video and audio editors working properly in Wayland took too much time. I gave up and went back to [X.org with Openbox](https://luxagraf.net/src/back-to-x11).*]

I recently made the switch from the [i3 tiling window manager](https://i3wm.org/) to [Sway](https://swaywm.org/), a Wayland-based i3 clone. I still [run Arch Linux on my personal machine](/src/why-i-switched-arch-linux), so all of this is within the context of Arch.

I made the switch for a variety of reasons. There's the practical: Sway/Wayland gives me much better battery life on my laptop. As well as the more philosophical: Sway's lead developer Drew Devault's take on code is similar to mine[^1] (e.g. [avoid traumatic changes](https://drewdevault.com/2019/11/26/Avoid-traumatic-changes.html) or [avoid dependencies](https://drewdevault.com//2020/02/06/Dependencies-and-maintainers.html)), and after reading his blog for a year he's someone whose software I trust. 

I know some people would think this reason ridiculous, but it's important to me that the software I rely on be made by people I like and trust. Software is made by humans, for humans. The humans are important. And yes, it goes the other way too. I'm not going to name names, but there is some theoretically good software out there that I refuse to use because I do not like or trust the people who make it.

When I find great software made by people who seem trustworthy, I use it. So I switched to Sway and it's been a good experience.

Sway and Wayland have been very stable in my use. I get about 20 percent more out of my laptop battery. That seems insane to me, but as someone who [lives almost entirely off solar power](/1969-dodge-travco-motorhome) it's a huge win I can't ignore.

### Before You Begin

I did not blindly switch to Sway. Or rather I did and that did not go well. I switched back after a few hours and started doing some serious searching, both the search engine variety and the broader, what am I really trying to do here, variety. 

The latter led me to change a few tools, replace some things, and try some new workflows. Not all of it panned out. I could never get imv to do the things I can do with feh, for instance, but mostly it was good.

One thing I really wanted to do was avoid XWayland (which allows apps that need X11 to run under Wayland). Wherever I could I've opted for applications that run natively under Wayland. There's nothing wrong with XWayland, that was just a personal goal, for fun.

Here are my notes on making the transition to Wayland, along with the applications I use most frequently.

##### Terminal

I do almost everything in the terminal. I write in Vim, email with mutt, read RSS feeds with newsboat, listen to music with mpd, and browse files with ranger.

I tested quite a few Wayland-native terminals and I really like [Alacritty](https://github.com/alacritty/alacritty). Highly recommended. [Kitty](https://github.com/kovidgoyal/kitty) is another option to consider.

<s>That said, I am sticking with urxvt for now. There are two problems for me with Alacritty. First off Vim doesn't play well with the Wayland clipboard in Alacritty. Second, Ranger will not show image previews in Alacritty.</s>

*Update April 2021:* I have never really solved either of these issues, but I switched to Alacritty anyway. I use Neovim instead of Vim, which was a mostly transparent switch, and Neovim supports the Wayland clipboard. As for previews in Ranger... I forgot about those. They were nice. But I guess I don't miss them that much.


##### Launcher

I've always used dmenu to launch apps and grab passwords from pass. It's simple and fast. Unfortunately dmenu is probably never going to run natively in Wayland. 

I tested rofi, wofi, and other potential replacements, but I did not like any of them. Somewhere in my search for a replacement launcher I ran across [this post](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f) which suggested just calling up a small terminal window and piping a list of applications to [fzf](https://github.com/junegunn/fzf), a blazing fast search tool.

That's what I've done and it works great. I created a keybinding to launch a new instance of Alacritty with a class name that I use to resize the window. Then within that small Alacritty window I call `compgen` to get a list of executables, then sort it to eliminate duplicates, and pass the results to fzf. Here's the code in my Sway config file:

~~~console
bindsym $mod+Space exec alacritty --class 'launcher' --command bash -c 'compgen -c | sort -u | fzf | xargs -r swaymsg -t command exec'

for_window [app_id="^launcher$"] floating enable, border none, resize set width 25 ppt height 20 ppt, move position 0 px 0 px
~~~

These lines together will open a small terminal window in the upper left corner of the screen with a fzf search interface. I type, for example, "dar" and Darktable comes up. I hit return, the terminal window closes, and Darktable launches. It's as simple as dmenu and requires no extra applications (since I was already using fzf in Vim).

If you don't want to go that route, Bemenu is a dmenu-like launcher that runs natively in Wayland.

##### Browsers

I mainly use [qutebrowser](https://qutebrowser.org/), supplemented by [Vivaldi](https://vivaldi.com/)[^2] because its split-screen tabs are brilliant for research. I also use [Firefox Developer Edition](https://www.mozilla.org/en-US/firefox/developer/) for any web development work, because the Firefox dev tools are far superior to anything else.

All three work great under Wayland. In the case of qutebrowser, though, you'll need to set a few shell variables to get it to start under Wayland; out of the box it launches with XWayland for some reason. Here's what I added to `.bashrc` to get it to work:

~~~bash
export XDG_SESSION_TYPE=wayland 
export GDK_BACKEND=wayland
~~~

One thing to bear in mind if you still have a lot of X11 apps: with this in your shell, those apps won't launch until you set `GDK_BACKEND` back to X11. Instead you'll get an error like `Gtk-WARNING **: cannot open display: :0`. To fix that, set `GDK_BACKEND=x11`, then launch your X11 app.

There are several ways you can do this, but I prefer to override apps in `~/bin` (which is on my $PATH). So, for example, I have a file named `xkdenlive` in `~/bin` that looks like this:

~~~bash
#! /bin/sh
GDK_BACKEND=x11 kdenlive
~~~

Note that for me this is easier, because the only apps I'm using that need X11 are Kdenlive and Slack. If you have a lot of X11 apps, you're probably better off making qutebrowser the special case by launching it like this:

~~~bash
GDK_BACKEND=wayland qutebrowser
~~~

##### Clipboard

I can't work without a clipboard manager. I keep the last 200 things I've copied, and I like to have some things permanently stored as well.

Clipman does a good job of saving clipboard history.

You need to have wl-clipboard installed, since Clipman reads from and writes to it. I also use wofi instead of the default dmenu for viewing and searching clipboard history. Here's how I set up Clipman in my Sway config file:

~~~bash
exec wl-paste -t text --watch clipman store --max-items=60 --histpath="~/.local/share/clipman.json"
bindsym $mod+h exec clipman pick --tool="wofi" --max-items=30 --histpath="~/.local/share/clipman.json"
~~~

Clipman does not, however, have a way to permanently store bits of text. That's fine. Permanently stored bits of frequently used text are really not all that closely related to clipboard items and lumping them together in a single tool isn't a very Unix-y approach. Do one thing, do it well.

For snippets I ended up bending [pet](https://github.com/knqyf263/pet), the "command line snippet manager," a little and combining it with the small launcher-style window idea above. So I store snippets in pet, mostly just `printf "my string of text"`, call up an Alacritty window, search, and hit return to inject the pet snippet into the clipboard. Then I paste it where I need it. 
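
I won't walk through the whole pet setup here, but as a rough sketch (the keybinding, class name, and piping are assumptions modeled on the launcher above, not lifted verbatim from my config), the binding looks something like this in the Sway config:

~~~console
bindsym $mod+p exec alacritty --class 'launcher' --command bash -c 'pet search | wl-copy'
~~~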

##### Volume Controls

Sway handles volume controls with pactl. Drop this in your Sway config file and you should be good:

~~~bash
bindsym XF86AudioRaiseVolume exec pactl set-sink-volume @DEFAULT_SINK@ +5%
bindsym XF86AudioLowerVolume exec pactl set-sink-volume @DEFAULT_SINK@ -5%
bindsym XF86AudioMute exec pactl set-sink-mute @DEFAULT_SINK@ toggle
bindsym XF86AudioMicMute exec pactl set-source-mute @DEFAULT_SOURCE@ toggle
~~~

##### Brightness 

I like [light](https://github.com/haikarainen/light) for brightness. Once it's installed these lines from my Sway config file assign it to my brightness keys:

~~~bash
bindsym --locked XF86MonBrightnessUp exec --no-startup-id light -A 10
bindsym --locked XF86MonBrightnessDown exec --no-startup-id light -U 10
~~~

### Quirks, Annoyances And Things I Haven't Fixed

There have been surprisingly few of these, save the Vim and Ranger issues mentioned above.

<s>I haven't found a working replacement for xcape. The only thing I used xcape for was to make my Cap Lock key dual-function: press generates Esc, hold generates Control. So far I have not found a way to do this in Wayland. There is ostensibly [caps2esc](https://gitlab.com/interception/linux/plugins/caps2esc), but it's poorly documented and all I've been able to reliably do with it is crash Wayland.</s>

*Update April 2021*: I managed to get caps2esc working. First you need to install it, for Arch that's something like:

~~~bash
yay -S interception-caps2esc
~~~

Once it's installed you need to create the config file. I keep mine at `/etc/interception/udevmon.d/caps2esc.yaml`. Open that up and paste in these lines:

~~~yaml
- JOB: "intercept -g $DEVNODE | caps2esc | uinput -d $DEVNODE"
  DEVICE:
    EVENTS:
      EV_KEY: [KEY_CAPSLOCK, KEY_ESC]
~~~

Then you need to start and enable the `udevmon` service unit, which is what runs the caps2esc code:

~~~bash
sudo systemctl start udevmon
sudo systemctl enable udevmon
~~~

The last thing to do is restart. Once you've rebooted you should be able to hold down caps_lock and have it behave like control, but a quick press will give you escape instead. This is incredibly useful if you're a Vim user.

The only other problem I've run into is the limited range of screen recording options -- there's wf-recorder and that's about it. It works well enough for what I do, though. 

I've been using Sway exclusively for a year and a half now and I have no reason or desire to ever go back to anything else. The rest of my family isn't fond of the tiling aspect of Sway, so I do still run a couple of laptops with Openbox. I'd love to see a Wayland Openbox clone that's usable. I've played with [labwc](https://github.com/johanmalm/labwc), which is promising, but lacks a tint2-style launcher, which is really what I need (i.e., a system tray with launcher buttons, which Waybar does not have). Anyway, I am keeping an eye on labwc because it looks like a good project.

That's how I did it. But I am just one person. If you run into snags, feel free to drop a comment below and I'll see if I can help.

### Helpful pages:

- **[Sway Wiki](https://github.com/swaywm/sway/wiki)**: A good overview of Sway, config examples (how to replicate things from i3), and application replacement tips for i3 users (like this fork of [redshift](https://github.com/minus7/redshift/tree/wayland) with support for Wayland).
- **[Arch Wiki Sway Page](https://wiki.archlinux.org/index.php/Sway)**: Another good Sway resource with solutions to a lot of common stuff: set wallpaper, take screenshots, HiDPI, etc.
- **[Sway Reddit](https://old.reddit.com/r/swaywm/)**: There's some useful info here, worth searching if you run into issues. Also quite a few good tips and tricks from fellow Sway users with more experience. 
- **[Drew Devault's Blog](https://drewdevault.com/)**: He doesn't always write about Sway, but he does give updates on what he's working on, which sometimes has details on Sway updates.


[^1]: That's not to imply there's anything wrong with the i3 developers.

[^2]: Vivaldi would be another good example of me trusting a developer. I've been interviewing Jon von Tetzchner for many years, all the way back to when he was at Opera. I don't always see eye to eye with him (I wish Vivaldi were open source) but I trust him, so I use Vivaldi. It's the only software I use that's not open source (not including work, which requires quite a few closed source crap apps).

# Why I Ditched Vagrant for LXD

date:2019-04-07 21:09:02
url:/src/why-and-how-ditch-vagrant-for-lxd

***Updated July 2022**: This was getting a bit out of date in some places so I've fixed a few things. More importantly, I've run into some issues with cgroups and lxc on Arch and added some notes below under the [special note to Arch users](#arch)*

I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box](/src/create-custom-debian-9-vagrant-box) to mirror the server running this site. 

The problem with Vagrant is that I have to run a huge memory-hungry virtual machine when all I really want to do is run Django's built-in dev server. 

My laptop only has 8GB of RAM. My browser is usually taking around 2GB, which means if I start two Vagrant machines, I'm pretty much maxed out. Django's dev server is also painfully slow to reload when anything changes.

Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, but hadn't tried anything else, he told me I really needed to try LXD. Later that day I began reading through the [LinuxContainers](https://linuxcontainers.org/) site and tinkering with LXD. Now, a few days later, there's not a Vagrant machine left on my laptop.

Since it's just me, I don't care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation."

Here's how I'm using LXD to manage containers for Django development on Arch Linux. I've also included instructions and commands for Ubuntu since I set it up there as well.

### What's the difference between LXC, LXD and `lxc`

I wrote this guide in part because I've been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It's really not though, in fact I found it easier to understand than Vagrant or Docker.

So what is a LXC container, what's LXD, and how are either different than say a VM or for that matter Docker?

* LXC - the low-level tools and library used to create and manage containers; powerful, but complicated.
* LXD - a daemon that provides a REST API to drive LXC containers; much more user-friendly.
* `lxc` - the command line client for LXD.

In LXC parlance a container is essentially a virtual machine, if you want to get pedantic, see Stéphane Graber's post on the [various components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/). For the most part though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket, potato, potahto. Mostly.

An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that. 

Now what about LXD? LXC is the low level tool, you don't really need to go there. Instead you interact with your LXC container via the LXD API. It uses YAML config files and a command line tool `lxc`.

That's the basic stack, let's install it.

### Install LXD

On Arch I used the version of [LXD in the AUR](https://aur.archlinux.org/packages/lxd/). Ubuntu users should go with the Snap package. The other thing you'll want is your distro's Btrfs or ZFS tools. 

Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file the way Virtualbox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container. 

In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if I could spot any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. LXC turtles all the way down).
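
For the Arch side, the install is just the AUR package plus the Btrfs userspace tools; with an AUR helper like `yay` that's roughly (package names may have moved since this was written, so double-check):

~~~~console
yay -S lxd
sudo pacman -S btrfs-progs
~~~~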

Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands:

~~~~console
apt install zfsutils-linux
~~~~

And then install the snap version of lxd with:

~~~~console
snap install lxd
~~~~

Once that's done we need to initialize LXD. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen:

~~~~console
sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=15GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]:    
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
~~~~

LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. To see all installed profiles use:

~~~~console
lxc profile list
~~~~

To view the contents of a profile use:

~~~~console
lxc profile show <profilename>
~~~~

To edit a profile use:

~~~~console
lxc profile edit <profilename>
~~~~

So far I haven't needed to edit a profile by hand. I've also been happy with all the defaults although, when I do this again, I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I'm just trying to figure things out so defaults it is. 

The last step in our setup is to add our user to the lxd group. By default LXD runs as the lxd group, so to interact with containers we'll need to make our user part of that group.

~~~~console
sudo usermod -a -G lxd yourusername
~~~~
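
Group changes don't apply to shells that are already running, so either log out and back in, or start a new shell with the group active:

~~~~console
newgrp lxd
~~~~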

#####Special note for Arch users. {:#arch } 

To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions, then reboot, and everything below should work as you'd expect.

Or at least it did until about June of 2022 when something changed with cgroups and I stopped being able to run my lxc containers. I kept getting errors like:

~~~~console
Failed to create cgroup at_mnt 24() 
lxc debian-base 20220713145726.259 ERROR conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:851 - No such file or directory - Failed to mount "/sys/fs/cgroup"
~~~~

I tried debugging, and reading through all the bug reports I could find over the course of a couple of days and got nowhere. No one else seems to have this problem. I gave up and decided I'd skip virtualization and develop directly on Arch. I installed PostgreSQL... and it wouldn't start, also throwing an error about cgroups. That is when I dug deeper into cgroups and found a way to revert to the older behavior. I added this line to my boot params (in my case that's in /boot/loader/entries/arch.conf):

~~~~console
systemd.unified_cgroup_hierarchy=0
~~~~

That fixed all the issues for me. If anyone can explain *why* I'd be interested to hear from you in the comments.

### Create Your First LXC Container

Let's create our first container. This website runs on a Debian VM currently hosted on Vultr.com so I'm going to spin up a Debian container to mirror this environment for local development and testing.

To create a new LXC container we use the `launch` command of the `lxc` tool. 

There are four sources for LXC containers: local (meaning a container base you've already downloaded), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily Ubuntu images). The images on linuxcontainers.org are unofficial, but the Debian image I used worked perfectly. There are also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, and Sabayon images, plus lots of Ubuntu images. Pretty much every architecture you could imagine is in there too. 

I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example:

~~~~console
lxc launch images:debian/stretch/amd64 debian-base
Creating debian-base
Starting debian-base
~~~~

That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. Now if we look at the list of installed containers we should see something like this:

~~~~console
lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                                                                         
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                                                                         
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                                                                         
| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT |           |                                                                                         
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+  
~~~~

Now what? This is what I love about LXC, we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell:

~~~~console
lxc exec debian-base -- /bin/bash
~~~~

Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the Geodjango requirements (Postgis, GDAL, etc), along with a few other odds and ends. 

You don't have to do it from inside the container though. Part of LXD's charm is to be able to run commands without logging into anything. Instead you can do this:

~~~~console
lxc exec debian-base -- apt update
lxc exec debian-base -- apt install postgresql postgis virtualenv
~~~~

LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base -- '` so that all I need to type is `luxdev <command>`.  

What I haven't figured out is how to chain commands; this does not work:

~~~~console
lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000
~~~~

According to [a bug report](https://github.com/lxc/lxd/issues/2057), it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong.
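
For anyone who wants to experiment, the quoted form would look roughly like this (paths are placeholders) -- in theory the inner shell handles the `&&`s instead of the host shell:

~~~~console
lxc exec debian-base -- su - lxf -c 'cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000'
~~~~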

The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user id. Use the `id` command to get your user and group id (it's probably 1000 but if not, adjust the commands below). Once you have your user id, add it to the files with this one liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd):

~~~~console
echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
~~~~

Then you need to configure your LXC instance to use the same uid:

~~~~console
lxc config set debian-base raw.idmap 'both 1000 1000'
~~~~

The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect.

~~~~console
lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc
lxc stop debian-base
lxc start debian-base
~~~~

That replicates my setup in Vagrant, but we've really just scratched the surface of what you can do with LXD. For example you'll notice I named the initial container "debian-base". That's because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container:

~~~~console
lxc snapshot debian-base debian-base-configured
lxc copy debian-base/debian-base-configured mycontainer
~~~~

Now you've got a new container named mycontainer. If you'd like to tweak anything, for example mount a different folder specific to this new project you're starting, you can edit the config file like this:

~~~~console
lxc config edit mycontainer
~~~~

I highly suggest reading through Stéphane Graber's 12 part series on LXD to get a better idea of other things you can do, how to manage resources, manage local images, migrate containers, or connect LXD with Juju, Openstack or yes, even Docker.

#####Shoulders stood upon

* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - Graber wrote LXC and LXD, this is the best resource I found and highly recommend reading it all.
* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd)
* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/)
* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996)
* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained)


[^1]: To be fair, I didn't need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great, why add yet another tool to the mix? Unless you're working with developers who use Windows, in which case LXC, which is short for *Linux Containers*, is not for you.

# Create a Debian 9 Stretch Vagrant Box

date:2019-02-24 15:45:53
url:/src/create-custom-debian-9-vagrant-box

I'm a little old fashioned with my love of Vagrant. I should probably keep up with the kids, dig into Docker and containers, but I like managing servers. I like to have the whole VM at my disposal. 

[**Note**: Everything here is still true and will work, but I have [switched to using `lxd` rather than Vagrant](/src/why-and-how-ditch-vagrant-for-lxd). If I were using Vagrant though, I would still absolutely be using my own Debian image.]

Why Vagrant? Well, I run Arch Linux on my laptop, but I usually deploy sites to either Debian, preferably v9, "Stretch", or (if a client is using AWS) Ubuntu, which means I need a virtual machine to develop and test in. Vagrant is the easiest way I've found to manage that workflow.

When I'm deploying to Ubuntu-based machines I develop using the [Canonical-provided Vagrant box](https://app.vagrantup.com/ubuntu/boxes/bionic64) available through Vagrant's [cloud site](https://app.vagrantup.com/boxes/search). There is, however, no official Debian box provided by Debian. Worse, the most popular Debian 9 box on the Vagrant site has only 512MB of RAM. I prefer to have 1 or 2GB of RAM to mirror the cheap, but surprisingly powerful, [Vultr VPS instances](https://www.vultr.com/?ref=6825229) I generally use (You can use them too, in my experience they're faster and slightly cheaper than Digital Ocean. Here's a referral link that will get you [$50 in credit](https://www.vultr.com/?ref=7857293-4F)). 

That means I get to build my own Debian Vagrant box. 

Building a Vagrant base box from Debian 9 "Stretch" isn't hard, but most tutorials I found were outdated or relied on third-party tools like Packer. Why you'd want to install, set up, and configure a tool like Packer to build one base box is a mystery to me. It's far faster to do it yourself by hand (which is not to slag Packer, it *is* useful when you're building an image for AWS or Digital Ocean or another provider).

Here's my guide to building a Debian 9 "Stretch" Vagrant Box.

### Create a Debian 9 Virtual Machine in Virtualbox

We're going to use Virtualbox as our Vagrant provider because, while I prefer qemu for its speed, I run into more compatibility issues with qemu. Virtualbox seems to work everywhere. 

First install Virtualbox, either by [downloading an image](https://www.virtualbox.org/wiki/Downloads) or, preferably, using your package manager/app store. We'll also need the latest version of Debian 9's netinst CD image, which you can [grab from the Debian project](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/) (scroll to the bottom of that page for the actual downloads).

Once you've got a Debian CD, fire up Virtualbox and create a new virtual machine. In the screenshot below I've selected Expert Mode so I can go ahead and up the RAM (in the screenshot version I went with 1GB).

<img src="images/2019/debian9-vagrant-base-box-virtualmachine.jpg" id="image-1859" class="picfull" />

Click "Create" and Virtualbox will ask you about the hard drive, I stick with the default type, but bump the size to 40GB, which matches the VPS instances I use.

<img src="images/2019/debian9-vagrant-base-box-virtualdisk.jpg" id="image-1860" class="picfull" />

Click "Create" and then go to the main Virtualbox screen, select your new machine and click "Settings". Head to the audio tab and uncheck the Enable Audio option. Next go to the USB tab and disable USB.

<img src="images/2019/debian9-vagrant-base-box-no-audio.jpg" id="image-1855" class="picfull" />
<img src="images/2019/debian9-vagrant-base-box-no-usb.jpg" id="image-1856" class="picfull" />

Now click the network tab and make sure Network Adapter 1 is set to NAT. Click the "Advanced" arrow and then click the button that says Port Forwarding. Add a port forwarding rule. I call mine SSH, but the name isn't important. The important part is that the protocol is TCP, the Host and Guest IP address fields are blank, the Host port is 2222, the Guest port is 22. 

<img src="images/2019/debian9-vagrant-base-box-port-forward_EqGwcg4.jpg" id="image-1858" class="picfull" />

Hit okay to save your changes on both of those screens and now we're ready to boot Debian. 

### Install Debian

To get Debian installed first click the start button for your new VM and Virtualbox will boot it up and ask you for the install CD. Navigate to wherever you saved the Debian netinst CD we downloaded earlier and select that. 

That should boot you to the Debian install screen. The most important thing here is to make sure you choose the second option, "Install", rather than "Graphical Install". Since we disabled USB, we won't have access to the mouse and the Debian graphical installer won't work. Stick with plain "Install".

<img src="images/2019/debian9-vagrant-base-box-vm-install.jpg" id="image-1861" class="picfull" />

From here it's just a standard Debian install. Select the appropriate language, keyboard layout, hostname (doesn't matter), and network name (also doesn't matter). Set the root password to something you'll remember. Debian will then ask you to create a user. Create a user named "vagrant" (I used "vagrant" for the fullname and username) and set the password to "vagrant".

Tip: to select (or unselect) a check box in the Debian installer, hit the space bar.

Then Debian will get the network time, ask what timezone you're in and start setting up the disk. I go with the defaults all the way through. Next Debian will install the base system, which takes a minute or two.

Since we're using the netinst CD, Debian will ask if we want to insert any other CDs (no), and then it will ask you to choose which mirrors to download packages from. I went with the defaults. Debian will then install Linux, udev and some other basic components. At some point it will ask if you want to participate in the Debian package survey. I always go with no because I feel like a virtual machine might skew the results in unhelpful ways, but I don't know, maybe I'm wrong on that.

After that you can install your software. For now I uncheck everything except standard system utils (remember, you can select and unselect items by hitting the space bar). Debian will then go off and install everything, ask if you want to install Grub (you do -- select your virtual disk as the location for grub), and congratulations, you're done installing Debian. 

Now let's build a Debian 9 base box for Vagrant.

### Set up Debian 9 Vagrant base box

Since we've gone to the trouble of building our own Debian 9 base box, we may as well customize it. 

The first thing to do after you boot into the new system is to install sudo and set up our vagrant user as a passwordless superuser. Login to your new virtual machine as the root user and install sudo. You may as well add ssh while you're at it:

~~~~console
apt install sudo ssh
~~~~

Now we need to add our vagrant user to the sudoers list. To do that we need to create and edit the file:

~~~~console
visudo -f /etc/sudoers.d/vagrant
~~~~

That will open a new file where you can add this line:

~~~~console
vagrant ALL=(ALL) NOPASSWD:ALL
~~~~

Hit control-x, then "y" and return to save the file and exit nano. Now logout of the root account by typing `exit` and login as the vagrant user. Double check that you can run commands with `sudo` without a password by typing `sudo ls /etc/` or similar. If you didn't get asked for a password then everything is working.

Now we can install the Vagrant insecure SSH key. Vagrant sends commands from the host machine over SSH using what the Vagrant project calls an insecure key, so called because everyone has it. We could, in theory, all hack each other's Vagrant boxes. If this concerns you, it's not that complicated to set up your own more secure key, but I suggest doing that in your Vagrant instance, not the base box. For the base box, use the insecure key.

Make sure you're logged in as the vagrant user and then use these commands to set up the insecure SSH key:

~~~~console
mkdir ~/.ssh
chmod 0700 ~/.ssh
wget https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
chown -R vagrant ~/.ssh
~~~~

Confirm that the key is in fact in the `authorized_keys` file by typing `cat ~/.ssh/authorized_keys`, which should print out the key for you. Now we need to set up SSH to allow our vagrant user to sign in:

~~~~console
sudo nano /etc/ssh/sshd_config
~~~~

Uncomment the line `AuthorizedKeysFile ~/.ssh/authorized_keys ~/.ssh/authorized_keys2` and hit `control-x`, `y` and `enter` to save the file. Now restart SSH with this command:

~~~~console
sudo systemctl restart ssh
~~~~

### Install Virtualbox Guest Additions 

The Virtualbox Guest Additions allow for nice extras like shared folders, as well as a performance boost. Since the Guest Additions require a compiler and Linux header files, let's first get the prerequisites installed:

~~~~console
sudo apt install gcc build-essential linux-headers-amd64
~~~~

Now head to the VirtualBox window menu and click the "Devices" option and choose "Insert Guest Additions CD Image" (note that you should download the latest version if Virtualbox asks[^1]). That will insert an ISO of the Guest Additions into our virtual machine's CDROM drive. We just need to mount it and run the Guest Additions Installer:

~~~~console
sudo mount /dev/cdrom /mnt
cd /mnt
sudo ./VBoxLinuxAdditions.run
~~~~

Assuming that finishes without error, you're done. Congratulations. Now you can add any extras you want your Debian 9 Vagrant base box to include. I primarily build things in Python with Django and Postgresql, so I always install packages like `postgresql`, `python3-dev`, `python3-pip`, `virtualenv`, and some other software I can't live without. I also edit the `.bashrc` file to create some aliases and helper scripts. Whatever you want all your future Vagrant boxes to have, now is the time to install it.

### Packaging your Debian 9 Vagrant Box

Before we package the box, we're going to zero out the drive to save a little space when we compress it down the road. Here's the commands to zero it out:

~~~~console
sudo dd if=/dev/zero of=/zeroed bs=1M
sudo rm -f /zeroed
~~~~

Once that's done we can package up our box with this command:

~~~~console
vagrant package --base debian9-64base
==> debian9-64base: Attempting graceful shutdown of VM...
==> debian9-64base: Clearing any previously set forwarded ports...
==> debian9-64base: Exporting VM...
==> debian9-64base: Compressing package to: /home/lxf/vms/package.box
~~~~

As you can see from the output, I keep my Vagrant boxes in a folder called `vms`; you can put yours wherever you like. Wherever you decide to keep it, move the package there now and cd into that folder so you can add the box. Sticking with the `vms` folder I use, the commands look like this:

~~~console
cd vms
vagrant box add debian9-64 package.box
~~~

Now when you want to create a new vagrant box from this base box, all you need to do is add this to your Vagrantfile:

~~~~console
Vagrant.configure("2") do |config|
  config.vm.box = "debian9-64"
end
~~~~

Then you start up the box as you always would:

~~~~console
vagrant up
vagrant ssh
~~~~

#####Shoulders stood upon

* [Vagrant docs](https://www.vagrantup.com/docs/virtualbox/boxes.html)
* [Engineyard's guide to Ubuntu](https://www.engineyard.com/blog/building-a-vagrant-box-from-start-to-finish)
* [Customizing an existing box](https://scotch.io/tutorials/how-to-create-a-vagrant-base-box-from-an-existing-one) - Good for when you don't need more RAM/disk space, just some software pre-installed.

[^1]: On Arch, using Virtualbox 6.x I have had problems downloading the Guest Additions. Instead I've been using the package `virtualbox-guest-iso`. Note that after you install that, you'll need to reboot to get Virtualbox to find it.

# Install Gitea with Nginx, Postgresql on Ubuntu 18.04

date:2018-10-12 08:43:47
url:/src/gitea-nginx-postgresql-ubuntu-1804

I've never liked hosting my git repos on someone else's servers. GitHub especially is not a company I'd do business with, ever. I do have a repo or two hosted over at [GitLab](https://gitlab.com/luxagraf) because those are projects I want to be easily available to anyone. But I store almost everything in git -- notes, my whole documents folder, all my code projects, all my writing, pretty much everything is in git -- but I like to keep all that private and on my own server.

For years I used [Gitlist](http://gitlist.org/) because it was clean, simple, and did 95 percent of what I needed in a web-based interface for my repos. But Gitlist is abandonware at this point and broken if you're using PHP 7.2. There are a few forks that [patch it](https://github.com/patrikx3/gitlist), but it's copyrighted to the original dev and I don't want to depend on illegitimate forks for something so critical to my workflow. Then there's self-hosted Gitlab, which I like, but the system requirements are ridiculous.

Some searching eventually led me to Gitea, which is lightweight, written in Go and has everything I need. 

Here's a quick guide to getting Gitea up and running on your Ubuntu 18.04 -- or similar -- VPS.

### Set up Gitea

The first thing we're going to do is isolate Gitea from the rest of our server; running it under a dedicated user seems to be the standard practice. Installing Gitea via the Arch User Repository creates a `git` user, so that's what I used on Ubuntu 18.04 as well. 

Here's a shell command to create a user named `git`:

~~~~console
sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git
~~~~

This is pretty much a standard adduser command such as you'd use when setting up a new VPS, the only difference being the `--disabled-password` flag, so you can't actually log in with it. While we will use this user to authenticate over SSH, we'll do so with a key, not a password.

Now we need to grab the latest Gitea binary. At the time of writing that's version 1.5.2, but be sure to check the [Gitea downloads page](https://dl.gitea.io/gitea/) for the latest version and adjust the commands below to work with that version number. Let's download the Gitea binary and then we'll verify the signing key. Verifying keys is very important when working with binaries since you can't see the code behind them[^1].

~~~~console
wget -O gitea https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64
gpg --keyserver pgp.mit.edu --recv 0x2D9AE806EC1592E2
wget https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64.asc
gpg --verify gitea-1.5.2-linux-amd64.asc gitea
~~~~

A couple of notes here: GPG should report a good signature, but it will also warn that "this key is not certified with a trusted signature!" That means, essentially, that this binary could have been signed by anybody. All we know for sure is that it wasn't tampered with in transit[^1].

Now let's make the binary executable and test it to make sure it's working:

~~~~console
chmod +x gitea
./gitea web
~~~~

You can stop Gitea with `Ctrl+C`. Let's move the binary to a more traditional location:

~~~~console
sudo cp gitea /usr/local/bin/gitea
~~~~

The next thing we're going to do is create all the directories we need. 

~~~~console
sudo mkdir -p /var/lib/gitea/{custom,data,indexers,public,log}
sudo chown git:git /var/lib/gitea/{data,indexers,log}
sudo chmod 750 /var/lib/gitea/{data,indexers,log}
sudo mkdir /etc/gitea
sudo chown root:git /etc/gitea
sudo chmod 770 /etc/gitea
~~~~

That last line should make you nervous; 770 is too permissive for a directory that holds your config. But don't worry, as soon as we're done setting up Gitea we'll tighten the permissions on that directory and the config file inside it. 
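
To make sure that doesn't get lost: once the web installer has written `app.ini`, the tightening amounts to something like this (these are the values the Gitea docs suggest):

~~~~console
sudo chmod 750 /etc/gitea
sudo chmod 640 /etc/gitea/app.ini
~~~~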

Before we do that though let's create a systemd service file to start and stop Gitea. The Gitea project has a service file that will work well for our purposes, so let's grab it, make a couple changes and then we'll add it to our system:

~~~~console
wget https://raw.githubusercontent.com/go-gitea/gitea/master/contrib/systemd/gitea.service 
~~~~

Now open that file and uncomment the line `After=postgresql.service` so that Gitea starts after postgresql is running. The resulting config file should look like this:

~~~~ini
[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
#After=mysqld.service
After=postgresql.service
#After=memcached.service
#After=redis.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/var/lib/gitea/
ExecStart=/usr/local/bin/gitea web -c /etc/gitea/app.ini
Restart=always
Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea
# If you want to bind Gitea to a port below 1024 uncomment
# the two values below
###
#CapabilityBoundingSet=CAP_NET_BIND_SERVICE
#AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
~~~~

Now we need to move the service file to somewhere systemd expects it and then start and enable the service so Gitea will launch automatically when the server boots.

~~~~console
sudo cp gitea.service /etc/systemd/system/
sudo systemctl enable gitea
sudo systemctl start gitea
~~~~
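
Before moving on, it doesn't hurt to confirm the service actually came up; `systemctl status` and the journal will show any startup errors:

~~~~console
sudo systemctl status gitea
sudo journalctl -u gitea -n 50
~~~~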

There you have it: Gitea is installed, running, and will automatically start whenever we restart the server. Now we need to set up PostgreSQL and then Nginx to serve up our Gitea site to the world. Or at least to us.

### Set up PostgreSQL and Nginx

Gitea needs a database to store all our data in; I use PostgreSQL. You can also use MySQL, but you're on your own there. Install PostgreSQL if you haven't already:

~~~~console
sudo apt install postgresql
~~~~

Now let's create a new database user and database for Gitea. The `-P` flag will prompt you to set a password for the new user; hang on to it, you'll need it in the Gitea installer later:

~~~~console
sudo su postgres
createuser -P gitea
createdb gitea -O gitea
~~~~

Exit the postgres user shell by hitting `Ctrl+D`. 

Now let's set up Nginx to serve our Gitea site. 

~~~~console
sudo apt update
sudo apt install nginx
~~~~

For the next part you'll need a domain name. I use a subdomain, git.mydomain.com, but for simplicity's sake I'll refer to `mydomain.com` for the rest of this tutorial. Replace `mydomain.com` in all the instructions below with your actual domain name.

We need to create a config file for our domain. By default Nginx will look for config files in `/etc/nginx/sites-enabled/`, so the config file we'll create is:

~~~~console
sudo nano /etc/nginx/sites-enabled/mydomain.com.conf
~~~~

Here's what that file looks like:

~~~~nginx
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;


    location / {
        proxy_pass http://localhost:3000;
    }

    proxy_set_header X-Real-IP $remote_addr;
}
~~~~

The main line here is the `proxy_pass` bit, which takes all requests and sends them to Gitea, which is listening on `localhost:3000` by default. You can change that port if something else conflicts with it, but you'll need to change it both here and in Gitea's own configuration (the `HTTP_PORT` setting in `/etc/gitea/app.ini`).

The last step is to add an SSL cert to our site so we can clone over https (and SSH if you keep reading). I have another tutorial on setting up [Certbot for Nginx on Ubuntu](/src/certbot-nginx-ubuntu-1804). You can use that to get Certbot installed and auto-renewing certs. Then all you need to do is run:

~~~~console
sudo certbot --nginx
~~~~

Select your Gitea domain, follow the prompts and when you're done you'll be ready to set up Gitea. 

### Setting up Gitea

Point your browser to `https://mydomain.com/install` and go through the Gitea setup process. That screen looks like this, and you can use these values, except for the domain name (and be sure to enter the password you used when we created the `gitea` user for postgresql).

One note: if you intend your Gitea instance to be for you alone, I strongly recommend checking the "disable self registration" box, which will stop anyone else from being able to sign up. Turning off registration does mean you'll need to create an administrator account at the bottom of the page.

<img src="images/2018/gitea-install_FAW0kIJ.jpg" id="image-1706" class="picwide" />

Okay, now that we've got Gitea initialized it's time to go back and change the permissions on those directories that we set up earlier.

~~~~console
sudo chmod 750 /etc/gitea
sudo chmod 644 /etc/gitea/app.ini
~~~~

Now you're ready to create your first repo in Gitea. Click the little button next to the repositories menu on the right side of your Gitea dashboard and that'll walk you through creating your first repo. Once that's done you can clone that repo with:

~~~~console
git clone https://mydomain.com/giteausername/reponame.git
~~~~

Now if you have an existing repo that you want to push to your new Gitea repo, just edit the `.git/config` file to make your Gitea repo the new URL, e.g.:

~~~~ini
[remote "origin"]
    url = https://mydomain.com/giteausername/reponame.git
    fetch = +refs/heads/*:refs/remotes/origin/*
~~~~
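
If you'd rather not edit the file by hand, `git remote set-url` does the same thing:

~~~~console
git remote set-url origin https://mydomain.com/giteausername/reponame.git
~~~~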

Now do this:

~~~~console
git push origin master 
~~~~

### Setting up SSH

Working with git over HTTPS is pretty good, but I prefer the more secure method of SSH with a key. To get that working we'll need to add our SSH public key to Gitea, which means you'll need an SSH key. If you don't have one already, open the terminal on your local machine and issue this command:

~~~~console
ssh-keygen -o -a 100 -t ed25519
~~~~

That will create a key named `id_ed25519` in the directory `.ssh/`. If you want to know where that command comes from, read [this article](https://blog.g3rt.nl/upgrade-your-ssh-keys.html).

Now we need to add that key to Gitea. First open the file `.ssh/id_ed25519.pub` and copy the contents to your clipboard. Now in the Gitea web interface, click on the user menu at the upper right and select "settings". Then across the top you'll see a bunch of tabs. Click the one that reads "SSH / GPG Keys". Click the add key button, give your key a name and paste in the contents of the key.

Note: depending on how your VPS was set up, you may need to add the `git` user to your sshd config. Open `/etc/ssh/sshd_config` and look for a line that reads something like this:

~~~~console
AllowUsers myuser myotheruser git
~~~~

Add `git` to the list of allowed users so you'll be able to authenticate with the `git` user over SSH.
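
If you had to change `sshd_config`, reload the SSH daemon so the change takes effect (on Ubuntu the unit is typically called `ssh`; on some other systems it's `sshd`):

~~~~console
sudo systemctl reload ssh
~~~~

Now test SSH cloning with this line, substituting your SSH clone URL: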

~~~~console
git clone ssh://git@mydomain.com/giteausername/reponame.git
~~~~

Assuming that works, you're all set: Gitea is working and you can create all the repos you need. If you have any problems you can drop a comment in the form below and I'll do my best to help you out.

If you want to add some other niceties, the Gitea docs have a good guide to [setting up Fail2Ban for Gitea](https://docs.gitea.io/en-us/fail2ban-setup/) and then there's a whole section on [backing up Gitea](https://docs.gitea.io/en-us/backup-and-restore/) that's well worth a read.

[^1]: You can compile Gitea yourself if you like -- there are [instructions on the Gitea site](https://docs.gitea.io/en-us/install-from-source/) -- but be forewarned that it uses quite a bit of RAM to build.

# Set Up AWStats for Nginx on Ubuntu 20.04

date:2018-10-07 12:40:39
url:/src/awstats-nginx-ubuntu-debian

*Update Sept 2023: I still use this method and it still works. I've updated the guide so that the commands work on both Debian 12 and Ubuntu 23.04. Unfortunately the spambots love this page, so I have disabled comments; if you have a question, [email me](/contact).*

If you'd like some basic data about your site's visitors, but don't want to let spyware vendors track them around the web, AWStats makes a good solution. It parses your server log files and tells you who came by and what they did. There's no spying, no third-party code bloat. AWStats just analyzes your visitors' footprints.

Here's how I've managed to get AWStats installed and running on Ubuntu 18.04, Ubuntu 20.04, Debian 10, and Debian 11.

### AWStats with GeoIP

The first step is to install the AWStats package from the Ubuntu repositories:

~~~~console
sudo apt install awstats
~~~~

This will install the various tools and scripts AWStats needs. Because I like to have some geodata in my stats, I also installed the tools necessary to use the AWStats geoip plugin. Here's what worked for me. 

First we need build-essential and libgeoip:

~~~~console
sudo apt install libgeoip-dev build-essential
~~~~

Next you need to fire up the cpan shell:

~~~~console
cpan
~~~~

If this is your first time in cpan you'll need to run two commands to get everything set up. If you've already got cpan set up, you can skip to the next step:

~~~~perl
make install
install Bundle::CPAN
~~~~

Once cpan is set up, install GeoIP:

~~~~perl
install Geo::IP
~~~~

That should take care of the GeoIP stuff. You can double-check that the database files exist by looking in the directory `/usr/share/GeoIP/` and verifying that there's a file named `GeoIP.dat`. 
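
A quick way to do that check from the shell:

~~~~console
ls -l /usr/share/GeoIP/GeoIP.dat
~~~~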

Now, on to the log file setup.

#### Optional Custom Nginx Log Format

This part isn't strictly necessary. To get AWStats working, the next step is to create our config files and build the stats, but first I like to overcomplicate things with a custom log format for Nginx. If you don't customize your Nginx log format you can skip this section, but make a note of where Nginx is putting your logs -- you'll need that in the next step.

Open up `/etc/nginx/nginx.conf` and add these lines:

~~~~nginx
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';    
~~~~

Now we need to edit our individual nginx config file to use this log format. If you follow the standard nginx practice, your config file should be in `/etc/nginx/sites-enabled/`. For example this site is served by the file `/etc/nginx/sites-enabled/luxagraf.net.conf`. Wherever that file may be in your setup, open it and add this line somewhere in the `server` block.

~~~~nginx
server {
    # ... all your other config ...
    access_log  /var/log/nginx/yourdomain.com.access.log main;
    # ... all your other config ...
}
~~~~
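
Whenever you touch the Nginx configs it's worth testing them and reloading so the new `access_log` line takes effect:

~~~~console
sudo nginx -t
sudo systemctl reload nginx
~~~~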

### Configure AWStats for Nginx

As I said in the beginning, AWStats is ancient; it hails from a very different era of the internet. One legacy from the olden days is that AWStats is very strict about configuration files. You have to have one config file per domain you're tracking, and that file has to be named in the following way: `awstats.domain.tld.conf`. Those config files must be placed inside the `/etc/awstats/` directory.

If you go take a look at the `/etc/awstats` directory you'll see two files in there: `awstats.conf` and `awstats.conf.local`. The first is a main conf file that serves as a fallback if your own config file doesn't specify a particular setting. The second is an empty file that's meant to be used to share common config settings, which really doesn't make much sense to me.

I took a tip from [this tutorial](https://kamisama.me/2013/03/20/install-configure-and-protect-awstats-for-multiple-nginx-vhost-on-debian/) and dumped the contents of `awstats.conf` into `awstats.conf.local`. That way my actual site config file is very short. If you want to do that, then all you have to put in your config file are a few lines.

Using the naming scheme mentioned above, my config file resides at `/etc/awstats/awstats.luxagraf.net.conf` and it looks like this (drop your actual domain in place of "yourdomain.com"):

~~~~ini
# Path to your nginx log file
LogFile="/var/log/nginx/yourdomain.com.access.log"

# Domain of your vhost
SiteDomain="yourdomain.com"

# Directory where to store the awstats data
DirData="/var/lib/awstats/"

# Other domains/subdomain you want included from your logs, for example the www subdomain
HostAliases="www.yourdomain.com"

# If you customized your log format above add this line:

LogFormat = "%host - %host_r %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"

# If you did not, uncomment and use this line:
# LogFormat = 1
~~~~

Save that file and open the fallback file `awstats.conf.local`. Now set a few things:

~~~~ini
# if your site doesn't get a lot of traffic you can leave this at 1
# but it can make things slow
DNSLookup = 0

# find the geoip plugin line and uncomment it:
LoadPlugin="geoip GEOIP_STANDARD /usr/share/GeoIP/GeoIP.dat"
~~~~

Then delete the LogFile, SiteDomain, DirData, and HostAliases settings in your `awstats.conf.local` file. We've got those covered in our site-specific config file. Also delete the import statement at the bottom to make sure you don't end up with a circular import.

Okay, that's it for configuring things, let's generate some data to look at.

### Building Stats and Rotating Log Files

Now that we have our log files, and we've told AWStats where they are, what format they're in and where to put its analysis, it's time to actually run AWStats and get the raw data analyzed. To do that we use this command:

~~~~console
sudo /usr/lib/cgi-bin/awstats.pl -config=yourdomain.com -update
~~~~

Alternately, if you have a bunch of config files you'd like to update all at once, you can use this wrapper script conveniently located in a completely different directory:

~~~~console
/usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
~~~~

You're going to need to run that command regularly to update the AWStats data. One way to do that is with a crontab entry, but there's a better way: instead of cron we can hook into logrotate, which rotates Nginx's log files periodically anyway and conveniently includes a `prerotate` directive that we can use to execute some code. Technically logrotate runs via /etc/cron.daily under the hood, so we haven't really escaped cron, but it's not a crontab we need to keep track of anyway.

Open up the file `/etc/logrotate.d/nginx` and replace it with this:

~~~~
/var/log/nginx/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        /usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
    endscript
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}
~~~~

The main things we've changed here are the frequency, moving from weekly to daily rotation in line 2, keeping 30 days worth of logs in line 4, and then calling AWStats in line 11. 

One thing to bear in mind is that if you re-install Nginx for some reason this file will be overwritten. 

Now force a rotation to make sure you don't have any typos or other problems (you can also add the `-d` flag for a dry run that goes through the motions without actually rotating anything):

~~~~console
sudo logrotate -f /etc/logrotate.d/nginx
~~~~

### Serving Up AWStats 

Now that all the pieces are in place, we need to put our stats on the web. I used a subdomain, awstats.luxagraf.net. Assuming you're using something similar here's an nginx config file to get you started:

~~~~nginx
server {
    server_name awstats.luxagraf.net;

    root    /var/www/awstats.luxagraf.net;
    error_log /var/log/nginx/awstats.luxagraf.net.error.log;
    access_log off;
    log_not_found off;

    location ^~ /awstats-icon {
        alias /usr/share/awstats/icon/;
    }

    location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb) {
        auth_basic            "Admin";
        auth_basic_user_file  /etc/awstats/awstats.htpasswd;

        gzip off;
        include         fastcgi_params;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; # change this line if necessary
        fastcgi_index   cgi-bin.php;
        fastcgi_param   SCRIPT_FILENAME    /etc/nginx/cgi-bin.php;
        fastcgi_param   SCRIPT_NAME        /cgi-bin/cgi-bin.php;
        fastcgi_param   X_SCRIPT_FILENAME  /usr/lib$fastcgi_script_name;
        fastcgi_param   X_SCRIPT_NAME      $fastcgi_script_name;
        fastcgi_param   REMOTE_USER        $remote_user;
    }

}
~~~~

This config is pretty basic: it passes requests for icons to the AWStats icon dir and then sends the rest of our requests to php-fpm. The only tricky part is that AWStats needs to call a Perl file, but we're calling a PHP file, namely `/etc/nginx/cgi-bin.php`. How does that work?

Well, in a nutshell, this script hands all our server variables to the Perl script as its environment, runs it, and then reads the response from its stdout, passing it on to Nginx. Pretty clever; so clever, in fact, that I did not write it. Here's the file I use, taken straight from the Arch Wiki:

~~~~php
<?php
$descriptorspec = array(
   0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
   1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
   2 => array("pipe", "w")   // stderr is a pipe that the child will write to
);
$newenv = $_SERVER;
$newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
$newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"];
if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
   $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
   if (is_resource($process)) {
       fclose($pipes[0]);
       $head = fgets($pipes[1]);
       while (strcmp($head, "\n")) {
           header($head);
           $head = fgets($pipes[1]);
       }
       fpassthru($pipes[1]);
       fclose($pipes[1]);
       fclose($pipes[2]);
       $return_value = proc_close($process);
   } else {
       header("Status: 500 Internal Server Error");
       echo("Internal Server Error");
   }
} else {
   header("Status: 404 Page Not Found");
   echo("Page Not Found");
}
?> 
~~~~

Save that mess of PHP as `/etc/nginx/cgi-bin.php` and then install php-fpm if you haven't already:

~~~~console
sudo apt install php-fpm
~~~~

Next we need to create the password file referenced in our Nginx config. We can create a .htpasswd file with this little shell command; just make sure to put an actual username in place of `username`:

~~~~console
printf "username:`openssl passwd -apr1`\n" >> awstats.htpasswd
~~~~

Enter your password when prompted and your password file will be created in the expected format for basic auth files.

Then move that file to the proper directory:

~~~~console
sudo mv awstats.htpasswd /etc/awstats/
~~~~

Now we have an Nginx config, a script to pass AWStats from PHP to Perl and some basic password protection for our stats site. The last, totally optional, step is to serve it all over HTTPS instead of HTTP. Since we have a password protecting it anyway, this is arguably unnecessary. I do it more out of habit than any real desire for security. I mean, I did write an article [criticizing the push to make everything HTTPS](https://arstechnica.com/information-technology/2016/07/https-is-not-a-magic-bullet-for-web-security/). But habit.

I have a separate guide on [how to set up Certbot for Nginx on Ubuntu](/src/certbot-nginx-ubuntu-1804) that you can follow. Once that's installed you can just invoke Certbot with:

~~~~console
sudo certbot --nginx
~~~~

Select the domain name you're serving your stats at (for me that's awstats.luxagraf.net), then select 2 to automatically redirect all traffic to HTTPS and certbot will append some lines to your Nginx config file.

Now restart Nginx:

~~~~console
sudo systemctl restart nginx
~~~~

Visit your new site in the browser at this URL (changing yourdomain.com to the domain you've been using): [https://awstats.yourdomain.com/cgi-bin/awstats.pl?config=yourdomain.com](https://awstats.yourdomain.com/cgi-bin/awstats.pl?config=yourdomain.com). If all went well you should see AWStats with a few stats in it. If all did not go well, feel free to drop whatever your error message is in a comment here and I'll see if I can help.

### Motivations

And now the why. The "why the hell don't I just use --insert popular spyware here--" part.

My needs are simple. I don't have ads. I don't have to prove to anyone how much traffic I get. And I don't really care how you got here. I don't care where you go after here. I hardly ever look at my stats. 

When I do look all I want to see is how many people stop by in a given month and if there's any one article that's getting a lot of visitors. I also enjoy seeing which countries visitors are coming from, though I recognize that VPNs make this information suspect.

Since *I* don't track you I certainly don't want third-party spyware tracking you, so that means any hosted service is out. Now there are some self-hosted, open source spyware packages that I've used, Matomo being the best. It is nice, but I don't need or use most of what it offers. I also really dislike running MySQL, and unfortunately Matomo requires MySQL, as does Open Web Analytics. 

By process of elimination (no MySQL), and given my very paltry requirements, the logical choice is a simple log analyzer. I went with AWStats because I'd used it in the past. Way in the past. But you know what, AWStats ain't broke. It doesn't spy, it uses next to no server resources, and it tells you 95 percent of what any spyware tool will tell you (provided you actually [read the documentation](http://www.awstats.org/docs/)).

In the end, AWStats is good enough without being too much. But for something as simple as it is, AWStats is surprisingly complex to get up and running, which is what inspired this guide.

##### Shoulders stood upon:

* [AWStats Documentation](http://www.awstats.org/docs/awstats_config.html)
* [Ubuntu Community Wiki: AWStats](https://help.ubuntu.com/community/AWStats)
* [Arch Wiki: AWStats](https://wiki.archlinux.org/index.php/Awstats)
* [Install, configure and protect Awstats for multiple nginx vhost on Debian](https://kamisama.me/2013/03/20/install-configure-and-protect-awstats-for-multiple-nginx-vhost-on-debian/)

# Set up Certbot for Nginx on Ubuntu 18.04

date:2018-08-08 08:34:42
url:/src/certbot-nginx-ubuntu-1804

The EFF's free certificate service, [Certbot](https://certbot.eff.org/), has greatly simplified the task of setting up HTTPS for your websites. The only downside is that the certificates are only good for 90 days. Fortunately renewing is easy, and we can even automate it all with systemd. Here's how to set up Certbot with Nginx *and* make sure your SSL certs renew indefinitely with no input from you.

This tutorial is aimed at anyone using an Ubuntu 18.04 VPS from cheap hosts like DigitalOcean or [Vultr.com](https://www.vultr.com/?ref=6825229), but should also work for other versions of Ubuntu, Debian, Fedora, CentOS and any other system that uses systemd. The only difference will be the commands you use to install Certbot. See the Certbot site for [instructions](https://certbot.eff.org/) specific to your system.

First we'll get Certbot running on Ubuntu 18.04, then we'll dive into setting up automatic renewals via systemd.

You should not need this with 18.04, but to be on the safe side, make sure you have the `software-properties-common` package installed. 

~~~~console
sudo apt install software-properties-common
~~~~

The next part requires that you add a PPA, my least favorite part of Certbot for Ubuntu, as I don't like to rely on PPAs for something as mission critical as my security certificates. Still, as of this writing, there is not a better way. At least go [look at the code](https://launchpad.net/~certbot/+archive/ubuntu/certbot) before you blindly cut and paste. When you're done, here's your cut and paste:

~~~~console
sudo apt update
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx 
~~~~

Now you're ready to install some certs. For this part I'm going to show the commands and the output of the commands since the `certbot` command is interactive. Note that the version below will append some lines to your Nginx config file. If you prefer to edit your config file yourself, use this command: `sudo certbot --nginx certonly`, otherwise, here's what it looks like when you run `sudo certbot --nginx`:

~~~~console
sudo certbot --nginx

[sudo] password for $youruser: 
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx 

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: luxagraf.net
2: awstats.luxagraf.net
3: origin.luxagraf.net
4: www.luxagraf.net
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -                                                                                                                                                               
Select the appropriate numbers separated by commas and/or spaces, or leave input blank to select all options shown (Enter 'c' to cancel): 4
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for www.luxagraf.net
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/luxagraf.net.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2

Traffic on port 80 already redirecting to ssl in /etc/nginx/sites-enabled/luxagraf.net.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://www.luxagraf.net.
You should test your configuration at: https://www.ssllabs.com/ssltest/analyze.html?d=www.luxagraf.net
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/www.luxagraf.net/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/www.luxagraf.net/privkey.pem
   Your cert will expire on 2019-01-09. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le     
~~~~

And there you have it, SSL certs for all your domains.

Unfortunately, those new certs are only good for 90 days. The odds of you remembering to renew them every 90 days -- even with reminder emails from the EFF -- are near nil. Plus, do you really want to be renewing certs by hand, [like an animal](http://5by5.tv/hypercritical/17)? No, you want to automate everything so you can do better things with your time.

You could use cron, but the more modern approach would be to create a systemd service and a systemd timer to control when that service runs.

I highly recommend reading through the Arch Wiki page on [systemd services and timers](https://wiki.archlinux.org/index.php/Systemd/Timers), as well as the [systemd.timer man pages](https://jlk.fjfi.cvut.cz/arch/manpages/man/systemd.timer.5) to get a better understanding of how you can automate other tasks in your system. But for the purposes of this tutorial all you really need to understand is that timers are just like other systemd unit files, but they include a `[Timer]` block which lets you specify exactly when you want your service file to run.

Timer files can live right next to your service files in `/etc/systemd/system/`.

There are no hard and fast rules about naming timers, but it makes sense to use the same name as the service file the timer controls, except the timer gets the `.timer` extension. So you'd have two files: `myservice.service` and `myservice.timer`.

Let's start with the service file. I call mine `certbot-renewal`. Open the service file:

~~~~console
sudo nano /etc/systemd/system/certbot-renewal.service
~~~~

This is going to be a super simple service; we'll give it a description and a command to run and that's it:

~~~~ini
[Unit]
Description=Certbot Renewal

[Service]
ExecStart=/usr/bin/certbot renew
~~~~

Next we need to create a .timer file that will run the certbot-renewal service every day. Create this file:

~~~~console
sudo nano /etc/systemd/system/certbot-renewal.timer
~~~~

And now for the slightly more complex timer:

~~~~ini
[Unit]
Description=Certbot Renewal Timer

[Timer]
OnBootSec=500
OnUnitActiveSec=1d

[Install]
WantedBy=multi-user.target
~~~~

The `[Timer]` block can take a number of parameters; the ones we've used constitute what's called a monotonic timer, which means they run "after a time span relative to a varying starting point". In other words they're not calendar events like cron.

Our monotonic timer has two directives, `OnBootSec` and `OnUnitActiveSec`. The first should be obvious: our timer will run 500 seconds after the system boots. Why 500? No real reason, I just didn't want to bog down the system at boot.

The `OnUnitActiveSec` directive is really what makes this work. It measures time relative to when the service that the timer controls was last activated. In our case the `1d` means run the service one day after it last ran. So our timer will run once a day to make sure our certs stay up to date.

As a kind of footnote, in systemd parlance calendar-based timers are called realtime timers and can be used to replace cron if you want. There are some disadvantages, see the Arch Wiki for [a good overview of what you get and what you lose](https://wiki.archlinux.org/index.php/Systemd/Timers#As_a_cron_replacement) if you go that route.

Okay, the last step for our certbot renewal system is to enable and then start our timer. Note that we don't have to do either to the actual service file, because we don't want it active on its own; the timer will control when it runs.

~~~~console
sudo systemctl enable certbot-renewal.timer
sudo systemctl start certbot-renewal.timer
~~~~

Run those commands and you're done. Your timer is now active and your Certbot certificates will automatically renew as long as your server is up and running.
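
If you want to double-check your work, you can list the timer to see when it will next fire, and ask Certbot for a test renewal (the `--dry-run` flag goes through the renewal process against the staging servers without saving any certificates):

~~~~console
systemctl list-timers certbot-renewal.timer
sudo certbot renew --dry-run
~~~~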

# Why I Switched to Arch Linux

date:2016-07-23 00:56:15
url:/src/why-i-switched-arch-linux

Everyone seems to have a post about why they ended up with Arch. This is mine.

I recently made the switch to Arch Linux for my primary desktop and it's been great. If you're a Linux user with some experience, I highly suggest you give Arch a try. The installation is a little bit of a pain -- hand partitioning, hand mounting, and generating your own fstab files -- but it teaches you a lot. It pulls back the curtain so you can see that you are in fact the person behind the curtain, you just didn't realize it.

**[Updated July 2021: Still running Arch. Still happy about it. I did switch back to Openbox instead of i3/Sway, but otherwise my setup is unchanged]**

<img src="images/2021/arch-screen_eknsuvf.jpg" id="image-2649" class="picwide caption" />

Why Arch? The good old DIY ethos, which is born out of the realization that if you don't do things yourself you'll have to accept the mediocrity that capitalism has produced. You'll never learn; you'll never grow. That's no way to live.

I used to be a devoted Debian fan. I still agree with the Debian manifesto, but in practice I found myself too often having to futz with things and figure out how to get something to work. I know Arch has a reputation for being unstable, but for me it's been exactly the opposite. It's been five years now and I have never had an issue.

I came to Arch for the AUR, though the truth is these days I don't use it much anymore since I don't really test software anymore. For a while I [ran Sway](/src/guide-to-switching-i3-to-sway), which was really only practical on Arch. Since then though I went back to X.org. Sorry Wayland, but much as I love Sway, I did not love wrestling with MIDI controller drivers, JACK, and all the other elements of an audio/video workflow in Wayland. It can be done, but it’s more work, and I don’t want to work at getting software to work. I’m too old for that shit. I want to plug in a microphone, open Audacity, and record. If it’s any more complicated than that -- and it was for me in Wayland with the mics I own -- I will find something else. I really don’t care what my software stack is, so long as I can create what I want to create with it.

Wayland was smoother, less graphically glitchy, but meh, whatever. Ninety percent of the time I’m writing in Vim in a Urxvt window. I need smooth scrolling and transitions like I need a hole in my head. I also set up Openbox to behave very much like Sway, so I still have the same shortcuts and honestly, aside from the fact that Tint2 has more icons than Waybar, I can’t tell the difference. Well, that’s not true. Vim works fine with the clipboard again, no need for Neovim.

My Arch setup these days is minimalist: [Openbox](http://openbox.org/wiki/Main_Page) with [tint2](https://gitlab.com/o9000/tint2). I open apps with [dmenu](http://tools.suckless.org/dmenu/) and do most of my file system tasks from the terminal using bash (or [Ranger](http://nongnu.org/ranger/) if I want something fancier). Currently my setup uses about 200MB of RAM with no apps open. Arch doesn't have quite the software selection of Debian, but it has most of the software you'd ever want. My needs are simple: bash, vim, tmux, mutt, newsboat, mpd, mpv, git, feh, gimp, darktable and dev stuff like python3, postgis, etc. Every distro has this stuff. 

I've installed Arch on dozens of machines at this point. Currently I use a Lenovo x270 that I picked up off eBay for $300. I added a larger hard drive, a second hard drive, and 32 gigabytes of RAM. That brought the total cost to about $550. It runs Arch like a champ and gives me all I could ever want in a laptop. Okay, a graphics card would be nice for my occasional bouts of video editing, but otherwise it's more than enough.

# Workflows That Automatically Spawn Backups

date:2016-01-28 01:23:34
url:/src/workflows-automatically-spawn-backups

I wrote previously about how I [backup database files](/src/automatic-offsite-postgresql-backups) automatically. The key word there being "automatically". If I have to remember to make a backup the odds of it happening drop to zero. So I automate as I described in that piece, but that's not the only backup I have.

The point for me as a writer is that I don't want to lose these words.

Part of the answer is backing up databases, but part of my solution is also creating workflows which automatically spawn backups. 

This is actually my preferred backup method because it's not just a backup, it's future proofing. PostgreSQL may not be around ten years from now (I hope it is, because it's pretty awesome, but it may not be), but it's not my only backup.

In fact I've got at least half a dozen backups of these words and I haven't even finished this piece yet. Right now I'm typing these words in Vim and will save the file in a Git repo that will get pushed to a server. That's two backups. Later the containing folder will be backed up on S3 (weekly), as well as two local drives (one daily, one weekly, both [rsync](https://rsync.samba.org/) copies). 

None of that really requires any effort on my part. I do have to add this file to the git repo and then commit and push it to the remote server, but [Vim Fugitive](https://github.com/tpope/vim-fugitive) makes that ridiculously simple.
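
The rsync copies are just as hands-off; a cron entry along these lines (the paths here are placeholders for my actual ones) mirrors the whole folder to a backup drive:

~~~~console
# nightly mirror of the documents folder to a local backup drive
rsync -av --delete ~/documents/ /mnt/backup/documents/
~~~~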

That's not the end of the backups though. Once I'm done writing I'll cut and paste this piece into my Django app and hit a publish button that will write the results out to the flat HTML file you're actually reading right now (this file is another backup). I also output a plain text version (just append `.txt` to any luxagraf URL to see a plain text version of the page).

The end result is that all this makes it very unlikely I will lose these words outright.

However, when I plugged these words into the database I gave this article a relationship with other objects in that database. So even though the redundant backups built into my workflow make a total data loss unlikely, without the database I would lose the relationships I've created. That's why I keep [a solid PostgreSQL backup strategy](/src/automatic-offsite-postgresql-backups), but what if Postgres does disappear?

I could and occasionally do output all the data in the database to flat files with JSON or YAML versions of the metadata attached. Or at least some of it. It's hard to output massive amounts of geodata as text (for example, the shapefiles of [national parks](https://luxagraf.net/projects/national-parks/) aren't particularly useful as text data).

I'm not sure what the answer is really, but lately I've been thinking that maybe the answer is just to let it go? The words are the story, that's what my family, my kids, my friends, and whatever few readers I have really want. I'm the only one that cares about the larger story that includes the metadata, the relationships between the stories. Maybe I don't need that. Maybe that it's here today at all is remarkable enough on its own.

The web is after all an ephemeral thing. It depends on our continued ability to do so many things we won't be able to do forever, like burn fossil fuels. In the end the most lasting backup I have may well be the 8.5x11 sheets of paper I've recently taken to printing out. Everything else depends on so much.

# Automatic Offsite PostgreSQL Backups Without a Password

date:2016-01-09 15:27:49
url:/src/automatic-offsite-postgresql-backups

When it comes to backups I'm paranoid and lazy. That means I need to automate the process of making redundant backups. 

Pretty much everything to do with luxagraf lives in a single PostgreSQL database that gets backed up every night. To make sure I have plenty of copies of those backup files I download them to various other machines and servers around the web. That way I have copies of my database files on this server, another backup server, my local machine, several local backup hard drives, in Amazon S3 and Amazon Glacier. Yes, that's overkill, but it's all so ridiculously easy, why not? 

Here's how I do it. 

## Make Nightly Backups of PostgreSQL with `pg_dump` and `cron`

The first step is to regularly dump your database. To do that PostgreSQL provides the handy `pg_dump` command. If you want a good overview of `pg_dump`, check out the excellent [PostgreSQL manual](https://www.postgresql.org/docs/current/app-pgdump.html). Here's the basic syntax:

~~~~console
pg_dump -U user -hhostname database_name > backup_file.sql
~~~~

So, if you had a database named mydb and you wanted to back it up to a file that starts with the name of the database and then includes today's date, you could do something like this:

~~~~console
pg_dump -U user -hlocalhost mydb > mydb.`date '+%Y%m%d'`.sql
~~~~

That's pretty useful, but it's also potentially quite a big file. Thankfully we can just pipe the results to gzip to compress them:

~~~~console
pg_dump -U user -hlocalhost mydb | gzip -c > mydb.`date '+%Y%m%d'`.sql.gz
~~~~

That's pretty good. In fact for many scenarios that's all you'll need. Plug that into your cron file by typing `crontab -e` and adding this line to make a backup every night at midnight:

~~~~bash
0 0 * * * pg_dump -U user -hlocalhost mydb | gzip -c > mydb.`date '+%Y%m%d'`.sql.gz
~~~~

For a long time that was all I did. But then I started running a few other apps that used PostgreSQL databases (like a version [Tiny Tiny RSS](https://tt-rss.org/gitlab/fox/tt-rss/wikis/home)), so I needed to have quite a few lines in there. Plus I wanted to perform a [VACUUM](http://www.postgresql.org/docs/current/static/sql-vacuum.html) on my main database every so often. So I whipped up a shell script. As you do. 

Actually most of this I cobbled together from sources I've unfortunately lost track of since. Which is to say I didn't write this from scratch. Anyway here's the script I use:

~~~~bash
#!/bin/bash
#
# Daily PostgreSQL maintenance: vacuuming and backuping.
#
##
set -e
for DB in $(psql -l -t -U postgres -hlocalhost |awk '{ print $1}' |grep -vE '^-|:|^List|^Name|template[0|1]|postgres|\|'); do
  echo "[`date`] Maintaining $DB"
  echo 'VACUUM' | psql -U postgres -hlocalhost -d $DB
  DUMP="/path/to/backup/dir/$DB.`date '+%Y%m%d'`.sql.gz"
  pg_dump -U postgres -hlocalhost $DB | gzip -c > $DUMP
  PREV="$DB.`date -d'1 day ago' '+%Y%m%d'`.sql.gz"
  md5sum -b $DUMP > $DUMP.md5
  if [ -f $PREV.md5 ] && diff $PREV.md5 $DUMP.md5; then
    rm $DUMP $DUMP.md5
  fi
done
~~~~

Copy that code and save it in a file named psqlback.sh. Then make it executable:

~~~~console
chmod u+x psqlback.sh
~~~~

Now before you run it, let's take a look at what's going on.

First we're creating a loop so we can backup all our databases.

~~~~bash
for DB in $(psql -l -t -U postgres -hlocalhost |awk '{ print $1}' |grep -vE '^-|:|^List|^Name|template[0|1]|postgres|\|'); do
~~~~

This looks complicated because we're using `awk` and `grep` to parse some output but basically all it's doing is querying PostgreSQL to get a list of all the databases (using the `postgres` user so we can access all of them). Then we pipe that to `awk` and `grep` to extract the name of each database and ignore a bunch of stuff we don't want. 

Then we store the name of database in the variable `DB` for the duration of the loop. 

Once we have the name of the database, the script outputs a basic logging message that says it's maintaining the database and then runs VACUUM. 

The next two lines should look familiar:

~~~~bash
DUMP="/path/to/backup/dir/$DB.`date '+%Y%m%d'`.sql.gz"
pg_dump -U postgres -hlocalhost $DB | gzip -c > $DUMP
~~~~

That's very similar to what we did above, I just stored the file path in a variable because it gets used again. The next thing we do is grab the file from yesterday:

~~~~bash
PREV="$DB.`date -d'1 day ago' '+%Y%m%d'`.sql.gz"
~~~~

Then we calculate the md5sum of our dump:

~~~~bash
md5sum -b $DUMP > $DUMP.md5
~~~~

Then we compare that to yesterday's sum, and if they're the same we delete our dump since we already have a copy.

~~~~bash
  if [ -f $PREV.md5 ] && diff $PREV.md5 $DUMP.md5; then
    rm $DUMP $DUMP.md5
  fi
~~~~

Why? Well, there's no need to store a new backup if it matches the previous one exactly. Since sometimes nothing changes on this site for a few days, weeks, months even, this can save a good bit of space.

Okay now that you know what it does, let's run it:

~~~~console
./psqlback.sh
~~~~

If everything went well it should have asked you for a password and then printed out a couple messages about maintaining various databases. That's all well and good for running it by hand, but who is going to put in the password when cron is the one running it?

### Automate Your Backups with `cron`

First let's set up cron to run this script every night around midnight. Open up crontab:

~~~~console
crontab -e
~~~~

Then add a line to call the script every night at 11:30PM:

~~~~console
30 23 * * * /home/myuser/bin/psqlback.sh > psqlbak.log
~~~~

You'll need to adjust the path to match your server, but otherwise that's all you need (if you'd like to run it less frequently or at a different time, you can read up on the syntax in the cron manual).

But what happens when we're not there to type in the password? Well, the script fails.

There are a variety of ways we can get around this. In fact the [PostgreSQL docs](http://www.postgresql.org/docs/current/static/auth-methods.html) cover everything from LDAP auth to peer auth. The latter is actually quite useful, though a tad bit complicated. I generally use the easiest method -- a password file. The trick to making it work for cron jobs is to create a file in your user's home directory called `.pgpass`.

Inside that file you can provide login credentials for any user on any port. The format looks like this:

~~~~vim
hostname:port:database:username:password
~~~~

You can use * as a wildcard if you need it. Here's what I use:

~~~~vim
localhost:*:*:postgres:passwordhere
~~~~

I hate storing a password in the plain text file, but I haven't found a better way to do this. 

To be fair, assuming your server security is fine, the `.pgpass` method should be fine. Also note that Postgres will ignore this file if it has permissions greater than 600 (that is, only your user can read and write it). Let's set that now:

~~~~console
chmod 600 .pgpass
~~~~
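
You can check that the password file is being picked up by running a quick query as the postgres user; if `.pgpass` is working you won't be prompted for a password:

~~~~console
psql -U postgres -hlocalhost -c 'SELECT 1;'
~~~~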

Now we're all set. Cron will run our script every night at 11:30 PM and we'll have a compressed backup file of all our PostgreSQL data.

## Automatically Moving It Offsite

Now we have our database backed up to a file. That's a start. That saves us if PostgreSQL does something wrong or somehow becomes corrupted. But we still have a single point of failure -- what if the whole server crashes and can't be recovered? We're screwed.

To solve that problem we need to get our data off this server and store it somewhere else. 

There's quite a few ways we could do this and I have done most of them. For example we could install [s3cmd](http://s3tools.org/s3cmd) and send them over to an Amazon S3 bucket. I actually do that, but it requires you pay for S3. In case you don't want to do that, I'm going to stick with something that's free -- Dropbox.

Head over to the Dropbox site and follow their instructions for [installing Dropbox on a headless Linux server](https://www.dropbox.com/en/install?os=lnx). It's just one line of cut and pasting though you will need to authorize Dropbox with your account.

**BUT WAIT**

Before you authorize the server to use your account, well, don't. Go create a second account solely for this server. Do that, then authorize that new account for this server. 

Now go back to your server and symlink the folder you put in the script above, into the Dropbox folder.

~~~~console
cd ~/Dropbox
ln -s ~/path/to/pgbackup/directory .
~~~~

Then go back to Dropbox log in to the second account, find that database backup folder you just symlinked in and share it with your main Dropbox account. 

This way, should something go wrong and the Dropbox folder on your server becomes compromised at least the bad guys only get your database backups and not the entirety of your documents folder or whatever might be in your normal Dropbox account. 

Credit to [Dan Benjamin](http://hivelogic.com/), who's the first person I heard mention this dual account idea.

The main thing to note about this method is that you're limited to 2GB of storage (the max for a free Dropbox account). That's plenty of space in most cases. Luxagraf has been running for more than 10 years, stores massive amounts of geodata in PostgreSQL, along with close to 1000 posts of various kinds, and a full compressed DB dump is still only about 35MB. So I can store well over 60 days worth of backups, which is plenty for my purposes (in fact I only store about half that).

So create your second account, link your server installation to that and then share that folder with your main Dropbox account. 

The last thing I suggest you do, because Dropbox is not a backup service, but a syncing service, is **copy** the backup files out of the Dropbox folder on your local machine to somewhere else on that machine. Not move, but **copy**. So leave a copy in Dropbox and make a second copy on your local machine outside of the Dropbox folder.

If you dislike Dropbox (I don't blame you, I no longer actually use it for anything other than this) there are other ways to accomplish the same thing. The already mentioned s3cmd could move your backups to Amazon S3, good old `scp` could move them to another server and of course you can always download them to your local machine using `scp` or `rsync` (or SFTP, but then that wouldn't be automated).
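
As a rough sketch of what those alternatives look like (the bucket name, hostname, and paths are placeholders, and s3cmd needs to be configured with `s3cmd --configure` first):

~~~~console
# push the backup directory to an S3 bucket
s3cmd sync /path/to/backup/dir/ s3://my-backup-bucket/pg-backups/

# or copy the latest dumps to another server over SSH
scp /path/to/backup/dir/*.sql.gz user@backuphost:/srv/backups/
~~~~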

Naturally I recommend you do all these things. I sync my nightly backups to my local machine with Dropbox and `scp` those to a storage server. Then I use s3cmd to send weekly backups to S3. That gives me three offsite backups which is enough even for my paranoid, digitally distrustful self.

# How to Set Up Django with Nginx, uWSGI & systemd on Debian/Ubuntu

date:2016-01-05 16:06:00
url:/src/how-set-django-uwsgi-systemd-debian-8

I've served Django over all sorts of different servers, from Apache with mod_python to Nginx with Gunicorn. The current incarnation of my publishing system[^1] runs atop an Nginx server which passes requests for dynamic pages to uWSGI. I've found this setup to be the fastest of the various options out there for serving Django apps, particularly when paired with a nice, [fast, cheap VPS instance](/src/setup-and-secure-vps).

I am apparently not alone in thinking uWSGI is fast. Some people have even [tested uWSGI](http://www.peterbe.com/plog/fcgi-vs-gunicorn-vs-uwsgi) and [proved as much](http://nichol.as/benchmark-of-python-web-servers). Honestly though, speed is not what got me using uWSGI. I switched because it just plays so much nicer with systemd than Gunicorn. Also, something about the Gunicorn project always rubbed me the wrong way, but that's just me.

Anyway, my goal was to have a server running that's managed by the system. In my case that means Debian 8 with systemd. I set things up so that a uWSGI "emperor" instance starts up with systemd and then automatically picks up any "vassals" residing in a directory[^2]. That way the server will automatically restart should the system need to reboot.

The first step in this dance is to install uWSGI, which is a Python application (for more background on how uWSGI works and what the various parts are, check out [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-centos-7#definitions-and-concepts)). We could install uwsgi through the Debian repos with `apt-get`, but that version is pretty ancient, so I install uWSGI with pip. 

~~~~console
pip install uwsgi
~~~~

Now we need a systemd service file so that we can let systemd manage things for us. Here's what I use. Note that the path is the standard Debian install location, your system may vary (though I believe Ubuntu is the same):

~~~~ini
[Unit]
Description=uWSGI Emperor
After=syslog.target

[Service]
ExecStart=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all

[Install]
WantedBy=multi-user.target
~~~~

Save that to `/lib/systemd/system/uwsgi.service`.

Then enable it and try starting it:

~~~~console
sudo systemctl enable uwsgi.service
sudo systemctl start uwsgi
~~~~

This should cause an error like so...

~~~~console
Job for uwsgi.service failed. See 'systemctl status uwsgi.service' and 'journalctl -xn' for details.
~~~~
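
To see what actually went wrong, pull the recent log lines for the unit out of the journal:

~~~~console
sudo journalctl -u uwsgi.service -n 50 --no-pager
~~~~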

If you look at the journal you'll see that the problem is that uwsgi can't find the emperor.ini file we pointed to in our service file. Let's create that file. Most likely the directory /etc/uwsgi doesn't exist, so create that and then the emperor.ini file in it:

~~~~console
mkdir /etc/uwsgi
vim /etc/uwsgi/emperor.ini
~~~~

Here's the contents of my emperor.ini:

~~~~ini
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www-data
gid = www-data
limit-as = 1024
logto = /tmp/uwsgi.log
~~~~

Next, create the vassals directory we just referenced in emperor.ini:

~~~~console
sudo mkdir /etc/uwsgi/vassals
~~~~

The last step is to add a vassal, which would be the ini file for your actual uWSGI app. 

To create that file, have a look at [this gist over on github](https://gist.github.com/evildmp/3094281), it has a pretty good example. Once you have that file tweaked to your liking, just symlink it into `/etc/uwsgi/vassals/`. The exact paths will vary, but something like this should do the trick:

~~~~console
sudo ln -s /path/to/your/project/django.ini /etc/uwsgi/vassals/
~~~~

Now go back and try starting uWSGI again:

~~~~console
sudo systemctl start uwsgi
~~~~

This time it should work with no errors. Go ahead and stop it and add it to systemd so it will start up with the system:

~~~~console
sudo systemctl stop uwsgi
sudo systemctl enable uwsgi
sudo systemctl start uwsgi
~~~~

Congratulations, your uWSGI server is now running.


Further Reading:

* As mentioned above, [this gist](https://gist.github.com/evildmp/3094281) covers how to setup the Django end of the equation and covers more of what's actually happening in this setup.
* This [Digital Ocean tutorial](https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-centos-7#definitions-and-concepts) is for CentOS and related distros, but it's what I used originally (I wrote this to keep track of all the places I changed that one).
* The [official uWSGI docs](http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html) are pretty great too.

If you enjoyed this tutorial and want a VPS instance to try it out on consider [signing up for Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045). It's cheap ($5/month gets you a VPS, this site runs on a $10/month instance), fast and dare I say fun. That link will get you $10 credit, which works out to two free months of hosting and you'll help support this site. But if you prefer here's [a link](https://www.digitalocean.com/) without the referral code and no $10 credit.

[^1]: The vast majority of this site is served from flat html files, but there are a few dynamic things like the comments that actually hit the database. For the most part though, I am the only one interacting with the Django portion of my site (which is used to build the flat HTML files I serve up to you).
[^2]: I think "emperor" and "vassal" are the uWSGI project's effort to get rid of the "slave"/"master" lingo that gets used a lot in these circumstances.

#  How Google’s AMP project speeds up the Web—by sandblasting HTML

date:2015-11-05 16:42:44
url:/src/how-googles-amp-project-speeds-web-sandblasting-ht

[**This story originally appeared on <a href="http://arstechnica.com/information-technology/2015/11/googles-amp-an-internet-giant-tackles-the-old-myth-of-the-web-is-too-slow/" rel="me">Ars Technica</a>, to comment and enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.**]

There's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story—and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.

Curiously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the "Web is too slow" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.

The average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive [transferred 1,239kB and made 86 requests](http://httparchive.org/trends.php?s=All&minlabel=Oct+1+2012&maxlabel=Oct+1+2015#bytesTotal&reqTotal). Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.

Native mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.

So as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create [Accelerated Mobile Pages](http://arstechnica.com/information-technology/2015/10/googles-new-amp-html-spec-wants-to-make-mobile-websites-load-instantly/) (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts—making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)

For AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?

There are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.

## What is AMP?

To understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching.

But the key is that Instant Articles are only available via Facebook's mobile apps—and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.

All this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.

Unlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the [AMP project announcement](https://www.ampproject.org/how-it-works/), you'll see an AMP page rendered in your browser. It looks like any other page on the Web.

AMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the [project page](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md). There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).

AMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as [Web components](http://www.w3.org/TR/components-intro/), which will likely become a Web standard (or it might turn out to be "ActiveX part 2," only the future knows for sure).

So far AMP probably sounds like a pretty good idea—faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)

AMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated [accessibility](https://en.wikipedia.org/wiki/Computer_accessibility) as an afterthought. You lose more than just a few HTML tags with AMP.

In other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the [most](https://github.com/ampproject/amphtml/issues/517) [egregious](https://github.com/ampproject/amphtml/issues/481) [decisions](https://github.com/ampproject/amphtml/issues/545) in AMP's technical design.) The good news is that AMP developers are listening. One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and [eliminated the tag that prevented pinch and zoom](https://github.com/ampproject/amphtml/issues/592).

But AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the "users" aren't you and I as readers; the "users" are the publishers putting content on the Web.

## It's the ads, stupid

The goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.

Google seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.

If you watch the [AMP demo](https://googleblog.blogspot.com/2015/10/introducing-accelerated-mobile-pages.html), which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.

Google needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.

The company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.

But pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.

One increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (NoScript first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.

Combine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.

## Static pages that require Google's JavaScript

The most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.

This is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or—more likely—generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.

This is a little odd; as developer Justin Avery [writes](https://responsivedesign.is/articles/whats-the-deal-with-accelerated-mobile-pages-amp), "Surely the document itself is going to be faster than loading a library to try and make it load faster."

Pinboard.in creator Maciej Cegłowski did just that, putting together a demo page that duplicates the AMP project's own homepage (itself built with AMP) without that JavaScript. Over a 3G connection, Cegłowski's page fills the viewport in [1.9 seconds](http://www.webpagetest.org/result/151016_RF_VNE/). The AMP homepage takes [9.2 seconds](http://www.webpagetest.org/result/151016_9J_VNN/). JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.

Ironically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content—in other words sites following best practices and trying to do things right—may be slower in AMP.

In the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages—at least in terms of speed. Google says that AMP can improve page speed index scores by anywhere from 15 to 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.

The dependency on JavaScript has another detrimental effect: if AMP's (albeit small) script fails to load for some reason—say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach—the AMP page is completely blank. When an AMP page fails, it fails spectacularly.

Google knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.

## AMP for publishers

Under the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems.

Your WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.

So why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.

AMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.

On one hand, what this amounts to is that after years of telling the Web to stop making "m." mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the [IndieWeb movement](http://indiewebcamp.com/): Publish (on your) Own Site, Syndicate Elsewhere (or [POSSE](http://indiewebcamp.com/POSSE) for short).

The idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there—tap the new Medium API and half a dozen others as well.

Reading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a [restored TRS-80](http://arstechnica.com/information-technology/2015/08/surfing-the-internet-from-my-trs-80-model-100/) (seriously, it can be done). The beauty of the POSSE approach is that you can reach them all from a single, canonical source.

## AMP and the open Web

While AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.

In fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith [writes](https://adactio.com/journal/9646) in a piece on AMP, "My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be."

Not everyone is that optimistic about AMP, though. Developer and author Tim Kadlec [writes](https://timkadlec.com/2015/10/amp-and-incentives/), "[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard."

There's one other important aspect of AMP that helps speed up its pages: Google will cache your pages on its CDN for free. "AMP is caching... You can use their caching if you conform to certain rules," writes Dave Winer, developer and creator of RSS, [in a post on AMP](http://scripting.com/2015/10/10/supportingStandardsWithoutAllThatNastyInterop.html). "If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code."


# About <code>src</code>

date:2015-10-28 15:04:24
url:/src/about

**If you're here because Google sent you to one of the articles I deleted and then you got redirected here, have a look at the [Internet Archive](https://web.archive.org/web/*/https://longhandpixels.net/blog/), which preserved those pages.**

For a while I had another blog at the URL longhandpixels.net. I made a few half-hearted attempts to make money with it, which I refuse to do here. 

I felt uncomfortable with the marketing that required and a little bit dirty about the whole thing. I don't want to spend my life writing things that will draw in people to buy my book. Honestly, I don't care about selling the book (at this point, 2018, it's enough out of date that I pulled it completely).

What I want to do is write what I want to write, whether the topic is [life on the road with my family, traveling in a restored 1969 Dodge Travco RV](/) (which is what most of this site is about), fiction or technology. I don't really care if anyone else is interested or not. Long story short: I shut down longhandpixels. I ported over a small portion of articles that I liked and deleted the rest, redirecting them all to this page, hence the message at the top.

So, there you go. Now if I were you I'd close this browser window right now and go somewhere with fresh air and sunshine, but if you're not up for that, I really do hope you enjoy `src`, which is what I call this code/tech-centric portion of luxagraf. 

### Acknowledgements

`src` and the rest of this site would not be possible without the following software, many thanks to the creators:

* [Git](http://git-scm.com/) -- pretty much everything I write is stored in Git for version control purposes. I host my own repos privately.

* [Nginx](http://nginx.org/) -- This site is served by a custom build of Nginx. You can read more about how I set up Nginx in the tutorial I wrote, *[Install Nginx on Debian/Ubuntu](/src/install-nginx-debian)*

* [Python](https://www.python.org/) and [Django](https://www.djangoproject.com/) -- This site consists primarily of flat HTML files generated by a custom Django application I wrote.

* [Arch Linux](https://www.archlinux.org/) -- Way down at the bottom of the stack there is Arch, which is my preferred operating system, server or otherwise. Currently I run Arch on a small VPS instance at [Vultr.com](http://www.vultr.com/?ref=6825229) (affiliate link, costs you nothing, but helps cover my hosting).

# Switching from LastPass to Pass

date:2015-10-28 15:02:09
url:/src/pass

I used to keep all my passwords in my head. I kept track of them using some memory tricks based on my very, very limited understanding of what memory champions like [Ed Cooke][1] do. Basically I would generate strings using [pwgen][2] and then memorize them. 

As you might imagine, this did not scale well. 

Or rather it led to me getting lazy. It used to be that hardly any sites required you to log in, so it was no big deal to memorize a few passwords. Now pretty much every time you buy something you have to create an account, and I don't want to memorize a new strong password for some one-off site I'll probably never visit again. So I ended up using a weaker password for those one-off sites. Worse, I'd re-use that password at multiple sites.

My really important passwords (email and financial sites) are still only in my head, but recognizing that re-using the same simple password for the one-offs was a bad idea, I started using LastPass for those sorts of things. But I never really liked using LastPass. It bothered me that my passwords were stored on a third-party server. But LastPass was just *so* easy.

Then LogMeIn bought LastPass and suddenly I was motivated to move on. 

As I outlined in a [brief piece][3] for The Register, there are lots of replacement services out there -- I like [Dashlane][4], despite the price -- but I didn't want my password data on a third party server any more. I wanted to be in total control.

I can't remember how I ran across [pass][5], but I've been meaning to switch over to it for a while now. It's exactly what I wanted in a password tool -- a simple, secure, command-line-based system built on tested tools like GnuPG. There's also a [Firefox add-on][6] and [an Android app][7] to make life a bit easier. So far though, I'm not using either.

So I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this [Ruby script][8]. 

Once you have the basics installed there are two ways to run pass, with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3 means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.
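If you want to set up something similar, the basic flow with pass looks roughly like this (a sketch -- the GPG key ID, remote path and entry names here are placeholders, not my actual setup):

~~~~console
# point pass at the GPG key it should encrypt entries with
pass init "YOUR-GPG-KEY-ID"

# turn the password store into a Git repo and add a private remote
pass git init
pass git remote add origin myserver:password-store.git

# add and retrieve entries; -c copies to the clipboard instead of printing
pass insert web/example.com
pass -c web/example.com

# push changes to the remote like any other Git repo
pass git push -u origin master
~~~~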

If you go this route there's one other thing you need to back up -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from [here][9]. On one hand you could go paranoid-level secure and make a paper printout of your key -- I suggest using a barcode or QR code printed on card stock, which you then laminate for protection from the elements and store in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase. 
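For reference, the export-and-encrypt step can be done entirely with GnuPG; something along these lines (a sketch -- substitute your own key ID and filenames):

~~~~console
# write an ASCII-armored copy of the secret key to a file
gpg --armor --export-secret-keys YOUR-KEY-ID > secret-key.asc

# encrypt that file with a symmetric passphrase before backing it up
gpg --symmetric --cipher-algo AES256 secret-key.asc

# the result is secret-key.asc.gpg; back that up and remove the plain copy
shred -u secret-key.asc
~~~~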

Yes, that essentially negates at least some of the benefit of using a key instead of a passphrase in the first place. However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. Besides, if you really want these passwords it would be far easier to just take my laptop and [hit me with a $5 wrench][10] until I told you the passphrase for gnome-keyring.

The more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So should something happen to me before I teach my kids how to use it, I periodically print out my important passwords and store that file in a secure place along with a will, advance directive and so on.


[1]: https://twitter.com/tedcooke
[2]: https://packages.debian.org/search?keywords=pwgen
[3]: tk
[4]: http://dashlane.com/
[5]: http://www.passwordstore.org/
[6]: https://github.com/jvenant/passff#readme
[7]: https://github.com/zeapo/Android-Password-Store
[8]: http://git.zx2c4.com/password-store/tree/contrib/importers/lastpass2pass.rb
[9]: http://security.stackexchange.com/questions/51771/where-do-you-store-your-personal-private-gpg-key
[10]: https://www.xkcd.com/538/


# Setup And Secure Your First VPS

date:2015-03-31 20:45:50
url:/src/setup-and-secure-vps

Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015; shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.

It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.

You need a VPS. Seriously.

What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?

No, no and not really.

Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.

As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.

## Pick a VPS Provider

There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://blog.serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.

Linode would be my choice for mission-critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5-a-month servers, which get you 512MB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.

Also note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.

Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.

Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).

For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.

## Create Your First VPS

In Digital Ocean you'll create a "Droplet". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.

If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.

The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email us our root password in plain text. Yikes.

If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](/src/ssh-keys-secure-logins). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.

If you're on OS X, you can use this command to copy your public key to the clipboard:

~~~~console
pbcopy < ~/.ssh/id_rsa.pub
~~~~

Otherwise you can use cat to print it out and copy it:

~~~~console
cat ~/.ssh/id_rsa.pub
~~~~

Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.

Now just click the giant "Create Droplet" button.

Congratulations, you just deployed your first VPS.

## Secure Your VPS

Now we can log in to our new VPS with this command:

~~~~console
ssh root@12.34.56.78
~~~~

That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and on OS X you'll then get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.

And you're now logged in to your VPS as root. That's not how we want to log in though since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:

~~~~console
passwd
~~~~

And type a new password.

Now let's create a new user:

~~~~console
adduser myusername
~~~~

Give your username a secure password and then enter this command:

~~~~console
visudo
~~~~

If you get an error saying the command isn't found, you'll need to first install sudo (`apt-get install sudo` on Debian, which does not ship with sudo). Running `visudo` will open the sudoers file. Use the arrow keys to move the cursor down to the line that reads:

~~~~vim
root ALL=(ALL:ALL) ALL
~~~~

Now add this line:

~~~~vim
myusername ALL=(ALL:ALL) ALL
~~~~

Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.

Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:

~~~~console
ssh myusername@12.34.56.78
~~~~

You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:

~~~~console
sudo apt-get update
~~~~

That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.

Okay, now you can log out of your root terminal window. To do that just hit Control-D.

## Finishing Up

What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.

We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things more secure, but I've covered that in a separate article.
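That said, one of the promises at the top of this tutorial was automatic security updates, and that's a quick win you can grab right now. On Debian/Ubuntu the standard tool is `unattended-upgrades`; a minimal sketch (by default it applies security updates only, which is what we want):

~~~~console
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
~~~~

The second command writes the small config file (`/etc/apt/apt.conf.d/20auto-upgrades`) that turns the periodic upgrade runs on.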

One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:

~~~~console
cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >>  ~/.ssh/authorized_keys"
~~~~

You'll have to put in your password one last time, but from now on you can log in with your SSH key.
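Incidentally, if your machine has `ssh-copy-id` installed, the same thing can be done with one shorter command (same placeholder username and address as above):

~~~~console
ssh-copy-id myusername@12.34.56.78
~~~~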

## Next Steps

Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.

Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).

Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we'd gone with shared hosting. That's okay -- you have to start somewhere. Next up we'll do the same thing, but this time start from a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.

# Setup SSH Keys for Secure Logins

date:2015-03-21 20:49:26
url:/src/ssh-keys-secure-logins

SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.

Key-based authentication uses two keys, the first is the "public" key that anyone is allowed to see. The second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.

Open up your terminal application. On OS X that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).

Here's how to generate SSH keys in three simple steps.


## Setup SSH for More Secure Logins

### Step 1: Check for SSH Keys

Cut and paste this line into your terminal to check and see if you already have any SSH keys:

~~~~console
ls -al ~/.ssh
~~~~

If you see output like this, then skip to Step 3:

~~~~console
id_dsa.pub
id_ecdsa.pub
id_ed25519.pub
id_rsa.pub
~~~~

### Step 2: Generate an SSH Key

Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:

~~~~console
ssh-keygen -t rsa -C "your_email@example.com"
~~~~

This will start a series of questions, just hit enter to accept the default choice for all of them, including the last one which asks where to save the file.

Then it will ask for a passphrase; pick a good, long one. And don't worry, you won't need to enter this every time: there's something called `ssh-agent` that will ask for your passphrase once and then store it for you for the duration of your session (i.e. until you restart your computer).

~~~~console
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
~~~~

Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this:

~~~~console
# Your identification has been saved in /Users/you/.ssh/id_rsa.
# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net
~~~~

### Step 3: Copy Your Public Key to Your VPS

If you have ssh-copy-id installed on your system you can use this line to transfer your keys:

~~~~console
ssh-copy-id user@123.45.56.78
~~~~

If that doesn't work, you can paste in the keys using SSH:

~~~~console
cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >>  ~/.ssh/authorized_keys"
~~~~

Whichever you use, you should get a message like this:

~~~~console
The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
username@12.34.56.78's password: 
~~~~

Now try logging into the machine with `ssh username@12.34.56.78` and check the contents of:

~~~~console
~/.ssh/authorized_keys
~~~~

to make sure there are no extra keys in there that you weren't expecting.

Now log in to your VPS with ssh like so:

~~~~console
ssh username@12.34.56.78
~~~~

And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store it for you so you only need to re-enter it when you log out or restart your computer.

And there you have it, secure, key-based log-ins for your VPS.
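One caveat: if you find you're being asked for the passphrase on every single connection, your system probably isn't starting ssh-agent for you. You can start one by hand in the current shell; a minimal sketch:

~~~~console
# start an agent for this shell session and export its environment variables
eval "$(ssh-agent -s)"

# hand the agent your key; you'll be asked for the passphrase once
ssh-add ~/.ssh/id_rsa
~~~~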

### Bonus: SSH config

If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname. 

The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:

~~~~console
nano ~/.ssh/config
~~~~

Now we need to create a host entry. Here's what mine looks like:

~~~~ini
Host myname
  Hostname 12.34.56.78
  User myvpsusername
  #Port 24857 #if you set a non-standard port uncomment this line
  CheckHostIP yes
  TCPKeepAlive yes
~~~~

Then to log in all I need to do is type `ssh myname`. This is even more helpful when using `scp`, since you can skip the whole username@server bit and just type `scp myname:/home/myuser/somefile.txt .` to copy a file.


# How My Two-Year-Old Twins Made Me a Better Programmer

date:2014-08-05 20:55:13
url:/src/better

TL;DR version: "information != knowledge; knowledge != wisdom; wisdom != experience;"

I have two-year-old twins. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes.

It can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help.

Watching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things.

I've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource, the problem is mine -- but because it's emblematic of a problem. I use StackOverflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge.

On one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one.

On the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you a job, nor will they help you when you want to write a tutorial or a book about something. This sort of "let's just solve the problem" approach begins and ends with the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks.

Information is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the *why* rather than just the *how*. Good answers on Stack Overflow explain the why, but more often than not this doesn't happen.

For example, today I wanted a simple way to get Python's `os.listdir` to ignore directories. I knew that I could loop through all the returned elements and test whether they were directories, but I thought perhaps there was a more elegant way of doing that (short answer: not really). The details of my problem aren't the point though; the point is that the question had barely formed in my mind and I noticed my fingers already headed for command-tab, ready to jump to the browser and cut and paste some solution from Stack Overflow.

This time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds, and I also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. Sometimes that's fine, but do it too much and I'm worried I might be more of a successful cut-and-paster than a struggling programmer.

Sometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type `:help` or whatever and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of "work" to learning. It might even mean that you fail. Sometimes the docs don't help. Then sure, Google away. The important part of learning is to struggle, to apply your energy to the problem rather than just finding the solution.

Sometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them.

In my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed `help(os.listdir)`. Did you know the Python interpreter has a built-in help function called, appropriately enough, `help()`? The `help()` function takes either an object or a keyword (the latter needs to be in quotes like "keyword"). If you're having trouble I wrote a quick guide to [making Python's built-in `help()` function work][1].

Now, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[^1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand *why* it works. I have a new piece of knowledge, and next time I encounter the same situation I can draw on that knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And once you get past solving all the little problems of day-to-day coding, that's really the point -- improving your skills, learning and getting better at what you do every day.
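In case you're wondering, what I ended up with was essentially the loop-and-test approach I'd guessed at; something along these lines (my sketch, with the path as a stand-in):

~~~~console
>>> import os
>>> path = "."
>>> # keep only the entries that aren't directories
>>> files = [f for f in os.listdir(path) if not os.path.isdir(os.path.join(path, f))]
~~~~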

[^1]: Most of that time was spent figuring out where OS X stores Python docs, which [I won't have to do again][1]. Note to self, I gotta switch back to Linux.

[1]: /src/python-help

# Get Smarter with Python's Built-In Help

date:2014-08-01 20:56:57
url:/src/python-help


One of my favorite things about Python is the `help()` function. Fire up the standard Python interpreter, import `help` from `pydoc`, and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[^1].

The `help()` function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like "keyword". Whichever you use Python will pull up the standard Python docs for that object or keyword. Type `help()` without anything and you'll start an interactive help session.
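In practice a session looks something like this (an illustrative sketch; `os` is just an example module):

~~~~console
>>> from pydoc import help
>>> import os
>>> help(os.listdir)   # pass an object you've imported
>>> help("yield")      # or pass a keyword as a quoted string
>>> help()             # no argument starts the interactive help session
~~~~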

The `help()` function is awesome, but there's one little catch.

In order for this to work properly you need to make sure you have the `PYTHONDOCS` environment variable set on your system. On a sane operating system this will likely be in '/usr/share/doc/pythonX.X/html'. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with `apt-get install python-doc`, which will put the docs in `/usr/share/doc/pythonX.X-doc/html/`.
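On Debian, for example, the line you'd eventually add to your shell config (more on that below) would look roughly like this -- adjust the version to match your install:

~~~~console
export PYTHONDOCS=/usr/share/doc/python2.7-doc/html/
~~~~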

If you're using OS X's built-in Python, the path to Python's docs would be:

~~~~console
/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
~~~~

Note the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below.

Once you've found the documentation you can add that variable to your bash/zshrc like so:

~~~~console
export PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
~~~~

Now fire up iPython, type `help()` and start learning rather than always hobbling along with [Google, Stack Overflow and other crutches](/src/better).

Also, PSA. If you do anything with Python, you really need to check out [iPython](http://ipython.org/). It will save you loads of time, has more awesome features than a Veg-O-Matic and [notebooks](http://ipython.org/notebook.html), don't even get me started on notebooks. And in iPython you don't even have to import help, it's already there, ready to go from the minute it starts.

[^1]: The Python docs are pretty good too. Not Vim-level good, but close.


# Protect Your Online Privacy with Ghostery

date:2014-05-29 21:00:40
url:/src/protect-your-online-privacy-ghostery

[**Update 12-11-2015** While everything in this tutorial still works, I should note that I don't actually use Ghostery anymore. Instead I've found [uBlock Origin](https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/) for Chromium and Firefox to be far more robust, customizable and powerful. It's also [open source](https://github.com/gorhill/uBlock). For most people I would continue to suggest Ghostery, but for the particularly tech savvy, check out uBlock.]

There's an invisible web that lies just below the web you see everyday. That invisible web is tracking the sites you visit, the pages you read, the things you like, the things you favorite and collating all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent.

Not much has changed since [I wrote about online tracking years ago on Webmonkey][1]. Back then visiting five websites meant "somewhere between 21 and 47 other websites learn about your visit to those five". That number just continues to grow.

If that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you.

However, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking.

To protect your privacy online you'll just need to add a tool like [Ghostery][2] or [Do Not Track Plus][3] to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up. 

## Install and Setup Ghostery in Firefox, Chrome/Chromium, Opera and Safari.

The first step is to install the Ghostery extension for your web browser. To do that, just head over to the [Ghostery downloads page][4] and click the install button that's highlighted for your browser.

Some browsers will ask you if you want to allow the add-on to be installed. In Firefox just click "Allow" and then click "Install Now" when the installation window opens up.

[![Installing add-ons in Firefox][5]](/media/src/images/2014/gh-firefox-install01.png "View Image 1")
: In Firefox click Allow...

[![Installing add-ons in Firefox 2][6]](/media/src/images/2014/gh-firefox-install02.png "View Image 2")
: ...and then Install Now

If you're using Chrome just click the Add button. 

[![Installing extensions in Chrome/Chromium][7]](/media/src/images/2014/gh-chrome-install01.jpg "View Image 3")
: Installing extensions in Chrome/Chromium

Ghostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this:

[![The Ghostery install wizard][8]](/media/src/images/2014/gh-first-screen.jpg "View Image 4")
: The Ghostery install wizard

This is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you. 

Before I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites. 

There are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options.

## Configuring Ghostery

First we have to configure Ghostery. Click the right arrow on that first screen to get started. That will lead you to this screen:

[![The Ghostery install wizard, page 2][9]](/media/src/images/2014/gh-second-screen.jpg "View Image 5")
: The Ghostery install wizard, page 2

If you want to help Ghostery get better you can check this box. Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble.

[![The Ghostery install wizard, page 3][10]](/media/src/images/2014/gh-third-screen.jpg "View Image 6")
: The Ghostery install wizard, page 3

This is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second.

The next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow. 

[![The Ghostery install wizard -- blocking trackers][11]](/media/src/images/2014/gh-main-01.jpg "View Image 7")
: The Ghostery install wizard -- blocking trackers

Out of the box Ghostery blocks nothing. Let's change that. I start by blocking everything:

[![Ghostery set to block all known trackers][12]](/media/src/images/2014/gh-main-02.jpg "View Image 8")
: Ghostery set to block all known trackers

Ghostery will also ask if you want to block new trackers as it learns about them. I go with yes.

Now chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it:

[![Ghostery set to block everything but Disqus][13]](/media/src/images/2014/gh-main-03.jpg "View Image 9")
: Ghostery set to block everything but Disqus

Note that, along the top of the tracker list there are some buttons. This makes it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that click the "Commenting System" button and uncheck all the options:

[![Filtering Ghostery by type of tracker][14]](/media/src/images/2014/gh-main-04.jpg "View Image 10")
: Filtering Ghostery by type of tracker

Another category of things you might want to allow are music players like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations.

[![Ghostery showing details on Disqus][15]](/media/src/images/2014/gh-main-05.jpg "View Image 11")
: Ghostery showing details on Disqus

Now you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too. 

Let's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't, click the "Add Button" button.

## Ghostery in Action

Okay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked. 

[![Ghostery showing all the trackers no longer tracking you][16]](/media/src/images/2014/gh-example-01.jpg "View Image 12")
: Ghostery showing all the trackers no longer tracking you

You can see in that list that right now the Twitter button is blocked. So if you scroll down to the bottom of the article and look at the author bio (which should have a Twitter button) you'll see this little Ghostery icon:

[![Ghostery replaces elements it has blocked with the Ghostery icon.][17]](/media/src/images/2014/gh-example-02.jpg "View Image 13")
: Ghostery replaces elements it has blocked with the Ghostery icon.

That's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon which will reveal the blocking menu:

[![The Ghostery panel.][18]](/media/src/images/2014/gh-example-03.jpg "View Image 14")
: The Ghostery panel.

Just slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow. To enable the Twitter button on every site, click the little checkbox button to the right of the slider. Realize, though, that enabling it globally will mean Twitter can track you everywhere you go.

[![Enabling trackers from the Ghostery panel.][19]](/media/src/images/2014/gh-example-04.jpg "view image 15")
: Enabling trackers from the Ghostery panel.

This panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back to the settings page by clicking the gear icon and then the "Options" button:

[![Getting back to the Ghostery setting page.][20]](/media/src/images/2014/gh-example-05.jpg "view image 16")
: Getting back to the Ghostery setting page.

Now, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page:


[![Getting back to the Ghostery setting page.][21]](/media/src/images/2014/gh-example-06.jpg "view image 17")
: Getting back to the Ghostery setting page.

The first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults. 

Scroll to the bottom of the settings page, click save, and you're done setting up Ghostery.

## Conclusion

Now you can browse the web with a much greater degree of privacy, only allowing those companies *you* approve of to know what you're up to. And remember, any time a site isn't working the way you think it should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel:

[![Temporarily disable Ghostery.][22]](/media/src/images/2014/gh-example-07.jpg "view image 18")
: Temporarily disable Ghostery.

Also note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available. 

## Further reading


* [How To Install Ghostery (Internet Explorer)][23] -- Ghostery's guide to installing it in Internet Explorer.
* [Secure Your Browser: Add-Ons to Stop Web Tracking][24] -- A piece I wrote for Webmonkey a few years ago that gives some more background on tracking and some other options you can use besides Ghostery. 
* [Tracking our online trackers][25] -- TED talk by Gary Kovacs, CEO of Mozilla Corp, covering online behavior tracking more generally. 
* This sort of tracking is [coming to the real world too][26], so there's that to look forward to. 




[1]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/
[2]: https://www.ghostery.com/
[3]: https://www.abine.com/index.html
[4]: https://www.ghostery.com/en/download
[5]: /media/src/images/2014/gh-firefox-install01-tn.jpg
[6]: /media/src/images/2014/gh-firefox-install02-tn.jpg
[7]: /media/src/images/2014/gh-chrome-install01-tn.jpg
[8]: /media/src/images/2014/gh-first-screen-tn.jpg
[9]: /media/src/images/2014/gh-second-screen-tn.jpg
[10]: /media/src/images/2014/gh-third-screen-tn.jpg
[11]: /media/src/images/2014/gh-main-01-tn.jpg
[12]: /media/src/images/2014/gh-main-02-tn.jpg
[13]: /media/src/images/2014/gh-main-03-tn.jpg
[14]: /media/src/images/2014/gh-main-04-tn.jpg
[15]: /media/src/images/2014/gh-main-05-tn.jpg
[16]: /media/src/images/2014/gh-example-01-tn.jpg
[17]: /media/src/images/2014/gh-example-02-tn.jpg
[18]: /media/src/images/2014/gh-example-03-tn.jpg
[19]: /media/src/images/2014/gh-example-04-tn.jpg
[20]: /media/src/images/2014/gh-example-05-tn.jpg
[21]: /media/src/images/2014/gh-example-06-tn.jpg
[22]: /media/src/images/2014/gh-example-07-tn.jpg
[23]: https://www.youtube.com/watch?v=NaI17dSfPRg
[24]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/
[25]: http://www.ted.com/talks/gary_kovacs_tracking_the_trackers
[26]: http://business.financialpost.com/2014/02/01/its-creepy-location-based-marketing-is-following-you-whether-you-like-it-or-not/?__lsa=e48c-7542

# Scaling Responsive Images in CSS

date:2014-02-27 20:43:23
url:/src/scaling-responsive-images-css

It's pretty easy to handle images responsively with CSS. Just use `@media` queries to swap images at various breakpoints in your design.

It's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling?

Imagine this scenario. You have a div with a paragraph inside it and you want to add a background using the `:before` pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height?

That's a bit trickier, or at least it tripped me up for a minute the other day. I started with this:

~~~~css
.wrapper--image:before {
    content: "";
    display: block;
    max-width: 100%;
    height: auto;
    background-color: #f3f;
    background-image: url('bg.jpg');
    background-repeat: no-repeat;
    background-size: 100%;
 }
~~~~

Do that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that?

You might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try re-sizing [this demo](/demos/css-bg-image-scaling/no-vertical-scaling.html) to see what I'm talking about; make the window less than 800px wide and you'll see the box no longer scales with the image.
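For reference, that naive attempt looks something like this (assuming the same 800px-wide, 443px-tall image used in the demo):

~~~~css
.wrapper--image:before {
    content: "";
    display: block;
    max-width: 100%;
    height: 443px; /* fixed to the image's natural height, so the box stops tracking the image below 800px wide */
    background-image: url('bg.jpg');
    background-repeat: no-repeat;
    background-size: 100%;
}
~~~~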

To get around this we can borrow a trick from Thierry Koblentz's technique for [creating intrinsic ratios for video](http://alistapart.com/article/creating-intrinsic-ratios-for-video/) to create a box that maintains the ratio of our background image. 

We'll leave everything the way it is, but add one line:

~~~~css
.wrapper--image:before {
    content: "";
    display: block;
    max-width: 100%;
    background-color: #f3f;
    background-image: url('bg.jpg');
    background-repeat: no-repeat;
    background-size: 100%;
    padding-top: 55.375%;
}

~~~~

We've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image. I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide, which gives us 55.375%.

Here's a [working demo](/demos/css-bg-image-scaling/vertical-scaling.html).

And there you have it, properly scaling CSS background images on `:before` or other "empty" elements, pseudo or otherwise.

The only real problem with this technique is that it requires you to know the dimensions of your image ahead of time. That won't be possible in every scenario, but if it is, this will work.


# Install Nginx on Debian/Ubuntu

date:2014-02-10 21:03:23
url:/src/install-nginx-debian


I recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.

So, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe are the best practices of Nginx[^1]; if you know better, please correct me in the comments.

[This post was last updated <span class="dt-updated updated" datetime="2015-10-30T12:04:25" itemprop="datePublished"><span>30 October 2015</span></span>]

## Nginx Beats Apache for Static Content[^2]

Apache is overkill. Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S3[^3] (potentially even faster in the future if Verizon and its ilk [really do](http://netneutralitytest.com/) start [throttling cloud-based services](http://davesblog.com/blog/2014/02/05/verizon-using-recent-net-neutrality-victory-to-wage-war-against-netflix/)).

## Nginx is Different from Apache

In its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like `LoadModule alias_module modules/mod_alias.so` to your Apache config files and just like that Apache loads the alias module.

Unlike Apache, Nginx can not dynamically load modules. Nginx has available what it has available when you install it.

That means if you really want to customize and tweak it, it's best to install Nginx from source. You don't *have* to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.

Luckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.

## My Ideal Nginx Setup for Static Sites

Before we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.

* [Nginx](http://nginx.org).
* [SPDY](http://www.chromium.org/spdy/spdy-protocol) -- Nginx offers "experimental support for SPDY", but it's not enabled by default. We're going to enable it when we install Nginx. In my testing SPDY support has worked without a hitch, experimental or otherwise.
* [Google Page Speed](https://developers.google.com/speed/pagespeed/module) -- Part of Google's effort to make the web faster, the Page Speed Nginx module "automatically applies web performance best practices to pages and associated assets".
* [Headers More](https://github.com/agentzh/headers-more-nginx-module/) -- This isn't really necessary from a speed standpoint, but I often like to set custom headers and hide some headers (like which version of Nginx your server is running). Headers More makes that very easy.
* [Naxsi](https://github.com/nbs-system/naxsi) -- Naxsi is a "Web Application Firewall module for Nginx". It's not really all that important for a server limited to static files, but it adds an extra layer of security should you decide to use Nginx as a proxy server down the road.

So we're going to install Nginx with SPDY support and three third-party modules.

Okay, here's the step-by-step process for installing Nginx on a Debian 8 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with [Vultr.com](http://www.vultr.com/?ref=6825229) (that's an affiliate link that will help support luxagraf; if you prefer, here's a non-affiliate link: [link](http://www.vultr.com/)).

The first step is to make sure you're installing the latest release of Nginx. To do that check the [Nginx download page](http://nginx.org/en/download.html) for the latest version of Nginx (at the time of writing that's 1.7.7, which is what we'll use below).

Okay, SSH into your server and let's get started.

While these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.

On a Debian/Ubuntu server you'd do this:

~~~~console
sudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip
~~~~



If you're using RHEL/Cent/Fedora you'll want these packages:

~~~~console
sudo yum install gcc-c++ pcre-devel zlib-devel make unzip
~~~~

After you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's [Nginx PageSpeed installation instructions](https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source) are pretty good, so I'll reproduce them here.

First grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the source page since it updates frequently, and change the first variable below to match the latest version.

~~~~console
NPS_VERSION=1.9.32.2
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
unzip release-${NPS_VERSION}-beta.zip
~~~~

Now, before we compile pagespeed we need to grab `psol`, which PageSpeed needs to function properly. So, let's `cd` into the `ngx_pagespeed-release-${NPS_VERSION}-beta` folder and grab `psol`:

~~~~console
cd ngx_pagespeed-release-${NPS_VERSION}-beta/
wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
tar -xzvf ${NPS_VERSION}.tar.gz
cd ../
~~~~

Alright, so the `ngx_pagespeed` module is all set up and ready to install. All we have to do at this point is tell Nginx where to find it.

Now let's grab the Headers More and Naxsi modules as well. Again, check the [Headers More](https://github.com/agentzh/headers-more-nginx-module/) and [Naxsi](https://github.com/nbs-system/naxsi) pages to see what the latest stable version is and adjust the version numbers in the following accordingly.

~~~~console
HM_VERSION=v0.25
wget https://github.com/agentzh/headers-more-nginx-module/archive/${HM_VERSION}.tar.gz
tar -xvzf ${HM_VERSION}.tar.gz
NAX_VERSION=0.53-2
wget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz
tar -xvzf ${NAX_VERSION}.tar.gz
~~~~

Now that we have all three third-party modules ready to go, the last thing to grab is a copy of Nginx itself:

~~~~console
NGINX_VERSION=1.7.7
wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar -xvzf nginx-${NGINX_VERSION}.tar.gz
~~~~

Then we `cd` into the Nginx folder and compile. So, first:

~~~~console
cd nginx-${NGINX_VERSION}/
~~~~

Now that we're inside the Nginx folder, let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need; if you need the mail modules, delete those lines. If you don't need SSL, you might want to skip that as well. Here are the configure settings I use (note: all paths are for Debian servers, so you'll have to adjust them accordingly for RHEL/CentOS/Fedora servers):


~~~~console
./configure \
        --add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \
        --prefix=/usr/share/nginx \
        --sbin-path=/usr/sbin/nginx \
        --conf-path=/etc/nginx/nginx.conf \
        --pid-path=/var/run/nginx.pid \
        --lock-path=/var/lock/nginx.lock \
        --error-log-path=/var/log/nginx/error.log \
        --http-log-path=/var/log/access.log \
        --user=www-data \
        --group=www-data \
        --without-mail_pop3_module \
        --without-mail_imap_module \
        --without-mail_smtp_module \
        --with-http_stub_status_module \
        --with-http_ssl_module \
        --with-http_spdy_module \
        --with-http_gzip_static_module \
        --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \
        --add-module=$HOME/headers-more-nginx-module-${HM_VERSION}
~~~~

There are a few things worth noting here. First off make sure that Naxsi is first. Here's what the [Naxsi wiki page](https://github.com/nbs-system/naxsi/wiki/installation) has to say on that score: "Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF." The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.

There are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding the flag `--with-file-aio`, which is apparently faster than the stock `sendfile` option. See [here](https://calomel.org/nginx.html) for more details. There are quite a few other modules available. A [full list of the default modules](http://wiki.nginx.org/Modules) can be found on the Nginx site. Read through that and, if there's another module you need, you can add it to that configure list.

Okay, we've told Nginx what to do, now let's actually install it:

~~~~console
make
sudo make install
~~~~

Once `make install` finishes doing its thing you'll have Nginx all set up.
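Before moving on, it can be worth double-checking that the binary landed where we told it to and that the extra modules were compiled in. This is just a quick sanity check, assuming the Debian-style paths from the configure flags above:

~~~~console
/usr/sbin/nginx -V   # prints the version plus the full list of configure arguments
sudo /usr/sbin/nginx -t   # confirms the default config parses cleanly
~~~~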

Congrats! You made it.

The next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.

## Make it Autostart

Since we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying system isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.

**Note: I have embraced systemd so this is out of date, see below for systemd version**

To do that I use the [Debian init script](https://github.com/MovLib/www/blob/master/bin/init-nginx.sh) listed on the [Nginx InitScripts page](http://wiki.nginx.org/InitScripts).

If that works for you, grab the raw version:

~~~~console
wget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh
# I had to edit the DAEMON var to point to nginx
# change line 63 in the file to:
DAEMON=/usr/sbin/nginx
# then move it to /etc/init.d/nginx
sudo mv nginx.sh /etc/init.d/nginx
# make it executable:
sudo chmod +x /etc/init.d/nginx
# then just:
sudo service nginx start #also restart, reload, stop etc
~~~~

##Updated Systemd scripts

Yeah I went and did it. I kind of like systemd actually. Anyway, here's what I use to stop and start my custom compiled nginx with systemd...

First we need to create and edit an nginx.service file.

~~~~console
sudo vim /lib/systemd/system/nginx.service #this is for debian
~~~~

Then I use this script which I got from the nginx wiki I believe.

~~~~ini
# Stop dance for nginx
# =======================
#
# ExecStop sends SIGQUIT (graceful stop) to the nginx process.
# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control
# and sends SIGTERM (fast shutdown) to the main process.
# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends
# SIGKILL to all the remaining processes in the process group (KillMode=mixed).
#
# nginx signals reference doc:
# http://nginx.org/en/docs/control.html
#
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target
~~~~

Save that file, exit your text editor. Now we just need to tell systemd about our script and then we can stop and start via our service file. To do that...

~~~~console
sudo systemctl enable nginx.service
sudo systemctl start nginx.service
sudo systemctl status nginx.service
~~~~

I suggest taking the last bit and turning it into an alias in your `bashrc` or `zshrc` file so that you can quickly restart/reload the server when you need it. Here's what I use:

~~~~bash
alias xrestart="sudo systemctl restart nginx.service"
~~~~


If you're using systemd, congrats, you're done. If you're looking for a way to get autostart to work on older or non-systemd servers, read on...

**End systemd update**

Okay, so now we have the initialization script all set up; let's make Nginx start up on reboot. In theory this should do it:

~~~~console
update-rc.d -f nginx defaults
~~~~

But that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that "`insserv rejected the script header`"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:

~~~~console
sudo apt-get install chkconfig
sudo chkconfig --add nginx
sudo chkconfig nginx on
~~~~

So there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.

After that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.

Before we get to that though, here's the first thing I do: edit `/etc/nginx/nginx.conf` down to something pretty simple. This is the root config so I keep it limited to an `http` block that turns on a few things I want globally and an include statement that loads site-specific config files. Something a bit like this:

~~~~nginx
user  www-data;
events {
    worker_connections  1024;
}
http {
    include mime.types;
    include /etc/nginx/naxsi_core.rules;
    default_type  application/octet-stream;
    types_hash_bucket_size 64;
    server_names_hash_bucket_size 128;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;
    more_set_headers "Server: My Custom Server";
    keepalive_timeout  65;
    gzip  on;
    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;
    include /etc/nginx/sites-enabled/*.conf;
}
~~~~

A few things to note. I've included the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to `/etc/nginx/`.

~~~~console
sudo cp naxsi-0.53-2/naxsi_config/naxsi_core.rules /etc/nginx/
~~~~

Now let's restart the server so it picks up these changes:

~~~~console
sudo service nginx restart
~~~~

Or, if you took my suggestion of creating an alias, you can type: `xrestart` and Nginx will restart itself.

With this configuration we have a good basic setup and any `.conf` files you add to the folder `/etc/nginx/sites-enabled/` will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file `/etc/nginx/sites-enabled/mydomain.conf` and put the configuration for that domain in that file.
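For example, a bare-bones static-site config for that hypothetical mydomain.com might look something like this (the domain and root path are placeholders; your real sites will probably want expires headers, gzip types and so on):

~~~~nginx
server {
    listen 80;
    server_name mydomain.com www.mydomain.com;

    root /var/www/mydomain.com/htdocs;  # wherever your static files live
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
~~~~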

I'm going to post a follow up on how I configure Nginx very soon. In the meantime here's a pretty comprehensive [guide to configuring Nginx](https://calomel.org/nginx.html) in a variety of scenarios. And remember, if you want some more helpful tips and tricks for web developers, sign up for the mailing list below.

[^1]: If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know.
[^2]: In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs.
[^3]: That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server.


# Tools for Writing an Ebook

date:2014-01-24 20:05:17
url:/src/ebook-writing-tools

It never really occurred to me to research which tools I would need to create an ebook because I knew I was going to use Markdown, which could then be translated into pretty much any format using [Pandoc](http://johnmacfarlane.net/pandoc/). But since a few people have [asked](https://twitter.com/situjapan/status/549935669129142272) for more details on *exactly* which tools I used, here's a quick rundown:

1. I write books as single text files lightly marked up with Pandoc-flavored Markdown.
2. Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as [detailed here in the Pandoc documentation](http://johnmacfarlane.net/pandoc/epub.html). I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time.
3. Pandoc outputs an ePub file and an HTML file. The latter is then used with [Weasyprint](http://weasyprint.org/) to generate the PDF version of the ebook. Then I used the ePub file and the [Kindle command line tool](http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000765211) to create a .mobi file.
4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub supports only a subset of CSS and reader support is somewhat akin to building a website in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.)

In the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you [buy the book](/src/books/).

## FAQs and Notes.

**Why not use InDesign or iBook Author or \_\_\_\_\_\_\_?**

I wanted to use open source software, which offers me more control over the process than I could get with monolithic tools like visual layout editors. 

The above tools are, for me anyway, the simplest possible workflow which outputs the highest quality product. 

**What about Prince?**

What does The Purple One have to do with writing books? Oh, that [Prince](http://www.princexml.com/). Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript, which is handy for code highlighting, or allow for `@font-face` font embedding), but it's not free and, in the end, I decided it's not worth the money.

**Can you share your shell script?**

Here's the basic idea, adjust file paths to suit your working habits.

~~~~bash
#!/bin/sh
#Update PDF:
pandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf


#Update epub:
pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt

#update Mobi:
pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub Draft.txt && kindlegen kindle.epub -o rwd.mobi
~~~~

I just run this script and bang, all my files are updated. 
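If you want to do the same, save the script under some name like `build.sh` (a hypothetical name; call it whatever you like) alongside your draft, make it executable and run it:

~~~~console
chmod +x build.sh
./build.sh
~~~~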

**What advice do you have for people trying to write an ebook?**

At the risk of sounding trite, just do it. 

Writing a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to *produce* a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two key strokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing. 

Now go make something cool (and be sure to tell me about it).


# Whatever Happened to Webmonkey.com?

date:2013-09-20 21:04:57
url:/src/whatever-happened-webmonkey

[Update 02/2019: If you're looking for a good resource, similar to Webmonkey, I suggest Mozilla's [Developer Docs site](https://developer.mozilla.org/en-US/). It lacks Webmonkey's sense of humor and fun, and it doesn't cover everything Webmonkey covered, but it does have some good tutorials and documentation on HTML, CSS and JavaScript]

People on Twitter have been asking what's up with [Webmonkey.com][1]. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.

Earlier this year Wired decided to stop producing new content for Webmonkey. [**Update 07/2016**: The domain has been shut down and now redirects to wired.com. I told you they were serious this time.]
For those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).

I've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.

Like so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers, it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.

I'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me, it's always been there and suddenly it's not. Sniff. I'll miss you Webmonkey.

Quite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen ([now at Medium][2]). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who "got" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.

I don't begrudge Wired for shutting Webmonkey down. While I have a certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.

<span class="strike">I am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web</span>. [**Update 07/2016**: so much for that, domain and all content are gone now.] That said, note to the [Archive Team][3], it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.

If you have any questions or want more details use the comments box below.

In closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially [Michael Calore][5], [Evan Hansen][6] and [Leander Kahney][7] who all made me a much better writer. Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.

Cheers and, yes, thanks for all the bananas.

[1]: http://www.webmonkey.com/
[2]: https://medium.com/@evanatmedium
[3]: http://www.archiveteam.org/index.php?title=Main_Page
[4]: https://twitter.com/LongHandPixels
[5]: http://snackfight.com/
[6]: https://twitter.com/evanatmedium
[7]: http://www.cultofmac.com/about/

# New Adventures in HiFi Text

date:2005-02-12 11:01:49
url:/src/new-adventures-in-hifi-text

I sometimes bitch about Microsoft Word in this piece, but let me be clear that I do not hate Windows or Microsoft, nor am I a rabid fan of Apple. In fact prior to the advent of OS X, I was ready to ditch the platform altogether. I could list as many crappy things about Mac OS 7.x-9.x as I can about Windows. Maybe even more. But OS X changed that for me, it's everything I was looking for in an OS and very little I wasn't. But I also don't think Microsoft is inherently evil and Windows is their plan to exploit the vulnerable masses. I mean really, do you think [this guy][2] or anything he might do could be *evil*? Of course not. I happen to much prefer OS X, but that's just personal preference and computing needs. I use Windows all the time at work and I don't hate it, it just lacks a certain *je ne sais quoi*. [2014 update: These days I use Arch Linux because it just works better than any other OS I have used.]

###In Praise of Plain Text

That said, I have never really liked Microsoft Word on any platform. It does all sorts of things I would prefer that it didn't, such as capitalize URLs while I'm typing or automatically convert email addresses to live links. Probably you can turn these sorts of things on and off in the preferences, but that's not the point. I dislike the way Word approaches me, [assuming that I want every bell and whistle possible][10], including a shifty looking paperclip with Great Gatsbyesque eyes watching my every move. 

Word has too many features and yet fails to implement any of them with much success. Since I don't work in an office environment, I really don't have any need for Word (did I mention it's expensive and crashes with alarming frequency?). I write for a couple of magazines here and there, post things on this site, and slave away at the mediocre American novel, none of which requires me to use MS Word or the .doc format. In short, I don't *need* Word.

Yet for years I used it anyway. I still have the copy I purchased in college and even upgraded when it became available for OS X. But I used it mainly out of ignorance to the alternatives, rather than usefulness of the software. I can now say I have tried pretty much every office/word processing program that's available for OS X and I only like one of them -- [Mellel][11]. But aside from that one, I've concluded I just don't like word processors (including [Apple's new Pages program][12]). 

These days I do my writing in a text editor, usually BBEdit. Since I've always used BBEdit to write code, it was open and ready to go. Over time I noticed that when I wanted to jot down some random idea I turned to BBEdit rather than opening up Word. It started as a convenience thing and just sort of snowballed from there. Now I'm really attached to writing in plain text.

In terms of archival storage, plain text is an excellent way to write. If BareBones, the makers of BBEdit, went bankrupt tomorrow I wouldn't miss a beat because just about any program out there can read my files. As a file storage format, plain text is almost totally platform independent (I'm sure someone has got a text editor running on their PS2 by now), which makes plain text fairly future proof (and if it's not then we have bigger issues to deal with). Plain text is also easy to mark up for web display, a couple of `<p>` tags, maybe a link here and there and we're on our way.

###In Praise of Formatted Text

But there are some drawbacks to writing in plain text -- it sucks for physical documents. No one wants to read printed plain text. Because plain text must be single spaced, printing renders some pretty small text with no room to make corrections -- less than ideal for editing purposes. Sure, I could adjust the font size and whatnot from within BBEdit's preferences, but I can't get the double spacing, which is indispensable for editing, but a waste of space when I'm actually writing. 

Of course this may be peculiar to me. It may be hard for some people to write without having the double-spaced screen display. Most people probably look at what they're writing while they write it. I do not. I look at my hands. Not to find the keys, but rather with a sort of abstract fascination. My hands seem to know where to go without me having to think about it, it's kind of amazing and I like to watch it happen. I could well be thinking about something entirely different from what I'm typing and staring down at my hands produces a strange realization -- wow look at those fingers go, I wonder how they know what they're doing? I'm thinking about the miraculous way they seem to know what they're doing, rather than what they're actually doing. It's highly likely that this is my own freakishness, but it eliminates the need for nicely spaced screen output (and introduces the need for intense editing).

But wait, let's go back to that earlier part where I said its easy to mark up plain text for the web -- what if it were possible to mark up plain text for print? Now that would be something.

###The Best of Both Worlds (Maybe)

In fact there is a markup language for print documents. Unfortunately it's pretty clunky. It goes by the name TeX, the terseness of which should make you think -- ah, Unix. But TeX is actually really wonderful. It gives you the ability to write in plain text and use an, albeit esoteric and awkward, syntax to mark it up. TeX can then convert your document into something very classy and beautiful.

Now prior to the advent of Adobe's ubiquitous PDF format I have no idea what sort of things TeX produced, nor do I care, because PDF exists and TeX can leverage it to render printable, distributable, cross-platform, open standard and, most importantly, really good looking documents.

But first let's deal with the basics. TeX is convoluted, ugly, impossibly verbose and generally useless to anyone without a computer science degree. Recognizing this, some reasonable folks came along and said, hey, what if we wrote some simple macros to access this impossibly verbose, difficult-to-comprehend language? That would be marvelous. And so some people did and called the result LaTeX because they were nerd/geeks and loved puns and the shift key. Actually I am told that LaTeX is pronounced Lah Tech, and that TeX should not be thought of as tex, but rather the greek letters tau, epsilon and chi. This is all good and well if you want to convince people you're using a markup language rather than checking out fetish websites, but the word is spelled latex and will be pronounced laytex as long as I'm the one saying it. (Note to Bono: Your name is actually pronounced bo know. Sorry, that's just how it is in my world.)

So, while TeX may do the actual work of formating your plain text document, what you actually use to mark up your documents is called LaTeX. I'm not entirely certain, but I assume that the packages that comprise LaTeX are simple interfaces that take basic input shortcuts and then tell TeX what they mean. Sort of like what Markdown does in converting text to HTML. Hmmm. More on that later.

###Installation and RTFM suggestions

So I went through the whole unixy rigamarole of installing packages in usr/bin/ and other weird directories that I try to ignore and got a lovely little Mac OS X-native front end called [TeXShop][3]. Here is a link to the [Detailed instructions for the LaTeX/TeX set up I installed][4]. The process was awkward, but not painful. The instructions comprise only four steps, not as bad as say, um, well, okay, it's not drag-n-drop, but it's pretty easy.

I also went a step further because LaTeX in most of its incarnations is pretty picky about what fonts it will work with. If this seems idiotic to you, you are not alone. I thought hey, I have all these great fonts, I should be able to use any of them in a LaTeX document, but no, it's not that easy. Without delving too deep into the mysterious world of fonts, it seems that, in order to render text as well as it does, TeX needs special fonts -- particularly fonts that have specific ligatures included in them. Luckily a very nice gentleman by the name of Jonathan Kew has already solved this problem for those of us using Mac OS X. So I [downloaded and installed XeTeX][13], which is actually a totally different macro processor that runs semi-separately from a standard LaTeX installation (at least I think it is, feel free to correct me if I'm wrong). This link offers [more information on XeTeX][5].

So then [I read the fucking manual][6] and [the other fucking manual][7] (which should be on your list of best practices when dealing with new software or programming languages). After an hour or so of tinkering with pre-made templates developed by others, and consulting the aforementioned manuals, I was actually able to generate some decent looking documents.

But the syntax for LaTeX is awkward and verbose (remember -- written to avoid having to know an awkward and verbose syntax known as TeX). Would you rather write this:

~~~{.latex}

\section{Heading}
\font\a="Bell MT" at 12pt 	
\a some text some text some text some text, for the love of god I will not use latin sample text because damnit I am not roman and do not like fiddling. \href{http://www.linkaddress.com}{some link text} to demonstrate what a link looks like in XeTeX. \verb#here is a line of code# to show what inline code looks like in XeTeX some more text because I still won't stoop, yes I said stoop, to Latin.
~~~

Or this:

~~~{.markdown}

###Heading

Some text some text some text some text, for the love of god I will not use latin sample text because damnit I am not roman and do not like fiddling. [some link text][99] to demonstrate what a link looks like in Markdown. `here is a line of code` to show what inline code looks like in Markdown. And some more text because I still won't stoop, yes I said stoop, to Latin.

~~~
    
In simple terms of readability, [John Gruber's Markdown][8] (the second sample code) is a stroke of true brilliance. I can honestly say that nothing has changed my writing style as much since my parents bought one of these newfangled computer thingys back in the late 80's. So, with no more inane hyperbole, let's just say I like Markdown.

LaTeX on the other hand shows its age like the denture-baring ladies of a burlesque revival show. It ain't sexy. And believe me, my sample is the tip of the iceberg in terms of mark up.

###Using Perl and Applescript to Generate XeTeX

Here's where I get vague, beg personal preferences, hint at a vast undivulged knowledge of AppleScript (not true, I just use the "start recording" feature in BBEdit) and simply say that, with a limited knowledge of Perl, I was able to rewrite Markdown, combine that with some applescripts to call various Grep patterns (LaTeX must escape certain characters, most notably `$` and `&`) and create a BBEdit Textfactory which combines the first two elements to generate LaTeX markup from a Markdown syntax plain text document. And no I haven't been reading Proust, I just like long, parenthetically-aside sentences.

Yes all of the convolution of the preceding sentence allows me to, in one step, convert this document to a LaTeX document and output it as a PDF file. Don't believe me? [Download this article as a PDF produced using LaTeX][9]. In fact it's so easy I'm going to batch process all my entries and make them into nice looking PDFs which will be available at the bottom of the page.

###Technical Details

I first proposed this idea of using Markdown to generate LaTeX on the BBEdit mailing list and was informed that it would be counter-productive to the whole purpose and philosophy of LaTeX. While I sort of understand this guidance, I disagree. 

I already have a ton of documents written with Markdown syntax. Markdown is the most minimal syntax I've found for generating html. Why not adapt my existing workflow to generate some basic LaTeX? See I don't want to write LaTeX documents; I want to write text documents with Markdown syntax in them and generate html and PDF from the same initial document. Then I want to revert the initial document back to its original form and stash it away on my hard drive.

I simply wanted a one step method of processing a Markdown syntax text file into XeTeX to complement the one step method I already have for turning the same document into HTML.

Here's how I do it. I modified Markdown to generate what LaTeX markup I need, i.e. specific definitions for list elements, headings, quotes, code blocks etc. This was actually pretty easy, and keep in mind that I have never gotten beyond a "hello world" script in Perl. Kudos to John Gruber for copious comments and very logical, easy to read code.

That's all good and well, but then there are some other things I needed to do to get a usable TeX version of my document. For instance certain characters need to be escaped, like the entities mentioned above. Now if I were more knowledgeable about Perl I would have just added these to the Markdown file, but rather than wrestle with Perl I elected to use grep via BBEdit. So I crafted an applescript that first parsed out things like `&mdash;` and replaced them with the unicode equivalent which is necessary to get an em-dash in XeTeX (in a normal LaTeX environment you would use `---` to generate an emdash). Other things like quote marks, curly brackets and ampersands are similarly replaced with their XeTeX equivalents (for some it's unicode, others like `{` or `}` must be escaped like so: `\{`). 
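If you'd rather not wrangle AppleScript, the flavor of those replacements can be expressed as plain `sed` substitutions. This is just an illustrative sketch of the idea, not my actual BBEdit setup, and the file names are hypothetical:

~~~~console
# swap the &mdash; entity for a literal em-dash, then escape LaTeX special characters
sed -e 's/&mdash;/—/g' \
    -e 's/{/\\{/g' \
    -e 's/}/\\}/g' \
    -e 's/\$/\\$/g' \
    -e 's/&/\\\&/g' \
    draft.txt > draft-escaped.txt
~~~~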

Next I created a BBEdit Textfactory to call these scripts in the right order (for instance I need to replace quote marks after running my modified Markdown script, since Markdown will use quotes to identify things like url title tags, which my version simply discards). Then I created an applescript that calls the textfactory and then applies a BBEdit glossary item to the resulting (selected) text, which adds all the preamble TeX definitions I use and then passes that whole code block off to XeTeX via TeXShop and outputs the result in Preview.

Convoluted? Yes. But now that it's done and assigned a shortcut key it takes less than two seconds to generate a pdf of really good looking (double spaced) text. The best part is if I want to change things around, the only file I have to adjust is the BBEdit glossary item that creates the preamble.

The only downside is that to undo the various manipulations wrought on the original text file I have to hit the undo command five times. At some point I'll sit down and figure out how to do everything using Perl and then it will be a one step undo just like regular Markdown. In the meantime I just wrote a quick applescript that calls undo five times :)

###Am I insane?

I don't know. I'm willing to admit to esoteric and when pressed will concede stupid, but damnit I like it. And from initial install to completed workflow we're only talking about six hours, most of which was spent poring over LaTeX manuals. Okay yes, I'm insane. I went to all this effort just to avoid an animated paperclip. But seriously, that thing is creepy.

Note of course that my LaTeX needs are limited and fairly simple. I wanted one version of my process to output a pretty simple double spaced document for editing. Then I whipped up another version for actual reading by others (single spaced, nice margins and header etc). I'm a humanities type, I'm not doing complex math equations, inline images, or typesetting an entire book with table of contents and bibliography. Of course even if I were, the only real change I would need to make is to the LaTeX preamble template. Everything else would remain the same, which is pretty future proof. And if BBEdit disappears and Apple goes belly up, well, I still have plain text files to edit on my PS457.

[1]: http://www.luxagraf.com/archives/flash/software_sucks "Why Software sucks. Sometimes."
[2]: http://www.snopes.com/photos/people/gates.asp "Bill Gates gets sexy for the teens--yes, that is a mac in the background"
[3]: http://www.uoregon.edu/~koch/texshop/texshop.html "TeXShop for Mac OS X"
[4]: http://www.mecheng.adelaide.edu.au/~will/texstart/ "TeX on Mac OS X: The most simple beginner's guide"
[5]: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=xetex_texshop "Using XeTeX with TexShop"
[6]: http://www.math.hkbu.edu.hk/TeX/ "online LaTeX manual"
[7]: http://www.math.hkbu.edu.hk/TeX/lshort.pdf "Not so Short introduction to LaTeX"
[8]: http://daringfireball.net/projects/markdown/ "Markdown"
[9]: http://www.luxagraf.com/pdf/hifitext.pdf "this article as an XeTeX generated pdf"
[10]: http://www1.appstate.edu/~clarkne/hatemicro.html "Microsoft Word Suicide Note help"
[11]: http://www.redlers.com/ "Mellel, great software and a bargin at $39"
[12]: http://www.apple.com/iwork/pages/ "Apple's Pages, part of the new iWork suite"
[13]: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=xetex&_sc=1 "The XeTeX typesetting system"