Larry Boltovskoi

Web Developer

How I Like to Set Up My Servers

I like playing around with VPS servers, you know… for science! And because it’s fun, it’s good practice for becoming a better sysadmin, and it gives me more freedom than a shared host. And admit it, being the sysadmin of your own little cluster is cool!

Documenting it in a blog post gives me a place to write down how, in my opinion, a perfect web server should be set up, and a place to copy/paste commands from when I’m reinstalling after too much messing around.
To get started: I personally prefer Debian 6.0 (Squeeze) as my server OS.

Important!

Please note I’m not a professional system administrator. Everything here is based on my own experience and preferences. I’m pretty sure there are much better ways to achieve this using fancy tools such as Puppet, Ansible, PaaS solutions, etc. Eventually I’ll learn how to use all those tools as well, but until then, this works for me. :)

Initial config

Before starting with anything related to web serving, I set up the server to be a bit friendlier to me.

This means editing /etc/apt/sources.list:

deb http://ftp.debian.org/debian oldstable main contrib non-free
deb http://ftp.debian.org/debian/ squeeze-updates main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb http://debian.cs.binghamton.edu/debian-backports squeeze-backports main

And apt-gettin’ all the packages I require (still logged in as root):

apt-get update -y
apt-get --reinstall install debian-archive-keyring
apt-get upgrade -y
apt-get install sudo htop vim ack mosh git tmux tree curl -y

apt-get install acl -y
mount -o remount,acl /
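
Note (my addition, not in the original commands): remount,acl only lasts until the next reboot. To make ACL support permanent, the acl option can be added to the root filesystem’s entry in /etc/fstab — an illustrative line; your UUID and filesystem type will differ:

```
# /etc/fstab — illustrative entry; the UUID and fs type are placeholders
UUID=0a1b2c3d-...  /  ext3  errors=remount-ro,acl  0  1
```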

User setup

With the basic packages in place, I add a regular account to connect to my server, which I keep uniform everywhere with my username on my MacBook Pro. That means I can leave off the username@ part when connecting.

adduser larrybolt
usermod -a -G sudo larrybolt

I also create the group web, which will be assigned to developers that have access to the apps directory where all web applications/sites will be located.

addgroup web
usermod -a -G web larrybolt

After this I disconnect and copy over my ssh keys (a good tutorial on that can be found at the WebFactional Docs).

#locally
scp .ssh/id_dsa.pub yuna.codr.in:~/authorized_keys
ssh node.example.com
#once logged in
mkdir .ssh && mv authorized_keys .ssh && chmod 600 .ssh/authorized_keys && chmod 700 .ssh
git clone git://github.com/larrybolt/dotfiles.git .dotfiles
ln -s ~/.dotfiles/server/.aliases ~/.aliases
ln -s ~/.dotfiles/server/.bash_profile ~/.bash_profile
ln -s ~/.dotfiles/development.server/.tmux.conf ~/.tmux.conf
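
Since my local username matches the one on the server, a bare ssh node.example.com works. When the usernames don’t match, an entry in ~/.ssh/config achieves the same effect (a sketch — the alias and hostname here are illustrative):

```
# ~/.ssh/config — lets `ssh node` connect as the right user
Host node
    HostName node.example.com
    User larrybolt
```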

Web Server

I prefer to keep all my web-related files and logs under /var/web.

sudo mkdir -p /var/web/{apps/{nodejs,php,ruby,scala,static,python},backups,logs/{apache,mysql,nginx},root,tools,vhosts,conf/{nginx/{sites-enabled,sites-available},apache/{sites-enabled,sites-available},mysql,php}}
cd /var/web
sudo ln -s ./logs ./log
sudo chgrp -R web ./apps
sudo chmod g+s ./apps
sudo setfacl -R -d -m group:web:rwx ./apps
sudo setfacl -R -m group:web:rwx ./apps

This will create a tree like the following:

/var/web
├── apps
│   ├── nodejs
│   ├── php
│   ├── python
│   ├── ruby
│   ├── scala
│   └── static
├── backups
├── conf
│   ├── apache
│   │   ├── sites-available
│   │   └── sites-enabled
│   ├── mysql
│   ├── nginx
│   │   ├── sites-available
│   │   └── sites-enabled
│   └── php
├── log -> ./logs
├── logs
│   ├── apache
│   ├── mysql
│   └── nginx
├── root
├── tools
└── vhosts

This gives a centralized, organized location for anything web related (which in my opinion makes backups and server moves easier). The vhosts directory will simply contain symlinks into the apps folder.

Example: cd /var/web/vhosts; ln -s ../apps/php/example/public ./example.com.
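
The long mkdir -p line above relies on bash brace expansion to create the whole tree in one go. A handy trick (my addition) is to echo the pattern first to preview what will be created — here with a shortened version of the same pattern:

```shell
# Preview what a nested brace expansion will create before running mkdir -p
echo /var/web/{apps/{php,static},logs/{nginx,apache}}
# → /var/web/apps/php /var/web/apps/static /var/web/logs/nginx /var/web/logs/apache
```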

Getting RVM and installing Ruby 1.9.3

\curl -L https://get.rvm.io | sudo bash -s stable
sudo usermod -a -G rvm larrybolt
# reload shell (logout/login)
rvm requirements
# this will take a while!
rvm install 1.9.3

Nginx and Passenger

The easiest way I found to get Passenger and nginx working together was installing Passenger as a gem and letting its installer compile nginx with the Passenger module built in:

gem install passenger
rvmsudo passenger-install-nginx-module
# You'll probably have to install some packages; in my case only the following:
sudo apt-get install libcurl4-openssl-dev
# and run passenger-install-nginx-module again to install nginx to /opt/nginx
rvmsudo passenger-install-nginx-module

When this step has completed successfully, continue by adding an init.d script for nginx:

cd
wget https://gist.github.com/hisea/1548664/raw/53f6d7ccb9dfc82a50c95e9f6e2e60dc59e4c2fb/nginx
sudo cp nginx /etc/init.d/
sudo chmod +x /etc/init.d/nginx
sudo update-rc.d nginx defaults

Edit the location where the PID file is stored in /etc/init.d/nginx; in my case: PIDSPATH=/opt/nginx/logs. You can alter this behaviour in the nginx config file located at /opt/nginx/conf/nginx.conf.

sudo mv /opt/nginx/conf/nginx.conf* /var/web/conf/nginx/
sudo ln -s /var/web/conf/nginx/nginx.conf /opt/nginx/conf/nginx.conf

And launch nginx! sudo service nginx start

Using nginx and apache together

The next step is getting Apache installed alongside nginx:

sudo apt-get install apache2

Nginx will be the main server, listening on port 80; Apache will listen on port 8080 and only accept connections passed through by nginx. This is because sometimes it’s easier to set up a site through Apache (.htaccess rules still differ from an nginx configuration).
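
The post doesn’t show this step explicitly, but making Apache listen on 8080 means editing /etc/apache2/ports.conf along these lines (my sketch for Apache 2.2 on Squeeze; binding to 127.0.0.1 ensures only nginx can reach it):

```
# /etc/apache2/ports.conf — Apache behind nginx, local connections only
NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080
```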

To begin the configuration for nginx: without any sites configured, two files will be present in /etc/nginx/sites-enabled:

  • 000-node, which is the node’s root and will be shown upon visiting node.example.com
  • 999-apache, which will proxy any request not covered by another config to Apache.
/etc/nginx/sites-enabled/000-node
server {

        listen   80;
        server_name  node.example.com;

        access_log  /var/log/nginx/localhost.access.log;

        location / {
                root   /var/web/root;
                index  index.html index.htm;
        }
        error_page 404 /;
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
                deny  all;
        }
}
/etc/nginx/sites-enabled/999-apache
server {

        listen   80;
        server_name _;

        access_log  /var/web/logs/nginx/999-apache.access.log;
        error_log   /var/web/logs/nginx/999-apache.error.log;

        location / {
            proxy_pass         http://127.0.0.1:8080/;
            proxy_redirect     off;

            proxy_set_header   Host             $host;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_max_temp_file_size 0;

            client_max_body_size       10m;
            client_body_buffer_size    128k;

            proxy_connect_timeout      90;
            proxy_send_timeout         90;
            proxy_read_timeout         90;

            proxy_buffer_size          4k;
            proxy_buffers              4 32k;
            proxy_busy_buffers_size    64k;
            proxy_temp_file_write_size 64k;

        }
}
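
One thing to watch out for (my note, not part of the original setup): Apache will log every request as coming from 127.0.0.1. The libapache2-mod-rpaf package can restore the real client address from the X-Forwarded-For header nginx sets above; its config looks roughly like this:

```
# /etc/apache2/mods-available/rpaf.conf — assumes libapache2-mod-rpaf is installed
RPAFenable On
RPAFproxy_ips 127.0.0.1
RPAFheader X-Forwarded-For
```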

Hosting this blog (a static site powered by Octopress) is done with the following config:

/etc/nginx/sites-enabled/100-codr.in
server {

        listen   80;

        server_name  codr.in;

        access_log  /var/log/nginx/codrin.access.log;

        location / {
                root   /var/web/vhosts/codr.in;
                index  index.html;
        }

        error_page 404 /404.html;

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
                deny  all;
        }
}
server {
        listen 80;
        server_name www.codr.in;
        rewrite ^(.*) http://codr.in$1 permanent;
}
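
As an aside (a newer nginx idiom, not in the original config): the www redirect can also be written with return, which avoids the regex capture:

```
# Equivalent www -> bare-domain redirect using return
server {
        listen 80;
        server_name www.codr.in;
        return 301 http://codr.in$request_uri;
}
```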

The weird numbering sorts the config files so the correct ones get loaded first and the fallback gets loaded last.
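
The effect of those prefixes comes from the include glob being expanded in lexicographic order, which the zero-padded numbers exploit — easy to check with sort:

```shell
# Files are picked up in lexicographic order, hence the numeric prefixes
printf '%s\n' 999-apache 000-node 100-codr.in | sort
# → 000-node
#   100-codr.in
#   999-apache
```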

Moving on to apache!

DNS

For now, I’m a happy CloudFlare customer, though I only use their CDN service and DNS-management tool. To make switching hosts easy I maintain the following convention:

The main domain for my cluster, e.g. example.com, is an A record; for each node, node.example.com, I have another A record set to the matching IP. *.node is a CNAME pointing to the matching node.
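
In zone-file notation (the names and IPs here are purely illustrative), that convention looks like this:

```
; illustrative zone records matching the convention above
example.com.         A      203.0.113.10
node.example.com.    A      203.0.113.11
*.node.example.com.  CNAME  node.example.com.
```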

I’m exploring the possibilities of running my own DNS-servers in order to provide failover or perhaps using HAProxy though that would require a more expensive setup.

Problems that may occur

The annoying perl: warning: Setting locale failed.
Fix it using the Hard Way
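
In short (my own summary, not verbatim from the linked guide): generate the missing locale with sudo dpkg-reconfigure locales, or set a system-wide default in /etc/default/locale, for example:

```
# /etc/default/locale — a system-wide default silences the perl warning
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
```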

    [email protected]:~$ \curl -L https://get.rvm.io | sudo bash -s stable --rails --autolibs=enabled --ruby=1.9.3
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (60) SSL certificate problem: self signed certificate in certificate chain
    More details here: http://curl.haxx.se/docs/sslcerts.html

    curl performs SSL certificate verification by default, using a "bundle"
     of Certificate Authority (CA) public keys (CA certs). If the default
     bundle file isn't adequate, you can specify an alternate file
     using the --cacert option.
    If this HTTPS server uses a certificate signed by a CA represented in
     the bundle, the certificate verification probably failed due to a
     problem with the certificate (it might be expired, or the name might
     not match the domain name in the URL).
    If you'd like to turn off curl's verification of the certificate, use
     the -k (or --insecure) option.

Just as this rvm.io page describes, sudo apt-get install ca-certificates solves this problem and should also prevent future SSL-certificate problems.

    W: Failed to fetch http://ftp.us.debian.org/debian-backports/dists/squeeze-backports/main/binary-i386/Packages.gz  404  Not Found [IP: 128.30.2.36 80]

    E: Some index files failed to download, they have been ignored, or old ones used instead.

The instructions to simply add deb http://YOURMIRROR.debian.org/debian wheezy-backports main made me pick ftp.us.debian.org/debian-backports/ as a mirror, which doesn’t seem to be listed at http://backports.debian.org/Mirrors/.

If you get an error message similar to setfacl: ./apps: Operation not supported, you simply need to remount your / directory using: sudo mount -o remount,acl /.