Refined Server Setup
23 February 2011
In my previous post, I mentioned I was using the nginx user to run the application servers for the sake of simplicity. I also stated that this was not how I would set things up for a "production" application, and that I would outline that setup in another post. This is that post. I know it's three months later, but I wasn't sure anyone was all that interested. After talking with a few people (both in person and on the tubes), it seems there is some interest...
Note: I am planning to write a follow-up to this which outlines how to set up a "git-friendly" deployment similar to what @defunkt outlined on the Github blog a while back. If you are interested in this topic, drop me a message on the Twitters (@tomkersten) to let me know. I'm more motivated to write when I know there is interest. ;-)
- To learn about setting up nginx and Unicorn on a server in a reliable configuration. You can probably find Puppet modules and Chef recipes to set up something similar to what is outlined here in a matter of minutes...and it is quite possibly a better setup. If you are looking for a quick server setup where you don't have to do anything...this may not be for you.
- An improved server configuration for hosting one or more Rails applications with a base configuration similar to what is outlined in my previous post (nginx, unicorn, rvm, etc). When you are done with this, you will have a pretty clear idea of how to set up nginx and Unicorn to host multiple applications. You will have to set up log rotation, backups, et cetera on your own, but...there are lots of sources of information on how to do that.
- You either have a server set up similar to what is outlined in my previous post, or are capable of modifying the instructions below as necessary for your configuration. For the sake of this article, I will be working off of a brand new instance of the AMI I set up in the last article (ami-263eca4f), so you can follow along by firing one of them up if you like.
- You are deploying a Rails 3 application.
- You use the git version control system for your codebase.
I take no responsibility for any issues you run into with this. I promise to do my best to avoid creating a security hole you can drive a truck through. However, if there is a vulnerability in the setup I outline, it is your responsibility to find it and fix it yourself. If you lock yourself out of your server because you did something wrong with SSH...that's on you. Et cetera. If you do see a flaw/vulnerability, please let me know and I will update it so others do not fall prey to the issue.
For what it's worth, if you do end up having issues due to the instructions I have outlined, I will feel really bad. Seriously.
- Install a few packages
- Improve the default user directory configuration
- Set up deployment area
- Database setup
- Configure application servers and nginx to serve site(s)
Don't have multiple domain names to test your setup?
For the sake of this example, I will assume you are deploying more than one application to the same server and want to know how to configure things. So that you can follow along, you can set up some temporary hard-coded hostname entries in your (local) /etc/hosts file, such as the following:

```shell
# On your local machine...
$ sudo vi /etc/hosts

# Add the following, replacing XXX.XXX.XXX.XXX with the IP address
# of your EC2 instance (or whatever server you have set up):
XXX.XXX.XXX.XXX example1.com example2.com
```
Note: Obviously you can use your own registered name and DNS setup as well...this just temporarily skirts around that requirement.
Install a few packages
We didn't need these for the bare-bones rack app setup outlined in the last article...

```shell
$ sudo apt-get install libreadline-ruby1.8 libruby1.8 libopenssl-ruby \
    libxslt-dev libxml2-dev
```
Improve the default user directory configuration
I don't want to spoil the next step, but the goal here is to create a better "default" environment for any new users. I personally set this up with the basics of how I would set up my own shell so it has all of the conveniences I am used to (aliases, etc). To make this easier, you can put whatever files you would want in a new user's 'home' directory into /etc/skel.
For the purposes of this example, I'll set it up so users default to using the Z-Shell and use the Oh-My-Zsh project. If you are really desperate, you could check out vixploder, but that one's not maintained (or set up) that well. ;-)
```shell
$ cd /etc/skel
$ git clone https://github.com/robbyrussell/oh-my-zsh.git .oh-my-zsh
$ ln -s ./.oh-my-zsh/templates/zshrc.zsh-template ./.zshrc
```
Add the RVM initialization to the end of the default .zshrc:

```shell
# Using vi, or whatever editor you prefer...
[[ -s "/usr/local/lib/rvm" ]] && source "/usr/local/lib/rvm"
```
Note: I'll leave customizing this setup to be "ideal" as an exercise for the reader. There are so many variables that I will never be able to "nail" this part for you. I have customized the setup a bit so the prompt is what I want, etc. Feel free to do the same for yourself...
Add a default `.rvmrc` with the following contents (still in /etc/skel):

```shell
$ echo "rvm_project_rvmrc_default=1" > .rvmrc
```
Note: It may not be applicable in all situations, but something to consider is adding your SSH public key to a `.ssh/authorized_keys` file here; then you will always be able to log in to new application home directories without having to copy that file over. Be sure to chmod 600 it or SSH will swear at you.
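For illustration, seeding a skeleton directory with an `authorized_keys` file might look like the following sketch. Note the assumptions: `$SKEL` stands in for /etc/skel (which you would modify as root), and the key is a placeholder for your real public key.

```shell
# Illustration only: $SKEL stands in for /etc/skel (which requires root),
# and the key below is a placeholder for your actual public key.
SKEL=$(mktemp -d)

mkdir -p "$SKEL/.ssh"
chmod 700 "$SKEL/.ssh"

# In practice: cat ~/.ssh/id_rsa.pub > "$SKEL/.ssh/authorized_keys"
echo "ssh-rsa AAAAB3...placeholder you@workstation" > "$SKEL/.ssh/authorized_keys"
chmod 600 "$SKEL/.ssh/authorized_keys"
```

Any user created after this will start out with that key in place, since adduser copies /etc/skel into the new home directory.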
Set up deployment area
We all know running our app servers as a privileged user (like...root) fell just below #7 on the list of deadly sins...so we obviously don't want to set things up in that manner. Until recently, I always created a separate user (or used an existing "system" user) to run services, but stored all the apps in the same directory and generally ran all application servers with the same account (think "www-data", "mongrel", or "unicorn"). However, I was reading through the Mongrel2 documentation, specifically the deployment tips section, and ran across the idea of always creating a separate user for each site. I couldn't believe I had never thought of doing this before. Changing to this setup gives you some nice benefits for free, due to the way users work on a Linux/Unix-based OS. One example: you never have to worry about read/write privileges when running multiple applications on the same server, because all application servers run as separate users. This essentially "contains" any damage that can be done to the user's (application's) home directory.
More specifically, setting it up in this manner would prevent a user from being able to accidentally restart all application servers on your system by running the wrong kill or restart command...because they don't have permission to kill processes owned by other users ("web application" users, in this context).
Anyway, I liked the idea, so that's how I do it now...
- Create a new user (and group) on the server
This will add them to the rvm group and set it as their default group...but also creates a user-specific group so they can get all secretive on other app-users' asses.

```shell
# I have been using the domain name as the user's login, which
# requires the `--force-badname` flag...
$ sudo adduser --force-badname --ingroup rvm --shell /usr/bin/zsh example1.com

# Add them to their own group as well:
$ sudo groupadd example1.com
$ sudo usermod -G rvm,example1.com example1.com

# Do the same for your second domain...
$ sudo adduser --force-badname --ingroup rvm --shell /usr/bin/zsh example2.com
$ sudo groupadd example2.com
$ sudo usermod -G rvm,example2.com example2.com
```
- Set up SSH keys (for both users)
Switch to the newly created user(s):

```shell
$ sudo su -
$ su - example1.com
```
Copy your local SSH keys up to the server users' `~/.ssh` directory, so you can clone the same repositories from Github, or wherever.
Tangent: Obviously you can just create a new public/private keypair for this server and add the public key to your Github profile as well...but...that can get unwieldy if you have a lot of servers to deal with. I personally have started to generate a key for each client and manage it via my `~/.ssh/config`. Another option would be to set a password on your key and just use the same one everywhere...but I digress. This topic alone is likely worthy of a blog post.
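As a sketch of the per-client-key approach, an `~/.ssh/config` entry might look something like this (the hostname and key filename here are hypothetical):

```
# ~/.ssh/config (on your local machine)
Host example1.com
  HostName example1.com
  User example1.com
  IdentityFile ~/.ssh/client-acme_rsa   # hypothetical per-client key
```

With an entry like that, `ssh example1.com` picks the right key and login automatically.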
Now you should be able to both SSH into the server as 'example1.com' without being prompted for a password AND clone the repository you want to deploy there. When you log in, you should verify that your shell environment behaves as you would expect and that you are properly initializing RVM. If not, revisit the steps outlined above until both are true.
Repeat for the `example2.com` user.
When that's all done and you are able to SSH in without passwords, you might as well disable password-based logins, if you haven't already. Just a thought...
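If you do decide to disable password logins, the relevant lines in /etc/ssh/sshd_config look something like the following. (This is a sketch of the standard settings, not something covered in the original post; verify key-based login works in a second session before restarting sshd, or you can lock yourself out.)

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Then restart the SSH daemon (e.g. `sudo /etc/init.d/ssh restart` on Ubuntu of this era) for the change to take effect.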
Set up database users
```shell
$ su - postgres
$ createuser -D -A -P example1  # Enter info...
$ createuser -D -A -P example2  # Enter info...

$ createdb -O example1 example1_db
$ createdb -O example2 example2_db
```
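To sanity-check the new credentials before wiring up the apps, you can try connecting as each user (you will be prompted for the passwords you just set):

```shell
$ psql -U example1 -h localhost example1_db
$ psql -U example2 -h localhost example2_db
```

If either connection fails, fix it now; it is much easier to debug here than through a Rails stack trace.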
Configure application servers and nginx to serve site(s)
For now, we are just going to set up the most basic app possible, without a deployment configuration or anything. As I mentioned earlier, I plan to outline a deployment setup which steps away from the typical setup you see with Rails apps (a "releases" directory with symlink to "current", etc). This will work into that setup...but we won't get there in this post...
Switch to "example1.com" user
```shell
$ su - example1.com
$ git clone git://github.com/tomkersten/basic_rails_app.git website
```
Confirm RVM is working correctly
Assuming your site has a `.rvmrc` file in it, when you `cd` into the `website` directory, you should be prompted to confirm that it is a trusted `.rvmrc`. After agreeing to it, when you type `rvm info`, you should see that you are using a separate gemset.
If any of this is not happening for you, stop here and figure out why.
Install the application's gems
```shell
$ bundle install
```
Set up a config/database.yml file
Add the following to your application's config/database.yml:

```yaml
production:
  adapter: postgresql
  database: example1_db
  username: example1
  password: example1
  pool: 5
  host: localhost
```
- Set up your application's Unicorn configuration (config/unicorn.rb)
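The original Unicorn configuration isn't reproduced in this post, but based on the socket path and worker processes visible in the verification output further down, a minimal config/unicorn.rb might look like this sketch (worker count, paths, and timeout are assumptions):

```ruby
# config/unicorn.rb -- a minimal sketch, not the author's original config.
# Worker count, paths, and timeout are assumptions based on the
# ps/netstat output shown later in this post.
worker_processes 2

# Run from the application directory
working_directory "/home/example1.com/website"

# Listen on the Unix socket that nginx will proxy to
listen "/tmp/example1.com.socket", :backlog => 64

# PID file and logs, kept inside the app's own home directory
pid "/home/example1.com/website/tmp/pids/unicorn.pid"
stderr_path "/home/example1.com/website/log/unicorn.stderr.log"
stdout_path "/home/example1.com/website/log/unicorn.stdout.log"

timeout 30
```

Keeping the socket and PID file in per-application locations is what lets each site's user manage its own servers without stepping on the others.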
- Set up an /etc/init.d/ script
(As root) Add the following to /etc/init.d/unicorn_example1.com
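The script itself isn't reproduced in this post; a minimal sketch of what such an init script might look like follows. The app directory, the use of `bundle exec`, and the signal handling are assumptions, so adapt it to your setup:

```shell
#!/bin/sh
# /etc/init.d/unicorn_example1.com -- a sketch, not the author's original.
# Assumes the app lives in /home/example1.com/website and that `bundle`
# is on that user's PATH via RVM.

APP_USER=example1.com
APP_ROOT=/home/$APP_USER/website
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT && bundle exec unicorn -c config/unicorn.rb -E production -D"

case "$1" in
  start)
    # When run as root this drops to the app user; the app user can
    # run the command directly without the `su` wrapper.
    su - $APP_USER -c "$CMD"
    ;;
  stop)
    # QUIT asks the Unicorn master for a graceful shutdown
    [ -f "$PID" ] && kill -QUIT "$(cat "$PID")"
    ;;
  restart)
    $0 stop
    sleep 2
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```

Remember to `chmod +x` it before the update-rc.d step below.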
...and set it up to start on reboot:
```shell
$ update-rc.d unicorn_example1.com defaults
 Adding system startup for /etc/init.d/unicorn_example1.com ...
   /etc/rc0.d/K20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc1.d/K20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc6.d/K20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc2.d/S20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc3.d/S20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc4.d/S20unicorn_example1.com -> ../init.d/unicorn_example1.com
   /etc/rc5.d/S20unicorn_example1.com -> ../init.d/unicorn_example1.com
```
- Make sure it works
Note: Depending on your application's directory structure, you may need to add a
Now, you should be able to `su - example1.com` and do the following (some columns trimmed):

```shell
$ /etc/init.d/unicorn_example1.com start
$ netstat -l
unix  2  STREAM  LISTENING  23079  /tmp/example1.com.socket
                                   ^^^^^^^^^^^^^^^^^^^^^^^^
$ ps aux | grep example1
1001  4845  unicorn master -c /home/example1.com/website/config/unicorn.rb
1001  4848  unicorn worker -c /home/example1.com/website/config/unicorn.rb
1001  4849  unicorn worker -c /home/example1.com/website/config/unicorn.rb
^^^^
example1.com user's UID...so it's running as the correct user
```
Now after you run:
```shell
$ /etc/init.d/unicorn_example1.com stop
```
...you should see that all the application servers (and the socket) are closed down, meaning your
example1.com user can control their own application servers. Perfect.
Now, you should also be able to do the same thing as
root and get the same output...with the unicorn processes still running as the
example1.com user, etc.
Something to consider doing: I've started editing the web application user's `~/.zshrc` to add `~/bin` to their `$PATH`, and then symlinking the unicorn control script there as `unicornctl`, like so:

```shell
$ ln -s /etc/init.d/unicorn_example1.com ~/bin/unicornctl
```
Doing this gives you a uniform "API" across all applications, so controlling your application servers is always just `unicornctl (start|stop|etc)`. Additionally, it's similar to what many people are used to with `apache2ctl`...which is a small perk.
- Update nginx to support the new domain
As root, you can add a new "virtual host" (in Apache-speak) to nginx for the example1.com domain by adding a new server block to your nginx configuration.
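The original server block isn't reproduced in this post, but a sketch of an nginx virtual host along these lines might look like the following. The file location and the proxy.include path are assumptions; the socket path matches the Unicorn setup earlier in the post:

```nginx
# e.g. /etc/nginx/sites-available/example1.com -- a sketch, not the original
upstream example1_app {
  # Matches the Unix socket the Unicorn workers listen on
  server unix:/tmp/example1.com.socket fail_timeout=0;
}

server {
  listen 80;
  server_name example1.com;

  root /home/example1.com/website/public;

  location / {
    # Serve static files directly; hand everything else to Unicorn
    try_files $uri @app;
  }

  location @app {
    include /etc/nginx/proxy.include;
    proxy_pass http://example1_app;
  }
}
```

Enable it however your nginx layout expects (e.g. a symlink into sites-enabled) before the configtest step below.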
Note that the server block includes a file called proxy.include, which holds the common settings you would want for all of your proxy servers. You can add the following to it:
```nginx
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

client_max_body_size 10m;
client_body_buffer_size 128k;

proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
```
Note: I saw the proxy.include idea somewhere, but can't seem to find the link...this is completely ripped off from that setup because it seemed like a nice one.
Also note: This is not doing any gzipping on assets, etc...which it should. I want to dig into that a bit more before I cargo cult the config for it. You should consider investigating how to set that up on your own, as it is something you should be doing.
- Test it out
Assuming you can run:
```shell
$ /etc/init.d/nginx configtest
```
...without getting any warnings/errors, you should be able do the following:
```shell
$ /etc/init.d/unicorn_example1.com start
$ /etc/init.d/nginx restart
```
...and visit http://example1.com and see a "books" scaffold app.
- Repeat for example2.com
You should be able to host both applications by essentially replacing "example1" with "example2" in the nginx config files and the commands listed above, starting up your 'example2.com' application servers, and restarting nginx.
Things we didn't do...which you should...
Set up gzipping of assets: There are quite a few examples of how to do this...and I could have copied & pasted it in the nginx config above and it probably would have worked. However, I would like to figure out how it works a bit more in detail and test it for myself before I tell you how to do it. I'm still pretty new to nginx, so I am taking it a bit slower. If you are itching for how to do it, a couple examples you can start with are here and here. Both are relatively old, but Ezra's brilliant and the other one seems consistent with many others I have glanced over.
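If you want a starting point to experiment with yourself, a commonly seen nginx baseline looks something like this. Per the caveat above, treat it as an untested sketch to verify on your own, not a vetted config:

```nginx
# In the http block -- a common baseline, test before trusting it
gzip on;
gzip_min_length 1024;
gzip_comp_level 5;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript
           text/xml application/xml application/xml+rss text/javascript;
```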
Setting up respawning of Monit: If you followed my previous post on the general Monit setup, you are sitting pretty. However, if Monit dies, or is killed, it won't restart automatically with that configuration. Assuming you are using an Ubuntu release of 9.04 or newer, you can use the Upstart daemon to watch it. The easiest way to set this up is to follow the instructions at the top of the monit.upstart file in the Monit codebase. If you are running a distribution that does not use Upstart (event.d), you can use the Googles and grab a tutorial on setting Monit up with inittab...it's pretty straightforward.
Set up log rotation: There are tons of examples on how to set this up as well, so I won't delve into it here.
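As one hedged example, a basic logrotate entry for an application's logs might look like this (the path and schedule are assumptions to adapt):

```
# /etc/logrotate.d/example1.com -- sketch only
/home/example1.com/website/log/*.log {
  weekly
  rotate 8
  compress
  delaycompress
  missingok
  notifempty
  copytruncate
}
```

copytruncate avoids having to signal the Unicorn workers to reopen their log files, at the cost of possibly losing a few lines during rotation.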
Set up database backups: There are tons of examples on how to set this up as well, so I won't delve into it here.
Set up tempfile cleanup: There are tons of examples on how to set this up as well, so I won't delve into it here.
Other notable information
- If you are using the sample Rails application I have been, both applications will be using the same gemset. This is not really awesome, but it's also not a common situation you should have to deal with.
- Oh...and be sure to remove those DNS entries from your local /etc/hosts file when you are done.
Drop me a message at @tomkersten if you see any issues with what I've outlined. I'll be sure to incorporate improvements to the article.