How to Deploy Cloud Foundry v2 to AWS by Using Vagrant

by Gastón Ramos, August 2, 2013
Learn an easy and fast way to spin up a single instance of Cloud Foundry v2 on Amazon EC2, with suggestions on automating some installation tasks.

Recently, we published an article on the Cloud Foundry blog explaining how to install Cloud Foundry with Vagrant. Although BOSH is the officially suggested way of setting up a system, the method described in the article is easier and faster. This post, originally published on the ActiveState blog, adds more detail on the subject. Don’t skip the comments from our Argentinian team, in which we suggest ways of automating some installation tasks.

Read the full article “How to Deploy Cloud Foundry v2 to AWS via Vagrant” to learn the details.

In this post, I’m going to quickly run through how I got up and running with Cloud Foundry v2. These notes are based on instructions from a colleague who is in the process of giving Cloud Foundry v2’s tires a good kicking.

The easiest way to deploy Cloud Foundry version 2 (a.k.a. “ng” or “next generation”) seems to be via Vagrant. The official way is via BOSH, but we have created a method that makes it much easier to spin up a single instance of Cloud Foundry v2 on Amazon EC2. With BOSH, we found we needed 14 instances to get up and running, and it took much longer.

 

Install the installer

You start by git-cloning the cf-vagrant-installer repository from GitHub.
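Assuming the repository still lives under the Altoros account on GitHub (the URL is worth double-checking), the clone looks like this:

```shell
# clone the installer and move into its directory
git clone https://github.com/Altoros/cf-vagrant-installer.git
cd cf-vagrant-installer
```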

As you will see in the README.md, there are a few dependencies, the first of which is Vagrant itself.

 

Install Vagrant

If you do not have Vagrant installed, you can install it from http://downloads.vagrantup.com/. I installed the .dmg for my Mac, which was pretty straightforward.

 

Install Vagrant plug-ins

The Vagrant plug-ins required (if they have not changed) were:

  • vagrant-berkshelf, which adds Berkshelf integration to the Chef provisioners
  • vagrant-omnibus, which ensures the desired version of Chef is installed via the platform-specific Omnibus packages
  • vagrant-aws, which adds an AWS provider to Vagrant, allowing Vagrant to control and provision machines in EC2

Installation of these plug-ins could not be simpler:
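Using the plug-in names listed above, the installation amounts to:

```shell
# install the three required Vagrant plug-ins
vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin install vagrant-aws
```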

 

Run the bootstrap

Next, make sure you are in the cf-vagrant-installer directory (cloned above) and run the rake command to download all the Cloud Foundry components.
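The exact task name is not preserved in this excerpt; a sketch, with the task name assumed (run `rake -T` inside the repository to list what is actually available):

```shell
cd cf-vagrant-installer
rake -T            # list the available tasks
rake bootstrap     # task name assumed; downloads the Cloud Foundry components
```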

The output of this rake command will look something like this:

 

Set up AWS credentials

Next, you will need to edit the Vagrantfile:

Add the following section directly above the config.vm.provider :vmware_fusion line:

config.vm.provider :aws do |aws, override|
  override.vm.box_url = "http://files.vagrantup.com/precise64.box"

  aws.access_key_id = "YOUR AWS ACCESS KEY"
  aws.secret_access_key = "YOUR AWS SECRET KEY"
  aws.keypair_name = "YOUR AWS KEYPAIR NAME"
  aws.ami = "ami-23d9a94a"
  aws.instance_type = "m1.large"
  aws.region = "us-east-1"
  aws.security_groups = ["open"]

  aws.user_data = File.read('ec2-setup.sh')

  override.ssh.username = "vagrant"
  override.ssh.private_key_path = "THE LOCAL PATH TO YOUR AWS PRIVATE KEY"
end

Then replace "YOUR AWS ACCESS KEY", "YOUR AWS SECRET KEY", and "YOUR AWS KEYPAIR NAME" with your own AWS credentials.

 

An open security group

The AWS security group used in the example above is one called “open,” which is simply a group with all ports open. You will need to create it through the AWS console if you do not have one already.

 

Create an EC2 set-up script

Next, you’ll need to create an ec2-setup.sh file directly in the cf-vagrant-installer directory. It should look exactly like the following:
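The file’s exact contents are not reproduced in this excerpt. As an illustration only: an EC2 user-data script for this kind of setup typically prepares the stock AMI so that Vagrant can SSH in as the `vagrant` user. A hypothetical sketch:

```shell
#!/bin/bash
# HYPOTHETICAL sketch only; not the actual ec2-setup.sh from the article.
# Prepares the instance so Vagrant can SSH in as the "vagrant" user.
useradd -m -s /bin/bash vagrant
echo "vagrant ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
```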

 

Build the EC2 instance running CFv2

Finally, run "vagrant up --provider=aws" and your instance will be built:

My (truncated) output looked something like this:

We can now log into our new EC2 instance, which is running Cloud Foundry v2:

Note: all commands that follow are intended to be run on the EC2 instance.
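Putting the build and login steps together:

```shell
vagrant up --provider=aws   # build and provision the EC2 instance
vagrant ssh                 # log into it once provisioning finishes
```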

 

Push an app

First, we must initialize the Cloud Foundry v2 command-line interface with the following command:
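The command itself is not preserved in this excerpt. With the cf CLI of that era, targeting the local API looked roughly like the following; the vcap.me hostname (which resolves to 127.0.0.1) is an assumption here, so check the installer’s README for the actual endpoint and credentials:

```shell
cf target http://api.vcap.me   # point the CLI at the local API endpoint (assumed)
cf login                       # authenticate with the installer's default credentials
```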

Here is the output of that command:

Now you can deploy one of the test apps. We will use a Node.js “Hello World” app:
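A sketch of the push; the app directory name below is hypothetical, so use whichever test app actually ships with the installer:

```shell
cd ~/test-apps/hello-node   # hypothetical path to the sample Node.js app
cf push                     # answer the interactive prompts to deploy
```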

We see the output:

Cloud Foundry v2 is running on localhost on our EC2 instance, so our app is not accessible from our web browser, but we can check that the app exists using curl from the EC2 instance:
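For example (the app’s hostname is an assumption, matching whatever host and domain the push assigned):

```shell
# from the EC2 instance itself, since the router only listens locally
curl http://hello-node.vcap.me
```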

Here is what is output by curl:

 

Delete the app

To delete the app, you can use:
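For example (the app name is an assumption):

```shell
cf delete hello-node   # app name is an assumption; use the name you pushed
```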

The following output is seen:

 

Inside out

xip.io

From the notes I was given:

Now, to expose apps externally, it gets trickier. First, you’ll need to provision an elastic IP in the AWS console and attach it to the EC2 instance that’s running the cf v2 install. Then, you’ll need to set up a wildcard DNS record to point to that IP (*.domain and domain should point to that IP). xip.io might work here, but I’m not familiar enough with it to know for sure.

xip.io is actually perfect for this. All I need is my external IP, which was 50.19.50.63, and I append ".xip.io", which gives me "50.19.50.63.xip.io" as well as wildcard "*.50.19.50.63.xip.io" for the Cloud Foundry API and any apps I deploy. This is a zero-configuration service. The IP that you want to resolve to is included in the hostname you create and the DNS service simply returns you the IP. This means you can have a valid globally resolvable DNS hostname instantly.

I can also get a simpler hostname by checking the DNS record of this hostname, which is actually just a CNAME.
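For example, with dig (note that the xip.io service may no longer behave as it did in 2013):

```shell
dig +short 50.19.50.63.xip.io   # shows the CNAME chain and the final A record
```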

Which outputs:

So, I can use hj8raq.xip.io instead, since it is shorter and I just want to use it temporarily.

 

Updating more config

Since we now have an external domain name, not just localhost, we need to update some configuration files within the custom_config_files directory.

Assuming you are running under the domain "yourdomain" (or "hj8raq.xip.io" in my case), you should edit the cloud_controller.yml as follows:

  • change external_domain to api.yourdomain
  • change system_domain to yourdomain
  • change app_domains to yourdomain
  • change uaa:url to http://yourdomain:8080/uaa

Next, edit the DEA configuration.

  • change domain to yourdomain

And, finally, the configuration of the Health Manager:

  • change bulk_api:host to http://api.yourdomain:8181

 

Router-registry bug

There was a small bug in my AWS deployment that may have since been fixed. It was related to a JSON incompatibility between the Cloud Controller and the Router when registering the API endpoint with the router. Here’s the fix:

Then, change the line:

:uris => config[:external_domain],

To this:

:uris => [config[:external_domain]],

This makes :uris an array instead of a string. It would probably be better to fix this in the gorouter, but this is quicker for now.

 

Reset the CC DB

Now we need to reset the Cloud Controller database.

Finally, reboot the machine.

When the machine comes back up, we can ssh back into it:

And run the ./start.sh command to start Cloud Foundry components.
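That is:

```shell
vagrant ssh   # back onto the rebooted EC2 instance
./start.sh    # start the Cloud Foundry components
```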

Now, Cloud Foundry v2 should be running with your externally accessible endpoint.

 

Related video

In this meetup session, Gastón Ramos and Alan Morán of Altoros Argentina deliver an overview of Cloud Foundry and present CF Vagrant Installer to the audience.

 

Further reading