Heartbleed vulnerability and Stackato


Is Stackato vulnerable to the Heartbleed bug? How can I patch Stackato?


Stackato is vulnerable to the Heartbleed bug. You can patch your system by running 'kato patch install heartbleed-fix'. This patch installs updated OpenSSL libraries on both the host VM and inside the container templates. Most apps will not need to be redeployed, but some may, depending on the app and how it uses SSL. We advise testing your apps (see tools below) after applying the patch to determine whether redeploying is necessary.
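If you want a quick way to triage nodes before or after patching, the sketch below (our suggestion, not part of the official patch) flags upstream OpenSSL versions in the vulnerable 1.0.1 through 1.0.1f range:

```shell
# Hedged sketch: flag OpenSSL versions in the Heartbleed-vulnerable range.
# Upstream 1.0.1 through 1.0.1f are vulnerable; 1.0.1g and later are fixed.
heartbleed_suspect() {
    ver="$1"    # e.g. "1.0.1f"
    case "$ver" in
        1.0.1|1.0.1[a-f]) echo suspect ;;
        *) echo ok ;;
    esac
}

# check the locally installed openssl, if present
if command -v openssl >/dev/null 2>&1; then
    heartbleed_suspect "$(openssl version | awk '{print $2}')"
fi
```

Note that distro builds are often fixed by a backport without a version bump, so treat 'suspect' as a prompt to check the package changelog or build date, not a definitive verdict.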

A patch is also available for Stackato 2.10.4 here:

If you have any questions please contact ActiveState support.

More info:


Stackato 2.10.6 Ruby Security Patch


I noticed that Ruby released a critical security update recently. Is Stackato affected by it?


Stackato is indeed affected. A patch has been generated and is available via 'kato patch'. As always, it can be installed by running 'kato patch status' followed by 'kato patch install'.

Note: This patch downloads a 50 MB tar file from our public download site, and will do so on every node in your cluster. The file is removed once the patch is installed.

Additionally, this patch will restart EVERY role on EVERY node in your cluster, as Stackato makes significant use of Ruby in a number of places. The outage should not last more than a minute per node, though this may vary slightly depending on your IaaS solution.
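Since the patch pulls a ~50 MB tar file onto each node before installing, it can be worth confirming there is headroom first. The sketch below assumes the download lands on the filesystem backing /tmp, which may not match your setup:

```shell
# Hedged sketch: check that the assumed download location has room for the
# ~50 MB tar file the patch fetches on each node.
need_kb=$((50 * 1024))
avail_kb=$(df -kP /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -gt "$need_kb" ]; then
    echo "ok: ${avail_kb} KB free"
else
    echo "warning: only ${avail_kb} KB free"
fi
```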

Stackato 2.10.6 - Patching Linux on a Stackato VM


How do I ensure that my Stackato nodes are running the latest patches released for Ubuntu?


Stackato 2.10.6 supports the use of the standard Ubuntu "apt-get" patching tools. Critical Stackato components are protected by pinning the affected packages, so users are able to run
'sudo apt-get update && sudo apt-get dist-upgrade'
from a terminal window (ssh) to patch the base VM. Each VM will need to be patched separately.

Because packages are pinned, apt-get is unlikely to result in a reboot being needed; however, it is still good practice to check the content of the recommended patches. Planning for the use of maintenance mode, with a contingency of a controlled reboot if appropriate, is strongly recommended.
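One way to decide whether that reboot contingency is actually needed is to check Ubuntu's reboot flag after upgrading. This is a generic Ubuntu mechanism, not anything Stackato-specific:

```shell
# Hedged sketch: Ubuntu drops a flag file when an installed update wants a
# reboot; check it after dist-upgrade completes.
reboot_needed() {
    if [ -f /var/run/reboot-required ]; then
        echo yes
        # list the packages that requested the reboot, when recorded
        cat /var/run/reboot-required.pkgs 2>/dev/null
    else
        echo no
    fi
}

reboot_needed
```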

Pinned packages can only be updated by upgrading to a newer version of Stackato. To see a list of pinned packages, execute 'sudo apt-mark showhold'.

After patching the base VM, the template for generating new LXC containers should also be patched. See the FAQ entry "Stackato 2.10.6 - Patching the LXC container template" below.

While it is possible to 'stackato ssh' into a running container and patch it (if you have sudo enabled for that container), we do not recommend this. Patching the template, spinning up new containers, and dropping the old ones will give more consistent results over the long term.

Stackato 2.10.X Security Fix - container sudo fix


Any recent security patches for Stackato?


We've generated a second patch that needs to be applied on top of the initial apt-get-wrapper patch, to fix an issue the original patch was causing with unprivileged users in containers.


The first step is to install everything at http://community.activestate.com/node/10157. This includes http://get.stackato.com/patch/2.10/stackato-2.10.4-apt-get-wrapper.sh, which has the issue described above. It can be corrected with the patch available at http://get.stackato.com/patch/2.10/stackato-2.10.4-apt-get-wrapper-fix.sh. To apply it, upload the patch to all nodes in your cluster running the 'DEA' or 'Stager' roles and execute it via 'sh stackato-2.10.4-apt-get-wrapper-fix.sh'. After doing so, restart your stager and/or dea roles. Any applications deployed afterwards will have the fix enabled.
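As a rough illustration, the upload-and-run steps can be sketched as a loop. The node names and the ssh user here are assumptions, not values from the patch instructions, and the commands are printed rather than executed so the plan can be reviewed first:

```shell
# Hedged sketch: print the upload-and-run commands for each DEA/Stager
# node. Substitute your own node names; pipe the output to sh to run it.
NODES="dea1.example.com stager1.example.com"
PATCH=stackato-2.10.4-apt-get-wrapper-fix.sh
for node in $NODES; do
    echo "scp $PATCH stackato@$node:"
    echo "ssh stackato@$node 'sh $PATCH'"
done
```

After the script has run on each node, restart the dea and/or stager roles as described above.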


This patch is available via the kato patch command and can be installed by executing 'kato patch update' followed by 'kato patch install'.

Stackato 2.10.6 - Patching the LXC container template


When Ubuntu patches are issued, how do I apply those updates to the template used to create new LXC containers?


We have a script to assist with this.


The tarfile contains the script and installation instructions. It will need to be run on each node running a DEA or a Stager, whenever you would like to apply updates.* This should be done interactively from the command line, so that you can monitor the progress.

New containers launched after our script is run will have the updates applied. Containers started before the updates were applied will not be altered, but you can migrate running droplets to updated containers on other DEAs. See this FAQ for a more detailed description of moving apps.

*Note that the actual updates that will be installed are provided by the Ubuntu community, and what you install when you run our script will depend on what is available, and what you have already installed.

Stackato 2.10.X MongoDB Client version in containers is outdated


I've noticed that the version of MongoDB client that gets deployed to Stackato containers used by 'stackato dbshell' is 2.0.4. Is there any way to update this?


We've created a patch for 2.10.4 and 2.10.6 to update the MongoDB client in the container to 2.4.1. Instructions follow below.

You can download the patch from http://get.stackato.com/patch/2.10/stackato-2.10.4-mongodb-client-versio.... You'll need to copy it to the node running the MongoDB role. Once you've uploaded the patch, you can install it via 'sh stackato-2.10.4-mongodb-client-version.sh'. This patch does not require a restart of any roles. Note that it will not update the client version in any already-running MongoDB services; these will need to be redeployed.
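To decide whether a given container still needs the patch, you can compare its reported client version against 2.4.1. The helper below is a sketch that assumes the usual 'MongoDB shell version: X.Y.Z' output of 'mongo --version':

```shell
# Hedged sketch: true when the reported MongoDB shell version sorts before
# the 2.4.1 client this patch installs.
needs_client_patch() {
    v=${1##* }                          # take the last word: the version
    oldest=$(printf '%s\n%s\n' "$v" 2.4.1 | sort -V | head -n 1)
    [ "$oldest" != 2.4.1 ]              # true when v sorts before 2.4.1
}

if needs_client_patch "MongoDB shell version: 2.0.4"; then
    echo "client is outdated"
fi
```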

This patch is available via kato patch. You will need to update your patch manifest via 'kato patch status'. Once you've updated the manifest you can install the patch to all of your nodes via 'kato patch install'. You will be prompted for sudo passwords for your nodes.

Stackato 2.10.x Router does not reconnect to doozer when it loses connection


I've noticed that when my routers can't connect to the primary/doozer node for more than a few seconds, they give up and enter a 'starting' state. Is there anything that can be done about this?


We've generated a patch that adds some reconnection logic to the router role. The patch is available for both 2.10.4 and 2.10.6, with instructions to follow.

You can download this patch from http://get.stackato.com/patch/2.10/stackato-2.10.4-router-reconnect.sh. You'll want to upload it to every node in your cluster running the router role (including the router on your primary node, for consistency). After you've uploaded the patch, open a terminal session to each of your nodes and execute 'sh stackato-2.10.4-router-reconnect.sh'. Once the patch has been applied you'll need to restart your router role via 'kato restart router'. If you have only one router in your cluster this will interrupt access to/within your cluster for a few moments; with redundant routers this should not be a problem. NOTE: This patch touches the same file as http://community.activestate.com/node/9948, which must be installed first.
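With redundant routers, restarting them one at a time keeps at least one serving throughout. The sketch below (node names are assumptions) simply prints such a serial restart plan for review:

```shell
# Hedged sketch: emit a one-node-at-a-time router restart plan, so
# redundant routers keep serving while each one restarts in turn.
restart_plan() {
    for node in "$@"; do
        echo "ssh stackato@$node 'kato restart router'"
    done
}

restart_plan router1.internal router2.internal
```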

This patch is available via kato patch. You will need to open a terminal session to a node in your cluster and execute 'kato patch update' to download the latest manifest. Once you've done this you can install the patch via 'kato patch install'. This will deploy and install the patch to every node in your cluster, and restart the 'router' role where appropriate. Note that if you have one router in your cluster this will interrupt access to/within your cluster for a few moments while the role restarts.

Stackato 2.10.X Logyard not destroying all tcp connections when it is closed


I've noticed that when I close a drain, logyard does not tear down all of its connections and leaves one running. I'm worried about hitting my system ulimit; is there anything that can be done about this?


This is a bug in logyard. We've created a new binary that corrects the issue and tears down all of your TCP connections properly. It can be patched into your Stackato system via the following instructions.

You can download your patch from http://get.stackato.com/patch/2.10/stackato-2.10.4-logyard-connection.sh. You should upload this patch to all nodes. The patch can be installed via 'sh stackato-2.10.4-logyard-connection.sh'. Note that this patch will restart several logyard related processes, so you should expect to see some traffic in your cloud events in the admin console. After you run this patch you should not need to restart anything else.

This patch can be installed via the kato patch command from a terminal session. After accessing one of your nodes in a terminal session, execute 'kato patch status' to update your machine with the most recent patch manifest. After you've done this, execute 'kato patch install'. This will kick off the patch install process on all of your nodes (you'll be prompted for sudo passwords, and also ssh passwords if you haven't set up passwordless authentication between the node you're running the command on and the rest of your cluster). Note that this patch will restart several logyard related processes, so you should expect to see some traffic in your cloud events in the admin console.
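If you want to gauge how close logyard is to the ulimit in the meantime, the sketch below counts open file descriptors per logyard process via Linux /proc (an assumption about your host OS):

```shell
# Hedged sketch: report file-descriptor usage against the per-process
# "Max open files" limit for each logyard process found.
fd_usage() {
    p=$1
    n=$(ls "/proc/$p/fd" 2>/dev/null | wc -l)
    lim=$(awk '/Max open files/ {print $4}' "/proc/$p/limits" 2>/dev/null)
    echo "pid=$p fds=$n limit=$lim"
}

for p in $(pgrep -f logyard || true); do
    fd_usage "$p"
done
```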

Issue with changing token expiration in web console


I'm getting errors from my cloud controller after I modify my token expiration time in the web console. /s/logs/cloud_controller.log has messages like "cloud_controller ERROR -- Exception Caught (TypeError): can't convert String into an exact number". How can I fix this?


We've identified an issue with the web console setting the token_expiration value as a string rather than a number, which leads to this problem. An easy work-around, as well as a remedy, is to set token_expiration from the command line on one of the nodes in the system. Open a terminal session to one of your Stackato nodes and enter the following:

$ kato config set --force cloud_controller keys/token_expiration <seconds>

where <seconds> is the number of seconds you'd like to use for the token expiration value. The --force flag will force kato to set the value as numeric in case it has been overridden as a string.
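For example, to set a 7-day expiry you could compute the value and review the command first. The sketch prints the command rather than running it, since kato must be run on a Stackato node:

```shell
# Hedged sketch: compute a 7-day token expiry in seconds and print the
# kato command for review.
seconds_per_day=86400
token_expiration=$((7 * seconds_per_day))
echo "kato config set --force cloud_controller keys/token_expiration $token_expiration"
```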

A patch for this issue will be forthcoming.

Stackato 2.10.X stackato-ssh security fix


Are there any security patches for stackato-ssh?


We've identified a security vulnerability related to stackato-ssh that will require a patch to any nodes configured as either 'primary' or 'load balancer' for the cluster.

You can download the patch at http://get.stackato.com/patch/2.10/stackato-2.10.4-stackato-ssh-validati.... The patch will not interrupt any service and requires no restarts.

This patch is also available for 2.10.6 users via kato patch, and can be installed with 'kato patch install'.