Release Notes: IBM Aspera Transfer Cluster Manager 1.2.0

Product Release: September 30, 2016
Release Notes Updated: October 27, 2016

This release of IBM Aspera Transfer Cluster Manager 1.2.0 for Amazon Web Services (AWS) provides the new features, fixes, and other changes listed below. In particular, the Breaking Changes section provides important information about modifications to the product that may require you to adjust your workflow, configuration, or usage. These release notes also list system requirements, including supported platforms, and known problems.


What's New

Note: IBM Aspera Transfer Cluster Manager versioning is now separate from IBM Aspera Enterprise Server versioning. Although the previous Cluster Manager release was numbered 3.6.0 to match the corresponding release of Enterprise Server, this release is version 1.2.0.
  • Cluster Transfer Performance Enhancements
  • Improved Support for Running Clusters in a Private VPC (AWS)
    • Admins can configure a specific private IP address to be used as the state-store host and for node configuration downloads when launching clusters.
    • Admins can use the Cluster Manager's private IP address for cluster nodes to download the node configuration and connect to the state store.
    • Admins can now specify private DNS names when launching a cluster.
  • Cluster Provisioning Enhancements
    • Admins can now run a custom first-boot script, specified through user data, on the Cluster Manager. For example, the script can assign an elastic IP address or a secondary private IP address needed for automated failure recovery.
    • Admins can now run a custom first-boot script, specified in the cluster configuration, on cluster nodes.
    • Admins can now configure cluster nodes to create and mount a separate "swap volume" when using instance types that do not provide local instance store volumes.
  • Cluster Management Enhancements
    • Cluster manager and nodes now include jq and cloud-specific command line utilities.
    • Error messages in the Status tab are greyed out when the corresponding activity returns to a healthy state.
    • Admins can now specify multiple DNS hosted zones and configure transfer nodes with separate hosted zones for public and private IP addresses.
    • Admins can now specify hosted zone IDs to disambiguate multiple hosted zones that share the same name.
    • The default Cluster Manager Console timeout has been set to two weeks.
    • The password for the admin user of the ATC-API is automatically set to the Instance ID on first boot of the instance.
    • The Cluster Manager console now shows the Public IP and Private IP columns instead of the Hostname column for cluster nodes.
    • Logs on cluster nodes and the Cluster Manager now rotate.
    • The recursive file count feature is disabled in aspera.conf.
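The first-boot hooks above are passed as EC2 user data, which the EC2 API expects base64-encoded. The sketch below shows one way to prepare such a script; the script body (assigning a secondary private IP with the AWS CLI, as in the failure-recovery example above) and the `encode_user_data` helper are illustrative assumptions, not part of the product.

```python
import base64

# Illustrative first-boot script (an assumption, not product-defined):
# assign a secondary private IP to the instance's primary network
# interface so that automated failure recovery can reuse the address.
FIRST_BOOT_SCRIPT = """#!/bin/bash
# Runs once on first boot of the Cluster Manager instance.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ENI_ID=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \\
  --query 'Reservations[0].Instances[0].NetworkInterfaces[0].NetworkInterfaceId' \\
  --output text)
aws ec2 assign-private-ip-addresses --network-interface-id "$ENI_ID" \\
  --private-ip-addresses 10.0.0.50
"""

def encode_user_data(script: str) -> str:
    """Base64-encode a user-data script as the EC2 API requires."""
    return base64.b64encode(script.encode("utf-8")).decode("ascii")

user_data = encode_user_data(FIRST_BOOT_SCRIPT)
```

The resulting string can be supplied as the `UserData` parameter when launching the Cluster Manager instance.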


Issues Fixed in This Release

ATC-176 - State store backup fails because the redis.rdb file is too large.

ATC-149 - Stats-collector fails to connect to RDS.

ATC-147 - Instances terminated manually during image upgrade remain in "REPLACEMENT IN PROGRESS" state forever.

ATC-145 - The atcm service fails to start with error code 143.

ATC-142 - The cluster refuses new transfers when the Cluster Manager is not available.

ATC-141 - AWS cluster nodes of instance type M3 cannot be rebooted.

ATC-133 - The Cluster Manager monitors transfers using port 9092 instead of 443.

ATC-117 - Creating a cluster with an invalid cluster configuration results in a NullPointerException error instead of a ValidationException error.

ATC-114 - Auto scale policy lacks validation.

ATC-99 - Stats-collector fails if backup or restore configuration contains password with special characters (for example, ^).

ATC-80 - Locking timeouts set incorrectly for the AK synch and Cluster Master periodic activities.

ATC-74 - A degraded cluster node is shut down even if it has the SCALEKV cluster role.

ATC-45 - Failures to start new nodes are counted when calculating and verifying the Start Frequency: Count setting for the auto scale policy.

ATC-26 - Nodes in DEGRADED state are not replaced with new nodes.

ATC-25 - Failure to acquire AUTO_SCALE activity lock during image upgrade prevents upgrade from proceeding.


Cluster Manager Image

Name: atc-clustermanager-

Region          Registered Image
us-east-1       ami-d02061c7
us-west-1       ami-4deea02d
us-west-2       ami-4ae9372a
ap-south-1      ami-1d334772
ap-northeast-1  ami-92c41bf3
ap-northeast-2  ami-ef944081
ap-southeast-1  ami-83bb1fe0
ap-southeast-2  ami-fa4e7d99
eu-central-1    ami-6e708c01
eu-west-1       ami-56054325
sa-east-1       ami-26f86a4a

Transfer Node Image

Name: atc-node-

Region          Registered Image
us-east-1       ami-0b29681c
us-west-1       ami-06d09e66
us-west-2       ami-31eb3551
ap-south-1      ami-1b334774
ap-northeast-1  ami-14fd2275
ap-northeast-2  ami-58964236
ap-southeast-1  ami-24852147
ap-southeast-2  ami-454d7e26
eu-central-1    ami-50728e3f
eu-west-1       ami-560d4b25
sa-east-1       ami-08e67464
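When scripting cluster launches, the region-to-image tables above can be captured as a simple lookup so the correct AMI is selected per region. A minimal sketch; the `ami_for` helper is illustrative, not part of the product, but the image IDs are taken verbatim from the tables above.

```python
# Registered image IDs for ATCM 1.2.0, copied from the tables above.
CLUSTER_MANAGER_AMIS = {
    "us-east-1": "ami-d02061c7",
    "us-west-1": "ami-4deea02d",
    "us-west-2": "ami-4ae9372a",
    "ap-south-1": "ami-1d334772",
    "ap-northeast-1": "ami-92c41bf3",
    "ap-northeast-2": "ami-ef944081",
    "ap-southeast-1": "ami-83bb1fe0",
    "ap-southeast-2": "ami-fa4e7d99",
    "eu-central-1": "ami-6e708c01",
    "eu-west-1": "ami-56054325",
    "sa-east-1": "ami-26f86a4a",
}

TRANSFER_NODE_AMIS = {
    "us-east-1": "ami-0b29681c",
    "us-west-1": "ami-06d09e66",
    "us-west-2": "ami-31eb3551",
    "ap-south-1": "ami-1b334774",
    "ap-northeast-1": "ami-14fd2275",
    "ap-northeast-2": "ami-58964236",
    "ap-southeast-1": "ami-24852147",
    "ap-southeast-2": "ami-454d7e26",
    "eu-central-1": "ami-50728e3f",
    "eu-west-1": "ami-560d4b25",
    "sa-east-1": "ami-08e67464",
}

def ami_for(region: str, role: str = "manager") -> str:
    """Return the registered image ID for a region, or raise if unsupported."""
    table = CLUSTER_MANAGER_AMIS if role == "manager" else TRANSFER_NODE_AMIS
    try:
        return table[region]
    except KeyError:
        raise ValueError(f"ATCM 1.2.0 images are not registered in {region!r}")
```

The returned image ID can then be passed to whatever launch tooling you use (for example, the `--image-id` option of the AWS CLI's `ec2 run-instances` command).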


Known Issues

ATC-207 - The Cluster Manager Console displays a maximum of ten access keys in the drop-down menu.


Product Support

For online support resources for Aspera products, including raising new support tickets, visit the Aspera Support Portal. Note that you may already have an account if you have contacted the Aspera support team in the past. Before creating a new account, first try setting a password for the email address that you use to interact with us. You may also call one of our regional support centers.