Ooyala Flex Hardware Specifications

These are rough guidelines for provisioning hardware and cloud resources as part of your Ooyala Flex build.

Ooyala Flex Manager node:

• 4 CPUs


• 50GB disk space

Ooyala Flex Index node:

• 2 CPUs


• 50GB disk space

Ooyala Flex Publish node:

• 2 CPUs


• 50GB disk space

Ooyala Flex Transfer node:


Disk space provisioning for this node depends on the maximum size of the Assets being ingested. The Transfer resource node stores all in-transit files locally, so it needs enough space for a file as large as your largest Asset plus any other files in transit. File system and application overhead is no more than 20GB. For example, provided your Assets are smaller than 10GB and you never run more than 3 parallel Transfer ingests, 50GB would be sufficient. We recommend provisioning 50GB as a minimum, and more if you regularly deal with larger files.
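The sizing rule above can be sketched as a short calculation (the function name is ours; the 20GB overhead and the worked numbers come from the text):

```python
# Rough sizing rule for the Transfer node disk: space for the largest
# possible set of in-transit files plus ~20GB of file system and
# application overhead. Numbers are illustrative, per the guideline above.

OVERHEAD_GB = 20  # file system + application overhead (upper bound)

def transfer_disk_gb(largest_asset_gb, max_parallel_ingests):
    """Minimum disk space (GB) for a Transfer resource node."""
    return largest_asset_gb * max_parallel_ingests + OVERHEAD_GB

# Worked example from the text: 10GB assets, up to 3 parallel ingests.
print(transfer_disk_gb(10, 3))  # 50 -> matches the recommended 50GB minimum
```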


Ooyala Flex is designed to be resilient; however, it requires your choice of storage to be resilient too. For example, this can be provided by:

• Resilient NAS / dedicated appliance (for example EMC Isilon, Oracle ZFS Storage)

• NetApp CIFS + NFS shares

• Amazon S3

Please note that Ooyala Flex nodes require storage shared between them for storing indexes and other files related to clustering. It is not possible to use cloud object storage (such as Amazon S3) for this, and for performance reasons we highly recommend an NFS share. When setting up an Ooyala Flex environment in a cloud environment such as Amazon AWS or Rackspace, you will need to ensure that you don't have a single point of failure - EBS (Amazon) and CBS (Rackspace) volumes can only be accessed by one server at a time. One possible solution would be a clustered file system such as GlusterFS. This is not something we have tested ourselves, merely a suggestion.
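For illustration, mounting such a shared volume over NFS on each node might look like the following /etc/fstab entry. The server name and paths are placeholders for your environment, not part of the product:

```
# /etc/fstab - example NFS mount for the shared Flex index/cluster files.
# "nfs-server" and both paths are placeholders; adjust for your setup.
nfs-server:/export/flex-shared  /mnt/flex-shared  nfs  rw,hard  0  0
```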


Another aspect of Ooyala Flex's resilience is that all the nodes (bar the Index node) run either as clusters or as independent nodes that can survive the failure of their counterpart. As such, it is highly advisable that, where possible, the nodes are split out - for example, onto different hosts (VMware) or different Availability Zones (Amazon). A simplified bare-hardware equivalent would be using two separate, cross-connected network switches.


These are rough guidelines for the networking setup of a full Ooyala Flex stack (Manager, Publish and Transfer resources).

We recommend using a load balancer to provide a resilient endpoint for the Ooyala Flex Console. It is possible to set this up without a dedicated load balancer, using Apache and CARP; however, certain hosted/cloud environments (for example Amazon) will not allow you to use CARP. We can recommend the Riverbed SteelApp (formerly Stingray) load balancer, and have successfully run Ooyala Flex with Amazon's ELB solution too. Note that if you're setting up in Amazon you will need two ELB instances - one for the Ooyala Flex Console and one for your Publish endpoint.

When locking the environment down, a recommended solution is to place the load balancers and Transfer (FTP) resource nodes in a DMZ. When deploying to Amazon, you will probably also need a NAT instance in the DMZ.

• The Transfer resource nodes will need to be accessed directly via FTP (or over UDP instead of TCP if using 'accelerated transfers').

• Both the Manager nodes and Publish nodes will need outbound access to a CDN, if using one for publishing.

• Publish and Transfer resource nodes need to communicate with the Manager via its (load-balanced) Ooyala Flex Console endpoint. The Managers communicate over HTTP with the Publish endpoint and with all Transfer resource nodes (the latter do not have a shared endpoint and are a redundant pair rather than a cluster).

It is entirely possible to run a Manager-only Ooyala Flex stack with outbound access completely locked down. Keep in mind that the only way of deploying Ooyala Flex is via our NEM deployment tool, for which you will need access to Ooyala's package and SVN repositories. We are happy to provide you with VPN access for this.


Ooyala Flex utilises MySQL for storing relational data. We suggest provisioning plenty of memory for your DB solution and ensuring that as much of it as possible is available to the InnoDB buffer pool; 8GB of RAM is a good starting point. If you expect to ingest hundreds of Assets every day and run complex workflows, SSD storage for your database is highly recommended, as you will probably outgrow the available RAM fairly quickly.
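As an illustration, the buffer pool suggestion above translates to something like the following my.cnf fragment. The 6GB figure assumes a dedicated 8GB DB server and is our assumption, not a product requirement:

```
[mysqld]
# Give InnoDB most of the machine's RAM; roughly 75% of 8GB as a
# starting point on a dedicated database server.
innodb_buffer_pool_size = 6G
```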

For resilience, the only currently supported solution is a replication slave, failing over to it if anything happens to the main DB server. Remember that you will need to provision exactly the same resources for it as for your master: if the master DB server fails, you must be able to run Ooyala Flex from the slave DB. Ooyala Flex supports only MySQL 5.6+ and only the InnoDB storage engine - you will not be able to use an NDB MySQL Cluster solution for this. When using a replication slave, you'll need to set your MySQL binlog format to ROW (the default is STATEMENT).
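The replication requirements above might look like this in my.cnf on the master (server IDs and the log basename are placeholders for your environment):

```
[mysqld]
server-id     = 1            # must be unique; give the slave a different ID
log-bin       = mysql-bin    # enable the binary log so the slave can replicate
binlog_format = ROW          # required by Ooyala Flex; the default is STATEMENT
```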