NEM Guide

NEM stands for Nativ Environment Manager.

This section introduces the technologies and processes Nativ recommends for delivering optimal service levels from a Flex Enterprise system, while retaining the agility required to support development and testing of new functionality and configuration. Nativ uses a combination of standard Linux and bespoke tools to manage the deployment and configuration of Flex.

Components

Maintenance of systems managed with the Nativ Environment Manager (NEM) is principally done through a revision control system shared between all parties. This provides the following:

• A centralized, definitive definition: This reduces ambiguity over which version is considered the latest, or which one is intended to be live.

• An audit trail: As all changes must pass through a central repository, and all changes are required to have change log entries, problems can be easily isolated.

NEM and the other required tools are installed via Linux (Ubuntu and Red Hat) packages, detailed in the requirements section below.

The Subversion revision control system is used due to its popularity and granular access control. Most work is done in the trunk, but for experimental work, such as testing new web applications, branches of the NEM definition may be created.
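A branch of the NEM definition can be created with a standard Subversion copy. The repository paths below follow the layout used elsewhere in this guide, but the trunk location and branch name are illustrative:

```shell
# Branch the NEM definition for experimental work (paths illustrative)
svn copy \
  https://repository.nativ-systems.com/svn/nem/projects/mio/platform/trunk \
  https://repository.nativ-systems.com/svn/nem/projects/mio/platform/branches/new-webapp-test \
  -m "Create branch for testing a new web application"
```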

Building out the definition held in Subversion onto the target file system is done by Apache Ant. Ant was chosen because it supports simple templates and has a reasonable developer mind share.
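In practice NEM drives Ant itself, but the build out step is conceptually similar to running Ant with the environment configuration supplied as a property file. The build file path and the 'buildout' target name below are hypothetical, for illustration only:

```shell
# Illustrative only: NEM invokes Ant internally. The build file
# location and the 'buildout' target name here are hypothetical.
ant -propertyfile /etc/mio.conf -f build.xml buildout
```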

Upgrading from version 5.0.X to 5.1.X

Flex version 5.1.0 introduces a great deal of new functionality, such as comments on Metadata fields, interdependent fields, LDAP integration, Data Source timed actions, Flex Objects, and major changes to the Scripting API. Depending on the Flex functionality you are currently using in the 5.0.x series, this upgrade can be more involved than upgrades between minor versions.

To ensure a quick and successful upgrade, please get in touch with Nativ so that we can work with you to identify necessary changes to your configuration and devise a migration strategy tailored to how you use Flex.

NEM Implementation

The start state for a NEM system is to have no services running and the ‘/nem/’ directory empty. The following list describes the transitions that are possible with NEM. The possible states and transitions are also shown in a diagram in the appendix section.

Download assets

• The required assets are copied locally, using bespoke NEM code and client libraries.

• Subversion, HTTP and local files are supported.

• A failure during this transition will typically be due to networking issues.

Build out

Apache Ant moves assets into place.

This transition fails if Ant exits with non-zero status.

This is the only stage of the deployment where configuration is applied.

Subsequent states do, however, read from the configuration written at this stage.

 

Enable

• Recursively finds service dependencies of the ‘/nem/nem.conf’ meta-service and enables them accordingly.

• Typically this means starting processes, but it can also enable NFS mounts, check database status, or add users.

Note: As part of the Flex deployment, NEM will create a 'mio' user with a UID of 25001. This user will be the owner of the Flex process. When using network shares you'll need to ensure they are readable and writable by this user.
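A quick way to check both points - that the user exists and that a share is readable and writable by it - is sketched below; the mount point is illustrative:

```shell
# Confirm the mio user exists with the expected UID
id mio
# Verify the share is readable and writable by the mio user
# (/mnt/media is an illustrative mount point)
sudo -u mio ls /mnt/media
sudo -u mio touch /mnt/media/.nem-write-test && \
  sudo -u mio rm /mnt/media/.nem-write-test
```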

Disable

• All dependencies of ‘/nem/nem.conf’ meta-service are disabled. For example: Processes are killed, and mounts, users, and Apache hosts are removed.

Destroy

• Performed as part of an upgrade, to clean ‘/nem/’ and return the system to the start state.


NEM Installation

Red Hat

Start by adding the Nativ repository:

[root@localhost ~]# tee /etc/yum.repos.d/ns.repo <<'EOF'
[ns]
name=Red Hat Enterprise Linux 6Server - $basearch - Source
baseurl=http://packages.nativ-systems.com/redhat/ns/6Server/$basearch/
enabled=1
gpgcheck=0
EOF

 

 

Oracle Java 7 is a prerequisite; you can install it from Nativ’s repository:

[root@localhost ~]# tee /etc/yum.repos.d/java7.repo <<'EOF'
[java7]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://packages.nativ-systems.com/redhat/java7/$releasever/$basearch/
enabled=1
gpgcheck=0
EOF

Note: It is assumed that the EPEL repository is already available. NEM and Oracle Java can now be installed:

[root@localhost ~]# yum install nativsystems-environment-manager-agent jdk

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

* base: mirrors.ukfast.co.uk

* epel: epel.check-update.co.uk

* extras: mirror.econdc.com

* updates: centos.mirroring.pulsant.co.uk

Setting up Install Process

Resolving Dependencies

--> Running transaction check

---> Package jdk.x86_64 2000:1.7.0_40 will be installed

---> Package nativsystems-environment-manager-agent.i686 0:0.1-97.el6 will be installed

Transaction Summary

================================================================================

Install 69 Package(s)

Upgrade 1 Package(s)

Total download size: 164 M

Is this ok [y/N]: y

Downloading Packages:

(1/70): alsa-lib-1.0.22-3.el6.x86_64.rpm | 370 kB 00:00

Complete!

[root@localhost ~]#

Ubuntu

root@ip-10-9-5-22:~# wget -qO - http://packages.nativ-systems.com/keys | sudo apt-key add -

OK

root@ip-10-9-5-22:~# echo deb http://packages.nativ-systems.com/ stable ns | sudo tee -a /etc/apt/sources.list

deb http://packages.nativ-systems.com/ stable ns

root@ip-10-9-5-22:~# sudo apt-get update

Ign http://us-west-2.ec2.archive.ubuntu.com trusty InRelease

Ign http://us-west-2.ec2.archive.ubuntu.com trusty-updates InRelease

Hit http://us-west-2.ec2.archive.ubuntu.com trusty Release.gpg

Hit http://us-west-2.ec2.archive.ubuntu.com trusty-updates Release.gpg

(...)

Ign http://packages.nativ-systems.com stable InRelease

Ign http://security.ubuntu.com trusty-security InRelease

Hit http://packages.nativ-systems.com stable Release.gpg

Hit http://security.ubuntu.com trusty-security Release.gpg

Hit http://packages.nativ-systems.com stable Release

Hit http://security.ubuntu.com trusty-security Release

Hit http://packages.nativ-systems.com stable/ns amd64 Packages

(...)

Fetched 15.9 MB in 9s (1,623 kB/s)

Reading package lists... Done

root@ip-10-9-5-22:~# apt-get install nativsystems-environment-manager-agent

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following extra packages will be installed:

(...)

The following NEW packages will be installed:

ant ant-optional apache2 apache2-bin apache2-data ca-certificates-java cifs-utils

(...)

nativsystems-environment-manager-agent nfs-common odbcinst odbcinst1debian2 openjdk-7-jre-headless oracle-java7-bin oracle-java7-jdk oracle-java7-jre

(...)

1 upgraded, 89 newly installed, 0 to remove and 66 not upgraded.

Need to get 187 MB of archives.

After this operation, 404 MB of additional disk space will be used.

Do you want to continue? [Y/n] Y

Get:1 http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ trusty/main libapr1 amd64 1.5.0-1 [85.1 kB]

(...)

done.

Note: Flex 5.1.0 requires NEM version 0.1.4 (or later).

NEM Configuration

$ sudo vim /etc/nem-upgrade.conf

Add the definition and configuration URLs:

[nem]

definition=svn https://nem.mio:XXXXXXX@repository.nativ-systems.com/svn/nem/projects/mio/platform/branches/MIO-Prod

configuration=file:///etc/mio.conf

Save and exit.
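Equivalently, the file can be written non-interactively; the quoted heredoc below reproduces the same content (the repository password is elided here exactly as above):

```shell
# Write /etc/nem-upgrade.conf in one step ('EOF' quoted so the shell
# does not expand anything in the URLs)
sudo tee /etc/nem-upgrade.conf <<'EOF'
[nem]
definition=svn https://nem.mio:XXXXXXX@repository.nativ-systems.com/svn/nem/projects/mio/platform/branches/MIO-Prod
configuration=file:///etc/mio.conf
EOF
```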

Environment Configuration

One configuration file should be made per environment. These configurations should be as minimal as possible. Normally a huge bundle of configuration files is associated with a project, but the actual changes between environments are quite small. For example, the database host will differ between environments, so it should go into the configuration file; Tomcat ports should be consistent across all environments, so they should be hard coded. Moving parameters into the configuration should be done as lazily as possible, as this reduces the number of differences between environments. The configuration file is provided to Apache Ant during the build out stage.

Index

# Flex node type: master, job, index, playout or resource

nodetype=index

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, ie. will be in the *.uat.company.com format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

# This is the same database as the master node

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses of masters (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Master

# Flex node type: master, job, index, playout or resource

nodetype=master

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, ie. for the example below it would be in the "account.uat.company.com" format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# media is for your media files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

# You can use the 'local' option ONLY if you're running in a non-clustered, single-master environment

# filestore.mount=local

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# This is your media storage share

# - you can pass options to mount.cifs as the third argument, like in the example below

# - the uid and gid options are mandatory

# - when using newer Linux kernels you might need to use sec=ntlm to mount successfully (for example with netApp)

media.mount=cifs //10.10.20.10/media username=mio,password=pass,uid=mio,gid=mio,sec=ntlm

media.dirs=RECURSIVEWILDCARD

# Keep this set to local.

public.mount=local

public.owner=

# Subdirectory of /nem/apache/holdingpages containing a page displayed by Apache when Flex is down

holdingpages=default

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=cifs-utils,clamav

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this file is a commented string

# SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio_intermediate.cer

SSLCertificateChainFile= # not applicable

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses for master nodes (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

# Holds IP addresses for job nodes (so that we can union with master nodes to form JGroups/Hazelcast clusters)

# If this is not defined, NEM will buildout with jobs executing on masters

job.ip.addresses=10.10.11.81,10.10.11.82

# The 'testuser' password if different from default

testuser.password=testuser

# Hibernate search

# indexing topology - 'standalone' means local indexing, 'slave' means relying on an indexing node

# If this is set to standalone or not set, NEM will buildout with indexing executing on masters

hibernate.search.index.topology=slave

# Flex Playout API settings

tv.nativ.mio.playout.api.endpoint=https://playout.uat.company.com

tv.nativ.mio.playout.api.username=testuser

tv.nativ.mio.playout.api.password=testuser

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

#

# LDAP integration

# ldap.url=ldap://ldap.server.company.com:389/

# ldap.domain=company.com

# ldap.searchbase=OU=employees,DC=company,DC=com
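Before the first build out it can be worth confirming that the media share mounts with the options given in media.mount. The sketch below reuses the example server address and credentials from the configuration above; substitute your own:

```shell
# Manually test the CIFS share with the same options as media.mount
sudo mkdir -p /mnt/media-test
sudo mount.cifs //10.10.20.10/media /mnt/media-test \
  -o username=mio,password=pass,uid=mio,gid=mio,sec=ntlm
ls /mnt/media-test
sudo umount /mnt/media-test
```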

Job

# Flex node type: master, job, index, playout or resource

nodetype=job

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, ie. for the example below it would be in the "account.uat.company.com" format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# media is for your media files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

# You can use the 'local' option ONLY if you're running in a non-clustered, single-master environment

# filestore.mount=local

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# This is your media storage share

# - you can pass options to mount.cifs as the third argument, like in the example below

# - the uid and gid options are mandatory

# - when using newer Linux kernels you might need to use sec=ntlm to mount successfully (for example with netApp)

media.mount=cifs //10.10.20.10/media username=mio,password=pass,uid=mio,gid=mio,sec=ntlm

media.dirs=RECURSIVEWILDCARD

# Keep this set to local.

public.mount=local

public.owner=

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=cifs-utils,clamav

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses for master nodes (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

# Holds IP addresses for job nodes (so that we can union with master nodes to form JGroups/Hazelcast clusters)

# If this is not defined, NEM will buildout with jobs executing on masters

job.ip.addresses=10.10.11.81,10.10.11.82

# Hibernate search

# indexing topology - 'standalone' means local indexing, 'slave' means relying on an indexing node

# If this is set to standalone or not set, NEM will buildout with indexing executing on masters

hibernate.search.index.topology=slave

# Flex Playout API settings

tv.nativ.mio.playout.api.endpoint=https://playout.uat.company.com

tv.nativ.mio.playout.api.username=testuser

tv.nativ.mio.playout.api.password=testuser

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Playout

# Flex node type: master, job, index, playout or resource

nodetype=playout

# ID to use for clustering.

clusterid=uatmioplayout

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=nativ

# Defines the playout domain name

domainname=playout.stage.mioeverywhere.tv

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# database config

databasehost=10.10.10.10

databasename=mioplayout

databaseuser=mioplayout_user

databasepassword=mioplayout_password

# Shared storage

# This is where the lucene playout search index is stored - needs to be accessible by all playout nodes.

lucene.mount=nfs 10.10.20.20:/export/storage/playout

lucene.owner=

public.mount=local

public.owner=mio

# Web service base URL to be used by Playout Wrapper.

# This is deprecated, set it as below.

tv.nativ.mio.playout.webservices.baseurl=http://127.0.0.1:30044/webservices/

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 26

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.12.11,10.10.12.12 # comma-separated list of IP addresses of Flex Playout nodes

# sharedip.ip = 10.10.12.10 # the virtual CARP IP

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this file is a commented string

# SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio_intermediate.cer

SSLCertificateChainFile= # not applicable

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds comma separated IP addresses of all playout nodes.

playout.ip.addresses=10.10.12.11,10.10.12.12

# Total number of search results returned by lucene

tv.nativ.mio.playout.lucene.hits.per.page=5000

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Resource

# Flex node type: master, job, index, playout or resource

nodetype=resource

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=nativ

# Master account URL

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=true

# For FTP resources set these to local

filestore.mount=local

filestore.owner=mio

incoming.mount=local

incoming.owner=mio

public.mount=local

public.owner=mio

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this file is a commented string

#SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio.intermediate.crt

SSLCertificateChainFile = # not applicable

# Database configuration

http_databasehost=10.10.10.10

http_databasename=mio_web_upload

http_databaseuser=mio_web_upload

http_databasepassword=mio_web_upload_password

# Comma-separated list of all master nodes

master.ip.addresses=10.5.13.13,10.5.13.14

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Deployment Walkthrough

The following example shows a change to the Flex Enterprise release version and its deployment.

​Before deploying Flex for the first time, ensure that:

• Apache will be able to create outgoing connections (SELinux on Red Hat and CentOS blocks this by default)

• iptables is disabled (Red Hat and CentOS have a restrictive set of firewall rules by default)
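On Red Hat and CentOS both points can be addressed as follows. The SELinux boolean shown allows outgoing Apache connections without disabling SELinux entirely; review the firewall change against your own security policy:

```shell
# Allow Apache to create outgoing network connections under SELinux
setsebool -P httpd_can_network_connect 1
# Stop iptables and keep it from starting at boot (RHEL/CentOS 6 style)
service iptables stop
chkconfig iptables off
```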

Log into the Flex server and perform the upgrade:

$ sudo nem upgrade

INFO:root:Destroying existing environment...

INFO:root:Disabling [ucarp_01] /nem/carp/nem.conf (running)...

INFO:root:Disabled [ucarp_01] /nem/carp/nem.conf (not running)

INFO:root:Disabling [daemon_01] /nem/mio/nem.conf (running)...

WARNING:root:Still trying to kill: [daemon_01] /nem/mio/nem.conf (running)

INFO:root:Disabled [daemon_01] /nem/mio/nem.conf (not running)

INFO:root:Disabling [folder_01] /nem/public/nem.conf (running)...

INFO:root:Disabled [folder_01] /nem/public/nem.conf (not running)

...

...

INFO:root:Disabled [sql_01] /nem/db/nem.conf (running)

INFO:root:Destroying existing environment done

INFO:root:Building out new environment...

INFO:root:Checking out assets...

 

NEM then asks the following question:

"Deploying svn https://****:****@repository.nativ-systems.com/svn/path/to/svn/branch, revision 25. Is that okay?"

Responding ‘yes’ to this question will deploy the latest revision of the chosen branch in Subversion. Responding ‘no’ will prompt the user for an older revision to deploy. Once complete, the upgrade process will print a summary of what’s been deployed:

INFO:root:Checking out assets done

INFO:root:Running ant buildout...

INFO:root:Ant buildout complete

INFO:root:Tidying up checkout assets...

INFO:root:Tidying up checkout assets done

INFO:root:Building out new environment done

INFO:root:Starting new environment...

INFO:root:Enabling [meta_01] /nem/nem.conf (not running)...

INFO:root:Already enabled: [sql_01] /nem/db/nem.conf (running)

INFO:root:Enabling [users_01] /nem/users/nem.conf (not running)...

...

...

INFO:root:Enabled [meta_01] /nem/nem.conf (running)

INFO:root:Starting new environment done

INFO:root:

Deployment seems to have gone okay of:

Definition: svn https://****:****@repository.nativsystems.com/svn/nem/projects/mio/platform/branches/MIO-Maint (revision 5152)

Configuration: file:///etc/mio.conf

$

When deploying Flex for the first time, you will also need to perform two other steps:

Create the necessary directory structures on the filestore and media network shares. You can do this by running 'nem run' in the /nem/filestore and /nem/media directories - you will be shown a series of prompts asking if you'd like to create missing directories; answer yes to all of those questions.

Deploy the SQL schema on your previously created database - see the Database Maintenance section below for detailed instructions.

Database Maintenance

Please note that the Flex Master uses SQL triggers in the database. You will need to ensure that the TRIGGER privilege is granted to the database user, as it is not granted by default as part of GRANT ALL.

GRANT TRIGGER ON miomaster.* to 'miomaster'@'hostname' IDENTIFIED BY 'password';
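The statement can be applied with the mysql command line client; the host and credentials below are the example values used in the environment configurations above:

```shell
# Grant the TRIGGER privilege as a user with GRANT OPTION (e.g. root)
mysql -h 10.10.10.10 -u root -p <<'SQL'
GRANT TRIGGER ON miomaster.* TO 'miomaster'@'hostname' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SQL
```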

Note: There has been a bug in MySQL since a separate TRIGGER privilege was introduced in version 5.1 (http://bugs.mysql.com/bug.php?id=39489) which will result in an error message when trying to create a trigger on a MySQL instance with binary logging enabled (for example when replicating to a slave). This can be worked around by adding the following line to your MySQL configuration file:

log_bin_trust_function_creators = 1
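The setting can also be applied to a running instance without a restart, which is useful while waiting for a maintenance window to update the configuration file:

```shell
# Apply immediately on the running instance (requires SUPER privilege)
mysql -h 10.10.10.10 -u root -p \
  -e 'SET GLOBAL log_bin_trust_function_creators = 1;'
# Persist it by adding the line under the [mysqld] section of your
# MySQL configuration file (the path varies by distribution)
```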

Updating the database schema

Since version 5.1.0, NEM uses Flyway (http://flywaydb.org) to track schema changes (also known as database migrations). When trying to start an environment which hasn't had the changes applied yet, the following will happen:

INFO:root:Starting new environment...

INFO:root:Enabling [meta_01] /nem/nem.conf (not running)...

INFO:root:Enabling [flyway_01] /nem/db/nem.conf (not running)...

Command:

/usr/bin/sudo -u nem /usr/lib/jvm/java-7-oracle/bin/java -cp /usr/share/nativsystems-environment-manager-agent/flyway/lib/*:/usr/share/nativsystems-environment-manager-agent/flyway/drivers/* org.flywaydb.commandline.Main -url=jdbc:mysql://10.5.1.12:3306/miotest?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&jdbcCompliantTruncation=false&zeroDateTimeBehavior=convertToNull -user=miotest -password=miotest -sqlMigrationPrefix=V -sqlMigrationSeparator=__ -sqlMigrationSuffix= -placeholderPrefix=asdf -locations=filesystem:/nem/db/migrations validate

Exit status:

1

Stdout:

Flyway 3.2.1 by Boxfuse

Database: jdbc:mysql://10.10.10.10:3306/miotest (MySQL 5.6)

Validated 7 migrations (execution time 00:00.014s)

ERROR: Validate failed. Detected resolved migration not applied to database: 001

Stderr:

 

 

Apply the patch by doing the following:

~$ sudo nem -f /nem/db/nem.conf run

 

NEM then presents a menu showing multiple Flyway options. Choose Migrate and press the Enter key to apply all remaining changes. At any time you can also use the Info option to see which changes have already been applied and which are still pending.

Maintenance

The ‘nem’ command accepts the following subcommands:

Enable

• For example, this will start up processes or mount folders via NFS.

• For some destructive changes such as applying SQL, services will refuse to be enabled (throwing an exception, halting deployment) until the change has been applied.

• This recursively starts the service dependencies. If this is not required or desired then supply the ‘--no-dependencies’ command line argument.
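For example, using a module path in the style of the Maintenance examples in this guide (the exact argument order is a sketch):

```shell
# Enable the Apache module and, recursively, its dependencies
sudo nem -f /nem/apache/nem.conf enable
# Enable only the module itself
sudo nem -f /nem/apache/nem.conf enable --no-dependencies
```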

Disable

• Stops processes, unmounts NFS shares, etc.

• Does nothing for some services such as SQL changes.

Restart

• Calls disable and enable commands in sequence.

Status

• Describes the status (running or not running) of the module and its dependencies.

Run

• Provides an interactive session for modules such as SQL and folders, or runs a process in the foreground for debugging.

• Destructive changes are only made following interaction with the user.

Upgrade

• Reads /etc/nem-upgrade.conf and performs an upgrade based on the configuration and definition URLs.

• The system is taken from the built out or fully enabled states, described by the diagram in the appendix to this document, via the start state back to the fully enabled state.

• All running services are disabled, the new code is dropped into place, and then the services are enabled. The order of enabling and disabling is calculated based on the dependencies.

History

• Lists the dates, times, revision numbers and the URLs of previous deployments.

Tree

• Prints a tree of dependencies of the selected module.

Most of these commands need to find a ‘nem.conf’ file in the current working directory, or can be given the path to a ‘nem.conf’ file with the -f flag. For example:

$ cd /nem/apache/

/nem/apache$ sudo nem status

INFO:root:[apache_01] /nem/apache/nem.conf (running)

/nem/apache$

As an alternative, the following single command is equivalent to the sequence above:

$ sudo nem -f /nem/apache/nem.conf status

INFO:root:[apache_01] /nem/apache/nem.conf (running)

$

SSL Certificate

The NEM deployment process expects an SSL certificate (and key) to be present on the system in a predefined location. A certificate is required even for environments in which the web console is accessed via a load balancer performing the SSL encryption. If your certificate authority requires the use of an intermediate certificate, this can be configured using the SSLCertificateChainFile option in your Environment Configuration.

You will need to place your certificate in /etc/apache2/mio.crt and your key in /etc/apache2/mio.key.

If you do not currently have a certificate (for example, if you're just setting up a test environment or are still waiting for your certificate order to be fulfilled), you can create and use a self-signed certificate as follows:

1.) Generate a private key:

root@ip-10-9-5-22:~# openssl genrsa -des3 -out server.key 1024

Generating RSA private key, 1024 bit long modulus

.......++++++

....++++++

e is 65537 (0x10001)

Enter pass phrase for server.key:

Verifying - Enter pass phrase for server.key:

2.) Generate a Certificate Signing Request:

root@ip-10-9-5-22:~# openssl req -new -key server.key -out server.csr

Enter pass phrase for server.key:

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.

-----

Country Name (2 letter code) [AU]:

State or Province Name (full name) [Some-State]:

Locality Name (eg, city) []:

Organization Name (eg, company) [Internet Widgits Pty Ltd]:

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:

Email Address []:

Please enter the following 'extra' attributes

to be sent with your certificate request

A challenge password []:

An optional company name []:

3.) Remove passphrase from key:

root@ip-10-9-5-22:~# cp server.key server.key.org

root@ip-10-9-5-22:~# openssl rsa -in server.key.org -out server.key

Enter pass phrase for server.key.org:

writing RSA key

4.) Generate a self-signed certificate:

root@ip-10-9-5-22:~# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Signature ok

subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd

 

Getting Private key

5.) Install the key and certificate files:

root@ip-10-9-5-22:~# cp server.key /etc/apache2/mio.key

root@ip-10-9-5-22:~# cp server.crt /etc/apache2/mio.crt
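If you do not need a passphrase-protected key at any stage, steps 1 to 5 can be collapsed into a single openssl invocation. This is a sketch; the subject fields and the 2048-bit key size are example choices, not requirements:

```shell
# Generate an unencrypted 2048-bit key and a self-signed certificate in one step,
# then install them in the locations where NEM expects to find them.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=GB/O=Example Ltd/CN=master.uat.company.com" \
    -keyout server.key -out server.crt
cp server.key /etc/apache2/mio.key
cp server.crt /etc/apache2/mio.crt
```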

Environment Configuration

One configuration file should be created per environment, and these configurations should be as minimal as possible. A project is normally associated with a large bundle of configuration files, but the actual differences between environments are quite small. For example, the database host will differ between environments, so it should go into the configuration file; Tomcat ports should be consistent across all environments, so they should be hard-coded. Parameters should be moved into the configuration as lazily as possible, as this reduces the number of differences between environments. The configuration file is provided to Apache Ant during the build-out stage.
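The build-out substitution can be pictured as standard Ant property filtering: templates held in Subversion contain tokens which Ant replaces with values read from the environment's configuration file. A minimal sketch, in which the file names, directory names and token names are illustrative rather than the actual NEM definition:

```xml
<!-- Illustrative only: load the per-environment file, then expand @databasehost@
     style tokens while copying templates into place -->
<project name="buildout-sketch" default="build">
  <property file="uat.properties"/>
  <target name="build">
    <copy todir="/nem" overwrite="true">
      <fileset dir="templates"/>
      <filterset begintoken="@" endtoken="@">
        <filter token="databasehost" value="${databasehost}"/>
      </filterset>
    </copy>
  </target>
</project>
```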

Index

# Flex node type: master, job, index, playout or resource

nodetype=index

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, i.e. they will be in the *.uat.company.com format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

# This is the same database as the master node

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses of masters (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Master

# Flex node type: master, job, index, playout or resource

nodetype=master

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, i.e. for the example below they would be in the "account.uat.company.com" format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# media is for your media files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

# You can use the 'local' option ONLY if you're running in a non-clustered, single-master environment

# filestore.mount=local

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# This is your media storage share

# - you can pass options to mount.cifs as the third argument, like in the example below

# - the uid and gid options are mandatory

# - when using newer Linux kernels you might need to use sec=ntlm to mount successfully (for example with netApp)

media.mount=cifs //10.10.20.10/media username=mio,password=pass,uid=mio,gid=mio,sec=ntlm

media.dirs=RECURSIVEWILDCARD

# Keep this set to local.

public.mount=local

public.owner=

# Subdirectory of /nem/apache/holdingpages containing a page displayed by Apache when Flex is down

holdingpages=default

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=cifs-utils,clamav

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this value is left blank or commented out, as below

# SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio_intermediate.cer

SSLCertificateChainFile= # not applicable

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses for master nodes (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

# Holds IP addresses for job nodes (so that we can union with master nodes to form JGroups/Hazelcast clusters)

# If this is not defined, NEM will buildout with jobs executing on masters

job.ip.addresses=10.10.11.81,10.10.11.82

# Hibernate search

# indexing topology - 'standalone' means local indexing, 'slave' means relying on an indexing node

# If this is set to standalone or not set, NEM will buildout with indexing executing on masters

hibernate.search.index.topology=slave

# Flex Playout API settings

tv.nativ.mio.playout.api.endpoint=https://playout.uat.company.com

tv.nativ.mio.playout.api.username=testuser

tv.nativ.mio.playout.api.password=testuser

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Job

# Flex node type: master, job, index, playout or resource

nodetype=job

# ID to use for clustering.

clusterid=uatmio

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=company

# Defines master account URL. All account names will be templated according to this, i.e. for the example below they would be in the "account.uat.company.com" format

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# Database configuration

databasehost=10.10.10.10

databasename=miomaster

databaseuser=miomaster_user

databasepassword=miomaster_password

# Shared storage

# filestore is for indexes, cluster state and various temporary work files

# media is for your media files

# You can force nem to recursively chown the mounted storage to the user specified - recommended to set to 'mio' when using NFS.

filestore.owner=

# You can use the 'local' option ONLY if you're running in a non-clustered, single-master environment

# filestore.mount=local

filestore.mount=nfs 10.10.20.20:/export/storage/filestore

# This is your media storage share

# - you can pass options to mount.cifs as the third argument, like in the example below

# - the uid and gid options are mandatory

# - when using newer Linux kernels you might need to use sec=ntlm to mount successfully (for example with netApp)

media.mount=cifs //10.10.20.10/media username=mio,password=pass,uid=mio,gid=mio,sec=ntlm

media.dirs=RECURSIVEWILDCARD

# Keep this set to local.

public.mount=local

public.owner=

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=cifs-utils,clamav

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds IP addresses for master nodes (i.e. where HornetQ is deployed)

master.ip.addresses=10.10.11.11,10.10.11.12

# Holds IP addresses for job nodes (so that we can union with master nodes to form JGroups/Hazelcast clusters)

# If this is not defined, NEM will buildout with jobs executing on masters

job.ip.addresses=10.10.11.81,10.10.11.82

# Hibernate search

# indexing topology - 'standalone' means local indexing, 'slave' means relying on an indexing node

# If this is set to standalone or not set, NEM will buildout with indexing executing on masters

hibernate.search.index.topology=slave

# Flex Playout API settings

tv.nativ.mio.playout.api.endpoint=https://playout.uat.company.com

tv.nativ.mio.playout.api.username=testuser

tv.nativ.mio.playout.api.password=testuser

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Playout

# Flex node type: master, job, index, playout or resource

nodetype=playout

# ID to use for clustering.

clusterid=uatmioplayout

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=nativ

# Defines the playout domain name

domainname=playout.stage.mioeverywhere.tv

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=false

# database config

databasehost=10.10.10.10

databasename=mioplayout

databaseuser=mioplayout_user

databasepassword=mioplayout_password

# Shared storage

# This is where the lucene playout search index is stored - needs to be accessible by all playout nodes.

lucene.mount=nfs 10.10.20.20:/export/storage/playout

lucene.owner=

# Web service base URL to be used by Playout Wrapper.

# This setting is deprecated; leave it set as below.

tv.nativ.mio.playout.webservices.baseurl=http://127.0.0.1:30044/webservices/

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 26

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.12.11,10.10.12.12 # comma-separated list of IP addresses of Flex Playout nodes

# sharedip.ip = 10.10.12.10 # the virtual CARP IP

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this value is left blank or commented out, as below

# SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio_intermediate.cer

SSLCertificateChainFile= # not applicable

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

# Holds comma separated IP addresses of all playout nodes.

playout.ip.addresses=10.10.12.11,10.10.12.12

# Total number of search results returned by lucene

tv.nativ.mio.playout.lucene.hits.per.page=5000

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Resource

# Flex node type: master, job, index, playout or resource

nodetype=resource

# Defines the name of the logs directory

# /var/nem-logs/<clientname>/<domainname>/jboss/<hostname>/now/

clientname=nativ

# Master account URL

domainname=master.uat.company.com

# log4j log level

log.level=WARN

# Enable JVM debugging

jvm.debug=true

# For FTP resources set these to local

filestore.mount=local

filestore.owner=mio

incoming.mount=local

incoming.owner=mio

# CARP configuration

# Use the following if you are not using CARP:

sharedip.vhid = 1

sharedip.password = password

sharedip.addresses = 127.0.0.1

sharedip.ip = 127.0.0.3

# If using CARP, you'll need a unique vhid!

# sharedip.vhid = 13

# sharedip.password = yourp4ssw0rd

# sharedip.addresses = 10.10.11.11,10.10.11.12 # comma-separated list of IP addresses of Flex Master nodes

# sharedip.ip = 10.10.11.10 # the virtual CARP IP

# You can declare extra packages that NEM will check the operating system for before allowing you to start Flex

supplementalpackages=

# Intermediate SSL certificate for Apache. This is only required if specified by your certification authority.

# If not, ensure this value is left blank or commented out, as below

#SSLCertificateChainFile = SSLCertificateChainFile /etc/apache2/mio.intermediate.crt

SSLCertificateChainFile = # not applicable

# Database configuration

databasehost=10.10.10.10

databasename=mioresource

databaseuser=mioresource_user

databasepassword=mioresource_password

# Database configuration for the HTML5 web uploader

http_databasehost=10.10.10.10

http_databasename=mio_web_upload

http_databaseuser=mio_web_upload

http_databasepassword=mio_web_upload_password

# Comma-separated list of all master nodes

master.ip.addresses=10.5.13.13,10.5.13.14

# Java executable path

java.executable=/usr/lib/jvm/java-7-oracle/bin/java

#####################

# Optional settings:

#

# set initial and maximum amount of allocated heapspace:

# heapspace=2g

#

# send log files to a log4j server listening on port 4560 (for example Logstash with a log4j input):

# logstash.server=10.10.10.9

#

# comma-separated list of hosts Apache should be load balancing over. If not specified, the default is all hosts in the cluster (nem.cluster.ip.addresses)

# ajp13workers=10.10.11.11,10.10.11.12

#

# add a JMX username/password to allow for monitoring over JMX. please note that the JMX port is

# listening on localhost:9999 only and you will need to use JBoss EAP remoting-jmx libraries to connect.

# the connection string is service:jmx:remoting-jmx://localhost:9999

#

# you can find detailed information about JMX in JBoss EAP at https://planet.jboss.org/post/jmx_connections_to_jboss_as

# jmx.username=user

# jmx.password=password

Database Maintenance

Please note that the Flex Master uses SQL triggers in the database. You will need to ensure that the TRIGGER privilege is granted to the database user, as it is not granted by default as part of GRANT ALL.

GRANT TRIGGER ON miomaster.* to 'miomaster'@'hostname' IDENTIFIED BY 'password';

MySQL has had a bug since the separate TRIGGER privilege was introduced in version 5.1 (http://bugs.mysql.com/bug.php?id=39489), which results in an error message when trying to create a trigger on a MySQL instance with binary logging enabled (for example, when replicating to a slave). This can be worked around by adding the following line to your MySQL configuration file:

log_bin_trust_function_creators = 1
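If restarting MySQL is inconvenient, the same setting can also be applied at runtime by a user with the SUPER privilege (it will still need to be present in the configuration file to survive a restart):

```sql
SET GLOBAL log_bin_trust_function_creators = 1;
```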

Updating Metadata Schema

As stated earlier, database changes are tracked with Liquibase. If a database change has not been applied, an upgrade will fail until it is applied, and output like the following will be seen:

INFO:root:Starting new environment...

INFO:root:Enabling [meta_01] /nem/nem.conf (not running)...

INFO:root:Enabling [sql_01] /nem/db/nem.conf (not running)...

INFO:root:Enabling [sql_01] /nem/db/nem.conf (not running)...

Could not start

Command:

/usr/bin/sudo -u nem /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true -client -cp /nem/db:/usr/share/nativsystems-environment-manager-agent/liquibase/liquibase.jar

...

... -- LiquiBase version: 2.0-rc1

-- *********************************************************************

-- Lock Database

-- Changeset

/nem/db/liquibase/036_remove_extract_action_jsp.sql.xml::generated::antscript::(Checksum: 2:c37bc10575fc36d1b09257c3b6de4dc2)

UPDATE ACTION_TYPE SET JSP_NAME_ = 'file_action' where JSP_NAME_ = 'extract_action';

INSERT INTO `DATABASECHANGELOG` (`AUTHOR`, `COMMENTS`, `DATEEXECUTED`,

`DESCRIPTION`, `EXECTYPE`, `FILENAME`, `ID`, `LIQUIBASE`, `MD5SUM`, `ORDEREXECUTED`) VALUES ('ant script', '', NOW(), 'Custom SQL', 'EXECUTED','/nem/db/liquibase/036_remove_extract_action_jsp.sql.xml', 'generated', '2.0-rc1', '2:c37bc10575fc36d1b09257c3b6de4dc2', 37);

-- Release Database Lock

Stderr:

INFO 12/10/13 8:47 AM:liquibase: Successfully acquired change log lock

INFO 12/10/13 8:47 AM:liquibase: Reading from `DATABASECHANGELOG`

...

... INFO 12/10/13 8:47 AM:liquibase: ChangeSet /nem/db/liquibase/036_remove_extract_action_jsp.sql.xml::generated::ant script ran successfully in 0ms

INFO 12/10/13 8:47 AM:liquibase: Successfully released change log lock

~$

Apply the patch by doing the following:

~$ sudo nem -f /nem/db/nem.conf run

NEM then presents a menu showing three options. Select ‘liquibase (non-interactive)’ to apply all default changes, or ‘liquibase’ to apply the changes one by one. The third option starts an interactive command-line database session, which can be useful for debugging database configuration or connectivity issues. A similar process is used for creating folders.

Flex Monitoring

This section suggests items that can be monitored, to verify the status of a Flex system.

Flex Manager Nodes

The Flex master nodes make use of Oracle Java, JBoss and the Apache web server, so standard monitoring tools can be used to test the items mentioned below:

HTTP checks

(mio.local is used as the example host name):

Flex component     HTTP GET URL              Expected response code

console (Apache)   https://mio.local/        200

console (JBoss)    http://mio.local:30044/   200

As Flex checks the URL of the account that you’re trying to access, you will need to fake the Host header in order to perform these checks on the Flex master nodes. Here are a few examples:

$ curl -s -k -D- -H"Host: master.stage.mioeverywhere.tv" https://127.0.0.1|head -1

HTTP/1.1 200 OK

$ curl -s -D- -H"Host: master.stage.mioeverywhere.tv" 127.0.0.1:30044|head -1

HTTP/1.1 200 OK

The following will NOT work:

$ curl -s -k -D- https://127.0.0.1|head -1

HTTP/1.1 404 Not Found

• No matching Host header found; 404 returned.

$ curl -s -k -D- https://localhost|head -1

HTTP/1.1 500 Internal Server Error

• Localhost used as the Host header; 500 returned.

$ curl -s -k -D- -H"Host: stage.mioeverywhere.tv" https://127.0.0.1|head -1

HTTP/1.1 404 Not Found

• Correct domain used, but no account specified; 404 returned.

NEM Modules

This section gives an overview of the functionality available within NEM. A detailed understanding of the NEM modules is not required for day-to-day operation of a Flex system, but may be useful in debugging issues found when starting, stopping or deploying Flex.

A number of modules are implemented, intended to cover most of the routine tasks that need to be applied to environments. Modules are defined in ‘nem.conf’ files. For the most part Nativ is responsible for maintaining these, but clients and agencies are welcome to make minor changes. This section is not intended as a definitive guide, but rather an introduction to what is possible. For routine maintenance, clients and agencies do not need knowledge of the NEM modules, only of where to drop their new assets and configuration into Subversion.

The type of module used is set within a nem.conf file, for example:

~$ cat /nem/apache/nem.conf

[nem]

service = apache_01

depends = /nem/logs/apache/nem.conf

monitors =

~$

This file makes use of the Apache modules described in the following sections and sets up a dependency on a log folder under /nem/logs/apache/nem.conf.

Meta

Services in NEM are started as a dependency tree, and the meta-service is intended to sit at the root of that tree. This will already be defined in the configuration supplied to you by Nativ. Dependencies may be directly added here or to subcomponents.

Apache

Apache configuration is done with the Apache service. This module simply includes the given configuration on top of a fairly basic Apache install. For routine maintenance of Apache, please edit the relevant vhost.conf file. If you wish to have a new virtual host defined, please open a ServiceNow request asking Nativ to do this for you.

Flyway

The Flyway service is a wrapper around Flyway. Please see https://flywaydb.org/ for more information. This module allows us to assert that database updates are applied consistently across all environments.
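Flyway discovers plain SQL migrations by file name, applying them in version order and recording each one in its schema history table. For example (the file name and statement below are illustrative):

```sql
-- Shipped as a file named e.g. V036__remove_extract_action_jsp.sql;
-- the V<version>__<description>.sql naming is what Flyway keys on.
UPDATE ACTION_TYPE SET JSP_NAME_ = 'file_action' WHERE JSP_NAME_ = 'extract_action';
```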

SQL

Please note that the Liquibase database update service has been deprecated in the 5.1 series in favour of Flyway, however it still remains the default for the 5.0 series and is required to upgrade from 5.0.15 to 5.1.0.

The SQL service is a wrapper around Liquibase. Please see http://www.liquibase.org/ for more information. This module allows us to assert that database updates are applied consistently across all environments. Routine maintenance of an SQL module implementation will typically involve adding new ‘.sql’ files to Subversion.

It is possible to have a database dependency with no SQL module dependency defined if another process is used to track database changes. If changes are defined with the SQL module but not applied then NEM will refuse to upgrade.

Altering existing source files, rather than creating new change files, is not supported and will result in unpredictable behaviour.
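Each ‘.sql’ file is typically wrapped in a small Liquibase changelog so that it is tracked in the DATABASECHANGELOG table. A sketch in the standard Liquibase XML format (the path and id below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <!-- Runs the raw SQL file once; Liquibase records its checksum so the
       change is never re-applied (and must never be edited afterwards). -->
  <changeSet id="generated" author="ant script">
    <sqlFile path="037_my_new_change.sql" relativeToChangelogFile="true"/>
  </changeSet>
</databaseChangeLog>
```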

Packages

The packages module can be used to request the installation of additional Ubuntu/Red Hat packages that are not included in the base install. Updates to this should be made by Nativ and requested through the ticketing system.

Processes

The process module starts and stops daemons such as Tomcat or JBoss. It is a wrapper around start-stop-daemon.

Nativ have decided to do away with shell scripts for this task, and this replacement has proven more consistent and reliable. In the ‘nem.conf’ file, parameters such as the following can be defined:

• User to run as

• Environment variables

• Process arguments

• Java properties

It is intended that these should be intuitive enough for developers, but if anything is not clear, please suggest the required changes to the ‘init scripts’ in a ticket and submit it to our helpdesk. Routine maintenance of a process module typically involves updating a WAR file.
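The exact keys available in a process module's ‘nem.conf’ are not documented here; purely as an illustration of the kind of parameters listed above (the section and key names below are assumptions, not the definitive schema), such a file might look like:

```
# Hypothetical process-module configuration - key names are illustrative only
[nem]
service = jboss_01
depends = /nem/logs/jboss/nem.conf

[process]
user = mio
environment = JAVA_OPTS=-Xmx2g
arguments = --server-config=standalone.xml
```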

Folder

The folder module allows developers to define whether data should be stored persistently on local disk or on a NAS.

Please contact the helpdesk if you require this.

Logs

A special type of folder can be created for logs. Logs are then stored under "/var/nem-logs/<client>/<domain>/<service>/<hostname>/now". With the example configuration below, the logs would be found in "/var/nem-logs/nativ/master.example.com/jboss/<hostname>/now", where <hostname> is the actual hostname of the machine running the Flex instance.

$ cat /nem/logs/jboss/nem.conf

[nem]

service = logfolder_01

depends =

monitors =

[logfolder]

client = nativ

domain = master.example.com

service = jboss

[folder]

owner = mio

Users

UNIX users can be created in the range 25000 to 29998. Ideally, all daemons should run as different users for security reasons. Nativ will typically add new users here when new daemons are created.

Appendix
