Monday, May 7, 2018

Doing major version upgrades of OTRS docker containers

One missing feature of my OTRS docker containers was automating major version upgrades without any manual intervention: upgrading should be as easy as launching the containers. Minor version upgrades are already easy: just pull the new image and restart your containers.

For example, if you are running OTRS 6.0.1 and want to upgrade to the latest version (6.0.7 ATM):
sudo docker-compose -f docker-compose-prod.yml pull
sudo docker-compose -f docker-compose-prod.yml stop
sudo docker-compose -f docker-compose-prod.yml rm -f -v
sudo docker-compose -f docker-compose-prod.yml up   
That's it. Minor version upgrades only involve security and bug fixes, so no database schema or module upgrades are needed; the new image already contains the latest OTRS software packages with the latest fixes.

Major version upgrades need much more work: in addition to updating the software components, other parts of the installation need to be updated too:
  • Database schema
  • Cronjobs
  • Configuration rebuild
  • Cache delete
So I have added a new major version upgrade feature controlled by the environment variable OTRS_UPGRADE=yes. When this variable is set in the docker-compose file, the major version upgrade process will be run at container startup. Also edit your docker-compose file and make sure the OTRS image uses the latest tag (juanluisbaptiste/otrs:latest) on both the otrs and the data containers.
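For reference, the relevant bits of the compose file would look roughly like this (a sketch: service names and the backup path follow my compose files, adjust them to your setup):

```yaml
# Sketch of the relevant parts of docker-compose-prod.yml.
otrs:
  image: juanluisbaptiste/otrs:latest
  environment:
    OTRS_UPGRADE: "yes"
  volumes:
  - "./backups:/backups"   # so the pre-upgrade backup is reachable from the host
data:
  image: juanluisbaptiste/otrs:latest
  command: /bin/true
```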

Then like with the minor version upgrade, pull the new OTRS image and restart your containers:
sudo docker-compose -f docker-compose-prod.yml pull
sudo docker-compose -f docker-compose-prod.yml stop
sudo docker-compose -f docker-compose-prod.yml rm -f -v
sudo docker-compose -f docker-compose-prod.yml up   
The upgrade procedure will pause the OTRS container boot process for 10 seconds to give the user the chance to cancel the upgrade. 

The first thing the upgrade process does is back up the current installation before starting. The backup will be stored in /backups (don't forget to map that directory to one on your host so you can access it).

Then it will follow the official upgrade instructions: 
  • Run database upgrade scripts
  • Upgrade cronjobs
  • Upgrade modules 
  • Fix file permissions 
  • Rebuild OTRS configuration and delete cache.
The software components themselves were updated when pulling the new image. There is no need to stop/start services either, as the upgrade runs at container bootup, before any services are started. Remember to remove the OTRS_UPGRADE variable from the docker-compose file afterwards.

By the way, you could also use this container to upgrade from non-docker installations.

Note:
This feature was added to both the OTRS 5 and 6 images, so upgrades from OTRS 4 can be performed too.


WARNING: this feature is experimental and is still in heavy testing, use at your own risk !!

Wednesday, May 2, 2018

Ansible installation role for BigBlueButton



I was looking for an ansible role to install BigBlueButton with SSL support, but it seems there aren't many roles out there for this. Searching the Internet, the most complete one I found was this one, but it was outdated (last commit from two years ago) and broken, and the PR to fix it had been open for almost a year with no answer from the developer, so I figured the project was abandoned and forked it.

In addition to fixing the broken stuff, this fork has the following new features:
  • Installs latest BigBlueButton stable version, currently 1.1, but it will be updated to 2.0 when it comes out of beta.
  • Installation behind a firewall (NAT setup support).
  • Automatic SSL configuration using LetsEncrypt certificates.
  • Optionally installs the bbb-demo and bbb-check packages.

Let's see an example playbook that does a BigBlueButton install with SSL support:
---
- hosts: bbb
  remote_user: ansible
  become: True
  become_user: root
  gather_facts: True
  roles:
    - role: ansible-bigbluebutton
      bbb_server_name: bbb.example.com
      bbb_configure_ssl: True
      bbb_ssl_email: foo@bar.com
Replace bbb_server_name with your server's hostname and bbb_ssl_email with your email address for the LetsEncrypt certificate generation, and that's it.

The role will install BigBlueButton according to the official installation instructions, generate SSL certificates using LetsEncrypt, and configure BigBlueButton to use those certificates. Remember, your hostname has to resolve to a public IP address; otherwise LetsEncrypt certificate generation will not work.

If your server is behind a firewall, the variable bbb_configure_nat: True needs to be added to the playbook to enable NAT configuration:
---
- hosts: bbb
  remote_user: ansible
  become: True
  become_user: root
  gather_facts: True
  roles:
    - role: ansible-bigbluebutton
      bbb_server_name: bbb.example.com
      bbb_configure_ssl: True
      bbb_ssl_email: foo@bar.com
      bbb_configure_nat: True
This will reconfigure BigBlueButton components to use the local IP address instead of the one the server publicly resolves to. 

If you want to install the demo package or the health check package, you can use bbb_install_demo: True and bbb_install_check: True respectively.

There is still some missing stuff I want to do before I consider this role complete:
  • Push it to Ansible Galaxy.
  • Install the new greenlight interface to create meetings.
  • Install the new HTML5 client for testing.

As an alternative to greenlight there's another project called Mconf-web, a web portal from where you can create public and private rooms, each with its own videoconference room on the BigBlueButton server. Check my mconf-web docker container for an easy way to use it.

Tuesday, February 13, 2018

OTRS 6 Help Desk System on docker

At the end of last year OTRS 6 was released. It has some cool new features like the revamped admin interface, the new SysConfig, and the message transmission status. You should check out the complete list of new features if you want to know more.

I have been working on and testing the OTRS 6 update to my unofficial OTRS docker images for the past month, and everything seems to be working fine with the new OTRS version.

The first thing done was to update the container base image to CentOS 7. I had avoided this update in the past because I'd had some issues with the apache server on CentOS 7: I remember there was a bug that prevented the httpd process from starting, and some of the reconfiguration the container startup script does to apache needed to be rewritten, so I opted to wait until OTRS 6 was released (I expected CentOS 7 to be its minimum supported version).

Now external databases can be used by setting the following environment variables:
  • OTRS_DB_NAME
  • OTRS_DB_USER
  • OTRS_DB_HOST 
  • OTRS_DB_PORT
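In a docker-compose file this would look something like the following (a sketch; host, names and port are placeholder values):

```yaml
otrs:
  image: juanluisbaptiste/otrs:latest
  environment:
    OTRS_DB_NAME: otrs
    OTRS_DB_USER: otrs_user
    OTRS_DB_HOST: db.example.com
    OTRS_DB_PORT: "3306"
```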
I also worked on the logging output and changed the colors to match the ones from the OTRS logo. Speaking of logos, I added a new ASCII logo to the container bootup process to make it look more professional :-)


The docker-compose file format was updated to version 3 (about time), but I left the old v1 file in the repo in case it's needed. There's also a new CHANGELOG file tracking the most significant changes back to the first OTRS 5 image (check it out for a more detailed feature/bugfix list).

Some configuration options can't be set through environment variables anymore. For example, OTRS_POSTMASTER_FETCH_TIME was removed because since OTRS 6 the postmaster fetch time can no longer be set from the command line, as it could up to OTRS 5. Other variables were removed because, once set in Config.pm via environment variables, they could not be changed later using SysConfig; so the ones that should stay editable were removed, like OTRS_ADMIN_EMAIL, OTRS_ORGANIZATION and OTRS_SYSTEM_ID (these last ones were actually removed at some point during the 5.0.x images, check the CHANGELOG for the exact version).

Another feature added during 5.0.x development was automatic module reinstallation at container bootup after a version upgrade. So, if you had additional modules installed and you upgrade your OTRS image to the latest version, they will be re-downloaded and reinstalled when the upgraded container starts, so your installation doesn't break.

Lastly, there's another new environment variable to enable container debugging: OTRS_DEBUG. This debugging is not very complex; it just enables bash debugging and installs some programs like telnet and dig to aid in troubleshooting.

I have been testing it on one of our installations and it's working very well; you should give it a try too!

Thursday, January 25, 2018

network-tests: A tool to measure and report a network's latency and bandwidth performance

Because of new IT regulations in my country, for some time now the company I work for has had to periodically send a report to the IT ministry with data measuring the bandwidth performance of the network used for our customers' services (I work at a company that is kind of a small ISP for other ISPs).

We had some requirements, like:
  • Being able to run multiple upload/download/ping tests.
  • Calculating some stats over the results, like minimum, maximum and average speeds, and standard deviation.
  • Sending a CSV file with the results (both totals and individual tests).

Looking around at the time (mid 2017) there wasn't any tool that could do all of this in one go, and doing it manually was a daunting task. Sure, there are lots of tools to measure bandwidth and network latency (like speedtest-cli and the good old ping command) that you could glue together with bash and some sed/grep/awk jujitsu to get all the values and the report as asked, but it would be really painful to develop and maintain thereafter.

So I decided to write it in python.

For the sake of simplicity and easier maintenance I decided to split network-tests' functionality into the obvious three tools: ping-tester, download-tester and upload-tester.

Each of them shares the following features:
  • Run multiple tests in one go.
  • Calculate average speeds for multiple tests.
  • Bandwidth measurement in both Mbps and MB/s.
  • Overall statistics with metrics like minimum, maximum and average speeds, and standard deviation.
  • Save the results and stats to a file with CSV format.
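About the dual MB/s and Mbps reporting: the two units differ only by a factor of 8 (bytes vs bits), so you can convert one into the other yourself, for example with a bit of awk (the 11.65 MB/s value is one of the sample results shown further below):

```shell
# Convert a throughput value from MB/s to Mbps (1 byte = 8 bits).
awk 'BEGIN { mbs = 11.65; printf "%.2f MB/s = %.2f Mbps\n", mbs, mbs * 8 }'
# Prints: 11.65 MB/s = 93.20 Mbps
```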

Installation

The program uses setuptools, so after cloning the git repo:
git clone https://github.com/juanluisbaptiste/network-tests

And inside the project directory run:
sudo python setup.py install

Installation is still a bit flaky, so there could be some errors. Now let's see some examples of how to use these tools:


Network Latency

First let's see which parameters the program accepts:

usage: ping-tester [-h] [-c COUNT] -f PINGFILE [-o OUTFILE] [-I INTERFACE]
                    [-s]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Ping count. Default: 5
  -f PINGFILE, --pingfile PINGFILE
                        List of hosts to ping
  -o OUTFILE, --outfile OUTFILE
                        Destination file for ping results
  -I INTERFACE          Network interface to use for pinging
  -s, --silent          Don't print verbose output from the test
The first thing to do is to put the list of hosts that are going to be pinged during the test in a file. For example, you could use this small test list:
www.google.com
www.yahoo.com
www.cisco.com
www.facebook.com
Now, for example, if you need to do 5 latency measurements against that list and save the results and stats to a file, use the following command:

python ping-tester -c 5 -f hosts.txt -o results.csv

That would yield the following results:
juancho@moon:~$ ping-tester -c 5 -f $PWD/hosts.txt -o $PWD/pingtest.csv
Network Interface: Default
Ping Count: 5
Hosts: 4

Test #1:
Pinging Host www.google.com
Min: 0.488 ms Max: 0.497 ms Average: 0.491 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #2:
Pinging Host www.yahoo.com
Min: 105.01 ms Max: 114.116 ms Average: 106.967 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #3:
Pinging Host www.cisco.com
Min: 63.029 ms Max: 63.062 ms Average: 63.044 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #4:
Pinging Host www.facebook.com
Min: 63.565 ms Max: 63.582 ms Average: 63.572 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%


Time elapsed: 17.0 seconds

Average min: 58.02 ms
Average max: 60.31 ms
Average ping: 58.52 ms
Average packet loss count: 0.0
Average packet loss rate: 0.0 %
Standard deviation: 9.49 ms

And the CSV file has the following content:
Count,Time Elapsed (s),Min (ms),Max (ms),Average (ms),Packet Loss Count,Packet Loss Rate (%),Standard Deviation (ms)
5,17.0,58.02,60.31,58.52,0.0,0.0,9.49

Count,Min (ms),Max (ms),Average (ms),Std Deviation (ms),Lost,% Lost,Host
5,0.488,0.497,0.491,0.0,0,0.0,www.google.com
5,105.01,114.116,106.967,0.0,0,0.0,www.yahoo.com
5,63.029,63.062,63.044,0.0,0,0.0,www.cisco.com
5,63.565,63.582,63.572,0.0,0,0.0,www.facebook.com
There you have some global stats for all the tests, and below them the individual results, each with its own stats.
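Since the output is plain CSV, post-processing it further is easy. For example, recomputing the mean of the per-host averages (column 4 of the individual results) with awk gives back the global average; the rows are inlined here for the example, but normally you would pipe in that section of pingtest.csv:

```shell
# Recompute the mean ping from the per-host rows of the CSV above.
printf '%s\n' \
  '5,0.488,0.497,0.491,0.0,0,0.0,www.google.com' \
  '5,105.01,114.116,106.967,0.0,0,0.0,www.yahoo.com' \
  '5,63.029,63.062,63.044,0.0,0,0.0,www.cisco.com' \
  '5,63.565,63.582,63.572,0.0,0,0.0,www.facebook.com' |
awk -F, '{ sum += $4 } END { printf "Average ping: %.2f ms\n", sum / NR }'
# Prints: Average ping: 58.52 ms
```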

Bandwidth tests

These tests are split into two programs: download-tester and upload-tester.


Download-tester

These are the command parameters:

usage: download-tester [-h] [-c COUNT]
                         [-l {usw,use,tokyo,washington,sanjose,london}]
                         [-o OUTFILE] [-s] [-u URL]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of downloads to do. Default: 1
  -l {usw,use,tokyo,washington,sanjose,london}, --location {usw,use,tokyo,washington,sanjose,london}
                        Server location for the test. Default: use
  -o OUTFILE, --outfile OUTFILE
                        Destination file for test results in CSV format
  -s, --silent          Don't print verbose output from the download process
  -u URL, --url URL     Alternate download URL (it must include path and

To do 5 downloads and save results to a CSV file, use this command:
download-tester -c 5 -o $PWD/download-test.csv

This would be a sample of the program's output:
juancho@moon:~$ download-tester -c 5 -o $PWD/download-test.csv                                
download_speed.pyc 0.1.1                       

Location: use                                  
URL: http://speedtest.newark.linode.com/100MB-newark.bin                                      
Total Tests: 5                                 

Test #1:                                       
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB                 

Average download speed: 11.65 MB/s - 93.21 Mbps                                               

Test #2:                                       
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps

Test #3: 
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps

Test #4: 
[==================================================] 13.11 MB/s - 104.86 Mbps
Downloaded file size: 100.0 MB

Average download speed: 13.11 MB/s - 104.86 Mbps

Test #5: 
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps


Test Results:
---- -------

Time Elapsed: 9.0 seconds

Overall Average Download Speed: 11.94MB/s - 95.54Mbps
Maximum download speed: 13.11MB/s - 104.86Mbps
Minimum download speed: 11.65MB/s - 93.21Mbps
Median download speed: 11.65MB/s - 93.21Mbps
Standard Deviation: 0.58MB/s - 4.66Mbps
download-tester includes some HTTP download URLs that can be selected with the -l parameter, although I think this feature needs some rethinking, at least in the way the locations are named. You can also use your own HTTP download URL with the -u parameter. Currently only HTTP downloads are supported.

Like with the ping-tester program, results are saved to a CSV file:
Date,URL,Size (MB),Min (MB/s),Min (Mbps),Max (MB/s),Max (Mbps),Average (MB/s),Average (Mbps),Median (MB/sec),Median (Mbps)
Mon Jul 10 00:14:59 2017,http://speedtest.tokyo.linode.com/100MB-tokyo.bin,100.0,1.29,1.29,1.33,10.62,1.31,1.31,1.31,10.49

Sample#,File Size,Average Speed (MB/sec),Average Throughput (Mbps)
1,100.0,1.31,10.49
2,100.0,1.31,10.49
3,100.0,1.29,10.36
4,100.0,1.31,10.49
5,100.0,1.33,10.62

Upload-tester

These are the command parameters:
usage: upload-tester [-h] [-c COUNT] -f UPLOADFILE [-o OUTFILE] [-s] -l HOST
                       -u USERNAME -p PASSWORD [-P PASSIVE]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of uploads to do. Default: 1
  -f UPLOADFILE, --uploadfile UPLOADFILE
                        Test file to upload
  -o OUTFILE, --outfile OUTFILE
                        Destination file for test results in CSV format
  -s, --silent          Don't print verbose output from the upload process
  -l HOST, --host HOST  FTP server for upload test
  -u USERNAME, --username USERNAME
                        FTP user name for upload test
  -p PASSWORD, --password PASSWORD
                        FTP password for upload test
  -P PASSIVE, --passive PASSIVE
                        Sets FTP passive mode. Default: False
The upload tests are done over FTP, so you need an FTP server and an account available for the test. For example, to do 5 upload tests against ftp.example.com you can use the following command:
upload-tester -c 5 -f $PWD/test10Mb.db -l ftp.example.com -u bob -p mypassword
Yes, I know it's not very secure to pass the password on the command line, but this is just a testing tool and you are supposed to use a testing account too ;)

That command would show the following output:
juancho@moon:~$ upload-tester -c 5 -f $PWD/test10Mb.db -l ftp.example.com -u bob -p xxxxx
upload_speed.pyc v0.1.1

FTP Host: ftp.example.com
Username: bob
Password: xxxxx
File: /home/juancho/test10Mb.db
Size: 10.0MB

Total Tests: 5

Test #1:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 10.49MB/s - 83.89Mbps

Test #2:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #3:
[==================================================] 5.24 MB/s - 41.94 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #4:
[==================================================] 5.24 MB/s - 41.94 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #5:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 10.49MB/s - 83.89Mbps


Test Results:
---- -------

Time Elapsed: 1.0 seconds

Overall Average download speed: 7.34MB/s - 58.72Mbps
Maximum download speed: 10.49MB/s - 83.89Mbps
Minimum download speed: 5.24MB/s - 41.94Mbps
Median download speed: 5.24MB/s - 41.94Mbps
Standard Deviation: 2.57MB/s - 20.55Mbps
Also with the CSV output:
Date,Server,File,Size,Min (MB/s),Min (Mbps),Max (MB/s),Max (Mbps),Average (MB/s),Average (Mbps),Median (MB/sec),Median (Mbps)
Mon Jul 10 00:11:00 2017,ftp.server.yyy,/home/juancho/test10Mb.db,4.16,0.15,0.15,0.23,1.84,0.18,1.42,0.17,1.34

Sample#,File Size,Average Speed (MB/sec),Average Throughput (Mbps)
1,4.16,0.17,1.34
2,4.16,0.15,1.2
3,4.16,0.23,1.84
4,4.16,0.17,1.34
5,4.16,0.17,1.4

If you are having trouble with the uploads, you can try FTP passive mode with the -P parameter.


ToDo

There is still stuff that I would like to add soon, like:
  • Automatic unit conversion of speeds: for example, if a value is over 1000 kbps it should be shown in Mbps, otherwise in kbps, and so on.
  • FTP download tests; currently HTTP(S) is the only supported download method.
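The first item is basically a threshold check. A sketch of the idea in awk (the real tools are python; this is just to illustrate, and format_speed is a made-up helper name):

```shell
# Pretty-print a speed given in kbps: values of 1000 kbps or more
# are shown as Mbps, smaller ones stay in kbps.
format_speed() {
  awk -v kbps="$1" 'BEGIN {
    if (kbps >= 1000) printf "%.2f Mbps\n", kbps / 1000
    else              printf "%.0f kbps\n", kbps
  }'
}
format_speed 93210   # prints: 93.21 Mbps
format_speed 512     # prints: 512 kbps
```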

Contributions welcomed !

Thursday, July 20, 2017

New mageia 6 docker images available

Mageia 6 was released last week, so this week I worked on updating the official docker images too. This new release includes a new package manager in addition to urpmi: dnf, from the Fedora Project. It makes it possible to offer third-party free and open source software through Fedora COPR and the openSUSE Build Service targeting Mageia 6 and up, so anyone can now easily offer free and open source software built and tailored for Mageia, or broadly compatible with Mageia and other popular Linux distributions.

You can learn more about this new mageia release in the release notes; the docker images can be found on docker hub. To create a container from the new mageia 6 image you can do something like this:

  docker run -ti --name mageia mageia:latest bash

Check it out and please send any bug reports to the project's github issues page.

Enjoy !

Tuesday, June 14, 2016

Running your own help desk platform with docker and OTRS


At work we use OTRS for our help desk platform. We chose it because it's open source and very flexible, and we could install it on our premises to have more control. So I went ahead and made a set of docker containers with which we have been running multiple OTRS 4.0.x installations for small companies for more than a year now without issues. Now I've had some time to upgrade the containers to OTRS 5.

The first thing to know is that this is an unofficial OTRS docker container.

For setting up an OTRS system you need several services:
  • A web server with the OTRS installation.
  • A database server.
  • An SMTP server.
  • A proxy server (optional).
This container setup is designed that way. It uses:
  • An otrs container running the web server and the OTRS installation.
  • A mariadb container for the database.
  • My postfix container as the SMTP server.
  • A data container holding the database and backup volumes.

The docker-compose configuration files include all of those services, and upon container start a fresh OTRS install will be launched, ready to be configured by an OTRS administrator.

There are some environment variables you can use to control the container startup and initial state. For example, the container can be started in three ways, controlled by the OTRS_INSTALL environment variable:
  • OTRS_INSTALL=no: the container will load a default vanilla OTRS installation, ready to be configured as you need. This is the default.
  • OTRS_INSTALL=yes will launch the OTRS install web interface at http://localhost/otrs/install.pl.
  • OTRS_INSTALL=restore will restore the backup specified by the OTRS_BACKUP_DATE environment variable. OTRS_BACKUP_DATE is the name of the backup to restore, in the same date_time format the OTRS backup script uses, for example OTRS_BACKUP_DATE="2015-05-26_00-32". Backups must be inside the /var/otrs/backups directory (you should host-mount it).
You need to mount that backups volume from somewhere: either from another container (using --volumes-from) or from a host directory containing the backup files.
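For example, with a host-mounted backups directory, the restore configuration would look roughly like this in the compose file (a sketch; the backup name is the example one from above, and the host path is a placeholder):

```yaml
otrs:
  image: juanluisbaptiste/otrs:latest
  environment:
    OTRS_INSTALL: restore
    OTRS_BACKUP_DATE: "2015-05-26_00-32"
  volumes:
  - "./backup:/var/otrs/backups"   # host directory with the backup files
```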

For testing the containers you can bring them up with docker-compose:

    sudo docker-compose build
    sudo docker-compose up

This command will build all containers and pull missing images, bring up all needed containers, link them and mount volumes according to the docker-compose.yml configuration file:

version: '2'
services:
  otrs:
    build:
      context: otrs
      args:
        OTRS_VERSION: 5.0.10-01
    ports:
    - "80:80"
  # If running behind a proxy container, expose the ports instead
  # and link the proxy container to this one.
  #  expose:
  #  - "80"
    links:
    - mariadb:mariadb
    - postfix:postfix
    volumes_from:
    - data

  mariadb:
    build:
      context: mariadb
    expose:
    - "3306"
    volumes_from:
    - data
    environment:
        MYSQL_ROOT_PASSWORD: changeme
  postfix:
     image: juanluisbaptiste/postfix:latest
     expose:
     - "25"
     env_file: credentials-smtp.env
  data:
    image: centos/mariadb:latest
    volumes:
    - /var/lib/mysql
    - "./otrs/backup:/var/otrs/backups"

    command: /bin/true

The default database password is changeme. To change it, edit the docker-compose.yml file and change the MYSQL_ROOT_PASSWORD environment variable in the mariadb service definition before running docker-compose.

To start the containers in production mode, use the docker-compose-prod.yml file, which points to the latest released images to be pulled instead of Dockerfiles to be built:

sudo docker-compose -f docker-compose-prod.yml pull


sudo docker-compose -f docker-compose-prod.yml -p company_otrs up -d  

After the containers finish starting up, the administration and customer interfaces can be accessed at the following addresses:

Administration Interface

    http://$OTRS_HOSTNAME/otrs/index.pl

Customer Interface

    http://$OTRS_HOSTNAME/otrs/customer.pl

There are also other environment variables that can be set to customize the default install, like the root and database passwords, language, theme, ticket counter start, ticket number generator, etc.; check the github page for more info. The OTRS 4 sources are still available in the otrs-4_0_x branch.


Monday, May 16, 2016

How to mount VirtualBox shared folders at boot

I'm setting up a network install server on a VirtualBox VM, and I didn't want to copy the contents of the iso images of the distros that will be available through PXE into the VM, to avoid ending up with a huge disk image. It would be even worse if I also included other repos like epel or centosplus, or updates. So I created a VirtualBox shared folder pointing to the host directory containing the iso images. Once the shared folder is created, you can mount it like this:

sudo mount -t vboxsf -o uid=$UID,gid=$(id -g) share ~/host

To avoid having to manually mount the share each time the VM boots, the mount needs to be added to /etc/fstab, but there's a catch: the vboxsf kernel module, needed to mount the shared folder, isn't loaded yet when filesystems are mounted during the boot process. To fix this we need to make sure the vboxsf module is loaded before filesystems are mounted at boot.

On CentOS 7, create a file ending in .modules in the /etc/sysconfig/modules directory, with the following content, to load the VirtualBox kernel module before filesystems are mounted:

#!/bin/sh

lsmod | grep vboxsf >/dev/null 2>&1
if [ $? -gt 0 ] ; then
  exec /sbin/modprobe vboxsf >/dev/null 2>&1
fi

On Ubuntu/Debian, just add the module name to /etc/modules. Now we need to add the shared mount to /etc/fstab; in my case the shared folder is called isos, so I added the following line:

isos    /isos   vboxsf  defaults        0 0

After adding this line you can reboot the server/vm and see if it mounted correctly at boot.

If you also want to mount the iso images at boot, add a line like this one to /etc/fstab for each iso to mount:

/isos/CentOS/CentOS-7-x86_64-DVD-1511.iso /distros/centos7 iso9660 loop 0 0

Remember to adjust the loopback device limit if you plan to mount more than 8 images (the default number of loop devices on older kernels).
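If your kernel's loop driver is built as a module, the limit can be raised with a module option (a sketch; on recent kernels loop devices are also allocated on demand, so this may not be needed):

```conf
# /etc/modprobe.d/loop.conf
# Hypothetical example: allow up to 16 loop devices.
options loop max_loop=16
```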