Tuesday, February 13, 2018

OTRS 6 Help Desk System on docker

At the end of last year, OTRS 6 was released. It has some cool new features like the revamped admin interface, the new SysConfig, or the message transmission status. You should check out the complete new features list if you want to know more.

I have been working on and testing the OTRS 6 update to my unofficial OTRS docker images for the past month, and everything seems to be working fine with the new OTRS version.

The first thing I did was to update the container's base image to CentOS 7. I had avoided this update in the past because I had run into some issues with the Apache server on CentOS 7: I remember there was a bug that prevented the httpd process from starting, and some of the reconfiguration that the container startup script does to the Apache server needed to be rewritten, so I opted to wait until version 6 was released (which I expected would make CentOS 7 the minimum supported version).

Now external databases can be used by setting the following environment variables:
  • OTRS_DB_NAME
  • OTRS_DB_USER
  • OTRS_DB_HOST 
  • OTRS_DB_PORT
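As an example, the database variables above could be set in a docker-compose service definition like this (the image name and values here are illustrative placeholders; adjust them to your setup):

```yaml
services:
  otrs:
    image: juanluisbaptiste/otrs:latest  # illustrative image name
    environment:
      OTRS_DB_NAME: otrs
      OTRS_DB_USER: otrs
      OTRS_DB_HOST: mariadb.example.com
      OTRS_DB_PORT: "3306"
```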
I also worked on the logging output and changed the colors to match the ones from the OTRS logo. Speaking of logos, I added a new ASCII logo to the container bootup process to make it look more professional :-)


The docker-compose file format was updated to version 3 (about time), but I left the old v1 file in the repo in case it's needed. Also, there's a new CHANGELOG file tracking the most significant changes back to the first OTRS 5 image (check it out for a more detailed feature/bugfix list).

Starting with this release, some configuration options can no longer be set using environment variables. For example, OTRS_POSTMASTER_FETCH_TIME was removed because the postmaster fetch time can no longer be set from the command line, as was possible up to OTRS 5. Other variables were removed because, once set in Config.pm via environment variables, they could not be changed later using SysConfig; so variables that should remain editable at a later time were dropped, like OTRS_ADMIN_EMAIL, OTRS_ORGANIZATION and OTRS_SYSTEM_ID (actually, these last ones were removed at some point during the 5.0.x images; check the CHANGELOG for the exact version).

Another feature added during 5.0.x development was automatic module reinstallation at container bootup after a version upgrade. So, if you have additional modules installed and you upgrade your OTRS image to the latest version, they will be re-downloaded and reinstalled when the upgraded container starts, so your installation doesn't break.

Lastly, there's another new environment variable to enable container debugging: OTRS_DEBUG. This debugging is not very sophisticated; it just enables bash debugging and installs some programs like telnet and dig to aid in troubleshooting.

I have been testing it on one of our installations and it's working very well, you should give it a try too!

Thursday, January 25, 2018

network-tests: A tool to measure and report a network's latency and bandwidth performance

Because of new IT regulations in my country, for some time now the company I work for has had to periodically send the IT ministry a report with performance data measuring the network bandwidth used for our customers' services (we are kind of a small ISP for other ISPs).

We had some requirements, like:
  • Being able to run multiple upload/download/ping tests.
  • Calculating some stats over the results, like minimum, maximum and average speeds, and standard deviation.
  • Sending a CSV file with the results (both totals and all tests).

Looking around at the time (mid 2017) there wasn't any tool that could do all of this in one go, and doing it manually was a daunting task. Sure, there are lots of tools to measure bandwidth usage and network latency (like speedtest-cli and the good old ping command) that you could glue together with bash and some sed/grep/awk jujitsu to get all the values and the report as asked, but it would be really painful to develop and maintain thereafter.

So I decided to write it in python.

For the sake of simplicity and easier maintenance I decided to split network-tests' functionality into the obvious three tools:
  • ping-tester
  • download-tester
  • upload-tester

Each of them shares the following features:
  • Run multiple tests in one go.
  • Calculate average speeds for multiple tests.
  • Bandwidth measurement in both Mbps and MB/s.
  • Overall statistics with metrics like minimum, maximum and average speeds, and standard deviation.
  • Save the results and stats to a file with CSV format.
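For illustration, the overall statistics are straightforward to compute with Python's statistics module; this is just a sketch of the idea, not network-tests' actual code:

```python
import statistics

def summarize(speeds_mbps):
    """Overall stats over a list of per-test speeds, in Mbps."""
    return {
        "min": min(speeds_mbps),
        "max": max(speeds_mbps),
        "average": statistics.mean(speeds_mbps),
        # population standard deviation over the test samples
        "stddev": statistics.pstdev(speeds_mbps),
    }

s = summarize([93.21, 93.21, 93.21, 104.86, 93.21])
print("Average: %.2f Mbps" % s["average"])       # Average: 95.54 Mbps
print("Std deviation: %.2f Mbps" % s["stddev"])  # Std deviation: 4.66 Mbps
```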

Installation

The program uses setuptools, so first clone the git repo:
git clone https://github.com/juanluisbaptiste/network-tests

And inside the project directory run:
sudo python setup.py install

Installation is still flaky, so there could be some errors. Now let's see some examples of how to use these tools:

 

Network Latency

First lets see which parameters the program accepts:

usage: ping-tester [-h] [-c COUNT] -f PINGFILE [-o OUTFILE] [-I INTERFACE]
                    [-s]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Ping count. Default: 5
  -f PINGFILE, --pingfile PINGFILE
                        List of hosts to ping
  -o OUTFILE, --outfile OUTFILE
                        Destination file for ping results
  -I INTERFACE          Network interface to use for pinging
  -s, --silent          Don't print verbose output from the test
The first thing to do is to put the list of hosts that are going to be pinged during the test in a file. For example, you could use this small test list:
www.google.com
www.yahoo.com
www.cisco.com
www.facebook.com
Now, for example, if you need to do 5 latency measurements against that list and save the results and stats to a file, use the following command:

ping-tester -c 5 -f hosts.txt -o results.csv

That would yield the following results:
juancho@moon:~$ ping-tester -c 5 -f $PWD/hosts.txt -o $PWD/pingtest.csv
Network Interface: Default
Ping Count: 5
Hosts: 4

Test #1:
Pinging Host www.google.com
Min: 0.488 ms Max: 0.497 ms Average: 0.491 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #2:
Pinging Host www.yahoo.com
Min: 105.01 ms Max: 114.116 ms Average: 106.967 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #3:
Pinging Host www.cisco.com
Min: 63.029 ms Max: 63.062 ms Average: 63.044 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%

Test #4:
Pinging Host www.facebook.com
Min: 63.565 ms Max: 63.582 ms Average: 63.572 ms Packet Loss Count: 0 Packet Loss Rate: 0.0%


Time elapsed: 17.0 seconds

Average min: 58.02 ms
Average max: 60.31 ms
Average ping: 58.52 ms
Average packet loss count: 0.0
Average packet loss rate: 0.0 %
Standard deviation: 9.49 ms

And the CSV file has the following content:
Count,Time Elapsed (s),Min (ms),Max (ms),Average (ms),Packet Loss Count,Packet Loss Rate (%),Standard Deviation (ms)
5,17.0,58.02,60.31,58.52,0.0,0.0,9.49

Count,Min (ms),Max (ms),Average (ms),Std Deviation (ms),Lost,% Lost,Host
5,0.488,0.497,0.491,0.0,0,0.0,www.google.com
5,105.01,114.116,106.967,0.0,0,0.0,www.yahoo.com
5,63.029,63.062,63.044,0.0,0,0.0,www.cisco.com
5,63.565,63.582,63.572,0.0,0,0.0,www.facebook.com
There you have the global stats from all the tests, and below them the individual results, each with their own stats too.
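If you need to post-process these files, the two sections can be parsed easily; here's a small sketch using Python's csv module (the sample data is taken from the output above):

```python
import csv
import io

# Sample taken from the results file above: a global-stats section,
# a blank line, then the per-host results section.
data = """Count,Time Elapsed (s),Min (ms),Max (ms),Average (ms),Packet Loss Count,Packet Loss Rate (%),Standard Deviation (ms)
5,17.0,58.02,60.31,58.52,0.0,0.0,9.49

Count,Min (ms),Max (ms),Average (ms),Std Deviation (ms),Lost,% Lost,Host
5,0.488,0.497,0.491,0.0,0,0.0,www.google.com
5,105.01,114.116,106.967,0.0,0,0.0,www.yahoo.com
"""

def read_sections(text):
    """Split the file on the blank line and parse each section with DictReader."""
    sections = []
    for chunk in text.strip().split("\n\n"):
        sections.append(list(csv.DictReader(io.StringIO(chunk))))
    return sections

totals, per_host = read_sections(data)
print(totals[0]["Average (ms)"])  # 58.52
print(per_host[1]["Host"])        # www.yahoo.com
```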

Bandwidth tests

These tests are split in two programs: download-tester and upload-tester.

 

Download-tester

These are the command parameters:

usage: download-tester [-h] [-c COUNT]
                         [-l {usw,use,tokyo,washington,sanjose,london}]
                         [-o OUTFILE] [-s] [-u URL]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of downloads to do. Default: 1
  -l {usw,use,tokyo,washington,sanjose,london}, --location {usw,use,tokyo,washington,sanjose,london}
                        Server location for the test. Default: use
  -o OUTFILE, --outfile OUTFILE
                        Destination file for test results in CSV format
  -s, --silent          Don't print verbose output from the download process
  -u URL, --url URL     Alternate download URL (it must include path and

To do 5 downloads and save results to a CSV file, use this command:
download-tester -c 5 -o $PWD/download-test.csv

This would be a sample of the program's output:
juancho@moon:~$ download-tester -c 5 -o $PWD/download-test.csv                                
download_speed.pyc 0.1.1                       

Location: use                                  
URL: http://speedtest.newark.linode.com/100MB-newark.bin                                      
Total Tests: 5                                 

Test #1:                                       
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB                 

Average download speed: 11.65 MB/s - 93.21 Mbps                                               

Test #2:                                       
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps

Test #3: 
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps

Test #4: 
[==================================================] 13.11 MB/s - 104.86 Mbps
Downloaded file size: 100.0 MB

Average download speed: 13.11 MB/s - 104.86 Mbps

Test #5: 
[==================================================] 11.65 MB/s - 93.21 Mbps
Downloaded file size: 100.0 MB

Average download speed: 11.65 MB/s - 93.21 Mbps


Test Results:
---- -------

Time Elapsed: 9.0 seconds

Overall Average Download Speed: 11.94MB/s - 95.54Mbps
Maximum download speed: 13.11MB/s - 104.86Mbps
Minimum download speed: 11.65MB/s - 93.21Mbps
Median download speed: 11.65MB/s - 93.21Mbps
Standard Deviation: 0.58MB/s - 4.66Mbps
download-tester includes some HTTP download URLs that can be selected with the -l parameter, although I think this feature needs some rethinking, at least in the way the locations are named. You can also use your own HTTP download URL with the -u parameter. Only HTTP downloads are currently supported.

Like with the ping-tester program, results are saved to a CSV file:
Date,URL,Size (MB),Min (MB/s),Min (Mbps),Max (MB/s),Max (Mbps),Average (MB/s),Average (Mbps),Median (MB/sec),Median (Mbps)
Mon Jul 10 00:14:59 2017,http://speedtest.tokyo.linode.com/100MB-tokyo.bin,100.0,1.29,1.29,1.33,10.62,1.31,1.31,1.31,10.49

Sample#,File Size,Average Speed (MB/sec),Average Throughput (Mbps)
1,100.0,1.31,10.49
2,100.0,1.31,10.49
3,100.0,1.29,10.36
4,100.0,1.31,10.49
5,100.0,1.33,10.62

Upload-tester

These are the command parameters:
usage: upload-tester [-h] [-c COUNT] -f UPLOADFILE [-o OUTFILE] [-s] -l HOST
                       -u USERNAME -p PASSWORD [-P PASSIVE]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of uploads to do. Default: 1
  -f UPLOADFILE, --uploadfile UPLOADFILE
                        Test file to upload
  -o OUTFILE, --outfile OUTFILE
                        Destination file for test results in CSV format
  -s, --silent          Don't print verbose output from the upload process
  -l HOST, --host HOST  FTP server for upload test
  -u USERNAME, --username USERNAME
                        FTP user name for upload test
  -p PASSWORD, --password PASSWORD
                        FTP password for upload test
  -P PASSIVE, --passive PASSIVE
                        Sets FTP passive mode. Default: False
The upload tests are done over FTP, so you need an FTP server and a user account available for the test. For example, to do 5 upload tests against ftp.example.com you can use the following command:
upload-tester -c 5 -f $PWD/test10Mb.db -l ftp.example.com -u bob -p mypassword
Yes, I know it's not very secure to pass the password on the command line, but this is just a testing tool and you are supposed to use a testing account too ;)

That command would show the following output:
juancho@moon:~$ upload-tester -c 5 -f $PWD/test10Mb.db -l ftp.example.com -u bob -p xxxxx
upload_speed.pyc v0.1.1

FTP Host: ftp.example.com
Username: bob
Password: xxxxx
File: /home/juancho/test10Mb.db
Size: 10.0MB

Total Tests: 5

Test #1:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 10.49MB/s - 83.89Mbps

Test #2:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #3:
[==================================================] 5.24 MB/s - 41.94 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #4:
[==================================================] 5.24 MB/s - 41.94 Mbps

Average upload speed: 5.24MB/s - 41.94Mbps

Test #5:
[==================================================] 10.49 MB/s - 83.89 Mbps

Average upload speed: 10.49MB/s - 83.89Mbps


Test Results:
---- -------

Time Elapsed: 1.0 seconds

Overall Average download speed: 7.34MB/s - 58.72Mbps
Maximum download speed: 10.49MB/s - 83.89Mbps
Minimum download speed: 5.24MB/s - 41.94Mbps
Median download speed: 5.24MB/s - 41.94Mbps
Standard Deviation: 2.57MB/s - 20.55Mbps
Also with the CSV output:
Date,Server,File,Size,Min (MB/s),Min (Mbps),Max (MB/s),Max (Mbps),Average (MB/s),Average (Mbps),Median (MB/sec),Median (Mbps)
Mon Jul 10 00:11:00 2017,ftp.server.yyy,/home/juancho/test10Mb.db,4.16,0.15,0.15,0.23,1.84,0.18,1.42,0.17,1.34

Sample#,File Size,Average Speed (MB/sec),Average Throughput (Mbps)
1,4.16,0.17,1.34
2,4.16,0.15,1.2
3,4.16,0.23,1.84
4,4.16,0.17,1.34
5,4.16,0.17,1.4

If you are having trouble with the upload you can test FTP passive mode with the -P parameter.


ToDo

There is stuff that I would like to add soon, like:
  • Automatic scaling of speed units: for example, if the current value is over 1000 kbps it should be shown in Mbps, if it's less than 1000 kbps it should be shown in kbps, and so on.
  • FTP download tests, currently only HTTP(S) is supported as a download method.
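The first item could work something like this; a quick sketch of the idea, not code from the project:

```python
UNITS = ["bps", "kbps", "Mbps", "Gbps"]

def scale_speed(value_bps):
    """Scale a speed in bps up to the largest unit that keeps the value >= 1."""
    value = float(value_bps)
    unit = 0
    while value >= 1000 and unit < len(UNITS) - 1:
        value /= 1000.0
        unit += 1
    return "%.2f %s" % (value, UNITS[unit])

print(scale_speed(950000))    # 950.00 kbps
print(scale_speed(10490000))  # 10.49 Mbps
```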

Contributions welcome!

Thursday, July 20, 2017

New mageia 6 docker images available

Mageia 6 was released last week, so this week I worked on updating the official docker images too. This new release includes dnf from the Fedora Project as a package manager in addition to urpmi, which makes it possible to offer third-party free and open source software through Fedora COPR and the openSUSE Build Service targeting Mageia 6 and up. Through COPR or OBS, anyone can now easily offer free and open source software built and tailored for Mageia, as well as software that is broadly compatible with Mageia and other popular Linux distributions.

You can learn more about this new Mageia release in the release notes; the docker images can be found at Docker Hub. To create a container from the new Mageia 6 image you can do something like this:

  docker run -ti --name mageia mageia:latest bash

Check it out and please send any bug reports to the project's github issues page.

Enjoy !

Tuesday, June 14, 2016

Running your own help desk platform with docker and OTRS


At work we use OTRS for our help desk platform. We chose it because it's open source and very flexible, and we could install it on our premises to have more control. So, I went ahead and made a set of docker containers with which we have been running multiple OTRS 4.0.x installations for small companies for more than a year now, without issues. Now I've had some time to upgrade the containers to OTRS 5.

The first thing to know is that this is an unofficial OTRS docker container.

For setting up an OTRS system you need several services:
  • A web server with the OTRS installation.
  • A database server.
  • An SMTP server.
  • A proxy server (optional).
This container setup is designed that way. It uses:
  • An OTRS container with the web server and the OTRS installation.
  • A MariaDB database container.
  • A Postfix SMTP relay container.
  • A data-only container for the database and backup volumes (and, optionally, a proxy container in front).

The docker-compose configuration files include all of those services, and upon container start a fresh OTRS install will be started, ready to be configured by an OTRS administrator.

There are some environment variables you can use to control the container startup and initial state. For example, the container can be started in three ways, controlled by the OTRS_INSTALL environment variable:
  • OTRS_INSTALL=no: the container will load a default vanilla OTRS installation, ready to be configured as you need. This is the default.
  • OTRS_INSTALL=yes: the container will launch the OTRS install web interface at http://localhost/otrs/install.pl.
  • OTRS_INSTALL=restore: the container will restore the backup specified by the OTRS_BACKUP_DATE environment variable. OTRS_BACKUP_DATE is the name of the backup to restore, in the same date_time format that the OTRS backup script uses, for example OTRS_BACKUP_DATE="2015-05-26_00-32". Backups must be inside the /var/otrs/backups directory (you should host-mount it).
You need to mount that backups volume from somewhere: it can come from another container (using --volumes-from) or from a host directory containing the backup files.
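As an example, a restore could be wired into the docker-compose file roughly like this (the environment entries are my illustration; the backup date and volume path are example values):

```yaml
  otrs:
    build:
      context: otrs
    environment:
      OTRS_INSTALL: restore
      OTRS_BACKUP_DATE: "2015-05-26_00-32"
    volumes:
    - "./otrs/backup:/var/otrs/backups"
```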

For testing the containers you can bring them up with docker-compose:

    sudo docker-compose build
    sudo docker-compose up

These commands will build all containers, pull any missing images, bring up all needed containers, link them, and mount volumes according to the docker-compose.yml configuration file:

version: '2'
services:
  otrs:
    build:
      context: otrs
      args:
        OTRS_VERSION: 5.0.10-01
    ports:
    - "80:80"
  # If running behind a proxy container, expose the ports instead
  # and link the proxy container to this one.
  #  expose:
  #  - "80"
    links:
    - mariadb:mariadb
    - postfix:postfix
    volumes_from:
    - data

  mariadb:
    build:
      context: mariadb
    expose:
    - "3306"
    volumes_from:
    - data
    environment:
        MYSQL_ROOT_PASSWORD: changeme
  postfix:
    image: juanluisbaptiste/postfix:latest
    expose:
    - "25"
    env_file: credentials-smtp.env
  data:
    image: centos/mariadb:latest
    volumes:
    - /var/lib/mysql
    - "./otrs/backup:/var/otrs/backups"

    command: /bin/true

The default database password is changeme, to change it, edit the docker-compose.yml file and change the MYSQL_ROOT_PASSWORD environment variable on the mariadb image definition before running docker-compose.

To start the containers in production mode, use the docker-compose-prod.yml file instead; it points to the latest version of all images to be pulled and run, rather than Dockerfiles to be built:

sudo docker-compose -f docker-compose-prod.yml pull


sudo docker-compose -f docker-compose-prod.yml -p company_otrs up -d  

After the containers finish starting up, the administration and customer interfaces can be accessed at the following addresses:

Administration Interface

    http://$OTRS_HOSTNAME/otrs/index.pl

Customer Interface

    http://$OTRS_HOSTNAME/otrs/customer.pl

There are also some other environment variables that can be set to customize the default install, like root and database passwords, language, theme, ticket counter start, number generator, etc.; check the GitHub page for more info. The OTRS 4 sources are still available in the otrs-4_0_x branch.


Monday, May 16, 2016

How to mount VirtualBox shared folders at boot

I'm setting up a network install server on a VirtualBox VM, and to avoid ending up with a huge VM I didn't want to copy the contents of the ISO images of the distros that are going to be available through PXE; even worse if I also included other repos like epel or centosplus, or updates. So I created a VirtualBox shared folder pointing to the directory containing the ISO images. After having created the shared folder, you can mount it like this:

sudo mount -t vboxsf -o uid=$UID,gid=$(id -g) share ~/host

To avoid having to manually mount the share each time the VM boots, the shared mount needs to be added to /etc/fstab, but there's a catch: the vboxsf kernel module, needed to mount the shared folder, isn't available yet when all filesystems are mounted during the boot process. So, to fix this we need to make sure the vboxsf module is loaded before the filesystems are mounted at boot.

On CentOS 7, create an executable file ending in .modules in the /etc/sysconfig/modules directory with the following content, to load the VirtualBox kernel module before filesystems are mounted:

    #!/bin/sh
    lsmod | grep vboxsf >/dev/null 2>&1
    if [ $? -gt 0 ] ; then
      exec /sbin/modprobe vboxsf >/dev/null 2>&1
    fi

On Ubuntu/Debian, add the module name to /etc/modules. Now we need to add the shared mount to /etc/fstab. In my case, my shared folder is called isos, so I added the following line:

isos    /isos   vboxsf  defaults        0 0

After adding this line you can reboot the server/vm and see if it mounted correctly at boot.

If you want to mount the iso images too at boot, add a line like this one to /etc/fstab, for each iso to mount:

/isos/CentOS/CentOS-7-x86_64-DVD-1511.iso /distros/centos7 iso9660 loop 0 0

Remember to adjust the loopback device limits if you plan to mount more than 8 or 10 images (I don't remember the exact limit right now).
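If the loop driver is built as a module (as I believe it is on CentOS 7), one way to raise the limit is the max_loop module parameter, for example:

```
# /etc/modprobe.d/loop.conf (the value 64 is arbitrary)
options loop max_loop=64
```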


Thursday, December 31, 2015

Using a portable SMTP relay server with docker

I have had a very busy year and had the chance to work on a lot of new and useful docker containers. Taking advantage of these holidays, I finally started to catch up here with my latest work on them :D

There's one thing that is a pretty common requirement for a website: an SMTP email server or relay.
Every web app needs one for different tasks, like sending out notifications, registration emails, password resets, etc. I made a Postfix SMTP relay container that is easy to use with other containers.

Before running the container, first you need to set the following environment variables to configure the SMTP relay host:
  • SMTP_SERVER: Server address of the SMTP server that will send email from our postfix container.
  • SMTP_USERNAME: Username to authenticate with.
  • SMTP_PASSWORD: Password of the SMTP user.
  • SERVER_HOSTNAME: Server hostname for the Postfix container. Emails will appear to come from this hostname domain.
To use it you need to first pull the image:
docker pull juanluisbaptiste/postfix
and then fire it up with the previous variables defined:
docker run -d --name postfix -P \
    -e SMTP_SERVER=smtp.bar.com \
    -e SMTP_USERNAME=foo@bar.com \
    -e SMTP_PASSWORD=XXXXXXXX \
    -e SERVER_HOSTNAME=helpdesk.mycompany.com \
    juanluisbaptiste/postfix
Lastly, link your container against it:
docker run --name mycontainer --link "postfix:postfix" myimage

Using docker-compose

Or, you could use docker-compose to start your application containers linked against the postfix container with one command. Suppose you have a web application that links against a database and this postfix container. Download and install docker-compose for your platform, and then on your website's docker project, create a new file called docker-compose.yml, and put the following contents there:
myapp:
  build: myapp
  ports:
  - "80:80"
# If running behind a proxy container, expose the ports instead
# and link the proxy container to this one.
#  expose:
#  - "80"
  links:
  - mariadb:mariadb
  - postfix:postfix
  volumes_from:
  - data
mariadb:
  image: centos/mariadb:latest
  expose:
  - "3306"
  volumes_from:
  - data
  environment:
      MYSQL_ROOT_PASSWORD: changeme
postfix:
  image: juanluisbaptiste/postfix:latest
  expose:
  - "25"
  environment:
      SMTP_SERVER: smtp.mycompany.com
      SMTP_USERNAME: user@mycompany.com
      SMTP_PASSWORD: changeme
      SERVER_HOSTNAME: helpdesk.mycompany.com
data:
  image: centos:latest
  volumes:
  - /var/lib/mysql
  - /var/www/webapp

  command: /bin/true
Then, you can launch your webapp, the database and the postfix container with this command:
docker-compose up
All containers will be started in the right order and the webapp will be linked against the mariadb and postfix containers. Also, the webapp and the mariadb database container will share the same data volume container (unrelated to the postfix container but a good practice).

One thing to note: this container doesn't enable client SMTP authentication. The idea is to expose port 25 to containers, and then link the containers that need an SMTP service against it, so the relay isn't publicly exposed.

A note about using gmail as a relay

Since last year, Gmail by default does not allow email clients that don't use OAuth 2 for authentication (like Thunderbird or Outlook), so first you need to enable access for "Less secure apps" in your Google settings.

Also take into account that the From: header will contain the email address of the account used to authenticate against the Gmail SMTP server (SMTP_USERNAME); the one in the email itself will be ignored by Gmail.

Friday, February 13, 2015

Setting up a BigBlueButton 0.81 docker container: Part 2

I have made some improvements on my BigBlueButton docker images since I last posted about that. Now, the container can be accessed externally and not only through the private IP address docker assigns to it (by default in 172.17.0.x range) as before. For this to work, the SERVER_NAME env variable must be set pointing to the hostname that is going to be used to access your BigBlueButton container. Now, the container can be started like this:

sudo docker run -d --name bbb -p 80:80 -p 9123:9123 -p 1935:1935 -e SERVER_NAME=meeting.somedomain.com bbb_0.81

Then you can access the container externally (provided SERVER_NAME resolves to a public IP address) using $SERVER_NAME. The hostname set in SERVER_NAME must point to the docker host machine. If the container can't use the same host ports (i.e. there's already a web server running on port 80), you can start the container using other host ports:

sudo docker run -d --name bbb -p 8080:80 -p 91230:9123 -p 19350:1935 -e SERVER_NAME=meeting.somedomain.com bbb_0.81
Then configure a reverse proxy server (like nginx) so that requests for SERVER_NAME go to the BigBlueButton container's private IP address and HTTP port, and port forward ports 1935 and 9123 on the docker host machine to the container. Or even easier, use an nginx container and link it to the BigBlueButton container, but that deserves another post.
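For reference, a minimal nginx reverse-proxy configuration for the web part could look roughly like this (the container IP, port and hostname are example values; ports 1935 and 9123 are not HTTP and still have to be port-forwarded separately):

```nginx
server {
    listen 80;
    server_name meeting.somedomain.com;

    location / {
        # BigBlueButton container's private IP address and HTTP port
        # (example values; adjust to your docker run command)
        proxy_pass http://172.17.0.2:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```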

More detailed instructions in the github project page.