
EventPost-CCProject

Main project for the Cloud Computing course of the UGR Master's in Computer Science.

Project documentation section.

Abstract:

The idea for this project is to develop a microservices-based application for the cloud. It will provide a discovery and management system for social events, along with the possibility of scheduling email-based notifications as reminders for the desired events.

Requirements and Usage:

Architecture:

As stated before, we will use a microservices-based architecture, with one microservice per entity in the system plus one acting as a task dispatcher. The required microservices arise from decomposing the system into Domain-Driven Design subdomains. The microservices developed to achieve this goal are the following:

Additionally, we will use a service for user management.

All of them will be addressed through a specific API Gateway, which will also be in charge of authenticating certain requests to other microservices against the User management service.

A centralized system using Consul will handle service discovery and configuration between microservices. There will also be a centralized logging system.

Finally, for queueing tasks to the Email-sender microservice, a RabbitMQ instance will be deployed with the Email-sender as the receiver.

Communication:

The majority of the communications will take place over HTTP. Users will contact our microservices through the API Gateway's RESTful API. The Gateway will route those requests to the User management, Events and Notifications microservices, each of which exposes a RESTful API for receiving them.
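As an illustration, the Gateway's routing can be thought of as a prefix-to-service table. The sketch below is a minimal Python stand-in, where the service names, path prefixes and ports are all assumptions, not the project's actual configuration:

```python
# Hypothetical routing table for the API Gateway.
# Service hostnames and ports are placeholders for illustration only.
ROUTES = {
    "/users": "http://user-management:5000",
    "/events": "http://events:8000",
    "/notifications": "http://notifications:8001",
}

def resolve(path):
    """Return the internal URL that should handle the given request path."""
    for prefix, target in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return target + path
    raise LookupError(f"No microservice registered for {path!r}")
```

In a real gateway this lookup would be combined with the authentication check against the User management service before forwarding the request.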

Communication between the Email-sender and the other microservices will be carried out through a RabbitMQ message broker implementing a queue over the AMQP protocol. The Email-sender will act as the receiver, taking messages from the queue, while the other microservices act as senders, publishing tasks to the queue.
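The sender/receiver pattern can be sketched with the Python standard library alone. This is only a stand-in to show the flow: in the real deployment a RabbitMQ client such as pika, speaking AMQP to the broker, would replace `queue.Queue`, and the message fields shown are assumptions:

```python
import queue
import threading

# Stdlib stand-in for the RabbitMQ queue between senders and the Email-sender.
task_queue = queue.Queue()

def sender(notification):
    """A producer microservice (e.g. Notifications) enqueues an email task."""
    task_queue.put(notification)

sent_emails = []

def email_sender_worker():
    """The Email-sender consumes tasks until it sees the stop sentinel."""
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: shut down the worker
            break
        sent_emails.append(f"email to {task['to']}: {task['subject']}")
        task_queue.task_done()

worker = threading.Thread(target=email_sender_worker)
worker.start()
sender({"to": "user@example.com", "subject": "Event reminder"})
task_queue.put(None)  # stop sentinel
worker.join()
```

The key property this models is decoupling: the sender never waits for the email to be sent, it only enqueues the task.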

Database management:

In our system we will store the following data:

In order to keep services independent, a database-per-service architecture will be implemented, with each database private to its corresponding microservice. As there are no relations between the stored entities, a NoSQL database will be used.
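To make the document-oriented approach concrete, the shapes below sketch what an event and a notification document might look like in their respective private stores. All field names and values here are assumptions for illustration; note that the only link between the two databases is a loose id reference, never a join:

```python
# Hypothetical document stored in the Events microservice's private database.
event_doc = {
    "_id": "evt-001",
    "name": "Cloud Computing meetup",
    "place": "Granada",
    "date": "2019-12-14",
}

# Hypothetical document stored in the Notifications microservice's private database.
notification_doc = {
    "_id": "ntf-001",
    "event_id": "evt-001",  # loose reference by id; no cross-database relation
    "email": "user@example.com",
    "send_at": "2019-12-13T09:00:00",
}
```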

Technologies:

Basic architecture diagram.

Microservices architecture diagram

User stories:

The user stories that came up are the following.

System functionality:

The features identified in the user stories for each microservice have been mapped into milestones and their corresponding issues (links to milestones below).

Additionally, the User management system will include the following features:

- Create an account.
- Delete an account.
- Update email.
- Add a secondary email to an account.

These features have been mapped into milestones and issues.

Continuous integration:

This section describes the continuous integration system and tools, the testing process and the build tool used.

Testing:

A series of tests have been developed for our microservices. The code is tested with the Python library pytest. Another library, coverage, has also been used to generate test coverage reports. Finally, this information is uploaded to CodeCov using its command-line tool.

For the integration tests of the REST services we have used pytest together with a Falcon feature: a testing client that lets us send requests to the application without starting a server.

Once we have created our tests we can execute them, check the test coverage, generate a report file and upload the report to CodeCov.

Task tool:

buildtool: tasks.py

As we are using Python, the task tool selected is invoke. It uses the file tasks.py to declare tasks; in each task we can declare a series of commands to execute. Finally, we can run invoke <task> to carry out the desired task.

We created tasks for updating the dependencies file (requirements.txt), installing new dependencies, running tests, and building, running and testing the microservices' containers, among others. All tasks can be found in the file tasks.py.

CI tools:

For continuous integration we have used two different tools, Travis-CI and Circle-CI. For both of them to work we need to link our GitHub account to the service, grant access to our repository and add a configuration file.

For Travis-CI the file used is .travis.yml. There we can specify the language versions to test and the commands to set everything up. In previous tests we found that the system does not work below Python 3.5 due to dependencies.
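A minimal .travis.yml along these lines might look as follows; this is a sketch, and the exact install and script commands are assumptions based on the testing workflow described above:

```yaml
language: python
python:
  - "3.5"
  - "3.6"
  - "3.7"
install:
  - pip install -r requirements.txt
script:
  - coverage run -m pytest
after_success:
  - codecov
```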

For Circle-CI the file used is config.yml. The concept is the same as in the Travis file, but with a different syntax. More information about both files can be found in their links.

Docker:

The base image chosen for the containers is bitnami/minideb, mainly because of its ease of use; more information about this choice can be found in the following section.

So far we have 2 microservices, so we will be running 2 containers. The Dockerfiles for creating them, which include explanatory comments, are the following: Events Dockerfile, Notifications Dockerfile.

A docker-compose file has been created to speed up container management in development: docker-compose file. When it is executed successfully using docker-compose up -d, both microservices will be ready to receive requests.
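A compose file along these lines is enough for the two services; this is a sketch, and the build contexts and port mappings shown are assumptions rather than the project's actual values:

```yaml
version: "3"
services:
  events:
    build: ./events          # assumed build context for the Events microservice
    ports:
      - "8000:8000"
  notifications:
    build: ./notifications   # assumed build context for the Notifications microservice
    ports:
      - "8001:8001"
```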

Containers:

The link to the DockerHub containers is the following:

Container: https://hub.docker.com/r/carlosel/eventpost-cc

Additionally, the containers have been uploaded to GitHub Packages at the following URLs:

Both registries auto-update when there is a push to GitHub, thanks to Travis-CI.

Deployment:

Heroku:

The Events microservice has been deployed to Heroku. The steps followed can be found in the official documentation: basically, we created a Heroku app using the CLI, specified that it would be a containerized app and pushed our code to Heroku. Additionally, we have created a heroku.yml and enabled auto-deploy for our app in order to re-deploy it every time we push to our repository.
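For a containerized web app, a minimal heroku.yml build manifest looks roughly like this; the Dockerfile path is an assumption about the repository layout:

```yaml
build:
  docker:
    web: Dockerfile   # assumed path to the Events microservice Dockerfile
```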

Link to the deployed app:

This service can be populated using the command invoke populateHerokuEventsApp, available in tasks.py.

Deployment to AWS EC2 using Ansible for provisioning:

In order to deploy our microservices on virtual machines, we have relied on the AWS EC2 service, together with Ansible for provisioning our instances.

Setting up AWS CLI and Ansible:

There are several steps that we have to complete for setting up our deployment environment:

After we have completed all these steps Ansible will be ready to work with the AWS module.

Use of Ansible:

Once the development environment is ready we can create an Ansible file structure for provisioning. It will live in the provision folder. In our case we have chosen to use Ansible not only to provision the instances but also to launch them. Here is what we have done:

Describing the instances:

The AWS EC2 service offers a wide range of configurations for its instances. The configuration we have chosen for our microservices is the following:
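For illustration, a single Ansible task using the classic ec2 module could launch such an instance. Every value below (key pair name, AMI id, region, security group) is a placeholder, not the project's real configuration:

```yaml
- name: Launch an EC2 instance for the Events microservice
  ec2:
    key_name: eventpost-key          # placeholder key pair name
    instance_type: t2.xlarge
    image: ami-00000000000000000     # placeholder AMI id
    region: us-east-1                # placeholder region
    group: eventpost-sg              # placeholder security group
    wait: yes
```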

Performance:

Performance testing:

Performance testing has been carried out using Taurus with JMeter as the executor. A Taurus script has been created for testing both microservices. The script contains self-explanatory annotations, but basically it consists of the following:
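A Taurus script of that general shape, reduced to a single scenario, could look like this. The endpoint, timings and scenario name are assumptions; the concurrency of 10 matches the value reported in the results section:

```yaml
execution:
  - concurrency: 10      # concurrent virtual users
    ramp-up: 10s
    hold-for: 1m
    scenario: list-events

scenarios:
  list-events:
    requests:
      - url: http://localhost:8000/events   # assumed Events endpoint
        method: GET
```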

Optimization:

In order to achieve good performance, two actions have been taken:

Results:

Results obtained from running these tests against the Events microservice in a local container are the following:

As we can see, with 10 concurrent users and the request scheme explained previously, the Events microservice can handle up to 2000 RPS with a very low response time and no errors. Performance Testing Events concurrency 10

In the case of the Notifications microservice the results are very similar; after all, both microservices manage very light resources. Performance Testing Notifications concurrency 10

Finally, to test the service limits, concurrency has been increased several times. The image below shows that with 100 concurrent users the drop in RPS is negligible; however, the degradation in response time is remarkable. Performance Testing Events concurrency 100

The microservice starts to fail with 200 concurrent users and a combined throughput of 4000 RPS from both scenarios. Tests with 100 concurrent users and a combined target throughput of 10000 also show the RPS dropping to 1600. Performance Testing Events concurrency 200

Images performance testing:

Load performance measurements have been taken for containers running the Events microservice with 2 different base images (alpine and python:3.7-alpine). For this purpose we have used the tool Taurus with a reference script plus some modifications. An explanation of the script can be found in its in-line comments.

The tests have been performed locally on an Intel Core i7-4790 CPU @ 3.60GHz × 8.

After pulling the required base images we have used this Dockerfile, changing only the FROM directive between alpine and python:3.7-alpine and deleting the Python install line. This creates the 'events' image, based on alpine, and the 'events-second' image, based on python:3.7-alpine.
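The comparison Dockerfile follows roughly this shape; this is a sketch with hypothetical file names and package commands, where only the FROM directive and the Python install line differ between the two images:

```dockerfile
FROM alpine
# Removed for python:3.7-alpine, which already bundles Python
RUN apk add --no-cache python3 py3-pip
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "app.py"]
```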

Docker images for testing

As we can see, the image created with 'python:3.7-alpine' doubles the size of the other, which is undesirable. Now we run the microservice and perform a load test in both containers.

Containers up image

Measurements for the 'events' image (base image: alpine): events image (alpine) measurements

Measurements for the 'events-second' image (base image: python:3.7-alpine): events-second image (python:3.7-alpine) measurements

For the same load we can see that the container using alpine as its base image performs slightly better in terms of RPS and also has a slightly lower response time. With these metrics we can conclude that the 'alpine' base image is better for our project.

Later on we required the installation of MongoDB in the container. Alpine 4.0 does not support MongoDB, and many problems arose when trying to install and run it, so we switched to a similar Debian-based image, 'bitnami/minideb'. It is not as small as alpine (67 MB), but it supports all the applications we need.

AWS EC2 Instances performance testing:

For testing the performance of the microservices we have used t2.xlarge EC2 instances and the Taurus scripts created for the previous milestone.

We have tested both executing the script from our local computer against the instance and executing the script on the same machine (instance) where the service is deployed.