Cybersecurity Capstone Project

Diarpi
May, 2018

Joined (and passed!) this month's Cybersecurity Capstone project, which is the final part of the Cybersecurity Specialization run by the University of Maryland on the Coursera platform.

I already participated in this project/contest a few years ago, and it was extremely fun.
Back then, it was run on the Build-it, Break-it platform.

You can team up with other contestants, but I decided to try it on my own.

The project is divided into three phases, each running for two weeks:

  1. Build it, where you plan and build your application according to the specification.
  2. Break it, where you find bugs, crashes and vulnerabilities in other contestants' applications.
  3. Fix it, where you fix the issues found in your application.

Each phase also has its own grading system.
Your team gains points by fulfilling the application specification in the "Build it" phase.

In the "Break it" phase, you gain points by finding issues in other teams' applications and lose points if another team finds issues in your work.

Lastly, the "Fix it" phase is for acknowledging the issues found in your application and fixing them - this way you can gain the lost points back.

To spice it up a bit, issues found by only one team are worth more points, meaning common bugs or vulnerabilities found by many teams are worth only a handful.


The task was to build a simple and secure messaging system. The application was required to be web-based (accessible in a browser), but there was no limitation on the technology or programming language used.

The basic requirements for the application were user registration, login, and functionality to send messages to other users or read messages sent to you.
There was also a requirement to provide a database dump at the click of a button, so contestants could inspect your data model, check whether the passwords are encrypted, etc.

I chose to build my application in Flask - which I had never used before. A great opportunity to learn it! To spice it up a bit, I also added Google reCAPTCHA verification on registration/login.
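
Server-side, the reCAPTCHA check boils down to forwarding the token that the widget puts into the form to Google's siteverify endpoint. Below is a minimal sketch of how that could look in a Flask registration route, assuming reCAPTCHA v2 and the requests package; the secret key, route name and user-creation step are placeholders rather than the actual code from my repository:

```python
# Minimal sketch of server-side reCAPTCHA verification in Flask.
# RECAPTCHA_SECRET and the user-creation step are placeholders.
import requests
from flask import Flask, request, abort

app = Flask(__name__)
RECAPTCHA_SECRET = "your-secret-key"  # placeholder, would come from config

def recaptcha_passed(form) -> bool:
    """Ask Google to verify the token the browser widget put into the form."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={
            "secret": RECAPTCHA_SECRET,
            "response": form.get("g-recaptcha-response", ""),
        },
        timeout=5,
    )
    return resp.json().get("success", False)

@app.route("/register", methods=["POST"])
def register():
    if not recaptcha_passed(request.form):
        abort(400, "reCAPTCHA verification failed")
    # ... create the user account here ...
    return "registered"
```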

For the database part, I chose to use SQLite.
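
One nice side effect of this choice is that the "database dump at the click of a button" requirement becomes trivial, since SQLite can serialize itself into SQL statements. A rough sketch of such an endpoint (not the actual project code; the messages.db file name is just an example):

```python
# Rough sketch of a "download the database dump" endpoint.
# The messages.db file name is just an example.
import sqlite3
from flask import Flask, Response

app = Flask(__name__)

@app.route("/dump")
def dump_database():
    conn = sqlite3.connect("messages.db")
    try:
        # iterdump() yields the SQL statements needed to recreate the database.
        dump = "\n".join(conn.iterdump())
    finally:
        conn.close()
    return Response(dump, mimetype="text/plain")
```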

For the user data model, I chose to symmetrically encrypt user passwords along with unique per-user salts.
The salts were also symmetrically encrypted. This way, if an adversary brute-forced a salt, they would only obtain the salt belonging to one user, not all of them.
Naturally, the messages were also encrypted.
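
To illustrate the scheme, here is a simplified sketch rather than the code in the repository; it uses Fernet from the cryptography package for the symmetric encryption, and the key handling is deliberately naive:

```python
# Simplified sketch of the storage scheme described above, using Fernet
# (symmetric encryption) from the cryptography package. The key handling
# is deliberately naive here; in a real application it would come from config.
import os
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())

def protect_password(password: str) -> tuple[bytes, bytes]:
    """Return (encrypted salted password, encrypted salt) for storage."""
    salt = os.urandom(16)  # unique per-user salt
    enc_password = fernet.encrypt(salt + password.encode())
    enc_salt = fernet.encrypt(salt)  # the salt itself is encrypted too
    return enc_password, enc_salt

def verify_password(password: str, enc_password: bytes, enc_salt: bytes) -> bool:
    """Check a login attempt against the stored ciphertexts."""
    salt = fernet.decrypt(enc_salt)
    return fernet.decrypt(enc_password) == salt + password.encode()
```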

For the front-end part, I used Nginx and registered for a free Let's Encrypt certificate.
For the presentation part, I used basic Bootstrap HTML templates.

To protect against SQL injection, I used SQLAlchemy, which handles escaping user input. I was amazed how many teams did not do this and instead wrote inline SQL statements!
I also took extra precautions against possible open redirects, so I avoided using the "next" URL parameter functionality.
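
To show the difference, here is a small illustration (my own example, not the project's code) of the vulnerable inline-SQL pattern next to a parameterized SQLAlchemy query; the users table and database file are hypothetical:

```python
# Illustration only: inline SQL versus a bound parameter.
# The users table and messages.db file are hypothetical.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///messages.db")

def find_user_unsafe(username: str):
    # Vulnerable: user input is concatenated straight into the statement,
    # so a username like "' OR '1'='1" changes the query itself.
    query = f"SELECT id FROM users WHERE username = '{username}'"
    with engine.connect() as conn:
        return conn.execute(text(query)).fetchall()

def find_user_safe(username: str):
    # Safe: the value is passed as a bound parameter and escaped by the driver.
    query = text("SELECT id FROM users WHERE username = :username")
    with engine.connect() as conn:
        return conn.execute(query, {"username": username}).fetchall()
```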

It took me roughly two days (a weekend) to build the application, and while it is not perfect (there are a few things I could improve), it did the job. No vulnerabilities were found in it!

On the other side, I managed to find quite a few of them - mostly teams not using TLS, and thus vulnerable to MITM attacks, as well as SQL injections.


I must note that the organizers did not quite follow the rules they defined - I'm guessing this is due to limitations of the Coursera platform.

For example, there was no "Fix it" phase, and the grading was a bit messy: the same amount of points was awarded whether an issue was found by everyone or only by you.
This clearly killed the spirit of competition.

While the "Build it" part was okay, the thing that bothered me the most was how the "Break it" phase was run.
This is where my interests lie after all!

When I previously participated in this project a few years ago, I actually had to write my own apps to prove bugs or vulnerabilities.

For example, if there was a MITM vulnerability, I would be required to write an app which intercepted the traffic, manipulated it, passed it on to the target contestant's application, and specified the expected output, which differed from the user's input.
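
Just to illustrate the idea (this is my own rough sketch, not the contest's actual exploit format): such an exploit is essentially a small proxy that sits between the client and the target application and tampers with the traffic in flight. The addresses and the replaced string below are made up:

```python
# Rough sketch of an intercept-and-modify proxy. Addresses and the
# replaced string are made up; a real exploit would target a specific
# weakness in the application's unencrypted traffic.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8888)  # where the victim client connects
TARGET_ADDR = ("127.0.0.1", 8080)  # the contestant's application

def handle(client: socket.socket) -> None:
    with socket.create_connection(TARGET_ADDR) as upstream, client:
        request = client.recv(65536)
        # Manipulate the traffic in flight, e.g. swap one message for another.
        upstream.sendall(request.replace(b"hello", b"tampered"))
        # Relay the application's response back to the unsuspecting client.
        client.sendall(upstream.recv(65536))

def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```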

The organizers had a sample system called "the Oracle", which compared the output of running my exploit against itself and against the target contestant's application. If there were differences, I would get points.
You even gained extra points if you managed to find issues in "the Oracle" itself (breaking the organizers' application).

In this project session, however, it was enough to just write that, for example, an application has a vulnerability due to not using TLS. I would definitely prefer a more practical approach, and I believe we would learn much more that way.

Still, I can say I did learn something new, especially in the "Build it" phase.

Code is available here:
https://github.com/diarpi/coursera-cs-capstone