Krzysztof Żuraw

Personal site

Pomodoro timer - counting

Welcome to today's blog post! This post is about implementing a countdown in JavaScript, and also about some CSS work I had to do so that my timer looks decent.

Core functionality of pomodoro timer

As the name suggests, the core functionality of a timer is to count down time. In the case of this timer, I will be using 25 minutes as the time that needs to be counted down. I decided that for the time being I will have only two control buttons for the timer: start & restart.

Implementing timer in JavaScript

As I know what I want to accomplish, the first thing is the look of my timer. I was wondering whether it would be better to write some CSS from scratch and learn that language too, but when I started doing that I realized I could spend a whole week on this task alone. Instead, I decided to use Material Design Lite. This is a collection of CSS and JavaScript that allows me to use Google Material Design. To get started, all I need to do is include some code from Google's CDN:

  <link rel="stylesheet" href="">
  <link rel="stylesheet" href="">
  <script defer src=""></script>

You may have noticed that the script has a defer attribute, which means it will be executed after the document has been parsed. I also added my custom style.css:

.display__time-left {
  font-weight: 100;
  font-size: 20rem;
  margin: 0;
  color: black;
  flex: 1;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
}

.control_buttons {
  flex: 1;
  display: flex;
  justify-content: space-around;
  align-items: center;
}
Much of the code in style.css is based on Wes Bos's code from here. In display__time-left I set up a few properties of the font that will show how many minutes and seconds are left in one pomodoro. I also made this element flex, which makes it fill the available space. The .control_buttons are evenly spaced on the webpage thanks to space-around. After loading the page it looks like this:

Basic layout

I am aware that this look still needs a bit of work. As my styles are ready, I add this HTML to the body:

<h1 class="display__time-left">25:00</h1>
<div class="control_buttons">
  <button class="mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent" data-action="start">
    start
  </button>
  <button class="mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent" data-action="stop">
    stop
  </button>
</div>
<audio id="end_sound" src="sound.wav"></audio>

At the beginning, I show the time left in the pomodoro, which by default is 25 minutes. Next, there are my control buttons with classes from Material Design Lite. At the end, there is an audio file which I will play at the end of each pomodoro.

How is the counting implemented? For this you need to look into script.js:

let countdown;
const timerDisplay = document.querySelector('.display__time-left');
const startTimeBtn = document.querySelector('[data-action="start"]');
const restartTimeBtn = document.querySelector('[data-action="stop"]');

Here I just select the necessary elements from the HTML. I'm using querySelector with class and data-attribute selectors. As I have my startTimeBtn selected, I add an event listener to it:

startTimeBtn.addEventListener('click', () => {
  if (countdown) return;
  timer(1500);
});

I'm listening for the click event, and when it fires I set up my timer for 1500 seconds, which is 25 minutes. But before running timer(1500) I check whether the countdown variable is defined. Why? Without this check the user could click as many times as he/she wanted and keep restarting the timer from the beginning. Then I run timer:

function timer(seconds) {
  const now = Date.now();
  const then = now + (seconds * 1000);
  displayTimeLeft(seconds);


At the beginning, I define now, which holds the current time. Then I compute then - the time at which my pomodoro will end. Then I call displayTimeLeft:

function displayTimeLeft(seconds) {
  const minutes = Math.floor(seconds / 60);
  const remainderSeconds = seconds % 60;
  const display = `${minutes}:${remainderSeconds < 10 ? '0' : ''}${remainderSeconds}`;
  timerDisplay.textContent = display;
}

It is a simple function that displays time in min:sec format. I compute minutes & remainderSeconds and then use an ES6 template string to neatly interpolate the variables into the string. At the end, I set the textContent of my timerDisplay, which is the h1 HTML element.
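To make the formatting easier to reason about, the same logic can be pulled out into a pure function (formatTime is my name for it - a sketch, not code from the project):

```javascript
// Pure version of the min:sec formatting used in displayTimeLeft.
function formatTime(seconds) {
  const minutes = Math.floor(seconds / 60);
  const remainderSeconds = seconds % 60;
  // pad seconds below 10 with a leading zero
  return `${minutes}:${remainderSeconds < 10 ? '0' : ''}${remainderSeconds}`;
}

console.log(formatTime(1500)); // "25:00"
console.log(formatTime(65));   // "1:05"
```

A pure function like this is also trivial to unit test, unlike code that touches the DOM directly.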

Let's go back to timer:

function timer(seconds) {
  // variables from above

  countdown = setInterval(() => {
    const secondsLeft = Math.round((then - Date.now()) / 1000);

    if (secondsLeft < 0) {
      clearInterval(countdown);
      playAudio();
      return;
    }

    displayTimeLeft(secondsLeft);
  }, 1000);
}

Here I assign to countdown an interval which will execute every second - this is the place where the variable is defined and gets a value. In the interval I calculate secondsLeft, and if it is less than 0 it means it's time to stop the interval with clearInterval, play a sound and exit the function. Otherwise, I display the changing time. playAudio is a simple function:

const endSound = document.querySelector('#end_sound');

function playAudio() {
  const sound = new Audio(endSound.src);
  sound.play();
}

By the way, I took most of these functions from JavaScript30 day 29 by Wes Bos.

There is one last thing to do - restarting my timer:

restartTimeBtn.addEventListener('click', () => {
  clearInterval(countdown);
  countdown = undefined;
  timerDisplay.textContent = '25:00';
});

I stop the interval and set countdown to undefined so I can start my timer again. I also redisplay the remaining time.

What is next?

That's all for today! Thanks for reading, but don't worry - there is still a lot to do:

  • checking if pomodoro was good or bad
  • breaks
  • notifications
  • storing good & bad pomodoros

Please feel free to comment! If you have another way to do any of this, don't hesitate to write to me.

Repo with this code is available on github.

Other blog posts in this series:

Special thanks to Kasia for being an editor for this post. Thank you.

Cover image from Unsplash under CC0.

To see comments and full article enter: Pomodoro timer - counting

Pomodoro timer - beginning

With this post I will try to start a new blog post series - documenting my projects. In previous projects like this, I each time had a fixed number of blog posts I wanted to write about a specific project - from 2 to 4. Right now, I want to try writing as many blog posts as are necessary to finish a project - without any specific number in mind. Let's get started!

What pomodoro-timer project will be about

I stumbled upon the pomodoro technique during my student days when I wanted to be more productive. It works great and I tried many different tools, starting from web apps and ending with apps from the Google Play store. Recently I reread the pomodoro technique manifesto and found out that I had missed one important aspect - tracking whether 25 min of work passed without distraction. To accomplish that I started noting down which pomodoro was without distractions and which wasn't. I noticed that I sometimes forgot to write down whether a pomodoro was good or not.

Then I had an idea - what if I wrote my own timer, and at the end of 25 min the application asked me: 'How productive were the last 25 min?'. Based on that I could start tracking my productivity throughout the day.

Moreover, I wanted to learn JavaScript, so I decided to create my own pomodoro timer as a web page.

A few words about tools

In today's JavaScript world there is an infinite number of tools and frameworks - by the way, I recommend reading this piece.

I wanted to start from the basics without any framework to help me. I believe that frameworks come and go, but understanding how the language works stays. So I picked the newest JavaScript implementation - ECMAScript 6.

Then I started searching for a web application template and I found one - Web Starter Kit. I opened it and looked inside the code. I looked one more time. So many tools! Sass, gulp, babel and others. I closed the editor, removed this code and started from scratch. I know it could help me a lot, but I want to start from the basics. As I'm doing a JavaScript course by Wes Bos, I decided to use some of the tools that he is using. I really like browser-sync. It automatically reloads web pages when I change HTML, CSS or JS files. To start browser-sync I have this one line in my package.json:

  "scripts": {
    "start": "browser-sync start --server --files '*.css, *.html, *.js'"
  }

Then I just run npm start.

When I learn a new language I always look for best practices. In the JavaScript world there are a couple of them, but I chose the Airbnb JavaScript Style Guide. The hot tool for linting JS files right now is eslint. To use eslint with this style guide I installed eslint-config-airbnb. Thanks to that, in my .eslintrc I just wrote:

  "extends": "airbnb",

Right now I'm ready to write some JavaScript code! Stay tuned for the next blog post. If you have anything to add, please comment below.

Repo with this code is available on github.

Special thanks to Kasia for being an editor for this post. Thank you.


Gunicorn & LRU cache pitfall

Today I want to write about an interesting situation connected with using the Python LRU cache in an application that runs under gunicorn.

What is LRU cache?

When your cache starts to grow, you have to remove something so that new values can be stored. One of the algorithms used to accomplish this task is called Least Recently Used (LRU). When you perform LRU caching you always throw out the data that was least recently used.

Imagine you have five elements in the cache: A, B, C, D, E. You access element A, which is already in the cache - A becomes the most recently used item. Right after that, you want to add a new element to the cache - F. At this moment the least recently used item is B, so you throw it out and replace it with F. The same mechanism applies to the other items. That's how an LRU cache works.
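The A-E example can be sketched with Python's functools.lru_cache (fetch and the calls list are illustrative names of mine; calls records which keys were cache misses):

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=5)
def fetch(key):
    calls.append(key)  # only executed on a cache miss
    return key.lower()

for key in 'ABCDE':
    fetch(key)   # fill the cache; A is now the least recently used item

fetch('A')       # hit: A becomes the most recently used item
fetch('F')       # cache is full, so the least recently used item (B) is evicted
fetch('B')       # miss: B had been thrown out

print(calls)     # ['A', 'B', 'C', 'D', 'E', 'F', 'B']
```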

Gunicorn & LRU pitfall

In Python 3 you can use the decorator @lru_cache from the functools module. It stores the result of the decorated function inside the cache. Imagine that you have a simple Flask application:

from functools import lru_cache

from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/')
def main():
    store_to_cache()
    return jsonify({'message': 'Stored'})


@lru_cache(maxsize=2)
def store_to_cache():
    return {'this_goes_to_cache': 'and_this_too'}

You enter the root URL and the dictionary is stored in the cache. The cache is set up to hold only 2 elements. Then you have a helper function for getting info about objects inside that cache:

def get_cache_info():
    cache_info = store_to_cache.cache_info()
    return jsonify({
        'Hits': cache_info.hits,
        'Misses': cache_info.misses,
        'Maxsize': cache_info.maxsize,
        'Currsize': cache_info.currsize,
    })

When you run this application in development mode - without gunicorn - everything works as expected: you store to the cache and receive the proper information:

$ curl -X GET
{
  "message": "Stored"
}
$ curl -X GET
{
  "Currsize": 1,
  "Hits": 0,
  "Maxsize": 2,
  "Misses": 1
}

Let's run the same code, but using gunicorn with two workers:

$ gunicorn --workers=2 application:app
$ curl -X GET
$ curl -X GET
{
  "Currsize": 1,
  "Hits": 0,
  "Maxsize": 2,
  "Misses": 1
}
$ curl -X GET
{
  "Currsize": 0,
  "Hits": 0,
  "Maxsize": 2,
  "Misses": 0
}

Sometimes the request returns that there is one item inside the cache, and other times that there are no items at all. Why is that? Because the LRU cache is a cache per worker. When a user enters your site, the value is cached - but only in the worker that handled the request! The same user enters again, the request is handled by the second worker, and that worker doesn't have anything stored in its cache!
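This per-worker behaviour can be simulated in plain Python by giving each "worker" its own independently cached function (make_worker is an illustrative helper of mine - real gunicorn workers are separate processes, which has the same effect):

```python
from functools import lru_cache

def make_worker():
    # Each gunicorn worker is a separate process with its own memory,
    # so each one gets its own copy of the cache. Simulated here by
    # building an independent cached function per "worker".
    @lru_cache(maxsize=2)
    def store_to_cache():
        return {'this_goes_to_cache': 'and_this_too'}
    return store_to_cache

worker_a = make_worker()
worker_b = make_worker()

worker_a()  # the first request happens to land on worker A
print(worker_a.cache_info().currsize)  # 1 - worker A cached the result
print(worker_b.cache_info().currsize)  # 0 - worker B knows nothing about it
```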

For this reason, it's not a good idea to use a per-worker cache in your web application. What can you use instead? A centrally stored cache like Memcached. You will thank yourself in the future.

That's all for today! Feel free to comment - maybe you have a better idea of which cache to use to avoid pitfalls?

The example of how an LRU cache works is based upon this article.

The code that I have made so far is available on github. Stay tuned for next blog post from this series.

Update 13-02-16:

Side note from my friend from work: a cache per worker is good for data that doesn't change, like archival exchange rates. But this type of cache is not good for data that can change.

Thank you Paweł for this note.

Cover image by Tim Green under CC BY-SA 2.0.


Provisioning django application using ansible

As I recently had the opportunity to have an ansible workshop at work, I decided to write a blog post on how to provision a django application using this tool. In this blog post I am using the same application as in the puppet post.

What is ansible and how it's different from puppet

Ansible is a tool that helps automate boring tasks: setting up Linux machines, installing the proper software on them and moving code from repositories to machines. Ansible accomplishes these tasks differently than puppet. It uses a push system - in short, ansible connects to your machine via ssh and pushes changes. No need for masters and agents. Puppet, on the other hand, uses a pull system, in which every machine pulls changes from the master. Ansible follows the same declarative principle as puppet: you declare how the host should look after running ansible.

Provisioning django application using ansible

I will be provisioning geodjango-leaflet. I assume that you know basic ansible concepts like play, playbook or role. This is what the structure of my ansible repo looks like:

├── ansible.cfg
├── inventory
│   └── vagrant
│       └── hosts.ini
├── playbooks
│   ├── roles -> ../roles/
│   └── vagrant.yaml
├── roles
│   ├── db
│   │   └── tasks
│   │       └── main.yml
│   ├── geodjango
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── nginx.conf.j2
│   │       └── supervisord.conf.j2
│   ├── redis
│   │   └── tasks
│   │       └── main.yml
│   └── roles -> roles
└── Vagrantfile

Let's start from the bottom - the Vagrantfile. I will be using vagrant as a playground. The configuration file, a.k.a. the Vagrantfile, looks as follows:


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "trusty64"
  config.vm.box_url = ""
  config.ssh.insert_key = false

  config.vm.hostname = "vagrant-ansible"
  config.vm.network "private_network", ip: ""

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbooks/vagrant.yaml"
    ansible.inventory_path = "inventory/vagrant/hosts.ini"
    ansible.sudo = true
    ansible.verbose = "v"
    ansible.limit = "all"
  end
end


I set up a basic private_network with the ip of the vagrant box. In config.vm.provision I specified the playbook which should be run in vagrant and the inventory where the configuration of my hosts lies. The inventory is shown below:

vagrant-ansible ansible_ssh_host= ansible_ssh_port=22

My ansible playbook doesn't have tasks inside it; instead I delegate them to roles:


- hosts: vagrant-ansible
  become: yes
  roles:
    - db
    - geodjango
    - redis

Let's start with the first role: db. In the folder with this role, I have a tasks folder with main.yml:


- name: ensure apt cache is up to date
  apt: update_cache=yes

- name: ensure packages are installed
  apt:
    name: "{{ item }}"
  with_items:
    - postgresql
    - libpq-dev
    - python-psycopg2
    - postgresql-9.3-postgis-2.1
    - python3-dev
    - python-dev

- name: ensure database is created
  become_user: postgres
  postgresql_db:
    name: geodjango

- name: ensure user has access to database
  become_user: postgres
  postgresql_user:
    db: geodjango
    name: geodjango
    password: geodjango
    priv: ALL

- name: enable postgis for database
  become_user: postgres
  postgresql_ext:
    name: postgis
    db: geodjango

In these tasks, I run apt-get update at the top, then I install a couple of packages so I can set up Postgres. Right below that I create the db, grant the user access to that db and create the PostGIS extension. As this role completes, ansible will execute the geodjango role:


- name: ensure packages are installed
  apt:
    name: "{{ item }}"
  with_items:
    - binutils
    - libproj-dev
    - gdal-bin
    - git
    - python-virtualenv
    - build-essential
    - postgresql-server-dev-all
    - supervisor
    - nginx

- name: ensure git repo is present
  git:
    dest: /opt/geodjango

- name: create virtualenv
  command: virtualenv /opt/venv -p python3.4 creates="/opt/venv"

- name: install requirements
  pip:
    requirements: /opt/geodjango/requirements.txt
    executable: /opt/venv/bin/pip

- name: migrate django application
  django_manage:
    command: migrate
    virtualenv: /opt/venv
    app_path: /opt/geodjango

- name: load django initial data
  django_manage:
    command: load_inital_voivodeships
    virtualenv: /opt/venv
    app_path: /opt/geodjango

- name: collect static files
  django_manage:
    command: collectstatic
    virtualenv: /opt/venv
    app_path: /opt/geodjango

- name: ensure config dir for supervisor exists
  file:
    path: /etc/supervisor/conf.d
    state: directory

- name: ensure supervisor config is present
  template:
    src: templates/supervisord.conf.j2
    dest: /etc/supervisor/conf.d/geodjango.conf
  notify: reread supervisor

- name: remove default nginx configuration
  file:
    name: /etc/nginx/sites-enabled/default
    state: absent

- name: ensure nginx config is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/sites-enabled/geodjango.conf
  notify: restart nginx

The code above is mostly self-explanatory, but I will write more about the task called create virtualenv. Normally you could write this and the next task as one, like:

- name: install requirements
  pip:
    requirements: /opt/geodjango/requirements.txt
    virtualenv: /opt/venv

If this virtualenv is not present, it will be created. But there is a bug in ansible that causes these requirements to be installed in the system-wide python, not the virtualenv one. The reference is here. I use a fix provided by one of the guys in this issue discussion - I break this task into two separate ones: one for creating the virtualenv and a second for installing the requirements.

What is different in these tasks is that I'm also using templates for supervisor and Nginx. They have a j2 ending as ansible uses the jinja2 template system. During the ansible run, they will be copied to the given dest. At the end of the tasks with templates I have the notify keyword, which tells ansible to look in the handlers folder for tasks that restart services. In my case they look as follows:


- name: reread supervisor
  supervisorctl:
    name: geodjango_leaflet
    state: present

- name: restart nginx
  service:
    name: nginx
    state: restarted
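The supervisord.conf.j2 template itself isn't shown in this post. A minimal sketch could look like the following - the program name matches the supervisorctl handler above, but the gunicorn command, module path and port are my assumptions, not taken from the original repo:

```ini
[program:geodjango_leaflet]
; run the app from the project virtualenv; module path and port are assumed
command=/opt/venv/bin/gunicorn geodjango.wsgi:application --bind 127.0.0.1:8000
directory=/opt/geodjango
autostart=true
autorestart=true
```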

The last role is redis. Its code installs redis-server and starts it:


- name: ensure redis packages are installed
  apt:
    name: "{{ item }}"
  with_items:
    - redis-server

- name: ensure redis is started
  become: true
  service:
    name: redis-server
    state: started
    enabled: yes

My thoughts and feelings about ansible

I have to say I'm really impressed by how easy it is to write ansible tasks. With puppet, I had the problem that I needed to look for modules on puppet forge or write my own. Here everything is included. You want to use django commands - use django_manage; you need to reread the supervisor config - use the supervisorctl module. Really easy and fun to work with. I can quickly get a job done and move on to other stuff.

Yet, I don't know how ansible will behave when it comes to provisioning a large number of machines. Here I have only one host and it goes smoothly, but for sure, when I need to provision my private machines, I will choose ansible.

That's all for this week's blog post! Feel free to comment - I really appreciate it.

Repo with this code is available on github.

Cover image from Unsplash under CC0.


Transcoding with AWS- part five

This is the last blog post in this series - the only thing left to be done is telling the user that the file he or she uploaded has been processed. It will be done by writing a custom message application.

How message application should work

From the previous post I know that the last point of my application flow is to inform the user that the file is transcoded and ready to download. To do that, I have to display a message on every webpage the current user visits. This message should contain information about which file was processed. First I wanted to do this with the existing django messaging framework, but as it turns out, it works only within a single request. As I decided to show the message to users until they dismiss it, I had to write my own small application.

Implementation in django

In my newly created application I created the following model:

from django.db import models
from django.contrib.auth.models import User

class Message(models.Model):
    text = models.CharField(max_length=250)
    read = models.BooleanField(default=False)

    def __str__(self):
        return self.text

I decided to display a message only when it hasn't been read. Based on that, I can now use it in the endpoint that works with AWS (in the audio_transcode app):

def transcode_complete(request):
    # rest of the code is in the previous blog post
    if json_body['Message']['state'] == 'COMPLETED':
        # file_id comes from the SNS message (see the previous post)
        audio_file = AudioFile.objects.get(id=file_id)
        Message.objects.create(
            text='Your file {} has been processed'.format(audio_file)
        )
    return HttpResponse('OK')

Now that my message is created, it's time to display it to the user. To do that, I have to add the message to the template context. It can be done by creating your own context processor:

from .models import Message

def message_context_processor(request):
    if request.user.is_anonymous():
        return {'messages': []}
    return {'messages': Message.objects.filter(read=False)}

And registering it:

        # rest of options
        'OPTIONS': {
            'context_processors': [
                # rest of context processors
            ],
        },

And adding the message markup to a django template:

{% if messages %}
  {% for message in messages %}
    <div class="alert alert-success alert-dismissible" data-message-id="{{ message.id }}" data-message-url="{% url 'messages:read-message' %}" role="alert">
      <button type="button" class="close" data-dismiss="alert" aria-label="Close">
        <span aria-hidden="true">x</span>
      </button>
      {{ message.text }}
    </div>
  {% endfor %}
{% endif %}

Which renders as follows:

Transcode complete message

In the previous screenshot, there is an X that dismisses the message and marks it as read. To communicate with the backend I wrote a quick jQuery script:

var csrftoken = Cookies.get('csrftoken');

function csrfSafeMethod(method) {
    // these HTTP methods do not require CSRF protection
    return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}

$.ajaxSetup({
    beforeSend: function(xhr, settings) {
        if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
            xhr.setRequestHeader("X-CSRFToken", csrftoken);
        }
    }
});

$('.alert').on('close.bs.alert', function(event) {
    $.ajax({
        url: $(this).data('message-url'),
        method: 'POST',
        data: {'message_id': $(this).data('message-id')}
    });
});

Going from the top: django by default uses a csrftoken, so I have to get it so that my request passes the CSRF check. I'm using a library called js-cookie here. In ajaxSetup I tell jQuery to always send the csrftoken with ajax requests. Below that, I add an event listener to the element that has the .alert class. This event - close.bs.alert - is provided by bootstrap. On triggering this event I send an ajax POST to the url from the data attribute on the alert element - data-message-url. The data that I send is taken from the data-message-id attribute on the alert's div. What does the endpoint for receiving such messages look like? See below:

from .models import Message
from django.http import HttpResponse

def read_message(request):
    message = Message.objects.get(id=request.POST['message_id'])
    message.read = True
    message.save()
    return HttpResponse('OK')

Here I take the message_id, set read to True and save the message.

That's all for this blog post and the whole series! I know that this design has certain flaws, like: what if there is more than one user? Everybody will see everyone's messages. If you have an idea how to fix that, please write in the comments below.

Other blog posts in this series

The code that I have made so far is available on github. Stay tuned for next blog post from this series.

Cover image by Harald Hoyer under CC BY-SA 2.0, via Wikimedia Commons

