From Flux to Redux: Async Actions the easy way

The Flux architecture is based on some great ideas (check out our visual cheatsheet if you want to know more), but we found a big flaw when implementing it through Facebook’s own Dispatcher library. This brought us to Redux, a Flux-inspired library that received high praise from the Flux creator herself.

Redux is very young and really tiny at 2kB, but it is already incredibly stable and feature-rich. It is also a very unopinionated library, and its just-completed (well, almost) docs are keen to point out that you can do things in many different ways. This is great if you already have strong preferences, but it doesn’t help the newcomer.

One thing some of us found particularly hard to digest was the documentation for Async Actions, so I wrote a simplified version to use with my team. You can find it below; it is based on the Todo List example in the “Basics” section of the docs: if you haven’t read that already, you should do so now.

Async Actions the easy way

Unlike in Flux, action creators in Redux must be pure functions with no side effects. This means that they can’t directly execute any async code, as the Flux architecture instead suggests, because async code has side effects by definition.

Fortunately, Redux comes with a powerful middleware system, and one specific middleware library, redux-thunk, lets us integrate async calls into our action creators in a clean and easy way.

First of all, we need to integrate the redux-thunk middleware into our store. We can do this in the Todo List example by changing a few lines inside index.js; from this:

import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import App from './containers/App';
import todoApp from './reducers';

let store = createStore(todoApp);

To this:

import React from 'react';
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import { Provider } from 'react-redux';
import App from './containers/App';
import todoApp from './reducers';

let store = applyMiddleware(thunk)(createStore)(todoApp);

Before this change, store.dispatch could only accept simple Action objects. With redux-thunk, dispatch can also accept a thunk function.
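Under the hood, the whole idea behind redux-thunk fits in a few lines. The sketch below is a simplification for illustration, not the library’s actual source; the fake store is there only to demonstrate the two code paths.

```javascript
// simplified sketch of the redux-thunk idea - not the library's actual
// source: functions get executed with (dispatch, getState), plain
// action objects get passed on to the reducers untouched
const thunkMiddleware = ({ dispatch, getState }) => next => action =>
  typeof action === 'function'
    ? action(dispatch, getState) // thunk: run it, let it dispatch later
    : next(action);              // plain action: business as usual

// a tiny fake store, just to demonstrate the two paths
const dispatched = [];
const fakeStore = { dispatch: (a) => wrapped(a), getState: () => ({}) };
const wrapped = thunkMiddleware(fakeStore)((action) => dispatched.push(action));

wrapped({ type: 'PLAIN' }); // goes straight through to the "reducers"
wrapped((dispatch) => dispatch({ type: 'FROM_THUNK' })); // runs, then dispatches
```

After both calls, `dispatched` contains the plain action followed by the one dispatched from inside the thunk.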

  • If dispatch receives an Action object, redux-thunk will do nothing: the reducers will get called with the action as usual and change the state synchronously.
  • If instead dispatch receives a thunk function, redux-thunk will execute it. It is then the thunk function’s responsibility to actually dispatch an action object. More importantly, the thunk function doesn’t need to be pure, so it can contain async calls, and it can dispatch a (synchronous) action only after the async call has finished, using the data it received from that call. Let’s see this with a simple example for the Todo app.

In a real app, we’ll want to keep our Todo list synchronized with our server. For now we’ll only focus on the addTodo action creator; the one from the Basics section only updates the state on the client. Instead, we want it to:

  • Optimistically update the local state and UI
  • Send the data to the server, so it can update the DB
  • Manage the server response

To achieve this, we still need our “optimistic” addTodo action creator, but we also need an async thunk action creator to handle the whole async process. To keep the changes to the code at a minimum, we’ll rename the optimistic action creator, but not the action type that is sent to the reducers (and used in their switch statements to act on it). This code goes into actions.js:

// renamed optimistic action creator - this won't be called directly
// by the React components anymore, but from our async thunk function
export function addTodoOptimistic(text) {
  return { type: ADD_TODO, text };
}

// the async action creator uses the name of the old action creator, so
// it will get called by the existing code when a new todo item should
// be added
export function addTodo(text) {
  // we return a thunk function, not an action object!
  // the thunk function needs to dispatch some actions to change
  // the Store state, so it receives the "dispatch" function as its
  // first parameter
  return function(dispatch) {
    // here starts the code that actually gets executed when the
    // addTodo action creator is dispatched

    // first of all, let's do the optimistic UI update - we need to
    // dispatch the old synchronous action object, using the renamed
    // action creator
    dispatch(addTodoOptimistic(text));

    // now that the Store has been notified of the new todo item, we
    // should also notify our server - we'll use the ES6 fetch
    // function here to post the data
    fetch('/add_todo', {
      method: 'post',
      body: JSON.stringify({ text })
    }).then(response => {
      // you should probably get a real id for your new todo item here,
      // and update your store, but we'll leave that to you
    }).catch(err => {
      // Error: handle it the way you like, undoing the optimistic update,
      // showing an "out of sync" message, etc.
    });

    // what you return here gets returned by the dispatch function that
    // used this action creator
    return null;
  };
}

The above code and its comments show how you can do async calls, mixing them with optimistic updates, while still using the usual syntax: dispatch(actionCreator(...)). The action creator itself is still a pure function, but the thunk function it returns doesn’t need to be, and it can do our async calls for us. If needed, the thunk can also access the current store state, as it receives getState as its second argument: return function(dispatch, getState) { ...
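As a quick illustration of getState, here is a sketch of a thunk that consults the current state before dispatching. Both the addTodoIfNew helper and the { todos: [{ text }] } state shape are assumptions made for this sketch; they are not part of the Todo app from the docs.

```javascript
const ADD_TODO = 'ADD_TODO';

// hypothetical thunk action creator: it reads the current state through
// getState and only dispatches if the todo isn't already in the list
// (the { todos: [{ text }] } state shape is an assumption of this sketch)
function addTodoIfNew(text) {
  return function(dispatch, getState) {
    const exists = getState().todos.some((todo) => todo.text === text);
    if (!exists) {
      dispatch({ type: ADD_TODO, text });
    }
  };
}
```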

That’s all! If you want to learn more about Redux, you can find the full docs here:

Flux Architecture Visual Cheatsheet

We recently started experimenting with the React library, and, right from the start, I really liked what I saw.

After that, I also went to study the Flux architecture, and, honestly, I found the official overview and tutorials rather confusing.

After studying those better, and after some more reading around, I think I understood where the confusion came from. Flux is itself – just like the whole JavaScript ecosystem – in a period of very intense flux, and the docs aren’t really keeping up.

I realized that the Todo List tutorial – the first one you’re supposed to learn from, as I dutifully did – illustrates an older version of Flux, and isn’t even up to date with the recent changes to the Todo List example code on GitHub. I’m pretty sure the Facebook team will fix this soon, but for now it really doesn’t help.

Anyway, in the end, I understood that Flux is much simpler than many wordy explanations make it look. To spare my team my initial pains in getting it, I created a diagram – a visual cheatsheet – to better illustrate how it works, based on the latest code examples by Facebook. The team found it useful, so I decided to give back to the community and publish it here.

It’s meant to be a companion to the official docs and tutorials, not a complete explanation. But most of the core concepts are there. The code snippets are from the Chat example app.
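To give a taste of how simple the core really is, here is a minimal sketch of the Flux unidirectional data flow in plain JavaScript. It is a conceptual illustration only, not Facebook’s actual Dispatcher implementation, and the RECEIVE_MESSAGE action is borrowed in spirit from the Chat example.

```javascript
// minimal sketch of the Flux unidirectional data flow (a conceptual
// illustration, not Facebook's actual Dispatcher implementation)
function createDispatcher() {
  const callbacks = [];
  return {
    // every store registers a callback...
    register(callback) { callbacks.push(callback); },
    // ...and every dispatched action reaches every registered store
    dispatch(action) { callbacks.forEach((cb) => cb(action)); }
  };
}

const dispatcher = createDispatcher();

// a tiny "store": it owns its data and changes it only via actions
const messages = [];
dispatcher.register((action) => {
  if (action.type === 'RECEIVE_MESSAGE') messages.push(action.text);
});

// an "action" just packages data and goes through the single dispatcher
dispatcher.dispatch({ type: 'RECEIVE_MESSAGE', text: 'hello flux' });
```

Views would then re-render when the store signals a change; everything flows one way, action → dispatcher → store → view.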


EDIT: some people are having problems with the link or want to download a simple image. This will be uglier when zoomed and without links, but here you go with a simple PNG export: Flux Architecture Visual Cheatsheet (png)

UPDATE: after trying to use Flux with Facebook’s own Dispatcher library, we decided to use Redux instead. You can read some background in my new post about Redux Async actions the easy way.

Save Even More Time On Hacker News

Some months ago I published a Chrome extension I had developed for myself to save time on HN; it allowed you to “mark as read” the news items that you already read or scanned.

The Chrome store tells me that 770 users are using it weekly, so I decided to take the time to update it with some more features that I’ve been experimenting with lately and found very useful.

Mark comments as read

The new feature I found most useful is the ability to also mark comments as read.

Using the orange button, you can mark all comments in the page as read. If you do so, those comments will appear with a slightly darker background, so when you reload the page you can easily spot the new ones – I found this a real time saver!

If you want to find the new comments even faster, you can check the “Hide read comments” option. This mode can be a little confusing, as you lose the context for the comment, but sometimes I found it useful anyway.

Follow comments for a news item

Sometimes, even if you already read an item and its comments, you want to follow the discussion as it develops, or be notified if a discussion starts at all. Instead of just leaving the comments page open, you can now tick the “Follow comments” checkbox (see picture above); when you do so, the news item will be shown in the home page even if it was marked as read and you are hiding read items.

Most importantly, if there are new comments since you marked all comments as read for that item, it will be shown in green text on the home page. This can also be a good time saver – if you resist the urge to follow too many discussions, at least.

Collapse comment threads

This is a pretty obvious feature, but it helped me save time when, for example, the first comment for an item is something contentious that generated a long thread I’m not interested in. Just click the little “-” symbol to the left of the root of the thread, and the thread will collapse.

You can also collapse all threads with the “‐‐” to the left of the “Follow comments” checkbox. “+” and “++”, of course, will respectively expand one thread and all threads.

Show parent comment

I left the fanciest feature last; I don’t use it very often, but when I do I find it incredibly useful.

I like threaded comments, but sometimes you just get lost in them: you read a long sub-thread, then you find a comment answering something, but you can’t understand what it’s answering to. How can you find that out? Enter “show parent”.

With this feature, when a comment is part of a thread, but not the first answer to its parent, a “show parent” link appears. You can hover your mouse pointer over that link, and the mysterious parent is shown right above it. I really like this solution because it is non-intrusive and (in my opinion) solves a real pain of threaded comments. The only serious limitation I see is that it wouldn’t work well on touch screens, but if somebody wants to adapt it for that use case, it could become a touch-to-toggle.

Wow! How much do I need to pay for all this?

Nothing! You can get the extension for free from the Chrome store.

Other browsers? It’s open source

I packaged this as a Chrome extension because it’s just too convenient. If you want to use it on some other browsers, though, you’re free to get the source code on github and adapt it any way you want. Happy hacking!

Shameless plug

Did you like this project? Do you need somebody to help you develop your web app or idea? I have just become available for consulting projects. If you need an expert at translating ideas into software solutions, from requirements analysis to full stack development, check out what I could do for you here.

ABalytics.js: Client-side A/B Testing With Google Analytics

I recently wanted to conduct a little bit of A/B testing on a project of mine, so I went exploring – yet again – the free options.

There are lots of very good server-side solutions, but what I wanted was one that I didn’t need to configure on the server, and that could be integrated into Google Analytics, where I already track my websites’ performance. I found some how-tos on the web about using GA custom variables for A/B testing, but no ready-made libraries to do so.

So I created my own JavaScript library; it is so simple that I decided to implement it without any external dependencies, so that if in the future I want to use it without jQuery (which I usually use) I’ll be able to do so. It was also simple enough that a minimum of polishing made it ready to publish, to give back to the community.

These are the main features:

  • Easy to set up
    • You just list the possible variants, the randomization is handled automatically
    • You just mark the html elements you want to test on with a class, the substitution is automatic
    • No need to set anything up server side, all the results are stored on Google Analytics
  • Consistent user experience: The selected variant is stored in a cookie, so the user will see the same one when coming back
  • No external dependencies: Pure javascript, you just need to include GA
  • Flexible: You can conduct multiple, independent experiments at the same time. Each experiment will use a custom variable slot
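The core mechanism behind these features can be sketched in a few lines of plain JavaScript. The function names below are hypothetical, chosen for illustration only; see the GitHub repo for the real API.

```javascript
// sketch of the core technique (hypothetical names, not the library's
// real API): pick a variant once, remember it, report the choice to GA

// reuse the stored variant index if we have a valid one (e.g. from a
// cookie), otherwise assign one at random - each visitor is assigned
// exactly once, so the experience stays consistent across visits
function pickVariantIndex(variantCount, storedIndex) {
  if (Number.isInteger(storedIndex) && storedIndex >= 0 && storedIndex < variantCount) {
    return storedIndex;
  }
  return Math.floor(Math.random() * variantCount);
}

// apply the chosen variant: substitute the HTML of every element marked
// with the experiment's class name, and return the variant name so it
// can be stored in a Google Analytics custom variable slot
function applyVariant(experiment, index, setHtmlForClass) {
  const variant = experiment.variants[index];
  setHtmlForClass(experiment.name, variant.html);
  return variant.name;
}
```

Analytics then does the rest: segmenting any GA report by the custom variable shows how each variant performs.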

I’m releasing the library with an MIT license, and you can find it – with basic instructions on how to use it – on Github. Enjoy!

OpenVZ howto addendum: Migrating your virtual machines to a new server

As I wrote some time ago, creating your own OpenVZ infrastructure is very easy, and it can save you a lot of money compared to the usual VPS providers. But it also has other advantages.

Hashtagify uses redis to store all its data in memory, which is great for performance, but… it requires a lot of RAM. So it happened that I had almost filled my 24GB, right when I learned about a great new offer for a 32GB server which costs 33% less than my 24GB one. Moving to it means that I’m going to recover my 150€ of activation costs in just 5 months, while at the same time gaining 8GB. Great!

This is why I bought a new server and decided to move all my virtual machines to it from the old one. It turned out to be quite easy, but I had to find the instructions in different places, so I’ll post here how I did it for both my future reference and for anybody else having the same problem.

First of all, you have to install OpenVZ: I did it just following my previous howto.

Then, you can migrate your machines. I thought I would need to back them up and then restore them in the new server, but I found out there is a more direct and much faster way to do it: The vzmigrate command.

To use it, you first need to enable password-less ssh login from the old server to the new one (this is because vzmigrate uses the ssh protocol to move the files). First, generate your public and private keys if you didn’t already:

ssh-keygen -t rsa

It is important that you create it with no passphrase. Then, move the public key to your new server, like this:

scp /root/.ssh/ root@XX.XX.XX.XX:./

Now log into the new server and do

cd .ssh/
touch authorized_keys2
chmod 600 authorized_keys2
cat ../ >> authorized_keys2
rm ../

After this, in the old server check that everything is ok with:

ssh -2 -v root@XX.XX.XX.XX

If everything is ok, you should be logged in without being asked for a password. You can log out and start migrating your machines. For each machine, for example the one with id 101, you do:

vzmigrate -r no XX.XX.XX.XX 101

This will take a pretty long time, depending on how big the virtual machine is and how fast the connection between the two servers is. There is no progress indicator, but you can get an idea of how it is proceeding by comparing the original size of the machine with the size of the copy, using the command

du -hs /vz/private/101

on the old and the new server respectively (this, of course, only works if you mapped the OpenVZ data directory to /vz as shown in the original howto).

When the command finishes, the new machine is ready… but only if you could move the VM IP from the old to the new server. In my case that wasn’t possible, so I had to ask for new IPs on the new server. You can assign the new IP, and remove the old one, like this:

vzctl set 101 --ipadd NEW.NEW.NEW.NEW --save
vzctl set 101 --ipdel OLD.OLD.OLD.OLD --save

So, now you’re really good to go:

vzctl start 101

And that’s it!


Is the open source/internet singularity coming?

Many have heard about the idea, popularized by Ray Kurzweil, that a technological singularity is coming. In a nutshell, the idea is that as the power of computers is growing exponentially, at some point it will bring about an artificial intelligence that will overtake human intelligence, reinforce itself and change everything beyond any (human) imagination.

I personally don’t believe that the creation of an AI able to compete with (let alone surpass) human intelligence is near at all. But while working on my latest web project, I noticed just how much easier it has become to create incredibly powerful and attractive new software than it was just a few years ago. And all this thanks to Open Source and the internet.

Increasingly, we see amazing new software libraries, frameworks, programming environments being released as (Free) Open Source on the internet. This makes it progressively easier for other people, even solo developers working from home in their free time, to create great software, and often give something back to the Open Source community.

Not just that; the internet is making it easier and easier to find the best new pieces of software, to get answers to the most difficult programming questions, to circulate ideas and to publish the end results of it all. And to bring this all to even the most isolated programmers, in the remotest parts of the world.

Isn’t this all that’s needed to create an exponential growth of software innovation? I guess it is. And I don’t know what this will bring about, but it is a fact that information technology is disrupting lots of industries, with consequences that are harder and harder to predict – as unfortunately the current economic crisis is showing.

I don’t know if this could be the real technological singularity, even without artificial intelligence in the mix, but it could come damn close.

Got half an hour to kill? Make it useful to the community!

Recently I was looking for an open source engine to create a stackoverflow-like website in Italian about mortgages. The best alternative I found is Shapado, which is also Rails-based, always a plus for me.

Anyway, on their website I found out that they’re going to release a new version soon, and that it was still missing an Italian translation. After a little investigation, I also found out that everybody can contribute to that translation on

I had never heard of that website, which is built on MediaWiki. I decided to give it a try, and after subscribing and waiting a little for approval, I started translating the latest version of Shapado into Italian.

I would never have believed it, but it was kind of fun! The web translation interface is really well thought out, the Google and Bing suggestions are quite helpful (and sometimes funny), and it only took me a couple of hours to translate all the parts that weren’t already translated from the previous release.

The best part is that this is something that you can do whenever you have some free time and a web connection; you are immediately productive, and seeing the list of missing translations becoming shorter by the minute really gives you a good feeling of accomplishment.

So, if you know one language other than English and have a little time to spare, what are you waiting for? There are already 20 open source projects on translatewiki waiting for your help. Give them a hand!