I have been waiting for the right moment to put my newly acquired (modest) node.js skills to the test. The opportunity arose when my wife’s cousin asked me if I could make him a simple web site to showcase his woodworking shop. All he really wanted was a way to put some images and a couple of paragraphs of text online, but he also wanted to be able to update the images himself and add more pictures as his work went on. I could have spent a couple of hours looking for the simplest flat-file CMS of the moment, but I decided to make him just a stock Bootstrap static site and buy some time until I figured out how to build it properly with node.js.

In the end, I opted for an interesting albeit simple stack. I used my DigitalOcean droplet, reckoning that it would be more than enough - it is currently hosting my (static) Hugo site and a small flask app. For the image storage I chose Cloudinary - an interesting image hosting cloud provider sporting a complex API that includes dozens of transformations. To keep track of the images’ locations (URLs) and titles, I used a mongo collection (MongoDB Atlas, free tier). Finally, I added a SendGrid account for sending automated emails. I wrapped everything (basically three APIs) in a small express app: the admin is able to log in, upload images to Cloudinary, set their titles and cropping options… and that’s it, pretty much. Everything is hosted via Nginx’s reverse proxying.

This setup is overkill, I am well aware of that - this could easily be accomplished in a myriad of ways, but I find it a nice, adaptable setup that could easily be upgraded, and it is a developer-friendly project: I have a login/register system (which the client will not use - only one admin, and maybe me), mail sending that could be reused pretty much everywhere, unobtrusive image hosting without hitting server limits… but that is not the point. The point is that during this time I was rewiring a pretty old flask setup that serves a very similar purpose: image hosting, some image processing (Pillow, PIL) and some authentication, on an SQLite instance - so I was forced to quickly shift my mind from node to python and vice versa.

It is different, I won’t lie. I still make silly mistakes with curly brackets and regular brackets, and keeping track of promises and the various .then() and .catch() clauses causes me some trouble. I cannot say I prefer python - I am well aware that I am just used to it (and I still love it, mind you), but every language and even every framework has its own way of doing things, a certain flow, and I must say that I really like the Express.js flow. Of course, I kept comparing it with flask, and, at least at this (initial) stage, it seems like an even more freewheelin’ tool: where flask uses the pretty simple flask-login extension, in Express I found myself stranded between rolling a handmade custom login system or using the “magic” juggernaut Passport.js that is becoming bigger than Express.js itself.

I found the module management via require to be very straightforward and the recommended app structure/decoupling to be logical: routers, templates, one entry point for the configuration etc. I learned to appreciate the npm universe: mongoose seems very intuitive, the other (few) packages that I used were well documented (SendGrid, Cloudinary). The biggest difference, of course, remains the asynchronous nature of the code, the promises, the async/await cycles, the paradigm shift.
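
To illustrate that decoupling: the entry point does little more than wire up the routers, which live in their own modules. A schematic sketch (the route paths and file names here are my assumptions, not the exact ones from the repo):

const express = require('express');
const app = express();

// each router lives in its own module and is mounted here
app.use('/', require('./routes/index'));
app.use('/pictures', require('./routes/pictures'));

// the node process listens on a local port; Nginx proxies to it (see below)
app.listen(5000, () => console.log('Server started on port 5000'));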

In the following paragraphs I will insert some of the more interesting code; the full project is in the Github repo.

MongoDB setup and schemas

I opted for a free mongodb instance on MongoDB Atlas: the tier is free and it’s a breeze to set up. After the registration and the creation of an initial database, I needed just two collections - one for keeping track of users and another one for the images. The app stores just the metadata provided by Cloudinary (the public URL), the insertion date and the owner/uploader of the image, as well as the usernames and hashed passwords.

The code for connecting to the DB is very short:

// DB config
const mongoose = require('mongoose');
const db = require('./config/keys').MongoURI;

// connect to mongo
mongoose.connect(db, { useNewUrlParser: true })
    .then(() => { console.log('MongoDB connected'); })
    .catch((err) => { console.log(err); });

The “models” aka schemas are also surprisingly simple:

const mongoose = require("mongoose");

const PictureSchema = new mongoose.Schema({
    title: {
        type: String,
        required: true
    },
    url: {
        // the delivery URL returned by Cloudinary
        type: String,
        required: true
    },
    date: {
        type: Date,
        default: Date.now
    },
    front: {
        // whether the picture is shown on the front page
        type: Boolean,
        default: false
    },
    public_id: {
        // Cloudinary's identifier, needed later for deletions/transformations
        type: String,
        required: true
    },
    user: {
        type: String,
        required: true
    }
});

const Picture = mongoose.model('Picture', PictureSchema);

module.exports = Picture;

And that’s it. Granted, I haven’t had the need for complex relationships and fields, but this is really quite neat nevertheless.
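
As a taste of how pleasant querying is, here is a small usage sketch (mine, not from the repo) that fetches the front-page pictures, newest first:

const Picture = require('./models/Picture');

// fetch the pictures flagged for the front page, newest first
Picture.find({ front: true })
    .sort({ date: -1 })
    .then(pictures => console.log(pictures.map(p => p.url)))
    .catch(err => console.log(err));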

Cloudinary setup

Setting up Cloudinary is a tad more complicated: after configuring the cloud name, the API key and the API secret, we get an object called cloudinary. It is important to remember to always use v2 of the API.
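
The configuration boils down to something like this - a minimal sketch, assuming the credentials live in the same config/keys file as the other secrets (the key names are my guesses, not the repo’s):

const cloudinary = require('cloudinary');
const keys = require('../config/keys');

// register the account credentials once, at startup
cloudinary.config({
    cloud_name: keys.CLOUD_NAME,            // assumed key names
    api_key: keys.CLOUDINARY_API_KEY,
    api_secret: keys.CLOUDINARY_API_SECRET
});

With that in place, the relevant upload code is the following: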

router.post('/new', (req, res) => {
    // `upload` is the multer middleware (configured elsewhere) that
    // parses the multipart form and exposes the file as req.file
    upload(req, res, (err) => {
        if (err) {
            console.log(err);
            return res.redirect('/');
        }
        if (req.file == undefined) {
            console.log('NO FILES');
            return res.redirect('/dashboard');
        }
        console.log(req.file.path);
        // upload straight to Cloudinary, resizing to fit the predefined box
        cloudinary.v2.uploader.upload(
            req.file.path,
            { width: upload_width, height: upload_height, crop: "fit" },
            (err, result) => {
                if (err) {
                    console.log(err);
                    return res.redirect('/dashboard');
                }
                console.log(result.url);
                // the "front" checkbox marks the image for the front page
                const front = req.body.front != undefined;
                const newPicture = new Picture({
                    title: req.body.title,
                    url: result.url,
                    public_id: result.public_id,
                    front: front,
                    user: req.user.id
                });
                newPicture.save()
                    .then(() => {
                        req.flash('success_msg', 'Picture inserted successfully');
                        res.redirect('/dashboard');
                    })
                    .catch(err => console.log(err));
            }
        );
    });
});

I have predefined some image upload parameters, such as the width and the height of the pictures - currently 800x600, but they can be extended or reduced, and they could even be exposed in the form for dynamic sizing. The whole process is asynchronous and the images are not kept on the server - they are pushed to Cloudinary on the fly, and afterwards we are greeted with a big fat json response containing all of the metadata.

I must point out that I haven’t even scratched the surface of what Cloudinary has to offer when it comes to image processing, filtering, bending and whatnot. Their processing API is conveniently structured in the form of URLs and you should really look into their excellent docs. I used only the cropping, but I feel that once you take apart the delivery URL of your images, you can insert endless filters, cutting and cropping, and processing tools to be applied on the fly. I haven’t been able to find a way to store all of the images in one folder, which would be convenient for separating different projects, but I haven’t really had the time to dive in.
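
To give a flavour of those URL-based transformations, the v2 API can build the URLs for you - a hedged sketch, reusing the public_id stored in mongo (the parameter values are arbitrary examples of mine):

// build a transformed delivery URL from a stored public_id;
// the transformations become path segments in the URL itself
const url = cloudinary.v2.url(picture.public_id, {
    width: 400,
    height: 300,
    crop: 'fill',
    effect: 'grayscale'
});
// roughly: https://res.cloudinary.com/<cloud>/image/upload/c_fill,e_grayscale,h_300,w_400/<public_id>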

Stuff to read

nice read - inspiration for another project

cloudinary node.js docs - how to upload images

SendGrid setup, domain whitelisting

I considered several options for sending email. I really didn’t need anything serious for this project - we do not plan any newsletter or email marketing for the time being. I just wanted to stick an old-school contact form on the site and be able to send emails to the owner of the shop. Having to email people through forms is one of the most dreadful things in the wwworld: I hate it and I wish it would disappear completely. Yet, here I am, contributing to this disease.

Sign up with SendGrid, get a unique API key and put it in the config keys file. From there it is just a matter of choosing their recommended setup or using something else. I opted for the latter, choosing to maintain some flexibility should I ever decide to change the email provider:

const nodeMailer = require('nodemailer');
const sendgridTransport = require('nodemailer-sendgrid-transport');
const transporter = nodeMailer.createTransport(sendgridTransport({
  auth:{
    api_key: require('../config/keys').SENDGRID_API_KEY
  }
}));

The email sending is quite straightforward and I won’t delve into it. I should have made a nice template to be rendered before sending as well, but this is just for sending notifications to the site owner, so I decided that it shouldn’t be fancy. Anyway, all of the ingredients for managing a moderate-sized email campaign are there. I plan to hook up the email senders to the mongodb instance as well, maybe add an autoresponder, but those are the things that we’ll hopefully get to later.
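
Still, for completeness, here is a minimal sketch of such a notification, assuming the transporter from above (the route, addresses and field names are placeholders of mine, not the repo’s):

const express = require('express');
const router = express.Router();

// a hypothetical contact-form handler that mails the shop owner
router.post('/contact', (req, res) => {
    transporter.sendMail({
        to: 'owner@example.com',          // the shop owner
        from: 'noreply@example.com',      // a sender SendGrid will accept
        subject: 'New message from the site',
        text: req.body.message
    })
    .then(() => res.redirect('/'))
    .catch(err => console.log(err));
});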

You’ll have to make your DigitalOcean DNS table match some (three, to be precise) records from SendGrid in order to be able to remove the dreaded “via sendgrid.net” moniker from your emails. Gmail will simply not tolerate that line and will send your email to the spam folder without mercy. It is a pretty simple copy-paste procedure.
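
For illustration, the records look roughly like this - a hypothetical example, since the exact hostnames and targets are generated per account:

; three CNAME records, as generated by SendGrid's domain authentication
em1234.slasistem.rs.          CNAME  u1234567.wl123.sendgrid.net.
s1._domainkey.slasistem.rs.   CNAME  s1.domainkey.u1234567.wl123.sendgrid.net.
s2._domainkey.slasistem.rs.   CNAME  s2.domainkey.u1234567.wl123.sendgrid.net.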

Passport setup

Passport.js is complicated. It just is. While trying to accommodate every imaginable way of logging and signing in (I expect they will be releasing a coin-flipping authentication method soon), it has grown into a behemoth that made me wish I had started with it right away. Go read the docs, find some youtube tutorials (Net Ninja is great!) and start your app with the login system. You can thank me later.
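
For orientation, here is roughly what the local-strategy wiring looks like - a minimal sketch, assuming a User model with email and bcrypt-hashed password fields (the field names are my assumptions):

const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;
const bcrypt = require('bcryptjs');
const User = require('../models/User');

// check the submitted credentials against the mongo users collection
passport.use(new LocalStrategy({ usernameField: 'email' },
    (email, password, done) => {
        User.findOne({ email: email })
            .then(user => {
                if (!user) return done(null, false, { message: 'Unknown email' });
                bcrypt.compare(password, user.password, (err, isMatch) => {
                    if (err) return done(err);
                    return done(null, isMatch ? user : false);
                });
            })
            .catch(err => done(err));
    }
));

// store only the user id in the session, rehydrate the user on each request
passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser((id, done) => {
    User.findById(id).then(user => done(null, user)).catch(done);
});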

DigitalOcean setup

Let me begin by saying that Nginx is a powerful motherfucker. It is fast, it is able to serve almost anything at great speeds (so I am told), but the most impressive thing is its ability to work as a reverse proxy. Now, this is a concept that isn’t so easy to grasp: the idea is simple, but in order to get a working setup you will have to do some research.

What is reverse proxying? I currently host three sites on my DigitalOcean virtual Ubuntu server: an image gallery app running on python/flask/wsgi, this (static) Hugo site, and now the nodejs app. Nginx is the magic ingredient that enables all of these guys to coexist peacefully and work in unison, without fighting, interfering or even seeing each other.

As a request arrives, Nginx determines which of the three domains it is addressed to and dispatches the corresponding response. If it is for this static site, it simply serves the directory which hosts all of the files. If it is for the nodejs site, it finds the port on which the node process is listening and proxy_passes the request to it. If it is for the flask guy, it gets the response through the socket file that enables communication with the python wsgi server - gunicorn.
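
Schematically, it is just three server blocks, one per domain, each dispatching in its own way (this is a sketch with made-up domains and paths, not my literal config):

# the static Hugo site - just serve files from a directory
server {
	server_name blog.example.com;
	root /var/www/blog/public;
}

# the node app - proxy to the port the process listens on
server {
	server_name slasistem.rs;
	location / { proxy_pass http://localhost:5000; }
}

# the flask app - proxy to the gunicorn unix socket
server {
	server_name gallery.example.com;
	location / { proxy_pass http://unix:/run/gallery.sock; }
}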

You will have to do some reading - I do not have the time, nor the skills to write a full-fledged Nginx setup tutorial. Your starting point should always be the DigitalOcean docs - they have great walkthroughs for just about any setup you might need (Django, node, flask, php, static).

My setup for this site is the following:

server {
	listen 80;
	listen [::]:80;
	# root /var/www/slasistem/html;
	
	index index.html index.htm index.nginx-debian.html;

	server_name slasistem.rs www.slasistem.rs;

	location / {
		proxy_pass http://localhost:5000;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
		proxy_set_header Host $host;
		proxy_cache_bypass $http_upgrade;
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		# try_files $uri $uri/ =404;
    }
}

Ok, what do we have here? The listen 80 directives just mean “listen on the default HTTP port”; the commented-out root directive is where I used to host the static version of the site. The server_name directive is important - it tells Nginx that this block applies to those domains. In the location context, the most important part is the first line: proxy_pass directs the request to the process on port 5000. I had to comment out the last line (try_files) in order for the setup to work - it took me like 15 minutes to figure that out.

This setup could and should be improved by letting Nginx handle the static files (css, js, images) instead of Express’s static middleware. Bear in mind that the bulk of the site’s load - the pictures - is handled by Cloudinary, so in this case it doesn’t make a huge difference. But it should be done, nevertheless.
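
That improvement would amount to one extra location block - a sketch, assuming the app keeps its assets in a public directory (the path is an assumption of mine):

# serve the static assets directly, bypassing the node process
location /static/ {
	alias /var/www/slasistem/public/;
	expires 30d;
}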

As always, I feel the need to recommend some good resources for learning node.js and express - the ones I used, borrowed from, and found helpful.

Anyway, the site is online (hopefully) and this setup feels really comfortable - it is flexible, simple to deploy (once you wrap your head around the numerous Nginx capabilities and nuances) and pretty scalable on a virtual server. I am looking forward to the next episode, in which I will maybe give sequelize a spin.