Writing a 360 Image Viewer Desktop App

Creating a cross-platform 360 Image Viewing Desktop App using Electron.js, Three.js and Materialize

Preface

In this book, we are going to learn how to build a full-fledged cross-platform desktop app which lets you view equirectangular 360 images. The app works by asking the user for a 360 image which is dragged and dropped into the window of the app or is selected from the explorer. The app receives this image and displays it inside the viewport. You can then use your mouse or keyboard to look around and view the image. We will also be covering how to package the code and distribute it as an exe or a dmg file to be eventually downloaded and consumed by the average consumer. So, without further ado… let’s start! The applications that we are going to be using can be downloaded for Windows or Mac.

Libraries Used

Electron.js:

Electron.js is a very popular library that lets you create Desktop Applications with HTML, CSS, and Javascript. These are technologies that were originally created for web development and now are being used to create Desktop Applications. Electron essentially runs with a front-end and a back-end all smashed into a single application. It’s like a full-stack web developer's wet dream. The front-end is run on Chromium and back-end is set up with Node.js. This basically means that the business logic of the whole application is controlled by one language… Javascript albeit in different environments. It is an open-source framework and is currently maintained by GitHub. Electron.js is used by a lot of popular applications like Slack, Visual Studio Code, Atom, Twitch.tv among others.

Three.js:

Three.js is a very popular library and API (Application Programming Interface) that helps us display 3D Graphics inside our browser. It abstracts the complexities of WebGL so that we can focus on building out the functionality of our project. Three.js contains Cameras, Scenes, Shaders, Geometry, Vectors and a host of other tools to help us create amazing 3D Graphics.

Materialize:

Materialize is a library that helps you create Material UIs with HTML, CSS, and Javascript. The library gives you access to Google’s Material Design Components which you can directly use inside your own applications. The library is built to be an out-of-the-box material design website designing solution.

Why Visual Studio Code?

You can use any editor you want for this project. Visual Studio Code is my weapon of choice. I absolutely love the simplicity that VS Code brings to the table. In fact, it combines the best of all the other code editors that I have used in the past. So, if you have not started using an advanced code editor yet, you do not have to look any further because VS Code is here to stay (for now). I would recommend that you use a code editor that has a built-in terminal. The shortcut to bring up the terminal in VS Code is Control + J for Windows and Command + J for Mac.

Prerequisites (brew, node, git)

This book assumes that you have a basic understanding of HTML, CSS and Javascript. Mostly Javascript, actually, because HTML is not really a programming language. Nor is CSS, and if anyone says otherwise, do not listen to them (unless they make valid points). You will also need to install `Node.js` and `git`. If you are on a Mac, the way to do it is by using Homebrew. `Node.js` can be installed on Windows from their website and for `git`, go here.

Setting up the Project with Git

Before we do anything, we need to create a location on our computer to actually store the code for this whole project. Now, I am a different kinda person and a weird programmer so I like to eat the cake upside down. The way I do it is, I first create an empty GitHub repo and then pull it down from the cloud. I know it’s unorthodox. But, you know… some rituals are important. GitHub is a service where you can store and share your code online. A lot of people confuse GitHub and Git. GitHub is the company that provides the service and Git is the technology that they use to store the code, okay? There are also other technologies like Mercurial and… ?? uhh… yeah. Mercurial. That is literally the only other one I know. So, let’s go to GitHub and create that repository.

To create the repo, you will need to log on to GitHub and create an account if you don’t have one. Click on the ‘Create a new Repository’ button and fill up the form. The name of the repo can be whatever you want, or you can just name it `360-vr-desktop-app`, which is what I did. I usually use lowercase letters separated by dashes for the name of a GitHub repo. Don’t worry if you are scared of using an illegal name. GitHub will correct you if you do something wrong. Apart from that, the sky's the limit. Use your uncanny naming skills to name the project whatever you want. No need to keep it civil here. Just don’t use profanity if it is a public repository. A little tidbit of advice.

After creating the git repository, we need to pull it down or clone it on our local machine. Cloning a repository is the process of essentially creating a copy of the repository on your local machine which you can make changes to and eventually, push those changes into the original repository online. This is the process of syncing the repository on your local machine with the GitHub repository. It’s a vicious cycle of awesomeness.

To clone the repo, all you need to do is go to the GitHub page of your repo and click on the `Clone` button. This will give you a pop-up which will contain an HTTPS link of where your repository lives on GitHub and we will be using this link with a git command in the terminal to clone the repository. Before you run the command, do make sure your terminal is pointing to the correct containing directory or else you will clone the repo in the wrong location and you don’t want that.

git clone https://github.com/link-to-repository/

Running the command after replacing the link with your actual specific GitHub repo link will clone it to your local machine. You can only run this command if you have git installed on your machine, though. I am sure you went through the prerequisites.

Jumping into our Code Editor

We can start working with our favourite `code editor` on this awesome project of ours. If you are wondering if this epic code editor is actually VS Code, then you are absolutely correct. Now, you don’t need to use VS Code if you don’t want to. You can use other code editors like Atom or Brackets or even Sublime Text. I prefer to use VS Code so I made this course using VS Code. Lego.

Setting up `package.json`

Open up the project in your code editor of choice and also, open up the terminal at the project directory. This is very easy with VS Code as it has a built-in Terminal which can be opened up at the project directory with the keyboard shortcut, Control + J on Windows and Command + J on Mac. Our project directory is currently empty unless you had selected the `Create readme file` option while creating the repository in which case, you would have a readme file in the directory.

We will be using NPM or the Node Package Manager to keep track and include all of the dependencies that we will come across while building this whole project. To use NPM to keep track of all of these dependencies, we need a package.json file. We can create it manually or we can let NPM create one for us.

npm init

Using the above command, npm will ask you a couple of questions which you will need to answer. Some of these questions are important and some of them can be skipped. I usually press `Enter` through all of them apart from the ‘entry point’ slot. I stop here for a moment to decide whether I am going to use the default, which is usually `index.js`, or another file as the starting point. In this case, we are making an electron app, so the right file is `main.js`. Write `main.js` in this slot and press `Enter` to move further.

After this process is completed, you get a package.json file in your directory which is ready to go. Let’s write a simple program which prints out `Hello, World?` to the console and execute it. But before we can write anything into `main.js`, we need to actually create it. Go to the terminal and type out the following command.

touch main.js

This command will create the `main.js` file in your directory. Open up `main.js` and write the standard console logging code which spits out `Hello, World?` into the terminal.

console.log('Hello, World?');

Save this file and run it in the terminal with a simple command.

node main.js

If everything is working as it should, this will print out ‘Hello, World?’ on the console. This is how you run Javascript locally with the help of node.js. Now, personally, I don’t like running the node application like this. The reason is that here the person who runs the project NEEDS to know the file that starts the application up. I would like to abstract that portion. The way we do this is by using NPM to store the command to run the project. So, open up your package.json file and add this to your `scripts` section.
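Something along these lines should do the trick (I'm assuming you kept `main.js` as the entry point, like I did):

```json
{
  "scripts": {
    "start": "node main.js"
  }
}
```

The rest of your package.json stays exactly as npm generated it; we are only touching the `scripts` section.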

To run the project, we can write the following command in the terminal.

npm run start

Now, doesn’t that look much better than typing out node all the time? The `scripts` section allows us to automate the execution of commands that we use repeatedly in our project. I usually even add other custom commands for testing and deployment in this section. I am a lazy dude and writing everything out all the time is something I can live without. The create-react-app component of the Reactjs project uses this section to build the whole react project when needed. This utility of NPM is beautiful and very useful.

Back to our project, you might be thinking though. Hey, we told NPM which file we were going to start out with as the `entry point of execution`. Why do we still need to write the file name in the start script then? Well, the answer is that we don’t. Instead of writing `node main.js`, we can very well write `node .` and everything will work just as well. Let’s do that now.

After making that change, running the project with `npm run start` runs the project with no issues at all and we see the interrogatory `Hello, World?` displayed on the console.

But we are not limiting ourselves to making an offline node application that prints out `Hello, World?`. We are making a full-fledged electron desktop app, so we will focus our attention on installing electron.js.

Installing Electron

Let’s open our terminal at the project root and run the following command to download the electron.js package.

npm install --save-dev electron

The `--save-dev` flag adds electron as a development dependency in the package.json file. In some cases, you might get an error which ends with something along the lines of `Permission Denied`. This happens when electron.js is denied the permissions it needs to download its external dependencies. You can instead use the command mentioned below in such a case.

npm install --save-dev electron --unsafe-perm=true

And if you still get the same error, you might need to prepend `sudo` to the above command. Sudo gives the command full access to your system, so I’d advise caution before jumping into it. The `--save-dev` flag adds electron.js to the package.json development dependencies section. We could very well install electron.js without using that flag, but it is not recommended. One reason is that we would then have no record of the dependency in package.json and could lose track of the dependencies our project uses.

I would also like to note here that electron.js is a dev dependency and not a production dependency. Once the project is built, the electron.js code becomes a part of the application itself and at that point, it’s not really a dependency. If the software package that we were using is not a development dependency and is actually something that is needed during production, we would use the `--save` flag instead of the ‘--save-dev’ flag.

Once installed, we will see a folder called ‘node_modules’ show up inside the root directory of our project. This folder contains all of the software packages that we have installed into our project. In our case, electron.js was the software package that triggered the creation of this folder and yes, the electron.js packages are stored inside this folder.

Now, pay very close attention to what we are going to do. You have to attempt this at home. When we push the repository to GitHub (in our case), we do not want to send `node_modules` along with it. The reason behind this is very simple. It’s too darn big and it contains a lot of files. Even without the `node_modules` folder, we can easily install all the dependencies listed in the package.json file by using the following command.

npm install

Ignoring Files we don’t need

We want `git` to ignore the `node_modules` folder. The way we tell git to ignore it is by creating a `.gitignore` file and adding this folder inside it. So, let's do that.

touch .gitignore

The above command will create the file for you. Now, open up the file and write the following code inside it.

node_modules/

If you are using VS Code or a code editor that has built-in support for `git`, you will see the `node_modules` folder go grey (or similar). It’s fine if this doesn’t happen though. Not like our life depends on it or anything, but it’s a good indicator that this actually works.

Git Push

To push the repository to GitHub, we are going to use the following commands,
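The usual trio looks something like this (the commit message is just my placeholder; I'm also assuming the default remote `origin` and the default branch `master` that GitHub used at the time):

```shell
# Stage every file that is not ignored by .gitignore
git add .

# Record a snapshot of the staged files with a descriptive message
git commit -m "Initial project setup"

# Push the commit up to GitHub
git push origin master
```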

These commands add all of the not-ignored files to our git repo, commit them and push them to the GitHub location in the cloud, thereby syncing our code online.

Base Electron App

Okay, now before you get annoyed at me, we are going to start writing our electron.js app so open up main.js and write the first line.

const electron = require('electron');

Now, the constant `electron` points to the electron.js software package that we downloaded into `node_modules`, and we can access the functionality that it provides. But, we can actually import only the modules that we need from the `electron` package. So instead, we replace this line with the following line.
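That replacement line uses destructuring to pull out just the two modules we need:

```javascript
// Pull only `app` and `BrowserWindow` out of the electron package
const { app, BrowserWindow } = require('electron');
```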

The above line is equivalent to writing,

const { app, BrowserWindow } = electron;

So, we just combine it into one single line. Cheeky.

One change we are going to do before we run any electron specific code is altering `node .` from `package.json` into `electron .` so as to look like the following.
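The `scripts` section would then look something like this:

```json
{
  "scripts": {
    "start": "electron ."
  }
}
```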

After making this change, if you are worried about how to run the project. Don’t. You run it in the exact same manner. Using `npm run start`. I love abstraction.

Now, when we run the project, the Electron app initializes. But, nothing shows up and rightly so. We haven’t really told electron to do anything. We have just imported some stuff. The `app` constant has an event that fires when `electron.js` is initialized. We will use this event to run a function which creates a window. After adding all of the code to `main.js`, this is what it looks like.
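A minimal sketch of that `main.js` might look like the following (the window dimensions are my own picks, not gospel):

```javascript
const { app, BrowserWindow } = require('electron');

function createWindow() {
  // Create the application window with some sensible default dimensions
  const win = new BrowserWindow({
    width: 1200,
    height: 800
  });

  // Load our front-end markup into the window
  win.loadFile('index.html');
}

// The 'ready' event fires once Electron has finished initializing
app.on('ready', createWindow);
```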

If you observe, we are loading data from a file called `index.html`. But, well, we haven’t really created a file named `index.html`. So let’s do that.

touch index.html

Electron uses HTML / CSS / JS to set up its front-end user interface. Hence, this `index.html` file is what you see when you open the application up. So, open up the `index.html` file and add the following HTML code to it.
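Any barebones HTML page will do for now; here is a sketch (the page title is my own choice):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>360 Image Viewer</title>
  </head>
  <body>
    <h1>Hello, World?</h1>
  </body>
</html>
```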

Now, when you run the application, you will get a window that opens up in all its glory. Amazing.

Using the Materialize Library

In this section, we shall add the Materialize Library to our project as it provides an amazing platform to quickly build a Material Design UI on. To add the library, we will go on to https://materializecss.com and click on the `Getting Started` section. Scroll down to the CDN section and add that code to the `head` section of our `index.html` file. Also, we can replace the code in the `body` with some Materialize-specific code. This will enable us to test whether Materialize is actually working. So, we will use my favourite Materialize component in place of the <h1> tag, and that is the button.

To use the button, in the Materialize search box, type in Button. You will be directed to the button page and from there you can copy the button code and put it inside the `body` instead of the <h1> tag. Your code probably looks something like this.
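With the CDN links and the button in place, the page might look something like this (the CDN URLs below are for Materialize 1.0.0 and may differ from whatever the Getting Started page shows when you read this):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>360 Image Viewer</title>
    <!-- Materialize CSS and JS pulled straight from the CDN -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
    <script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js"></script>
  </head>
  <body>
    <!-- A stock Materialize button copied from the Buttons page -->
    <a class="waves-effect waves-light btn">button</a>
  </body>
</html>
```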

If you run this, and your computer is connected to the Internet, you will probably get a window in which there is a button placed in the top-left corner and has the correct functionality that is expected.

Offline Dependencies

But, there is a big issue that we are overlooking. This is supposed to be a desktop application. Yes, sure. It can connect to the Internet, but the UI dependencies should work out of the box. And yes, to that I would say, I fully agree with that sentiment which is why we are going to make sure that all of Materialize runs locally.

The first step is to get the raw CSS and the raw JS file and add them to our project. This is fairly simple. Just go to the links that are added to our project in the browser, copy the code, create CSS and JS files in our project at correct locations and paste the code in those files.

Before we do that, we would need to create folders called `css` and `js`. Create a file inside the `css` folder called `materialize.min.css` and a file called `materialize.min.js` inside the `js` folder. Now, we shall go on to the CDN links, copy the code and put them inside the files. Simple as pie. But, we are not done yet. We still need to update the links in the `head` to point to the local files.

If you have followed the same structure that I have mentioned above, the links are `./css/materialize.min.css` for the CSS and `./js/materialize.min.js` for the Javascript. The `.` in the path signifies the fact that you are referring to the current directory and the current directory is the one that the containing file is in. Smooth.

Another piece of code that we need to get here is jQuery 2.2.4. Get the text for this from https://code.jquery.com/jquery-2.2.4.min.js. Copy the text, create a file called `jquery-2.2.4.min.js` in the `js` folder and paste the text in that file.

In the end, the code looks like this.
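The `head` now points at the local copies instead of the CDN (assuming the folder structure described above):

```html
<head>
  <meta charset="UTF-8">
  <title>360 Image Viewer</title>
  <!-- Local copies so the app works without an Internet connection -->
  <link rel="stylesheet" href="./css/materialize.min.css">
  <script src="./js/jquery-2.2.4.min.js"></script>
  <script src="./js/materialize.min.js"></script>
</head>
```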

Now, when you run the app, it runs without the actual need for an Internet connection and that is absolutely amazing.

Using the Three.js Library

Adding three.js to the mix is exactly the same. Go to the Three.js GitHub (yes, haha) repository which can be found at the link https://github.com/mrdoob/three.js and go to the `build` folder. In this folder, you can see a file called `three.min.js` and this is the file that you need. Click on this file and you will see GitHub trying to display this file. Control + A does not work here though. I know. Annoying. But, we can go to the Raw text from here by clicking the `RAW` button. You will be redirected to a page that has only the text of the Library. From here you can Control + A to select the whole text and copy it.

Create a new file called `three.min.js` in the `js` folder like we did before and paste the copied code in it. Once that is done, we just include it in the `index.html` file. Put it directly below the `materialize.min.js` import.

<script src="./js/three.min.js"></script>

Before we run this project, I would like to add a bit of three.js specific code so that we can make sure that the library is imported properly.

Firstly, we need to remove the existing HTML from the `body` and add the following code to it.
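The replacement body markup is just a single container element:

```html
<!-- Three.js will attach its rendering canvas inside this element -->
<div id="webglviewer" class="full-screen"></div>
```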

Three.js works on a system where it uses an element from the HTML DOM to display the viewport. So, in our case, we are going to use this `div` with the id as `webglviewer` and the class `full-screen` attached to it to display our three.js viewport.

We will also need to add some CSS to the file. Add the following code to your <head> tag.
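The exact rules are up to you, but something like this will strip the default page margins and make the container fill the window:

```html
<style>
  /* Remove default margins so the viewport hugs the window edges */
  body {
    margin: 0;
    overflow: hidden;
  }

  /* Make the container fill the entire window */
  .full-screen {
    width: 100vw;
    height: 100vh;
  }
</style>
```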

Great! Now for the three.js code. This is where it might get a little confusing but let’s not fret. We shall prevail. Add the following code to the body of the `index.html` file and that shall be all we need to test the project out. If everything goes well, we might not even have to change the code and can just keep building on top of it.
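A sketch of that set-up code follows; the field of view, clipping planes and lookAt direction are my own choices and you can tweak them later:

```html
<script>
  // The container that will hold the three.js viewport
  var container = document.getElementById('webglviewer');

  // A scene to hold our objects, and a camera to view them through
  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(
    75,                                     // vertical field of view in degrees
    window.innerWidth / window.innerHeight, // aspect ratio
    0.1,                                    // near clipping plane
    1000                                    // far clipping plane
  );
  camera.position.set(0, 0, 0);
  camera.lookAt(new THREE.Vector3(1, 0, 0));

  // The renderer draws the scene into a canvas element
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  container.appendChild(renderer.domElement);

  // Push out a single frame
  renderer.render(scene, camera);
</script>
```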

Testing with DevTools

Save the project and let’s run it! Open up your devTools after running the project to make sure you don’t get any errors. To use devTools, the shortcut is Control + Alt + I on Windows and Command + Alt + I on Mac. Do it now. If we get a black screen and no errors, then don’t worry because that is the intended and desired behaviour of our code and not a black screen of death. Don’t believe me? Let’s put a sphere in our scene to prove it. Add the following code to the app after you set the camera lookAt.
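The sphere code might look like this (the radius, segment counts and colour are my own picks):

```javascript
// A sphere of radius 5 with enough segments to look smooth
var geometry = new THREE.SphereBufferGeometry(5, 32, 32);

// MeshBasicMaterial is visible even when there are no lights in the scene
var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });

// A mesh combines a geometry with a material into a placeable object
var sphere = new THREE.Mesh(geometry, material);
sphere.position.set(10, 0, 0);
scene.add(sphere);

// Push out a new frame so the sphere shows up
renderer.render(scene, camera);
```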

The code written above is very simple but looks incredibly complicated. The first line initializes a sphere buffer geometry.

Working with Meshes and Materials

In simple terms, it creates a sphere. Geometry is a framework from which a mesh is made and the mesh is the object that you place in the scene. A mesh is created by combining a material with the geometry. Hence, rightly so, we create the material on the next line. We create a MeshBasicMaterial because it does not need any light to be seen.

Now, before we delve any deeper I want to touch on the concept of things not being visible if there is no light. If you are in a dark room with no open slots from where light can enter the room, you are not going to see anything. Only when you turn on the lights or open a window (during the day), will you be able to see anything. Light makes things visible… usually. Unless, when you have too much light and everything becomes white and people go blind. The surface of the MeshBasicMaterial is displayed in the same manner in the viewport whether there is light or not. Hence, I used this material in this case.

To prove what I just wrote, you can change the MeshBasicMaterial to MeshLambertMaterial and see the sphere disappear. The only way you can now bring the sphere back into sight is by placing a light in the scene. So, we will now place a point light in the scene at the location of the camera with the following code. Put this code right after you add the sphere to the scene.
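A white point light parked at the camera's position would look something like this:

```javascript
// A white point light placed exactly where the camera is
var light = new THREE.PointLight(0xffffff, 1);
light.position.copy(camera.position);
scene.add(light);

// Update the frame so the newly lit sphere becomes visible
renderer.render(scene, camera);
```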

Running the app after adding the code gives you back the image of the sphere, but it does look a little different as there is now light in the scene. Instead of a colour, let's put an image on top of it. Yes, let's try to wrap an image texture over the sphere and see if we can distinguish the shadows. The uniform green colour over the sphere clearly does not allow us to see any hint of the shadows that roll over the sphere.

So, to put a texture on the image, we will first need a spherical or equirectangular or 360 image. You are obviously free to use any image you need. But if you don’t have access to a 360 image, you can get one at https://s3-us-west-2.amazonaws.com/quinston-com/images/360_image.jpg. I got you. Now, we can directly use this image from the link itself but we are not going to do that. What we are going to do though is create a `textures` folder in our project directory and place the image in there.

The Texture Loader

To apply a texture onto the sphere, the first thing that you actually need is to load the texture into a variable and have a reference to actually use it in the program. So, the way we do that in three.js is by using the TextureLoader. But, to use the TextureLoader there is a catch. The program has to wait until the texture has been fetched, loaded and referenced. Remove all the code after the Camera LookAt and before the Light initialization and add the following code in its place.
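The replacement, sketched out (I'm assuming the image was saved as `textures/360_image.jpg`):

```javascript
// The loader fetches the image asynchronously; everything that depends
// on the texture happens inside the onLoad callback
var loader = new THREE.TextureLoader();
loader.load('./textures/360_image.jpg', function (texture) {
  // Note: `map` instead of `color` wraps the texture around the surface
  var material = new THREE.MeshLambertMaterial({ map: texture });

  var geometry = new THREE.SphereBufferGeometry(5, 32, 32);
  var sphere = new THREE.Mesh(geometry, material);
  sphere.position.set(10, 0, 0);
  scene.add(sphere);

  // Push out a new frame now that the scene has changed
  renderer.render(scene, camera);
});
```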

Running the app displays a sphere in the viewport in all of its glory. The TextureLoader gives us access to the load function which exposes the onLoad callback function which in turn exposes a reference to the texture. How that happens is clearly visible in the code. Once the texture is available, we create the material and if you observe… instead of `colour` as a parameter, we are using the parameter `map` as a holder for the texture to bundle up and create the material.

Once the material is created, you can use it to build a sphere and add it to the scene. If you see, we have called the renderer again and asked it to render the scene. This is because when you make any changes in the scene, you need to update the frame. The frame doesn’t get updated on its own and hence you have to push it out. This is not the preferred way to update the frame but it works for our purposes at the moment. We will, though, look at the correct way to update the frame in the future.

Running the app shows us a very interesting view. We see a sphere with a texture wrapped around it. This is exactly the look we were going for.

I think this is the right time we take a step back and think about what kind of app we are actually making at the moment. We are making a 360 Desktop App that enables us to view 360 images. The image we are currently using as the texture is a 360 image. So, can we modify the current scene to help us visualize this 360 image? Let’s do it.

If you have ever seen a 360 image being viewed on Facebook or in an app like the GoPro 360 Image Viewer, the camera is at the centre of the content and obviously, the viewer views the content through the camera. So, this gives the viewer an illusion that they are in the centre of the scene. In our case, if we have to replicate that, we would have to place the sphere at the location of the camera. So at the line where you have written…

sphere.position.set(10, 0, 0);

We will replace that with…

sphere.position = camera.position;

This line will take the sphere and put it at the exact location of where the camera is. `camera.position` returns a vector which we can use to set the position of the sphere. You can find this information in the docs (we’ll take a look at that later). But, when you run the app, something doesn’t seem right. Why can’t we see anything?

Well, the reason is that you are viewing the sphere from inside the sphere. How insane is that? I am sure you have played a few games where in some situations you actually find yourself stuck inside a 3D model. And you can’t see the 3D model because, surprise surprise, you are inside it. The technical term for this is Back-face Culling.

Inverting the Sphere

We really want to see the sphere though, and the image texture on top of it. So, to do that we need to reverse the normals of the sphere so that they face towards the centre. One way to do that is to invert the scale of the sphere on the X-axis. We do that by adding the following line after defining the sphere geometry. This basically inverse scales the sphere. Imagine what would happen if the sphere got smaller and smaller in the X-axis and eventually grew out into the negatives. Yeah, that’s exactly what we are doing here.

geometry.scale( - 1, 1, 1 );

Also, we don’t want the light to be affecting our texture so we will change the material to a `MeshBasicMaterial` from the current `MeshLambertMaterial` and remove the light from the scene. The final code in the onLoad function should look something like this.
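Putting the pieces together, the onLoad function might end up like this (again assuming the texture lives at `textures/360_image.jpg`):

```javascript
var loader = new THREE.TextureLoader();
loader.load('./textures/360_image.jpg', function (texture) {
  // MeshBasicMaterial ignores lights, so the image always shows as-is
  var material = new THREE.MeshBasicMaterial({ map: texture });

  var geometry = new THREE.SphereBufferGeometry(5, 32, 32);
  // Invert the sphere on the X-axis so its inside faces are rendered
  geometry.scale(-1, 1, 1);

  var sphere = new THREE.Mesh(geometry, material);
  // Place the sphere at the camera so we view the image from the inside
  sphere.position = camera.position;
  scene.add(sphere);

  renderer.render(scene, camera);
});
```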

When you run the app now, you will see a very 360-esque image over the surface of the sphere. This is the exact look we were going for. You might even try to click and drag to see if you can spin the camera to look around (which probably might not work). But that is the exact functionality that we are going to add to our project next. Let’s get on with it. If you have made it this far with me, you have no idea how happy I am. Let us go forth and conquer. Before we do any of that though, let’s sync our git repository.

Using OrbitControls.js

So, to add the functionality of looking around, we are going to use a library called OrbitControls.js which integrates really nicely with Three.js. Sure, we could have written custom code to create this functionality, but I always live by the principle of not reinventing the wheel. To use OrbitControls.js, we have to first get the source code for it. There is a link to the code at the bottom of the OrbitControls page in the Three.js docs, or you can get it at the Three.js GitHub repo directly here. Create a file called `OrbitControls.js` in the `js` folder, paste the code in and include it in `index.html`.

<script src="./js/OrbitControls.js"></script>

Now to add the camera rotational functionality to the scene with the help of OrbitControls.js, place the following code after setting the camera’s position. Remove the camera lookAt code.
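The controls set-up is short; note the 0.1 offset on the X-axis of the target, which we will explain in a second:

```javascript
// Let OrbitControls listen for mouse events on the renderer's canvas
var controls = new THREE.OrbitControls(camera, renderer.domElement);

// Offset the target slightly so the camera and the target never sit
// at the exact same position
controls.target.set(
  camera.position.x + 0.1,
  camera.position.y,
  camera.position.z
);
```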

The way the Orbit Controls works is by setting a target and allowing the object that you applied the Controls on to move relative to the target. In our case, we have set the target to the camera’s position. But, you might observe that we have added a .1 to the X position of the target. Why have we done this?

Okay. Imagine this. You have a camera at the position (0, 0, 0) and a point around which you want to rotate which is again (0, 0, 0). Orbit Controls rotation is more like panning around the target and then adjusting the distance. Also, this is rotation around a point and not self-axis rotation. So, when both the object and the target are at the same position, this does not work too well. Hence, to make sure that the object and the target are not in the same position, I found that adding 0.1 to the x position solves all related issues.

Game Loop

After you add this code, you might run the program and wonder why it doesn’t work. And by not working, I am referring to you clicking around the viewport inside the window and wondering why the camera doesn’t seem to rotate. Remember when we talked about updating the frame every time there is a change in the scene, and that the change is only visible when you actually update the frame? Yeah, this is the exact same situation. We have added the controls to the scene, and every time we click and drag on the viewport to rotate the image, OrbitControls.js is trying to do its bit but the result just doesn't get drawn. Hence, to add the update functionality, we append the following code towards the end of the <script> tag in `index.html`.
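The loop looks something like this:

```javascript
// Kick off the render loop
animate();

function animate() {
  // Ask the browser to call us again before the next repaint
  requestAnimationFrame(animate);

  // Apply any pending rotation and damping from OrbitControls
  controls.update();

  // Draw the updated frame
  renderer.render(scene, camera);
}
```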

In the code above, you call the animate function. We have very conveniently defined the animate function directly below it. We have defined the function and we are calling the function too. Just keep that in mind. Inside the definition of the `animate` function, we have a call to requestAnimationFrame. requestAnimationFrame ensures that the `animate` function is called continuously, about 60 times a second (in most cases; there can be a frame drop here and there). If you want to read more about this you can find the full docs here.

The `controls` variable exposes an `update` function which you can call to update the controls. I know that doesn’t explain much, but even the docs pretty much say the same thing. I am not complaining. I think this is an incredible tool, and also, I don’t want libraries to tell me how they work on their documentation page. I’d like that information to be available on a need-to-know basis. The next line is just us rendering out the scene again. And now, when you run the app, it works like a charm.

Rotation User Experience

You can now click on the screen, drag it out and the camera moves, twists and turns and you can see the entire photo outright. But, is this the desired user experience we are looking for?

When we click and drag the mouse towards the left, the camera literally moves in that direction. But that is not the desired response. We want to create functionality where the camera moves in the opposite direction to what we are seeing here. So, we will need to make a few changes here. Let’s get on with that.

The solution to that is pretty simple. All you have to do is add the following code after the `controls` initialization code.

controls.rotateSpeed = -0.2;

The `rotateSpeed` member variable of `OrbitControls` controls how fast, and in what direction, the object the OrbitControls is applied to rotates in response to mouse clicks and drags. How do I know this? Well, it wasn’t very obvious at first when I looked into the docs. But I tried a bunch of stuff (including looking inside the OrbitControls.js source code) and this was the best solution I found. The sign of the value sets the direction, and the magnitude of the value adjusts for the sensitivity of the mouse movement as it is clicked and dragged across the screen.

But even after adding this, I don’t think it has the rotational feel I would like. After you release the mouse at the end of a click-and-drag, the image comes to a dead stop. This leaves a bad taste in my mouth. I would like some residual motion, which would add a bit of spice to the user experience. Adding the following code would, according to the documentation, add a sense of weight and inertia to the motion of the camera. This is exactly the behaviour we are going for.

controls.enableDamping = true;

But after adding this, we realize that the whole motion has become a little too sensitive and the value of `-0.2` is now a little too high. Hence, we reduce the value of `rotateSpeed` to `-0.1` and voila, this is exactly the behaviour we desire.

Window Resize Error

But there is something that has been bothering me from the beginning of this project. Every single time I try to click and get a full-screen image, the viewport does not update itself to fill the entire screen. Now, we might think this is the fault of Three.js, but that is not the case. It is entirely our fault for ignoring this issue. We will solve it now, though. The reason this happens is that we are not updating the size of our viewport every time the window is resized. The container becomes bigger but our viewport and camera remain the same. Meek and small. Add the following code towards the end of the <script> tag and everything should work the way it is supposed to.
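A sketch of what that resize handler typically looks like, assuming the `camera` and `renderer` globals from earlier (the exact code in the project may differ slightly):

```javascript
window.addEventListener('resize', onWindowResize, false);

function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight; // match new viewport shape
  camera.updateProjectionMatrix();                        // recompute the projection
  renderer.setSize(window.innerWidth, window.innerHeight);
}
```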

The onWindowResize function is called every single time there is a resizing of the window underway (duh). You can also test that out with a spectacular console.log message inside the function itself. Debugging is fun. Sometimes.

No Menu Needed

I think there is literally only one thing left that annoys me here before we move on to using custom images instead of the single image that we currently have, and that thing is removing that darn ugly menu bar. I don’t want a menu bar for this project. Maybe if we were making something where keeping and tweaking the menu bar made sense, I’d keep and tweak it. But at the moment, it’s just annoying me. So, let’s remove it. Place the following code in `main.js` after you initialize the BrowserWindow. This is a function that is exposed by the BrowserWindow.
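Assuming the BrowserWindow instance is stored in a variable called `win`, as in earlier snippets, the call is a one-liner:

```javascript
// Remove the default menu bar from the window.
win.setMenu(null);
```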

Running the app shows us no sign of the menu and that is just spectacular. Push the app to its GitHub repository and we are ready to move on to the next part.

Now that we have created the functionality with which we would interact with the 360 content, I would like us to be able to swap out and test images other than the default one. Now, to make sure that we don’t go back to the default image, we are going to delete the whole textures folder! Yes. Delete it. Why? Because I don’t want the folder to cause any trouble when we move towards making our app more and more generic. So go ahead and delete that folder and don’t look back. Of course, keep a copy of the photo elsewhere on your computer so that you can use that image later on for testing purposes.

Refactoring Everything

Now, when we created our sphere and put the material on top of it, we didn’t do a lot of thinking about where we put our code or if we should encapsulate it and put it in its own separate function. This is exactly what we are going to do at this current moment. We are going to refactor the texture loading and sphere creating function into something much more generic. But first, we are going to add a global variable to our <script> section and that global variable is `mesh`. Add the following code at the top of your <script> section outside any functions.
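The declaration itself is just one line; `mesh` starts out undefined and will later hold the current sphere:

```javascript
// Global handle to the textured sphere, so later code can replace it.
var mesh;
```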

Why did we do this? We’ll get back to this in a while. Now, it’s time to refactor our texture loader and sphere creator. Remove all of the code that starts from `var loader =…` to the whole of `loader.load(...);` and add the following function in its place.
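A sketch of what the refactored function might look like, assuming the same sphere-and-texture setup described earlier (your geometry parameters and material settings may differ from the original project):

```javascript
function createMeshWithMaterial(imagePath) {
  var loader = new THREE.TextureLoader();
  loader.load(imagePath, function (texture) {
    var geometry = new THREE.SphereGeometry(500, 60, 40);
    geometry.scale(-1, 1, 1); // flip normals so the texture faces inward
    var material = new THREE.MeshBasicMaterial({ map: texture });
    mesh = new THREE.Mesh(geometry, material); // assign to the global
    scene.add(mesh);
  });
}
```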

The function is called `createMeshWithMaterial` and it takes as a parameter the path of the image which is supposed to be used as a texture, and just runs with it. Before refactoring, we created a mesh called `sphere` which stored the geometry and the material, but now we want more generic functionality, which is why we changed the name of the variable from `sphere` to `mesh`. You can, of course, use a better name if you have one in mind.

If you run the app at this point in time, nothing would really work. You will just get the black screen of death. Sure, you have defined a function which creates a mesh with an actual material but you haven’t really written any code which actually calls the function, which is something we should really get on with. The function will not call itself.

Drag and Drop Image

So, we are going to now write some code which enables you to drag and drop a 360 image into the window and the app is going to display that particular image as our base 360 content. How cool is that? Let’s get on with it.

We need to basically define 4 events to which our ‘document’ will react to. `document` in this case is our `html` or document object model or DOM (essentially). The 4 events are `dragover`, `dragenter`, `dragleave` and `drop`. The following code does it for the first 3 events.
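A minimal sketch of those three handlers; all they need to do is suppress the browser's default behaviour of navigating to the dropped file:

```javascript
document.addEventListener('dragover', function (e) { e.preventDefault(); });
document.addEventListener('dragenter', function (e) { e.preventDefault(); });
document.addEventListener('dragleave', function (e) { e.preventDefault(); });
```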

The next one is the `drop` event. All of this code is appended to the end of the <script> tag.
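A sketch of the `drop` handler, assuming the `createMeshWithMaterial` function from the refactoring step; in Electron's renderer, dropped `File` objects carry a `path` property with the absolute file path:

```javascript
document.addEventListener('drop', function (e) {
  e.preventDefault();
  if (e.dataTransfer.files.length > 0) {
    // Load the first dropped file as the new 360 image.
    createMeshWithMaterial(e.dataTransfer.files[0].path);
  }
});
```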

After adding the code above, when you run the app… everything seems to be working fine. Amazing even. You drag a 360 image into the black screen, the black screen goes white-ish for a while and it acquires the image after you drop it. Great! But, if you take a deeper look at what is happening behind the scenes, there is a new `mesh` created every time you drop content in the app. We need to optimize this.
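One way to fix this is to remove and dispose of the previous mesh at the top of `createMeshWithMaterial`, before the new one is created; a sketch:

```javascript
// Inside createMeshWithMaterial, before the new mesh is initialized:
if (mesh) {
  scene.remove(mesh);        // take the old sphere out of the scene
  mesh.geometry.dispose();   // free GPU resources
  if (mesh.material.map) mesh.material.map.dispose();
  mesh.material.dispose();
}
```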

After adding the code to our `createMeshWithMaterial` function (before initializing the `mesh`), the old mesh gets deleted when the new mesh is created. This is great and resources are conserved. Save your changes with git and let’s move on.

So, at this point, everything works like the app should and from here on out, we are going to try and make the app more user-friendly. When you open up the app, we get a black screen. Now, this was the functionality WE were going for… but is this the functionality that you want to present to the user? I think not.

UI Additions

Let’s add a message in the UI to tell the user what they can do to see their 360 content. I also want to add a button to the screen so that instead of having to constantly drop a new image onto the window, the user can just click that button, an explorer window will open up, and the user can select their files and open them without the drag-and-drop hassle.

Place this HTML code right after the `webglviewer` div. Now, if you don’t know any HTML, this would be very confusing for you, because you might be wondering: hey… wouldn’t this code slide right below the `webglviewer` div? And if that happens, we would not be able to see this element at all because the `webglviewer` occupies the whole screen. In a regular, non-position-altering circumstance, you would actually be correct, but this is not a regular non-position-altering circumstance. In fact, we have set the position of our div to be `absolute`. What are the benefits of this? Well, for one, you can now place the div wherever you want on the screen relative to the top, left, right and bottom. We have, in our code, defined the `left` and `top` values, but you can define the others too. That’s the idea, actually.

Also, we are using ‘flexbox’ to align and arrange our items so that they are not placed randomly. I like being in control of the UIs that I create. ‘flexbox’ is a topic that I don’t think I shall be able to do justice to in this tutorial so, here is a link to one of the best references that I have ever come across for `flexbox`. Enjoy. https://css-tricks.com/snippets/css/a-guide-to-flexbox/

Next, we are using a regular Materialize button and a header which has our message imprinted upon it. If you run the app after this code is added, you will see these elements at the top left corner of the screen.

Select File Functionality

Clicking the `Select File` button does nothing at this moment. So, let’s focus on adding functionality to that button. We already discussed the functionality we are looking for previously. We cannot just open an explorer window on command with JS code; we have to bind an input element in the HTML code to do it for us. We are going to add a file input element in our code under our `Select File` button and make that input invisible by setting the CSS style `display` to `none`. The `id` for this element will be `select-file-input`. The whole div now looks like this.
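A sketch of what that div might look like; the two `id`s match the ones referenced in the text, while the container id, Materialize classes and message wording are placeholders:

```html
<div id="message-container">
  <h5>Drag and drop a 360 image, or</h5>
  <button id="select-file-button" class="btn waves-effect waves-light">Select File</button>
  <input id="select-file-input" type="file" accept="image/*" style="display: none;" />
</div>
```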

As you can see, I have also added an `id` to the button itself. We will now write some Javascript which will connect the button with the input and when the button is clicked, a file explorer will be opened up. Add the following code towards the end of our <script> tag to enable it.
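A sketch of that glue code, assuming jQuery is available in the page:

```javascript
// Forward clicks on the visible button to the hidden file input.
$('#select-file-button').on('click', function () {
  $('#select-file-input').click();
});
```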

With this code, you use jQuery to catch the click on the `select-file-button` button and, in the callback function, click the hidden `select-file-input` input element that you created, which eventually gets the files. Now, try running the app. It doesn’t work. Clicking on the button does not open up the file explorer. What could be the issue?

We can also detect the error from the console in our Electron app. You can open the console with the shortcut Command + Alt + I on Mac and Control + Alt + I on Windows. But if those don’t work, you can open it from the menu bar (which we had set to null); it’s under View > Toggle Developer Tools. And if that doesn’t work, you can get it up and running by adding the code `win.webContents.openDevTools();` below `win.loadFile('index.html');` in `main.js`. Try these out and at least one of these methods should work for you.

Import Error Fix

I am not entirely sure about why this is an error but from what I have read online and in the issues, the issue seems to be the fact that jQuery sees that it’s running in a CommonJS environment and expects to be used as such. So, to make sure that jQuery is imported correctly, we wrap the import statements such that they appear like the following…
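A common form of that workaround looks like this; the trick is to hide the CommonJS `module` object while the library scripts load, so they attach themselves to `window` instead (the file names here are placeholders for wherever your copies of the libraries live):

```html
<script>if (typeof module === 'object') { window.module = module; module = undefined; }</script>
<script src="js/jquery.min.js"></script>
<script src="js/materialize.min.js"></script>
<script>if (window.module) module = window.module;</script>
```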

The two scripts in those two separate script tags wrap around the imports and make sure that the libraries are loaded in the correct environment. Running the app after this change makes it work like a charm.

But, now that’s not all. We also need to make sure that when we actually do select a file from the explorer that the image is showcased in 360. Add the following code towards the end of the <script> tag to get this functionality to work.
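A sketch of that handler, assuming jQuery and the `createMeshWithMaterial` function from earlier:

```javascript
// Fire whenever the file selected in the hidden input changes.
$('#select-file-input').change(function (event) {
  // `path` is an Electron-specific property on File objects.
  createMeshWithMaterial(event.target.files[0].path);
});
```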

The jQuery `change` function binds an event handler to the "change" JavaScript event on the <input> element. Whenever there is a change in the file selected, the callback function is fired. Not only that, the callback function gets a parameter passed into it. This parameter is being referred to by the `event` variable in this case. You can console.log this variable to see all of the contents of this object. We, in particular, want the path of the file that we selected. We are not looking at multiple files at the moment so we only check the 0th file. The `path` property gives us the path of the actual file and that is what we pass into the ‘createMeshWithMaterial’ function. Now when you run the app, everything works the way it is supposed to. In the next section, we are going to deploy this application for the world to see.

Deploying the Electron App

The Electron framework allows us to create amazing desktop applications. Currently, whenever we run our app, we are essentially running it in development mode through `npm run start`. If you want a regular user to use your app, how do you expect them to use it? Giving them the code and telling them to run `npm run start` is probably a terrible idea, and I do not think I should mention all of the horrible things that could go wrong when you do something like that. What we need to do to `distribute` (yes, that is the technical term) our application is to package it up in either an EXE file or a DMG file for the respective platforms and allow the user to get access to that file: EXE for Windows and DMG for Mac. You can, of course, compile it for other platforms but that’s beyond the limits of this course. I mean, I am not going to compile it for every new version of Linux that comes out. Elementary! Where you at? (The last hope for Linux).

The packaging tool that we are going to use is called `electron-builder` and you can find the documentation for it on https://electron.build. Not too shabby. Let us install it with the help of the following command.

npm install --save-dev electron-builder

Now, we are going to configure the `build`. The `build` configurations for the `electron-builder` is taken from the `package.json` file from under the ‘build’ property.
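A minimal sketch of such a `build` section; the `appId` and icon paths here are placeholders, and the full list of options lives in the `electron-builder` documentation:

```json
"build": {
  "appId": "com.example.viewer360",
  "win": {
    "icon": "build/icon.ico"
  },
  "mac": {
    "icon": "build/icons.icns"
  }
}
```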

All of the tags that are placed here can be easily found on the `electron-builder` documentation. Also, we are going to add a little something in the `scripts` section.
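The additions to `scripts` follow the command names used below; a sketch:

```json
"scripts": {
  "start": "electron .",
  "dist-windows": "electron-builder --win",
  "dist-mac": "electron-builder --mac"
}
```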

Now, if we run the following command on Windows.

npm run dist-windows

In most cases, this command will run fine and you will find an EXE file waiting in the `dist` folder of your project directory, but sometimes it gives errors and doesn’t work well. Most of the time, the error is that you haven’t configured the `icon` for the app (which we haven’t) and the errors go away once you have. We have linked the icons in the `package.json` file but we haven’t really got around to making them. Install the app that we packaged and run it (with your mouse this time). It should run like any other desktop application.

Before we do that though, I want to push the project into GitHub and I do not want this `dist` directory to go along with it. Mostly because it’s too big and might waste a lot of our time. So, let’s do what we should always do with such files… ignore them. So, open up your `.gitignore` file and add it below the only other line we have as shown below.
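Assuming `node_modules` is the existing entry, `.gitignore` ends up looking like this:

```
node_modules
dist
```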

That is it. Now let’s test the same on a Mac. Running the following command on a Mac will definitely give us an error, and the reason behind that is simple.

npm run dist-mac

We do not have an `icons.icns` file in our `build` folder. So, let’s put one there. I have mine made already. To make one, you need to have a base PNG file (an icon, possibly a logo, created for your project). Information about the dimensions and other technicalities can be found at https://www.electron.build/icons. There are tonnes of converters online to help you generate one of these bad boys. Create the `build` folder, add these icons, run the build process again and voila! You have a DMG file ready in your `dist` folder. Install it, run it and distribute it.

You might be thinking that that is the end of it. Well, not really. The app might be deployable but it still needs a tonne of work to be actually usable by the average consumer. We also have to talk about the common errors that you might face while building this monster. Let us proceed.

Rectifying User Experience Problems

Add the following code to the `animate` function to make sure the mesh always remains at the position of the camera. This also ensures that the camera is perfectly in the centre of the mesh.
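A sketch of that addition inside `animate`, assuming the global `mesh` and `camera` from earlier (the guard covers the moment before any image has been loaded):

```javascript
// Keep the sphere centred on the camera so the viewer stays inside it.
if (mesh) mesh.position.copy(camera.position);
```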

The main functions of the camera in this application are to rotate around a target and show the whole image in 360. In this case, we do not want the camera to do any sort of panning. Panning, in essence, is the process of translating a camera along an axis; it changes the position of the camera relative to the target. This is a highly undesirable event and hence, we should do everything in our power to make sure that it does not happen. The lines below are to be added after the initialization of the `controls` object.
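Those lines look like this; `enablePan`, `panSpeed` and `keyPanSpeed` are all documented OrbitControls properties:

```javascript
controls.enablePan = false; // turn panning off entirely
controls.panSpeed = 0;      // and, for safety, zero out both pan speeds
controls.keyPanSpeed = 0;
```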

The first line disables the panning feature, but just for safety, we will also set the `panSpeed` and the `keyPanSpeed` to zero. The `keyPanSpeed` variable controls how fast the camera pans when the keyboard is used and the `panSpeed` variable controls how fast the camera pans in general. We set them both to zero, which does not let the camera pan at all.

Keyboard Support

The addition of keyboard support is a little bit trickier. There is no obvious way to do it at first. The best way to do it, in my opinion, is to make some changes in the `OrbitControls.js` file itself. Sound exciting? It did to me too. There are two functions in the file that exclusively control the rotation of the object around the target: the `rotateUp` and the `rotateLeft` functions. Unfortunately, they are not exposed publicly. So, we are going to expose them for our use. The way we expose them is by writing public functions which act as shadow callers to these internal functions. Add the following code before the `this.saveState` function and after the `this.getAzimuthalAngle` function in the `OrbitControls.js` file and we are good to go.
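A sketch of those wrapper functions; the public names (`rotateLeftCustom`, `rotateUpCustom`) are my own choice, so pick whatever you like:

```javascript
// Public shadow callers for OrbitControls' internal rotate helpers.
this.rotateLeftCustom = function (angle) {
  rotateLeft(angle);
};

this.rotateUpCustom = function (angle) {
  rotateUp(angle);
};
```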

The following code is the response to the events fired by the keys. Add it towards the end in the <script> tag at the end of the `index.html` file.
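A sketch of those key handlers, assuming the internal rotate functions were exposed under the names `rotateLeftCustom` and `rotateUpCustom` (the 0.05-radian step is an arbitrary sensitivity; tune it to taste):

```javascript
document.addEventListener('keydown', function (e) {
  switch (e.key) {
    case 'ArrowLeft':  controls.rotateLeftCustom(-0.05); break;
    case 'ArrowRight': controls.rotateLeftCustom(0.05);  break;
    case 'ArrowUp':    controls.rotateUpCustom(-0.05);   break;
    case 'ArrowDown':  controls.rotateUpCustom(0.05);    break;
  }
});
```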

Right Click to Open a File

This might not be obvious, but a lot of the time when we want to view media content on our computer, we just right-click on the content and open it up in the application of our choice with `Open With`. This might seem to happen automatically but, like most things in programming, nothing is automatic. The whole process is configured to work like that on both ends: the operating system and the application. We want to add this functionality to our app for a better user experience. Let’s get on with that.

The first step is to make sure our app recognizes that it can open these files. We do this by adding a couple of configuration lines to our `package.json` file. This feature is provided to us by the `electron-builder`, so we shall place the configuration lines in the `build` section as shown below.
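A sketch of that configuration inside the `build` section; `fileAssociations` is the `electron-builder` option for this:

```json
"fileAssociations": [
  {
    "ext": ["jpg", "JPG", "jpeg", "JPEG", "png", "PNG"]
  }
]
```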

Now, you might be confused to see so many file types, but if you observe closely, there are literally only 2 file types that we are adding support for. The first one is JPG (or JPEG) and the other one is PNG. The reason they appear in both upper and lower case is that, sometimes, an image on the computer can be stored with a capital `JPG` extension or a lowercase `jpg` extension. This means that even though our application has the capacity to open that particular file, it might not show up in the list because of case-sensitivity issues. Yes, I know this seems sort of annoying, but it’s only a few lines of code so I can live with it.

Okay, now even though we have defined the file associations, we have not defined what the app should do if it does, in fact, get opened with a right click for a specific file. So, let’s do that. This is particularly straightforward for the Mac (`darwin`) platform, but it gets a little complicated for the Windows (`win32`) platform. The reason we are getting into talking about platforms is that the implementation details of this particular functionality depend heavily on the platform the application is running on. Yes, you guessed it: we are writing platform-specific code from now on. So, instead of starting with the easy part for the Mac, we are going to start with Windows. The string value that we need to view the image in 360 is the path to that image. When we right-click and open the image in our application, the path to the image is passed in as a command-line argument on Windows. We need to get access to that argument. But first, we need to make sure that the argument exists. It can be accessed through the `process` global object.

The `process` global object is available everywhere in your application. The issue is that the `process` object available in your `index.html` is not the same as the one available in `main.js`; they are two different instances. The reason this is important is that when you package and distribute the application, it is very difficult to see what `main.js` logs to its console, as there is no obvious stdout. So, we need to create a mechanism that allows us to get that object from `main.js` to the `index.html` console so that we can inspect its contents and write code in `main.js` to extract the path to the image. Basically, we need to establish a communication channel between `main.js` and `index.html`. We can do this with `ipcMain` and `ipcRenderer`. `ipcMain` communicates asynchronously from the main process to renderer processes, and the main process, in our case, runs in the `main.js` file. `ipcRenderer` communicates asynchronously from a renderer process to the main process. There can be multiple renderer processes but only one main process.

The communication is event-based and is either asynchronous or synchronous. You send data with the `send` function with an event recognition string along with the data you want to send as arguments (which can even be objects). There is an `on` function at the receiving end and there can be multiple `on` functions. The event recognition string plays the part of sorting the communications to the correct end-points.

There is something you need to understand, though. If you look at the `ipcMain` documentation, you will realize that there is no `send` function for `ipcMain`; the `send` functions only exist on `ipcRenderer`. This is because there is only one main process and multiple renderer processes. I am sure there will be a way in which this functionality becomes possible in the future but, as of this moment, it does not exist. So, we will have to devise a way to work around this. Let’s go.

The first thing that we are going to do is import `ipcMain` into our `main.js` file. Replace the first line in your `main.js` file with the line below.

const { app, BrowserWindow, ipcMain } = require('electron')

After importing `ipcMain`, we create an event receiver. Add the following code at the bottom of `main.js`.
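A sketch of that receiver; since the full `process` object is not cleanly serializable across the IPC boundary, this version sends over just the properties we care about:

```javascript
ipcMain.on('get-process', (event) => {
  // Synchronous reply: assign the payload to event.returnValue.
  event.returnValue = { argv: process.argv, platform: process.platform };
});
```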

The event recognition string here is `get-process`. Now, whenever `ipcMain` receives a communication trigger event with the event recognition string `get-process`, the callback function is called with the passed parameters. In this particular case, we are going to use a synchronous send from the `ipcRenderer`. You can think about this like a regular function call which has a return value. The way you return it is by literally assigning the actual return value to the `returnValue` property of the `event` object, as shown in the code.

Before you are able to actually use `ipcRenderer` in the `index.html` file, you should import it from Electron. Add the following code towards the top of the <script> tag in `index.html`.
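The import is a single line:

```javascript
const { ipcRenderer } = require('electron');
```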

Now let’s add the trigger in `index.html` with the `ipcRenderer`. Add the following code to the bottom of our <script> tag.

console.log(ipcRenderer.sendSync('get-process'));

As we talked about earlier, `sendSync` is literally like a function call with a return value. The return value, in this case, is console logged exactly where we want it to be. I want you to notice the fact that we are not passing any arguments here. All we are doing is sending out a communication trigger with the event recognition string `get-process`. We could also have passed in arguments after the event recognition string separated by commas but in this particular case, there is no need for that.

Make sure you have the ability to open the console for `index.html` when the app runs. Now, package the app for distribution and install it on your computer. Go to any `jpg` or `png` file and try opening it up with a right click. This might take some maneuvering initially. Once open, check the console for logs. There will be an object printed out which, if you expand it, contains a property called `argv`. This property is an array, and the path to the image is its second element. Now that we know exactly where the path is, we are going to use this property to set our 360 image.

You can either keep the console log code or delete it, as we do not need it going further. It was just a way to illustrate where the path comes from on a Windows system before we go any further. So, as we talked about earlier, there is no way for `ipcMain` to send an event communication to an `ipcRenderer`. The only way this can take place is if there is a request sent from the `ipcRenderer` first. So, because of this limitation, we are going to use a polling mechanism from the `ipcRenderer` to the `ipcMain`. Polling is a scheme where you constantly ask the provider, which is `ipcMain` in this case, whether the data is available and, if it is, to send it across.

Add the code below towards the end of the <script> tag in `index.html`. This code sends out an event with the event recognition string `get-open-file-path` and if it returns a value that is not null, loads that path as a file onto our mesh.
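A sketch of that polling loop; the 1-millisecond interval matches the description that follows, and `createMeshWithMaterial` is the loader function from earlier:

```javascript
setInterval(function () {
  var openFilePath = ipcRenderer.sendSync('get-open-file-path');
  if (openFilePath !== null) {
    createMeshWithMaterial(openFilePath);
  }
}, 1);
```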

Now, in `main.js`, we need to make sure that the event coming from `index.html` is serviced correctly. We are going to define a variable called `openFilePath` which will momentarily store the value of the file path. You will soon realize why I said momentarily.

var openFilePath = null;

Now, we are going to extract the file path, if there is any, from the `argv` property of the `process` global object. If you run the app in development mode (from our console), you will notice that you get a `.` pushed onto the end of the `argv` property. If we send that back as a file path to be displayed in 360, it will give us an error, and that is highly undesirable. We do not want that, so we are going to write an if-clause against it, as you can see in the code below. So, if there is actually a path to an image passed in, we set `openFilePath` with that path. Add the code below to the bottom of `main.js`.
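A sketch of that extraction, with the guard against the development-mode `.` argument:

```javascript
// On Windows, a right-click open passes the image path as argv[1].
if (process.argv.length > 1 && process.argv[1] !== '.') {
  openFilePath = process.argv[1];
}
```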

You might be wondering how this all plays into the grand scheme of things. We now need to take the value in `openFilePath` and send it back to our requester in `index.html`. The way we do this is by setting up an `on` function with `ipcMain`. This plays the role of the provider that we talked about. So, when `ipcMain` receives the communication event with the event recognition string `get-open-file-path`, we send `openFilePath` as its return value. Then we set `openFilePath` to null. Add the code below towards the end of the `main.js` file to get this functionality to work.
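A sketch of that provider; note how the value is cleared immediately after being returned, so the same image is not reloaded on every poll:

```javascript
ipcMain.on('get-open-file-path', (event) => {
  event.returnValue = openFilePath;
  openFilePath = null; // hand the path out exactly once
});
```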

Now, what exactly is happening here? The `ipcRenderer` is asking `main.js` for the value of `openFilePath` every millisecond and, most of the time, it gets a return value of `null` because, in most cases, that value actually is null. But when a right-click open occurs, the `argv` property of `process` gets the image path and, for one short burst of time, `openFilePath` is not null. Once the return value is sent, `openFilePath` goes back to being null. Thanks to that logic, `ipcRenderer` gets the actual file path exactly once and can display the 360 image. This whole process does seem a little convoluted but, in my experience, this is the best method that I have found for this to work. Run the app and make sure everything works because it’s time to shift to the Mac.

Platform Specific Code

This is very disappointing for me to write and might destroy any expectations that you might have had but Electron is not truly cross-platform. Now, what do I mean by that? I only mean that sometimes, even Electron can’t deal with what the underlying operating systems throw at it. There has to be functionality written for specific operating systems. Now, what does this mean for us? Are we going to split our code base and work on 2 different machines? Well, I think that is a terrible idea so no. We are not going to do that. Instead, we are going to make sure that our application knows which platform it is running on and executes code according to that platform.

Earlier, we dealt with an object called `process`. This object contains a property called `platform`. Seems obvious enough? We are going to take this property and use it to our advantage. If you console log this property, on Mac the log will be `darwin` and on Windows the log will be `win32`. These are the platform identifier strings. Using these platform identifier strings, we are going to write if-statements which will contain our platform-specific code. But I don’t want to write the literal strings in the if-statements. So, we create global variables called `platformMac` and `platformWin`. Add the code below after the import statements towards the top of `main.js`.
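The two variables are simple string constants:

```javascript
// Platform identifier strings, as reported by process.platform.
const platformMac = 'darwin';
const platformWin = 'win32';
```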

We can then use if-statements to run our code only on the platforms that can actually run it. For example, consider the code we wrote for assigning the `process.argv[1]` file path to the variable `openFilePath`. This is a Windows-specific piece of code, so let’s write it like that. Replace the current code with the code below.
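A sketch of the wrapped version, reusing the `platformWin` constant named in the text:

```javascript
// Windows-only: read the image path from the command line.
if (process.platform === platformWin) {
  if (process.argv.length > 1 && process.argv[1] !== '.') {
    openFilePath = process.argv[1];
  }
}
```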

We encapsulated the platform-specific code inside the if-statement that checks whether the platform is actually Windows.

Right Click to Open File (Mac)

If you go through the Electron documentation, you will come across a Mac-specific event called `open-file`. This is the event that gets triggered when a right-click open is requested by the user on a Mac. Unfortunately, it is not available on Windows, but it is on the Mac, so let’s implement it. The implementation is actually very simple. All you have to do is check for the platform, then add an `on` event catcher on the `app` object. The callback receives the event object and the path, and we use the path to set the `openFilePath` variable. I like elegant solutions like this. Add the code below to the end of `main.js` to implement that functionality.
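A sketch of that handler, guarded by the platform check and reusing the `platformMac` constant named earlier:

```javascript
// macOS delivers right-click-opened file paths via the open-file event.
if (process.platform === platformMac) {
  app.on('open-file', (event, path) => {
    event.preventDefault();
    openFilePath = path;
  });
}
```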

Forced Single Instance and Closing Issues

You will observe that you can open multiple instances of the app on Windows, and when I say instances, I mean that you can open the app multiple times simultaneously. This is not something that happens on Mac. Usually, when you have an application, there is only one instance of the app running at a time, which basically means that only one window should be open and only one entry of the app should be in memory at a time. But, unfortunately, our app does not conform to this. So, we shall force it to only generate a single instance at a time. To implement this behaviour, we will have to rewrite the whole `main.js` file. Yes, it’s a task, but when has that ever stopped us?

The file starts off pretty simple with the import statements, where we are importing everything that we need. Also, we define a global variable called `win` for storing the BrowserWindow instance, as this is going to be an important part of us being able to force a single instance of our application. Next, we make a function for creating our window. So, instead of writing the code directly in the `ready` event callback, we refactor it into its own function. Just basic housekeeping, to be honest. Also, we add a callback to the `closed` event to make sure that the app is removed from memory when we quit it. The app lingering around after quitting is not a good idea.

Next, here is where we get into the actual meat of setting up the infrastructure for enforcing the single instance. One thing we need to understand is when a second instance is usually triggered by the user. Let's imagine a scenario where the user is using the app to view a bunch of 360 images. They then minimize the application, browse around in the explorer, and find another 360 image that they would like to see. In that case, they might just right-click and try to open it with the app and voila! A second instance is created. Not cool. What we would ideally want is for no new instance to be created; instead, the old instance pops up and displays the image. That is the functionality that we are going for in this case.

The way we go about doing this is by using `requestSingleInstanceLock`. Using this lock, we can detect whether the instance of the app that is being created is the first or the second instance. If it is the second instance, the app quits; if it is the first instance, its window gets maximized. Also, if the launch came from a right-click open, we get access to the image path through the `commandLine` array. All of this is demonstrated in the code. This behaviour is very Windows-specific, so we also need to wrap it in the platform if-statement to separate the code.
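Sketched out, the lock logic looks roughly like this (`findImagePath`, `getWin`, and `showImage` are hypothetical names standing in for the app's own helpers, and `app` is injected as a parameter so the sketch is self-contained; in the real `main.js` it is Electron's `app` object):

```javascript
// Hypothetical helper: pick the first image-looking argument out of the
// commandLine array Windows passes along for the second instance.
function findImagePath(commandLine) {
  return commandLine.find((arg) => /\.(jpe?g|png)$/i.test(arg)) || null;
}

// Returns true if this process won the lock (first instance), false otherwise.
function enforceSingleInstance(app, getWin, showImage) {
  if (!app.requestSingleInstanceLock()) {
    app.quit(); // another instance already holds the lock: bow out
    return false;
  }
  // A second launch was attempted: surface the existing window instead.
  app.on('second-instance', (event, commandLine) => {
    const win = getWin();
    if (win) {
      if (win.isMinimized()) win.restore();
      win.focus();
    }
    const imagePath = findImagePath(commandLine); // right-click open path, if any
    if (imagePath) showImage(imagePath);
  });
  return true;
}
```

The design choice here is that the losing process quits immediately, while the winning process reacts to every later launch attempt through the `second-instance` event.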

The following lines of code are for housekeeping. You may or may not choose to use them.

And at the end, we have the rest of the `openFilePath` specific code.

Code Signing

Code Signing is a security technology that you use to certify that an app was created by you. Yes, that is the official line from the Electron documentation. When you distribute your application without signing it, the operating systems bite back by not letting you install it right away. On Windows, you get a pop-up asking whether you want the application to `Run Anyway` because it has been flagged as potentially malicious software (even though it is not), and on Mac, you have to allow the application to run from the `Settings` page. And rightly so: you never know whether an application you install is a virus, malware or ransomware. It might just ruin the whole system, and you might not be able to do much about it.

Electron Builder does support code signing, and the docs do a great job of outlining how to go about it. You can find more information at https://electronjs.org/docs/tutorial/code-signing.
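As a rough illustration only (the certificate file name, password placeholder, and Apple identity string below are all placeholders, and the exact keys should be double-checked against the electron-builder docs linked above), signing configuration typically lives in the `build` section of `package.json`:

```json
{
  "build": {
    "win": {
      "certificateFile": "path/to/certificate.pfx",
      "certificatePassword": "YOUR_CERT_PASSWORD"
    },
    "mac": {
      "identity": "Developer ID Application: Your Name (TEAMID)"
    }
  }
}
```

In practice you would keep the password out of the file and supply it via an environment variable instead.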

Final Words

Building apps might seem like a piece of cake, but if you have ever delved into the world of creating software for actual humans, you will have quickly realized that it is far more complex than it seems from the outside. In this book, we created a 360 image-viewing app, but if you look closely and tear it apart, there are so many areas where we could add new features and make it better. That is why a product is never finished; that is why Mark Zuckerberg says that Facebook is not complete and probably never will be. One thing I will say is that, in this whole book, we haven't really used boilerplate code to generate our base project, but I would highly recommend that you do, as it saves you a tonne of setup time. The boilerplate code that I usually use is here.


About the Author

My name is Quinston Pimenta and I am a full-stack developer living in Pune, India at the moment. I am the CTO and Co-Founder of a 360 / VR company called where we create beautiful 360 / VR Experiences for our clients. I also run a YouTube Channel where I make fun videos on general programming, data structures and algorithms using various programming languages. I am so privileged to be alive at a time where we have literally everything we have ever wanted at our fingertips. I am so grateful to the creators of these libraries to have given us the opportunity to create such beautiful experiences. I wish you the best of luck in your programming endeavours and happy coding! You know where to find me.