Dip Dapp Doe — Anatomy of an Ethereum distributed fair game (part 3)

Written by ledfusion | Published 2018/08/28
Tech Story Tags: ethereum | dip-dapp-doe | anatomy-of-ethereum | distributed-fair-game | dapps


Photo by Timo Wagner on Unsplash

If you are one of the strong readers who made it through part 2, welcome back! If you are not, I urge you to read parts 1 and 2 with care, and come back when you are done.

Dip Dapp Doe — Anatomy of an Ethereum distributed fair game (part 2) — hackernoon.com

Today we are going to follow the Test Driven Development methodology on our frontend, along with Web3. We are also going to bundle the dapp and distribute it with IPFS.

Keep your seat belts fastened, because we are approaching our destination!

Picture by publicdomainphotos

As a reminder, the source code of the article can be found on the GitHub repo:

ledfusion/dip-dapp-doe — Distributed app featured in a Medium article (github.com)

Test Driven Development, yet again

In the last article we dived into the architecture, design and the building blocks of our dapp’s frontend. For educational purposes, we even showed the integration of one of the blockchain transactions, but let’s not lose perspective. In TDD, we need to spec first and code later.

There are very nice tools that allow you to automate UI tests and even record them visually across different browsers. However, in dapp testing we are limited by two important caveats:

**Only a few browsers support Web3.** Browser support may extend with the release of new MetaMask plugins, but we are mainly pivoting around the Chrome engine and Gecko.

**We can’t get programmatic access to control MetaMask/Web3.** Allowing JavaScript code to accept Ethereum transactions would be a huge security flaw, because any web site could steal our funds at once. However, that is exactly what we need to do in order to test our code.

The last issue would have been a major drawback for any serious project’s workflow. Until now.

Dappeteer

Puppeteer is an official package from Google that allows you to programmatically control a Chromium instance from NodeJS on Linux, Windows and MacOS. However, how do we add the MetaMask plugin and tell it to accept transactions, if the plugin runs outside of our window?

That’s where Dappeteer comes into play! It is another NPM package that features an embedded version of MetaMask, tells Puppeteer to run with the plugin enabled and provides some wrapper methods to import accounts, accept transactions and even switch to a different network.
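A quick sketch of how we will drive it later, assuming Dappeteer's launch/getMetamask helpers and its MetaMask wrapper methods (URLs and keys below are placeholders):

const puppeteer = require("puppeteer")
const dappeteer = require("dappeteer")

async function demo() {
    // Launch Chromium with the bundled MetaMask extension enabled
    const browser = await dappeteer.launch(puppeteer)
    const metamask = await dappeteer.getMetamask(browser)

    // Point MetaMask to the local blockchain and import a test account
    await metamask.switchNetwork("localhost")
    await metamask.importPK("0x<private key of a test account>")

    // Interact with the dapp as a user would
    const page = await browser.newPage()
    await page.goto("http://localhost:1234")
    // ...click something that triggers a transaction...
    await metamask.confirmTransaction()
}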

In our web folder:

$ npm i -D puppeteer dappeteer

Local blockchain

If you recall, in part 1 we developed our smart contracts by deploying and testing them in a local blockchain. Test cases waiting for every public transaction to be mined would take ages to complete.

However, in part 2 we demonstrated the integration with the public blockchain from the browser. What happens now? How can we use a local blockchain so that transactions are mined as fast as when using Truffle?

The tool for this is Ganache CLI. It is another NPM package, which is part of the Truffle Framework and it is what we actually used under the hood in part 1.

$ npm i -D ganache-cli

If you run it now, you should see something like this:

Ganache CLI output

As you see, it generates random wallets with 100 ether, but it can be fully customized. Now we can mine immediate transactions without polluting the public blockchain with junk.
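For instance, a sketch using a few of ganache-cli's documented flags (the values here are just an example):

$ ganache-cli --accounts 10 --defaultBalanceEther 100 --port 8545 --mnemonic "$(cat ./dev/mnemonic.txt)"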

Workflow scripts

In normal web projects, you may be used to working with Webpack started by a simple NPM script. However, in the current project we need to combine several components running at the same time.

What needs to happen when we run our E2E tests?

  • Start the Ganache local blockchain (in the background)
  • Recompile the contracts
  • Deploy them to the local blockchain
  • Write the contract instance’s address so that the frontend knows where to attach to
  • Bundle the frontend files with Parcel
  • Start a local HTTP server for the static files (in the background, too)
  • Launch Chromium+Dappeteer and run the tests
  • Kill Ganache and the HTTP server
  • Forward the exit() code of Mocha to the parent process, so that it can determine if all tests passed or not

You are free to use any task runner that you like, but to me this clearly becomes a job for a shell script. To get the best of both worlds, I’d suggest using [runner-cli](https://www.npmjs.com/package/runner-cli), along with a Taskfile. More on this below.

$ [sudo] npm i -g runner-cli

Let’s create one:

$ run --new
? What template do you want to use?
  Gulp file
  NPM package
  Makefile
❯ Shell script

Now edit taskfile and add a function called test with the following set of commands (commented inline):

function test {
    echo "Starting ganache"
    ganache-cli --mnemonic "$(cat ./dev/mnemonic.txt)" > /dev/null &
    ganache_pid=$!

...

Here we start the server in the background (with the & at the end) and retrieve the process PID by assigning $! to ganache_pid. Also note that "$(cat ./dev/mnemonic.txt)" reads the contents of the mnemonic.txt file and passes them as a Ganache parameter. With that, everyone can import the same accounts.

echo "Recompiling the contracts"cd ../blockchain./taskfile buildcd ../web

Here we go to the contracts folder and run another script that launches Solc to compile the contracts. Compilation can run concurrently with Ganache.

echo "Deploying to ganache"node ./dev/local-deploy.js

This script is quite similar to blockchain/deploy/lib.js. Instead of deploying the contracts to the Ropsten network, it deploys them to Ganache. It also stores the instance address into .env.test.local (we will see this later).
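As a minimal sketch of what such a deployment could look like with web3 1.0 (the artifact path and gas value are illustrative, not the actual ones from the repo):

const fs = require("fs")
const path = require("path")
const Web3 = require("web3")

async function deploy() {
    // Ganache, started by the taskfile above
    const web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))
    const accounts = await web3.eth.getAccounts()

    // Compiled artifact produced by the blockchain build task (assumed path/format)
    const artifactPath = path.resolve(__dirname, "..", "..", "blockchain", "build", "DipDappDoe.json")
    const artifact = JSON.parse(fs.readFileSync(artifactPath))

    const instance = await new web3.eth.Contract(artifact.abi)
        .deploy({ data: artifact.bytecode })
        .send({ from: accounts[0], gas: 4500000 })

    console.log("Deployed at", instance.options.address)
    return instance.options.address // written into .env.test.local (see below)
}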

echo "Bundling the web with NODE_ENV=test"NODE_ENV=test parcel build -d ./build --log-level 2 --no-source-maps src/index.html &parcel_pid=$!

Now that we know what address to attach to, we can tell Parcel to bundle from src to build with the appropriate environment variables in place. This can run in parallel with our next step:

echo "Starting local web server"serve build -p 1234 &serve_pid=$!

This will simply start an HTTP server, leave it in the background and take note of its PID. Run npm install -D serve to add it to the project.

echo "Running the tests"wait $parcel_pidmocha ./test/frontend.spec.jsmocha_result=$?sleep 1

Here, we wait for the Parcel process to complete, and when it does, we finally start our Mocha test cases. We keep the exit code of Mocha by reading $? and a bit later we start to clean things up:

echo "Stopping the servers"kill $ganache_pidkill $serve_pidexit $mocha_result}

We kill the two background processes and finally exit with the status code returned by Mocha.

Ta da!

Environment data

At this point, if we run parcel -d ./build src/index.html, we will start a dev server on port 1234 with Web3 pointing to the Ropsten (test) network. But if we do run test, then we expect a web site that connects to Ganache. How can we achieve that without touching any code?

Parcel allows us to use .env files and map the KEY=value lines into process.env.* variables. Let’s create a couple of files for our environments. In web/.env:

CONTRACT_ADDRESS=0xf42F14d2cE796fec7Cd8a2D575dDCe402F2f3F8F
WEBSOCKET_WEB3_PROVIDER=wss://ropsten.infura.io/ws
EXPECTED_NETWORK_ID=ropsten

These are the environment variables that will be used by default. That is, when compiling the web, we will connect to the public Ropsten network, expect MetaMask to be on this network too and use the address where the contract is deployed.

However, when we are testing, we want those variables to look like below in web/.env.test.local:

CONTRACT_ADDRESS="--- LOCAL CONTRACT ADDRESS GOES HERE ---"WEBSOCKET_WEB3_PROVIDER=ws://localhost:8545/wsEXPECTED_NETWORK_ID=private

When NODE_ENV is set, Parcel will look for .env.$(NODE_ENV).local and inject those values instead of the default ones. So process.env.EXPECTED_NETWORK_ID will evaluate to private in testing and be ropsten otherwise. More info here.
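On the frontend side, these variables can then be used as if they were constants, because Parcel inlines them at build time. A minimal sketch (the artifact path is an assumption):

const Web3 = require("web3")
// ABI from the compiled contract (assumed path)
const dipDappDoeAbi = require("./contracts/dip-dapp-doe.json").abi

// Parcel replaces process.env.* with the literal values at bundle time
const provider = new Web3.providers.WebsocketProvider(process.env.WEBSOCKET_WEB3_PROVIDER)
const web3 = new Web3(provider)
const contract = new web3.eth.Contract(dipDappDoeAbi, process.env.CONTRACT_ADDRESS)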

As we already mentioned, we need [web/dev/local-deploy.js](https://github.com/ledfusion/dip-dapp-doe/blob/master/web/dev/local-deploy.js) to replace the CONTRACT_ADDRESS placeholder with the contract’s local address. The main difference from the deployment script we already wrote in blockchain/deploy/lib.js is the following function:

function setContractAddressToEnv(contractAddress) {
    if (!contractAddress) {
        throw new Error("Invalid contract address")
    }
    const filePath = path.resolve(__dirname, "..", ".env.test.local")

    let data = fs.readFileSync(filePath).toString()

    const line = /CONTRACT_ADDRESS=[^\n]+/
    data = data.replace(line, `CONTRACT_ADDRESS=${contractAddress}`)

    fs.writeFileSync(filePath, data)
}

Every time we run test, the .env.test.local file is updated, and there is no code to modify.

What if I want to just develop on a version of the dapp using the local blockchain?

Two versions of the dev task are available on the web folder’s taskfile on GitHub.

  • run dev will provide an environment identical to the one used to run the tests, but leaving the browser open for you
  • run dev ropsten will simply run Parcel’s dev server and rely on Chrome/Firefox’s MetaMask as any user would do

Time for specs

Create the web/test/frontend.spec.js file and copy the following content into it:
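Below is a minimal sketch of how such a spec could start (the complete file is on GitHub; the selectors and timings here are illustrative, not the actual ones):

const puppeteer = require("puppeteer")
const dappeteer = require("dappeteer")
const assert = require("assert")

let browser, metamask, page

describe("DipDappDoe frontend", function () {
    this.timeout(1000 * 120) // UI tests are slow

    before(async () => {
        browser = await dappeteer.launch(puppeteer)
        metamask = await dappeteer.getMetamask(browser)
        await metamask.switchNetwork("localhost") // point MetaMask to Ganache

        page = await browser.newPage()
        await page.goto("http://localhost:1234")
    })

    after(() => browser.close())

    it("should create a game", async () => {
        await page.type("input[name='nick']", "Jack")   // illustrative selector
        await page.click("button.create-game")          // illustrative selector
        await metamask.confirmTransaction()

        // give the frontend a moment to receive the contract event
        await new Promise(resolve => setTimeout(resolve, 1500))
        // ...assert that the new game shows up on screen...
    })
})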

Ready? Type run test and see the magic happen :)

Everything we need is ready. To keep the article readable, we will not elaborate on every use case. Feel free to check the spec file on GitHub.

What happens next?

We could approach the specs by starting a game, switching to another account, accepting the game, switching back again, etc. However, this could lead to overcomplex specs that check a behaviour users will never experience in that form. We’d rather focus on one player’s experience and make sure that all relevant use cases are checked.

To simulate the actions of the opponent, we will launch the corresponding transactions from the NodeJS testing script. So the approach we will follow looks like this:

  • We tell Chromium to create a game
  • We launch a transaction from web/test/frontend.spec.js to accept the game from accounts[1]
  • Chromium confirms
  • We tell Chromium to mark one position
  • We make a transaction from the opponent’s account to mark another position
  • Repeat the process until we reach a draw
  • We check that the cells have the appropriate state and that the game ends in draw

So what would such a use-case test look like?

Writing UI specs like this can be slow at the beginning, but the effort pays off as soon as you have simulated 5 complete games in less than a minute.

A few things to note:

  • Some assertions need to be delayed a bit, so that the frontend receives events and UI components respond
  • The amount of time to delay may vary, depending on the environment speed
  • We have added HDWalletProvider to reuse the same mnemonic, get the second available account and let the opponent play from it (see the sketch after this list)
  • We have created a couple of helper functions to encapsulate repetitive tests, and will probably add more as we test more use cases
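As a sketch, the opponent’s side could be wired up like this (the acceptGame call below is an assumed signature; check the actual contract from part 1):

const HDWalletProvider = require("truffle-hdwallet-provider")
const Web3 = require("web3")
const fs = require("fs")

const mnemonic = fs.readFileSync("./dev/mnemonic.txt").toString().trim()

// Index 1 → the second derived account, acting as the opponent
const provider = new HDWalletProvider(mnemonic, "http://localhost:8545", 1)
const web3 = new Web3(provider)

// Later, within a test case:
// const [opponent] = await web3.eth.getAccounts()
// await contract.methods.acceptGame(gameIdx, randomNumber, "Jill").send({ from: opponent })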

Given the following spec, we code the behaviour of the frontend accordingly.

Let’s watch the movie of our test case playing against itself:

Doesn’t it remind you of a well-known film?

Coding and polishing

After the first use case is tested, the slope doesn’t look steep anymore :)

What’s left for us is to spec the remaining use cases, code the frontend accordingly and bundle the static web site. Using the building blocks explained in part 2, the rest of the frontend functionality can be developed without major issues.

When our specs are ready and development is underway, we see that it would be good to show the “Withdraw” button only when the money hasn’t been withdrawn already. However, this means that we need to add a getter function to the smart contract.

What does it mean for us at this point?

  • Add a test case in blockchain/test/dipDappDoe.js
  • Add the function to blockchain/contracts/DipDappDoe.sol
  • Exec run test in the blockchain folder
  • Add assertions in web/test/frontend.spec.js
  • Update web/src/game.js (see the sketch below)
  • Exec run test
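For illustration, the frontend check could be a simple helper like this (didWithdraw is a hypothetical getter name; the real one lives in DipDappDoe.sol):

// didWithdraw() is a hypothetical getter name, for illustration only
async function canWithdraw(contract, gameIdx) {
    const withdrawn = await contract.methods.didWithdraw(gameIdx).call()
    return !withdrawn // show the "Withdraw" button only while funds are pending
}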

Updates on the contract will immediately be reflected in the frontend’s code, and automated testing will ensure that nothing broke, in about one minute.

Distribution

Bundling

Once we are happy with specs, results and UI performance, it’s time to think of distributing our dapp to the world. The first step is to use Parcel to bundle it with production settings:

function build {
    echo "Recompiling the contracts"
    cd ../blockchain
    ./taskfile build > /dev/null
    cd ../web

echo "Cleaning the build folder"rm ./build/*

echo "Bundling the web site"NODE_ENV=production parcel build -d ./build --log-level 2 --no-source-maps src/index.html}

Next, it is time to quickly check that everything actually looks as expected, including the attachment to the Ropsten network:

function www {
    build
    serve build -p 1234
}

Navigate to http://localhost:1234/ and check that everything is okay. These are the static files of our dapp:

At this point, we could simply upload these files to Netlify, Surge, S3 or whatever provider you like. Once our domain name points to the hosting IP address and the TLS certificate is ready, we should not worry about data integrity anymore, right? As long as nobody tampers with your git repo, your provider sticks to the SLA and corrupt governments don’t censor your web site, everything is fine.

However, it is a bit inconsistent that our dapp uses a smart contract running on a decentralized blockchain while it remains accessible through a centralized web site that a big fish could take down.

IPFS

This is one of the main reasons why IPFS exists. IPFS stands for InterPlanetary File System and it is conceived with the aim of making content freely and reliably accessible across the globe. It has many advantages and some drawbacks, but for educational purposes, we will go through one of the most popular decentralized filesystems.

In a similar way to a blockchain, the IPFS network is made of a lot of nodes around the world that nobody controls. They act as a global peer-to-peer network, in which files are addressed by their hash. You can think of it as a Git + BitTorrent architecture that also provides an HTTP gateway.

Without further introduction, let’s jump into it. First, install the IPFS CLI:

$ curl -O https://dist.ipfs.io/go-ipfs/v0.4.17/go-ipfs_v0.4.17_darwin-amd64.tar.gz
$ tar xvfz go-ipfs_v0.4.17_darwin-amd64.tar.gz
$ cd go-ipfs
$ ./install.sh

Let’s init our local repository:

$ ipfs init
initializing IPFS node at /Users/jordi/.ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUMkM9Px3touHUaWjB5yKi1qRbVwA9zRk8gjAndzkAy9w
to get started, enter:

ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme

What happens if we run the last line as a command?

Hello and Welcome to IPFS!

(IPFS ASCII-art banner)

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------

Check out some of the other files in this directory:

./about
./help
./quick-start        <-- usage examples
./readme             <-- this file
./security-notes

Several things happened:

  • QmS4ustL54... is the hash of an IPFS folder, and it depends on the contents of all of its files
  • If any of its files or subfolders varies, its hash will vary too
  • QmS4ustL54.../readme resolves the IPFS folder and retrieves the hash of the readme file
  • With the hash of that file, contents are transferred across the net (in this case, locally) and printed to the screen

For now, we are only using IPFS as a local client. Any content that is already in our repository will resolve immediately. But if we don’t have it, its hash will be requested from the network, transferred and eventually cached in our local repository. If nobody uses it for a while, it may be garbage collected.

How do we add our files and become content providers?

Let’s run the following command and see what happens:
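(Assuming the standard IPFS CLI, that would be ipfs add in recursive mode, run over the build folder.)

$ ipfs add -r build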

IPFS has hashed our files, computed the hash of the root folder and added them to our local repository. If we cat its content locally, this is what we get:

However, what happens if we add a simple space+line to index.html?

You guessed it, the hash of index.html is radically different, and the hash of the root folder too. Any attempt to alter data integrity will always generate new hashes.

But how do we access these files from a web browser?

IPFS provides an HTTP and HTTPS gateway. Any file or folder can be navigated to with a URL like https://ipfs.io/ipfs/<hash>. However, if we try to access this URL with the hash of our root folder, the browser will keep waiting forever, because no reachable node has such content yet.

Yes, we are not a reachable node yet. To join the network and provide our content, we need to start IPFS as a daemon. Open a new terminal tab and leave it running:

$ ipfs daemon

If we now visit https://ipfs.io/ipfs/<hash>, it may take a few seconds, but it will load. Well... not quite:

Everything worked fine when running on localhost, but it turns out that ParcelJS expects the bundles to be available from the root folder of the server. Now, however, we are under /ipfs/<hash>.

A little change in web/taskfile > build should make the difference:

# ...

NODE_ENV=production parcel build [...] --public-url ./ src/index.html

And then again, rebuild and add to IPFS:

Let’s copy the new hash and see what happens now:

DipDappDoe served from the IPFS gateway

After a bit of patience, our first request will finally complete and our dapp will be running! Subsequent requests will be much faster. What happens now?

If we run ipfs pin ls we will get the following:

IPFS allows nodes to pin files, so that their content is never garbage collected on them. In our case, we have the two versions of our build folder and the sample data created on ipfs init.

Note that the indirect entries are files contained in other IPFS folders: they are pinned only because another pinned element contains them. The recursive entries correspond to the explicitly pinned folders.

Now, our daemon is running and our content is accessible, but what happens if we stop it? Any content that has not been accessed yet will become unavailable. The files of the dapp we just visited will remain reachable for a few hours, until the network nodes mark them as unused and clean them up.

Unused content will continue to be stored and available as long as an active node keeps it pinned.
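For instance, any other node willing to keep the dapp alive could pin the root folder by its hash:

$ ipfs pin add <hash>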

IPNS

Telling the world to connect to a different URL every time the web site is updated is not very convenient. Isn’t there anything better?

IPFS provides the Inter-Planetary Name System (IPNS) mechanism. An IPNS hash acts as an alias to an IPFS hash, with the difference that it can be updated over time. An IPNS hash can only be updated from the account that created it, as it is signed with the user’s private key.

$ ipfs name publish QmbVfUBSHp42kYtDud9zr1pxedd4dgqDmAHuRqHRPKGywT
Published to QmUMkM9Px3touHUaWjB5yKi1qRbVwA9zRk8gjAndzkAy9w: /ipfs/QmbVfUBSHp42kYtDud9zr1pxedd4dgqDmAHuRqHRPKGywT

From now on, the IPNS hash QmUMkM9Px3... will resolve to /ipfs/QmbVfUBSHp42... In the browser, navigating to https://ipfs.io/ipns/QmUMkM9Px3... will be the same as navigating to https://ipfs.io/ipfs/QmbVfUBSHp42...

If at a later time we need to update the frontend, we will need to repeat the steps above with the new IPFS hash. Existing users will continue to use the same URL.

DNSLink

However, the IPNS approach still presents a few issues.

  • IPNS URLs are neither user friendly nor easy to remember
  • Given an IPFS URL, users will not be able to verify that it is legitimate and belongs to us
  • Using the ipfs.io domain, a malicious web site also hosted on ipfs.io could expose certain dapps to XSS attacks or retrieve local data from unrestricted cookies
  • IPNS hash resolution may be slow

A more desirable scenario would be to use our domain name instead of the IPNS hash. To that end, IPFS allows using DNS TXT records to indicate which IPFS resource should be served.

If our domain was dapp.game, we would add a TXT record that should contain a string like:

dnslink=/ipfs/<hash>
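Once the record is in place, it can be checked with a regular DNS query (dig is just an example tool):

$ dig +short TXT dapp.game
"dnslink=/ipfs/<hash>"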

When the changes propagate through the net, the IPFS gateway will be able to fetch the TXT record of the given domain and use the underlying hash. Our dapp will then be available via https://ipfs.io/ipns/dapp.game/. Easy to recognize, easy to check.

But as we still use the ipfs.io domain, the third issue above remains.

Custom domain

To achieve the most user friendly approach, we would need the dapp to be accessible via dapp.game, but then we face a tradeoff: TLS or IPFS.

  • For content to travel through TLS with our domain, we need to use our own server with the appropriate TLS certificate. The IPFS gateway has its own domain and certificate, and any other host name would be rejected by the browser.
  • If the above is not an option, then requests to dapp.game can be CNAME‘d to gateway.ipfs.io, but this will only work over HTTP.

**Own server**
We could get a TLS certificate from LetsEncrypt, start a local IPFS node and use Nginx to proxy external requests to IPFS, but that defeats the advantages of using IPFS.

Workload, security and data integrity depend on our centralized server, which becomes the bottleneck. Netlify, Firebase or Amazon are much stronger candidates than your own server to host the static site.

It is true that the IPFS gateway could be considered as a central point as well, but it is backed by a decentralized network of nodes and has successfully overcome DDoS attacks and censorship attempts.

Hosting the static files on our domain would mitigate potential XSS vulnerabilities, but it would expose our server to threats that IPFS has already handled in the past. More info.

**IPFS HTTP gateway**
On the other hand, DipDappDoe does not rely on external resources beyond the blockchain. XSS should not be an issue for DipDappDoe, but communication over HTTP opens the door to DNS hijacking and man-in-the-middle attacks.

IPFS conclusion

The final decision will depend very much on the way the dapp is built and what kind of users will interact with it.

  • Using an IPNS URL like https://ipfs.io/ipns/dapp.game/ may be suitable if your dapp cannot leak any information to an XSS attacker, does not load any dynamic content and your users don’t mind copying or typing slightly longer URLs.
  • CNAME-ing our domain to the IPFS gateway should be avoided, as this will only work over HTTP.
  • Using your own backend to serve over HTTPS doesn’t take any advantage of IPFS, compared to serving local static files on its own. This approach would be suitable if third-party content must be accessed from the dapp, if we can withstand big fish attacks, or if the user base would not play well with a URL like the one above.
  • Using one of the major hosting providers is the fallback for any of the above approaches. They will allow you to use your own domain name and TLS certificates, and will do their best to prevent potential DDoS and censorship attacks. But they will be centralized.

Global Summary

As you have seen, writing a simple dapp is far from simple. The list of technologies involved is not short:

DipDappDoe is an effort to cover the entire process of building a fully functional dapp with the minimum viable technology.

In part 1 we learnt how to use the TDD methodology to develop the smart contracts of the dapp. In part 2 we saw how to deploy the contracts to a decentralized blockchain and we bootstrapped the architecture of the dapp’s frontend. In part 3 we have followed the TDD methodology again to develop the frontend of the dapp and have used a decentralized filesystem like IPFS to publish it to the world.

Now that our MVP is ready, what might be next?

Room for improvement

Obviously, a blockchain version of Tic-Tac-Toe will not be as exciting as a centralized real-time version. The core value of our version is to provide a provably fair game powered by smart contracts that everybody can trust. Our main goal is to demonstrate the full stack of a distributed app and see how to use the building blocks at our disposal.

If DipDappDoe was a real project, there would be many, many details to improve and work on at this point.

  • The smart contracts would need to be audited by additional expert blockchain developers, beyond the original developer
  • Have the contracts’ metadata and source code automatically published, so that they can be viewed and validated on sites like Etherscan
  • Make active use of Swarm and Whisper once these two Ethereum technologies become ready and steady
  • Hire a dedicated graphics designer
  • Implement a much deeper check of the client environment, detecting Web3 compatibility and leading the user to get a fully operational browser
  • Validate the UX/UI with private beta testers and ship an MVP close to what the app looks like today (if no issues arise)
  • Iterate over the UI/UX improvements as the user base grows and the team has feedback about the dapp
  • Disconnect from Web3 so that the frontend testing process can exit gracefully (when newer versions allow it)
  • Explore if Dappeteer UI test cases could work in headless mode, so that CI/CD providers could run them

The end.

Writing this series of articles has involved a big effort and countless hours of work. I’m honored to see that you made it to the end 🙂

If you found the series of articles interesting and useful, please stand up, clap your hands 👏, smile 😊 like a hero and share with the same 💚 that I’ve put into this piece of work. Thank you!

As said earlier, the project featured in the article can be found in the GitHub repo:

ledfusion/dip-dapp-doe — Distributed app featured in a Medium article (github.com)

Stay tuned, because this is just the beginning for the technologies above.


