Getting started

First Time Setup

  1. Install NVM (https://github.com/nvm-sh/nvm#installing-and-updating)
  2. nvm install
  3. nvm use
  4. corepack enable

Dev instructions

The following outlines how to start a local instance of the backend service and the web frontend.

Running a web instance

  1. yarn install

    • Installs all required dependencies for the monorepo
  2. yarn workspace web start:local

    • Starts up a local instance of the web frontend project
    • You can also use start:staging or start:production, but prefer local while developing so we don't create unnecessary data in our staging environments. Creating data in production should be avoided unless strictly necessary, although running against production can be useful for debugging issues that can't be replicated against your local backend service. Note that the E2E tests will fail against anything other than local, because the Root user details they require won't be available.

Running a local backend

We often find that E2E tests perform better locally against a fresh backend. It also has the advantage that you can check out backend branches without waiting for them to be deployed to staging.

In order to spin up a local backend, you can run the below:

  1. yarn install

    • Installs all required dependencies for the monorepo
  2. yarn backend:start

    • Starts up the local backend service - it will ask you which branch to use; main is often what you want to choose.
    • During this step it will also automatically attempt to create your new Root user account - this may fail as it requires the local web instance to be running, see Login Data below for more information.
  3. yarn workspace web start:local

    • Starts up a local instance of the web frontend project
  4. yarn backend:create-org

    • Generates your first organisation along with users, and creates a Root user if one doesn't already exist.

Build & Deployment

You can see the workflow pipelines in GitHub that facilitate the build and deployment.

Build

We build the application in a GitHub Actions workflow and publish the resulting build artifact as a zip file to an AWS S3 bucket in the tools AWS account. This bucket retains every version indefinitely.

Deployment

This artifact then has some environment-specific config injected (a JSON config file), is repackaged, and is deployed via AWS Amplify in each environment's account. The pipeline triggers these deployments, and there is a manual gate before production.
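As a rough sketch of what the injection step does (the config shape, keys, and values below are illustrative, not the real ones):

```typescript
// Hypothetical sketch of deploy-time config injection: the build artifact
// ships a base config, and a per-environment JSON overlay is merged in
// before repackaging. All names and values here are made up.
type AppConfig = {
  apiBaseUrl: string;
  featureFlags: Record<string, boolean>;
};

// Baked into the build artifact.
const baseConfig: AppConfig = {
  apiBaseUrl: "http://localhost:4000",
  featureFlags: { newDashboard: false },
};

// Injected per environment during deployment (e.g. from a staging.json).
const stagingOverlay: Partial<AppConfig> = {
  apiBaseUrl: "https://api.staging.example.com",
};

// Shallow merge: the overlay wins for any key it defines,
// base values survive for everything else.
const injectedConfig: AppConfig = { ...baseConfig, ...stagingOverlay };

console.log(injectedConfig.apiBaseUrl);
```

The real pipeline does this at repackage time rather than in application code, but the merge semantics are the useful mental model.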

Infrastructure

The AWS resources that this repo depends on, such as the S3 buckets and the AWS Amplify setup, are managed in the platform-terraform repo. Ideally these would live here, but that's more hassle than it's worth right now.

API Clients

We use "orval" to generate our API clients and react-query hooks. The source code lives under <rootDir>/packages/api-client. For sample usage, search for @/generated-api-clients/ in the codebase and see how the generated clients are used.
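For orientation, consuming a generated hook typically looks roughly like the sketch below. The hook name, module path, and field names are hypothetical (orval derives them from our OpenAPI operation names), so check the generated code for the real ones.

```typescript
// Hypothetical .tsx sketch of using an orval-generated react-query hook.
// "useListOrganisations" and "Organisation" are made-up names; search
// @/generated-api-clients/ for the actual generated hooks.
import { useListOrganisations } from "@/generated-api-clients/organisations";

export function OrganisationList() {
  // Generated hooks wrap react-query's useQuery, so caching plus
  // loading and error states come for free.
  const { data, isLoading, error } = useListOrganisations();

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;

  return (
    <ul>
      {data?.map((org) => (
        <li key={org.id}>{org.name}</li>
      ))}
    </ul>
  );
}
```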

Code conventions

TODO: write important code patterns here

File structure

TODO: write important points about how the file structure of the app works

User journeys

A good way to get familiar with the different user journeys in the application is to run the E2E Playwright tests in "debug" mode (see the E2E testing section below, which explains how yarn workspace web e2e:debug works). Playwright's "debug" mode lets you click through each step of the user journeys and can give you a good feel for how the app works.

E2E testing with Playwright

Important notes

  1. Each run will create a super admin using an API call, and then create a fresh organisation. This allows our tests to be deterministic and not depend on any pre-existing data.
  2. We use testmail.app in our tests to read the passwords that get sent by email for account creation.
  3. We have set up the tests to retain video recordings on failures; see the <rootDir>/test-results folder for recordings of failed tests.
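As a rough illustration of the testmail.app step, a helper for reading the emailed password might look like the sketch below. The endpoint shape, query parameters, and response fields are assumptions based on testmail.app's public JSON API; see their docs and our existing E2E helpers for the real implementation.

```typescript
// Hypothetical sketch of polling testmail.app for an account-creation
// email. Parameter and field names are assumptions, not our real helper.
async function fetchLatestEmailText(tag: string): Promise<string | undefined> {
  const params = new URLSearchParams({
    apikey: process.env.TESTMAIL_API_KEY ?? "",
    namespace: process.env.TESTMAIL_NAMESPACE ?? "",
    tag, // each run can use a unique tag to isolate its inbox
    livequery: "true", // waits for a new email rather than returning instantly
  });
  const res = await fetch(`https://api.testmail.app/api/json?${params}`);
  const body = await res.json();
  // The password would then be parsed out of this message text.
  return body.emails?.[0]?.text;
}
```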

Commands to run tests

You can use any of the following to run the E2E tests:

  1. yarn workspace web e2e: This will run the tests in "headless" mode, i.e. without showing you the browser and everything just runs in the terminal.
  2. yarn workspace web e2e:headed: Same as above, but you will see the tests run in the browser.
  3. yarn workspace web e2e:debug: A very helpful debugger which allows you to step through your test code line-by-line and see what's happening in the browser. It also features a helpful recording/codegen tool which should be our default go-to instead of manually writing tests.
  4. yarn workspace web e2e:ui: Runs the tests in a UI window, allowing you to see a timeline of the events taking place as well as additional information while the tests run.
  5. yarn workspace web e2e:record: Opens up a blank recording session – however, you might find it easier to use yarn workspace web e2e:debug, which has the same recorder tool.

You can additionally pass CLI parameters to run specific tests, e.g. yarn workspace web e2e:debug assets.spec -g "existing wallet address"

Tips

  1. Try to avoid writing tests by hand as much as possible, and instead prefer the recorder tool, as it makes tests easier and faster to write. See the Playwright codegen docs for how this works.
  2. The easiest way to debug test failures is to use yarn workspace web e2e:debug and step through each line of test code.
  3. You can use await page.pause() breakpoints in your test code to pause the debugger programmatically – this will aid in debugging test failures because you can have the debugger pause right before the failing line of code.
  4. The recorder tool works best if you have good accessibility-friendly semantic markup and if you write tests with user flows in mind. See the "locators" docs to get a good sense of which locators work best (for example, it's best to select a button using getByRole and non-interactive elements using getByText, while getByTestId is not user-facing and is generally best avoided, as it makes your tests dependent on code rather than user flows).
  5. Even though they appear to do the same thing, the mental model in Playwright is different to Cypress. Cypress tends to emphasise writing clean test code and using data-cy selectors with long-term maintenance in mind. Playwright is a little more agile, leaning towards codegen-style test generation – as long as our HTML semantics are correct, we can codegen our tests as our primary method rather than hand-polishing the test code. When it comes to fixing broken tests, it's easy to step through to the point of failure with the debugger and re-record the following steps.
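Putting tips 3 and 4 together, a minimal spec might look like the following. The route, button label, and success text are placeholders, not real selectors from our app.

```typescript
// Hypothetical Playwright spec showing user-facing locators plus a
// programmatic debugger breakpoint. All selectors are placeholders.
import { test, expect } from "@playwright/test";

test("creates an asset", async ({ page }) => {
  await page.goto("/assets");

  // Prefer role-based locators for interactive elements…
  await page.getByRole("button", { name: "Add asset" }).click();

  // Pauses here when running under e2e:debug (it is a no-op in plain
  // headless runs), so you can inspect the page right before the
  // assertion that tends to fail.
  await page.pause();

  // …and text-based locators for non-interactive content.
  await expect(page.getByText("Asset created")).toBeVisible();
});
```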

Improvement Suggestions

TODO: add here your suggestions on how we can improve this codebase.