A stack to grow with

6 feb 2023

How we navigated the endless list of cutting-edge technologies available when building our tech organisation from scratch.

When building the Bits platform, we were faced with an opportunity many developers dream about: the freedom to choose any technology stack, with no technical debt to worry about. While this may sound like a dream if you've ever been stuck with a technology you dislike, the choice can be harder than expected. Should you use the latest and greatest new tool, or something older and more proven?

Beyond the obvious factors such as performance and developer experience, you need to consider the availability of engineers experienced with, or willing to learn, the technology you choose. The support and maintenance community around your tools also plays a crucial role in how quickly you can solve issues and how well your stack stays relevant as technology advances. In this post, we'll outline the reasoning and discussions behind our technology choices and the final decisions we made.

For the frontend of our application, we chose TypeScript and React with the Next.js framework. TypeScript has become a de facto standard for new web projects: beyond preventing type errors, it offers hugely improved code completion and annotation support over vanilla JavaScript, resulting in a great developer experience. React has also grown into something of a default choice, with its own pros and cons, but we feel many potential objections are addressed by Next.js. Next.js lets us employ server-side rendering in a flexible way, resulting in an indexable and snappy web page without compromising functionality, where traditional single-page applications had a tendency to feel sluggish.
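To illustrate the server-side rendering pattern Next.js enables, here is a minimal sketch using the `getServerSideProps` convention from Next.js's pages router. The `fetchProducts` helper, the `Product` shape, and the data are hypothetical, and the component is written without JSX to keep the sketch self-contained:

```typescript
// pages/products.tsx (sketch) -- Next.js calls getServerSideProps on the
// server for every request, so the page arrives fully rendered and indexable.

interface Product {
  id: string;
  name: string;
}

// Hypothetical data source; a real app would query a database or an API here.
async function fetchProducts(): Promise<Product[]> {
  return [
    { id: "1", name: "Background check" },
    { id: "2", name: "ID verification" },
  ];
}

export async function getServerSideProps() {
  const products = await fetchProducts();
  // Whatever is returned under `props` is serialized and handed to the page
  // component, which is rendered to HTML on the server before being sent.
  return { props: { products } };
}

// The page component itself is plain React; rendering a string here keeps
// the sketch runnable without JSX tooling.
export default function ProductsPage({ products }: { products: Product[] }) {
  return products.map((p) => p.name).join(", ");
}
```

Because the data fetching runs on the server, crawlers and first-time visitors receive complete HTML instead of an empty shell waiting for client-side JavaScript.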
And while Next.js may be quite a new framework, most of the code we write is still standard React code, which could be ported to a different framework or to vanilla React without too much effort if needed in the future.

Once you have elected to use JS or TS on the frontend, it is worth considering the same language for the backend. This enables sharing types and functionality between front- and backend (especially with the introduction of tRPC), and it empowers programmers to work throughout the stack instead of sticking to their corners. You're not even bound to Node anymore, with Deno and Bun appearing as challengers in the server-side JS runtime space. With all this in mind, you should have good reasons for choosing another language, and we believe we have a few of those supporting our choice of the Go programming language for our backend.
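To make the type-sharing argument concrete, here is a minimal sketch. The `Employee` shape and helper names are our own invention for illustration, not an actual API: a single module defines the wire type, the backend serializes with it, and the frontend parses with it, so a field rename becomes a compile-time error instead of a production bug.

```typescript
// shared/employee.ts -- imported by both the Node backend and the React
// frontend, so both sides agree on the wire format at compile time.
export interface Employee {
  id: string;
  name: string;
  startDate: string; // ISO 8601 date
}

// Backend side: produce a JSON response body from a typed value.
export function toWire(employee: Employee): string {
  return JSON.stringify(employee);
}

// Frontend side: parse a response body, with a light runtime check since
// JSON.parse alone cannot guarantee the shape at the network boundary.
export function fromWire(body: string): Employee {
  const data = JSON.parse(body);
  if (
    typeof data.id !== "string" ||
    typeof data.name !== "string" ||
    typeof data.startDate !== "string"
  ) {
    throw new Error("response does not match the Employee contract");
  }
  return data as Employee;
}
```

Libraries like tRPC take this further by inferring the client's types directly from the server's router definition, removing even the hand-written runtime check above.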

“Beyond the obvious factors such as performance and developer experience, you need to consider the availability of engineers experienced with, or willing to learn, the technology you choose.” — Joar Rutqvist at Bits Technology

Go has slowly but steadily risen in popularity in recent years and is now well established: proven in real-world applications, surrounded by a very active open source community, and officially supported by many cloud providers and other services. The language is also becoming sought after among developers, and in our experience, programmers working with (or wanting to work with) Go tend to be quality-focused and passionate about what they do. Our language choice helps us select for the type of developers we want on our team: Go bakes many of what we see as good programming practices into the language and forces you to adhere to them, and those who share our high regard for such practices tend to enjoy working with it.

On a technical level, the language provides a scaled-back syntax and standard library that is ‘just enough’ to build complex programs with efficient abstractions, while keeping you from losing yourself in deep type hierarchies and other metaprogramming paperwork. A killer feature of Go is the fact that it is, in a sense, boring: there is often only one way to do something, so you spend far less time thinking about how to write code, and, since this applies to your coworkers too, reading others’ code also becomes easier, because it is usually written close to how you would have done it yourself. Pair this with the fact that Go compiles to a lightweight binary and performs better than a garbage-collected language has any right to (with great multithreading support), and you get a language we are excited to base our core product logic on.

“A killer feature of Go is the fact that it is, in a sense, boring” — Simon Andersson at Bits Technology

To further streamline integration with different service providers, we adopted the OpenAPI standard and automatic generation of data structures, which reduced the work spent constructing and maintaining API specifications.

On the infrastructure side, we elected to use a serverless/microservice hybrid architecture. While serverless is great for quickly starting, and especially for scaling, a new product, some necessary components still made more sense implemented as microservices. We chose AWS as our cloud provider and combined it with a very interesting new framework for serverless development called Serverless Stack, or SST. SST provides its own TypeScript interface and primitives for setting up infrastructure as code. The framework is built around requiring the smallest possible amount of configuration, assuming sane defaults for anything not specified by the user. This means remarkably little code is needed to get a complete stack with API Gateway, Lambda, RDS/Aurora, SQS/SNS and so on set up, interconnected, and deployed, with separate private development environments generated automatically for each programmer. Behind the scenes, SST produces AWS CDK code, and you can always add custom CDK operations to your SST spec if you need a feature the framework doesn't support.

Additionally, SST provides a development aid called Live Lambda, which creates a proxy connection between stub Lambda functions deployed on AWS and the actual functions running on your local machine. This means you can test your deployed API while still making changes that hot-reload locally in fractions of a second, enabling a quick, iterative development process not usually available when working with serverless.
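As a rough sketch of what an SST stack definition looks like, assuming SST v2's `sst/constructs` imports; the route paths, handler locations, and database name here are hypothetical, not our actual configuration:

```typescript
import { StackContext, Api, RDS } from "sst/constructs";

// One function defines a whole stack: a serverless Aurora Postgres cluster
// plus an API Gateway with Lambda-backed routes, wired together by SST.
export function ApiStack({ stack }: StackContext) {
  const db = new RDS(stack, "Database", {
    engine: "postgresql11.13",
    defaultDatabaseName: "app",
  });

  const api = new Api(stack, "Api", {
    defaults: {
      // "bind" grants every route's Lambda access to the database and
      // injects its connection details, with no manual IAM or env wiring.
      function: { bind: [db] },
    },
    routes: {
      "GET /employees": "packages/functions/src/list.handler",
      "POST /employees": "packages/functions/src/create.handler",
    },
  });

  stack.addOutputs({ ApiEndpoint: api.url });
}
```

Everything not specified (runtimes, log groups, permissions between the listed resources) falls back to SST's defaults, which is what keeps stack definitions this short.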
The Lambda functions themselves are implemented as usual; the code doesn't need adaptation to work with SST, so if we ever want to move on from the framework, we won't need to scrap or rework any of our existing codebase.

In conclusion, our technology choices were made with a focus on developer experience and efficiency, while making sure not to compromise on performance and scalability. Next.js gives us cutting-edge capabilities client-side, while Go helps us build high-performing but clean backend code. AWS and SST let us build robust, scalable cloud infrastructure without months of configuration work. If this article has managed to make you even the tiniest bit jealous: we are currently hiring engineers, so come help us build amazing products on this wonderful tech stack!
