Vol 23 of the ThoughtWorks Technology Radar is out.
The Struggle with the Browser Continues
The web browser was originally designed for document browsing, but it now primarily hosts applications, and the abstraction mismatch continues to challenge developers. To overcome the many headaches inherent in this mismatch, developers keep rethinking and challenging established approaches to browser testing, state management and building fast, rich browser applications in general. We see several of these trends in the Radar. First, since we moved Redux to Adopt in 2017 as the default way to manage state in React applications, we now see developers either looking elsewhere (Recoil) or delaying the decision for a state management library. Second, Svelte has been gaining interest, and it challenges one of the established concepts of popular application frameworks such as React and Vue.js: the virtual DOM. Third, we keep seeing new tools for testing in the browser: Playwright is yet another attempt at improving UI testing, and Mock Service Worker is a novel approach to decoupling tests from their back-end interactions. Fourth, we continue to see the challenge of balancing developer productivity with performance, with browser-tailored polyfills aiming to tip the scale in that trade-off.
Several of our discussions revolve around tools and techniques that promote the democratization of programming: allowing nonprogrammers to perform tasks that previously only programmers could do. For example, solutions such as IFTTT and Zapier have long been popular in this space. We’ve observed an increasing use of tools such as Amazon Honeycode, a low-code environment for creating simple business applications. Although tools such as these provide fit-to-purpose programming environments, challenges arise when moving them to production-scale environments. Developers and spreadsheet wizards have long managed to find a compromise between domain-specific and traditional coding environments. The advent of more modern tools renews that discussion across wider domains, with many of the same positive and negative trade-offs.
There’s also a podcast that covers this topic:
There’s growing interest in empowering non-developers to perform tasks that previously only programmers could do. This can help enterprises deliver useful software more quickly and free up developers to focus on more critical work. But challenges emerge when moving citizen-developer-built applications to production scale. Our podcast team explores the possibilities and problems of democratizing programming.
Languages and Frameworks
From the Languages & Frameworks section of the Radar, some items of interest.
Redux was moved back from Adopt to Trial: "We’ve decided to move Redux back into the Trial ring to show that we no longer consider it the default approach for state management in React applications."
TypeScript is mentioned heavily, along with a lot of Kotlin-related material.
Steady progress has been made since we first wrote about web components in 2014. LitElement, part of the Polymer Project, is a simple library that you can use to create lightweight web components. It’s really just a base class that removes the need for a lot of common boilerplate, making web components much easier to write. We’ve had early success using it on projects and are excited to see the technology maturing.
More and more teams using React are reevaluating their options for state management, something we also mention in our reassessment of Redux. Now, Facebook — the creators of React — have published Recoil, a new framework for managing state, which came out of an internal application that had to deal with large amounts of data. Even though we don't yet have much practical experience with Recoil, we see its potential and promise. The API is simple and easy to learn; it feels like idiomatic React. Unlike other approaches, Recoil provides an efficient and flexible way to share state across an application: it supports dynamically created state via derived data and queries as well as app-wide state observation without impairing code splitting.
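The core ideas here — small units of shared state, derived values that recompute when their sources change, and observation via subscription — can be illustrated with a minimal plain-TypeScript sketch. This is a concept illustration only; the names and classes below are invented for the example and are not Recoil's API or implementation:

```typescript
// Concept sketch of shared/derived observable state (NOT Recoil's API).
type Listener = () => void;

class SharedState<T> {
  private listeners = new Set<Listener>();
  constructor(private value: T) {}
  get(): T {
    return this.value;
  }
  set(next: T): void {
    this.value = next;
    // Notify every subscriber, e.g. components reading this state.
    this.listeners.forEach((l) => l());
  }
  subscribe(l: Listener): () => void {
    this.listeners.add(l);
    return () => this.listeners.delete(l);
  }
}

// A derived value, recomputed whenever its source changes — loosely
// analogous to deriving data from a piece of app-wide state.
function derived<S, T>(source: SharedState<S>, fn: (s: S) => T): SharedState<T> {
  const out = new SharedState(fn(source.get()));
  source.subscribe(() => out.set(fn(source.get())));
  return out;
}

const celsius = new SharedState(20);
const fahrenheit = derived(celsius, (c) => (c * 9) / 5 + 32);

celsius.set(100);
console.log(fahrenheit.get()); // 212
```

Real Recoil adds much more on top of this shape — hooks integration, asynchronous selectors and compatibility with code splitting — but the subscription model is the part that distinguishes it from a single global store.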
SWR is a React Hooks library for fetching remote data. It implements the stale-while-revalidate HTTP caching strategy: SWR first returns data from cache (stale), then sends the fetch request (revalidate) and finally refreshes the values with the up-to-date response. Components receive a stream of data, first stale and then fresh, constantly and automatically. Our developers have had a good experience using SWR, dramatically improving the user experience by always having data on the screen. However, we caution teams to use SWR's caching strategy only when it is appropriate for an application to return stale data. Note that HTTP requires caches to respond to a request with the most up-to-date response held that is appropriate to the request, and a stale response may only be returned in carefully considered circumstances.
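The stale-while-revalidate pattern itself is simple to sketch in plain TypeScript. The helper below is hypothetical and much simpler than the SWR library (no hooks, deduplication or revalidation intervals); it only shows the serve-stale-then-refresh shape:

```typescript
// Sketch of the stale-while-revalidate pattern (not the SWR library's API).
const cache = new Map<string, unknown>();

async function swrFetch<T>(
  key: string,
  fetcher: (key: string) => Promise<T>,
  onData: (data: T, stale: boolean) => void
): Promise<void> {
  // 1. Serve cached (stale) data immediately, so the screen is never empty.
  if (cache.has(key)) {
    onData(cache.get(key) as T, true);
  }
  // 2. Revalidate in the background and push the fresh value.
  const fresh = await fetcher(key);
  cache.set(key, fresh);
  onData(fresh, false);
}

// Demo with a fake fetcher: the consumer sees stale data first, then fresh.
async function demo(): Promise<string[]> {
  const seen: string[] = [];
  cache.set("/api/user", "cached user");
  await swrFetch("/api/user", async () => "fresh user", (data, stale) => {
    seen.push(`${stale ? "stale" : "fresh"}: ${data}`);
  });
  return seen;
}

demo().then((seen) => console.log(seen));
```

In a React component the `onData` callback corresponds to a state update, which is why the screen re-renders from stale to fresh automatically.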
The Tools section.
We’ve long liked the idea of using static site generators to avoid complexity and improve performance, whenever the use case allows it. Although Eleventy has been around for a few years, it’s recently caught our attention as it’s matured and previous favorites such as Gatsby.js displayed some scalability problems. Eleventy is quick to learn and easy to build sites with. We also like the ease with which you can create semantic (and therefore more accessible) markup with its templating and its simple and robust support for pagination.
Web UI testing continues to be an active space. Some of the folks who built Puppeteer have since moved to Microsoft and are now applying their learnings to Playwright, which allows you to write tests for Chromium and Firefox as well as WebKit, all through the same API. Playwright has gained some attention for its support of all the major browser engines, which it currently achieves by including patched versions of Firefox and WebKit. It remains to be seen how quickly other tools can catch up, with more and more tools supporting the Chrome DevTools Protocol as a common API for automating browsers.
pnpm is an up-and-coming package manager for Node.js that we’re looking at closely because of its higher speed and greater efficiency compared to other package managers. Dependencies are saved in a single place on the disk and are linked into the respective node_modules directories. pnpm also supports file-level incremental optimization, provides a solid API foundation to allow extension and customization, and supports a store server mode, which speeds up dependency downloads even more. If your organization has a large number of projects with the same dependencies, you may want to take a closer look at pnpm.
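The disk-space savings come from that single store: identical package content is kept once and linked into each project. A much-simplified Node script can illustrate the idea (this uses plain content hashes and hard links; pnpm's real store layout and linking strategy are more sophisticated):

```typescript
// Simplified illustration of a content-addressable store with links —
// the core idea behind pnpm's disk-space savings (not pnpm's real layout).
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";
import { createHash } from "node:crypto";

const store = fs.mkdtempSync(path.join(os.tmpdir(), "store-"));

// Save content into the store once, keyed by its hash.
function addToStore(content: string): string {
  const hash = createHash("sha256").update(content).digest("hex");
  const storePath = path.join(store, hash);
  if (!fs.existsSync(storePath)) fs.writeFileSync(storePath, content);
  return storePath;
}

// "Install" a dependency by hard-linking the stored copy into a project.
function linkIntoProject(projectDir: string, name: string, content: string): void {
  const nodeModules = path.join(projectDir, "node_modules");
  fs.mkdirSync(nodeModules, { recursive: true });
  fs.linkSync(addToStore(content), path.join(nodeModules, name));
}

// Two projects depend on the same package; the disk holds one copy.
const projA = fs.mkdtempSync(path.join(os.tmpdir(), "proj-a-"));
const projB = fs.mkdtempSync(path.join(os.tmpdir(), "proj-b-"));
linkIntoProject(projA, "left-pad.js", "module.exports = (s) => s;");
linkIntoProject(projB, "left-pad.js", "module.exports = (s) => s;");

console.log(fs.readdirSync(store).length); // 1 — a single copy on disk
```

With hundreds of projects sharing the same dependencies, the difference between one linked copy and hundreds of full copies is substantial, which is exactly the scenario where pnpm shines.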
Zola is a static site generator written in Rust. As such, it comes as a single executable with no dependencies, is very fast and supports all the usual things you’d expect, such as Sass, content in Markdown and hot reloading. We’ve had success building static sites with Zola and appreciate how intuitive it is to use.
The Techniques section:
Use “remote native” processes and approaches (Trial)
As the pandemic stretches on it seems that highly distributed teams will be the “new normal,” at least for the time being. Over the past six months we’ve learnt a lot about effective remote working. On the positive side, good visual work-management and collaboration tools have made it easier than ever to collaborate remotely with colleagues. Developers, for example, can count on Visual Studio Live Share and GitHub Codespaces to facilitate teamwork and increase productivity. The biggest downside to remote work might be burnout: far too many people are scheduled for back-to-back video calls all day long, and this has begun to take its toll. While online visual tools make it easier to collaborate, it’s also possible to build complex giant diagrams that end up being very hard to use, and the security aspects of tool proliferation also need to be carefully managed. Our advice is to remember to take a step back, talk to your teams, evaluate what’s working and what’s not and change processes and tools as needed.
Zero Trust Architecture (Trial)
While the fabric of computing and data continues to shift in enterprises — from monolithic applications to microservices, from centralized data lakes to data mesh, from on-prem hosting to polycloud, with an increasing proliferation of connected devices — the approach to securing enterprise assets for the most part remains unchanged, with heavy reliance on, and trust in, the network perimeter. Organizations continue to invest heavily in securing their assets by hardening the virtual walls of their enterprises, using private links and firewall configurations, and relying on static and cumbersome security processes that no longer serve today's reality. This continuing trend compelled us to highlight zero trust architecture (ZTA) again.
ZTA is a paradigm shift in security architecture and strategy. It’s based on the assumption that a network perimeter no longer represents a secure boundary and that no implicit trust should be granted to users or services based solely on their physical or network location. The number of resources, tools and platforms available to implement aspects of ZTA keeps growing and includes: enforcing policies as code, based on the principles of least privilege and finest-possible granularity, with continuous monitoring and automated mitigation of threats; using service mesh to enforce security controls application-to-service and service-to-service; implementing binary attestation to verify the origin of binaries; and including secure enclaves in addition to traditional encryption to enforce the three pillars of data security: in transit, at rest and in memory. For introductions to the topic, consult the NIST ZTA publication and Google’s white paper on BeyondProd.
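"Policy as code" in a zero trust setting boils down to evaluating every request against explicit, least-privilege rules rather than trusting network location. A minimal sketch — the types, rules and field names below are hypothetical, not any specific ZTA product or policy language:

```typescript
// Minimal policy-as-code sketch for zero trust: every request is checked
// against explicit least-privilege rules; network location grants nothing.
interface AccessRequest {
  subject: string; // verified identity, e.g. from mTLS or a signed token
  action: "read" | "write";
  resource: string;
  deviceTrusted: boolean; // device posture, re-checked continuously
}

interface Rule {
  subject: string;
  action: "read" | "write";
  resourcePrefix: string;
}

// Grants are explicit and as narrow as possible (least privilege).
const rules: Rule[] = [
  { subject: "billing-service", action: "read", resourcePrefix: "invoices/" },
];

function authorize(req: AccessRequest, policy: Rule[]): boolean {
  if (!req.deviceTrusted) return false; // no implicit trust, ever
  return policy.some(
    (r) =>
      r.subject === req.subject &&
      r.action === req.action &&
      req.resource.startsWith(r.resourcePrefix)
  );
}

console.log(
  authorize(
    { subject: "billing-service", action: "read", resource: "invoices/42", deviceTrusted: true },
    rules
  )
); // true

console.log(
  authorize(
    { subject: "billing-service", action: "write", resource: "invoices/42", deviceTrusted: true },
    rules
  )
); // false — write was never granted
```

Production systems typically express such rules in a dedicated policy language evaluated by an engine at every hop, but the shape — deny by default, grant narrowly, evaluate on every request — is the same.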
Bounded low code platforms (Assess)
One of the most nuanced decisions facing companies at the moment is the adoption of low-code or no-code platforms, that is, platforms that solve very specific problems in very limited domains. Many vendors are pushing aggressively into this space. The problems we see with these platforms typically relate to an inability to apply good engineering practices such as versioning; testing, too, is typically very hard. However, we noticed some interesting new entrants to the market — including Amazon Honeycode, which makes it easy to create simple task or event management apps, and Parabola for IFTTT-like cloud workflows — which is why we’re including bounded low-code platforms in this volume. Nevertheless, we remain deeply skeptical about their wider applicability since these tools, like Japanese knotweed, have a knack of escaping their bounds and tangling everything together. That’s why we still strongly advise caution in their adoption.
Decentralized Identity (Assess)
In 2016, Christopher Allen, a key contributor to SSL/TLS, inspired us with an introduction to 10 principles underpinning a new form of digital identity and a path to get there: self-sovereign identity. Self-sovereign identity, also known as decentralized identity, is a “lifetime portable identity for any person, organization, or thing that does not depend on any centralized authority and can never be taken away,” according to the Trust over IP standard. Adopting and implementing decentralized identity is gaining momentum and becoming attainable. We see it adopted in privacy-respecting customer health applications, government healthcare infrastructure and corporate legal identity. If you want to get started with decentralized identity quickly, you can assess Sovrin Network, Hyperledger Aries and Indy OSS, as well as the decentralized identifiers and verifiable credentials standards. We’re watching this space closely as we help our clients with their strategic positioning in the new era of digital trust.
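To make the decentralized identifiers standard concrete: a DID resolves to a DID document that lists the keys a subject controls, with no central registry issuing the identifier. The object below follows the shape of the W3C DID Core data model; the identifier and key value are illustrative placeholders, not a real registered DID:

```typescript
// Illustrative DID document shape per the W3C DID Core data model.
// The identifier and key material here are placeholder example values.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123456789abcdefghi",
  verificationMethod: [
    {
      id: "did:example:123456789abcdefghi#key-1",
      type: "Ed25519VerificationKey2018",
      controller: "did:example:123456789abcdefghi",
      publicKeyBase58: "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
    },
  ],
  // States which key may be used to authenticate as this DID.
  authentication: ["did:example:123456789abcdefghi#key-1"],
};

console.log(didDocument.id);
```

Anyone who resolves the DID gets this document and can verify signatures against the listed keys — no centralized authority needs to vouch for the binding.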
Secure enclaves (Assess)
Secure enclaves, also known as trusted execution environments (TEEs), refer to a technique that isolates an environment — processor, memory and storage — at a higher level of security, providing only a limited exchange of information with the surrounding untrusted execution context. For example, a secure enclave at the hardware and OS level can create and store private keys and perform operations with them, such as encrypting data or verifying signatures, without the private keys ever leaving the enclave or being loaded into untrusted application memory. The secure enclave provides a limited set of instructions to perform trusted operations, isolated from an untrusted application context.
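The private-key example can be sketched in ordinary code. In this sketch a class boundary merely stands in for the hardware isolation a real TEE provides — the point is the shape of the interface: the key is created inside, only signatures come out:

```typescript
// Sketch of the enclave interface idea: private keys live behind a narrow
// boundary and never enter the calling application's memory. A class here
// stands in for real hardware/OS isolation, which this code does NOT provide.
import { generateKeyPairSync, createSign, createVerify, KeyObject } from "node:crypto";

class EnclaveSketch {
  private privateKey: KeyObject; // never exposed outside this boundary
  readonly publicKey: string;

  constructor() {
    // The key pair is generated "inside"; only the public half is exported.
    const { privateKey, publicKey } = generateKeyPairSync("rsa", {
      modulusLength: 2048,
    });
    this.privateKey = privateKey;
    this.publicKey = publicKey.export({ type: "spki", format: "pem" }).toString();
  }

  // The only trusted operation offered: sign data, return the signature.
  sign(data: string): string {
    return createSign("sha256").update(data).sign(this.privateKey, "base64");
  }
}

// The untrusted application can request signatures and verify them,
// but it has no way to read the private key.
const enclave = new EnclaveSketch();
const sig = enclave.sign("hello");
const ok = createVerify("sha256").update("hello").verify(enclave.publicKey, sig, "base64");
console.log(ok); // true
```

A real enclave enforces this boundary in hardware, so even a compromised OS or a memory dump of the application cannot recover the key — that guarantee is what the class boundary above cannot give you.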
The technique has long been supported by many hardware and OS providers (including Apple), and developers have used it in IoT and edge applications. Only recently, however, has it gained attention in enterprise and cloud-based applications. Cloud providers have started to introduce confidential computing features such as hardware-based secure enclaves: Azure confidential computing infrastructure promises TEE-enabled VMs and access through the Open Enclave SDK open-source library to perform trusted operations. Similarly, GCP Confidential VMs and Compute Engine, still in beta, allow using VMs with data encryption in memory, and AWS Nitro Enclaves is following them with its upcoming preview release. With the introduction of cloud-based secure enclaves and confidential computing, we can add a third pillar to data protection: at rest, in transit and now in memory.
Even though we’re still in the very early days of secure enclaves for enterprise, we encourage you to consider this technique, while staying informed about known vulnerabilities that can compromise the secure enclaves of the underlying hardware providers.
Verifiable Credentials (Assess)
Credentials are everywhere in our lives and include passports, driver’s licenses and academic certificates. However, most digital credentials today are simple data records from information systems that are easy to modify and forge and that often expose unnecessary information. In recent years, we’ve seen verifiable credentials continue to mature as a solution to this problem. The W3C standard defines them in a way that is cryptographically secure, privacy respecting and machine verifiable. The model puts credential holders at the center, much like our experience with physical credentials: users can keep their verifiable credentials in their own digital wallets and show them to anyone at any time without the permission of the credentials’ issuer. This decentralized approach also enables users to better manage their own information and selectively disclose certain information, and it greatly improves data privacy protection. For example, powered by zero-knowledge proof technology, you can construct a verifiable credential that proves you are an adult without revealing your birthday. The community has developed many use cases around verifiable credentials. We’ve implemented our own COVID health certification with reference to the COVID-19 Credentials Initiative (CCI). Although verifiable credentials don’t rely on blockchain technology or decentralized identity, the technique often works with decentralized identifiers (DIDs) in practice and uses blockchain as a verifiable data registry. Many decentralized identity frameworks also embed verifiable credentials.
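The issue/hold/verify flow can be sketched in a few lines. This is a deliberately simplified illustration — real W3C Verifiable Credentials add JSON-LD contexts, proof formats and selective disclosure, and the issuer DID and claim fields below are invented for the example — but it shows why a verifier never needs to contact the issuer:

```typescript
// Simplified verifiable-credential flow: an issuer signs a claim, the
// holder keeps it in a wallet, and any verifier checks the signature
// offline. NOT the full W3C data model — just the sign/present/verify shape.
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

interface Credential {
  issuer: string;
  subject: string;
  claim: Record<string, unknown>;
  signature: string; // over the JSON-serialized payload
}

// The issuer's key pair; the public key is published (e.g. via a DID).
const { privateKey, publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

function issue(subject: string, claim: Record<string, unknown>): Credential {
  const payload = JSON.stringify({ issuer: "did:example:university", subject, claim });
  const signature = createSign("sha256").update(payload).sign(privateKey, "base64");
  return { issuer: "did:example:university", subject, claim, signature };
}

function verify(c: Credential): boolean {
  const payload = JSON.stringify({ issuer: c.issuer, subject: c.subject, claim: c.claim });
  return createVerify("sha256").update(payload).verify(publicKey, c.signature, "base64");
}

// The holder presents this to any verifier, any time, without asking
// the issuer for permission.
const degree = issue("did:example:alice", { degree: "BSc" });
console.log(verify(degree)); // true

// A tampered claim no longer verifies.
console.log(verify({ ...degree, claim: { degree: "PhD" } })); // false
```

Zero-knowledge extensions go further: instead of revealing the claim itself, the holder proves a predicate over it (such as "age over 18"), which plain signatures like the one above cannot do.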