The history of UI testing

User Interface Testing is one of the most complex fields of software testing. In this article we highlight the milestones of user interfaces and software testing to show how UI testing evolved over time.

User interfaces (UIs) have become part of our daily lives. With their rise, they had to be tested just like any other product before hitting the market. Compared to other testing disciplines, user interface testing is one of the youngest fields, and huge progress is still being made every year. This article is dedicated to everyone who is fascinated by or works in the broad field of user interface testing and design.

The history of UI testing is strongly linked to the history of user interfaces and the history of software testing. So let’s have a look at these first.

1. History of user interfaces

Where do we find the forerunner of user interfaces? What was the first user interface? These questions are difficult to answer, and different sources name different starting points. Some reach back all the way to the first industrial revolution, others start in the 1960s. Today there are many different types of user interfaces; Wikipedia alone lists 29.

We are going to stick to the interfaces that mattered the most for today’s status quo, namely batch interfaces, command-line interfaces and graphical user interfaces.

Batch interfaces

The batch era of interfaces lasted from the post-war years until the late 1960s. Computing power was expensive and scarce, and interfaces were rudimentary.

Punched paper cards were used as input for computers

Batch interfaces are non-interactive user interfaces: the user specifies all the details of a batch job in advance and receives the output once all the processing is done. Just to give you an impression of how difficult and tedious it was to generate input, punched cards and paper tape were used to feed programs into the machine. And since there was no operator’s console as we know it today, interacting with the machine in real time was not possible. Towards the end of the batch era, so-called compile-and-go systems appeared: language processors combined with monitor programs that supervised the jobs running on the machine. The batch era was crucial for the development towards more interactive interfaces. And guess what: the first monitor programs were mainly used to improve error-finding – one of the earliest precursors of today’s UI testing.

Command-line interfaces

Command-line interfaces (CLIs) evolved from batch monitors that were connected to the system console. They prompt the user to type a command on the keyboard, and the response is readable text on the computer monitor. This is a lot closer to what computer scientists and engineers still do today. CLIs can be seen as the most basic and enduringly popular type of interface, as they are still known and learned by more technically advanced users.

Graphical user interfaces

In the late 1960s, computer and internet pioneer Douglas Engelbart demonstrated the NLS (oN-Line System) to the world. What made NLS so unique and futuristic? The system used a mouse, pointers, hypertext and multiple windows. To put this huge leap into context: it was introduced in the very same decade in which punched cards from the batch era were still in use.

Generally speaking, graphical user interfaces (GUIs) accept input via devices such as the keyboard and mouse and provide graphical output on the computer monitor. The two most popular principles in GUI design today are object-oriented user interfaces and application-oriented user interfaces.

Modern user interfaces come in all sizes and formats.

All in all, we’ve come a long way to reach the variety of interfaces we have today. Most devices we interact with nowadays have some sort of interface. All of them need to be tested, which is why the history of user interfaces is so closely linked to the history of software testing.

2. History of software testing

The same phenomenon that applied to user interfaces applies to software testing: there are dozens of articles and blogs that try to identify the first “software tests”. For the purpose of this article, we are going to stick to Hetzel and Gelperin, who divided testing up to the 2000s into five significant eras, described as follows.

  1. Debugging oriented era

During this phase in the early 1950s, there was hardly any distinction between testing and debugging; they were essentially the same. A developer would write his code, try it and, in case of an error, analyse and debug the problem. There was no system or concept behind this process. A distinction between debugging and program testing did not even emerge until the late 1950s.

  2. Demonstration oriented era

In the late 1950s, debugging and testing began to separate: from now on, debugging meant eliminating errors, while testing meant finding (possible) errors. The major goal of this era was to make sure that software requirements were satisfied. Many types of testing had not been discovered or even thought about yet. Negative testing, for example, the attempt to break an application, was not practiced. Considering how expensive, scarce and time-consuming computing power was back then, it’s understandable that nobody wanted to mess around too much with the computers.

  3. Destruction oriented era

Popular computer scientist Glenford Myers changed this in the late 1970s and early 1980s and initiated the destruction oriented era. In this era, breaking the code and finding errors in it were the main goals.

Myers popularized concepts such as “a successful test case is one that detects an as-yet-undiscovered error”. Debugging and verification were separated even further, to the delight of developers. The first dedicated testers were hired, given a piece of software and asked to break it wherever they could – from simple errors like typing letters into (supposedly) numeric text fields to the complete collapse of the software.

The problem with this was that software would practically never get released, because there was always another bug to be found. Or, if you fixed one bug, another bug appeared somewhere else. Changes had to be made.

  4. Evaluation oriented era

In the mid 1980s, software testing took a new direction towards the evaluation and measurement of software quality. At some point it had to be accepted that all errors could never be found, so the purpose of testing had to be redefined. From now on, testing was seen as a way to build confidence in how well a piece of software was working. Testing was done until an acceptable point was reached – a point where major bugs and crucial problems were fixed.

  5. Prevention oriented era

The last era before specialized types of testing such as user interface testing emerged was the prevention oriented era. In the late 1980s and 1990s, computers “came home”. Literally. Computers became affordable for regular customers, and thus the requirements for software testing changed once more. Tests now focused on demonstrating that software met its specifications and on preventing defects in the first place. Code was divided into testable and non-testable code, and new techniques for software testing came up. The most popular of these in the 1990s was exploratory testing, which took the sheer “will of destruction” from the 70s and combined it with a deeper understanding of the system to find more complex bugs.

3. History of user interface testing

Finally, let’s narrow all this information down and have a look at how user interface testing emerged from the history of user interfaces and software testing.

The homecoming of the computer

First of all, let’s get back to the scenario we just opened: the “homecoming” of the computer. What did that mean for software testing and especially for user interface testing? Compared to today, testers will most likely smile when they think about how much easier the environment part of UI testing was. There were very few different computer models on the market, which is why Windows 95 is iconic to this date. Basically every family that had a computer in the late 1990s ran Windows 95, with a user interface everyone can still remember vividly. With the shift of the main customer base from big companies and governmental organizations toward private customers, UI testing really took off, as the requirements became so much more diverse.

Today, there are thousands of different devices and user interfaces. Responsive web design was something that couldn’t even be thought of, back when everyone had the same computer and the same monitor. 

The freeware scripting language AutoIt for Microsoft Windows was one of the first tools that made it possible to do some test automation on your own computer at home. It was primarily intended for creating automation scripts and was very popular among computer enthusiasts in the early 2000s.

Agile development

The early 2000s led to a whole new awareness and appreciation of software quality. The agile manifesto can be seen as the driving force behind this change. Today, “agile” is an umbrella term for methods, concepts and techniques that build on the agile manifesto. As a reminder, its four key values are:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Now why is this so crucial for UI testing? In his book Succeeding with Agile, Mike Cohn came up with the concept of the test pyramid. 

The test pyramid

The test pyramid is a simple metaphor used to visualize the different layers of testing and how much testing should happen on each layer: many fast unit tests at the base, fewer integration tests in the middle and only a handful of slow end-to-end UI tests at the top. With the rise of single page application frameworks such as React and Angular, many UI components could at least be unit tested within those frameworks. Looking at today’s huge variety of display sizes and responsive web designs, the test pyramid may be as good a metaphor as ever: covering a modern UI with unit tests alone is basically impossible, but the approach was popular and cleared the way for the first automated tests.
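
To make the pyramid’s base layer more concrete, here is a minimal sketch of a unit test in Python. The function under test, validate_username, and its validation rule are invented for illustration and do not come from any specific framework.

    # A minimal sketch of a unit test, the fast and cheap base layer of the test pyramid.
    # The function under test (validate_username) is a made-up example.
    import unittest

    def validate_username(name: str) -> bool:
        """Accept 3-20 character alphanumeric usernames (an illustrative rule)."""
        return name.isalnum() and 3 <= len(name) <= 20

    class ValidateUsernameTest(unittest.TestCase):
        def test_accepts_simple_name(self):
            self.assertTrue(validate_username("alice42"))

        def test_rejects_too_short_name(self):
            self.assertFalse(validate_username("ab"))

    if __name__ == "__main__":
        unittest.main()

Hundreds of such tests run in seconds, which is exactly why the pyramid places them at its broad base while the slow end-to-end UI tests stay at its narrow top.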

Automated testing

The first framework that really hit big when it comes to UI testing was Selenium. First published in 2004, Selenium offers a portable framework for testing web applications. It provides a record-and-playback tool (Selenium IDE) for authoring functional tests without the need to learn a test scripting language, while its scripting APIs let testers drive browsers from common programming languages.
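
To give a feel for what script-based UI automation with Selenium looks like, here is a minimal sketch using the Python WebDriver bindings. The URL, the element locators and the expected page title are illustrative assumptions, not taken from any real application.

    # A minimal sketch of an automated UI test with Selenium WebDriver in Python.
    # The URL, locators and expected title are assumptions made for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # start a browser session
    try:
        driver.get("https://example.com/login")  # open the page under test
        driver.find_element(By.NAME, "username").send_keys("demo_user")
        driver.find_element(By.NAME, "password").send_keys("demo_password")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        # a simple functional check: did the login lead to the expected page?
        assert "Dashboard" in driver.title
    finally:
        driver.quit()  # always close the browser, even if the test fails

Note how tightly such a script is coupled to the page’s implementation: rename the username field and the locator breaks, which is exactly the fragility discussed below.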

The new awareness of software quality coming from agile development led to a more in-depth understanding of and approach to testing. User interface testing profited from this trend and got more attention after being overlooked for a long time because of how slow and expensive it is for test departments. To this day, most smaller companies still test their user interfaces manually, which shows how difficult the switch to UI automation is considered to be.

While automated testing has become the norm for every other type of software testing, automated UI testing can be considered the showcase discipline of test automation. Most tools, however, break down under today’s requirement of covering every single display size and format. Only now, two decades later, are the first completely visual UI automation tools emerging to solve this problem.

Visual test automation in the 2020s

One of the most promising ways to solve the struggles of UI testing is a human-centric testing approach. This approach has profited the most from scientific advances in computer vision – it enables the humanisation of UI testing.

Many robots and automations are supposed to work in as human-like a way as possible. But how do you train an AI to detect UI elements the way humans do? You teach it. By giving an AI as much information about UI elements as possible, it will eventually learn to do the job.

Why is this such a huge leap for test automation and why will it shape the future of UI testing? 

As we mentioned before, most UI testing tools rely solely on code, which is why they struggle with varying display sizes, formats and even the tiniest UI changes. An AI that detects elements visually is independent of the test environment and context, just like a manual tester. The technique is independent of the concrete visual design of a particular application: for example, almost every login page can be tested with the exact same script and the same test step descriptions.
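
As a purely illustrative sketch of what such semantic test steps could look like, here is a small Python stub. The VisualUI class is invented for this article and does not refer to any real library; a real vision-based tool would resolve the human-readable element descriptions in a screenshot instead of printing them.

    # A hypothetical sketch of semantic, vision-based test steps. VisualUI is a
    # stand-in stub: a real tool would locate the described elements visually.
    class VisualUI:
        def type_into(self, element_description: str, text: str) -> None:
            print(f"type '{text}' into the {element_description}")

        def click(self, element_description: str) -> None:
            print(f"click the {element_description}")

        def expect_visible(self, element_description: str) -> None:
            print(f"check that the {element_description} is visible")

    def login_test(ui: VisualUI) -> None:
        # The same human-readable steps could run against almost any login page,
        # regardless of its layout, framework or display size.
        ui.type_into("username field", "demo_user")
        ui.type_into("password field", "demo_password")
        ui.click("login button")
        ui.expect_visible("welcome message")

    if __name__ == "__main__":
        login_test(VisualUI())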

The possibility to teach an AI the same semantics that manual testers use will shape the future of UI testing.

With this peek into the future, our article about the history of UI testing comes to an end. Let us know which milestones you think should have been mentioned, and why.
