No classic frameworks,
only the new fundamentals.

We are NOT another framework just using OCR. We are NOT another framework using OpenCV. We are NOT another framework using object repositories.

What we are is a new way to do UI automation without the hassle of dealing with object attributes or selectors. Just write human-readable descriptions – we take care of the rest.

How we locate elements

askui James enables UI automation for any current or future technology – with simple instructions in an easily readable fluent API. This is made possible by our AI, James, which addresses elements completely independently of underlying selectors.
At runtime, screenshots of the screen are analyzed with deep learning models that dynamically detect and track elements. We never look into the DOM (really). Automate anything visible on your screen.
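Conceptually, the pipeline can be pictured like this minimal sketch: a detector has already turned a screenshot into labeled bounding boxes, and a description is matched against them. All types and the matching logic here are hypothetical and purely illustrative – the real askui models do this with deep learning, not string comparison.

```typescript
// Hypothetical types for elements a visual detector predicts from pixels.
interface DetectedElement {
  label: string; // e.g. "button", "textfield"
  text: string;  // text recognized inside the element
  bbox: { x: number; y: number; width: number; height: number };
}

// Pick the detected element matching a human-readable description.
// (Illustrative stand-in for the actual model-based matching.)
function locate(
  elements: DetectedElement[],
  label: string,
  text: string
): DetectedElement | undefined {
  return elements.find((e) => e.label === label && e.text === text);
}

// A "screen" as the detector might see it:
const screen: DetectedElement[] = [
  { label: "textfield", text: "Search", bbox: { x: 10, y: 10, width: 200, height: 30 } },
  { label: "button", text: "Login", bbox: { x: 220, y: 10, width: 80, height: 30 } },
];

const target = locate(screen, "button", "Login");
```

Because matching happens on what is visible rather than on markup, the same description keeps working when the underlying technology changes.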

Do you want to automate Slack, Spotify, and Wikipedia (for whatever reason) in one go? We got you covered!

[Image: askui James robot]

How we understand your instructions

The intelligent fluent API enables the creation of test cases in just a few steps. It guides the user through the test steps by suggesting actions to be performed. In addition, relations between different UI elements can be defined. This way, errors can be detected that remain hidden with classic selector-based approaches.
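The idea of a fluent API that assembles a human-readable instruction step by step, including a relation to another element, can be sketched as follows. The method names here are illustrative and not the exact askui API.

```typescript
// Hypothetical sketch of a fluent instruction builder: each call appends
// one fragment of a human-readable instruction.
class Instruction {
  private parts: string[] = [];

  click(): this { this.parts.push("click"); return this; }
  button(): this { this.parts.push("button"); return this; }
  withText(text: string): this { this.parts.push(`with text "${text}"`); return this; }
  rightOf(desc: string): this { this.parts.push(`right of ${desc}`); return this; }

  build(): string { return this.parts.join(" "); }
}

const step = new Instruction()
  .click()
  .button()
  .withText("Submit")
  .rightOf('textfield "Email"')
  .build();
// step reads like a sentence: click button with text "Submit" right of textfield "Email"
```

Because each method returns `this`, an editor can suggest the next valid step at every point in the chain – which is how the API guides the user through a test case.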

By controlling the operating system, we are able to perform real clicks and keystrokes rather than going through a technology-specific automation interface. You can find an overview of all possible actions in our documentation.

[Image: askui test description]

Feature Overview

Our computer vision algorithms verify that elements are displayed correctly – whether in your web shop or your image gallery.

Automate Anything

askui James enables the automation of all possible UI commands by simulating real interactions. Drag & drop, swipe gestures, and even color verifications are no longer a problem.
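A composite gesture like drag & drop ultimately decomposes into a sequence of low-level OS events. This sketch shows that decomposition with hypothetical event types; it is not the askui implementation.

```typescript
// Hypothetical low-level mouse events a visual automation driver emits.
type MouseAction =
  | { kind: "move"; x: number; y: number }
  | { kind: "press" }
  | { kind: "release" };

// Decompose a drag & drop into move/press/move/release.
function dragAndDrop(
  from: { x: number; y: number },
  to: { x: number; y: number }
): MouseAction[] {
  return [
    { kind: "move", ...from },
    { kind: "press" },
    { kind: "move", ...to },
    { kind: "release" },
  ];
}

const events = dragAndDrop({ x: 0, y: 0 }, { x: 100, y: 50 });
```

Swipes follow the same pattern with more intermediate move events between press and release.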

No More Selectors

Using modern Deep Learning technologies, we identify UI elements based solely on visual features. Relational descriptions are enough for us as input.
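How a relational description such as "the button left of the text 'Price'" can be resolved purely from bounding boxes is sketched below. The geometry rule is a simplified assumption for illustration, not the actual askui logic.

```typescript
// Minimal bounding box as a visual detector would produce it.
interface Box { x: number; y: number; width: number; height: number }

// Hypothetical rule: the candidate ends before the anchor starts
// and sits at roughly the same vertical position.
function isLeftOf(candidate: Box, anchor: Box): boolean {
  const horizontally = candidate.x + candidate.width <= anchor.x;
  const sameRow = Math.abs(candidate.y - anchor.y) < anchor.height;
  return horizontally && sameRow;
}

const price: Box = { x: 300, y: 100, width: 60, height: 20 };
const buyButton: Box = { x: 100, y: 105, width: 80, height: 20 };

const match = isLeftOf(buyButton, price); // buyButton is left of "Price"
```

Because only positions on screen are compared, no selector or DOM attribute is ever needed to express the relation.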

Simulate Human Actions

Our automation does not access underlying code selectors or the DOM. It performs real mouse movements and element clicks – just like a human would.
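One ingredient of human-like interaction is that the cursor travels to its target instead of teleporting. A simple way to sketch this (linear interpolation here; a real driver may use curved, variable-speed paths) is to generate intermediate points:

```typescript
// Hypothetical sketch: generate intermediate cursor positions so a
// mouse movement resembles a real hand motion rather than a jump.
function mousePath(
  from: [number, number],
  to: [number, number],
  steps: number
): [number, number][] {
  const points: [number, number][] = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    points.push([
      from[0] + (to[0] - from[0]) * t,
      from[1] + (to[1] - from[1]) * t,
    ]);
  }
  return points;
}

const path = mousePath([0, 0], [10, 20], 4);
```

Each point would then be emitted as a real OS-level cursor move, so the application under test sees the same input a human would produce.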

Runs On All Technologies

By automating solely on visual properties, we enable automation on all UI technologies – whether Desktop, Web, Native Mobile, … anything works.