Any device, any OS.
Windows, macOS, Linux, Android, iOS, plus embedded screens through Companion Mode when software access is not possible.
5+ OS targets

Install AgentOS on real desktops, embedded HMIs, and physical devices. Configure prompts, connect tools, route models, and ship agents to production.
## Task: Cash transaction with 7% tax

Verify checkout, capture evidence, and report the result.
Read display context and select the active screen before acting.
Recognize Checkview, skip the Tableview branch, and proceed to the product action.
Use the demo project structure to organize prompts, tools, procedures, plans, and tests that execute through AgentOS.
```shell
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.template .env  # Add AskUI Hub credentials to .env
```
Three things every team configures. Three views in the control plane.
Install AgentOS on macOS, Windows, or Linux targets. For embedded screens and devices you cannot instrument, use Companion Mode with HDMI capture.
Each agent gets a system prompt, a model, and an allowlist of tools. Swap models without touching the prompt. Version everything.
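As a minimal sketch of the separation described above — this is not the AgentOS API, and all names here are hypothetical — an agent definition can keep the prompt, the model, and the tool allowlist as independent fields, so a model swap never touches the prompt:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch, not the AgentOS API: prompt, model, and tool
# allowlist are separate, versioned fields.
@dataclass(frozen=True)
class AgentConfig:
    system_prompt: str
    model: str
    allowed_tools: tuple[str, ...]
    version: int = 1

validator_v1 = AgentConfig(
    system_prompt="Operate the head unit as a QA validator.",
    model="claude-sonnet",
    allowed_tools=("screenshot", "locate", "click", "scroll"),
)

# Swap the model without touching the prompt; bump the version.
validator_v2 = replace(validator_v1, model="gemini-pro", version=2)
```

Because the config is immutable, every model swap produces a new versioned object while the prompt and tool allowlist carry over unchanged.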
Operate the head unit as a QA validator. After each step, capture a screenshot. If a label is missing, halt and report.
claude-sonnet · cache on

Trigger from the app, CI, a schedule, or an integration. Every action is traced, every screenshot is stored, every cache hit is visible.
```
// run: head-unit-validator @ bench-04
[00.0s] screenshot -> 1080x1920
[00.3s] locate "settings gear" [cache hit]
[00.4s] click -> (912, 64)
[01.1s] screenshot -> panel opened
[01.4s] locate "Language" [cache hit]
[01.6s] click -> (240, 480)
[02.2s] scroll -> "Deutsch" into view
[02.5s] click -> tap dispatched
[03.1s] verify label reads "Einstellungen"
pass - 3.1s - 2 LLM calls - 4 cache hits
```
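The validator policy in the prompt above — capture a screenshot after each step, halt and report if an expected label is missing — can be sketched as a plain loop. This is an illustrative stand-in, not AgentOS code: `capture_screenshot` is a stub that simulates what the real screen tools would return.

```python
# Hypothetical sketch of the QA-validator policy: screenshot after each
# step, check the expected label, halt with evidence on a mismatch.
def capture_screenshot(step: str) -> dict:
    # Stub standing in for the real capture tool.
    labels = {"open_settings": "Einstellungen", "select_language": "Deutsch"}
    return {"step": step, "label": labels.get(step)}

def run_validation(steps: list[str], expected: dict[str, str]) -> dict:
    evidence = []
    for step in steps:
        shot = capture_screenshot(step)
        evidence.append(shot)  # every screenshot is stored as evidence
        if shot["label"] != expected[step]:
            # Label missing or wrong: halt and report, per the prompt.
            return {"status": "halted", "failed_step": step, "evidence": evidence}
    return {"status": "pass", "evidence": evidence}

result = run_validation(
    ["open_settings", "select_language"],
    {"open_settings": "Einstellungen", "select_language": "Deutsch"},
)
```

The run trace above is the production analogue of this loop: each timestamped line corresponds to one step plus its captured evidence.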
Engineering leaders choose AskUI when they need agents to run reliably on real infrastructure.
Windows, macOS, Linux, Android, iOS, plus embedded screens through Companion Mode when software access is not possible.
5+ OS targets

Route each agent to Claude, Gemini, OpenAI, AskUI models, or your own provider through your credentials.
BYOM ready

Skip redundant model calls on stable workflows and keep cost visible at every step of the run.
70-90% lower LLM spend

On-prem and air-gapped deployment, ISO 27001 certification, GDPR posture, audit logs, and no training on customer data.
ISO 27001

AskUI runs anywhere there is a screen, from a browser tab to a bench instrument.
Native and legacy software with no APIs, DOM, or selectors.
windows - macos - linux

Native iOS and Android apps on real devices or simulators.
android - ios

Browser flows using selectors and DOM tools, with screen-based execution when flows leave the browser.
chromium - firefox - webkit

Touchscreens and device panels driven through capture and input.
qt - qml - custom stacks

Head units and in-vehicle displays across automotive stacks.
android auto - qnx - linux

Locked-down terminals, POS systems, and ticket machines.
windows iot - linux

Bench equipment and instruments beside the device under test.
vxi - gpib - usb

Headless CI runs with logon and privilege flows handled.
vmware - hyper-v - cloud vm

Classical automation works for happy paths and breaks on drift, popups, and edge cases. AskUI agents reason about the screen and the goal.
Linear in headcount. Reliable for what people actually do, but every new workflow needs more people and time.
Fast for the happy path. Then it breaks on layout changes, drifted environments, and unnamed edge cases.
Reason about the screen and the goal, then handle conditions classical scripts cannot absorb.
Install in minutes. The Free plan includes a 14-day trial with 5,000 inference credits and a non-commercial AgentOS license.

Get started

Starter, Pro, and Enterprise add a commercial AgentOS license, support, and token-based inference usage through the Hub.

View pricing