The natural language approach is designed for teams that already have existing test structures, whether written manually or generated as part of their PRs by their own coding agents.

How it works

MobileBoost does not depend on element IDs or accessibility identifiers to interact with your app. The test agent follows natural-language instructions, making it possible to go directly from feature specifications and acceptance criteria to executable test definitions. This means you can:
  • Turn acceptance criteria into tests: write test steps that mirror how a user would describe the flow
  • Test across any mobile component: native views, web views, third-party SDKs, and graphical elements all work without special selectors
  • Build robust tests that survive UI changes: because tests describe intent rather than implementation details, they don’t break when element IDs or layouts change
Alternative: Instead of writing test definitions manually, install the MobileBoost SDET Agent GitHub App. It analyzes the code touched in your PR and generates test definitions automatically.

Format requirements

Tests must be structured as a sequence of steps to execute successfully. Each step describes one action and its expected outcome. For example:
```markdown
## Test Case: Login via Email and Password

### Step 1
Instruction: Tap on the email field
Command: tapOn.id: "login-email-input"
Time critical: No

### Step 2
Instruction: Type the email address
Command: inputText: "test@test.com"
Time critical: No
```

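By way of illustration, a later step in the same test case might pair an action with an explicit outcome to verify. The wording and the element identifier below are hypothetical; only the field layout follows the example above:

```markdown
### Step 3
Instruction: Tap the "Log in" button and verify that the home screen appears
Command: tapOn.id: "login-submit-button"
Time critical: No
```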
The context you provide per step is critical. Vague or incomplete instructions lead to failed executions and false positive reports. Each step must include enough detail for the test agent to unambiguously identify the target element and verify the outcome.
To achieve reliable results, we recommend one of two approaches:
  • Use the agentic guideline below to ensure your agents generate properly structured test definitions with all required metadata fields
  • Use Code generation instead, which works with less detailed instructions because MobileBoost can infer context directly from your application code
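If your test definitions are generated by agents, a lightweight lint step can catch structural gaps before execution. The sketch below is not part of MobileBoost; it is a hypothetical sanity check based only on the field layout shown above (a `## Test Case:` title, `### Step N` sections, and an `Instruction:` line per step):

```python
import re

def validate_test_definition(md: str) -> list[str]:
    """Return a list of structural problems found in a test definition.

    Checks only the conventions illustrated in the example above:
    a title line, at least one step section, and an instruction per step.
    """
    problems = []
    if not re.search(r"^## Test Case:", md, re.M):
        problems.append("missing '## Test Case:' title")
    # Everything after each '### Step N' header belongs to that step.
    steps = re.split(r"^### Step \d+\s*$", md, flags=re.M)[1:]
    if not steps:
        problems.append("no '### Step N' sections found")
    for i, step in enumerate(steps, 1):
        if "Instruction:" not in step:
            problems.append(f"step {i}: missing 'Instruction:' line")
    return problems
```

Running this over a generated file before committing it gives fast feedback without invoking the test agent at all.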

Agentic guideline

If you use coding agents (Claude Code, Cursor, Codex, or your own) to generate test definitions, add the MobileBoost test specification to your agent’s rules. This ensures the generated output follows the format that gives the test agent the best chances of success.
1. Add the guideline link to your agent rules

Reference the guideline URL directly in your AGENTS.md, .cursorrules, or system prompt so your agent always fetches the latest version:
https://www.mobileboost.io/mobileboost-test-guideline.md
Linking to the URL rather than downloading a copy ensures your agent always uses the most up-to-date specification.
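As one way to do this, an `AGENTS.md` entry could look like the following. The section name and wording are illustrative, not a required format; only the URL comes from this page:

```markdown
## MobileBoost test definitions

Before generating test definitions, fetch and follow the specification at
https://www.mobileboost.io/mobileboost-test-guideline.md
```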
2. Instruct your agent to follow the guideline

Tell your agent to read and follow the specification when generating test definitions. For example, add a rule like: “Follow the MobileBoost test guideline at the URL above when generating test definitions.”