AI-Assisted Trade Navigation (Early 2022)

Implemented months before generative AI became mainstream, this feature used GPT-3 completions (April 2022) to turn free-text analyst intent into precise data-navigation parameters.

Context & Timeline

Q1–Q2 2022: While most products still relied on rigid filter panels, we explored whether large language model (LLM) completions could safely generate structured query parameters for trade analytics navigation. An implementation using the GPT-3 completion API was approved for use (April 2022) under a defined scope and deployed publicly for real users shortly thereafter. The assistant augmented our existing NoCOINer analytics application and was delivered on top of our own reusable full-stack development platform (ingestion, normalized store, modular UI scaffolding).

Problem

Analysts navigating large volumes of normalized exchange trade data needed to pivot quickly across markets, date ranges, instruments and aggregation modes. Traditional multi-step filter forms increased cognitive load and slowed comparative exploration, even though the underlying domain (symbols, intervals, views) was well understood and finite.

Solution Overview

We introduced a small natural-language prompt box: analysts typed intent (e.g. “btc-usdt last 24h trades then show top traders by realized P&L”). A lightweight interpretation layer called the LLM with a constrained prompt template. The returned completion was parsed into a structured parameter object (market/exchange, symbol pair, interval or time window, view target: trades | traders | positions | P&L, optional sort/focus). After validation it drove both data queries and cross-page navigation (e.g. from aggregated trades view to traders ranking) without re-entering filters. The assistant performed no free-form analysis; it simply translated user phrasing into safe UI and query parameters within this narrow domain.
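The completion-to-parameters step can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the vocabulary sets, the key=value hint format, and the function name are all assumptions for the sake of the example (the real whitelists were derived from the normalized trade store).

```python
import re
from typing import Optional

# Hypothetical vocabulary; the real whitelists came from the trade store.
SYMBOLS = {"BTC-USDT", "ETH-USDT"}
VIEWS = {"trades", "traders", "positions", "pnl"}

def parse_completion(text: str) -> Optional[dict]:
    """Map a compact key=value hint returned by the model to a validated
    parameter object; None means fall back to explicit manual filters."""
    params = dict(re.findall(r"(\w+)=([\w.\-]+)", text))
    symbol = params.get("symbol", "").upper()
    view = params.get("view", "").lower()
    if symbol not in SYMBOLS or view not in VIEWS:
        return None  # out-of-scope token: no query is executed
    return {"symbol": symbol, "view": view,
            "window": params.get("window", "24h")}
```

Because the same validated object drove both the data query and the routing, one confirmed parse replaced an entire multi-step filter sequence.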

Key Design Principles

  • Assist, don’t replace: Users could always fall back to explicit filters; AI suggestions accelerated, not obscured, control. The assistant is an input convenience layer, not a decision-maker.
  • Deterministic guardrails: The completion text was post-processed via regex + whitelist dictionaries; out-of-scope tokens triggered a safe fallback (no query executed until confirmed).
  • Transparent output: Parsed parameters were shown inline for confirmation before applying.
  • No sensitive data exposure: Only high-level navigation intent was sent; never proprietary or personal data.
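The deterministic-guardrail principle above can be sketched as a per-field whitelist check. The field names and allowed values here are illustrative assumptions, not the production dictionaries:

```python
# Hypothetical guardrail: every token extracted from the completion must
# match a whitelist entry; anything else defers to user confirmation.
ALLOWED = {
    "interval": {"1m", "5m", "1h", "24h"},
    "view": {"trades", "traders", "positions", "pnl"},
}

def guard(field: str, value: str):
    """Return (value, True) if whitelisted, else (None, False) to force
    the safe fallback (explicit filter form, no query executed)."""
    ok = value in ALLOWED.get(field, set())
    return (value if ok else None, ok)
```

The point of the design is that the model output never reaches the query layer directly; only whitelisted values do.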

Technical Architecture

  • Normalized trade store (stream & batch ingestion) feeding an analytics API (part of our internal platform).
  • Prompt builder injecting a constrained grammar hint (symbols, intervals, metrics).
  • GPT-3 completion call (low temperature) returning a compact structured hint string.
  • Parser & validator mapping to an internal parameter DTO; invalid tokens rejected.
  • Execution + navigation layer issuing optimized parameterized queries (pre-aggregations / caching) and routing to the appropriate application module (trades, traders, positions, P&L).
  • Observability: logged intent → parsed parameters → execution latency for tuning & safety reviews.
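The prompt-builder step in the pipeline above can be sketched like this. The template wording and function signature are assumptions for illustration; the real grammar hint enumerated symbols, intervals and metrics pulled from the analytics API:

```python
# Hypothetical prompt template injecting a constrained grammar hint so the
# completion stays on-vocabulary; low temperature keeps output deterministic.
TEMPLATE = (
    "Translate the analyst request into key=value tokens.\n"
    "Allowed symbols: {symbols}\n"
    "Allowed views: {views}\n"
    "Allowed intervals: {intervals}\n"
    "Request: {request}\nTokens:"
)

def build_prompt(request: str, symbols, views, intervals) -> str:
    """Inject the allowed vocabulary into the completion prompt."""
    return TEMPLATE.format(
        symbols=", ".join(sorted(symbols)),
        views=", ".join(sorted(views)),
        intervals=", ".join(sorted(intervals)),
        request=request.strip(),
    )
```

Enumerating the vocabulary in the prompt itself is what made a plain completion API (pre function-calling) reliable enough for this narrow domain.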

Outcomes

  • Reduced average navigation steps for exploratory tasks (multi-filter sequences) to a single confirmed action.
  • Higher session breadth: more distinct symbol/interval combinations per session (indicating faster pivoting).
  • Lower abandonment of partially configured queries.
  • Seamless cross-view transitions (trades ↔ traders ↔ positions ↔ P&L) using a shared validated parameter object.

Responsible AI & Safeguards

  • Scope restriction: The model assisted only with navigation parameters; it never produced financial advice or predictions.
  • Validation layer: Hard constraints on symbols, intervals, metrics; any mismatch required manual confirmation.
  • Audit trail: Stored anonymized (hashed) prompt + derived parameter object for quality review and drift detection.
  • User agency: Always editable structured parameters before execution.
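The audit-trail item above can be sketched as hashing the raw prompt before storage, so reviews and drift detection never need the original text. The salt value and record shape are assumptions for illustration:

```python
import hashlib
import time

def audit_record(prompt: str, params: dict) -> dict:
    """Store a salted hash of the raw prompt plus the derived parameter
    object, keeping the audit trail anonymized."""
    SALT = b"deployment-secret"  # assumption: a rotated per-environment secret
    digest = hashlib.sha256(SALT + prompt.encode("utf-8")).hexdigest()
    return {"prompt_sha256": digest, "params": params, "ts": time.time()}
```

Hashing with a deployment-side salt lets reviewers detect repeated or drifting intents (identical hashes) without ever reading user phrasing.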

Lessons & Evolution

Early integration highlighted the importance of structured post-processing, guardrail transparency, and a reusable parameter model that can drive both data queries and navigation. It demonstrated tangible UX gains from natural language intent where domain vocabularies are constrained (symbols, metrics, intervals, target views). Subsequent iterations considered migrating to function-calling style APIs for even stricter schema adherence. For us, this remains a good example of a small, well-bounded AI helper inside a product, not a general-purpose AI analyst.
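To make the function-calling idea concrete, a schema for this domain might look like the sketch below. This is purely illustrative of the later direction considered, not something that shipped; all names and enum values are assumptions:

```python
# Illustrative function-calling style schema; enum constraints would replace
# most of the regex/whitelist post-processing used with raw completions.
NAVIGATE_SCHEMA = {
    "name": "navigate",
    "description": "Set navigation parameters for trade analytics views.",
    "parameters": {
        "type": "object",
        "properties": {
            "symbol": {"type": "string", "enum": ["BTC-USDT", "ETH-USDT"]},
            "view": {"type": "string",
                     "enum": ["trades", "traders", "positions", "pnl"]},
            "window": {"type": "string", "enum": ["1h", "24h", "7d"]},
        },
        "required": ["symbol", "view"],
    },
}
```

With a schema like this, adherence is enforced at the API level instead of being reconstructed after the fact from free text.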

Disclaimer

This page describes an early 2022 implementation using GPT-3 completions under an approved use case. Mention of the model/API reflects historical fact and does not imply endorsement or sponsorship by its provider. We do not cite provider approval for advertising purposes; it is presented solely to illustrate proven, responsible early adoption of LLM technology.

Related Services

Need similarly focused helpers in your product? Explore our Data & Analytics, Architecture & Design and Custom Development services, or start with the Discovery & Acceleration Program. Our core work remains pragmatic architecture, engineering and data; AI is one of the tools we use where a narrow assistive layer makes sense.

Discuss AI enablement

Book a short call to evaluate where constrained, safe AI assistance can remove friction in your workflow or data exploration interfaces.

Contact