Client-hosted processing with VisionFI intelligence. Your data stays in your environment. Our expertise arrives per transaction.
VisionFI has architected the Scout platform into two distinct layers that can operate across separate environments. This separation is the foundation of our data sovereign offering.
When your team processes a loan package, commercial appraisal, or compliance review through Scout, your data is processed entirely within your infrastructure. VisionFI provides the intelligence that tells Scout what to look for and how to evaluate it, but never sees the documents themselves.
Financial institutions are well-acquainted with data sovereignty: knowing where data physically lives, what jurisdiction governs it, and that institutional policies apply to it at rest.
But AI introduces a gap in that framework. An institution can achieve perfect data sovereignty and still watch sensitive documents leave the boundary at the moment it matters most: when AI is reading and reasoning over them.
We call this gap inference sovereignty: the guarantee that not only does your data reside in your environment, but the AI processing of that data also happens within your environment, under your controls, in your region.
The data sovereign architecture ships in two deployment models. Both deliver identical Scout intelligence and identical data sovereignty guarantees. The choice between them is operational.
The first deployment model runs Scout Fieldwork as containers within your Azure tenant: AKS clusters or ACI container groups, virtual networks, container registries, and the associated security and scaling policies. Your Azure subscription also provides the foundation model endpoints (Azure OpenAI Service or equivalent) and blob storage.
Best for: Institutions needing API-level integrations, batch processing pipelines, connections to partner and B2B systems, or horizontal scaling across multiple concurrent workloads.
The second deployment model runs Scout Fieldwork directly inside Scout Notebook on the user's workstation. Documents are ingested and processed locally on a machine the institution already owns, manages, and secures. The institution's Azure tenant provides only two services: model inference endpoints and blob storage. Containers, container orchestration, virtual networking, and the associated management overhead are eliminated entirely.
Best for: Interactive analyst use through Scout Notebook. Strongest inference sovereignty, lightest Azure administration burden. Not suitable for headless API access or high-volume batch workloads.
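The embedded model's footprint can be made concrete with a configuration sketch. This is a hypothetical illustration, not Scout's actual config format: the point is that only two tenant-owned service URLs appear, because the embedded deployment needs nothing else from Azure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmbeddedFieldworkConfig:
    """Hypothetical config for the embedded deployment model.

    Only two Azure services are referenced: the institution's own
    model inference endpoint and its blob storage account. There is
    no cluster, container registry, or virtual-network entry because
    the embedded model requires none of them.
    """
    inference_endpoint: str    # the tenant's Azure OpenAI (or equivalent) endpoint
    blob_storage_account: str  # the tenant's blob storage URL

# Hostnames below are invented placeholders.
cfg = EmbeddedFieldworkConfig(
    inference_endpoint="https://contoso-openai.openai.azure.com",
    blob_storage_account="https://contososcout.blob.core.windows.net",
)
```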
Your iPhone stores your photos, messages, and personal data locally on a device you own and control. But the intelligence that makes the camera recognize faces, the keyboard predict your next word, and Siri understand your questions comes from Apple, delivered continuously, updated invisibly, and never requiring you to install a new operating system to get smarter features.
Your data stays on your device. Apple's intelligence arrives on demand. You never manage AI version upgrades.
Your data stays in your Azure environment. VisionFI's intelligence arrives per transaction. Your IT team never manages ILM version upgrades.
Most AI vendors can say they don't store your data. They cannot say the inference itself happened inside your environment. Scout can. Model calls, document reasoning, field extraction: all within infrastructure you own and control.
When VisionFI updates a model for new regulatory guidance, improved covenant extraction, or enhanced appraisal analysis, every client receives those improvements immediately. No version upgrades. No deployment cycles.
Document content is processed within your institution's own cloud environment under your existing security controls, access policies, and audit frameworks. A clear, auditable story for regulators and board members.
This is an architectural fact your own Azure networking tools can independently verify. No trust-us claims required.
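One way to see why this is mechanically verifiable: Scout Fieldwork only ever needs egress to endpoints inside your own tenant, so the permitted destinations form a short allow-list that network tooling can enforce and audit. A minimal sketch of that check, with hypothetical hostnames:

```python
from urllib.parse import urlparse

# Hypothetical allow-list: every endpoint Fieldwork may call lives
# inside the institution's own Azure subscription.
TENANT_ENDPOINTS = {
    "contoso-openai.openai.azure.com",    # tenant's Azure OpenAI endpoint
    "contososcout.blob.core.windows.net", # tenant's blob storage
}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is tenant-owned."""
    return urlparse(url).hostname in TENANT_ENDPOINTS
```

In practice the same property would be enforced at the network layer (for example, outbound NSG rules) rather than in application code; the sketch just shows that the boundary is a checkable fact, not a policy promise.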
Giving institutions the flexibility to run Scout Fieldwork in their own environment is not simply a matter of relocating a container. It requires significant, ongoing engineering investment that ultimately benefits every client on the platform.
Scout Fieldwork may run against Azure OpenAI Service, Google Vertex AI, or other foundation models depending on the institution's configuration. Each model family has different strengths, latency profiles, and behavioral nuances. VisionFI also develops proprietary small models for specific financial document tasks. Our engineering team continuously tests, tunes, and validates this full model stack so your results are consistently accurate regardless of which infrastructure powers your instance.
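The multi-backend idea can be sketched as a routing table: the task definition stays backend-agnostic, and a selection step picks whichever model family the institution's configuration provides. Backend names, the config shape, and the routing logic here are all invented for illustration.

```python
# Hypothetical backend registry; endpoints are placeholders.
BACKENDS = {
    "azure_openai": {"endpoint": "https://contoso.openai.azure.com", "family": "gpt"},
    "vertex_ai":    {"endpoint": "https://us-central1-aiplatform.googleapis.com", "family": "gemini"},
    "visionfi_slm": {"endpoint": "local", "family": "visionfi-small"},
}

def pick_backend(institution_config: dict, task: str) -> dict:
    # Proprietary small models may handle narrow document tasks;
    # everything else routes to the foundation model the
    # institution's own tenant provides.
    if task in institution_config.get("slm_tasks", set()):
        return BACKENDS["visionfi_slm"]
    return BACKENDS[institution_config["foundation_backend"]]
```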
Scout Orchestration is a managed intelligence platform handling real-time instruction assembly, transaction metering, performance telemetry, encrypted payload delivery, and versioned rollout across the entire client base. Within a given transaction, it manages a structured task workflow that breaks complex document analysis into discrete steps, coordinating which model handles which step, in what sequence, and how partial results feed downstream analysis.
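The structured task workflow described above can be sketched as an ordered plan whose steps each consume the partial results of the steps before them. Step names, fields, and logic are hypothetical; the shape is what matters: orchestration supplies the sequence, and each step runs locally over the document.

```python
def classify(doc, ctx):
    # Step 1: determine what kind of document this is.
    return {**ctx, "doc_type": "commercial_loan"}

def extract_fields(doc, ctx):
    # Step 2: consume step 1's result to decide which fields to pull.
    fields = {"borrower": "Example Co"} if ctx["doc_type"] == "commercial_loan" else {}
    return {**ctx, "fields": fields}

def validate(doc, ctx):
    # Step 3: downstream analysis over the accumulated partial results.
    return {**ctx, "valid": bool(ctx["fields"])}

PLAN = [classify, extract_fields, validate]  # sequence chosen by orchestration

def run_plan(doc):
    ctx = {}
    for step in PLAN:
        ctx = step(doc, ctx)  # each step's output feeds the next
    return ctx
```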
VisionFI is not a hosting company. We are an applied AI research and engineering firm. The core of what clients pay for is the continuous improvement of document intelligence: the domain ontology, the validation logic, the lexical variation research, and the regulatory expertise that make Scout meaningfully better than general-purpose AI tools at understanding financial documents.
VisionFI is evaluating a consumption model that aligns with how leading enterprise platforms already operate. The VisionFI Token is a simple, predictable unit measuring intelligence consumption per action.
A commercial loan audit consumes a defined number of VisionFI Tokens. A stipulation review consumes fewer. The rate is fixed and published, regardless of document length, complexity, or which compute resources your environment uses.
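The arithmetic is deliberately simple: each action type has one published rate, so total consumption is just rate times count. The rates below are invented placeholders, not VisionFI's actual rate card.

```python
# Hypothetical published rates, in VisionFI Tokens per action.
# The rate is per action type — not per page, per model call,
# or per unit of compute.
TOKEN_RATES = {
    "commercial_loan_audit": 12,
    "stipulation_review": 3,
}

def tokens_for(actions):
    """Total VisionFI Tokens for a list of (action_type, count) pairs."""
    return sum(TOKEN_RATES[action] * count for action, count in actions)

# Two loan audits and five stipulation reviews:
tokens_for([("commercial_loan_audit", 2), ("stipulation_review", 5)])  # 39
```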
Databricks operates on a virtually identical data sovereign architecture: their control plane stays in Databricks' own cloud account while the customer's data and compute run in the customer's own Azure subscription. Customers pay Databricks for DBUs (Databricks Units) and pay Microsoft separately for the underlying compute.
| Dimension | Databricks Model | VisionFI Model |
|---|---|---|
| Intelligence layer | Databricks control plane | Scout Orchestration (VisionFI-hosted) |
| Processing layer | Customer's Azure subscription | Scout Fieldwork (your Azure tenant) |
| Metering unit | DBU (Databricks Unit) | VisionFI Token |
| You pay the vendor for | Platform intelligence | Document intelligence |
| You pay Azure for | Compute & storage | Container compute & model inference |
| Your data visible to vendor? | Metadata visible; requires additional PII config | Architecturally impossible: no content crosses the boundary |
For institutions that prefer total budget predictability, VisionFI is also evaluating a Max Plan: a single flat monthly price per Scout ILM model. No per-action tracking, no token balances to manage. An institution using Scout for Commercial Lending and Audit would have fixed monthly line items that finance teams can plan around.
The data sovereign architecture makes a fixed-cost plan especially viable because the institution absorbs its own Azure compute costs for model inference. VisionFI's per-transaction cost is the lightweight intelligence delivery, not the heavy compute. Appropriate fair-use guardrails would be in place: velocity limits, generous monthly ceilings with notification thresholds, and contractual review rights for anomalous usage patterns.
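The guardrails named above (velocity limits, a generous ceiling, a notification threshold) compose into a small state machine. This sketch uses invented numbers and return codes purely to show how the three mechanisms would interact under a flat-rate plan.

```python
import time

class FairUseGuard:
    """Hypothetical fair-use sketch: a per-minute velocity limit plus
    a monthly ceiling with a notification threshold. All limits and
    outcome labels are illustrative."""

    def __init__(self, per_minute=60, monthly_ceiling=100_000, notify_at=0.8):
        self.per_minute = per_minute
        self.monthly_ceiling = monthly_ceiling
        self.notify_at = notify_at
        self.window = []       # timestamps of actions in the last minute
        self.month_total = 0   # VisionFI Tokens consumed this month

    def record(self, tokens, now=None):
        now = time.time() if now is None else now
        self.window = [t for t in self.window if now - t < 60]
        if len(self.window) >= self.per_minute:
            return "velocity_limited"            # too many actions per minute
        self.window.append(now)
        self.month_total += tokens
        if self.month_total >= self.monthly_ceiling:
            return "ceiling_reached"             # contractual review trigger
        if self.month_total >= self.notify_at * self.monthly_ceiling:
            return "notify_finance"              # heads-up, not a block
        return "ok"
```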
If VisionFI never sees your documents, how does the platform continue to improve? Through a structured telemetry framework designed to be PII-free by architecture, not just by policy.
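"PII-free by architecture" implies the telemetry payload is built only from structural metadata: counts, timings, and outcome codes, never document text. A hypothetical sketch of such a payload builder, with invented field names:

```python
# Keys that would indicate document content leaking into telemetry.
FORBIDDEN_KEYS = {"text", "content", "ocr", "borrower", "name", "address"}

def build_telemetry(transaction: dict) -> dict:
    """Emit only structural metadata about a transaction.

    Note that field *values* are never copied — only counts,
    durations, and boolean outcomes cross the boundary.
    """
    payload = {
        "doc_type": transaction["doc_type"],
        "page_count": transaction["page_count"],
        "duration_ms": transaction["duration_ms"],
        "fields_extracted": len(transaction["fields"]),  # count, not content
        "validation_passed": transaction["valid"],
    }
    # Architectural guard: refuse to emit a payload with content-like keys.
    assert FORBIDDEN_KEYS.isdisjoint(payload), "content key leaked into telemetry"
    return payload
```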