The pipeline

Gather AI: documents in, structure out.

The engine behind Lore. It splits multi-document bundles into separately filed records, classifies them against your taxonomy, tags emails as they land, brokers external sharing with role-based access and compliance gating, and pushes live updates back to the case — without slowing the case system down.

For the architecture and deployment topology, see the infrastructure page.

Classify & split

Documents in, structure out.

One upload of a 30-page mixed bundle returns N classified, separately filed documents — against your taxonomy, with confidence scores, with suggested file names.

Drag-and-watch multi-document splitting

Drop a 30-page mixed bundle into the case workspace and watch it become N separately filed documents, each with a suggested name, description, tag, and folder placement. The AI detects boundaries (letterheads, page resets, signatures); splits are validated before anything is written.

Customer-owned taxonomy

Classifies against your tag list, not a vendor-fixed schema. Bring your own categories; swapping them is a config change, not a model retrain.

Two-pass classification

A fast preview classification (~2 seconds) populates the UI immediately; deeper enrichment runs in the background and updates the row when it finishes.
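The two-pass shape can be sketched as a cheap synchronous call that fills the row at once, with a slower pass replacing it in the background. `classify_preview`, `classify_deep`, and `update_row` are stand-ins, not the product's API:

```python
# Two-pass classification sketch: preview result lands immediately,
# enrichment overwrites the row whenever it finishes.
import threading

def classify_preview(doc: str) -> dict:
    return {"tag": "correspondence", "pass": "preview"}   # ~2 s in production

def classify_deep(doc: str) -> dict:
    return {"tag": "demand-letter", "pass": "enriched"}   # slower, richer

def classify(doc: str, update_row) -> dict:
    preview = classify_preview(doc)
    update_row(preview)                        # UI populated immediately
    def enrich():
        update_row(classify_deep(doc))         # row updated in place later
    threading.Thread(target=enrich, daemon=True).start()
    return preview
```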

Confidence scores per document

Anything below threshold is flagged for human-in-the-loop review before it is written to the case. Auditable, configurable, never silent.

Suggested file names and descriptions

Generated from the document content, in your naming style. The investigator gets a properly named, properly described file the moment it lands.

Tag every artifact

One vocabulary across the case.

Documents, emails, and notes carry one shared tag set, so a single filter chip surfaces every relevant artifact regardless of record type.

Per-email tag classification

Every inbound and outbound email is tagged against the case taxonomy in about one second, at roughly $0.0002 per email. Fire-and-forget, no async queue, no UI block.

One tag vocabulary across the case

Documents, emails, and notes share the same tag set. One filter chip surfaces every relevant artifact across record types. No reconciliation between vocabularies.

AI-enriched notes

Tag suggestions appear as the adjuster writes and refine as the note grows. The classifier sees what the writer sees.

Storage

Files in SharePoint, not in ServiceNow.

Files go straight from the browser to your document store. The case system holds metadata and the document's URL — never the bytes. Per-row attachment footprint stays at zero.

Browser-direct uploads to your document store

Files upload from the browser straight to your document store. They never pass through the case system's attachment storage.

Zero ServiceNow document storage

Per-row attachment footprint stays at zero. Storage costs and SN attachment limits stop being a scaling cliff. ServiceNow holds metadata and audit; SharePoint holds the bytes.

Document URL written back to the case

Once a file lands in your document store, its location is written back to the case record. Clicking the row opens the file in place.
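The shape of that record can be sketched as follows; field names are illustrative, and the key point is what the record does not contain:

```python
# Metadata-only case record: the row holds the document's name, tags, and
# SharePoint URL — never the bytes.
from dataclasses import dataclass

@dataclass
class CaseDocument:
    name: str
    tags: list[str]
    url: str = ""   # written back once the upload lands in the store
    # note: no content/bytes field — those live in the document store

def write_back(doc: CaseDocument, sharepoint_url: str) -> CaseDocument:
    doc.url = sharepoint_url   # clicking the row opens the file in place
    return doc
```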

External sharing

Brokered, not improvised.

Role-based access, BAA gating, auto-grant on party-add, auto-revoke on party-remove, time-bounded by default. Every grant and revoke is audited.

Role-based external access

Adjuster, attorney, expert, claimant — each role sees only what their role permits. The role-access matrix is enforced at the API, not in the UI.
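A minimal sketch of that matrix check, done server-side; the roles are the page's own, but the document classes and the matrix contents are illustrative — the real matrix is customer-configured:

```python
# Role-access matrix enforced at the API layer; hiding a row in the UI
# is not the enforcement mechanism.
ROLE_MATRIX = {
    "adjuster": {"claim", "medical", "legal"},
    "attorney": {"claim", "legal"},
    "expert":   {"medical"},
    "claimant": {"claim"},
}

def can_access(role: str, doc_class: str) -> bool:
    return doc_class in ROLE_MATRIX.get(role, set())
```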

BAA gating on PHI documents

Parties without an active Business Associate Agreement cannot receive PHI-bearing documents. The API blocks the share and writes an audit event — no manual gate.
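The gate itself reduces to a small check; this sketch uses illustrative field and event names, not the product's schema:

```python
# BAA gate: a share of a PHI-bearing document is refused — and audited —
# when the recipient has no active Business Associate Agreement.
def share_document(doc: dict, party: dict, audit: list) -> bool:
    if doc.get("contains_phi") and not party.get("baa_active"):
        audit.append({"event": "share_blocked", "reason": "no_active_baa",
                      "party": party["name"]})
        return False
    audit.append({"event": "share_granted", "party": party["name"]})
    return True
```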

Auto-grant on party-add

Adding an attorney or expert to a case grants them access to in-scope documents automatically. No second step, no missed shares.

Auto-revoke on party-remove

Removing a party expires every active share they hold. Document permissions are revoked at the same moment; an audit event is written for each revocation.
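A sketch of the revoke-on-remove step, with one audit event per revocation; the share and audit structures are illustrative:

```python
# Auto-revoke: removing a party expires every active share they hold.
def remove_party(party: str, shares: list[dict], audit: list[dict]) -> None:
    for share in shares:
        if share["party"] == party and share["active"]:
            share["active"] = False
            audit.append({"event": "share_revoked",
                          "party": party, "doc": share["doc"]})
```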

Time-bounded shares

Every share has a configurable expiration. SharePoint enforces it; nothing lingers past its window.

Live updates

Without a refresh.

When AI finishes enriching a document, every open Lore session for that case updates in place. Per-user routing — no cross-user noise.

Per-user live updates

When AI finishes enriching a document, every open Lore session for that case updates without a refresh. Updates route per user — your tabs see your uploads, not your colleague's.
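The routing idea can be sketched with an in-memory session registry; production would sit on a push channel, and these names are illustrative:

```python
# Per-user routing: an enrichment event is delivered only to the sessions
# belonging to the user who owns the upload.
from collections import defaultdict

sessions: dict[str, list[list]] = defaultdict(list)  # user -> open tabs

def open_session(user: str) -> list:
    inbox: list = []
    sessions[user].append(inbox)
    return inbox

def publish(user: str, event: dict) -> None:
    for inbox in sessions[user]:   # your tabs only, not your colleague's
        inbox.append(event)
```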

Async enrichment queue

Long-running classification doesn't block the upload UI. The user gets a row immediately; enrichment finishes when it finishes and the row updates in place.

Configure & guarantee

How it operates.

Per-customer rules, two deployment postures, server-side cache, confidence-routed human review, idempotent installs, IP-stamped responses.

Auto-sort rules

Per-customer, per-document-type rules for default folder placement. New files land where your operators expect them; no manual triage on every upload.
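The rule lookup is a small map from (customer, document type) to a folder, with a fallback when nothing matches; the rules shown are hypothetical:

```python
# Auto-sort sketch: per-customer, per-document-type default placement.
RULES = {
    ("acme", "medical-record"): "Medical/Records",
    ("acme", "demand-letter"):  "Legal/Demands",
}

def default_folder(customer: str, doc_type: str) -> str:
    return RULES.get((customer, doc_type), "Unsorted")
```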

Two deployment postures

The preferred path runs in your cloud tenant. For environments that can't reach the cloud directly, Gather AI runs entirely inside the case system. Same product, two postures.

Server-side AI output cache

Claim Summary and other expensive outputs cache at the function tier, shared across every client viewing the same case. Models run once per case-state, not once per viewer.
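A sketch of that cache shape, keyed on case and state version so every viewer of the same case-state shares one model run; `run_model` stands in for the real summary call:

```python
# Function-tier cache: the expensive call runs once per (case, state)
# pair, not once per viewer.
calls = {"n": 0}
_cache: dict[tuple, str] = {}

def run_model(case_id: str) -> str:
    calls["n"] += 1                 # counts real model invocations
    return f"summary of {case_id}"

def claim_summary(case_id: str, state_version: int) -> str:
    key = (case_id, state_version)
    if key not in _cache:
        _cache[key] = run_model(case_id)   # once per case-state
    return _cache[key]
```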

Confidence-routed human review

Low-confidence classifications surface for adjuster confirmation before they're written. The model never silently mislabels; the human is the last word when the AI isn't sure.
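The routing decision itself is a threshold check; the 0.85 default here is illustrative — the real threshold is configurable:

```python
# Confidence routing: at or above threshold files automatically,
# below threshold queues for adjuster confirmation.
def route(classification: dict, threshold: float = 0.85) -> str:
    return ("auto-file" if classification["confidence"] >= threshold
            else "human-review")
```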

Idempotent installs

Every backend install script is safe to re-run. Re-deploys, partial failures, and rollbacks don't leave orphaned config.
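The property can be sketched as an upsert-by-key step: re-running after a partial failure converges on the same state instead of duplicating or orphaning rows. The table shape is illustrative:

```python
# Idempotent install sketch: create-or-update by key, never create-twice.
def install(config_table: dict, desired: dict) -> dict:
    for key, value in desired.items():
        config_table[key] = value
    return config_table
```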

IP-stamped API responses

Every API response carries X-Copyright and X-Vergence-Product headers. The product asserts ownership on every byte it serves.
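Stamping can be sketched as a wrapper around every handler; the header names are the page's own, but the values and the handler shape are illustrative:

```python
# Response stamping: the wrapper adds the ownership headers to every
# response a handler returns.
def stamp(handler):
    def wrapped(request):
        body, headers = handler(request)
        headers["X-Copyright"] = "(c) Vergence"
        headers["X-Vergence-Product"] = "Gather AI"
        return body, headers
    return wrapped
```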

See Gather AI in action.

Upload a multi-doc PDF in our synthetic case and watch the splits and classifications appear.