How to use this assessment: Work through each item and mark it complete once confirmed. Items flagged High Risk represent the most common sources of material overspend. A score of 13 or more places you in the Well Governed band.

Scoring Guide
Tally your confirmed items against these benchmarks to determine your current maturity level.
0 – 6 High Exposure
7 – 12 Partial Governance
13 – 25 Well Governed

Section 1

1. You have identified and catalogued all internal data sources — structured, unstructured, and semi-structured — that will be used to ground, fine-tune, or evaluate your GenAI applications. High Risk
GenAI applications are only as reliable as the data they are grounded in. Enterprises that skip the data catalogue step frequently discover mid-deployment that their most valuable internal knowledge is trapped in formats that cannot be ingested — scanned PDFs without OCR, legacy database exports with undocumented schemas, or SharePoint repositories with inconsistent metadata. Complete your data catalogue before selecting a GenAI platform, not after.
● High Risk
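A data catalogue does not need specialist tooling to start. The sketch below (in Python, with illustrative field names rather than a prescribed schema) shows the minimum attributes worth recording per source, and how the catalogue immediately surfaces sources that cannot yet be ingested:

```python
from dataclasses import dataclass, field

@dataclass
class DataSource:
    """One catalogue entry per internal data source."""
    name: str
    kind: str               # "structured" | "semi-structured" | "unstructured"
    location: str           # system of record, e.g. a SharePoint site or DB
    owner: str              # accountable data owner
    ingestible: bool        # can current tooling actually ingest it?
    blockers: list = field(default_factory=list)  # e.g. "scanned PDF, no OCR"

catalogue = [
    DataSource("HR policies", "unstructured", "SharePoint/HR", "HR Ops", True),
    DataSource("Legacy CRM export", "structured", "crm_dump.sql", "Sales Ops",
               False, ["undocumented schema"]),
]

# Sources that cannot be ingested yet need remediation before
# platform selection, not after.
blocked = [s.name for s in catalogue if not s.ingestible]
```

The blockers field is where mid-deployment surprises, such as un-OCRed PDFs, get caught early.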
3. You have assessed data quality across your GenAI data sources — specifically recency, accuracy, completeness, and consistency — and documented where gaps exist. High Risk
A retrieval-augmented generation system that retrieves outdated or inaccurate internal documents produces confidently incorrect outputs. Data quality assessment for GenAI goes beyond traditional BI data quality checks; it requires evaluating whether documents are current, whether they reflect institutional knowledge or superseded processes, and whether conflicting information across sources is resolvable. Document quality gaps before deployment and plan remediation as part of the GenAI programme, not as a post-launch discovery.
● High Risk
4. You have confirmed that all data sources intended for GenAI use have been reviewed for personal data, confidential information, and third-party intellectual property, and that appropriate data governance policies are in place. High Risk
GenAI applications that retrieve and cite internal documents can inadvertently expose personal data, trade secrets, or third-party copyrighted content in their outputs. Confirm that every data source in your GenAI ingestion pipeline has been reviewed by your legal and compliance team, that personal data is either excluded or appropriately masked, and that third-party content is used within the bounds of your licence agreements.
● High Risk
5. You have established a data refresh and maintenance schedule for your GenAI knowledge bases that ensures retrieved information remains current. Medium Risk
A GenAI knowledge base that was accurate at deployment will drift from organisational reality over time as policies change, products evolve, and personnel turn over. Define the update frequency, the responsible data owner, and the staleness detection mechanism for each data source before deployment. Knowledge bases without active maintenance become liabilities — producing plausible-sounding but outdated outputs — within months of launch.
● Medium Risk
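Staleness detection can be as simple as comparing each source's last update against an owner-agreed refresh interval. A minimal sketch, with illustrative source names and intervals:

```python
from datetime import datetime, timedelta

# Illustrative per-source refresh intervals; real values come from data owners.
REFRESH_POLICY = {
    "hr_policies": timedelta(days=90),
    "product_catalogue": timedelta(days=7),
}

def stale_sources(last_updated: dict, now: datetime) -> list:
    """Return the sources whose last update exceeds their refresh interval."""
    return [name for name, ts in last_updated.items()
            if now - ts > REFRESH_POLICY[name]]

now = datetime(2025, 6, 1)
last_updated = {
    "hr_policies": datetime(2025, 1, 10),        # ~142 days old: stale
    "product_catalogue": datetime(2025, 5, 29),  # 3 days old: fresh
}
needs_refresh = stale_sources(last_updated, now)  # feeds the maintenance queue
```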
6. You have confirmed that your cloud or on-premises infrastructure meets the compute, storage, network, and latency requirements for your target GenAI architecture at production scale. High Risk
GenAI applications have substantially different infrastructure requirements from traditional enterprise applications. Vector database performance degrades at scale without appropriate indexing and hardware sizing. LLM inference at enterprise throughput requires GPU-accelerated infrastructure or high-capacity API rate limits. Document your infrastructure requirements at the expected production scale — not the pilot scale — and confirm that your existing infrastructure or contracted cloud capacity can meet them.
● High Risk

Section 2

8. You have selected and validated a vector database technology and confirmed that it integrates with your chosen LLM provider and scales to your document volume. Medium Risk
Vector database selection is a long-term architectural decision. Popular options — Pinecone, Weaviate, Chroma, pgvector — differ significantly in managed service availability, query performance at scale, hybrid search capability, and cost structure. Test your candidate vector database with your actual document corpus volume and query patterns before committing to a production deployment architecture.
● Medium Risk
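Whatever database you shortlist, the test harness is the same: replay representative queries against your real corpus and record latency percentiles. The sketch below is database-agnostic; `query_fn` stands in for whichever search call your candidate's client exposes, and the lambda at the bottom is only a placeholder for a dry run of the harness itself:

```python
import statistics
import time

def benchmark(query_fn, queries, runs=3):
    """Replay queries `runs` times and report p50/p95 latency in milliseconds.
    `query_fn` is whatever search call your candidate vector
    database client exposes."""
    samples = []
    for _ in range(runs):
        for q in queries:
            t0 = time.perf_counter()
            query_fn(q)
            samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(len(samples) * 0.95) - 1],
    }

# Placeholder query function so the harness can be exercised standalone.
result = benchmark(lambda q: q.lower(), ["refund policy", "pricing tiers"])
```

Run it at your projected production corpus size and concurrency, not at pilot scale.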
9. You have confirmed that your organisation's API security controls — rate limiting, authentication, secrets management, and network policy — are configured for production GenAI workloads. High Risk
GenAI API keys are high-value targets for attackers because they provide direct access to AI inference and, through prompt injection, potentially to internal data. Confirm that API keys are stored in a secrets management system (not in code repositories), that rate limits are configured to detect anomalous consumption, and that API traffic is routed through your standard network monitoring stack, not bypassed via developer accounts.
● High Risk
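Two of these controls are straightforward to verify in code: keys sourced from the environment (populated by the secrets manager at deploy time) rather than from the repository, and a consumption alarm on anomalous usage. A minimal sketch; the variable name and the 5x baseline threshold are illustrative:

```python
import os

def load_api_key(var: str = "LLM_API_KEY") -> str:
    """Read the key from the environment, populated by the secrets manager
    at deploy time, never from source code or a checked-in config file."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; check the secrets manager integration")
    return key

def looks_anomalous(tokens_this_hour: int, hourly_baseline: int,
                    factor: int = 5) -> bool:
    """Crude consumption alarm: flag usage above `factor` times the baseline."""
    return tokens_this_hour > factor * hourly_baseline
```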
10. You have evaluated the observability tooling available for your GenAI stack — LLM call tracing, token consumption monitoring, output quality metrics, and latency dashboards — and confirmed you have visibility into production behaviour before launch. Medium Risk
GenAI applications in production exhibit failure modes that are invisible to traditional application monitoring: prompt injection attempts, context window exhaustion, model hallucinations, and retrieval quality degradation. Implement GenAI-specific observability from launch — not as a retrofit — including LLM call tracing, per-request token logging, output quality sampling, and retrieval relevance scoring.
● Medium Risk
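Per-request token and latency logging can be added with a thin wrapper around your LLM client. The sketch below assumes the wrapped call returns a dict with a `usage` entry, as most chat-completion style responses do; adapt the field names to your provider:

```python
import functools
import time

call_log = []  # in production this feeds your tracing backend, not a list

def traced(fn):
    """Record latency and token consumption for every LLM call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        response = fn(*args, **kwargs)
        call_log.append({
            "latency_ms": (time.perf_counter() - t0) * 1000,
            "tokens": response.get("usage", {}).get("total_tokens", 0),
        })
        return response
    return wrapper

@traced
def fake_llm_call(prompt):
    """Stand-in for a real provider call, echoing a usage block."""
    return {"text": "...", "usage": {"total_tokens": len(prompt.split())}}
```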
11. You have established an AI governance committee with clear ownership of GenAI use case approval, risk assessment, and compliance monitoring. High Risk
Over 57 percent of mature enterprise AI adopters now use a hub-and-spoke governance model: a central AI governance committee that sets standards, with use-case-level owners responsible for compliance implementation. Without governance ownership, GenAI use cases proliferate without risk assessment, commercial terms are signed by engineering teams without legal review, and compliance failures are discovered by regulators rather than internally. Establish governance before approving production use cases, not after the first compliance incident.
● High Risk
13. You have completed a risk assessment for each GenAI use case covering: accuracy requirements, hallucination impact, regulatory applicability, and human oversight requirements. High Risk
The EU AI Act, applicable from August 2026, classifies AI systems by risk level and imposes specific requirements on high-risk systems including documentation, bias testing, and human oversight. Complete a risk classification for each GenAI use case before deployment. High-risk use cases — those affecting access to services, employment decisions, or regulated financial advice — require specific governance controls that are non-trivial to retrofit after launch.
● High Risk

Section 3

14. You have documented a responsible AI policy covering: acceptable use cases, prohibited use cases, human oversight requirements, and the process for flagging and responding to AI output failures. Medium Risk
A responsible AI policy is not just a compliance document; it is the operational framework that guides engineering teams, product managers, and users in appropriate GenAI use. Without a published policy, each team makes independent decisions about acceptable use, oversight thresholds, and incident response — creating governance fragmentation that is exposed in the first significant AI output failure.
● Medium Risk
15. You have confirmed that your GenAI deployment includes appropriate human oversight mechanisms — review queues, confidence thresholds, override capabilities — for all use cases where AI outputs influence consequential decisions. High Risk
AI outputs in consequential workflows — customer-facing decisions, financial recommendations, HR actions, medical information — require human review capability, not just theoretical escalation routes. Confirm that your production GenAI applications include confidence scoring, flagging of low-confidence outputs for human review, and audit logging of all AI-assisted decisions. Human oversight mechanisms that exist only in policy documents are not oversight mechanisms.
● High Risk
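Confidence-threshold routing is the core mechanism. A minimal sketch; the 0.8 threshold is illustrative and should come from each use case's risk assessment:

```python
REVIEW_THRESHOLD = 0.8  # illustrative; set per use case from its risk assessment

review_queue = []  # outputs awaiting human review
audit_log = []     # every AI-assisted decision, reviewed or not

def route(output: str, confidence: float) -> str:
    """Auto-approve high-confidence outputs; queue the rest for human review.
    Everything is audit-logged either way."""
    decision = "auto" if confidence >= REVIEW_THRESHOLD else "human_review"
    if decision == "human_review":
        review_queue.append(output)
    audit_log.append({"output": output, "confidence": confidence,
                      "decision": decision})
    return decision
```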
16. You have reviewed all GenAI vendor contracts for data processing terms, SLA commitments, IP ownership, and liability provisions, with input from legal counsel. High Risk
Standard GenAI API terms are written to protect the vendor, not the enterprise buyer. Key risks in unreviewed standard terms include: no SLA on model availability, no commitment that your data is excluded from training, the buyer bearing full liability for outputs generated from its prompts, and restrictive IP terms on model outputs. Never deploy GenAI in a production application on standard API terms — engage legal counsel to review vendor contracts and negotiate enterprise terms before go-live.
● High Risk
18. You have obtained competitive pricing proposals from at least two AI vendors and used them to negotiate pricing and commercial terms with your preferred provider. Medium Risk
AI vendors will negotiate pricing, SLA, data terms, and liability provisions for enterprise accounts — but only when the buyer demonstrates credible multi-vendor evaluation and a minimum commitment volume. Without a documented alternative vendor proposal and consumption forecast, you have no negotiating position. Request pricing proposals from at least two vendors before entering commercial negotiations with your preferred choice.
● Medium Risk
19. You have confirmed your GenAI budget for the next 12 months and allocated it across platform costs, integration development, prompt engineering, governance, and training — not just API consumption. Medium Risk
GenAI programme budgets that account only for API consumption costs consistently underestimate total cost of ownership. Integration development costs $5,000 to $25,000 per use case. Prompt engineering resources cost $50,000 to $150,000 annually. Governance tooling, training, and change management add 20 to 40 percent on top of technical costs. Build a full programme budget that includes all cost categories before seeking board approval.
● Medium Risk
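A worked example using the ranges above makes the point; the mid-points are chosen arbitrarily for illustration:

```python
# Illustrative full-programme budget using mid-points of the ranges above.
use_cases = 5
api_consumption = 120_000          # forecast annual API spend, USD
integration = use_cases * 15_000   # $5k-$25k per use case, mid-point
prompt_engineering = 100_000       # $50k-$150k annually, mid-point

technical = api_consumption + integration + prompt_engineering
overhead = technical * 30 // 100   # governance, training, change mgmt (20-40%)
total = technical + overhead
```

On these assumptions, API consumption is under a third of the true programme cost.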

Section 4

20. You have a vendor management process in place that tracks API consumption against budget, monitors for contract renewal dates, and owns the vendor relationship at a senior level. Medium Risk
GenAI vendor management without a designated commercial owner creates fragmentation: engineering teams manage API access, finance manages billing, and legal manages contracts, with no single point of accountability for the total commercial relationship. Designate a named vendor manager for each GenAI provider with responsibility for commercial performance, contract compliance, and renewal negotiation.
● Medium Risk
21. You have completed a skills assessment and confirmed that your engineering, data science, and product teams have the GenAI literacy required to build, evaluate, and maintain production AI applications. Medium Risk
GenAI application development requires skills that differ from traditional software engineering: prompt engineering, RAG architecture design, output evaluation, hallucination detection, and AI ethics assessment. Confirm that your teams have received structured GenAI training — not just access to vendor documentation — and that you have access to specialist expertise (internal or external) for the most technically complex use cases.
● Medium Risk
23. You have a defined process for evaluating and approving new GenAI use cases — including a business case template, risk assessment framework, and governance sign-off requirement — that prevents unapproved shadow AI deployments. High Risk
Shadow AI is the fastest-growing enterprise governance gap: individual employees and teams deploying GenAI tools outside IT and legal oversight. Shadow AI creates data protection risks (sensitive data processed through non-approved vendor APIs), intellectual property risks (confidential information submitted to public AI services), and compliance risks (unmonitored AI use in regulated workflows). Implement a use case approval process and communicate it before shadow AI deployment has already occurred at scale.
● High Risk
24. You have a change management plan for GenAI adoption that addresses employee concerns, redefines roles affected by AI automation, and builds sustainable AI literacy across the organisation. Lower Risk
GenAI programmes that treat change management as optional consistently underestimate adoption friction. The organisations reporting highest GenAI ROI — cited at $50,000 to $500,000 annually per use case — are those that invested in structured adoption programmes, not those that built the best technical solution. Budget 15 to 20 percent of your GenAI programme cost for change management, training, and communication.
● Lower Risk
25. You have defined measurable success criteria for each GenAI use case — including accuracy targets, user adoption rates, efficiency metrics, and cost per outcome — and a review cadence to assess whether deployed use cases are delivering the projected value. Medium Risk
GenAI projects without defined success criteria and review cadences persist beyond their useful life, consuming budget and maintenance resources without generating the projected return. Define your success metrics before deployment, establish a 90-day post-launch review, and create a formal decommission process for use cases that do not meet their targets within 6 months of launch.
● Medium Risk
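Success criteria only bite if they are machine-checkable at the review date. A minimal sketch of a 90-day review against illustrative targets:

```python
# Illustrative success criteria for one use case, set before deployment.
targets = {"accuracy": 0.90, "adoption_rate": 0.50, "cost_per_outcome": 2.00}

def review(measured: dict, targets: dict) -> dict:
    """Pass/fail per metric; cost metrics pass when at or below target."""
    return {
        "accuracy": measured["accuracy"] >= targets["accuracy"],
        "adoption_rate": measured["adoption_rate"] >= targets["adoption_rate"],
        "cost_per_outcome":
            measured["cost_per_outcome"] <= targets["cost_per_outcome"],
    }

# 90-day post-launch review: accuracy and cost pass, adoption is behind target.
day_90 = review({"accuracy": 0.93, "adoption_rate": 0.35,
                 "cost_per_outcome": 1.60}, targets)
```

A failed target at day 90 triggers remediation; a target still failed at month 6 triggers the formal decommission process.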

Ready to optimise your AI contract and cost position?

Download our AI Platform Contract Negotiation Guide — covering all major vendors, pricing structures, and negotiation tactics.
Download Free Guide →

Next Steps

Score your confirmed items against the benchmarks above. If you are in the High Exposure or Partial Governance bands, prioritise the items flagged High Risk — these represent the most common sources of material overspend and are addressable within a single procurement or FinOps cycle.

Redress Compliance works exclusively on the buyer side, with no vendor affiliations. Our GenAI advisory practice has benchmarked AI costs, negotiated enterprise AI contracts, and built governance frameworks across 500+ enterprise engagements. Contact us for a confidential review of your AI cost and contract position.