Three Questions to Ask Any AI Vendor Before You Sign

Most AI vendor evaluations focus on the wrong things. Buyers ask about technology, team size, case studies, and pricing. These matter, but they're easy to answer well in a sales process regardless of whether the vendor can actually deliver.
The questions that reveal more are the ones vendors have to answer specifically — questions where "we have extensive experience in this area" doesn't cut it and where the quality of the answer tells you something real about how the team operates.
Here are three questions we think every buyer should ask, regardless of which vendor they're evaluating.
1. Tell me about an AI project that didn't go as planned. What happened and what did you do?
This is the most important question on the list.
Every vendor will tell you about their successes. A vendor who's actually shipped production AI has also encountered things that went wrong: a model that degraded faster than expected, a data quality problem that surfaced after launch, a client requirement that wasn't clear until it was too late to address cheaply. That's what building real things looks like.
What you're listening for in the answer:
Specificity. A real answer has specific details — what the project was, what went wrong, when it was discovered, who was in the room. A generic answer about "challenges" or "learnings" is a sign the vendor is deflecting.
Accountability. Did they take responsibility for what happened, or did the story involve a lot of "the client's data was messy" or "requirements changed"? Sometimes those things are true. But a vendor who can't describe any situation where they made a mistake or misjudged something hasn't been honest with you.
What they actually did. Did they communicate early, work through it with the client, find a solution? Or did they paper over it and ship something that technically worked but didn't deliver the intended value? The response to a hard situation tells you more than the situation itself.
If a vendor tells you they've never had a project go sideways, either they haven't done much real work, or they're not telling you the truth. Neither is what you want.
2. What will you not be able to do for us, and what do you recommend instead?
Vendors in active sales mode are reluctant to say no to anything. A vendor who cares about the engagement going well is usually more willing to tell you where they're not the right fit.
This question is useful in two directions.
First, it reveals whether the vendor has thought carefully about your situation or whether they're selling a standard offering regardless of fit. A good answer names something specific about your use case, your tech stack, your timeline, or your industry that creates a genuine limitation. "We don't have deep experience in regulated healthcare environments" or "our typical engagement assumes you have an existing data infrastructure — if you don't, there's pre-work we'd need to scope separately" is a useful answer.
Second, the recommendation matters. What does the vendor suggest instead? A vendor who genuinely has your interests in mind will point you toward a better option — a different vendor, a different approach, a different sequence — even when it means a smaller deal or no deal. That's a signal worth paying attention to.
The answer doesn't need to be disqualifying. Even if the limitation is minor, a vendor who can answer this honestly is demonstrating something about how they operate.
3. What does your monitoring and maintenance plan look like after launch?
This question separates vendors who think of deployment as the finish line from those who've actually operated AI in production.
Most AI systems require ongoing operational attention: monitoring for accuracy degradation, handling edge cases that weren't in the training distribution, retraining as data drifts, updating prompts when model behavior changes across API versions. A vendor who doesn't mention any of this isn't thinking about your system's long-term health.
Specifically, you want to understand:
Who is responsible after launch? Is there a transition plan, a support period, or an ongoing retainer? If the vendor hands off completely at go-live, who inside your organization has the skills and context to maintain the system?
How will you know if it stops working? What monitoring exists? What defines "not working" for this system? What happens when quality degrades — how would you catch it, and how would you respond?
What's the retraining story? AI systems that work with your data need to be updated as that data evolves. Is there a plan for that, or is the assumption that the model shipped at launch will be fine indefinitely?
If the vendor's answer to this question is thin, that's worth probing. An AI system that performs well at launch but degrades six months in, with no one responsible for catching it, is not a success.
What to do with the answers
These questions are diagnostic, not pass/fail. A vendor who can't answer the first question at all is a bigger concern than one whose post-launch plan is lighter than you'd like.
Use the answers to understand how the vendor operates under pressure, how honest they are about their limitations, and how seriously they take the full lifecycle of a system rather than just the build.
The vendors who answer these well are usually the ones who've shipped production AI and lived with the consequences. That's the experience you're looking for.
If you want to ask us these questions directly, book a discovery call. We're happy to answer all three.