AI Solutions · Philippines

Six Things ChatGPT Changed in Our Scoping Calls

November 21, 2023 · 4 min read

A year ago, scoping calls for software projects occasionally touched on automation. Today, almost every scoping call for a new client includes a conversation about AI. ChatGPT made that happen. GPT-4 made it more specific. Here are the six ways our discovery process has shifted as a result.

1. We Now Ask What They Have Already Tried

Before ChatGPT, clients describing an automation problem had typically not attempted to solve it themselves. Now a significant share arrive having already tried a ChatGPT-based solution - either by using it directly or through a no-code AI wrapper. Their experience with that attempt is the most useful input we can get.

If it worked, we understand the problem is solvable and we know the rough shape of the solution. If it did not work, we learn where the edge cases are - what the model got wrong, where the output was inconsistent, what the client ended up doing manually anyway. That failure report is worth more to scoping than a fresh brief.

We now open discovery with: "Have you tried to solve this with any tools already? What happened?"

2. The Capabilities Conversation Has to Happen Earlier

Clients who have been using ChatGPT for personal productivity arrive with intuitions about AI capabilities that are partly right and partly not. They know it can write, summarize, classify, and converse. They sometimes believe it can do things it reliably cannot: consistently return perfectly structured data, access real-time information without a retrieval layer, remember prior conversations across sessions without an architecture that enables it.

We used to address capability mismatches as they surfaced. Now we address them in the first session as a deliberate agenda item: not to deflate expectations, but to ground them in what production systems actually require, as distinct from what a consumer chat interface delivers.
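The "memory across sessions" point is the one we demonstrate most often. A chat model only sees what you send it in each request; continuity is an application-layer feature. A minimal sketch of that architecture (the store and the `call_model` stub are hypothetical stand-ins, not any specific API):

```python
# Sketch: "memory" lives in the application, not the model. The dict stands
# in for a real database; call_model stands in for any chat-completion API.

session_store = {}  # session_id -> list of {"role": ..., "content": ...}

def call_model(messages):
    # Stub: a real implementation would call a chat-completion API here.
    return f"(reply based on {len(messages)} messages of context)"

def chat(session_id, user_message):
    history = session_store.setdefault(session_id, [])
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the model only "remembers" what we resend
    history.append({"role": "assistant", "content": reply})
    return reply

chat("client-42", "We process invoices manually.")
print(chat("client-42", "Can you automate that?"))
```

The second call carries three messages of context only because the application resent them - which is exactly the architecture conversation that surprises clients coming from the consumer chat interface.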

3. We Have a No-Go List for AI Features

Some features that seem like natural AI candidates are not viable for production because of reliability, latency, cost, or regulatory concerns. We have formalized a short no-go list that we walk through when AI is on the table.

Items on the list:

- features that require consistent structured output but have no validation layer between the model and downstream systems
- autonomous actions with financial or legal consequences and no human review step
- any feature in a regulated context where the model output could be mistaken for professional advice
- real-time requirements in latency-sensitive environments where model response time would be a user experience problem
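The first item deserves a concrete shape. "Validation layer" just means: never hand raw model output to downstream code. A minimal sketch using only the standard library (the field names and the sample strings standing in for model responses are hypothetical):

```python
import json

# Minimal validation layer for "structured" model output.
# The raw strings below simulate model responses, which in production
# vary between calls - that variance is why the layer exists.

REQUIRED_FIELDS = {"invoice_number": str, "total": (int, float)}

def validate_extraction(raw: str):
    """Return (ok, parsed_or_error). Never trust raw model output directly."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"not valid JSON: {e}"
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            return False, f"missing field: {field}"
        if not isinstance(data[field], expected_type):
            return False, f"wrong type for {field}"
    return True, data

ok, result = validate_extraction('{"invoice_number": "INV-001", "total": 1250.00}')
bad, err = validate_extraction('Sure! Here is the JSON you asked for: {...}')
```

A failed check can route to a retry or a human queue; the point is that the decision is made by deterministic code, not by hoping the model behaved.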

We did not have a formal no-go list before ChatGPT. The volume of AI-adjacent briefs made one necessary.

4. We Separate "AI Inside" From "AI Wrapper"

Clients often arrive asking for an "AI product." We have found it useful to clarify early whether they want AI as an internal processing layer inside a product with its own interface and workflow, or whether they want a thin interface on top of an existing AI API with their own prompting.

These are very different projects. The first is a full product build where AI is one component. The second is lighter-weight but requires careful prompt engineering and output handling to be reliable. Confusing them leads to misaligned expectations about timeline and cost.

5. The Data Conversation Happens Earlier

AI features that work well in demos often fail in production because of data. The training data, the input data format, the edge cases in real user language, the missing values in structured records. We now raise the data conversation in the first scoping session rather than in the technical design phase.

Questions we ask: where is the data coming from, what format is it in, who is responsible for its quality, and what happens when the input is malformed or incomplete? These questions have changed more than a few scopes.
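The "what happens when the input is malformed" question usually becomes an intake check that runs before any model call. A sketch of the idea (field names and thresholds are hypothetical):

```python
# Sketch: route bad records away from the model instead of letting it guess.
# Field names and the length threshold are illustrative, not a real schema.

def triage_record(record: dict):
    """Split records into model-ready vs needs-human before any AI call."""
    problems = []
    if not record.get("customer_email"):
        problems.append("missing customer_email")
    text = record.get("message", "")
    if len(text.strip()) < 10:
        problems.append("message too short to classify")
    return ("ready", record) if not problems else ("needs_review", problems)

status, _ = triage_record(
    {"customer_email": "a@b.ph", "message": "Where is my order #1234?"}
)
bad_status, issues = triage_record({"message": "ok"})
```

Who owns the `needs_review` queue is one of the scope-changing questions: it is operational work the client often has not budgeted for.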

6. We Quote Discovery Separately for AI Work

For non-AI projects, we often have enough information after one or two scoping calls to issue a quote. For AI-adjacent work, we almost always need a brief technical discovery phase first. The number of unknowns - data quality, model selection, output validation, integration points - is too high to price accurately from a brief.

We now tell clients upfront: AI work starts with a paid discovery. The discovery output is the basis for the build quote. This is not new as a practice, but making it explicit as an expectation for AI work specifically has reduced scope friction significantly.

If you are thinking about an AI integration and want to talk through what the scoping process looks like, we are happy to start there.

Start a project →
