
Tech due diligence: how AI cuts costs and accelerates the process


7 May 2026 · Written by Hervé Lourdin

At Batch, we just acquired Moonfish’s technology. We ran the entire tech due diligence in-house: no external consultants, no specialized audit firm. Just Claude, at every step.

Three takeaways:

  • 2 weeks of consulting work saved;

  • €30k–€50k saved vs. a specialized audit firm;

  • Far smoother exchanges with the Moonfish team than we expected.

Here’s how we did it.

Tech due diligence: a structurally hard exercise

Tech due diligence is a deep assessment of a technology before an acquisition. You’re evaluating code quality, architecture patterns, development practices, technical debt, security. All under time pressure, with teams that don’t know each other yet.

Historically, buyers have had two options.

The first: rely on technical workshops to surface what matters. Fast, but incomplete. Without a shared evaluation framework, feedback stays subjective and hard to consolidate.

The second: bring in a specialized audit firm. More reliable, but a comparable engagement runs €30k–€50k, with lead times that often exceed two weeks.

We didn’t have time to wait for a firm. We didn’t want to run blind workshops either. So we built a third option.

Step 1: Preparing the workshops with Claude

Preparing due diligence workshops is tedious work: identifying what to cover, writing the right questions, building usable evaluation tools. For an experienced consultant, that’s days of work.

We gave Claude the high-level objectives for each workshop (a minimal code sketch of this step follows the list below). It came back with:

  • A logical workshop sequence;

  • A complete question bank for each session;

  • Notion-ready scorecards for Batch evaluators to fill in after each session;

  • Supporting documentation explaining how to use each scorecard.
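
For the curious, here’s roughly what that step looks like as code: a minimal sketch using the Anthropic Python SDK. The objectives, prompt wording, and model ID are illustrative, not our exact setup.

```python
# Minimal sketch of the workshop-prep step (illustrative: objectives,
# prompt wording, and model ID are placeholders, not our exact setup).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical high-level objectives for a single workshop.
objectives = """
Workshop: architecture & technical debt
- Understand the overall system architecture and its main trade-offs
- Identify the top sources of technical debt and their remediation cost
- Assess how the team tracks and prioritizes that debt
"""

prompt = f"""You are preparing a tech due diligence workshop.
From the objectives below, produce:
1. A question bank (10-15 questions, ordered from open to specific).
2. A scorecard: for each evaluation axis, a 1-5 scale with a one-line
   description of what each score means.
3. A short note for evaluators explaining how to fill in the scorecard.

Objectives:
{objectives}"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use the model you have access to
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```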

One thing we didn’t see coming: sharing the evaluation framework upfront with the Moonfish team made the workshops run better. Everyone knew exactly what we were assessing. Less defensiveness. More candor.

Once the Batch team had filled in the scorecards, Claude consolidated all the evaluations into a summary report the same day, a write-up that normally takes a full day.
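
That consolidation step is scriptable in a few lines. A hedged sketch, assuming one Markdown scorecard per evaluator in a local folder (the file layout and prompt are illustrations, not our exact pipeline):

```python
# Sketch of the same-day consolidation step (assumed layout: one filled-in
# Markdown scorecard per evaluator in ./scorecards/; prompt is illustrative).
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

# Concatenate every filled-in scorecard into one context block.
scorecards = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("scorecards").glob("*.md"))
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "Consolidate these due diligence scorecards into a "
                   "summary report: overall scores per axis, points of "
                   "agreement and disagreement between evaluators, and "
                   "open questions.\n\n" + scorecards,
    }],
)
print(message.content[0].text)
```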

Results from this phase:

  • Prep work: days compressed into hours;

  • More consistent, higher-quality feedback thanks to structured scorecards;

  • Post-workshop synthesis delivered the same day.

Step 2: Analyzing the codebase without dedicated tooling

Code analysis is where internal due diligence tends to fall apart. Without a specialized tool, how do you get an objective read on a codebase you’ve never seen?

Relying solely on workshops means missing what developers wouldn’t flag on their own. Deploying an external audit tool at the target’s site opens up conversations about access, installation, confidentiality. That all takes time and creates friction.

We offered Moonfish something simpler: a prompt and a Claude API key, provided by Batch. Moonfish ran it themselves, on their own repositories, from their own environment.
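
To make “a prompt and an API key” concrete, here is a minimal sketch of what such a script can look like. The prompt we actually shared was more detailed, and the file filters, context budget, and model ID below are assumptions for illustration.

```python
# Minimal sketch of a self-run codebase analysis (illustrative: the real
# prompt was more detailed; filters, limits, and model ID are assumptions).
from pathlib import Path
import anthropic

client = anthropic.Anthropic()  # API key supplied by the buyer

ANALYSIS_PROMPT = """Review the following source files from one repository.
Report concrete observations, with file references, on:
- development practices (tests, error handling, dead code)
- architecture patterns and their consistency
- security issues visible in the code
Stick to what the code shows; flag uncertainty explicitly.

"""

def gather_sources(repo, suffixes=(".py", ".ts", ".go"), max_chars=150_000):
    """Concatenate source files until a rough context budget is reached."""
    chunks, total = [], 0
    for path in sorted(repo.rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            text = f"\n\n### {path}\n{path.read_text(errors='ignore')}"
            if total + len(text) > max_chars:
                break
            chunks.append(text)
            total += len(text)
    return "".join(chunks)

code = gather_sources(Path("."))  # run from inside the target repository
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4096,
    messages=[{"role": "user", "content": ANALYSIS_PROMPT + code}],
)
print(message.content[0].text)
```

Because the whole thing is a short, readable script, the target team can inspect exactly what leaves their environment before running it, which is what made the transparency point below credible.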

Three immediate advantages:

  • Zero installation. No complex tooling to deploy. No access negotiations.

  • Full transparency. The prompt was readable by the Moonfish team before they ran it. No black box.

  • A shared asset. The analysis output became common ground between Batch and Moonfish, not a unilateral buyer’s report.

The codebase review workshop that followed was unusually productive. We went through the analysis repository by repository. Claude had surfaced concrete observations on development practices, architecture patterns, and security in the code. No abstract debates. No defensiveness. Just conversations grounded in shared facts.

What this means for tech due diligence

This experience points to a deeper shift in how tech teams can run acquisition processes.

LLMs change the equation in three ways.

Structuring. Building a rigorous evaluation framework (questions, scorecards, documentation) used to require rare expertise. It now takes a few hours, provided you know how to frame the right objectives.

Analysis. Code audits used to require specialists or expensive tools. A well-crafted prompt, run against a codebase, produces a useful read of practices and risks. Not a certified audit. A factual, shared starting point for discussion.

Synthesis. Consolidating workshop feedback is slow, low-value work. Exactly the kind of task LLMs handle well: aggregate, structure, summarize, without losing the nuance.

⚠️ One caveat: this doesn’t replace human judgment at the final stage. An LLM’s analysis of a codebase is a conversation tool, not a verdict. The engineers are the ones who interpret the output and ask the questions that matter.

Hervé Lourdin

Chief Technical Officer @ Batch
