
RAG vs ChatGPT in Public Procurement

By: Icela Martin

The rapid rise of Generative Artificial Intelligence has democratized access to powerful writing tools. Today, any technical officer can ask a general-purpose chatbot to draft an email or summarize a text.

However, when this technology is brought into the public procurement sector, the lack of rigor of these AI models can compromise the bidding process.

In an environment governed by the Law on Public Sector Contracts (LCSP) and subject to audit, an invented data point is not a mere anecdote; it is a serious legal risk.

That is why technology (CTO) and security (CISO) leaders in the GovTech sector are opting for advanced architectures such as RAG (Retrieval-Augmented Generation) over generic models. But what exactly is RAG, and why is it fundamental for working with tenders?


The Structural Risk: Why Generalist AI Invents Data

General-purpose Large Language Models (LLMs), such as the standard versions of ChatGPT or Gemini, are not verified knowledge bases; they are probabilistic text-prediction engines. They have been trained on vast amounts of information from the internet, but they do not "know" what is true and what is false; they only calculate which word is statistically most likely to follow the previous one in order to form a coherent sentence.

The critical problem in the legal sector is that these models are designed under an imperative to "always answer". Their objective function is to satisfy the user's query while maintaining the fluidity of the conversation, not to guarantee factual accuracy.

This means that, faced with a specific information gap (for example, the exact content of an article of the LCSP or a specific file that was not in its training), the model does not stop. Instead of admitting ignorance, its algorithm "fills" that void by generating information that sounds truthful and respects the structure of legal language, but is fictitious.

This phenomenon is called "hallucination": the AI invents data, dates, articles, or case law with complete confidence and grammatical coherence, creating a perfect confidence trap for the officer who relies on it.

What Is RAG and How Does It Solve This?

RAG (Retrieval-Augmented Generation) is an architecture that prevents the AI from answering "from memory". It turns the model into an "expert", provided it is given the correct information sources.

The step-by-step technical flow:

  1. Retrieval (search): When you ask a question, the system does not go straight to the language model. It first searches a secure vector database where the current LCSP, the tender specifications, and real case law are indexed.
  2. Filtering: The system selects only the passages that contain the verified answer (e.g., the exact text of Art. 204 LCSP).
  3. Generation: Only then does it send those passages to the AI model with a strict instruction: "Answer the user using ONLY this information. If it is not here, say you do not know."

Comparative Table: ChatGPT vs. RAG Architecture (Tendios)

| Feature             | Generic AI (ChatGPT, etc.)                  | RAG Architecture (Tendios)                          |
| ------------------- | ------------------------------------------- | --------------------------------------------------- |
| Source of knowledge | Training memory (past internet snapshot)    | Up-to-date document database                        |
| Response to doubt   | Invents something plausible ("hallucination") | "There is not enough information in the documents" |
| Traceability        | Black box (no sources)                      | Direct citations to the BOE/specifications, with links |
| Updates             | Old knowledge cut-offs                      | Real time (connects to live sources)                |


Real Use Case: Sustainability Validation

To understand the impact, let's look at a day in the life of a procurement officer who must validate if a specification complies with ecological criteria.

  • Manual process (without RAG): Download the 150-page PDF, search for "sustainability" with Ctrl+F, open the BOE to cross-check against the LCSP, consult the green public procurement guide... Estimated time: 3 hours.
  • Process with Tendios (RAG): You upload the specification and ask: "Does it comply with the requirements of Art. 201 LCSP?". The system scans the document, cross-references it against the current article of the law, and generates a compliance report with citations. Estimated time: 20 minutes.

When to Use Each Tool

This is not about demonizing general-purpose models, but about using them for what they are genuinely good at.

✓ When you CAN use an LLM like ChatGPT:

  • Initial brainstorming of creative ideas.
  • Drafting non-confidential internal emails.
  • Summarizing general sector news.

✗ When you MUST use Specialized AI (RAG):

  • Drafting Specifications (PCAP/PPT) with legal certainty.
  • Making legal queries.
  • Validating the regulatory compliance of a file.


The Limits of RAG

Even RAG technology has limits. The quality of the response depends on the quality of the sources (if the database is not updated to the latest BOE, the answer will be outdated), and it does not replace the officer's final legal judgment. At Tendios we mitigate this with automatic daily updates from official sources, but human oversight remains key.


Conclusion: Technology for Critical Decisions

In the public sector, decisions must be justified. RAG technology is not just a technical improvement; it is the difference between using a creative assistant and using a professional auditing tool.

By prioritizing verifiability and documentary discipline, we turn Artificial Intelligence into a reliable ally for managing complex files, removing the fear of the "invented data point".

Do you want to try specialized AI in public procurement?

Icela Martin

Legal Copywriter • Public Procurement