
How Mistral Searches Hotels

A captured Le Chat event stream

We recorded Mistral Le Chat answering “best 3-star hotels for cyclists Paris 2026.” The pipeline is the simplest we’ve captured: one Brave web search call, then the model paraphrases the snippets and cites the source URLs inline. No map, no places, no reviews, no booking.

The captured query

We logged Le Chat’s streaming protocol — a JSON-patch event stream over an SSE connection — for one prompt:

> best 3-star hotels for cyclists Paris 2026

The model emitted a single tool call to web_search (type: “rag”), got a result set back, and streamed a numbered prose list with inline reference markers. End to end, the response is built from web-page snippets — no structured place objects appeared anywhere in the trace.
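The capture can be replayed offline. Here is a minimal sketch, assuming each SSE event carries one JSON-patch operation in a `data:` line and that the patches build up a document of content blocks; the field names are illustrative, not Mistral's documented schema:

```python
import json

def parse_sse_events(raw: str):
    """Split a raw SSE stream into decoded JSON payloads.

    Assumes one JSON-patch op per `data:` line, as in our captures.
    """
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                events.append(json.loads(payload))
    return events

def apply_add_patches(doc: dict, patches):
    """Apply JSON-patch `add` ops with simple paths (sketch, not RFC 6902)."""
    for op in patches:
        if op.get("op") != "add":
            continue
        parts = op["path"].strip("/").split("/")
        target = doc
        for key in parts[:-1]:
            target = target[int(key)] if key.isdigit() else target[key]
        last = parts[-1]
        if isinstance(target, list):
            # "-" means append to the end of the array, per JSON-patch
            target.append(op["value"]) if last == "-" else target.insert(int(last), op["value"])
        else:
            target[last] = op["value"]
    return doc
```

Replaying the events in order reconstructs the final answer document, which is how we inspected the trace for structured place objects (and found none).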

The pipeline (2 steps)

Step 1 · tool: web_search (Brave)

One call to Brave web search

Le Chat rewrites the user’s prompt as a search query and fires it at Brave:

web_search("best 3-star hotels for cyclists Paris 2026")

The response is a list of web pages with: title, url, description, an array of snippets (the relevant paragraph excerpts Brave extracted), date, thumbnail, and a source field that explicitly says "source": "brave".
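Putting those fields together, one result entry looks roughly like this (the field names come from the trace; the values below are invented for illustration):

```python
# Illustrative shape of one Brave web_search result as seen in the trace.
brave_result = {
    "title": "Bike-friendly hotels in Paris",
    "url": "https://www.freewheelingfrance.com/example",  # hypothetical path
    "description": "Hand-picked hotels with secure bike storage.",
    "snippets": [
        "Hotel Mistral is a 5-minute walk from Montparnasse Station.",
    ],
    "date": "2025-06-01",
    "thumbnail": "https://example.com/thumb.jpg",
    "source": "brave",
}

def flatten_snippets(results):
    """Collect (url, snippet) pairs: the only 'place data' the model sees."""
    return [(r["url"], s) for r in results for s in r.get("snippets", [])]
```

Note what is absent: no rating, no coordinates, no price, no review count. Everything downstream works from these snippet strings.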

For our cycling-hotels prompt, the top sources were freewheelingfrance.com, london2paris.co.uk, and tripadvisor.com. None of them are place-data APIs — they’re editorial / niche / aggregator pages whose authors happened to recommend bike-friendly hotels.

Step 2 · streamed paraphrase + inline references

Numbered prose list, references woven in

The model streams a numbered list (1, 2, 3, 4) of hotels. Each item is a paraphrase of one or two snippets from the Brave results. References are inserted as content blocks of type reference with referenceIds pointing at the rank in the search results.

1. Hotel Mistral (Montparnasse) — Highly recommended by cyclists for its welcoming staff and secure bike storage. Located just a 5-minute walk from Montparnasse Station, making it convenient for both city exploration and longer cycling routes. Described as “superb” and “worth every cent for a treat” by cyclists.
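A minimal sketch of how such interleaved blocks render back into prose with citations. The `type` and `referenceIds` fields appear in the trace; the `text` field and the 1-based rank convention are our assumptions:

```python
def render_blocks(blocks, results):
    """Render interleaved text/reference content blocks into cited prose.

    `referenceIds` are assumed to be 1-based ranks into the
    web_search result set, as observed in the capture.
    """
    out = []
    for block in blocks:
        if block["type"] == "text":
            out.append(block["text"])
        elif block["type"] == "reference":
            urls = [results[i - 1]["url"] for i in block["referenceIds"]]
            out.append("[" + ", ".join(urls) + "]")
    return "".join(out)
```

Run over the captured stream, this reproduces the numbered list with each hotel's reference markers resolving to source URLs.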

Cross-checked against the search trace: every distinctive phrase in the paraphrase — “5-minute walk from Montparnasse Station”, “worth every cent for a treat”, “cyclists tackling the Avenue Verte London to Paris” — comes verbatim from the Brave snippets. The model adds structure and tone; the facts are the snippets.
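The cross-check itself is mechanical and easy to reproduce. A sketch, assuming the answer's distinctive phrases and the Brave snippets are available as plain strings:

```python
def phrases_grounded(answer_phrases, results):
    """Map each distinctive phrase to whether it appears verbatim in any snippet."""
    corpus = " ".join(s for r in results for s in r.get("snippets", []))
    return {phrase: phrase in corpus for phrase in answer_phrases}
```

For our capture, every distinctive phrase mapped to True; a False here would flag a fact the model introduced on its own.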

The reply ends with a follow-up question: “Would you like more details about any of these options or help with booking?” — suggesting Le Chat is set up to drill in but doesn’t have a built-in booking tool to chain into.

What this pipeline tells us

No place data — this is search results, not entities

Claude returns Google Places objects with rating, review_count, lat/lon, hours. Mistral returns Brave search snippets with URLs and excerpts. There’s no structured hotel underneath; the “hotels” in the answer are paraphrased editorial mentions. Optimising for Mistral is optimising the pages Brave indexes.

Brave Search dependency, openly declared

The trace literally has "source": "brave" on every reference. Mistral isn’t hiding the back end. Hotels well-cited by Brave win Le Chat answers; hotels invisible to Brave don’t exist here.

Niche editorial sources punch above their weight

Three of the four hotels recommended in the response trace back to freewheelingfrance.com or london2paris.co.uk — small specialist sites that happen to have hand-curated cycling-hotel lists. For long-tail intent (“3-star, cyclists, Paris”), being on the niche aggregator is more valuable than being on Booking.com. ChatGPT’s pipeline would have leaned hard on OTAs; Mistral surfaced specialists.

References are first-class — and clickable

Each numbered hotel ends with one or more reference markers tied to source URLs. The user can click straight through to freewheelingfrance.com. That’s a different traffic pattern than Claude (which leaves the user inside the chat with a map) — expect more outbound referrer traffic from Le Chat than from Claude on the same prompt.

No map, no rating, no live pricing, no booking

The simplest pipeline of any AI we’ve captured for hotels. The output is a paraphrased web-search answer with citations. Useful, surprisingly specific for long-tail prompts — but no in-chat booking surface, no place card, no directions.

How this fits the spectrum

Three providers, three increasingly minimal pipelines:

| Dimension | ChatGPT | Claude | Mistral |
| --- | --- | --- | --- |
| Back end | Multi-provider stack | Google Places | Brave web search |
| Result type | Fused entities | Place objects | Web snippets |
| Ranking signal | RRF fusion + entity linking | Google rating × review count | Brave rank + LLM paraphrase |
| Visual surface | Map + cards + ads | Map + side panel | Numbered list + citations |
| Booking integration | Apps | None | None |
| Optimisation lever | Be in OTA + chain feeds | Win on Google Maps | Win on niche editorial |

Implication for hotels

Le Chat punishes the “Booking.com everywhere” playbook. For specialist queries (cyclists, family, accessible, pet-friendly, business), being mentioned on the specialist aggregator is the unlock — freewheelingfrance, london2paris, niche travel blogs. For Mistral specifically, an outreach pitch to the right curated list is worth more than a hundred OTA listings. The flip side: if Brave can’t find your site, you don’t exist on Le Chat at all.

What the full study will publish

  • 50+ Mistral Le Chat captures across destinations and traveller profiles — full event streams open-published.
  • Source-domain analysis: which sites Brave hands to Mistral most often for hotel queries, by category and country.
  • Outbound-referrer measurement: do Le Chat clicks land on hotel sites, OTAs, or specialist aggregators?
  • Same-prompt comparison with ChatGPT, Claude, Gemini, and Perplexity captures.
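The source-domain analysis reduces to a domain tally over captured result sets. A planning sketch, using only the `url` field noted above:

```python
from collections import Counter
from urllib.parse import urlparse

def source_domains(captures):
    """Tally which domains Brave hands to Le Chat across many captures.

    `captures` is a list of web_search result lists, one per prompt.
    Purely a sketch of the planned analysis, not published results.
    """
    counts = Counter()
    for results in captures:
        for result in results:
            counts[urlparse(result["url"]).netloc] += 1
    return counts
```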
