If Your AI Vendor Cannot Point to a Specific Output, You Are Playing FarmVille

Why most hotel AI investments produce the sensation of work without the fact of results, and how to tell the difference.

There is a moment in every technology evaluation when a question that should be asked loudly in the boardroom instead gets asked quietly in the hallway afterward. "What actually changed?" Not the perception of change. Not the infrastructure of change. Not the meeting where change was discussed. What changed in the revenue, the conversion rate, the cost structure, the guest experience, the operational burden? If the answer is "we are still evaluating," you are not evaluating. You are performing for an audience of shareholders and board members, and the performance is being mistaken for progress.

Will Manidis wrote an essay in early February 2026 that named something many hotel operators have felt but few have articulated. He called it "Tool Shaped Objects." The argument was direct: most AI usage at the institutional level produces the sensation of work without the fact of results. The tools feel like tools. They have dashboards that look like every other dashboard. They produce reports. They generate integrations. They allow executives to tell their boards that "we are using AI," which creates the profound satisfaction of appearing to be in control of the future. But beneath the interface, beneath the data flows, beneath the narrative of digital transformation, the actual output is the experience of using the tool. That is all. The tool itself is the product. It is FarmVille at institutional scale.

Six fully booked technology halls at ITB this year. Every booth has "AI-powered" on its signage. Every vendor has a deck showing adoption curves, integration breadth, and customer testimonials. What almost none of them can answer with specificity is this: what changed in the P&L?

What Does Actual AI Impact Look Like in Hospitality?

The data exists. It is not hidden. Hotels using AI-driven revenue management systems report an estimated 17% increase in total revenue according to industry benchmarks. Eighty-six percent of hoteliers now depend on AI for demand forecasting. But here is the crucial part that separates the actual from the performative: these statistics come from implementations where the output is tracked, measured, and connected to financial outcomes. Someone in finance can point to a line item. Someone in revenue can point to a booking curve that changed.

The hospitality technology space has created a category I call "the evaluation that never ends." A hotel group acquires a new AI platform for guest communication, or pricing, or marketing optimization. Implementation happens. Training occurs. The system goes live. Six months pass. Nothing measurable shifts. The vendor explains that AI takes time to learn the data. The hotel's technology team explains that integration with legacy systems created unexpected complexity. A year passes. The tool remains active but produces no revenue lift, no cost reduction, and no operational improvement that anyone can quantify. But the hotel continues paying for it, because admitting that the purchase was strategic cover rather than a real capability would feel worse than quietly continuing to fund an invisible asset.

This is not a failure of AI. This is a failure of evaluation discipline.

Why Is Content Production the Exception?

There is one category where AI deployment in hotels moves a measurable metric with speed that rivals traditional technology categories. Content production. A hotel using an AI video platform produces more content in a month than it would produce in a year with traditional photography and videography workflows. That content drives higher engagement rates in booking funnels. Properties testing AI-generated video alongside static photography see measurable improvements in click-through rates and conversion rates.

The output is not a dashboard. The output is not a report. The output is not the sensation of using a system. The output is a video. It is measurable. It is something a marketing director can use, iterate on, and optimize. It either works or it does not, and that clarity creates accountability.

This clarity exists because the output is concrete. A marketing director can hold the result. A guest can watch it. That specificity transforms the entire relationship between the hotel and the technology.

How to Identify a Tool Shaped Object

You can usually recognize a tool shaped object by asking four questions, and if the vendor cannot answer all four with concrete specificity, you have found one.

First: what is the specific output? Not "improved insights." Not "better forecasting." What is the output? A report? A recommendation? A video? A price adjustment? Something you can hold or see or measure directly. If the answer requires you to interpret the system's internal logic or combine multiple data sources to infer value, you have found a tool shaped object.

Second: who uses that output, and what do they do with it? If the answer is "the AI system uses it internally," you have found a tool shaped object. If the answer is "our revenue manager reviews it daily and adjusts her strategy based on it," you may have found a tool.

Third: what changed as a result of that output? Not what could change. What did change. If the answer is "we are still measuring the impact," you have found a tool shaped object. If the answer is "our RevPAR improved by 4.2% in the test market," you may have found a tool.

Fourth: could a third party independently verify that this output drove that change? This is the hardest question because it requires the vendor to be confident enough in the output to allow scrutiny. If the vendor resists this question or explains why it is complicated, you have found a tool shaped object.
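
To make that fourth question concrete: once a vendor shares the raw numbers, the verification itself is simple arithmetic. The sketch below is a minimal difference-in-differences comparison between properties that acted on the tool's output and comparable properties that did not; every figure in it is a hypothetical placeholder, not data from any vendor.

```python
# Minimal sketch of an output-to-outcome check a third party could rerun.
# All RevPAR figures are hypothetical placeholders.

test_revpar = {"before": 142.0, "after": 148.0}      # properties acting on the tool's output
control_revpar = {"before": 139.0, "after": 139.5}   # comparable properties without it

def relative_change(series: dict) -> float:
    """Relative change from the 'before' period to the 'after' period."""
    return (series["after"] - series["before"]) / series["before"]

# Lift in the test group beyond what the control group did on its own
# over the same period (a simple difference-in-differences estimate).
lift = relative_change(test_revpar) - relative_change(control_revpar)
print(f"Attributable RevPAR lift: {lift:.1%}")
```

If a vendor cannot hand over the handful of numbers this calculation needs, the fourth question has answered itself.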

What Is the Cost of Performative AI Adoption?

Evaluating AI solutions consumes real human capital: engineering cycles from the CTO's team, attention from department heads, financial modeling from the CFO, and review time from the board. If the outcome is merely a new system with no measurable results, the opportunity cost is extraordinary. Those resources could have improved core platforms, expanded into new markets, or evaluated proven capital projects instead.

The cost is not only computational and human. It is cultural. When a hotel implements a series of tool shaped objects, it develops a muscle of acceptance. The organization learns to derive satisfaction from the appearance of digital competence rather than from measurable competitive advantage. Gradually, the conversation shifts from "did this work" to "are we using the latest technology." This is how hotel groups end up with technology stacks that are simultaneously overcomplicated and underutilized.

What the Market Should Demand

The hotel industry should start asking vendors the same question that venture capital learned to ask years ago: show me the unit economics. In the case of a hotel AI application, this means a clear model. How much does the solution cost? How much does it improve a specific metric? What is the payback period?

A vendor should be able to say: "Our system improves conversion rate by X%. The cost is Y per month. Based on your booking volume, your payback period is Z months. Here are five similar properties where we can prove it." If a vendor instead produces a presentation with adoption curves and testimonial quotes and calls for a six-month pilot, you are looking at a sales process designed to overcome the absence of output-based proof.
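
To illustrate what "show me the unit economics" means in practice, here is a minimal payback sketch. All inputs are hypothetical placeholders for the figures a vendor should be willing to commit to; nothing here is a benchmark.

```python
# Back-of-envelope unit economics for a hotel AI tool.
# Every input is a hypothetical placeholder the vendor should fill in.

def payback_months(monthly_bookings: float,
                   avg_booking_value: float,
                   relative_conversion_lift: float,
                   monthly_fee: float,
                   implementation_cost: float,
                   contribution_margin: float = 0.7) -> float:
    """Months until incremental margin recovers the one-time implementation cost."""
    incremental_bookings = monthly_bookings * relative_conversion_lift
    incremental_margin = incremental_bookings * avg_booking_value * contribution_margin
    net_monthly_gain = incremental_margin - monthly_fee
    if net_monthly_gain <= 0:
        return float("inf")  # the tool never pays for itself at these numbers
    return implementation_cost / net_monthly_gain

# Example: 800 bookings/month, 220 average booking value, a claimed 3%
# relative conversion lift, a 1,500/month fee, and 10,000 to implement.
print(round(payback_months(800, 220, 0.03, 1_500, 10_000), 1))  # ~4.6 months
```

A vendor who can fill in those inputs for your property, and stand behind them, is selling a tool. One who cannot is selling the sensation of one.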

What Is the Measurability Paradox in Hotel AI?

Paradoxically, systems with the clearest outputs are often perceived as less sophisticated than those with vague value propositions. Video production is straightforward and tangible. Revenue management optimization, by contrast, claims improvements based on unobservable counterfactuals. That complexity defers accountability indefinitely, allowing vendors to extend evaluation cycles rather than prove results.

This has created a perverse incentive structure. Vendors of actual results have to prove it. Vendors of invisible results can theorize it. Hotels should reverse this incentive by making vendor selection criteria explicit and output-focused. If a property group chooses to implement a system without clear immediate output, they should do so with open eyes, knowing they are making a long-term bet with uncertain payoff.

What Is the Real Evaluation Question for Hotel AI?

Return to the boardroom moment. When a CTO or revenue manager recommends a new AI system, the relevant questions are these: What is the output? How will we measure success? How long until we know if this works? If those questions cannot be answered with specificity in the first conversation, the evaluation is not a technical problem. It is a communication problem, which means it is a vendor problem.

There are enough hotel technology solutions with clear, measurable outputs that a property group need not settle for ambiguity. The market has moved past the era where "AI-powered" alone justified investment. The hotels making money with AI implementations are the ones asking the direct questions and holding vendors accountable for direct answers.

Playing FarmVille at institutional scale is fun for the players, but it does not move the needle. And the hotel industry has too much real opportunity on the table to subsidize performance art disguised as digital transformation. For a closer look at why the photo era of hotel marketing is over, see our dedicated analysis.

Published February 19, 2026
