PromptLocal.ai Research Report

What Does AI Think About Personal Injury Lawyers in Las Vegas?

We asked 3 AI models the same 124 questions about personal injury lawyers across 3 channels — prompting AI interfaces directly, querying the same models via API, and pulling Google Search results. Here's what they got right, and where they wildly disagree.

How We Tested: 3-Channel Framework

Channel 1 — AI interfaces
Prompting AI interfaces directly (ChatGPT, Gemini, Perplexity & Google AI Mode Web UI)
Channel 2 — API
Same models queried programmatically via their respective APIs
Channel 3 — Google Search
Local Pack + Organic results
Web Search ON — AI has live internet access (browsing, grounding, citations)
Web Search OFF — AI relies only on training data, no live web access
~1,860 · Total AI Queries
1,267 · Unique Firms Mentioned
10,749 · Total Firm Mentions
15 · Channels Tested
66.9% · Firms Known by 1 Channel Only
6,416 · Citation URLs Extracted
Finding 01 · Mention Frequency + Rank + Share of Voice

The AI Visibility Leaderboard

Out of 1,267 unique law firms, the top 5 control 26% of all recommendations. There's a clear AI canon — and most firms aren't in it.

Real Example — Rank #1 ≠ Most Mentioned
Benson & Bingham leads total mentions (880) but ranks #1 only 16% of the time. Naqvi Injury Law has fewer total mentions (808) but ranks #1 in 25% of its appearances — the highest #1 rate of any top firm. Eglet Law is even starker: just 416 mentions, yet it ranks #1 24% of the time, nearly matching Naqvi's rate on half the volume. Being mentioned often and being recommended first are two different games.
Share of Voice: Naqvi 7.2% · Benson & Bingham 6.8% · Richard Harris 5.4% · Eglet 3.7% · Claggett & Sykes 2.8%. These 5 firms control 25.9% of all AI recommendations. The remaining 1,239 firms share 74.1%.
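Share of voice here is simply a firm's mentions divided by all mentions across every response. A minimal sketch in Python — the firm tallies below are illustrative placeholders, not the study's full dataset:

```python
from collections import Counter

# Illustrative mention tallies per firm (the real study has 1,267 firms).
mentions = Counter({
    "Naqvi Injury Law": 808,
    "Benson & Bingham": 880,
    "Richard Harris Law Firm": 620,
})

total = sum(mentions.values())

def share_of_voice(firm: str) -> float:
    """Share of voice: a firm's mentions as a percentage of all mentions."""
    return 100 * mentions[firm] / total
```

Because shares are computed against the full mention pool, every firm's shares sum to 100% — which is why the top 5 holding 25.9% leaves exactly 74.1% for everyone else.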
Finding 02 · Cross-Model Overlap

72.8% of Firms Are Known by Only One Ecosystem

We grouped all channels into 3 ecosystems (OpenAI, Google, Perplexity) and checked: how many know each firm?

1,267 total firms:
11.2% (142 firms) · Known by all 3 ecosystems (the AI canon)
15.9% (202 firms) · Known by 2 ecosystems
72.8% (923 firms) · Known by only 1 ecosystem
Real Example — Pairwise Ecosystem Overlap
Google & Perplexity share the most firms (27.3% Jaccard overlap), while OpenAI & Google share only 19.4%. This suggests Perplexity and Google draw from more similar source material than OpenAI does. If your firm is visible in Perplexity, there's a decent chance Google knows you too — but ChatGPT may have never heard of you.
Finding 03 · User Interface vs API — The Headline Finding

Same Model, Different Answers: User Interface vs API Overlap Is 3-12%

We tested the exact same prompts on ChatGPT, Gemini, and Perplexity via their web interface (Direct) and their API. The per-prompt overlap is shockingly low.

Real Example — ChatGPT User Interface vs ChatGPT API for the same prompt
For prompt "Best personal injury lawyer in Las Vegas" — ChatGPT User Interface (Web Search ON) recommended: Richard Harris, Sam & Ash, Benson & Bingham, and Hurtado Law. ChatGPT API (Web Search ON) for the same prompt returned: Morgan & Morgan, Adam S. Kutner, Vegas Valley Injury Law, and The Accident Guys. Zero overlap. Different system prompts, different browsing behavior, entirely different recommendations.
Why this happens: The web UI has hidden system prompts, safety wrappers, and auto-browsing behavior the API doesn't have. ChatGPT's UI often triggers Bing browsing automatically; the API only searches if you explicitly pass the tool. These "invisible layers" dramatically change who gets recommended.
Finding 04 · Web Search ON vs OFF Delta

Toggle Web Search and 58-87% of Firms Change

Every channel was tested with web search ON (live internet) and OFF (training data only). The overlap between the two modes:

Real Example — Gemini User Interface
Gemini User Interface with web search OFF surfaces 113 unique firms. Turn search ON and it surfaces 204 firms — nearly double. But only 64 firms appear in both modes (25.3% overlap). That means 189 of the 253 firms surfaced exist in only one mode or the other. The training-data version recommends a completely different competitive landscape than the search-grounded version.
Perplexity is the most stable at 42.2% overlap (User Interface) and 35.6% (API) — because it always grounds in search. ChatGPT API is the least stable: just 12.5% overlap between search ON and OFF. If you're only tracking one mode, you're missing up to 87% of the picture.
Finding 05 · The Google 4-Way — The Money Slide

Same Google Brain, 4 Products, 4 Different Answers

We tested the Google ecosystem across 4 product surfaces. Same underlying model family — radically different outputs.

Google Product | Unique Firms | Avg/Response | Channel Type
Google AI Mode | 290 | 12.0 | Google Search AI
Gemini User Interface (Web:ON) | 204 | 8.2 | User Interface
Gemini API (Web:ON) | 246 | 7.3 | API
Gemini API (Web:OFF) | 156 | 7.6 | API
Gemini User Interface (Web:OFF) | 113 | 5.7 | User Interface
Google Local Pack (SERP) | 44 | 3.0 | Traditional SERP
Real Example — Naqvi Injury Law Across Google Products
Naqvi Injury Law in Google AI Mode: 137 mentions (avg rank 5.4). In Gemini User Interface: significantly fewer. In Local Pack: 52 appearances (rank #1 most often). The firm dominates differently in each Google product. A firm tracking only Local Pack rankings would miss that they're also the #1 most-mentioned firm in AI Mode.
AI Mode surfaces 6.6× more firms than Local Pack (290 vs 44). The overlap between AI Mode and Gemini User Interface (Web:ON) is only 30%. If you're only tracking one Google channel, you're flying blind. This is the strongest argument for multi-channel AI visibility monitoring.
Finding 06 · AI Personality — Rank Disagreements

Each AI Ecosystem Ranks Firms Differently

The same top firms get dramatically different average rank positions depending on which ecosystem you ask.

Firm | OpenAI Avg Rank | Google Avg Rank | AI Mode Avg Rank | Perplexity Avg Rank
Naqvi Injury Law | 3.3 | 4.4 | 5.4 | 4.0
Benson & Bingham | 4.9 | 4.1 | 5.8 | 5.1
Richard Harris Law Firm | 3.6 | 4.8 | 5.5 | 4.5
Eglet Law | 3.0 | 5.2 | 5.9 | 3.8
Adam S. Kutner | 4.2 | 5.5 | 7.7 | 5.1
Real Example — Eglet Law
Eglet Law averages rank #3.0 in OpenAI (nearly always in the top 3) but drops to #5.9 in Google AI Mode. ChatGPT clearly "likes" Eglet more than Google does. Meanwhile, Richard Harris is consistently ranked #3-5 across all ecosystems — the most stable brand positioning of any top firm.
Finding 07 · Prompt Dimension Mapping

Which Query Types Trigger Which Firms?

We tested 124 prompts across 8 dimensions. Some firms dominate generic queries; others only surface for specific injury types or trust-based queries.

Dimension | Prompts | Total Mentions | Top Firm
Generic / Broad | 15 | 1,396 | Naqvi Injury Law (151)
Injury Type | 30 | 2,501 | Benson & Bingham (225)
Trust & Credibility | 15 | 1,376 | Benson & Bingham (123)
Situational / Scenario | 20 | 1,592 | Benson & Bingham (140)
Location Specific | 14 | 1,293 | Naqvi Injury Law (110)
Constraints & Practical | 12 | 981 | Naqvi Injury Law (69)
Comparison / Versus | 10 | 885 | Richard Harris (56)
Vegas-Specific | 8 | 725 | Richard Harris (53)
Niche Specialists: Some firms dramatically over-index on specific dimensions. Eglet Law gets 26% of its mentions from Trust & Credibility queries (vs 12% expected) — it's the "awards & settlements" firm. Paul Padda Law gets 45% from Injury Type queries (vs 24% expected) — AI sees it as the specialist. Cameron Law over-indexes on Constraints queries — the "Spanish-speaking, 24-hour" firm in AI's mind.
Finding 08 · Fame Beats Quality

A 5-Star Firm Can Be Invisible to AI

We checked whether Google ratings or review volume predict AI visibility. The answer: being great doesn't make you famous to AI.

We compared every firm's Google star rating against how often AI recommends them. The result? No connection at all. Higher-rated firms are not more likely to be recommended — if anything, they're slightly less likely. A perfect 5.0-star firm can be completely invisible to AI.

Review count helps a little — firms with more reviews tend to get mentioned slightly more — but it's a weak signal. Having 800 reviews doesn't guarantee AI knows you exist.

Real Example — This tells the whole story
PT Law: Perfect 5.0 rating, 932 reviews — but only 3 AI mentions across all channels.
Ladah Law Firm: 4.8 rating, 498 reviews, just 2 Local Pack appearances — but 268 AI mentions and 317 citations (3rd most-cited website).
Ladah has nearly 90× more AI visibility than PT Law despite having fewer reviews and a lower star rating. The difference? Ladah's website content gets cited by AI. PT Law's doesn't. Being talked about online matters more than being liked.
What AI actually reads: LLMs don't check your star rating. They absorb the volume of discussion about your business — blog posts, news articles, legal directory features, and content marketing. That's what enters training data and search results. Client satisfaction is invisible to AI unless it generates online content.
Finding 09 · The Invisible Firm

5.0 Stars. 932 Reviews. Only 3 AI Mentions.

We found firms with strong real-world presence — perfect ratings, hundreds of reviews — that AI barely knows exist.

Firm | Rating | Reviews | Local Pack | AI Mentions | Verdict
PT Law | 5.0 | 932 | 2 | 3 | AI Invisible
CVBN LAW | 4.9 | 504 | 2 | 5 | AI Invisible
Ace Lakhani Law Firm | 5.0 | 468 | 1 | 6 | AI Invisible
Hale Injury Law | 4.9 | 599 | 2 | 12 | Barely Visible
Tingey Injury Law Firm | 4.9 | 460 | 10 | 13 | Barely Visible
Maier Gutierrez Injury Lawyers | 5.0 | 574 | 12 | 18 | Barely Visible
PT Law is the most dramatic case: perfect 5.0 star rating, 932 reviews — nearly a thousand happy clients. But across 1,860 AI queries, it was mentioned only 3 times. Ladah Law Firm with fewer reviews (498) and a lower rating (4.8) has 268 AI mentions — nearly 90× more AI visibility. The gap between real-world quality and AI awareness is massive, but solvable.

These "invisible firms" represent the immediate opportunity for AI visibility optimization. All 6 firms have 4.9-5.0 star ratings and 460-932 reviews — strong fundamentals. But they're missing the content layer that LLMs consume: blog mentions, legal directory features, listicle placements, and third-party coverage that enters training data and search results.

Finding 10 · Local Pack ≠ AI Recommendations

Different Channels, Different Winners

93% of Local Pack firms appear somewhere in AI channels — but only 32% of top AI firms appear in the Local Pack. They're different games.

Firm | Local Pack | AI Total | Google AI Mode | Pattern
Naqvi Injury Law | 52 | 808 | 137 | Dominant Everywhere
Jack Bernstein Injury Lawyers | 0 | 202 | 25 | AI Only — LP Invisible
Gina Corena & Associates | 29 | 16 | 11 | LP Only — AI Invisible
Claggett & Sykes | 4 | 417 | 62 | AI Dominant
Dimopoulos Injury Law | 15 | 86 | 19 | Balanced
Jack Bernstein Injury Lawyers has zero Local Pack appearances but 202 AI mentions across 11 channels. This firm doesn't exist in traditional Google Search but lives in the AI recommendation canon. The reverse of an invisible firm — a firm that only exists in AI.
Finding 11 · Discovery Gap Between Models

Google AI Mode Surfaces 2× More Firms Than ChatGPT API

Some channels give curated short lists; others produce exhaustive recommendations. If you're a smaller firm, the verbose channels are your best shot.

Google AI Mode averages 12.0 firms per response — the most generous recommender. ChatGPT API (Web Search ON) gives only 5.0. For lesser-known firms, AI Mode casts 2.4× the net. Google products are exhaustive listers; ChatGPT is a curated recommender.
Finding 12 · Channel-Exclusive Firms

148 Firms Exist Only in ChatGPT User Interface (Web Search ON)

Each channel surfaces firms that no other channel mentions. These are "platform-exclusive" visibility opportunities.

Channel | Exclusive Firms | What This Means
ChatGPT User Interface (Web:ON) | 148 | Browsing behavior surfaces unique results
ChatGPT API (Web:ON) | 110 | API web search finds different pages than UI
ChatGPT User Interface (Web:OFF) | 54 | Training data has firms API doesn't surface
ChatGPT API (Web:OFF) | 81 | Different model version/system prompt
Gemini API (Web:ON) | 77 | Google grounding finds different firms
Google AI Mode | 76 | Query fan-out discovers niche firms
Perplexity API (Web:OFF) | 62 | Perplexity's training data is distinctive
Gemini API (Web:OFF) | 60 | Gemini's base knowledge differs
Finding 13 · Citation Source Tracking

Where Do AIs Get Their Information?

We extracted 6,416 citation URLs from the User Interface channels. Law firm websites dominate (48.4%), followed by legal directories (13.1%) and review platforms (6.4%).

Source Category | Citations | Share | Top Examples
Law Firm Websites | 3,112 | 48.4% | bensonbingham.com, ladahlaw.com, egletlaw.com, naqvilaw.com
Other (News, Reddit, Forbes) | 2,047 | 31.9% | forbes.com, reddit.com, cameronlawlv.com
Legal Directories | 841 | 13.1% | superlawyers.com, justia.com, avvo.com
Review Platforms | 413 | 6.4% | yelp.com, bbb.org
News / Media | 13 | 0.2% | reviewjournal.com, lasvegassun.com
Real Example — Top 5 Most-Cited Domains
bensonbingham.com (456 citations) — the most cited law firm website across all channels. superlawyers.com (342) — the #1 legal directory source. ladahlaw.com (317) — surprisingly high given Ladah's modest Local Pack presence (2 appearances). yelp.com (243) — Google AI Mode cites Yelp heavily (186 of those 243). naqvilaw.com (230) — strong firm site, but behind Benson & Bingham and Ladah in citations despite leading in AI mentions.
The Ladah Law Firm paradox: Ladah has only 2 Local Pack appearances and 498 reviews, but 212 AI mentions and 317 citations — the 3rd most-cited law firm website. Their website is clearly optimized for the content that AI scrapes. This is the strongest evidence that website content strategy drives AI visibility independently from traditional SEO.
Real Example — ChatGPT's Sources vs Google AI Mode
ChatGPT User Interface cites forbes.com more than any other domain (70 citations) — it trusts mainstream publications. Google AI Mode cites yelp.com heavily (213 citations) — it pulls from its own ecosystem. Perplexity cites ladahlaw.com most (86 citations) — it goes directly to firm websites. Each AI has a different "source personality."
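Per-domain tallies like the ones above come down to normalizing each citation URL to its bare domain and counting. A hedged sketch using only Python's standard library — the URLs below are invented examples, and the study's exact normalization rules aren't published:

```python
from collections import Counter
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Normalize a citation URL to its bare domain (strip scheme, path, www)."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# Illustrative citation URLs of the kind extracted from AI responses.
citations = [
    "https://www.bensonbingham.com/car-accidents/",
    "https://bensonbingham.com/about",
    "https://www.yelp.com/biz/some-firm-las-vegas",
    "https://superlawyers.com/nevada/las-vegas/",
]

counts = Counter(domain(u) for u in citations)
print(counts.most_common(1))  # [('bensonbingham.com', 2)]
```

Without the `www.` strip, the same firm site would split into two domain buckets — which is exactly the kind of artifact that would understate a firm's citation count.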
Finding 14 · Each AI Has a Voice

Same Firm, Different Story — How Each AI Describes the Top Firms

Beyond just recommending different firms, each AI model frames the same firm with a different personality and emphasis.

Real Example — How models describe Naqvi Injury Law for "Best PI lawyer in Las Vegas"
Gemini User Interface: "11-time 'Best of Las Vegas' Gold Winner; extremely high client satisfaction" — credential-first, award-focused
Google AI Mode: Shows Google Maps card with rating (4.9), address, and image — structured data, visual, local-pack style
ChatGPT User Interface: Ranked in a numbered list with brief qualifier — concise, editorial tone
Real Example — How Gemini frames the top 4 firms (structured table format)
Naqvi: "11-time Best of Las Vegas Gold Winner; extremely high client satisfaction" — awards + client sentiment
Eglet Adams: "Robert Eglet named Trial Lawyer of the Year (2026); known for billion-dollar verdicts" — individual lawyer + headline verdicts
Benson & Bingham: "25+ years experience; record-setting settlements ($30M+)" — experience + settlement size
Richard Harris: "Over 40 years in Nevada; one of the largest and most established firms" — longevity + scale
Gemini leads with credentials and data. ChatGPT leads with editorial recommendations. Google AI Mode leads with structured local data (Maps cards, ratings). Perplexity leads with citations and source attribution. A firm's AI optimization strategy should match how each model frames businesses — Gemini wants awards, ChatGPT wants narrative, Google wants GBP signals.
Finding 15 · Stability / Reproducibility

Ask Twice, Get Different Answers: Only 23.8% Overlap Between Runs

We ran 20 prompts 4 times each across all 13 channels — 1,040 queries — to test reproducibility. The results are sobering.

Channel | Set Overlap | Core % | Stochastic % | Stability
Gemini User Interface (Web:OFF) | 38.8% | 28.4% | 71.6% | Most Stable
Perplexity API (Web:OFF) | 31.9% | 24.0% | 76.0% | Stable
Perplexity User Interface (Web:ON) | 28.7% | 23.0% | 77.0% | Moderate
Perplexity User Interface (Web:OFF) | 28.4% | 23.6% | 76.4% | Moderate
ChatGPT User Interface (Web:OFF) | 24.7% | 18.8% | 81.2% | Moderate
Google AI Mode | 22.3% | 15.6% | 84.4% | Moderate
ChatGPT User Interface (Web:ON) | 14.3% | 7.6% | 92.4% | Volatile
ChatGPT API (Web:OFF) | 8.8% | 4.9% | 95.1% | Very Volatile
ChatGPT API (Web:ON) | 6.5% | 1.5% | 98.5% | Extremely Volatile
Real Example — ChatGPT API (Web:ON)
Only 6.5% set overlap between repeat runs. 98.5% of recommendations are stochastic — appearing in 2 or fewer of 4 runs. Ask ChatGPT's API the same question 4 times and you'll get almost entirely different law firms each time. Only 1.5% of recommendations are "core" (appear in 3+ runs).
82.3% of all AI recommendations are stochastic. They appear randomly — in 2 or fewer of 4 runs. Only 17.7% are "core" recommendations that show up reliably. Any AI visibility study that queries each model only once is measuring signal mixed with substantial noise. You need 3-5 runs to separate core from stochastic.

User Interface channels are more stable than API (26.0% vs 21.7% overlap). Perplexity is the most stable ecosystem (28.6% average overlap), while OpenAI is the least stable (13.5%). Gemini User Interface (Web:OFF) is the single most stable channel at 38.8% — its training-data-only mode gives the most consistent answers.
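The core-vs-stochastic split described above reduces to counting how many of the 4 repeat runs each firm appears in. A minimal sketch — the run contents are invented, but the 3-of-4 cutoff is the study's:

```python
from collections import Counter

# Firms surfaced in each of 4 repeat runs of the same prompt (illustrative).
runs = [
    {"Naqvi", "Benson & Bingham", "Richard Harris", "Eglet"},
    {"Naqvi", "Benson & Bingham", "Sam & Ash"},
    {"Naqvi", "Richard Harris", "Ladah"},
    {"Naqvi", "Benson & Bingham", "Hurtado"},
]

# Count appearances across runs (set membership, so duplicates within a run
# don't inflate the tally).
appearances = Counter(firm for run in runs for firm in run)

# Core = appears in 3+ of 4 runs; stochastic = 2 or fewer.
core = {f for f, n in appearances.items() if n >= 3}
stochastic = set(appearances) - core

print(sorted(core))  # ['Benson & Bingham', 'Naqvi']
```

This is also why single-snapshot rank trackers mislead: a firm in `stochastic` looks identical to a firm in `core` if you only query once.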

Finding 16 · Rank Consistency (Kendall's Tau)

Firms That Appear Keep Their Position — But Most Don't Appear

Kendall's τ measures whether firms maintain the same rank ordering across repeat runs. Average: τ = 0.370 (moderate consistency).

Channel | Kendall τ | Set Overlap | Interpretation
Perplexity User Interface (Web:OFF) | 0.610 | 28.4% | Best rank consistency
Perplexity User Interface (Web:ON) | 0.590 | 28.7% | Strong rank consistency
ChatGPT User Interface (Web:ON) | 0.583 | 14.3% | Good ranks, volatile sets
ChatGPT User Interface (Web:OFF) | 0.558 | 24.7% | Good ranks, moderate sets
Gemini User Interface (Web:OFF) | 0.500 | 38.8% | Most stable overall
Google AI Mode | 0.221 | 22.3% | Weak rank consistency
Gemini User Interface (Web:ON) | 0.112 | 21.4% | Poor rank consistency
ChatGPT API (Web:ON) | -0.289 | 6.5% | Inverted rankings
Real Example — The Stability Paradox
ChatGPT User Interface (Web:ON) has excellent rank consistency (τ = 0.583) but terrible set overlap (14.3%). It can't decide WHICH firms to mention — but when it does mention one, it puts it in roughly the same position. Gemini User Interface (Web:OFF) is the opposite: highest set overlap (38.8%) but moderate τ (0.500). It reliably mentions the same firms but shuffles their order.
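For readers who want to reproduce τ figures like these: Kendall's tau compares every pair of firms and asks whether two runs order them the same way. A self-contained sketch of the simplest variant, tau-a — note that tau-a ignores ties, and the study may use a tie-corrected variant such as tau-b:

```python
from itertools import combinations

def kendall_tau(x: list, y: list) -> float:
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    discordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (concordant - discordant) / len(pairs)

# Ranks of the same 4 firms in two repeat runs (illustrative).
run1 = [1, 2, 3, 4]
run2 = [1, 3, 2, 4]  # one adjacent swap

print(kendall_tau(run1, run2))  # ≈ 0.667 (5 of 6 pairs concordant)
```

τ = 1 means identical ordering, 0 means no relationship, and negative values (like ChatGPT API's -0.289) mean the second run tends to reverse the first run's ordering.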
The practical implication: A firm like Naqvi Injury Law that appears as a "core" recommendation (3+ of 4 runs) in most channels has genuinely strong AI mindshare. A firm that appears once and disappears is riding noise, not signal. PromptLocal.ai tracks core vs stochastic visibility — because a single snapshot is meaningless.
So What? · Implications for Law Firms

What Should a PI Lawyer in Las Vegas Actually Do?

1. Track all channels, not just one. User Interface vs API gives different results. Search ON vs OFF gives different results. Google AI Mode vs Local Pack gives different results. A firm that's invisible in one channel may be dominant in another. Multi-channel monitoring is not optional.

2. Blog coverage and listicle placement matter more than star ratings. Firms like Ladah (212 AI mentions, 4.8 stars, 498 reviews) crush firms like Gina Corena (16 AI mentions, 5.0 stars, 836 reviews). The difference? Content footprint. Being talked about on legal blogs, featured in "best of" listicles, and covered by local press enters training data and live search results that AI consumes.

3. Google AI Mode is the new battleground. At 12 firms per response and 290 unique firms, it's the most generous and most important AI discovery channel for local businesses. It pulls from Google's full search index, Maps, and GBP data. Firms not optimizing for this channel are leaving AI visibility on the table.

4. The "invisible firm" opportunity is real. Maier Gutierrez has 574 reviews, 5.0 stars, 12 Local Pack appearances — and just 18 AI mentions. The gap between real-world quality and AI awareness is a solvable problem. Content marketing, schema markup, and directory presence can move firms from invisible to visible within weeks.

5. Niche positioning works. Firms like Eglet Law (over-indexes on trust queries) and Paul Padda Law (over-indexes on injury-type queries) prove that you don't need to win every query — you need to own your niche across all channels.

Methodology

How We Built This Study

124 prompts across 8 dimensions — all recommendation-only, designed to guarantee business name mentions:

Dimension | Prompts | Purpose
Injury Type | 30 | Case-type visibility — car, truck, slip & fall, med mal, rideshare, etc.
Situational / Scenario | 20 | Narrative-based with explicit recommendation ask — real-world phrasing
Generic / Broad | 15 | Baseline — who shows up for generic PI lawyer queries
Trust & Credibility | 15 | Reviews, settlements, awards — what trust signals does AI surface
Location Specific | 14 | Neighborhood and venue-specific — tests local knowledge depth
Constraints & Practical | 12 | Fees, language, availability — filtered recommendation queries
Comparison / Versus | 10 | Competitive rankings, firm-type filters — forces named comparisons
Vegas-Specific | 8 | Casino, entertainment, pool club, convention injuries — unique to Las Vegas
TOTAL | 124 |

3 models: ChatGPT (GPT-5.4-mini), Gemini (2.5 Pro), Perplexity (Sonar).

3 channels: (1) User Interface — prompting AI interfaces directly through their consumer web products. (2) API — same models queried programmatically through their respective APIs. (3) Google Search — Local Pack + Organic results.

Entity resolution: 1,872 raw name variants collapsed through normalization, manual merge rules, and deduplication. Non-firm entities (directories, insurers, retailers) excluded. Final count: 1,267 unique law firms.
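Entity resolution of this kind typically normalizes each raw name to a canonical key before merging. A minimal sketch of one plausible approach — the stopword list and rules here are assumptions for illustration, not the study's actual merge rules:

```python
import re

# Generic tokens that don't distinguish one firm from another (assumed list).
STOP = {"injury", "law", "firm", "lawyer", "lawyers", "office", "offices",
        "attorney", "attorneys", "llc", "llp", "the"}

def normalize(name: str) -> str:
    """Collapse a raw firm-name variant to a canonical key for deduplication."""
    tokens = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    return " ".join(t for t in tokens if t not in STOP)

variants = ["Naqvi Injury Law", "Naqvi Injury Law Firm", "NAQVI injury lawyers"]
keys = {normalize(v) for v in variants}
print(keys)  # {'naqvi'} — all three variants collapse to one firm
```

In practice a pipeline like this is followed by manual merge rules (as the study notes) to catch cases pure normalization can't, such as rebrands or "Eglet Law" vs "Eglet Adams".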

Full response analysis: Raw AI responses captured with full markdown text, citation URLs, attached links, and source metadata. 6,416 citation URLs extracted from User Interface channels across 363 unique domains.

Ground truth: Google Local Pack (89/124 prompts triggered, 44 unique firms, with ratings/reviews) and Organic results (124 prompts, 168 unique domains) as baseline comparison.