What Does AI Think About Personal Injury Lawyers in Las Vegas?
We asked 3 AI models the same 124 questions about personal injury lawyers across 3 channels: prompting consumer AI interfaces directly, querying the same models via API, and running Google AI Mode searches. Here's what they got right, and where they wildly disagree.
How We Tested: 3-Channel Framework
The AI Visibility Leaderboard
Out of 1,267 unique law firms, the top 5 control 26% of all recommendations. There's a clear AI canon — and most firms aren't in it.
72.8% of Firms Are Known by Only One Ecosystem
We grouped all channels into 3 ecosystems (OpenAI, Google, Perplexity) and checked how many ecosystems know each firm.
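Mechanically, the single-ecosystem check is a small grouping exercise. A sketch in Python, with channel names and firms invented for the example (the study's actual pipeline is not shown here):

```python
from collections import defaultdict

# Map each test channel to its parent ecosystem (channel names are illustrative).
ECOSYSTEM = {
    "chatgpt_ui": "OpenAI", "chatgpt_api": "OpenAI",
    "gemini_ui": "Google", "gemini_api": "Google", "google_ai_mode": "Google",
    "perplexity_ui": "Perplexity", "perplexity_api": "Perplexity",
}

def ecosystems_per_firm(mentions):
    """mentions: list of (channel, firm) pairs -> {firm: set of ecosystems}."""
    known_by = defaultdict(set)
    for channel, firm in mentions:
        known_by[firm].add(ECOSYSTEM[channel])
    return known_by

# Toy data: Firm A is known everywhere, Firm B only inside Google.
mentions = [
    ("chatgpt_ui", "Firm A"), ("gemini_api", "Firm A"), ("perplexity_ui", "Firm A"),
    ("google_ai_mode", "Firm B"), ("gemini_ui", "Firm B"),
]
known = ecosystems_per_firm(mentions)
single = [f for f, ecos in known.items() if len(ecos) == 1]
print(single)  # ['Firm B']
```

Run over all 1,267 firms, the length of `single` divided by the total gives the 72.8% figure.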
Same Model, Different Answers: User Interface vs API Overlap Is 3-12%
We tested the exact same prompts on ChatGPT, Gemini, and Perplexity via their web interface (Direct) and their API. The per-prompt overlap is shockingly low.
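The article doesn't name its overlap metric; a common choice for comparing two recommendation lists is Jaccard set overlap, sketched here with made-up firm lists:

```python
def jaccard(a, b):
    """Set overlap between two recommendation lists: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical UI vs API answers to the same prompt.
ui_firms  = ["Naqvi", "Eglet", "Benson & Bingham", "Richard Harris"]
api_firms = ["Naqvi", "Ladah", "Sam & Ash", "THE702FIRM", "De Castroverde"]

print(round(jaccard(ui_firms, api_firms), 3))  # 0.125 -> 1 shared firm of 8 total
```

One shared firm out of eight distinct names yields 12.5% overlap, the top of the observed 3-12% range.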
Toggle Web Search and 58-87% of Firms Change
Every channel was tested with web search ON (live internet) and OFF (training data only). The overlap between the two modes:
Same Google Brain, 4 Products, 4 Different Answers
We tested the Google ecosystem across 4 product surfaces. Same underlying model family — radically different outputs.
| Google Product | Unique Firms | Avg/Response | Channel Type |
|---|---|---|---|
| Google AI Mode | 290 | 12.0 | Google Search AI |
| Gemini User Interface (Web:ON) | 204 | 8.2 | User Interface |
| Gemini API (Web:ON) | 246 | 7.3 | API |
| Gemini API (Web:OFF) | 156 | 7.6 | API |
| Gemini User Interface (Web:OFF) | 113 | 5.7 | User Interface |
| Google Local Pack (SERP) | 44 | 3.0 | Traditional SERP |
Each AI Ecosystem Ranks Firms Differently
The same top firms get dramatically different average rank positions depending on which ecosystem you ask.
| Firm | OpenAI Avg Rank | Google Avg Rank | AI Mode Avg Rank | Perplexity Avg Rank |
|---|---|---|---|---|
| Naqvi Injury Law | 3.3 | 4.4 | 5.4 | 4.0 |
| Benson & Bingham | 4.9 | 4.1 | 5.8 | 5.1 |
| Richard Harris Law Firm | 3.6 | 4.8 | 5.5 | 4.5 |
| Eglet Law | 3.0 | 5.2 | 5.9 | 3.8 |
| Adam S. Kutner | 4.2 | 5.5 | 7.7 | 5.1 |
Which Query Types Trigger Which Firms?
We tested 124 prompts across 8 dimensions. Some firms dominate generic queries; others only surface for specific injury types or trust-based queries.
| Dimension | Prompts | Total Mentions | Top Firm |
|---|---|---|---|
| Generic / Broad | 15 | 1,396 | Naqvi Injury Law (151) |
| Injury Type | 30 | 2,501 | Benson & Bingham (225) |
| Trust & Credibility | 15 | 1,376 | Benson & Bingham (123) |
| Situational / Scenario | 20 | 1,592 | Benson & Bingham (140) |
| Location Specific | 14 | 1,293 | Naqvi Injury Law (110) |
| Constraints & Practical | 12 | 981 | Naqvi Injury Law (69) |
| Comparison / Versus | 10 | 885 | Richard Harris (56) |
| Vegas-Specific | 8 | 725 | Richard Harris (53) |
A 5-Star Firm Can Be Invisible to AI
We checked whether Google ratings or review volume predict AI visibility. The answer: being great doesn't make you famous to AI.
We compared every firm's Google star rating against how often AI recommends them. The result? No connection at all. Higher-rated firms are not more likely to be recommended — if anything, they're slightly less likely. A perfect 5.0-star firm can be completely invisible to AI.
Review count helps a little — firms with more reviews tend to get mentioned slightly more — but it's a weak signal. Having 800 reviews doesn't guarantee AI knows you exist.
Ladah Law Firm: 4.8 rating, 498 reviews, just 2 Local Pack appearances — but 268 AI mentions and 317 citations (3rd most-cited website).
Ladah has nearly 90× more AI visibility than PT Law despite having fewer reviews and a lower star rating. The difference? Ladah's website content gets cited by AI. PT Law's doesn't. Being talked about online matters more than being liked.
5.0 Stars. 932 Reviews. Only 3 AI Mentions.
We found firms with strong real-world presence — perfect ratings, hundreds of reviews — that AI barely knows exist.
| Firm | Rating | Reviews | Local Pack | AI Mentions | Verdict |
|---|---|---|---|---|---|
| PT Law | 5.0 | 932 | 2 | 3 | AI Invisible |
| CVBN LAW | 4.9 | 504 | 2 | 5 | AI Invisible |
| Ace Lakhani Law Firm | 5.0 | 468 | 1 | 6 | AI Invisible |
| Hale Injury Law | 4.9 | 599 | 2 | 12 | Barely Visible |
| Tingey Injury Law Firm | 4.9 | 460 | 10 | 13 | Barely Visible |
| Maier Gutierrez Injury Lawyers | 5.0 | 574 | 12 | 18 | Barely Visible |
These "invisible firms" represent the immediate opportunity for AI visibility optimization. All 6 firms have 4.9-5.0 star ratings and 460-932 reviews — strong fundamentals. But they're missing the content layer that LLMs consume: blog mentions, legal directory features, listicle placements, and third-party coverage that enters training data and search results.
Different Channels, Different Winners
93% of Local Pack firms appear somewhere in AI channels — but only 32% of top AI firms appear in the Local Pack. They're different games.
| Firm | Local Pack | AI Total | Google AI Mode | Pattern |
|---|---|---|---|---|
| Naqvi Injury Law | 52 | 808 | 137 | Dominant Everywhere |
| Jack Bernstein Injury Lawyers | 0 | 202 | 25 | AI Only — LP Invisible |
| Gina Corena & Associates | 29 | 16 | 11 | LP Only — AI Invisible |
| Claggett & Sykes | 4 | 417 | 62 | AI Dominant |
| Dimopoulos Injury Law | 15 | 86 | 19 | Balanced |
Google AI Mode Surfaces 2× More Firms Than ChatGPT API
Some channels give curated short lists; others produce exhaustive recommendations. If you're a smaller firm, the verbose channels are your best shot.
148 Firms Exist Only in ChatGPT User Interface (Web Search ON)
Each channel surfaces firms that no other channel mentions. These are "platform-exclusive" visibility opportunities.
| Channel | Exclusive Firms | What This Means |
|---|---|---|
| ChatGPT User Interface (Web:ON) | 148 | Browsing behavior surfaces unique results |
| ChatGPT API (Web:ON) | 110 | API web search finds different pages than UI |
| ChatGPT User Interface (Web:OFF) | 54 | Training data has firms API doesn't surface |
| ChatGPT API (Web:OFF) | 81 | Different model version/system prompt |
| Gemini API (Web:ON) | 77 | Google grounding finds different firms |
| Google AI Mode | 76 | Query fan-out discovers niche firms |
| Perplexity API (Web:OFF) | 62 | Perplexity's training data is distinctive |
| Gemini API (Web:OFF) | 60 | Gemini's base knowledge differs |
Where Do AIs Get Their Information?
We extracted 6,416 citation URLs from the User Interface channels. Law firm websites dominate (48.4%), followed by legal directories (13.1%) and review platforms (6.4%).
| Source Category | Citations | Share | Top Examples |
|---|---|---|---|
| Law Firm Websites | 3,112 | 48.4% | bensonbingham.com, ladahlaw.com, egletlaw.com, naqvilaw.com |
| Other (News, Reddit, Forbes) | 2,047 | 31.9% | forbes.com, reddit.com, cameronlawlv.com |
| Legal Directories | 841 | 13.1% | superlawyers.com, justia.com, avvo.com |
| Review Platforms | 413 | 6.4% | yelp.com, bbb.org |
| News / Media | 13 | 0.2% | reviewjournal.com, lasvegassun.com |
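Mechanically, this category split is a domain lookup. A sketch with a deliberately tiny category map (the study classified 363 domains; the map below is illustrative):

```python
from urllib.parse import urlparse

# Illustrative category map covering a handful of the study's example domains.
CATEGORIES = {
    "bensonbingham.com": "Law Firm Websites",
    "superlawyers.com": "Legal Directories",
    "avvo.com": "Legal Directories",
    "yelp.com": "Review Platforms",
    "reviewjournal.com": "News / Media",
}

def categorize(url):
    """Strip a citation URL down to its host and look up its source category."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return CATEGORIES.get(host, "Other")

print(categorize("https://www.avvo.com/personal-injury-lawyer/nv/las_vegas.html"))
# Legal Directories
```

Counting `categorize(u)` over all 6,416 citation URLs produces the share column above.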
Same Firm, Different Story — How Each AI Describes the Top Firms
Beyond just recommending different firms, each AI model frames the same firm with a different personality and emphasis.
Google AI Mode: Shows Google Maps card with rating (4.9), address, and image — structured data, visual, local-pack style
ChatGPT User Interface: Ranked in a numbered list with brief qualifier — concise, editorial tone
Eglet Adams: "Robert Eglet named Trial Lawyer of the Year (2026); known for billion-dollar verdicts" — individual lawyer + headline verdicts
Benson & Bingham: "25+ years experience; record-setting settlements ($30M+)" — experience + settlement size
Richard Harris: "Over 40 years in Nevada; one of the largest and most established firms" — longevity + scale
Ask Twice, Get Different Answers: Only 23.8% Overlap Between Runs
We ran 20 prompts 4 times each across all 13 channels — 1,040 queries — to test reproducibility. The results are sobering.
| Channel | Set Overlap | Core % | Stochastic % | Stability |
|---|---|---|---|---|
| Gemini User Interface (Web:OFF) | 38.8% | 28.4% | 71.6% | Most Stable |
| Perplexity API (Web:OFF) | 31.9% | 24.0% | 76.0% | Stable |
| Perplexity User Interface (Web:ON) | 28.7% | 23.0% | 77.0% | Moderate |
| Perplexity User Interface (Web:OFF) | 28.4% | 23.6% | 76.4% | Moderate |
| ChatGPT User Interface (Web:OFF) | 24.7% | 18.8% | 81.2% | Moderate |
| Google AI Mode | 22.3% | 15.6% | 84.4% | Moderate |
| ChatGPT User Interface (Web:ON) | 14.3% | 7.6% | 92.4% | Volatile |
| ChatGPT API (Web:OFF) | 8.8% | 4.9% | 95.1% | Very Volatile |
| ChatGPT API (Web:ON) | 6.5% | 1.5% | 98.5% | Extremely Volatile |
User Interface channels are more stable than API (26.0% vs 21.7% overlap). Perplexity is the most stable ecosystem (28.6% average overlap), while OpenAI is the least stable (13.5%). Gemini User Interface (Web:OFF) is the single most stable channel at 38.8% — its training-data-only mode gives the most consistent answers.
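Core vs. stochastic can be read as: a firm is "core" if it appears in every repeat run of a prompt, and stochastic if it appears in at least one run but not all. Assuming that definition (the article doesn't spell it out), a sketch:

```python
def core_split(runs):
    """runs: list of firm sets from repeat runs of the same prompt.
    Core = firms present in every run; stochastic = everything else mentioned."""
    everything = set().union(*runs)
    core = set.intersection(*runs)
    core_pct = len(core) / len(everything)
    return core_pct, 1 - core_pct

# 4 repeat runs of one hypothetical prompt.
runs = [
    {"Naqvi", "Eglet", "Benson"},
    {"Naqvi", "Benson", "Ladah"},
    {"Naqvi", "Eglet", "Harris"},
    {"Naqvi", "Benson", "Eglet"},
]
core_pct, stochastic_pct = core_split(runs)
print(core_pct)  # 0.2 -> only Naqvi survives all 4 runs
```

Here only 1 of the 5 firms ever mentioned is core, i.e. 80% of the answer set is stochastic, in line with the table's typical values.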
Firms That Appear Keep Their Position — But Most Don't Appear
Kendall's τ measures whether firms maintain the same rank ordering across repeat runs. Average: τ = 0.370 (moderate consistency).
| Channel | Kendall τ | Set Overlap | Interpretation |
|---|---|---|---|
| Perplexity User Interface (Web:OFF) | 0.610 | 28.4% | Best rank consistency |
| Perplexity User Interface (Web:ON) | 0.590 | 28.7% | Strong rank consistency |
| ChatGPT User Interface (Web:ON) | 0.583 | 14.3% | Good ranks, volatile sets |
| ChatGPT User Interface (Web:OFF) | 0.558 | 24.7% | Good ranks, moderate sets |
| Gemini User Interface (Web:OFF) | 0.500 | 38.8% | Most stable overall |
| Google AI Mode | 0.221 | 22.3% | Weak rank consistency |
| Gemini User Interface (Web:ON) | 0.112 | 21.4% | Poor rank consistency |
| ChatGPT API (Web:ON) | -0.289 | 6.5% | Inverted rankings |
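For reference, Kendall's τ compares every pair of firms ranked by both runs: pairs in the same order count +1, pairs in reversed order count -1, averaged over all pairs. A small pure-Python sketch with invented ranks (SciPy's `kendalltau` is the usual production choice):

```python
from itertools import combinations

def sign(x):
    return (x > 0) - (x < 0)

def kendall_tau(rank_a, rank_b):
    """Kendall's tau over firms ranked in both runs (dicts: firm -> rank)."""
    common = [f for f in rank_a if f in rank_b]
    pairs = list(combinations(common, 2))
    if not pairs:
        return 0.0
    net = sum(sign((rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]))
              for x, y in pairs)
    return net / len(pairs)

run1 = {"Naqvi": 1, "Eglet": 2, "Benson": 3, "Harris": 4}
run2 = {"Naqvi": 1, "Benson": 2, "Eglet": 3, "Kutner": 4}
print(kendall_tau(run1, run2))  # 1/3: 2 concordant pairs, 1 discordant, of 3
```

τ = 1 means identical ordering, 0 means no relationship, and negative values (as with ChatGPT API Web:ON) mean rankings tend to invert between runs.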
What Should a PI Lawyer in Las Vegas Actually Do?
1. Track all channels, not just one. User Interface vs API gives different results. Search ON vs OFF gives different results. Google AI Mode vs Local Pack gives different results. A firm that's invisible in one channel may be dominant in another. Multi-channel monitoring is not optional.
2. Blog coverage and listicle placement matter more than star ratings. Firms like Ladah (268 AI mentions, 4.8 stars, 498 reviews) crush firms like Gina Corena (16 AI mentions, 5.0 stars, 836 reviews). The difference? Content footprint. Being talked about on legal blogs, featured in "best of" listicles, and covered by local press enters training data and live search results that AI consumes.
3. Google AI Mode is the new battleground. At 12 firms per response and 290 unique firms, it's the most generous and most important AI discovery channel for local businesses. It pulls from Google's full search index, Maps, and GBP data. Firms not optimizing for this channel are leaving AI visibility on the table.
4. The "invisible firm" opportunity is real. Maier Gutierrez has 574 reviews, 5.0 stars, 12 Local Pack appearances, and just 18 AI mentions. The gap between real-world quality and AI awareness is a solvable problem. Content marketing, schema markup, and directory presence can move firms from invisible to visible within weeks.
5. Niche positioning works. Firms like Eglet Law (over-indexes on trust queries) and Paul Padda Law (over-indexes on injury-type queries) prove that you don't need to win every query — you need to own your niche across all channels.
How We Built This Study
124 prompts across 8 dimensions — all recommendation-only, designed to elicit business name mentions:
| Dimension | Prompts | Purpose |
|---|---|---|
| Injury Type | 30 | Case-type visibility — car, truck, slip & fall, med mal, rideshare, etc. |
| Situational / Scenario | 20 | Narrative-based with explicit recommendation ask — real-world phrasing |
| Generic / Broad | 15 | Baseline — who shows up for generic PI lawyer queries |
| Trust & Credibility | 15 | Reviews, settlements, awards — what trust signals does AI surface |
| Location Specific | 14 | Neighborhood and venue-specific — tests local knowledge depth |
| Constraints & Practical | 12 | Fees, language, availability — filtered recommendation queries |
| Comparison / Versus | 10 | Competitive rankings, firm-type filters — forces named comparisons |
| Vegas-Specific | 8 | Casino, entertainment, pool club, convention injuries — unique to Las Vegas |
| TOTAL | 124 | |
3 models: ChatGPT (GPT-5.4-mini), Gemini (2.5 Pro), Perplexity (Sonar).
3 channels: (1) User Interface — prompting AI interfaces directly through their consumer web products. (2) API — same models queried programmatically through their respective APIs. (3) Google Search — Local Pack + Organic results.
Entity resolution: 1,872 raw name variants collapsed through normalization, manual merge rules, and deduplication. Non-firm entities (directories, insurers, retailers) excluded. Final count: 1,267 unique law firms.
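Token-based normalization is one plausible way to collapse 1,872 variants down to canonical firms. A simplified sketch (the stop-word list and rules are illustrative, not the study's actual merge rules):

```python
import re

# Generic legal-name tokens to ignore when matching variants (illustrative list).
STOP = {"law", "firm", "office", "offices", "injury", "lawyer", "lawyers",
        "attorney", "attorneys", "llc", "llp", "ltd", "pc", "the"}

def normalize(name):
    """Collapse a raw firm-name variant to a canonical key."""
    key = re.sub(r"[^a-z0-9 ]", " ", name.lower())  # strip punctuation
    tokens = [t for t in key.split() if t not in STOP]
    return " ".join(tokens)

variants = ["Naqvi Injury Law", "Naqvi Law", "NAQVI INJURY LAW, LLC"]
print({normalize(v) for v in variants})  # {'naqvi'} -> one canonical firm
```

A real pipeline would layer manual merge rules on top, since pure token-stripping can over-merge distinct firms that share a surname.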
Full response analysis: Raw AI responses captured with full markdown text, citation URLs, links attached, and source metadata. 6,416 citation URLs extracted from User Interface channels across 363 unique domains.
Ground truth: Google Local Pack (89/124 prompts triggered, 44 unique firms, with ratings/reviews) and Organic results (124 prompts, 168 unique domains) as baseline comparison.