Google is using Anthropic’s Claude to improve its Gemini AI
- Tech Brief
- Dec 24, 2024
- 1 min read

TechCrunch reported that contractors working on Google's Gemini AI are comparing its answers to outputs from Anthropic's AI model, Claude. Contractors evaluate the responses against criteria such as accuracy, truthfulness, and verbosity, spending up to 30 minutes per prompt. Internal correspondence revealed references to Claude on the Google platform, with some outputs explicitly stating they came from Claude. Contractors noted Claude's stronger emphasis on safety, with the model declining to answer unsafe prompts, while some of Gemini's responses were flagged as safety violations.
Google declined to confirm whether it obtained Anthropic's permission for these evaluations. Anthropic's terms prohibit using Claude to develop competing models without approval. Google, a major Anthropic investor, stated that it compares outputs for evaluation purposes but does not use Claude to train Gemini. The report comes amid broader concerns about contractors being asked to rate Gemini's responses on sensitive topics outside their expertise.