Google Partners with Anthropic to Boost Gemini AI Capabilities

All AI chatbots
Credit: Solen Feyissa on Unsplash | Free use under the Unsplash License

According to internal reports, Google's AI division, DeepMind, is enhancing its Gemini AI using insights from Anthropic's Claude model. Contractors working on Gemini's development have been tasked with comparing its responses to Claude's, focusing on essential qualities such as accuracy, truthfulness, and verbosity.

The practice has raised concerns about whether Google is complying with Anthropic's commercial terms of service.

Google DeepMind
Credit: Google DeepMind on Unsplash | Free use under the Unsplash license

According to correspondence reviewed by TechCrunch, contractors rate the quality of Gemini's answers through detailed comparisons with Claude's outputs. They are given up to 30 minutes per prompt to determine which model produces the better response.

Claude is known for its strict safety protocols and often refuses to respond to unsafe prompts. Gemini, by contrast, has been criticized for occasional safety violations, including the generation of inappropriate content.

Internal documents revealed that Claude's outputs occasionally identify themselves as Claude, reflecting the model's adherence to its safety requirements. One evaluator stated, "Claude's safety settings are the strictest among AI models," emphasizing its refusal to engage with potentially dangerous scenarios. This contrasts with Gemini, which has been flagged for safety violations.

Google's use of Claude has raised questions about its compliance with Anthropic's terms of service, which restrict customers from using Claude to train competing AI models or build competing services without prior approval.

While Google is a significant investor in Anthropic, it is unclear whether the company obtained permission to run these comparisons. Both Google and Anthropic declined to clarify their positions when asked for comment.

A Google DeepMind spokesperson, Shira McNamara, acknowledged that comparing model outputs is common practice in the AI industry but denied using Claude's outputs to train Gemini. "Any suggestion that we have used Anthropic models to train Gemini is inaccurate," she emphasized.

The practice has come under scrutiny amid broader concerns about Gemini's reliability, especially on sensitive topics such as healthcare, where contractors have flagged inaccuracies in its responses.

As competition in AI development intensifies, these revelations highlight the ethical and operational challenges that tech giants face.