Responding to the European Commission’s AI white paper

In January, our CEO Sundar Pichai visited Brussels to talk about artificial intelligence and how Google could help people and businesses succeed in the digital age through partnership. Much has changed since then due to COVID-19, but one thing hasn't—our commitment to partnering with Europe on AI, especially to tackle the pandemic and help people and the economy recover.

As part of that effort, earlier today we filed our response to the European Commission's Consultation on Artificial Intelligence, giving our feedback on the Commission's initial proposal for how to regulate and accelerate the adoption of AI.

Excellence, skills, trust

Our filing applauds the Commission's focus on building out the European "ecosystem of excellence." European universities already boast renowned leaders in dozens of areas of AI research—Google partners with some of them via our machine learning research hubs in Zurich, Amsterdam, Berlin, Paris and London—and many of their students go on to make important contributions to European businesses.  

We support the Commission's plans to help businesses develop the AI skills they need to thrive in the new digital economy. Next month, we'll contribute to those efforts by extending our machine learning check-up tool to 11 European countries, helping small businesses implement AI and grow. Google Cloud already works closely with scores of businesses across Europe to help them innovate using AI.

We also support the Commission's goal of building a framework for AI innovation that will create trust and guide ethical development and use of this widely applicable technology. We appreciate the Commission's proportionate, risk-based approach. It's important that AI applications in sensitive fields—such as medicine or transportation—are held to the appropriate standards. 

Based on our experience working with AI, we also offered a couple of suggestions for making future regulation more effective. We want to be a helpful and engaged partner to policymakers, and we have provided more details in our formal response to the consultation.

Definition of high-risk AI applications

AI has a broad range of current and future applications, some of which involve significant benefits and risks. We agree that certain uses warrant extra scrutiny and safeguards to address genuine and complex challenges around safety, fairness, explainability, accountability, and human interactions. But we think any future regulation would benefit from a more nuanced definition of "high-risk" applications of AI.

Assessment of AI applications

When thinking about how to assess high-risk AI applications, it's important to strike a balance. While AI won't always be perfect, it has great potential to improve on the performance of existing systems and processes. At the same time, the development process for AI must give people confidence that the system they're using is reliable and safe. That's especially true for applications like new medical diagnostic techniques, which could allow skilled medical practitioners to offer more accurate diagnoses, earlier interventions, and better patient outcomes. But the requirements need to be proportionate to the risk, and shouldn't unduly limit innovation, adoption, and impact.

This is not an easy needle to thread. The Commission's proposal suggests "ex ante" assessment of AI applications (i.e., upfront assessment, based on forecasted rather than actual use cases). Our contribution recommends expanding established due diligence and regulatory review processes to include the assessment of AI applications. This would avoid unnecessary duplication of effort and likely speed up implementation.

For the (probably) rare instances when high-risk applications of AI are not obviously covered by existing regulations, we would encourage clear guidance on the "due diligence" criteria companies should use in their development processes. This would enable robust upfront self-assessment and documentation of any risks and their mitigations, and could also include further scrutiny after launch.

This approach would give European citizens confidence about the trustworthiness of AI applications, while also fostering innovation across the region. And it would encourage companies—especially smaller ones—to launch a range of valuable new services. 

Principles and process

Responsible development of AI presents new challenges and critical questions for all of us. In 2018 we published our own AI Principles to help guide our ethical development and use of AI, and also established internal review processes to help us avoid bias, test rigorously for safety, and design with privacy top of mind. Our principles also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights. Look out for an update on our work around these principles in the coming weeks.

AI is an important part of Google's business and our aspirations for the future. We share a common goal with policymakers—a desire to build trust in AI through responsible innovation and thoughtful regulation, so that European citizens can safely enjoy the full social and economic benefits of AI. We hope that our contribution to the consultation is useful, and we look forward to participating in the discussion in coming months.

