AI at Gen: Unpacking What AI Means for People, Cybersecurity and Our Company
All over the world, people are embracing artificial intelligence (AI) in the ways they work, learn and communicate. A recent report from McKinsey & Company found that companies across industries, regions and sizes are rapidly increasing their use of generative AI (gen AI) to perform regular functions, and that more people are using these tools outside of the office, as well.
As a global leader in Cyber Safety, we recognize our role in considering the impacts of such a revolutionary and widely used technology, particularly one that has direct effects on digital life.
To that end, we’re proud to publish a new series of policy papers that discuss what AI means for consumers, our company and the world around us. Together, these documents detail how we can work together to build an AI-enabled digital landscape that centers the issues of transparency, ethics and responsibility.
Our Approach to AI at Gen
The rise of AI has had significant implications for Cyber Safety. AI-powered threats often require AI-powered safeguards, and we’re increasingly incorporating these tools into our efforts to provide the best possible protection to consumers around the globe.
To guide our AI research, use and deployment, we’ve identified the following five policy principles:
- Integrity – We use and create AI only in ways that are positive, legal and ethical.
- Accountability – We assess the unique risks of AI and stand accountable for the outputs of the tools we build and use.
- Data Protection – We protect company information, including intellectual property and personal data, and we respect the intellectual property rights of others.
- Human Involvement – We control the outputs of AI by incorporating human oversight into our processes.
- Transparency – People should be able to understand when they are significantly impacted by AI. We strive to be clear about how the AI tools we build work and how they affect stakeholders.
We believe these principles allow us to invest in ethical and responsible AI to protect consumers and drive our growth. Read our Artificial Intelligence Policy Principles for more information.
Our Recommendations for Companies and Policymakers
Building ethical AI platforms requires collaboration and alignment. The more organizations operate responsibly, promote transparency and protect against new threats, the more we can deliver an AI revolution that is founded on safety, freedom and trust. Our policy paper Artificial Intelligence Concerns & Recommendations lays out how we can do just that. A few examples include:
- Citizens should only use intelligent chatbots and generative models from providers that offer clear legal guarantees against the misuse of private data.
- Companies should enhance transparency regarding AI technical solutions and open widely used AI-supported ecosystems to public or third-party scrutiny.
- Governments and regulators should support any measure that aims to raise public awareness regarding the issue of content authenticity on the Internet.
Our Research on the AI Threat Landscape
We’ve also conducted a detailed analysis of the current landscape of AI-related threats and scams facing consumers and organizations alike. The report covers many of the major issues related to AI adoption, including misinformation, deepfakes, manipulation of personal data, discrimination and AI-generated cyberattacks. View the full report, The AI Revolution of Good and Bad, for a more in-depth perspective.
We’re likely only at the beginning of AI’s impact on our lives. As these new technologies evolve, we believe we can continue to meet new challenges, Power Digital Freedom and build a safer and more trusted AI-supported digital world.