Kaspersky has presented its ethical principles for developing and using systems that employ artificial intelligence (AI) or machine learning (ML), reinforcing its commitment to a transparent and responsible approach to technology development. As AI algorithms play an increasingly prominent role in cybersecurity, the principles set out in Kaspersky’s whitepaper explain how the company ensures its AI-driven technologies are reliable and offer guidance to other industry players on mitigating the risks associated with AI/ML algorithms. Kaspersky initiated the discussion as part of the UN Internet Governance Forum, currently taking place in Japan, which brings together world-leading experts on internet governance.
Kaspersky has been using ML algorithms, a subset of AI, in its solutions for nearly 20 years. Combining the power of artificial intelligence and human expertise has enabled Kaspersky solutions to effectively detect and counter new threats every day. ML plays an important role in automating threat detection, recognizing anomalies, and enhancing the accuracy of malware identification. To help drive innovation, Kaspersky has formulated ethical principles for the development and use of AI/ML and is openly sharing them with the industry to build impetus for a multilateral dialogue that ensures AI is used to make the world a better place.
According to Kaspersky, the development and use of AI/ML should be guided by the following six principles:
- Transparency;
- Safety;
- Human control;
- Privacy;
- Commitment to cybersecurity purposes;
- Openness to a dialogue.
The transparency principle reflects Kaspersky’s firm belief that companies should inform their customers about the use of AI/ML technologies in their products and services. At Kaspersky, we comply with this principle by developing AI/ML systems that are interpretable to the maximum extent possible and by sharing information with our stakeholders about how our solutions operate and use AI/ML technologies.
Safety considerations are reflected in a wide range of rigorous measures that Kaspersky implements to ensure the quality of its AI/ML systems. Some of these include security audits specific to ML/AI, steps to minimize dependence on third-party datasets in training AI-driven solutions, and favoring cloud-based ML technologies with the necessary safeguards instead of the models installed on clients’ machines.
The importance of human control stems from the need to calibrate the work of AI/ML systems when analyzing complex threats, in particular Advanced Persistent Threats (APTs). To provide effective protection against ever-evolving threats, Kaspersky is committed to maintaining human control as an essential element of all its AI/ML systems.
Another crucial principle is ensuring the right to privacy in the ethical use of AI/ML. With big data playing a vital role in training such systems, companies working with AI/ML must consider individuals’ privacy comprehensively. Committed to respecting individuals’ privacy rights, Kaspersky applies several technical and organizational measures to protect data and systems and ensures its users’ rights to privacy are meaningfully exercised.
The fifth ethical principle represents Kaspersky’s commitment to utilizing AI/ML systems solely for defensive purposes. By focusing exclusively on defensive technologies, the company pursues its mission to build a safer world and demonstrates its commitment to protecting users and their data.
Finally, the last principle refers to Kaspersky’s openness to dialogue with all stakeholders to share best practices in the ethical use of AI. Kaspersky stands ready to engage with all interested parties, as the company believes that only through ongoing collaboration among all stakeholders can we overcome obstacles, drive innovation, and open new horizons.
Kaspersky CTO Anton Ivanov commented: “Artificial intelligence has the potential to bring many benefits to the cybersecurity industry, further enhancing the cyber resilience of our society. But, as with any other technology at an early stage of its development, artificial intelligence isn’t risk-free. To address concerns surrounding AI, Kaspersky has released its ethical principles to share its best practices on AI applications and to call for an open industry-wide dialogue on clear guidelines for what the development of AI- and ML-driven solutions must take into account to be deemed ethical.”
Kaspersky presented its ethical principles as part of the UN-led Internet Governance Forum in Kyoto, Japan, held on October 8-12. With AI and emerging technologies among the key topics at this year’s event, Kaspersky organized a workshop to discuss the ethical principles of AI development and use, bringing both technical and legal considerations to the discussion.
The release of the ethical principles continues Kaspersky’s Global Transparency Initiative, which promotes transparency and accountability among technology providers for a more resilient and cybersafe world. To learn more about the initiative and the company’s transparency principles, request a visit to a Kaspersky Transparency Center.