Mobile Telecom Security Landscape Blog: May 24

Welcome to our new information update from GSMA Security

This blog presents some of the recent security developments in the domain of Artificial Intelligence / Machine Learning (AI / ML) and aims to inform and provoke thoughts on how best to build a robust security response. Topics covered include AI security threats, some of the development of best practices, the concept of an AI Bill of Materials and some ideas on the use of defensive AI security.

The supply chain for AI / ML services has benefitted from much industry dialogue [1].  AI-enabled security threats include:

  • Data poisoning
  • Prompt injection attacks where an attacker creates an input designed to make the model behave in an unintended way
  • Scams empowered by generative AI
  • Identification of new attack types through the use of AI
  • Synthetic identity fraud [2] (such as deepfakes).

The security of AI / ML services is still developing and is driven by achieving a set of security activities [3]: secure by design, secure development, secure deployment and secure operation and maintenance.  The Guidelines for Secure AI System Development, published by the UK NCSC and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 17 other countries.

A recent UK TIN Future Capability Paper [4] focuses on Telco AI.  The paper explores a range of including: Architecture, Infrastructure, Data, Network Testing with AI, and Services and Business.  Cross-topic subjects on Economy, Sustainability, Skill Gap, Ethics and Regulations, and AI Security are also addressed.  The main UK actions identified affecting Telecom AI Security are the challenges of increased threat surfaces arising from the use of AI and Supply Chain Security for AI.

AI Bill Of Materials (AI-BOM)[5] are used to document the components of an AI system, including the model details, architecture, usage, training data. This is another example of recent ‘bill of materials’ activities (such as Software BOM (SBOM) and Cryptographic BOMs (CBOM)) that seek to build depth of knowledge in supply chain components, vulnerability management and inventory tracking.

The composition of source code is ideally documented in detail to describe how the code works to assist in code maintenance and upgrade by other coders. The code composition can be recorded in a SBOM [6]. The concept of a CBOM [7] builds on the concept of establishing an inventory of deployed cryptographic protocols / algorithms [8] in an entity.  This inventory can be an important early step in delivering a migration to a quantum-safe cryptographic state. 

The State of AI in Cybersecurity report [9] identified key findings including that AI is valuable when used in threat intelligence and for threat detection, most AI adoption is at the early stage of maturity, but organisations are already reaping benefits and defensive AI is critical to protecting organisations from cyber criminals using AI.

Defensive AI-enabled solutions are also emerging such as the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems[10]) that aims to provide a knowledge base of adversary tactics and techniques based on real-world attack observations.  Also, NIST has released guidance such as Artificial Intelligence Risk Management Framework [11].  There are a range of existing security controls that, when robustly implemented, will strengthen protection against such attacks.  These include use of multi-factor authentication, operating least privilege schemes, segmenting internal networks into zero-trust zones, securing the supply chain, threat modelling, security testing and secure by default.

AI/ML have a wide range of real, potential and emerging mobile telecom security and fraud applications. Securing the AI/ML platform [12], data and algorithms are key protective measures. Beyond that, there is significant potential for generative AI security applications to spot advanced and complex attack types and to counter fraud techniques through advanced analytics. There are a number of more advanced use cases of Large Language Models (LLMs) although currently, there are scalability, cost and production environment challenges. LLMs could be combined with reinforcement learning to train agents to perform tasks (as opposed to language models that mainly support knowledge queries and natural language processing).

AI/ML are highly likely to be used to generate advanced attack techniques, pointing to a requirement for teams of generative agents to engage in complex real-time defence.  If you’d like to discuss or to get more closely involved, please email [email protected]


[1] For example, see https://www.cyber.gov.au/resources-business-and-government/governance-and-user-education/governance/an-introduction-to-artificial-intelligence

[2] https://www.helpnetsecurity.com/2024/02/09/identity-fraud-growth/

[3] https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development

[4] https://uktin.net/sites/default/files/2024-02/FCP%20Artificial%20Intelligence-Compressed_Final19224.pdf

[5] https://becomingahacker.org/artificial-intelligence-bill-of-materials-ai-boms-ensuring-ai-transparency-and-traceability-82322643bd2a

[6] Types of Software Bill of Materials (SBOM) Documents (cisa.gov)

[7] https://research.ibm.com/blog/cryptographic-bill-of-materials

[8] Explored in ETSI TR 103 619 V1.1.1 (2020-07) Migration strategies and recommendations to Quantum Safe schemes and https://www.gsma.com/newsroom/wp-content/uploads/PQ.1-Post-Quantum-Telco-Network-Impact-Assessment-Whitepaper-Version1.0.pdf and https://www.gsma.com/newsroom/wp-content/uploads/PQ.1-Post-Quantum-Telco-Network-Impact-Assessment-Whitepaper-Version1.0.pdf

[9] https://mixmode.ai/wp-content/uploads/2024/02/MixMode-State-of-AI-in-Cybersecurity-Report-2024-1.pdf

[10] https://atlas.mitre.org/

[11] https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[12] GR SAI 009 – V1.1.1 – Securing Artificial Intelligence (SAI); Artificial Intelligence Computing Platform Security Framework (etsi.org)