In the midst of the artificial intelligence revolution, we stand at a pivotal juncture that demands our collective attention and action. AI's extraordinary capabilities have the potential to reshape industries, economies, and societies on a global scale. Yet as AI systems grow in power and complexity, we face a clear imperative: to guide the technology's trajectory securely, responsibly, and ethically.
The consequences of complacency are far-reaching. If we disregard the urgency of establishing proper security, ethical guidelines, and responsible practices for AI development, we risk creating systems that are opaque, misaligned with the interests of humanity, vulnerable to compromise, or simply out of control. The potential for AI to enhance our lives could be overshadowed by unintended consequences, bias, and misuse.
The principles of responsible AI development are clear: trust in data, transparency in algorithms, fairness in decision-making, accountability for outcomes, and the protection of privacy. It is a complex endeavour, but the ethical and security foundations we lay today will resonate for generations to come. Because the technology's potential seems limitless, and because terms like 'intelligence' encourage us to attribute 'thought' to it, trust has become one of the critical issues in AI's growth.
Recent developments underline the urgency of responsible AI. According to The Register, DARPA, the research powerhouse of the US military, has issued a call to the AI community to develop protective models for securing software. Its AI Cyber Challenge (AIxCC) seeks to harness machine-learning systems that safeguard critical infrastructure from potential threats.
Ensuring greater security through TCG standards
For companies on the frontlines of AI, the incorporation of Trusted Computing Group (TCG) standards emerges as a pivotal step. By adhering to these standards, AI solution providers can bolster the security of their offerings, enhancing the overall trustworthiness and reliability of AI systems.
The concept of ‘trusted computing’ maps naturally onto the elements of an AI system, providing enhanced security for each. Consider the data set a model is trained on: a Trusted Platform Module (TPM) can be leveraged to sign that data at its source and to verify, before it is used, that it has come from a trusted party.
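The sign-then-verify pattern can be sketched without TPM hardware. The snippet below is a minimal illustration using a standard-library HMAC as a stand-in for a TPM-resident signing key; in a real deployment the key would be generated inside the TPM and never exported, and a TSS library or the tpm2-tools utilities would perform the cryptographic operations. The key and data set here are hypothetical.

```python
import hashlib
import hmac

# Stand-in for a key that, in practice, would live inside the TPM
# and never be exported. Hypothetical value for illustration only.
TRUSTED_KEY = b"tpm-resident-signing-key"

def sign_dataset(data: bytes) -> str:
    """Produce a provenance tag for a training data set."""
    return hmac.new(TRUSTED_KEY, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, tag: str) -> bool:
    """Accept the data only if the tag proves it came from the trusted source."""
    expected = hmac.new(TRUSTED_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

dataset = b"label,feature\ncat,0.9\ndog,0.1\n"
tag = sign_dataset(dataset)
print(verify_dataset(dataset, tag))                  # True: untampered data
print(verify_dataset(dataset + b"evil,1.0\n", tag))  # False: tampering detected
```

The essential property is that verification fails the moment a single byte of the data set changes, so poisoned or substituted training data is rejected before it ever reaches the model.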
A TPM can also provide mechanisms that protect, detect, attest, and recover from any attempt to modify code, malicious or otherwise. This is especially important when it comes to safeguarding the algorithms used within an AI system. Furthermore, deviations of the model caused by bad or inaccurate input data can be detected and contained by applying trusted principles regarding cyber resiliency, network security, sensor attestation and identity. Businesses can likewise ensure that machine-learning training is secure by confirming that the entities providing it have adhered to Trusted Computing standards.
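The detection mechanism a TPM offers for code is measured boot: each stage of code is hashed and "extended" into a Platform Configuration Register, where the new PCR value is the hash of the old value concatenated with the measurement. The sketch below is a conceptual software model of that extend operation, with invented stage names; it is not a driver for real TPM hardware.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend: new value = H(old value || measurement)
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(chain):
    """Measure each code stage into a PCR, as a measured-boot flow would."""
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for stage in chain:
        pcr = extend(pcr, hashlib.sha256(stage).digest())
    return pcr

good = [b"bootloader v1", b"model runtime v1", b"model weights v1"]
golden = measure_boot(good)

tampered = [b"bootloader v1", b"model runtime v1 (patched)", b"model weights v1"]
print(measure_boot(tampered) == golden)  # False: the modification is detected
```

Because the final PCR value depends on every stage in order, modifying any algorithm, runtime, or weight file in the chain yields a value that no longer matches the golden reference, which is the basis for both detection and remote attestation.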
The Device Identifier Composition Engine (DICE) provides a mechanism to ensure that sensors and other connected devices maintain a high level of integrity and continue to supply accurate data. Each boot layer in a DICE system receives a secret that combines the secret of the preceding layer with a measurement of the current layer. As a result, if a layer is successfully exploited, its measurement changes and so do all the secrets derived from it, so the secrets belonging to the uncompromised configuration are never disclosed to the attacker. A system leveraging DICE can also re-key on each boot cycle, generating a unique device secret from which further keys can be derived for various purposes. The strong hardware-based attestation this enables makes DICE a powerful tool for verifying the integrity of devices and of the updates they receive.
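The layered derivation can be sketched as follows. This is a simplified model of the DICE idea, assuming each layer's secret is an HMAC of the previous secret keyed over a hash of the current layer's code; the layer names and the device secret are hypothetical, and a real implementation would root the chain in a hardware-protected Unique Device Secret.

```python
import hashlib
import hmac

UDS = b"unique-device-secret"  # hypothetical; fused into hardware in reality

def next_secret(prev_secret: bytes, layer_code: bytes) -> bytes:
    # Each layer's secret combines the preceding layer's secret
    # with the measurement (hash) of the current layer's code.
    measurement = hashlib.sha256(layer_code).digest()
    return hmac.new(prev_secret, measurement, hashlib.sha256).digest()

def derive_chain(layers):
    secrets, current = [], UDS
    for code in layers:
        current = next_secret(current, code)
        secrets.append(current)
    return secrets

clean = derive_chain([b"rom", b"loader", b"firmware"])
exploited = derive_chain([b"rom", b"loader", b"firmware (modified)"])

print(clean[2] != exploited[2])  # True: compromised layer gets different secrets
print(clean[1] == exploited[1])  # True: earlier, unmodified layers are unaffected
```

The design choice worth noting is that the compromised layer cannot compute the clean configuration's secrets at all: the inputs it holds already reflect its own modified measurement.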
In addition to the standards and specifications already discussed, TCG offers an extensive array of standards that encompass diverse sectors and applications including Measurement and Attestation Roots (MARS), Supply Chain, Firmware Integrity Measurement (FIM) and Reference Integrity Manifest (RIM).
The building blocks for secure AI
The Cyber Resilient Module and Building Block Requirements (CyRes) specification further fortifies AI systems, reducing the likelihood of malware persistence and protecting essential code and data. CyRes establishes three key security principles: protection of updatable, persistent code and configuration data; detection of unpatched vulnerabilities and of corruption when it occurs; and reliable recovery of the system to a known good state even after the platform has been compromised.
For connected cyber-resilient platforms, the protection, detection and recovery capabilities enabled by CyRes help identify misconfigured or unpatched code and deploy reliable, trusted updates.
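The detect-and-recover cycle can be illustrated in miniature. The sketch below assumes a protected golden image whose hash is stored out of reach of resident malware; the image contents are invented, and a real CyRes-conformant device would enforce the protection in hardware rather than in Python.

```python
import hashlib

# Protect: a known-good image and its hash, stored where resident
# malware cannot alter them (hardware-protected in a real device).
GOLDEN_IMAGE = b"firmware v2 (signed, known good)"
GOLDEN_HASH = hashlib.sha256(GOLDEN_IMAGE).hexdigest()

def detect(current: bytes) -> bool:
    """Detect: has resident firmware diverged from the known-good state?"""
    return hashlib.sha256(current).hexdigest() != GOLDEN_HASH

def recover(current: bytes) -> bytes:
    """Recover: restore the protected golden image if corruption is found."""
    return GOLDEN_IMAGE if detect(current) else current

print(recover(b"firmware v2 (signed, known good)") == GOLDEN_IMAGE)  # True
print(recover(b"firmware v2 + persistent malware") == GOLDEN_IMAGE)  # True: restored
```

This is why CyRes undercuts malware persistence: even code that survives in flash is overwritten by the golden image on the next detection pass, so compromise does not outlive a recovery cycle.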
It is imperative that, in the wake of growing attacks on AI, businesses continue to educate themselves on the threats they may encounter, identify vulnerabilities in their existing systems and take preventative steps before it is too late. Trusted Computing standards and technologies such as TPM, DICE and CyRes provide a strong layer of defense against malicious intent, protecting sensitive data and helping organizations avoid severe financial or reputational damage.
Membership in the Trusted Computing Group is your key to participating with fellow industry stakeholders in the quest to develop and promote trusted computing technologies.
Standards-based Trusted Computing technologies developed by TCG members are now deployed in enterprise systems, storage systems, networks, embedded systems, and mobile devices, and can help secure cloud computing and virtualized systems.
Trusted Computing Group announced that its TPM 2.0 (Trusted Platform Module) Library Specification was approved as a formal international standard under ISO/IEC (the International Organization for Standardization and the International Electrotechnical Commission). TCG has 90+ specifications and guidance documents to help build a trusted computing environment.