On 15 April 2025, the Digital Policy Office (DPO) of the Hong Kong Government published the Hong Kong Generative Artificial Intelligence Technical and Application Guideline (Guideline), seeking to encourage stakeholders in Hong Kong to engage in generative AI (GenAI) activities safely and responsibly. The Guideline was formulated by the Hong Kong Generative AI Research and Development Center (HKGAI), which the DPO tasked with researching and recommending suitable guidelines to promote the safe application of GenAI technologies.
The Guideline is a product of findings from practical AI applications, expert opinions from the industry and international best practices in GenAI governance. It is tailored for three main types of stakeholders, namely:
- Technology Developers (who “create, train and maintain the foundational models and algorithms that power GenAI systems”);
- Service Providers including platform providers (who “deploy GenAI technologies in customer-facing applications, acting as intermediaries between developers and end users”); and
- Service Users including content creators and content disseminators (who “utilise GenAI services for personal or professional purposes”).
Key Highlights:
GenAI Governance Framework
- The Guideline establishes a GenAI governance framework to promote the effective and beneficial use of GenAI based on five dimensions, namely: personal data privacy, intellectual property, crime prevention, reliability and trustworthiness, and system security. Under the framework, stakeholders are expected to define the scope of their actions and assess potential risks corresponding to the dimensions.
- The Guideline also outlines key principles of governance which are in line with international practices (such as the OECD AI Principles and those under the EU AI Act). Such principles include security and transparency, accuracy and reliability, and fairness and objectivity.
- Interestingly, Hong Kong’s framework emphasises practicality and efficiency, requiring developers and service providers to continuously enhance the utility of GenAI technologies through algorithm and model architecture optimisation and the exploration of diverse applications (across a variety of tasks, scenarios and industries), so that generated content is precise, efficient, high-quality and aligned with user intentions.
Risk management and risk classification system
- The Guideline highlights the importance of understanding the technical limitations (such as model hallucination and model bias) and associated service risks of GenAI.
- Notably, the governance framework proposes a four-tier classification of AI systems, resembling the EU approach, into “unacceptable risk”, “high risk”, “limited risk” and “low risk”, and recommends regulatory strategies scaled to the potential harm posed by different GenAI applications. For example, an unacceptable-risk AI system that poses existential threats (e.g. causing harm to humans) should be fully prohibited, and its development should attract legal liability. A low-risk application of AI (e.g. spam filters, creative tools) can be governed by self-certification.
Practical guidelines for stakeholders
- The Guideline also offers practical recommendations for the three types of stakeholders based on their respective roles and responsibilities.
- Some notable recommendations include:
- Technology Developers are encouraged to establish comprehensive and dedicated teams (such as a data team and a quality control team) to ensure compliance with applicable laws and regulations, manage data responsibly, and prevent discrimination.
- Service Providers should develop a responsible GenAI service framework and identify specific opportunities that can deliver significant value. In line with the AI regulatory regime in Mainland China, Service Providers are also required to ensure that their systems do not generate illegal or inappropriate content and should establish mechanisms for content governance, traceability, and auditability.
- Service Users are encouraged to familiarise themselves with the terms of use of the relevant platform or tool. Additionally, they should adhere to appropriate citation and attribution practices to clearly indicate whether GenAI has played a role in content creation or decision-making.
The DPO has committed to regularly updating the Guideline to keep pace with technological advancements.
Although the Guideline is non-binding, businesses involved in the development and deployment of AI in Hong Kong that wish to observe it should thoroughly review their AI operations and internal AI policies, identify their respective roles as defined under the Guideline (noting that an organisation may assume multiple roles simultaneously), and take proactive steps to fulfil the obligations associated with those roles.
Our view
The Guideline reinforces Hong Kong’s commitment to a business-centric and light-touch regulatory approach to AI, capitalising on the region’s robust legal framework for governing emerging technologies. It highlights Hong Kong’s reliance on existing laws, such as data privacy and intellectual property laws, to effectively address challenges and disputes related to AI systems. This approach also cements Hong Kong’s reputation as a friendly jurisdiction for global investment and innovation, without excessive or overly stringent regulation.
Moving forward, it is anticipated that Hong Kong will continue to rely on soft laws (like this Guideline), industry self-regulation (particularly within the banking and financial services sectors) and phased regulatory sandboxes to test, evaluate and regulate AI use cases by creating a testing ground for AI experimentation. Investors, innovators and businesses should keep abreast of both regulatory and investment incentives in Hong Kong for AI opportunities as the region solidifies its position as a leading technology hub.
To find out more on AI and AI laws and regulations, visit DLA Piper’s Focus on Artificial Intelligence page and Technology’s Legal Edge blog. If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.
If you’d like to discuss any of the issues discussed in this article, get in touch with Lauren Hurcombe, Daisy Wong or your usual DLA Piper contact.