CSA Guidance on the Impact of AI on Investment Firms and Market Participants

Written By: Michael Holder, Kanchan Mehta, Rijja Baig

In December 2024, the Canadian Securities Administrators (“CSA”) released new guidance on how securities law applies to the use of artificial intelligence (“AI”) systems in the financial sector. CSA Staff Notice and Consultation 11-348 also solicits feedback from stakeholders on the integration of AI into capital markets, with comments due by March 31, 2025. The CSA’s guidance applies to a range of market participants, including registrants, and addresses the implementation, risks, and regulatory obligations associated with AI systems.

Key Themes Relating to the Use of AI Systems:

1.    Technology & Securities Regulation

Securities laws in Canada are technology-neutral, meaning they apply regardless of the technology used in market activities. However, different technologies require different compliance measures. The CSA emphasizes that regulation focuses on the activity being conducted rather than the technology itself.

2.    AI Governance & Oversight

Strong governance and risk management practices are essential when integrating AI into capital markets. The CSA recommends that market participants:

  • Conduct rigorous testing and validation before using AI systems.

  • Maintain human oversight where necessary to monitor AI-generated decisions.

  • Ensure AI literacy among users to interpret AI-generated outputs correctly.

  • Implement safeguards against cybersecurity risks, bias, and model drift.

  • Verify the accuracy, completeness, and privacy compliance of AI training data.

  • Assess third-party AI providers, including cloud services and external data sources.

3.    Explainability

Market participants must ensure that AI-driven decisions remain transparent, accountable, and auditable. Explainability ensures that individuals can understand and clearly articulate how an AI system generates its outputs. AI systems with lower degrees of explainability can pose challenges in verifying how outputs are generated. To ensure compliance:

  • AI systems should prioritize explainability where feasible.

  • Firms must be able to trace decisions made by AI models.

  • Human oversight is required to assess AI-generated recommendations and rectify errors.

4.    Disclosure

Transparency around AI use is essential to enable investors and clients to make informed decisions. Market participants should:

  • Clearly disclose AI usage in registration applications and filings.

  • Provide meaningful investor disclosure that highlights both the benefits and material risks associated with AI systems.

  • Avoid “AI washing” or misleading claims about AI capabilities in marketing materials, offering documents, term sheets, and agreements.

  • Ensure that AI-related disclosures do not misrepresent competitive advantages or omit key risks that could mislead investors.

5.    Conflicts of Interest

AI systems can introduce new conflict of interest risks, such as:

  • Bias in AI-generated decisions that favor the market participant over the client.

  • Lack of transparency in AI-driven recommendations.

  • Inability to effectively monitor and correct flawed AI models.

Market participants must ensure that AI-driven decisions do not create conflicted outcomes that harm clients.

Specific Guidance for Market Participants:

The CSA outlines key obligations under securities law for registrants utilizing AI systems, emphasizing compliance with National Instrument 31-103 (“NI 31-103”) and CIRO requirements.

Registration Applications and Compliance

Firms applying for or amending their registration must disclose in their filings any use of AI systems that may directly impact registerable services provided to clients. Registrants planning to integrate AI are encouraged to consult with Staff early in the process, allowing for the recommendation of tailored terms and conditions for the firm's registration.

Registered firms must implement controls and oversight to ensure compliance with securities laws and mitigate risks associated with AI use. Given the unique challenges AI presents, firms should develop tailored policies and procedures. Additionally, firms must maintain comprehensive records to demonstrate compliance with regulatory requirements, including KYC and suitability obligations, and ensure AI systems offer sufficient explainability to meet record-keeping standards.

Outsourcing and Conflicts of Interest

Registrants considering AI-powered services must be aware that while support activities such as data processing and report generation can be delegated to third parties, registerable activities, including trade suitability determinations, cannot. Firms remain responsible for all outsourced functions and must conduct due diligence before contracting third-party AI providers. Given the complexities of AI, firms should engage professionals with expertise in both AI technologies and registrant conduct requirements. Additionally, privacy considerations must be taken into account, ensuring that AI-powered third-party services comply with data protection laws and safeguard client information.

Conflicts of interest must be carefully managed when AI systems are used in capital markets. Under NI 31-103, registrants are required to identify and mitigate material conflicts, ensuring that clients’ best interests are upheld. AI systems introduce unique risks, such as biased inputs that may lead to unfair recommendations, favouring proprietary products without due consideration of alternatives, or disadvantaging certain client demographics.

Given the technology-neutral nature of securities regulations, firms must ensure AI-powered decision-making aligns with existing compliance frameworks. To address these risks, firms can implement bias detection tools, analyze statistical correlations between AI inputs and outputs, and use additional oversight mechanisms, including software designed to detect unwanted AI training patterns. Ultimately, firms remain fully accountable for all AI-related processes, whether developed in-house or outsourced, and must maintain rigorous oversight to ensure compliance with regulatory obligations.

1.    Advisers and Dealers

Advisers and dealers must be registered both at the firm level and as individual representatives providing investment advice or trading services. Registration requirements are designed to protect investors who rely on these professionals for financial decisions. If AI systems are used to assist or even drive investment decisions, registrants remain responsible for ensuring compliance with proficiency standards and regulatory obligations. Firms considering AI adoption should consult regulators and rigorously test AI systems before and after implementation to identify potential risks or deficiencies. Any AI usage that impacts investment services must be transparently disclosed to clients, consistent with NI 31-103 and the duty to act fairly, honestly, and in good faith.

Trades and KYC/Onboarding

AI systems are being used to improve trade execution efficiency, replacing traditional rules-based algorithms. However, registrants must maintain financial risk controls and prevent manipulative trading while ensuring AI decisions remain explainable.

In KYC and onboarding, AI can streamline client data collection, but meaningful interactions must still be maintained, whether in person or through digital means. AI-driven client support, such as chatbots, should provide accurate and reliable information.

Client and Decision-Making Support

AI can also assist in decision-making by analyzing investment opportunities, monitoring trends, and alerting registrants to key changes. However, AI-generated recommendations must be verified by human professionals before execution. Limited automated decisions, such as portfolio rebalancing and hedging strategies, may be permitted if subject to human oversight and within defined constraints. Registrants considering such AI use should consult regulators before implementation.

Limited Automated Decisions and PM Duties

In portfolio management, AI may support investment strategies, but it cannot replace human decision-makers in discretionary investment management. Registrants must ensure AI-driven recommendations comply with suitability requirements and regulatory standards. Given these challenges, AI should function as a supporting tool, with final investment decisions remaining the responsibility of human professionals.

2.    Investment Fund Managers

Disclosure Obligations

Investment fund managers (“IFMs”) utilizing AI systems to support a fund’s investment objectives and strategies must assess the extent of disclosure required in offering documents, including the prospectus and ETF facts/fund facts documents. They must clearly state the fund’s fundamental investment objectives and strategies, detailing how AI is integrated into the portfolio management process. If AI is a material investment strategy, such as being featured in the fund’s name or marketing, it must be disclosed as an investment objective subject to Part 5 of NI 81-102. IFMs should clearly define AI use and provide transparent, substantive explanations to avoid misleading claims and ensure investors understand how AI impacts fund operations.

Risk Factors, Fundamental Changes and Sales

IFMs must include clear risk disclosures in offering documents regarding the use of AI systems, ensuring investors understand unique risks such as model drift. If AI becomes a material investment strategy, it may trigger regulatory requirements, including securityholder approval for changes to investment objectives and public disclosure of material changes through press releases and reports.

For sales communications, IFMs must ensure AI-related marketing materials are accurate and not misleading. Statements about AI use should balance potential benefits with risks, following the guidance in Companion Policy 81-102CP. IFMs must have policies and procedures in place to review AI-related claims, ensuring they are truthful, not exaggerated, and aligned with regulatory filings.

AI Indices and Conflicts of Interests

Investment funds that track indices (“Index Funds”) must ensure that AI-generated indices meet specific criteria, including an absence of discretion in the index methodology and transparency regarding composition and rebalancing. If an AI-generated index does not meet these requirements, the fund may be classified as actively managed rather than as an Index Fund.

IFMs must also consider conflict of interest obligations under NI 81-107, which requires an Independent Review Committee (“IRC”) to review and approve or recommend decisions where AI use creates an actual or perceived conflict. IFMs should assess whether AI-related activities require IRC approval before implementation and ensure compliance with applicable conflict of interest regulations.

Our Team Can Help

The CSA’s AI guidance introduces new compliance considerations for registrants, investment fund managers, and advisers. Our team can assist with drafting and submitting comments to ensure your perspectives are heard before the March 31, 2025 deadline. Contact us at info@northstarcompliance.com to ensure your firm is prepared for AI regulation in capital markets.
