Corporate Governance Around AI by the Independent Director - Some Visible Challenges & Remedies

 Corporate Governance Practice Audit  

* Birendra K Jha, Independent Director, IICA (Ministry of Corporate Affairs); Corporate Governance Practice Audit; CSR Social Impact - CSR Planning & Implementation; Expert in Company Law, SEBI Law, Social & Environmental Law. Email: birendrajha03@yahoo.com


I have audited more than a dozen Annual Reports, ranging from top companies, including IT firms, to top-ranked CPSUs. None shows strong preparedness on corporate governance around AI, even though this oversight falls within the fiduciary duties of the Independent Director. The Directors do not recognise the threats they are inviting; they are waiting for the tsunami to knock at their door.

The basics of AI governance start with employee discipline, and this is missing. Employees have not been strongly warned against misuse of AI. They still frequently upload company data to public AI tools, not realising that the moment sensitive data is uploaded, it leaves the company boundary and becomes accessible around the world in real time. The high-impact threat arriving like a tsunami is unethical insider trading, and nobody will know how the data leaked.

The Directors should not sit on the Board if they do not understand the regulations controlling AI. That is like having knowledge of the typewriter but no skill in using the computer.

Legal Knowledge Covering AI Governance: 

The Independent Directors must exercise corporate governance control over the functioning of AI within a company. Section 149 and Schedule IV of the Companies Act, 2013 mandate Independent Directors to ensure ethical conduct, oversee risk, and safeguard stakeholder interests. AI risks (such as bias, automation errors, and data misuse) now fall within these fiduciary duties. Under Section 134 (Board's Report), where the risk management system is disclosed, Independent Directors must ensure AI risks are included. Under Section 177 (Audit Committee) of the Companies Act, oversight of AI-driven financial reporting and algorithmic decision risks must be disclosed. Under the SEBI (LODR) Regulations, 2015, Regulation 17 (Board Responsibilities) requires strategic oversight of technology risks, including AI; the same risk must be addressed under Regulation 21 (Risk Management Committee); and under Regulation 25, governing this risk properly is a fiduciary duty of the Independent Directors. Independent Directors must therefore remain informed on emerging AI technologies and master the Digital Personal Data Protection Act, 2023 (India), the OECD AI Principles, and the EU AI Act. AI is no longer just a technology issue; it is a governance responsibility. The Independent Director must evolve from a passive reviewer into an active AI risk supervisor.

The Independent Directors Must Control the Following Poor Governance Practices:

Data Exposure Through AI Pipelines: Weak AI systems often pull data from multiple internal sources (finance, strategy, M&A), and the threat grows where strict access controls are lacking. As a result, sensitive information (earnings, mergers, orders) becomes visible to unintended employees, vendors, or insider trading agents.

Poor Access Governance: Strong governance is needed here because (a) without strong controls, AI dashboards can expose price-sensitive data, and (b) where governance is poor, role-based access is not strictly enforced. This poor culture leaks data. For example, if role-based access is not enforced, a junior employee may see unpublished financial results and trade on them.
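The "need-to-know" principle behind role-based access can be sketched in a few lines. This is an illustrative sketch only; the roles, resource names, and policy table are hypothetical, not any real product's API. The point is that access must be denied by default and granted only where explicitly listed.

```python
# Hypothetical sketch of a role-based access check ("need-to-know").
# Roles and resource names are illustrative.

ACCESS_POLICY = {
    "cfo": {"quarterly_results", "merger_docs"},
    "auditor": {"quarterly_results"},
    "junior_analyst": set(),  # no access to unpublished price-sensitive data
}

def can_access(role: str, resource: str) -> bool:
    """Return True only if the role is explicitly granted the resource."""
    return resource in ACCESS_POLICY.get(role, set())

# A junior employee is denied unpublished results by default:
assert can_access("cfo", "quarterly_results") is True
assert can_access("junior_analyst", "quarterly_results") is False
```

An unknown role falls through to an empty set, so new hires or misconfigured accounts get nothing until a grant is deliberately added.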

Model Training Data Leakage: AI models trained on Board documents, financial projections, or deal pipelines can indirectly reveal insider information through their outputs, such as reports and summaries.

Third-Party AI Vendors: This is an even bigger threat area. When companies use external AI tools, data is sent to the vendor's external servers. The vendor may store or reuse that data, enabling insider trading. There is a large risk of confidential company information leaking through such vendor arrangements.

Lack of Audit Trails: Weak AI governance means there is no log of who accessed what data and no traceability. As a result, insider trading becomes hard to detect.
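The missing traceability is cheap to add: every read of sensitive data should leave a who/what/when record that can later be matched against trades. A minimal sketch, assuming an in-memory log for illustration (a real deployment would write to an append-only store):

```python
# Illustrative sketch: record who accessed what data, and when, so that
# unusual access can later be cross-checked against trading activity.
import datetime

audit_log = []

def access_data(user: str, dataset: str):
    """Log the access before returning any data to the caller."""
    audit_log.append({
        "user": user,
        "dataset": dataset,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # ... fetch and return the dataset here ...

access_data("analyst1", "q3_earnings_draft")
assert audit_log[0]["user"] == "analyst1"
assert audit_log[0]["dataset"] == "q3_earnings_draft"
```

Logging before the data is returned, rather than after, ensures a crash or interruption cannot produce an unlogged access.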

Data Leaked Through APIs and the Internet: AI systems communicate via APIs. With unsecured APIs or weak authentication, hackers can easily extract data.
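One standard way to keep an internal API from serving unauthenticated callers is to require a signed request. The sketch below is illustrative, using only the Python standard library; the shared key and endpoint path are hypothetical.

```python
# Illustrative sketch: authenticate an internal API request with an HMAC
# signature so an unauthenticated caller cannot pull data.
import hmac
import hashlib

SHARED_KEY = b"rotate-me-regularly"  # in practice, held in a secrets manager

def sign(payload: bytes) -> str:
    """Compute the HMAC-SHA256 signature of a request payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest is constant-time, resisting timing attacks
    return hmac.compare_digest(sign(payload), signature)

good = sign(b"GET /v1/earnings")
assert verify(b"GET /v1/earnings", good) is True
assert verify(b"GET /v1/earnings", "forged-signature") is False
```

A forged or tampered request fails verification, so the server can reject it before any data leaves the system.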

Access Accidents: This is the most common threat. If public access is enabled accidentally, confidential data leaks. Many financial datasets are exposed through accidental public-access configuration, deliberate leaks, or misconfigured cloud storage.

Phishing & Social Engineering: Employees are tricked into giving up credentials, which allows attackers to access AI dashboards or leak data.

Data Transmission Without Encryption: Data sent by email or without HTTPS/encryption can be intercepted (a Man-in-the-Middle attack).
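The technical control here is to refuse any transmission channel that is not TLS with certificate verification. A minimal sketch using only Python's standard library shows that the default secure context already enforces both requirements, so sensitive payloads should only ever travel through such a context:

```python
# Minimal sketch: enforce TLS with certificate verification before any
# sensitive payload leaves the company network (standard library only).
import ssl

context = ssl.create_default_context()  # verifies server certificates by default

# Both properties must hold, or a Man-in-the-Middle can read the payload:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

Any client (urllib, http.client, smtplib) pointed at this context will refuse to connect to a server whose certificate does not check out, which is exactly the behaviour a transmission policy should mandate.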

Shadow AI Usage (a big risk): Employees use tools like ChatGPT or Google Gemini and upload confidential files. The data leaves the company boundary and can be accessed anywhere around the world.

Malware / Spyware: Installed via email attachments or downloads, it captures keystrokes, files, and screenshots.

Proactive Governance Control Measures the Independent Directors Should Take:

  • Formation of an AI Governance Committee: like the Audit Committee or Risk Committee, the AI Committee is an important committee that oversees AI-related corporate governance
  • The Independent Directors shall define an AI Governance Policy
  • Classify Unpublished Price Sensitive Information (UPSI)
  • Restrict AI access to UPSI
  • Mandatory insider trading compliance training under the policy
  • Periodic AI system audit
  • Role-Based Access Control (RBAC) based on “Need-to-know” principle
  • Multi-Factor Authentication (MFA)
  • End-to-end encryption
  • Data masking for sensitive fields
  • Tokenization of financial data
  • Prevent AI from exporting raw confidential data
  • Third-Party Risk Management: vendor due diligence whenever AI is sourced from a vendor
  • Data processing agreements; ensure no data retention
  • Log every data access
  • Match unusual trades against access logs
  • Employee-Level Controls: employees must be strictly warned against uploading company data to public AI tools, backed by a written policy
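Of the technical measures above, data masking for sensitive fields is easy to illustrate. The sketch below is hypothetical; the field names and masking rule are illustrative, but the idea is that a record is masked before it ever reaches an AI pipeline, so even a leaked output reveals little.

```python
# Hedged sketch: mask sensitive fields before a record enters an AI
# pipeline. Field names ("pan", "revenue") are illustrative only.

def mask(value: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters with '*'."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

record = {"pan": "ABCDE1234F", "revenue": "123456789"}
masked = {field: mask(value) for field, value in record.items()}

assert masked["pan"] == "******234F"
assert masked["revenue"] == "*****6789"
```

Tokenization goes one step further, replacing the value with an opaque token and keeping the real value in a separate vault, so the AI system never holds the original at all.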

Conclusion: 

The Independent Director should start with the basics. Wide-scale employee awareness of the misuse of AI platforms is needed. A common practical example I have seen in several companies: the Finance Department uses a public AI system to aggregate quarterly results before their public release. Once loaded into a public AI system, the financial data leaves the company boundary and is transmitted around the world in real time. The real risk is not "AI intelligence"; the real risk is data governance failure around AI.

                                     


