Is AI the new Excel macro plague of the '90s?

The 1990s saw an innovative and liberating tool take hold in banks and financial institutions worldwide: Excel macros. These compact programmes, embedded within spreadsheets, promised automation and efficiency. Yet, as time would tell, they also posed significant security risks. As we move into the era of artificial intelligence, it is crucial to look back at these lessons so we do not repeat the same mistakes. AI governance matters in financial institutions because it underpins the responsible and ethical use of artificial intelligence, and one effective way to learn about it is to study the practices and controls that were eventually put in place for Excel macros. By understanding how financial institutions came to govern macros, we can gain valuable insight into the challenges of governing AI systems and apply that knowledge to build governance frameworks that ensure transparency, accountability, and compliance with regulations.

Excel Macros: Power and Pitfalls

Excel macros, essentially mini-programmes running within spreadsheets, automate repetitive tasks. Financial institutions embraced them for their potential to increase efficiency and reduce human error, but their widespread use led to several unforeseen issues. Chief among them was the potential for errors or malicious code hidden in macros, which could lead to data breaches or financial losses. Studying the practices and controls that grew up around macros therefore offers valuable lessons for mitigating similar risks and ensuring the safe and effective use of AI systems in the financial industry.

Lack of Visibility and Control

One major concern was the difficulty of tracking and controlling the use of macros. Pinpointing and mitigating potential security vulnerabilities became an immense challenge in a vast network of interconnected cells. As with Excel macros, a lack of visibility and control is also a challenge with AI systems. AI systems often rely on complex algorithms and machine learning models that are not easily explainable or transparent, which makes it difficult for financial institutions to identify and address biases or errors in the system's decision-making. Ensuring that the AI system's decisions align with regulatory requirements and ethical standards is equally challenging. It is therefore crucial for financial institutions to implement robust monitoring and auditing mechanisms to maintain visibility and control over their AI systems.
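To make that last point concrete, here is a minimal sketch of one way such an audit mechanism could work: every decision a model makes is appended to a log that risk and audit teams can review later. It is illustrative only; the `credit_score` function, the log location, and the recorded fields are hypothetical assumptions, not any institution's actual implementation.

```python
import json
import functools
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_decision_audit.jsonl")  # hypothetical audit-trail location

def audited(model_name: str, model_version: str):
    """Decorator that records every model decision for later review."""
    def decorator(predict_fn):
        @functools.wraps(predict_fn)
        def wrapper(features: dict):
            decision = predict_fn(features)
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "version": model_version,
                "inputs": features,
                "decision": decision,
            }
            # Append-only JSON Lines file so auditors can replay decisions.
            with AUDIT_LOG.open("a", encoding="utf-8") as fh:
                fh.write(json.dumps(record) + "\n")
            return decision
        return wrapper
    return decorator

@audited(model_name="credit_risk", model_version="1.0.0")
def credit_score(features: dict) -> str:
    # Hypothetical stand-in for a real model's prediction logic.
    return "approve" if features.get("income", 0) > 30000 else "refer"

if __name__ == "__main__":
    print(credit_score({"income": 45000, "applicant_id": "A-123"}))
```

In practice the same idea would be wired into a model-serving platform rather than a decorator, but the principle of capturing inputs, outputs, and model version for every decision is the same.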

Inadequate Training

Despite their financial expertise, many employees were not trained to recognise the security risks associated with macros, and this lack of understanding led to misuse and the unintentional exposure of sensitive data. Inadequate training poses an analogous problem for AI systems. Without proper training, an AI system may not be equipped to recognise and mitigate biases, leading to unfair or discriminatory decision-making, and it may produce errors and inaccuracies in its predictions and recommendations, with significant financial consequences. Financial institutions must therefore invest in comprehensive, ongoing training programmes so that AI systems are consistently updated and optimised for accuracy and fairness.

Lack of Standardisation

Furthermore, the absence of a standardised process for managing macros led to compatibility issues and obstacles in security patching. Without a unified system, sharing, collaboration, and addressing vulnerabilities became far harder. The same lack of standardisation is a challenge financial institutions face when implementing AI systems: without standardised protocols and guidelines, there is a risk of inconsistent and unreliable results across different AI platforms. This can hinder effective decision-making and create confusion among the system's users. It is therefore crucial for financial institutions to work with regulatory bodies and industry experts to establish common standards and best practices for the implementation and use of AI in the financial sector.

Lessons Learned: Governance and Control Measures

In response to the challenges posed by Excel macros, banks and financial institutions began implementing a range of governance and control measures. Employees received proper training and education on the security risks associated with macros, and standardised processes and protocols for managing macros helped mitigate compatibility issues and enable efficient security patching. With these governance and control measures in place, organisations became better able to protect sensitive data whilst minimising the risk of unintentional exposures.

The “No Macro” Policy

Some organisations took the stringent step of banning macros altogether. A "No Macro" policy strictly prohibited their use in any financial process, aiming to eliminate the associated risks and ensure a more secure environment for data processing, albeit at the cost of potential efficiency gains. In practice, the policy pushed employees towards alternative solutions, such as specialised software or custom applications, which merely shifted the problem from macros to non-standard software packages and in turn became a support and regulatory nightmare.
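As an illustration of how such a ban might be policed, the short sketch below sweeps a shared drive for macro-capable workbook formats so that policy violations can be flagged. The share path is a hypothetical placeholder, and a file's extension only indicates that it can carry macros, not that it actually does.

```python
from pathlib import Path

# Workbook formats that can carry VBA macros; the share path is hypothetical.
MACRO_EXTENSIONS = {".xlsm", ".xlsb", ".xls", ".xla", ".xlam"}
SHARED_DRIVE = Path("/mnt/finance_share")

def find_macro_workbooks(root: Path) -> list[Path]:
    """Return every file under `root` whose format can contain macros."""
    return [p for p in root.rglob("*") if p.suffix.lower() in MACRO_EXTENSIONS]

if __name__ == "__main__":
    for workbook in find_macro_workbooks(SHARED_DRIVE):
        print(f"Policy violation: macro-capable workbook found at {workbook}")
```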

Implementing a Macro Approval Process

Other institutions opted for a more balanced approach: a macro approval process designed to retain the benefits of macros while controlling their risks. A designated team, typically led by a security administrator, reviewed and vetted every macro before it could be deployed, ensuring that only safe, necessary, and properly reviewed macros were in use. This allowed for better control and oversight while still realising the potential benefits of macros, and it minimised the risk of executing unauthorised or malicious code.
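One way to picture such an approval process is an allow-list keyed on a cryptographic hash of a macro's exported source code: only code that the review team has vetted and registered may run, and any later edit invalidates the approval. The sketch below is a minimal, hypothetical illustration; a real deployment would hook into Office's trust and signing mechanisms rather than a standalone registry file.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical allow-list maintained by the security team.
REGISTRY = Path("approved_macros.json")

def fingerprint(vba_source: str) -> str:
    """Hash the exported VBA source so any change invalidates the approval."""
    return hashlib.sha256(vba_source.encode("utf-8")).hexdigest()

def approve(name: str, vba_source: str, reviewer: str) -> None:
    """Record a reviewed macro in the allow-list."""
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    registry[fingerprint(vba_source)] = {"name": name, "reviewer": reviewer}
    REGISTRY.write_text(json.dumps(registry, indent=2))

def is_approved(vba_source: str) -> bool:
    """Check whether this exact macro source has been vetted."""
    if not REGISTRY.exists():
        return False
    return fingerprint(vba_source) in json.loads(REGISTRY.read_text())

if __name__ == "__main__":
    macro = 'Sub Refresh()\n    Range("A1").Value = Now\nEnd Sub'
    approve("RefreshTimestamp", macro, reviewer="security-admin")
    print(is_approved(macro))         # True: vetted code
    print(is_approved(macro + " '"))  # False: any modification voids the approval
```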

Employee Training

Realising that many of the issues stemmed from a lack of understanding, institutions introduced training programmes that educated employees on the risks associated with macros and how to use them safely. Equipping employees with this knowledge reduced the chances of accidental misuse or security breaches, and the programmes emphasised the importance of regularly updating and patching software so the latest security measures were in place. Employee training thus played a crucial role in creating a culture of macro awareness and responsible usage within institutions.

Standardising Macro Management

Banks started developing standard procedures for managing macros to address compatibility and security patching issues. Standardisation eased collaboration, sharing, and troubleshooting across spreadsheets while simplifying the process of patching security vulnerabilities.

While security was paramount, Excel macros also led to other problems. They had the potential to corrupt data if used incorrectly. They could cause performance issues, slowing down Excel operations, particularly if the macros were complex or poorly written.

Supportability was another issue, with macros often making it difficult to troubleshoot problems within spreadsheets. Beyond security, then, companies must weigh these other macro issues when putting standardised processes and procedures in place. One such issue is scalability: standardised processes need to accommodate growth and expansion.

Companies must also consider the impact of globalisation, as standardised processes may need to be adapted to different cultural contexts and regulatory environments. Lastly, companies must be mindful of the potential resistance to change from employees and proactively address any concerns or challenges that may arise during the implementation process.

Towards AI: Applying Lessons from Excel Macros

As we increasingly integrate artificial intelligence into our financial institutions, learning from our experiences with Excel macros is imperative. The governance and control measures that helped manage the risks of Excel macros can also guide us in controlling AI applications.

AI presents similar risks, including a lack of visibility and control, inadequate understanding among users, and potential standardisation issues. If poorly managed, AI can also lead to data corruption, performance issues, and supportability problems, much like Excel macros.

A strong focus on governance and control must go hand in hand with the adoption of AI to ensure we do not encounter the same issues. This can be achieved through enforcing AI usage policies, implementing AI approval processes, standardising AI management practices, and comprehensive training on AI risk and usage.
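Carrying the macro approval idea across to AI, a governance gate might refuse to deploy a model until its record shows that the required checks have been completed. The sketch below is a simplified, hypothetical illustration of that pattern; the specific fields and checks are assumptions, and a real framework would draw them from the institution's own policies and regulatory obligations.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Governance metadata an institution might require before deployment."""
    name: str
    version: str
    owner: str
    training_data_documented: bool = False
    bias_review_passed: bool = False
    approved_by_risk: bool = False
    usage_policy: str = ""

def deployment_blockers(record: ModelRecord) -> list[str]:
    """Return the unmet governance requirements (an empty list means deployable)."""
    blockers = []
    if not record.training_data_documented:
        blockers.append("training data lineage not documented")
    if not record.bias_review_passed:
        blockers.append("bias and fairness review not passed")
    if not record.approved_by_risk:
        blockers.append("not signed off by the model risk function")
    if not record.usage_policy:
        blockers.append("no approved usage policy attached")
    return blockers

if __name__ == "__main__":
    candidate = ModelRecord(name="credit_risk", version="2.1", owner="retail-lending",
                            training_data_documented=True)
    issues = deployment_blockers(candidate)
    if issues:
        print("Deployment blocked:", "; ".join(issues))
    else:
        print("Model cleared for deployment")
```

The value of such a gate lies less in the code than in making the governance requirements explicit, machine-checkable, and auditable.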

As AI continues to evolve and become an integral part of financial institutions, vigilance and governance must grow alongside it. By learning from the past, we can make sure the future of AI is secure and beneficial for all.

While AI has tremendous potential to revolutionise the financial sector, we must proceed cautiously. As the story of Excel macros teaches us, with power comes responsibility. And this responsibility lies in robust governance, stringent control measures, and comprehensive education to ensure AI serves us rather than poses a risk to us.
