Effortless transitions have occurred over the centuries, and if Shakespeare were miraculously to show up in this century, he would question which planet he was on. Einstein might seem unfazed at first sight, but inwardly he would be in awe of the transformations that have occurred since his time, as would every other great figure credited with one invention or discovery or another.
The revolution of the past decade has largely revolved around computing and technology which, though products of man's ingenuity, now dictate the pace at which humans must keep up.
The gnawing dilemma has always been whether AI, left unchecked, will take over the world's economy, businesses, nations' militaries, and the ultra-corporations which once stuck with staple pins and fax machines, ultimately biting the hand that first fed it and gave it expression among men. In spite of this ambivalence, some experts acknowledge its dangers but are quick to point to the ability of those who manage these technologies to rein them in where necessary. But to what extent?
Journeying Through AI Technology
The need for ease and comfort has been man's greatest bane. It has led him to go to any length and compromise on many things to achieve it; but once the expectations are met, what else matters? Computing changed the world, no doubt: twenty years ago, many jobs did not require computers. Now, most require them for key functions, or even to exist. And computing itself is changing rapidly. The advent of machine learning and artificial intelligence may change humanity's relationship with technology forever, and is expected to have important consequences for the economy and society of the near future.
Until recently, computing relied on giving computers explicit instructions. The new generation of machine learning offers a different approach to achieving AI: it moves beyond human direction, as computers now derive their own rules from data rather than following instructions written by a person. At present, little government oversight exists for this emerging technology.
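To make the contrast concrete, here is a minimal, purely illustrative sketch: the first function follows an instruction a human wrote, while the second derives its own decision rule (a threshold) from labeled examples. The spam scenario and all data are invented for this example.

```python
def hand_coded_spam_check(message: str) -> bool:
    # Classical computing: a human writes the rule explicitly.
    return "free money" in message.lower()

def learn_threshold(examples):
    # Machine learning (minimal sketch): the program derives its own
    # rule -- the exclamation-mark count that best separates spam
    # from non-spam in the labeled training examples.
    best_t, best_acc = 0, 0.0
    for t in range(10):
        correct = sum((msg.count("!") > t) == is_spam
                      for msg, is_spam in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

examples = [
    ("Win big!!!", True),
    ("Act now!!!!", True),
    ("Meeting at noon", False),
    ("See you tomorrow!", False),
]
threshold = learn_threshold(examples)  # the learned rule: count > 1
```

The point is not the toy rule itself but that no human wrote it; the program inferred it from the data it was shown.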
Generative AI, Organizational Priority
In machine learning, a computer learns what to do without explicit instruction from a human. Artificial intelligence, or AI, is the simulation of human intelligence in machines: teaching a computer to think, learn, and perform tasks like a human.
Priorities are necessary to streamline a productive workspace, and generative artificial intelligence has rapidly become an organizational priority. Following its launch in late 2022, ChatGPT reached 100 million active users in less than two months, and many executives now consider managing AI's impact a leadership priority. Notwithstanding, the adoption of GenAI takes time and requires proper stewardship.
Acknowledging Biases in Generative AI on Organizations
With the progressive ways companies are using AI, experts are raising eyebrows over the extent to which human biases have made their way into these systems. According to IBM, when discriminatory data and algorithms are baked into AI models, the models deploy biases at scale and amplify them. This can have adverse effects on a company's reputation and even its growth potential.
IBM conceded that although companies are motivated to tackle these biases, doing so may prove difficult: eliminating systemic racial and gender bias has proven difficult in society at large, and eliminating the same in AI is nearly impossible.
Moreover, using flawed training data can result in algorithms that repeatedly produce errors or unfair outcomes, or even amplify the bias inherent in that data. Sectors that suffer such biases include healthcare, where underrepresentation of women or minority groups in the data can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than for white patients.
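One way such skew surfaces is in accuracy that differs by group, so a basic audit simply computes accuracy separately per group and compares. The sketch below is illustrative only; the group labels and predictions are invented.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += predicted == actual
    return {g: hits[g] / totals[g] for g in totals}

# Invented audit data: the model is right 3/4 times for group_a
# but only 1/4 times for group_b -- a disparity worth investigating.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(accuracy_by_group(records))  # {'group_a': 0.75, 'group_b': 0.25}
```

A gap like this does not by itself prove bias, but it is the kind of signal that should trigger a closer look at how each group is represented in the training data.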
Similarly, issues with natural language processing algorithms can produce biased results within applicant tracking systems. For instance, Amazon stopped using a hiring algorithm after finding it favored applicants who used words like “executed” or “captured”.
Additionally, AI-powered predictive policing tools used by some organizations in the criminal justice system are supposed to identify areas where crime is likely to occur. However, they often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities.
Managing Risks of Generative AI
Undoubtedly, the potential risks of Generative AI are not confined to individual organizations or sectors; they extend into economies and societies at large. This necessitates coordinated responses from businesses, governments, and individuals alike. A collective, multi-stakeholder approach is crucial to address the societal and economic implications of AI.
A business using Generative AI technology in an enterprise setting is different from consumers using it for private, individual use. Businesses need to adhere to regulations relevant to their respective industries.
The all-encompassing challenge is to complement the progressive drive of Generative AI with a comprehensive and proactive approach to risk management. On one hand, the technology offers vast transformative potential; on the other, it brings a spectrum of strategic considerations for business and society, from ethical use and social impact to legal frameworks and security measures.
Organizations need a clear and actionable framework for using Generative AI and for aligning their Generative AI goals with their business goals, one which accounts for how Generative AI will impact sales, marketing, commerce, service, and IT jobs. To this end, Harvard Business Review suggests some salient points for managing the risks posed by Generative AI.
Accuracy: The duty of checking and cross-checking authenticity cuts across every human endeavour. Likewise, organizations need to be able to train AI models on their own data to deliver verifiable results that balance accuracy and precision. In the same breath, it is important to communicate when there is uncertainty in a Generative AI response and to enable people to validate it.
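One way to operationalize that communication, assuming the model exposes some confidence score, is to flag low-confidence responses for independent validation. The function name and the 0.8 threshold below are hypothetical, chosen only to illustrate the pattern.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tune per application

def respond(answer: str, confidence: float) -> str:
    # Pass confident answers through; flag uncertain ones so the
    # user knows to validate them independently.
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return (f"{answer}\n[Low confidence ({confidence:.0%}) "
            "-- please verify independently.]")
```

For example, `respond("Paris", 0.95)` returns the answer as-is, while `respond("Paris", 0.4)` appends the verification notice.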
Plying the Safety Route: The issue of bias will not go away while a human element feeds the AI. As such, mitigating bias, toxicity, and harmful outputs by conducting bias and robustness assessments must always be a priority in AI.
HBR noted that organizations must protect the privacy of any personally identifying information present in the data used for training to prevent potential harm. Further, security assessments can help organizations identify vulnerabilities that may be exploited by bad actors.
Candour in Data Gathering: When collecting data to train and evaluate models, organizations should respect data provenance and ensure there is consent to use that data, for instance by leveraging open-source and user-provided data. And when autonomously delivering outputs, it is essential to be transparent that an AI created the content; this can be done through watermarks on the content or through in-app messaging.
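Both ideas can be sketched in a few lines: a consent check at data-collection time, and an in-app transparency label on generated output. The record fields and label text below are assumptions made purely for illustration.

```python
def filter_consented(records):
    # Keep only records whose provenance includes explicit consent.
    return [r for r in records if r.get("consent") is True]

def label_ai_output(text: str) -> str:
    # In-app transparency: mark autonomously generated content as such.
    return text + "\n[This content was generated by AI.]"

# Invented example: one consented record out of three enters training.
records = [
    {"text": "a", "consent": True},
    {"text": "b", "consent": False},
    {"text": "c"},  # consent unknown -- excluded by default
]
training_set = filter_consented(records)
```

Excluding records with unknown consent by default, rather than including them, is the conservative choice the paragraph above argues for.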
Integrating Generative AI: For all that, most organizations will integrate Generative AI tools rather than build their own. As a consequence, HBR advises companies to train Generative AI tools using zero-party data (data that customers share proactively) and first-party data, which they collect directly. Strong data provenance is key to ensuring models are accurate, original, and trusted. Relying on third-party data, or information obtained from external sources, makes it difficult to ensure that output is accurate.
Meanwhile, “AI is only as good as the data it’s trained on. Models that generate responses to customer support queries will produce inaccurate or out-of-date results if the content it is grounded in is old, incomplete, and inaccurate. This can lead to hallucinations, in which a tool confidently asserts that a falsehood is real. Training data that contains bias will result in tools that propagate bias”.
Companies must review all datasets and documents that will be used to train models, and remove biased, toxic, and false elements. This process of curation is key to principles of safety and accuracy.
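A first, deliberately simplified pass at such curation might screen documents against a blocklist; real pipelines combine trained classifiers with human review. The blocklist terms below are placeholders, not real entries.

```python
# Hypothetical, highly simplified blocklist for illustration only.
BLOCKLIST = {"slur_example", "known_falsehood"}

def curate(documents):
    # Drop any training document containing a blocklisted term,
    # keeping the rest for model training.
    kept = []
    for doc in documents:
        words = set(doc.lower().split())
        if words & BLOCKLIST:
            continue  # remove biased, toxic, or false elements
        kept.append(doc)
    return kept
```

For example, `curate(["clean text", "contains slur_example today"])` keeps only the first document.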
The Human Factor
Humans are the custodians of the earth and its management, and must not sell their birthright to machines bereft of their intricate and innate management skills. Just because something can be automated does not mean it should be, and Generative AI tools are not always capable of understanding emotional or business context, or of knowing when they are wrong or doing damage.
Humans need to be involved to review outputs for accuracy, suss out bias, and ensure models are operating as intended. More broadly, Generative AI should be seen as a way to augment human capabilities and empower communities, not replace or displace them.
Correspondingly, companies play a critical role in adopting Generative AI responsibly and in integrating these tools in ways that enhance, not diminish, the working experience of their employees and their customers.
Certainly, AI is a rapidly evolving field, and it is important to recognize its relevance and stay up to date on the latest trends and technologies. Companies comprising astute and diligent team members must attend conferences, participate in online communities, and pursue ongoing education to stay on the cutting edge of AI. Staying ahead must be the first port of call for any organization that does not wish to be overtaken by its own ingenuity.