How to Be a Good Actor in the Race for AI Superintelligence Adoption

“Change takes much longer than anticipated, and then it happens faster than you thought.” I have taken some liberty in adapting economist Rüdiger Dornbusch’s Overshooting Model to Generative AI, because his observation is as relevant to technology as it is to economics. At the rate at which Generative AI is progressing, we could have AI superintelligence by 2027. That is much faster than most expect. Even if this forecast is off by a few years, the implications are profound.

Can AI superintelligence really be around the corner? In a 165-page paper called Situational Awareness: The Decade Ahead (June 2024), Leopold Aschenbrenner reasons that “GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years…we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.” Further, says Aschenbrenner, “AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems.” This means Generative AI will play a much more significant role in business, faster than we think.

What is Responsible AI (RAI)?

How fast is “fast”? Within a few months, businesses without a Generative AI strategy will struggle to keep pace with the competition. And if they do not begin to support Responsible AI (RAI) now, their strategy and investments in Generative AI will, sooner rather than later, be wasted.

Broadly, RAI covers the governance, ethics, morals, and legal values that go into designing, developing, and deploying beneficial AI. Everyone agrees that RAI should aim to mitigate the risk of adverse outcomes on society and ensure privacy to build trust in technology.
The idea of RAI has a rich history. In 1950, mathematician, computer scientist, and logician Alan Turing proposed the Turing Test, setting a benchmark for machines to exhibit human-like intelligence. That same year, Isaac Asimov’s Three Laws of Robotics, first introduced in his 1942 short story “Runaround,” were collected in his book I, Robot, establishing early guardrails for intelligent machines. This history shows that responsible considerations have accompanied AI development from the very beginning.

However, RAI has not kept pace with the rapid breakthroughs in AI technology. A survey conducted by the MIT Sloan Management Review showed that a quarter of respondents had experienced AI failure, ranging from lapses in technical performance to outcomes that put individuals and communities at risk. Poor understanding and implementation of RAI are slowing down the ROI on AI investments. Enterprises need—but currently lack—the RAI expertise to create a framework around RAI principles, policies, tools, and implementation processes.

The business case for RAI

Staying on top of RAI requires specialists; it is a new and complex vocation. At last count, AlgorithmWatch’s AI Ethics Guidelines Global Inventory listed 167 guidelines. Some, such as IEEE’s Ethically Aligned Design, run to 290 pages. The comments in the consultation for the European Union’s High-Level Expert Group (HLEG) on AI’s Ethics Guidelines run into hundreds of pages.

Meanwhile, the global AI regulatory landscape is gaining momentum with the EU AI Act, Canada’s Artificial Intelligence and Data Act, Singapore’s Model AI Governance Framework, and a slew of other such acts across nations. The penalties for regulatory breaches are becoming stiff. The most recent version of the EU AI Act proposes fines for non-compliance of up to €35,000,000 or 7% of worldwide annual turnover for the preceding financial year, whichever is higher. The New York City Law on Automated Employment Decision Tools carries a penalty of up to $1,500 per violation, per user, per day for non-compliance. It should be evident that enterprises that do not think about RAI are accumulating technical debt that could devastate their business.
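The “whichever is higher” rule can be made concrete with a short calculation. The €35 million cap and 7% rate come from the draft EU AI Act text cited above; the turnover figures below are hypothetical, used only to illustrate how the two thresholds interact:

```python
def eu_ai_act_max_fine(annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious breaches under the draft EU AI Act:
    up to EUR 35,000,000 or 7% of worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A firm with EUR 200M turnover: 7% is EUR 14M, so the EUR 35M floor applies.
print(eu_ai_act_max_fine(200_000_000))    # 35000000.0

# A firm with EUR 1B turnover: 7% is EUR 70M, which exceeds the floor.
print(eu_ai_act_max_fine(1_000_000_000))  # 70000000.0
```

For large enterprises, the percentage-of-turnover term dominates, which is precisely why the exposure scales with the size of the business.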

Becoming a good actor

As enterprises deploy Generative AI solutions across the globe, they become subject to a growing patchwork of local regulations, increasing the need for RAI.

The ideas that create the foundation for RAI are simple to understand. Most of these naturally focus on data and its use—because data is the basis of large language models (LLMs), which are at the heart of Generative AI.

Among the foundational ideas are:

  • obtaining informed consent when collecting and consuming data from its owners;
  • using high-quality, debiased data to train AI models;
  • maintaining transparent and explainable AI models;
  • keeping data owners informed about how their data is used and the associated risks and benefits;
  • maintaining privacy and disclosing who the data is shared with;
  • giving data owners the ability to opt in or out of AI-powered programs; and
  • adhering to local laws and regulatory requirements.
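A minimal sketch of how the consent and opt-out principles above might surface in a data pipeline. The record fields and helper names here are illustrative assumptions, not the API of any specific framework:

```python
from dataclasses import dataclass, field

@dataclass
class DataRecord:
    owner_id: str
    consented: bool   # informed consent was obtained at collection time
    opted_out: bool   # the owner later exercised an opt-out
    payload: dict = field(default_factory=dict)

def eligible_for_training(record: DataRecord) -> bool:
    """A record may feed an AI model only if its owner gave informed
    consent and has not opted out of AI-powered programs."""
    return record.consented and not record.opted_out

records = [
    DataRecord("a", consented=True,  opted_out=False),
    DataRecord("b", consented=True,  opted_out=True),
    DataRecord("c", consented=False, opted_out=False),
]

# Only records that pass the consent gate reach the training set.
training_set = [r for r in records if eligible_for_training(r)]
print([r.owner_id for r in training_set])  # ['a']
```

In practice this gate would sit alongside audit logging and disclosure mechanisms, so that who accessed which data, and under what consent, remains traceable.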

Beyond these principles, a good actor in the RAI context is one who, first, fosters a productive relationship with the technology and, second, uses AI for its intended and stated purpose while remaining aware of the limitations of the tools being used.

Without a doubt, RAI has a long way to go—but it will evolve into one of the most important investments a business can make in the age of exponential change at the hands of Generative AI. The time to begin the investment is now.


Author:

Sandeep Kumar,
Sr. VP & Head Global Consulting
