The AI storm is here: Adapt or perish?

Are we heading towards a man versus machine world, that tired pop culture trope? If not, how are we to make sense of AI’s relentless march? In the face of such transformative technologies, the way forward does not lie in collective panic.

What will you do?

To the fervent believer in technology and its powers to make the world a better place, Artificial Intelligence could mean all the things we were told to expect from the purported ‘future fantastic’. To the uninitiated among us, it could be an abstraction of ideas: something formless, potent, and pervasive that powers the machine to do it all, whether finding you faster routes to work, doubling up as a therapist with human-like empathy, predicting system failures in aircraft, or cooking your biryani.

The counter-narrative, meanwhile, has been endorsing caution, arguing that the unregulated adoption of AI technologies will cause irreversible damage. Popular culture has consistently drawn on this doom narrative. Joan is Awful, the opening episode of the new Black Mirror season, deals with AI’s problematic ethical considerations. Here, a streaming platform compiles the details of Joan’s (Annie Murphy) life and, with the help of a quantum computer, produces a daily show that mirrors it. The platform does not seek explicit consent, either from Joan or from Salma Hayek, whose AI-generated likeness plays her in the show.

Outside of imagined futures and tech dystopia, the concerns are immediate. This week, the UN Security Council will meet to discuss the threats AI poses to international peace and security. Secretary-General António Guterres, while acknowledging AI’s significance in realising the Sustainable Development Goals, has called for intervention to ensure that AI does not evolve to a point where it discounts human agency, creating “a monster that nobody will be able to control”.

In the face of transformative technologies and unprecedented innovation, the way forward is in building guardrails and preparing without prejudice, according to professionals who are engaged in the adoption of these technologies.

Responsible adoption is key

In India, calls for state-mandated regulation of AI technologies are gaining traction, but the industry and researchers appear to favour a less confrontational approach: policies that embed the idea of Responsible AI in adoption across sectors. There is consensus on the urgency of issues, including the threat of privacy breaches, but the stakeholders also emphasise the need to nurture these technologies as they evolve and stabilise, and to choose preparedness over all-out resistance.

R Venkatesh Babu, Professor at the Department of Computational and Data Sciences, Indian Institute of Science, highlights the staggering pace at which AI technologies are being adopted across sectors. Research, both institutional and corporate-funded, is weighing in on the scope of these technologies, and the industry is losing no time in applying the findings to products and services. “In the case of research on AI, there is no gap between academia and industry,” he says.

Monitor but don’t over-regulate

As generative AI technologies set off concerns, many of them around the issue of inherent biases, some academics call for a closer monitoring of these technologies to steer them to their best-applied forms, without over-regulating them. “Models like ChatGPT are trained on huge amounts of data. They help in undertaking tasks that are extremely difficult to execute manually. Over a period of time, these data-driven models also get thoughtful — it means that they learn and improve themselves in terms of understanding the data — but the problem is that these systems also carry the biases from the datasets that they work with, and can even amplify these biases,” says Prof Babu.

AI systems can inherit these biases from the under-sampling of the data or from discriminatory datasets. The skewed inferences, when used to assist humans — who themselves come with implicit biases — in making decisions, can lead to misrepresentation based on gender, race, or social markers, and lower the accuracy of the output. 

The discrepancies have consistently appeared in AI applications, in the form of art that reinforces cultural and ethnic stereotypes, in ill-trained algorithms used for job recruitment, in facial recognition systems that misclassify darker-skinned faces, and in clinical diagnoses based on datasets short on patient diversity.
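For a sense of how under-sampling alone can skew a model, consider a minimal sketch using synthetic data and a plain logistic regression rather than any real-world system: a classifier trained on data dominated by one group performs noticeably worse on the under-represented group. The groups, features, and numbers here are illustrative assumptions.

```python
# A minimal sketch (not any specific production system): how under-sampling
# one group in the training data can skew a model's accuracy for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature toy data; the label threshold moves with a group-specific
    shift, standing in for real variation between populations."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is under-sampled.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets for each group: accuracy on the
# under-sampled group B drops close to chance.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```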

Beating the bias

The biases manifest in critical areas like healthcare, where algorithms that work with region-specific data can throw up inconsistent, inaccurate results when they analyse disease prevalence in populations outside those regions. Dr Arjun Kalyanpur, Chief Radiologist and Founder-CEO of Teleradiology Solutions, sees curation and tailoring of data to meet specific requirements as integral to the sustainable adoption of AI technologies.

Teleradiology Solutions has developed an algorithm that powers NeuroAssist, a product that helps distinguish between haemorrhagic and non-haemorrhagic strokes instantaneously, and guides the clinician to determine the course of action even when the radiologist is not on site. MammoAssist, another product developed by the company, studies mammogram data and identifies subtle patterns of early-stage breast cancer.

The autopilot analogy

Dr Kalyanpur underlines the crushing shortage of radiologists in India, where there are only about 20,000, and contends that doctors need all the help they can get. He says AI applications are emerging at the right time and that some of the apprehensions around them can be overstated. “I don’t see AI systems replacing humans as a necessary concern. The workflows will change, with AI taking the form of an intelligent assistant to the radiologist. An aeroplane on autopilot could be a good analogy — the pilot is in control but the autopilot mode enables performing some of the tasks that are mechanical and repetitive in nature,” he says.

Black Mirror showrunner Charlie Brooker recently said that the series, whose previous instalments were entirely set in the not-so-distant future, is not quite the indictment of bad technology it has been interpreted as; it could be about people who are not handling technology well. This fallout of man’s engagement with change has always inspired literature and art.

And thus the cyborg artist

AI is democratising art, setting up platforms for doodlers and amateurs to create images that now constitute a whole new genre. AI systems like Dall-E, which generate realistic images from prompts, have accelerated the shift, even as concerns are raised over ownership (whose art is it, the user’s or the platform’s?) and the absence of an original idea, given that these images are generated from what has already been created. For the professional, however, AI could be the definitive tool to experiment with, in pursuit of higher expression.

“I think of myself as a cyborg artist,” says K K Raghava. In 2018, the multidisciplinary artist, along with his brother Karthik Kalyanaraman, founded 64/1, an art curation and research collective aimed at promoting and building a public understanding of artist-AI collaborations.

“The question to ask is not about AI being good or bad. The question is, do we have that imagination for this country? Artistic expression happens with a certain amount of manipulation, through the artist’s tools. We need artists and innovators to create this alternate imagination. The West always had something we did not have — science fiction,” says Raghava.

Dealing with the inevitable

Bengaluru-based emotion AI startup Entropik has deployed facial coding, eye tracking, and voice AI technologies to help brands understand consumer preferences. Ranjan Kumar, CEO of the company, says Entropik’s multimodal technology interprets happiness, sadness, anger, surprise, and other emotions to provide insights into consumers’ emotional responses. Voice AI, for instance, assesses features like tonality, pauses, and pitch to qualify emotions in speech and grade them as positive, negative, or neutral.
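By way of illustration only, and not Entropik’s actual pipeline, here is a rough sketch of what assessing pitch, pauses, and energy can look like in code: a few speech features extracted with the open-source librosa library and mapped to a coarse positive/negative/neutral grade. The file name and the thresholds are assumptions; a production system would replace the hand-picked rule with a trained multimodal model.

```python
# A minimal, illustrative sketch of grading speech from a few acoustic features.
import numpy as np
import librosa

def grade_speech(path: str) -> str:
    y, sr = librosa.load(path, sr=16000)          # mono audio at 16 kHz

    # Pitch (fundamental frequency) estimate per frame.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch_var = float(np.std(f0))

    # Pauses: fraction of the clip outside the detected non-silent intervals.
    voiced = librosa.effects.split(y, top_db=30)
    voiced_samples = sum(end - start for start, end in voiced)
    pause_ratio = 1.0 - voiced_samples / len(y)

    # Loudness proxy: mean root-mean-square energy.
    energy = float(librosa.feature.rms(y=y).mean())

    # Purely illustrative rule, standing in for a trained model.
    if pitch_var > 40 and energy > 0.05:
        return "positive"
    if pause_ratio > 0.5 or energy < 0.01:
        return "negative"
    return "neutral"

print(grade_speech("sample_call.wav"))            # hypothetical input file
```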

Kumar says the technology allows businesses to go beyond traditional survey-based methods and obtain faster and unbiased insights. “By leveraging AI, companies can make data-driven decisions, tailor their products and services to customer preferences, and create more engaging experiences,” he says.

Even as ChatGPT pushes educational institutions to do away with online examinations, startups are identifying AI-powered solutions to mitigate climate change. Even as a series of copyright infringement lawsuits hits AI platforms, courts are availing themselves of AI-enabled transcription services. “It is crucial to emphasise that AI is not meant to replace human jobs entirely but rather, augment human capabilities and create new opportunities,” says Kumar. That he uses the word “unbiased” to qualify his company’s output is significant. It adds context to the debate, which is, essentially, about imminent, inevitable change and the many ways of processing it.

Raghava sounds ready when he says: “You cannot stop the storm, you can only adapt.”

 

Here's a quick primer

* Artificial Intelligence (AI) is a form of simulated intelligence through which machines and computers are trained to mimic human competencies and execute specific tasks.

* Machine Learning (ML), a part of the larger AI spectrum, involves the use of models that study large amounts of data, learning and evolving over time while improving the accuracy of their output (a minimal code sketch of this idea follows the primer). ML systems are being adopted extensively as the processing of large volumes of complex data becomes increasingly critical, and increasingly beyond human capability.

* Some of the AI applications are run on Deep Learning systems, or layered, human brain-inspired neural networks.

* Tech giants and large corporations are investing heavily in AI technologies. Extensive research is also complementing the adoption of these technologies, which continue to diversify through their subsets and specific areas of application: for instance, dialogue datasets that can train an AI agent, or chatbot, to be more “understanding” when it offers mental health counselling, or eye-movement tracking that better gauges a subject’s engagement levels.

* AI systems have found transformational applications across a range of domains, from coding to art to precision farming. Generative AI — or collectively, systems that create new content from existing data — has had its breakout moment with ChatGPT, a model that responds to the user’s queries by sifting data to generate relevant text.

* The threats posed by AI derivatives like deepfakes (fake or manipulated visual and aural content) are being addressed through efforts to identify and perfect detection technologies. Experts say such constant checks on evolving threats have to be non-negotiable if AI is to realise its powers as an enabler.
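As promised above, here is a minimal sketch of the “learn from data” idea: the same model, shown progressively more training examples, becomes more accurate on examples it has never seen. It uses scikit-learn’s bundled handwritten-digits dataset and a plain logistic regression, both chosen for brevity rather than as a reference to any system mentioned in this article.

```python
# A minimal sketch: accuracy on unseen data improves as the model
# is trained on more examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)               # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])           # train on the first n examples
    print(f"trained on {n:4d} examples -> "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```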

 
