The current era sees extraordinary reliance on technology, primarily due to the advent of Artificial Intelligence (AI). We rely on AI tools like ChatGPT, or applications built on AI architectures, to ease our daily, mundane tasks. However, this reliance raises concerns about how our data is used and whether it leads to bias. I believe the use of such algorithms should be monitored by humans, and a transparent decision-making system should be put in place.
To enhance trust and confidence in AI tools, the mechanism by which decisions are made should be clear rather than opaque. Explainability is another important aspect to consider when evaluating such systems. For example, in the medical domain, doctors do not rely on AI to make decisions; instead, they use it to aid their decision-making process. If a model's outputs are interpretable, a doctor can rely on the tool more confidently and, in turn, achieve the desired efficiency. Even in less critical applications, such as education, Large Language Models are widely used for learning purposes. Instead of making it seem like magic when the model answers a user's questions in seconds, it is better to guide the user step by step through the process the model followed to arrive at the result, as demonstrated by the 'Thinking' feature added to ChatGPT.
There are also people who believe that the most important aspects of AI-driven technology are its efficiency and objectivity, and that additions such as explainability or human oversight should not disrupt these features. This may be true for operations less susceptible to bias, or for common, widely understood tasks. However, for specialized, uncommon problems, one may not be able to trust AI blindly and needs explainability to arrive at the truth.
