In an era defined by digitalisation, artificial intelligence has become a central point of contention. Proponents of oversight argue that such technology should be strictly regulated by governments, while critics warn of the disadvantages of excessive regulation. This essay argues that governmental control is essential, provided that such control remains limited.

On the one hand, the critics' primary objection stems from the erosion of AI's systemic capacity. As control increases, these technologies become more constrained in their actions. Empirical research by Harvard University demonstrated that large language models subjected to constant guidance exhibited measurable declines in the accuracy of their responses. Specifically, ChatGPT produced inaccurate solutions to mathematical questions on the Fibonacci sequence when users prescribed exactly how it should follow instructions. Such outcomes raise concerns that government intervention could limit AI's full potential.
Nevertheless, these risks are mitigated when governmental management is implemented responsibly. If regulators retain oversight of how AI responds, the technology functions as a cohesive tool rather than an unchecked creative agent. In educational contexts, for instance, governments can regulate students' AI usage in classroom settings and tailor restrictions to the purposes for which it is used. Furthermore, governments can monitor unethical AI usage, which may prove valuable in crime prevention. Under such conditions, the benefits of governmental control outweigh its associated risks.
In sum, the impact of AI depends less on its existence than on its application. When regulated responsibly, it serves as a catalyst for progress rather than a source of decline.
