2023.04 The Taiwan Banker NO.160 / By Zhang Kai-jun
Chatting about the future of ChatGPT
ChatGPT is an artificial intelligence (AI) capable of conversing with users, thanks to its natural language processing capabilities. Like a bolt from the blue, it has captured the attention of users across the globe, and it seems as though everyone is discussing how ChatGPT will change the world. But what does GPT actually mean?

The etymology of GPT

GPT stands for generative pre-trained transformer. Generative refers to its ability to automatically generate new content, imitating human creativity and imagination. This content can take the form of written text, sounds, images, or even videos. ChatGPT generates written text, while AIs like Stable Diffusion, DALL-E, Midjourney, and Deep Dream Generator are image generators, which can produce stunning images in seconds from simple text descriptions. Meanwhile, Amper Music can generate songs suitable for use in both advertisements and video games.

Pre-trained refers to unsupervised training on big data prior to training for a specific task. For example, in order to read long texts, ChatGPT first learned the semantics and context of words, as well as widely used linguistic conventions. This allows it to produce appropriate responses to a given input, and to continue its previous output with the most fitting text. However, to achieve a more precise understanding of language and its applications, ChatGPT still had to undergo supervised fine-tuning after its pre-training, using human-labeled data. Fine-tuning adjusts a model’s architecture and parameters to the requirements of a specific, defined task, allowing the model to better adapt to the task’s properties.

Transformer is a technical term for a kind of deep learning model. The earliest reference to transformers comes from the 2017 article “Attention is All You Need,” published by a team at Google. According to Google Scholar, this article has been cited more than 60,000 times.
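To make this slightly more concrete, the paper's central operation, scaled dot-product attention, can be sketched in a few lines of NumPy. This is only a toy illustration with invented two-dimensional "word" vectors, not code from the paper or from ChatGPT: each word's output becomes a similarity-weighted blend of all the words.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position's output is a
    blend of the value vectors, weighted by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # pairwise similarity
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ V, weights

# Three toy two-dimensional "word" vectors attending to one another
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out, w = attention(x, x, x)   # self-attention: queries = keys = values
```

Because all pairwise scores are computed in a couple of matrix products, the calculation parallelizes naturally, which is one reason transformer training scales so well.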
Attention is the key mechanism of transformers. Structurally, this kind of deep learning model is a generalization of convolutional neural networks (CNNs). Functionally, it replaces traditional sequence models like recurrent neural networks (RNNs) and raises the efficiency of natural language processing. A transformer extracts the relationships between the words in a sentence and assigns greater weight to important words according to their degree of similarity, helping to preserve key information in long texts. A transformer can capture the similarity between words even when they are ambiguous or far apart, and it can parallelize its training, effectively increasing the precision of natural language processing.

The applications of transformer models are extensive, and are not limited to human-produced texts. For example, medical companies have already used transformers to process amino acid chains as text strings and to describe protein folding, in an effort to more deeply understand the proteins at the origin of life and to speed up new drug development.

The past and present of ChatGPT

OpenAI is a California-based AI research laboratory founded in 2015. It primarily works on AI research and product development, focusing on natural language processing, machine learning, and deep learning. After Google released its transformer model in 2017, the OpenAI team began applying it to natural language processing, and completed GPT-1 in June 2018. In early 2019, it trained GPT-2, a step up in performance from GPT-1. The principal difference between the two lies in the number of parameters and the size of the pre-training data: GPT-2 has approximately 12 times as many parameters as GPT-1, while its pre-training data is 8 times as large. The pre-training data was taken from the social media website Reddit, amounting to around 8 million posts.
In July of the same year, Microsoft announced a US$1 billion investment in OpenAI, which grew to around US$11 billion by 2023. GPT-3 came out in May 2020 and boasted a substantial upgrade in both computing power and model scale. It reached some 175 billion parameters (GPT-2 had 1.5 billion), and its pre-training data reached 45 TB (GPT-2's was only 40 GB). GPT-3 supplemented its pre-training data with online sources like papers, books, and news articles. Additionally, GPT-3 combines the results of both unsupervised and supervised training, once again elevating the performance of its natural language generation and interaction. After OpenAI began fine-tuning this model, it was also able to improve the model's chat ability. On November 30, 2022, OpenAI launched ChatGPT (GPT-3.5), an optimization built on GPT-3, designed for smoother, more natural dialogue generation.

How can we coexist with a new species?

ChatGPT's performance far outclasses the clumsy chatbots of the past, as any user can readily see. However, the innovation of ChatGPT is not simply improving on what machines could not do well; at the same time, it is not entirely clear what it is. Regardless, ChatGPT, its generative peers, and their quickly evolving descendants will indeed become a new species. For example, not only can ChatGPT generate mature passages in any language, it can also write computer code according to instructions. While the code cannot always be used immediately, it is not far off; it will further empower professional programmers and may greatly increase their efficiency. For the layman, it could flatten the learning curve for coding. We could even say that the age of using AI to learn about AI has already arrived – something never seen before. This function is only one example of a new chapter in human-machine collaboration.
In the future, every professional can expect to find themselves interacting with AI to fill gaps in their abilities or resources. For example, an author's creative process no longer has to be a lonely spiritual journey: they can let ChatGPT write a rough draft for them to proofread and revise. Apart from providing an original plot (which an AI might eventually be able to do as well), what remains of an author's job would be closer to that of an editor. Likewise, investment analysts will no longer need assistants to collect data, nor will they need to type out an investment report word by word. They will only need to remember to check whether the report's conclusions are accurate and appropriate. Of course, there are exceptions, like investment analysts' assistants or translators, whose relationships with AI may be more substitutable than complementary.

Humanity's subjective will

This kind of technology has already caused some concern. Even though ChatGPT appears to interact smoothly with users, we cannot say that it truly understands human language. In the end, it is only an algorithm for generating sentences that appear to be meaningful; essentially, it is an extremely smart parrot. It also appears to be affected by several bad human habits. It is unwilling to admit its ignorance, and instead frequently attempts to justify itself by providing imagined, artificial, and shallow answers. It gives different responses to identical questions without batting an eye. It also currently seems to lack an efficient, built-in mechanism to verify the validity and correctness of its answers, which limits its use. It would be best to avoid asking the AI questions whose right and wrong answers are easily confused, and instead to ask it to design marketing materials or plan a course syllabus (prompts that lack an objective answer), paired with the user's subjective judgment. Additionally, ChatGPT has moral dimensions.
For example, we do not know what information is included in the pre-training data, so how can we be sure that the generated text does not conceal biases from that data? Or might an author, in asking ChatGPT for help, accidentally plagiarize pre-existing material contained in its pre-training data, infringing upon the copyrights of others without even knowing it? Moreover, although ChatGPT has undergone fine-tuning so that it refuses to answer questions that are disruptive or morally offensive to the public, users only need to reword them to receive an answer. For instance, while it will not teach someone how to commit fraud, if you were to ask it to explain methods of defrauding people, or to give examples of fraud under the guise of wanting to avoid it, it would happily comply with detailed answers.

Whatever the case, the rise of ChatGPT makes it clear that the world is rapidly changing. Whether or not we know it, the traditional boundaries of work have been redefined. With the help of generative AI, tasks that previously required a high degree of professional skill can now be completed by anyone. Some technical skills have lost their value, to be replaced by new ones. The financial sector will inevitably be affected; at the very least, the forms and functions of digital finance, wealth management, customer service, marketing and sales, and risk management are likely to change. Therefore, now is the time to ask ourselves what changes generative AI will bring to the institutions or fields we are a part of, how it will integrate itself into our lives and work, and what that means for our future.

The author is Deputy Director of the Financial Research Institute at TABF.