How Generative AI Is Revolutionizing Professional Services

ACH Worldwide Ltd
Nov 6, 2023


by Dr Kyle Wong, PhD, CFA, FRM, and Dr Amanda Lim

Pain Points of Professional Services

For those of us involved in professional services, one thing is clear: our time is not spent very efficiently. Instead of going toward real work, most of it is spent on:

  • Understanding customers’ requirements, which are sometimes incoherent
  • Looking up internal documents that are hidden away somewhere
  • Training staff, who may not be as quick as you expect and can leave at any time
  • Communicating with clients, which can take a long time
  • Coming up with marketing ideas or logo designs
  • Writing up reports

We often dream about automating some of these tedious tasks. Less time spent on peripheral work means more time for rest and family. Unfortunately, technological constraints made this impossible, until now. The arrival of Generative AI has opened up many possibilities and made the impossible possible.

Generative AI

Generative AI refers to deep-learning models that can take raw data — say, all of Wikipedia or the collected works of Rembrandt — and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data.

Generative models have been used for years in statistics to analyze numerical data. The rise of deep learning, however, made it possible to extend them to images, speech, and other complex data types. Among the first class of models to achieve this cross-over feat were variational autoencoders [1], or VAEs, introduced in 2013. VAEs were the first deep-learning models to be widely used for generating realistic images and speech.
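To make the idea concrete, here is a minimal VAE sketch in PyTorch. The layer sizes, the 784-dimensional input and the random batch are illustrative assumptions, not details from the original paper: the encoder maps the input to a latent Gaussian, a sample is drawn from it, and the decoder reconstructs the input from that sample.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal variational autoencoder: encode the input as a latent
    distribution, sample from it, and decode back to the input space."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of the latent Gaussian
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of the latent Gaussian
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)             # reparameterization trick
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

x = torch.rand(32, 784)                 # a batch of fake "images" with values in [0, 1]
recon, mu, logvar = TinyVAE()(x)
print(vae_loss(recon, x, mu, logvar))
```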

Transformers, introduced by Google in 2017 in a landmark paper “Attention Is All You Need,” combined the encoder-decoder architecture with a text-processing mechanism called attention to change how language models were trained [2]. An encoder converts raw unannotated text into representations known as embeddings; the decoder takes these embeddings together with previous outputs of the model, and successively predicts each word in a sentence.
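A rough sketch of the scaled dot-product attention at the heart of this mechanism, using NumPy. The toy dimensions and random embeddings are placeholders for illustration; real transformers add multiple heads, masking and learned embeddings on top of this.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each token's query is compared against every token's key;
    the resulting weights mix the value vectors into a new representation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (seq_len, seq_len) similarity matrix
    weights = softmax(scores, axis=-1)     # attention weights per token
    return weights @ V                     # weighted sum of the values

seq_len, d_model = 5, 8                    # 5 token embeddings of width 8 (toy sizes)
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d_model))    # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)                           # (5, 8): one updated vector per token
```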

Through fill-in-the-blank guessing games, the encoder learns how words and sentences relate to each other, building up a powerful representation of language without anyone having to label parts of speech and other grammatical features. Transformers, in fact, can be pre-trained at the outset without a particular task in mind. Once these powerful representations are learned, the models can later be specialized — with much less data — to perform a given task.
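The fill-in-the-blank objective can be seen directly with a pretrained encoder model. A hedged example, assuming the Hugging Face transformers library and a downloaded BERT checkpoint are available; the sentence itself is made up:

```python
# pip install transformers torch
from transformers import pipeline

# A masked-language model "fills in the blank", which is how encoder-style
# transformers such as BERT learn the relationships between words.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("Generative AI can draft a [MASK] for a client."):
    print(f'{prediction["token_str"]:>12}  score={prediction["score"]:.3f}')
```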

Several innovations made this possible. Transformers processed all the words in a sentence at once, allowing text to be handled in parallel and speeding up training. Earlier techniques like recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) networks processed words one by one. Transformers also learned the positions of words and their relationships, context that allowed them to infer meaning and disambiguate words like “it” in long sentences.
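One common way to give the model that positional context is the fixed sinusoidal encoding from the original transformer paper (other models learn position embeddings directly instead). A small NumPy sketch with toy dimensions:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position signal from "Attention Is All You Need":
    each position gets a unique pattern the model can learn to use."""
    pos = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    i = np.arange(d_model)[None, :]                        # (1, d_model)
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])                   # even dimensions use sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])                   # odd dimensions use cosine
    return pe

print(positional_encoding(seq_len=10, d_model=16).shape)   # (10, 16)
```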

Large Language Model

A large language model (LLM) is a deep learning algorithm that can perform a variety of natural language processing (NLP) tasks [3]. Large language models use transformer models and are trained using massive datasets — hence, large. This enables them to recognize, translate, predict, or generate text or other content.

Large language models are built on neural networks (NNs), computing systems inspired by the human brain. These networks consist of layers of interconnected nodes, much like neurons.

In addition to teaching human languages to artificial intelligence (AI) applications, large language models can also be trained to perform a variety of tasks like understanding protein structures, writing software code, and more. Like the human brain, large language models must be pre-trained and then fine-tuned so that they can solve text classification, question answering, document summarization, and text generation problems. Their problem-solving capabilities can be applied to fields like healthcare, finance, and entertainment where large language models serve a variety of NLP applications, such as translation, chatbots, AI assistants, and so on [3].
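As a rough illustration of those task types, the sketch below exercises a few off-the-shelf fine-tuned models through the Hugging Face transformers library. The library, the default checkpoints it downloads and the sample text are assumptions for illustration, not part of any particular product:

```python
# pip install transformers torch
from transformers import pipeline

# Each pipeline wraps a transformer that has been fine-tuned for one task.
classifier = pipeline("sentiment-analysis")
summarizer = pipeline("summarization")
qa = pipeline("question-answering")

report = ("The client engagement ran over budget because requirements "
          "changed twice, but the final deliverable was accepted on time.")

print(classifier(report))                                          # text classification
print(summarizer(report, max_length=25, min_length=5))             # document summarization
print(qa(question="Why did the engagement run over budget?",       # question answering
         context=report))
```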

Large language models also have large numbers of parameters, which are akin to memories the model collects as it learns from training. Think of these parameters as the model’s knowledge bank. While ChatGPT from OpenAI captures the imagination of most people, there are other well-known LLMs, including Bard from Google, LLaMA from Meta and Claude from Anthropic. Each has its own pros, cons and supporting ecosystem.
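To make “parameters” concrete, here is a tiny sketch that counts the weights of a small pretrained model; the checkpoint name is just an example:

```python
# pip install transformers torch
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-uncased")   # a small example model
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")                              # roughly 66 million here
```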

What can a Chatbot do?

Anyone who has tried ChatGPT will surely be impressed by its abilities, from answering questions and summarizing documents to drafting text and writing code.

The list of use cases is still expanding as new ones are discovered. Impressive as it is, ChatGPT also has its limitations:

  1. The model is trained on the public internet only up to September 2021, so it cannot handle up-to-date or private information. Extra work is therefore needed to make it useful inside an enterprise (see the sketch after this list).
  2. Biased training data produces biased answers. A chatbot’s responses may not be sensitive enough on racial, sexual, cultural and political issues, which for many companies is a huge reputational risk.
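One common pattern for working around the first limitation is retrieval augmentation: search the enterprise’s own documents first, then hand only the relevant passages to the model as context. A minimal sketch using TF-IDF retrieval from scikit-learn; the internal documents and the prompt template are invented for illustration, and the final LLM call is left as a comment:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for private documents the public model has never seen.
internal_docs = [
    "Our standard consulting engagement letter requires a 30% upfront retainer.",
    "The 2023 expense policy caps client dinners at 120 USD per person.",
    "All audit working papers must be archived within 60 days of sign-off.",
]

vectorizer = TfidfVectorizer().fit(internal_docs)
doc_vectors = vectorizer.transform(internal_docs)

def retrieve(question, k=2):
    """Return the k internal documents most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [internal_docs[i] for i in sims.argsort()[::-1][:k]]

question = "How much is the upfront retainer on a consulting engagement?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
# The assembled prompt would then be sent to the chatbot's LLM of choice.
```

In production the TF-IDF step is usually replaced by dense embeddings and a vector database, but the overall flow is the same.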

From Chatbot to AI Colleague

For a chatbot to serve as a colleague, it needs to be trained on the appropriate internal information. If it is a Q&A bot, it must be fed the FAQ material relevant to the enterprise. If it is a customer-service bot, it must first verify the customer’s identity and then provide useful information, so it has to be linked to the CRM system. If it is an internal search bot, it has to access the internal database.
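A hedged sketch of that wiring: verify the caller against customer records before answering, then pull what is needed from the FAQ or the CRM. The in-memory dictionaries below stand in for the real CRM and FAQ systems and are purely illustrative:

```python
# Toy stand-ins for the enterprise CRM and FAQ knowledge base.
CRM = {"C-1001": {"name": "Alice Chan", "pin": "4321", "plan": "Premium support"}}
FAQ = {"opening hours": "Our office is open 9am to 6pm, Monday to Friday."}

def verify_identity(customer_id, pin):
    record = CRM.get(customer_id)
    return record is not None and record["pin"] == pin

def answer(customer_id, pin, question):
    # Refuse to reveal anything until the caller is verified against the CRM.
    if not verify_identity(customer_id, pin):
        return "Sorry, I could not verify your identity."
    for topic, reply in FAQ.items():
        if topic in question.lower():
            return reply
    return f"Hi {CRM[customer_id]['name']}, let me connect you to a colleague."

print(answer("C-1001", "4321", "What are your opening hours?"))
```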

Secondly, the chatbot needs to identify sensitive questions and decline to answer them, and it should also refuse any question unrelated to its purpose.
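A minimal sketch of such a guardrail layer: before a question ever reaches the model, check it against the topics the bot is allowed to discuss and a list of sensitive themes. The keyword lists are illustrative placeholders; production systems typically combine rules like these with a dedicated moderation model:

```python
ALLOWED_TOPICS = {"invoice", "engagement", "audit", "appointment", "pricing"}
SENSITIVE_TOPICS = {"politics", "religion", "medical", "ethnicity"}

def screen(question):
    """Return a canned refusal if the question is sensitive or off-purpose,
    otherwise None so the question can be passed on to the model."""
    text = question.lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return "I'm not able to comment on that topic."
    if not any(topic in text for topic in ALLOWED_TOPICS):
        return "I can only help with questions about our services."
    return None

print(screen("What do you think about politics?"))   # refusal
print(screen("Can I see my latest invoice?"))        # None: pass through to the model
```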

Thirdly, the chatbot should reflect the image of the enterprise through its personality and even its appearance. Its tone can be serious or playful, and when combined with an avatar it can take on a persona that is male or female, young or old, and of any ethnic background.
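In practice the persona and tone are usually fixed through a system prompt. A short sketch of how such a prompt might be assembled; the company name, fields and wording are made up for illustration:

```python
def build_system_prompt(company, persona, tone):
    """Assemble a system prompt that fixes the bot's persona and tone."""
    return (
        f"You are {persona['name']}, a {persona['age']} {persona['role']} "
        f"at {company}. Speak in a {tone} tone, stay on brand, and never "
        f"discuss topics unrelated to {company}'s services."
    )

prompt = build_system_prompt(
    company="ACME Advisory",
    persona={"name": "Mei", "age": "young", "role": "client-services assistant"},
    tone="friendly but professional",
)
print(prompt)
```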

Fourthly, the chatbot should offer users follow-up activities, such as a phone call from a human, collecting further information, scheduling an appointment or sending an email acknowledgment.
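One way to hand those follow-ups to downstream systems is to have the chatbot emit structured actions rather than free text. A sketch with invented action names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FollowUp:
    action: str        # e.g. "human_callback", "schedule_appointment", "email_ack"
    customer_id: str
    due: datetime
    notes: str = ""

def plan_follow_ups(customer_id, wants_callback, wants_meeting):
    actions = [FollowUp("email_ack", customer_id, datetime.now())]
    if wants_callback:
        actions.append(FollowUp("human_callback", customer_id, datetime.now(), "call within 24h"))
    if wants_meeting:
        actions.append(FollowUp("schedule_appointment", customer_id, datetime.now(), "propose 3 slots"))
    return actions

for item in plan_follow_ups("C-1001", wants_callback=True, wants_meeting=False):
    print(item)
```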

Create Your AI Colleague by GreatMeta

Discover the incredible achievement of GreatMeta in revolutionizing the way professional firms manage their partners’ time-consuming tasks. GreatMeta, a pioneering startup based in Hong Kong, specializes in leveraging cutting-edge technologies like AI, Metaverse, and blockchain to assist enterprises. One of their remarkable products, “Create Your AI Colleague,” goes above and beyond by enabling businesses to:

Construct Intelligent Chatbots: GreatMeta empowers enterprises to create intelligent chatbots tailored to their specific needs.

Train Chatbots with Internal Knowledge: These chatbots are trained using internal information, ensuring they possess the expertise necessary to handle complex tasks.

Customize Bot Appearances: Businesses can design the appearance of these chatbots, aligning them seamlessly with the company’s unique image and branding.

Integrate To-Do Lists: GreatMeta integrates to-do lists directly into the chatbots, streamlining task management for professional firms.

Effortless Deployment: Enterprises can seamlessly deploy these advanced chatbots on their websites, enhancing user experience and efficiency.

Safeguard Against Inappropriate Responses: GreatMeta’s solution incorporates robust measures to guard against inappropriate answers, ensuring professionalism and reliability in all interactions.

By offering this comprehensive suite of services, GreatMeta empowers professional firms to automate their low-value yet time-consuming tasks, freeing up valuable resources and allowing partners to focus on high-impact activities.

References

[1] Shafkat, I. (2021, October 1). Intuitively understanding variational autoencoders. Medium. https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf

[2] Uszkoreit, J. (2017, August 31). Transformer: A novel neural network architecture for language understanding. Google Research Blog. https://blog.research.google/2017/08/transformer-novel-neural-network.html

[3] What is Natural Language Processing (NLP)?: A comprehensive NLP guide. Elastic. (n.d.). https://www.elastic.co/what-is/natural-language-processing

[4] Martineau, K. (2023, August 18). What is generative AI? IBM Research Blog. https://research.ibm.com/blog/what-is-generative-AI

[5] What is a large language model?: A comprehensive LLMs guide. Elastic. (n.d.). https://www.elastic.co/what-is/large-language-models
