In an era where generative AI and large language models (LLMs) are transforming industries, the challenge remains: how to embed deep expertise and best practices into these systems. A recent exploration into knowledge elicitation techniques highlights how methodologies developed during the rules-based expert systems era can provide valuable insights. The aim is to extract tacit knowledge from domain experts and codify it into AI frameworks, enhancing their effectiveness in specific fields.
Reviving Knowledge Elicitation for AI Enhancement
Knowledge elicitation involves drawing out the tacit best practices and expertise that reside in the minds of industry experts. While some may view these older approaches as outdated, they remain valuable for enriching modern AI systems. This analysis draws on insights from the journal Multimodal Technologies and Interaction, particularly a paper by Daniel Kerrigan, Jessica Hullman, and Enrico Bertini on eliciting domain knowledge throughout the machine learning process. The authors note that “eliciting knowledge from domain experts can play an important role throughout the machine learning process, from correctly specifying the task to evaluating model results.”
To illustrate the practical application of these techniques, consider a scenario where an LLM needs to acquire deep medical expertise, perhaps in urology or neurology. Typically, the process involves gathering extensive documentation related to the domain. However, much of the nuanced knowledge exists only in the minds of practitioners, making it vital to engage them directly in the elicitation process.
Case Study: Eliciting Stock Trading Expertise
In a recent case study involving stock trading, the aim was to adapt an LLM, specifically OpenAI’s ChatGPT, to reflect the expertise of a seasoned trader. The initial step involved assessing what knowledge about stock trading was already embedded in the AI. This preliminary investigation revealed that while ChatGPT possessed foundational information, it lacked the trader’s unique insights and strategies.
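As a concrete illustration of that first assessment step, the short sketch below probes the model for what it already claims to know about earnings-momentum heuristics. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative, not drawn from the case study.

```python
# Sketch: probing what the model already "knows" about a trading heuristic.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

probe = (
    "Describe any rules of thumb you know for selecting stocks based on "
    "earnings momentum. Be specific about thresholds if you know them."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": probe}],
)

print(response.choices[0].message.content)
```

Comparing the model’s generic answers against the trader’s own heuristics is what exposes the gap that elicitation needs to fill.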
Through a series of discussions, the trader shared their specific rules for stock selection, such as the Earnings Momentum Rule and the Sector Rotation Rule. These rules, distilled from years of experience, were not part of the AI’s training data. The act of verbalizing them illustrated the value of knowledge elicitation: the statement “If a company has shown at least three consecutive quarters of earnings growth and the growth rate is accelerating, then consider it a buy candidate” captured the essence of the trader’s methodology.
Once these rules were identified, they were codified into a structured format for integration into the LLM. For instance, the Earnings Momentum Rule can be represented in JSON as follows:
```json
{
  "name": "Earnings Momentum Rule",
  "if": [
    "Company has >= 3 consecutive quarters of earnings growth",
    "Growth rate is accelerating"
  ],
  "then": "Consider as buy candidate",
  "unless": "Price-to-earnings ratio > 30"
}
```
This structured format makes the guidelines explicit and machine-readable, so the AI can apply them consistently when making stock recommendations.
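One plausible way to put codified rules to work is to inject them into the model’s system prompt at query time. The sketch below assumes the rules are stored in a local trading_rules.json file; the file name, model name, and prompt wording are illustrative assumptions rather than the integration actually used in the case study.

```python
# Sketch: loading codified trading rules and injecting them into a system prompt.
# The file name, model name, and prompt wording are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()

# Load the elicited rules, e.g. a list of objects like the Earnings Momentum Rule above.
with open("trading_rules.json") as f:
    rules = json.load(f)

system_prompt = (
    "You are a stock-screening assistant. Apply the following expert rules "
    "when evaluating candidates, and cite the rule name in your reasoning:\n"
    + json.dumps(rules, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Evaluate ACME Corp given its last four quarterly earnings reports."},
    ],
)
print(response.choices[0].message.content)
```

Keeping the rules in a separate file, rather than hard-coding them into prompts, also makes it easier to revise them as the expert refines their thinking.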
The process further capitalizes on AI’s capabilities by having it interact with the trader to validate these rules and discover additional ones. For example, during a dialogue, the AI proposed the Market Sentiment Rule, which states that if social and news sentiment toward a stock is overwhelmingly positive and the price has risen more than 10% in a week, entry should be avoided for at least five trading days due to potential hype cycles. This interaction demonstrates the collaborative potential between human expertise and AI.
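A lightweight version of that validation-and-discovery loop could look like the sketch below, in which the model proposes one candidate rule and the expert accepts or rejects it. The helper function, file name, and prompts are hypothetical and stand in for whatever review workflow the trader and developers actually used.

```python
# Sketch: a human-in-the-loop elicitation round in which the model proposes one
# candidate rule and the expert confirms or rejects it. Function names, file
# name, model name, and prompts are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()

def propose_candidate_rule(existing_rules: list) -> str:
    """Ask the model to suggest one new rule, given the rules elicited so far."""
    prompt = (
        "Here are the stock-trading rules elicited so far:\n"
        + json.dumps(existing_rules, indent=2)
        + "\n\nPropose exactly one additional candidate rule in the same JSON "
        "format (name / if / then / unless), and return only the JSON object."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

with open("trading_rules.json") as f:
    rules = json.load(f)

candidate = propose_candidate_rule(rules)
print("Candidate rule for expert review:\n", candidate)

if input("Accept this rule? (y/n): ").strip().lower() == "y":
    rules.append(json.loads(candidate))  # assumes the model returned bare JSON
    with open("trading_rules.json", "w") as f:
        json.dump(rules, f, indent=2)
```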
As AI continues to evolve, the integration of knowledge elicitation techniques may become a cornerstone for developing LLMs capable of expert-level performance in specialized domains. By facilitating dialogue between human experts and AI, developers can create systems that not only replicate existing expertise but also adapt and refine their knowledge over time.
In conclusion, as the field of AI advances, leveraging knowledge elicitation techniques can significantly enhance the capabilities of generative AI and LLMs. The ongoing debate regarding the distinction between human and AI-generated expertise underscores the necessity of these methods. As Elbert Hubbard aptly noted, “The best preparation for good work tomorrow is to do good work today.” By investing in the effective integration of expert knowledge into AI, developers can pave the way for more sophisticated and reliable systems in the future.
