GPT Powered Apps.

Building a GPT Powered Vehicle Recommendation Engine

Using OpenAI GPT, Python, LangChain and Azure Cognitive Search to build a recommendation engine.

Mark Rodseth
6 min read · May 3, 2023


Cat Used Car Salesmen courtesy of MidJourney

In case you’ve run out of AI-based content to read, here is another one for your list. I’ve added a cute story map with a cat (yep, CatGPT) into this post to try to make it stand out from the rest.

Purrthetic and desperate? Maybe…

(PS Using cats plus generative AI is not entirely original, I must confess, as is evidenced by https://cat-gpt.com/.)

The Application

The app we’re going to unpack is a used-car recommendation Chatbot that returns a list of recommended vehicles based on the preferences you (informally) state. The results are enriched with informative details about each car and a justification as to why it was recommended.

The conversation flow is as follows, with interactions between the user, the Chatbot, GPT (or an LLM of your choice), and the vehicle search engine.

Step 1: The user writes their preferences into a chat prompt.

Vehicle recommendation engines often require users to explicitly set their preferences, which are then used to find matching vehicles. The idea here is to give the user free rein to tell the chatbot what they are after in their own personal style, and use the magic of a GPT model to turn that into something usable by downstream search APIs.

Step 2: The Chatbot asks GPT to create a structured search query using the preferences provided.

The Chatbot creates a GPT Prompt asking for a search query based on the preferences entered into the Chatbot and a specified list of fields that can be searched on by the search engine. This prompt is sent to GPT which does its magic and translates fuzzy human language into a structured query with interpreted values for the query parameters.
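To make Step 2 concrete, here is a minimal sketch of how such an instruction might be assembled (`build_query_prompt` is a hypothetical helper for illustration, not part of the article’s code):

```python
import json

# Hypothetical helper: embed the user's free-text preferences and the
# searchable schema in an instruction asking GPT for a Lucene query.
def build_query_prompt(preferences: str, vehicle_schema: dict) -> str:
    return (
        "Based on the following user-provided vehicle preferences: "
        f"{preferences}, and the following JSON schema of vehicle attributes: "
        f"{json.dumps(vehicle_schema)}, create a Lucene search query that "
        "best fits the preferences."
    )

prompt = build_query_prompt(
    "a cheap grey family car with low mileage",
    {"COLOUR": "Grey", "BODY_STYLE": "MPV", "MILEAGE": 23744},
)
```

GPT’s reply to a prompt like this is the structured query used in the next step.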

Step 3: The Chatbot queries the search engine using the GPT generated query.

Armed with a well-structured search query, the Chatbot sends it to a search engine and gets back a list of matching vehicle results.

Step 4: The Chatbot asks GPT to turn the search results into something meaningful for the human.

With a list of vehicles that match the search criteria, we use GPT again to translate the structured data into well-written, human-readable text, enriching the content with interesting facts about each vehicle and an explanation of why it was recommended based on the user’s preferences.

Step 5: The Chatbot returns the content to the user.

The user is pleased. The chat can continue with refinement of preferences, with the Chatbot working with GPT and Search in the same pattern.
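The five steps above can be sketched as a simple loop. In this sketch, `ask_gpt` and `search_vehicles` are hypothetical stand-ins for the real GPT and Azure Cognitive Search calls shown later in the article:

```python
def ask_gpt(prompt: str) -> str:
    # Stand-in for an LLM call; a real implementation would hit the OpenAI API.
    return "COLOUR:Grey AND BODY_STYLE:MPV"

def search_vehicles(lucene_query: str) -> list:
    # Stand-in for a search-engine call returning matching vehicle records.
    return [{"BRAND": "FORD", "MODEL": "GALAXY"}]

def recommend(preferences: str) -> str:
    # Step 2: turn fuzzy preferences into a structured query
    query = ask_gpt(f"Turn these preferences into a Lucene query: {preferences}")
    # Step 3: run the query against the search engine
    results = search_vehicles(query)
    # Step 4: turn structured results back into human-readable text
    return ask_gpt(f"Describe these vehicles for the user: {results}")
```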

The Tech

The app was developed using Python and some notable libraries include:

  • streamlit: Streamlit is a Python library used to create web applications quickly. In this code, it is used to build the user interface for the vehicle recommendation engine.
  • langchain: An open-source framework that provides a set of tools for building applications on top of language models. It has several submodules, including llms, which connects you to LLMs such as OpenAI’s GPT.
  • azure.search.documents: Provides the SearchClient class, which is used to interact with the Azure Search service to perform searches and manage search results.

Azure Cognitive Search was used as the search engine for Vehicles. Vehicle data was uploaded as JSON files.
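As a rough sketch of the data-loading step, JSON vehicle records can be pushed into an index with the SDK’s `upload_documents` call. The document shape, `id` key field, and environment-variable names here are assumptions for illustration:

```python
import os

# Hypothetical vehicle documents; field names mirror the index schema.
vehicle_docs = [
    {"id": "1", "BRAND": "FORD", "MODEL": "GALAXY", "YEAR": 2013},
    {"id": "2", "BRAND": "VW", "MODEL": "TOURAN", "YEAR": 2015},
]

def upload_vehicles(docs):
    # Imports kept local so this module loads even without the Azure SDK.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        os.environ["AZURE_SEARCH_SERVICE_ENDPOINT"],
        os.environ["AZURE_SEARCH_INDEX_NAME"],
        AzureKeyCredential(os.environ["AZURE_SEARCH_API_KEY"]),
    )
    return client.upload_documents(documents=docs)
```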

The OpenAI GPT API was used. I chose the GPT-3.5 Turbo model.

The Code

Here is a code walkthrough of App.py.

Import the necessary Libraries.

import os
import json
import streamlit as st
from apikey import apikey
from searchhelper import azure_cognitive_search
from searchhelper import json_data
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
from langchain.memory import ConversationBufferMemory
from langchain.utilities import WikipediaAPIWrapper
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

App Framework: Streamlit is used to set up the main components of the web application, including the title and user input area for vehicle preferences.

Initialise the Streamlit app framework, the prompt templates for GPT, the Memory used to keep track of the conversation, and the LLM and Chains used to interact with GPT.

# Get OpenAI API Key
os.environ['OPENAI_API_KEY'] = apikey

# app framework
st.title('Vehicle Recommendation Engine')
prompt = st.text_area('Tell us what you are looking for in a car')

#prompt templates
title_template = PromptTemplate(
    input_variables=['preferences', 'vehicle_schema'],
    template='based on the following user provided vehicle preferences: {preferences} and on the following JSON Schema of Vehicle Attributes: {vehicle_schema}, create a Lucene search query that best fits the preferences'
)

script_template = PromptTemplate(
    input_variables=['preferences', 'vehicle_results_json'],
    template='write a description and five interesting facts about each vehicle returned in the following list: {vehicle_results_json}. Each paragraph must belong to its own bullet point. Add a final summary after the recommended vehicles for why these choices are a good match for these preferences provided: {preferences}'
)

# Memory
title_memory = ConversationBufferMemory(input_key='preferences', memory_key='chat_history')
script_memory = ConversationBufferMemory(input_key='vehicle_results_json', memory_key='chat_history')

#LLMs
llm = OpenAI(temperature=0.9, max_tokens = 2056)
title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True, output_key='lucence_query', memory=title_memory)
script_chain = LLMChain(llm=llm, prompt=script_template, verbose=True, output_key='vehicle_recommendations_full', memory=script_memory)

Handle the prompt by kicking off the first Chain, then calling the search API, then passing the results to GPT, and finally displaying the results back to the user. Also, add a query history panel.

if prompt:
    title = title_chain.run(preferences=prompt, vehicle_schema=json_data)
    # Search with the GPT-generated Lucene query rather than the raw prompt
    vehicle_results = azure_cognitive_search(title)
    script = script_chain.run(preferences=prompt, vehicle_results_json=vehicle_results)
    st.write(script)

    with st.expander('Query History'):
        st.info(title_memory.buffer)

I created a separate module to store the azure_cognitive_search() function and the example JSON used to inform GPT about which fields could be searched.

Search function:


def azure_cognitive_search(query: str) -> str:
    # Get Azure Cognitive Search settings
    service_endpoint = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT")
    index_name = os.getenv("AZURE_SEARCH_INDEX_NAME")
    key = os.getenv("AZURE_SEARCH_API_KEY")

    # Initialise search client
    client = SearchClient(service_endpoint, index_name, AzureKeyCredential(key))

    # Perform a search with the Lucene query
    results = client.search(query, top=3)

    # Construct a JSON object from the search results
    vehicles = []

    for item in results:
        content = item['content']

        vehicle = {
            'BRAND': content['BRAND'],
            'MODEL': content['MODEL'],
            'VARIANT': content['VARIANT']
            # a lot more fields removed for brevity's sake
        }

        vehicles.append(vehicle)

    return json.dumps(vehicles)
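The result-shaping part of this function can be exercised without a live index. Here is a sketch that replays the same loop over mocked search hits (the mock data is invented; only the loop mirrors the function above):

```python
import json

# Mocked search hits mimicking the shape returned by Azure Cognitive Search,
# so the result-shaping step can be tested without network access.
mock_results = [
    {"content": {"BRAND": "FORD", "MODEL": "GALAXY", "VARIANT": "Titanium X"}},
    {"content": {"BRAND": "VW", "MODEL": "TOURAN", "VARIANT": "SE"}},
]

def shape_results(results) -> str:
    # Pick out the fields of interest and serialise them as a JSON string
    # ready to be interpolated into the GPT prompt.
    vehicles = []
    for item in results:
        content = item["content"]
        vehicles.append({
            "BRAND": content["BRAND"],
            "MODEL": content["MODEL"],
            "VARIANT": content["VARIANT"],
        })
    return json.dumps(vehicles)
```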

Json Data Example:

json_data = {
    "content": {
        "DOORS": 5,
        "COLOUR": "Grey",
        "VARIANT": "GALAXY 2.0 TDCi 180 Titanium X 5dr MPV",
        "BODY_STYLE": "MPV",
        "MILEAGE": 23744,
        "YEAR": 2013,
        "MODEL_DESCRIPTION": "GALAXY",
        "CAPACITY": 1997,
        "LIST_PRICE": 19275,
        "TRANSMISSION": "MANUAL",
    }
}

The Result

Below is an example of a prompt and the GPT-interpreted results returned to me.

[Screenshots of the example prompt and the chatbot’s response]