Your AI-Powered Kitchen Assistant: Building a Recipe Genius With GPT-3.5-turbo and Agenta

Published on: 28-02-2024
Author: Product Minting
Category: How to
Cover image: https://cdn.aisys.pro/stories/your-ai-powered-kitchen-assistant-building-a-recipe-genius-with-gpt35-turbo-and-agenta.jpg


It was Friday evening and I needed to make myself dinner after work. I found myself staring into my fridge, wondering what to cook. I had leftovers from the night before, but I wanted to make something fresh and exciting. So I thought to myself: why not build a personal AI kitchen assistant that analyzes the ingredients I have and suggests delicious dishes tailored precisely to my pantry?


This is where Large Language Models (LLMs) stepped into the kitchen with me. LLMs are a type of artificial intelligence trained on a massive dataset of text (and code), including a vast trove of recipes, which means they can do much more than just find a suitable recipe. They can literally become your culinary genius on demand.


So, how does it all come together? In this article, I'll detail how I:

  • Developed an LLM app that uses the GPT-3.5-turbo model to assess available ingredients and recommend recipes
  • Deployed the LLM app on Agenta for production, enabling integration into a user interface for seamless use from my mobile device


Developing The LLM App

I’ll be using OpenAI gpt-3.5-turbo in this application; however, I’ll also include several other turbo model variants as well as GPT-4 as options, so use whichever model you think is appropriate for the task. Additionally, I will use Agenta to deploy my LLM application, as it is my preferred open-source platform for building robust AI applications with LLMs. Agenta provides the tools needed for managing and evaluating prompts.


Here are the capabilities of Agenta:

  • Quickly experiment with and compare prompts in any LLM workflow (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
  • Swiftly generate test sets and golden datasets for evaluation
  • Evaluate your application using pre-existing or custom evaluators
  • Annotate and A/B test your applications with human feedback
  • Collaborate with product teams on prompt engineering and evaluation
  • Deploy your application with a single click through the UI, CLI, or GitHub workflows


By using Agenta, I can concentrate on building my LLM app and streamline the entire development cycle of the application, which speeds up experimentation. I’ll be using Python to build the kitchen assistant.


Creating LLM app directory and dependency file

Create a directory named kitchen_assistant with a file named requirements.txt inside it, and add the following packages to that file: agenta and openai.
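
For reference, the whole setup from a terminal looks like this (a quick sketch; creating the folder and file by hand works just as well):


# Create the project folder and the dependency file with the two packages above
mkdir kitchen_assistant
cd kitchen_assistant
printf "agenta\nopenai\n" > requirements.txt
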


If you have something like the image attached below, you’re good to go.

required packages to have the llm app work

Environment variable & LLM app development

Now that you have the kitchen_assistant folder and have added the packages to the requirements.txt file, create a new file named .env and add your OpenAI secret key so the LLM app can authenticate with the API.


Ensure that the .env file is inside the kitchen_assistant folder. You should have something like this:


OPENAI_API_KEY=sk-vxxxxxxxxxxxxx # replace with your secret key
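

If you want to double-check that the key is visible to Python before going further, a quick sketch like the one below will do; it assumes you have exported the variable into your shell (or loaded the .env file some other way):


# Optional sanity check for the API key
import os

if os.getenv("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is set")
else:
    print("OPENAI_API_KEY is missing - check your .env file or shell environment")
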


Next, create a file named app.py and copy the following code into it. I’ll explain what the code does in a minute.


# Stdlib Imports
from typing import Dict, Any

# Third Party Imports
import agenta as ag # type: ignore
from openai import AsyncOpenAI


SYSTEM_PROMPT = "You are my AI kitchen assistant. Help me find creative recipes based on my ingredients, provide step-by-step cooking instructions, and advise me on food safety practices."
CHAT_LLM_GPT = [
    "gpt-3.5-turbo-16k",
    "gpt-3.5-turbo-0301",
    "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo-16k-0613",
    "gpt-4",
]

# Initialize agenta sdk and update the configuration parameters
ag.init()
ag.config.default(
    temperature=ag.FloatParam(0.2),
    max_tokens=ag.IntParam(-1, -1, 4000),
    prompt_system=ag.TextParam(SYSTEM_PROMPT),
    model=ag.MultipleChoiceParam("gpt-3.5-turbo", CHAT_LLM_GPT),
)

# Initialize async openai sdk
client = AsyncOpenAI()


@ag.entrypoint
async def suggest(inputs: ag.MessagesInput = ag.MessagesInput()) -> Dict[str, Any]:
    """
    Suggest a delicious dish tailored precisely to the ingredients (inputs).

    Args:
        inputs (ag.MessagesInput): messages input for chat completion

    Returns:
        dict: The llm response (along with additional information).
    """

    messages = [{"role": "system", "content": ag.config.prompt_system}] + inputs
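    # A max_tokens of -1 means "no explicit cap", so pass None to the API in that case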
    max_tokens = ag.config.max_tokens if ag.config.max_tokens != -1 else None
    chat_completion = await client.chat.completions.create(
        model=ag.config.model,
        messages=messages,
        temperature=ag.config.temperature,
        max_tokens=max_tokens,
    )
    token_usage = chat_completion.usage.dict() # type: ignore
    return {
        "message": chat_completion.choices[0].message.content,
        **{"usage": token_usage},
        "cost": ag.calculate_token_usage(ag.config.model, token_usage),
    }


The imports bring in everything the code needs: typing helpers from the standard library, the Agenta SDK, and OpenAI’s asynchronous client.


The system prompt sets the context for every conversation between the user (you and I) and the LLM application. You’d be pissed at a friend who, in the middle of a finance conversation with you, suddenly starts talking about something unrelated, right?


Imagine a Large Language Model (LLM) as a super-powered reader. To really grasp what it's reading, it needs context – the words and sentences surrounding whatever it's focused on. This 'context window' is like a spotlight: the bigger the window, the more information the LLM can use to understand and generate meaningful responses.
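

Tokens, not words, are what the model actually counts toward that window. If you are curious how many tokens the system prompt above uses, a small sketch with OpenAI’s tiktoken library (an optional extra, not listed in requirements.txt) does the trick:


# Count how many tokens the system prompt consumes (optional; pip install tiktoken)
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

system_prompt = (
    "You are my AI kitchen assistant. Help me find creative recipes based on my "
    "ingredients, provide step-by-step cooking instructions, and advise me on "
    "food safety practices."
)
print(len(encoding.encode(system_prompt)), "tokens")
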


After the system prompt comes a list of models (CHAT_LLM_GPT) that can be selected when testing the user prompt for the LLM app in Agenta. You get to choose whichever model you like and see how it performs.


The command ag.init() is used to initialize the Agenta SDK. Subsequently, ag.config.default(...) is where you define crucial parameters that dictate the behavior of the kitchen AI assistant. These parameters include:

  • temperature: Regulates the creativity of responses (lower temperature = more predictable, higher = more unexpected).
  • max_tokens: Determines the maximum length of the AI's response (measured in tokens, which roughly correspond to words).
  • prompt_system: Sets the initial instructions or context provided to the LLM, specifying it as your kitchen assistant.
  • model: Specifies the LLM to be used (with a range of GPT model versions as options).


The line client = AsyncOpenAI() initializes the OpenAI Async SDK, enabling your code to establish a connection and interact with OpenAI's language models.
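

To see what that client does on its own, here is a minimal, standalone sketch of the kind of call suggest makes under the hood (the model, temperature, and ingredient list are just example values):


# Standalone sketch of the chat completion call made inside `suggest`
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment


async def main() -> None:
    completion = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.2,
        messages=[
            {"role": "system", "content": "You are my AI kitchen assistant."},
            {"role": "user", "content": "I have eggs, spinach, and feta. What can I cook?"},
        ],
    )
    print(completion.choices[0].message.content)


asyncio.run(main())
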


To register your LLM app with Agenta and use it in their playground, you need to define an entry point. I used the @ag.entrypoint decorator on my suggest function. This tells Agenta that this function is where the action starts. Agenta then automatically converts this function into an API endpoint (using FastAPI), making it accessible through their web interface.


Publishing The LLM App

Now that your AI kitchen assistant is ready, we should publish it to Agenta for experimentation. To do this, you can install the Agenta CLI in a virtual environment using your preferred Python package installer or globally on your system. However, I advise against the latter to avoid potential conflicts with your system packages.


Once the installation is complete, make sure to activate the virtual environment. I will proceed with publishing the LLM app to Agenta Cloud, enabling me to deploy the LLM application to a production environment and later, create a UI application for mobile use. If you wish to publish to the cloud version of Agenta as well, I suggest creating an account first and then returning to follow the steps below.
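

For reference, a typical setup looks like this (a sketch using the built-in venv module; use whichever package manager you prefer):


# Run from inside the kitchen_assistant folder
python -m venv .venv
source .venv/bin/activate          # on Windows: .venv\Scripts\activate
pip install -r requirements.txt    # installs agenta (which ships the CLI) and openai
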


If you prefer not to deploy the app to the cloud version of agenta, you can run the open-source version on your local machine by following these steps. It will be a slightly different process, but not overly complex.


Create a new project for the LLM app

To publish the application in Agenta, we must first create an empty project. Execute the following command in the directory where your application code is located:


(.venv) [kitchen_assistant]$ agenta init


This will prompt you to enter the project name, Agenta host, API key (if using the cloud or enterprise version), organization, and how you want to initialize the app.


steps to creating a new project using agenta cli to agenta cloud


Once the app initialization is successfully done, a blank project will be generated in Agenta along with a config.toml file in the same directory, containing all project information. The config.toml file and the empty project in Agenta Cloud will look like the following.


contents of config.toml after creating the app initialization


project successfully created in agenta cloud


Publishing/Serving the first app variant

With the project created in Agenta cloud, we need to add the first app variant to it. This can be done by running the following command:


agenta variant serve app.py


This will create a new app variant in Agenta under the name app.default. Here, app is the name of the codebase containing the LLM app logic, while default is a default configuration created for that codebase. Each new app variant created from the web interface or from the CLI will always have the name format <codebase_name>.<configuration_name>.


An app variant is a distinct version of an LLM app, featuring different parameters or code but keeping the same inputs and outputs. This feature enables you to experiment with alternative methods within a single app. For instance, you could have two variants of the kitchen assistant app, each using a different system prompt.


Executing the serve command above will create a container for the application featuring a REST API endpoint. This endpoint will be used in Agenta when interacting with the application. The CLI will also display the URL of the endpoint, which can be used to navigate to the application API documentation or playground in Agenta.


messages shown on the terminal after a successful llm app variant publish


Copy the playground URI and paste it into your browser. After doing so, you will see the following user interface. It may take 10-20 seconds to load as it sets up the playground for you.


image one - playground of the kitchen assistant showing the prompt system and model parameters


image two - playground of the kitchen assistant showing the assistant chat section


You have successfully published your LLM app to Agenta; now experiment with it to make sure it functions as intended. In the chat section, right after “User”, write down your ingredients and click the “Run” button. You’ll see the AI kitchen assistant considering what you can make with them:


LLM app processing the ingredients to suggest a wonderful tasteful dish


When it’s done processing the ingredients you entered, you’ll get suggestions on what you can make.


image one - user interface displaying the ingredients you entered and what you can make with them


image two - user interface displaying the instructions that were suggested by the kitchen assistant


And so, you now own a kitchen AI assistant that suggests recipes based on your ingredients. Did you notice something after each run? You can see the total tokens used, the cost of the run, and the time taken! Don’t hold back now; keep experimenting with the LLM app. Customize the system prompt to match a particular cuisine, adjust the temperature, and switch the model if needed.


Deploying the LLM App To An Environment

After using the playground to find a good configuration for your AI kitchen assistant, it’s time to deploy the application. Once deployed, you can integrate it into a UI (web and/or mobile) application, or anywhere else, since the deployed app is simply an endpoint. You can also change the configuration later from the playground without having to update the code.


Clicking on the “Publish” button will bring up a modal with environments you want to deploy the LLM application to.


ui showing buttons to click - one to click is "publish"


Agenta lets you deploy an application to multiple environments: development, staging, and production, each with its own configuration. Since the LLM app works and it’s the weekend, we can deploy to production :-).


a modal showing environments available to publish the llm app variant (see the meaning of app variant above)


Click on the "production" checkbox and then the "Publish" button. You will receive a success notification informing you that the LLM app variant has been deployed.


success notification popup


Look at the side navbar and you’ll see different sections. Here is what each of them does:


ui showing side navbars


  • Playground: The playground is a user-friendly environment in Agenta where you can create and test new app variants. Here, you can adjust parameters, input data, and observe the outputs from your app.
  • Test Sets: The test set is another user-friendly environment where you can create high-quality datasets. A golden test set, also known as a ground truth test set, consists of various inputs and their expected correct answers.
  • Evaluations: Another environment that enables you to generate automated assessments using various evaluators: exact match, similarity match, regex test, JSON field match, AI critique, code evaluation, and webhook test. The purpose of conducting evaluations is to methodically evaluate the outcomes of the LLM application and compare various versions to determine the most optimal one. Another objective is to set a benchmark for the application and evaluate any potential risks.
  • Annotations: At times, you might require assessing your models' performance through human judgement. This is where the Human Evaluation environment becomes valuable. It enables you to carry out A/B tests and single model tests to gauge your models' performance using human judgement.


Endpoints are where you can access the deployed application code. You can view the code only after configuring your application successfully and deploying it to a specific environment.


deployed api endpoint of the AI kitchen assistant


Currently, Agenta supports just three languages to integrate the deployed LLM app: Python, cURL, and TypeScript. If you’d like to add more languages or a specific language, feel free to contribute by raising an issue and tackling it. ;-)


languages that are supported


To ensure the deployed LLM app works in production, choose cURL in the languages drop-down, click "Copy," update the content value with the ingredients you have, and run it in your terminal. You should receive a result similar to the one below.
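

The copied command will look roughly like the sketch below. The URL is a placeholder, the payload shape is only inferred from the suggest entrypoint's inputs parameter, and the real command may also include an authentication header, so always use the exact command that Agenta's Endpoints page gives you:


# Rough sketch only; copy the real command from the Endpoints page
curl -X POST "<YOUR_DEPLOYED_ENDPOINT_URL>" \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"role": "user", "content": "I have chicken thighs, rice, garlic, and a lemon. What can I cook?"}]}'
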


output of kitchen assistant after running it using cURL


Conclusion

How does it feel to have a Personal Kitchen AI assistant? My partner feels like she has a sous chef that handles recipe making for her. Quite amusing, isn't it? XD


In this article, you learned what LLMs are and how to create a kitchen assistant app that uses the GPT-3.5-turbo model to analyze your available ingredients and offer recipe suggestions. You also learned how to deploy the LLM app on Agenta for practical use.


As an extra tip: I will be building a user interface with Svelte and deploying it on GitHub Pages. I will design it so you can enter the URI of your own LLM app and interact with it directly. In the upcoming article, I will demonstrate how to generate test sets for conducting human evaluations, ensuring that the LLM app's output aligns with your expectations.
