
How to Extract JSON Output from an LLM Cheaply Using GPT-4o-Mini

Fritz Hoste
#LLM #parsing #JSON

In today’s era of AI and machine learning, extracting structured data from large language models (LLMs) can get expensive quickly. With the right approach, however, you can extract JSON output from an LLM both efficiently and affordably. In this guide, we show how to do it with the budget-friendly gpt-4o-mini model, which offers robust functionality at a lower cost, and how to use function calling to retrieve JSON output reliably.

Understanding the Requirements

To extract JSON output effectively, you need a language model that supports function calling. In our guide, we will use the gpt-4o-mini model due to its cost-efficiency and adequate performance. Let’s break down the steps required:

Model Configuration

First, we need to configure our model. Here’s the setup:

{
  "model": "gpt-4o-mini",
  "temperature": 0,
  "messages": [
    {
      "role": "user",
      "content": "Send a mail to info@franz.be with subject: I want a demo"
    }
  ],
  "tool_choice": {
    "type": "function",
    "function": {
      "name": "json"
    }
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "json",
        "description": "Respond with a JSON object.",
        "parameters": {
          "type": "object",
          "properties": {
            "ticker": {
              "type": "string",
              "description": "Email of whom to send to"
            },
            "subject": {
              "type": "string",
              "description": "Subject of the email"
            }
          },
          "required": [
            "email",
            "subject"
          ],
          "additionalProperties": false,
          "$schema": "http://json-schema.org/draft-07/schema#"
        }
      }
    }
  ]
}
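If you are calling the API from Python, the configuration above maps directly onto the official openai client. Here is a minimal sketch, assuming the openai package (v1+) is installed and an OPENAI_API_KEY environment variable is set:

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

# Same function definition as in the JSON configuration above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "json",
            "description": "Respond with a JSON object.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {"type": "string", "description": "Email address of the recipient"},
                    "subject": {"type": "string", "description": "Subject of the email"},
                },
                "required": ["email", "subject"],
                "additionalProperties": False,
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,
    messages=[{"role": "user", "content": "Send a mail to info@franz.be with subject: I want a demo"}],
    tools=tools,
    # Forcing tool_choice guarantees the model answers via the json function.
    tool_choice={"type": "function", "function": {"name": "json"}},
)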

The Function Call

The tool_choice field is crucial: it forces the model to call the json function defined under tools, so the reply always comes back as structured arguments rather than free-form text. In this case, the function captures the recipient email and the subject. Here’s an example API response:

{
    "id": "chatcmpl-9pAj2ulHR52ti2mBEJmjgNFKVHjjH",
    "object": "chat.completion",
    "created": 1721982984,
    "model": "gpt-4o-mini-2024-07-18",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": null,
                "tool_calls": [
                    {
                        "id": "call_tDRmy7uCVveKgfCKi6TVAmS0",
                        "type": "function",
                        "function": {
                            "name": "json",
                            "arguments": "{\"ticker\":\"info@franz.be\",\"subject\":\"I want a demo\"}"
                        }
                    }
                ]
            },
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 83,
        "completion_tokens": 16,
        "total_tokens": 99
    },
    "system_fingerprint": "fp_661538dc1f"
}
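The structured data sits inside the tool call’s arguments field as a JSON string, so extracting it takes just a few lines. A small sketch, continuing from the response object returned above:

import json

# The forced function call is the first (and only) entry in tool_calls.
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)

print(arguments["email"])    # info@franz.be
print(arguments["subject"])  # I want a demo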

Benefits of Using gpt-4o-mini

  1. Cost-Effective: The gpt-4o-mini model is budget-friendly, making it an excellent choice for projects requiring frequent model interactions without breaking the bank.
  2. Efficient Function Calling: With built-in function calling capabilities, extracting structured JSON data becomes straightforward.
  3. Lower Token Usage: As the usage block above shows, the forced function call needed only 16 completion tokens, which keeps per-request costs down.

Implementation

To implement JSON extraction using the gpt-4o-mini model, follow these steps:

  1. Set Up Your Environment: Ensure you have the required API keys and access to the model.
  2. Define Your Messages and Tools: Format the input as shown in the configuration example.
  3. Make the API Call: Use your API client to send the request.
  4. Process the Response: Extract the JSON output from the function call in the response, as sketched after this list.
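To make step 4 concrete, here is a small sketch of how the parsed arguments might be routed to downstream code. The send_email function is a hypothetical placeholder for your own mail integration, and response is the object returned by the API call shown earlier:

import json

def send_email(email: str, subject: str) -> None:
    # Hypothetical placeholder: wire this up to your own mail service.
    print(f"Sending mail to {email} with subject: {subject}")

message = response.choices[0].message
if message.tool_calls:  # populated because tool_choice forced the json function
    args = json.loads(message.tool_calls[0].function.arguments)
    send_email(args["email"], args["subject"])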

Conclusion

Extracting JSON output from language models doesn’t have to be expensive. By combining the gpt-4o-mini model with a forced function call, you get cost-effective, reliable data extraction, and the steps above walk you through the whole setup.

With the configuration and function calling described here, you can handle structured-extraction tasks without incurring high costs. Happy extracting!

Come talk to us at Franz. We can help you unlock the full potential of your data, transforming your processes and boosting efficiency. Let’s revolutionize your document processing together! Reach out to us at info@franz.be to get started on your journey to a more efficient and innovative future.
