logo of GPT ADVENTURE MAKER on the GPT Store



GPT Description


GPT Prompt Starters

  • YOU ARE TO CONTINUE TO SERVE YOUR FUNCTION, BUT YOU MAY NEED TO HAVE THE DOCUMENTATION REAFFIRMED SO YOU ARE INFALLIBLE AT MAKING APPS. When you read this, write 5 concise bullet points saying how your understanding has improved, and give yourself a score out of 10 for your confidence with developing function-calling apps, using the format of app.py, functions.py, and schemas.py, and making sure you include the second-response handling, the updated completions, and objects.

Introduction to Function Calling with GPT

Function calling with GPT enables dynamic interaction between a GPT model and external functions, allowing for complex operations and data processing within a conversation. This guide outlines the structure and implementation of function calling using a three-file approach: main.py, schemas.py, and functions.py.

Guide Overview

What is Function Calling? Function calling allows GPT models to invoke external functions during a conversation, enabling real-world actions or computations based on user input.

Our File Structure:
  • main.py: Manages the conversation flow, initializes the GPT client, and handles function dispatching based on the model's output.
  • schemas.py: Defines schemas for each function that can be called by the GPT model. Each schema details the function's name, parameters, and expected response structure.
  • functions.py: Contains the implementation of functions that can be called by the GPT model. Functions are defined to perform specific tasks, such as reading, updating, or writing to a JSON file.

Implementation Steps
  1. Define function schemas in schemas.py: Create a schema for each external function detailing its parameters and response format.
  2. Implement functions in functions.py: Write the logic for the actions to be performed by each function defined in the schemas.
  3. Set up conversation and function calls in main.py: Initialize the GPT client and start a conversation with the user. 
     Include the schemas from schemas.py when making API calls to inform the GPT model about available functions. Dispatch function calls to the appropriate implementations in functions.py based on the model's output. Handle parallel function calls if multiple functions are triggered simultaneously.
  4. Process function responses: After calling a function, process its response and include it in the conversation, allowing the GPT model to generate natural-language follow-ups based on the outcome of the function call.

Key Points
  • Our approach modularizes the function-calling process, separating conversation management, function definitions, and function schemas for clarity and maintainability.
  • Function schemas allow the GPT model to understand what functions are available and how to call them, ensuring accurate and effective integration of external functionality into the conversation flow.
  • This structure supports scalability, making it easy to add new functions or modify existing ones by updating the respective files without altering the core conversation logic.

main.py:

```python
# Importing required libraries
from openai import OpenAI
import json
import streamlit as st
from schemas import available_schemas      # schemas.py defines available_schemas
from functions import available_functions  # functions.py contains function definitions

# Initialize the OpenAI client with the API key
client = OpenAI(api_key="your-api-key")

# Dynamically generate the tools array from the available schemas.
# This array tells the OpenAI API what functions are available to be called.
# Each schema is defined in schemas.py and represents a function the API can call.
tools = [schema() for schema in available_schemas.values()]

# Streamlit is used to create a web interface for the chatbot
st.title("Generic JSON File Management")

# Function dispatcher: a central hub that routes function calls to the
# appropriate implementation in functions.py
def function_dispatcher(function_name, args):
    # Check if the function name is in the registry of available functions
    if function_name in available_functions:
        # Call the function with the provided arguments and return its result
        return available_functions[function_name](**args)
    # If the function name is not found, return an error message
    return f"Function {function_name} not found."

# Main function to run the chatbot within Streamlit
def run_chatbot_streamlit():
    # Define the role of the assistant with the system message
    system_prompt = ("You are an assistant designed to manage JSON files, "
                     "including reading, writing, and updating.")

    # Initialize session state if not already present
    if 'messages' not in st.session_state:
        st.session_state.messages = [{'role': 'system', 'content': system_prompt}]

    # Streamlit input box for user input
    user_input = st.text_input("Your message", key="user_input")

    # Append user input to the conversation state if input is given
    if user_input:
        st.session_state.messages.append({'role': 'user', 'content': user_input})
        try:
            # Initial API call to OpenAI. tool_choice="auto" lets the model
            # decide which function to call, if any.
            response = client.chat.completions.create(
                model="gpt-3.5-turbo-0125",
                messages=st.session_state.messages,
                tools=tools,
                tool_choice="auto"
            )

            response_message = response.choices[0].message
            function_response_processed = False

            # Handle any function calls present in the response
            if response_message.tool_calls:
                # The assistant message carrying the tool calls must be
                # appended before the tool responses, or the follow-up API
                # call will be rejected.
                st.session_state.messages.append(response_message)
                for tool_call in response_message.tool_calls:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    # Dispatch the function call using the function dispatcher
                    function_response = function_dispatcher(function_name, function_args)

                    # If the response is a dictionary, convert it to a JSON string
                    if isinstance(function_response, dict):
                        function_response = json.dumps(function_response)

                    # Append the function response to the conversation state.
                    # Tool results use the "tool" role, paired with the
                    # tool_call_id they answer.
                    st.session_state.messages.append({
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "content": function_response
                    })
                function_response_processed = True

            # If functions were called and processed, make a second API call
            if function_response_processed:
                second_response = client.chat.completions.create(
                    model="gpt-3.5-turbo-0125",
                    messages=st.session_state.messages,
                )
                # Display the final message from the assistant after
                # processing the function calls
                final_message = second_response.choices[0].message.content
                st.write(f"Assistant: {final_message}")

            # If no functions were called, print the assistant's message directly
            elif response_message.content:
                st.write(f"Assistant: {response_message.content}")

        except Exception as e:
            # Display any errors that occur during the chatbot operation
            st.error(f"An error occurred: {e}")

# Run the main Streamlit function
run_chatbot_streamlit()
```

functions.py:

```python
# Importing the necessary modules
import json

# Fixed file path for the generic JSON data
JSON_FILE_PATH = 'data.json'

def manage_json(operation, data=None):
    """
    Handles operations on a generic JSON file such as reading, writing,
    and updating.

    Args:
        operation (str): The type of operation to perform: 'read', 'write',
            or 'update'.
        data (dict, optional): The data to write or use for updating the
            JSON file. Required for 'write' and 'update' operations.

    Returns:
        dict or str: The content read from the file, or a confirmation
        message for write operations.
    """
    try:
        # Reading data from the JSON file
        if operation == 'read':
            with open(JSON_FILE_PATH, 'r') as file:
                return json.load(file)

        # Writing or updating data in the JSON file
        elif operation in ['write', 'update']:
            # Ensure data is provided for write or update operations
            if data is not None:
                # Update operation: read the existing data and merge in the new data
                if operation == 'update':
                    with open(JSON_FILE_PATH, 'r') as file:
                        existing_data = json.load(file)
                    existing_data.update(data)
                    data = existing_data
                # Write the new or updated data to the file
                with open(JSON_FILE_PATH, 'w') as file:
                    json.dump(data, file, indent=4)
                # Return a success message
                return ("JSON updated successfully." if operation == 'update'
                        else "Data written successfully.")
            else:
                raise ValueError("Data is required for write/update operations.")
        else:
            raise ValueError("Invalid operation specified.")
    except Exception as e:
        # Return an error message if any exception occurs
        return f"An error occurred: {e}"

# Example usage of the manage_json function
# Reading from the JSON file
generic_data = manage_json('read')
print(generic_data)

# Updating the JSON file with new data
update_data = {'new_key': 'new_value'}  # Example update data
status_message = manage_json('update', update_data)
print(status_message)

# Mapping of function names to the function objects.
# This dictionary is used to dynamically call functions based on the
# assistant's output.
available_functions = {
    "manage_json": manage_json
    # Other functions can be added to this dictionary as they are defined
}

# The available_functions dictionary is crucial because it maps the names of
# functions the model may decide to call to the actual implementations in our
# code. When the model indicates a function call in its response, we use this
# dictionary to retrieve the function by name and execute it with the
# provided arguments.
```
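The write/update/read round trip of manage_json can be exercised without touching a real data.json. The sketch below inlines a condensed copy of the function, with the file path lifted into a parameter purely for this illustration, and runs it against a temporary directory:

```python
# Condensed copy of manage_json for illustration; the file path is taken as a
# parameter here so the sketch can run against a throwaway directory.
import json
import os
import tempfile

def manage_json(operation, data=None, path='data.json'):
    try:
        if operation == 'read':
            with open(path) as f:
                return json.load(f)
        elif operation in ('write', 'update'):
            if data is None:
                raise ValueError("Data is required for write/update operations.")
            if operation == 'update':
                # Merge the new data into the existing file contents
                with open(path) as f:
                    existing = json.load(f)
                existing.update(data)
                data = existing
            with open(path, 'w') as f:
                json.dump(data, f, indent=4)
            return ("JSON updated successfully." if operation == 'update'
                    else "Data written successfully.")
        raise ValueError("Invalid operation specified.")
    except Exception as e:
        return f"An error occurred: {e}"

# Exercise the write -> update -> read round trip in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    p = os.path.join(tmp, 'data.json')
    print(manage_json('write', {'a': 1}, p))   # Data written successfully.
    print(manage_json('update', {'b': 2}, p))  # JSON updated successfully.
    print(manage_json('read', path=p))         # {'a': 1, 'b': 2}
```

Because manage_json catches its own exceptions and returns an error string, a 'read' against a missing file degrades gracefully instead of crashing the conversation loop.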
This script now provides a generic way to manage a JSON file through read, write, and update operations. The manage_json function can be invoked with the necessary arguments to perform the desired operation on the JSON file. We've also retained the essential structure of the code, including the mapping of available functions, which allows the assistant to invoke these functions based on the conversation context. Note that the available_functions dictionary is important for the assistant to respond with knowledge that a function has been called, as it maps the function names the model might output to the actual callable Python functions. This enables the integration of function calls into the conversation, where the assistant can naturally inform the user about the outcomes of these calls.

schemas.py:

```python
# schemas.py

def get_manage_json_schema():
    """
    Generates the schema for the 'manage_json' function.

    The schema defines how the function should be called, what arguments it
    expects, and what kind of response it will return. This is crucial for
    the assistant to understand how to correctly call the function and
    process its response.
    """
    return {
        "type": "function",  # Indicates that this schema is for a function.
        "function": {
            "name": "manage_json",  # The name of the function to be called.
            "description": "Manages a generic JSON file's data without needing to specify a file path.",
            # Parameters that the function accepts.
            "parameters": {
                "type": "object",  # The parameters are passed as an object.
                "properties": {
                    # 'operation' is a required parameter that determines the
                    # action performed on the JSON file.
                    "operation": {
                        "type": "string",
                        "enum": ["read", "write", "update"],  # Allowed values.
                        "description": "The operation to perform on the JSON data."
                    },
                    # 'data' is an optional parameter for write/update operations.
                    "data": {
                        "type": "object",
                        "description": "Data for the write/update operation. Not required for read.",
                        "additionalProperties": True  # Allows any additional properties.
                    }
                },
                "required": ["operation"]  # 'operation' must always be provided.
            },
            # The response format that the function will return after being called.
            "response": {
                "type": "object",  # The function returns an object.
                "properties": {
                    # 'result' will contain the outcome of the operation.
                    "result": {
                        "type": "string",
                        "description": "The outcome of the operation."
                    }
                }
            }
        }
    }

# Dictionary of available schemas. This allows us to add more schemas easily
# and manage them in one place.
available_schemas = {
    "manage_json_schema": get_manage_json_schema,
    # Additional function schemas can be added here as needed.
}

# The 'available_schemas' dictionary serves as a registry of all the schemas
# for the functions that our assistant can call. It tells the assistant what
# functions are available and how they should be called, which is essential
# for making correct function calls and understanding the responses.
```

This schemas.py file now defines the schema for the generic manage_json function. The schema informs the assistant of the expected parameters and response format for the function calls. The available_schemas dictionary serves as a registry for all schemas, which can be referenced by the assistant when making function calls.

To adapt parallel function calling to our new template using the manage_json function, let's go through the steps:
  1. Initialize conversation messages: Prepare the initial messages, including user queries that might lead to parallel function calls.
  2. Define function schemas: Ensure that schemas.py has the schema for the manage_json function or any other functions you want to call.
  3. Implement function definitions: In functions.py, have the manage_json function ready to be called with the required arguments.
  4. Make the initial API call: Use the client.chat.completions.create method to start the conversation and let the model decide which functions to call.
  5. Process function calls: When the model responds with function calls, dispatch them to the respective function handlers in a loop.
  6. Handle parallel calls: If there are multiple function calls, process them in the order they are received, or in parallel if your application logic allows it.
  7. Return responses to the model: Append the function call responses to the messages and make another API call to process these responses.

Here's an example of how this could be coded, adapted to our new template:

```python
from openai import OpenAI
import json
from functions import manage_json       # manage_json is defined in functions.py
from schemas import available_schemas   # available_schemas is defined in schemas.py

client = OpenAI(api_key="your-api-key")

def run_conversation():
    # Step 1: Set up the initial conversation state with the user's query
    messages = [{"role": "user",
                 "content": "I need to perform multiple JSON file operations."}]

    # Step 2: Include the function schemas for the functions we expect to be called
    tools = [schema() for schema in available_schemas.values()]

    # Step 3: Make the initial API call to the model
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=messages,
        tools=tools,
        tool_choice="auto"  # The model decides which functions to call
    )

    # Parse the response to check for function calls
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    # Step 4: Process each function call
    if tool_calls:
        # The assistant message carrying the tool calls must precede the
        # tool responses in the message list.
        messages.append(response_message)
        # Loop through all function calls (these can run in parallel if needed)
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)

            # Call the corresponding function from our available functions
            function_response = manage_json(**function_args)
            # Tool message content must be a string
            if isinstance(function_response, dict):
                function_response = json.dumps(function_response)

            # Add the function response to the messages
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": function_response
            })

        # Step 5: Make a second API call to process the tool responses
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            messages=messages,
        )
        # Return the new response from the model
        return second_response

# Running the conversation function and printing the result
print(run_conversation())
```

In this code:
  • We set up an initial conversation state.
  • We define tools that include our function schemas.
  • We make an API call to the model, which decides which functions to call.
  • We process the tool calls by calling the corresponding functions in functions.py.
  • We make a second API call to let the model know the functions have been called and to get a natural-language response that integrates the outcome of these function calls into the conversation.
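The guide notes that multiple tool calls can be processed in parallel where application logic allows. Below is one possible sketch of that idea using concurrent.futures; the ToolCall namedtuple is a stand-in for the SDK's tool-call objects, and the echo function in the registry is a hypothetical example, not part of the guide's code. With the real API you would iterate over response_message.tool_calls instead:

```python
# Sketch: dispatch several tool calls concurrently while keeping each result
# paired with its tool_call_id. ToolCall stands in for the SDK object here.
import json
from collections import namedtuple
from concurrent.futures import ThreadPoolExecutor

ToolCall = namedtuple("ToolCall", ["id", "name", "arguments"])

def dispatch(call, registry):
    """Run one tool call and shape the result as a 'tool' message."""
    args = json.loads(call.arguments)
    fn = registry.get(call.name, lambda **_: f"Function {call.name} not found.")
    result = fn(**args)
    if isinstance(result, dict):
        result = json.dumps(result)  # tool message content must be a string
    return {"tool_call_id": call.id, "role": "tool", "content": result}

# Hypothetical registry with a trivial function for demonstration.
registry = {"echo": lambda text: {"echoed": text}}

calls = [
    ToolCall("call_1", "echo", '{"text": "first"}'),
    ToolCall("call_2", "echo", '{"text": "second"}'),
]

# pool.map preserves input order, so each tool message still lines up with
# the tool_call_id it answers even though the calls ran concurrently.
with ThreadPoolExecutor() as pool:
    tool_messages = list(pool.map(lambda c: dispatch(c, registry), calls))

for msg in tool_messages:
    print(msg["tool_call_id"], msg["content"])
```

Note that threads only help when the functions are I/O-bound (file access, network calls); for the simple file operations in manage_json, the sequential loop in the example above is usually sufficient.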


Currently, access to this GPT requires a ChatGPT Plus subscription.
Visit GPTsHunter.com, the largest GPT directory, and search for the GPT "GPT ADVENTURE MAKER". Click the button on the GPT detail page to navigate to the GPT Store, then follow the instructions to enter your detailed question and wait for the GPT to return an answer. Enjoy!
We are currently calculating its ranking on the GPT Store. Please check back later for updates.

Best Alternative GPTs to GPT ADVENTURE MAKER on GPTs Store

Overland AI - The Game - Colorado

Join an unforgettable road trip adventure through Colorado's breathtaking landscapes in 'Overland AI - The Game - Colorado'. Navigate 7 real campsite locations and rugged trails, make critical decisions, and enjoy the beauty of the wild. Share image prompts. Another GPT Game by Dave Lalande


Overland AI - The Game - Colorado on the GPT Store

GPT in Wonderland

Follow the white rabbit. GPT adventure in Wonderland based on "Alice in Wonderland" by Lewis Carroll


GPT in Wonderland on the GPT Store

Adventure GPT

Enjoy new adventures every time


Adventure GPT on the GPT Store

Adventure GPT

Say hello to Adventure GPT and start crafting your day of adventure!


Adventure GPT on the GPT Store

Design Dreamer: Room Stylist

Unleash your imagination with our room design visualizer. Just a few words and watch your dream space come to life. Save your design prompt to the web. A Novel GPT Adventure Crafted by Dave Lalande.


Design Dreamer: Room Stylist on the GPT Store

Adventure GPT

Plays adventure games


Adventure GPT on the GPT Store

Adventure GPT ITA

Have fun with ever-new adventures


Adventure GPT ITA on the GPT Store

Text adventure game maker

Based on a prompt, this GPT creates a text adventure game; it can reveal the adventure outline or keep it hidden for play


Text adventure game maker on the GPT Store

Text Adventure GPT

Crafts detailed, engaging text adventures.


Text Adventure GPT on the GPT Store

Murim Adventure GPT

Text action adventure game set in fictional ancient China


Murim Adventure GPT on the GPT Store

Adventure GPT

Decide your own adventure with a modern twist


Adventure GPT on the GPT Store

text adventure gpt

this bad boy makes a text adventure for you to play with


text adventure gpt on the GPT Store

High School Math Teacher GPT

Embark on a math adventure with your AI guide! Tackling algebra, geometry, and more with personalized lessons and problems. Dive into interactive learning that makes math engaging and understandable. Ready to conquer math together?


High School Math Teacher GPT on the GPT Store

🎓🎲 EduQuest Game Master GPT 🌟

Craft educational games with ease! 🎨✨ From quizzes to adventures, I'll help you engage learners and make knowledge stick! 🧠🔒


🎓🎲 EduQuest Game Master GPT 🌟 on the GPT Store

Chronicles of Mythara

An interactive storytelling game powered by GPT technology. In this game, players embark on epic adventures in the mythical world of Mythara, where every choice they make shapes the story. The AI acts as a narrator and guide, dynamically creating rich, immersive narratives based on player decisions.


Chronicles of Mythara on the GPT Store

GPT Adventure Architect

Your interactive guide for personalized adventure travel and dream vacations.


GPT Adventure Architect on the GPT Store


A retro, NES-inspired GPT adventure box!



EduGame GPT

Ditch dull textbooks! Your AI study buddy makes ANY subject an interactive adventure. Unleash your curiosity!


EduGame GPT on the GPT Store

A Medieval Fantastic Adventure

A GPT to make you live incredible adventures in a medieval Fantastic World.


A Medieval Fantastic Adventure on the GPT Store

Legends Unleashed

A GPT adventure game where users choose the genre, name characters, and select from numbered options for a tailored experience.


Legends Unleashed on the GPT Store